# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:azuremlftk_mar2018]
# language: python
# name: conda-env-azuremlftk_mar2018-py
# ---
# # Models in Azure Machine Learning Package for Forecasting
# This notebook demonstrates how to use the forecasting models available in Azure Machine Learning Package for Forecasting (AMLPF). The following types of models are covered:
#
# * Univariate Time Series Models
# * Machine Learning Models
# * Model Union
#
# We will also briefly talk about model performance evaluation.
# ### Import dependencies for this sample
# +
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from ftk import TimeSeriesDataFrame
from ftk.data import load_dominicks_oj_features
from ftk.models import Arima, SeasonalNaive, Naive, ETS, RegressionForecaster, ForecasterUnion
print('imports done')
# -
# ## Load data
# Since the focus of this notebook is the AMLPF models, we load a preprocessed dataset with prepared features. Some features are from the [original dataset from Dominick's Finer Foods](https://research.chicagobooth.edu/kilts/marketing-databases/dominicks), and others are generated by the featurization transformers in AMLPF. Please see the sample notebooks on transformers for feature engineering tips with AMLPF.
train_features_tsdf, test_features_tsdf = load_dominicks_oj_features()
nseries = train_features_tsdf.groupby(train_features_tsdf.grain_colnames).ngroups
nstores = len(train_features_tsdf.index.get_level_values(train_features_tsdf.group_colnames[0]).unique())
print('Grain column names are {}'.format(train_features_tsdf.grain_colnames))
print('{} time series in the data frame.'.format(nseries))
print('Group column names are {}'.format(train_features_tsdf.group_colnames))
print('{} stores/groups in the data frame.'.format(nstores))
train_features_tsdf.head()
# The data contains 249 different combinations of store and brand in a data frame. Each combination defines its own time series of sales.
#
# The difference between _grain_ and _group_ is that _grain_ usually identifies a single time series in the raw data (without multi-horizon features), while _group_ can contain multiple time series in the raw data. As will be shown later, internal package functions use group to build a single model from multiple time series if the user believes this grouping helps improve model performance. By default, group is set to be equal to grain, and a single model is built for each grain.
# ## Univariate Time Series Models
#
# A univariate time series is a sequence of observations of the same variable recorded over time, usually at regular time intervals. Univariate time series models analyze temporal patterns in the target variable, e.g. trend and seasonality, to forecast its future values.
# The following univariate models are available in AMLPF.
#
# * The **Naive** forecasting algorithm uses the actual target variable value of the last period as the forecasted value of the current period.
#
# * The **Seasonal Naive** algorithm uses the actual target variable value at the same time point of the previous season as the forecasted value for the current time point. Examples include using the actual value of the same month last year to forecast months of the current year, or using the same hour of yesterday to forecast hours today.
#
# * The **Exponential Smoothing (ETS)** algorithm generates forecasts by computing the weighted averages of past observations, with the weights decaying exponentially as the observations get older.
#
# * The **AutoRegressive Integrated Moving Average (ARIMA)** algorithm captures the autocorrelation in time series data. For more information about ARIMA, see [this link](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average)
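# Outside of AMLPF, the first three methods can be sketched in a few lines of plain numpy. The function names below are our own illustration, not package APIs:
# +
import numpy as np

def naive_forecast(y):
    # Forecast with the last observed value
    return y[-1]

def seasonal_naive_forecast(y, seasonality, h):
    # Forecast horizon h with the value at the same point of the last season
    return y[-seasonality + (h - 1) % seasonality]

def ses_forecast(y, alpha=0.3):
    # Simple exponential smoothing: weights decay exponentially with age
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

y = np.array([10., 12., 11., 13., 10., 12., 11., 13.])
print(naive_forecast(y))                   # 13.0
print(seasonal_naive_forecast(y, 4, h=1))  # 10.0
print(ses_forecast(y, alpha=0.5))
# -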
# Since the univariate models only utilize the sales values over time, we extract the sales values column to save computation time and space.
train_tsdf = TimeSeriesDataFrame(train_features_tsdf[train_features_tsdf.ts_value_colname],
grain_colnames=['store', 'brand'],
time_colname='WeekLastDay',
ts_value_colname='Quantity',
group_colnames='store')
test_tsdf = TimeSeriesDataFrame(test_features_tsdf[test_features_tsdf.ts_value_colname],
grain_colnames=['store', 'brand'],
time_colname='WeekLastDay',
ts_value_colname='Quantity',
group_colnames='store')
train_tsdf.head()
# Next, set the frequency and seasonality parameters for univariate models.
# **Frequency** is the time interval at which the observations are recorded, e.g. daily, weekly, monthly. The frequency of the Dominick's data is weekly, ending on Wednesdays. The frequency of a dataset can be obtained by calling the `get_frequency_dict` method of a TimeSeriesDataFrame.
# **Seasonality** is a periodic pattern in time series data with a fixed and known period, usually tied to some aspect of the calendar. For example, if a time series with quarterly frequency repeats its pattern every four quarters, its seasonality is 4. The Dominick's data don't show any strong seasonal pattern; here we assume a yearly seasonality, which is 52 (weeks). The seasonality of a dataset can be obtained by calling the `get_seasonality_dict` method of a TimeSeriesDataFrame.
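# As a quick check of the frequency concept outside of AMLPF (this sketch uses plain pandas, not the package's `get_frequency_dict`): pandas can infer the 'W-WED' frequency string directly from a weekly-Wednesday index.
# +
import pandas as pd

# Weekly dates ending on Wednesdays, matching the Dominick's convention
idx = pd.date_range('2018-01-03', periods=8, freq='W-WED')
print(pd.infer_freq(idx))  # 'W-WED'
# -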
series_freq = 'W-WED'
series_seasonality = 52
# ### Initialize Univariate Models
# +
naive_model = Naive(freq=series_freq)
seasonal_naive_model = SeasonalNaive(freq=series_freq,
seasonality=series_seasonality)
ets_model = ETS(freq=series_freq, seasonality=series_seasonality)
arima_order = [2, 1, 0]
arima_model = Arima(series_freq, arima_order)
# -
# ### Train Univariate Models
# The estimators in AMLPF follow the same API as scikit-learn estimators: a `fit` method for model training and a `predict` method for generating forecasts.
# Since these models are all univariate models, one model is fit on each grain of the data. Using AMLPF, all 249 models can be fit with just one function call.
naive_model_fitted = naive_model.fit(train_tsdf)
seasonal_naive_model_fitted = seasonal_naive_model.fit(train_tsdf)
ets_model_fitted = ets_model.fit(train_tsdf)
arima_model_fitted = arima_model.fit(train_tsdf)
# ### Forecast/Predict with Univariate Models
# Once the models are trained, you can generate forecasts by calling the `predict` method with the testing/scoring/new data. Similar to the `fit` method, you can create predictions for all 249 series in the testing dataset with one call to `predict`.
naive_model_forecast = naive_model_fitted.predict(test_tsdf)
seasonal_naive_model_forecast = seasonal_naive_model_fitted.predict(test_tsdf)
ets_model_forecast = ets_model_fitted.predict(test_tsdf)
arima_model_forecast = arima_model_fitted.predict(test_tsdf)
arima_model_forecast.head()
# The output of the `predict` method is a [ForecastDataFrame](https://docs.microsoft.com/en-us/python/api/ftk.dataframe_forecast.forecastdataframe?view=azure-ml-py-latest) with point and distribution forecast columns.
# ## Machine Learning Models
#
# In addition to traditional univariate models, Azure Machine Learning Package for Forecasting also enables you to create machine learning models for forecasting.
#
# ### RegressionForecaster
#
# The [RegressionForecaster](https://docs.microsoft.com/en-us/python/api/ftk.models.regression_forecaster.regressionforecaster?view=azure-ml-py-latest) function wraps scikit-learn regression estimators so that they can be trained on [TimeSeriesDataFrame](https://docs.microsoft.com/en-us/python/api/ftk.dataframe_ts.timeseriesdataframe?view=azure-ml-py-latest). The wrapped forecasters have the following functionalities:
# 1. Pools each `group` of data into a single model, so that one model is learned for a group of series that are deemed similar. Pooling often lets the data from longer series improve the forecasts for shorter series.
# 2. Create one-hot encoding for categorical features, if `internal_featurization` is set to `True`, because scikit-learn estimators can generally only accept numeric features.
# 3. Create `grain` and `horizon` features, if both `internal_featurization` and `make_grain_features` are set to `True`.
#
# Here we demonstrate a couple of regression models. You can substitute these models with any other scikit-learn model that supports regression.
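# The one-hot encoding step mentioned above can be pictured with plain pandas; this is our own minimal illustration (with made-up data), not the wrapper's internal code:
# +
import pandas as pd

df = pd.DataFrame({'brand': ['tropicana', 'minute.maid', 'dominicks'],
                   'price': [2.5, 2.2, 1.9]})
# The categorical 'brand' column becomes three numeric indicator columns
print(pd.get_dummies(df, columns=['brand']))
# -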
# ### Initialize Machine Learning Models
# Set "make_grain_features" to False, because our data already contain grain and horizon features
random_forest_model = RegressionForecaster(estimator=RandomForestRegressor(),
make_grain_features=False)
boosted_trees_model = RegressionForecaster(estimator=GradientBoostingRegressor(),
make_grain_features=False)
# ### Train Machine Learning Models
random_forest_model_fitted = random_forest_model.fit(train_features_tsdf)
boosted_trees_model_fitted = boosted_trees_model.fit(train_features_tsdf)
# ### Forecast/Predict with Machine Learning Models
random_forest_forecast = random_forest_model_fitted.predict(test_features_tsdf)
boosted_trees_forecast = boosted_trees_model_fitted.predict(test_features_tsdf)
boosted_trees_forecast.head()
# ## Combine Multiple Models
#
# The [ForecasterUnion](https://docs.microsoft.com/en-us/python/api/ftk.models.forecaster_union.forecasterunion?view=azure-ml-py-latest) estimator allows you to combine multiple estimators and fit/predict on them using one line of code. Here we combine all the models created above.
forecaster_union = ForecasterUnion(
forecaster_list=[('naive', naive_model),
('seasonal_naive', seasonal_naive_model),
('ets', ets_model),
('arima', arima_model),
('random_forest', random_forest_model),
('boosted_trees', boosted_trees_model)])
forecaster_union_fitted = forecaster_union.fit(train_features_tsdf)
forecaster_union_forecast = forecaster_union_fitted.predict(test_features_tsdf, retain_feature_column=True)
# ## Performance Evaluation
#
# Now you can calculate the forecast errors on the test set. Here we use the mean absolute percentage error (MAPE), the mean absolute percent error relative to the actual sales values. The `calc_error` function provides a few built-in functions for commonly used error metrics. You can also define a custom error function to calculate other metrics, e.g. MedianAPE, and pass it to the `err_fun` argument.
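# For reference, MAPE itself can be written in a few lines of numpy; this is our own sketch, not the package's built-in implementation:
# +
import numpy as np

def calc_mape(y_true, y_pred):
    # Mean absolute percent error relative to the actuals (zero actuals excluded)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = y_true != 0
    return np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])) * 100

print(calc_mape([100, 200, 400], [110, 180, 400]))  # (10% + 10% + 0%) / 3
# -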
def calc_median_ape(y_true, y_pred):
y_true = np.array(y_true).astype(float)
y_pred = np.array(y_pred).astype(float)
y_true_rm_na = y_true[~(np.isnan(y_true) | np.isnan(y_pred))]
y_pred_rm_na = y_pred[~(np.isnan(y_true) | np.isnan(y_pred))]
y_true = y_true_rm_na
y_pred = y_pred_rm_na
if len(y_true) == 0:
# if there is no entries left after removing na data, return np.nan
return(np.nan)
y_true_rm_zero = y_true[y_true != 0]
y_pred_rm_zero = y_pred[y_true != 0]
if len(y_true_rm_zero) == 0:
# if all values are zero, np.nan will be returned.
return(np.nan)
ape = np.abs((y_true_rm_zero - y_pred_rm_zero) / y_true_rm_zero) * 100
median_ape = np.median(ape)
return median_ape
forecaster_union_MAPE = forecaster_union_forecast.calc_error(err_name='MAPE',
by='ModelName')
forecaster_union_MedianAPE = forecaster_union_forecast.calc_error(err_name='MedianAPE',
err_fun=calc_median_ape,
by='ModelName')
all_model_errors = forecaster_union_MAPE.merge(forecaster_union_MedianAPE, on='ModelName')
all_model_errors.sort_values('MedianAPE')
# The machine learning models are able to take advantage of the added features and the similarities between series to get better forecast accuracy.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/harnalashok/keras/blob/main/subClassingKerasModel.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="7YD2UvWPNpjS"
# Last amended: 16th Jan, 2021
# Myfolder: harnalashok/keras/ @github
# Ref: Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by <NAME>
# Page: 313
# https://www.tensorflow.org/guide/keras/custom_layers_and_models
#
# Subclassing keras 'Model' class to create Dynamic models
# Two examples
# + id="EKfKK6aOK6Sl"
# 1.0 Call libraries
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# 1.1
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.utils import plot_model
# + id="a5b65DPeLMXn"
# 1.2 Fetch data
housing = fetch_california_housing()
# + id="56wr831BLPhi"
# 2.0 Little preprocessing
X = housing.data
y = housing.target
ss = StandardScaler()
X = ss.fit_transform(X)
# + colab={"base_uri": "https://localhost:8080/"} id="TIrL-K3uLSK_" outputId="a39b1387-ba65-4ec2-9006-690cab3ef081"
# 2.1 Split data
X_train, X_test,y_train,y_test = train_test_split(X,y, test_size = 0.2)
X_train.shape # (16512, 8)
X_test.shape # (4128, 8)
# + id="gzUlxxJwLVER"
# 3.0 Example 1
# Subclass Model class to build a simple NN architecture
class Simple(Model):
    def __init__(self, size=30, activation='relu'):
        super(Simple, self).__init__()
        # Write your layers here but do not connect them
        # Input layer
        self.dense1 = layers.Dense(size, activation=activation)
        self.dense2 = layers.Dense(size, activation=activation)
        self.dropout = layers.Dropout(0.5)
        # Linear output: the California housing target is a regression
        # value (roughly 0.15 to 5), so a sigmoid would cap it at 1
        self.dense3 = layers.Dense(1)
    # Connect your layers here: Forward pass
    # Some layers, in particular the BatchNormalization layer
    # and the Dropout layer, have different behaviors during
    # training and inference. For such layers, it is standard
    # practice to expose a training (boolean) argument in the
    # call() method.
    # By exposing this argument in call(), you enable the
    # built-in training and evaluation loops (e.g. fit())
    # to correctly use the layer in training and inference
    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        x = self.dense2(x)
        if training:
            # Apply dropout to the hidden representation, not to the output
            x = self.dropout(x)
        return self.dense3(x)
# + id="-qR367m7LYLY"
# 3.1 Instantiate our 'Simple' Model subclass
wd = Simple(40,'relu')
# + id="svRlwEJE9GIG"
# 3.2 Create an Input object and call
#     the instantiated Simple object
inputs = layers.Input(shape = X_train.shape[1:])
x = wd(inputs)   # Build the graph; fit()/evaluate() set the training flag
# 3.3 Create the Model object now
model = Model(inputs = inputs, outputs = x)
# + colab={"base_uri": "https://localhost:8080/"} id="HXM8NdeOLa3R" outputId="da1cdd8f-1b43-4e24-9487-c458c23c1a9a"
# 3.3 Note that Simple is treated as a layer
#     and summary() does not provide details
#     within the Simple model
model.summary()
# + id="Lpa2jPTkLdnV"
# 3.4 Compile the model now
model.compile(loss = "mse", metrics= "mse")
# + colab={"base_uri": "https://localhost:8080/"} id="BXJyUmpMLgb_" outputId="79b4aa7b-a21f-40a4-f3d3-125be946ff96"
# 3.5 Train the model
model.fit(X_train, y_train, epochs = 30)
# + colab={"base_uri": "https://localhost:8080/"} id="OIiyDxf_Lm-M" outputId="2cd07212-e9f3-4b69-8ed1-c6a1f994259e"
# 3.6 Evaluate the model
model.evaluate(X_test,y_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 174} id="VDH2vdJdL_By" outputId="0ddc169b-db20-46bd-c870-dc559907ba21"
# 4.0 Plot the model
# Note that we do not get model details
# They are hidden within the Model
plot_model(model)
# + [markdown] id="1KMtSTx0o1QZ"
# # Wide and Deep with two inputs
# + id="NhC0O5k9M9AW"
# 5.0 Create a subclass of Model
class DeepWide(Model):
    def __init__(self, units, activation):
        # 5.1 Initialise the superclass, that is, the Model class
        super(DeepWide, self).__init__()
        # 5.2 Create the layers of our model
        #     But concatenation is not performed here
        self.hidden1 = layers.Dense(units=units, activation=activation)
        self.hidden2 = layers.Dense(units=units, activation=activation)
        # Linear output for regression on the housing target
        self.out = layers.Dense(1)
    # Besides self, the call() method takes just two arguments:
    # 'inputs' and (optionally) 'training'.
    # 5.3
    def call(self, inputs):
        # 5.3.1 Extract inputs
        input_a = inputs[0]
        input_b = inputs[1]
        # 5.3.2 Make the forward pass
        x = self.hidden1(input_a)
        x = self.hidden2(x)
        # 5.3.3 Concatenate the deep path's output with the wide input
        concat = tf.keras.layers.concatenate([x, input_b])
        return self.out(concat)
# + id="pZtaSfbOuVvQ"
# 6 Get two inputs
input_a = tf.keras.layers.Input(shape = X_train[:,:8].shape[1:])
input_b = tf.keras.layers.Input(shape = X_train[:,:4].shape[1:])
# + id="zx1oZCYkQK4j"
# 6.1 Instantiate the DeepWide class
#     It takes two inputs
dw = DeepWide(30, 'relu')
# + id="0gL9ylbtQNHF"
# 6.2 Get the output of the last layer
out = dw((input_a, input_b))
# + id="Y8tGBF84Q2J6"
# 7.0 Create model now
model= Model(inputs = [input_a,input_b], outputs = [out])
# + id="7wphuZSDzlu-"
# 7.1
model.compile(loss= "mse", metrics = "mse")
# + colab={"base_uri": "https://localhost:8080/"} id="Pb_3_Dtf33-C" outputId="f272ee24-4c4e-4e48-893e-00ef256a08bf"
# 7.1 Fit the model
model.fit([X_train[:,:8], X_train[:,:4]], y_train,epochs = 30)
# + colab={"base_uri": "https://localhost:8080/"} id="nRD8C8Uy4Wvw" outputId="17d97d06-38c4-4462-869d-823d136d132d"
# 7.2 Evaluate the model
model.evaluate([X_test[:,:8], X_test[:,:4]], y_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 174} id="bzCQOruBSDlc" outputId="411cd456-ea1b-4217-df10-2f468d28f133"
# 8.0 Plot the model
plot_model(model)
# + id="LdKnF60E7fFb"
########## I am done #############
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Probability Distributions
# +
# Import the libraries used throughout the simulations
import matplotlib.pyplot as plt
import numpy as np
from itertools import cycle # Utility for cycling through iterables
import scipy.stats as st # Statistics library
from math import factorial as fac # The factorial operation
# %matplotlib inline
# -
# ## 1. Uniform probability distribution
# $X\sim U(a,b)$ Parameters: $a,b \rightarrow$ the interval
# $$\textbf{Probability density function}\\f(x)=\begin{cases}\frac{1}{b-a} & a\leq x \leq b\\0& \text{otherwise}\end{cases}$$
# $$\textbf{Cumulative distribution function}\\F(x)=\begin{cases}0& x<a\\\frac{x-a}{b-a} & a\leq x \leq b\\1& x\geq b\end{cases}$$
# 
# ### Usage in Python
a,b=1,2 # Interval endpoints
U = np.random.uniform(a,b)
U
# ## 2. Normal distribution
# $X\sim N(\mu,\sigma^2)$ Parameters: mean $\mu$ and variance $\sigma^2$
# $$ \textbf{Probability density function}\\ f(x)= \frac{1}{\sigma\sqrt{2\pi}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$$
# $$ \textbf{Cumulative distribution function}\\ F(x)= \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{x}e^{\frac{-(v-\mu)^2}{2\sigma^2}}dv$$
# 
#
# ### Properties
# 
# ### Standardizing normal random variables
#
# Because the normal density is symmetric about $\mu$, every normal random variable can be related to the standard normal distribution.
#
# If $X\sim N(\mu ,\sigma ^{2})$, then
# $$Z = \frac{X - \mu}{\sigma}$$
#
# is a standard normal random variable: $Z\sim N(0,1)$.
#
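# A quick numerical check of the standardization (our own sketch):
# +
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=100_000)  # X ~ N(mu=5, sigma=2)
Z = (X - 5.0) / 2.0                     # standardize
print(Z.mean(), Z.std())                # close to 0 and 1
# -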
# ### The Central Limit Theorem
# The Central Limit Theorem states that, under certain conditions (for example, independent and identically distributed variables with finite variance), the sum of a large number of random variables is approximately normally distributed. **(Discuss why this matters in practice)**
#
# ### When it arises
# When a phenomenon is suspected to result from a large number of small causes acting additively and independently, it is reasonable to expect the observations to be "normal". **(A consequence of the CLT)**
#
# Some causes may act multiplicatively (rather than additively). In that case the normality assumption is not justified, and it is the logarithm of the variable in question that would be normally distributed. **(log-normal)**
#
# ### Application example
# In finance, the Black-Scholes model, which is used to estimate the present value of a European call or put option on shares at a future date, assumes normality in some economic variables. See https://es.wikipedia.org/wiki/Modelo_de_Black-Scholes for more information.
#
# > Reference: https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal
# ### Usage in Python
mu, sigma = 0, 0.1 # mean and standard deviation
N = np.random.normal(mu, sigma,5)
N
st.norm
# ## 3. Exponential distribution
# $X\sim Exp(\beta)$ Parameters: mean $\beta>0$, or rate $\lambda = 1/\beta$
#
# $$\textbf{Probability density function}\\f(x) = \frac{1}{\beta} e^{-\frac{x}{\beta}}$$
# $$\textbf{Cumulative distribution function}\\F(x) = 1-e^{-\frac{x}{\beta}}$$
# 
#
# ### Examples
# The exponential distribution typically **models the length of the intervals of a continuous variable that elapse between two events** which occur according to a Poisson distribution.
#
# - The time elapsed at a call center until the first call of the day arrives can be modeled as an exponential.
# - The time interval between earthquakes (of a given magnitude) follows an exponential distribution.
# - For a machine producing wire, the number of meters of wire until a defect is found can be modeled as an exponential.
# - In systems reliability, a device with a constant failure rate follows an exponential distribution.
#
# ### Relationships
# The sum of k independent exponential random variables with parameter $\lambda$ is a random variable with an Erlang distribution.
#
# > Reference: https://en.wikipedia.org/wiki/Exponential_distribution
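# A quick simulation of the Erlang relationship stated above (our own sketch):
# +
import numpy as np

rng = np.random.default_rng(1)
k, beta = 3, 4.0
# The sum of k independent Exp(beta) draws follows an Erlang(k, beta) distribution
sums = rng.exponential(beta, size=(100_000, k)).sum(axis=1)
print(sums.mean())  # close to the Erlang mean k * beta = 12
# -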
# ### Usage in Python
beta = 4
E = np.random.exponential(beta,1)
E
st.expon
# ## 4. Erlang distribution
# Parameters: shape $k \in \mathbb{N}$, scale $\beta$ (rate $\lambda = 1/\beta$)
# $$\textbf{Probability density function}\\f(x)=x^{k-1}\frac{e^{-x/\beta}}{\beta^k\Gamma(k)}\equiv x^{k-1}\frac{e^{-x/\beta}}{\beta^k(k-1)!}$$
#
# $$\textbf{Cumulative distribution function}\\F(x)=1-\sum_{n=0}^{k-1}\frac{1}{n!}e^{-\frac{x}{\beta}}\big(\frac{x}{\beta}\big)^n$$
# 
#
# ### Special cases
# An Erlang distribution with shape $k=1$ reduces to an exponential distribution. The Erlang distribution is the distribution of the sum of $k$ exponential variables, each with mean $\beta$.
#
# ### Occurrence
# **Waiting times**
#
# Events occurring independently at some average rate are modeled by a Poisson process. The waiting times between k occurrences of the event are Erlang distributed. (The related question of the number of events in a given amount of time is described by a Poisson distribution.)
#
# Erlang's formulas have also been used in business economics to describe the times between purchases of an asset.
#
# > Reference: https://en.wikipedia.org/wiki/Erlang_distribution
# ### Usage in Python
# +
from scipy.stats import erlang
N = 10000 # Number of samples
k,scale = 3,1/4 # Distribution parameters
E1 = erlang.rvs(k,scale=scale,size=N)
E2 = np.random.gamma(k,scale,N) # Erlang as a special case of the gamma distribution
plt.figure(1,figsize=[12,4])
plt.subplot(121)
plt.hist(E1,50,density=True,label='Using scipy')
plt.legend()
plt.subplot(122)
plt.hist(E2,50,density=True,label='Using numpy')
plt.legend()
plt.show()
# -
# ## 5. Binomial distribution
# $X\sim B(n,p)$ Parameters: $n$ and $p$
# $$\textbf{Probability density function}\\p_i=P(X=i)={n \choose i}p^i(1-p)^{n-i}= \frac{n!}{i!(n-i)!}p^i(1-p)^{n-i},\quad i=0,1,\cdots,n$$
# >Recall the recurrence: $$p_{i+1}=\frac{n-i}{i+1}\frac{p}{1-p} p_i $$
#
# $$\textbf{Cumulative distribution function}\\F(k)=\sum_{i=0}^{k-1}\frac{n!}{i!(n-i)!}p^i(1-p)^{n-i}$$
# ## Conventional method
def proba_binomial(n:'Number of trials',N:'Number of points to plot',
                   p:'Success probability'):
    pi = [(1-p)**n]
    add = pi.append
    for i in range(N-1):
        add(((n-i)*p*pi[-1])/((i+1)*(1-p)))
    return pi
# ## Vectorized method
def proba_binomial_vect(n:'Number of trials',
                        N:'Number of points to plot',
                        p:'Success probability'):
    global pi
    pi = np.zeros(N)
    pi[0] = (1-p)**n
    def probability_vector(i:'Counter to fill the vector pi'):
        global pi
        pi[i+1] = ((n-i)*p*pi[i])/((i+1)*(1-p))
    [probability_vector(j) for j in range(N-1)]
    return pi
# +
# Check of the functions created
# Different parameter sets to plot the binomial pmf
n = [50,100,150]
# Parameter p of the distribution
p = 0.5
# Result using the conventional method
P = list(map(lambda x,n: proba_binomial(n,100,p),range(len(n)),n))
P = np.asmatrix(P)
# Result using the vectorized method
P2 = list(map(lambda x,n:proba_binomial_vect(n,100,p), range(len(n)),n))
P2 = np.array(P2,ndmin=1)
P2.shape
def grafica_binomial(P:'Matrix of binomial probabilities',i):
    # Probability mass plot
    fig,(ax1,ax2) = plt.subplots(1,2)
    fig.set_figwidth(10)
    ax1.plot(P.T,'o',markersize=3)
    ax1.legend(['n=50','n=100','n=150'])
    ax1.set_title('Probability mass function')
    # Cumulative probability
    F = np.cumsum(P,axis=1)
    ax2.plot(F.T,'o',markersize=3)
    ax2.legend(['n=%d'%n[0],'n=%d'%n[1],'n=%d'%n[2]])
    ax2.set_title('Cumulative distribution')
    if i==0:
        plt.suptitle('Conventional method')
    else:
        plt.suptitle('Vectorized method')
    plt.show()
# Plots for the conventional and vectorized methods
[grafica_binomial(p,i) for p,i in zip([P,P2],range(3))];
# -
# ### Characteristics
# The binomial distribution is a discrete probability distribution that counts the number of successes in a sequence of **n independent Bernoulli trials**, each with a fixed probability p of success. The outcome called "success" occurs with probability p and the other, "failure", with probability q = 1 - p. The experiment is repeated n times independently, and $X$ denotes the number of successes obtained over the n experiments.
#
# Under these circumstances, the variable $X$ is said to follow a binomial probability distribution, denoted $X\sim B(n,p)$.
#
# ### Example
# Suppose a (6-sided) die is rolled 51 times and we want the probability that the number 3 comes up 20 times. Here $X \sim B(51, 1/6)$ and the probability is $P(X=20)$:
#
# $$P(X=20)={51 \choose 20}(1/6)^{20}(1-1/6)^{51-20} $$
n = 51; p=1/6; X=20
print('P(X=20)=',st.binom(n,p).pmf(X))
# ### Relationships with other random variables
#
# If n tends to infinity and p is such that the product np tends to $\lambda$, then the distribution of the binomial random variable tends to a Poisson distribution with parameter $\lambda$.
#
# Finally, when $p = 0.5$ and n is large (usually $n\geq 30$ is required), the binomial distribution can be approximated by a normal distribution with parameters $\mu=np$, $\sigma^2=np(1-p)$.
#
# > Reference: https://en.wikipedia.org/wiki/Binomial_distribution
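# The Poisson limit can be checked numerically (our own sketch): for large n and p = lambda/n, the binomial pmf is very close to the Poisson pmf.
# +
import numpy as np
import scipy.stats as st

lam, n = 4.0, 10_000
ks = np.arange(15)
diff = np.max(np.abs(st.binom.pmf(ks, n, lam / n) - st.poisson.pmf(ks, lam)))
print(diff)  # very small
# -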
p = .5; n = 50
mu = n*p; sigma = np.sqrt(n*p*(1-p))
# Using the function we created
Bi = proba_binomial(n,50,p)
plt.figure(1,figsize=[10,5])
plt.subplot(121)
plt.plot(Bi,'o')
plt.title('Binomial distribution n=50, p=0.5')
# Using scipy to plot the approximating normal
x = np.arange(0,50)
Bi_norm = st.norm.pdf(x,loc=mu,scale=sigma)
plt.subplot(122)
plt.plot(Bi_norm,'o')
plt.title('Normal(np, np(1-p)) distribution')
plt.show()
# ## 6. Poisson distribution
# Parameters: mean $\lambda>0 \in \mathbb{R}$; number of occurrences k
#
# - k is the number of occurrences of the event or phenomenon (the function gives the probability that the event happens exactly k times).
# - λ is a positive parameter representing the expected number of occurrences of the phenomenon during a given interval. For example, if the event occurs on average 4 times per minute and we want the probability of it occurring k times within a 10-minute interval, we use a Poisson model with λ = 10×4 = 40.
#
# $$\textbf{Probability density function}\\p(k)=\frac{\lambda^k e^{-\lambda}}{k!},\quad k\in \mathbb{N}$$
#
# ### Application
# The number of events in a given time interval is a Poisson random variable, where $\lambda$ is the mean number of events in that interval.
#
# ### Relation to the Erlang and gamma distributions
# The time until the k-th event in a Poisson process of intensity $\lambda$ is a random variable with a gamma distribution, or (equivalently) an Erlang distribution with $\beta =1/\lambda$.
#
# ### Normal approximation
# As a consequence of the central limit theorem, for large values of $\lambda$ a Poisson random variable X can be approximated by a normal with parameters $\mu=\sigma^2=\lambda$. Moreover, the standardized quotient
# $$Y=\frac{X-\lambda}{\sqrt{\lambda}}$$
# converges to a normal distribution with mean 0 and variance 1.
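# A quick numerical check of the normal approximation (our own sketch):
# +
import scipy.stats as st

lam = 100
# P(X <= 110) under Poisson(lam) vs its normal approximation N(lam, lam)
p_exact = st.poisson.cdf(110, lam)
p_approx = st.norm.cdf(110, loc=lam, scale=lam**0.5)
print(p_exact, p_approx)
# -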
#
# ### Example
# If 2% of the books bound in a certain workshop have defective bindings, the probability that 5 out of 400 books bound there have defective bindings is obtained with the Poisson distribution. In this case k is 5 and λ, the expected number of defective books, is 2% of 400, i.e. 8. The probability sought is therefore
# $$P(5;8)={\frac {8^{5}e^{-8}}{5!}}=0.092$$
#
# > Reference: https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Poisson
k=5; Lamda = 8
print('P(5;8)=',st.poisson(Lamda).pmf(k))
# +
import scipy.special as sps
p = lambda k,l:(l**k*np.exp(-l))/sps.gamma(k+1)
k = np.arange(0,50)
l = [1,10,20,30]
P = np.asmatrix(list(map(lambda x:p(k,x*np.ones(len(k))),l))).T
plt.figure(1,figsize=[12,4])
plt.subplot(121)
plt.plot(P,'o',markersize=3)
plt.legend(['$\lambda$=%d'%i for i in l])
# Cumulative probability
P_ac = np.cumsum(P,axis=0)
plt.subplot(122)
plt.plot(P_ac,'o',markersize=3)
[plt.hlines(P_ac[:,i],range(len(P_ac)),range(1,len(P_ac)+1)) for i in range(len(l))]
plt.legend(['$\lambda$=%d'%i for i in l])
plt.show()
# -
# 
# ## 7. Triangular distribution
# Parameters:
# - a : $a\in (-\infty ,\infty)$
# - b : $b > a$
# - c : $a\leq c\leq b$
# - Support: $a\leq x\leq b$
#
# $$\textbf{Probability density function}\\f(x|a,b,c)={\begin{cases}{\frac {2(x-a)}{(b-a)(c-a)}}&{\text{for }}a\leq x<c,\\[4pt]{\frac {2}{b-a}}&{\text{for }}x=c,\\[4pt]{\frac {2(b-x)}{(b-a)(b-c)}}&{\text{for }}c<x\leq b,\\[4pt]0&{\text{otherwise}}\end{cases}}$$
#
#
# $$\textbf{Cumulative distribution function}\\F(x|a,b,c)={\begin{cases}{0}&{\text{for }}x\leq a,\\[4pt]{\frac {(x-a)^2}{(b-a)(c-a)}}&{\text{for }}a< x\leq c,\\[4pt]{1-\frac{(b-x)^2}{(b-a)(b-c)}}&{\text{for }}c<x< b,\\[4pt]1&{\text{for }}b\leq x\end{cases}}$$
#
# 
#
# ### Uso de la distribución triangular
# La distribución triangular es habitualmente empleada como una descripción subjetiva de una población para la que sólo se cuenta con una cantidad limitada de datos muestrales y, especialmente en casos en que la relación entre variables es conocida pero los **datos son escasos** (posiblemente porque es alto el costo de recolectarlos). Está basada en un conocimiento del mínimo y el máximo como el del valor modal. Por estos motivos, la Distribución Triangular ha sido denominada como la de "falta de precisión" o de información.
#
# > Reference: https://en.wikipedia.org/wiki/Triangular_distribution
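# The piecewise density above can be evaluated directly with NumPy; a minimal sketch (with a = minimum, b = maximum, c = mode, as in the density formula above):

```python
import numpy as np

def triangular_pdf(x, a, b, c):
    # piecewise density of the triangular distribution (a = min, b = max, c = mode)
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    left = (a <= x) & (x < c)
    right = (c < x) & (x <= b)
    out[left] = 2 * (x[left] - a) / ((b - a) * (c - a))
    out[right] = 2 * (b - x[right]) / ((b - a) * (b - c))
    out[x == c] = 2 / (b - a)  # peak value at the mode
    return out

# the density integrates to 1 and peaks at 2/(b - a)
x = np.linspace(0, 6, 100001)
f = triangular_pdf(x, 1, 5, 2)
area = ((f[:-1] + f[1:]) / 2 * np.diff(x)).sum()  # trapezoid rule
```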
# # <font color ='red'> Homework (Optional)
# Generate random values for the following probability distribution
# $$f(x)=\begin{cases}\frac{2}{(c-a)(b-a)}(x-a), & a\leq x \leq b\\ \frac{-2}{(c-a)(c-b)}(x-c),& b\leq x \leq c \end{cases}$$ with a=1; b=2; c=5
# 1. Using the inverse transform method.
# 2. Using the acceptance-rejection method.
# 3. The library `import scipy.stats as st` contains a function that generates triangular random variables, `st.triang.pdf(x, c, loc, scale)`, where "c, loc, scale" are the parameters of this distribution (similar to the a, b, c in our function, BUT NOT EQUAL). Explore the Python help to find the equivalence between the parameters "c, loc, scale" and the parameters "a, b, c" of our function. The expected solution looks like this:
# 
#
# 4. Generate 1000 random variables using the function created in point 2 and using the `st.triang.rvs` function, and plot the histogram of each of the generated sets of random variables in two separate figures. Something like this is expected:
#
# 
# ### I am making this optional because it may show up on a quiz or an exam.
#
# # <font color ='red'>Homework on probability distributions:</font>
#
# This homework must be done in groups, which are listed in the following table. It consists of editing the page assigned to your group; for example, if you are group 1, you must edit the page that corresponds to your group and none of the other pages. On that page, in a brief presentation of approximately 5 to 7 minutes next class, Tuesday October 1, you will present what you found out about each of the assigned probability distributions. What I need you to research is:
#
# 1. An explanation of the use of each probability distribution.
#
# 2. Use audiovisual resources such as videos, tables, GIFs, images, external links, etc., which can be embedded from the Canvas platform, to explain in the friendliest and simplest way possible the applications and uses of the assigned probability distributions.
#
# 3. Consult books, the internet, and applications about how to use these distributions and why.
#
# 4. You may also include the mathematical description of these distributions. Note that you can enter LaTeX code for equations and more.
#
# Grading will be based on creativity and on the command you show of each of your probability distributions during the presentation.
#
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by <NAME>
# </footer>
| TEMA-2/Clase11_DistribucionesProbabilidad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# +
import pathlib
import astropy.coordinates as coord
import astropy.table as at
import astropy.units as u
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import binned_statistic
from scipy.interpolate import interp1d
from tqdm import tqdm
from gala.mpl_style import turbo
from totoro.data import datasets
from totoro.abundance_helpers import elem_to_label
from totoro.config import cache_path, plot_path
# -
all_tbls = {}
for data_name, d in datasets.items():
this_cache_path = cache_path / data_name
tbls = {}
for path in this_cache_path.glob('optimize-results-*.csv'):
        try:
            elem = path.name.split('.')[0].split('-')[-1]
        except (IndexError, AttributeError):
            print(f"FAILED {path}")
            continue
tbls[elem] = at.Table.read(path)
if len(tbls) > 4:
all_tbls[data_name] = tbls
print(data_name, len(tbls))
# Unique colors per elem ratio:
# +
all_elems = set()
for tbls in all_tbls.values():
all_elems = all_elems.union(tbls.keys())
elem_to_color = {}
for i, elem in enumerate(all_elems):
elem_to_color[elem] = turbo(i / len(all_elems))
# +
fiducials = {
'mdisk_f': 1.,
'disk_hz': 0.28,
'zsun': 20.8,
'vzsun': 7.78
}
colcols = [
('mdisk_f', 'disk_hz'),
('mdisk_f', 'vzsun'),
('zsun', 'vzsun')
]
# -
for data_name, tbls in all_tbls.items():
fig, axes = plt.subplots(1, 3, figsize=(15, 5.5),
constrained_layout=True)
for elem in tbls:
for i, (col1, col2) in enumerate(colcols):
ax = axes[i]
ax.plot(tbls[elem][col1], tbls[elem][col2],
ls='none', marker='o', mew=0, ms=4,
label=elem_to_label(elem), color=elem_to_color[elem])
axes[0].legend()
axes[0].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[1].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[2].set_xlabel(r'$z_\odot$ [pc]')
axes[0].set_ylabel(r'$h_z$ [kpc]')
axes[1].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
axes[2].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
for ax, (col1, col2) in zip(axes, colcols):
ax.axvline(fiducials[col1], zorder=-10, color='#aaaaaa', linestyle='--')
ax.axhline(fiducials[col2], zorder=-10, color='#aaaaaa', linestyle='--')
fig.set_facecolor('w')
fig.suptitle(data_name, fontsize=24)
# ### Error ellipses
# +
# From https://matplotlib.org/devdocs/gallery/statistics/confidence_ellipse.html
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms
def confidence_ellipse(x, y, ax, n_std=1.0, facecolor='none', **kwargs):
cov = np.cov(x, y)
pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    # Using a special case to obtain the eigenvalues of this
    # two-dimensional dataset.
ell_radius_x = np.sqrt(1 + pearson)
ell_radius_y = np.sqrt(1 - pearson)
ellipse = Ellipse((0, 0), width=ell_radius_x * 2, height=ell_radius_y * 2,
facecolor=facecolor, **kwargs)
    # Calculating the standard deviation of x from
    # the square root of the variance and multiplying
    # with the given number of standard deviations.
scale_x = np.sqrt(cov[0, 0]) * n_std
mean_x = np.mean(x)
    # calculating the standard deviation of y ...
scale_y = np.sqrt(cov[1, 1]) * n_std
mean_y = np.mean(y)
transf = transforms.Affine2D() \
.rotate_deg(45) \
.scale(scale_x, scale_y) \
.translate(mean_x, mean_y)
ellipse.set_transform(transf + ax.transData)
return ax.add_patch(ellipse)
def plot_cov_ellipse(m, C, ax, n_std=1.0, facecolor='none', **kwargs):
pearson = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
    # Using a special case to obtain the eigenvalues of this
    # two-dimensional dataset.
ell_radius_x = np.sqrt(1 + pearson)
ell_radius_y = np.sqrt(1 - pearson)
ellipse = Ellipse((0, 0), width=ell_radius_x * 2, height=ell_radius_y * 2,
facecolor=facecolor, **kwargs)
transf = transforms.Affine2D() \
.rotate_deg(45) \
.scale(n_std * np.sqrt(C[0, 0]),
n_std * np.sqrt(C[1, 1])) \
.translate(m[0], m[1])
ellipse.set_transform(transf + ax.transData)
return ax.add_patch(ellipse)
# -
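# The rotate-by-45°-then-scale construction in both helpers works because the correlation matrix $[[1,\rho],[\rho,1]]$ has eigenvalues $1\pm\rho$ with eigenvectors along the $\pm 45°$ diagonals — that is the "special case" the comments refer to. A quick numerical check:

```python
import numpy as np

rho = 0.6
corr = np.array([[1.0, rho], [rho, 1.0]])
eigvals, eigvecs = np.linalg.eigh(corr)

# eigenvalues are 1 - rho and 1 + rho ...
assert np.allclose(np.sort(eigvals), [1 - rho, 1 + rho])
# ... and both eigenvectors lie along the +/-45 degree diagonals
assert np.allclose(np.abs(eigvecs), np.sqrt(0.5))
```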
def make_ell_plot(tbls):
elem_names = tbls.keys()
means = np.zeros((len(elem_names), 4))
covs = np.zeros((len(elem_names), 4, 4))
for j, elem in enumerate(elem_names):
mask = (np.isfinite(tbls[elem]['mdisk_f']) &
np.isfinite(tbls[elem]['zsun']) &
np.isfinite(tbls[elem]['vzsun']))
X = np.stack((tbls[elem]['mdisk_f'][mask],
tbls[elem]['disk_hz'][mask],
tbls[elem]['zsun'][mask],
tbls[elem]['vzsun'][mask]))
covs[j] = np.cov(X)
means[j] = np.mean(X, axis=1)
C = np.linalg.inv(np.sum([np.linalg.inv(cov) for cov in covs], axis=0))
m = np.sum([C @ np.linalg.inv(cov) @ mean
for mean, cov in zip(means, covs)], axis=0)
logdets = [np.linalg.slogdet(cov)[1] for cov in covs]
norm = mpl.colors.Normalize(vmin=np.nanmin(logdets),
vmax=np.nanmax(logdets),
clip=True)
norm2 = mpl.colors.Normalize(vmin=-0.2, vmax=1.1)
def get_alpha(ld):
return norm2(1 - norm(ld))
fig, axes = plt.subplots(1, 3, figsize=(15, 5.5),
constrained_layout=True)
for elem, logdet in zip(elem_names, logdets):
for i, (col1, col2) in enumerate(colcols):
ax = axes[i]
color = elem_to_color[elem]
mask = np.isfinite(tbls[elem][col1]) & np.isfinite(tbls[elem][col2])
if mask.sum() < 100:
print(f'skipping {elem} {col1} {col2}')
continue
ell = confidence_ellipse(tbls[elem][col1][mask],
tbls[elem][col2][mask],
ax,
n_std=1.,
linewidth=0, facecolor=color,
alpha=get_alpha(logdet),
label=elem_to_label(elem))
ell = confidence_ellipse(tbls[elem][col1][mask],
tbls[elem][col2][mask],
ax,
n_std=2.,
linewidth=0, facecolor=color,
alpha=get_alpha(logdet) / 2)
for j, i in enumerate([[2, 3], [1, 2], [0, 1]]):
mm = np.delete(m, i)
CC = np.delete(np.delete(C, i, axis=0), i, axis=1)
ell = plot_cov_ellipse(mm, CC, ax=axes[j],
n_std=1.,
linewidth=0, facecolor='k',
alpha=0.5, label='joint', zorder=100)
ell = plot_cov_ellipse(mm, CC, ax=axes[j],
n_std=2.,
linewidth=0, facecolor='k',
alpha=0.2, zorder=100)
axes[0].set_xlim(0.4, 1.8)
axes[1].set_xlim(0.4, 1.8)
axes[2].set_xlim(-60, 30)
axes[0].set_ylim(0, 0.8)
axes[1].set_ylim(0, 15)
axes[2].set_ylim(0, 15)
axes[2].legend(ncol=2)
axes[0].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[1].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[2].set_xlabel(r'$z_\odot$ [pc]')
axes[0].set_ylabel(r'$h_z$ [kpc]')
axes[1].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
axes[2].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
for ax, (col1, col2) in zip(axes, colcols):
ax.axvline(fiducials[col1], zorder=-10, color='#aaaaaa', linestyle='--')
ax.axhline(fiducials[col2], zorder=-10, color='#aaaaaa', linestyle='--')
fig.set_facecolor('w')
return fig, axes
for data_name, tbls in all_tbls.items():
fig, axes = make_ell_plot(tbls)
fig.suptitle(data_name, fontsize=24)
fig.savefig(plot_path / data_name / 'bootstrap-error-ellipses.png', dpi=250)
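# The joint constraint in `make_ell_plot` (`C = inv(sum(inv(cov_j)))`, `m = sum(C @ inv(cov_j) @ mean_j)`) is the standard precision-weighted combination of independent Gaussian estimates. A one-dimensional sketch of the same idea:

```python
import numpy as np

# two independent Gaussian measurements of the same quantity
means = np.array([1.0, 3.0])
variances = np.array([1.0, 4.0])

# combine by inverse-variance (precision) weighting
precisions = 1.0 / variances
combined_var = 1.0 / precisions.sum()
combined_mean = combined_var * (precisions * means).sum()
# the combined mean is pulled toward the more precise measurement
```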
| notebooks/Optimize-results-debug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# #Titanic: Machine Learning from Disaster
# <NAME> (VieVie31)
import graphlab as gl
data_train = gl.load_sframe("train.csv")
data_test = gl.load_sframe("test.csv")
data_train.head(3)
# ##Cleaning training data
data_train["male"] = data_train["Sex"] == "male"
data_train["female"] = data_train["Sex"] == "female"
data_train = data_train.remove_column("Sex")
data_train["no_age"] = data_train["Age"] == None
data_train["Age"] = gl.SArray([0 if v == None else v for v in data_train["Age"]])
data_train["embarked_s"] = data_train["Embarked"] == "S"
data_train["embarked_c"] = data_train["Embarked"] == "C"
data_train["embarked_q"] = data_train["Embarked"] == "Q"
data_train["embarked_none"] = data_train["Embarked"] == None
data_train = data_train.remove_column("Embarked")
data_train["1_class"] = data_train["Pclass"] == 1
data_train["2_class"] = data_train["Pclass"] == 2
data_train["3_class"] = data_train["Pclass"] == 3
data_train = data_train.remove_column("Pclass")
gl.canvas.set_target("ipynb")
print data_train.head(3)
print data_train["Ticket"]
for v in data_train["Ticket"]:
print v, " ",
# +
#processing the ticket numbers:
#strip the non-numeric characters, keeping only the digits
def toNumber(string):
s = "0"
for v in string:
if v in ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]:
s += v
return int(s)
data_train["PC"] = gl.SArray(["PC" in v for v in data_train["Ticket"]])
data_train["CA"] = gl.SArray(["CA" in v for v in data_train["Ticket"]])
data_train["C.A."] = gl.SArray(["C.A." in v for v in data_train["Ticket"]])
data_train["W./C."] = gl.SArray(["W./C." in v for v in data_train["Ticket"]])
data_train["SOTON"] = gl.SArray(["SOTON" in v for v in data_train["Ticket"]])
data_train["number"] = gl.SArray([toNumber(v) for v in data_train["Ticket"]])
data_train.head(3)
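# As a quick check of the digit-stripping helper above (a Python 3-compatible sketch; `str.isdigit` matches the explicit digit list used in `toNumber`):

```python
def to_number(ticket):
    # keep only digit characters; the "0" prefix makes all-letter tickets map to 0
    digits = "0" + "".join(ch for ch in ticket if ch.isdigit())
    return int(digits)

assert to_number("PC 17599") == 17599
assert to_number("LINE") == 0
```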
# +
import re
civilite_pattern = re.compile(r" ([A-Za-z]+)\.")  # a title: space, letters, then a literal period
def get_civilite(name):
try:
return civilite_pattern.search(name).group(0)
except:
return ""
civilites_lst = set([get_civilite(v) if get_civilite(v).endswith('.') else '' for v in data_train["Name"]])
civilites_lst.remove('')
print civilites_lst
for c in civilites_lst:
data_train[c] = gl.SArray([get_civilite(v) == c for v in data_train["Name"]])
data_train.head(1)
# +
def cabin_letter(cabin):
try:
return cabin[0]
except:
return ""
cabin_letters = set([cabin_letter(v) for v in data_train["Cabin"]])
cabin_letters.remove('')
print cabin_letters
for c in cabin_letters:
data_train[c] = gl.SArray([cabin_letter(v) == c for v in data_train["Cabin"]])
data_train.column_names()
# +
def cabin_number(cabin):
return toNumber(cabin)
data_train["cabin_number"] = gl.SArray([cabin_number(v) for v in data_train["Cabin"]])
# -
train_set_1, train_set_2 = data_train.random_split(.8)
print train_set_1.head(1)
features = ["Age", "SibSp", "Parch", "Fare", "male", "female", "no_age",
"embarked_s", "embarked_c", "embarked_q", "embarked_none",
"1_class", "2_class", "3_class",
"CA", "C.A.", "W./C.", "SOTON",
"cabin_number"] + list(civilites_lst) + list(cabin_letters) #, "number"]
# ##Create logistic model
# +
#help(gl.classifier.logistic_classifier.create)
# -
simple_logistic_classifier = gl.classifier.logistic_classifier.create(train_set_1, target="Survived",
features=features, validation_set=train_set_2)
# ##Create SVM model
simple_svm_classifier = gl.classifier.svm_classifier.create(train_set_1, target="Survived",
features=features, validation_set=train_set_2,
max_iterations=1000)
# ##Create a decision tree model
decision_tree_model = gl.decision_tree_classifier.create(train_set_1, validation_set=train_set_2,
target="Survived", features=features)
# ##Boosted Tree model
boosted_tree_model = gl.classifier.boosted_trees_classifier.create(train_set_1, validation_set=train_set_2,
target="Survived", features=features)
# ##Random Forest model
random_forest_model = gl.classifier.random_forest_classifier.create(train_set_1, validation_set=train_set_2,
target="Survived", features=features, num_trees=100)
# ##Cleaning testing data
# DO NOT FORGET: every cleaning operation applied to the training data and used by the classifier must also be applied here!
data_test["male"] = data_test["Sex"] == "male"
data_test["female"] = data_test["Sex"] == "female"
data_test = data_test.remove_column("Sex")
data_test["no_age"] = data_test["Age"] == None
data_test["Age"] = gl.SArray([0 if v == None else v for v in data_test["Age"]])
data_test["embarked_s"] = data_test["Embarked"] == "S"
data_test["embarked_c"] = data_test["Embarked"] == "C"
data_test["embarked_q"] = data_test["Embarked"] == "Q"
data_test["embarked_none"] = data_test["Embarked"] == None
data_test = data_test.remove_column("Embarked")
data_test["1_class"] = data_test["Pclass"] == 1
data_test["2_class"] = data_test["Pclass"] == 2
data_test["3_class"] = data_test["Pclass"] == 3
data_test = data_test.remove_column("Pclass")
data_test["PC"] = gl.SArray(["PC" in v for v in data_test["Ticket"]])
data_test["CA"] = gl.SArray(["CA" in v for v in data_test["Ticket"]])
data_test["C.A."] = gl.SArray(["C.A." in v for v in data_test["Ticket"]])
data_test["W./C."] = gl.SArray(["W./C." in v for v in data_test["Ticket"]])
data_test["SOTON"] = gl.SArray(["SOTON" in v for v in data_test["Ticket"]])
data_test["number"] = gl.SArray([toNumber(v) for v in data_test["Ticket"]])
for c in civilites_lst:
data_test[c] = gl.SArray([get_civilite(v) == c for v in data_test["Name"]])
for c in cabin_letters:
data_test[c] = gl.SArray([cabin_letter(v) == c for v in data_test["Cabin"]])
data_test["cabin_number"] = gl.SArray([cabin_number(v) for v in data_test["Cabin"]])
# ##Making Predictions
data_test["Survived"] = boosted_tree_model.predict(data_test)
#random_forest_model.predict(data_test)
#boosted_tree_model.predict(data_test)
#decision_tree_model.predict(data_test)
#simple_svm_classifier.predict(data_test)
#simple_logistic_classifier.predict(data_test)
submission = gl.SFrame()
submission["PassengerId"] = data_test["PassengerId"]
submission["Survived"] = data_test["Survived"]
submission.save("kaggle.csv", format="csv")
# +
#data_train.show()
# +
#data_train.head(10)
# -
| titanic_kaggle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Simple Simulated Environment
# import packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import gym
# ### Initialize Env
env = gym.make('CartPole-v0')
env.seed(0)
print('observation space:', env.observation_space)
print('action space:', env.action_space)
env.reset()
env.step(env.action_space.sample())  # step() requires an action; reset first so the env has a state
env.action_space.sample()
env.reset()
# +
# np.array(self.state), reward, done, {}
env.reset()
for i in range(1000000):
a = env.step(env.action_space.sample())
    if a[2]:  # reward is always 1.0 in CartPole, so break on the done flag instead
        print(i, a)
        break
# -
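# For reference, the conventional way to run an episode is to break on the `done` flag rather than on the reward (CartPole's per-step reward is always 1.0). A self-contained sketch with a toy stand-in for the gym step/reset protocol (hypothetical environment, for illustration only):

```python
class ToyEnv:
    """Minimal stand-in exposing gym's reset()/step() protocol."""
    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        done = self.t >= 10  # episode ends after at most 10 steps
        return 0.0, 1.0, done, {}  # observation, reward, done, info

env = ToyEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(0)
    total_reward += reward
# total_reward equals the episode length (10 here)
```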
| 99_playground/emulated_simulator/envditto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import logging
# #### Function and Parameters ####
# +
def delta(phi, P, phi0, chi0):
    # One-grid-cell boxcar approximating a Dirac delta at phi0. The original called an
    # undefined helper `box`; this is an assumed minimal form (height P, angle chi0).
    width = np.abs(phi[0] - phi[1])
    return P * np.exp(2j * chi0) * (np.abs(phi - phi0) < width / 2)

def deltas(Ps, phi0s, chi0s):
    model = np.zeros(faraday_depth_p.shape) * 1j
    for i in range(len(phi0s)):
        model += delta(faraday_depth_p, Ps[i], phi0s[i], chi0s[i])
    return model
# -
frequency_k = np.linspace(400e6, 800e6, 1024)  # 400 to 800 MHz, in Hz; np.arange(400, 800, 1024) would yield a single point
faraday_depth_p = np.linspace(-200, 200, 800)  # in rad/m^2
c = 3 * 10**8  # speed of light in m/s
lambda2 = (c/frequency_k)**2
# #### Simulating an Observation ####
x_true_p = np.zeros(faraday_depth_p.shape)*0j
x_true_p += deltas([1.], [45.], [0.])  # one source: P=1 at phi0=45 rad/m^2, chi0=0 (illustrative values; the original call did not match the signature)
# faraday signal
# #### Measurement Matrix####
# +
logger = logging.getLogger('Matrix A')

def matrix(lambda2, faraday_depth_p, lambda2_width, weights=None):
    A = np.zeros((len(lambda2), len(faraday_depth_p)), dtype=complex)
    for m in range(len(lambda2)):
        A[m, :] = np.exp(2 * lambda2[m] * faraday_depth_p * 1j) * np.sinc(2. * faraday_depth_p * lambda2_width[m]/np.pi)
    return A

class operator:
    def __init__(self, lambda2, faraday_depth_p, lambda2_width=None, weights=None):
        if lambda2_width is not None:
            assert lambda2.shape == lambda2_width.shape
            A = matrix(lambda2, faraday_depth_p, lambda2_width, weights)
        else:
            A = matrix(lambda2, faraday_depth_p, lambda2 * 0., weights)
        A_H = np.conj(A.T)
        self.dir_op = lambda x: A @ x
        self.adj_op = lambda x: A_H @ x

m_op_right = operator(lambda2, faraday_depth_p)
# -
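# A quick way to sanity-check a forward/adjoint operator pair like the one above is the dot-product test: for random x and y, ⟨y, A x⟩ must equal ⟨Aᴴ y, x⟩. A minimal sketch with a small random complex matrix (illustrative, not the RM-synthesis kernel itself):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))
A_H = np.conj(A.T)

dir_op = lambda x: A @ x    # forward operator
adj_op = lambda y: A_H @ y  # adjoint operator

x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=6) + 1j * rng.normal(size=6)

# <y, A x> equals <A^H y, x> for a correct adjoint
lhs = np.vdot(y, dir_op(x))
rhs = np.vdot(adj_op(y), x)
```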
| CTA200_Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science
#
# ## Standard Section 3: Multiple Linear Regression and Polynomial Regression
#
# **Harvard University**<br/>
# **Fall 2019**<br/>
# **Instructors**: <NAME>, <NAME>, and <NAME><br/>
# **Section Leaders**: <NAME>, Abhimanyu <NAME>, Robbert <NAME><br/>
#
# <hr style='height:2px'>
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
# For this section, our goal is to get you familiarized with Multiple Linear Regression. We have learned how to model data with kNN Regression and Simple Linear Regression and our goal now is to dive deep into Linear Regression.
#
# Specifically, we will:
#
# - Load in the titanic dataset from seaborn
# - Learn a few ways to plot **distributions** of variables using seaborn
# - Learn about different **kinds of variables** including continuous, categorical and ordinal
# - Perform single and multiple linear regression
# - Learn about **interaction** terms
# - Understand how to **interpret coefficients** in linear regression
# - Look at **polynomial** regression
# - Understand the **assumptions** being made in a linear regression model
# - (Extra): look at some cool plots to raise your EDA game
# 
# +
# Data and Stats packages
import numpy as np
import pandas as pd
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# -
# # Extending Linear Regression
#
# ## Working with the Titanic Dataset from Seaborn
#
# For our dataset, we'll be using the passenger list from the Titanic, which famously sank in 1912. Let's have a look at the data. Some descriptions of the data are at https://www.kaggle.com/c/titanic/data, and here's [how seaborn preprocessed it](https://github.com/mwaskom/seaborn-data/blob/master/process/titanic.py).
#
# The task is to build a regression model to **predict the fare**, based on different attributes.
#
# Let's keep a subset of the data, which includes the following variables:
#
# - age
# - sex
# - class
# - embark_town
# - alone
# - **fare** (the response variable)
# Load the dataset from seaborn
titanic = sns.load_dataset("titanic")
titanic.head()
# checking for null values
chosen_vars = ['age', 'sex', 'class', 'embark_town', 'alone', 'fare']
titanic = titanic[chosen_vars]
titanic.info()
# **Exercise**: check the datatypes of each column and display the statistics (min, max, mean and any others) for all the numerical columns of the dataset.
## your code here
print(titanic.dtypes)
titanic.describe()
# # %load 'solutions/sol1.py'
# **Exercise**: drop all *rows* with null values from the dataset. Is this always a good idea?
## .dropna to drop na values
#axis=0 means you are dropping the row with na values
titanic = titanic.dropna(axis=0)
titanic.info()
# +
# # %load 'solutions/sol2.py'
# -
# Now let us visualize the response variable. A good visualization of the distribution of a variable will enable us to answer three kinds of questions:
#
# - What values are central or typical? (e.g., mean, median, modes)
# - What is the typical spread of values around those central values? (e.g., variance/stdev, skewness)
# - What are unusual or exceptional values (e.g., outliers)
# +
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24, 6))
ax = ax.ravel()
#Normalized histogram (area under curve = 1)
sns.distplot(titanic['fare'], ax=ax[0])
ax[0].set_title('Seaborn distplot')
ax[0].set_ylabel('Normalized frequencies')
#Violin Plot
sns.violinplot(x='fare', data=titanic, ax=ax[1])
ax[1].set_title('Seaborn violin plot')
ax[1].set_ylabel('Frequencies')
#Box Plot
sns.boxplot(x='fare', data=titanic, ax=ax[2])
ax[2].set_title('Seaborn box plot')
ax[2].set_ylabel('Frequencies')
fig.suptitle('Distribution of fare');
# -
# How do we interpret these plots?
# ## Train-Test Split
# +
from sklearn.model_selection import train_test_split
titanic_train, titanic_test = train_test_split(titanic, train_size=0.7, random_state=99)
titanic_train = titanic_train.copy()
titanic_test = titanic_test.copy()
print(titanic_train.shape, titanic_test.shape)
# -
# ## Simple one-variable OLS
# **Exercise**: You've done this before: make a simple model using the OLS package from the statsmodels library predicting **fare** using **age** using the training data. Name your model `model_1` and display the summary
from statsmodels.api import OLS
import statsmodels.api as sm
# Your code here
age_ca = sm.add_constant(titanic_train['age'])
model_1 = OLS(titanic_train['fare'], age_ca).fit()
model_1.summary()
# +
# # %load 'solutions/sol3.py'
# -
# ## Dealing with different kinds of variables
# In general, you should be able to distinguish between three kinds of variables:
#
# 1. Continuous variables: such as `fare` or `age`
# 2. Categorical variables: such as `sex` or `alone`. There is no inherent ordering between the different values that these variables can take on. These are sometimes called nominal variables. Read more [here](https://stats.idre.ucla.edu/other/mult-pkg/whatstat/what-is-the-difference-between-categorical-ordinal-and-interval-variables/).
# 3. Ordinal variables: such as `class` (first > second > third). There is some inherent ordering of the values in the variables, but the values are not continuous either.
#
# *Note*: While there is some inherent ordering in `class`, we will be treating it like a categorical variable.
titanic_orig = titanic_train.copy()
# Let us now examine the `sex` column and see the value counts.
titanic_train['sex'].value_counts()
# **Exercise**: Create a column `sex_male` that is 1 if the passenger is male, 0 if female. The value counts indicate that these are the two options in this particular dataset. Ensure that the datatype is `int`.
# +
# your code here
titanic_train['sex_male'] = (titanic_train.sex == 'male').astype(int)
titanic_train['sex_male'].value_counts()
# -
# Do we need a `sex_female` column, or a `sex_others` column? Why or why not?
#
# Now, let us look at `class` in greater detail.
titanic_train['class_Second'] = (titanic_train['class'] == 'Second').astype(int)
titanic_train['class_Third'] = 1 * (titanic_train['class'] == 'Third') # just another way to do it
titanic_train.info()
# This function automates the above:
#Do not forget the drop_first=True
titanic_train_copy = pd.get_dummies(titanic_train, columns=['sex', 'class'], drop_first=True)
titanic_train_copy.head()
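# The `drop_first=True` above matters: with an intercept in the model, keeping both `sex_male` and a `sex_female` column would make the design matrix rank-deficient, since the two dummies always sum to the intercept column. A small sketch of that collinearity:

```python
import numpy as np

# toy design matrix: intercept, sex_male, sex_female
X = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
])
# column 0 = column 1 + column 2, so the columns are linearly dependent
rank = np.linalg.matrix_rank(X)  # 2, not 3: OLS coefficients would not be unique
```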
# ## Linear Regression with More Variables
# **Exercise**: Fit a linear regression including the new sex and class variables. Name this model `model_2`. Don't forget the constant!
# sm.add_constant adds just one constant to entire matrix
model_2 = sm.OLS(titanic_train['fare'],
sm.add_constant(titanic_train[['age', 'sex_male', 'class_Second', 'class_Third']])).fit()
model_2.summary()
#Simple way to check your values for betas from model
#Makes sense that first class pays most, second pays less, third pays least
titanic_train.groupby('class').mean()
# ### Interpreting These Results
# 1. Which of the predictors do you think are important? Why?
# 2. All else equal, what does being male do to the fare?
#
# ### Going back to the example from class
#
# 
#
# 3. What is the interpretation of $\beta_0$ and $\beta_1$?
# Your overall model with constant, B1*xi and error.
# B0 is value for male and B1 is what you get when you subtract female-male (difference between the two)
# ## Exploring Interactions
#lmplot is linear model plot - runs regression for you - assume bands ar CIs
sns.lmplot(x="age", y="fare", hue="sex", data=titanic_train, size=6)
sns.lmplot(x="age", y="fare", hue="class", data=titanic_train, size=6)
# The slopes seem to be different for male and female. What does that indicate?
#
# Let us now try to add an interaction effect into our model.
# +
# It seemed like gender interacted with age and class. Adding gender and age into model
titanic_train['sex_male_X_age'] = titanic_train['age'] * titanic_train['sex_male']
model_3 = sm.OLS(
titanic_train['fare'],
sm.add_constant(titanic_train[['age', 'sex_male', 'class_Second', 'class_Third', 'sex_male_X_age']])
).fit()
model_3.summary()
# -
# **What happened to the `age` and `male` terms?**
# +
# It seemed like gender interacted with age and class. Adding gender and class interaction term
titanic_train['sex_male_X_class_Second'] = titanic_train['sex_male'] * titanic_train['class_Second']
titanic_train['sex_male_X_class_Third'] = titanic_train['sex_male'] * titanic_train['class_Third']
model_4 = sm.OLS(
titanic_train['fare'],
sm.add_constant(titanic_train[['age', 'sex_male', 'class_Second', 'class_Third', 'sex_male_X_age',
'sex_male_X_class_Second', 'sex_male_X_class_Third']])
).fit()
model_4.summary()
# -
# ## Polynomial Regression
#
# 
# Perhaps we now believe that the fare also depends on the square of age. How would we include this term in our model?
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(titanic_train['age'], titanic_train['fare'], 'o')
x = np.linspace(0,80,100)
ax.plot(x, x, '-', label=r'$y=x$')
ax.plot(x, 0.04*x**2, '-', label=r'$y=c x^2$')
ax.set_title('Plotting Age (x) vs Fare (y)')
ax.set_xlabel('Age (x)')
ax.set_ylabel('Fare (y)')
ax.legend();
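# An alternative to adding the squared column by hand (not used in this section) is `sklearn.preprocessing.PolynomialFeatures`, which generates all polynomial terms up to a given degree:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

age = np.array([[22.0], [38.0], [26.0]])
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(age)  # columns: age, age**2
```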
# **Exercise**: Create a model that predicts fare from all the predictors in `model_4` + the square of age. Show the summary of this model. Call it `model_5`. Remember to use the training data, `titanic_train`.
#Adding age^2 and running new regression
titanic_train['age^2'] = titanic_train['age'] **2
model_5 = sm.OLS(
titanic_train['fare'],
sm.add_constant(titanic_train[['age', 'sex_male', 'class_Second', 'class_Third', 'sex_male_X_age',
'sex_male_X_class_Second', 'sex_male_X_class_Third', 'age^2']])
).fit()
model_5.summary()
# ## Looking at All Our Models: Model Selection
# What has happened to the $R^2$ as we added more features? Does this mean that the model is better? What if we kept adding more predictors and interaction terms? **In general, how should we choose a model?** We will spend a lot more time on model selection and learn about ways to do so as the course progresses.
#Plotting R^2 against model degrees of freedom (the number of fitted predictors, excluding the intercept)
models = [model_1, model_2, model_3, model_4, model_5]
fig, ax = plt.subplots(figsize=(12,6))
ax.plot([model.df_model for model in models], [model.rsquared for model in models], 'x-')
ax.set_xlabel("Model degrees of freedom")
ax.set_title('Model degrees of freedom vs training $R^2$')
ax.set_ylabel("$R^2$");
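# One simple guard against rewarding extra predictors (a sketch; the course will cover model selection in more depth) is adjusted $R^2$, which penalizes model degrees of freedom:

```python
def adjusted_r2(r2, n, p):
    # n = number of observations, p = number of predictors (excluding the intercept)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# a useless extra predictor leaves R^2 unchanged but lowers adjusted R^2
same_r2 = 0.50
adj_small = adjusted_r2(same_r2, n=100, p=4)
adj_large = adjusted_r2(same_r2, n=100, p=5)
```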
# **What about the test data?**
#
# We added a lot of columns to our training data and must add the same to our test data in order to calculate $R^2$ scores.
# +
#Transform test set in same way we transformed train set
# Added features for model 1
# Nothing new to be added
# Added features for model 2
titanic_test = pd.get_dummies(titanic_test, columns=['sex', 'class'], drop_first=True)
# Added features for model 3
titanic_test['sex_male_X_age'] = titanic_test['age'] * titanic_test['sex_male']
# Added features for model 4
titanic_test['sex_male_X_class_Second'] = titanic_test['sex_male'] * titanic_test['class_Second']
titanic_test['sex_male_X_class_Third'] = titanic_test['sex_male'] * titanic_test['class_Third']
# Added features for model 5
titanic_test['age^2'] = titanic_test['age'] **2
# -
# **Calculating R^2 scores**
# +
from sklearn.metrics import r2_score
r2_scores = []
y_preds = []
y_true = titanic_test['fare']
# model 1
y_preds.append(model_1.predict(sm.add_constant(titanic_test['age'])))
# model 2
y_preds.append(model_2.predict(sm.add_constant(titanic_test[['age', 'sex_male', 'class_Second', 'class_Third']])))
# model 3
y_preds.append(model_3.predict(sm.add_constant(titanic_test[['age', 'sex_male', 'class_Second', 'class_Third',
'sex_male_X_age']])))
# model 4
y_preds.append(model_4.predict(sm.add_constant(titanic_test[['age', 'sex_male', 'class_Second', 'class_Third',
'sex_male_X_age', 'sex_male_X_class_Second',
'sex_male_X_class_Third']])))
# model 5
y_preds.append(model_5.predict(sm.add_constant(titanic_test[['age', 'sex_male', 'class_Second',
'class_Third', 'sex_male_X_age',
'sex_male_X_class_Second',
'sex_male_X_class_Third', 'age^2']])))
for y_pred in y_preds:
r2_scores.append(r2_score(y_true, y_pred))
models = [model_1, model_2, model_3, model_4, model_5]
fig, ax = plt.subplots(figsize=(12,6))
ax.plot([model.df_model for model in models], r2_scores, 'x-')
ax.set_xlabel("Model degrees of freedom")
ax.set_title('Model degrees of freedom vs test $R^2$')
ax.set_ylabel("$R^2$");
# -
# ## Regression Assumptions. Should We Even Regress Linearly?
# 
# **Question**: What are the assumptions of a linear regression model?
#
# We find that the answer to this question can be found on closer examination of $\epsilon$. What is $\epsilon$? It is assumed that $\epsilon$ is normally distributed with a mean of 0 and variance $\sigma^2$. But what does this tell us?
#
# 1. Assumption 1: Constant variance of $\epsilon$ errors. This means that if we plot our **residuals**, which are the differences between the true $Y$ and our predicted $\hat{Y}$, they should look like they have constant variance and a mean of 0. We will show this in our plots.
# 2. Assumption 2: Independence of $\epsilon$ errors. This again comes from the distribution of $\epsilon$ that we decide beforehand.
# 3. Assumption 3: Linearity. This is an implicit assumption as we claim that Y can be modeled through a linear combination of the predictors. **Important Note:** Even though our predictors, for instance $X_2$, can be created by squaring or cubing another variable, we still use them in a linear equation as shown above, which is why polynomial regression is still a linear model.
# 4. Assumption 4: Normality. We assume that the $\epsilon$ is normally distributed, and we can show this in a histogram of the residuals.
#
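# To make the note under assumption 3 concrete, here is a small sketch (numpy only, not part of the original lab) of a model that is polynomial in $x$ yet linear in its parameters:

```python
# Fit y = b0 + b1*x + b2*x^2 by ordinary least squares: the design matrix
# contains a squared column, but the coefficients still enter linearly,
# which is why polynomial regression is still a linear model.
import numpy as np

x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x + 3.0 * x ** 2             # noiseless quadratic data
X = np.column_stack([np.ones_like(x), x, x ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)                                   # recovers the true coefficients
```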
# **Exercise**: Calculate the residuals for model 5, our most recent model. Optionally, plot and histogram these residuals and check the assumptions of the model.
# +
# Plot residuals vs predicted and histogram of residual distribution to check assumptions
# Residual plot shows variance increases as y increases
# Histogram shows right skew of distribution
predictors = sm.add_constant(titanic_train[['age', 'sex_male', 'class_Second', 'class_Third', 'sex_male_X_age',
'sex_male_X_class_Second', 'sex_male_X_class_Third', 'age^2']])
y_hat = model_5.predict(predictors)
residuals = titanic_train['fare'] - y_hat
# plotting
fig, ax = plt.subplots(ncols=2, figsize=(16,5))
ax = ax.ravel()
ax[0].set_title('Plot of Residuals')
ax[0].scatter(y_hat, residuals, alpha=0.2)
ax[0].set_xlabel(r'$\hat{y}$')
ax[0].set_ylabel('residuals')
ax[1].set_title('Histogram of Residuals')
ax[1].hist(residuals, alpha=0.7, bins=20)
ax[1].set_xlabel('residuals')
ax[1].set_ylabel('frequency');
# Mean of residuals
print('Mean of residuals: {}'.format(np.mean(residuals)))
# -
# # %load 'solutions/sol7.py'
# **What can you say about the assumptions of the model?**
# ----------------
# ### End of Standard Section
# ---------------
# ## Extra: Visual exploration of predictors' correlations
#
# The dataset for this problem contains 10 simulated predictors and a response variable.
# read in the data
data = pd.read_csv('../data/dataset3.txt')
data.head()
# this effect can be replicated using the scatter_matrix function in pandas plotting
sns.pairplot(data);
# Predictors x1, x2, x3 seem to be perfectly correlated while predictors x4, x5, x6, x7 seem correlated.
data.corr()
sns.heatmap(data.corr())
# ## Extra: A Handy Matplotlib Guide
# 
# source: http://matplotlib.org/faq/usage_faq.html
#
# See also [this](http://matplotlib.org/faq/usage_faq.html) matplotlib tutorial.
# 
#
# See also [this](https://mode.com/blog/violin-plot-examples) violin plot tutorial.
# ---
| content/sections/section3/notebook/cs109a_section_3_chs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ML Pipeline Preparation
# Follow the instructions below to help you create your ML pipeline.
# ### 1. Import libraries and load data from database.
# - Import Python libraries
# - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
# - Define feature and target variables X and Y
# +
import nltk
nltk.download(['punkt', 'wordnet'])
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
import pickle
from joblib import dump, load
import warnings; warnings.simplefilter('ignore')
# -
# load data from database
engine = create_engine('sqlite:///disaster.db')
df = pd.read_sql_table('disaster', con=engine)
df = df.dropna(how='any')
X = df['message']
Y = df.drop(columns=['message', 'original', 'genre'])
Y.head()
# ### 2. Write a tokenization function to process your text data
def tokenize(text):
    # tokenize and lowercase the text
    tokens = word_tokenize(text.lower())
    # initiate lemmatizer
    lemmatizer = WordNetLemmatizer()
    # lemmatize each token and return the token list
    tokens = [lemmatizer.lemmatize(token) for token in tokens]
    return tokens
# ### 3. Build a machine learning pipeline
# This machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# ### 4. Train pipeline
# - Split data into train and test sets
# - Train pipeline
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
# ### 5. Test your model
# Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
y_pred = pipeline.predict(X_test)
for idx in range(y_pred.shape[1]):
    print(classification_report(y_test.values[:, idx], y_pred[:, idx]))
y_pred.shape
y_test.values.shape
# ### 6. Improve your model
# Use grid search to find better parameters.
# +
parameters = {
'clf__estimator__n_estimators': [2,5,10,20,50],
'clf__estimator__max_depth': [5,10,20,50],
'clf__estimator__min_samples_split': [2,5,10,20]
}
cv = GridSearchCV(pipeline, param_grid=parameters)
# -
# ### 7. Test your model
# Show the accuracy, precision, and recall of the tuned model.
#
# Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
cv.fit(X_train, y_train)
cv.get_params()
y_pred = cv.predict(X_test)
y_pred.shape
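# The requested per-category accuracy, precision, and recall can be computed as below. This is a sketch on synthetic stand-ins for `y_test` and the tuned model's predictions, since running `cv` here would require the fitted pipeline:

```python
# Sketch: per-category accuracy/precision/recall for multi-output predictions.
# y_true/y_pred are synthetic stand-ins; with real data, use y_test.values and
# the predictions from the tuned model instead.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=(100, 3))
y_pred = y_true.copy()
y_pred[::10] = 1 - y_pred[::10]   # flip every 10th row to simulate errors

accs = []
for col in range(y_true.shape[1]):
    acc = accuracy_score(y_true[:, col], y_pred[:, col])
    prec = precision_score(y_true[:, col], y_pred[:, col], zero_division=0)
    rec = recall_score(y_true[:, col], y_pred[:, col], zero_division=0)
    accs.append(acc)
    print("category {}: accuracy={:.2f} precision={:.2f} recall={:.2f}".format(col, acc, prec, rec))
```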
# ### 8. Try improving your model further. Here are a few ideas:
# * try other machine learning algorithms
# * add other features besides the TF-IDF
pipeline_ann = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(MLPClassifier())),
])
parameters_ann = {
'clf__estimator__hidden_layer_sizes': [(5,),(5,5), (5,7,3)],
'clf__estimator__activation': ['tanh', 'relu']
}
cv_ann = GridSearchCV(pipeline_ann, param_grid=parameters_ann)
cv_ann.fit(X_train, y_train)
# ### 9. Export your model as a pickle file
dump(cv, 'randomforest.joblib')
dump(cv_ann, 'ann.joblib')
# ### 10. Use this notebook to complete `train.py`
# Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
| project_files/Disaster_response/prototype/ML Pipeline Preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''census'': conda)'
# language: python
# name: python37664bitcensusconda6de06cb7d1124a39a673c8ac5ed43be4
# ---
# # Prims Maps
import os
os.environ['HTTP_PROXY']=""
os.environ['HTTPS_PROXY']=""
# ## Prims Maps with KeplerGl
# +
from keplergl import KeplerGl
import geopandas as gpd
import re
import fiona
file = r"https://minsait-geospatial.s3.eu-west-3.amazonaws.com/data/Visualization/CONSTRU.gpkg"
gdf = gpd.read_file(file)
gdf = gdf.to_crs(epsg=4326)  # the dict-style {'init': 'epsg:4326'} argument is deprecated in geopandas
gdf.head()
# -
# Create a basemap
map = KeplerGl(height=600, width=800)
# Add data to Kepler
map.add_data(data=gdf, name="CONSTRU")
# Show the map
map
# ## Prims Maps with Digital Elevation Models
#
# We can render a Digital Elevation Model in 3D. To do so we will use the IGN DTMs generated from the PNOA LiDAR flight. We download an ASCII file (http://centrodedescargas.cnig.es).
#
# +
import rasterio as rio
from rasterio.plot import show
import numpy as np
file_mdt= "https://minsait-geospatial.s3.eu-west-3.amazonaws.com/data/SpatialDataModel/raster/MDT_Cercedilla.tif"
with rio.open(file_mdt) as src:
z=src.read()
nrows, ncols = src.shape
x = np.linspace(src.bounds[0], src.bounds[2], ncols)
y = np.linspace(src.bounds[1], src.bounds[3], nrows)
x, y = np.meshgrid(x, y)
z= np.squeeze(z, axis=0)
show(z)
# +
from matplotlib import cm
from matplotlib.colors import LightSource
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ls = LightSource(270, 45)
rgb = ls.shade(z, cmap=cm.gist_earth, vert_exag=0.1, blend_mode='soft')
surf = ax.plot_surface(x, y, z, rstride=1, cstride=1, facecolors=rgb, linewidth=0, antialiased=False, shade=False)
ax.invert_xaxis()
plt.show()
| notebooks/PrimsMaps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Observations and Insights
#
# ## Dependencies and starter code
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
merged_df= pd.merge(study_results, mouse_metadata,how="left", on="Mouse ID")
merged_df.head()
# -
# ## Summary statistics
# +
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
regimen_df=merged_df.groupby(["Drug Regimen"])
means=regimen_df.mean()["Tumor Volume (mm3)"]
medians=regimen_df.median()["Tumor Volume (mm3)"]
variances=regimen_df.var()["Tumor Volume (mm3)"]
standard_dev=regimen_df.std()["Tumor Volume (mm3)"]
SEM=regimen_df.sem()["Tumor Volume (mm3)"]
summary_df= pd.DataFrame({"means":means, "medians": medians, "variances":variances, "standard_dev":standard_dev, "SEM":SEM})
summary_df
# -
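# The same five statistics can also be produced in one pass with `groupby().agg`; a minimal sketch on a tiny synthetic frame (a stand-in for `merged_df`):

```python
# Sketch: the five summary statistics via a single agg call, shown on
# synthetic data in place of the merged study frame.
import pandas as pd

df = pd.DataFrame({"Drug Regimen": ["A", "A", "B", "B"],
                   "Tumor Volume (mm3)": [40.0, 44.0, 50.0, 54.0]})
summary = (df.groupby("Drug Regimen")["Tumor Volume (mm3)"]
             .agg(["mean", "median", "var", "std", "sem"]))
print(summary)
```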
# ## Bar plots
# +
# Generate a bar plot showing number of data points for each treatment regimen using pandas
counts=merged_df["Drug Regimen"].value_counts()
counts.plot(kind="bar")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Points")
plt.show()
# +
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
counts=merged_df["Drug Regimen"].value_counts()
plt.bar(counts.index.values,counts.values)
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Points")
plt.xticks(rotation=90)
plt.show()
# -
# ## Pie plots
# +
# Generate a pie plot showing the distribution of female versus male mice using pandas
counts=mouse_metadata.Sex.value_counts()
counts.plot(kind="pie",autopct="%1.1f%%")
plt.show()
# +
# Generate a pie plot showing the distribution of female versus male mice using pyplot
counts=mouse_metadata.Sex.value_counts()
plt.pie(counts.values, labels=counts.index.values,autopct="%1.1f%%")
plt.ylabel("Sex")
plt.show()
# -
# ## Quartiles, outliers and boxplots
# +
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
max_tumor=merged_df.groupby(["Mouse ID"]).max().reset_index()
merged_tumor_df=max_tumor[["Mouse ID", "Timepoint"]].merge(merged_df,on=["Mouse ID", "Timepoint"], how="left")
Capomulin=merged_tumor_df.loc[merged_tumor_df["Drug Regimen"]=="Capomulin"]["Tumor Volume (mm3)"]
Ceftamin=merged_tumor_df.loc[merged_tumor_df["Drug Regimen"]=="Ceftamin"]["Tumor Volume (mm3)"]
Infubinol=merged_tumor_df.loc[merged_tumor_df["Drug Regimen"]=="Infubinol"]["Tumor Volume (mm3)"]
Ramicane=merged_tumor_df.loc[merged_tumor_df["Drug Regimen"]=="Ramicane"]["Tumor Volume (mm3)"]
#Calculate the IQR and quantitatively determine if there are any potential outliers.
i_quantiles=Infubinol.quantile([.25,.5,.75])
i_lowerq=i_quantiles[.25]
i_upperq=i_quantiles[.75]
i_iqr=i_upperq-i_lowerq
i_lowerbound= i_lowerq-(1.5*i_iqr)
i_upperbound=i_upperq+(1.5*i_iqr)
Infubinol.loc[(Infubinol<i_lowerbound)|(Infubinol>i_upperbound)]
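# The 1.5*IQR rule applied above can be illustrated on a small synthetic sample (a sketch, not study data):

```python
# Any point below Q1 - 1.5*IQR or above Q3 + 1.5*IQR counts as a potential outlier.
import numpy as np

data = np.array([10., 12., 12., 13., 12., 11., 14., 13., 15., 42.])
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]
print(outliers)   # only the extreme value 42 is flagged
```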
# +
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot([Infubinol, Capomulin, Ceftamin, Ramicane], labels=["Infubinol", "Capomulin", "Ceftamin", "Ramicane"])
plt.ylabel("Final Tumor Volume")
plt.show()
# -
# ## Line and scatter plots
# +
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin = merged_df.loc[merged_df["Drug Regimen"] == "Capomulin"]
mouse_df = capomulin.loc[capomulin["Mouse ID"] == capomulin["Mouse ID"].iloc[0]]
plt.plot(mouse_df["Timepoint"], mouse_df["Tumor Volume (mm3)"])
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
# +
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_avg = merged_df.loc[merged_df["Drug Regimen"] == "Capomulin"].groupby("Mouse ID").mean()
plt.scatter(capomulin_avg["Weight (g)"], capomulin_avg["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()
# +
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
capomulin_avg = merged_df.loc[merged_df["Drug Regimen"] == "Capomulin"].groupby("Mouse ID").mean()
weights = capomulin_avg["Weight (g)"]
avg_tumor = capomulin_avg["Tumor Volume (mm3)"]
slope, intercept, rvalue, pvalue, stderr = st.linregress(weights, avg_tumor)
print("Correlation coefficient: {:.2f}".format(rvalue))
plt.scatter(weights, avg_tumor)
plt.plot(weights, slope * weights + intercept, color="red")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()
# -
| Pymaceuticals/pymaceuticals_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Table of Contents
# <p><div class="lev1 toc-item"><a href="#Linear-Regression-problem" data-toc-modified-id="Linear-Regression-problem-1"><span class="toc-item-num">1 </span>Linear Regression problem</a></div><div class="lev1 toc-item"><a href="#Gradient-Descent" data-toc-modified-id="Gradient-Descent-2"><span class="toc-item-num">2 </span>Gradient Descent</a></div><div class="lev1 toc-item"><a href="#Gradient-Descent---Classification" data-toc-modified-id="Gradient-Descent---Classification-3"><span class="toc-item-num">3 </span>Gradient Descent - Classification</a></div><div class="lev1 toc-item"><a href="#Gradient-descent-with-numpy" data-toc-modified-id="Gradient-descent-with-numpy-4"><span class="toc-item-num">4 </span>Gradient descent with numpy</a></div>
# %matplotlib inline
from fastai.learner import *
# In this part of the lecture we explain Stochastic Gradient Descent (SGD) which is an **optimization** method commonly used in neural networks. We will illustrate the concepts with concrete examples.
# # Linear Regression problem
# The goal of linear regression is to fit a line to a set of points.
# +
# Here we generate some fake data
def lin(a,b,x): return a*x+b
def gen_fake_data(n, a, b):
    x = np.random.uniform(0,1,n)
y = lin(a,b,x) + 0.1 * np.random.normal(0,3,n)
return x, y
x, y = gen_fake_data(50, 3., 8.)
# -
plt.scatter(x,y, s=8); plt.xlabel("x"); plt.ylabel("y");
# You want to find **parameters** (weights) $a$ and $b$ such that you minimize the *error* between the points and the line $a\cdot x + b$. Note that here $a$ and $b$ are unknown. For a regression problem the most common *error function* or *loss function* is the **mean squared error**.
def mse(y_hat, y): return ((y_hat - y) ** 2).mean()
# Suppose we believe $a = 10$ and $b = 5$ then we can compute `y_hat` which is our *prediction* and then compute our error.
y_hat = lin(10,5,x)
mse(y_hat, y)
def mse_loss(a, b, x, y): return mse(lin(a,b,x), y)
mse_loss(10, 5, x, y)
# So far we have specified the *model* (linear regression) and the *evaluation criteria* (or *loss function*). Now we need to handle *optimization*; that is, how do we find the best values for $a$ and $b$? How do we find the best *fitting* linear regression.
# # Gradient Descent
# For a fixed dataset $x$ and $y$ `mse_loss(a,b)` is a function of $a$ and $b$. We would like to find the values of $a$ and $b$ that minimize that function.
#
# **Gradient descent** is an algorithm that minimizes functions. Given a function defined by a set of parameters, gradient descent starts with an initial set of parameter values and iteratively moves toward a set of parameter values that minimize the function. This iterative minimization is achieved by taking steps in the negative direction of the function gradient.
#
# Here is gradient descent implemented in [PyTorch](http://pytorch.org/).
# generate some more data
x, y = gen_fake_data(10000, 3., 8.)
x.shape, y.shape
x,y = V(x),V(y)
# Create random weights a and b, and wrap them in Variables.
a = V(np.random.randn(1), requires_grad=True)
b = V(np.random.randn(1), requires_grad=True)
a,b
learning_rate = 1e-3
for t in range(10000):
# Forward pass: compute predicted y using operations on Variables
loss = mse_loss(a,b,x,y)
if t % 1000 == 0: print(loss.data[0])
# Computes the gradient of loss with respect to all Variables with requires_grad=True.
# After this call a.grad and b.grad will be Variables holding the gradient
# of the loss with respect to a and b respectively
loss.backward()
# Update a and b using gradient descent; a.data and b.data are Tensors,
# a.grad and b.grad are Variables and a.grad.data and b.grad.data are Tensors
a.data -= learning_rate * a.grad.data
b.data -= learning_rate * b.grad.data
# Zero the gradients
a.grad.data.zero_()
b.grad.data.zero_()
# Nearly all of deep learning is powered by one very important algorithm: **stochastic gradient descent (SGD)**. SGD can be seen as an approximation of **gradient descent** (GD). In GD you have to run through *all* the samples in your training set to do a single iteration. In SGD you use *only one* or *a subset* of the training samples to do the update for a parameter in a particular iteration. The subset used in every iteration is called a **batch** or **minibatch**.
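# A minimal sketch (assumed, not from the lesson) of the minibatch idea in plain numpy: each update uses the MSE gradient computed on a random batch of 32 points rather than the full dataset.

```python
# Minibatch SGD on the same linear model y = a*x + b, in plain numpy.
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(0, 1, 1000)
y = 3. * x + 8. + 0.1 * rng.normal(0, 3, 1000)

a, b, lr, batch_size = 0., 0., 0.5, 32
for step in range(2000):
    idx = rng.randint(0, len(x), batch_size)  # sample a minibatch
    err = (a * x[idx] + b) - y[idx]
    a -= lr * 2 * (err * x[idx]).mean()       # d(mse)/da on the batch
    b -= lr * 2 * err.mean()                  # d(mse)/db on the batch
print(a, b)                                   # should approach a=3, b=8
```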
# # Gradient Descent - Classification
# For a fixed dataset $x$ and $y$ `mse_loss(a,b)` is a function of $a$ and $b$. We would like to find the values of $a$ and $b$ that minimize that function.
#
# **Gradient descent** is an algorithm that minimizes functions. Given a function defined by a set of parameters, gradient descent starts with an initial set of parameter values and iteratively moves toward a set of parameter values that minimize the function. This iterative minimization is achieved by taking steps in the negative direction of the function gradient.
#
# Here is gradient descent implemented in [PyTorch](http://pytorch.org/).
def gen_fake_data2(n, a, b):
    x = np.random.uniform(0,1,n)
y = lin(a,b,x) + 0.1 * np.random.normal(0,3,n)
return x, np.where(y>10, 1, 0).astype(np.float32)
x,y = gen_fake_data2(10000, 3., 8.)
x,y = V(x),V(y)
def nll(y_hat, y):
    y_hat = torch.clamp(y_hat, 1e-5, 1-1e-5)
    # negate the mean log likelihood so that gradient descent maximizes the likelihood
    return -(y*y_hat.log() + (1-y)*(1-y_hat).log()).mean()
a = V(np.random.randn(1), requires_grad=True)
b = V(np.random.randn(1), requires_grad=True)
learning_rate = 1e-2
for t in range(3000):
p = (-lin(a,b,x)).exp()
y_hat = 1/(1+p)
loss = nll(y_hat,y)
if t % 1000 == 0:
print(loss.data[0], np.mean(to_np(y)==(to_np(y_hat)>0.5)))
# print(y_hat)
loss.backward()
a.data -= learning_rate * a.grad.data
b.data -= learning_rate * b.grad.data
a.grad.data.zero_()
b.grad.data.zero_()
# Nearly all of deep learning is powered by one very important algorithm: **stochastic gradient descent (SGD)**. SGD can be seen as an approximation of **gradient descent** (GD). In GD you have to run through *all* the samples in your training set to do a single iteration. In SGD you use *only one* or *a subset* of the training samples to do the update for a parameter in a particular iteration. The subset used in every iteration is called a **batch** or **minibatch**.
# # Gradient descent with numpy
from matplotlib import rcParams, animation, rc
from ipywidgets import interact, interactive, fixed
from ipywidgets.widgets import *
rc('animation', html='html5')
rcParams['figure.figsize'] = 3, 3
x, y = gen_fake_data(50, 3., 8.)
a_guess,b_guess = -1., 1.
mse_loss(a_guess, b_guess, x, y)
lr=0.01
def upd():
global a_guess, b_guess
y_pred = lin(a_guess, b_guess, x)
dydb = 2 * (y_pred - y)
dyda = x*dydb
a_guess -= lr*dyda.mean()
b_guess -= lr*dydb.mean()
# +
fig = plt.figure(dpi=100, figsize=(5, 4))
plt.scatter(x,y)
line, = plt.plot(x,lin(a_guess,b_guess,x))
plt.close()
def animate(i):
line.set_ydata(lin(a_guess,b_guess,x))
for i in range(30): upd()
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(0, 20), interval=100)
ani
# -
| courses/dl1/lesson6-sgd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/antoniomuso/speech2face/blob/master/Speech2Face_newDataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="Us6ergKlbJbh" colab_type="code" colab={}
from google.colab import drive
drive.mount('/content/drive')
# + id="5nkKE256supL" colab_type="code" colab={}
# # ! pip3 install face_recognition
# #! pip install --upgrade wandb
# #! wandb login f9cd4b35bf9733e5ead9d2b06e13ef2259b1284e
# + id="m5FJW1pc-YZw" colab_type="code" colab={}
#import wandb
#wandb.init(project="speech2face")
# + id="7jXRWARmcebB" colab_type="code" colab={}
# path = "/content/drive/My Drive/Speech2Face/vox"
# # !curl --user voxceleb1912:0s42xuw6 -o "/content/drive/My Drive/Speech2Face/ff/vox.zip" http://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
# ! cp "/content/drive/My Drive/Speech2Face/vox1_dataset/wav_filtered_20_per_actor.zip" .
# #! cp "/content/drive/My Drive/Speech2Face/zippedFaces.tar.gz" /content
# ! cp "/content/drive/My Drive/Speech2Face/vox1_dataset/vox1_meta.csv" .
# ! cp "/content/drive/My Drive/Speech2Face/face_features_10_per_actor.zip" .
# ! cp "/content/drive/My Drive/Speech2Face/vox1_dataset/vox_audios/vox.zip" .
# # ! tar zxvf zippedFaces.tar.gz
# ! unzip face_features_10_per_actor.zip
# ! unzip wav_filtered_20_per_actor.zip
# ! unzip vox.zip
# + id="XOqMHNfkwaCL" colab_type="code" colab={}
import librosa
import numpy as np
import pandas as pd
from os import listdir
from os.path import join
from torch.utils.data import Dataset
import glob
import itertools
from PIL import Image
import torchvision.transforms as transforms
import torch
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.nn as nn
from typing import Optional, Callable
from tqdm.notebook import tqdm, trange
from google.colab.patches import cv2_imshow
import cv2
device = 'cuda'
vgg_weights_path = '/content/drive/My Drive/Speech2Face/Pretrained/vgg_face_dag.pth'
face_decoder_weights_path = ''
# + id="YZEnesbYDawu" colab_type="code" colab={}
# Reproducibility stuff
# import random
# torch.manual_seed(42)
# np.random.seed(42)
# random.seed(0)
# + id="cXILePzTdsI_" colab_type="code" colab={}
##################### DEPENDENCIES ###########################
class Vgg_face_dag(nn.Module):
def __init__(self):
super(Vgg_face_dag, self).__init__()
self.meta = {'mean': [129.186279296875, 104.76238250732422, 93.59396362304688],
'std': [1, 1, 1],
'imageSize': [224, 224, 3]}
self.conv1_1 = nn.Conv2d(3, 64, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu1_1 = nn.ReLU(inplace=True)
self.conv1_2 = nn.Conv2d(64, 64, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu1_2 = nn.ReLU(inplace=True)
self.pool1 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
self.conv2_1 = nn.Conv2d(64, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu2_1 = nn.ReLU(inplace=True)
self.conv2_2 = nn.Conv2d(128, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu2_2 = nn.ReLU(inplace=True)
self.pool2 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
self.conv3_1 = nn.Conv2d(128, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu3_1 = nn.ReLU(inplace=True)
self.conv3_2 = nn.Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu3_2 = nn.ReLU(inplace=True)
self.conv3_3 = nn.Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu3_3 = nn.ReLU(inplace=True)
self.pool3 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
self.conv4_1 = nn.Conv2d(256, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu4_1 = nn.ReLU(inplace=True)
self.conv4_2 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu4_2 = nn.ReLU(inplace=True)
self.conv4_3 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu4_3 = nn.ReLU(inplace=True)
self.pool4 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
self.conv5_1 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu5_1 = nn.ReLU(inplace=True)
self.conv5_2 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu5_2 = nn.ReLU(inplace=True)
self.conv5_3 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
self.relu5_3 = nn.ReLU(inplace=True)
self.pool5 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
self.fc6 = nn.Linear(in_features=25088, out_features=4096, bias=True)
self.relu6 = nn.ReLU(inplace=True)
self.dropout6 = nn.Dropout(p=0.5)
self.fc7 = nn.Linear(in_features=4096, out_features=4096, bias=True)
self.relu7 = nn.ReLU(inplace=True)
self.dropout7 = nn.Dropout(p=0.5)
self.fc8 = nn.Linear(in_features=4096, out_features=2622, bias=True)
def forward(self, x0, is_fc8=False):
if not is_fc8:
x1 = self.conv1_1(x0)
x2 = self.relu1_1(x1)
x3 = self.conv1_2(x2)
x4 = self.relu1_2(x3)
x5 = self.pool1(x4)
x6 = self.conv2_1(x5)
x7 = self.relu2_1(x6)
x8 = self.conv2_2(x7)
x9 = self.relu2_2(x8)
x10 = self.pool2(x9)
x11 = self.conv3_1(x10)
x12 = self.relu3_1(x11)
x13 = self.conv3_2(x12)
x14 = self.relu3_2(x13)
x15 = self.conv3_3(x14)
x16 = self.relu3_3(x15)
x17 = self.pool3(x16)
x18 = self.conv4_1(x17)
x19 = self.relu4_1(x18)
x20 = self.conv4_2(x19)
x21 = self.relu4_2(x20)
x22 = self.conv4_3(x21)
x23 = self.relu4_3(x22)
x24 = self.pool4(x23)
x25 = self.conv5_1(x24)
x26 = self.relu5_1(x25)
x27 = self.conv5_2(x26)
x28 = self.relu5_2(x27)
x29 = self.conv5_3(x28)
x30 = self.relu5_3(x29)
x31_preflatten = self.pool5(x30)
x31 = x31_preflatten.view(x31_preflatten.size(0), -1)
x32 = self.fc6(x31)
x33 = self.relu6(x32)
x34 = self.dropout6(x33)
x35 = self.fc7(x34)
x36 = self.relu7(x35)
x37 = self.dropout7(x36)
x38 = x37
else:
x38 = self.fc8(x0)
return x38
def vgg_face_dag(weights_path=None, **kwargs):
"""
load imported model instance
Args:
weights_path (str): If set, loads model weights from the given path
"""
model = Vgg_face_dag()
if weights_path:
state_dict = torch.load(weights_path)
model.load_state_dict(state_dict)
return model
# + id="DrYW-RmTyS9M" colab_type="code" colab={}
########################### DEPENDENCY - standard DECODER (w/o warping) ##########################
class Decoder(nn.Module):
def __init__(self, n_hidden, bottom_width=4, channels=512):
super().__init__()
self.channels = channels
self.bottom_width = bottom_width
self.linear = nn.Linear(n_hidden, bottom_width*bottom_width*channels)
self.dconv1 = nn.ConvTranspose2d(channels, channels // 2, 4, 2, 1)
self.dconv2 = nn.ConvTranspose2d(channels // 2, channels // 4, 4, 2, 1)
self.dconv3 = nn.ConvTranspose2d(channels // 4, channels // 8, 4, 2, 1)
self.dconv4 = nn.ConvTranspose2d(channels // 8, 3, 4, 2, 1)
self.bn0 = nn.BatchNorm1d(bottom_width*bottom_width*channels)
self.bn1 = nn.BatchNorm2d(channels // 2)
self.bn2 = nn.BatchNorm2d(channels // 4)
self.bn3 = nn.BatchNorm2d(channels // 8)
def forward(self, x):
x = F.relu(self.bn0(self.linear(x))).view(-1, self.channels, self.bottom_width, self.bottom_width)
x = F.relu(self.bn1(self.dconv1(x)))
x = F.relu(self.bn2(self.dconv2(x)))
x = F.relu(self.bn3(self.dconv3(x)))
x = torch.sigmoid(self.dconv4(x))
return x
def decoder(weights_path="/content/drive/My Drive/Speech2Face/models/face_decoder/epoch_3_steps_12800.pth", fc3_only=False, **kwargs):
"""
load imported model instance
Args:
weights_path (str): If set, loads model weights from the given path
"""
model = Decoder(4096)
if weights_path:
state_dict = torch.load(weights_path)["model_state_dict"]
model.load_state_dict(state_dict)
if fc3_only:
fc3_layer = nn.Sequential(list(model.children())[0])
for param in fc3_layer.parameters():
param.requires_grad = False
#print(fc3_layer)
return fc3_layer
return model
# dec_fc3 = decoder(fc3_only=True)
# + id="7UhlsETN5_3B" colab_type="code" colab={}
########################### DEPENDENCY - DECODER w/ warping ##########################
class DECODER(nn.Module):
def __init__(self, phase):
super(DECODER, self).__init__()
self.phase = phase
self.fc3 = nn.Linear(4096, 1000)
self.ReLU = nn.ReLU()
#self.fc_bn3 = nn.BatchNorm1d(1000)
self.fc4 = nn.Linear(1000, 14 * 14 * 256)
self.fc_bn4 = nn.BatchNorm1d(14 * 14 * 256)
def TransConv( i, kernal = 5, stride = 2, inp = None):
if not inp:
inp = max(256//2**(i-1), 32)
layer = nn.Sequential(
nn.ConvTranspose2d(inp, max(256//2**i, 32),
kernal, stride=stride, padding=2, output_padding=1,
dilation=1, padding_mode='zeros'),
nn.ReLU(),
nn.BatchNorm2d(max(256//2**i, 32)))
return layer
self.T1_ = TransConv(1, inp = 256)
self.T2_ = TransConv(2)
self.T3_ = TransConv(3)
self.T4_ = TransConv(4)
self.ConvLast = nn.Sequential(
nn.Conv2d(32, 3, (1,1), stride=1),
nn.BatchNorm2d(3),
nn.ReLU())
self.layerLandmark1 = nn.Linear(1000, 800)
self.layerLandmark2 = nn.Linear(800, 600)
self.layerLandmark3 = nn.Linear(600, 400)
self.layerLandmark4 = nn.Linear(400, 200)
self.layerLandmark5 = nn.Linear(200, 144)
def forward(self, x):
L1 = self.fc3(x)
L1 = self.ReLU(L1)
L2 = self.layerLandmark1(L1)
L2 = self.ReLU(L2)
L3 = self.layerLandmark2(L2)
L3 = self.ReLU(L3)
L4 = self.layerLandmark3(L3)
L4 = self.ReLU(L4)
L5 = self.layerLandmark4(L4)
L5 = self.ReLU(L5)
L6 = self.layerLandmark5(L5)
outL = self.ReLU(L6)
# B1 = self.fc_bn3(L1)
T0 = self.fc4(L1)
T0 = self.ReLU(T0)
# T0 = self.fc_bn4(T0)
T0 = T0.view(-1,256,14,14)
T1 = self.T1_(T0)
T2 = self.T2_(T1)
T3 = self.T3_(T2)
T4 = self.T4_(T3)
outT = self.ConvLast(T4)
if self.phase == "train":
return outL, outT
elif self.phase == "eval":
return outT
def decoder_warping(weights_path="/content/drive/My Drive/Speech2Face/models/face_decoder_warping/new_dataset_v4/epoch_94_steps_0.pth", fc3_only=False, **kwargs):
"""
load imported model instance
Args:
weights_path (str): If set, loads model weights from the given path
"""
model = DECODER('eval')
if weights_path:
state_dict = torch.load(weights_path)["model_state_dict"]
model.load_state_dict(state_dict)
if fc3_only:
fc3_layer = nn.Sequential(list(model.children())[0])
for param in fc3_layer.parameters():
param.requires_grad = False
#print(fc3_layer)
return fc3_layer
return model
# dec_fc3 = decoder_warping(fc3_only=True)
# + id="bl1vNDOGpKnx" colab_type="code" colab={}
def power_law(signal):
converted = signal
sign = np.sign(converted)
converted[:, :, 0] = np.abs(converted[:, :, 0])** 0.3
converted[:, :, 1] = np.abs(converted[:, :, 1])** 0.3
return sign * converted
def load_wav(wav_path):
def adjust(stft):
if stft.shape[1] == 601:
return stft
else:
return np.concatenate((stft,stft[:,0:601 - stft.shape[1]]),axis = 1)
wav, sr = librosa.load(wav_path,sr = 16000, duration = 6.0 ,mono = True)
spectro = librosa.core.stft(wav, n_fft = 512, hop_length = int(np.ceil(0.01 * sr)),win_length = int(np.ceil(0.025 * sr)) , window='hann', center=True,pad_mode='reflect')
spectroComplex = adjust(spectro)
    converted = np.zeros((spectroComplex.shape[0], spectroComplex.shape[1], 2))
    converted[..., 0] = spectroComplex.real  # real-part channel
    converted[..., 1] = spectroComplex.imag  # imaginary-part channel
return power_law(converted)
# load_wav('/content/wav/id10271/1gtz-CUIygI/00002.wav')
# + id="xoNPrJNGDfY7" colab_type="code" colab={}
def get_map_person2paths(path, format='wav'):
actor2data = dict()
for person in listdir(path):
n_path = join(path, person)
files = glob.glob(n_path + '/**/*.'+format, recursive=True)
actor2data[person] = files
return actor2data
def get_map_person2paths_new_dataset(path):
actor2data = dict()
images_path = join(path, 'Faces')
for image in listdir(images_path):
person = '_'.join(image.split('_')[:-1])
# idx = int(image.split('_')[-1].split('.')[0])
if not person in actor2data.keys():
actor2data[person] = []
actor2data[person] += [join(images_path, image)]
return actor2data
def load_metadata(path):
meta = pd.read_csv(path,sep='\t')
meta = meta.drop('Gender',axis=1)
meta = meta.drop('Nationality',axis=1)
meta = meta.drop('Set',axis=1)
return meta
def couple_data(voice_map, face_map, meta):
count = 0
out = []
for index, row in meta.iterrows():
if (row['VoxCeleb1 ID'] not in voice_map.keys()) or (row['VGGFace1 ID'] not in face_map.keys()):
count += 1
continue
# max(len(voice_map[row['VoxCeleb1 ID']]), face_map[row['VGGFace1 ID']])
coupled = list(zip(voice_map[row['VoxCeleb1 ID']], face_map[row['VGGFace1 ID']]))
out += coupled
print("elements not found:", count)
return out
def create_coupled_list(path_voices, path_faces, metaP):
voice_map = get_map_person2paths(path_voices)
face_map = get_map_person2paths_new_dataset(path_faces)
meta = load_metadata(metaP)
return couple_data(voice_map, face_map, meta)
# + id="5zZbwKKW-riA" colab_type="code" colab={}
class _Normalize_Tensor(object):
def __init__(self, color_mean, color_std):
self.color_mean = color_mean
self.color_std = color_std
def __call__(self, img):
# Convert image to Tensor
img = transforms.functional.to_tensor(img)
# Normalize image by the parameter of pre-trained face-encoder
img = transforms.functional.normalize(
img, self.color_mean, self.color_std)
return img
class Speech2FaceDataset(Dataset):
def __init__(self, path_voices, path_faces, metaP, size=224):
super().__init__()
self.path_voices = path_voices
self.path_faces = path_faces
self.size = size
self.coupled_list = create_coupled_list(path_voices, path_faces, metaP)
self.len = len(self.coupled_list)
self.features = np.load(join(path_faces,'facefeature.npy'))
self.features_fc8 = np.load(join(path_faces,'facefeature_fc8.npy'))
self.transform_fd = transforms.Compose([
transforms.Resize((self.size, self.size)),
transforms.ToTensor()
])
def __len__(self):
return self.len
def __getitem__(self, idx):
wav_p, img_p = self.coupled_list[idx]
wav = load_wav(wav_p)
img = Image.open(img_p)
idx_feature = int(img_p.split('_')[-1].split('.')[0])
#face_loc = face_locations(np_image)
# img_normalized = self.transform_fe(img)
image_preprocessed = self.transform_fd(img)
return (torch.tensor(self.features[idx_feature]).float(),
torch.tensor(wav).reshape(2,601,257).float()), (torch.tensor(self.features_fc8[idx_feature]).float() , image_preprocessed)
color_mean = [129.186279296875, 104.76238250732422, 93.59396362304688]
color_std = [1, 1, 1]
color_mean = [tmp / 255.0 for tmp in color_mean]
color_std = [tmp / 255.0 for tmp in color_std]
data = Speech2FaceDataset('wav_filtered_20_per_actor', 'Face_Feature','vox1_meta.csv')
data_test = Speech2FaceDataset('wav', 'Face_Feature','vox1_meta.csv')
# + id="PgMzo1Qov37x" colab_type="code" colab={}
#data[1]
# + id="o5sDbBkIa7Fm" colab_type="code" colab={}
class SpeechEncoder(nn.Module):
def __init__(self):
super(SpeechEncoder, self).__init__()
self.conv1 = nn.Conv2d(2, 64, kernel_size=4,stride=1)
self.conv2 = nn.Conv2d(64, 64, kernel_size=4,stride=1)
self.conv3 = nn.Conv2d(64, 128, kernel_size=4,stride=1)
self.pooling1 = nn.MaxPool2d(kernel_size=(2,1), stride=(2,1))
self.conv4 = nn.Conv2d(128, 128, kernel_size=4,stride=1)
self.pooling2 = nn.MaxPool2d(kernel_size=(2,1), stride=(2,1))
self.conv5 = nn.Conv2d(128, 128, kernel_size=4,stride=1)
self.pooling3 = nn.MaxPool2d(kernel_size=(2,1), stride=(2,1))
self.conv6 = nn.Conv2d(128, 256, kernel_size=4,stride=1)
self.pooling4 = nn.MaxPool2d(kernel_size=(2,1), stride=(2,1))
self.conv7 = nn.Conv2d(256, 512, kernel_size=4,stride=1)
self.conv8 = nn.Conv2d(512, 512, kernel_size=4,stride=2)
        self.conv9 = nn.Conv2d(512, 512, kernel_size=4, stride=2)  # these two layers differ from the reference architecture
        self.pooling5 = nn.AvgPool2d(kernel_size=(6,1), stride=1)  # these two layers differ from the reference architecture
self.fc1 = nn.Linear(512 * 1 * 57, 4096)
self.fc2 = nn.Linear(4096, 4096)
self.batch_norm1 = nn.BatchNorm2d(64)
self.batch_norm2 = nn.BatchNorm2d(64)
self.batch_norm3 = nn.BatchNorm2d(128)
self.batch_norm4 = nn.BatchNorm2d(128)
self.batch_norm5 = nn.BatchNorm2d(128)
self.batch_norm6 = nn.BatchNorm2d(256)
self.batch_norm7 = nn.BatchNorm2d(512)
self.batch_norm8 = nn.BatchNorm2d(512)
self.batch_norm9 = nn.BatchNorm2d(512)
self.relu = nn.ReLU()
def forward(self, x):
out = self.batch_norm1(self.relu(self.conv1(x)))
out = self.batch_norm2(self.relu(self.conv2(out)))
out = self.batch_norm3(self.relu(self.conv3(out)))
out = self.pooling1(out)
out = self.batch_norm4(self.relu(self.conv4(out)))
out = self.pooling2(out)
out = self.batch_norm5(self.relu(self.conv5(out)))
out = self.pooling3(out)
out = self.batch_norm6(self.relu(self.conv6(out)))
out = self.pooling4(out)
out = self.batch_norm7(self.relu(self.conv7(out)))
out = self.batch_norm8(self.relu(self.conv8(out)))
out = self.conv9(out)
out = self.batch_norm9(self.relu(self.pooling5(out)))
batch = out.shape[0]
out = out.view((batch, 512 * 1 * 57))
out = self.relu(self.fc1(out))
out = self.fc2(out)
return out
def speech_encoder(weights_path="/content/drive/My Drive/Speech2Face/models/speech_encoder/adam/adam_epoch_9.pth", **kwargs):
"""
load imported model instance
Args:
weights_path (str): If set, loads model weights from the given path
"""
model = SpeechEncoder()
if weights_path:
state_dict = torch.load(weights_path)["model_state_dict"]
model.load_state_dict(state_dict)
return model
# + id="wE_p5eTAw0HD" colab_type="code" colab={}
vgg = vgg_face_dag(vgg_weights_path)
vgg.eval()
vgg_fc8 = vgg.fc8
for param in vgg_fc8.parameters():
    param.requires_grad = False  # freeze fc8; setting requires_grad on the module itself has no effect
vgg_fc8.to(device)
# vgg = vgg.to(device)
model = SpeechEncoder()
model.to(device)
decoder = decoder_warping(fc3_only=True)
decoder.to(device)
optimizer = torch.optim.Adam(model.parameters(), eps=1e-04, betas=(0.5, 0.999)) #, weight_decay=0.95)
#optimizer_decay = torch.optim.AdamW(model.parameters(), eps=1e-04, betas=(0.5, 0.999)) #, weight_decay=0.95)
#optimizer = optimizer_decay
datal = DataLoader(data, 8, True, num_workers=16)
datal_test = DataLoader(data_test, 6, False, num_workers=16)
# W&B
#wandb.watch(model)
# + id="082fWCi0rJG2" colab_type="code" colab={}
def _save_model(epoch, model, optimizer, output_dir_name="/content/drive/My Drive/Speech2Face/models/speech_encoder/"):
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
}, join(output_dir_name, 'adam_epoch_{}.pth'.format(epoch)))
def make_averager() -> Callable[[Optional[float]], float]:
""" Returns a function that maintains a running average
:returns: running average function
"""
count = 0
total = 0
def averager(new_value: Optional[float]) -> float:
""" Running averager
:param new_value: number to add to the running average,
if None returns the current average
:returns: the current average
"""
nonlocal count, total
if new_value is None:
return total / count if count else float("nan")
count += 1
total += new_value
return total / count
return averager
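# A quick standalone check of the closure-based running average (the factory is
# re-declared here so the snippet runs on its own):

```python
from typing import Callable, Optional

def make_running_averager() -> Callable[[Optional[float]], float]:
    # count/total live in the enclosing scope; calling with None reads the
    # current average without updating it (NaN before the first value).
    count, total = 0, 0.0

    def averager(new_value: Optional[float]) -> float:
        nonlocal count, total
        if new_value is None:
            return total / count if count else float("nan")
        count += 1
        total += new_value
        return total / count

    return averager

avg = make_running_averager()
for v in (1.0, 2.0, 3.0):
    avg(v)
```

# This is why the training loop can call `train_loss_averager(None)` at any point
# to read the current average without perturbing it.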
def fit(
epochs: int,
train_dl: torch.utils.data.DataLoader,
test_dl: torch.utils.data.DataLoader,
model: torch.nn.Module,
VGG: torch.nn.Module,
decoder: torch.nn.Module,
opt: torch.optim.Optimizer,
tag: str,
device: str = "cuda",
restart_epoch: int = 0,
) -> float:
""" Train the model and computes metrics on the test_loader at each epoch
:param epochs: number of epochs
:param train_dl: the train dataloader
:param test_dl: the test dataloader
:param model: the model to train
:param opt: the optimizer to use to train the model
:param tag: description of the current model
:param perm: if not None, permute the pixel in each image according to perm
:returns: accucary on the test set in the last epoch
"""
print("VGG.training = ", VGG.training)
print("Speech2Face.training = ", model.training)
print("Starting training. Restart epoch:", restart_epoch)
lambda1 = 0.025
lambda2 = 200.0
for epoch in trange(epochs, desc="train epoch"):
if restart_epoch != 0:
epoch = restart_epoch
restart_epoch = 0
print("Epoch updated, current value:", epoch)
model.train()
        train_loss_averager = make_averager()  # maintain a running average of the loss
# TRAIN
tqdm_iterator = tqdm(
enumerate(train_dl),
total=len(train_dl),
desc=f"batch [loss: None]",
leave=False,
)
for batch_idx, (x, y) in tqdm_iterator:
embedding, wav = x
features_fc8, img = y
# send to device
wav = wav.to(device)
embedding = embedding.to(device)
features_fc8 = features_fc8.to(device)
# print(wav.shape)
# embedding = VGG(img_normal)# .to(device)
output = model(wav)
# print(output.shape)
fvgg_s = VGG(output)
fvgg_f = features_fc8
fdec_s = decoder(output)
fdec_f = decoder(embedding)
# loss = F.l1_loss(output, embedding)
loss = F.l1_loss(fdec_f, fdec_s) + lambda1 * loss_2(embedding, output) + lambda2 * dist_loss(fvgg_f, fvgg_s)
# loss = F.l1_loss(embedding, output) + lambda1 * loss_2(embedding, output) + lambda2 * dist_loss(fvgg_f, fvgg_s)
loss.backward()
opt.step()
opt.zero_grad()
#print('here')
train_loss_averager(loss.item())
tqdm_iterator.set_description(
f"train batch [avg loss: {train_loss_averager(None):.3f}]"
)
tqdm_iterator.refresh()
# TEST
model.eval()
        test_loss_averager = make_averager()  # maintain a running average of the loss
correct = 0
for (x, y) in test_dl:
with torch.no_grad():
# send to device
embedding, wav = x
features_fc8, img = y
# send to device
wav = wav.to(device)
embedding = embedding.to(device)
features_fc8 = features_fc8.to(device)
# print(wav.shape)
# embedding = VGG(img_normal)# .to(device)
output = model(wav)
# print(output.shape)
fvgg_s = VGG(output)
fvgg_f = features_fc8
fdec_s = decoder(output)
fdec_f = decoder(embedding)
# loss = F.l1_loss(output, embedding)
loss = F.l1_loss(fdec_f, fdec_s) + lambda1 * loss_2(embedding, output) + lambda2 * dist_loss(fvgg_f, fvgg_s)
# loss = F.l1_loss(embedding, output) + lambda1 * loss_2(embedding, output) + lambda2 * dist_loss(fvgg_f, fvgg_s)
                test_loss_averager(loss.item())
print(
f"Epoch: {epoch}\n"
f"Train set: Average loss: {train_loss_averager(None):.4f}\n"
f"Test set: Average loss: {test_loss_averager(None):.4f}, "
#f"Accuracy: {correct}/{len(test_dl.dataset)} ({accuracy:.0f}%)\n"
)
        _save_model(epoch, model, opt)
#torch.save(model.state_dict(), join(wandb.run.dir, 'model.pt'))
#wandb.log({"Train set Average loss:": train_loss_averager(None)})
# models_accuracy[tag] = accuracy
# return accuracy
# + id="lEyFPE9NBXYP" colab_type="code" colab={}
# taken from https://discuss.pytorch.org/t/is-this-loss-function-for-knowledge-distillation-correct/69863
def dist_loss(t, s):
T = 2
prob_t = F.softmax(t/T, dim=1)
log_prob_s = F.log_softmax(s/T, dim=1)
dist_loss = -(prob_t*log_prob_s).sum(dim=1).mean()
return dist_loss
def loss_2(vf, vs):
return F.mse_loss(F.normalize(vf), F.normalize(vs))
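# To make the math in dist_loss explicit, the same temperature-scaled cross-entropy can
# be written out in NumPy (an illustrative re-derivation, not the training code, which
# uses the PyTorch version above):

```python
import numpy as np

def softmax_np(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def dist_loss_np(t, s, T=2.0):
    # Cross-entropy between the teacher's and the student's softened
    # distributions, averaged over the batch -- same form as dist_loss above.
    p_t = softmax_np(t, T)
    log_p_s = np.log(softmax_np(s, T))
    return -(p_t * log_p_s).sum(axis=1).mean()

teacher = np.array([[2.0, 0.0], [0.0, 2.0]])
matched = dist_loss_np(teacher, teacher)     # student agrees with teacher
mismatched = dist_loss_np(teacher, -teacher) # student disagrees
```

# The loss is minimized (down to the entropy of the teacher's softened distribution)
# when the student's logits match the teacher's, and grows as they diverge.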
# + id="2jETsH8Nx7JU" colab_type="code" colab={}
check_path = None #"/content/drive/My Drive/Speech2Face/models/speech_encoder/adam_epoch_5.pth"
def _load_checkpoint(checkpoint_path):
checkpoint = torch.load(checkpoint_path)
global_ep = checkpoint["epoch"]
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
print("Loaded checkpoint, restart epoch is: ", global_ep)
return global_ep + 1
global_ep = 0
if check_path is not None:
global_ep = _load_checkpoint(check_path)
# + id="qK_BJXi8xsX2" colab_type="code" colab={}
fit(10, datal, datal_test, model, vgg_fc8, decoder, optimizer, "Speech2Face Training", restart_epoch=global_ep)
# + id="kTVgjwV0dHL6" colab_type="code" colab={}
from torchsummary import summary
# model = SpeechEncoder()
input = torch.unsqueeze(torch.tensor(converted).reshape(2,601,257), 0)
model(input.type(torch.float32)).shape
summary(model, (2,257,601))
# + id="kBS5g-ckzeV9" colab_type="code" colab={}
# Quick testing -- models creation
# enc = speech_encoder()
dec_w = decoder_warping()
enc = model
# + id="hLpG4SnDzzhh" colab_type="code" colab={}
# Quick testing -- actual test
enc.eval()
# dec.eval()
dec_w.eval()
test_wav_path = "/content/drive/My Drive/Speech2Face/vox1_dataset/vox_audios/ext/wav/id10279/3qAxPgeIvCQ/00001.wav"
test_wav = load_wav(test_wav_path)
test_wav = torch.tensor(test_wav).reshape(2,257,601).float().unsqueeze(0).to(device)
test_wav_path2 = "/content/drive/My Drive/Speech2Face/vox1_dataset/vox_audios/ext/wav/id10277/0rpfN7wThsg/00001.wav"
test_wav2 = load_wav(test_wav_path2)
test_wav2 = torch.tensor(test_wav2).reshape(2,257,601).float().unsqueeze(0).to(device)
print(torch.equal(test_wav, test_wav2))
#print(test_wav, test_wav2)
out = enc(test_wav)
decoded_w = dec_w(out)
out2 = enc(test_wav2)
decoded_w2 = dec_w(out2)
# + id="ld7jvncs3IJE" colab_type="code" colab={}
# Quick testing -- showing output
#1 - Decoder w/ warping
o_img_w = cv2.cvtColor(np.einsum('abc->bca',decoded_w[0].detach().numpy()*255), cv2.COLOR_BGR2RGB)
cv2_imshow(o_img_w)
o_img_w2 = cv2.cvtColor(np.einsum('abc->bca',decoded_w2[0].detach().numpy()*255), cv2.COLOR_BGR2RGB)
cv2_imshow(o_img_w2)
# #2 - Decoder w/o warping
# o_img = cv2.cvtColor(np.einsum('abc->bca',decoded[0].detach().numpy()*255),cv2.COLOR_BGR2RGB)
# cv2_imshow(o_img)
#
# o_img2 = cv2.cvtColor(np.einsum('abc->bca',decoded2[0].detach().numpy()*255),cv2.COLOR_BGR2RGB)
# cv2_imshow(o_img2)
# (end of Speech2Face_newDataset.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Prepare inputs for Amazon Quicksight visualization
#
# Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people who you work with, wherever they are. Amazon QuickSight connects to your data in the cloud and combines data from many different sources. In a single data dashboard, QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. As a fully managed cloud-based service, Amazon QuickSight provides enterprise-grade security, global availability, and built-in redundancy. It also provides the user-management tools that you need to scale from 10 users to 10,000, all with no infrastructure to deploy or manage.
#
# In this notebook, we will prepare the manifest file that we need to use with Amazon Quicksight to visualize insights we generated from our customer call transcripts.
# ### Initialize libraries and import variables
# +
# import libraries
import pandas as pd
import boto3
import json
import csv
import os
# initialize variables we need
infile = 'quicksight_raw_manifest.json'
outfile = 'quicksight_formatted_manifest_type.json'
inprefix = 'quicksight/data'
manifestprefix = 'quicksight/manifest'
bucket = '<your-bucket-name>' # Enter your bucket name here
s3 = boto3.client('s3')
try:
s3.head_bucket(Bucket=bucket)
except:
print("The S3 bucket name {} you entered seems to be incorrect, please try again".format(bucket))
# -
# ### Review transcripts with insights for QuickSight
# When we ran the previous notebooks, we created CSV files containing the speaker and time segmentation, the inference results that classified the transcripts as CTA/No CTA using Amazon Comprehend custom classification, the custom entities detected with the Amazon Comprehend custom entity recognizer, and the sentiment of the call transcripts detected with the Amazon Comprehend Sentiment analysis feature. These are available in our temp folder; let us move them to the quicksight/input folder.
# Let's review what CSV files we have for QuickSight
# !aws s3 ls s3://{bucket}/{inprefix} --recursive
# ### Update QuickSight Manifest
# We will replace the S3 bucket and prefix from the raw manifest file with what you have entered in STEP 0 - CELL 1 above. We will then create a new formatted manifest file that will be used for creating a dataset with Amazon QuickSight based on the content we extract from the handwritten documents.
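# For reference, the raw manifest the next cell templates has roughly this shape
# (hypothetical placeholder content -- the actual quicksight_raw_manifest.json ships with
# the workshop; only the literal 'bucket'/'prefix' tokens matter, since the code swaps
# them for real values):

```python
import json

# Hypothetical stand-in for quicksight_raw_manifest.json
raw_manifest = {
    "fileLocations": [
        {"URIPrefixes": ["s3://bucket/prefix/"]}
    ],
    "globalUploadSettings": {"format": "CSV"}
}

# Same templating step as the cell below: serialize, substitute, parse back
uris_json = json.dumps(raw_manifest["fileLocations"][0]["URIPrefixes"])
templated = json.loads(
    uris_json.replace("bucket", "my-bucket").replace("prefix", "quicksight/data/sentiment")
)
```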
# +
# S3 boto3 client handle
s3 = boto3.client('s3')
# Create formatted manifests for each type of dataset we need from the raw manifest JSON
types = ['transcripts', 'entity', 'cta', 'sentiment']
manifest = open(infile, 'r')
ln = json.load(manifest)
t = json.dumps(ln['fileLocations'][0]['URIPrefixes'])
for dataset_type in types:
    t1 = t.replace('bucket', bucket).replace('prefix', inprefix + '/' + dataset_type)
    ln['fileLocations'][0]['URIPrefixes'] = json.loads(t1)
    outfile_rep = outfile.replace('type', dataset_type)
    with open(outfile_rep, 'w', encoding='utf-8') as out:
        json.dump(ln, out, ensure_ascii=False, indent=4)
    # Upload the manifest to S3
    s3.upload_file(outfile_rep, bucket, manifestprefix + '/' + outfile_rep)
    print("Manifest file uploaded to: s3://{}/{}".format(bucket, manifestprefix + '/' + outfile_rep))
# -
# #### Please copy the manifest S3 URIs above. We need it when we build the datasets for the QuickSight dashboard.
#
# ### We are done here. Please go back to workshop instructions.
# (end of notebooks/5-Visualize-Insights/AIM317-reInvent2021-prepare-quicksight-inputs.ipynb)
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R 3.3
# language: R
# name: ir33
# ---
install.packages("reinforcelearn")
library(reinforcelearn)
# **Documentation:** https://cran.r-project.org/web/packages/reinforcelearn/reinforcelearn.pdf
# 
#
# The grid-world task: The goal of the game is to find the “star”. At the beginning of each trial the agent is placed in the (1,3) cell, as shown. The shortest path to the goal is composed of 14 steps, one such optimal path is marked by a dashed line. (source: https://www.researchgate.net/figure/The-grid-world-task-The-goal-of-the-game-is-to-find-the-star-At-the-beginning-of-each_fig2_262526038)
# Create the reinforcement learning environment with makeEnvironment.
#There are some predefined environment classes, e.g. MDPEnvironment,
#which allows you to create a Markov Decision Process by passing on state transition
#array and reward matrix, or GymEnvironment, where you can use toy problems from OpenAI Gym.
env = makeEnvironment("gridworld", shape = c(4, 4), goal.states = 15, initial.state = 0)
env$visualize()
#With makeAgent you can set up a reinforcement learning agent to solve the environment,
#i.e. to find the best action in each time step.
#The first step is to set up the policy, which defines which action to choose.
# Create qlearning agent with softmax policy and tabular value function.
policy = makePolicy("softmax")
values = makeValueFunction("table", n.states = env$n.states, n.actions = env$n.actions)
#We want the agent to be able to learn something. Value-based algorithms learn a value
#function from interaction with the environment and adjust the policy according to the
#value function. For example we could set up Q-Learning with a softmax policy.
algorithm = makeAlgorithm("qlearning")
agent = makeAgent(policy, values, algorithm)
# Run interaction for 20 steps.
interact(env, agent, n.episodes = 20L)
# (end of Section 01/qLearning in R.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The data used in this notebook come from the Kaggle competition [Twitter Sentiment Analysis](https://www.kaggle.com/c/twitter-sentiment-analysis2).
# # Sentiment Analysis
# ## Contents
# 1. [Twitter Sentiment Analysis](#twitter)<br>
# 1.1 [Description](#descrizione)<br>
# 2. [Lexical analysis](#lessicale)<br>
# 2.1 [Replacing specific patterns](#sostituire)<br>
# 2.2 [Tokenizing the tweet](#token)<br>
# 2.3 [Removing *stop words*](#stop_word)<br>
# 2.4 [Reducing *tokens* to their stems (*stemming*)](#stemming)<br>
# 3. [Exploratory analysis](#esplorativa)<br>
# 3.1 [Preparing the data for exploratory analysis](#preparare)<br>
# 3.2 [Visualizing the most frequent *tokens* and *hashtags* in positive and negative tweets](#token_hashtag)<br>
# 4. [Classification metrics](#metriche)<br>
# 5. [Classifying tweets](#classificare)<br>
# 5.1 [Creating a baseline](#baseline)<br>
# 5.2 [Creating a classification pipeline](#pipeline)<br>
# 6. [Analyzing model performance](#performance)<br>
# 7. [Analyzing the fitted model](#analizzare_modello)<br>
# 8. [Analyzing prediction errors](#errori)<br>
# +
import inspect
import matplotlib.pyplot as plt
import nltk
import numpy as np
import pandas as pd
# %load_ext autoreload
# %autoreload 2
# -
# # 1. [Twitter Sentiment Analysis](https://www.kaggle.com/c/twitter-sentiment-analysis2) <a id=twitter> </a>
# ## 1.1 Description <a id=descrizione> </a>
# ### Description
# This contest is taken from the real task of Text Processing.
#
# The task is to build a model that will determine the tone (neutral, positive, negative) of the text. To do this, you will need to train the model on the existing data (train.csv). The resulting model will have to determine the class (neutral, positive, negative) of new texts (test data that were not used to build the model) with maximum accuracy.
#
# > Note: the description mentions three classes, but only two classes are present in the dataset. The metric in the description appears to be accuracy, while the Evaluation section instead points to the F1 score. We treat the problem as binary classification and use the F1 score as the main metric.
#
# ### Evaluation
# The evaluation metric for this competition is Mean F1-Score. The F1 score, commonly used in information retrieval, measures accuracy using the statistics precision p and recall r. Precision is the ratio of true positives (tp) to all predicted positives (tp + fp). Recall is the ratio of true positives to all actual positives (tp + fn). The F1 score is given by:
# $$
# F1 = 2\frac{p \cdot r}{p + r}\, \text{where}\, p = \frac{tp}{tp + fp},\, r = \frac{tp}{tp + fn}
# $$
# The F1 metric weights recall and precision equally, and a good retrieval algorithm will maximize both precision and recall simultaneously. Thus, moderately good performance on both will be favored over extremely good performance on one and poor performance on the other.
# ### Reading the data
# +
PATH = "datasets/twitter"
dati = pd.read_csv(PATH + "/train.csv", encoding="latin")
print("Dimensione del dataset: {} x {}".format(*dati.shape))
dati.head()
# -
# ### Separating the explanatory variables from the response variable
X, y = dati["SentimentText"].tolist(), dati["Sentiment"].values
# # 2. Lexical analysis <a id=lessicale> </a>
# ## 2.1 Replacing specific patterns <a id=sostituire> </a>
# ### Replacing HTML tags
from bs4 import BeautifulSoup
tweet = X[91]
print("Tweet:\n{}".format(tweet))
print("\nTweet dopo aver sostituito i tag HTML:\n{}".format(BeautifulSoup(tweet, "lxml").get_text()))
# ### Replacing hyperlinks
import re
tweet = X[16]
print("Tweet:\n{}".format(tweet))
print("\nTweet dopo aver sostituito i collegamenti ipertestuali:\n{}".format(re.sub("http\S+", " link ", tweet)))
# ## 2.2 Tokenizing the tweet <a id=token> </a>
from nltk.tokenize import TweetTokenizer
# +
tokenizer = TweetTokenizer(
    preserve_case=False, # if False: Questo è un ESEMPIO -> ['questo', 'è', 'un', 'esempio']
    reduce_len=True, # if True: ma daiiiii non ci credooooo -> ['ma', 'daiii', 'non', 'ci', 'credooo']
    strip_handles=True # if True: cosa ne pensi @mario? -> ['cosa', 'ne', 'pensi', '?']
)
tweet = X[2715]
print("Tweet:\n{}".format(tweet))
print("\nTweet dopo la riduzione in token:\n{}".format(tokenizer.tokenize(tweet)))
# -
# ## 2.3 Removing *stop words* <a id=stop_word> </a>
nltk.download("stopwords")
from nltk.corpus import stopwords
from string import punctuation
# ### Removing some predefined *stop words* and punctuation
# +
stop_words = stopwords.words('english') + list(punctuation)
tweet = X[0]
tweet = tokenizer.tokenize(tweet)
print("Tweet dopo la riduzione in token:\n{}".format(tweet))
print("\nTweet dopo la rimozione delle stop words:\n{}".format([token for token in tweet if token not in stop_words]))
# -
# ### Removing numbers
tweet = X[3]
tweet = tokenizer.tokenize(tweet)
print("Tweet dopo la riduzione in token:\n{}".format(tweet))
print("\nTweet dopo la rimozione delle stop words:\n{}".format([token for token in tweet if not token.isdigit()]))
# ## 2.4 Reducing *tokens* to their stems (*stemming*) <a id=stemming> </a>
from nltk.stem.snowball import SnowballStemmer
# +
stemmer = SnowballStemmer("english")
tweet = X[1]
tweet = tokenizer.tokenize(tweet)
print("Tweet dopo la riduzione in token:\n{}".format(tweet))
print("\nTweet dopo la riduzione alla radice dei token:\n{}".format([stemmer.stem(token) for token in tweet]))
# -
# ### Exercise
#
# 1. Complete the `tweet_analyzer()` function defined in `msbd/preprocessamento/tweet_analyzer.py`. Following what we have seen so far, the function must:
#     1. Replace the HTML tags and the hyperlinks;
#     2. Turn the tweet into a list of *tokens*;
#     3. Remove the *stop words* (including numbers, as seen above);
#     4. Reduce the *tokens* to their stems.
# 2. Check the correctness of the function using pytest.
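# A simplified, dependency-free sketch of the same four-step pipeline (regex-based link
# handling, naive tokenization, a toy stop-word list and suffix stripping stand in for
# BeautifulSoup, TweetTokenizer, the NLTK stop words and SnowballStemmer):

```python
import re

TOY_STOP_WORDS = {"a", "an", "and", "at", "is", "the", "to"}  # illustrative subset

def simple_tweet_analyzer(tweet: str) -> list:
    tweet = re.sub(r"<[^>]+>", " ", tweet)          # 1. strip HTML tags
    tweet = re.sub(r"http\S+", " link ", tweet)     # 1. replace hyperlinks
    tokens = re.findall(r"[a-z']+", tweet.lower())  # 2. naive tokenization
    tokens = [t for t in tokens if t not in TOY_STOP_WORDS]  # 3. stop words
    return [t[:-3] if t.endswith("ing") else t for t in tokens]  # 4. toy "stemming"

result = simple_tweet_analyzer("Loving this <b>tweet</b>! solution at http://example.com")
```

# The real exercise should of course use the proper tools imported above; this sketch
# only illustrates the order of the steps.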
# +
from msbd.preprocessamento import tweet_analyzer
print(inspect.getsource(tweet_analyzer))
# -
# !pytest -v msbd/tests/test_tweet_analyzer.py
# ### Example tweet after preprocessing
tweet = "@student! analyze this <3 tweeeet;, solution at http://www.fakelink.com :D 42 #42"
print("Tweet:\n{}".format(tweet))
print("\nTweet dopo la riduzione alla radice dei token:\n{}".format(tweet_analyzer(tweet, tokenizer, stemmer, stop_words)))
# # 3. Exploratory analysis <a id=esplorativa> </a>
# ### Splitting the data into training and test sets
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
print("# tweet in train: {} ({} pos / {} neg)".format(len(X_train), (y_train == 1).sum(), (y_train == 0).sum()))
print("# tweet in test: {}".format(len(X_test)))
# -
# ## 3.1 Preparing the data for exploratory analysis <a id=preparare> </a>
# ### Preprocessing the tweets
import tqdm
X_preproc = [tweet_analyzer(tweet, tokenizer, stemmer, stop_words) for tweet in tqdm.tqdm(X_train)]
# ### Building the lists of *tokens* from tweets with positive and negative sentiment
import itertools
token_pos = list(itertools.chain.from_iterable(list(itertools.compress(X_preproc, y_train == 1))))
token_neg = list(itertools.chain.from_iterable(list(itertools.compress(X_preproc, y_train == 0))))
# ### Building the lists of *hashtags* from tweets with positive and negative sentiment
hashtag_pos = [token for token in token_pos if token.startswith("#")]
hashtag_neg = [token for token in token_neg if token.startswith("#")]
# ## 3.2 Visualizing the most frequent *tokens* and *hashtags* in positive and negative tweets <a id=token_hashtag> </a>
# ### Creating a `Counter` instance for each list
from collections import Counter
c_token_pos = Counter(token_pos)
c_token_neg = Counter(token_neg)
c_hashtag_pos = Counter(hashtag_pos)
c_hashtag_neg = Counter(hashtag_neg)
# ### Bar charts
# +
N = 5
plt.figure(figsize=(15, 3))
plt.subplot(121)
plt.title("{} hashtag più frequenti nei tweet positivi".format(N))
plt.bar(*zip(*c_hashtag_pos.most_common(N)), color="gold")
plt.xticks(rotation="vertical")
plt.subplot(122)
plt.title("{} hashtag più frequenti nei tweet negativi".format(N))
plt.bar(*zip(*c_hashtag_neg.most_common(N)), color="midnightblue")
plt.xticks(rotation="vertical")
plt.show()
# +
N = 20
plt.figure(figsize=(15, 3))
plt.subplot(121)
plt.title("{} token più frequenti nei tweet positivi".format(N))
plt.bar(*zip(*c_token_pos.most_common(N)), color="gold")
plt.xticks(rotation="vertical")
plt.subplot(122)
plt.title("{} token più frequenti nei tweet negativi".format(N))
plt.bar(*zip(*c_token_neg.most_common(N)), color="midnightblue")
plt.xticks(rotation="vertical")
plt.show()
# -
# ### Word clouds
from wordcloud import WordCloud
# +
MASK = plt.imread("figures/twitter.jpg")
MAX_WORDS = 200
MAX_FONT_SIZE = 200
RELATIVE_SCALING = 1
wc_pos = WordCloud(
mask=MASK,
max_words=MAX_WORDS,
background_color="white",
max_font_size=MAX_FONT_SIZE,
relative_scaling=RELATIVE_SCALING,
).generate_from_frequencies(c_token_pos)
wc_neg = WordCloud(
mask=MASK[:, ::-1, :],
max_words=MAX_WORDS,
background_color="midnightblue",
max_font_size=MAX_FONT_SIZE,
relative_scaling=RELATIVE_SCALING,
colormap=plt.cm.YlOrRd
).generate_from_frequencies(c_token_neg)
# +
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.imshow(wc_pos, interpolation='bilinear')
plt.axis("off")
plt.subplot(122)
plt.imshow(wc_neg, interpolation='bilinear')
plt.axis("off")
plt.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()
# -
# # 4. Classification metrics <a id=metriche> </a>
# ### The confusion matrix and the metrics that can be derived from it
# 
#
# *Image taken from the Wikipedia [Confusion_matrix](https://en.wikipedia.org/wiki/Confusion_matrix) page.*
# ### Exercise
#
# 1. Complete the methods of the `MetricheClassificazione` class defined in `msbd/preprocessamento/metriche.py`;
# 2. Check the correctness of the defined methods using pytest.
#
# > Hints:
# > 1. Take inspiration from the methods that are already defined;
# > 2. Run the pytest check every time you define a new method.
# +
from msbd.metriche import MetricheClassificazione
print(inspect.getsource(MetricheClassificazione))
# -
# !pytest -v msbd/tests/test_metriche_classificazione.py
# ### Example
# +
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 0, 0, 1, 1, 1, 1])
print("# negativi: {}".format(MetricheClassificazione.n_negativi(y_true, y_pred)))
print("# positivi: {}".format(MetricheClassificazione.n_positivi(y_true, y_pred)))
print("# previsti negativi: {}".format(MetricheClassificazione.n_previsti_negativi(y_true, y_pred)))
print("# previsti positivi: {}".format(MetricheClassificazione.n_previsti_positivi(y_true, y_pred)))
print()
print("Matrice di confusione:")
print("# veri negativi: {}".format(MetricheClassificazione.n_veri_negativi(y_true, y_pred)))
print("# falsi positivi: {}".format(MetricheClassificazione.n_falsi_positivi(y_true, y_pred)))
print("# falsi negativi: {}".format(MetricheClassificazione.n_falsi_negativi(y_true, y_pred)))
print("# veri positivi: {}".format(MetricheClassificazione.n_veri_positivi(y_true, y_pred)))
print()
print("Tasso falsi positivi: {:.2f}".format(MetricheClassificazione.tasso_falsi_positivi(y_true, y_pred)))
print("Tasso veri positivi: {:.2f}".format(MetricheClassificazione.tasso_veri_positivi(y_true, y_pred)))
print("Precisione: {:.2f}".format(MetricheClassificazione.precisione(y_true, y_pred)))
print("Richiamo: {:.2f}".format(MetricheClassificazione.richiamo(y_true, y_pred)))
print("Punteggio F1: {:.2f}".format(MetricheClassificazione.punteggio_f1(y_true, y_pred)))
# -
# # 5. Classifying tweets <a id=classificare> </a>
# ## 5.1 Creating a baseline <a id=baseline> </a>
from msbd.grafici import grafico_matrice_confusione
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
# +
dc = DummyClassifier(strategy="most_frequent")
dc.fit(X_train, y_train)
y_pred = dc.predict(X_test)
precisione_baseline = precision_score(y_test, y_pred)
richiamo_baseline = recall_score(y_test, y_pred)
f1_score_baseline = f1_score(y_test, y_pred)
print("Precisione: {:.2f}".format(precisione_baseline))
print("Richiamo: {:.2f}".format(richiamo_baseline))
print("F1 score: {:.2f}".format(f1_score_baseline))
grafico_matrice_confusione(y_test, y_pred, ["neg", "pos"])
# -
# ### Exercise
#
# `DummyClassifier` achieves an F1 score of 73% and even a recall of 100%! Is the prediction made by this model useful in a real-world case? Justify your answer and reflect on the result.
# ## 5.2 Creating a classification pipeline <a id=pipeline> </a>
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
# ### Define the pipeline
# +
vect = CountVectorizer(
analyzer=lambda t: tweet_analyzer(t, tokenizer, stemmer, stop_words),
min_df=50,
max_df=0.7,
)
tree = DecisionTreeClassifier(min_samples_leaf=25)
clf = Pipeline([('vect', vect), ('tree', tree)])
clf.fit(X_train, y_train)
# -
# > Note: all hyperparameters were chosen "a priori" and, above all, without making any decisions based on the *test* set. To pick the best hyperparameter combination from a set of candidates (see *grid search*, *random search*, ...), we would also need a third *validation* set. The same applies to choosing between different algorithms (e.g. `DecisionTreeClassifier` vs `LogisticRegression`).
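A minimal sketch of that selection step, using cross-validation on training data only so the test set stays untouched (the toy dataset and grid values are illustrative assumptions, not the values used in this notebook):

```python
# Hedged sketch: hyperparameter selection via cross-validated grid search
# on training data only; dataset and grid values are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X_toy, y_toy = make_classification(n_samples=200, random_state=0)
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"min_samples_leaf": [5, 25, 50]},
    scoring="f1",
    cv=5,
)
grid.fit(X_toy, y_toy)
print(grid.best_params_)
```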
# # 6. Analyzing model performance <a id=performance> </a>
# ### Estimate, for each tweet in the test set, the probability that its sentiment is positive
# +
SOGLIA_DECISIONALE = 0.5 # default
y_score = clf.predict_proba(X_test)[:, 1]
y_pred = (y_score > SOGLIA_DECISIONALE).astype(int) # equivalent to y_pred = clf.predict(X_test)
# -
# ### Exercise
#
# Describe a case in which the default decision threshold (0.5) is not appropriate.
# ### Analyze the model's performance at a fixed decision threshold
print("Precisione: {:.2f} (baseline = {:.2f})".format(precision_score(y_test, y_pred), precisione_baseline))
print("Richiamo: {:.2f} (baseline = {:.2f})".format(recall_score(y_test, y_pred), richiamo_baseline))
print("F1 score: {:.2f} (baseline = {:.2f})".format(f1_score(y_test, y_pred), f1_score_baseline))
grafico_matrice_confusione(y_test, y_pred, ["neg", "pos"])
# ### Analyze the attainable combinations of the metrics of interest as the decision threshold varies
y_pred_25 = (y_score > 0.25).astype(int)
y_pred_50 = (y_score > 0.5).astype(int)
y_pred_75 = (y_score > 0.75).astype(int)
from msbd.grafici import grafico_curva_precisione_richiamo
from msbd.grafici import grafico_curva_roc
# +
MARKER = "*"
S = 100
plt.figure(figsize=(10, 5))
plt.subplot(121)
grafico_curva_roc(y_test, y_score)
plt.scatter(MetricheClassificazione.tasso_falsi_positivi(y_test, y_pred_75), recall_score(y_test, y_pred_75),
marker=MARKER, s=S, c="brown", label="Soglia decisionale = 0.75", zorder=3)
plt.scatter(MetricheClassificazione.tasso_falsi_positivi(y_test, y_pred_50), recall_score(y_test, y_pred_50),
marker=MARKER, s=S, c="red", label="Soglia decisionale = 0.5", zorder=3)
plt.scatter(MetricheClassificazione.tasso_falsi_positivi(y_test, y_pred_25), recall_score(y_test, y_pred_25),
marker=MARKER, s=S, c="tomato", label="Soglia decisionale = 0.25", zorder=3)
plt.legend()
plt.subplot(122)
grafico_curva_precisione_richiamo(y_test, y_score)
plt.scatter(recall_score(y_test, y_pred_75), precision_score(y_test, y_pred_75),
marker=MARKER, s=S, c="brown", label="Soglia decisionale = 0.75", zorder=3)
plt.scatter(recall_score(y_test, y_pred_50), precision_score(y_test, y_pred_50),
marker=MARKER, s=S, c="red", label="Soglia decisionale = 0.5", zorder=3)
plt.scatter(recall_score(y_test, y_pred_25), precision_score(y_test, y_pred_25),
marker=MARKER, s=S, c="tomato", label="Soglia decisionale = 0.25", zorder=3)
plt.legend()
plt.show()
# -
# # 7. Analyzing the fitted model <a id=analizzare_modello> </a>
# ### Visualize the tree
from sklearn.tree import export_graphviz
import graphviz
dot_data = export_graphviz(
decision_tree=clf.named_steps["tree"],
max_depth=4,
feature_names=clf.named_steps["vect"].get_feature_names(),
class_names=("Neg", "Pos"),
filled=True,
rounded=True,
)
display(graphviz.Source(dot_data))
# ### Visualize feature importances
from msbd.grafici import grafico_importanza_variabili
# +
MAX_NUM = 50
plt.figure(figsize=(15, 3))
variabili = clf.named_steps["vect"].get_feature_names()
importanze = clf.named_steps["tree"].feature_importances_
titolo = "Importanza delle prime {} variabili su {}".format(MAX_NUM, len(variabili))
grafico_importanza_variabili(importanze, variabili, max_num=MAX_NUM, titolo=titolo)
plt.show()
# -
# # 8. Analyzing prediction errors <a id=errori> </a>
X_test_preproc = [tweet_analyzer(tweet, tokenizer, stemmer, stop_words) for tweet in tqdm.tqdm(X_test)]
tweet_score = pd.DataFrame({"tweet":X_test, "tweet_preproc": X_test_preproc, "score": y_score,
"sentimento": y_test})
# ### True sentiment negative, predicted positive with high confidence
# +
N = 5
print("Primi {} tweet con sentimento negativo previsti con sentimento positivo:".format(N))
for _, riga in tweet_score[tweet_score["sentimento"] == 0].sort_values("score", ascending=False).head(N).iterrows():
print("\nScore: {:.2f}".format(riga["score"]))
print("Tweet:\n{}".format(riga["tweet"]))
print("Tweet dopo il preprocessamento:\n{}".format(riga["tweet_preproc"]))
# -
# ### True sentiment positive, predicted negative with high confidence
# ### Exercise
#
# Analyze the case where the true sentiment was positive but the model predicted it negative with high confidence.
# +
N = 5
print("Primi {} tweet con sentimento positivo previsti con sentimento negativo:".format(N))
# ============== YOUR CODE HERE ==============
raise NotImplementedError
# ============================================
# -
| 13_analisi_del_sentimento.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Convert the Dataset into Vectors
# The goal of this notebook is to generate a vector for each cell for all notebooks in the sliced-notebooks dataset.
# Dimensions of vector array: n * sequence count * 300
# # Import modules
import pandas as pd
import numpy as np
import os
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models.doc2vec import Doc2Vec
import torch
from tokenize_code import tokenize_code
# # Import dataset and doc2vec model
df = pd.read_json("../data/all-notebooks-tokenized.json", orient='index')
model = Doc2Vec.load("../model/notebook-doc2vec-model-apr24.model")
df = df[df.cell_type == "code"]
df['cell_num'] = df.groupby(['competition','filename']).cumcount()+1
len(df)
df.columns
# # Group the dataset by notebook and generate doc2vec vectors
df_test = df
df.columns
# + tags=[]
allVectors = []
allVectorsFilenames = []
for i, notebook in df_test.groupby("filename"):
vectorSeq = []
vectorNameSeq = []
# vectorSeq is a list of doc2vec vectors corresponding to [Cell0, Cell1, .... Celln]
# each vectorSeq list corresponds to a single notebook
for j, row in notebook.iterrows():
#print(row)
competition = row[3]
cell_num = row[5]
tokenized_source = row[4]
kernel_id = row[2]
source = row[1]
vector = model.infer_vector(tokenized_source)
vectorSeq.append(vector)
vectorNameSeq.append(notebook.iloc[0]['competition'] + "/" + notebook.iloc[0]['filename'].astype(str) + "_" + str(cell_num))
allVectors.append(vectorSeq)
allVectorsFilenames.append(vectorNameSeq)
# -
# ## Convert from lists of arrays to array of arrays
allVectors = [np.asarray(vectorSeq) for vectorSeq in allVectors]
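The list-of-lists to object-array conversion can be sketched on toy shapes (the 300-dim size matches the doc2vec vectors; the cell counts are made up):

```python
import numpy as np

# Two "notebooks" with 2 and 5 cells; each cell is a 300-dim vector.
# Ragged sequence lengths force a 1-D object array of 2-D arrays.
seqs = [[np.zeros(300) for _ in range(n)] for n in (2, 5)]
arr = np.array([np.asarray(s) for s in seqs], dtype=object)
print(arr.shape, arr[0].shape, arr[1].shape)  # (2,) (2, 300) (5, 300)
```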
print(len(allVectors))
# # Save the arrays
arr = np.array(allVectors,dtype=object)
arrNames = np.array(allVectorsFilenames, dtype=object)
arrNames[8]
np.save("../data/notebooks-doc2vec-vectors-apr24.npy", arr)
np.save("../data/notebooks-doc2vec-vectors-filenames-apr24.npy", arrNames)
| backend/doc2vec-baselines/Generate-Doc2Vec-Model-From-Json/Doc2Vec_GenerateAllVecs-Tokenize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from pandas import DataFrame, Series
s = Series([1, 2, 3, 4])
s
a = Series( s.values , index = ['a', 'b', 'c', 'd'])
a
dates = pd.date_range('2014-07-01', '2014-07-06')
dates
b = Series([80, 82, 85, 90, 83, 87],index = dates)
b
b.mean()
b2 = Series([70, 75, 69, 83, 79, 77],index = dates)
b2
b2['2014-07-03']
bb2 = DataFrame({'Missoula': b,'Philadelphia': b2})
bb2
bb2['Missoula']
bb2.Missoula
diff = bb2.Missoula - bb2.Philadelphia
diff
mult = bb2.Missoula * bb2.Philadelphia
mult
new = DataFrame({'Missoula': b,'Philadelphia': b2 , 'difference': diff , 'Multiply': mult})
new
new.columns
new.difference[1:4]
new.iloc[4]
new.iloc[1].index
new.loc['2014-07-03']
new.iloc[[1, 3, 5]].difference
new.iloc[[2,4,0]].Missoula
new.Missoula > 82
new[new.Missoula > 82]
obj = pd.Series([4, 7, -5, 3 , 4, 7, 6],index = pd.RangeIndex(start = 0 , stop = 14 , step = 2))
obj.values
obj.index
obj1 = pd.Series([4, 7, -5, 6],index = ['a' , 'b' , 'c' ,'a'])
obj1.values
obj1['e'] = 34
obj1['a']
sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
sdata
states = ['California', 'Ohio', 'Oregon', 'Utah', 'Texas']
states
obj4 = pd.Series(sdata, index=states)
obj4
| pandas class 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import re
import os
import io
import cv2
import json
import shutil
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence
from torch.utils.data import Dataset
from rdkit import Chem
import sys
sys.path.append('../')
from util import *
# from dataloader import MoleculeDataset
from models.resnet import resnet18
from models.transformer import trans128_4x, trans256_4x
from models.bilstm import biLSTM512
# %load_ext autoreload
# %autoreload 2
import IPython.display as Disp
np.set_printoptions(suppress=True)
# -
DATA_DIR = '/Users/prguser/big_data/bms_kaggle'
TEST_DIR = os.path.join(DATA_DIR, 'test')
TRAIN_DIR = os.path.join(DATA_DIR, 'train')
TRAIN_RESIZE_DIR = os.path.join(DATA_DIR, 'train_resize')
TEST_RESIZE_DIR = os.path.join(DATA_DIR, 'test_resize')
MAX_INCHI_LENGTH = 350
with open('../data/char_dict.json', 'r') as f:
CHAR_DICT = json.load(f)
with open('../data/ord_dict.json', 'r') as f:
ORD_DICT = json.load(f)
IMG_SIZE = 256
VOCAB_SIZE = len(CHAR_DICT.keys())
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def get_n_samples(DIR):
n_samples = 0
for i in os.listdir(DIR):
if '.' == i[0]:
pass
else:
for j in os.listdir(os.path.join(DIR, i)):
if '.' == j[0]:
pass
else:
for k in os.listdir(os.path.join(DIR, i, j)):
if '.' == k[0]:
pass
else:
n_samples += len(os.listdir(os.path.join(DIR, i, j, k)))
return n_samples
n_train_samples = get_n_samples(TRAIN_DIR)
n_test_samples = get_n_samples(TEST_DIR)
'{} training samples, {} test samples'.format(n_train_samples, n_test_samples)
# Training samples are labeled with InChI strings, test samples are unlabeled.
train_labels = pd.read_csv(os.path.join(DATA_DIR, 'train_labels.csv'))
test_labels = pd.read_csv(os.path.join(DATA_DIR, 'sample_submission.csv'))
train_labels.head()
test_labels.head()
train_df = pd.read_csv('../data/train.csv')
val_df = pd.read_csv('../data/val.csv')
test_df = pd.read_csv('../data/test.csv')
# # Exploring the Inputs
# +
def get_path_from_img_id(img_id, DIR):
img_path = os.path.join(DIR, img_id[0], img_id[1], img_id[2], '{}.png'.format(img_id))
return img_path
def visualize_sample(train_labels):
plt.figure(figsize=(16,12))
sample = train_labels.sample(n=9)
img_ids = sample.image_id.values
labels = sample.InChI.values
for idx, (img_id, label) in enumerate(zip(img_ids, labels)):
plt.subplot(3, 3, idx+1)
img_path = get_path_from_img_id(img_id, TRAIN_RESIZE_DIR)
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.title(f"{label[:30]}...", fontsize=10)
# plt.title("{} height, {} width".format(img.shape[0], img.shape[1]))
plt.show()
def visualize_img_id(img_id):
plt.figure(figsize=(16,12))
img_path = get_path_from_img_id(img_id, TRAIN_RESIZE_DIR)
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
visualize_sample(train_labels)
# -
# Obviously, images are very noisy. I think it'll be worth supplementing with less noisy labeled molecular structures (pretty sure we can easily get this data from PubChem). Additionally, we might be able to "fill in" very grainy images based on image correction machine learning models that are used for de-noising photographs, renders, etc.
#
# One way might be to find the exact same molecule on PubChem and train a model to first fix the noisy image.
#
# Finally, image sizes will need to be normalized but it is not clear what the best method for doing so will be. My intuition says that we should use a large padded input box that allows us to keep all features the same size (i.e. a single aromatic ring should occupy the same number of pixels regardless of the total size of the molecule).
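The padded-box idea can be sketched with plain NumPy (the sizes here are illustrative assumptions, not the competition's actual dimensions):

```python
import numpy as np

def pad_to_square(img, size=256, fill=255):
    """Center a grayscale image on a white square canvas without rescaling,
    so features keep their pixel size regardless of molecule size."""
    h, w = img.shape
    canvas = np.full((size, size), fill, dtype=img.dtype)
    top, left = (size - h) // 2, (size - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas

padded = pad_to_square(np.zeros((100, 180), dtype=np.uint8))
print(padded.shape)  # (256, 256)
```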
# ### Denoise
def denoise_img(img_path, dot_size=2):
img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
_, BW = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
nlabels, labels, stats, _ = cv2.connectedComponentsWithStats(BW, None, None, None, 8, cv2.CV_32S)
sizes = stats[1:,-1]
img2 = np.zeros((labels.shape), np.uint8)
for i in range(0, nlabels-1):
if sizes[i] >= dot_size:
img2[labels == i+1] = 255
img = cv2.bitwise_not(img2)
return img
img_path = get_path_from_img_id(train_labels.sample(n=1).image_id.values[0], TRAIN_DIR)
img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
plt.figure(figsize=(8,8))
plt.imshow(img, cmap='gray')
plt.show()
img = denoise_img(img_path)
plt.figure(figsize=(8,8))
plt.imshow(img, cmap='gray')
plt.show()
from skimage import morphology
img_path = get_path_from_img_id(train_labels.sample(n=1).image_id.values[0], TRAIN_RESIZE_DIR)
img = (255 - cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)) / 255
plt.figure(figsize=(8,8))
plt.imshow(img, cmap='gray')
plt.title('Original', fontweight='bold')
plt.show()
img = morphology.dilation(img, selem=morphology.square(1))
plt.figure(figsize=(8,8))
plt.imshow(img, cmap='gray')
plt.title('Dilation', fontweight='bold')
plt.show()
# img = morphology.erosion(img)
# plt.figure(figsize=(8,8))
# plt.imshow(img, cmap='gray')
# plt.title('Erosion', fontweight='bold')
# plt.show()
img = morphology.skeletonize(img)
plt.figure(figsize=(8,8))
plt.imshow(img, cmap='gray')
plt.title('Skeletonize', fontweight='bold')
plt.show()
# Denoising only works on images before they are resized. Must denoise and then resize.
# ### Resize
import PIL
from PIL import Image, ImageOps
from tqdm.auto import tqdm
PIL.__version__
def pillow_pad(img, desired_size, color=(255,255,255,0), resample=Image.LANCZOS, copy=False):
if copy:
img = img.copy()
old_size = img.size
ratio = float(desired_size)/max(old_size)
new_size = tuple([int(x*ratio) for x in old_size])
img.thumbnail(new_size, resample)
new_img = Image.new('RGB', (desired_size, desired_size), color=color)
new_img.paste(img, ((desired_size-new_size[0])//2,
(desired_size-new_size[1])//2))
return new_img
def resize_imgs(img_ids, DIR, resize=256):
os.makedirs('temp_denoise', exist_ok=True)
temp_path = 'temp_denoise/denoised.png'
for i, img_id in enumerate(img_ids):
img_path = get_path_from_img_id(img_id, DIR)
cv2.imwrite('temp_denoise/noised.png', cv2.imread(img_path, cv2.IMREAD_GRAYSCALE))
### must first denoise image
img = denoise_img(img_path)
### save to temporary path
cv2.imwrite('temp_denoise/denoised.png', img)
img = Image.open(temp_path)
img = pillow_pad(img, resize)
if DIR == TRAIN_DIR:
resize_img_path = os.path.join(TRAIN_RESIZE_DIR, '/'.join(img_path.split('/')[-4:]))
else:
resize_img_path = os.path.join(TEST_RESIZE_DIR, '/'.join(img_path.split('/')[-4:]))
img.save(resize_img_path)
shutil.rmtree('temp_denoise')
# +
### PAD EXAMPLE
resize = 256
img_path = get_path_from_img_id(train_labels.sample(n=1).image_id.values[0], TRAIN_DIR)
img = Image.open(img_path)
img = pillow_pad(img, resize)
img
# +
### Resize Training Samples (np.array)
resize = 256
resize_imgs(train_labels.image_id.values, TRAIN_DIR, resize=resize)
# +
### Resize Test Samples (np.array)
resize = 256
resize_imgs(test_labels.image_id.values, TEST_DIR, resize=resize)
# -
# All resized images are stored in the same directory structure as original data, but the folders are renamed `train_resize` and `test_resize`
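The three-level directory layout assumed by `get_path_from_img_id` can be mirrored in a few lines (a hypothetical mini-demo in a temp directory, not the real data paths):

```python
import os
import tempfile

# Recreate the <root>/<c0>/<c1>/<c2>/<id>.png layout used throughout
# this notebook, keyed on the first three characters of the image id.
root = tempfile.mkdtemp()
img_id = "abc123"
out_path = os.path.join(root, img_id[0], img_id[1], img_id[2], img_id + ".png")
os.makedirs(os.path.dirname(out_path), exist_ok=True)
print(os.path.isdir(os.path.join(root, "a", "b", "c")))  # True
```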
# ### Optical Character Recognition (OCR)
import pytesseract
img_path = get_path_from_img_id(train_labels.sample(n=1).image_id.values[0], TRAIN_RESIZE_DIR)
img = Image.open(img_path)
pytesseract.image_to_boxes(img)
img
# tesseract sucks
def convolve_kernel(window, kernel):
kernel_size = kernel.shape[0]
x_range = window.shape[0] - kernel.shape[0]
y_range = window.shape[1] - kernel.shape[1]
convolutions = []
for x in range(x_range):
for y in range(y_range):
window_slice = window[x:x+kernel_size, y:y+kernel_size].copy()
convolution = np.sum(window_slice@kernel)
convolutions.append(convolution)
score = np.max(convolutions)
return score
def convolve_kernels(window, kernel_list):
scores = []
for kernel in kernel_list:
scores.append(convolve_kernel(window, kernel))
score = np.max(scores)
return score
o_kernel = np.zeros((7, 7))
o_kernel[0,2:5] = [1,1,1]
o_kernel[1,1:6] = [1,0,0,0,1]
o_kernel[2,1] = 1
o_kernel[2,5] = 1
o_kernel[3,1] = 1
o_kernel[3,5] = 1
o_kernel[4,1] = 1
o_kernel[4,5] = 1
o_kernel[5,1:6] = [1,0,0,0,1]
o_kernel[6,2:5] = [1,1,1]
fig = plt.figure(figsize=(5,5))
plt.imshow(o_kernel)
plt.show()
s_kernel = np.zeros((7, 7))
s_kernel[0,1:6] = [1,1,1,1,1]
s_kernel[1:4,1] = [1,1,1]
s_kernel[3,2:6] = [1,1,1,1]
s_kernel[4:6,5] = [1,1]
s_kernel[6,1:6] = [1,1,1,1,1]
fig = plt.figure(figsize=(5,5))
plt.imshow(s_kernel)
plt.show()
# Note: window_list comes from the get_vertices call in the cell below.
for window in window_list['imgs']:
plt.imshow(window)
plt.show()
# +
img_id = train_df.sample(n=1).image_id.values[0]
img_path = get_path_from_img_id(img_id, TRAIN_RESIZE_DIR)
img = Image.open(img_path)
img = img.convert('L')
# display(img)
img = np.array(img)
img = invert_and_normalize(img)
prebinarized = binarize(img)
edges = edge_enhance(prebinarized)
edges = edge_detect(edges)
vertex_map = get_vertices(img, window_size=4, window_mask=False, window_list=True)
fig, ax = plt.subplots(2, 2, figsize=(16,12))
ax[0,0].imshow(img)
ax[0,0].set_title('Raw')
ax[0,1].imshow(prebinarized)
ax[0,1].set_title('Binarized')
ax[1,0].imshow(edges)
ax[1,0].set_title('Edges')
ax[1,1].imshow(vertex_map)
ax[1,1].set_title('Vertices')
plt.show()
# +
threshold = 45
kernel_list = [o_kernel, s_kernel]
vertex_windows, window_list = get_vertices(prebinarized, window_size=7, window_mask=True, window_list=True)
coords_list = window_list['coordinates']
img_list = window_list['imgs']
keep_idxs = []
for i, window in enumerate(img_list):
score = convolve_kernels(window, kernel_list)
if score > threshold:
keep_idxs.append(i)
filtered_window_list = {'coordinates': [],
'imgs': []}
for idx in keep_idxs:
filtered_window_list['coordinates'].append(coords_list[idx])
filtered_window_list['imgs'].append(img_list[idx])
vertex_map, _ = get_vertex_map(filtered_window_list['coordinates'], img, window_size=7, window_list=False)
vertex_windows = np.where(vertex_map == 1, img, 0)
prebinarized = morph_around_windows(img, filtered_window_list, binarize)
edges = morph_around_windows(img, filtered_window_list, edge_enhance)
edges = morph_around_windows(edges, filtered_window_list, edge_detect)
fig, ax = plt.subplots(2, 2, figsize=(16,12))
ax[0,0].imshow(img)
ax[0,0].set_title('Raw', fontweight='bold')
ax[0,1].set_title('Binarized - {} windows kept'.format(len(keep_idxs)), fontweight='bold')
ax[0,1].imshow(prebinarized)
ax[1,0].set_title('Edges - {} windows kept'.format(len(keep_idxs)), fontweight='bold')
ax[1,0].imshow(edges)
ax[1,1].imshow(vertex_windows)
ax[1,1].set_title('Vertex Mask - {} windows kept'.format(len(keep_idxs)), fontweight='bold')
plt.show()
# -
# ### PubChemPy Supplementation
import pubchempy as pcp
with open('../data/fg_names.json', 'r') as f:
fg_names = json.load(f)
fg_names
import operator
sorted_x = sorted(fg_names.items(), key=operator.itemgetter(1))
sorted_x
counts = fg_names.values()
counts = sorted(counts)[:-10]
plt.plot(counts)
plt.axhline(10000, ls=':', color='black')
plt.show()
50000 * 100
np.percentile(counts, 95)
for inchi in train_labels.InChI:
c = pcp.get_compounds(inchi, 'inchi')
print((c, inchi))
test_inchi = 'InChI=1S/C24H43N5O2/c1-21(2)15-11-19(28-30)23(5,13-17(15)21)26-9-7-25-8-10-27-24(6)14-18-16(22(18,3)4)12-20(24)29-31/h15-18,25-27,30-31H,7-14H2,1-6H3/b28-19+,29-20+/t15-,16-,17+,18+,23+,24+/m1/s1'
c = pcp.get_compounds(test_inchi, 'inchi')[0]
c.record
pcp.download('PNG', 'test.png', test_inchi, 'inchi', overwrite=True)
visualize_img_id(train_labels[train_labels.InChI == test_inchi].image_id.values[0])
plt.figure(figsize=(16,12))
img = cv2.imread('test.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
plt.show()
for i in range(img.shape[0]):
for j in range(img.shape[1]):
if img[i,j] == 245:
img[i,j] = 0
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
plt.show()
img
# ### Calc Mean and StdDev of Dataset
def calc_data_mean_std(DIR):
channel_1_means = []
channel_2_means = []
channel_3_means = []
channel_1_stds = []
channel_2_stds = []
channel_3_stds = []
for img_id in train_labels.sample(n=100).image_id.values:
img_path = get_path_from_img_id(img_id, DIR)
img = (255 - cv2.imread(img_path)) / 255
means = np.mean(img.reshape(-1, img.shape[-1]), axis=0)
stds = np.std(img.reshape(-1, img.shape[-1]), axis=0)
channel_1_means.append(means[0])
channel_2_means.append(means[1])
channel_3_means.append(means[2])
channel_1_stds.append(stds[0])
channel_2_stds.append(stds[1])
channel_3_stds.append(stds[2])
channel_1_mean = np.mean(channel_1_means)
channel_2_mean = np.mean(channel_2_means)
channel_3_mean = np.mean(channel_3_means)
channel_1_std = np.mean(channel_1_stds)
channel_2_std = np.mean(channel_2_stds)
channel_3_std = np.mean(channel_3_stds)
means = [channel_1_mean, channel_2_mean, channel_3_mean]
stds = [channel_1_std, channel_2_std, channel_3_std]
mean_std_dict = {'means': means,
'stds': stds}
with open(os.path.join(DIR, 'mean_std.json'), 'w') as f:
json.dump(mean_std_dict, f)
calc_data_mean_std(TRAIN_RESIZE_DIR)
# # Data Loading
def inchi_tokenizer(substring):
pattern = r"(\[[^\]]+]|Br?|Cl?|Si?|N|H|O|S|P|F|I|D|T|b|c|n|o|s|p|h|t|m|i|\(|\)|\.|=|#|-|,|\+|\\|/|_|:|~|@|\?|>|\*|\$|1[0-9]|2[0-9]|[0-9])"
regezz = re.compile(pattern)
tokens = regezz.findall(substring)
assert substring == ''.join(tokens), ("{} could not be joined -> {}".format(substring, tokens))
return tokens
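A quick sanity check of the pattern on a short formula-like substring (the pattern is copied standalone here as a raw string):

```python
import re

# Standalone copy of the tokenizer pattern above; multi-character tokens
# like 'Cl' and two-digit counts like '13' must win over single characters.
PATTERN = re.compile(r"(\[[^\]]+]|Br?|Cl?|Si?|N|H|O|S|P|F|I|D|T|b|c|n|o|s|p|h|t|m|i|\(|\)|\.|=|#|-|,|\+|\\|/|_|:|~|@|\?|>|\*|\$|1[0-9]|2[0-9]|[0-9])")
print(PATTERN.findall("C13H18ClN"))  # ['C', '13', 'H', '18', 'Cl', 'N']
```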
def encode_inchis(inchi, max_len, char_dict):
"Converts tokenized InChIs to a list of token ids"
for i in range(max_len - len(inchi)):
inchi.append('<pad>')
inchi_vec = [char_dict[c] for c in inchi]
return inchi_vec
class MoleculeDataset(Dataset):
"""
PyTorch Dataset class to load molecular images and InChIs
"""
def __init__(self, labels_fn, source_dir, char_dict, max_inchi_len, transform=None):
self.labels = pd.read_csv(labels_fn)
self.source_dir = source_dir
self.char_dict = char_dict
self.max_inchi_len = max_inchi_len
self.transform = transform
with open(os.path.join(source_dir, 'mean_std.json'), 'r') as f:
mean_std_dict = json.load(f)
self.means = mean_std_dict['means']
self.stds = mean_std_dict['stds']
def __getitem__(self, i):
### grab image
img_id = self.labels.image_id.values[i]
img_path = get_path_from_img_id(img_id, self.source_dir)
img = (255 - cv2.imread(img_path)) / 255
if self.transform is not None:
img = self.transform(img)
img = torch.tensor(img)
### grab inchi
inchi = self.labels.InChI.values[i]
inchi = inchi.split('InChI=1S/')[1]
inchi = ''.join(inchi)
tokenized_inchi = substring_tokenizer(inchi)
tokenized_inchi = ['<sos>'] + tokenized_inchi
tokenized_inchi += ['<eos>']
encoded_inchi = torch.tensor(encode_inchis(tokenized_inchi, self.max_inchi_len, self.char_dict))
return img, encoded_inchi
def __len__(self):
return self.labels.shape[0]
mol_train = MoleculeDataset('../data/train.csv', TRAIN_RESIZE_DIR, CHAR_DICT, MAX_INCHI_LENGTH)
img, encoded_inchi = mol_train[0]
encoded_inchi
# # Building Model
# +
### Generate Test Samples
k = 3
test_img_ids = train_df.image_id.values[:5]
test_inchis = train_df.InChI.values[:5]
imgs = torch.zeros((5, 2, 256, 256))
for i, test_img_id in enumerate(test_img_ids):
img_path = get_path_from_img_id(test_img_id, TRAIN_RESIZE_DIR)
img = preprocess(img_path)
img = torch.tensor(img).unsqueeze(0)
if IMG_SIZE != 256:
img = F.interpolate(img, size=(128, 128))
imgs[i,:,:,:] = img
imgs = imgs.float()
encoded_inchis = torch.zeros((5, 350))
inchi_lengths = torch.zeros((5, 1))
for i, test_inchi in enumerate(test_inchis):
tokenized_inchi = tokenize_inchi(test_inchi)
tokenized_inchi = ['<sos>'] + tokenized_inchi + ['<eos>']
inchi_length = len(tokenized_inchi)
encoded_inchi = encode_inchi(tokenized_inchi, MAX_INCHI_LENGTH, CHAR_DICT)
encoded_inchis[i,:] = torch.tensor(encoded_inchi)
inchi_lengths[i,:] = inchi_length
encoded_inchis = encoded_inchis.long()
inchi_lengths = inchi_lengths.long()
fig, ax = plt.subplots(1, 2, figsize=(16,8))
ax[0].imshow(imgs.numpy()[k,0,:,:])
ax[0].set_title('Raw')
ax[1].imshow(imgs.numpy()[k,1,:,:])
ax[1].set_title('Vertices')
plt.suptitle(test_inchi, fontsize=8)
plt.show()
# -
encoded_inchis[k,:]
encoder = resnet18()
decoder = trans128_4x(vocab_size=VOCAB_SIZE)
# decoder = biLSTM512(vocab_size=VOCAB_SIZE, device=DEVICE)
enc_out = encoder(imgs)
preds, inchi, decode_lengths = decoder(enc_out, encoded_inchis, inchi_lengths)
decode_lengths
preds.shape, inchi.shape
targets = inchi[:,1:]
preds = pack_padded_sequence(preds, decode_lengths, batch_first=True).data
targets = pack_padded_sequence(targets, decode_lengths, batch_first=True).data
preds.shape, targets.shape
# resnet26(img)  # resnet26 is not defined or imported in this notebook
# axialnet = AxialAttention(64, 64)  # AxialAttention is not imported here
# # Exploring InChIs
substring_headers = {}
for i, inchi in enumerate(train_labels.InChI):
inchi = inchi.split('/')[2:]
for substring in inchi:
substring_header = substring[0]
if substring_header in substring_headers.keys():
substring_headers[substring_header] += 1
else:
substring_headers[substring_header] = 1
substring_headers
len(train_labels.InChI)
substring_headers['i'] / len(train_labels.InChI)
0.9 * 0.11
1000000 * 0.9 * 0.11111111111
n_samples = train_labels.shape[0]
n_train = int(n_samples * 0.8)
n_val = int(n_samples * 0.1)
n_test = n_samples - n_train - n_val
rand_idxs = np.random.choice(np.arange(train_labels.shape[0]), size=train_labels.shape[0], replace=False)
train_idxs = rand_idxs[:n_train]
val_idxs = rand_idxs[n_train:n_train+n_val]
test_idxs = rand_idxs[n_train+n_val:]
train_idxs.shape, val_idxs.shape, test_idxs.shape
train_df = train_labels.iloc[train_idxs]
val_df = train_labels.iloc[val_idxs]
test_df = train_labels.iloc[test_idxs]
# +
train_substring_headers = {}
for i, inchi in enumerate(train_df.InChI):
inchi = inchi.split('/')[2:]
for substring in inchi:
substring_header = substring[0]
if substring_header in train_substring_headers.keys():
train_substring_headers[substring_header] += 1
else:
train_substring_headers[substring_header] = 1
for k, v in train_substring_headers.items():
train_substring_headers[k] = v / train_df.shape[0]
# +
test_substring_headers = {}
for i, inchi in enumerate(test_df.InChI):
inchi = inchi.split('/')[2:]
for substring in inchi:
substring_header = substring[0]
if substring_header in test_substring_headers.keys():
test_substring_headers[substring_header] += 1
else:
test_substring_headers[substring_header] = 1
for k, v in test_substring_headers.items():
test_substring_headers[k] = v / test_df.shape[0]
# -
train_substring_headers
test_substring_headers
train_df.to_csv('train.csv')
val_df.to_csv('val.csv')
test_df.to_csv('test.csv')
# ### Tokenizing
# +
### Splitting into substrings
for inchi in train_df.InChI.values:
og_inchi = inchi
inchi = inchi.split('/')[1:]
chemical_formula = inchi[0]
atom_connection_layer = inchi[1]
remaining_substrings = inchi[2:]
hydrogen_layer = ''
stereochemical_layer = ''
isotopic_layer = ''
for substring in remaining_substrings:
if substring[0] == 'h' and len(hydrogen_layer) == 0:
hydrogen_layer = '/' + substring
elif substring[0] == 'i' or substring[0] == 'h' or (len(isotopic_layer) > 0 and (substring[0] == 'b' or substring[0] == 't' or substring[0] == 'm' or substring[0] == 's')):
isotopic_layer += substring + '/'
elif substring[0] == 'b' or substring[0] == 't' or substring[0] == 'm' or substring[0] == 's':
stereochemical_layer += substring + '/'
# print(og_inchi)
# print('-- Chemical Formula --')
chemical_formula_toks = substring_tokenizer(chemical_formula)
# print('{} -> {}'.format(chemical_formula, chemical_formula_toks))
# print('-- Atom Connection Sublayer --')
atom_connection_toks = substring_tokenizer(atom_connection_layer)
# print('{} -> {}'.format(atom_connection_layer, atom_connection_toks))
if len(hydrogen_layer) > 0:
# print('-- Hydrogen Sublayer --')
hydrogen_toks = substring_tokenizer(hydrogen_layer)
# print('{} -> {}'.format(hydrogen_layer, hydrogen_toks))
if len(stereochemical_layer) > 0:
stereochemical_layer = '/' + stereochemical_layer[:-1]
stereochemical_toks = substring_tokenizer(stereochemical_layer)
# print('-- Stereochemical Layer --')
# print('{} -> {}'.format(stereochemical_layer, stereochemical_toks))
if len(isotopic_layer) > 0:
# print('-- Isotopic Layer --')
isotopic_layer = '/' + isotopic_layer[:-1]
isotopic_toks = substring_tokenizer(isotopic_layer)
# print('{} -> {}'.format(isotopic_layer, isotopic_toks))
# print('\n')
recomp_inchi = 'InChI=1S/{}/{}{}{}{}'.format(chemical_formula, atom_connection_layer,
hydrogen_layer, stereochemical_layer, isotopic_layer)
if recomp_inchi[-1] == '/':
recomp_inchi = recomp_inchi[:-1]
if og_inchi != recomp_inchi:
print("{} -> og".format(og_inchi))
print("{} -> recomp".format(recomp_inchi))
print('\n')
# +
### Tokenizing entire InChI
train_char_dict = {}
max_train_length = 0
for inchi in train_labels.InChI.values:
og_inchi = inchi
inchi = inchi.split('InChI=1S/')[1]
inchi = ''.join(inchi)
tokenized_inchi = tokenize_inchi(inchi)
if len(tokenized_inchi) > max_train_length:
max_train_length = len(tokenized_inchi)
for tok in tokenized_inchi:
if tok not in train_char_dict.keys():
train_char_dict[tok] = 1
else:
train_char_dict[tok] += 1
# -
test_char_dict = {}
max_test_length = 0
for inchi in test_labels.InChI.values:
og_inchi = inchi
inchi = inchi.split('InChI=1S/')[1]
inchi = ''.join(inchi)
tokenized_inchi = tokenize_inchi(inchi)
if len(tokenized_inchi) > max_test_length:
max_test_length = len(tokenized_inchi)
for tok in tokenized_inchi:
if tok not in test_char_dict.keys():
test_char_dict[tok] = 1
else:
test_char_dict[tok] += 1
max_train_length, max_test_length
train_char_dict
test_char_dict
len(train_char_dict.keys()), len(test_char_dict.keys()), len(set(list(train_char_dict.keys()) + list(test_char_dict.keys())))
labels = sorted(train_char_dict.keys())
def plot_char_freq(char_dict, labels, title='Train'):
fig = plt.figure(figsize=(20,6))
counts = []
for label in labels:
counts.append(char_dict[label])
plt.bar(range(len(counts)), counts)
plt.xticks(range(len(labels)), labels)
plt.title(title, fontweight='bold', fontsize=16)
plt.show()
plot_char_freq(train_char_dict, labels)
desired_toks = ['+', '0', 'B', 'Br', 'D', 'Cl', 'F', 'I', 'P', 'S', 'Si', 'T', 'b', 'i', 'm', 's', 't']
plot_char_freq(test_char_dict, labels, title='Test')
train_df.InChI.values[0]
# +
char_dict = {}
ord_dict = {}
for i, label in enumerate(labels):
char_dict[label] = i
ord_dict[i] = label
n_toks = len(labels)
char_dict['<sos>'] = n_toks
char_dict['<eos>'] = n_toks + 1
char_dict['<pad>'] = n_toks + 2
ord_dict[n_toks] = '<sos>'
ord_dict[n_toks + 1] = '<eos>'
ord_dict[n_toks + 2] = '<pad>'
# -
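# A quick encode/decode roundtrip check of the vocabulary built above; the label list here is a small hypothetical subset. Note that `json.dump` converts the integer keys of `ord_dict` to strings, so a reloaded dictionary needs `int(key)` when indexing:

```python
labels = ['(', ')', '+', 'C', 'Cl', 'H']  # hypothetical subset of the sorted token labels
char_dict = {tok: i for i, tok in enumerate(labels)}
n_toks = len(labels)
char_dict['<sos>'] = n_toks
char_dict['<eos>'] = n_toks + 1
char_dict['<pad>'] = n_toks + 2
ord_dict = {i: tok for tok, i in char_dict.items()}

seq = ['<sos>', 'C', 'Cl', '<eos>']
ids = [char_dict[t] for t in seq]      # encode tokens to integer ids
decoded = [ord_dict[i] for i in ids]   # decode ids back to tokens
assert decoded == seq
```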
with open('char_dict.json', 'w') as f:
json.dump(char_dict, f)
with open('ord_dict.json', 'w') as f:
json.dump(ord_dict, f)
CHAR_DICT
ord_dict
train_inchis = []
for inchi in train_df.InChI.values:
train_inchis.append(['<sos>'] + tokenize_inchi(inchi) + ['<eos>'])
params = {'NUM_CHAR': len(CHAR_DICT.keys()),
'MAX_LENGTH': MAX_INCHI_LENGTH,
'CHAR_DICT': CHAR_DICT}
char_weights = get_char_weights(train_inchis, params)
char_weights[-1] = 0.2
np.save('../data/char_weights.npy', char_weights)
train_df.shape[0] / 256
256 / 16
def rotate_img(img, p=0.5):
angles = [0, 90, 180, 270]
angle = np.random.choice(angles, size=1, p=[1 - p, p / 3, p / 3, p / 3])[0]
img = torch.tensor(img)
if angle == 0:
pass
elif angle == 90:
img = torch.rot90(img, 1, [1,2])
elif angle == 180:
img = torch.rot90(img, 1, [1,2])
img = torch.rot90(img, 1, [1,2])
elif angle == 270:
img = torch.rot90(img, -1, [1,2])
return img, angle
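# In `rotate_img` the 180° case chains two 90° rotations; a single rotation with `k=2` is equivalent. Shown here with NumPy, whose `rot90` has the same `(input, k, axes)` semantics as `torch.rot90`:

```python
import numpy as np

img = np.arange(2 * 3 * 3).reshape(2, 3, 3)            # (channels, H, W) toy image
twice = np.rot90(np.rot90(img, 1, (1, 2)), 1, (1, 2))  # two successive 90-degree turns
once = np.rot90(img, 2, (1, 2))                        # one 180-degree turn
assert np.array_equal(twice, once)
```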
i = 6
test_img_id = train_df.image_id.values[i]
test_inchi = train_df.InChI.values[i]
img_path = get_path_from_img_id(test_img_id, TRAIN_RESIZE_DIR)
img = preprocess(img_path)
img = torch.tensor(img).unsqueeze(0)
img = F.interpolate(img, size=(128, 128))
img = img.squeeze(0)
img = img.numpy()
fig, ax = plt.subplots(1, 2, figsize=(16,8))
ax[0].imshow(img[0,:,:])
ax[1].imshow(img[1,:,:])
plt.suptitle(test_inchi)
plt.show()
fig = plt.figure(figsize=(8,8))
plt.imshow(img[1,:,:])
plt.show()
fig = plt.figure(figsize=(8,8))
plt.imshow(img[0,:,:])
plt.show()
fig = plt.figure(figsize=(8,8))
plt.imshow(img[1,:,:])
plt.show()
# +
test_img_id = train_df.image_id.values[0]
img_path = get_path_from_img_id(test_img_id, TRAIN_DIR)
img = Image.open(img_path)
img = np.array(img)
# -
img.shape
plt.imshow(img)
plt.show()
build_times = [10.309909014031291,20.02779781911522,40.275068615563214,80.98926744703203,163.3297253800556]
sizes = [0.028521257,0.080530632,0.161061264,0.322122528,0.64424508]
n_samples = [128,256,512,1024,2048]
plt.plot(n_samples, build_times)
plt.xlabel('# samples')
plt.ylabel('Build Time')
plt.show()
plt.plot(n_samples, sizes)
plt.xlabel('# samples')
plt.ylabel('Size')
plt.show()
gb_per_1K = 0.020098794 / 100 * 1000
n_sample = 200000 / 1000
gb = n_sample * gb_per_1K
gb
train_df.shape[0] / 200000
gb_per_1K * 200000 / 1000 + gb_per_1K * 200000 * (0.1) / 1000
test_labels.shape[0]
load_time = 0.30496428813785315
load_time * 200000 / 1000
test_imgs = np.load('test_imgs.npy')
i = 191607
fig, ax = plt.subplots(1,2,figsize=(16,8))
ax[0].imshow(test_imgs[i,0,:,:])
ax[0].set_title('Img')
ax[1].imshow(test_imgs[i,1,:,:])
ax[1].set_title('Vertices')
plt.show()
train_df.shape[0] / 256 * 1.8 / 60 / 60 * 20
train_df.shape[0] / 256 * 8 / 60 / 60 * 5
train_df.InChI.values[0]
17 / 256
80 / 200000
80*10 / 60
200000 / 256
(10432512 / train_df.shape[0])
val_df.shape[0]
| ipynb/od_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (system-wide)
# language: python
# metadata:
# cocalc:
# description: Python 3 programming language
# priority: 100
# url: https://www.python.org/
# name: python3
# resource_dir: /ext/jupyter/kernels/python3
# ---
# # Jupyter Test Notebook 1
#
# *A short text*
print('Hello World!')
# +
import numpy as np
arr = np.array([1,2,3,4,5])
arr = arr*2
for i in arr:
print(i)
# -
| Jupyter_Test_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="rwLN8uu4GQe0"
# !unzip "/content/drive/MyDrive/train/v2/train.zip" -d "/content/drive/MyDrive/train/v2/train/"
# +
from os import listdir, remove
from os.path import isfile, join
mypath = '/content/drive/MyDrive/train/v2/train/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
print(len(onlyfiles))
| model/unzip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cv3
# language: python
# name: cv3
# ---
# ## 1. Setting up the environment
# +
import numpy as np
import gym
#pytorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Normal
# setting manual seed
torch.manual_seed(0)
from unityagents import UnityEnvironment
#matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# imports for rendering outputs in Jupyter.
from JSAnimation.IPython_display import display_animation
from matplotlib import animation
from IPython.display import display
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
# %load_ext autoreload
# %autoreload 2
# -
env = UnityEnvironment(file_name='unity_envs/Crawler_Linux_NoVis/Crawler.x86_64')
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# +
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
# -
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
step=0
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
env_info = env.step(actions)[brain_name] # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
step+=1
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
# ## 2. Defining the policy
# defining the device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print ("using",device)
# +
state_size = state_size
action_size = action_size
# define actor critic network
class ActorCritic(nn.Module):
def __init__(self,state_size,action_size,action_high,action_low,hidden_size=32):
super(ActorCritic, self).__init__()
# action range
self.action_high = torch.tensor(action_high).to(device)
self.action_low = torch.tensor(action_low).to(device)
self.std = nn.Parameter(torch.zeros(action_size))
# common network
self.fc1 = nn.Linear(state_size,1024)
# actor network
self.fc2_actor = nn.Linear(1024,256)
self.fc3_action = nn.Linear(256,action_size)
#self.fc3_std = nn.Linear(64,action_size)
# critic network
self.fc2_critic = nn.Linear(1024,256)
self.fc3_critic = nn.Linear(256,1)
def forward(self,state):
# common network
x = F.relu(self.fc1(state))
# actor network
x_actor = F.relu(self.fc2_actor(x))
action_mean = torch.sigmoid(self.fc3_action(x_actor))
## rescale action mean
action_mean_ = (self.action_high-self.action_low)*action_mean + self.action_low
#action_std = F.sigmoid(self.fc3_std(x_actor))
# critic network
x_critic = F.relu(self.fc2_critic(x))
v = self.fc3_critic(x_critic)
return action_mean_,v
def act(self,state):
# converting state from numpy array to pytorch tensor on the "device"
state = torch.from_numpy(state).float().to(device)
action_mean,v = self.forward(state)
prob_dist = Normal(action_mean,F.softplus(self.std))
action = prob_dist.sample()
log_prob = prob_dist.log_prob(action)
return action.cpu().numpy(),torch.sum(log_prob,dim=1),v.squeeze()
# -
# ## 3. Defining the RL agent
# +
from collections import deque
from itertools import accumulate
def compute_future_rewards(rewards,gamma):
future_rewards = np.zeros_like(rewards)
discounted_rewards = np.zeros(rewards.shape[0])
for time_step in range(future_rewards.shape[1]-1,-1,-1):
future_rewards[:,time_step] = rewards[:,time_step] + gamma*discounted_rewards
discounted_rewards = future_rewards[:,time_step]
return future_rewards
class Agent:
def __init__(self,env,learning_rate=1e-3):
self.env = env
nS = state_size
nA = action_size
action_low = -1
action_high = 1
self.policy = ActorCritic(state_size=nS,hidden_size=128,action_size=nA,
action_low=action_low,action_high=action_high).to(device)
self.optimizer = optim.RMSprop(self.policy.parameters(), lr=learning_rate)
def train(self,max_opt_steps=1000,num_trajectories=12,horizon=1000,gamma=.99,target_score= -250,
PRINT_EVERY=100):
# store eps scores
scores = []
scores_window = deque(maxlen=100)
for opt_step in range(1,max_opt_steps+1):
rewards = np.zeros([num_trajectories,horizon])
log_probs = torch.zeros([num_trajectories,horizon],dtype=torch.double,device=device)
value_estimate = torch.zeros([num_trajectories,horizon],dtype=torch.double,device=device)
for traj_count in range(1):
# reset state
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations # get the current state (for each agent)
# play an episode
for t in range(horizon):
action,log_prob,v = self.policy.act(state)
env_info = env.step(action)[brain_name] # send all actions to the environment
next_state = env_info.vector_observations # get next state (for each agent)
reward = env_info.rewards # get reward (for each agent)
done = env_info.local_done # see if episode finished
# update state
state = next_state
log_probs[:,t] = log_prob
rewards[:,t] = reward
value_estimate[:,t] = v
# break if done
if np.any(done):
break
# compute advantage estimate to reduce variance
future_rewards = compute_future_rewards(rewards,gamma)
future_rewards = torch.from_numpy(future_rewards).double().to(device)
# b = future_rewards.mean(axis=0)
# A = (future_rewards - b)/future_rewards.std(axis=0)
# A = torch.from_numpy(A).double().to(device)
A = future_rewards-value_estimate
# compute loss and applying gradient
actor_loss = torch.sum(-log_probs*A)/(num_trajectories*horizon)
undiscounted_future_rewards = compute_future_rewards(rewards,gamma=1.0)
undiscounted_future_rewards = torch.from_numpy(undiscounted_future_rewards).double().to(device)
critic_loss = torch.sum((undiscounted_future_rewards-value_estimate)**2)/(num_trajectories*horizon)
# total loss
loss = actor_loss + critic_loss
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
trajectory_total_rewards = rewards.sum(axis=1)
# update scores and score_window
scores.append(np.mean(trajectory_total_rewards))
scores_window.append(np.mean(trajectory_total_rewards))
#printing progress
if opt_step % PRINT_EVERY == 0:
print ("Episode: {}\t Avg reward: {:.2f}\t std: {}".format(opt_step,np.mean(scores_window),
self.policy.std))
# save the policy
torch.save(self.policy, 'REINFORCE-crawler.policy')
if np.mean(scores_window)>= target_score:
print ("Environment solved in {} optimization steps! ... Avg reward : {:.2f}".format(opt_step-100,
np.mean(scores_window)))
# save the policy
torch.save(self.policy, 'REINFORCE-crawler.policy')
break
return scores
# -
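# As a sanity check of `compute_future_rewards`, the reward-to-go at each step should satisfy G_t = r_t + gamma * G_{t+1}. A standalone sketch mirroring the definition above, checked on a hand-computed example:

```python
import numpy as np

def compute_future_rewards(rewards, gamma):
    # rewards: (n_trajectories, horizon); returns discounted reward-to-go per step
    future = np.zeros_like(rewards)
    running = np.zeros(rewards.shape[0])
    for t in range(rewards.shape[1] - 1, -1, -1):
        future[:, t] = rewards[:, t] + gamma * running
        running = future[:, t]
    return future

rewards = np.array([[1.0, 1.0, 1.0]])
G = compute_future_rewards(rewards, gamma=0.5)
# G_2 = 1, G_1 = 1 + 0.5*1 = 1.5, G_0 = 1 + 0.5*1.5 = 1.75
assert np.allclose(G, [[1.75, 1.5, 1.0]])
```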
# ## 4. Training the agent!
# lets define and train our agent
agent = Agent(env=env,learning_rate=1e-4)
scores = agent.train(max_opt_steps=20000,horizon=400,gamma=0.98,target_score=500,PRINT_EVERY=100)
# plot reward curve over episodes
plt.figure()
plt.plot(scores)
plt.xlabel('Episode #')
plt.ylabel('Total Reward')
plt.show()
scores = agent.train(max_opt_steps=20000,horizon=200,gamma=0.98,target_score=-200,PRINT_EVERY=100)
# plot reward curve over episodes
plt.figure()
plt.plot(scores)
plt.xlabel('Episode #')
plt.ylabel('Total Reward')
plt.show()
scores = agent.train(max_opt_steps=20000,horizon=200,gamma=0.98,target_score=-180,PRINT_EVERY=100)
# plot reward curve over episodes
plt.figure()
plt.plot(scores)
plt.xlabel('Episode #')
plt.ylabel('Total Reward')
plt.show()
scores = agent.train(max_opt_steps=20000,horizon=200,gamma=0.98,target_score=-150,PRINT_EVERY=100)
# plot reward curve over episodes
plt.figure()
plt.plot(scores)
plt.xlabel('Episode #')
plt.ylabel('Total Reward')
plt.show()
scores = agent.train(max_opt_steps=20000,horizon=200,gamma=0.98,target_score=-150,PRINT_EVERY=100)
# ## 5. Watch the smart agent!
# uncomment this cell to load the trained policy for Pendulum-v0
# load policy
policy = torch.load('REINFORCE-Pendulum.policy',map_location='cpu')
agent = Agent(env=gym.make('Pendulum-v0'))
agent.policy = policy
# function to animate a list of frames
def animate_frames(frames):
plt.figure(dpi = 72)
plt.axis('off')
# color option for plotting
# use Greys for greyscale
cmap = None if len(frames[0].shape)==3 else 'Greys'
patch = plt.imshow(frames[0], cmap=cmap)
fanim = animation.FuncAnimation(plt.gcf(), \
lambda x: patch.set_data(frames[x]), frames = len(frames), interval=30)
display(display_animation(fanim, default_mode='once'))
# +
frames = []
total_reward = 0
state = env.reset()
value = []
r = []
for t in range(2000):
action, _,v = agent.policy.act(state[np.newaxis,:])
#frames.append(env.render(mode='rgb_array'))
next_state, reward, done, _ = env.step(action[0])
value.append(v.squeeze())
r.append(reward)
state=next_state
total_reward+= reward
if done:
break
print ("Total reward:",total_reward)
env.close()
#animate_frames(frames)
# -
r_ = compute_future_rewards(np.array(r)[np.newaxis,:],gamma=1.0)
plt.plot(r_[0])
plt.plot(value)
agent.policy.std
| .ipynb_checkpoints/REINFORCE-continuous-Crawler-ac-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
from src.carregamento.dados import escolas_nse_baixo, escolas_nse_medio, escolas_nse_alto
plt.scatter(range(len(escolas_nse_baixo.MEDIA_9EF_MT.values)), sorted(escolas_nse_baixo.MEDIA_9EF_MT.values))
plt.ylim(150, 400)
plt.show()
plot = plt.plot(sorted(escolas_nse_medio.MEDIA_9EF_MT.values))
plt.ylim(150, 400)
plt.show()
plot = plt.plot(sorted(escolas_nse_alto.MEDIA_9EF_MT.values))
plt.ylim(150, 400)
plt.show()
plt.hist(escolas_nse_baixo.MEDIA_9EF_MT.values)
plt.xlim(150, 400)
plt.show()
plt.hist(escolas_nse_medio.MEDIA_9EF_MT.values)
plt.xlim(150, 400)
plt.show()
plt.hist(escolas_nse_alto.MEDIA_9EF_MT.values)
plt.xlim(150, 400)
plt.show()
| experiments/Analise escolas de baixo nivel socio-economico.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
plt.gray();
# +
### Still need to figure out a set of realistic parameters
sz=512
mpix=4e-8 ### pixel to meter
wavelen=2e-12/mpix ### wave length
ix=(np.arange(sz, dtype=float)-sz/2)*1e-3 ## indices along x direction. need to be much smaller than y
ix_mat=(ix-ix[:,None])**2
# +
### First try a simple example
### focus a parallel beam with a lens
def draw_lens(arc, w=1):
t=np.arange(-np.pi*arc, np.pi*arc+1e-5, np.pi*arc/20.)+(np.pi/2.)
x=np.cos(t)
y=np.sin(t)
y-=np.min(y)
x/=np.max(x)-np.min(x)
x*=w
plt.plot(x,y, 'C0')
plt.plot(x,-y, 'C0')
plt.plot([np.min(x), np.max(x)],[0,0],'--')
draw_lens(.15)
plt.plot([-.4,-.4],[0,.6], 'C1')
plt.plot([.4,.4],[0,.6], 'C1')
plt.plot([.4,-.2],[0,-.8], 'C1')
plt.plot([-.4,.2],[0,-.8], 'C1')
plt.axis("equal")
plt.axis('off');
# +
### Parallel beam input. Need Gaussian falloff to avoid edge artifacts
### plots are in amplitude / phase format
def gauss_edge(sz, cl=.2, gw=.15, gs=.06):
raw=np.ones(sz, dtype=complex)
clip=int(sz*cl)
gwidth=int(sz*gw)
gwidth=min(clip-1, gwidth)
gsig=sz*gs
raw[:clip]=0
raw[-clip:]=0
gaus=np.exp((-np.arange(gwidth, dtype=float)**2)/(gsig**2))
raw[clip-gwidth:clip]=gaus[::-1]
raw[-clip:-clip+gwidth]=gaus
return raw
img0=gauss_edge(sz)
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(abs(img0),'C0')
plt.subplot(1,2,2)
plt.plot(np.angle(img0),'C1')
# +
### Propagate the wave through a thin lens
### radial based phase shift
def wave_lens(raw, f, cs, doplot=True):
ps=((ix)**2)/(f*2)*(2*np.pi/wavelen) ### phase shift
ps+=cs*(ix**4)/4.
img=raw*np.exp(-1j*(-ps))
if doplot:
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(abs(img),'C0')
plt.subplot(1,2,2)
plt.plot(np.angle(img),'C1')
return img
img1=wave_lens(img0, 40, 0)
# +
### Propagate wave through space
### should shrink to a point at focal plane
def wave_prop(raw, d0, doplot=True):
dst=np.sqrt(ix_mat +d0**2)
cpx=raw[:,None]*np.exp(-1j*2*np.pi*dst/wavelen)*(1/dst**2)
img=np.sum(cpx, axis=0)
if doplot:
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(abs(img),'C0')
plt.subplot(1,2,2)
plt.plot(np.angle(img),'C1')
return img
img2=wave_prop(img1, 40)
# +
### How Cs of the lens affect the focus
plt.figure(figsize=(12,6))
for ii,cs in enumerate([0,3e4]):
plt.subplot(1,2,ii+1)
img1=wave_lens(img0, 40, cs, False)
imgs=[]
rg=np.arange(10,61,.5)
for i in rg:
imgs.append(wave_prop(img1, i, False))
imgs=np.array(imgs)
m=abs(imgs)
m/=np.std(m, axis=1)[:,None]
m=np.repeat(m,3,0)
plt.imshow(m, vmax=5)
plt.axis('off')
# +
### Now a slightly more complicated example
### Image thin sample under focus
d0=20.
d1=20.
f=1./(1/d0+1/d1)
draw_lens(.15)
plt.plot([-.4,0],[0,.4], 'C1')
plt.plot([.4,0],[0,.4], 'C1')
plt.plot([.4,.0],[0,-.4], 'C1')
plt.plot([-.4,-.02],[0,-.4], 'C1')
plt.axis("equal")
plt.axis('off');
# +
### Sample density input
ni=2
sample=np.zeros(sz)
dx=25/ni
nx=np.arange(ni)*dx
nx=nx-np.mean(nx)+sz//2
for x in nx:
sample+=np.exp(-(np.arange(sz, dtype=float)-x)**2/10)
plt.plot(sample)
# +
### Elastic scattering only cause phase shift at output beam
img0=gauss_edge(sz)
img0*=np.exp(-1j*sample*.5)
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(abs(img0),'C0')
plt.subplot(1,2,2)
plt.plot(np.angle(img0),'C1')
# +
### Propagate wave to the lens
img1=wave_prop(img0, d0)
# +
### Go through the lens
img2=wave_lens(img1, f, 0)
# +
### Get to the imaging plane
df=-.3
img3=wave_prop(img2, d1+df)
# +
### Now compare the sample input and the output image
m=abs(img3) # we only observe real space image on camera
m=m[::-1] # image is flipped along x axis
c=int(sz*.3) # remove the background
m[:c]=m[c]
m[-c:]=m[-c]
c=int(sz*.35)
m=m[c-1:-c-1]
m-=np.mean(m)
m/=np.max(abs(m))
s=sample.copy()
s=s[c:-c]
plt.plot(m)
plt.plot(s)
# +
### Compare in Fourier space
nn=(sz-c*2)//3
rft=np.fft.rfft(s)
rft/=np.max(abs(rft))
mft=np.fft.rfft(m)
mft/=np.max(abs(mft))
### Note that the signal is near zero at some frequency
### need to set the corresponding phase to zero for comparison
mx=abs(mft)
msk=np.ones(len(mx), dtype=bool)
msk[1:-1]*=mx[1:-1]<mx[:-2]
msk[1:-1]*=mx[1:-1]<mx[2:]
am=np.cos(np.angle(mft))
ar=np.cos(np.angle(rft))
am[mx<.01]=0
ar[mx<.01]=0
for i in np.where(msk>0)[0][:-1]:
ar[i]=ar[i+1]
am[i]=am[i+1]
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(abs(mft)[:nn])
plt.plot(abs(rft)[:nn])
plt.subplot(1,2,2)
plt.plot(am[:nn])
plt.plot(ar[:nn])
# +
### Compare the phase difference from simulation and the ideal CTF curve
k2=(np.arange(0,len(rft))/len(rft)*sz/2)**2
ci=df*3-1 ### not sure why this is needed yet. the numbers are fitted from the last cell
ctf=np.sin(ci*np.pi*wavelen*k2)
plt.plot(np.sign((ar*am)[:nn]))
plt.plot(ctf[:nn])
# plt.plot(np.sign(ctf[:nn])*.5)
print(ci)
# +
### Apply the ideal CTF curve to the input signal and compare with the simulation output
rc=rft*ctf
rc/=np.max(abs(rc))
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(abs(mft)[:nn])
plt.plot(abs(rc)[:nn])
plt.subplot(1,2,2)
ac=np.cos(np.angle(rc))
ac[mx<.01]=0
for i in np.where(msk>0)[0][:-1]:
ac[i]=ac[i+1]
plt.plot(ac[:nn])
plt.plot(am[:nn])
# -
# +
### This is used to fit the defocus scale factor between the simulation and ideal CTF curve
### Should be able to directly compute it from the parameters of the simulation...
rrg=-np.arange(.02,1.4,.05)
cis=[]
for df in rrg:
img3=wave_prop(img2, d1+df, False)
m=abs(img3)
m=m[::-1]
c=int(sz*.3)
m[:c]=m[c]
m[-c:]=m[-c]
c=int(sz*.375)
m=m[c-1:-c-1]
m-=np.mean(m)
m/=np.max(abs(m))
s=sample.copy()
s=s[c:-c]
nn=(sz-c*2)//3
rft=np.fft.rfft(s)
rft/=np.max(abs(rft))
mft=np.fft.rfft(m)
mft/=np.max(abs(mft))
mx=abs(mft)
msk=np.ones(len(mx), dtype=bool)
msk[1:-1]*=mx[1:-1]<mx[:-2]
msk[1:-1]*=mx[1:-1]<mx[2:]
am=np.cos(np.angle(mft))
ar=np.cos(np.angle(rft))
am[mx<.01]=0
ar[mx<.01]=0
for i in np.where(msk>0)[0][:-1]:
ar[i]=ar[i+1]
am[i]=am[i+1]
k2=(np.arange(0,len(rft))/len(rft)*sz/2)**2
sn=np.sign((ar*am))[:nn]
c0=[]
rr=np.arange(0,20,.1)
for i in rr:
ctf=np.sin(-i*np.pi*wavelen*k2)
ctf=np.sign(ctf[:nn])
c0.append(np.mean(ctf*sn))
ci=np.argmax(c0)
cis.append(rr[ci])
cis=np.array(cis)
k,b=np.polyfit(rrg, cis, 1)
print(k,b)
plt.plot(rrg, cis)
plt.plot(rrg, rrg*k+b)
| develop/muyuan/ctf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome to the ASHRAE - Great Energy Predictor Competition
# This notebook is starter code for beginners and is easy to understand. The train and test data are very large, so we will work with a data generator that produces batches on the fly, based on this template: <br>
# https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
#
# Additionally we follow an efficient workflow. <br>
# We also use categorical feature encoding techniques, compare <br>
# https://www.kaggle.com/drcapa/categorical-feature-encoding-challenge-xgb
#
# For the first step we will take a simple neural network based on the keras library. After that we will use a RNN.<br>
# Current status of the kernel: The workflow is complete.<br>
# Next steps:
# * Improve the LSTM.
# * Expand the feature engineering based on the kernel: https://www.kaggle.com/drcapa/ashrae-feature-engineering
# # Load Libraries
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
import numpy as np
import pandas as pd
import scipy.special
import matplotlib.pyplot as plt
import os
import random
# -
from keras.utils import Sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, LSTM, Embedding
from keras.optimizers import RMSprop,Adam
import keras.backend as K
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
import warnings
warnings.filterwarnings("ignore")
# # Load Data
path_in = '../input/ashrae-energy-prediction/'
print(os.listdir(path_in))
train_data = pd.read_csv(path_in+'train.csv', parse_dates=['timestamp'])
train_weather = pd.read_csv(path_in+'weather_train.csv', parse_dates=['timestamp'])
building_data = pd.read_csv(path_in+'building_metadata.csv')
# # Help function
def plot_bar(data, name):
fig = plt.figure(figsize=(16, 9))
ax = fig.add_subplot(111)
data_label = data[name].value_counts()
dict_train = dict(zip(data_label.keys(), ((data_label.sort_index())).tolist()))
names = list(dict_train.keys())
values = list(dict_train.values())
plt.bar(names, values)
plt.xticks(rotation=45)
plt.grid()
plt.show()
# # Handle missing values of building and weather data
# The missing data are numerical values. So for the first step we can use a simple imputer of the sklearn library.
cols_with_missing_train_weather = [col for col in train_weather.columns if train_weather[col].isnull().any()]
cols_with_missing_building = [col for col in building_data.columns if building_data[col].isnull().any()]
print(cols_with_missing_train_weather)
print(cols_with_missing_building)
imp_most = SimpleImputer(strategy='most_frequent')
train_weather[cols_with_missing_train_weather] = imp_most.fit_transform(train_weather[cols_with_missing_train_weather])
building_data[cols_with_missing_building] = imp_most.fit_transform(building_data[cols_with_missing_building])
# # Scale objective label
train_data['meter_reading'] = np.log1p(train_data['meter_reading'])
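# `np.log1p` compresses the heavy-tailed meter readings and stays finite at zero; predictions therefore have to be mapped back with its exact inverse `np.expm1` before scoring:

```python
import numpy as np

readings = np.array([0.0, 9.0, 999.0])  # hypothetical raw meter readings
scaled = np.log1p(readings)             # log(1 + x), well defined at x = 0
restored = np.expm1(scaled)             # exact inverse transform
assert scaled[0] == 0.0
assert np.allclose(restored, readings)
```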
# # Create new features
# ## Train data
# Based on the timestamp we create new features which are cyclic.
train_data['month'] = train_data['timestamp'].dt.month
train_data['day'] = train_data['timestamp'].dt.weekday
train_data['year'] = train_data['timestamp'].dt.year
train_data['hour'] = train_data['timestamp'].dt.hour
# Additionally we create the feature weekend: 5 = Saturday and 6 = Sunday.
train_data['weekend'] = np.where((train_data['day'] == 5) | (train_data['day'] == 6), 1, 0)
# ## Weather data
# The feature wind_direction is cyclic.
train_weather['wind_direction'+'_sin'] = np.sin((2*np.pi*train_weather['wind_direction'])/360)
train_weather['wind_direction'+'_cos'] = np.cos((2*np.pi*train_weather['wind_direction'])/360)
train_weather = train_weather.drop(['wind_direction'], axis=1)
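# The sin/cos pair maps a cyclic quantity onto the unit circle, so 0° and 360° land on the same point instead of sitting at opposite ends of a linear scale:

```python
import numpy as np

def cyclic_encode(values, period):
    angle = 2 * np.pi * values / period
    return np.sin(angle), np.cos(angle)

s0, c0 = cyclic_encode(np.array([0.0]), 360)
s360, c360 = cyclic_encode(np.array([360.0]), 360)
assert np.allclose([s0, c0], [s360, c360])  # wrap-around is seamless
```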
# # Encoding
# There is a great encoding competition: https://www.kaggle.com/drcapa/categorical-feature-encoding-challenge-xgb
# ## Train data
# ### Feature meter
# There are 4 types of meters: <br>
# 0 = electricity, 1 = chilledwater, 2 = steam, 3 = hotwater <br>
# We use one-hot encoding for these 4 meter types.
train_data = pd.get_dummies(train_data, columns=['meter'])
# ### Features month, day and hour
# We created the features month, day and hour which are cyclic.
features_cyc = {'month' : 12, 'day' : 7, 'hour' : 24}
for feature in features_cyc.keys():
train_data[feature+'_sin'] = np.sin((2*np.pi*train_data[feature])/features_cyc[feature])
train_data[feature+'_cos'] = np.cos((2*np.pi*train_data[feature])/features_cyc[feature])
train_data = train_data.drop(features_cyc.keys(), axis=1)
# ## Building data
# The feature primary_use is a categorical feature with 16 categories. For a first pass we use a simple ordinal mapping.
plot_bar(building_data, 'primary_use')
map_use = dict(zip(building_data['primary_use'].value_counts().sort_index().keys(),
range(1, len(building_data['primary_use'].value_counts())+1)))
building_data['primary_use'] = building_data['primary_use'].replace(map_use)
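# The mapping above assigns each `primary_use` category an integer in sorted-key order; a pure-Python sketch with a hypothetical subset of categories. This ordinal encoding imposes an artificial ordering on the categories, which is why one-hot encoding is a common alternative:

```python
counts = {'Office': 50, 'Education': 120, 'Retail': 10}  # hypothetical value_counts
map_use = dict(zip(sorted(counts.keys()), range(1, len(counts) + 1)))
assert map_use == {'Education': 1, 'Office': 2, 'Retail': 3}
```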
# +
#building_data = pd.get_dummies(building_data, columns=['primary_use'])
# -
# # Scale building and weather data
# ## Weather data
weather_scale = ['air_temperature', 'cloud_coverage', 'dew_temperature', 'sea_level_pressure', 'wind_speed']
mean = train_weather[weather_scale].mean(axis=0)
train_weather[weather_scale] = train_weather[weather_scale].astype('float32')
train_weather[weather_scale] -= train_weather[weather_scale].mean(axis=0)
std = train_weather[weather_scale].std(axis=0)
train_weather[weather_scale] /= train_weather[weather_scale].std(axis=0)
# ## Building data
building_scale = ['square_feet', 'year_built', 'floor_count']
mean = building_data[building_scale].mean(axis=0)
building_data[building_scale] = building_data[building_scale].astype('float32')
building_data[building_scale] -= building_data[building_scale].mean(axis=0)
std = building_data[building_scale].std(axis=0)
building_data[building_scale] /= building_data[building_scale].std(axis=0)
# # Merge data
train_data = pd.merge(train_data, building_data, on='building_id', right_index=True)
train_data = train_data.sort_values(['timestamp'])
train_data = pd.merge_asof(train_data, train_weather, on='timestamp', by='site_id', right_index=True)
del train_weather
train_data.to_csv("../working/ashrae_merged_data.csv")
# # Build the data generator
class DataGenerator(Sequence):
""" A data generator based on the template
https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
"""
def __init__(self, data, list_IDs, features, batch_size, shuffle=False):
self.data = data.loc[list_IDs].copy()
self.list_IDs = list_IDs
self.features = features
self.batch_size = batch_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
return int(np.floor(len(self.list_IDs)/self.batch_size))
def __getitem__(self, index):
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
list_IDs_temp = [self.list_IDs[k] for k in indexes]
X, y = self.__data_generation(list_IDs_temp)
return X, y
def on_epoch_end(self):
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, list_IDs_temp):
X = np.empty((len(list_IDs_temp), len(self.features)), dtype=float)
y = np.empty((len(list_IDs_temp), 1), dtype=float)
X = self.data.loc[list_IDs_temp, self.features].values
if 'meter_reading' in self.data.columns:
y = self.data.loc[list_IDs_temp, 'meter_reading'].values
# reshape
X = np.reshape(X, (X.shape[0], 1, X.shape[1]))
return X, y
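# `__len__` uses `floor`, so any partial final batch is silently dropped each epoch; the slicing in `__getitem__` follows this sketch (sample IDs here are hypothetical):

```python
import math

ids = list(range(10))  # hypothetical list of sample IDs
batch_size = 3
n_batches = math.floor(len(ids) / batch_size)
batches = [ids[b * batch_size:(b + 1) * batch_size] for b in range(n_batches)]
assert n_batches == 3
assert batches[-1] == [6, 7, 8]  # sample 9 never appears in an epoch
```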
# # Split the input data into train and val
# Since it's a timeseries problem, we split the train and validation data by timestamp and not with a random split.
train_size = int(len(train_data.index)*0.75)
val_size = len(train_data.index) - train_size
train_list, val_list = train_data.index[0:train_size], train_data.index[train_size:train_size+val_size]
print(train_size, val_size)
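# Because the frame is sorted by timestamp, every validation index comes strictly after every training index, which prevents leakage from the future into training. A sketch of the invariant:

```python
n = 100  # hypothetical number of rows, ordered by timestamp
train_size = int(n * 0.75)
train_idx = range(0, train_size)
val_idx = range(train_size, n)
assert max(train_idx) < min(val_idx)        # validation is strictly later in time
assert len(train_idx) + len(val_idx) == n   # no rows lost in the split
```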
# # Define the features
no_features = ['building_id', 'timestamp', 'meter_reading', 'year']
features = train_data.columns.difference(no_features)
# # Define train and validation data via Data Generator
batch_size = 1024
train_generator = DataGenerator(train_data, train_list, features, batch_size)
val_generator = DataGenerator(train_data, val_list, features, batch_size)
# # Define Recurrent Neural Network
# We use a simple recurrent neural network for training and prediction. Later we will improve it.
input_dim = len(features)
print(input_dim)
model = Sequential()
#model.add(Embedding(input_length=input_dim))
model.add(LSTM(units=8, activation = 'relu', input_shape=(1, input_dim)))
#model.add(LSTM(units=64, activation = 'relu'))
#model.add(Dense(128, activation='relu', input_dim=input_dim))
#model.add(Dense(256, activation='relu'))
#model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='relu'))
def rmse(y_true, y_pred):
""" root_mean_squared_error """
return K.sqrt(K.mean(K.square(y_pred - y_true)))
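# The same metric in NumPy, checked on a hand-computed case: errors of 3 and 4 give a mean squared error of (9 + 16) / 2 = 12.5:

```python
import numpy as np

def rmse_np(y_true, y_pred):
    return np.sqrt(np.mean(np.square(y_pred - y_true)))

value = rmse_np(np.array([0.0, 0.0]), np.array([3.0, 4.0]))
assert np.isclose(value, np.sqrt(12.5))
```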
model.compile(optimizer = Adam(lr=1e-4),
loss='mse',
metrics=[rmse])
model.summary()
epochs = 1
# # Train model
history = model.fit_generator(generator=train_generator,
validation_data=val_generator,
epochs = epochs)
# # Analyse results
# A short analysis of the training results.
loss = history.history['loss']
loss_val = history.history['val_loss']
epochs = range(1, len(loss)+1)
plt.plot(epochs, loss, 'bo', label='loss_train')
plt.plot(epochs, loss_val, 'b', label='loss_val')
plt.title('value of the loss function')
plt.xlabel('epochs')
plt.ylabel('value of the loss function')
plt.legend()
plt.grid()
plt.show()
acc = history.history['rmse']
acc_val = history.history['val_rmse']
epochs = range(1, len(loss)+1)
plt.plot(epochs, acc, 'bo', label='accuracy_train')
plt.plot(epochs, acc_val, 'b', label='accuracy_val')
plt.title('accuracy')
plt.xlabel('epochs')
plt.ylabel('value of accuracy')
plt.legend()
plt.grid()
plt.show()
# # Delete train data
del train_data
# # Predict test data
# * We follow the same preparation steps as above
# * Build a data generator for each chunk
# * Predict each chunk
# * Write the predictions into an array
# +
nrows = 1667904
batch_size = 1022
steps = 25
y_test = np.empty(())  # 0-d placeholder; the extra first row is deleted after the loop
test_weather = pd.read_csv(path_in+'weather_test.csv', parse_dates=['timestamp'])
cols_with_missing_test_weather = [col for col in test_weather.columns if test_weather[col].isnull().any()]
test_weather[cols_with_missing_test_weather] = imp_most.fit_transform(test_weather[cols_with_missing_test_weather])
test_weather[weather_scale] = test_weather[weather_scale].astype('float32')
mean = test_weather[weather_scale].mean(axis=0)
std = test_weather[weather_scale].std(axis=0)
test_weather[weather_scale] = (test_weather[weather_scale] - mean) / std
test_weather['wind_direction'+'_sin'] = np.sin((2*np.pi*test_weather['wind_direction'])/360)
test_weather['wind_direction'+'_cos'] = np.cos((2*np.pi*test_weather['wind_direction'])/360)
test_weather = test_weather.drop(['wind_direction'], axis=1)
for i in range(0, steps):
print('work on step ', (i+1))
test_data = pd.read_csv(path_in+'test.csv', skiprows=range(1,i*(nrows)+1), nrows=nrows, parse_dates=['timestamp'])
test_data['month'] = test_data['timestamp'].dt.month
test_data['day'] = test_data['timestamp'].dt.weekday
test_data['year'] = test_data['timestamp'].dt.year
test_data['hour'] = test_data['timestamp'].dt.hour
test_data['weekend'] = np.where((test_data['day'] == 5) | (test_data['day'] == 6), 1, 0)
for feature in features_cyc.keys():
test_data[feature+'_sin'] = np.sin((2*np.pi*test_data[feature])/features_cyc[feature])
test_data[feature+'_cos'] = np.cos((2*np.pi*test_data[feature])/features_cyc[feature])
test_data = test_data.drop(features_cyc.keys(), axis=1)
test_data = pd.get_dummies(test_data, columns=['meter'])
test_data = pd.merge(test_data, building_data, on='building_id', right_index=True)
test_data = test_data.sort_values(['timestamp'])
test_data = pd.merge_asof(test_data, test_weather, on='timestamp', by='site_id', right_index=True)
test_data = test_data.sort_values(['row_id'])
for feature in features:
if feature not in test_data:
#print(' not in:', feature)
test_data[feature] = 0
test_generator = DataGenerator(test_data, test_data.index, features, batch_size)
predict = model.predict_generator(test_generator, verbose=1, workers=1)
predict = np.expm1(predict)
y_test = np.vstack((y_test, predict))
del test_data
del test_generator
# -
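# Note: the `np.empty(())` placeholder plus `np.vstack` on every step re-copies the growing array at each iteration; collecting the per-step predictions in a list and stacking once at the end is an equivalent, cheaper pattern (sketch with dummy data standing in for `model.predict_generator`):

```python
import numpy as np

chunks = []
for step in range(3):
    predict = np.full((4, 1), float(step))  # stand-in for a batch of predictions
    chunks.append(predict)
y_all = np.vstack(chunks)  # shape (12, 1); no placeholder row to delete
```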
y_test = np.delete(y_test, 0, 0)
# # Delete data
del test_weather
del building_data
# # Write output for submission
output = pd.DataFrame({'row_id': range(0, len(y_test)),
'meter_reading': y_test.reshape(len(y_test))})
output = output[['row_id', 'meter_reading']]
output.to_csv('submission.csv', index=False)
| ashrae_code/ashrae-datagenerator-lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tsonnakul26/ASL_App/blob/main/ASL_Conv3D_Model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ry3_GzFt4ID4"
#
#
# # ASL Conv3D Handshape Classification Model - Vision & Language F20
# ### <NAME>, <NAME>, <NAME>, <NAME>
# ---
# + [markdown] id="Kfsk0lVQWWgx"
# # Imports
# + id="P5oJJsYqilR7"
from google.colab import drive
from google.colab.patches import cv2_imshow
import pandas as pd
import numpy as np
import random
import cv2 as cv
import matplotlib.pyplot as plt
import tensorflow
from tensorflow import keras
import keras
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout, Conv3D, LSTM, Activation, MaxPooling3D, BatchNormalization
from keras.optimizers import Adam
from keras.callbacks import TensorBoard
import os
import pickle as pkl
# + [markdown] id="AiSf4qvHWfj9"
# # Data
# + id="bsevYbgmj4Rp"
#loading labels pkl file
labels = pkl.load(open('ASL_FINAL_labels.p', 'rb'))
# + colab={"base_uri": "https://localhost:8080/"} id="7z2LCz9lkr9Z" outputId="fd12deb0-d597-4983-a5f9-829a10681d14"
drive.mount('/content/drive', force_remount = True)
# + id="0hBi9A0RoNEV"
#loading videos dataframe from drive
DATA_PATH = "/content/drive/MyDrive"
infile = open(DATA_PATH+'/ASL/ASL_FINAL_data.p','rb')
final_dataframe = pkl.load(infile)
# + colab={"base_uri": "https://localhost:8080/"} id="v6mP8N78oizL" outputId="b717264f-fc54-4aa0-a25b-6275208b7b84"
#1986 video sequences of 9 256x256x3 images, and 1986 labels of 14-length one hot encoded vectors.
print(final_dataframe.shape)
labels.shape
# + id="w6YsaW0PrxrM"
def hot2label(x):
shapes = ['0,0-flat','1-D','1-X', '10,A', '15,15-close', '2', '20,G,L', '3-P,K', '5', '5-claw', 'C', '8,8-open', '9', 'None', 'S']
for i in range(len(x)):
if x[i] == 1:
return(shapes[i])
def delx(x, y):
    """Return a copy of list x with the elements at the indexes in y removed."""
    y = set(y)
    return [v for i, v in enumerate(x) if i not in y]
# + colab={"base_uri": "https://localhost:8080/"} id="GhM7HwWP3W3e" outputId="5d186811-c04c-4906-ae53-2d0398a38172"
#Removing indexes with 'None' label
bad_idxs = []
for i in range(len(labels)):
if hot2label(labels[i]) == 'None':
bad_idxs.append(i)
print(len(bad_idxs))
# + colab={"base_uri": "https://localhost:8080/"} id="xbgnNGLL12kt" outputId="a14b7b5a-fc84-4bd1-f98c-1a43f425a753"
labels = list(labels)
print(len(labels))
print(len(bad_idxs))
label = delx(labels, bad_idxs)
print(len(label))
print(len(list(final_dataframe)))
dflist = list(final_dataframe)
data = delx(dflist, bad_idxs)
print(len(data))
label = label[:1000]
data = data[:1000]
# + id="bFp1gENv3mF8"
x = np.array(data)
y = np.array(label)
# + colab={"base_uri": "https://localhost:8080/"} id="b0aoZWEe8Zre" outputId="a31f4497-31ca-4e24-96c0-f28da424499d"
#Scaling all pixel values between 0 and 1
print(x.shape)
print('Min: %.3f, Max: %.3f' % (x.min(), x.max()))
x = x.astype('float32') / 255.0
print('After scaling: Min: %.3f, Max: %.3f' % (x.min(), x.max()))
# + colab={"base_uri": "https://localhost:8080/"} id="y_h6VxZ-6Eob" outputId="6dae5ae8-bc4d-431d-c02f-d4fc3b3ee1f6"
#Splitting X and Y into train and validation sets
x_train = x[:len(x)*4//5]
x_val = x[len(x)*4//5:]
y_train = y[:len(x)*4//5]
y_val = y[len(x)*4//5:]
print(x_train.shape)
print(x_val.shape)
print(y_train.shape)
print(y_val.shape)
# + [markdown] id="wuQLSWghXgtv"
# # Model Definition and Training
# + id="apDYyRD7ESnb"
# Defining Conv3D model
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3), input_shape= (9, 256, 256, 3), padding="same"))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(3, 3, 3), padding="same"))
model.add(Dropout(0.25))
model.add(Conv3D(64, padding="same", kernel_size=(3, 3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(3, 3, 3), padding="same"))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
#14 categories, so last Dense layer has 14 neurons
model.add(Dense(14, activation='softmax'))
# + colab={"base_uri": "https://localhost:8080/"} id="rEJqR3hk6v54" outputId="d21ce5ca-0cad-4d2e-b66f-efd170e2a3e6"
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr = 0.001), metrics=["accuracy"])
model.fit(x_train, y_train, epochs=20, batch_size = 20, verbose = 1)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="G3g3Z9QMXzGU" outputId="5ee481f2-1d03-4b80-b814-293311a6d42c"
from keras.utils import plot_model
plot_model(model)
# + [markdown] id="_Sxe3QIDZtsl"
# # Evaluation
# + id="ao3-z_2UKfxi"
predictions = model.predict(x_val)
# + colab={"base_uri": "https://localhost:8080/"} id="_DNIKibjKjY2" outputId="eb1d02f0-3c67-45ac-b89b-fcc307f3ef6c"
for i in range(10):
pred = predictions[i]
max = pred.max()
for j in range(14):
if pred[j] != max:
pred[j] = 0
else:
pred[j] = 1
print("Prediction", i)
print(pred)
print(y_val[i])
print("\n")
# + colab={"base_uri": "https://localhost:8080/"} id="n6jSPZ5oLQkp" outputId="f3e82a2f-4ef5-488a-c0c7-a7a0607bc8e5"
correct = 0
for i in range(10):
pred = predictions[i]
max = pred.max()
for j in range(14):
if pred[j] != max:
pred[j] = 0
else:
pred[j] = 1
same = True
for j in range(len(pred)):
if pred[j] != y_val[i][j]:
same = False
if same:
correct += 1
print(correct/ 10 * 100, "%")
#validation accuracy is lower than training accuracy, so the model is overfitting
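# The per-sample thresholding above can be collapsed into a single `np.argmax` comparison (equivalent whenever the maximum probability in a row is unique); a sketch:

```python
import numpy as np

def onehot_accuracy(probs, onehot_labels):
    """Fraction of rows whose most probable class matches the one-hot label."""
    pred_classes = np.argmax(probs, axis=1)
    true_classes = np.argmax(onehot_labels, axis=1)
    return float(np.mean(pred_classes == true_classes))
```

# `onehot_accuracy(predictions[:10], y_val[:10]) * 100` reproduces the percentage printed above.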
# + [markdown] id="uMEs95GprWmp"
# ___
| ASL_Conv3D_Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ahmedhisham73/deep_learningtuts/blob/master/deeplearningtutorialCNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="l-Gqn5atyM8G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 471} outputId="1a55bc21-b26e-488f-a12e-f959524259ca"
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images / 255.0
test_images=test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
test_loss = model.evaluate(test_images, test_labels)
# + id="-7r0QP2d3ORg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 781} outputId="06faf3f0-5ee1-491a-f670-db8356670101"
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=10)
test_loss = model.evaluate(test_images, test_labels)
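# The spatial sizes reported by `model.summary()` above can be derived by hand: a 3x3 valid convolution shrinks each side by 2, and 2x2 max pooling halves it (rounding down). For the model above:

```python
def conv_then_pool(size, kernel=3, pool=2):
    """Side length after a valid kxk convolution followed by pxp max pooling."""
    return (size - (kernel - 1)) // pool

s = conv_then_pool(28)   # 28 -> 26 (conv) -> 13 (pool)
s = conv_then_pool(s)    # 13 -> 11 (conv) -> 5 (pool)
flat_units = s * s * 64  # 5*5*64 = 1600 inputs to the Flatten layer
```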
# + [markdown] id="NiFXkzYS3nHr" colab_type="text"
# Now let's visualize how an image is transformed as it passes through each layer of the network.
# + id="xJ-AhUKT3wfT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="475e5afa-5141-417a-b74a-59e3e6deb401"
import matplotlib.pyplot as plt
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=1
SECOND_IMAGE=19
THIRD_IMAGE=29
CONVOLUTION_NUMBER = 1
from tensorflow.keras import models
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False)
# + [markdown] id="ZnOO7vIf4C6Y" colab_type="text"
# Let's try to improve the accuracy.
# + id="wTEC5mCa4Fxr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 416} outputId="db91420c-86d3-4c13-8fa7-6821a08b2514"
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(16, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
# + id="ak3MSatc4ih3" colab_type="code" colab={}
| deeplearningtutorialCNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 1
# 1. WAP to check if a number is divisible by 5
# 2. WAP to check if a number is even or odd
# 3. WAP to check if the roots of a quadratic equation are real, real and equal, or imaginary
# 4. WAP to print grades of a student
# Less than 40 - NC
# 40 - Less than 50 - D
# 50 - Less than 60 - C
# 60 - Less than 70 - B
# 70 - Less than 80 - A
# 80 and Above - O
# 5. WAP to print the Electricity Bill
# Upto 200 - 0.5/unit
# 201 - 500 - 1/unit for units consumed above 200
# 501 - 1000 - 2.5/unit for units consumed above 500
# 1001 - 1500 - 3.5/unit
# 1501 - 2500 - 5/unit
# Above 2500 - 10/unit
# Exercise 1 prg 1
n = input("Enter a number")
n = int(n)
print(n)
if n%5 == 0:
print("Number is divisible by 5")
else:
print("Not divisible by 5")
# Exercise 1 prg 2
n = input("Enter a number")
n = int(n)
print(n)
if n%2 == 0:
print("Number is Even")
else:
print("Number is Odd")
# Exercise 1 prg 3
a = int(input())
b = int(input())
c = int(input())
d = (b*b)-(4*a*c)
d = int(d)
print(d)
if d > 0:
print("Roots are real and distinct")
elif d == 0:
print("Roots are real and equal")
else:
print("Roots are Imaginary")
# Exercise 1 prg 4
# WAP to print grades of a student
# Less than 40 - NC
# 40 - Less than 50 - D
# 50 - Less than 60 - C
# 60 - Less than 70 - B
# 70 - Less than 80 - A
# 80 and Above - O
marks = int(input("Enter marks")) # input() returns a string; convert to int
if marks < 40:
    print("NC")
elif marks < 50:
    print("Grade D")
elif marks < 60:
    print("Grade C")
elif marks < 70:
    print("Grade B")
elif marks < 80:
    print("Grade A")
else:
    print("Grade O")
# Exercise 1 prg 5
# WAP print the Electricity Bill
# Upto 200 - 0.5/unit
# 201 - 500 - 1/unit for units consumed above 200
# 501 - 1000 - 2.5/unit " " 500
# 10001 - 1500 - 3.5/unit
# 1501 - 2500 - 5/unit
# Above 2500 - 10/unit
unit = int(input("Enter units")) # input() returns a string; convert to int
if unit <= 200:
    bill = 0.5*unit
elif unit <= 500:
    bill = (0.5*200) + (unit-200)*1
elif unit <= 1000:
    bill = (0.5*200) + (300*1) + (unit-500)*2.5
elif unit <= 1500:
    bill = (0.5*200) + (300*1) + (500*2.5) + (unit-1000)*3.5
elif unit <= 2500:
    bill = (0.5*200) + (300*1) + (500*2.5) + (500*3.5) + (unit-1500)*5
else:
    bill = (0.5*200) + (300*1) + (500*2.5) + (500*3.5) + (1000*5) + (unit-2500)*10
bill = float(bill)
print(bill)
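# The same slab logic, wrapped as a function so it is easy to spot-check (a sketch; the slab widths follow the rate table in the exercise statement above):

```python
def electricity_bill(unit):
    """Cumulative slab billing: each tuple is (slab width in units, rate per unit)."""
    slabs = [(200, 0.5), (300, 1), (500, 2.5), (500, 3.5), (1000, 5)]
    bill, remaining = 0.0, unit
    for width, rate in slabs:
        used = min(remaining, width)
        bill += used * rate
        remaining -= used
    bill += remaining * 10  # any units above 2500
    return bill

# electricity_bill(200) -> 100.0, electricity_bill(500) -> 400.0
```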
# # Lab 3
# 1. WAP to check if a string is a palindrome or not
# 2. WAP that takes a string of multiple words, capitalizes the first letter of each word and forms a new string
# 3. WAP to read a string and display the longest substring of the given string having just the consonants
# 4. WAP to read a string and print a string that capitalizes every other letter, e.g. goel becomes gOeL
# 5. WAP to read an email id of a person and verify if it belongs to the banasthali.in domain
# 6. WAP to count the number of vowels and consonants in an input string
# 7. WAP to read a sentence and count the articles it has
# 8. WAP to check if a sentence is a pangram or not
# 9. WAP to read a string and check if it has "good" in it
# Exercise 3 prg 1
str = input("Enter a string")
ss= str[::-1] #to reverse the string
print(str)
print(ss)
if(str == ss):
print("palindrome")
else:
print('Not palindrome')
# Exercise 3 prg 2
str = input("Enter string")
s = ""
lstr = str.split()
for i in range(0,len(lstr)):
lstr[i] = lstr[i].capitalize()
#lstr[i] = lstr[i].replace(lstr[i][0],lstr[i][0].upper())
print(s.join(lstr))
# Exercise 3 prg 4
s = input("Enter string")
result = ""
for i in range(len(s)):
    result += s[i].upper() if i % 2 == 1 else s[i]  # capitalize every other letter
print(result)
# Exercise 3 prg 5
str = input("Enter email")
a = str.find("@")
s = "banasthali.in"
if(str[a+1:] == s):
print("Verified")
else:
print("Wrong email")
# Exercise 3 prg 6,7
s = input("Enter string")
words = s.lower().split()
articles = sum(1 for w in words if w in ('a', 'an', 'the'))  # count whole words, not substrings
vowels = sum(1 for ch in s.lower() if ch in 'aeiou')
consonants = sum(1 for ch in s if ch.isalpha()) - vowels
print(articles)
print(vowels)
print(consonants)
# Exercise 3 prg 9
str = input("Enter string")
if(str.find("good") >= 0):
print("yes")
else:
print("no")
# # Lab 4
# 1. WAP to find the minimum element from a list of integers along with its index in the list
# 2. WAP to calculate the mean of a given list of numbers
# 3. WAP to search an element in a list
# 4. WAP to count the frequency of an element in a list
# 5. WAP to find the frequencies of all elements in a list
# 6. WAP to extract two slices from a list, add the slices and find which one is greater
# 7. WAP to find the second largest number of a list
# 8. WAP to input a list of numbers and shift all zeros to the right and all non-zeros to the left of the list
# 9. WAP to add two matrices
# 10. WAP to multiply two matrices
# 11. WAP to print transpose of a matrix
# 12. WAP to add elements of diagonal of a matrix
# 13. WAP to find an element in a matrix
# 14. WAP to find inverse of a matrix
#1
l = list(input("Enter list"))  # list() splits the input string into single characters
min = l[0]
for i in range(len(l)-1):
if(l[i+1]<min):
min=l[i+1]
print(min)
print(l.index(min))
#2
l = list(input("Enter list"))
total = 0
for i in range(len(l)):
    total += int(l[i])
mean = float(total/len(l))
print(mean)
#3
l = list(input("Enter list"))
ele = input("Enter element")
print(l.index(ele))
#5
l = list(input("Enter list"))
lst = []
for i in range(len(l)):
if(lst.count(l[i]) == 0):
print(l[i],l.count(l[i]))
lst.append(l[i])
#6
l = [int(ch) for ch in input("Enter list")]
length = len(l)//2
l1 = l[0:length]
l2 = l[length:]
s1 = 0
for v in l1:
    s1 += v
s2 = 0
for v in l2:
    s2 += v
if s1 > s2:  # compare the sums of the two slices
    print(l1)
else:
    print(l2)
#7  second largest number in a list
l = [int(ch) for ch in input("Enter list")]
largest = second = None
for v in l:
    if largest is None or v > largest:
        second = largest
        largest = v
    elif v != largest and (second is None or v > second):
        second = v
print(second)
print(l.index(second))
#8  shift all zeros to the right, non-zeros to the left
l = [int(ch) for ch in input("Enter list")]
nonzeros = [v for v in l if v != 0]
zeros = [v for v in l if v == 0]
l = nonzeros + zeros
print(l)
#9
mat1=[]
mat2=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix 1 :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat1.append(a)
print("Enter matrix 2 :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat2.append(a)
res = [[mat1[i][j] + mat2[i][j] for j in range(len(mat1[0]))] for i in range(len(mat1))]
print("Added Matrix is : ")
for r in res :
print(r)
# +
#10
mat1=[[4,3],[2,4]]
mat2=[[1,7],[3,4]]
res=[[0,0],[0,0]]
for i in range(len(mat1)):
for j in range(len(mat2[0])):
for k in range(len(mat2)):
res[i][j]+=mat1[i][k]*mat2[k][j]
print("Result Array : ")
for r in res:
print(r)
# -
#11
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
trans=[]
for j in range(c):
trans.append([])
for i in range(r):
t=mat[i][j]
trans[j].append(t)
print("Transposed Matrix : ")
for r in trans:
print(r)
#12
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
total = 0
for i in range(min(r, c)):
    total += mat[i][i]
print(total)
# 13
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
elem=int(input("Enter Element to be searched : "))
ind_r=0
ind_c=0
flag=0
for i in range(r):
for j in range(c):
if mat[i][j]==elem :
ind_r=i
ind_c=j
flag=1
break
if flag==1:
print(str(ind_r), str(ind_c))
else:
print("Element not found.")
# 14
import numpy as np
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
    a=[]
    for j in range(c):
        ele=int(input())
        a.append(ele)
    mat.append(a)
# the inverse exists only for a square matrix with a non-zero determinant
if r == c and np.linalg.det(mat) != 0:
    print("Inverse of the matrix :")
    print(np.linalg.inv(mat))
else:
    print("Matrix is not invertible.")
# # Lab 5
# 1. Write a function in python to find max of three numbers
# 2. Write a function in python to sum all the numbers in a list
# 3. Write a function in python to multiply all the numbrers in a list
# 4. Write a function in python to reverse a string
# 5. Write a function in python that takes a list and returns a new list with unique elements of the first list
# 6. Write a function in python that checks whether a passed string is palindrome or not
# 7. Write a function in python to access a function inside a function
# 8. Write a function in python to generate even series upto the nth term
# 9. Write a function in python to check if a number is prime or not
# 10. Write a function in python to generate prime series between range inputted by a user.
# 11. Write a recursive function in python to sum all the numbers in a list
# 12. Write a recursive function in python to multiply all the numbrers in a list
# 13. Write a recursive function in python to reverse a string
# 14. Write a recursive function in python that takes a list and returns a new list with unique elements of the first list
# 15. Write a recursive function in python that checks whether a passed string is palindrome or not
# 16. Write a recursive function in python to generate even series upto the nth term
# 17. Write a recursive function in python to check if a number is prime or not
# 18. Write a recursive function in python to generate prime series between range inputted by a user.
#1
def max3(a, b, c):
    m = a if a > b else b   # larger of a and b
    m = m if m > c else c   # then compare with c
    print(m)
max3(1,2,3)
#2
def sum(l):
s = 0
for i in range(len(l)):
s+= int(l[i])
print(s)
l = list(input("Enter list"))
sum(l)
#3
def mul(l):
m= 1
for i in range(len(l)):
m= m*int(l[i])
print(m)
l = list(input("Enter list"))
mul(l)
#4
def rev(s):
r = s[::-1]
print(r)
rev("Kashish")
#6
def palin(str):
ss= str[::-1] #to reverse the string
print(str)
print(ss)
if(str == ss):
print("palindrome")
else:
print('Not palindrome')
str = input("Enter a string")
palin(str)
#7
def fun1():
print("First Function")
fun2()
def fun2():
print("Second Function")
fun1()
#8
def even_s(n):
for i in range(n):
if(i%2 == 0):
print(i, end=" ")
even_s(8)
# +
#9
def prime(n):
c=0
for i in range(1,n):
if n%i == 0:
c+=1
if c == 1:
print("Prime number")
else:
print("Not a prime number")
n = int(input("Enter number"))
prime(n)
# -
#10
def prime_s(n):
for j in range(n):
c = 0
for i in range(1,j):
if j%i == 0:
c+=1
if c == 1:
print(j, end=" ")
prime_s(5)
#11
def sum_no(n,l):
if n == 0:
return int(l[n])
else:
return(int(l[n])+sum_no(n-1,l))
l = list(input("Enter list"))
sum_no((len(l)-1),l)
#12
def mul_no(n,l):
if n == 0:
return int(l[n])
else:
return(int(l[n])*mul_no(n-1,l))
l = list(input("Enter list"))
mul_no((len(l)-1),l)
# +
#13
def revstr(str,n):
if n==0:
return str[0]
return str[n]+revstr(str,n-1)
print(revstr("helloworld",9))
# +
#14
def unique(list1, list2, i):
    """Recursively append the elements of list1 not already in list2."""
    if i == len(list1):
        return list2
    if list1[i] not in list2:
        list2.append(list1[i])
    return unique(list1, list2, i + 1)
list1=[]
n=int(input("Enter size : "))
for i in range(n):
    a=int(input("Enter the element :"))
    list1.append(a)
print(list1)
list2=unique(list1, [], 0)
print(list2)
# +
#15
def isPal(st, s, e):
    if s >= e:
        return True
    if st[s] != st[e]:
        return False
    return isPal(st, s + 1, e - 1)
def isPalindrome(st):
    if len(st) == 0:
        return True
    return isPal(st, 0, len(st) - 1)
st = "kashish"
if isPalindrome(st):
    print("Yes")
else:
    print("No")
# -
#16
def rec_even(s, n):
    if s >= n:
        return
    print(s, end=" ")
    rec_even(s + 2, n)
num = int(input("Enter a number"))
rec_even(0, num)
#17
def rec_prime(n, i):
    if n <= 2:
        return n == 2
    if n % i == 0:
        return False
    if i * i > n:
        return True
    return rec_prime(n, i + 1)
num = int(input("Enter a number"))
print(rec_prime(num, 2))
#18
def prime(n):
    """Return True if n is prime."""
    if n < 2:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True
def rec_prime(s, num):
    if s > num:
        return
    if prime(s):
        print(s, end=" ")
    rec_prime(s + 1, num)
num = int(input("Enter a number"))
rec_prime(0, num)
# # Lab 6
# 1. WAP to read a file and print the number of characters, words and lines it has
# 2. WAP to copy contents of one file into another file
# 3. WAP to read a file and count the number of vowels it has
# 4. WAP to read a file and count the number of articles it has
# 5. WAP to read a file and reverse its contents (the last line of the file should become the first line)
f = open(r'D:\Kashish Goel\Sample.txt')
print(f.read())
f = open(r'D:\Kashish Goel\Sample.txt')
words=0
lines = 0
for line in f:
print(line)
f.close()
f = open(r'D:\Kashish Goel\Sample.txt')# no of lines
words=0
lines = 0
for line in f:
lines=lines+1
print(lines)
f.close()
#1
f = open(r'D:\Kashish Goel\Sample.txt')# no of lines
words=0
lines =0
for line in f:
lines=lines+1
word = line.split()
words= words + len(word)
print(lines)
print(words)
f.seek(0)
fo = f.read() # read file in one go
print(len(fo))
f.close()
# +
#2
f=open(r'D:\Kashish Goel\Sample.txt')
f1 = open("Copy.txt", "w+")
for line in f:
f1.write(line)
f1.seek(0)
print(f1.read())
f.close()
f1.close()
# +
#3
f=open(r'D:\Kashish Goel\Sample.txt')
fo=f.read()
vowels=0
for i in fo:
if i in ('a','e','i','o','u','A','E','I','O','U'):
vowels=vowels+1
print(vowels)
# +
#4
f=open(r'D:\Kashish Goel\Sample.txt')
articles=0
for line in f:
sentence=line.split()
for word in sentence:
if word in ('a','an','the','A','An','The'):
articles+=1
print(articles)
# +
#5
f=open(r'D:\Kashish Goel\Sample.txt')
file_lines=[]
for line in f:
    file_lines.append(line)
f.close()
f=open(r'D:\Kashish Goel\Sample.txt',"w+")
for i in range(len(file_lines)-1,-1,-1):
    f.write(file_lines[i])
f.seek(0)
print(f.read())
f.close()
# -
| Python Basics/.ipynb_checkpoints/Lab Assignment - 1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Eigenvalue and eigenvectors calculation
#
# $$
# A\mathbf{x} = \lambda \mathbf{x}
# $$
#
# ### Power method (vector iteration)
# - find the largest eigenvalue $\lambda_{max}$
# \begin{align}
# \mathbf{q}_k & = \frac{\mathbf{z}_{k-1}}{\|\mathbf{z}_{k-1}\|_2}\\
# \mathbf{z}_k & = A\mathbf{q}_{k}\\
# \lambda_{max}^k & = \mathbf{q}^T_k \mathbf{z}_k
# \end{align}
# +
# %matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import numpy.linalg
import scipy.linalg
n = 9
h = 1./(n-1)
x=linspace(0,1,n)
a = -ones((n-1,))
b = 2*ones((n,))
A = (diag(a, -1) + diag(b, 0) + diag(a, +1))
A /= h**2
#print A
z0 = ones_like(x)
def PM(A,z0,tol=1e-5,nmax=500):
q = z0/numpy.linalg.norm(z0,2)
it = 0
err = tol + 1.
while it < nmax and err > tol:
z = dot(A,q)
l = dot(q.T,z)
err = numpy.linalg.norm(z-l*q,2)
q = z/numpy.linalg.norm(z,2)
it += 1
print("error =", err, "iterations =", it)
print("lambda_max =", l)
return l,q
l,x = PM(A,z0)
l_np, x_np = numpy.linalg.eig(A)
print("numpy")
print(l_np)
# -
# ### Inverse power method
# - find the eigenvalue $\lambda$ **closest** to $\mu$
# \begin{align}
# M & = A-\mu I\\
# M & = LU \\
# & \\
# M\mathbf{x}_k &= \mathbf q_{k-1}\\
# \mathbf{q}_k & = \frac{\mathbf{x}_k}{\|\mathbf{x}_k\|_2}\\
# \mathbf{z}_k & = A\mathbf{q}_{k}\\
# \lambda^k & = \mathbf{q}^T_k \mathbf{z}_k
# \end{align}
#
# +
def IPM(A,x0,mu,tol=1e-5,nmax=500):
M = A -mu*eye(len(A))
P,L,U = scipy.linalg.lu(M)
err = tol + 1.
it = 0
q = x0/numpy.linalg.norm(x0,2)
while it < nmax and err > tol :
y = scipy.linalg.solve(L,dot(P.T,q))
x = scipy.linalg.solve(U,y)
q = x/numpy.linalg.norm(x,2)
z = dot(A,q)
l = dot(q.T,z)
err = numpy.linalg.norm(z-l*q,2)
it += 1
print("error =", err, "iterations =", it)
print("lambda =", l)
return l,q
l,x = IPM(A,z0,6.)
# -
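# The key property of the inverse power method is that it converges to the eigenvalue closest to the shift $\mu$. A stand-alone check of that idea on a small tridiagonal matrix (the names `B` and `mu` below are illustrative, not from the code above):

```python
import numpy as np

# eigenvalues of the n=5 tridiagonal (-1, 2, -1) matrix are 2 - 2*cos(k*pi/6)
B = np.diag(2.0 * np.ones(5)) + np.diag(-np.ones(4), 1) + np.diag(-np.ones(4), -1)
mu = 1.0
eigvals = np.linalg.eigvalsh(B)
closest = eigvals[np.argmin(np.abs(eigvals - mu))]  # the eigenvalue IPM would find
```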
| notes/08_eigenvalues.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df =pd.read_csv("D:\\newproject\\New folder\\Chickpea.data.csv")
#Na Handling
df.isnull().values.any()
df=df.dropna()
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
X = df.drop(['Predictor'], axis=1)
X_col = X.columns
y = df['Predictor']
# +
#Savitzky-Golay filter with second degree derivative.
from scipy.signal import savgol_filter
sg=savgol_filter(X,window_length=11, polyorder=3, deriv=2, delta=1.0)
# -
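# A quick illustration of what the filter does: with `deriv=2`, `savgol_filter` returns a smoothed second derivative, so on a noiseless quadratic signal the output is the constant second derivative (the window/polyorder values mirror the cell above; `t` and `signal` are illustrative data):

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(50, dtype=float)
signal = 3 * t**2  # second derivative is exactly 6 everywhere
d2 = savgol_filter(signal, window_length=11, polyorder=3, deriv=2, delta=1.0)
```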
sg_x=pd.DataFrame(sg, columns=X_col)
sg_x.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(sg_x, y,
train_size=0.8,
random_state=23,stratify = y)
# +
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=18)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
# -
from sklearn import svm
clf = svm.SVC(kernel="linear")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
# +
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy: ' + str(accuracy_score(y_test, y_pred)))
# -
| chickpea_lda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import deluca.core
import jax.numpy as jnp
import matplotlib
import matplotlib.pyplot as plt
from deluca.lung import ROOT
from deluca.lung.environments._stitched_sim import StitchedSim
from deluca.lung.experimental.controllers import Deep, deep_train
from deluca.lung.utils.scripts.run_controller import run_controller_scan
from deluca.lung.utils.data.analyzer import Analyzer
from deluca.lung.utils.core import BreathWaveform
from deluca.lung.controllers import PID
from deluca.lung.controllers import BangBang
def plot_controller_on_breathwaveform(controller, sim, abort=70, T=29, use_tqdm=False, **kwargs):
waveform = BreathWaveform.create()
run_data = run_controller_scan(
controller, R=None, C=None, T=T, abort=abort, use_tqdm=use_tqdm, env=sim, waveform=waveform, **kwargs
)
analyzer = Analyzer(run_data)
analyzer = Analyzer(run_data)
preds = analyzer.pressure.copy()
truth = analyzer.target.copy()
loss = jnp.abs(preds - truth).mean()
analyzer.plot(legend=True)
print('loss:' + str(loss))
return loss
loss_per_controller = {}
# # Load Simulator Trained Params
sim = deluca.load(f"{ROOT}/pkls/sim.pkl")
# # Load and Run Deep Controller
deep_controller = deluca.load(f"{ROOT}/pkls/deep_controller.pkl")
loss_per_controller['deep'] = plot_controller_on_breathwaveform(deep_controller, sim)
# # Run PID Controller
waveform = BreathWaveform.create()
params = {'K' : {'kernel' : jnp.array([[0.], [10.], [10.]])}}
pid = PID.create(waveform=waveform, params=params)
loss_per_controller['PID'] = plot_controller_on_breathwaveform(pid, sim)
# # Run BangBang Controller
waveform = BreathWaveform.create()
bangbang = BangBang.create(waveform=waveform)
loss_per_controller['BangBang'] = plot_controller_on_breathwaveform(bangbang, sim)
# # Compare Controller Losses
keys = loss_per_controller.keys()
values = loss_per_controller.values()
axes = matplotlib.pyplot.axes()
axes.set_xlabel("Controller")
axes.set_ylabel("Mean Absolute Error Loss")
plt.bar(keys, values)
| demo/2021-07-18 controller demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
#############################################
#### Exploratory Analysis of FirenzeCard data
#############################################
# import libraries
import sys
sys.path.append('../src/')
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
# %matplotlib inline
import psycopg2
from features.firenzecard import *
# +
# establish connection to db
# connection = connect(host='', port=)
df = pd.read_csv('../src/output/firenzedata_feature_extracted.csv')
# +
########################
# Card Usage Behaviour
########################
# How many cards are there?
print('How many Firenzecards are there?', len(df['user_id'].unique()))
# +
# How many cards were activated?
len(df[(df['adults_first_use'] == 1)])
# What is the most common day of activation?
day_of_activation, plot_url_activation = plot_day_of_activation(df, plotname='DOA')
plot_url_activation
# +
# How many users use the card for 24h or less? (not cumulative calculation)
print(len(df[df['total_duration_card_use'] <= 24].user_id.unique()))
# ... 24 - 48h?
print(len(df[(df['total_duration_card_use'] > 24) & (df['total_duration_card_use'] <= 48)].user_id.unique()))
# ... 48 - 72h?
print(len(df[(df['total_duration_card_use'] > 48) & (df['total_duration_card_use'] <= 72)].user_id.unique()))
# -
# How many museums visited per card / per day
total_museums_per_card, plot_url1 = plot_museums_visited_per_card(df, plotname1 = 'Number-museums-per-card')
plot_url1
# +
########################
# Popular Museums
########################
# What are the most popular museums?
popular_museums, plot_url2 = plot_museum_aggregate_entries(df, plotname='PM')
plot_url2
# +
########################
# State Museum Visits
########################
national_museum_entries, plot_url3 = plot_national_museum_entries(connection, export_to_csv=True,export_path='../src/output/')
plot_url3
# -
# How many cards are entering museums with minors? What proportion of all cards is this?
minors = df[df['is_card_with_minors'] == 1]
minors = minors.groupby('user_id').size().to_frame()
len(minors)
# +
##############################
# Entries in Museums over time
##############################
museum_list = ['Santa Croce', 'Opera del Duomo', 'Uffizi', 'Accademia',
'<NAME>', 'M. Palazzo Vecchio', '<NAME>', '<NAME>',
'San Lorenzo', 'M. Archeologico', 'Pitti', 'Cappelle Medicee',
'M. S<NAME>ella', 'M. San Marco', 'Laurenziana',
'M. Innocenti', 'Palazzo Strozzi', 'Palazzo Medici',
'Torre di Palazzo Vecchio', 'Brancacci', 'M. Opificio',
'La Specola', 'Orto Botanico', '<NAME>', '<NAME>',
'<NAME>', '<NAME>', '<NAME>', 'Casa Buonarroti',
'<NAME>', '<NAME>', '<NAME>', '<NAME>',
'<NAME>', '<NAME>', '<NAME>', '<NAME>',
'<NAME>', '<NAME>', 'Primo Conti','All Museums']
# +
df_date, plot_urls = get_museum_entries_per_timedelta_and_plot(df, museum_list, timedelta='date',
start_date='2016-06-01',
end_date='2016-09-30', plot=False, export_to_csv=False,
export_path='../src/output/')
df2_date = df_date['All Museums']
df_date['All Museums'].head()
# +
df_hour, plot_urls = get_museum_entries_per_timedelta_and_plot(df, museum_list, timedelta='hour',
start_date='2016-06-01',
end_date='2016-09-30', plot=False, export_to_csv=False,
export_path='../src/output/')
df2_hour = df_hour['All Museums']
df_hour['All Museums'].head()
# +
df_dow, plot_urls = get_museum_entries_per_timedelta_and_plot(df, museum_list, timedelta='day_of_week',
start_date='2016-06-01',
end_date='2016-09-30', plot=False, export_to_csv=False,
export_path='../src/output/')
df2_dow = df_dow['All Museums']
df_dow['All Museums'].head()
# -
# Timeline of usage(per avg hour, calendar day, calendar month, weekday) - segment per museum
mean_entries_hour, mean_entries_dow, mean_entries_date = get_timelines_of_usage(df2_hour, df2_date, df2_dow)
# mean_entries_hour.head(), mean_entries_dow.head(), mean_entries_date.head()
# Daily Museums entries
date, date_url = plot_timeseries_button_plot(df_date, timedelta= 'date', plotname='timeseries')
date_url
# Hourly Museums entries
hour, hour_url = plot_timeseries_button_plot(df_hour, timedelta= 'hour', plotname='timeseries')
hour_url
# Day of Week museum entries
dow, dow_url = plot_timeseries_button_plot(df_dow, timedelta= 'day_of_week', plotname='testing')
dow_url
# +
##########################
## Geographic Timseries map
##########################
# Which museums are full, and which are rather empty, at different times of the day?
# Are they located next to each other?
data, geomap_plot_url = plot_geomap_timeseries(df, df2_hour, date_to_plot='2016-07-10',
plotname='map-test', mapbox_access_token='<KEY>', min_timedelta=7,
max_timedelta=23)
geomap_plot_url
# +
######################
# Museum timeseries correlations
######################
lst = list(df.museum_id.unique())
corr_matrix, high_corr, inverse_corr = get_correlation_matrix(df=df2_hour, lst = lst, corr_method = 'spearman',
timedelta='hour', timedelta_subset = False,
timedeltamin = 0, timedeltamax = 3,
below_threshold= -0.7, above_threshold=0.7,
export_to_csv=True, export_path='../src/output/')
inverse_corr.head()
# -
| notebooks/FirenzeCard exploratory analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory Data Visualization with Altair
#
# ## Materials at http://github.com/altair-viz/altair-tutorial
#
# <a href="https://altair-viz.github.io/gallery/"><img src="altair-gallery.png" alt='altair gallery'></a>
#
# These notebooks contain an introduction to exploratory data analysis with [Altair](http://altair-viz.github.io).
#
# To get Altair and its dependencies installed, please follow the [Installation Instructions](https://altair-viz.github.io/getting_started/installation.html) on Altair's website.
# ## Outline
#
# ### 1. Motivating Altair: Live Demo
#
# We'll start with a set of intro slides, followed by a live-coded demo of what it looks like to explore data with Altair.
#
# Here's the rough content of the live demo:
#
# - [01-Cars-Demo.ipynb](01-Cars-Demo.ipynb)
#
# This will cover a lot of ground very quickly. Don't worry if you feel a bit shaky on some of the concepts; the goal here is a birds-eye overview, and we'll walk-through important concepts in more detail later.
#
# ### 2. Simple Charts: Core Concepts
#
# Digging into the basic features of an Altair chart
#
# - [02-Simple-Charts.ipynb](02-Simple-Charts.ipynb)
#
# ### 3. Binning and Aggregation
#
# Altair lets you build binning and aggregation (i.e. basic data flows) into your chart spec.
# We'll cover that here.
#
# - [03-Binning-and-aggregation.ipynb](03-Binning-and-aggregation.ipynb)
#
# ### 4. Layering and Concatenation
#
# From the simple charts we have seen already, you can begin layering and concatenating charts to build up more complicated visualizations.
#
# - [04-Compound-charts.ipynb](04-Compound-charts.ipynb)
#
# ### 5. Exploring a Dataset!
#
# Here's a chance for you to try this out on some data!
#
# - [05-Exploring.ipynb](05-Exploring.ipynb)
#
# ---
#
# *Afternoon Break*
#
# ---
# ### 6. Selections: making things interactive
#
# A killer feature of Altair is the ability to build interactions from basic building blocks.
# We'll dig into that here.
#
# - [06-Selections.ipynb](06-Selections.ipynb)
#
# ### 7. Transformations: Data streams within the chart spec
#
# We saw binning and aggregation previously, but there are a number of other data transformations that Altair makes available.
#
# - [07-Transformations.ipynb](07-Transformations.ipynb)
#
# ### 8. Config: Adjusting your Charts
#
# Once you're happy with your chart, it's nice to be able to tweak things like axis labels, titles, color scales, etc.
# We'll look here at how that can be done with Altair
#
# - [08-Configuration.ipynb](08-Configuration.ipynb)
#
# ### 9. Geographic Charts: Maps
#
# Altair recently added support for geographic visualizations. We'll show a few examples of the basic principles behind these.
#
# - [09-Geographic-plots.ipynb](09-Geographic-plots.ipynb)
#
# ### 10. Try it out!
#
# - Return to your visualization in [05-Exploring.ipynb](05-Exploring.ipynb) and begin adding some interactions & transformations. What can you come up with?
| 04-altair/notebooks/00-Index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !wget https://f000.backblazeb2.com/file/malay-dataset/news/fake-news/fake-news-negatives-summarization.json
# # !wget https://f000.backblazeb2.com/file/malay-dataset/news/fake-news/fake-news-positives-summarization.json
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/compressed-fake-news.zip
# +
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/summarization-augmentation/250-news-with-valid-hoax-label-summaries.json
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/summarization-augmentation/600-news-with-valid-hoax-label-summaries.json
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/summarization-augmentation/facts-summaries.json
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/summarization-augmentation/hoax-summaries.json
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/summarization-augmentation/malaysia-scraping-syazanihussin-summaries.json
# +
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/indonesian/hoax.csv
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/indonesian/facts.csv
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/indonesian/250%20news%20with%20valid%20hoax%20label.csv
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/indonesian/600%20news%20with%20valid%20hoax%20label.csv
# # !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/news/fake-news/malaysia-scraping-syazanihussin.csv
# +
import re
from unidecode import unidecode
def cleaning(string):
string = unidecode(string)
string = string.replace('SPPPPLIIIT>', ' ').replace('SPPPPLIIIT', ' ')
string = re.sub(r'\w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))*', '', string)
string = re.sub(r'[ ]+', ' ', string).strip().split()
string = [w for w in string if w[0] != '@']
return ' '.join(string)
# +
import pandas as pd
df = pd.read_csv('250 news with valid hoax label.csv', sep = ';',encoding = "ISO-8859-1")
# +
X, Y = [], []
for i in range(len(df)):
X.append(df.iloc[i,0])
Y.append(df.iloc[i,1])
# +
df = pd.read_csv('600 news with valid hoax label.csv', sep = ';',encoding = "ISO-8859-1")
for i in range(len(df)):
X.append(df.iloc[i,0])
Y.append(df.iloc[i,1])
# +
import json
with open('250-news-with-valid-hoax-label-summaries.json') as fopen:
data = json.load(fopen)
# -
for i in data:
X.extend(i[0])
Y.extend([i[1]] * len(i[0]))
# +
with open('600-news-with-valid-hoax-label-summaries.json') as fopen:
data = json.load(fopen)
for i in data:
X.extend(i[0])
Y.extend([i[1]] * len(i[0]))
# -
df = pd.read_csv('facts.csv', sep = ',',encoding = "ISO-8859-1")
for i in range(len(df)):
X.append(df.iloc[i,1])
Y.append(df.iloc[i,2])
df = pd.read_csv('hoax.csv', sep = ',',encoding = "ISO-8859-1")
for i in range(len(df)):
X.append(df.iloc[i,1])
Y.append(df.iloc[i,2])
# +
with open('hoax-summaries.json') as fopen:
data = json.load(fopen)
for i in data:
X.extend(i[0])
Y.extend([i[1]] * len(i[0]))
# +
with open('facts-summaries.json') as fopen:
data = json.load(fopen)
for i in data:
X.extend(i[0])
Y.extend([i[1]] * len(i[0]))
# -
df = pd.read_csv('malaysia-scraping-syazanihussin.csv')
for i in range(len(df)):
X.append(df.iloc[i,0])
Y.append(df.iloc[i,1])
# +
with open('malaysia-scraping-syazanihussin-summaries.json') as fopen:
data = json.load(fopen)
for i in data:
X.extend(i[0])
Y.extend([i[1]] * len(i[0]))
# +
# # !unzip compressed-fake-news.zip
# +
from glob import glob
X_, Y_ = [], []
for file in glob('positive/*.json'):
with open(file) as fopen:
data = json.load(fopen)
X_.extend(data)
Y_.extend(['fake'] * len(data))
# -
for file in glob('negative/*.json'):
with open(file) as fopen:
data = json.load(fopen)
X_.extend(data)
Y_.extend(['real'] * len(data))
# +
with open('fake-news-negatives-summarization.json') as fopen:
data = json.load(fopen)
for i in data:
X_.extend(i)
Y_.extend(['real'] * len(i))
# +
with open('fake-news-positives-summarization.json') as fopen:
data = json.load(fopen)
for i in data:
X_.extend(i)
Y_.extend(['fake'] * len(i))
# +
from tqdm import tqdm
selected_X, selected_Y = [], []
mapping_label = {'fake': 0, 'hoax': 0, 'real': 1, 'valid': 1, 'fact': 1}
for i in tqdm(range(len(X_))):
try:
t = cleaning(X_[i])
if len(t) > 100:
selected_X.append(t)
selected_Y.append(mapping_label[Y_[i].lower()])
except:
pass
len(selected_X), len(selected_Y)
# +
from sklearn.model_selection import train_test_split
train_X, test_X, train_Y, test_Y = train_test_split(selected_X, selected_Y, test_size = 0.1)
len(train_X), len(test_X)
# -
train_X = train_X + selected_X
train_Y = train_Y + selected_Y
# +
from sklearn.utils import shuffle
train_X, train_Y = shuffle(train_X, train_Y)
# +
from malaya.text.bpe import WordPieceTokenizer
tokenizer = WordPieceTokenizer('BERT.wordpiece', do_lower_case = False)
# tokenizer.tokenize('halo nama sayacomel')
# -
def get_tokens(X, Y, maxlen = 1024):
input_ids, input_masks = [], []
actual_l = []
for i in tqdm(range(len(X))):
text = X[i]
tokens_a = tokenizer.tokenize(text)
tokens = ["[CLS]"] + tokens_a + ["[SEP]"]
input_id = tokenizer.convert_tokens_to_ids(tokens)
input_mask = [1] * len(input_id)
if len(input_id) <= maxlen:
input_ids.append(input_id)
input_masks.append(input_mask)
actual_l.append(Y[i])
return input_ids, actual_l
train_input_ids, train_actual_l = get_tokens(train_X, train_Y)
test_input_ids, test_actual_l = get_tokens(test_X, test_Y)
# +
import pickle
with open('relevancy-fastformer.pkl', 'wb') as fopen:
pickle.dump([train_input_ids, train_actual_l, test_input_ids, test_actual_l], fopen)
| session/relevancy/preprocess-fastformer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from models import regression_model
import data_preprocessing
from conformal_prediction import EnCQR
import utils
import data_loaders
# First, we specify several configuration hyperparameters and we store them in a dictionary.
# Not all of them are used at the same time. For example, if we decide to use an LSTM model, the parameters that specifies the TCN and RF model are not used.
# +
target_idx = [0] # target variables to predict
B = 3 # number of ensembles
alpha = 0.1 # confidence level
quantiles = [alpha/2, # quantiles to predict
0.5,
1-(alpha/2)]
# rf only
n_trees = 20 # number of trees in each rf model
# lstm and tcn only
regression = 'quantile' # options: {'quantile', 'linear'}. If 'linear', just set one quantile
l2_lambda = 1e-4 # weight of l2 regularization in the lstm and tcn models
batch_size = 16 # size of batches using to train the lstm and tcn models
# lstm only
units = 128 # number of units in each lstm layer
n_layers = 3 # number of lstm layers in the model
# tcn only
dilations = [1,2,4,8] # dilation rate of the Conv1D layers
n_filters = 128 # filters in each Conv1D layer
kernel_size = 7 # kernel size in each ConvID layer
# Store the configuration in a dictionary
P = {'B':B, 'alpha':alpha, 'quantiles':quantiles,
'n_trees':n_trees,
'regression':regression,'l2':l2_lambda, 'batch_size':batch_size,
'units':units,'n_layers':n_layers,
'dilations':dilations, 'n_filters':n_filters, 'kernel_size':kernel_size}
# -
# ### Data loading
# For this example, we will use 3 years of data relative to solar power production.
# We will use the first year for training, the second for validation, and the last year as test set.
#
# *Note:* To use your own data, you must write a data loader which returns a single DataFrame or, like in this case, 3 DataFrames (one for training, one for validation, and one for test). You can find two examples of data loaders in [data_loaders.py](https://github.com/FilippoMB/Ensemble-Conformalized-Quantile-Regression/blob/main/data_loaders.py).
# +
train_df, val_df, test_df = data_loaders.get_solar_data()
train_df.head()
# -
# ### Data preprocessing
#
# The ``data_windowing()`` function transforms each DataFrame into 3-dimensional arrays of shape \[*number of samples*, *time steps*, *number of variables* \].
# The input data, X, might have a different number of time steps and a different number of variables than the output data, Y. In this case, we want to predict the energy production for the next day given the measurements of the past week.
# Therefore, the second dimension of X is ``time_steps_in=168`` (hours in the past week) and the second dimension of Y is ``time_steps_out=24`` (hours of the next day). The input variables are the historical energy production plus 5 exogenous variables, so the last dimension of X is ``n_vars=6``. Since we want to predict the future energy production, we specify the target variable to predict: ``label_columns=['MWH']``. Note that in Y ``n_vars=1``.
#
# ``data_windowing()`` also rescales each variable in \[0,1\] and return the scaler, which is used to invert the transformation.
#
# In addition, it also splits training data in *B* disjoint sets, used to train the ensemble model. In this case, ``B=3``.
#
# 
# +
train_data, val_x, val_y, test_x, test_y, Scaler = data_preprocessing.data_windowing(df=train_df,
val_data=val_df,
test_data=test_df,
B=3,
time_steps_in=168,
time_steps_out=24,
label_columns=['MWH'])
print("-- Training data --")
for i in range(len(train_data)):
print(f"Set {i} - x: {train_data[i][0].shape}, y: {train_data[i][1].shape}")
print("-- Validation data --")
print(f"x: {val_x.shape}, y: {val_y.shape}")
print("-- Test data --")
print(f"x: {test_x.shape}, y: {test_y.shape}")
# Update configuration dict
P['time_steps_in'] = test_x.shape[1]
P['n_vars'] = test_x.shape[2]
P['time_steps_out'] = test_y.shape[1]
# -
# ### Training the quantile regression models
#
# Before looking into the conformalization of the PI, let's see how we can train different models that perform quantile regression.
#
# In the paper we considered three models:
# - a random forest (rf)
# - a recurrent neural network with LSTM cells
# - a feedforward neural network with 1-dimensional convolutional cells (TCN).
#
# In principle, any other model performing quantile regression can be used. Each model must implement a ``fit()`` function with is used to train the model parameters and a ``transform()`` function used to predict new data.
# The ``fit()`` function uses ``val_x`` and ``val_y`` to perform early stopping.
#
# Let's start with the **TCN** model.
# +
P['model_type'] = 'tcn'
# Train
model = regression_model(P)
hist = model.fit(train_data[0][0], train_data[0][1], val_x, val_y)
utils.plot_history(hist)
# Test
PI = model.transform(test_x)
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
x_lims=[0,168], scaler=Scaler, title='TCN model')
# -
# The function ``plot_hist()`` plots how the loss, coverage, and PI length evolve during training on the train and validation set.
# Note that here we trained the model only on the first subset of the training set.
#
# Next we train the **LSTM** model. To do that, we just change ``model_type`` in the hyperparameters dictionary.
# +
P['model_type'] = 'lstm'
# Train
model = regression_model(P)
hist = model.fit(train_data[0][0], train_data[0][1], val_x, val_y)
utils.plot_history(hist)
# Test
PI = model.transform(test_x)
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
x_lims=[0,168], scaler=Scaler, title='LSTM model')
# -
# Finally, we train the **RF** model. As before, we change ``model_type`` in the hyperparameters dictionary. Contrairly to the previous two neural network model, the ``fit()`` function does not use ``val_x`` and ``val_y`` since there is no early stopping.
# +
# Train
P['model_type'] = 'rf'
model = regression_model(P)
model.fit(train_data[0][0], train_data[0][1])
# Test
PI = model.transform(test_x)
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
x_lims=[0,168], scaler=Scaler, title='RF model')
# -
# ### EnCQR
#
# Finally, we compute the intervals with the EnCQR method.
#
# This is done by calling the function ``conformalized_PI()``, which returns two intervals:
# - the PI computed by the ensemble of QR models
# - the conformalized PI
#
# In this example, we consider an ensemble of TCN models and show that after conformalization the coverage of the PI gets much closer to the desired confidence level.
# +
P['model_type'] = 'tcn'
# compute the conformalized PI with EnCQR
PI, conf_PI = EnCQR(train_data, val_x, val_y, test_x, test_y, P)
# Plot original and conformalized PI
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
conf_PI[:,:,0], conf_PI[:,:,2],
x_lims=[0,168], scaler=Scaler)
# Compute PI coverage and length before and after conformalization
print("Before conformalization:")
utils.compute_coverage_len(test_y.flatten(), PI[:,:,0].flatten(), PI[:,:,2].flatten(), verbose=True)
print("After conformalization:")
utils.compute_coverage_len(test_y.flatten(), conf_PI[:,:,0].flatten(), conf_PI[:,:,2].flatten(), verbose=True)
| example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.ticker import MultipleLocator
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# %matplotlib inline
import pandas as pd
from scipy.optimize import curve_fit
import json
def correlation_function(cov):
p = cov[0]
return (cov-p**2)/(p-p**2)
def straight_line_at_origin(porosity):
def func(x, a):
return a * x + porosity
return func
# +
#strings to output and input locations
beadpack_dic = {
"out_direc": "../../../analysis/covariance/ketton/",
"seed_min": 43,
"seed_max": 64,
"tisize": 64
}
data_dic = beadpack_dic
out_direc = data_dic["out_direc"]
# -
# ## Data Loading and Computation of radial averages
#
# We load the two-point probability function data and perform radial averaging as well as normalizing to the correlation function.
#
# $$ \kappa(r)=\frac{S^{(i)}_2(r)-\phi_{(i)}^2}{\phi_{(i)}-\phi_{(i)}^2}$$
# +
orig_cov_pph = pd.read_csv(out_direc+"orig_pph.csv")
orig_cov_gph = pd.read_csv(out_direc+"orig_gph.csv")
radial_avg_orig_pph = np.mean(orig_cov_pph.values.T, axis=0)
radial_avg_orig_gph = np.mean(orig_cov_gph.values.T, axis=0)
print radial_avg_orig_pph.shape
correlation_func_orig = correlation_function(radial_avg_orig_pph)
# -
# We compute the slope of the correlation function at the origin for visualisation purposes only.
# +
N = 5
slope_orig_corr, slope_orig_corr_cov = curve_fit(straight_line_at_origin(correlation_func_orig[0]), range(0, N), correlation_func_orig[0:N])
print slope_orig_corr
slope_radial_orig, sloper_radial_orig_cov = curve_fit(straight_line_at_origin(radial_avg_orig_pph[0]), range(0, N), radial_avg_orig_pph[0:N])
print slope_radial_orig
# -
# ## Data Loading Synthetic Samples and Processing
#
# We perform the same computations for the synthetic samples and also compute the mean and standard deviation at
# each lag distance $r$ to show the validity of our matched models.
# +
cov_data = None
with open(out_direc+"covariance_data.json", "r") as f:
cov_data = json.load(f)
chord_lengths_gphs = []
chord_lengths_pphs = []
orig_chord_length_gphs, orig_chord_length_pphs = None, None
for key in cov_data.keys():
if key == 'orig':
orig_chord_length_pphs = cov_data[key]['chord_length_pph']
orig_chord_length_gphs = cov_data[key]['chord_length_gph']
else:
chord_lengths_pphs.append(cov_data[key]['chord_length_pph'])
chord_lengths_gphs.append(cov_data[key]['chord_length_gph'])
avg_chord_length_gphs = np.mean(chord_lengths_gphs)
avg_chord_length_pphs = np.mean(chord_lengths_pphs)
print orig_chord_length_pphs, orig_chord_length_gphs
print avg_chord_length_pphs, avg_chord_length_gphs
# +
cov_pphs = []
cov_gphs = []
for i in range(data_dic["seed_min"], data_dic["seed_max"]):
cov_pph = pd.read_csv(out_direc+"S_"+str(i)+"_pph.csv")
cov_gph = pd.read_csv(out_direc+"S_"+str(i)+"_gph.csv")
cov_pphs.append(cov_pph.values.T)
cov_gphs.append(cov_gph.values.T)
cov_pphs = np.array(cov_pphs)
cov_gphs = np.array(cov_gphs)
print cov_pphs.shape
directional_averages_pph = np.mean(cov_pphs, axis=0)
directional_averages_gph = np.mean(cov_gphs, axis=0)
radial_averages_pph = np.mean(cov_pphs.reshape(-1, cov_pphs.shape[-1]), axis=0)
radial_std_pph = np.std(cov_pphs.reshape(-1, cov_pphs.shape[-1]), axis=0)
slope_radial_pph, slope_radial_pph_cov = curve_fit(straight_line_at_origin(radial_averages_pph[0]), range(0, N), radial_averages_pph[0:N])
directional_std_pph = np.std(cov_pphs, axis=0)
directional_std_gph = np.std(cov_gphs, axis=0)
radial_averaged_corr = np.mean( [correlation_function(cov) for cov in cov_pphs.reshape(-1, cov_pphs.shape[-1])], axis=0)
radial_std_corr = np.std([correlation_function(cov) for cov in cov_pphs.reshape(-1, cov_pphs.shape[-1])], axis=0)
slope_synth_corr, slope_synth_corr_cov = curve_fit(straight_line_at_origin(radial_averaged_corr[0]), range(0, N), radial_averaged_corr[0:N])
directional_x = np.array([correlation_function(cov) for cov in cov_pphs[:, 0, :]])
directional_y = np.array([correlation_function(cov) for cov in cov_pphs[:, 1, :]])
directional_z = np.array([correlation_function(cov) for cov in cov_pphs[:, 2, :]])
directional_averages_normalized = np.zeros((3, directional_x.shape[1]))
directional_std_normalized = np.zeros((3, directional_x.shape[1]))
directional_averages_normalized[0] = np.mean(directional_x, axis=0)
directional_averages_normalized[1] = np.mean(directional_y, axis=0)
directional_averages_normalized[2] = np.mean(directional_z, axis=0)
directional_std_normalized[0] = np.std(directional_x, axis=0)
directional_std_normalized[1] = np.std(directional_y, axis=0)
directional_std_normalized[2] = np.std(directional_z, axis=0)
orig_normalized = np.array([correlation_function(cov) for cov in orig_cov_pph.values.T])
# +
porosity_avg = np.mean(cov_pphs[:, :, 0])
porosity_std = np.std(cov_pphs[:, :, 0])
print porosity_avg, porosity_std
porosity_orig_avg = np.mean(orig_cov_pph.values.T[:, 0])
porosity_orig_std= np.std(orig_cov_pph.values.T[:, 0])
print porosity_orig_avg
# -
# ## Directional Two-Point Probability Function Pore Phase including errorbars
# +
fig, ax = plt.subplots(1, 3, figsize=(36, 12))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.05)
for i, (j, direc) in zip(range(0, 6, 2), enumerate([r"$x$", r"$y$", r"$z$"])):
if j == 2:
ax[j].errorbar(range(len(directional_averages_pph[j])), directional_averages_pph[j], yerr=directional_std_pph[j], c="black", fmt='-', label=r"$Synthetic$")
ax[j].plot(range(len(orig_cov_pph.values.T[j])), orig_cov_pph.values.T[j], linestyle="--", linewidth=4, c="red", label=r"$Original$")
ax[j].axvline(data_dic["tisize"], color="blue", linestyle="-.", linewidth=3)
ax[j].text(data_dic["tisize"]+3., 0.1,r'$Training \ Image \ Size$',rotation=90, fontsize=26)
ax[j].legend(fontsize=32)
else:
ax[j].errorbar(range(len(directional_averages_pph[j])), directional_averages_pph[j], yerr=directional_std_pph[j], c="black", fmt='-')
ax[j].plot(range(len(orig_cov_pph.values.T[j])), orig_cov_pph.values.T[j], linestyle="--", linewidth=4, c="red")
ax[j].axvline(data_dic["tisize"], color="blue", linestyle="-.", linewidth=3)
ax[j].text(data_dic["tisize"]+3., 0.1,r'$Training \ Image \ Size$',rotation=90, fontsize=26)
for tick in ax[j].xaxis.get_major_ticks():
tick.label.set_fontsize(20)
for tick in ax[j].yaxis.get_major_ticks():
tick.label.set_fontsize(20)
for j, direc in enumerate([r"$x$", r"$y$", r"$z$"]):
ax[j].set_title(direc+r"$-Direction$", fontsize=36, y=1.02)
ax[j].set_xlabel(r"$Lag \ Distance \ r($"+direc+"$) \ [voxels]$", fontsize=36)
#ax[0].set_ylabel(r"$Two-Point \ Probability \ Function \ S_2(r)$", fontsize=34)
ax[0].set_ylabel(r"$S_2(r)$", fontsize=36)
for ax_handle in ax.flatten():
ax_handle.set_xlim(-1, 100)
ax_handle.set_ylim(0.0, 0.15)
ax_handle.grid()
fig.savefig("../../../paper/figures/ketton_directional_s2_porephase.png", bbox_extra_artists=None, bbox_inches='tight',dpi=72)
# -
# ## Directional Correlation Function
# +
fig, ax = plt.subplots(1, 3, figsize=(36, 12))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.05)
for i, (j, direc) in zip(range(0, 6, 2), enumerate([r"$x$", r"$y$", r"$z$"])):
if j == 2:
ax[j].errorbar(range(len(directional_averages_normalized[j])), directional_averages_normalized[j], yerr=directional_std_normalized[j], c="black", fmt='-', label=r"$Synthetic$")
ax[j].plot(range(len(orig_normalized[j])), orig_normalized[j], linestyle="--", linewidth=4, c="red", label=r"$Original$")
ax[j].axvline(data_dic["tisize"], color="blue", linestyle="-.", linewidth=3)
ax[j].text(data_dic["tisize"]+3., 0.55,r'$Training \ Image \ Size$',rotation=90, fontsize=26)
ax[j].legend(fontsize=32)
else:
ax[j].errorbar(range(len(directional_averages_normalized[j])), directional_averages_normalized[j], yerr=directional_std_normalized[j], c="black", fmt='-')
ax[j].plot(range(len(orig_normalized[j])), orig_normalized[j], linestyle="--", linewidth=4, c="red")
ax[j].axvline(data_dic["tisize"], color="blue", linestyle="-.", linewidth=3)
ax[j].text(data_dic["tisize"]+3., 0.55,r'$Training \ Image \ Size$',rotation=90, fontsize=26)
for tick in ax[j].xaxis.get_major_ticks():
tick.label.set_fontsize(20)
for tick in ax[j].yaxis.get_major_ticks():
tick.label.set_fontsize(20)
for j, direc in enumerate([r"$x$", r"$y$", r"$z$"]):
ax[j].set_title(direc+r"$-Direction$", fontsize=36, y=1.02)
ax[j].set_xlabel(r"$Lag \ Distance \ r($" + direc + r"$) \ [voxels]$", fontsize=36)
ax[0].set_ylabel(r"$Correlation \ Function \ \kappa(r)$", fontsize=34)
for ax_handle in ax.flatten():
ax_handle.set_xlim(-1, 100)
ax_handle.grid()
# -
# ## Correlation Function Plot and Chord Size
# +
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
ax.errorbar(range(len(radial_averaged_corr)), radial_averaged_corr, yerr=radial_std_corr, c="black", elinewidth=1, fmt='-', label=r"$Synthetic$", linewidth=3)
ax.plot(range(len(correlation_func_orig)), correlation_func_orig, linestyle="--", linewidth=4, c="red", label=r"$Original$")
slope_range = np.array(range(0, 20, 1))
ax.plot(slope_range, slope_range*float(slope_orig_corr)+1., linestyle="-.", color="red", linewidth=3)
ax.plot(slope_range, slope_range*float(slope_synth_corr)+1., linestyle="-", color="black", linewidth=1)
ax.axvline(data_dic["tisize"], color="blue", linestyle="-.", linewidth=3)
ax.text(data_dic["tisize"]+2., 0.5, r'$Training \ Image \ Size$',rotation=90, fontsize=26)
ax.axhline(0.0, linestyle="-", color="black", alpha=0.5)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(20)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(20)
ax.set_xlabel(r"$Lag \ Distance \ r \ [voxels]$", fontsize=36)
ax.set_ylabel(r"$Correlation \ Function$", fontsize=36)
ax.set_xlim(-1, 100)
ax.set_ylim(-0.2, 1.0)
ax.grid()
ax.legend(fontsize=32)
# +
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
ax.errorbar(range(len(radial_averages_pph)), radial_averages_pph, yerr=radial_std_pph, c="black", elinewidth=1, fmt='-', label=r"$Synthetic$", linewidth=3)
ax.plot(range(len(radial_avg_orig_pph)), radial_avg_orig_pph, linestyle="--", linewidth=4, c="red", label=r"$Original$")
slope_range = np.array(range(0, 20, 1))
ax.plot(slope_range, slope_range*float(slope_radial_orig)+radial_avg_orig_pph[0], linestyle="-.", color="red", linewidth=3)
ax.plot(slope_range, slope_range*float(slope_radial_pph)+radial_averages_pph[0], linestyle="-", color="black", linewidth=1)
ax.plot([0, 20], [porosity_avg, porosity_avg], linestyle="--", color="black", linewidth=3)
ax.text(10, 0.114, r'$\phi_{GAN}=%.2f \pm %.3f$' % (porosity_avg, porosity_std),rotation=0, fontsize=26)
ax.plot([0, 20], [porosity_orig_avg, porosity_orig_avg], linestyle="--", color="red", linewidth=3)
ax.text(10, 0.13, r'$\phi=%.2f$' % porosity_orig_avg, rotation=0, fontsize=26)
ax.axvline(data_dic["tisize"], color="blue", linestyle="-.", linewidth=3)
ax.text(data_dic["tisize"]+2., 0.1, r'$Training \ Image \ Size$',rotation=90, fontsize=26)
ax.axhline(0.0, linestyle="-", color="black", alpha=0.5)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(20)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(20)
ax.set_xlabel(r"$Lag \ Distance \ r\ [voxels]$", fontsize=36)
ax.text(0, -0.005, r'$\{$', rotation=-90, fontsize=50)
ax.text(orig_chord_length_pphs/2.-4, 0.007, r'$\overline{l}_C^{pore}$',rotation=0, fontsize=26)
ax.annotate(r'$\overline{l}_C^{grain}$', xy=(orig_chord_length_gphs, 0.0), xytext=(orig_chord_length_gphs+3, 0.006),
fontsize=26, arrowprops=dict(facecolor='black', shrink=0.01))
#ax.set_ylabel(r"$Two-Point \ Probability \ Function \ S_2(r)$", fontsize=36)
ax.set_ylabel(r"$S_2(r)$", fontsize=36)
ax.set_xlim(-1, 100)
ax.set_ylim(0.0, 0.15)
ax.grid()
ax.legend(fontsize=32)
fig.savefig("../../../paper/figures/ketton_radial_averaged_s2.png", bbox_extra_artists=None, bbox_inches='tight',dpi=72)
# -
| code/notebooks/covariance/Covariance Graphs Ketton.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import tfx
print("TFX version: {}".format(tfx.__version__))
# -
# # ExampleGen
# ## How to use an ExampleGen component
# ### importing csv
from tfx.utils.dsl_utils import csv_input
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
# variable, examples, points to the folder of example data
examples = csv_input("/Users/jiankaiwang/devops/tfx_taxi/taxi/data/simple/")
example_gen = CsvExampleGen(input_base=examples)
# ### importing tfrecord
from tfx.utils.dsl_utils import tfrecord_input
from tfx.components.example_gen.import_example_gen.component import ImportExampleGen
tfrecord_example = tfrecord_input("/Users/jiankaiwang/Google 雲端硬碟/public/document/201908_DL_ObjectDetection/tfrecords/")
tfrecord_example_gen = ImportExampleGen(tfrecord_example)
# ## Data Split
# ### Split dataset with ratio (while in output)
from tfx.proto import example_gen_pb2
# Split the dataset into train and eval subdatasets in ratio 3:1.
output = example_gen_pb2.Output(split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=3),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1)
]))
example_gen_split = CsvExampleGen(input_base=examples, output_config=output)
# ### Load the split dataset (while in input)
# Notice there is a `*` in declaring patterns.
#
# For the file-based retrieval system (like CsvExampleGen or ImportExampleGen), the pattern is the relative path to the input_base. For the query-based system like BigQuery (e.g. BigQueryExampleGen, PrestoExampleGen), the pattern is the SQL query.
#
# By default, the input is regarded as one source input and the ratio between train and eval is 2:1.
inputs = example_gen_pb2.Input(splits=[
example_gen_pb2.Input.Split(name="train", pattern="train/*"),
example_gen_pb2.Input.Split(name="eval", pattern="eval/*")
])
example_load_split = CsvExampleGen(input_base=examples, input_config=inputs)
# ## Customized ExampleGen
# The customized ExampleGen is inherited from BaseExampleGenExecutor, for example, extending from `FileBasedExampleGen` and `PrestoExampleGen`.
# ### Customized File-based ExampleGen
# +
from tfx.components.base import executor_spec
from tfx.components.example_gen.component import FileBasedExampleGen
from tfx.components.example_gen.csv_example_gen import executor
from tfx.utils.dsl_utils import external_input
examples = external_input("/Users/jiankaiwang/devops/tfx_taxi/taxi/data/simple/")
example_gen = FileBasedExampleGen(
input_base=examples,
custom_executor_spec=executor_spec.ExecutorClassSpec(executor.Executor))
# -
# ### Customized Query-based ExampleGen
# +
from tfx.examples.custom_components.presto_example_gen.proto import presto_config_pb2
from tfx.examples.custom_components.presto_example_gen.presto_component.component import PrestoExampleGen
presto_config = presto_config_pb2.PrestoConnConfig(host='localhost', port=8080)
example_gen = PrestoExampleGen(presto_config, query='SELECT * FROM chicago_taxi_trips')
# -
| frameworks/tensorflow/TFX_ExampleGen.ipynb |
# +
# Copyright 2010-2017 Google
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from ortools.sat.python import cp_model
class SolutionPrinter(cp_model.CpSolverSolutionCallback):
"""Print intermediate solutions."""
def __init__(self, num_vendors, num_hours, possible_schedules,
selected_schedules, hours_stat, min_vendors):
cp_model.CpSolverSolutionCallback.__init__(self)
self.__solution_count = 0
self.__num_vendors = num_vendors
self.__num_hours = num_hours
self.__possible_schedules = possible_schedules
self.__selected_schedules = selected_schedules
self.__hours_stat = hours_stat
self.__min_vendors = min_vendors
def OnSolutionCallback(self):
self.__solution_count += 1
print('Solution %i:' % self.__solution_count)
print(' min vendors:', self.__min_vendors)
for i in range(self.__num_vendors):
print(' - vendor %i: ' % i,
self.__possible_schedules[self.Value(self.__selected_schedules[i])])
print()
for j in range(self.__num_hours):
print(' - # workers on hour%2i: ' % j, end=' ')
print(self.Value(self.__hours_stat[j]), end=' ')
print()
print()
def SolutionCount(self):
return self.__solution_count
# Create the model.
model = cp_model.CpModel()
#
# data
#
num_vendors = 9
num_hours = 10
num_work_types = 1
traffic = [100, 500, 100, 200, 320, 300, 200, 220, 300, 120]
max_traffic_per_vendor = 100
# Last columns are :
# index_of_the_schedule, sum of worked hours (per work type).
# The index is useful for branching.
possible_schedules = [[1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 8],
[1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 4],
[0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 2, 5],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 3, 4],
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 4, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0]]
num_possible_schedules = len(possible_schedules)
selected_schedules = []
vendors_stat = []
hours_stat = []
# Auxiliary data
min_vendors = [t // max_traffic_per_vendor for t in traffic]
all_vendors = range(num_vendors)
all_hours = range(num_hours)
#
# declare variables
#
x = {}
for v in all_vendors:
tmp = []
for h in all_hours:
x[v, h] = model.NewIntVar(0, num_work_types, 'x[%i,%i]' % (v, h))
tmp.append(x[v, h])
selected_schedule = model.NewIntVar(0, num_possible_schedules - 1,
's[%i]' % v)
hours = model.NewIntVar(0, num_hours, 'h[%i]' % v)
selected_schedules.append(selected_schedule)
vendors_stat.append(hours)
tmp.append(selected_schedule)
tmp.append(hours)
model.AddAllowedAssignments(tmp, possible_schedules)
#
# Statistics and constraints for each hour
#
for h in all_hours:
workers = model.NewIntVar(0, 1000, 'workers[%i]' %h)
model.Add(workers == sum(x[v, h] for v in all_vendors))
hours_stat.append(workers)
model.Add(workers * max_traffic_per_vendor >= traffic[h])
#
# Redundant constraint: sort selected_schedules
#
for v in range(num_vendors - 1):
model.Add(selected_schedules[v] <= selected_schedules[v + 1])
# Solve model.
solver = cp_model.CpSolver()
solution_printer = SolutionPrinter(num_vendors, num_hours, possible_schedules,
selected_schedules, hours_stat,
min_vendors)
status = solver.SearchForAllSolutions(model, solution_printer)
print('Status = %s' % solver.StatusName(status))
print('Statistics')
print(' - conflicts : %i' % solver.NumConflicts())
print(' - branches : %i' % solver.NumBranches())
print(' - wall time : %f s' % solver.WallTime())
print(' - number of solutions found: %i' % solution_printer.SolutionCount())
| examples/notebook/vendor_scheduling_sat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Investor - Flow of Funds - US
# ### Introduction:
#
# Special thanks to: https://github.com/rgrp for sharing the dataset.
#
# ### Step 1. Import the necessary libraries
import pandas as pd
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv).
# ### Step 3. Assign it to a variable called
called = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
called.head()
# ### Step 4. What is the frequency of the dataset?
# +
# weekly data
# -
# ### Step 5. Set the column Date as the index.
called = called.set_index('Date')
called.head()
# ### Step 6. What is the type of the index?
called.index
# ### Step 7. Set the index to a DatetimeIndex type
called.index = pd.to_datetime(called.index)
called.index
# ### Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
monthly = called.resample('M').sum()
monthly
# ### Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
monthly = monthly.dropna()
monthly
# ### Step 10. Good, now we have the monthly data. Now change the frequency to year.
year = monthly.resample('AS-JAN').sum()
year
# ### BONUS: Create your own question and answer it.
| 09_Time_Series/Investor_Flow_of_Funds_US/Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Preprocessing for Evaluation
# To evaluate the trust mining method, we use several BPMN diagrams, measure their features, mining time, and metrics, and compare them.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.options.display.max_columns = None
pd.set_option('display.max_rows', None)
# The first dataset is the BPMN 6219 set from Unicam. See here: https://ieee-dataport.org/documents/6219-pairs-bpmn-images-and-definition-files
bpmn_6219 = pd.read_csv("../evaluation_bpmn6219.csv")
bpmn_81 = pd.read_csv("../evaluation_bpmn81.csv")
bpmn_6219_81 = pd.concat([bpmn_6219, bpmn_81])  # DataFrame.append is deprecated/removed in recent pandas
len(bpmn_6219_81)
plt.hist(bpmn_6219_81[['endEventRequiredErrors',
'startEventRequiredErrors', 'fakeJoinErrors', 'noDisconnectedErrors',
'superfluousGatewayErrors', 'subProcessBlankStartEventErrors',
'noGatewayJoinForkErrors', 'noImplicitSplitErrors',
'noInclusiveGatewayErrors', 'conditionalFlowErrors']], range=(0,5))
plt.show()
bpmn_6219_81[['endEventRequiredErrors',
'startEventRequiredErrors', 'fakeJoinErrors', 'noDisconnectedErrors',
'superfluousGatewayErrors', 'subProcessBlankStartEventErrors',
'noGatewayJoinForkErrors', 'noImplicitSplitErrors',
'noInclusiveGatewayErrors', 'conditionalFlowErrors']].sum()
acceptable_bpmn_6219_81_df = bpmn_6219_81[(bpmn_6219_81['endEventRequiredErrors'] == 0) &
(bpmn_6219_81['startEventRequiredErrors'] == 0) &
(bpmn_6219_81['noDisconnectedErrors'] == 0) &
#(bpmn_6219_81['superfluousGatewayErrors'] == 0) &
(bpmn_6219_81['subProcessBlankStartEventErrors'] == 0) &
(bpmn_6219_81['noGatewayJoinForkErrors'] == 0) &
(bpmn_6219_81['noImplicitSplitErrors'] == 0) &
(bpmn_6219_81['noInclusiveGatewayErrors'] == 0) &
(bpmn_6219_81['conditionalFlowErrors'] == 0) #&
#(bpmn_6219_81['fakeJoinErrors'] == 0)
]
usable_bpmn_6219_81_df = acceptable_bpmn_6219_81_df[acceptable_bpmn_6219_81_df["modelNP"] > 1].copy()  # copy to avoid SettingWithCopyWarning on the assignment below
len(usable_bpmn_6219_81_df)
usable_bpmn_6219_81_df['tapeAvgRLU'] = usable_bpmn_6219_81_df['tapeALU'] / (usable_bpmn_6219_81_df['tapeGU'])
unique_bpmn_6219_81_df = usable_bpmn_6219_81_df.drop_duplicates(subset=['modelIsValidBPMN', 'modelTNT', 'modelTNCS', 'modelTNA',
'modelTNDO', 'modelTNG', 'modelTNEE', 'modelTNIE', 'modelTNSE',
'modelTNE', 'modelTNSF', 'modelNP', 'modelNL', 'modelCLA', 'modelCLP',
'modelPDOPin', 'modelPDOPout', 'modelPDOTOut', 'modelPLT', 'tapeGU',
'tapeLUB', 'tapeAvgLUB',
'tapeAvgDI', 'tapeAvgDD', 'tapeAvgMI', 'tapeAvgMD',
'applicationDomain', 'endEventRequiredErrors',
'startEventRequiredErrors', 'fakeJoinErrors', 'noDisconnectedErrors',
'superfluousGatewayErrors', 'subProcessBlankStartEventErrors',
'noGatewayJoinForkErrors', 'noImplicitSplitErrors',
'noInclusiveGatewayErrors', 'conditionalFlowErrors'], keep='first')
len(unique_bpmn_6219_81_df)
usable_bpmn_6219_81_df.columns
application_groups = unique_bpmn_6219_81_df.groupby('applicationDomain').count()
application_groups.sort_values('fileName')
# +
plt.style.use('ggplot')
application_groups = unique_bpmn_6219_81_df.groupby('applicationDomain').count().sort_values('fileName', ascending=False)
labels = application_groups.index
sizes = application_groups['fileName']
fig1, ax1 = plt.subplots()
wedges, texts, autotexts = ax1.pie(sizes, labels=None, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
ax1.legend(wedges, labels,
title="Application Domains",
loc="best",
bbox_to_anchor=(0.7, 0, 0.5, 1))
ax1.figure.set_size_inches(8, 6)
plt.savefig("../plots/pie_application_domains.pdf")
plt.show()
# +
#valid_bpmn_6219_81_df = bpmn_6219_81[bpmn_6219_81["modelIsValidBPMN"]]
#len(valid_bpmn_6219_81_df)
# -
unique_bpmn_6219_81_df['modelTotalProcessElements'] = unique_bpmn_6219_81_df['modelTNA'] + unique_bpmn_6219_81_df['modelTNG'] + unique_bpmn_6219_81_df['modelTNE'] + unique_bpmn_6219_81_df['modelTNSE'] + unique_bpmn_6219_81_df['modelTNEE'] + unique_bpmn_6219_81_df['modelTNDO']
unique_bpmn_6219_81_df.head()
usable_bpmn_6219_81_df["modelNP"].unique()
# ## Characteristics
# To get an overview over the raw dataset, we use violin plots to show the distribution of certain characteristics in the models and compare them later to the Metrics introduced with Trust Mining.
#
# The violin plots use a *kernel density estimator* with a width of 8 to smoothen the unequally distributed features.
import seaborn as sns
import matplotlib.pyplot as plt
static_characteristics = unique_bpmn_6219_81_df[['modelTNT', 'modelTNCS', 'modelTNA',
'modelTNDO', 'modelTNG', 'modelTNEE', 'modelTNIE', 'modelTNSE',
'modelTNE', 'modelTNSF', 'modelNP', 'modelNL', 'modelCLA', 'modelCLP',
'modelPDOPin', 'modelPDOPout', 'modelPDOTOut', 'modelPLT', 'tapeGU',
'tapeALU', 'tapeRLU', 'tapeLUB', 'tapeAvgLUB',
'tapeAvgDI', 'tapeAvgDD', 'tapeAvgMI', 'tapeAvgMD',
'tapeExecutionTimeMs', 'applicationDomain', 'endEventRequiredErrors',
'startEventRequiredErrors', 'fakeJoinErrors', 'noDisconnectedErrors',
'superfluousGatewayErrors', 'subProcessBlankStartEventErrors',
'noGatewayJoinForkErrors', 'noImplicitSplitErrors',
'noInclusiveGatewayErrors', 'conditionalFlowErrors']]
raw_model_characteristics_with_activities = unique_bpmn_6219_81_df[['modelTNA','modelTNDO', 'modelTNG',
'modelTNE', 'modelNP', 'modelNL', 'modelCLA', 'modelCLP']]
raw_model_characteristics_with_activities.describe().round(2)
raw_model_characteristics_with_activities.describe().round(2).to_csv('../raw_model_characteristics.csv')
raw_model_characteristics_activities = unique_bpmn_6219_81_df[['modelTNA']].dropna()
raw_model_characteristics_activities_series_stacked = raw_model_characteristics_activities.stack()
raw_model_characteristics_activities_series_stacked.index = raw_model_characteristics_activities_series_stacked.index.droplevel(level=0)
raw_model_characteristics_activities_df = pd.DataFrame(raw_model_characteristics_activities_series_stacked).reset_index()
raw_model_characteristics_activities_df.columns = ['feature', 'value']
ax = sns.violinplot(x="value", y="feature", data=raw_model_characteristics_activities_df, scale="count", palette="husl", width=0.8, cut=0, inner="stick", bw=0.1, linewidth=0.8)
ax.figure.set_size_inches(15, len(raw_model_characteristics_activities_df["feature"].unique())*2)
raw_model_characteristics = unique_bpmn_6219_81_df[['modelTNDO', 'modelTNG',
'modelTNE', 'modelNP', 'modelNL', 'modelCLA', 'modelCLP']].dropna()
raw_model_characteristics_series_stacked = raw_model_characteristics.stack()
raw_model_characteristics_series_stacked.index = raw_model_characteristics_series_stacked.index.droplevel(level=0)
raw_model_df = pd.DataFrame(raw_model_characteristics_series_stacked).reset_index()
raw_model_df.columns = ['feature', 'value']
ax = sns.violinplot(x="value", y="feature", data=raw_model_df, scale="count", palette="husl", width=0.8, cut=0, inner="stick", bw=0.1, linewidth=0.8)
ax.figure.set_size_inches(10, len(raw_model_df["feature"].unique())/2)
ax.set_xlim(0,14)
ax.set_xticks(np.arange(0,15))
plt.savefig("../plots/violin_raw_model_characteristics.pdf")
# +
fig, axs = plt.subplots(2, 4)
fig.set_size_inches(18, 12)
#fig.suptitle('Distribution')
axs[0, 0] = raw_model_characteristics_with_activities['modelTNA'].hist(bins=20, ax=axs[0, 0], color='k')
axs[0, 0].set_title('TNA')
axs[0, 0].set_ylabel('Occurrences')
axs[0, 0].set_xlabel('Value')
axs[0, 1] = raw_model_characteristics_with_activities['modelTNE'].hist(ax=axs[0, 1], color='k')
axs[0, 1].set_title('TNE')
axs[0, 1].set_ylabel('Occurrences')
axs[0, 1].set_xlabel('Value')
axs[0, 2] = raw_model_characteristics_with_activities['modelTNG'].hist(ax=axs[0, 2], color='k')
axs[0, 2].set_title('TNG')
axs[0, 2].set_ylabel('Occurrences')
axs[0, 2].set_xlabel('Value')
axs[0, 3] = raw_model_characteristics_with_activities['modelNP'].hist(ax=axs[0, 3], color='k')
axs[0, 3].set_title('NP')
axs[0, 3].set_ylabel('Occurrences')
axs[0, 3].set_xlabel('Value')
axs[1, 0] = raw_model_characteristics_with_activities['modelNL'].hist(ax=axs[1, 0], bins=6, color='k')
axs[1, 0].set_title('NL')
axs[1, 0].set_ylabel('Occurrences')
axs[1, 0].set_xlabel('Value')
axs[1, 1] = raw_model_characteristics_with_activities['modelTNDO'].hist(ax=axs[1, 1], color='k')
axs[1, 1].set_title('TNDO')
axs[1, 1].set_ylabel('Occurrences')
axs[1, 1].set_xlabel('Value')
axs[1, 2] = raw_model_characteristics_with_activities['modelCLA'].hist(ax=axs[1, 2], color='k')
axs[1, 2].set_title('CLA')
axs[1, 2].set_ylabel('Occurrences')
axs[1, 2].set_xlabel('Value')
axs[1, 3] = raw_model_characteristics_with_activities['modelCLP'].hist(ax=axs[1, 3], color='k')
axs[1, 3].set_title('CLP')
axs[1, 3].set_ylabel('Occurrences')
axs[1, 3].set_xlabel('Value')
plt.savefig("../plots/hist_raw_model_characteristics.pdf")
# -
# ## TAPE characteristics
trust_characteristics = unique_bpmn_6219_81_df[['tapeGU', 'tapeALU', 'tapeAvgRLU', 'tapeAvgLUB', 'tapeAvgDD', 'tapeAvgMD', 'tapeExecutionTimeMs', 'modelTotalProcessElements', 'modelTNSF']].dropna()
trust_characteristics.describe().round(2)
trust_characteristics_stacked = trust_characteristics.stack()
trust_characteristics_stacked.index = trust_characteristics_stacked.index.droplevel(level=0)
trust_characteristics_model_df = pd.DataFrame(trust_characteristics_stacked).reset_index()
trust_characteristics_model_df.columns = ['feature', 'value']
ax = sns.violinplot(x="value", y="feature", data=trust_characteristics_model_df, scale="count", palette="husl", width=0.8, cut=0, inner="stick", bw=0.1, linewidth=0.8)
ax.figure.set_size_inches(10, len(trust_characteristics_model_df["feature"].unique())/2)
#ax.set_xlim(0,14)
ax.set_xticks(np.arange(0,15))
# +
fig, axs = plt.subplots(2, 3)
fig.set_size_inches(15, 12)
#fig.suptitle('Distribution')
axs[0, 0] = trust_characteristics['tapeGU'].hist(label='GU', bins=20, ax=axs[0, 0])
axs[0, 0].set_title('GU')
axs[0, 0].set_ylabel('Occurrences')
axs[0, 0].set_xlabel('Value')
axs[0, 1] = trust_characteristics['tapeALU'].hist(label='ALU', bins=20, ax=axs[0, 1])
axs[0, 1].set_title('ALU')
axs[0, 1].set_ylabel('Occurrences')
axs[0, 1].set_xlabel('Value')
axs[0, 2] = trust_characteristics['tapeAvgRLU'].hist(label='avg RLU', bins=20, ax=axs[0, 2])
axs[0, 2].set_title('AvgRLU')
axs[0, 2].set_ylabel('Occurrences')
axs[0, 2].set_xlabel('Value')
axs[1, 0] = trust_characteristics['tapeAvgLUB'].hist(label='avg LUB', bins=20, ax=axs[1, 0])
axs[1, 0].set_title('AvgLUB')
axs[1, 0].set_ylabel('Occurrences')
axs[1, 0].set_xlabel('Value')
axs[1, 1] = trust_characteristics['tapeAvgDD'].hist(label='avg DD', bins=20, ax=axs[1, 1])
axs[1, 1].set_title('AvgDD')
axs[1, 1].set_ylabel('Occurrences')
axs[1, 1].set_xlabel('Value')
axs[1, 2] = trust_characteristics['tapeAvgMD'].hist(label='avg MD', bins=20, ax=axs[1, 2])
axs[1, 2].set_title('AvgMD')
axs[1, 2].set_ylabel('Occurrences')
axs[1, 2].set_xlabel('Value')
plt.savefig("../plots/hist_tape_characteristics.pdf")
# -
trust_characteristics.describe()
trust_characteristics.head()
sns.scatterplot(data=trust_characteristics, x="modelTotalProcessElements", y="tapeExecutionTimeMs", hue="modelTNSF")
plt.savefig("../plots/scatter_tape_execution_time.pdf")
usable_bpmn_6219_81_df["modelNP"].hist()
unique_bpmn_6219_81_df.describe()
import sys
# !{sys.executable} -m pip install mpltools
| trustminer-evaluation/src/aggregation_evaluation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Extra homework
#
#
# #### <NAME>
# For those to whom too much Python is never enough.
import numpy as np
import quantecon as qe
import matplotlib.pyplot as plt
from scipy.linalg import eigvals
from numba import jit
# ### Exercise 1
#
# Let $X$ be an $n \times n$ matrix with all positive elements. The spectral radius $r(X)$ of $X$ is maximum of $|\lambda|$ over all eigenvalues $\lambda$ of $X$, where $|\cdot|$ is the modulus of a complex number.
#
# A version of the **local spectral radius theorem** states that if $X$ has all positive entries and $v$ is any strictly positive $n \times 1$ vector, then
#
# $$
# \lim_{i \to \infty} \| X^i v \|^{1/i} \to r(X)
# \qquad \qquad \text{(LSR)}
# $$
#
# where $\| \cdot \|$ is the usual Euclidean norm.
#
# Intuitively, the norm of the iterates of a positive vector scale like $r(X)$ asymptotically.
#
# The data file `matrix_data.txt` contains the data for a single matrix $X$.
#
# 1. Read it in and compute the spectral radius using the tools for working with eigenvalues in `scipy.linalg`.
#
# 2. Test the claim in (LSR) iteratively, computing $\| X^i v \|^{1/i}$ for successively larger values of $i$. See if the sequence so generated converges to $r(X)$.
# !cat matrix_data.txt
X = np.loadtxt('matrix_data.txt')
n, _ = X.shape
# Using tools in `scipy.linalg`
np.max(np.abs(eigvals(X)))
# Iteratively:
# +
tol = 1e-9
iter_max = 40000
sr_estimate = 1.0
error = tol + 1
X_power = X
i = 1
o = np.ones((n, 1))
while error > tol and i < iter_max:
new_estimate = (np.linalg.norm(X_power @ o))**(1/i)
error = np.abs(sr_estimate - new_estimate)
X_power = X_power @ X
i += 1
sr_estimate = new_estimate
print(sr_estimate)
# -
i
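# As a sanity check independent of `matrix_data.txt`, the (LSR) claim can also be verified on a small made-up strictly positive matrix (the 3x3 values below are illustrative, not from the exercise data):

```python
import numpy as np
from scipy.linalg import eigvals

# A made-up strictly positive 3x3 matrix (illustrative only).
A = np.array([[0.4, 0.1, 0.3],
              [0.2, 0.5, 0.1],
              [0.1, 0.2, 0.6]])
r = np.max(np.abs(eigvals(A)))  # spectral radius from the eigenvalues

# Iterate ||A^k v||^(1/k) for a strictly positive v and watch it approach r.
v = np.ones(3)
A_power = A.copy()
for k in range(1, 2000):
    estimate = np.linalg.norm(A_power @ v) ** (1 / k)
    A_power = A_power @ A

print(r, estimate)  # the two numbers nearly coincide
```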
# ### Exercise 2
# Recall that the quadratic map generates time series of the form
#
# $$ x_{t+1} = 4 \, x_t (1 - x_t) $$
#
# for some given $x_0$, and that these trajectories are chaotic.
#
# This means that different initial conditions generate seemingly very different outcomes.
#
# Nevertheless, the regions of the state space where these trajectories spend most of their time are in fact typically invariant to the initial condition.
#
# Illustrate this by generating 100 histograms of time series generated from the quadratic map, with $x_0$ drawn independently from the uniform distribution on $(0, 1)$. Use relatively long time series.
#
# Do they all look alike?
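# Before running the histogram experiment below, sensitive dependence on initial conditions can be illustrated directly (a quick sketch; the starting point 0.3 and the 1e-10 perturbation are arbitrary illustrative choices):

```python
# Two orbits of x_{t+1} = 4 x_t (1 - x_t), started 1e-10 apart.
x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for t in range(200):
    x = 4.0 * x * (1 - x)
    y = 4.0 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # the gap grows from 1e-10 to order one
```

# Despite this divergence of individual trajectories, the long-run distributions plotted below look alike.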
@jit(nopython=True)
def quadmap_series(x0, n, x_vec):
x_vec[0] = x0
for t in range(n-1):
x_vec[t+1] = 4.0 * x_vec[t] * (1 - x_vec[t])
# +
num_figs = 100
initial_conditions = np.random.uniform(size=num_figs)
ts_length = 100_000
x_vec = np.empty(ts_length)
for x0 in initial_conditions:
quadmap_series(x0, ts_length, x_vec)
fig, ax = plt.subplots()
ax.hist(x_vec)
plt.show()
# -
| day2/homework/extra_homework_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Code Samples From Large Group
#
# +
password = "<PASSWORD>"
yourpassword = input("First Time: Enter your password:")
while yourpassword != password:
print("That is not correct! ")
yourpassword = input("Re Enter your password: ")
# -
for i in range(10):
    print(i)
for i in "pumpkin":
print(i)
# note: this raises a TypeError, because an int such as 50 is not iterable
for i in 50:
    print(i)
word = "pizza"
for letter in word:
print(letter)
choices = ["rock", "paper", "scissors"]
for choice in choices:
if choice== "rock":
print("Yay!")
else:
print(choice)
for number in range(2,22,2):
print(number)
number = int(input("Enter number for your mult. table: "))
plippers = int(input("Number of multpipliers: "))
for i in range(plippers):
product = i * number
print(f"{i} X {number} = {product}")
# +
# count the number of vowels in a word
# word
# output: number of vowels
# examples:
#   "mike" --> 2
#   "try"  --> 0
#   "pig"  --> 1
#
# pseudocode:
#   input a word
#   for each letter in the word
#       if the letter is a vowel
#           increment vowel count
# +
word = "pig"
vowels = 0
for letter in word:
if letter == 'a' or letter == 'e' or letter =='i' or letter == 'o' or letter=='u':
vowels = vowels + 1
print(f"{word} has {vowels} vowels")
# +
word = input("Enter a word: ")
vowels = 0
for letter in word:
if letter == 'a' or letter == 'e' or letter =='i' or letter == 'o' or letter=='u':
vowels = vowels + 1
print(f"{word} has {vowels} vowels")
# +
word = input("Enter a word: ")
vowels = 0
for letter in word:
if letter == 'a' or letter == 'e' or letter =='i' or letter == 'o' or letter=='u':
vowels = vowels + 1
print(f"{word} has {vowels} vowels")
# +
# any sentinel loop
while True:
word = input("Enter a word, or just press ENTER to stop:")
if word == "":
break
# the work....
# -
while True:
word = input("Enter a word, or just press ENTER to stop:")
if word == "":
break
vowels = 0
for letter in word:
if letter == 'a' or letter == 'e' or letter =='i' or letter == 'o' or letter=='u':
vowels = vowels + 1
print(f"{word} has {vowels} vowels")
| lessons/04-Iterations/LargeGroup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Coding Matrices
#
# Here are a few exercises to get you started with coding matrices. The exercises start off with vectors and then get more challenging
#
# ### Vectors
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable v
v = [5, 10, 2, 6, 1]
v
# The v variable contains a Python list. This list could also be thought of as a 1x5 matrix with 1 row and 5 columns. How would you represent this list as a matrix?
# +
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable mv
### The difference between a vector and a matrix in Python is that
### a matrix is a list of lists.
### Hint: See the last quiz on the previous page
mv = [v]
# mv
# -
# How would you represent this vector in its vertical form with 5 rows and 1 column? When defining matrices in Python, each row is a list. So in this case, you have 5 rows and thus will need 5 lists.
#
# As an example, this is what the vector $$<5, 7>$$ would look like as a 1x2 matrix in Python:
# ```python
# matrix1by2 = [
# [5, 7]
# ]
# ```
#
# And here is what the same vector would look like as a 2x1 matrix:
# ```python
# matrix2by1 = [
# [5],
# [7]
# ]
# ```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable vT
### vT is a 5x1 matrix
vT = [
[5],
[10],
[2],
[6],
[1]
]
# ### Assigning Matrices to Variables
# +
### TODO: Assign the following matrix to the variable m
### 8 7 1 2 3
### 1 5 2 9 0
### 8 2 2 4 1
m = [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 0],
[8, 2, 2, 4, 1]
]
# -
# ### Accessing Matrix Values
### TODO: In matrix m, change the value
### in the second row last column from 0 to 5
### Hint: You do not need to rewrite the entire matrix
m[1][4] = 5
# ### Looping through Matrices to do Math
#
# Coding mathematical operations with matrices can be tricky. Because matrices are lists of lists, you will need to use a for loop inside another for loop. The outside for loop iterates over the rows and the inside for loop iterates over the columns.
#
#
# Here is some pseudo code
# ```python
# for i in number of rows:
# for j in number of columns:
# mymatrix[i][j]
# ```
#
# To figure out how many times to loop over the matrix, you need to know the number of rows and number of columns.
#
#
# If you have a variable with a matrix in it, how could you figure out the number of rows? How could you figure out the number of columns? The [len](https://docs.python.org/2/library/functions.html#len) function in Python might be helpful.
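# For instance (a small illustration with a hypothetical `example` matrix, separate from the exercises):

```python
# A 3x5 matrix: 3 rows, each row a list of 5 values.
example = [
    [8, 7, 1, 2, 3],
    [1, 5, 2, 9, 0],
    [8, 2, 2, 4, 1]
]
num_rows = len(example)     # len of the outer list -> number of rows
num_cols = len(example[0])  # len of one inner list -> number of columns
print(num_rows, num_cols)   # → 3 5
```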
# ### Scalar Multiplication
# +
### TODO: Use for loops to multiply each matrix element by 5
### Store the answer in the r variable. This is called scalar
### multiplication
###
### HINT: First write a for loop that iterates through the rows
### one row at a time
###
### Then write another for loop within the for loop that
### iterates through the columns
###
### If you used the variable i to represent rows and j
### to represent columns, then m[i][j] would give you
### access to each element in the matrix
###
### Because r is an empty list, you cannot directly assign
### a value like r[i][j] = m[i][j]. You might have to
### work on one row at a time and then use r.append(row).
r = []
for i in range(len(m)):
row = []
for j in range(len(m[0])):
row.append(m[i][j]*5)
r.append(row)
print(r)
# -
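# The nested loops above can also be expressed as a one-line vectorized operation with NumPy (a sketch, assuming NumPy is available; it is not required for this exercise):

```python
import numpy as np

# the matrix after the earlier edit (second row, last column changed to 5)
m_np = np.array([
    [8, 7, 1, 2, 3],
    [1, 5, 2, 9, 5],
    [8, 2, 2, 4, 1]
])
# scalar multiplication applies to every element at once
r_np = (m_np * 5).tolist()
```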
# ### Printing Out a Matrix
# +
### TODO: Write a function called matrix_print()
### that prints out a matrix in
### a way that is easy to read.
### Each element in a row should be separated by a tab
### And each row should have its own line
### You can test out your results with the m matrix
### HINT: You can use a for loop within a for loop
### In Python, the print() function will be useful
### print(5, '\t', end = '') will print out the integer 5,
### then add a tab after the 5. The end = '' makes sure that
### the print function does not print out a new line if you do
### not want a new line.
### Your output should look like this
### 8 7 1 2 3
### 1 5 2 9 5
### 8 2 2 4 1
def matrix_print(matrix):
    for row in matrix:
        for column in row:
            # print the element followed by a tab, staying on the same line
            print(column, '\t', end='')
        # move to the next line after each row
        print()
m = [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 5],
[8, 2, 2, 4, 1]
]
matrix_print(m)
# -
# ### Test Your Results
# +
### You can run these tests to see if you have the expected
### results. If everything is correct, this cell has no output
assert v == [5, 10, 2, 6, 1]
assert mv == [
[5, 10, 2, 6, 1]
]
assert vT == [
[5],
[10],
[2],
[6],
[1]]
assert m == [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 5],
[8, 2, 2, 4, 1]
]
assert r == [
[40, 35, 5, 10, 15],
[5, 25, 10, 45, 25],
[40, 10, 10, 20, 5]
]
# -
# ### Print Out Your Results
### Run this cell to print out your answers
print(v)
print(mv)
print(vT)
print(m)
print(r)
| 3-object-tracking-and-localization/activities/6-matrices-and-transformation-state/2. Matrices in python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/rnn/seq_to_seq_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="U9ZxIisYLDxo"
import numpy as np
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.models import Sequential
# + [markdown] id="gQT0ooynK1F5"
# ## LSTM quick recap
# + [markdown] id="MGEQ_UklLFR9"
# When you create a layer of LSTM memory units, you specify the `number of memory units` within the layer.
#
# ```python
# lstm = tf.keras.layers.LSTM(30) # number of memory units=30
# ```
#
# Each unit or cell within the layer has an `internal cell state` ($c$) and outputs a `hidden state` ($h$)
#
# ```python
# inputs = Input(shape=(3, 1))
# lstm, state_h, state_c = tf.keras.layers.LSTM(1, return_state=True)(inputs)
# ```
#
# Each LSTM cell will output one hidden state $h$ for each input.
#
# ```python
# h = tf.keras.layers.LSTM(1)(X)
# ```
#
# + id="rD2GyT7cTUgH"
# input time steps
t1 = 0.1
t2 = 0.2
t3 = 0.3
time_steps = [t1, t2, t3]
one_memory_unit = 1
# + colab={"base_uri": "https://localhost:8080/"} id="Pc2NMUHRLFYN" outputId="5ecda9a6-934b-456f-995d-780d2ad02802"
# define the model
inputs1 = layers.Input(shape=(3, 1))
lstm1 = layers.LSTM(one_memory_unit)(inputs1)
model = Model(inputs=inputs1, outputs=lstm1)
# define input data -> inputs should include the
# batch reference (batch, time steps->sequence length, ?)
data = np.array(time_steps).reshape((1, 3, 1))
# make a prediction -> should output a single scalar hidden state
model.predict(data)[0][0]
# + [markdown] id="5qmAyjbDRq_X"
# It's possible to access the `hidden state output` $\ldots[\hat{y}_{t-1}],[\hat{y}_{t}],[\hat{y}_{t+1}]\ldots$ for each input time step.
#
# ```python
# LSTM(1, return_sequences=True)
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="dQEO7ucPLKOY" outputId="fb6ad618-3751-409d-a808-ca2bf4bcab3b"
# define the model
inputs1 = layers.Input(shape=(3, 1))
lstm1 = layers.LSTM(one_memory_unit, return_sequences=True)(inputs1)
model = Model(inputs=inputs1, outputs=lstm1)
# define input data -> inputs should include the
# batch reference (batch, time steps->sequence length, ?)
data = np.array(time_steps).reshape((1, 3, 1))
# make a prediction -> should output y^ for each time step
model.predict(data)
# + [markdown] id="Hwr87gFDV7JT"
# Each LSTM cell retains an `internal state` that `is not output`, called the `cell state` ($c$).
#
# Keras provides the return_state argument to the LSTM layer that will provide access to the `hidden` state ($state_h$) and the `cell` state ($state_c$).
#
# ```python
# lstm1, state_h, state_c = LSTM(1, return_state=True)(inputs)
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="XPWLhVXhLLon" outputId="9183824f-6a57-4249-e302-db3846e9c8b1"
# define the model
inputs1 = layers.Input(shape=(3, 1))
lstm1, state_h, state_c = layers.LSTM(one_memory_unit, return_state=True)(inputs1)
model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])
# define input data -> inputs should include the
# batch reference (batch, time steps->sequence length, ?)
data = np.array(time_steps).reshape((1, 3, 1))
# make a prediction -> returns the final output plus the hidden and cell states
model.predict(data)
# + [markdown] id="fZpKWQGv1YkF"
# Hidden state for the last time step
# + colab={"base_uri": "https://localhost:8080/"} id="X1M-PfKsXw-p" outputId="e6001365-20d8-49e1-c071-6bf0ebd3553e"
state_h[0]
# + [markdown] id="eXOHzh301cYp"
# Cell state for the last step
# + colab={"base_uri": "https://localhost:8080/"} id="uxEAaFXs0a_f" outputId="f9c03b0a-17b4-44a1-c7de-2c40cdb4f22f"
state_c[0]
# + [markdown] id="DXERevWR2ePP"
# ## TimeDistributed Layer
#
# > This wrapper allows to apply a layer to every temporal slice of an input. `TimeDistributedDense` applies a same Dense (fully-connected) operation to every timestep of a 3D tensor.<br><br>
# >Consider a batch of 32 video samples, where each sample is a 128x128 RGB image with channels_last data format, across 10 timesteps. The batch input shape is (32, 10, 128, 128, 3).<br><br>
# >You can then use TimeDistributed to apply the same Conv2D layer to each of the 10 timesteps, independently:
# + colab={"base_uri": "https://localhost:8080/"} id="nm3u31aH01mV" outputId="e660a492-11ee-4a88-e2c2-2105e73e5e64"
inputs = layers.Input(shape=(10, 128, 128, 3))
conv_2d_layer = layers.Conv2D(64, (3, 3))
outputs = layers.TimeDistributed(conv_2d_layer)(inputs)
outputs.shape
# + colab={"base_uri": "https://localhost:8080/"} id="jM8kpZi94nz_" outputId="6c30dc3c-49ab-4c8c-d3ec-b22c935d30c3"
length = 5
seq = np.array([i / length for i in range(length)])
seq
# + [markdown] id="_79BH_XW9J-7"
# ## One-to-One LSTM for Sequence Prediction
# + colab={"base_uri": "https://localhost:8080/"} id="SqOu9oQi8wVW" outputId="06b5b8d1-733e-4db0-8cb6-5c9e067b6b0e"
X = seq.reshape(5, 1, 1)
X
# + colab={"base_uri": "https://localhost:8080/"} id="EPPmEl7x-afg" outputId="9fb23b1e-3457-46f0-809b-3131dda39a89"
y = seq.reshape(5, 1)
y
# + [markdown] id="LB127pF4-uhu"
# We will define the network model as having 1 input with 1 time step. The first hidden layer will be an LSTM with 5 units. The output layer will be a fully-connected layer with 1 output.
# + id="580KHVz_-l4P"
length = 5
seq = np.array([i/length for i in range(length)])
X = seq.reshape(len(seq), 1, 1)
y = seq.reshape(len(seq), 1)
# + colab={"base_uri": "https://localhost:8080/"} id="-0w0tu_3_Slz" outputId="fa60934d-aec0-4a9c-bf05-12be61fef971"
n_memory_units = length
n_batch = length
n_epoch = 1000
model = Sequential([
layers.LSTM(n_memory_units, input_shape=(1, 1)),
layers.Dense(1)
])
model.compile(
loss='mean_squared_error',
optimizer='adam'
)
model.summary()
# + id="bZ0fSARiAPpQ"
history = model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=0)
# + colab={"base_uri": "https://localhost:8080/"} id="bEkhNQ77AkQH" outputId="0b4c1bf0-2e28-4e5a-b108-4868043727aa"
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result:
print(f'{value[0]:.1f}', end=' ')
# + id="H8pak1koBTqb"
| rnn/seq_to_seq_keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# #Instability of Parameter Estimates
#
# By Evgenia "Jenny" Nitishinskaya and <NAME>. Algorithms by <NAME>.
#
# Part of the Quantopian Lecture Series:
#
# * [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
# * [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
#
# Notebook released under the Creative Commons Attribution 4.0 License.
#
# ---
#
# # Parameters
#
# A parameter is anything that a model uses to constrain its predictions. Commonly, a parameter is a quantity that describes a data set or distribution. For example, the mean of a normal distribution is a parameter; in fact, we say that a normal distribution is <i>parametrized</i> by its mean and variance. If we take the mean of a set of samples drawn from the normal distribution, we get an estimate of the mean of the distribution. Similarly, the mean of a set of observations is an estimate of the parameter of the underlying distribution (which is often assumed to be normal). Other parameters include the median, the correlation coefficient with another series, the standard deviation, and every other measurement of a data set.
# ##You Never Know, You Only Estimate
#
# When you take the mean of a data set, you do not know the mean. You have estimated the mean as best you can from the data you have. The estimate can be off. This is true of any parameter you estimate. To actually understand what is going on you need to determine how good your estimate is by looking at its stability/standard error/confidence intervals.
# # Instability of estimates
#
# Whenever we consider a set of observations, our calculation of a parameter can only be an estimate. It will change as we take more measurements or as time passes and we get new observations. We can quantify the uncertainty in our estimate by looking at how the parameter changes as we look at different subsets of the data. For instance, the standard deviation describes how far the individual observations typically lie from the mean of the set. In financial applications, data often comes in time series. In this case, we can estimate a parameter at different points in time; say, over the previous 30 days. By looking at how much this moving estimate fluctuates as we change our time window, we can compute the instability of the estimated parameter.
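# As a minimal sketch of this idea (synthetic data; purely illustrative): estimate the mean on disjoint subsets of a sample and measure how much those estimates disagree.

```python
import numpy as np

np.random.seed(0)
sample = np.random.randn(1000)  # drawn from a distribution with true mean 0

# estimate the mean on 10 disjoint subsets of 100 points each
subset_means = [sample[i * 100:(i + 1) * 100].mean() for i in range(10)]

# the spread of these estimates quantifies the instability of the mean estimate
instability = np.std(subset_means)
```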
# We'll be doing some examples, so let's import the libraries we'll need
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# # Example: mean and standard deviation
#
# First, let's take a look at some samples from a normal distribution. We know that the mean of the distribution is 0 and the standard deviation is 1; but if we measure the parameters from our observations, we will get only approximately 0 and approximately 1. We can see how these estimates change as we take more and more samples:
# +
# Set a seed so we can play with the data without generating new random numbers every time
np.random.seed(123)
normal = np.random.randn(500)
print np.mean(normal[:10])
print np.mean(normal[:100])
print np.mean(normal[:250])
print np.mean(normal)
# Plot a stacked histogram of the data
plt.hist([normal[:10], normal[10:100], normal[100:250], normal], normed=1, histtype='bar', stacked=True);
plt.ylabel('Frequency')
plt.xlabel('Value');
# -
print np.std(normal[:10])
print np.std(normal[:100])
print np.std(normal[:250])
print np.std(normal)
# Notice that, although the probability of getting closer to 0 and 1 for the mean and standard deviation, respectively, increases with the number of samples, we do not always get better estimates by taking more data points. Whatever our expectation is, we can always get a different result, and our goal is often to compute the probability that the result is significantly different than expected.
#
# With time series data, we usually care only about contiguous subsets of the data. The moving average (also called running or rolling) assigns the mean of the previous $n$ data points to each point in time. Later in this notebook, we compute the 90-day moving average of a stock price and plot it to see how it changes. There is no result in the beginning because we first have to accumulate at least 90 days of data.
# ##Example: Non-Normal Underlying Distribution
#
# What happens if the underlying data isn't normal? A mean will be very deceptive. Because of this it's important to test for normality of your data. We'll use a Jarque-Bera test as an example.
# +
#Generate some data from a bi-modal distribution
def bimodal(n):
X = np.zeros((n))
for i in range(n):
if np.random.binomial(1, 0.5) == 0:
X[i] = np.random.normal(-5, 1)
else:
X[i] = np.random.normal(5, 1)
return X
X = bimodal(1000)
#Let's see how it looks
plt.hist(X, bins=50)
plt.ylabel('Frequency')
plt.xlabel('Value')
print 'mean:', np.mean(X)
print 'standard deviation:', np.std(X)
# -
# Sure enough, the mean is incredibly non-informative about what is going on in the data. We have collapsed all of our data into a single estimate, and lost a lot of information doing so. This is what the distribution should look like if our hypothesis that it is normally distributed is correct.
# +
mu = np.mean(X)
sigma = np.std(X)
N = np.random.normal(mu, sigma, 1000)
plt.hist(N, bins=50)
plt.ylabel('Frequency')
plt.xlabel('Value');
# -
# We'll test our data using the Jarque-Bera test to see if it's normal. A significant p-value indicates non-normality.
# +
from statsmodels.stats.stattools import jarque_bera
jarque_bera(X)
# -
# Sure enough, the p-value is < 0.05, so we say that X is not normal. This saves us from accidentally making horrible predictions.
# # Example: Sharpe ratio
#
# One statistic often used to describe the performance of assets and portfolios is the Sharpe ratio, which measures the additional return per unit additional risk achieved by a portfolio, relative to a risk-free source of return such as Treasury bills:
# $$R = \frac{E[r_a - r_b]}{\sqrt{Var(r_a - r_b)}}$$
#
# where $r_a$ is the returns on our asset and $r_b$ is the risk-free rate of return. As with mean and standard deviation, we can compute a rolling Sharpe ratio to see how our estimate changes through time.
# +
def sharpe_ratio(asset, riskfree):
return np.mean(asset - riskfree)/np.std(asset - riskfree)
start = '2012-01-01'
end = '2015-01-01'
# Use an ETF that tracks 3-month T-bills as our risk-free rate of return
treasury_ret = get_pricing('BIL', fields='price', start_date=start, end_date=end).pct_change()[1:]
pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:] # Get the returns on the asset
# Compute the running Sharpe ratio
running_sharpe = [sharpe_ratio(returns[i-90:i], treasury_ret[i-90:i]) for i in range(90, len(returns))]
# Plot running Sharpe ratio up to 100 days before the end of the data set
_, ax1 = plt.subplots()
ax1.plot(range(90, len(returns)-100), running_sharpe[:-100]);
ticks = ax1.get_xticks()
ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio');
# -
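# Note that `get_pricing` is a Quantopian-specific function. Outside that environment, the same rolling computation can be sketched on synthetic returns (the return parameters below are made up for illustration):

```python
import numpy as np

def sharpe_ratio(asset, riskfree):
    # excess return per unit of excess-return volatility
    excess = asset - riskfree
    return np.mean(excess) / np.std(excess)

np.random.seed(2)
asset_ret = np.random.normal(0.001, 0.02, 400)  # synthetic daily asset returns
rf_ret = np.full(400, 0.0001)                   # flat synthetic risk-free rate

# 90-day rolling Sharpe ratio, as in the cell above
running = [sharpe_ratio(asset_ret[i - 90:i], rf_ret[i - 90:i])
           for i in range(90, len(asset_ret))]
```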
# The Sharpe ratio looks rather volatile, and it's clear that just reporting it as a single value will not be very helpful for predicting future values. Instead, we can compute the mean and standard deviation of the data above, and then see if it helps us predict the Sharpe ratio for the next 100 days.
# +
# Compute the mean and std of the running Sharpe ratios up to 100 days before the end
mean_rs = np.mean(running_sharpe[:-100])
std_rs = np.std(running_sharpe[:-100])
# Plot running Sharpe ratio
_, ax2 = plt.subplots()
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
ax2.plot(range(90, len(returns)), running_sharpe)
# Plot its mean and the +/- 1 standard deviation lines
ax2.axhline(mean_rs)
ax2.axhline(mean_rs + std_rs, linestyle='--')
ax2.axhline(mean_rs - std_rs, linestyle='--')
# Indicate where we computed the mean and standard deviations
# Everything after this is 'out of sample' which we are comparing with the estimated mean and std
ax2.axvline(len(returns) - 100, color='pink');
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio')
plt.legend(['Sharpe Ratio', 'Mean', '+/- 1 Standard Deviation'])
print 'Mean of running Sharpe ratio:', mean_rs
print 'std of running Sharpe ratio:', std_rs
# -
# The standard deviation in this case is about a quarter of the range, so this data is extremely volatile. Taking this into account when looking ahead gave a better prediction than just using the mean, although we still observed data more than one standard deviation away. We could also compute the rolling mean of the Sharpe ratio to try and follow trends; but in that case, too, we should keep in mind the standard deviation.
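# The rolling mean of the Sharpe ratio mentioned above could be computed as follows (a sketch on a synthetic stand-in series; this uses the modern pandas `.rolling()` API, whereas the pandas version used in this notebook exposes `pd.rolling_mean`):

```python
import numpy as np
import pandas as pd

np.random.seed(1)
# synthetic stand-in for the running Sharpe ratio series
running_sharpe = pd.Series(np.random.randn(500).cumsum() / 100.0)

# 60-day rolling mean of the running Sharpe ratio (window size is illustrative)
rolling_mean_rs = running_sharpe.rolling(window=60).mean()
```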
# ##Example: Moving Average
#
# Let's say you take the average with a lookback window; how would you determine the standard error on that estimate? Let's start with an example showing a 90-day moving average.
# +
# Load time series of prices
start = '2012-01-01'
end = '2015-01-01'
pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end)
# Compute the rolling mean for each day
mu = pd.rolling_mean(pricing, window=90)
# Plot pricing data
_, ax1 = plt.subplots()
ax1.plot(pricing)
ticks = ax1.get_xticks()
ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.ylabel('Price')
plt.xlabel('Date')
# Plot rolling mean
ax1.plot(mu);
plt.legend(['Price','Rolling Average']);
# -
# This lets us see the instability/standard error of the mean, and helps anticipate future variability in the data. We can quantify this variability by computing the mean and standard deviation of the rolling mean.
print 'Mean of rolling mean:', np.mean(mu)
print 'std of rolling mean:', np.std(mu)
# In fact, the standard deviation, which we use to quantify variability, is itself variable. Below we plot the rolling standard deviation (for a 90-day window), and compute <i>its</i> mean and standard deviation.
# +
# Compute rolling standard deviation
std = pd.rolling_std(pricing, window=90)
# Plot rolling std
_, ax2 = plt.subplots()
ax2.plot(std)
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.ylabel('Standard Deviation of Moving Average')
plt.xlabel('Date')
print 'Mean of rolling std:', np.mean(std)
print 'std of rolling std:', np.std(std)
# -
# To see what this changing standard deviation means for our data set, let's plot the data again along with the Bollinger bands: the rolling mean, one rolling standard deviation (of the data) above the mean, and one standard deviation below.
# Note that although standard deviations give us more information about the spread of the data, we cannot assign precise probabilities to our expectations for future observations without assuming a particular distribution for the underlying process.
# +
# Plot original data
_, ax3 = plt.subplots()
ax3.plot(pricing)
ax3.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
# Plot Bollinger bands
ax3.plot(mu)
ax3.plot(mu + std)
ax3.plot(mu - std);
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['Price', 'Moving Average', 'Moving Average +1 Std', 'Moving Average -1 Std'])
# -
# # Conclusion
#
# Whenever we compute a parameter for a data set, we should also compute its volatility. Otherwise, we do not know whether or not we should expect new data points to be aligned with this parameter. A good way of computing volatility is dividing the data into subsets and estimating the parameter from each one, then finding the variability among the results. There may still be outside factors which are introduced after our sample period and which we cannot predict. However, the instability analysis and testing for standard error is still very useful for telling us how much we should distrust our estimates.
| lectures/Instability of parameter estimates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from ADLmainloop import ADLmain, ADLmainId
from ADLbasic import ADL
from utilsADL import dataLoader, plotPerformance
import random
import torch
import numpy as np
# random seed control
np.random.seed(0)
torch.manual_seed(0)
random.seed(0)
# load data
dataStreams = dataLoader('../dataset/hepmass2.mat')
print('All labeled')
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet0, performanceHistory0 = ADLmain(ADLnet,dataStreams)
plotPerformance(performanceHistory0[0],performanceHistory0[1],performanceHistory0[2],
performanceHistory0[3],performanceHistory0[4],performanceHistory0[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet1, performanceHistory1 = ADLmain(ADLnet,dataStreams)
plotPerformance(performanceHistory1[0],performanceHistory1[1],performanceHistory1[2],
performanceHistory1[3],performanceHistory1[4],performanceHistory1[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet2, performanceHistory2 = ADLmain(ADLnet,dataStreams)
plotPerformance(performanceHistory2[0],performanceHistory2[1],performanceHistory2[2],
performanceHistory2[3],performanceHistory2[4],performanceHistory2[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet3, performanceHistory3 = ADLmain(ADLnet,dataStreams)
plotPerformance(performanceHistory3[0],performanceHistory3[1],performanceHistory3[2],
performanceHistory3[3],performanceHistory3[4],performanceHistory3[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet4, performanceHistory4 = ADLmain(ADLnet,dataStreams)
plotPerformance(performanceHistory4[0],performanceHistory4[1],performanceHistory4[2],
performanceHistory4[3],performanceHistory4[4],performanceHistory4[5])
# +
# average performance
print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Std Accuracy: ', np.std([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Hidden Node mean', np.mean([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Node std: ', np.std([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
print('Hidden Layer std: ', np.std([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
# -
# ### 50% labeled data
# +
## dataset
# sea
# hyperplane
# weather
# rfid
# permutedMnist
# rotatedMnist
# susy
# hepmass
# -
print('50% labeled')
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet0, performanceHistory0 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.5)
plotPerformance(performanceHistory0[0],performanceHistory0[1],performanceHistory0[2],
performanceHistory0[3],performanceHistory0[4],performanceHistory0[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet1, performanceHistory1 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.5)
plotPerformance(performanceHistory1[0],performanceHistory1[1],performanceHistory1[2],
performanceHistory1[3],performanceHistory1[4],performanceHistory1[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet2, performanceHistory2 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.5)
plotPerformance(performanceHistory2[0],performanceHistory2[1],performanceHistory2[2],
performanceHistory2[3],performanceHistory2[4],performanceHistory2[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet3, performanceHistory3 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.5)
plotPerformance(performanceHistory3[0],performanceHistory3[1],performanceHistory3[2],
performanceHistory3[3],performanceHistory3[4],performanceHistory3[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet4, performanceHistory4 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.5)
plotPerformance(performanceHistory4[0],performanceHistory4[1],performanceHistory4[2],
performanceHistory4[3],performanceHistory4[4],performanceHistory4[5])
# +
# average performance
print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Std Accuracy: ', np.std([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Hidden Node mean', np.mean([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Node std: ', np.std([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
print('Hidden Layer std: ', np.std([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
# -
# ### 25% Labeled Data
print('25% labeled')
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet0, performanceHistory0 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.25)
plotPerformance(performanceHistory0[0],performanceHistory0[1],performanceHistory0[2],
performanceHistory0[3],performanceHistory0[4],performanceHistory0[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet1, performanceHistory1 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.25)
plotPerformance(performanceHistory1[0],performanceHistory1[1],performanceHistory1[2],
performanceHistory1[3],performanceHistory1[4],performanceHistory1[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet2, performanceHistory2 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.25)
plotPerformance(performanceHistory2[0],performanceHistory2[1],performanceHistory2[2],
performanceHistory2[3],performanceHistory2[4],performanceHistory2[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet3, performanceHistory3 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.25)
plotPerformance(performanceHistory3[0],performanceHistory3[1],performanceHistory3[2],
performanceHistory3[3],performanceHistory3[4],performanceHistory3[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet4, performanceHistory4 = ADLmain(ADLnet,dataStreams,labeled = False, nLabeled = 0.25)
plotPerformance(performanceHistory4[0],performanceHistory4[1],performanceHistory4[2],
performanceHistory4[3],performanceHistory4[4],performanceHistory4[5])
# +
# average performance
print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Std Accuracy: ', np.std([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Hidden Node mean', np.mean([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Node std: ', np.std([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
print('Hidden Layer std: ', np.std([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
# -
# ### Infinite Delay
print('Infinite Delay')
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet0, performanceHistory0 = ADLmainId(ADLnet,dataStreams)
plotPerformance(performanceHistory0[0],performanceHistory0[1],performanceHistory0[2],
performanceHistory0[3],performanceHistory0[4],performanceHistory0[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet1, performanceHistory1 = ADLmainId(ADLnet,dataStreams)
plotPerformance(performanceHistory1[0],performanceHistory1[1],performanceHistory1[2],
performanceHistory1[3],performanceHistory1[4],performanceHistory1[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet2, performanceHistory2 = ADLmainId(ADLnet,dataStreams)
plotPerformance(performanceHistory2[0],performanceHistory2[1],performanceHistory2[2],
performanceHistory2[3],performanceHistory2[4],performanceHistory2[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet3, performanceHistory3 = ADLmainId(ADLnet,dataStreams)
plotPerformance(performanceHistory3[0],performanceHistory3[1],performanceHistory3[2],
performanceHistory3[3],performanceHistory3[4],performanceHistory3[5])
# initialization
ADLnet = ADL(dataStreams.nInput,dataStreams.nOutput)
ADLnet4, performanceHistory4 = ADLmainId(ADLnet,dataStreams)
plotPerformance(performanceHistory4[0],performanceHistory4[1],performanceHistory4[2],
performanceHistory4[3],performanceHistory4[4],performanceHistory4[5])
# +
# average performance
print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Std Accuracy: ', np.std([performanceHistory0[1][1:]+performanceHistory1[1][1:]+
performanceHistory2[1][1:]+performanceHistory3[1][1:]+
performanceHistory4[1][1:]]))
print('Hidden Node mean', np.mean([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Node std: ', np.std([performanceHistory0[3][1:]+performanceHistory1[3][1:]+
performanceHistory2[3][1:]+performanceHistory3[3][1:]+
performanceHistory4[3][1:]]))
print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
print('Hidden Layer std: ', np.std([performanceHistory0[4][1:]+performanceHistory1[4][1:]+
performanceHistory2[4][1:]+performanceHistory3[4][1:]+
performanceHistory4[4][1:]]))
# -
| ADL_hepmass.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dl
# language: python
# name: dl
# ---
# # Notebook showing how to use `GymBoard` by using `Actor Critic` implementation found on [keras.io](https://keras.io/examples/rl/actor_critic_cartpole/).
# %load_ext autoreload
# %autoreload 2
import gym
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from gymboard import GymBoard
tf.get_logger().setLevel('ERROR')
seed = 42
gamma = 0.99
max_steps_per_episode = 10000
env = gym.make('CartPole-v0')
env.seed(seed)
eps = np.finfo(np.float32).eps.item()
# +
num_inputs = 4
num_actions = 2
num_hidden = 128
inputs = layers.Input(shape=(num_inputs,))
common = layers.Dense(num_hidden, activation='relu')(inputs)
action = layers.Dense(num_actions, activation='softmax')(common)
critic = layers.Dense(1)(common)
model = keras.Model(inputs=inputs, outputs=[action, critic])
# +
optimizer = keras.optimizers.Adam(learning_rate=0.01)
huber_loss = keras.losses.Huber()
action_probs_history = []
critic_value_history = []
rewards_history = []
running_reward = 0
episode_count = 0
gboard = GymBoard()
gboard.display()
gboard.write_env(env, model, step=0)
while True: # Run until solved
state = env.reset()
episode_reward = 0
with tf.GradientTape() as tape:
for timestep in range(1, max_steps_per_episode):
# env.render(); Adding this line would show the attempts
# of the agent in a pop up window.
state = tf.convert_to_tensor(state)
state = tf.expand_dims(state, 0)
# Predict action probabilities and estimated future rewards
# from environment state
action_probs, critic_value = model(state)
critic_value_history.append(critic_value[0, 0])
# Sample action from action probability distribution
action = np.random.choice(num_actions, p=np.squeeze(action_probs))
action_probs_history.append(tf.math.log(action_probs[0, action]))
# Apply the sampled action in our environment
state, reward, done, _ = env.step(action)
rewards_history.append(reward)
episode_reward += reward
if done:
break
# Update running reward to check condition for solving
running_reward = 0.05 * episode_reward + (1 - 0.05) * running_reward
# Calculate expected value from rewards
# - At each timestep what was the total reward received after that timestep
# - Rewards in the past are discounted by multiplying them with gamma
# - These are the labels for our critic
returns = []
discounted_sum = 0
for r in rewards_history[::-1]:
discounted_sum = r + gamma * discounted_sum
returns.insert(0, discounted_sum)
# Normalize
returns = np.array(returns)
returns = (returns - np.mean(returns)) / (np.std(returns) + eps)
returns = returns.tolist()
# Calculating loss values to update our network
history = zip(action_probs_history, critic_value_history, returns)
actor_losses = []
critic_losses = []
for log_prob, value, ret in history:
# At this point in history, the critic estimated that we would get a
# total reward = `value` in the future. We took an action with log probability
            # of `log_prob` and ended up receiving a total reward = `ret`.
# The actor must be updated so that it predicts an action that leads to
# high rewards (compared to critic's estimate) with high probability.
diff = ret - value
actor_losses.append(-log_prob * diff) # actor loss
# The critic must be updated so that it predicts a better estimate of
# the future rewards.
critic_losses.append(
huber_loss(tf.expand_dims(value, 0), tf.expand_dims(ret, 0))
)
# Backpropagation
loss_value = sum(actor_losses) + sum(critic_losses)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Clear the loss and reward history
action_probs_history.clear()
critic_value_history.clear()
rewards_history.clear()
# Log details
episode_count += 1
gboard.write_scalar('running_reward', running_reward, episode_count)
if running_reward > 195: # Condition to consider the task solved
print("Solved at episode {}!".format(episode_count))
break
gboard.write_env(env, model, step=episode_count)
# -
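The reward-to-go computation buried inside the training loop above can be isolated for testing. A minimal sketch, assuming only a list of per-step rewards; the normalization mirrors the `eps`-stabilized version used in the loop:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99, normalize=True):
    """Compute the reward-to-go at each timestep, optionally normalized."""
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.insert(0, running)
    returns = np.array(returns, dtype=np.float64)
    if normalize:
        eps = np.finfo(np.float32).eps.item()
        returns = (returns - returns.mean()) / (returns.std() + eps)
    return returns

rets = discounted_returns([1.0, 1.0, 1.0], gamma=0.5, normalize=False)
# rets == [1.75, 1.5, 1.0]
```

With `gamma=0.5`, each entry is the reward at that step plus half of the (already discounted) sum of everything after it, which is exactly what the in-loop version accumulates.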
# <img src="https://raw.githubusercontent.com/mishig25/GymBoard/master/viz.gif"/>
| tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import tensorflow as tf
# %pylab inline
# -
# ## At the level of the model
# +
images = tf.random.uniform(
shape=(5,51,51,1), minval=0, maxval=1, dtype=tf.dtypes.float32, seed=None, name=None
)
# +
def pad(image):
r"""Convert images to 64x64x1 shaped tensors to feed the model, using zero-padding."""
pad = tf.constant([[0,0], [6,7],[6,7], [0,0]])
return tf.pad(image, pad, "CONSTANT")
def crop(image):
r"""Crop back the image to its original size and convert it to np.array"""
return tf.image.crop_to_bounding_box(image, 6, 6, 51, 51)
# +
padded_imgs = pad(images)
padded_imgs.shape
# +
figure(figsize=(5,5))
imshow(padded_imgs.numpy()[0,:,:,0])
grid('minor')
# +
cropped_imgs = crop(padded_imgs)
cropped_imgs.shape
# +
figure(figsize=(5,5))
imshow(cropped_imgs.numpy()[0,:,:,0])
grid('minor')
show()
figure(figsize=(5,5))
imshow(cropped_imgs.numpy()[0,:,:,0] - images.numpy()[0,:,:,0])
grid('minor')
show()
# -
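The pad/crop round trip above can also be checked without TensorFlow. This NumPy sketch mirrors the same `[6, 7]` padding; the helpers `pad_np`/`crop_np` are hypothetical names, not part of the notebook's code:

```python
import numpy as np

def pad_np(images, before=6, after=7):
    """Zero-pad 51x51 images to 64x64 along the two spatial axes."""
    return np.pad(images, ((0, 0), (before, after), (before, after), (0, 0)))

def crop_np(images, offset=6, size=51):
    """Crop back to the original 51x51 window."""
    return images[:, offset:offset + size, offset:offset + size, :]

imgs = np.random.rand(5, 51, 51, 1)
assert pad_np(imgs).shape == (5, 64, 64, 1)       # 51 + 6 + 7 = 64
assert np.allclose(crop_np(pad_np(imgs)), imgs)   # round trip is lossless
```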
# ## At the level of the prox operator
# +
def convert_and_pad(image):
r"""Convert images to 64x64x1 shaped tensors to feed the model, using zero-padding."""
image = tf.reshape(
tf.convert_to_tensor(image),
[np.shape(image)[0], np.shape(image)[1], np.shape(image)[2], 1]
)
# pad = tf.constant([[0,0], [6,7],[6,7], [0,0]])
# return tf.pad(image, pad, "CONSTANT")
return image
def convert_and_pad_v2(image):
r""" Convert images to tensorflow's tensor and add an extra 4th dimension."""
return tf.expand_dims(tf.convert_to_tensor(image), axis=3)
def crop_and_convert(image):
r"""Crop back the image to its original size and convert it to np.array"""
#image = tf.reshape(tf.image.crop_to_bounding_box(image, 6, 6, 51, 51), [np.shape(image)[0], 51, 51])
image = tf.reshape(image, [np.shape(image)[0], 51, 51])
return image.numpy()
def crop_and_convert_v2(image):
r"""Convert to numpy array and remove the 4th dimension."""
return image.numpy()[:,:,:,0]
# -
# +
new_imgs = np.random.rand(5,51,51)
new_imgs[1,:,:] -= 1
new_imgs.shape
# +
np.amax(new_imgs, axis=(1,2)).shape
# +
tf_imgs = convert_and_pad_v2(new_imgs)
tf_imgs.shape
# +
op_imgs = crop_and_convert_v2(tf_imgs)
op_imgs.shape
# +
multiple = np.array([np.sum(im)>0 for im in new_imgs]) * 2. - 1.
multiple
# +
np.sum(new_imgs, axis=(1,2)) * 2. - 1.
# -
tf_new_imgs = tf.convert_to_tensor(new_imgs)
tf_new_imgs /= 0.5
tf_new_imgs
| testing_notebooks/checking-pad-crop-tf-functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Initialize
# +
import numpy as np
import pandas as pd
import scanpy as sc
import matplotlib.pyplot as plt
import os
import nbimporter
import inCITE_tools as ict
import statsmodels.stats as sms
import scvelo as scv
scv.set_figure_params()
run_folder = 'p65_inCITE'
target_dir = 'seq_data/%s' %run_folder
sc.settings.figdir = './analyses/%s' %run_folder
if not os.path.isdir(sc.settings.figdir): os.makedirs(sc.settings.figdir)
sc.settings.verbosity = 3
sc.logging.print_versions()
processed_file = './write/p65_inCITE.h5ad'
adata = sc.read(processed_file)
antibodies = ['p65']
adata.obs['annot'] = adata.obs['assignment']
# -
# # Compare RNA and CITE
# +
plt.figure(figsize=(3,3))
axes = plt.axes()
gene = 'RELA'
protein = 'p65_nCLR'
axes.set_ylim([-2,2])
axes.set_yticks([-2,0,2])
axes.set_yticklabels(labels=[-2,0,2], fontsize=12)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.spines['bottom'].set_linewidth(1)
axes.spines['left'].set_linewidth(1)
axes.set_xlabel(xlabel=gene,fontsize=12)
axes.set_ylabel(ylabel='p65 nCLR',fontsize=12)
axes.figure.set_size_inches(3,3)
x_list = ict.get_feat_values(adata, gene)
y_list = ict.get_feat_values(adata, protein)
color_vals = adata.obs['assignment'].map({'NT':'#34c6f4', 'TNFa':'#ef3b39'})
plt.scatter(x_list+np.random.uniform(low=-0.05,high=0.05,size=len(x_list)), y_list,
            s=1, alpha=0.3, color=list(color_vals))  # '#78909C'
plt.xlabel(gene,fontsize=12)
plt.ylabel('p65 nCLR',fontsize=12)
Rela_pos = adata[adata[:,gene].layers['counts'].toarray()>0]
X_V,Y_V = ict.get_linear_regression_stats(adata,gene,protein)
plt.savefig('%s/scatter_RELA_p65_nCLR_colored.pdf' %(sc.settings.figdir),
bbox_inches='tight')
# -
# # Linear model: Gene ~ p65
# +
adata.obs['log_hashtag_counts'] = np.log(adata.obs['hashtag_counts'])
FORMULA = 'Gene ~ p65_nCLR + log_ncounts + G2M_score + S_score + log_hashtag_counts'
params_nCLR, pvals_nCLR = ict.run_linear_model(adata, FORMULA,
'HeLa_both_simple_p65_nCLR_model', '',
model='GLM_NegBin', run_mode='lognorm',
antibody='p65_nCLR')
sig_nCLR = ict.get_significance_df(pvals_nCLR, method='fdr_bh', alpha=0.01)
sig_nCLR.sum()
# -
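`ict.run_linear_model` fits the formula above with a negative-binomial GLM. As a sanity check on the formula idea alone, here is a plain least-squares analogue in NumPy on synthetic, noise-free data; all names and values below are illustrative and not part of `inCITE_tools`:

```python
import numpy as np

# synthetic, noise-free data: gene expression driven by p65 plus one covariate
rng = np.random.default_rng(0)
p65 = rng.normal(size=200)
log_ncounts = rng.normal(size=200)
gene = 1.0 + 2.0 * p65 + 0.5 * log_ncounts

# design matrix: intercept, p65, log_ncounts
X = np.column_stack([np.ones_like(p65), p65, log_ncounts])
beta, *_ = np.linalg.lstsq(X, gene, rcond=None)
# beta recovers [1.0, 2.0, 0.5] exactly since the data is noise-free
```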
# ### top genes + GO analysis
upregulated_significant = pvals_nCLR[sig_nCLR['p65_nCLR'] & (params_nCLR['p65_nCLR'] > 0)]
top10_p65_genes = upregulated_significant.sort_values(by='p65_nCLR',ascending=True).head(7)
top10_p65_genes
GO_upregulated = ict.parse_GO_query(upregulated_significant.index, 'hsapiens')
GO_upregulated = GO_upregulated.loc[GO_upregulated['source']=='GO:BP']
ict.plot_GO_terms(GO_upregulated.head(10),0.05,'simple_model_NegBin_p65',xlims=[0,15])
# ## plot top vs. bottom of p65
# +
def sort_idx_by_p65(indices, ad, feat='p65_norm'):
# sort cell indices by feature level
sorted_idx = ad.obs.loc[indices,feat].sort_values(ascending=False).index
return sorted_idx
def p65_sorted_subset_by_treatment(ad, plot_N=1000, group_order=['TNFa','NT']):
import random
# select equal random subsets of each treatment
plot_indices = []
for treatment in group_order:
ad_clust = ad[ad.obs['assignment']==treatment]
idx = random.sample(set(ad_clust.obs.index), plot_N)
# sort indices based on p65
sorted_idx = sort_idx_by_p65(idx, ad)
plot_indices.extend(list(sorted_idx))
return plot_indices
# -
# identify top 10% and bottom 10% of all cells based on p65 levels
sorted_idx = sort_idx_by_p65(adata.obs.index,adata,'p65_nCLR')
group_len = int(len(sorted_idx)/10)
adata.obs['p65_level'] = 'p65_middle'
adata.obs.loc[sorted_idx[0:group_len],'p65_level'] = 'p65_high'
adata.obs.loc[sorted_idx[-group_len:],'p65_level'] = 'p65_low'
ad_top_bottom = adata[((adata.obs['p65_level']=='p65_high')|(adata.obs['p65_level']=='p65_low'))]
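The top/bottom-decile labeling above can be expressed as a small standalone helper for clarity. `label_extremes` is a hypothetical function sketched here with pandas, not part of the notebook's tooling:

```python
import pandas as pd

def label_extremes(values, frac=0.1):
    """Label the top/bottom `frac` of entries by value; the rest are 'middle'."""
    s = pd.Series(values).sort_values(ascending=False)
    n = int(len(s) * frac)
    labels = pd.Series('middle', index=s.index)
    labels.iloc[:n] = 'high'     # largest n values
    labels.iloc[-n:] = 'low'     # smallest n values
    return labels.sort_index()

labels = label_extremes([5, 1, 9, 3, 7, 2, 8, 4, 6, 0], frac=0.2)
# values 9 and 8 (at positions 2 and 6) are 'high'; 1 and 0 are 'low'
```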
# for the top vs. bottom 10% p65 cells, construct data for plotting
quintile_sorted_idx = sort_idx_by_p65(ad_top_bottom.obs.index,adata,'p65_nCLR')
ad_top_bottom = ad_top_bottom[quintile_sorted_idx]
plot_p65_level = ad_top_bottom.obs['p65_level']
plot_p65_int = plot_p65_level.map({'p65_high':0,'p65_low':1})
plot_p65_int = plot_p65_int.astype(int).to_frame()
plot_p65_quintile = ad_top_bottom.obs['p65_nCLR'].to_frame()
GENES_TO_PLOT = top10_p65_genes.index
VALUES = 'zscore'
if VALUES=='zscore':
vals = ad_top_bottom[:,GENES_TO_PLOT].X
# vals = ad_top_bottom[:,GENES_TO_PLOT].layers['zscore'].toarray()
elif VALUES=='lognorm':
vals = ad_top_bottom[:,GENES_TO_PLOT].layers['counts'].toarray()
plot_matrix_quintile = pd.DataFrame(vals, index=ad_top_bottom.obs.index,columns=GENES_TO_PLOT)
# +
import seaborn as sns
sns.reset_orig()
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, figsize=(6,4),
gridspec_kw={'height_ratios': [0.05, 1.2, 2]})
# subplot 1 (top): p65 high/low group indicator
sns.heatmap(plot_p65_int.T,
cmap=['k','#d3d3d3'],
yticklabels=[], xticklabels=[],
ax=ax1, cbar=False)
# subplot 2 (middle): p65 nCLR bar plot
ax2.bar(height=plot_p65_quintile['p65_nCLR'],
x=ad_top_bottom.obs.index,
color='#bfbfbf')
ax2.set_xticks([])
ax2.set_yticks([-2,2])
ax2.set_ylim([-2,2])
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_linewidth(1)
ax2.spines['left'].set_linewidth(False)
# subplot 3
cax = fig.add_axes([.93, .28, .02, .2])
sns.set(font_scale = 0.02)
ax_h = sns.heatmap(plot_matrix_quintile.T,
cmap='bwr',
vmin=-1.5, vmax=1.5, center=0,
yticklabels=GENES_TO_PLOT,
xticklabels=[],
ax=ax3,
cbar_ax=cax)
cbar_kws=dict(ticks=[])
ax_h.vlines([group_len],*ax_h.get_ylim(), color='k',linewidth=0.5)
fig.savefig('%s/heatmap_decile_p65_nCLR_genes.pdf' %(sc.settings.figdir), bbox_inches='tight')
sns.reset_orig()
# -
| notebooks/HeLa_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print('test')
# +
x = 1
print(x)
# -
type(x)
# +
#x.shape
# +
import numpy as np
# -
a = np.zeros([2,2])
a
print(a)
print('test',12,'test')
xs = [3, 1, 2] # Create a list
print(xs, xs[2])
# ### LIST
# note
l1 = [1,2,3,4,5]
print(l1)
l1[2]
l1.append(6)
l1
l1.append('test')
l1
l1[0:2]
l1[0:3]
l1[0:]
l1[:3]
for a in l1:
print(a)
for i,item in enumerate(l1):
print(i,'---->',item)
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print(d['cat'])
set(l1)
len(l1)
def funcnames(x):
print(x)
funcnames(1)
funcnames('test')
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print(sign(x))
def hello(name, loud=False):
if loud:
print('HELLO, %s!' % name.upper())
else:
print('Hello, %s' % name)
hello('ali')
hello('ali',True)
class Greeter(object):
family = 1
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print('HELLO, %s!' % self.name.upper())
else:
print('Hello, %s' % self.name)
g = Greeter('Fred')
g.name
g.family
# +
g.family=2
# -
Greeter.family
g.family
x = 5
s = 0
while x > s:
    print(x)
    x -= 1
import numpy as np
a = np.array([1, 2, 3])
a
type(a)
type(a.shape)
l2=[1,3,3,4,5,8,6,7,6]
np.array(l2)
np.random.random((2,2))
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
a
a[:2, :]
np.transpose(a)
np.multiply(a,a)
a*a
a.dot(np.transpose(a))
a
a.sum(axis=0)
a.sum(axis=1)
a.sum(axis=2)  # raises AxisError: a is 2-D, so axis=2 is out of bounds; we fix this below with a 3-D array
b = np.array([a,a])
b
b.sum(axis=0)
b.sum(axis=2)
b[:,:,0]
mnist_tmp = np.random.random((28,28))
mnist_tmp.shape
mnist_tmp.reshape((-1)).shape
mnist_tmp
# Note: scipy.misc.imread/imsave/imresize were removed in SciPy 1.2; with
# modern SciPy use imageio.imread/imageio.imwrite and PIL.Image.resize instead.
from scipy.misc import imread, imsave, imresize
# +
# Read a JPEG image into a numpy array
img = imread('C:\\Users\\Yasin\\Pictures\\6bee5ff7c5467bcdbdc4047b59e1a092.jpg')
print(img.dtype, img.shape) # Prints "uint8 (400, 248, 3)"
# -
from matplotlib import pyplot as plt
# %matplotlib inline
plt.imshow(img)
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# +
plt.subplot(2,1,1)
plt.plot(x,y,'--*')
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('draw sample')
plt.legend(['sin(x)'])
plt.grid('on')
plt.subplot(2,1,2)
plt.plot(x,y,'--*')
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('draw sample')
plt.legend(['sin(x)'])
plt.grid('on')
| Lectures/Lecture-02/lecture-code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Notes
#
# - maybe we might need to crop the image to remove black borders on the side
# - normalize the image
# - use cross validation
# - cv2 to visualise images or plt
#
# # Eye disease Recognition
#
# ### Introduction
#
# Using eye fundus images, recognize a healthy eye versus a sick eye and determine what the disease is.
#
# ### Dataset
#
# Ocular Disease Intelligent Recognition (ODIR) is a structured ophthalmic database of 5,000 patients with age, color fundus photographs from left and right eyes, and doctors' diagnostic keywords.
#
# This dataset is meant to represent a ‘‘real-life’’ set of patient information collected by Shanggong Medical Technology Co., Ltd. from different hospitals/medical centers in China. In these institutions, fundus images are captured by various cameras on the market, such as Canon, Zeiss and Kowa, resulting in varied image resolutions.
# Annotations were labeled by trained human readers with quality control management. They classify patients into eight labels:
#
# - Normal (N),
# - Diabetes (D),
# - Glaucoma (G),
# - Cataract (C),
# - Age related Macular Degeneration (A),
# - Hypertension (H),
# - Pathological Myopia (M),
# - Other diseases/abnormalities (O)
#
# ### Objectives
#
# Identify one or more diseases in a specific eye using an image of the fundus. Given an eye fundus, the model will output a list of diseases if any present.
# imports
import pandas as pd
import matplotlib.pyplot as plt
# ## Data exploratory analysis
# load data
df = pd.read_csv("../data/full_df.csv")
df[df['ID'] == 4659]
# In the dataset, each row corresponds to one patient and contains one image for each eye. The diagnosis does not specify which eye has the disease, although we can get this information from the columns labelled `Left-Diagnostic Keywords` and `Right-Diagnostic Keywords`.
#
# Using the labels mentioned above, we need to convert the wordings into the labels with right abbreviations which are N,D,G,C,A,H,M,O.
#
# The inputs to the model would be the images of both eyes and the output would be one or more of the following for each patient:
#
# - Normal (N),
# - Diabetes (D),
# - Glaucoma (G),
# - Cataract (C),
# - Age related Macular Degeneration (A),
# - Hypertension (H),
# - Pathological Myopia (M),
# - Other diseases/abnormalities (O)
df.describe()
# There are 6392 patients, ranging from 1 year old to 91 years old. The mean age is 57.8 years; the median is 59.
value_counts = df['labels'].value_counts(sort=False).to_dict()
plt.bar(value_counts.keys(), value_counts.values())
value_counts
(value_counts["['N']"]/len(df.index))*100
# In the dataset, the percentage of patients having both healthy eyes is 44.9%
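The manual division above can be avoided: `value_counts(normalize=True)` returns fractions directly. A toy sketch with made-up label strings in the same format as the dataset:

```python
import pandas as pd

# value_counts(normalize=True) yields fractions; multiply by 100 for percentages
labels = pd.Series(["['N']", "['D']", "['N']", "['N','D']", "['N']"])
pct = labels.value_counts(normalize=True) * 100
# pct["['N']"] == 60.0  (3 of 5 rows)
```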
| Eye Disease Recognition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Demonstrate Setting an Orientation
#
# This notebook will demonstrate getting an orientation from the PyMol Desktop software and then using it with binderized PyMol.
# This builds on [Demo of Getting a Structure and Producing an Image](notebooks/demo_fetch.ipynb), and so you should be familiar with that first.
#
# Return to [the first page](index.ipynb) for a list of the demonstrations available.
#
#
# ----
#
# <div class="alert alert-block alert-warning">
# <p>If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens!</p>
#
# <p>
# Some tips:
# <ul>
# <li>Code cells have boxes around them. When you hover over them a <i class="fa-step-forward fa"></i> icon appears.</li>
# <li>To run a code cell either click the <i class="fa-step-forward fa"></i> icon, or click on the cell and then hit <b>Shift+Enter</b>. The <b>Shift+Enter</b> combo will also move you to the next cell, so it's a quick way to work through the notebook.</li>
# <li>While a cell is running, a <b>*</b> appears in the square brackets next to the cell. Once the cell has finished running, the asterisk will be replaced with a number.</li>
# <li>In most cases you'll want to start from the top of notebook and work your way down running each cell in turn. Later cells might depend on the results of earlier ones.</li>
# <li>To edit a code cell, just click on it and type stuff. Remember to run the cell once you've finished editing.</li>
# </ul>
# </p>
# </div>
#
#
# ---
# ## Preparation
#
# The initial setup for sending commands to PyMol is the same as before, so we'll define it as a block of code we can prepend in front of whatever we want to run.
init_block = '''#!/usr/bin/python
import sys, os
# pymol environment
moddir='/opt/pymol-svn/modules'
sys.path.insert(0, moddir)
os.environ['PYMOL_PATH'] = os.path.join(moddir, 'pymol/pymol_path')
import pymol
cmd = pymol.cmd
'''
# With a block of code defined that we can use within this running notebook, we can now step through each of the basic steps to get a structure and make an image using it with PyMol.
# ## Acquiring an orientation for a structure
# In the desktop PyMol application, use it as you normally would to get the involved structure situated in a view you want to replicate via the binderized repo. Perhaps you want to make a fancy image via the binderized PyMol while you still explore your structure on your desktop.
#
# In getting 'the scene' set as you'd like, you may find it useful to click on an atom you'd want at the center, and then enter `center sele` in the command line above the structure window or right below the structure window.
#
# Once you have 'the scene' set as you'd like it in the desktop PyMol, run the following command in the line above the structure window or right below the structure window.
#
# ```python
# get_view
# ```
#
# You'll see something like the following print out in the application.
#
# ```python
# ### cut below here and paste into script ###
# set_view (\
# 0.525620461, -0.152837604, -0.836876869,\
# -0.682576001, -0.662907958, -0.307641536,\
# -0.507752895, 0.732934058, -0.452761710,\
# 0.000000000, 0.000000000, -158.795608521,\
# 29.419000626, 19.603000641, 10.395999908,\
# 117.110687256, 200.480606079, -20.000000000 )
# ### cut above here and paste into script ###
# ```
#
# Copy that text from the panel and paste it somewhere convenient, such as a text editor, for now.
# That is going to be the view you'll want to apply to the same structure in the binderized pymol session.
# The following is an example of using that view here.
# ## Making use of the orientation to produce an image of a structure
#
# This is going to be very similar to the final section of [Demo of Getting a Structure and Producing an Image](notebooks/demo_fetch.ipynb); however, we are going to add use of the `set_view` command to apply the results from `get_view`.
#
# The `set_view` command takes a list of numbers. Unfortunately, despite the text saying things like `cut above here and paste into script`, it isn't quite ready to paste into a Python script of the type we are writing here. (As copied, it will paste right into the command line of the desktop application, though, to return to an orientation you like.) Luckily, only two minor changes are needed to make it ready to use here. The text `cmd.` has to be added in front of `set_view` and **quotes have to be put around all the numbers** inside the parentheses.
#
# In other words, the text we got back from `get_view` looked liked the following:
# ```python
# set_view (\
# 0.525620461, -0.152837604, -0.836876869,\
# -0.682576001, -0.662907958, -0.307641536,\
# -0.507752895, 0.732934058, -0.452761710,\
# 0.000000000, 0.000000000, -158.795608521,\
# 29.419000626, 19.603000641, 10.395999908,\
# 117.110687256, 200.480606079, -20.000000000 )
# ```
# Now we need to edit it to make it look like the following:
# ```python
# cmd.set_view (\
# "0.525620461, -0.152837604, -0.836876869,\
# -0.682576001, -0.662907958, -0.307641536,\
# -0.507752895, 0.732934058, -0.452761710,\
# 0.000000000, 0.000000000, -158.795608521,\
# 29.419000626, 19.603000641, 10.395999908,\
# 117.110687256, 200.480606079, -20.000000000" )
# ```
# In that form it can be added directly to a script we can run here.
# Let's see that by adding that to a script from the previous notebook. Below we add it on the second line. Let's run that cell and see the result.
cmds2run = '''cmd.fetch('1d66');cmd.zoom()
cmd.set_view (\
"0.525620461, -0.152837604, -0.836876869,\
-0.682576001, -0.662907958, -0.307641536,\
-0.507752895, 0.732934058, -0.452761710,\
0.000000000, 0.000000000, -158.795608521,\
29.419000626, 19.603000641, 10.395999908,\
117.110687256, 200.480606079, -20.000000000" )
cmd.set ("ray_opaque_background", 0)
cmd.bg_color ("white")
cmd.set ("cartoon_fancy_helices", 1)
cmd.set ("cartoon_side_chain_helper", "on")
cmd.hide ("everything", "all")
cmd.show ("cartoon", "all")
cmd.util.cbc()
cmd.show ("sphere", "metals")
def hex_to_rgb(value):
#based on https://stackoverflow.com/a/214657/8508004
value = value.lstrip('#')
lv = len(value)
return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3))
cmd.set_color ("ion_color", [*hex_to_rgb("#7D80B0")])
cmd.color ("ion_color", "metals")
cmd.color ("wheat","polymer.nucleic")
cmd.set ("fog_start", 0.80)
cmd.png('1d66improved.png', 800, 800, dpi=300, ray=1)
'''
script_txt = init_block + cmds2run
# %store script_txt >script_o.py
# !pymol -cq script_o.py
from IPython.display import Image
Image("1d66improved.png")
# If everything went well, then a nice zoom on the cadmium ions of Gal4p bound to DNA (PDB id: 1d66) with a helix jutting into the major groove should be seen.
# For added usefulness, I like to wrap the orientation setting in a function somewhere near the top of my code, similar to the following:
# ```python
# def set_my_view():
# cmd.set_view("-0.618128955, 0.332359225, 0.712325752, -0.207372814, -0.943042099, 0.260065436, 0.758196831, 0.013028201, 0.651856720, 0.000000000, 0.000000000, -661.349548340, 53.364715576, -2.287246704, 2.716659546, 548.118957520, 774.580017090, -20.00")
# set_my_view()
# ```
# This way, if I later wish to re-apply the orientation as I build an expanding, complex script, I can just call `set_my_view()` again instead of pasting in the entire `cmd.set_view` line with its many numbers.
# Needing to reapply the orientation can become an issue when you start generating surfaces. The creation of a surface causes the focus of the view to shift to the center of the newly generated surface, so you may wish to frame the subject of your view again after such a step is added.
#
# This code can be used to produce more compact text to paste into scripts, without the odd spaces and `\` symbols. Just paste your `set_view` results in place of the example code in `text2format`.
text2format='''
set_view (\
0.525620461, -0.152837604, -0.836876869,\
-0.682576001, -0.662907958, -0.307641536,\
-0.507752895, 0.732934058, -0.452761710,\
0.000000000, 0.000000000, -158.795608521,\
29.419000626, 19.603000641, 10.395999908,\
117.110687256, 200.480606079, -20.000000000 )
'''
formatted = "cmd."+text2format.strip().replace(
" "," ").replace(
" "," ").replace(
" "," ").replace("( ",'("').replace(" )",'")')
print(formatted)
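An alternative to the chained `.replace` calls is a single whitespace-collapsing regular expression, which handles any run length of spaces. A sketch assuming the same `get_view` text shape as above:

```python
import re

raw = r'''
set_view (\
     0.525620461,   -0.152837604,   -0.836876869,\
     0.000000000,    0.000000000, -158.795608521 )
'''
# drop the line-continuation backslashes, collapse whitespace runs, quote the args
compact = re.sub(r'\s+', ' ', raw.replace('\\', '')).strip()
formatted = 'cmd.' + compact.replace('( ', '("').replace(' )', '")')
# formatted == 'cmd.set_view ("0.525620461, -0.152837604, -0.836876869, 0.000000000, 0.000000000, -158.795608521")'
```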
# ----
#
# Return to [the first page](index.ipynb) for a list of the demonstrations available.
#
# ----
| notebooks/demo_orient.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#data preprocessing
import pandas as pd
# Read the data from the remote CSV.
data = pd.read_csv('https://raw.githubusercontent.com/dollcg24/diabetes_dataset/master/data.csv')
from sklearn import preprocessing
#diab = pd.read_csv("diabrisk.csv")
# Label-encode every column in place (integer codes in sorted value order)
preprocess = preprocessing.LabelEncoder()
for col in ['gender', 'age', 'bmi', 'heredity', 'calorie', 'sleep', 'bp',
            'smoke', 'alcohol', 'mental', 'physical', 'skin', 'pcos', 'risk']:
    data[col] = preprocess.fit_transform(data[col])
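A note on `LabelEncoder`: `fit_transform` assigns integer codes in sorted order of the unique values, and refitting the same encoder on another column overwrites its `classes_`, so a separate encoder per column is needed if you want to invert the mapping later. A small sketch:

```python
from sklearn import preprocessing

enc = preprocessing.LabelEncoder()
codes = enc.fit_transform(['yes', 'no', 'yes', 'maybe'])
# classes are sorted alphabetically: ['maybe', 'no', 'yes'],
# so the codes come out as [2, 1, 2, 0]
```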
feature_columns = ['gender','age','bmi','heredity','calorie','sleep','bp','smoke','alcohol','mental','physical','skin','pcos']
predicted_class = ['risk']
X = data[feature_columns].values
y = data[predicted_class].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state=10)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# Create the model with 180 trees
# `model` is an instance of the classifier
model = RandomForestClassifier(n_estimators=180, random_state=10)
model.fit(X_train,y_train.ravel())
random_pred=model.predict(X_test)
#print("Random forest : ",accuracy_score(y_test,random_pred, normalize = True))
try:
import dill as pickle
except ImportError:
import pickle
filename = 'finalized_model.p'
pickl = {
'model': model
}
#pickle.dump( pickl, open( 'finalized_model' + ".p", "wb" ) )
pickle.dump(model, open(filename, 'wb'))
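To use the saved model later, load it back with `pickle.load`. A minimal round-trip sketch, with a plain dict standing in for the trained model and an illustrative temp-file path:

```python
import os
import pickle
import tempfile

# a plain dict stands in for the trained model; the path is illustrative
obj = {'n_estimators': 180, 'random_state': 10}
path = os.path.join(tempfile.gettempdir(), 'demo_model.p')
with open(path, 'wb') as f:
    pickle.dump(obj, f)
with open(path, 'rb') as f:
    loaded = pickle.load(f)
# loaded == obj
```

Using `with open(...)` (rather than the bare `open` in the dump above) guarantees the file handle is closed even if pickling fails.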
| research/Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="MfTh4bUzAmSX"
# <h1 style="padding-top: 25px;padding-bottom: 25px;text-align: left; padding-left: 10px; background-color: #DDDDDD;
# color: black;"> <img style="float: left; padding-right: 10px;" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png" height="50px"> <a href='https://harvard-iacs.github.io/2021-AC215/' target='_blank'><strong><font color="#A41034">AC215: Advanced Practical Data Science, MLOps</font></strong></a></h1>
#
# # **<font color="#A41034">Exercise 3 - Mushroom Identification Models</font>**
#
# **Harvard University**<br/>
# **Fall 2021**<br/>
# **Instructor:**<br/>
# <NAME>
#
# <hr style="height:2pt">
# + [markdown] id="pWSXnfQUStq5"
# ## **<font color="#A41034">Competition</font>**
#
# ### **<font color="#f03b20">Due Date: Check Canvas</font>**
#
# #### **[View Leaderboard](http://ac215-leaderboard.dlops.io/)**
#
# Now your task for this exercise is to build the best model for mushroom classification. You are free to use any techniques. Here are some techniques you can try:
#
# * Data augmentation
# * Hyper parameters tuning
# * Transfer Learning using different pre-trained models
# * Learning rate schedulers
# * Early stopping
# * etc...
#
# #### **Exercise Requirements:**
# * Create TF Records to build your data pipelines
# * Perform model compression using Distillation
# * Perform model compression using Pruning
#
# You can submit as many models as you want, using any techniques, but make sure your work satisfies the above requirements before you submit your notebook on Canvas
#
# <br>
#
# **Remember to submit your experiments to the cloud storage bucket using the code provided and also submit your notebook to Canvas**
#
# <br>
#
# **<font color="#f03b20">Leaderboard for this competition will be computed based on `hidden` test set. Winner gets a $50 Amazon gift card from Pavlos</font>**
# + [markdown] id="FgY9xWhgGdt8"
# ## **<font color="#A41034">Setup Notebook</font>**
# + [markdown] id="c-HGo-xOGr2t"
# **Copy & setup Colab with GPU**
# + [markdown] id="4qfXH3wYGtSa"
# 1) Select "File" menu and pick "Save a copy in Drive"
# 2) This notebook is already set up to use a GPU, but if you want to change it, go to the "Runtime" menu and select "Change runtime type". Then, in the popup under "Hardware accelerator", select "GPU" and click "Save"
# 3) If you want high RAM, there is an option for that
# + [markdown] id="xsHQIdyQHAkV"
# **Imports**
# + id="dB7OG0AQAlha"
import os
import requests
import zipfile
import tarfile
import shutil
import math
import json
import time
import sys
import cv2
import string
import re
import subprocess
import hashlib
import numpy as np
import pandas as pd
from glob import glob
import collections
import unicodedata
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# %matplotlib inline
# Tensorflow
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.utils import to_categorical
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.utils.layer_utils import count_params
# sklearn
from sklearn.model_selection import train_test_split
# Tensorflow Hub
import tensorflow_hub as hub
# Colab auth
from google.colab import auth
from google.cloud import storage
# + [markdown] id="HwwaaoEAMmLg"
# **Verify Setup**
# + [markdown] id="wD106cXQMm_8"
# It is a good practice to verify what version of TensorFlow & Keras you are using. Also verify if GPU is enabled and what GPU you have. Run the following cells to check the version of TensorFlow
#
# References:
# - [Eager Execution](https://www.tensorflow.org/guide/eager)
# - [Data Performance](https://www.tensorflow.org/guide/data_performance)
# + id="gHjjqJjIMtFH"
# Enable/Disable Eager Execution
# Reference: https://www.tensorflow.org/guide/eager
# TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately,
# without building graphs
#tf.compat.v1.disable_eager_execution()
#tf.compat.v1.enable_eager_execution()
print("tensorflow version", tf.__version__)
print("keras version", tf.keras.__version__)
print("Eager Execution Enabled:", tf.executing_eagerly())
# Get the number of replicas
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
devices = tf.config.experimental.get_visible_devices()
print("Devices:", devices)
print(tf.config.experimental.list_logical_devices('GPU'))
print("GPU Available: ", tf.config.list_physical_devices('GPU'))
print("All Physical Devices", tf.config.list_physical_devices())
# Better performance with the tf.data API
# Reference: https://www.tensorflow.org/guide/data_performance
AUTOTUNE = tf.data.experimental.AUTOTUNE
# + [markdown] id="yBRyDL1GMwj0"
# Run this cell to see what GPU you have. If you get a P100 or T4 GPU that's great. If it's K80, it will still work but it will be slow.
# + id="DbysV9VCMxDy"
# !nvidia-smi
# + [markdown] id="6i3sZbohM2K_"
# **Utils**
# + [markdown] id="AIn5czLvM2sS"
# Here are some util functions that we will be using for this notebook
# + id="wm_puO9WSoq3"
def download_file(packet_url, base_path="", extract=False, headers=None):
if base_path != "":
if not os.path.exists(base_path):
os.mkdir(base_path)
packet_file = os.path.basename(packet_url)
with requests.get(packet_url, stream=True, headers=headers) as r:
r.raise_for_status()
with open(os.path.join(base_path,packet_file), 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
if extract:
if packet_file.endswith(".zip"):
with zipfile.ZipFile(os.path.join(base_path,packet_file)) as zfile:
zfile.extractall(base_path)
else:
packet_name = packet_file.split('.')[0]
with tarfile.open(os.path.join(base_path,packet_file)) as tfile:
tfile.extractall(base_path)
def compute_dataset_metrics(data_list):
data_list_with_metrics = []
for item in data_list:
# Read image
image = cv2.imread(item[1])
data_list_with_metrics.append((item[0],item[1],image.shape[0],image.shape[1],image.nbytes / (1024 * 1024.0)))
# Build a dataframe
data_list_with_metrics = np.asarray(data_list_with_metrics)
dataset_df = pd.DataFrame({
'label': data_list_with_metrics[:, 0],
'path': data_list_with_metrics[:, 1],
'height': data_list_with_metrics[:, 2],
'width': data_list_with_metrics[:, 3],
'size': data_list_with_metrics[:, 4],
})
dataset_df["height"] = dataset_df["height"].astype(int)
dataset_df["width"] = dataset_df["width"].astype(int)
dataset_df["size"] = dataset_df["size"].astype(float)
dataset_mem_size = dataset_df["size"].sum()
value_counts = dataset_df["label"].value_counts()
height_details = dataset_df["height"].describe()
width_details = dataset_df["width"].describe()
print("Dataset Metrics:")
print("----------------")
print("Label Counts:")
print(value_counts)
print("Image Width:")
print("Min:",width_details["min"]," Max:",width_details["max"])
print("Image Height:")
print("Min:",height_details["min"]," Max:",height_details["max"])
print("Size in memory:",round(dataset_df["size"].sum(),2),"MB")
class JsonEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, decimal.Decimal):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
else:
return super(JsonEncoder, self).default(obj)
experiment_name = None
def create_experiment():
global experiment_name
experiment_name = "experiment_" + str(int(time.time()))
# Create experiment folder
if not os.path.exists(experiment_name):
os.mkdir(experiment_name)
def upload_experiment(data_details):
# Check the logged in account
# user_account = !gcloud config get-value account
user_account = user_account[0]
print("user_account",user_account)
# Check Bucket Access
bucket_name = "ac215-mushroom-app-models" # BUCKET NAME
# List buckets in a GCP project
storage_client = storage.Client(project="ac215-project") # PROJECT ID
# Get bucket for Experiments
bucket = storage_client.get_bucket(bucket_name)
print("Model Bucket:",bucket)
# Save data details used for the experiment
save_data_details(data_details)
# Copy the experiment folder to GCP Bucket
for file_path in glob(experiment_name+'/*'):
print(file_path)
blob = bucket.blob(os.path.join(user_account,file_path))
print('uploading file', file_path)
blob.upload_from_filename(file_path)
    # Submit file
    submit_file = "submit.txt"
    with open(submit_file, "w") as f:
        f.write("Submission!")
    blob = bucket.blob(os.path.join(user_account,experiment_name,submit_file))
    print('Uploading file', submit_file)
    blob.upload_from_filename(submit_file)
def save_data_details(data_details):
with open(os.path.join(experiment_name,"data_details.json"), "w") as json_file:
json_file.write(json.dumps(data_details,cls=JsonEncoder))
def save_model(model,model_name="model01"):
    # Save the entire model (structure + weights)
model.save(os.path.join(experiment_name,model_name+".hdf5"))
# Save only the weights
model.save_weights(os.path.join(experiment_name,model_name+".h5"))
# Save the structure only
model_json = model.to_json()
with open(os.path.join(experiment_name,model_name+".json"), "w") as json_file:
json_file.write(model_json)
def get_model_size(model_name="model01"):
model_size = os.stat(os.path.join(experiment_name,model_name+".h5")).st_size
return model_size
def append_training_history(model_train_history, prev_model_train_history):
for metric in ["loss","val_loss","accuracy","val_accuracy"]:
for metric_value in prev_model_train_history[metric]:
model_train_history[metric].append(metric_value)
return model_train_history
def evaluate_save_model(model,test_data, model_train_history,execution_time, learning_rate, batch_size, epochs, optimizer,save=True):
# Get the number of epochs the training was run for
num_epochs = len(model_train_history["loss"])
# Plot training results
fig = plt.figure(figsize=(15,5))
axs = fig.add_subplot(1,2,1)
axs.set_title('Loss')
# Plot all metrics
for metric in ["loss","val_loss"]:
axs.plot(np.arange(0, num_epochs), model_train_history[metric], label=metric)
axs.legend()
axs = fig.add_subplot(1,2,2)
axs.set_title('Accuracy')
# Plot all metrics
for metric in ["accuracy","val_accuracy"]:
axs.plot(np.arange(0, num_epochs), model_train_history[metric], label=metric)
axs.legend()
plt.show()
# Evaluate on test data
evaluation_results = model.evaluate(test_data)
print(evaluation_results)
if save:
# Save model
save_model(model, model_name=model.name)
model_size = get_model_size(model_name=model.name)
# Save model history
with open(os.path.join(experiment_name,model.name+"_train_history.json"), "w") as json_file:
json_file.write(json.dumps(model_train_history,cls=JsonEncoder))
trainable_parameters = count_params(model.trainable_weights)
non_trainable_parameters = count_params(model.non_trainable_weights)
# Save model metrics
metrics ={
"trainable_parameters":trainable_parameters,
"execution_time":execution_time,
"loss":evaluation_results[0],
"accuracy":evaluation_results[1],
"model_size":model_size,
"learning_rate":learning_rate,
"batch_size":batch_size,
"epochs":epochs,
"optimizer":type(optimizer).__name__
}
with open(os.path.join(experiment_name,model.name+"_model_metrics.json"), "w") as json_file:
json_file.write(json.dumps(metrics,cls=JsonEncoder))
# + [markdown] id="d1BzqnfOVwuk"
# ## **<font color="#A41034">Dataset</font>**
# + [markdown] id="o9hohypOVzX2"
# #### **Download**
# + id="ZZP5r9sAVzvx"
start_time = time.time()
download_file("https://github.com/dlops-io/datasets/releases/download/v1.0/mushrooms_6_labels.zip", base_path="datasets", extract=True)
execution_time = (time.time() - start_time)/60.0
print("Download execution time (mins)",execution_time)
# + [markdown] id="3bhtRhKfV_8k"
# #### **Load & EDA**
# + id="1bxevi-PWB_U"
# Your Code Here
# + [markdown] id="_AFAkCSbWGpk"
# ## **<font color="#A41034">Build Data Pipelines</font>**
# + [markdown] id="e0FGp87kWQva"
#
# + id="HoSaYFL7WVeS"
# Your Code Here
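The first exercise requirement is a TF Records pipeline. Below is a minimal sketch, assuming JPEG images on disk and integer labels; the helper names and the 224x224 target size are my own choices, not given by the exercise:

```python
import tensorflow as tf

def _bytes_feature(value):
    # Wrap raw bytes in a tf.train.Feature
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_tfrecord(image_paths, labels, tfrecord_path):
    # Serialize (image bytes, label) pairs into one TFRecord file
    with tf.io.TFRecordWriter(tfrecord_path) as writer:
        for path, label in zip(image_paths, labels):
            image_bytes = tf.io.read_file(path).numpy()
            example = tf.train.Example(features=tf.train.Features(feature={
                "image": _bytes_feature(image_bytes),
                "label": _int64_feature(int(label)),
            }))
            writer.write(example.SerializeToString())

def parse_example(serialized, image_size=(224, 224)):
    # Decode one serialized example back into an (image, label) pair
    feature_spec = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, image_size) / 255.0
    return image, parsed["label"]

def make_dataset(tfrecord_path, batch_size=32, shuffle=True):
    ds = tf.data.TFRecordDataset(tfrecord_path)
    ds = ds.map(parse_example, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    if shuffle:
        ds = ds.shuffle(1024)
    return ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)
```

`write_tfrecord` is run once up front; `make_dataset` can then feed `model.fit` directly.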
# + [markdown] id="3eJGLU8MWQ_k"
# ## **<font color="#A41034">Build Image Classification Models</font>**
# + [markdown] id="kyOyZHlcWTmd"
# ### **Create Experiment**
#
# Use the util functions to create an experiment to keep track of hyperparameters, metrics, models, etc. This will be used for your submission to the cloud storage bucket.
# + id="O0e1xIWNWoed"
# Create an experiment
create_experiment()
# + [markdown] id="zOtRZ1NOWrlX"
# ### **Build Model**
# + id="dRHjRhgKWU0u"
# Your Code Here
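If you go the transfer-learning route, one possible starting point is a frozen Keras MobileNetV2 backbone with a new classification head. Everything here (backbone choice, dropout rate, learning rate) is an assumption for illustration; the `tensorflow_hub` import above works the same way via `hub.KerasLayer`:

```python
import tensorflow as tf

def build_transfer_model(num_classes, image_size=224, weights="imagenet", train_base=False):
    # Pre-trained feature extractor; keep it frozen for the first training phase
    base = tf.keras.applications.MobileNetV2(
        input_shape=(image_size, image_size, 3),
        include_top=False,
        weights=weights,
        pooling="avg")
    base.trainable = train_base
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ], name="mobilenetv2_transfer")
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model
```

A common recipe is to train the head first, then set `train_base=True` with a much lower learning rate for fine-tuning.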
# + [markdown] id="fWUcehi3WwTW"
# ### **Train**
# + id="CwNjxJFzWyaA"
# Your code here
# Train model
start_time = time.time()
training_results = model.fit(
train_data,
validation_data=validation_data,
epochs=epochs,
verbose=1)
execution_time = (time.time() - start_time)/60.0
print("Training execution time (mins)",execution_time)
####################################
##### Use this code to Save ########
####################################
# Get model training history
training_history = training_results.history
# Evaluate and save the model details
evaluate_save_model(model,test_data, training_history,execution_time, learning_rate, batch_size, epochs, optimizer,save=True)
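For the distillation requirement, the core idea is to train a small student on a mix of the hard labels and the teacher's temperature-softened outputs. A minimal `GradientTape` sketch (the temperature and alpha defaults, and the assumption that both models output logits, are mine):

```python
import tensorflow as tf

def distillation_loss(y_true, student_logits, teacher_logits, temperature=3.0, alpha=0.1):
    # Hard-label term: ordinary cross-entropy on the true labels
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, student_logits, from_logits=True)
    # Soft-label term: KL divergence between softened teacher and student
    # distributions, scaled by T^2 as in Hinton et al.
    soft = tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits / temperature),
        tf.nn.softmax(student_logits / temperature)) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft

def distill_step(student, teacher, optimizer, x, y):
    # One optimization step on the student; the teacher stays frozen
    teacher_logits = teacher(x, training=False)
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        loss = tf.reduce_mean(distillation_loss(y, student_logits, teacher_logits))
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```

In practice you would loop `distill_step` over batches from your `tf.data` pipeline, or wrap the same losses in a custom `Model.train_step` so you can keep using `model.fit`.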
# + [markdown] id="_gJ1XFU1XLTe"
# ## **<font color="#A41034">Experiment Results</font>**
# + [markdown] id="5KacYD5GXOAI"
# #### **Compare Models**
# + id="yKqeVCdSXQA5"
models_metrics_list = glob(experiment_name+"/*_model_metrics.json")
all_models_metrics = []
for mm_file in models_metrics_list:
with open(mm_file) as json_file:
model_metrics = json.load(json_file)
model_metrics["name"] = mm_file.replace(experiment_name+"/","").replace("_model_metrics.json","")
all_models_metrics.append(model_metrics)
# Load metrics to dataframe
view_metrics = pd.DataFrame(data=all_models_metrics)
# Sort by accuracy while it is still numeric (after formatting, the values
# are strings and would sort lexicographically, e.g. '100.00%' < '95.00%')
view_metrics = view_metrics.sort_values(by=['accuracy'], ascending=False)
# Format columns
view_metrics['accuracy'] = view_metrics['accuracy']*100
view_metrics['accuracy'] = view_metrics['accuracy'].map('{:,.2f}%'.format)
view_metrics['trainable_parameters'] = view_metrics['trainable_parameters'].map('{:,.0f}'.format)
view_metrics['execution_time'] = view_metrics['execution_time'].map('{:,.2f} mins'.format)
view_metrics['loss'] = view_metrics['loss'].map('{:,.2f}'.format)
view_metrics['model_size'] = view_metrics['model_size']/1000000
view_metrics['model_size'] = view_metrics['model_size'].map('{:,.0f} MB'.format)
view_metrics.head(10)
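The `model_size` column above is where pruning (the third requirement) pays off. The usual tool is `tensorflow_model_optimization` (`prune_low_magnitude` with a sparsity schedule, the `UpdatePruningStep` callback during training, then `strip_pruning` before saving). The sketch below only illustrates the core idea that the library automates, one-shot magnitude pruning that zeros the smallest kernel weights:

```python
import numpy as np

def prune_kernels(model, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of each layer's kernel in place.
    # (tfmot instead prunes gradually during training so accuracy can recover.)
    for layer in model.layers:
        if not layer.trainable_weights:
            continue
        kernel = layer.trainable_weights[0]  # Dense/Conv kernel; bias left alone
        w = kernel.numpy()
        threshold = np.quantile(np.abs(w), sparsity)
        kernel.assign(np.where(np.abs(w) < threshold, 0.0, w))
    return model
```

Zeroed weights only shrink the saved file once you compress it (or use sparse storage), which is why the exercise pairs pruning with a size comparison.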
# + [markdown] id="cDHkyTPmXXTD"
# ## **<font color="#A41034">Upload Experiment to Cloud Storage</font>**
# + [markdown] id="HWyT7IxWXX5D"
# ### **Login using Google Account**
# + id="w4mJehNYXc8t"
# Authenticate
auth.authenticate_user()
# + [markdown] id="poph_6TLXmfT"
# ### **Save Experiment**
# + id="xBC3VWnSXnG2"
# Save data details used for the experiment
data_details = {
"image_width": image_width,
"image_height": image_height,
"num_channels": num_channels,
"num_classes": num_classes,
"label2index": label2index,
"index2label": index2label
}
# Upload experiment to cloud storage
upload_experiment(data_details)
# Source: Exercise/exercise_3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:aparent]
# language: python
# name: conda-env-aparent-py
# ---
# +
from __future__ import print_function
import keras
from keras.models import Sequential, Model, load_model
import tensorflow as tf
import pandas as pd
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import isolearn.io as isoio
import isolearn.keras as iso
from aparent.data.aparent_data_array import load_data
# +
#Load designed MPRA data
data_version = '_seq'
file_path = '../../data/prepared_data/apa_array_data/'
array_gens = load_data(batch_size=1, file_path=file_path, data_version=data_version)
# +
#Load APARENT model
#model_name = 'aparent_large_lessdropout_all_libs_no_sampleweights'
#model_name = 'aparent_large_all_libs'
model_name = 'aparent_libs_30_31_34'
save_dir = os.path.join(os.getcwd(), '../../saved_models')
model_path = os.path.join(save_dir, model_name + '.h5')
aparent_model = load_model(model_path)
# +
#Predict from test data generator
iso_pred_test, cut_pred_test = aparent_model.predict_generator(array_gens['all'], workers=4, use_multiprocessing=True)
#Calculate isoform logits
logodds_pred_test = np.ravel(np.log(iso_pred_test / (1. - iso_pred_test)))
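One caveat with the log-odds line above: if any predicted isoform proportion is exactly 0 or 1, the logit becomes infinite. A defensive variant (a sketch; the original pipeline may never hit this if the model's sigmoid outputs stay strictly inside (0, 1)):

```python
import numpy as np

def safe_logodds(p, eps=1e-6):
    # Clip proportions away from {0, 1} before taking the logit
    p = np.clip(np.ravel(p), eps, 1.0 - eps)
    return np.log(p / (1.0 - p))
```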
# +
#Copy the test set dataframe and store isoform predictions
array_df = array_gens['all'].sources['df'].reset_index().copy()
array_df['iso_pred'] = iso_pred_test
array_df['logodds_pred'] = logodds_pred_test
array_df = array_df[['seq', 'master_seq', 'iso_pred', 'logodds_pred']]
# +
#Dump prediction dataframe and cut probability matrix
isoio.dump({'array_df' : array_df, 'cut_prob' : sp.csr_matrix(cut_pred_test)}, 'apa_array_data/' + model_name + '_predictions' + data_version)
# -
# Source: analysis/predictions/aparent_predict_designed_library.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="BDuXLPw9lUjT"
import os
os.chdir('/content')
# + colab={"base_uri": "https://localhost:8080/"} id="RqgsscmalXrh" outputId="c154d7b2-f917-4a7d-e5f9-d653246187a4"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="dYxRDwhylfoB" outputId="bf4c8762-c4e2-4d07-d8c0-0d47c0d2bc92"
# !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
# !bzip2 -dk shape_predictor_68_face_landmarks.dat.bz2
# + id="AOmsX8nVRbYz"
# # !cp /content/shape_predictor_68_face_landmarks.dat /content/drive/MyDrive/released-project-dataset/InterpGazeData_release
# + colab={"base_uri": "https://localhost:8080/"} id="7BV-iJX9lj1s" outputId="8eda6b81-e1cf-4987-94bc-96822a35b26e"
import os
import string
import dlib
import cv2
import numpy as np
# rootDir = 'sample_data/gaze/'
# distDir = 'sample_data/gaze_patch/'
rootDir = '/content/drive/MyDrive/released-project-dataset/InterpGazeData_release/full_image'
distDir = '/content/drive/MyDrive/released-project-dataset/InterpGazeData_release/gaze_patch'
_files = []
list_dirs = os.walk(rootDir)
for root, dirs, files in list_dirs:
for f in files:
# print(f)
        if f.endswith('.jpg'):  # optionally also filter by pose, e.g. f.find('_0P') != -1
_files.append(os.path.join(root,f))
# Load the face detector and landmark predictor once, outside the image loop
PREDICTOR_PATH = 'shape_predictor_68_face_landmarks.dat'
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)
for fp in _files:
    # print(fp)
    frame = cv2.imread(fp)
    gray = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
    points_keys = []
    rects = detector(gray,1)
for i in range(len(rects)):
landmarks = np.matrix([[p.x,p.y] for p in predictor(gray, rects[i]).parts()])
img = gray.copy()
for idx, points in enumerate(landmarks):
pos = (points[0,0],points[0,1])
points_keys.append(pos)
cv2.circle(img, pos, 2, (255,0,0),-1)
# cv2.imwrite('frame.png',img)
        eye_l = landmarks[36:42]  # left-eye landmarks are points 36-41
        eye_r = landmarks[42:48]  # right-eye landmarks are points 42-47
(x_l,y_l), r_l =cv2.minEnclosingCircle(eye_l)
(x_r,y_r), r_r =cv2.minEnclosingCircle(eye_r)
x_l, x_r = int(x_l), int(x_r)
y_l, y_r = int(y_l), int(y_r)
r_l, r_r = int(1.7*r_l), int(1.7*r_r)
eye_l_img = frame[y_l-r_l:y_l+r_l, x_l-r_l:x_l+r_l]
eye_r_img = frame[y_r-r_r:y_r+r_r, x_r-r_r:x_r+r_r]
        _,fn = os.path.split(fp)
        # dst_h = os.path.join(distDir, 'head')
        dst_l = os.path.join(distDir, 'l_eye')
        dst_r = os.path.join(distDir, 'r_eye')
        # Make sure the output directories exist before writing
        os.makedirs(dst_l, exist_ok=True)
        os.makedirs(dst_r, exist_ok=True)
        # cv2.imwrite(os.path.join(dst_h, fn), frame)
        cv2.imwrite(os.path.join(dst_l, fn), eye_l_img)
        cv2.imwrite(os.path.join(dst_r, fn), eye_r_img)
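Note that the crops above assume the enlarged eye circles stay inside the frame; near image borders `y_l - r_l` can go negative, and a negative slice start silently yields a wrong (or empty) crop. A clamped-crop helper along these lines (my addition, not part of the original script) avoids that:

```python
import numpy as np

def crop_centered(image, cx, cy, radius):
    # Crop a (2*radius)-square centered at (cx, cy), clamped to the image bounds
    h, w = image.shape[:2]
    y0, y1 = max(cy - radius, 0), min(cy + radius, h)
    x0, x1 = max(cx - radius, 0), min(cx + radius, w)
    return image[y0:y1, x0:x1]
```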
# Source: tools/process_gaze.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from fastai.vision import *
from fastai import *
import os
import gc  # used later when releasing learners
from collections import defaultdict
from fastai.vision.models.cadene_models import *
# -
# ### Set up paths
train_pd = pd.read_csv('/root/.fastai/data/severstal/train.csv')
train_pd.head(5)
path = Path('/root/.fastai/data/severstal')
path.ls()
train_images = get_image_files(path/'train_images')
train_images[:3]
# ### Check maximum size of images
# +
def check_img_max_size(images):
    max_height = 0
    max_width = 0
    for train_image in images:
        img = open_image(train_image)
        if max_height < img.shape[1]:
            max_height = img.shape[1]
        if max_width < img.shape[2]:
            max_width = img.shape[2]
    return max_height, max_width
def show_image(images, index):
img_f = images[index]
print(type(img_f))
img = open_image(img_f)
print(img)
img.show(figsize=(5,5))
# -
mask_path = Path('/kaggle/mask')
if not os.path.exists(mask_path):
os.makedirs(str(mask_path))
# +
def convert_encoded_to_array(encoded_pixels):
pos_array = []
len_array = []
splits = encoded_pixels.split()
pos_array = [int(n) - 1 for i, n in enumerate(splits) if i % 2 == 0]
len_array = [int(n) for i, n in enumerate(splits) if i % 2 == 1]
return pos_array, len_array
def convert_to_pair(pos_array, rows):
return [(p % rows, p // rows) for p in pos_array]
def create_positions(single_pos, size):
return [i for i in range(single_pos, single_pos + size)]
def create_positions_pairs(single_pos, size, row_size):
return convert_to_pair(create_positions(single_pos, size), row_size)
def convert_to_mask(encoded_pixels, row_size, col_size, category):
pos_array, len_array = convert_encoded_to_array(encoded_pixels)
mask = np.zeros([row_size, col_size])
for(p, l) in zip(pos_array, len_array):
for row, col in create_positions_pairs(p, l, row_size):
mask[row][col] = category
return mask
def save_to_image(masked, image_name):
im = PIL.Image.fromarray(masked)
im = im.convert("L")
image_name = re.sub(r'(.+)\.jpg', r'\1', image_name) + ".png"
real_path = mask_path/image_name
im.save(real_path)
return real_path
def open_single_image(path):
img = open_image(path)
img.show(figsize=(20,20))
def get_y_fn(x):
return mask_path/(x.stem + '.png')
def group_by(train_images, train_pd):
tran_dict = {image.name:[] for image in train_images}
pattern = re.compile('(.+)_(\d+)')
for index, image_path in train_pd.iterrows():
m = pattern.match(image_path['ImageId_ClassId'])
file_name = m.group(1)
category = m.group(2)
tran_dict[file_name].append((int(category), image_path['EncodedPixels']))
return tran_dict
def display_image_with_mask(img_name):
full_image = path/'train_images'/img_name
print(full_image)
open_single_image(full_image)
mask_image = get_y_fn(full_image)
mask = open_mask(mask_image)
print(full_image)
mask.show(figsize=(20, 20), alpha=0.5)
# -
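The per-pixel loops above are easy to follow but slow for 256x1600 masks. The same 1-based, column-major run-length decoding can be done with NumPy slicing; this sketch is intended to be equivalent to `convert_to_mask`:

```python
import numpy as np

def rle_to_mask(encoded_pixels, row_size, col_size, category):
    # Same encoding as the Kaggle data: alternating (start, run length) pairs,
    # positions 1-based and running down the columns (column-major)
    mask_flat = np.zeros(row_size * col_size, dtype=np.uint8)
    tokens = np.asarray(encoded_pixels.split(), dtype=np.int64)
    starts, lengths = tokens[0::2] - 1, tokens[1::2]
    for s, l in zip(starts, lengths):
        mask_flat[s:s + l] = category
    # Reshape in Fortran (column-major) order to match the position layout
    return mask_flat.reshape((row_size, col_size), order='F')
```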
grouped_categories_mask = group_by(train_images, train_pd)
# ### Create mask files and save these to kaggle/mask/
image_height = 256
image_width = 1600
if not os.path.exists(mask_path/'0002cc93b.png'):
for image_name, cat_list in grouped_categories_mask.items():
masked = np.zeros([image_height, image_width])
for cat_mask in cat_list:
encoded_pixels = cat_mask[1]
if pd.notna(cat_mask[1]):
masked += convert_to_mask(encoded_pixels, image_height, image_width, cat_mask[0])
if np.amax(masked) > 4:
print(f'Check {image_name} for max category {np.amax(masked)}')
save_to_image(masked, image_name)
# ### Prepare Transforms
# +
def limited_dihedral_affine(k:partial(uniform_int,0,3)):
"Randomly flip `x` image based on `k`."
x = -1 if k&1 else 1
y = -1 if k&2 else 1
if k&4: return [[0, x, 0.],
[y, 0, 0],
[0, 0, 1.]]
return [[x, 0, 0.],
[0, y, 0],
[0, 0, 1.]]
dihedral_affine = TfmAffine(limited_dihedral_affine)
def get_extra_transforms(max_rotate:float=3., max_zoom:float=1.1,
max_lighting:float=0.2, max_warp:float=0.2, p_affine:float=0.75,
p_lighting:float=0.75, xtra_tfms:Optional[Collection[Transform]]=None)->Collection[Transform]:
"Utility func to easily create a list of flip, rotate, `zoom`, warp, lighting transforms."
p_lightings = [p_lighting, p_lighting + 0.2, p_lighting + 0.4, p_lighting + 0.6, p_lighting + 0.7]
max_lightings = [max_lighting, max_lighting + 0.2, max_lighting + 0.4, max_lighting + 0.6, max_lighting + 0.7]
res = [rand_crop(), dihedral_affine(),
symmetric_warp(magnitude=(-max_warp,max_warp), p=p_affine),
rotate(degrees=(-max_rotate,max_rotate), p=p_affine),
rand_zoom(scale=(1., max_zoom), p=p_affine)]
res.extend([brightness(change=(0.5*(1-mp[0]), 0.5*(1+mp[0])), p=mp[1]) for mp in zip(max_lightings, p_lightings)])
res.extend([contrast(scale=(1-mp[0], 1/(1-mp[0])), p=mp[1]) for mp in zip(max_lightings, p_lightings)])
# train , valid
return (res, [crop_pad()])
def get_simple_transforms(max_rotate:float=3., max_zoom:float=1.1,
max_lighting:float=0.2, max_warp:float=0.2, p_affine:float=0.75,
p_lighting:float=0.75, xtra_tfms:Optional[Collection[Transform]]=None)->Collection[Transform]:
"Utility func to easily create a list of flip, rotate, `zoom`, warp, lighting transforms."
res = [
rand_crop(),
symmetric_warp(magnitude=(-max_warp,max_warp), p=p_affine),
rotate(degrees=(-max_rotate,max_rotate), p=p_affine),
rand_zoom(scale=(1., max_zoom), p=p_affine)
]
# train , valid
return (res, [crop_pad()])
# -
# ### Prepare data bunch
train_images = (path/'train_images').ls()
src_size = np.array(open_image(str(train_images[0])).shape[1:])
valid_pct = 0.10
codes = array(['0', '1', '2', '3', '4'])
def create_data_bunch(bs, size):
src = (SegmentationItemList.from_folder(path/'train_images')
.split_by_rand_pct(valid_pct=valid_pct)
.label_from_func(get_y_fn, classes=codes))
data = (src.transform(get_simple_transforms(), size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats))
return src, data
bs = 4
size = src_size//4
src, data = create_data_bunch(bs, size)
# ### Create learner and training
# Starting with low resolution training
# ##### Some metrics functions
# +
name2id = {v:k for k,v in enumerate(codes)}
void_code = name2id['0']
def acc_camvid(input, target):
target = target.squeeze(1)
mask = target != void_code
argmax = (input.argmax(dim=1))
comparison = argmax[mask]==target[mask]
return torch.tensor(0.) if comparison.numel() == 0 else comparison.float().mean()
def acc_camvid_with_zero_check(input, target):
target = target.squeeze(1)
argmax = (input.argmax(dim=1))
batch_size = input.shape[0]
total = torch.empty([batch_size])
for b in range(batch_size):
if(torch.sum(argmax[b]).item() == 0.0 and torch.sum(target[b]).item() == 0.0):
total[b] = 1
else:
mask = target[b] != void_code
comparison = argmax[b][mask]==target[b][mask]
total[b] = torch.tensor(0.) if comparison.numel() == 0 else comparison.float().mean()
return total.mean()
def calc_dice_coefficients(argmax, target, cats):
def calc_dice_coefficient(seg, gt, cat: int):
mask_seg = seg == cat
mask_gt = gt == cat
sum_seg = torch.sum(mask_seg.float())
sum_gt = torch.sum(mask_gt.float())
if sum_seg + sum_gt == 0:
return torch.tensor(1.0)
return (torch.sum((seg[gt == cat] / cat).float()) * 2.0) / (sum_seg + sum_gt)
total_avg = torch.empty([len(cats)])
for i, c in enumerate(cats):
total_avg[i] = calc_dice_coefficient(argmax, target, c)
return total_avg.mean()
def dice_coefficient(input, target):
target = target.squeeze(1)
argmax = (input.argmax(dim=1))
batch_size = input.shape[0]
cats = [1, 2, 3, 4]
total = torch.empty([batch_size])
for b in range(batch_size):
total[b] = calc_dice_coefficients(argmax[b], target[b], cats)
return total.mean()
def calc_dice_coefficients_2(argmax, target, cats):
def calc_dice_coefficient(seg, gt, cat: int):
mask_seg = seg == cat
mask_gt = gt == cat
sum_seg = torch.sum(mask_seg.float())
sum_gt = torch.sum(mask_gt.float())
return (torch.sum((seg[gt == cat] / cat).float())), (sum_seg + sum_gt)
total_avg = torch.empty([len(cats), 2])
for i, c in enumerate(cats):
total_avg[i][0], total_avg[i][1] = calc_dice_coefficient(argmax, target, c)
total_sum = total_avg.sum(axis=0)
if (total_sum[1] == 0.0):
return torch.tensor(1.0)
return total_sum[0] * 2.0 / total_sum[1]
def dice_coefficient_2(input, target):
target = target.squeeze(1)
argmax = (input.argmax(dim=1))
batch_size = input.shape[0]
cats = [1, 2, 3, 4]
total = torch.empty([batch_size])
for b in range(batch_size):
total[b] = calc_dice_coefficients_2(argmax[b], target[b], cats)
return total.mean()
def accuracy_simple(input, target):
target = target.squeeze(1)
return (input.argmax(dim=1)==target).float().mean()
def dice_coeff(pred, target):
smooth = 1.
num = pred.size(0)
m1 = pred.view(num, -1) # Flatten
m2 = target.view(num, -1) # Flatten
intersection = (m1 * m2).sum()
return (2. * intersection + smooth) / (m1.sum() + m2.sum() + smooth)
# -
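As a sanity check for the torch metrics above: the underlying formula for a single class c is Dice = 2|S∩G| / (|S| + |G|), with the empty-prediction vs empty-ground-truth case counted as a perfect 1.0 (the convention used in `calc_dice_coefficients`). A minimal NumPy version:

```python
import numpy as np

def dice_per_class(pred, target, category):
    # pred, target: integer class maps of the same shape
    pred_c = (pred == category)
    target_c = (target == category)
    denom = pred_c.sum() + target_c.sum()
    if denom == 0:
        # Both empty for this class: count as a perfect score
        return 1.0
    return 2.0 * np.logical_and(pred_c, target_c).sum() / denom
```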
# ### Customized loss function
# ##### The main training function
# +
from fastai import callbacks
def train_learner(learn, slice_lr, epochs=10, pct_start=0.8, best_model_name='best_model',
patience_early_stop=4, patience_reduce_lr = 3):
learn.fit_one_cycle(epochs, slice_lr, pct_start=pct_start,
callbacks=[callbacks.SaveModelCallback(learn, monitor='dice_coefficient',mode='max', name=best_model_name),
callbacks.EarlyStoppingCallback(learn=learn, monitor='dice_coefficient', patience=patience_early_stop),
callbacks.ReduceLROnPlateauCallback(learn=learn, monitor='dice_coefficient', patience=patience_reduce_lr),
callbacks.TerminateOnNaNCallback()])
# -
# ### First Training
metrics = [accuracy_simple, acc_camvid_with_zero_check, dice_coefficient, dice_coefficient_2]
wd=1e-2
learn = unet_learner(data, models.cadene_models.se_resnext101_32x4d, metrics=metrics, wd=wd, bottle=True)
learn.loss_func = CrossEntropyFlat(axis=1, weight=torch.tensor([1.0, .5, .5, .5, .5]).cuda())
learn.loss_func
learn.model_dir = Path('/kaggle/model')
learn = to_fp16(learn, loss_scale=4.0)
lr_find(learn, num_it=400)
learn.recorder.plot()
lr=1e-05
train_learner(learn, slice(lr), epochs=10, pct_start=0.8, best_model_name='bestmodel-frozen-1',
patience_early_stop=4, patience_reduce_lr = 3)
learn.save('stage-1')
learn.load('stage-1');
learn.load('bestmodel-frozen-1');
# +
# learn.export(file='/kaggle/model/export-1.pkl')
# -
learn.unfreeze()
lrs = slice(lr/100,lr)
train_learner(learn, lrs, epochs=10, pct_start=0.8, best_model_name='bestmodel-unfrozen-1',
patience_early_stop=4, patience_reduce_lr = 3)
learn.save('stage-2');
learn.load('stage-2');
learn.export(file='/kaggle/model/export-2.pkl')
# ### Go Large
learn=None
gc.collect()
bs=1
def create_large_learner(bs=4, size=size, transform_func=get_simple_transforms, model_to_load='bestmodel-unfrozen-1'):
data = (src.transform(transform_func(), size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats))
learn = unet_learner(data, models.cadene_models.se_resnext101_32x4d, metrics=metrics, wd=wd, bottle=True)
learn.model_dir = Path('/kaggle/model')
learn.loss_func = CrossEntropyFlat(axis=1, weight=torch.tensor([1.5, .5, .5, .5, .5]).cuda())
learn = to_fp16(learn, loss_scale=8.0)
if model_to_load is not None:
learn.load(model_to_load)
return learn
learn = create_large_learner(bs=bs, size=src_size, transform_func=get_simple_transforms, model_to_load='bestmodel-unfrozen-1')
lr_find(learn, num_it=400)
learn.recorder.plot()
lr=6e-07
train_learner(learn, slice(lr), epochs=5, pct_start=0.8, best_model_name='bestmodel-frozen-3',
patience_early_stop=4, patience_reduce_lr = 3)
learn.save('stage-3');
learn.load('bestmodel-frozen-3');
learn.export(file='/kaggle/model/export-3.pkl')
learn = create_large_learner(bs=bs, transform_func=get_extra_transforms, model_to_load='bestmodel-frozen-3')
learn.unfreeze()
lrs = slice(lr/1000,lr/10)
train_learner(learn, lrs, epochs=10, pct_start=0.8, best_model_name='bestmodel-4',
patience_early_stop=5, patience_reduce_lr = 3)
learn.save('stage-4');
learn.load('bestmodel-4');
learn.export(file='/kaggle/model/export-4.pkl')
# !pwd
# !cp /kaggle/model/export.pkl /opt/fastai/fastai-exercises/nbs_gil
from IPython.display import FileLink
FileLink(r'export-4.pkl')
# ### Inference
learn=None
gc.collect()
test_images = (path/'test_images').ls()
inference_learn = load_learner('/kaggle/model/', file='export-2.pkl')
inference_learn = to_fp16(inference_learn, loss_scale=4.0)
# +
def predict(img_path):
pred_class, pred_idx, outputs = inference_learn.predict(open_image(str(img_path)))
return pred_class, pred_idx, outputs
def encode_classes(pred_class_data):
pixels = np.concatenate([[0], torch.transpose(pred_class_data.squeeze(), 0, 1).flatten(), [0]])
classes_dict = {1: [], 2: [], 3: [], 4: []}
count = 0
previous = pixels[0]
for i, val in enumerate(pixels):
if val != previous:
if previous in classes_dict:
classes_dict[previous].append((i - count, count))
count = 0
previous = val
count += 1
return classes_dict
def convert_classes_to_text(classes_dict, clazz):
return ' '.join([f'{v[0]} {v[1]}' for v in classes_dict[clazz]])
# -
image_to_predict = train_images[16].name
display_image_with_mask(image_to_predict)
pred_class, pred_idx, outputs = predict(path/f'train_images/{image_to_predict}')
pred_class
torch.transpose(pred_class.data.squeeze(), 0, 1).shape
# #### Checking encoding methods
encoded_all = encode_classes(pred_class.data)
print(convert_classes_to_text(encoded_all, 3))
image_name = train_images[16]
print(get_y_fn(image_name))
img = open_mask(get_y_fn(image_name))
img_data = img.data
print(convert_classes_to_text(encode_classes(img_data), 3))
img_data.shape
# ### Loop through the test images and create submission csv
# +
import time
start_time = time.time()
defect_classes = [1, 2, 3, 4]
with open('submission.csv', 'w') as submission_file:
submission_file.write('ImageId_ClassId,EncodedPixels\n')
for i, test_image in enumerate(test_images):
pred_class, pred_idx, outputs = predict(test_image)
encoded_all = encode_classes(pred_class.data)
for defect_class in defect_classes:
submission_file.write(f'{test_image.name}_{defect_class},{convert_classes_to_text(encoded_all, defect_class)}\n')
if i % 5 == 0:
print(f'Processed {i} images\r', end='')
print(f"--- {time.time() - start_time} seconds ---")
# -
# ### Alternative prediction methods
preds,y = learn.get_preds(ds_type=DatasetType.Test, with_loss=False)
preds.shape
pred_class_data = preds.argmax(dim=1)
len((path/'test_images').ls())
data.test_ds.x
# Source: nbs_gil/severstal/Severstal Competition 14 resnext.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classes
# This third notebook is an optional extension describing how classes work in Python. Classes are very useful when developing software, but you will likely not need to define your own if you are using Python to analyse data.
#
# Classes are effectively a **nice way to package functionality and data together** that has to be applied to multiple different objects of the same type. An instance of a class is called an **object** and can have attributes attached to it for maintaining its state. Class instances can also have methods for modifying that state.
#
# Classes are defined as follows:
# ```python
# class className:
# globalValue = "global"
# # methods that belong to the class
# def __init__(self, name):
# # this method is called whenever a new instance is created
# self._instanceName = name
#
#     def classMethod(self):
#         # this is a method that belongs to the class.
#         # Note the argument self, which is a reference to the object itself
#         pass
#
#     def classMethod2(self):
#         # Another method
#         pass
# ```
#
# We define a new object as:
# ```python
# newObject = className("Example")
# ```
#
#
# You use classes to define objects that represent the concepts and things that your program will work with. For example, if your program managed exam results of students, then you may create one class that represents an Exam, and another that represents a Student.
#
# Let us see how this can be implemented:
# +
class Exam:
    def __init__(self, max_score=100):
        self._max_score = max_score
        self._actual_score = 0

    def percent(self):
        return 100.0 * self._actual_score / self._max_score

    def setResult(self, score):
        if score < 0:
            self._actual_score = 0
        elif score > self._max_score:
            self._actual_score = self._max_score
        else:
            self._actual_score = score

    def grade(self):
        if self._actual_score == 0:
            return "U"
        elif self.percent() > 70.0:
            return "A"
        elif self.percent() > 60.0:
            return "B"
        elif self.percent() > 50.0:
            return "C"
        else:
            return "F"

class Student:
    def __init__(self, name):
        self._exams = {}
        self._name = name

    def addExam(self, name, exam):
        self._exams[name] = exam

    def addResult(self, name, score):
        self._exams[name].setResult(score)

    def result(self, exam):
        return self._exams[exam].percent()

    def grade(self, exam):
        return self._exams[exam].grade()

    def grades(self):
        g = {}
        for exam in self._exams.keys():
            g[exam] = self.grade(exam)
        return g
# -
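# A quick check of the clamping behaviour in `setResult`: scores are forced into the range `[0, max_score]`. The `Exam` class is repeated here in condensed form (using `min`/`max` for the same clamping rule) so the cell runs on its own:

```python
class Exam:  # condensed copy of the class above
    def __init__(self, max_score=100):
        self._max_score = max_score
        self._actual_score = 0
    def percent(self):
        return 100.0 * self._actual_score / self._max_score
    def setResult(self, score):
        # same clamping rule as above, written with min/max
        self._actual_score = min(max(score, 0), self._max_score)
    def grade(self):
        if self._actual_score == 0:
            return "U"
        elif self.percent() > 70.0:
            return "A"
        elif self.percent() > 60.0:
            return "B"
        elif self.percent() > 50.0:
            return "C"
        else:
            return "F"

exam = Exam(20)
exam.setResult(-5)          # negative scores are clamped to 0
low_grade = exam.grade()    # "U": no valid score recorded
exam.setResult(50)          # scores above max_score are clamped to max_score
top = exam.percent()
```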
# We can now create a `Student` object and add exams to it:
# You will need to add a name for the student below
s = Student("Ignat") # create new object s of type Student
s.addExam("Maths", Exam(20)) # use Students' method
s.addExam("Chemistry", Exam(75))
# The student is now registered for these exams. Next we enter his results:
s.addResult("Maths", 15)
s.addResult("Chemistry", 62)
s.grades()
# Programming with classes makes the code easier to read, as the code more closely represents the concepts that make up the program. For example, here we have a class that represents a full school of students.
class School:
    def __init__(self):
        self._students = {}
        self._exams = []

    def addStudent(self, name):
        self._students[name] = Student(name)

    def addExam(self, exam, max_score):
        self._exams.append(exam)
        for key in self._students.keys():
            self._students[key].addExam(exam, Exam(max_score))

    def addResult(self, name, exam, score):
        self._students[name].addResult(exam, score)

    def grades(self):
        grades = {}
        for name in self._students.keys():
            grades[name] = self._students[name].grades()
        return grades
# Now we can populate the school with students and their grades:
school = School()
school.addStudent("Andrew")
school.addStudent("James")
school.addStudent("Laura")
school.addExam("Maths", 20)
school.addExam("Physics", 50)
school.addExam("English", 30)
school.grades()
# We have a school of students but sadly all of them have grade U (ungraded), because no results have been entered yet. How can we add grades? Functions and loops come to the rescue!
#
# The grades which the examiners have returned are:
maths_results = {"Andrew" : 13, "James" : 17, "Laura" : 14}
physics_results = {"Andrew" : 34, "James" : 44, "Laura" : 27}
english_results = {"Andrew" : 26, "James" : 14, "Laura" : 29}
# +
def addResults(school, exam, results):
    for student in results.keys():
        school.addResult(student, exam, results[student])
addResults(school, "Maths", maths_results)
addResults(school, "Physics", physics_results)
addResults(school, "English", english_results)
school.grades()
# -
# Let us take a step back and have a look at the big picture now:
#
# 
#
# Hopefully this makes using the whole data structure more intuitive, which was the initial goal. Now we can easily change things and add new Students or Exams (and results).
#
# Taking a step back even further, everything in Python is an object which was defined in a class somewhere. That is why Python is referred to as an object-oriented programming language.
#
# Strings, floats, integers, etc.: everything is an object with its own values and methods.
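# For instance, even literal values are instances of classes, each carrying its own methods:

```python
x = 42
s = "hello"
int_class = type(x).__name__   # every value knows its class
upper = s.upper()              # strings are objects with methods
bits = x.bit_length()          # so are integers
```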
# For an exercise exploring classes, see the notebook python-intro-exercises.
| python-intro-3-extra-classes-sol-Yixuan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import seaborn as sns
import matplotlib as mpl
#Importing data
df1 = pd.read_csv("C:/Users/alexandersa/Desktop/death.csv")
file = "C:/Users/alexandersa/Desktop/death.csv"
print(df1.head())
print(df1.columns)
print(df1.shape)
print(df1.info())
death = pd.read_csv(file, index_col='State')
print(death.head())
death = death.drop("113 Cause Name", axis=1)
print(death.head())
AL = death.loc["Alabama"]
print(AL.head())
print(AL.tail())
print(AL.shape)
AL_all = AL.iloc[1:19, 0:3]
print(AL_all)
plt.plot(AL_all["Year"], AL_all["Deaths"])
AL_max = AL.iloc[19:, :]
print(AL_max.head())
AL_max[AL_max['Deaths']==AL_max['Deaths'].max()]
# +
AL_1999 = AL.set_index(["Year"])
print(AL_1999.head())
AL_1999 = AL_1999.loc[1999]
print(AL_1999)
# -
AL_1999_2 = AL_1999.iloc[1:,:]
print(AL_1999_2.head())
AL_1999_2.plot(kind='bar',x='Cause Name',y='Deaths', title="Number of Deaths in Alabama in 1999 by Cause")
#Obtaining all deaths from 2016
df_2016 = pd.read_csv(file, index_col='Year')
df_2016 = df_2016.drop("113 Cause Name", axis=1)
df_2016 = df_2016.iloc[:,0:3]
df_2016 = df_2016.loc[2016]
print(df_2016.head())
print(df_2016.tail())
df_2016 = df_2016.sort_values(by=['Cause Name'])
print(df_2016.head(55))
df_2016 = df_2016.iloc[52:]
print(df_2016.head())
# +
#Stacked bar of deaths by cause in each state in 2016 (hard to read)
df_2016 = df_2016.loc[df_2016['State'] != "United States"]
df_2016.pivot(index='State', columns='Cause Name', values='Deaths').plot(kind='bar', stacked=True, legend=False, figsize=(20,10))
plt.legend(loc='upper left', prop={'size':14}, bbox_to_anchor=(1,1))
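# The `pivot` call above reshapes the tidy table (one row per State/Cause pair) into a wide one with one column per cause, which is what the stacked-bar plot needs. A minimal sketch with made-up numbers:

```python
import pandas as pd

# tiny stand-in for df_2016: one row per (State, Cause Name) pair
tidy = pd.DataFrame({
    'State': ['Alabama', 'Alabama', 'Alaska', 'Alaska'],
    'Cause Name': ['Cancer', 'Suicide', 'Cancer', 'Suicide'],
    'Deaths': [10000, 800, 900, 190],
})
# wide form: State becomes the index, one column per cause
wide = tidy.pivot(index='State', columns='Cause Name', values='Deaths')
```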
# +
#As percentages instead -- I don't think this is any better
import matplotlib.ticker as mtick
df_2016.groupby(['State','Cause Name'])['Deaths'].size().groupby(level=0).apply(
lambda x: 100 * x / x.sum()
).unstack().plot(kind='bar',stacked=True, figsize=(20,10))
plt.yticks(fontsize='small')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
plt.legend(loc='upper left', prop={'size':14}, bbox_to_anchor=(1,1))
plt.show()
# +
#Let's look at suicide rates
suicide = death.loc[death['Cause Name'] == "Suicide"]
suicide = suicide.iloc[:,0:3]
suicide = suicide.reset_index()
print(suicide.head())
# +
#How have suicide rates changed from 1999 to 2016
syear_1999 = suicide.loc[suicide["Year"] == 1999]
syear_2016 = suicide.loc[suicide["Year"] == 2016]
print(syear_1999.head())
s9916 = pd.concat([syear_1999, syear_2016])  # DataFrame.append was removed in pandas 2.0
print(s9916.head())
print(s9916.tail())
s9916 = s9916.loc[s9916['State'] != "United States"]
# +
fig, ax = plt.subplots()
s9916_p1 = s9916.groupby('Year').plot(x='State', y='Deaths', legend=False, kind="bar", alpha=0.5,ax=ax,figsize=(20,10))
plt.ylabel('Deaths by Suicide')
# +
s9916_p2 = s9916.pivot(index='State', columns='Year', values='Deaths').plot(kind='bar', figsize=(20,10))
plt.ylabel('Deaths by Suicide')
# +
#Which states have had a larger change in suicide
syear_16_2 = syear_2016.set_index("State")
syear_99_2 = syear_1999.set_index("State")
print(syear_16_2.head())
print(syear_99_2.head())
s_delta_16_99 = syear_16_2['Deaths'] - syear_99_2['Deaths']
print(s_delta_16_99.head())
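# The subtraction above works because pandas aligns the two Series on their State index rather than on row position. A small self-contained sketch with made-up numbers:

```python
import pandas as pd

deaths_2016 = pd.Series({'Alabama': 836, 'Alaska': 186}, name='Deaths')
# note the different row order: alignment is by index label, not position
deaths_1999 = pd.Series({'Alaska': 152, 'Alabama': 512}, name='Deaths')
delta = deaths_2016 - deaths_1999
```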
# +
#Total deaths
total_all_years=death.sort_values(by=['Cause Name'])
print(total_all_years.head())
total_all_years=total_all_years.loc[total_all_years['Cause Name'] == "All causes"]
print(total_all_years.tail())
#total deaths in 1999
total_1999 = total_all_years.loc[total_all_years['Year'] == 1999]
total_1999 = total_1999.reset_index()
total_1999 = total_1999.set_index("State")
total_1999 = total_1999.sort_values(by=['State'])
total_1999 = total_1999.iloc[:,0:3]
print(total_1999.head())
#total deaths in 2016
total_2016 = total_all_years.loc[total_all_years['Year'] == 2016]
total_2016 = total_2016.reset_index()
total_2016 = total_2016.set_index("State")
total_2016 = total_2016.sort_values(by=['State'])
total_2016 = total_2016.iloc[:,0:3]
print(total_2016.head())
# +
#percentage of suicides in 1999 and 2016
print(syear_99_2.head())
print(total_1999.head())
print(syear_16_2.head())
#Adding total deaths to number of suicides
syear_99_2['Total Deaths in 1999'] = total_1999['Deaths'].values
syear_16_2['Total Deaths in 2016'] = total_2016['Deaths'].values
#Dividing suicides by deaths to get percentage of deaths that were suicides
syear_99_2[['Suicide (Percent)']] = syear_99_2[['Deaths']].div(syear_99_2['Total Deaths in 1999'].values,axis=0)
syear_99_2.loc[:,'Suicide (Percent)'] *= 100
syear_16_2[['Suicide (Percent)']] = syear_16_2[['Deaths']].div(syear_16_2['Total Deaths in 2016'].values,axis=0)
syear_16_2.loc[:,'Suicide (Percent)'] *= 100
# +
#removing US
syear_99_3 = syear_99_2.reset_index()
syear_16_3 = syear_16_2.reset_index()
syear_99_3 = syear_99_3.loc[syear_99_3['State'] != "United States"]
syear_16_3 = syear_16_3.loc[syear_16_3['State'] != "United States"]
print(syear_99_3.head())
print(syear_16_3.head())
# +
# Adding two suicide rate dataframes together
s_p_9916 = pd.concat([syear_99_3, syear_16_3], sort=False)  # append was removed in pandas 2.0
print(s_p_9916.head())
# +
#suicide rate in 1999
fig, ax = plt.subplots()
percent_suicide_1999_p1 = syear_99_3.groupby('Year').plot(x='State', y='Suicide (Percent)', legend=False, kind="barh", alpha=0.5,ax=ax,figsize=(20,20))
# -
#Graphing both to see change in suicide percentage
fig, ax = plt.subplots()
s_p_9916_p1 = s_p_9916.groupby('Year').plot(x='State', y='Suicide (Percent)', legend=False, kind="barh", alpha=0.5,ax=ax,figsize=(20,10))
plt.ylabel('Deaths by Suicide')
# +
s_p_9916_p2 = s_p_9916.pivot(index='State', columns='Year', values='Suicide (Percent)').plot(kind='bar', figsize=(25,15))
plt.ylabel('Deaths by Suicide')
# +
# Change in suicide rates
syear_16_3["Change"] = syear_16_3['Suicide (Percent)'] - syear_99_3['Suicide (Percent)']
print(syear_16_3.head())
# -
syear_16_3.plot(kind='bar',x='State',y='Change', title="Change in Suicide Percentages from 1999 to 2016 by State", figsize=(25,15))
plt.grid(color='grey', linestyle='-', linewidth=.3, alpha=0.5)
pop = pd.read_csv("C:/Users/alexandersa/Downloads/uspop.csv", index_col="NAME")
pop = pop.loc[:,"POPESTIMATE2010":"POPESTIMATE2016"]
pop = pop.iloc[5:,:]
pop.columns = ["2010", "2011", "2012", "2013", "2014", "2015", "2016"]
pop.index.name = 'State'
pop = pop.drop("Puerto Rico")
print(pop.tail())
# +
#Adding 2016 population to deaths
pop_16 = pop[["2016"]]
print(pop_16.head())
pop_16 = pop_16.reset_index()
pop_16 = pop_16.sort_values(by=['State'])
syear_16_4 = syear_16_3.sort_values(by=['State'])
print(pop_16.head())
pop_s_16 = pd.merge(syear_16_4, pop_16, on='State')
print(pop_s_16)
# +
#Most deaths per population in 2016
pop_s_16[['Proportional Deaths']] = pop_s_16[['Total Deaths in 2016']].div(pop_s_16['2016'].values,axis=0)
print(pop_s_16.head())
# -
#Plotting proportional deaths
fig, ax = plt.subplots()
pop_deaths_16 = pop_s_16.groupby('Year').plot(x='State', y='Proportional Deaths', legend=False, kind="bar", alpha=0.75,ax=ax,figsize=(20,20))
plt.xlabel('State', fontsize=18)
# +
#sorting
pop_s_16_2 = pop_s_16.sort_values(by=['Proportional Deaths'])
print(pop_s_16_2.head())
pop_s_16_2.plot(kind='bar',x='State',y='Proportional Deaths', title="Death Rate by State in 2016", figsize=(20,20), legend=False)
# +
#suicide rate based from population
pop_s_16[['Suicide Rate']] = pop_s_16[['Deaths']].div(pop_s_16['2016'].values,axis=0)
pop_s_16_3 = pop_s_16.sort_values(by=['Suicide Rate'])
print(pop_s_16_3.head())
pop_s_16_3.plot(kind='bar',x='State',y='Suicide Rate', title="Suicide Rate by State in 2016", figsize=(20,20), color="red")
# -
| notebooks/Death_Sara.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # <span style="color:blue">Sampling of Continuous-Time Signals</span>
# <font size="+1"><b>Contents:</b></font>
# <ol>
#     <li><a href="#introducao">Introduction</a></li>
#     <li><a href="#fundamentacao">Theoretical Background</a></li>
#     <li><a href="#sinal_basico">Basic Signal</a></li>
#     <li><a href="#subamostragem">Undersampling</a></li>
#     <li><a href="#outras_decimacoes">Other Decimations</a></li>
#     <li><a href="#sinal_gorjeio">Chirp Signal</a></li>
#     <li><a href="#aliasing_na_musica">Aliasing in Music</a></li>
# </ol>
# ## 1. Introduction<a name="introducao"></a>
# **Sampling** a continuous signal $x(t)$ produces replicas of the spectrum $X(\omega)=F\{x(t)\}$ centered at frequencies that are multiples of $\omega_s=2\pi f_s=2\pi/T_s$. Assuming the continuous signal is real ($x(t)\in\mathbb{R}$) and band-limited, $|X(\omega)|=0$ for $|\omega|>\omega_{max}$, where $\omega_{max}$ is the maximum frequency of $x(t)$. The spectral replicas due to sampling do not overlap if the **Nyquist (sampling) theorem** is observed: $\omega_s \geq 2.\omega_{max}$. For critical (Nyquist) sampling, the sampling frequency is chosen as $\omega_s=2.\omega_{max}$.
#
# Digital signal processors (DSPs) and general-purpose processors (GPPs) can only perform arithmetic over a limited range of numbers. So far we have considered discrete signals with continuous amplitude values, which cannot be handled directly by such processors. **Quantization** is the process of mapping input values from a large (usually continuous) set to output values in a smaller (countable) set, usually with a finite number of elements. *Rounding* and *truncation* are typical examples of quantization processes.
#
# Scalar quantization is an instantaneous, memoryless operation. It can be applied to a continuous-amplitude signal, also referred to as an analog signal, or to a discrete (time-sampled) signal. A quantized discrete signal is called a **digital signal**.
# <p>Computers with sound cards can be used to explore aspects of <B>SAMPLING</B> and <B>ALIASING</B>. <P>In this lab we will generate signals and examine the effects of playing them back at different sampling rates.
# ## 2. Theoretical Background<a name="fundamentacao"></a>
# Consider the **digitization** (sampling + quantization + coding) of an analog signal, $x_a(t)$, with sampling
# frequency $f_s = 1/T_s$, measured in Hz or samples/s. The sequence resulting from sampling (in general a vector of samples) is represented
# by:
# $$x[n]=x_a(n.T_s)$$
# where $t=n.T_s=\frac{n}{f_s}$ are the sampling instants of the analog signal.<p>Consider the digitization of a cosine signal of frequency $f$:
# <p>$$x_a(t)=A.cos(\omega.t+\theta)=A.cos(2\pi.f.t+\theta)$$
#
# where $\omega$ is measured in __rad/s__, $f$ is measured in **Hz**, and $\theta$ is measured in **rad**.
# The time discretization (sampling) of this signal is given by:
# <p>$$x[n]=x_a(n.T_s)=A.cos(2\pi.f.n.T_s+\theta)=A.cos(2\pi.\frac{f}{f_s}.n+\theta)=A.cos(\Omega.n+\theta)$$
#
# where $\Omega=2\pi.f/f_s$ is the "digital frequency" measured in **rad/sample**, and $\bar{f}=f/f_s$ is the normalized frequency measured in **cycles/sample**.
# Therefore, the ranges of frequency values for the continuous-time and discrete-time
# signals are:
# <p>$$-\infty \lt f \lt \infty\;\;\;\;\;\textrm{[Hz]}\;\;\;\;\,\Leftrightarrow\;\;\;\;-1/2 \lt \bar{f} \lt 1/2\;\;\;\;\textrm{[cycles/sample]}$$
# <p>$$-\infty \lt \omega \lt \infty\;\;\;\textrm{[rad/s]}\;\;\Leftrightarrow\;\;\;\;-\pi \lt \Omega \lt \pi\;\;\;\;\;\;\textrm{[rad/sample]}\;\;\;\;\;\;$$
# ### <font color="green">EXAMPLE: Analysis of the **aliasing** effect</font>
# Sampling of two single-frequency (tone) continuous signals. Sampling frequency: $f_s=40$ Hz:
#
# <p>$x_1(t)=cos(2\pi.10t)\;\;\;f_1=10\;$Hz;$\;\;\;\rightarrow x_1[n]=cos\left(2\pi.\frac{10}{40}n\right)=cos\left(\frac{\pi}{2}n\right)$
#
# <p>$x_2(t)=cos(2\pi.50t)\;\;\;f_2=50\;$Hz;$\;\;\;\rightarrow x_2[n]=cos\left(2\pi.\frac{50}{40}n\right)=cos\left(\frac{5\pi}{2}n\right)=cos\left(2\pi n+\frac{\pi}{2}n\right)=cos\left(\frac{\pi}{2}n\right)$
#
# <p>$x_1[n]=x_2[n]\;\;\;\;\;$
#
# **ALIASING**: the effect that makes different signals become indistinguishable from one another when sampled inadequately.
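# The identity above can be verified numerically: sampling the 10 Hz tone and the 50 Hz tone at 40 Hz yields exactly the same sequence of samples:

```python
import numpy as np

fs = 40.0            # sampling frequency (Hz)
n = np.arange(16)    # sample index
x1 = np.cos(2 * np.pi * 10 / fs * n)   # 10 Hz tone, sampled
x2 = np.cos(2 * np.pi * 50 / fs * n)   # 50 Hz tone, sampled
same = np.allclose(x1, x2)             # 50 Hz aliases onto 10 Hz
```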
# ## 3. Basic Signal<a name="sinal_basico"></a>
# Check whether the following script works as expected, i.e., whether a musical A tone (440 Hz) is played for 2 seconds.
# %pylab inline
from numpy import arange, cos, pi, int8, fft
from pyaudio import PyAudio
from pylab import plot, show, figure
from scipy.io import loadmat
# +
def toca(tom,fs=8000):
    x = PyAudio()
    fluxo = x.open(format=x.get_format_from_width(1), channels=1, rate=fs, output=True)
    amostras = ((tom + 1.) * 127.5).astype(int8)  # amplitude(tom): -1 to +1; amplitude(amostras): 0 to 255
    fluxo.write(amostras.tobytes())

Fs = 8000                         # sampling frequency
Ts = 1./Fs                        # sampling interval
tfinal = 2                        # duration (s)
n = arange(0,tfinal/Ts)           # time index
ftom1 = 440                       # frequency of the central A ('A')
ftom2 = 990                       # frequency of the low E ('E')
tom1 = 0.6*cos(2*pi*ftom1*n*Ts)   # tone 1
tom2 = 0.4*cos(2*pi*ftom2*n*Ts)   # tone 2
tom = tom1 + tom2
toca(tom,Fs)                      # play the signal (at 8000 samples/s)
# Source: github.com/lneuhaus/pysine/blob/master/pysine/pysine.py
# -
tom.size
# In the script above, we configure the system parameters, generate samples, and play a tone (sinusoidal signal) through the loudspeaker. By default the sound is played at 8000 samples/s. <p>Try different amplitudes for the cosine. Then pick an amplitude that gives a comfortable volume, since you will listen to this signal many times throughout this study. <p>Plot the magnitude of the frequency spectrum of the generated tone:
# <span style="font-family:Courier New; font-size:1.3em;">plot(abs(fft.fft(tom)))</span>
# What is represented on the horizontal axis? What is its unit of measurement? What are the abscissas with **spikes**?
# Plot the magnitude spectrum again using Hz as the unit on the horizontal axis.
N = tom.size
f = arange(0,Fs,float(Fs)/N)       # x-axis: frequency (Hz)
plot(f,abs(fft.fft(tom))/N); xlabel('$f$ (Hz)'); grid('on')
f = arange(-Fs/2,Fs/2,float(Fs)/N) # x-axis: frequency (Hz), centered at 0
figure(figsize=(15,4))
plot(f,abs(fft.fftshift(fft.fft(tom)))/N);
xlabel('$f$ (Hz)'); grid('on')
# ## 4. Undersampling<a name="subamostragem"></a>
# The signal can be undersampled by keeping one sample and discarding the next...
tom2 = tom[::2]       # y[n] = x[2n]
toca(tom2,Fs)         # play the signal at the original rate
# For comparison, play this signal at half the original rate:
toca(tom2,int(Fs/2))  # play the signal at a reduced rate
# How does the signal <span style="font-family:Courier New; font-size:1em;">tom2</span> sound? How does its frequency compare with that of the first signal? What do you notice when listening to both tones? Plot the spectrum of <span style="font-family:Courier New; font-size:1em;">tom2</span> as was done for <span style="font-family:Courier New; font-size:1em;">tom</span>. Explain the abscissas with peaks.
# ## 5. Other Decimations<a name="outras_decimacoes"></a>
# Let us try other decimation factors, listening to the decimated signals and plotting their spectra. <p>In particular, subsample the signal by 3, 5, 8, 9, 10, 15. What happens for decimation factors of 9 and above? Why?
# plots will be embedded in the notebook
fatores = [3,7,9,10]
for fator in fatores:
    print('Decimating by',fator,'...')
    input('Press [Enter] to start\n')  # wait for the [Enter] key
    tomdec = tom[::fator]              # decimated tone
    N = len(tomdec)
    f = arange(0,Fs,float(Fs)/N)       # x-axis: frequency (Hz)
    plot(f,abs(fft.fft(tomdec))/N); xlabel('$f$ (Hz)'); grid('on')
    show()                             # magnitude spectrum of the decimated tone
    toca(tomdec,Fs)                    # play the decimated tone through the loudspeaker
# ### <font color="red">Exercise</font>
# Describe exactly which frequency is produced by each decimation factor. Describe what is happening when the signal starts to decrease in frequency. What is the name given to this phenomenon, considering the spectral behavior? Do the spectral lines move as expected?
# <font color="blue"><b>Solution</b></font> (double-click this cell to type your answer):
#
#
#
# ### Changing the playback rate
# Now let us change the playback rate (frequency).
toca(tom,int(Fs/1.9))
# The previous command plays the signal at roughly 4000 samples/s (instead of the default 8000 samples/s). What does the reproduced sound resemble? Why? <p>Try playing the tone at rates such as: Fs, 1.1\*Fs, 0.9\*Fs, 2\*Fs, Fs/2, Fs/3, Fs/4.
# (write a small script to make this task easier.) <p>Describe how the sound changes with these sampling rates, and why.
# ## 6. Chirp Signal<a name="sinal_gorjeio"></a>
# Now let us use a birdsong-like (chirp) signal, whose instantaneous frequency changes over time.
# We want a signal whose frequency changes with time: at the initial time $t = 0$ the frequency should be $f_1$ Hz and at the final time $t = t_f$ it should be $f_2$ Hz, varying linearly as a function of time. Such a signal is called a '*linear chirp*'. <p>To set the parameters of this signal, let us first look at the relationship between the frequency and the phase of a sinusoid. <p>Consider the signal $s(t)$:
# $$s(t)=cos(2\pi f_{0}t)$$
# The argument of the cosine function is always the phase (dimensionless). In this case, the argument is $\theta(t)=2\pi f_0t$. Note that the frequency of the signal can be computed as:
# $$\frac{1}{2\pi} \frac{d\theta(t)}{dt}=f_0$$
# In this case, the frequency is constant.
#
# More generally, we can have a phase function that does not vary linearly with time, which leads to a time-varying frequency. In general, for a phase function $\theta(t)$ we define the *instantaneous frequency* as:
# $$f(t)=\frac{1}{2\pi} \frac{d\theta(t)}{dt}\tag 1$$
# Now let us define the desired instantaneous frequency. Let $f(t)$ denote the frequency as a function of time. We want $f(0)=f_1$ and $f(t_f)=f_2$, varying linearly between these endpoints, $f(0)$ and $f(t_f)$. So we can write:
# $$f(t)=f_1+\frac{f_2-f_1}{t_f}t\;\;$$ or $$\;\;f(t)=f_1+m.t\;\;$$ where $m$ is the slope of the linear function $f(t)$: $$\;\;m=\frac{f_2-f_1}{t_f}$$
# Now let us use this in the context of the *instantaneous frequency* defined in equation (1):
# $$\frac{1}{2\pi} \frac{d\theta(t)}{dt}=f_1+m.t$$
# $$\frac{d\theta(t)}{dt}-2\pi f_1 - 2\pi m.t=0$$
# Integrating: $$\theta(t)=2\pi(f_1t+\frac{1}{2}m.t^2)\tag 2$$
# Therefore, equation (2) is the argument of the cosine function that generates the chirp signal. That is:
# $$s(t) = cos(\theta(t)) = cos\left[2\pi \left(f_1 + \frac{1}{2}m.t\right).t\right]$$
# Note that the quantity multiplying the time $t$ is $$f_1+\frac{m.t}{2}$$
# In the code below we call this the frequency, although it is not strictly the instantaneous frequency.
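# The derivation can be checked numerically: differentiating the phase of equation (2) should recover the linear frequency sweep, with the instantaneous frequency running from $f_1$ at $t=0$ to $f_2$ at $t=t_f$:

```python
import numpy as np

f1, f2, tf = 440.0, 1000.0, 4.0
m = (f2 - f1) / tf                              # slope of the linear sweep
t = np.arange(0.0, tf, 1e-4)
theta = 2 * np.pi * (f1 * t + 0.5 * m * t**2)   # phase from equation (2)
f_inst = np.gradient(theta, t) / (2 * np.pi)    # numerical instantaneous frequency
# endpoints of the sweep should match f1 and f2 (up to discretization error)
ends_ok = np.allclose([f_inst[0], f_inst[-1]], [f1, f2], rtol=1e-2)
```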
# +
Fs = 8000                  # sampling frequency
Ts = 1./Fs                 # sampling period
t0 = 0                     # initial time
tf = 4                     # final time
t = arange(t0,tf,Ts)       # time base
f1 = 440                   # initial chirp frequency
f2 = 1000                  # final chirp frequency
m = (f2-f1)/tf             # chirp slope (was tfinal, a different duration defined in an earlier cell)
fv = f1 + m*t/2            # time-varying frequency term (linear)
gorjeio = cos(2*pi*fv*t)   # chirp signal
# -
toca(gorjeio,Fs);
# Explain what is happening and why this works.
# <p>Now change the initial and final frequencies to $f_1$ = 2000 Hz and $f_2$ = 8000 Hz. Plot the frequency and play the signal as before. What is the perceived final frequency? Why does the frequency rise and then fall?
# Your code
f1 = 2000                  # initial chirp frequency
f2 = 8000                  # final chirp frequency
m = (f2-f1)/tf             # chirp slope (was tfinal; tf is the chirp duration)
fv = f1 + m*t/2            # time-varying frequency term (linear)
gorjeio = cos(2*pi*fv*t)
toca(gorjeio,Fs);
plot(abs(fft.fft(gorjeio))); show()
# ## 7. Aliasing in Music<a name="aliasing_na_musica"></a>
# Now let us try the aliasing effect on real music. There is a file on the system known as handel, containing an excerpt of the Hallelujah Chorus. You can load it (into the variable 'y') and play it.
handel = loadmat("audio/handel.mat")  # forward slash: portable and avoids backslash-escape issues
print( handel['y'])
aleluia = handel['y'].flatten()  # loadmat returns a 2-D column array; flatten it for playback
Fs = 8192
toca(aleluia,Fs)
# To get a sense of the effect aliasing can have, try the following commands:
toca(aleluia[::2], Fs)
toca(aleluia[::2], int(Fs/2));
toca(aleluia[::3], int(Fs/3));
toca(4*aleluia[::4], int(Fs/4));
toca(aleluia[::5], int(Fs/5));
# ### <font color="red">Exercise</font>
# Describe the effect these commands have on the playback of the music and why it occurs. (For example, explain why you get the "monkey chorus" in the first one.) Why are both the decimation (as in aleluia[::4]) and the change of sampling rate (as in Fs/4) needed to keep things consistent?
# <font color="blue"><b>Solution</b></font> (double-click this cell to type your answer):
#
#
#
# By **Prof. <NAME>**, Feb/19.
| 00-Amostragem_Quantizacao/Amostragem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Open-loop simulations:
# ## Situation without control
#
# In this notebook the open-loop simulations for *A Hierarchical Approach For Splitting Truck Platoons Near Network Discontinuities* are presented:
#
# - [Network topology](#network_topology)
# - [Symuvia connection](#symuvia_connection)
# - [Data examination](#data_examination)
# <a id='network_topology'></a>
#
# ## Network topology
#
# 
#
# Length of main road
#
# - Before merge *1000m*, merge zone *100m*, after merge *400m*
#
# Length of onramp road
#
# - Before merge *900m*, merge zone *100m*
# #### Parameters
# +
DT = 0.1 # Sample time
KC = 0.16 # CAV max density
KH = 0.0896 # HDV max density
VF = 25.0 # Speed free flow
W = 6.25 # Congestion speed
E = 25.0*0.3 # Speed drop for relaxation
GCAV = 1/(KC*W) # Time headway CAV
GHDV = 1/(KH*W) # Time headway HDV
SCAV = VF/(KC*W)+1/KC # Desired space headway CAV
SHDV = VF/(KH*W)+1/KH # Desired space headway HDV
dveh_twy = {'CAV': GCAV, 'HDV': GHDV}
dveh_dwy = {'CAV': 1/KC, 'HDV': 1/KH}
U_MAX = 1.5 # Max. Acceleration
U_MIN = -1.5 # Min. Acceleration
# -
# <a id='symuvia_connection'></a>
#
# ## Symuvia connection
#
# Libraries should be loaded via the `ctypes` module in Python:
#
#
# ### Connection with Symuvia
#
# In this case we connect to the simulator. First, define the `libSymuVia.dylib` file:
# +
import os
from ctypes import cdll, create_string_buffer, c_int, byref, c_bool
from sqlalchemy import create_engine, MetaData
from sqlalchemy import Table, Column, String, Integer, Float
from sqlalchemy import insert, delete, select, case, and_
from xmltodict import parse
from collections import OrderedDict, Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Bokeh
from bokeh.plotting import figure, show
from bokeh.sampledata.iris import flowers
from bokeh.io import output_notebook
from bokeh.palettes import Viridis, Spectral11
from bokeh.plotting import figure, show, output_file
from bokeh.models import Span
output_notebook()
# Plotly
import plotly as py
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
import matplotlib
from matplotlib import cm
import ipywidgets as widgets
from IPython.display import display
# -
# #### Load traffic library
dir_path = os.getcwd()
lib_path_name = ('..','Symuvia','Contents','Frameworks','libSymuVia.dylib')
full_name = os.path.join(dir_path,*lib_path_name)
symuvialib = cdll.LoadLibrary(full_name)
# #### Load Traffic network
file_path = ('..', 'Network', 'Merge_Demand_CAV.xml')
file_name = os.path.join(dir_path, *file_path)
m = symuvialib.SymLoadNetworkEx(file_name.encode('UTF8'))
# #### Define Output: Database
#
# All results are stored in `Output/SymOut.sqlite`. Table for storing results:
#
# 1. `traj` stores trajectories in open loop.
# +
engine_path = ('..','Output','SymOut.sqlite')
engine_name = os.path.join(os.path.sep,*engine_path)
engine_full_name = os.path.join(dir_path,*engine_path)
engine_call = 'sqlite://'+engine_name
engine = create_engine(engine_call)
metadata = MetaData()
try:
    ltbstr = 'Loaded table in: '
    connection = engine.connect()
    traj = Table('traj', metadata, autoload=True, autoload_with=engine)
    stmt = delete(traj)
    results = connection.execute(stmt)
except:
    ltbstr = 'Created table in: '  # table did not exist yet
    traj = Table('traj', metadata,
                 Column('ti', Float()),
                 Column('id', Integer()),
                 Column('type', String(3)),
                 Column('tron', String(10)),
                 Column('voie', Integer()),
                 Column('dst', Float()),
                 Column('abs', Float()),
                 Column('vit', Float()),
                 Column('ldr', Integer()),
                 Column('spc', Float()),
                 Column('vld', Float()))
    metadata.create_all(engine)
    connection = engine.connect()
finally:
    print(ltbstr, engine)
# -
# #### Symuvia parsers
#
# These functions extract particular information from `Symuvia`, or parse information from the simulator, for use within this study.
#
# 1. Pointers: variables used to request data at each time step of the simulation
#
# 2. Parsers: Data format converters
#
# 3. V2V information: Information required to deploy the control strategy
#
# Pointers
sRequest = create_string_buffer(100000)
bEnd = c_int()
bSecond = c_bool(True)
def typedict(veh_dict):
    """
    Converts a dictionary produced by xmltodict
    into numeric formats to be stored in a database
    """
    data = {'id': int(veh_dict['@id']),
            'type': veh_dict['@type'],
            'tron': veh_dict['@tron'],
            'voie': int(veh_dict['@voie']),
            'dst': float(veh_dict['@dst']),
            'abs': float(veh_dict['@abs']),
            'vit': float(veh_dict['@vit']),
            }
    return data
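# A quick usage sketch of `typedict`. The helper is repeated here so the snippet runs on its own; the record values are illustrative, shaped like what `xmltodict.parse` returns for a SymuVia vehicle element:

```python
def typedict(veh_dict):
    """Convert an xmltodict vehicle record into numeric fields (copy of the helper above)."""
    return {'id': int(veh_dict['@id']),
            'type': veh_dict['@type'],
            'tron': veh_dict['@tron'],
            'voie': int(veh_dict['@voie']),
            'dst': float(veh_dict['@dst']),
            'abs': float(veh_dict['@abs']),
            'vit': float(veh_dict['@vit'])}

# illustrative record: attribute values arrive as strings from the XML parser
raw = {'@id': '7', '@type': 'CAV', '@tron': 'In_Main', '@voie': '1',
       '@dst': '120.5', '@abs': '120.5', '@vit': '24.8'}
rec = typedict(raw)
```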
# #### V2V information
#
# Information regarding V2V communication is computed: in particular the connectivity, and the states derived from it (*spacing*, *leader speed*). In this case only a single leader is identified.
# +
# Identify Leader
def queueveh(dLeader, veh):
    """
    This function creates a queue of vehicles
    for a particular road segment
    """
    if veh['tron'] in dLeader.keys():
        if veh['id'] not in dLeader[veh['tron']]:
            dLeader[veh['tron']].append(veh['id'])
    else:
        dLeader[veh['tron']] = [veh['id']]
    return dLeader

def getlead(dLeader, veh):
    """
    This function identifies the leader of a specific
    vehicle i
    """
    idx = dLeader[veh['tron']].index(veh['id'])
    if idx != 0:
        return dLeader[veh['tron']][idx-1]
    else:
        return dLeader[veh['tron']][idx]
# -
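# The leader-identification logic can be checked with a few toy vehicle records. These are condensed copies of the two helpers above (same behavior), so the snippet runs on its own:

```python
def queueveh(dLeader, veh):
    # append vehicle ids per road segment, in arrival order (condensed copy)
    dLeader.setdefault(veh['tron'], [])
    if veh['id'] not in dLeader[veh['tron']]:
        dLeader[veh['tron']].append(veh['id'])
    return dLeader

def getlead(dLeader, veh):
    # a vehicle with no predecessor is its own leader (condensed copy)
    idx = dLeader[veh['tron']].index(veh['id'])
    return dLeader[veh['tron']][max(idx - 1, 0)]

queue = {}
for vid in (1, 2, 3):  # three vehicles enter segment 'In_Main' in order
    queue = queueveh(queue, {'id': vid, 'tron': 'In_Main'})
lead_of_3 = getlead(queue, {'id': 3, 'tron': 'In_Main'})  # predecessor of 3
lead_of_1 = getlead(queue, {'id': 1, 'tron': 'In_Main'})  # head of the queue
```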
# Note that, in order to finish writing the `XML` output file, the kernel of the current session must be shut down.
# +
# Spacing
def getspace(lTrajVeh):
"""
This function obtains spacing between two vehicles
"""
# Equilibrium
det_eq_s = lambda x: SCAV if x['type']=='CAV' else SHDV
try:
# Case single vehicle
if lTrajVeh['id'] == lTrajVeh['ldr']:
return [{'spc':0.0+det_eq_s(lTrajVeh)}]
else:
# Last vehicle
# Leader out of Network @ ti
return [{'spc':None}]
except (TypeError, IndexError):
# Multiple veh @ ti
space = []
for veh in lTrajVeh:
if veh['id'] == veh['ldr']:
space.append(0.0+det_eq_s(veh))
else:
veh_pos = veh['abs']
ldr_id = veh['ldr']
ldr_pos = [ldr['abs'] for ldr in lTrajVeh if ldr['id']==ldr_id]
if ldr_pos:
space.append(ldr_pos[0]-veh_pos)
else:
# Leader out of Network @ ti
space.append(0.0)
space_dct = [{'spc': val} for val in space]
return space_dct
# Leader speed
def getleaderspeed(lTrajVeh):
"""
This function obtains speed from the leader.
"""
try:
# Case single vehicle
if lTrajVeh['id'] == lTrajVeh['ldr']:
return [{'vld': lTrajVeh['vit']}]
else:
# Leader out of Network @ ti
return [{'vld':None}]
except (TypeError, IndexError):
# Multiple veh @ ti
speedldr = []
for veh in lTrajVeh:
if veh['id'] == veh['ldr']:
speedldr.append(veh['vit'])
else:
ldr_id = veh['ldr']
ldr_vit = [ldr['vit'] for ldr in lTrajVeh if ldr['id']==ldr_id]
if ldr_vit:
speedldr.append(ldr_vit[0])
else:
speedldr.append(veh['vit'])
speedldr_dct = [{'vld': val} for val in speedldr]
return speedldr_dct
def updatelist(lTrajVeh,lDict):
"""
    Given a list of dictionaries as input,
    the function updates each entry with the corresponding dict in lDict
"""
try:
lTrajVeh.update(lDict[0])
except AttributeError:
for d,s in zip(lTrajVeh,lDict):
d.update(s)
return lTrajVeh
# -
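# A hedged, standalone sketch of `updatelist`: it zip-merges a per-vehicle attribute list (e.g. the `'spc'` dicts produced by `getspace`) into the trajectory records. The function is repeated here, with toy records, only so the snippet runs on its own.

```python
# updatelist repeated so the sketch is self-contained.
def updatelist(lTrajVeh, lDict):
    try:
        lTrajVeh.update(lDict[0])          # single-vehicle case: plain dict
    except AttributeError:
        for d, s in zip(lTrajVeh, lDict):  # multi-vehicle case: list of dicts
            d.update(s)
    return lTrajVeh

trajs = [{'id': 1, 'abs': 120.0}, {'id': 2, 'abs': 95.0}]
spacings = [{'spc': 25.0}, {'spc': None}]
merged = updatelist(trajs, spacings)
print(merged)
# [{'id': 1, 'abs': 120.0, 'spc': 25.0}, {'id': 2, 'abs': 95.0, 'spc': None}]
```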
# #### Launch simulation
max_time = 120
progressSim = widgets.FloatProgress(
value=5,
min=0,
max=max_time,
step=0.1,
description='Simulating:',
bar_style='info',
orientation='horizontal'
)
tiVal = widgets.BoundedFloatText(
value=7.5,
min=0,
max=max_time,
step=0.1,
description='Time step:',
disabled=False
)
# +
# %%time
N = 1200 # Simulation steps
# Start simulation from beginning
m = symuvialib.SymLoadNetworkEx(file_name.encode('UTF8'))
# Clean table
stmt = delete(traj)
results = connection.execute(stmt)
step = iter(range(N))
stmt = insert(traj)
t = []
display(progressSim)
display(tiVal)
bSuccess = 2
while bSuccess>0:
bSuccess = symuvialib.SymRunNextStepEx(sRequest, True, byref(bEnd))
try:
next(step)
dParsed = parse(sRequest.value.decode('UTF8'))
ti = dParsed['INST']['@val']
if dParsed['INST']['TRAJS'] is None:
#dummy = 1 # Guarantees correct export of XML
pass #print('')
#print('No vehicles in the network at time: {}'.format(ti))
else:
lVehOD = dParsed['INST']['TRAJS']['TRAJ']
lTrajVeh = []
try:
lTrajVeh = typedict(lVehOD)
lTrajVeh['ti'] = ti
dLeader = {lTrajVeh['tron']: [lTrajVeh['id']]}
lTrajVeh['ldr'] = getlead(dLeader, lTrajVeh)
except TypeError:
# Multiple veh @ ti
for i, veh in enumerate(lVehOD):
TrajVeh = typedict(veh)
TrajVeh['ti'] = ti
dLeader = queueveh(dLeader, TrajVeh)
TrajVeh['ldr'] = getlead(dLeader, TrajVeh)
lTrajVeh.append(TrajVeh)
lSpc = getspace(lTrajVeh)
lLdrV = getleaderspeed(lTrajVeh)
lTrajVeh = updatelist(lTrajVeh,lSpc)
lTrajVeh = updatelist(lTrajVeh,lLdrV)
results = connection.execute(stmt,lTrajVeh)
# print('{} vehicles in the network at time: {}'.format(results.rowcount, ti))
t.append(ti)
progressSim.value = ti
tiVal.value = ti
except StopIteration:
print('Stop by iteration')
        print('Last simulation step at time: {}'.format(ti))
bSuccess = 0
    except Exception:
        print(i)
        bSuccess = symuvialib.SymRunNextStepEx(sRequest, True, byref(bEnd))
        print('Return from Symuvia Empty: {}'.format(sRequest.value.decode('UTF8')))
        print('Last simulation step at time: {}'.format(ti))
        bSuccess = 0
# -
# <a id='data_examination'></a>
#
# ## Data examination
#
# This section reads the results from the database and plots the open-loop trajectories
# +
stmt = select([traj])
results = connection.execute(stmt).fetchall()
column_names = traj.columns.keys()
trajDf = pd.DataFrame(results, columns = column_names)
trajDf.head()
trajDf.info()
# -
vehicle_iden = trajDf['id'].unique().tolist()
vehicle_type = trajDf['type'].unique().tolist()
# #### Visualization Bokeh
#
# Non interactive visualization
# +
# Colormap
colormap = {'In_main': 'lightblue', 'In_onramp': 'crimson', 'Merge_zone': 'green', 'Out_main': 'gold'}
colors = [colormap[x] for x in trajDf.tron]
# Figure
p = figure(title = "Trajectories",
width=900,
height=900
)
p.xaxis.axis_label = 'Time [s]'
p.yaxis.axis_label = 'Position [m]'
# Horizontal line
hline = Span(location=0, dimension='width', line_color='darkslategrey', line_width=3)
# Data
p.circle(trajDf['ti'], trajDf['abs'], color = colors, size = 2)
p.renderers.extend([hline])
show(p)
# -
# #### Visualization Plotly
#
# Interactive visualization (Only notebook mode)
# +
layout = go.Layout(
title = 'Trajectories without Control',
yaxis = dict(
title = 'Position X [m]'
),
xaxis = dict(
title = 'Time [s]'
),
width = 900,
height = 900,
)
def trace_position_vehicle(traj_type, v_id, vtype):
"""
Plot trace single vehicle
"""
dashtrj = {'CAV': 'solid', 'HDV': 'dot'}
trace = go.Scatter(
x = traj_type['ti']-20,
y = traj_type['abs']-500,
mode = 'lines',
name = f'Vehicle {vtype} - {v_id}',
line = dict(
shape = 'spline',
width = 1,
dash = dashtrj[vtype]
)
)
return trace
def update_position_plot(vtype):
traj_type = trajDf[trajDf.type.isin(vtype)]
traj_id = traj_type.id.unique()
data = []
for v in traj_id:
traj_veh = traj_type[traj_type.id == v]
veh_type = traj_veh.type.unique()[0]
trace_i = trace_position_vehicle(traj_veh, v, veh_type)
data.append(trace_i)
fig = go.Figure(data = data, layout = layout)
iplot(fig)
veh_type_wgt = widgets.SelectMultiple(
options=vehicle_type,
value=vehicle_type,
rows=2,
description='Vehicle type',
disabled=False
)
widgets.interactive(update_position_plot, vtype=veh_type_wgt)
#update_position_plot(veh_type_wgt.value) #non-interactive
# -
trajDf.head()
trajDf['ctr']=None
trajDf.to_sql(name='closed', con = engine, if_exists='replace', index=False)
# +
layout = go.Layout(
title = 'Spacing without Control',
yaxis = dict(
        title = 'Spacing [m]'
),
xaxis = dict(
title = 'Time [s]'
),
width = 900,
height = 900,
)
def trace_space_vehicle(traj_type, v_id, vtype):
"""
Plot trace single vehicle
"""
trace = go.Scatter(
x = traj_type['ti'],
y = traj_type['spc'],
mode = 'lines',
name = f'Vehicle {vtype} - {v_id}',
line = dict(
shape = 'spline',
width = 1,
)
)
return trace
def update_space_plot(veh_id):
traj_type = trajDf[trajDf.id.isin(veh_id)]
traj_id = traj_type.id.unique()
data = []
for v in traj_id:
traj_veh = traj_type[traj_type.id == v]
veh_type = traj_veh.type.unique()[0]
trace_i = trace_space_vehicle(traj_veh, v, veh_type)
data.append(trace_i)
fig = go.Figure(data = data, layout = layout)
iplot(fig)
veh_id_wgt = widgets.SelectMultiple(
options=vehicle_iden,
value=vehicle_iden,
rows=12,
    description='Vehicle id',
disabled=False
)
widgets.interactive(update_space_plot, veh_id=veh_id_wgt)
#update_space_plot(veh_id_wgt.value)
| Notebooks/01-Open-loop.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import the required libraries
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import joblib
# %matplotlib inline
from sklearn.linear_model import LogisticRegression
# +
# Read the data and display
diabetesDF = pd.read_csv('diabetes.csv')
diabetesDF.head()
# -
diabetesDF.drop(['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin'], axis=1, inplace=True)
diabetesDF.head()
# +
# Total 768 patients record
# Using 650 data for training
# Using 100 data for testing
# Using 18 data for validation
dfTrain = diabetesDF[:650]
dfTest = diabetesDF[650:750]
dfCheck = diabetesDF[750:]
# -
# Separating label and features and converting to numpy array to feed into our model
trainLabel = np.asarray(dfTrain['Outcome'])
trainData = np.asarray(dfTrain.drop('Outcome', axis=1))
testLabel = np.asarray(dfTest['Outcome'])
testData = np.asarray(dfTest.drop('Outcome', axis=1))
# +
# Normalize the data
means = np.mean(trainData, axis=0)
stds = np.std(trainData, axis=0)
trainData = (trainData - means)/stds
testData = (testData - means)/stds
# -
# Logistic regression models the probability of the positive class as sigmoid(w0 + w1*x1 + w2*x2 + ... + wd*xd)
diabetesCheck = LogisticRegression()
diabetesCheck.fit(trainData,trainLabel)
accuracy = diabetesCheck.score(testData,testLabel)
print("accuracy = ",accuracy * 100,"%")
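# A hedged sanity check of the sigmoid comment above, on toy data rather than the diabetes set: for a fitted scikit-learn `LogisticRegression`, the class-1 probability from `predict_proba` equals `sigmoid(w0 + w·x)` built from `intercept_` and `coef_`.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, linearly-separable-ish data (illustrative only).
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

z = clf.intercept_ + X @ clf.coef_.ravel()   # w0 + w1*x1 + ... + wd*xd
manual = 1.0 / (1.0 + np.exp(-z))            # sigmoid
ok = np.allclose(manual, clf.predict_proba(X)[:, 1])
print(ok)  # True
```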
| Diabetes Dataset/Experiment with Features/086_BMI, DiabetesPedigreeFunction and Age.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %load_ext autoreload
# %autoreload 2
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import sys
sys.path.append("../src")
import data_preparation
# -
# read data
data = np.load("../Data/RayTracingData/Remcom_4x4_IR_100taps.npy")
real_mean = np.mean(np.real(data))
real_std = np.std(np.real(data))
# +
imag_mean = np.mean(np.imag(data))
imag_std = np.std(np.imag(data))
# +
#one sample example
# -
one_sample = data[0:2,:,25:75]
one_sample_real = (np.real(one_sample) - real_mean)/real_std
one_sample_imag = (np.imag(one_sample) - imag_mean)/imag_std
plt.figure(figsize=(16,16))
for i in range(1, 17):
plt.subplot(4,4,i)
plt.plot(one_sample_real[0, i-1, :], label='Real')
plt.plot(one_sample_imag[0, i-1, :], label='Imaginary')
plt.legend()
one_sample_full = np.concatenate([one_sample_real, one_sample_imag], axis=0)
one_sample_full.shape
# We observe that a lot of information is contained in the imaginary part of the impulse response. So, for the 16 antennas, we are going to have 32 'channels' for our dataset.
# We will therefore have training batches of shape [batch_size, 32, 100].
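# A hedged shape check of the stacking just described: splitting a complex `[batch, 16, 100]` impulse-response array into real and imaginary parts and concatenating along the antenna axis yields 32 real channels (the batch size of 4 here is illustrative).

```python
import numpy as np

# Complex impulse responses: 4 samples, 16 antennas, 100 taps.
batch = np.random.randn(4, 16, 100) + 1j * np.random.randn(4, 16, 100)
# Stack real and imaginary parts along the antenna/channel axis.
full = np.concatenate([np.real(batch), np.imag(batch)], axis=1)
print(full.shape)  # (4, 32, 100)
```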
# # Siamese Neural Network
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
# -
# ## Setting up the Custom Dataset
idces = np.random.randint(0, data.shape[0], int(0.1*data.shape[0]))
data_undersampled = data[idces]
data_undersampled.shape
# train test split
train, test = train_test_split(data_undersampled)
train_dataset = data_preparation.SiameseDataset(train)
scaler = train_dataset.scaler_real, train_dataset.scaler_imag
test_dataset = data_preparation.SiameseDataset(test, scaler)
scaler[1].transform(np.real(test)).std()
train_dataset[0][0].std()
plt.figure(figsize=(16,16))
for i in range(1, 33):
plt.subplot(4,8,i)
plt.plot(train_dataset[0][0][i-1, :], label='1_sample')
plt.plot(train_dataset[0][1][i-1, :], label='2_sample')
plt.legend()
train_dataset.nb_channels()
# +
class SimpleNN(nn.Module):
def __init__(self):
super(SimpleNN, self).__init__()
self.conv1 = nn.Conv1d(in_channels=train_dataset.nb_channels(),
out_channels=128,
kernel_size=16)
self.conv2 = nn.Conv1d(in_channels=128,
out_channels=64,
kernel_size=8)
self.conv3 = nn.Conv1d(in_channels=64,
out_channels=16,
kernel_size=4)
f = data_preparation.conv1d_output_size
self.features = f(f(f(train_dataset.nb_samples(),kernel_size=16),
kernel_size=8),
kernel_size=4)
self.lin1 = nn.Linear(in_features= 16 * self.features, out_features=128)
self.lin2 = nn.Linear(in_features=128, out_features=32)
self.lin3 = nn.Linear(in_features=32, out_features=8)
self.lin4 = nn.Linear(in_features=8, out_features=3)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = torch.flatten(x, 1)
x = F.relu(self.lin1(x))
x = F.relu(self.lin2(x))
x = F.relu(self.lin3(x))
out = self.lin4(x)
return out
# -
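# A hedged sketch of the output-length arithmetic behind `data_preparation.conv1d_output_size`, assumed here to be the standard `L_out = floor((L_in + 2*padding - kernel_size)/stride) + 1` with `padding=0`, `stride=1`, applied to the three conv kernels above for a 100-tap input.

```python
# Assumed formula for the helper used in SimpleNN (not the project's actual code).
def conv1d_output_size(l_in, kernel_size, stride=1, padding=0):
    return (l_in + 2 * padding - kernel_size) // stride + 1

f = conv1d_output_size
# 100 taps -> 85 after k=16 -> 78 after k=8 -> 75 after k=4
features = f(f(f(100, kernel_size=16), kernel_size=8), kernel_size=4)
print(features)  # 75
```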
model = SimpleNN()
y1 = model(train_dataset[0:10][0])
y2 = model(train_dataset[0:10][1])
def loss_function(x1, x2, y1, y2):
x_difference = torch.mean(torch.abs(x1 - x2), dim=[1,2])
y_difference = torch.mean(torch.abs(y1 - y2), dim=[1])
return torch.sum(torch.pow(x_difference - y_difference, 2))
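# A hedged toy check of `loss_function` (repeated here so the snippet runs on its own): it is a distance-preserving, channel-charting-style objective, so when the mean pairwise distance between the embeddings matches that between the inputs, the loss is zero.

```python
import torch

def loss_function(x1, x2, y1, y2):
    x_difference = torch.mean(torch.abs(x1 - x2), dim=[1, 2])
    y_difference = torch.mean(torch.abs(y1 - y2), dim=[1])
    return torch.sum(torch.pow(x_difference - y_difference, 2))

x1 = torch.zeros(2, 32, 100)
x2 = torch.ones(2, 32, 100)  # mean |x1 - x2| per sample = 1.0
y1 = torch.zeros(2, 3)
y2 = torch.ones(2, 3)        # mean |y1 - y2| per sample = 1.0
val = float(loss_function(x1, x2, y1, y2))
print(val)  # 0.0
```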
x1, x2 = train_dataset[0:10][0], train_dataset[0:10][1]
y1, y2 = model(x1), model(x2)
# ## Training
a = len(test_dataset)/len(train_dataset)
batch_size = 64
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
criterion = loss_function
optimizer = optim.Adam(model.parameters())
loss_history = []
for e in range(20):
# train
    model.train()  # switch back to training mode after the previous epoch's model.eval()
    loss = 0
for x1, x2 in train_loader:
optimizer.zero_grad()
y1, y2 = model(x1), model(x2)
batch_loss = criterion(x1, x2, y1 ,y2)
batch_loss.backward()
optimizer.step()
loss+=batch_loss
#validation
model.eval()
val_loss = 0
for x1, x2 in test_loader:
y1, y2 = model(x1), model(x2)
val_loss += criterion(x1, x2, y1 ,y2)
print(f"Epoch {e+1}, Training Loss: {a*loss}, Validation Loss: {val_loss}")
| notebooks/wandb/run-20200211_120113-ok3124qr/code/notebooks/SiameseNetChannelCharting-v1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
myList = [1, 2, 3]
myList
myList = ['String', 100, 13.2]
myList
len(myList)
myList = ['one', 'two', 'three']
myList[0]
myList[1:]
myList
anotherList = ['four', 'five']
mylist + anotherList
myList + anotherList
newList = myList + anotherList
newList
clear()
newList
newList[0] = 'ONE ALL CAPS'
newList
newList.append('six')
newList
newList.append('Seven')
newList
clear()
newList.pop
newList.pop()
newList
clear
poppedItem = newList.pop();
poppedItem
newList
newList.pop(0)
newList
clear()
newList = ['a', 'e', 'x', 'b', 'c']
newList
newList.sort()
newList
mySortedList = newList.sort()
mySortedList
type(mySortedList)
clear
None
type(None)
newList.Sort()
mySortedList = newList
newList.sort()
mySortedList = newList
clear
mySortedList
type(mySortedList)
clear
numList = [4, 2, 7, 9, 1]
numList.sort()
numList
clear()
numList.reverse()
numList
| Jose Portilla/Python Bootcamp Go from Zero to Hero in Python 3/Python Object and Data Structure Basic/Lists.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from Gridworld import Gridworld
from matplotlib import pyplot as plt
# #### Check that the image returned is the one rendered (must check visually)
rows, cols = 5, 5
gridworld = Gridworld(rows, cols)
img = gridworld.reset()
gridworld.render()
plt.imshow(img)
# #### Check moving around, always attempting beyond the edge. (Must check visually)
Action = Gridworld.Action
for i in range(rows):
img, reward, done, info = gridworld.step(Action.UP)
gridworld.render()
print(reward)
for i in range(cols):
img, reward, done, info = gridworld.step(Action.LEFT)
gridworld.render()
print(reward)
for i in range(rows):
img, reward, done, info = gridworld.step(Action.DOWN)
gridworld.render()
print(reward)
for i in range(cols):
img, reward, done, info = gridworld.step(Action.RIGHT)
gridworld.render()
print(reward)
| Gridworld/.ipynb_checkpoints/Gridworld_test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="l1-l47GAlH92" outputId="5ae37abd-2824-4626-cec5-de685a0e7710"
# ! wget 'https://storage.googleapis.com/kaggle-competitions-data/kaggle-v2/13836/1718836/bundle/archive.zip?GoogleAccessId=<EMAIL>&Expires=1612442059&Signature=geLsIRLO%2FLECw2TcQ4%2Bgem%2B2LWmE38iS7Qi4EWEVG0JzeTatHM%2FzgdEFdWG2ZsU%2BQusM51Qf2sZRuGiFXwAsWOUYrE%2BrozAN%2F%2BC7GZEqYqySLsNfgrU%2FjVxNoVDUVEXHD9UDJaZwAbxkqJnHWNvL%2BP3jfZ7aekrFUicCYObkbL4WozGCyvdr7BXXur0mCLJh1bqVluP9oG%2F8m%2BiGMrdW0sn5kLh%2BvQy8b3yt8gJXVzpVsO7%2BTI65uIpdqm8MgT%2BCeZjnpcGPamGRJlUEzPtK%2FFtp7zv56jO7WdxeyesPXfYf3eWcHY0rgvQXwWzzjq%2Bq2jQSe2BxYmeFNpRG5PoppQ%3D%3D&response-content-disposition=attachment%3B+filename%3Dcassava-leaf-disease-classification.zip'
# + id="FusNtpRKloWv"
# !unzip '/content/archive.zip?GoogleAccessId=<EMAIL>&Expires=1612442059&Signature=geLsIRLO%2FLECw2TcQ4+gem+2LWmE38iS7Qi4EWEVG0JzeTatHM%2FzgdEFdWG2ZsU+QusM51Qf2sZRuGiFXwAsWOUYrE+rozAN%2F+C7GZEqYqySLsNfgrU%2FjVxNoVDUV' -d '/content/dataset'
# + id="qeg4xzpymXv7"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
import cv2
import tensorflow as tf
from tensorflow.keras import applications
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization, GlobalAveragePooling2D
# + id="3C4xwkLlnDc4"
data_path = "/content/dataset/"
train_csv_data_path = data_path+"train.csv"
label_json_data_path = data_path+"label_num_to_disease_map.json"
images_dir_data_path = data_path+"train_images"
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="rKEJKEKHng9R" outputId="f806a84b-cab0-4373-e6bc-4fe2e6efb672"
train_csv_data_path
# + id="K8vNx66Bnik0"
train_csv = pd.read_csv(train_csv_data_path)
train_csv['label'] = train_csv['label'].astype('string')
label_class = pd.read_json(label_json_data_path, orient='index')
label_class = label_class.values.flatten().tolist()
# + colab={"base_uri": "https://localhost:8080/"} id="aF_dZNGVnkcP" outputId="f0cf7f7f-7eaa-4e75-d3b4-cc1990c203f8"
print("Label names :")
for i, label in enumerate(label_class):
print(f" {i}. {label}")
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="_WO3XcqRnl_X" outputId="278e345c-ebaf-415a-de4a-ee4958e0668a"
train_csv.head()
# + id="hIHGE8ggnn5c"
# Data agumentation and pre-processing using Keras
train_gen = ImageDataGenerator(
rotation_range=360,
width_shift_range=0.1,
height_shift_range=0.1,
brightness_range=[0.1,0.9],
shear_range=25,
zoom_range=0.3,
channel_shift_range=0.1,
horizontal_flip=True,
vertical_flip=True,
rescale=1/255,
validation_split=0.15
)
valid_gen = ImageDataGenerator(rescale=1/255,
validation_split = 0.15
)
# + id="axBZJtMxno8f"
BATCH_SIZE = 18
IMG_SIZE = 224
# + colab={"base_uri": "https://localhost:8080/"} id="ZoNT_IHenqSa" outputId="7e86bf4d-ac3e-4f5b-c545-5bc387754912"
train_generator = train_gen.flow_from_dataframe(
dataframe=train_csv,
directory = images_dir_data_path,
x_col = "image_id",
y_col = "label",
target_size = (IMG_SIZE, IMG_SIZE),
class_mode = "categorical",
batch_size = BATCH_SIZE,
shuffle = True,
subset = "training",
)
valid_generator = valid_gen.flow_from_dataframe(
dataframe=train_csv,
directory = images_dir_data_path,
x_col = "image_id",
y_col = "label",
target_size = (IMG_SIZE, IMG_SIZE),
class_mode = "categorical",
batch_size = BATCH_SIZE,
shuffle = False,
subset = "validation"
)
# + id="ycJPxxnjnrst"
batch = next(train_generator)
images = batch[0]
labels = batch[1]
# + colab={"base_uri": "https://localhost:8080/", "height": 565} id="xzVrlynSntJv" outputId="be1718bd-8e88-4193-f770-cb09cc0d5a59"
plt.figure(figsize=(12,9))
for i, (img, label) in enumerate(zip(images, labels)):
plt.subplot(2,3, i%6 +1)
plt.axis('off')
plt.imshow(img)
plt.title(label_class[np.argmax(label)])
if i==15:
break
# + [markdown] id="ejsoviEHnvkB"
# ## Building the model
# + colab={"base_uri": "https://localhost:8080/"} id="fvkCk7jynuM7" outputId="dede66db-afc2-4971-9891-686ca7446298"
# Loading the ResNet152 architecture with imagenet weights as base
base = tf.keras.applications.ResNet152(include_top=False, weights='imagenet',input_shape=[IMG_SIZE,IMG_SIZE,3])
# + colab={"base_uri": "https://localhost:8080/"} id="A_VpkMTdnxGm" outputId="49d05d7f-336e-4050-e22d-f669db2af8fc"
base.summary()
# + id="pK_0Szq4nygi"
model = tf.keras.Sequential()
model.add(base)
model.add(BatchNormalization(axis=-1))
model.add(GlobalAveragePooling2D())
model.add(Dense(5, activation='softmax'))
# + id="yKqh5teqnzoz"
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(), optimizer=tf.keras.optimizers.Adamax(learning_rate=0.01), metrics=['acc'])
# + colab={"base_uri": "https://localhost:8080/"} id="n02Zk2q2n12O" outputId="1b62bc21-ba7a-4bf2-8aed-0178051bdd3d"
model.summary()
# + [markdown] id="nBjloa6jn4hi"
# ## Loading the trained model
# + colab={"base_uri": "https://localhost:8080/"} id="2TFysq-1n2_H" outputId="73ea4456-52fc-4f0a-e20c-9cf2a6ecc61b"
history = model.fit(
train_generator,
steps_per_epoch=BATCH_SIZE,
epochs=20,
validation_data=valid_generator,
batch_size=BATCH_SIZE
)
# + id="lmP1iWu_n6pk"
model.save('ResNet152.h5')
# + colab={"base_uri": "https://localhost:8080/"} id="uDVkfzG-n8MB" outputId="6ebc6b1d-d9a8-4002-d667-7b7bcdf20106"
# Loading the ResNet101 architecture with imagenet weights as base
base = tf.keras.applications.ResNet101(include_top=False, weights='imagenet',input_shape=[IMG_SIZE,IMG_SIZE,3])
model = tf.keras.Sequential()
model.add(base)
model.add(BatchNormalization(axis=-1))
model.add(GlobalAveragePooling2D())
model.add(Dense(5, activation='softmax'))
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(), optimizer=tf.keras.optimizers.Adamax(learning_rate=0.01), metrics=['acc'])
history = model.fit(
train_generator,
steps_per_epoch=BATCH_SIZE,
epochs=20,
validation_data=valid_generator,
batch_size=BATCH_SIZE
)
model.save('ResNet101.h5')
# + colab={"base_uri": "https://localhost:8080/"} id="5EDK78Odn9UZ" outputId="ed6bb08d-8b33-44b4-ad22-da5a570d0797"
# Loading the ResNet50 architecture with imagenet weights as base
base = tf.keras.applications.ResNet50(include_top=False, weights='imagenet',input_shape=[IMG_SIZE,IMG_SIZE,3])
model = tf.keras.Sequential()
model.add(base)
model.add(BatchNormalization(axis=-1))
model.add(GlobalAveragePooling2D())
model.add(Dense(5, activation='softmax'))
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(), optimizer=tf.keras.optimizers.Adamax(learning_rate=0.01), metrics=['acc'])
history = model.fit(
train_generator,
steps_per_epoch=BATCH_SIZE,
epochs=20,
validation_data=valid_generator,
batch_size=BATCH_SIZE
)
model.save('ResNet50.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="a93xnpkcn_qY" outputId="581da967-f48f-4d9d-b38a-5e0456b0f6b6"
test_img_path = data_path+"test_images/2216849948.jpg"
img = cv2.imread(test_img_path)
resized_img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)).reshape(-1, IMG_SIZE, IMG_SIZE, 3)/255
plt.figure(figsize=(8,4))
plt.title("TEST IMAGE")
plt.imshow(resized_img[0])
# + id="tMUlS-1XoAjz"
preds = []
ss = pd.read_csv(data_path+'sample_submission.csv')
for image in ss.image_id:
img = tf.keras.preprocessing.image.load_img(data_path+'test_images/' + image)
img = tf.keras.preprocessing.image.img_to_array(img)
img = tf.keras.preprocessing.image.smart_resize(img, (IMG_SIZE, IMG_SIZE))
img = tf.reshape(img, (-1, IMG_SIZE, IMG_SIZE, 3))
prediction = model.predict(img/255)
preds.append(np.argmax(prediction))
my_submission = pd.DataFrame({'image_id': ss.image_id, 'label': preds})
my_submission.to_csv('submission.csv', index=False)
# + colab={"base_uri": "https://localhost:8080/"} id="ZU7ZWARzoB6a" outputId="65f991f8-0df6-4065-c8c8-bedc618de0f4"
# Submission file ouput
print("Submission File: \n---------------\n")
print(my_submission.head()) # Predicted Output
# + id="yvZvKnHioD70"
| Notebooks/Casava Plant Disease Prediction/Cassava_Plant_Disease.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # STATE Data Transformation
#
# Task: use Pandas to transform csv files into DataFrames that match desired tables for database schema
#
# Tables:
#
# - STATE (done)
# - STATE_DATES (done)
# - STATE_CONTIGUITY (done)
# - STATE_RESOURCE (done)
import pandas as pd
import numpy as np
# !ls SourceData/CorrelatesOfWar/
# ## Create 'STATE' table
#
# Task: transform states2016.csv into a table with attributes:
#
# - StateID
# - StateAbbr
# - StateName
#
# in which each StateID occurs once (as it is the Primary Key)
dfStates = pd.read_csv('SourceData/CorrelatesOfWar/states2016.csv')
dfStates
dfStates.drop(columns=['styear', 'stmonth', 'stday', 'endyear', 'endmonth', 'endday', 'version'], inplace=True)
dfStates.drop_duplicates(inplace=True)
dfStates.rename(columns={"stateabb": "StateAbbr", "ccode":"StateID", "statenme":"StateName"}, inplace=True)
dfStates = dfStates[['StateID', 'StateAbbr', 'StateName']]
dfStates
badstatenames = dfStates['StateName'].str.contains('\&', regex=True)
sum(badstatenames)
dfStates['StateName'] = dfStates['StateName'].str.replace('\&', 'and')
dfStates
dfStates.to_csv('FinalData/state.csv', encoding='utf-8', index=False)
StateNameMaxLength = int(dfStates['StateName'].str.encode(encoding='utf-8').str.len().max())
print(StateNameMaxLength)
StateAbbrLength = int(dfStates['StateAbbr'].str.encode(encoding='utf-8').str.len().max())
print(StateAbbrLength)
# ## Create 'STATE_DATES' table
#
# Task: transform states2016.csv into a table with attributes:
#
# - StateID
# - StartDate
# - EndDate
# - StartYear
# - StartMonth
# - StartDay
# - EndYear
# - EndMonth
# - EndDay
#
# in which each combination of StateID and StartDate occurs only once.
#
# Note: StartDate and EndDate must be in the format 'YYYY-MM-DD'
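# A hedged one-row sketch of the date construction used below: year/month/day columns are assembled with `pd.to_datetime` and then formatted as `'YYYY-MM-DD'` strings (the 1816 values are illustrative).

```python
import pandas as pd

df = pd.DataFrame({'StartYear': [1816], 'StartMonth': [1], 'StartDay': [1]})
# Assemble a datetime column from the component columns.
df['StartDate'] = pd.to_datetime(
    dict(year=df.StartYear, month=df.StartMonth, day=df.StartDay))
# Format as the 'YYYY-MM-DD' strings the schema expects.
df['StartDate'] = df['StartDate'].dt.strftime('%Y-%m-%d')
print(df.loc[0, 'StartDate'])  # 1816-01-01
```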
dfStateDates = pd.read_csv('SourceData/CorrelatesOfWar/states2016.csv')
dfStateDates
dfStateDates.drop(columns=['stateabb', 'statenme', 'version'], inplace=True)
dfStateDates.rename(columns={"ccode":"StateID", "styear": "StartYear", "stmonth":"StartMonth", "stday":"StartDay", "endyear": "EndYear", "endmonth":"EndMonth", "endday":"EndDay"}, inplace=True)
dfStateDates['StartDate'] = pd.to_datetime(dict(year=dfStateDates.StartYear, month=dfStateDates.StartMonth, day=dfStateDates.StartDay))
dfStateDates['EndDate'] = pd.to_datetime(dict(year=dfStateDates.EndYear, month=dfStateDates.EndMonth, day=dfStateDates.EndDay))
dfStateDates = dfStateDates[['StateID', 'StartDate', 'EndDate', 'StartYear', 'StartMonth', 'StartDay', 'EndYear', 'EndMonth', 'EndDay']]
dfStateDates
# +
dfStateDates['StartDate'] = dfStateDates['StartDate'].apply(lambda x: x.strftime('%Y-%m-%d'))
dfStateDates['EndDate'] = dfStateDates['EndDate'].apply(lambda x: x.strftime('%Y-%m-%d'))
presentdate = dfStateDates.loc[0,'EndDate']
dfStateDates['EndDate'] = dfStateDates['EndDate'].replace(presentdate, '')
dfStateDates
# -
dfStateDates.dtypes
dfStateDates.to_csv('FinalData/state_dates.csv', encoding='utf-8', index=False)
# ## Create 'STATE_CONTIGUITY' table
#
# Task: transform contdir.csv into a table with attributes:
#
# - StateA
# - StateB
# - StartDate
# - EndDate
# - StartYear
# - StartMonth
# - EndYear
# - EndMonth
# - Type
# - Notes
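# A hedged one-row sketch of the begin/end parsing used below: `contdir` encodes dates as `YYYYMM` integers, so year and month are recovered by slicing the string form (the sample values are illustrative).

```python
import pandas as pd

df = pd.DataFrame({'begin': [181601], 'end': [201607]})
# YYYYMM integer -> '1816' / '01' by string slicing.
df['StartYear'] = df['begin'].astype(str).str[0:4]
df['StartMonth'] = df['begin'].astype(str).str[4:6]
print(df.loc[0, 'StartYear'], df.loc[0, 'StartMonth'])  # 1816 01
```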
dfStateCont = pd.read_csv('SourceData/CorrelatesOfWar/contdir.csv')
dfStateCont
dfStateCont.drop(columns=['dyad', 'statelab', 'statehab', 'version'], inplace=True)
dfStateCont.rename(columns={"statelno":"StateA", "statehno": "StateB", "conttype":"Type", "notes":"Notes"}, inplace=True)
dfStateCont['StartYear'] = dfStateCont['begin'].astype(str).str[0:4]
dfStateCont['StartMonth'] = dfStateCont['begin'].astype(str).str[4:6]
dfStateCont['EndYear'] = dfStateCont['end'].astype(str).str[0:4]
dfStateCont['EndMonth'] = dfStateCont['end'].astype(str).str[4:6]
dfStateCont.drop(columns=['begin', 'end'], inplace=True)
dfStateCont['StartDate'] = pd.to_datetime(dict(year=dfStateCont.StartYear, month=dfStateCont.StartMonth, day='01'))
dfStateCont['EndDate'] = pd.to_datetime(dict(year=dfStateCont.EndYear, month=dfStateCont.EndMonth, day='01'))
dfStateCont = dfStateCont[['StateA', 'StateB', 'StartDate', 'EndDate', 'StartYear', 'StartMonth', 'EndYear', 'EndMonth', 'Type', 'Notes']]
dfStateCont
NoteMaxLength = int(dfStateCont['Notes'].str.encode(encoding='utf-8').str.len().max())
print(NoteMaxLength)
dfStateCont.Type.unique()
dfStateCont['StartDate'] = dfStateCont['StartDate'].apply(lambda x: x.strftime('%Y-%m-%d'))
dfStateCont['EndDate'] = dfStateCont['EndDate'].apply(lambda x: x.strftime('%Y-%m-%d'))
dfStateCont.dtypes
badnotes = dfStateCont['Notes'].str.contains('\&', regex=True)
sum(badnotes)
dfStateCont['Notes'] = dfStateCont['Notes'].str.replace('\&', 'and')
dfStateCont
dfStateCont.to_csv('FinalData/state_contiguity.csv', encoding='utf-8', index=False)
# ## Create 'STATE_RESOURCE' table
#
# Task: transform NMC_5_0-wsupplementary.csv into a table with attributes:
#
# - StateID
# - Year
# - ResourceID
# - Amount
# - Source
# - Note
# - QualityCode
# - AnomalyCode
dfResources = pd.read_csv('SourceData/CorrelatesOfWar/NMC_5_0-wsupplementary.csv')
dfResources.columns
# .copy() avoids SettingWithCopyWarning when the ResourceID columns are assigned below
dfResMilex = dfResources[['ccode', 'year', 'milex', 'milexsource', 'milexnote']].copy()
dfResMilPer = dfResources[['ccode', 'year', 'milper', 'milpersource', 'milpernote']].copy()
dfResIrst = dfResources[['ccode', 'year', 'irst', 'irstsource', 'irstnote', 'irstqualitycode', 'irstanomalycode']].copy()
dfResPec = dfResources[['ccode', 'year', 'pec', 'pecsource', 'pecnote', 'pecqualitycode', 'pecanomalycode']].copy()
dfResTpop = dfResources[['ccode', 'year', 'tpop', 'tpopsource', 'tpopnote', 'tpopqualitycode', 'tpopanomalycode']].copy()
dfResUpop = dfResources[['ccode', 'year', 'upop', 'upopsource', 'upopnote', 'upopqualitycode', 'upopanomalycode']].copy()
dfResUpgrowth = dfResources[['ccode', 'year', 'upopgrowth', 'upopgrowthsource']].copy()
dfResMilex['ResourceID'] = 'milex'
dfResMilPer['ResourceID'] = 'milper'
dfResIrst['ResourceID'] = 'irst'
dfResPec['ResourceID'] = 'pec'
dfResTpop['ResourceID'] = 'tpop'
dfResUpop['ResourceID'] = 'upop'
dfResUpgrowth['ResourceID'] = 'upopgrowth'
dfResMilex.rename(columns={'ccode':'StateID', 'year':'Year', 'milex':'Amount', 'milexsource':'Source', 'milexnote':'Note'}, inplace=True)
dfResMilex['QualityCode'] = ''
dfResMilex['AnomalyCode'] = ''
dfResMilex = dfResMilex[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResMilex
dfResMilPer.rename(columns={'ccode':'StateID', 'year':'Year', 'milper':'Amount', 'milpersource':'Source', 'milpernote':'Note'}, inplace=True)
dfResMilPer['QualityCode'] = ''
dfResMilPer['AnomalyCode'] = ''
dfResMilPer = dfResMilPer[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResMilPer
dfResUpgrowth.rename(columns={'ccode':'StateID', 'year':'Year', 'upopgrowth':'Amount', 'upopgrowthsource':'Source'}, inplace=True)
dfResUpgrowth['QualityCode'] = ''
dfResUpgrowth['AnomalyCode'] = ''
dfResUpgrowth['Note'] = ''
dfResUpgrowth = dfResUpgrowth[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResMilPer
dfResIrst.rename(columns={'ccode':'StateID', 'year':'Year', 'irst':'Amount', 'irstsource':'Source', 'irstnote':'Note', 'irstqualitycode':'QualityCode', 'irstanomalycode':'AnomalyCode'}, inplace=True)
dfResIrst = dfResIrst[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResIrst
dfResPec.rename(columns={'ccode':'StateID', 'year':'Year', 'pec':'Amount', 'pecsource':'Source', 'pecnote':'Note', 'pecqualitycode':'QualityCode', 'pecanomalycode':'AnomalyCode'}, inplace=True)
dfResPec = dfResPec[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResPec
dfResTpop.rename(columns={'ccode':'StateID', 'year':'Year', 'tpop':'Amount', 'tpopsource':'Source', 'tpopnote':'Note', 'tpopqualitycode':'QualityCode', 'tpopanomalycode':'AnomalyCode'}, inplace=True)
dfResTpop = dfResTpop[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResTpop
dfResUpop.rename(columns={'ccode':'StateID', 'year':'Year', 'upop':'Amount', 'upopsource':'Source', 'upopnote':'Note', 'upopqualitycode':'QualityCode', 'upopanomalycode':'AnomalyCode'}, inplace=True)
dfResUpop = dfResUpop[['StateID', 'Year', 'ResourceID', 'Amount', 'Source', 'Note', 'QualityCode', 'AnomalyCode']]
dfResUpop
resources = [dfResMilex, dfResMilPer, dfResIrst, dfResPec, dfResTpop, dfResUpop, dfResUpgrowth]
dfstateresources = pd.concat(resources)
dfstateresources
dfstateresources.AnomalyCode.unique()
dfstateresources.loc[dfstateresources['AnomalyCode'] == 'Assumed component zero values used in PEC computation for NMC 5.0.', 'AnomalyCode'] = '0'
dfstateresources.AnomalyCode.unique()
dfstateresources.to_csv('FinalData/state_resource.csv', encoding='utf-8', index=False)
| .ipynb_checkpoints/STATE Data Transformation-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc-hr-collapsed=true
# # Planning
# - Goals
# - Deliverables
# - How to get to the end?
#
# -
# ## Goals
# - Complete my first full pipeline project from start to finish
# - Find a model that predicts the likelihood of a person having a stroke
# - Learn a new technique during this project
#
# ## Deliverables
# - A completed notebook with visuals, commented code, markdown narrative, and machine learning models.
# ## How?
# - Begin by selecting and acquiring the data set
# - I chose a data set containing over 5,100 patient records with stroke indicators.
# - Examine the data for missing values and obvious outliers
# - Prepare the data for exploration and statistical tests
# - Explore the univariate, bivariate, and multivariate relationships.
# - Run statistical tests to verify that the features are suitable for modeling
# - Create a model baseline
# - Run various models with different hyperparameters for the best results
# - Select and test the best performing model.
# +
# ignore warnings
import warnings
warnings.filterwarnings("ignore")
# Data getting, cleaning, and exploring
import wrangle as w
import explore as ex
# Python without these is hard
import pandas as pd
import numpy as np
from scipy import stats
# Machine Learning
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import sklearn.preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.impute import SimpleImputer, KNNImputer
# Classification Modeling
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
# Visualization
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from sklearn.tree import export_graphviz
# -
# ### Hypotheses:
#
# - Heart disease will be a driver of stroke
# - Decision tree will be my best model due to the large number of binary features
# - Age will be a significant factor of my model
# - The dataset is too imbalanced to get an accurate prediction
#
# # Wrangle notes:
# ### Changes to df:
# - set index to id
# - made ever_married into binary variable
# - replaced 'Unknown' in smoking_status with 'never_smoked'
# - created dummy variables of residence_type and gender
# - impute knn for bmi using 'age', 'avg_glucose_level', 'heart_disease', 'hypertension'
# - created current smoker feature
# - created age_bin and gluc_bin
#
df = w.wrangle_stroke()
# shows that columns are not missing any data
w.missing_zero_values_table(df)
# shows that no records are missing any data
w.missing_columns(df)
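# The helpers above come from the author's `wrangle.py` module, which is not shown
# here. As a rough illustration only (the real implementation may differ), a
# missing-values summary like `w.missing_zero_values_table` could be sketched as:

```python
import numpy as np
import pandas as pd

def missing_values_table(frame):
    """Per-column missing counts and percentages (hypothetical stand-in
    for w.missing_zero_values_table, whose source is not shown)."""
    missing = frame.isnull().sum()
    pct = 100 * missing / len(frame)
    table = pd.DataFrame({'missing': missing, 'pct_missing': pct.round(1)})
    # keep only columns that actually have gaps, worst first
    return table[table['missing'] > 0].sort_values('missing', ascending=False)

demo = pd.DataFrame({'age': [50, np.nan, 64, 71],
                     'bmi': [27.1, 31.4, np.nan, np.nan]})
print(missing_values_table(demo))
```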
list(df.columns)
# organize my data into various groups of columns in the form of list
quant_cols = ['age', 'bmi']
bin_cols = ['hypertension','heart_disease','ever_married','rural_residence','urban_residence', 'current_smoker', 'is_female', 'is_male']
target = 'stroke'
cat_cols = ['work_type', 'smoking_status',]
# explore univariate information.
ex.explore_univariate(df, cat_cols, quant_cols)
# # Univariate Takeaways
#
# - Age is pretty even across the board
# - Most work is in private sector
# - Avg. glucose and bmi have a right skew; I assume they are related
# Split data
train, validate, test = w.train_validate_test_split(df, target, 42)
# Scale data
train, validate, test = w.train_validate_test_scale(train, validate, test, quant_cols)
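# `w.train_validate_test_split` and `w.train_validate_test_scale` also live in the
# unshown `wrangle` module. A plausible sketch of what they do — two chained
# `train_test_split` calls for a 60/20/20 split, then a `MinMaxScaler` fit on the
# training set only — is shown below on toy data (assumed behavior, not the
# author's exact code):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def split_then_scale(df, target, quant_cols, seed=42):
    """Hypothetical stand-in for the wrangle helpers: stratified 60/20/20
    split, then MinMax-scale numeric columns using only the training fit."""
    train_val, test = train_test_split(df, test_size=0.20, random_state=seed,
                                       stratify=df[target])
    train, validate = train_test_split(train_val, test_size=0.25, random_state=seed,
                                       stratify=train_val[target])
    scaler = MinMaxScaler().fit(train[quant_cols])  # fit on train only: no leakage
    parts = []
    for part in (train, validate, test):
        part = part.copy()
        part[quant_cols] = scaler.transform(part[quant_cols])
        parts.append(part)
    return parts

demo = pd.DataFrame({'age': range(100), 'stroke': [0, 1] * 50})
tr, va, te = split_then_scale(demo, 'stroke', ['age'])
print(len(tr), len(va), len(te))
```

# Fitting the scaler on the training split alone keeps validation and test
# information out of the preprocessing step.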
# explore each variable against the target variable
ex.explore_bivariate(train, target, target, bin_cols, quant_cols)
# + [markdown] toc-hr-collapsed=true
# # Bivariate takeaways
#
# - Good features:
# - hypertension
# - heart disease
# - ever married
# - age
# - glucose
# - Bad features:
# - residency
# - gender
# - current smoker
# - Need more info:
# - bmi
# - ever_smoked...
#
# +
# Wanted to get a closer look at work_type relationship with stroke
sns.countplot(data=train, x='work_type', hue='stroke')
# Private sector had the highest number of strokes
# however that is most likely due to the larger number of
# private sector workers
# + [markdown] toc-hr-collapsed=true
# ## Work_type and Stroke
#
# - Wanted to get a closer look at work_type relationship with stroke.
# - Private sector had the highest number of strokes; however, that is most likely due to the larger number of private sector workers.
# + [markdown] toc-hr-collapsed=true
# # Statistical Analysis
# -
# ### χ<sup>2</sup> Test
#
# The χ<sup>2</sup> test allows me to test for independence of 2 categorical variables.
# ### Confidence
#
# - Confidence level will be 99%
# - Alpha will be 0.01
# - p-value must be below 0.01 to be statistically significant
# ### Hypothesis
# - The null hypothesis (H<sub>0</sub>) is: hypertension is independent from stroke
# - The alternate hypothesis (H<sub>1</sub>) is: hypertension and stroke are dependent
ex.chi2(train, 'hypertension', 'stroke', 0.01)
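# `ex.chi2` is a helper from the unshown `explore` module; under the hood a test
# like this is typically a `scipy.stats.chi2_contingency` call on a crosstab.
# A minimal sketch on toy data (assumed, not the author's implementation):

```python
import pandas as pd
from scipy.stats import chi2_contingency

def chi2_independence(df, feature, target, alpha=0.01):
    """Chi-squared test of independence between two categorical columns
    (hypothetical stand-in for ex.chi2)."""
    observed = pd.crosstab(df[feature], df[target])
    stat, p, dof, expected = chi2_contingency(observed)
    return p, ('reject H0' if p < alpha else 'fail to reject H0')

# toy data: 60 patients with a mild hypertension/stroke association
demo = pd.DataFrame({'hypertension': [0] * 40 + [1] * 10 + [0] * 5 + [1] * 5,
                     'stroke':       [0] * 50 + [1] * 10})
p, decision = chi2_independence(demo, 'hypertension', 'stroke')
print(round(p, 4), decision)
```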
# ### Hypothesis
# - H<sub>0</sub> is: heart disease is independent from stroke
# - H<sub>1</sub> is: heart disease and stroke are dependent
ex.chi2(train, 'heart_disease', 'stroke', 0.01)
# ### Hypothesis
# - H<sub>0</sub> is: ever married is independent from stroke
# - H<sub>1</sub> is: ever married and stroke are dependent
ex.chi2(train, 'ever_married', 'stroke', 0.01)
# ### T-Test
#
# - The T-test allows me to compare the means of 2 subgroups
# ### Confidence
#
# - Confidence level will be 99%
# - Alpha will be 0.01
# - p-value must be below 0.01 to be statistically significant
# ### Hypothesis: Age of those who have had a stroke vs. the age of those who have not had a stroke
#
# #### Two Sample, One Tail T-test
#
# - H<sub>0</sub> is: The age of those who have not had a stroke is equal to or higher than the age of those who have had a stroke.
# - H<sub>1</sub> is: The age of those who have not had a stroke is significantly less than the age of those who have had a stroke.
# +
# population_1: Series of train.age column filtering out those who have NOT had a stroke.
age_no_stroke = train[train.stroke == 0].age
# population_2: Series of train.age column filtering out those who have had a stroke
age_stroke = train[train.stroke == 1].age
# -
# Visual to explain why I think this would be a great feature
sns.boxenplot(data=train, y='age', x='stroke')
# +
# Running a 2 sample, 1 tail, t-test, predicting that the age of
# people who have not had a stroke is lower than those who have
# had a stroke.
ex.t_test(age_no_stroke, age_stroke, 0.01, sample=2, tail=1, tail_dir='lower')
# -
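# `ex.t_test` is likewise from the unshown `explore` module; a one-tailed,
# two-sample test can be sketched with `scipy.stats.ttest_ind` by folding the
# two-tailed p-value to one tail (assumed implementation, with toy ages in place
# of the real `age_no_stroke` / `age_stroke` series):

```python
import numpy as np
from scipy.stats import ttest_ind

def one_tail_lower_t_test(sample_a, sample_b, alpha=0.01):
    """Two-sample Welch t-test with H1: mean(sample_a) < mean(sample_b)
    (hypothetical stand-in for ex.t_test with tail_dir='lower')."""
    t, p_two_tail = ttest_ind(sample_a, sample_b, equal_var=False)
    p = p_two_tail / 2 if t < 0 else 1 - p_two_tail / 2  # fold to the lower tail
    return t, p, ('reject H0' if p < alpha else 'fail to reject H0')

rng = np.random.default_rng(42)
toy_age_no_stroke = rng.normal(45, 10, 500)  # toy stand-in for age_no_stroke
toy_age_stroke = rng.normal(70, 10, 120)     # toy stand-in for age_stroke
t, p, decision = one_tail_lower_t_test(toy_age_no_stroke, toy_age_stroke)
print(round(t, 2), decision)
```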
# ### Hypothesis: Average glucose level of those who have had a stroke and the average glucose level of those who have not had a stroke.
#
# #### Two Sample, Two Tail T-test
#
# - H<sub>0</sub> is: there is no difference in the glucose levels of those who had a stroke and those who did not
# - H<sub>1</sub> is: there is a significant difference in the glucose levels of those who had a stroke and those who did not
# +
# population_1: Series of train.avg_glucose_level filtering for those WITHOUT a stroke
gluc_no_stroke = train[train.stroke == 0].avg_glucose_level
# population_2: Series of train.avg_glucose_level filtering for those WITH a stroke
gluc_stroke = train[train.stroke == 1].avg_glucose_level
# -
# Visual of avg_glucose_level and stroke
sns.boxenplot(data=train, y='avg_glucose_level', x='stroke')
# +
# Running a 2 sample, 2 tail, t-test, predicting that the average glucose
# level of people who have not had a stroke is significantly different from
# those who have had a stroke.
ex.t_test(gluc_no_stroke, gluc_stroke, 0.01, sample=2)
# -
# ## Statistical Summary
#
# ### χ<sup>2</sup> Results
# - heart_disease, hypertension, and ever_married all rejected the null hypothesis
# - It is now assumed that there is a dependency between each variable and stroke.
#
# ### T-test Results
# - a two sample one tail t-test was performed on age of those who had a stroke and those who did not have a stroke.
# - the null hypothesis was rejected.
# - the t-test showed that the age of those who have not had a stroke was significantly less than the age of those who have had a stroke.
#
# - a two sample two tail t-test was performed on average glucose levels of those who had a stroke and those who did not have a stroke.
# - the null hypothesis was rejected.
# # Modeling: Classification
# ### What am I looking for?
# - Among these models I will be looking for the ones that produce the highest Recall (Sensitivity).
# - I need a model that produces as many True Positives and as few False Negatives as possible.
# - Accuracy alone will not identify the best model here, since a highly accurate model can still miss most people who will have a stroke.
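# To make the recall-over-accuracy point concrete, here is a small illustration
# with toy labels (not the project's data): a classifier that never predicts
# stroke scores high accuracy but zero recall, while a more sensitive classifier
# trades a little accuracy for most of the true positives.

```python
from sklearn.metrics import accuracy_score, recall_score

# toy ground truth with stroke-like imbalance: 95 negatives, 5 positives
y_true = [0] * 95 + [1] * 5

# a "model" that always predicts no stroke: accurate on paper, useless in practice
y_always_zero = [0] * 100

# a sensitive "model": catches 4 of the 5 strokes at the cost of 10 false positives
y_sensitive = [0] * 85 + [1] * 10 + [1] * 4 + [0] * 1

print(accuracy_score(y_true, y_always_zero), recall_score(y_true, y_always_zero))
print(accuracy_score(y_true, y_sensitive), recall_score(y_true, y_sensitive))
```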
# +
X_train = train.drop(columns=['stroke'])
y_train = train.stroke
X_validate = validate.drop(columns=['stroke'])
y_validate = validate.stroke
X_test = test.drop(columns=['stroke'])
y_test = test.stroke
# -
# create list of features that will be used for modeling.
features = ['hypertension', 'heart_disease', 'ever_married', 'age_bin', 'gluc_bin']
# ### Baseline
# find out the mode of the target variable
train.stroke.value_counts()
# +
# Establish new column that contains the mode
train["most_frequent"] = 0
# Calculate the baseline accuracy
baseline_accuracy = (train.stroke == train.most_frequent).mean()
print('My baseline prediction is stroke = 0')
print(f'My baseline accuracy is: {baseline_accuracy:.2%}')
# -
# ### Model Selection Tools
# - During this project I stumbled upon a helpful tool for selecting the hyperparameters for each model.
# - This tool is GridSearchCV from sklearn.model_selection.
# - It takes in a model, a dictionary of parameters, and a scoring parameter.
# - With a for loop it is easy to see what this tool does
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from scipy.stats import uniform
# +
# Create a parameter dictionary for the model, {'parameter': [list of settings]}
parameters = [
{
'algorithm': ['auto', 'ball_tree', 'kd_tree', 'brute'],
'n_neighbors': [1, 3, 5, 7, 9],
'weights': ['distance'],
},
]
# Created variable model which holds the KNN model
model = KNeighborsClassifier()
# Create grid_search model, looking at recall
grid_search = GridSearchCV(model,
param_grid=parameters,
cv=5,
scoring='recall',
)
# Create variable r that holds the fitted grid_search
r = grid_search.fit(X_train[features], y_train)
scores = r.cv_results_
knn = r.best_estimator_
# -
# Returns max value of the mean test score
max(scores['mean_test_score'])
# loop that runs all of the possible parameter configurations from the parameter dictionary above
for mean_score, params in sorted(list(zip(scores["mean_test_score"], scores["params"])),key = lambda x: x[0]):
print(mean_score, params)
# ### Model 1: K Nearest Neighbors
# +
# Create the model
n_neighbors = 1
knn = KNeighborsClassifier(algorithm='brute', n_neighbors=n_neighbors, weights='distance')
# Fit the model with the train data
knn.fit(X_train[features], y_train)
# Predict the target
y_pred_knn = knn.predict(X_train[features])
# predict the probability
y_pred_proba_knn = knn.predict_proba(X_train[features])
# +
# Create confusion matrix, label true positive, true negative, false positive, false negative
[tn,fp],[fn, tp] = confusion_matrix(y_train, y_pred_knn)
# Calculate the true positive rate, true negative rate, false positive rate, and false negative rate
tpr = (tp / (tp+fn))
fnr = (fn / (fn+tp))
tnr = (tn / (tn+fp))
fpr = (fp / (tn+fp))
# -
print(f'The confusion matrix:\n {confusion_matrix(y_train, y_pred_knn)}\n')
print(f'Classification Report:\n{classification_report(y_train, y_pred_knn)}\n')
print(f'The True Positive Rate is: {tpr:.2%}')
print(f'The False Positive Rate is: {fpr:.2%}')
print(f'The True Negative Rate is: {tnr:.2%}')
print(f'The False Negative Rate is: {fnr:.2%}\n')
print('Accuracy of KNN classifier on training set with n_neighbors set to 1: {:.2f}'
.format(knn.score(X_train[features], y_train)))
print('Accuracy of KNN classifier on validate set with n_neighbors set to 1: {:.2f}\n'
.format(knn.score(X_validate[features], y_validate)))
# ### Model 2: Random Forest
# create the random forest model
rf = RandomForestClassifier(bootstrap=True,
n_estimators=50,
warm_start=True,
oob_score=True,
criterion='gini',
random_state=42)
# +
# fit the model with X_train
rf.fit(X_train[features], y_train)
# Predict the target
y_pred_rf = rf.predict(X_train[features])
# predict the probability
y_pred_proba_rf = rf.predict_proba(X_train[features])
# +
# Create confusion matrix, label true positive, true negative, false positive, false negative
[tn,fp],[fn, tp] = confusion_matrix(y_train, y_pred_rf)
# Calculate the true positive rate, true negative rate, false positive rate, and false negative rate
tpr = (tp / (tp+fn))
fnr = (fn / (fn+tp))
tnr = (tn / (tn+fp))
fpr = (fp / (tn+fp))
# -
print(f'\nThe confusion matrix:\n {confusion_matrix(y_train, y_pred_rf)}\n')
print(f'Classification Report:\n{classification_report(y_train, y_pred_rf)}\n')
print(f'The True Positive Rate is: {tpr:.2%}')
print(f'The False Positive Rate is: {fpr:.2%}')
print(f'The True Negative Rate is: {tnr:.2%}')
print(f'The False Negative Rate is: {fnr:.2%}\n')
print('Accuracy of random forest classifier on training set: {:.2f}'
.format(rf.score(X_train[features], y_train)))
print('Accuracy of random forest classifier on the validate set: {:.2f}'
.format(rf.score(X_validate[features], y_validate)))
# ### Model 3: Decision Tree
# Create decision tree model
clf = DecisionTreeClassifier(max_depth=7, splitter='random', random_state=42)
# +
# fit the model
clf = clf.fit(X_train[features], y_train)
# predict the target
y_pred_clf = clf.predict(X_train[features])
# predict the probability
y_pred_proba_clf = clf.predict_proba(X_train[features])
# +
# Create confusion matrix, label true positive, true negative, false positive, false negative
[tn,fp],[fn, tp] = confusion_matrix(y_train, y_pred_clf)
# Calculate the true positive rate, true negative rate, false positive rate, and false negative rate
tpr = (tp / (tp+fn))
fnr = (fn / (fn+tp))
tnr = (tn / (tn+fp))
fpr = (fp / (tn+fp))
# -
print(f'The confusion matrix:\n {confusion_matrix(y_train, y_pred_clf)}\n')
print(f'Classification Report:\n {classification_report(y_train, y_pred_clf)}')
print(f'The True Positive Rate is: {tpr:.2%}')
print(f'The False Positive Rate is: {fpr:.2%}')
print(f'The True Negative Rate is: {tnr:.2%}')
print(f'The False Negative Rate is: {fnr:.2%}\n')
print('Accuracy of Decision Tree classifier on training set: {:.2f}\n'
.format(clf.score(X_train[features], y_train)))
print('Accuracy of Decision Tree classifier on validate set: {:.2f}'
.format(clf.score(X_validate[features], y_validate)))
# ### Model 4: Logistic Regression
logit = LogisticRegression(penalty='l2', C=1, class_weight={0: 10, 1: 90}, random_state=42, solver='lbfgs')
logit.fit(X_train[features], y_train)
print('Coefficient: \n', logit.coef_)
print('Intercept: \n', logit.intercept_)
# +
# predict the target
y_pred_log = logit.predict(X_train[features])
# predict the probability
y_pred_proba_log = logit.predict_proba(X_train[features])
# +
# Create confusion matrix, label true positive, true negative, false positive, false negative
[tn,fp],[fn, tp] = confusion_matrix(y_train, y_pred_log)
# Calculate the true positive rate, true negative rate, false positive rate, and false negative rate
tpr = (tp / (tp+fn))
fnr = (fn / (fn+tp))
tnr = (tn / (tn+fp))
fpr = (fp / (tn+fp))
# -
print('Accuracy of Logistic Regression classifier on training set: {:.2f}\n'
.format(logit.score(X_train[features], y_train)))
print(f'The confusion matrix:\n {confusion_matrix(y_train, y_pred_log)}\n')
print(f'Classification Report:\n {classification_report(y_train, y_pred_log)}\n')
print(f'The True Positive Rate is: {tpr:.2%}')
print(f'The False Positive Rate is: {fpr:.2%}')
print(f'The True Negative Rate is: {tnr:.2%}')
print(f'The False Negative Rate is: {fnr:.2%}\n')
print('Accuracy on training set: {:.2f}'
      .format(logit.score(X_train[features], y_train)))
print('Accuracy on out-of-sample validate set: {:.2f}'.format(logit.score(X_validate[features], y_validate)))
# + [markdown] toc-hr-collapsed=true
# # Testing the Model
# -
# ### KNN Model had the best fit
#
# - Hyperparameters:
# - algorithm='brute'
# - n_neighbors=1
# - weights='distance'
# +
print(f'My baseline accuracy is: {baseline_accuracy:.2%}\n')
print('Accuracy on training set: {:.2f}'
      .format(knn.score(X_train[features], y_train)))
print('Accuracy on out-of-sample validation set: {:.2f}'.format(knn.score(X_validate[features], y_validate)))
print('Accuracy on out-of-sample test set: {:.2f}\n'.format(knn.score(X_test[features], y_test)))
print(f'The confusion matrix:\n {confusion_matrix(y_train, y_pred_knn)}\n')
print(f'Classification Report:\n{classification_report(y_train, y_pred_knn)}\n')
# + [markdown] toc-hr-collapsed=true
# # Model Summary
# - My models performed pretty poorly
# - The imbalanced data set did not provide enough stroke-positive people to analyze, making it difficult to see what is happening
#
# ## Best Model
# - Because Recall is the most important scoring metric, The KNN is the best performing model
# - The downside of prioritizing recall as the modeling metric is that precision is generally negatively affected.
# - Accuracy decreased from 95% to 84%, and the Recall achieved a 29%
# + [markdown] toc-hr-collapsed=true
# # Conclusion
#
# With the current dataset, predicting stroke is extremely difficult. When I completed modeling of this data, I realized that finding a really good solution is problematic for a couple reasons.
# 1. The dataset is far too small. If stroke is the world's 2<sup>nd</sup> leading cause of death, there should be much more information available.
# 2. This dataset is far too imbalanced for a good machine learning algorithm to analyze.
# a. When imblearn is applied, the dataset drops from 5000 records to 300.
#
# What can be done to make this project better?
# Collect more stroke victim data, perhaps conducting a large study to gather more patients' data and more data points like family history, blood disorders, etc.
# -
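# The under-sampling effect described above can be sketched without imblearn at
# all: random under-sampling keeps every minority row and samples an equal number
# of majority rows (a simplified version of what imblearn's RandomUnderSampler
# does; the toy row counts below are chosen to mirror the numbers quoted above):

```python
import pandas as pd

# toy frame mirroring the described imbalance: 5000 rows, 150 stroke-positive
df_demo = pd.DataFrame({'stroke': [0] * 4850 + [1] * 150})

minority = df_demo[df_demo.stroke == 1]
majority = df_demo[df_demo.stroke == 0].sample(n=len(minority), random_state=42)
balanced = pd.concat([minority, majority])
print(len(balanced), balanced['stroke'].mean())
```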
| final_stroke.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import tensorflow as tf
import tensorflow.python.platform
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
# %matplotlib inline
# Global variables.
NUM_LABELS = 2 # The number of labels.
BATCH_SIZE = 100 # The number of training examples to use per training step.
# +
# Helper function in place of original xrange() in python2
def xrange(x):
return iter(range(x))
# Read the data from the given data file and extract the features and labels
def csv_data_reader(filename):
# Arrays to hold the labels and features.
label = []
features = []
with open(filename, 'r') as datafile:
for line in datafile:
row = line.split(",")
label.append(int(row[2]))
features.append([float(x) for x in row[0:2]])
features_matrix = np.matrix(features).astype(np.float32)
labels_vect = np.array(label).astype(dtype=np.uint8)
labels_onehot = (np.arange(NUM_LABELS) == labels_vect[:, None]).astype(np.float32)
    # return the feature matrix (columns 0-1) and one-hot labels
return features_matrix,labels_onehot
# Init weights method. (Reference : Delip Rao: http://deliprao.com/archives/100)
def weight_initializer(shape, init_method='xavier', xavier_params = (None, None)):
if init_method == 'zeros':
return tf.Variable(tf.zeros(shape, dtype=tf.float32))
elif init_method == 'uniform':
return tf.Variable(tf.random_normal(shape, stddev=0.01, dtype=tf.float32))
else: #xavier
(fan_in, fan_out) = xavier_params
low = -4*np.sqrt(6.0/(fan_in + fan_out)) # {sigmoid:4, tanh:1}
high = 4*np.sqrt(6.0/(fan_in + fan_out))
return tf.Variable(tf.random_uniform(shape, minval=low, maxval=high, dtype=tf.float32))
def predictor(x,w_hidden,b_hidden,w_out,b_out):
hidden = tf.nn.tanh(tf.matmul(tf.cast(x,tf.float32),w_hidden) + b_hidden)
y = tf.nn.softmax(tf.matmul(tf.cast(hidden,tf.float32), w_out) + b_out)
pred_result = tf.argmax(y,1)
return pred_result.eval()
# +
training_data_fname = "data/intro_to_ann3.csv"
test_data_fname = "data/intro_to_ann3.csv"
# Extract the csv data into numpy arrays.
train_data,train_labels = csv_data_reader(training_data_fname)
test_data, test_labels = csv_data_reader(test_data_fname)
train_size,num_features = train_data.shape
num_epochs =3000
num_hidden = 10
# The below place holders hold the features and label data
# that will be used by the program later
x = tf.placeholder("float", shape=[None, num_features])
y_ = tf.placeholder("float", shape=[None, NUM_LABELS])
test_data_node = tf.constant(test_data)
# +
# Construct Phase
# Hidden weights and bias initialization
w_hidden = weight_initializer(
[num_features, num_hidden],
'uniform',
xavier_params=(num_features, num_hidden))
b_hidden = weight_initializer([1,num_hidden],'zeros')
# Construct the hidden layers
hidden = tf.nn.tanh(tf.matmul(x,w_hidden) + b_hidden)
# output weights and bias initialization
w_out = weight_initializer(
[num_hidden, NUM_LABELS],
'uniform',
xavier_params=(num_hidden, NUM_LABELS))
b_out = weight_initializer([1,NUM_LABELS],'zeros')
# Construct the output layer
y = tf.nn.softmax(tf.matmul(hidden, w_out) + b_out)
model = tf.initialize_all_variables()
# -
# Optimization.
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.AdamOptimizer(0.1).minimize(cross_entropy)
# Verification Phase
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
# Execution Phase.
with tf.Session() as sess:
# Initialize all the variables.
tf.initialize_all_variables().run()
# Iterate and train.
for step in xrange(num_epochs * train_size // BATCH_SIZE):
offset = (step * BATCH_SIZE) % train_size
batch_data = train_data[offset:(offset + BATCH_SIZE), :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
train_step.run(feed_dict={x: batch_data, y_: batch_labels})
if step % 100 == 0:
print ("At step: ", step, " System accuracy is:", accuracy.eval(feed_dict={x: test_data, y_: test_labels}))
train = pd.read_csv("data/intro_to_ann3.csv")
X, y = np.array(train.iloc[:, 0:2]), np.array(train.iloc[:, 2])
cmhot = plt.cm.get_cmap("hot")
plt.scatter(X[:,0], X[:,1], s=20, c=y, cmap=cmhot)
plt.axis('off')
plt.show()
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = predictor(np.c_[xx.ravel(), yy.ravel()],w_hidden,b_hidden,w_out,b_out)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.axis('off')
plt.scatter(X[:,0], X[:,1], s=20, c=y, cmap=cmhot)
plt.show()
| assignment/Kolar-Prasanna/assign-03-pras-kolar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import numpy as np
p = np.poly1d([1,0,0,0,0,0])
print (p)
print (p.integ())
p.integ()(1.0) - p.integ()(-1.0)
from sympy import integrate, symbols
x, y = symbols('x y', real=True)
integrate(x**5, x)
integrate(x**5, (x, -1, 1))
from sympy import N, exp as Exp, sin as Sin
integrate(Exp(-x) * Sin(x), x)
integrate(Exp(-x) * Sin(x), (x, 0, 1))
N(_)
integrate(Sin(x) / x, x)
integrate(Sin(x) / x, (x, 0, 1))
N(_)
integrate(x**1, (x, 0, 1))
from sympy import oo
integrate(Exp(-x**2), (x,0,+oo))
| Chapter08/Integration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # XLM-RoBERTa Transfer Learning for Sentiment Analysis on Multilingual data
# - goal : multi-class classification
# - labels : `positive`, `negative`, `neutral`
# - accelerator : TPU v3-8
#
# #### XLM-R
# > [HuggingFace Multilingual Model](https://huggingface.co/docs/transformers/multilingual#xlmroberta)
# > XLM-RoBERTa was trained on 2.5TB of newly created clean CommonCrawl data in 100 languages via masked language modeling (MLM).
# > It provides strong gains over previously released multi-lingual models like mBERT or XLM on downstream tasks like classification, sequence labeling and question answering.
#
# - why not `xlm-roberta-base` but `jplu/tf-xlm-roberta-base` ?
# > Because the TensorFlow version of the `xlm-roberta-base` model weights is deprecated. I also decided to use TensorFlow Distribute for its easier usage (PyTorch/XLA has recently had issues with wheels and images on Google Colab with TPUs). I found `jplu/tf-xlm-roberta-base` on the HuggingFace model hub and based this notebook on this [kaggle notebook](https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta#Submission).
#
# #### model building process :
# Most processes are modularized into [functions](#hf).
#
# 0. [import libraries, config TPU and set constant variables](#step0)
# 1. [load datasets in dataframe](#step1) `load_data_into_dataframe(train_csv_location, test_csv_location)`
# 2. [convert dependent variable (categorical) to one-hot-encoding](#step2) `encode_dependent_variable_in_OHE(train_df, label_name='sentiment')`
# 3. [create Transformers Tokenizer](#step3) `transformers.AutoTokenizer.from_pretrained('jplu/tf-xlm-roberta-base')`
# 4. [split training data further into training and validation set](#step4) `split_train_validation(train_df, X_columns_name, label, validation_size=0.15, random_state=42)`
# 5. let the Transformers Tokenizer tokenize and encode texts into embeddings `tokenizer_encode(texts, tokenizer=tokenizer, maxlen=512)`
# 6. [load datasets in Tensorflow Dataset API for an efficient input pipeline](#step6) `load_into_tf_Dataset(tokenizer, train_texts, val_texts, y_train, y_valid, test_df, batch_size=BATCH_SIZE, prefetch_buffer_size=AUTO)`
# 7. [instantiate transformer model](#step7) `transformers.TFAutoModel.from_pretrained('jplu/tf-xlm-roberta-base')`
# 8. [create output layer on top of frozen body of XLM-r model](#step8) `build_model(transformer, num_classes=3, activation='softmax', max_len=512)`
# 9. [set `EarlyStopping` by monitoring validation loss to prevent overfitting](#step9)
# 10. [train model](#step10)
# 11. [plot model performance after training](#step11) `plot_model_history(history, measures)`
#
#
#
# Please refer to another notebook `data_analysis.ipynb` for data analysis.
# ### 0. Import Libraries <a class="anchor" id="step0"></a>
# +
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from tqdm.keras import TqdmCallback
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping
import tensorflow_addons as tfa
import transformers
from transformers import TFAutoModel, AutoTokenizer
# -
# ### 0. TPU Config
# +
# Detect hardware, return appropriate distribution strategy
try:
# TPU detection. No parameters necessary if TPU_NAME environment variable is
# set: this is always the case on Kaggle.
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
# Default distribution strategy in Tensorflow. Works on CPU and single GPU.
strategy = tf.distribute.get_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
# AUTO set for prefetch buffer
AUTO = tf.data.experimental.AUTOTUNE
# -
# ### 0. Config / Set Constant Variable
# +
# seeds
from numpy.random import seed
seed(42)
tf.random.set_seed(42)
TRAIN_CSV_PATH = '../input/technical-test/train.csv'
TEST_CSV_PATH = '../input/technical-test/test.csv'
EPOCHS = 25
BATCH_SIZE = 16 * strategy.num_replicas_in_sync
MAX_LEN = 256
MODEL_NAME = 'jplu/tf-xlm-roberta-base'
# -
# ## Helper Functions <a class="anchor" id="hf"></a>
# +
# step 1. load datasets in dataframe
def load_data_into_dataframe(train_csv_location, test_csv_location):
"""
Load CSV datasets into Pandas Dataframe.
Parameters:
------------
    train_csv_location : str
A string of path location of training dataset in csv format.
    test_csv_location : str
A string of path location of test dataset in csv format.
Returns:
------------
train_df : Pandas Dataframe
A Dataframe of training data.
test_df : Pandas Dataframe
A Dataframe of test data.
"""
# Load csv in Pandas Dataframe
train_df = pd.read_csv(train_csv_location)
test_df = pd.read_csv(test_csv_location)
return train_df, test_df
# +
# step 2. convert dependent variable (categorical) to one-hot-encoding
def encode_dependent_variable_in_OHE(train_df, label_name='sentiment'):
"""
    Encode the sentiment labels in the training Dataframe into one-hot-encoded format, since the output Dense layer uses 'categorical_crossentropy'.
Parameters:
------------
train_df : Pandas Dataframe
A Dataframe of loaded training data.
label_name : str
A string of column name indicating the output variable in the train_df.
Returns:
------------
dummy_y : Numpy Array
A matrix of one-hot-encoded class labels.
"""
# get array of sentiment labels
Y = train_df[label_name].values
# encode label values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables - one hot encoded
dummy_y = to_categorical(encoded_Y)
return dummy_y
# +
# step 4. split training data further into training and validation set
def split_train_validation(train_df, X_columns_name, label, validation_size=0.15, random_state=42):
"""
Split training data into further training set and validation set in a ratio of ( 1-validation_size : validation_size).
In my case, I set 85% of the training data for training the model and 15% aside for validation.
Parameters:
------------
train_df : Pandas Dataframe
A Dataframe of loaded training data.
X_columns_name : str
A string of column name indicating the comments in texts in the train_df which will be used as independent variable for training.
label : Numpy Array or Tensorflow Tensor
An array of labels or a matrix of one-hot-encoded labels.
validation_size : float
A float number between 0-1 indicating desired validation size
random_state : int
        Set the seed for reproducibility in splitting the dataset.
Returns:
------------
train_texts : Numpy Array
An array of training set samples in textual format ((1 - validation_size) of the data).
val_texts : Numpy Array
An array of validation set samples in textual format (validation_size of the data).
y_train : Numpy Array
An array (or one-hot matrix) of labels corresponding to train_texts.
y_valid : Numpy Array
An array (or one-hot matrix) of labels corresponding to val_texts.
"""
train_texts, val_texts, y_train, y_valid = train_test_split(train_df[X_columns_name].values, label,
random_state=random_state,
test_size=validation_size, shuffle=True)
print('Total number of examples: ', len(train_df))
print('number of training set examples: ', len(train_texts))
print('number of validation set examples: ', len(val_texts))
return train_texts, val_texts, y_train, y_valid
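# The seeded shuffle split that `train_test_split` performs can be sketched in plain Python. This is an illustrative stand-in (`split_train_validation_sketch` is a hypothetical helper), not the scikit-learn implementation:

```python
import random

def split_train_validation_sketch(samples, labels, validation_size=0.15, random_state=42):
    # Shuffle index positions with a fixed seed so the split is reproducible.
    idx = list(range(len(samples)))
    random.Random(random_state).shuffle(idx)
    n_val = int(round(len(samples) * validation_size))
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    # Index texts and labels with the same permutation so pairs stay aligned.
    return ([samples[i] for i in train_idx], [samples[i] for i in val_idx],
            [labels[i] for i in train_idx], [labels[i] for i in val_idx])

texts = [f"comment {i}" for i in range(20)]
ys = list(range(20))
tr_x, va_x, tr_y, va_y = split_train_validation_sketch(texts, ys)
print(len(tr_x), len(va_x))  # 17 3
```

# Because texts and labels are indexed by the same shuffled index list, every training text keeps its own label, which is the property that matters for the real split above.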
# +
# step 5. let the Transformers tokenizer tokenize and encode the texts
def tokenizer_encode(texts, tokenizer, maxlen=512):
"""
Use the Transformers tokenizer API to tokenize the input texts and encode them as input ids.
Parameters:
------------
texts : list of str
A list of string to be encoded by the tokenizer.
tokenizer : Transformers AutoTokenizer
A Transformers AutoTokenizer object used to encode the text data.
maxlen : int
An integer representing the maximum length of each sample, which is also the sequence length fed to the 'frozen' body of the transformer model.
Returns:
------------
encoding_array : Numpy Array
An array of tokenizer-encoded input ids for the texts.
"""
encoding = tokenizer.batch_encode_plus(
texts,
truncation=True,
return_attention_mask=False,
return_token_type_ids=False,
padding='max_length',
max_length=maxlen
)
encoding_array = np.array(encoding['input_ids'])
return encoding_array
# +
# step 6. load datasets in Tensorflow Dataset API for an efficient input pipeline
def load_into_tf_Dataset(tokenizer, train_texts, val_texts, y_train, y_valid, test_df, batch_size=BATCH_SIZE, prefetch_buffer_size=AUTO):
"""
Load the split training, validation, and test datasets into the Tensorflow Dataset API for a more efficient input pipeline, especially for parallelism.
Parameters:
------------
tokenizer : Transformers AutoTokenizer
A Transformers AutoTokenizer object used to encode the text data.
train_texts : Numpy Array
An array of training set samples in textual format ((1 - validation_size) of the data).
val_texts : Numpy Array
An array of validation set samples in textual format (validation_size of the data).
y_train : Numpy Array
An array of one-hot-encoded training labels corresponding to train_texts.
y_valid : Numpy Array
An array of one-hot-encoded validation labels corresponding to val_texts.
test_df : Pandas Dataframe
A Dataframe of loaded test data.
batch_size : int
An integer indicating the batch size. Defaults to 16 * number_of_TPU_cores (= 128).
prefetch_buffer_size : tf.int64 , tf.Tensor or tf.data.AUTOTUNE
A scalar representing the maximum number of elements that will be buffered when prefetching. Here uses tf.data.AUTOTUNE (AUTO) by default.
Returns
------------
train_dataset : tf.data.Dataset
A Tensorflow Dataset API object of training set as an input pipeline for model training.
valid_dataset : tf.data.Dataset
A Tensorflow Dataset API object of validation set as an input pipeline for model validation.
test_dataset : tf.data.Dataset
A Tensorflow Dataset API object of test set as an input pipeline for model inference.
"""
## Tokenize the textual format data by calling tokenizer_encode().
x_train = tokenizer_encode(texts=train_texts.tolist(), tokenizer=tokenizer, maxlen=MAX_LEN)
x_valid = tokenizer_encode(texts=val_texts.tolist(), tokenizer=tokenizer, maxlen=MAX_LEN)
x_test = tokenizer_encode(texts=test_df.content.values.tolist(), tokenizer=tokenizer, maxlen=MAX_LEN)
## Build Tensorflow Dataset objects.
train_dataset = (
tf.data.Dataset
.from_tensor_slices((x_train, y_train))
.repeat()
.shuffle(2048)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
valid_dataset = (
tf.data.Dataset
.from_tensor_slices((x_valid, y_valid))
.batch(BATCH_SIZE)
.cache()
.prefetch(AUTO)
)
test_dataset = (
tf.data.Dataset
.from_tensor_slices(x_test)
.batch(BATCH_SIZE)
)
return train_dataset, valid_dataset, test_dataset
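# The batching step that tf.data performs with `.batch(BATCH_SIZE)` can be sketched in plain Python — an illustration of the grouping behavior only (`batch` is a hypothetical helper; the real pipeline also shuffles, repeats, and prefetches on device):

```python
def batch(items, batch_size):
    # Group consecutive items into lists of size batch_size; the final
    # batch may be smaller, matching tf.data's default drop_remainder=False.
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

print(batch(list(range(10)), 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```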
# +
# step 7. create output layer on top of frozen body of XLM-r model
# function based on the notebook https://www.kaggle.com/xhlulu/jigsaw-tpu-distilbert-with-huggingface-and-keras
def build_model(transformer, num_classes=3, activation='softmax', max_len=512):
"""
Create a classification head on top of the frozen HuggingFace Transformer body for the downstream task.
In my case, a multi-class classification is the goal. Taking into account that there are 3 classes,
I use categorical accuracy, as well as weighted F1 score and Matthews correlation coefficient as metrics.
Parameters:
------------
transformer : Transformers TFAutoModel
A pretrained transformer model used as the (frozen) body.
num_classes : int
An integer representing the number of output classes.
activation : str
A string indicating which activation to use in the output layer.
max_len : int
An integer representing the maximum length of each sample, which is also the sequence length fed to the 'frozen' body of the transformer model.
Returns:
------------
model : tf.keras.Model
A compiled model ready to be trained.
"""
input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = Dense(units=num_classes, activation=activation, name='softmax')(cls_token) # set units=3 because we have three classes
# add weighted F1 score and Matthews correlation coefficient as metrics
f1 = tfa.metrics.F1Score(num_classes=num_classes, average='weighted')
mcc = tfa.metrics.MatthewsCorrelationCoefficient(num_classes=num_classes)
model = Model(inputs=input_word_ids, outputs=out)
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['categorical_accuracy', f1, mcc])
return model
# +
# step 10. plot model performance after training
def plot_model_history(history, measures):
"""
Plot history for visualization of performance measures in matplotlib.
Parameters:
------------
history : Keras History object
A History object returned by model.fit
measures : list of str
A list of performance measure names to visualize
"""
for measure in measures:
plt.plot(history.history[measure])
plt.plot(history.history['val_' + measure])
plt.title('model performance : ' + measure.replace("_", " "))
plt.ylabel(measure.replace("_", " "))
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# -
# ### 1. Load datasets in dataframe <a class="anchor" id="step1"></a>
# +
train_df, test_df = load_data_into_dataframe(TRAIN_CSV_PATH, TEST_CSV_PATH)
# print(train_df.shape[0]) #--> 25000
# print(test_df.shape[0]) #--> 2500
# train_df.sentiment.value_counts()
# train_df.sentiment.value_counts() / train_df.sentiment.value_counts().sum()
# remove the one example with 'unassigned' label (data analysis is done beforehand, so I'll just proceed to delete this example here)
train_df = train_df.drop(train_df[train_df.sentiment == 'unassigned'].index)
train_df.head(10)
# -
# ### 2. Convert dependent variable (categorical) to one-hot-encoding <a class="anchor" id="step2"></a>
ohe_y = encode_dependent_variable_in_OHE(train_df, label_name='sentiment')
print(ohe_y[0:5])
# ### 3. Create Transformers Tokenizer <a class="anchor" id="step3"></a>
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# ### 4. Split training data further into training and validation set <a class="anchor" id="step4"></a>
train_texts, val_texts, y_train, y_valid = split_train_validation(train_df, X_columns_name='content', label=ohe_y, validation_size=0.15, random_state=42)
# ### 6. Load datasets in Tensorflow Dataset API for an efficient input pipeline <a class="anchor" id="step6"></a>
# (step 5 happens inside the `load_into_tf_Dataset` function)
train_dataset, valid_dataset, test_dataset = load_into_tf_Dataset(tokenizer=tokenizer, train_texts=train_texts, val_texts=val_texts,
y_train=y_train, y_valid=y_valid,
test_df=test_df, batch_size=BATCH_SIZE, prefetch_buffer_size=AUTO)
# ### 7. Create output layer on top of frozen body of XLM-r model <a class="anchor" id="step7"></a>
# #### Load Model into TPU
# %%time
with strategy.scope():
transformer_layer = TFAutoModel.from_pretrained(MODEL_NAME)
model = build_model(transformer_layer, num_classes=3, activation='softmax', max_len=MAX_LEN)
model.summary()
# ### 8. Set `EarlyStopping` by monitoring validation loss to prevent overfitting <a class="anchor" id="step8"></a>
ES_callback = EarlyStopping(monitor='val_loss', patience=3, mode='auto')
# ### 9. Train Model <a class="anchor" id="step9"></a>
# +
n_steps = train_texts.shape[0] // BATCH_SIZE
train_history = model.fit(
train_dataset,
steps_per_epoch=n_steps,
validation_data=valid_dataset,
epochs=EPOCHS,
callbacks=[TqdmCallback(verbose=2), ES_callback]
)
# -
# ### 10. Plot model performance after training <a class="anchor" id="step10"></a>
plot_model_history(train_history, ['categorical_accuracy', 'loss', 'f1_score', 'MatthewsCorrelationCoefficient'])
train_history.history
# ### Save Model Weights
# +
#def touch_dir(dirname):
# if not os.path.exists(dirname):
# os.makedirs(dirname)
# print(f"Created directory {dirname}.")
# else:
# print(f"Directory {dirname} already exists.")
def save_model(model, transformer_dir='.'):
"""
Special function to save a keras model that uses a transformer layer
"""
import pickle
transformer = model.layers[1]
#touch_dir(transformer_dir)
transformer.save_pretrained(transformer_dir)
softmax = model.get_layer('softmax').get_weights()
pickle.dump(softmax, open('softmax.pickle', 'wb'))
def load_model(transformer_dir='.', max_len=256):
"""
Special function to load a keras model that uses a transformer layer
"""
transformer = TFAutoModel.from_pretrained(transformer_dir)
model = build_model(transformer, max_len=max_len)
softmax = pickle.load(open('softmax.pickle', 'rb'))
model.get_layer('softmax').set_weights(softmax)
return model
# -
save_model(model, transformer_dir='.')
from IPython.display import FileLink
FileLink('./tf_model.h5')
| notebook/train/xlmr_train_kaggle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR, MultiStepLR
import numpy as np
# import matplotlib.pyplot as plt
from math import *
import time
torch.cuda.set_device(2)
torch.set_default_tensor_type('torch.DoubleTensor')
# activation function
def activation(x):
return x * torch.sigmoid(x)
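# The activation above is the swish (SiLU) function, x·σ(x). A plain-Python version (an illustrative sketch, not the torch implementation) shows its key properties: it is zero at zero, slightly negative for negative inputs, and close to the identity for large x.

```python
import math

def swish(x):
    # x * sigmoid(x) == x / (1 + exp(-x))
    return x / (1.0 + math.exp(-x))

print(swish(0.0))  # 0.0
print(swish(-1.0) < 0.0)  # True: non-monotone dip below zero
```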
# build a ResNet with one residual block
class Net(torch.nn.Module):
def __init__(self,input_width,layer_width):
super(Net,self).__init__()
self.layer_in = torch.nn.Linear(input_width, layer_width)
self.layer1 = torch.nn.Linear(layer_width, layer_width)
self.layer2 = torch.nn.Linear(layer_width, layer_width)
self.layer_out = torch.nn.Linear(layer_width, 1)
def forward(self,x):
y = self.layer_in(x)
y = y + activation(self.layer2(activation(self.layer1(y)))) # residual block 1
output = self.layer_out(y)
return output
dimension = 1
input_width,layer_width = dimension, 4
net = Net(input_width,layer_width).cuda() # network for u on gpu
# definition of the exact solution
def u_ex(x):
temp = 1.0
for i in range(dimension):
temp = temp * torch.sin(pi*x[:, i])
u_temp = 1.0 * temp
return u_temp.reshape([x.size()[0], 1])
# definition of f(x)
def f(x):
temp = 1.0
for i in range(dimension):
temp = temp * torch.sin(pi*x[:, i])
u_temp = 1.0 * temp
f_temp = dimension * pi**2 * u_temp
return f_temp.reshape([x.size()[0],1])
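# With u(x) = sin(πx) in 1D, the right-hand side is f = -Δu = π² sin(πx), which is what f(x) above encodes as dimension · π² · u. A central-difference check in plain Python (scalar helper names here are illustrative, chosen not to shadow the notebook's f) confirms the identity at a sample point:

```python
import math

def u_exact_scalar(x):
    # exact solution u(x) = sin(pi * x)
    return math.sin(math.pi * x)

def f_exact(x):
    # right-hand side f = -u'' = pi^2 * sin(pi * x)
    return math.pi ** 2 * math.sin(math.pi * x)

def neg_second_derivative(g, x, h=1e-4):
    # central finite-difference approximation of -g''(x)
    return -(g(x + h) - 2.0 * g(x) + g(x - h)) / h ** 2

x0 = 0.3
err = abs(neg_second_derivative(u_exact_scalar, x0) - f_exact(x0))
print(err < 1e-5)  # True
```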
# generate sample points uniformly at random
def generate_sample(data_size):
sample_temp = torch.rand(data_size, dimension)
return sample_temp.cuda()
def model(x):
x_temp = x.cuda()
D_x_0 = torch.prod(x_temp, axis = 1).reshape([x.size()[0], 1])
D_x_1 = torch.prod(1.0 - x_temp, axis = 1).reshape([x.size()[0], 1])
model_u_temp = D_x_0 * D_x_1 * net(x)
return model_u_temp.reshape([x.size()[0], 1])
# +
# # Xavier normal initialization for weights:
# # mean = 0 std = gain * sqrt(2 / fan_in + fan_out)
# # zero initialization for biases
# def initialize_weights(self):
# for m in self.modules():
# if isinstance(m,nn.Linear):
# nn.init.xavier_normal_(m.weight.data)
# if m.bias is not None:
# m.bias.data.zero_()
# -
# Uniform initialization for weights:
# U(a, b)
# nn.init.uniform_(tensor, a = 0, b = 1)
# zero initialization for biases
def initialize_weights(self):
for m in self.modules():
if isinstance(m,nn.Linear):
nn.init.uniform_(m.weight.data)
if m.bias is not None:
m.bias.data.zero_()
# +
# # Normal initialization for weights:
# # N(mean = 0, std = 1)
# # nn.init.normal_(tensor, a = 0, b = 1)
# # zero initialization for biases
# def initialize_weights(self):
# for m in self.modules():
# if isinstance(m,nn.Linear):
# nn.init.normal_(m.weight.data)
# if m.bias is not None:
# m.bias.data.zero_()
# -
initialize_weights(net)
for name,param in net.named_parameters():
print(name,param.size())
print(param.detach().cpu())
# loss function for DGM via automatic differentiation
def loss_function(x):
# x = generate_sample(data_size).cuda()
# x.requires_grad = True
u_hat = model(x)
grad_u_hat = torch.autograd.grad(outputs = u_hat, inputs = x, grad_outputs = torch.ones(u_hat.shape).cuda(), create_graph = True)
laplace_u = torch.zeros([len(grad_u_hat[0]), 1]).cuda()
for index in range(dimension):
p_temp = grad_u_hat[0][:, index].reshape([len(grad_u_hat[0]), 1])
temp = torch.autograd.grad(outputs = p_temp, inputs = x, grad_outputs = torch.ones(p_temp.shape).cuda(), create_graph = True, allow_unused = True)[0]
laplace_u = temp[:, index].reshape([len(grad_u_hat[0]), 1]) + laplace_u
part_2 = torch.sum((-laplace_u - f(x))**2) / len(x)
return part_2
torch.save(net.state_dict(), 'net_params_DGM_ResNet_Uniform.pkl')
| code/Results1D/randomInitialization/DGM_ResNet_Uniform.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 5. Apply a threshold and add measurement markers where the condition is met
#
# This case focuses on the following methods:
#
# * Retrieve numeric data stored on the server and compute the intervals where a given threshold is exceeded
# * Attach measurement tag information to specific intervals
#
# ```
# Warning:
# Registering measurement markers requires intdash-api version v1.10.0 or later.
# ```
#
# ## Scenario
# In this scenario, the iOS application **intdash Motion** is used to acquire sensor data from an iPhone. We set a threshold on the acceleration, identify the times at which the acceleration becomes extremely large, and register a "Status : Exceeds threshold" tag at those times.
#
# ## Prerequisites
#
# Before running this scenario, prepare the following:
#
# - An edge for measurement
# - A measurement (including sensor data) uploaded with the intdash Motion app
#
# ### Data used
# In this scenario, the following data must be prepared on the server in advance.
# The time series data created in **"1. Retrieve time series data and save as CSV"** is used.
#
# |Data item|Name used in this scenario|
# |:---|:---|
# |Edge that registers the time series data|sdk_edge1|
# |Measurement information (*)|measurement1|
# |Signal definitions| sp_ACCX, sp_ACCY, sp_ACCZ |
#
# (*) As in **"1. Save retrieved time series data as CSV"**, the time series data is registered using intdash Motion.
#
# ### Import packages and create a client
# For the `url` given to `intdash.Client`, use the information for your intdash server environment; for `edge_token`, specify a token issued for your login edge.
# (Logging in with `username` and `password` is also possible, but using an edge token is recommended for continuous operation.)
# +
import pandas as pd
import intdash
from intdash import timeutils
# Create client
client = intdash.Client(
url = "https://example.intdash.jp",
edge_token = "<PASSWORD>",
)
# -
# ### Register the signal definitions
#
# Use the same signal definitions as in "1. Retrieve time series data and save as CSV".
# For the executable files that convert the general sensor type to numeric values, see:
#
# [Signal definition samples for the General Sensor type](https://docs.intdash.jp/sdk/python/latest/ja/guide/signals/generalsensor.html)
#
# In this scenario, a conversion definition is registered only for "acceleration" among the "general sensor" types.
# ### Confirm that the signal definitions are registered
# Check the signal definitions registered above.
signals = client.signals.list(label='sp')
for s in signals:
print(s.label, end=', ')
# ## Retrieve the edge used for the measurement
edges = client.edges.list(name='sdk_edge1')
sdk_edge1 = edges[0]
sdk_edge1.name
# ## Retrieve the target measurement
# Retrieve the measurement information so that it can later be looked up by its UUID.
# For the first retrieval, use `list()` with a time range.
ms = client.measurements.list(
edge_uuid=sdk_edge1.uuid,
start=timeutils.str2timestamp('2020-07-17 00:00:00+09:00'),
end=timeutils.str2timestamp('2020-07-18 00:00:00+09:00'),
)
# Because there is only one measurement associated with `sdk_edge1`, it is specified as follows.
measurement1 = ms[0]
print(measurement1)
# ## Retrieve the time series data
# In this scenario, `client.data_points` is used to retrieve the time series data.
# Specify the label names of the previously registered signal definitions in `labels`.
# Change `start` and `end` to a time range that contains the measurement created in the steps above.
dps = client.data_points.list(
edge_name='sdk_edge1',
start=timeutils.str2timestamp('2020-07-17 00:00:00+09:00'), # change appropriately.
end=timeutils.str2timestamp('2020-07-18 00:00:00+09:00'), # change appropriately.
labels=['sp_ACCX', 'sp_ACCY', 'sp_ACCZ'],
limit=0
)
print(dps[0])
# ## Visualize the time series data
# Convert the data to a DataFrame as in "1. Save retrieved time series data as CSV", then visualize it with `matplotlib` to choose a threshold.
# +
from intdash import data
df = pd.DataFrame( [ {
'time':d.time,
d.data_id:data.Float.from_payload(d.data_payload).value
}
for d in dps
]).groupby("time").last()
# -
# Using `matplotlib`, the data can be visualized as a graph, as shown below.
# Looking at the graph, there are places where `sp_ACCZ` takes extremely large values, so we register the points exceeding 10 as `Exceeds threshold` tags.
# +
from matplotlib import pyplot as plt
df.plot(figsize=(15,6), grid=True, ylim=[-10, 10])
plt.xticks(color="None")
plt.show()
plt.close()
# -
# ## Apply the threshold to the retrieved time series data
# Add tags only at the points where `sp_ACCZ` exceeds 10.
# +
THRESHOLD = 10
for d in dps:
    value = data.Float.from_payload(d.data_payload).value
    if value > THRESHOLD:
        tag = {
            d.data_id: 'Exceeds threshold'
        }
        # Create a measurement marker.
        client.measurement_markers.create(
            measurement_uuid=measurement1.uuid,
            type='point',
            detail=intdash.MeasurementMarkerDetailPoint(
                # Time elapsed from the start of measurement.
                occurred_elapsed_time=timeutils.str2timestamp(d.time) - measurement1.basetime
            ),
            tag=tag
        )
# -
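# The threshold pass above can be exercised offline with a plain-Python sketch that picks out the sample times whose value exceeds the threshold. The data and the `exceeding_times` helper are illustrative; the real loop reads intdash data points:

```python
THRESHOLD = 10

def exceeding_times(samples, threshold):
    # samples: list of (time, value) pairs; return the times where value > threshold.
    return [t for t, v in samples if v > threshold]

samples = [("00:01", 3.2), ("00:02", 12.5), ("00:03", 9.9), ("00:04", 15.0)]
print(exceeding_times(samples, THRESHOLD))  # ['00:02', '00:04']
```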
# ## Check the registered measurement markers
# Verify the markers that were registered.
markers = client.measurement_markers.list(
measurement_uuid=measurement1.uuid
)
print(markers[0])
# The measurement markers can be viewed in **Visual M2M Data Visualizer**:
# 1. Open Stored Data.
# 2. Select Markers in the left menu.
# 3. The list of Markers (measurement markers) is shown on the right.
#
# <img src="https://github.com/aptpod/intdash-py-samples/blob/master/img/img4.png?raw=true">
| ja/5_create-measurement-markers/5_create-measurement-markers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#A5-4 (3 points)
from linked_binary_tree import LinkedBinaryTree
class personal_LBP(LinkedBinaryTree):
def _delete_subtree(self, p):
        # Post-order: delete all of p's children first, then delete p itself.
        # Snapshot the children, since _delete mutates the tree during iteration.
        for child in list(self.children(p)):
            self._delete_subtree(child)
        return self._delete(p)
# The running time is O(n), since each of the n positions is deleted exactly once.
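# The post-order idea can be exercised without the textbook LinkedBinaryTree class, using a minimal stand-in tree (the Node class and delete_subtree helper here are hypothetical, for illustration only):

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def delete_subtree(node, deleted):
    # Post-order: delete every child subtree first, then the node itself.
    for child in list(node.children):
        delete_subtree(child, deleted)
    node.children.clear()
    deleted.append(node.value)

root = Node(1, [Node(2, [Node(4), Node(5)]), Node(3)])
order = []
delete_subtree(root, order)
print(order)  # [4, 5, 2, 3, 1]
```

# Each of the n nodes is visited and deleted exactly once, which is why the running time is O(n).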
| Assignments/Assignment 2/codes/A5-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="qH90XCoU_k8g"
# Preprocessing Images (resizing them)
# + id="lKlGAKYFB-o1"
# !gdown https://drive.google.com/uc?id=1Lbk_pwATorDcUyk9VcGYzomLR4Rhj-lv #Downloading zip file from google drive
zip_path = '/content/Cat_GAN_1.zip' #Getting the path
# !unzip -q Cat_GAN_1.zip #Unzipping the folder
# !rm Cat_GAN_1.zip #Removing the zip folder
import os #Imported required libraries for this step
import pdb
import cv2
import operator
from PIL import Image
os.mkdir("Training") #making the new Training and testing folders
dirs = ["Domino", "Stormy"] #Making the names for the subdirec
pardirs = ["/content/Training"]
for pardir in pardirs:
for dir in dirs:
path = os.path.join(pardir, dir)
os.mkdir(path)
datadir1 = "/content/Cat_Classifier_Images_70-30 /Train/Domino"
datadir2 = "/content/Cat_Classifier_Images_70-30 /Train/Stormy"
datadir3 = "/content/Cat_Classifier_Images_70-30 /Test/Domino"
datadir4 = "/content/Cat_Classifier_Images_70-30 /Test/Stormy"
filelist1 = sorted(os.listdir(datadir1), key = lambda fname: int(fname.split("_")[0][-4:]))
filelist2 = sorted(os.listdir(datadir2), key = lambda fname: int(fname.split("_")[0][-4:]))
filelist3 = sorted(os.listdir(datadir3), key = lambda fname: int(fname.split("_")[0][-4:]))
filelist4 = sorted(os.listdir(datadir4), key = lambda fname: int(fname.split("_")[0][-4:]))
datadirs = [filelist1, filelist2, filelist3, filelist4]
inc = 0
idom = 0
istorm = 0
for filelist in datadirs:
for fil in filelist:
if inc == 0:
path = "/content/Cat_Classifier_Images_70-30 /Train/Domino/" + fil
idom += 1
img = cv2.imread(path)
imgResized = cv2.resize(img, (48, 64))
cv2.imwrite('/content/Training/Domino/DominoTR%03i.jpg' %idom, imgResized)
elif inc == 1:
path = "/content/Cat_Classifier_Images_70-30 /Train/Stormy/" + fil
istorm += 1
img = cv2.imread(path)
imgResized = cv2.resize(img, (48, 64))
cv2.imwrite('/content/Training/Stormy/StormyTR%03i.jpg' %istorm, imgResized)
elif inc == 2:
path = "/content/Cat_Classifier_Images_70-30 /Test/Domino/" + fil
img = cv2.imread(path)
idom += 1
imgResized = cv2.resize(img, (48, 64))
cv2.imwrite('/content/Training/Domino/DominoTR%03i.jpg' %idom, imgResized)
else:
path = "/content/Cat_Classifier_Images_70-30 /Test/Stormy/" + fil
img = cv2.imread(path)
istorm += 1
imgResized = cv2.resize(img, (48, 64))
cv2.imwrite('/content/Training/Stormy/StormyTR%03i.jpg' %istorm, imgResized)
inc += 1
# + [markdown] id="XW7i8wG7_r8e"
# Data augmentation
# + id="T8-ozHTLNV8J"
import pdb
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import os
datadirDom = '/content/Training/Domino'
datadirStorm = '/content/Training/Stormy'
datadirDom = os.listdir(datadirDom)
datadirStorm = os.listdir(datadirStorm)
labels = []
cat_images = []
for fil in datadirDom:
path = '/content/Training/Domino/' + fil
image = Image.open(path)
horz_image = np.array(image.transpose(method = Image.FLIP_LEFT_RIGHT))
vert_image = np.array(image.transpose(method = Image.FLIP_TOP_BOTTOM))
rot_image = np.array(image.rotate(180))
image = np.array(image)
cat_images.append(image)
plt.imshow(image)
cat_images.append(horz_image)
cat_images.append(vert_image)
cat_images.append(rot_image)
for fil in datadirStorm:
path = '/content/Training/Stormy/' + fil
image = Image.open(path)
horz_image = np.array(image.transpose(method = Image.FLIP_LEFT_RIGHT))
vert_image = np.array(image.transpose(method = Image.FLIP_TOP_BOTTOM))
rot_image = np.array(image.rotate(180))
image = np.array(image)
cat_images.append(image)
cat_images.append(horz_image)
cat_images.append(vert_image)
cat_images.append(rot_image)
cat_images = np.array(cat_images)
print (cat_images.shape)
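# + [markdown]
# The flip augmentations used above (FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM, rotate 180°) can be sketched on a tiny 2-D "image" in plain Python — illustrative helpers only; PIL performs the same operations on real pixel arrays:

```python
def flip_lr(img):
    # Mirror each row (left-right flip).
    return [row[::-1] for row in img]

def flip_tb(img):
    # Reverse the row order (top-bottom flip).
    return img[::-1]

def rotate_180(img):
    # A 180-degree rotation is a left-right flip followed by a top-bottom flip.
    return flip_tb(flip_lr(img))

img = [[1, 2],
       [3, 4]]
print(flip_lr(img))     # [[2, 1], [4, 3]]
print(flip_tb(img))     # [[3, 4], [1, 2]]
print(rotate_180(img))  # [[4, 3], [2, 1]]
```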
# + [markdown] id="w_ydFYwC_xCU"
# Building, training, and saving the GAN
# + id="kDJpit46Nco_"
from tensorflow.keras.layers import Activation, Dense, Input
from tensorflow.keras.layers import Conv2D, Flatten
from tensorflow.keras.layers import Reshape, Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import concatenate
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import load_model
import numpy as np
import math
import matplotlib.pyplot as plt
import os
import argparse
import pdb
def build_generator(inputs, image_size):
image_resize = (image_size[0] // 4, image_size[1] // 4)
kernel_size = 5
layer_filters = [128, 64, 32, 3]
x = inputs
x = Dense(image_resize[0] * image_resize[1] * 3 * layer_filters[0])(x)
x = Reshape((image_resize[0], image_resize[1], 3 * layer_filters[0]))(x)
for filters in layer_filters:
if filters > layer_filters[-2]:
strides = 2
else:
strides = 1
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2DTranspose(filters=filters, kernel_size=kernel_size, strides=strides, padding='same')(x)
x = Reshape((64, 48, 3, 1))(x)
x = Activation('sigmoid')(x)
generator = Model(inputs, x, name='generator')
return generator
def build_discriminator(inputs, image_size):
kernel_size = 5
layer_filters = [32, 64, 128, 256]
x = inputs
for filters in layer_filters:
if filters == layer_filters[-1]:
strides = 1
else:
strides = 2
x = LeakyReLU(alpha=0.2)(x)
x = Conv2D(filters = filters, kernel_size = kernel_size, strides = strides, padding = 'same')(x)
x = Flatten()(x)
x = Dense(1)(x)
x = Activation('sigmoid')(x)
discriminator = Model(inputs, x, name='discriminator')
return discriminator
def train(models, data, params):
losss = []
accc = []
generator, discriminator, adversarial = models
x_train = data
batch_size, latent_size, train_steps, model_name = params
save_interval = 500
noise_input = np.random.uniform(-1.0, 1.0, size=[16, latent_size])
train_size = x_train.shape[0]
accavg = 0
accavg1 = 0
epochs = [i for i in range(train_steps)]
for i in range(train_steps):
rand_indexes = np.random.randint(0, train_size, size=batch_size)
real_images = x_train[rand_indexes]
noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
fake_images = generator.predict(noise)
x = np.concatenate((real_images, fake_images))
y = np.ones([2 * batch_size, 1])
y[batch_size:, :] = 0.0
loss, acc = discriminator.train_on_batch(x, y)
losss.append(loss)
accc.append(acc)
accavg += acc
# log = "%d: [discriminator loss: %f, acc: %f]" % (i, loss, acc)
noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
y = np.ones([batch_size, 1])
loss, acc = adversarial.train_on_batch(noise, y)
accavg1 += acc
# log = "%s [adversarial loss: %f, acc: %f]" % (log, loss, acc)
# print(log)
if (i + 1) % save_interval == 0:
accavg = accavg / save_interval
print ("Average discriminator accuracy: " + str(accavg))
accavg1 = accavg1 / save_interval
print ("Average adversarial accuracy: " + str(accavg1))
accavg = 0
accavg1 = 0
plot_images(generator, noise_input, show = False, step = i + 1)
images = generator.predict(noise_input)
plt.plot(epochs, losss)
plt.title("Discriminator loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.savefig('DCGAN_discriminator_loss.jpg')  # save before show(), which clears the figure
plt.show()
plt.plot(epochs, accc)
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.title("Discriminator Accuracy")
plt.savefig('DCGAN_discriminator_accuracy.jpg')
plt.show()
generator.save(model_name + ".h5")
def plot_images(generator, noise_input, show=False, step=0, model_name="gan"):
os.makedirs(model_name, exist_ok=True)
filename = os.path.join(model_name, "%05d.png" % step)
images = generator.predict(noise_input)
plt.figure(figsize=(11.1, 11.1))
num_images = images.shape[0]
image_size = images.shape[1]
rows = int(math.sqrt(noise_input.shape[0]))
for i in range(num_images):
plt.subplot(rows, rows, i + 1)
plt.imshow(np.array(images[i - 1]).reshape((64, 48, 3)))
plt.axis('off')
plt.savefig(filename)
if show:
plt.show()
else:
plt.close('all')
def build_and_train_models():
x_train = cat_images
x_train = np.reshape(x_train, [x_train.shape[0], x_train.shape[1], x_train.shape[2], x_train.shape[3], 1])
x_train = x_train.astype('float32') / 255
model_name = "cgan_cat"
latent_size = 2
batch_size = 64
train_steps = 40000
lr = 2e-4
decay = 6e-8
input_shape = (64, 48, 3, 1)
label_shape = (2,)
image_size = (64, 48, 3)
inputs = Input(shape=input_shape, name='discriminator_input')
discriminator = build_discriminator(inputs, image_size)
optimizer = RMSprop(lr=lr, decay=decay)
discriminator.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
discriminator.summary()
input_shape = (latent_size, )
inputs = Input(shape=input_shape, name='z_input')
generator = build_generator(inputs, image_size)
generator.summary()
optimizer = RMSprop(lr=lr*0.5, decay=decay*0.5)
discriminator.trainable = False
outputs = discriminator(generator(inputs))
adversarial = Model(inputs, outputs, name=model_name)
adversarial.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
adversarial.summary()
models = (generator, discriminator, adversarial)
data = x_train
params = (batch_size, latent_size, train_steps, model_name)
train(models, data, params)
build_and_train_models()
# + [markdown] id="dRRYw_WeyhvE"
# Making the GIF
# + id="YRqZEKpPyjJ_"
datadir = '/content/gan'
filelist = sorted(os.listdir(datadir))
frames = []
for fil in filelist:
path = '/content/gan/' + fil
fil = Image.open(path)
frames.append(fil)
frames[0].save('Cat_Training.gif', format='GIF', append_images=frames[1:], save_all=True, duration = 300, loop = 0)
# + [markdown] id="k5MjGDacykTf"
# Generating a random cat image
# + colab={"background_save": true} id="D7MmJnHv4V6j"
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
generator = load_model('/content/dcgan_cat_64_48.h5')
noise_input = np.random.uniform(-1.0, 1.0, size=[16, 2])
images = generator.predict(noise_input)
plt.imshow(np.array((images[0])).reshape((64, 48, 3)))
plt.show()
| CatDCGAN64_48.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/afnf33/emoTale/blob/master/models/Korean_multisentiment_classifier_KoBERT.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="nJaJL151DBcu" colab_type="code" colab={}
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# + id="fUgMXvr-C0N8" colab_type="code" colab={}
# Install the required modules
# !pip install mxnet-cu101
# !pip install gluonnlp pandas tqdm
# !pip install sentencepiece==0.1.85
# !pip install transformers==2.1.1
# !pip install torch  # originally ==1.3.1
# Load the KoBERT model released by SKT
# !pip install git+https://git@github.com/SKTBrain/KoBERT.git@master
# + [markdown] id="bJUqMfBpYxCl" colab_type="text"
# # 1. Load and preprocess the data
# + id="clnnv7QlqPQh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="af21ec18-6e45-4aa8-bb46-fe8f3da6c37b"
import pandas as pd
sad = pd.read_excel('/content/drive/My Drive/data/tweet_list_슬픔 1~5000.xlsx')
happy = pd.read_excel('/content/drive/My Drive/data/tweet_list_기쁨 labeling_완료.xlsx')
annoy = pd.read_excel('/content/drive/My Drive/data/tweet_list_짜증_완료.xlsx')
fear = pd.read_excel('/content/drive/My Drive/data/tweet_list_공포_완료.xlsx')
sad2 = pd.read_csv('/content/drive/My Drive/data/추가_슬픔.csv')
happy2 = pd.read_csv('/content/drive/My Drive/data/추가_기쁨.csv')
annoy2 = pd.read_csv('/content/drive/My Drive/data/추가_분노.csv')
fear2 = pd.read_csv('/content/drive/My Drive/data/추가_공포1.txt', encoding='utf8')
sad
# + id="pv605f88s7fZ" colab_type="code" colab={}
# Preprocessing function
def preprocessing(data, label):
import re
dt = data['raw_text'].copy() # select only the text column
dt = dt.dropna() # drop missing values
dt = dt.drop_duplicates() # drop duplicates
sentences = dt.tolist()
new_sent=[]
for i in range(len(sentences)):
sent = sentences[i]
if type(sent) != str: # handle non-string entries
sent = str(sent)
if len(sent) < 2: continue # keep only sentences with length >= 2
sent = re.sub('ㅋㅋ+','ㅋㅋ',sent)
sent = re.sub('ㅠㅠ+','ㅠㅠ',sent)
sent = re.sub('ㅇㅇ+','ㅇㅇ',sent)
sent = re.sub('ㄷㄷ+','ㄷㄷ',sent)
sent = re.sub('ㅎㅎ+','ㅎㅎ',sent)
sent = re.sub('ㅂㅂ+','ㅂㅂ',sent)
sent = re.sub(';;;+',';;',sent)
sent = re.sub('!!!+','!!',sent)
sent = re.sub('~+','~',sent)
sent = re.sub('[?][?][?]+','??',sent)
sent = re.sub('[.][.][.]+','...',sent)
sent = re.sub('[-=+,#/:$@*\"※&%ㆍ』\\‘|\(\)\[\]\<\>`\'…》]','',sent)
new_sent.append(sent)
dt = pd.DataFrame(pd.Series(new_sent), columns=['raw_text'])
dt['emotion'] = label
return dt
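# The repeated-character collapsing in the function above relies on `re.sub` with a '+'-quantified pattern. A small standalone example (the `collapse_repeats` helper is illustrative, reusing a few of the same patterns) behaves as follows:

```python
import re

def collapse_repeats(sent):
    # Collapse runs of common Korean chat characters down to two, as above.
    sent = re.sub('ㅋㅋ+', 'ㅋㅋ', sent)
    sent = re.sub('ㅠㅠ+', 'ㅠㅠ', sent)
    sent = re.sub('!!!+', '!!', sent)
    return sent

print(collapse_repeats('ㅋㅋㅋㅋㅋ good ㅠㅠㅠ !!!!'))  # 'ㅋㅋ good ㅠㅠ !!'
```

# The pattern 'ㅋㅋ+' matches two or more consecutive 'ㅋ', so any longer run is replaced by exactly two — normalizing exaggerated repetitions without deleting the signal entirely.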
# + id="tF6ml3n4v1H4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="98a40dd6-9983-49c4-d2a2-ede7a770adde"
sad = preprocessing(sad, '슬픔')
sad2 = preprocessing(sad2, '슬픔')
happy = preprocessing(happy, '기쁨')
happy2 = preprocessing(happy2, '기쁨')
annoy = preprocessing(annoy, '분노')
annoy2 = preprocessing(annoy2, '분노')
fear = preprocessing(fear, '공포')
fear2 = preprocessing(fear2, '공포')
for i in [sad, happy, annoy, fear]:
print('first-round labeling result', i['emotion'][0],len(i))
for i in [sad2, happy2, annoy2, fear2]:
print('second-round labeling result', i['emotion'][0],len(i))
print('smallest class: fear ', len(fear)+len(fear2))
# + id="tgFHcC3O7CPM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="ee34ad2c-fe0a-40ea-d5ba-b146991f6d60"
## After checking the counts, balance the number of samples per emotion for training.
sad_3 = sad[:1400]
happy_3 = happy[:800]
annoy_3 = annoy[:2400]
# Each emotion's keyword data comes in chunks of roughly 1000 rows, so the last 1000 rows per emotion are held out as the evaluation set
sentence_train = pd.concat([sad_3, happy_3, annoy_3, fear, sad2[:-1000], annoy2[:-1000], happy2[:-1000], fear2[:-1000]], axis=0, ignore_index=True)
sentence_eval = pd.concat([sad2[-1000:], annoy2[-1000:], happy2[-1000:], fear2[-1000:]], axis=0, ignore_index=True)
for i in ['슬픔','기쁨','분노','공포']:
print('sentence_train',i,len(sentence_train[sentence_train['emotion'] == i]))
print('-------------------------')
for i in ['슬픔','기쁨','분노','공포']:
print('sentence_eval',i,len(sentence_eval[sentence_eval['emotion'] == i]))
# + id="nfBU_EpxDH6r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 261} outputId="2c8ef9d0-4b84-4bf3-a229-f98f130745e5"
# Convert the data into the format the model expects
def label(x):
if x=='슬픔': return 0.0
elif x=='기쁨': return 1.0
elif x=='분노': return 2.0
elif x=='공포': return 3.0
else: return x
sentence_train["emotion"] = sentence_train["emotion"].apply(label)
dtls = [list(sentence_train.iloc[i,:]) for i in range(len(sentence_train))]
dtls[:10]  # the format is now consistent
# + [markdown] id="GfiPPefJZERK" colab_type="text"
# # 2. Preparing the model inputs
# + id="bZjU_IpkC5rC" colab_type="code" colab={}
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import gluonnlp as nlp
import numpy as np
from tqdm import tqdm_notebook
from tqdm.notebook import tqdm
from kobert.utils import get_tokenizer
from kobert.pytorch_kobert import get_pytorch_kobert_model
from transformers import AdamW
from transformers.optimization import WarmupLinearSchedule
# + id="AtlhNDufC6-8" colab_type="code" colab={}
## When using a GPU
device = torch.device("cuda:0")
# + id="g3rXDyiKC_7I" colab_type="code" colab={}
bertmodel, vocab = get_pytorch_kobert_model()
# + id="6-RqqvLSDjS_" colab_type="code" colab={}
# Use KoBERT's tokenizer
tokenizer = get_tokenizer()
tok = nlp.data.BERTSPTokenizer(tokenizer, vocab, lower=False)
# + id="2r1Z7XDbD1xu" colab_type="code" colab={}
class BERTDataset(Dataset):
def __init__(self, dataset, sent_idx, label_idx, bert_tokenizer, max_len,
pad, pair):
transform = nlp.data.BERTSentenceTransform(
bert_tokenizer, max_seq_length=max_len, pad=pad, pair=pair)
self.sentences = [transform([i[sent_idx]]) for i in dataset]
self.labels = [np.int32(i[label_idx]) for i in dataset]
def __getitem__(self, i):
return (self.sentences[i] + (self.labels[i], ))
def __len__(self):
return (len(self.labels))
# + id="y-f2qbUkEEVP" colab_type="code" colab={}
class BERTClassifier(nn.Module):
def __init__(self,
bert,
hidden_size = 768,
num_classes=4,
dr_rate=None,
params=None):
super(BERTClassifier, self).__init__()
self.bert = bert
self.dr_rate = dr_rate
self.classifier = nn.Linear(hidden_size , num_classes)
if dr_rate:
self.dropout = nn.Dropout(p=dr_rate)
def gen_attention_mask(self, token_ids, valid_length):
attention_mask = torch.zeros_like(token_ids)
for i, v in enumerate(valid_length):
attention_mask[i][:v] = 1
return attention_mask.float()
def forward(self, token_ids, valid_length, segment_ids):
attention_mask = self.gen_attention_mask(token_ids, valid_length)
_, pooler = self.bert(input_ids = token_ids, token_type_ids = segment_ids.long(), attention_mask = attention_mask.float().to(token_ids.device))
if self.dr_rate:
out = self.dropout(pooler)
return self.classifier(out)
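# `gen_attention_mask` above marks real token positions with 1 and padding with 0; the same idea without torch (a plain-list sketch for illustration):

```python
def attention_mask(valid_lengths, max_len):
    # 1.0 for positions inside each sequence's valid length, 0.0 for padding
    return [[1.0 if j < v else 0.0 for j in range(max_len)] for v in valid_lengths]

print(attention_mask([2, 4], 5))
```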
# + [markdown] id="QlLrCM76Zg9V" colab_type="text"
# # 3. Training
# + id="c2xP6wwzD4UY" colab_type="code" colab={}
## Setting parameters
max_len = 64
batch_size = 64
warmup_ratio = 0.1
num_epochs = 1
max_grad_norm = 1
log_interval = 200
learning_rate = 5e-5
# + id="Y8IyUdYDD7N9" colab_type="code" colab={}
# Split the data into train and test sets
from sklearn.model_selection import train_test_split
dataset_train, dataset_test = train_test_split(dtls, test_size=0.2, random_state=123)
# + id="gx6Yeiw0D5Kq" colab_type="code" colab={}
data_train = BERTDataset(dataset_train, 0, 1, tok, max_len, True, False)
data_test = BERTDataset(dataset_test, 0, 1, tok, max_len, True, False)
train_dataloader = torch.utils.data.DataLoader(data_train, batch_size=batch_size, num_workers=5)
test_dataloader = torch.utils.data.DataLoader(data_test, batch_size=batch_size, num_workers=5)
# + id="lgvOXHZ0EFdR" colab_type="code" colab={}
# Build the model and move it to the GPU
model = BERTClassifier(bertmodel, dr_rate=0.5).to(device)
# + id="t6DX8S2ZEImk" colab_type="code" colab={}
# Prepare optimizer and schedule (linear warmup and decay)
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
# + id="O4Qr7GfiEJ6x" colab_type="code" colab={}
# Configure the optimizer and the loss function
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate)
loss_fn = nn.CrossEntropyLoss()
t_total = len(train_dataloader) * num_epochs
warmup_step = int(t_total * warmup_ratio)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=warmup_step, t_total=t_total)
# Helper function to compute accuracy
def calc_accuracy(X,Y):
max_vals, max_indices = torch.max(X, 1)
train_acc = (max_indices == Y).sum().data.cpu().numpy()/max_indices.size()[0]
return train_acc
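# `WarmupLinearSchedule` increases the learning rate linearly from 0 during the warmup steps and then decays it linearly to 0 at `t_total`. A minimal standalone sketch of that multiplier (an illustration of the shape, not the transformers implementation):

```python
def warmup_linear_factor(step, warmup_steps, t_total):
    # linear warmup to 1.0, then linear decay to 0.0 at t_total
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (t_total - step) / max(1.0, t_total - warmup_steps))

# with t_total=100 and 10 warmup steps:
print(warmup_linear_factor(5, 10, 100))   # mid-warmup
print(warmup_linear_factor(55, 10, 100))  # mid-decay
```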
# + id="1rnTrSSpELRg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 175, "referenced_widgets": ["43289c26c0324a3d92efd43cc43ed99a", "b23353fcb0e24765b901439f5ff81707", "4e1af7d0eb21408490a59a39843ba46b", "e3617a8ae7fd4a3d844df08ccbbd5bf1", "7559946bc5ec46e1b5f05a6996bbbb88", "<KEY>", "<KEY>", "a6e46a6d04f34d0c8250813ea389fd50", "9dad7e3188664d8d9373e729985875ef", "<KEY>", "<KEY>", "<KEY>", "944643e29fdf4f9280cca4ae5902d12f", "<KEY>", "de8002dd8b2b4ca28568be68e8e6d222", "ad6c5fa0ffb244e4b2aca40df27ee3d1"]} outputId="78ceb3cb-ea84-462e-c768-185459293d11"
# Training loop
for e in range(num_epochs):
train_acc = 0.0
test_acc = 0.0
model.train()
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm(train_dataloader)):
optimizer.zero_grad()
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
label = label.long().to(device)
out = model(token_ids, valid_length, segment_ids)
loss = loss_fn(out, label)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
scheduler.step() # Update learning rate schedule
train_acc += calc_accuracy(out, label)
if batch_id % log_interval == 0:
print("epoch {} batch id {} loss {} train acc {}".format(e+1, batch_id+1, loss.data.cpu().numpy(), train_acc / (batch_id+1)))
print("epoch {} train acc {}".format(e+1, train_acc / (batch_id+1)))
    model.eval()  # evaluation phase
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm(test_dataloader)):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
label = label.long().to(device)
out = model(token_ids, valid_length, segment_ids)
test_acc += calc_accuracy(out, label)
print("epoch {} test acc {}".format(e+1, test_acc / (batch_id+1)))
# + id="b4UA7SNAkSMC" colab_type="code" colab={}
'''
# The trained model was saved for later use
torch.save(model.state_dict(), 'drive/My Drive/kobert_ending_finale.pt')
'''
# + [markdown] id="3aUX2jULZlqd" colab_type="text"
# # 4. Evaluation
# + id="_EBIlrWJclRG" colab_type="code" colab={}
##################################################
# Format the held-out evaluation set for the model.
# Note: the name `label` was rebound to a tensor by the training loop above,
# so map the emotions explicitly instead of calling the original function.
sentence_eval["emotion"] = sentence_eval["emotion"].map({'슬픔': 0.0, '기쁨': 1.0, '분노': 2.0, '공포': 3.0})
dtls_eval = [list(sentence_eval.iloc[i,:]) for i in range(len(sentence_eval))]
data_test = BERTDataset(dtls_eval, 0, 1, tok, max_len, True, False)
test_dataloader = torch.utils.data.DataLoader(data_test, batch_size=batch_size, num_workers=5)
# + id="aiSQF4Bhc5p6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 143, "referenced_widgets": ["a5fea7f256da4931a8222c41ad623153", "<KEY>", "a089b6023cdf40cc83a9a99ffed45d9b", "dd066de4072745e49eff18dad8950461", "ac15c6077a50466982af43625443785a", "9ed6a9a03d2f478e89a2b19fa31b8b9f", "762b6a9d97ed4cb5bdfce2ba50662c46", "226c92fa9d64409b97db043675bf2d46"]} outputId="debfdd2f-9792-41a2-a66f-35ab4eda5386"
# Run classification on the evaluation data
model.eval()
answer=[]
train_acc = 0.0
test_acc = 0.0
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm_notebook(test_dataloader)):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
label = label.long().to(device)
out = model(token_ids, valid_length, segment_ids)
max_vals, max_indices = torch.max(out, 1)
answer.append(max_indices.cpu().clone().numpy())
test_acc += calc_accuracy(out, label)
print('Accuracy: ', test_acc / (batch_id+1))
# + id="umk0FMvic-A1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="3d4a5012-6188-4a89-8fd0-f82acb65f034"
# Save the file in the required submission format
ls = []
for i in answer:
ls.extend(i)
pred = pd.DataFrame(ls, columns=['Predicted'])
df = pd.concat([sentence_eval['raw_text'], pred['Predicted'], sentence_eval['emotion']], axis=1)
def test(x):
if x==0.0: return '슬픔'
elif x==1.0: return '기쁨'
elif x==2.0: return '분노'
elif x==3.0: return '공포'
else: return x
df["Predicted"] = df["Predicted"].apply(test)
df["emotion"] = df["emotion"].apply(test)
for i in ['슬픔','기쁨','분노','공포']:
    n_true = len(df[df['emotion'] == i])
    n_correct = len(df[(df['emotion'] == i) & (df['Predicted'] == i)])
    print(i, 'count', n_true)
    print('correct predictions', n_correct)
    print('accuracy', n_correct / n_true)
# + id="xLH1XPoC8Gkv" colab_type="code" colab={}
| models/Korean_multisentiment_classifier_KoBERT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Evolution Strategies for Deep Learning
# ==============
#
# In this interactive tutorial, we will be reviewing using random optimization techniques and evolutionary algorithms for learning systems, like Neural Networks.
# The tutorial is active and will be updated over time with new experiments, results and improvements.
#
# Motivation
# =============
# Currently, deep neural networks and other models are trained using gradient descent algorithms.
# For many models and examples this works very well, but it also has a few downsides:
#
# * High memory usage. Computing the gradient of a model scales linearly with the batch size, which means that large models can require multiple GBs of expensive, high-bandwidth GPU memory.
# * Because optimization depends on the gradient, the model must be differentiable.
# * Gradient-based optimization without noise can get stuck in local minima.
# * Exploding and vanishing gradient problems.
#
# In this interactive article, we will show how random search methods can learn/optimize, which techniques help, and how they compare to gradient-based optimization.
#
# Toy Example
# =============
#
# In this example we will be optimizing the function `f(x) = x²`. The minimum of `f(x)` is at `x=0`.
# Here we use the simplest setup: just one sample per iteration and just one mutation.
# We keep no history or pool of best-performing candidates; instead we apply an update at each step only if it improves the value.
# +
import numpy as np, matplotlib.pyplot as plt, math
# seed the random number generator, so we'll get the same result every time
np.random.seed(42)
# the learning rate, i.e. the step size by which our random sample is scaled
lr = 1.0
# the parameters of our model, which we want to optimize
# We choose a deliberately non-optimal starting value of -5
param_x = np.array(-5)
# The function f which we want to minimize
def quadratic(x):
return x ** 2
# the optimization algorithm
def optimize(f, param_x, lr=0.1, num_iters=40):
prev_val = f(param_x)
loss_history = [prev_val]
param_history = [param_x]
for _ in range(num_iters):
        # we draw a sample from a normal distribution
        # with zero mean and standard deviation equal to the learning rate
sample = np.random.normal(scale=lr, size=param_x.shape)
# we measure the improvement
improvement = prev_val - f(param_x + sample)
# apply update if it improves performance
if improvement >= 0.0:
param_x = param_x + sample
prev_val = f(param_x)
loss_history.append(prev_val)
param_history.append(param_x)
return loss_history, param_history
loss_history, param_history = optimize(quadratic, param_x, lr=lr)
loss_line, = plt.plot(loss_history, label='f(x)')
param_line, = plt.plot(param_history, label='x')
plt.legend(handles=[loss_line, param_line])
plt.show()
print("x: ", param_history[-1])
print("f(x): ", loss_history[-1])
# -
# We see that the loss quickly drops to about 0 after roughly 8 iterations.
# Increasing the learning rate makes optimization faster, but also more unstable.
#
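# The stability/speed trade-off can be explored with a pure-Python version of the same accept-if-better loop (a sketch using the stdlib `random` module instead of NumPy):

```python
import random

def hill_climb(f, x, lr, num_iters=40, seed=0):
    # accept a Gaussian perturbation only when it does not worsen f
    rng = random.Random(seed)
    best = f(x)
    for _ in range(num_iters):
        candidate = x + rng.gauss(0.0, lr)
        if f(candidate) <= best:
            x, best = candidate, f(candidate)
    return x, best

for lr in (0.1, 1.0, 5.0):
    x, val = hill_climb(lambda v: v * v, -5.0, lr)
    print(f"lr={lr}: f(x)={val:.4f}")
```
Because updates are only accepted when they improve the objective, the loss can never increase; larger learning rates take bigger steps but reject more samples near the optimum.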
# Harder example
# =======
#
# In the next example, we define a function that is harder to optimize than a parabolic function.
#
# +
def harder(x):
return math.cos(x) + math.sin(4 * x)
vals = np.linspace(-20.0, 20.0, 100)
plt.plot(vals, [harder(x) for x in vals])
plt.show()
# -
# The function has (infinitely) many local minima and also global minima.
# +
param_x = np.array(0)
loss_history, param_history = optimize(harder, param_x, lr=2.0, num_iters=50)
loss_line, = plt.plot(loss_history, label='f(x)')
param_line, = plt.plot(param_history, label='x')
plt.xlabel('iters')
plt.legend(handles=[loss_line, param_line])
plt.show()
print("x: ", param_history[-1])
print("loss: ", loss_history[-1])
# -
# As you can see, after fewer than 40 accepted updates the method found a global optimum.
| 1.Intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.gofplots import qqplot
import plotly.express as px
import plotly.graph_objects as go
import plotly.figure_factory as ff
import plotly.offline as ply
# <h1 style="background-color:powderblue;">Plotting Classes</h1>
# +
class PlottingHelper:
"""Plotting Helper"""
def __init__(self, version):
self.version = version
self._df_cat = pd.DataFrame()
self._df_num = pd.DataFrame()
self._df_rymc = pd.DataFrame()
self._df_aymc = pd.DataFrame()
self._df_ctd = pd.DataFrame()
self._df_ablty = pd.DataFrame()
self._logreg = None
def __str__(self):
return f"Plotting helper version {self.version}"
def count_categorical(
self, df, x_var, ax_title, x_size, y_size, bar_color, box_aspect_radio
):
sns.set(style="whitegrid")
# Initialize the matplotlib figure
f, ax = plt.subplots(figsize=(x_size, y_size))
sns.set_color_codes("pastel")
ax = sns.countplot(x=x_var, data=df, color=bar_color)
ax.set_title(ax_title, fontsize=16)
ax.set_xticklabels(ax.get_xticklabels(), size=10, rotation=40, ha="right")
# print(len(ax.patches))
ax.set_box_aspect(
box_aspect_radio / len(ax.patches)
) # change 10 to modify the y/x axis ratio
plt.show()
def count_categorical_top(
self,
df,
x_var,
x_var_size,
x_var_asc,
ax_title,
x_size,
y_size,
bar_color,
box_aspect_radio,
):
sns.set(style="whitegrid")
#
df = (
df.groupby(x_var)[x_var]
.count()
.reset_index(name="count")
.sort_values(["count"], ascending=False)
.head(x_var_size)
)
# Initialize the matplotlib figure
f, ax = plt.subplots(figsize=(x_size, y_size))
sns.set_color_codes("pastel")
ax = sns.barplot(x=x_var, y="count", data=df, color=bar_color)
ax.set_title(ax_title, fontsize=16)
ax.set_xticklabels(ax.get_xticklabels(), size=10, rotation=40, ha="right")
# print(len(ax.patches))
ax.set_box_aspect(
box_aspect_radio / len(ax.patches)
) # change 10 to modify the y/x axis ratio
plt.show()
def get_df_cat(self):
return self._df_cat
def set_df_cat(self, _df):
self._df_cat = _df
def del_df_cat(self):
del self._df_cat
def get_df_num(self):
return self._df_num
def set_df_num(self, _df):
self._df_num = _df
def del_df_num(self):
del self._df_num
def get_df_rymc(self):
return self._df_rymc
def set_df_rymc(self, _df):
self._df_rymc = _df
def del_df_rymc(self):
del self._df_rymc
def get_df_aymc(self):
return self._df_aymc
def set_df_aymc(self, _df):
self._df_aymc = _df
def del_df_aymc(self):
del self._df_aymc
def get_df_ctd(self):
return self._df_ctd
def set_df_ctd(self, _df):
self._df_ctd = _df
def del_df_ctd(self):
del self._df_ctd
def get_df_ablty(self):
return self._df_ablty
def set_df_ablty(self, _df):
self._df_ablty = _df
def del_df_ablty(self):
del self._df_ablty
def get_logreg(self):
return self._logreg
def set_logreg(self, _logreg):
self._logreg = _logreg
def del_logreg(self):
del self._logreg
def count_categorical_plotty(
self,
df,
x_var,
x_var_size,
x_var_asc,
ax_title,
x_size,
y_size,
bar_color,
box_aspect_radio,
):
df = (
df.groupby(x_var)[x_var]
.count()
.reset_index(name="count")
.sort_values(["count"], ascending=x_var_asc)
.head(x_var_size)
)
fig = px.bar(x=x_var, y="count", title=ax_title, data_frame=df)
# Customize aspect
fig.update_traces(
marker_color="rgb(120,255,120)",
marker_line_color="rgb(0,0,0)",
marker_line_width=1.5,
opacity=0.5,
)
fig.show()
def func_count_cat_plotty(self, x_var_size, x_var, x_var_asc, ax_title):
self.count_categorical_plotty(
df=self._df_cat,
x_var=x_var,
x_var_size=x_var_size,
x_var_asc=x_var_asc,
ax_title=ax_title,
x_size=16,
y_size=8,
bar_color="lightskyblue",
box_aspect_radio=5,
)
def box_numerical_plotty(self, df, var_, title_, name_):
fig = go.Figure()
fig.add_trace(
go.Box(
y=df[var_],
name=name_,
boxpoints="outliers",
marker_color="rgb(107,174,214)",
line_color="rgb(107,174,214)",
boxmean=True,
notched=True,
showlegend=True,
)
)
fig.update_layout(
title_text=title_,
# width=500,
# height=500
)
fig.show()
def histogram_numerical_plotty(self, df, var_, title_, name_):
df_tmp = df[var_].to_frame()
fig = go.FigureWidget(
ff.create_distplot(
hist_data=([df_tmp[c] for c in df_tmp.columns]),
group_labels=df_tmp.columns,
bin_size=0.2,
histnorm="probability density",
curve_type="kde",
)
)
fig.update_layout(title_text=title_)
fig.show()
def func_box_numerical_plotty(self, var_, title_, name_, chart_type_):
if chart_type_ == "BoxPlot":
self.box_numerical_plotty(
df=self._df_num, var_=var_, title_=title_, name_=name_
)
elif chart_type_ == "Histogram":
self.histogram_numerical_plotty(
df=self._df_num, var_=var_, title_=title_, name_=name_
)
else:
self.box_numerical_plotty(
df=self._df_num, var_=var_, title_=title_, name_=name_
)
df_cat = property(get_df_cat, set_df_cat, del_df_cat)
df_num = property(get_df_num, set_df_num, del_df_num)
df_rymc = property(get_df_rymc, set_df_rymc, del_df_rymc)
df_ctd = property(get_df_ctd, set_df_ctd, del_df_ctd)
def bar_animation_frame(
self,
df_,
x_,
y_,
color_,
animation_frame_,
animation_group_,
range_y_,
title_text_
):
fig = px.bar(
df_,
x=x_,
y=y_,
color=color_,
animation_frame=animation_frame_,
animation_group=animation_group_,
range_y=range_y_,
)
fig.update_layout(
title_text=title_text_,
# width=500,
# height=500
)
fig.show()
def func_count_distinct_regist(self, df_, x_type, ax_title):
if x_type == "reg_year_month_bar":
df_ = df_
x_ = "pais"
y_ = "count"
color_ = "pais"
animation_frame_ = "registration_year_month"
animation_group_ = "pais"
range_y_ = [0,1000]
title_text_ = ax_title
self.bar_animation_frame(
df_,
x_,
y_,
color_,
animation_frame_,
animation_group_,
range_y_,
title_text_
)
elif x_type == "reg_year_month_scatter":
fig = px.scatter_geo(df_, locations="pais", color="pais",
hover_name="pais", size="count",
animation_frame="registration_year_month",
projection="natural earth")
fig.show()
elif x_type == "reg_year_month_treemap":
            fig = px.treemap(
                df_, path=[px.Constant("registration_year_month"), 'registration_year_month', 'pais'],
                values='count', color='count', hover_data=['count'],
                color_continuous_scale='RdBu',
                # df_rymc was an undefined global here; use the df_ argument instead
                color_continuous_midpoint=np.average(df_['count'], weights=df_['count']))
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25))
fig.show()
elif x_type == "Reg_country_bar":
df_tmp_1 = dataframeHelper.get_rymc_simple_group(df_ = df_, grp_col_ = "registration_year")
fig = px.bar(df_tmp_1.sort_values(by='count', ascending=False), y='count', x='registration_year', text='count')
fig.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig.update_layout(
title_text="Registration_year",
# width=500,
# height=500
)
fig.show()
elif x_type == "Reg_year_bar":
df_tmp_1 = dataframeHelper.get_rymc_simple_group(df_ = df_, grp_col_ = "registration_month")
fig = px.bar(df_tmp_1.sort_values(by='count', ascending=False), y='count', x='registration_month', text='count')
fig.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig.update_layout(
title_text="title_text_",
# width=500,
# height=500
)
fig.show()
else:
self.box_numerical_plotty(
df=self._df_num, var_=var_, title_=title_, name_=name_
)
def func_count_distinct_access(self, df_, x_type, ax_title):
if x_type == "access_year_month_bar":
df_ = df_
x_ = "pais"
y_ = "count"
color_ = "pais"
animation_frame_ = "access_year_month"
animation_group_ = "pais"
range_y_ = [0,1000]
title_text_ = ax_title
self.bar_animation_frame(
df_,
x_,
y_,
color_,
animation_frame_,
animation_group_,
range_y_,
title_text_
)
elif x_type == "access_year_month_scatter":
fig = px.scatter_geo(df_, locations="pais", color="pais",
hover_name="pais", size="count",
animation_frame="access_year_month",
projection="natural earth")
fig.show()
elif x_type == "access_year_month_treemap":
            fig = px.treemap(
                df_, path=[px.Constant("access_year_month"), 'access_year_month', 'pais'],
                values='count', color='count', hover_data=['count'],
                color_continuous_scale='RdBu',
                # df_rymc was an undefined global here; use the df_ argument instead
                color_continuous_midpoint=np.average(df_['count'], weights=df_['count']))
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25))
fig.show()
elif x_type == "access_country_bar":
df_tmp_1 = dataframeHelper.get_rymc_simple_group(df_ = df_, grp_col_ = "access_year")
fig = px.bar(df_tmp_1.sort_values(by='count', ascending=False), y='count', x='access_year', text='count')
fig.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig.update_layout(
title_text="access_country_bar",
# width=500,
# height=500
)
fig.show()
elif x_type == "access_year_bar":
df_tmp_1 = dataframeHelper.get_rymc_simple_group(df_ = df_, grp_col_ = "access_month")
fig = px.bar(df_tmp_1.sort_values(by='count', ascending=False), y='count', x='access_month', text='count')
fig.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig.update_layout(
title_text="access_year_bar",
# width=500,
# height=500
)
fig.show()
else:
df_ = df_
x_ = "pais"
y_ = "count"
color_ = "pais"
animation_frame_ = "access_year_month"
animation_group_ = "pais"
range_y_ = [0,1000]
title_text_ = ax_title
self.bar_animation_frame(
df_,
x_,
y_,
color_,
animation_frame_,
animation_group_,
range_y_,
title_text_
)
def func_count_distinct_regist_plotty(self, x_type, x_group, ax_title):
if x_group == "register":
self.func_count_distinct_regist(
df_=self._df_rymc,
x_type=x_type,
ax_title=ax_title
)
elif x_group == "access":
self.func_count_distinct_access(
df_=self._df_aymc,
x_type=x_type,
ax_title=ax_title
)
else:
self.func_count_distinct_regist(
df_=self._df_rymc,
x_type=x_type,
ax_title=ax_title
)
def comparative_bar_total_plotty(self, df_, ax_title):
fig = px.bar(df_,
x="feature",
y="value",
color="feature",
#pattern_shape="feature",
#pattern_shape_sequence=[".", "x", "+", "-", "\\"],
log_y=False,
)
fig.update_layout(title_text=ax_title)
fig.update_traces(
marker=dict(line_color="grey", pattern_fillmode="replace")
)
fig.show()
def comparative_sunburst_total_plotty(self, df_, ax_title):
fig =go.Figure(go.Sunburst(
labels=["total", "total_active", "total_active_confirmed", "total_active_confirmed_paying", "total_active_confirmed_sponsored"],
parents=["" , "total" , "total_active" , "total_active" , "total_active"],
values=[df_["total"][0], df_["total_active"][0], df_["total_active_confirmed"][0], df_["total_active_confirmed_paying"][0], df_["total_active_confirmed_sponsored"][0]],
))
fig.update_layout(title_text=ax_title, margin = dict(t=0, l=0, r=0, b=0))
fig.show()
def func_comparative_bar_total_plotty(self, dt_from, chart_type, ax_title):
dt_from = np.datetime64(dt_from)
if chart_type == "Bars":
df_act_count = dataframeHelper.get_active_users_count(df_ = df_users_conv, dt_from = dt_from)
self._df_ctd = df_act_count
self.comparative_bar_total_plotty(
df_=self._df_ctd,
ax_title=ax_title
)
elif chart_type == "Sunburst":
df_act_count = dataframeHelper.get_active_users_count_ln(df_ = df_users_conv, dt_from = dt_from)
self._df_ctd = df_act_count
self.comparative_sunburst_total_plotty(
df_=self._df_ctd,
ax_title=ax_title
)
else:
self.comparative_bar_total_plotty(
df_=self._df_ctd,
ax_title=ax_title
)
##################
def comparative_bar_total_paying_plotty(self, df_, ax_title):
fig = px.bar(df_,
x="feature",
y="value",
color="feature",
#pattern_shape="feature",
#pattern_shape_sequence=[".", "x", "+", "-", "\\"],
log_y=False,
)
fig.update_layout(title_text=ax_title)
fig.update_traces(
marker=dict(line_color="grey", pattern_fillmode="replace")
)
fig.show()
def comparative_sunburst_total_paying_plotty(self, df_, ax_title):
fig =go.Figure(go.Sunburst(
labels=["total",
"total_paying",
"total_paying_sponsored",
"total_paying_not_sponsored",
"total_not_paying",
"total_not_paying_sponsored",
"total_not_paying_not_sponsored"
],
parents=["",
"total",
"total_paying",
"total_paying",
"total",
"total_not_paying",
"total_not_paying"
],
values=[df_["total"][0],
df_["total_paying"][0],
df_["total_paying_sponsored"][0],
df_["total_paying_not_sponsored"][0],
df_["total_not_paying"][0],
df_["total_not_paying_sponsored"][0],
df_["total_not_paying_not_sponsored"][0]
],
))
fig.update_layout(title_text=ax_title, margin = dict(t=0, l=0, r=0, b=0))
fig.show()
def func_comparative_bar_total_paying_plotty(self, dt_from, hidden_wdg_, chart_type, ax_title):
dt_from = np.datetime64(dt_from)
if chart_type == "Bars":
df_act_count = dataframeHelper.get_paying_sponsored_count(df_ = df_users_conv, dt_from = dt_from, hidden_wdg_ = hidden_wdg_)
self._df_ctd = df_act_count
self.comparative_bar_total_paying_plotty(
df_=self._df_ctd,
ax_title=ax_title
)
elif chart_type == "Sunburst":
df_act_count = dataframeHelper.get_paying_sponsored_count_ln(df_ = df_users_conv, dt_from = dt_from, hidden_wdg_ = hidden_wdg_)
self._df_ctd = df_act_count
self.comparative_sunburst_total_paying_plotty(
df_=self._df_ctd,
ax_title=ax_title
)
else:
self.comparative_bar_total_paying_plotty(
df_=self._df_ctd,
ax_title=ax_title
)
def func_count_single_column_plotty(self, x_var_size, x_var, x_var_asc, ax_title):
#print(type(x_var))
self.count_categorical_plotty(
df=self._df_ablty,
x_var=x_var,
x_var_size=x_var_size,
x_var_asc=x_var_asc,
ax_title=ax_title,
x_size=16,
y_size=8,
bar_color="lightskyblue",
box_aspect_radio=5,
)
#########################################################################################################
def logreg_plotty(self, value_, ax_title):
fig = go.Figure(go.Indicator(
mode = "number+gauge+delta",
gauge = {'shape': "bullet"},
delta = {'reference': 100},
value = value_,
domain = {'x': [0.1, 1], 'y': [0.2, 0.9]},
title = {'text': ax_title}))
fig.show()
def func_logreg_plotty(self, txt_1, txt_2, txt_3,txt_4,txt_5,txt_6,txt_7,txt_8,txt_9):
# initialise data of lists.
data_tmp_test = [[ int(txt_1), txt_2, txt_3,txt_4,txt_5,txt_6,txt_7,txt_8,txt_9]]
# Create DataFrame
df_tmp_test = pd.DataFrame(data_tmp_test, columns = ['Temp_fecha_de_registro_month_Apr',
'Temp_pais_CRI',
'Temp_como_llego_lic_',
'Temp_pais_GTM',
'habilidades_acabadas_con_insignia_cnt',
'Temp_fecha_de_registro_month_Oct',
'Temp_correo_confirmado_Sí',
'evidencias_consultadas',
'habilidades_completadas'])
y_score_test = self._logreg.predict_proba(df_tmp_test)[:, 1]
self.logreg_plotty(value_ = y_score_test[0] * 100, ax_title = "Prediction")
def comparative_bar_act_month_plotty(self, df_, ax_title):
        fig = px.bar(df_,  # was df_act_month, an undefined global; use the df_ argument
x="fecha_de_registro_month",
y="count",
#color="fecha_de_registro_month",
#pattern_shape="feature",
#pattern_shape_sequence=[".", "x", "+", "-", "\\"],
log_y=False,
)
fig.update_layout(title_text=ax_title)
fig.update_traces(
marker=dict(line_color="grey", pattern_fillmode="replace")
)
fig.show()
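# The getter/setter/deleter trios above are wired together via `property`; the same pattern in isolation (a generic sketch, not tied to the plotting class):

```python
class Holder:
    """Minimal illustration of the get/set/del + property() pattern."""
    def __init__(self):
        self._df = None

    def get_df(self):
        return self._df

    def set_df(self, value):
        self._df = value

    def del_df(self):
        del self._df

    df = property(get_df, set_df, del_df)

h = Holder()
h.df = [1, 2, 3]   # calls set_df
print(h.df)        # calls get_df
```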
# +
#class CustomWidgets():
# """Custom Widgets"""
#
# IntSlider_ =
#
# def __init__(self, version):
# self.version = version
#
# def __str__(self):
# return f"Custom Widgets version {self.version}"
#
# def count_categorical(self, df, x_var, ax_title, x_size, y_size, bar_color, box_aspect_radio):
# sns.set(style="whitegrid")
#
# # Initialize the matplotlib figure
# f, ax = plt.subplots(figsize=(x_size, y_size))
#
# sns.set_color_codes("pastel")
# ax = sns.countplot(x=x_var, data=df, color = bar_color)
#
# ax.set_title(ax_title, fontsize=16)
#
# ax.set_xticklabels(ax.get_xticklabels(), size = 10, rotation=40, ha="right")
#
# #print(len(ax.patches))
# ax.set_box_aspect(box_aspect_radio/len(ax.patches)) #change 10 to modify the y/x axis ratio
#
# plt.show()
| Users_Plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import xarray as xr
import pandas as pd
import geoviews as gv
import geoviews.feature as gf
gv.extension('bokeh')
# -
# The Bokeh backend offers much more advanced tools to interactively explore data, making good use of GeoViews support for web mapping tile sources. As we learned in the [Projections](Projections.ipynb) user guide, using web mapping tile sources is only supported when using the default ``GOOGLE_MERCATOR`` ``crs``.
# # WMTS - Tile Sources
# GeoViews provides a number of tile sources by default, provided by CartoDB, Stamen, OpenStreetMap, Esri and Wikipedia. These can be imported from the ``geoviews.tile_sources`` module.
# +
import geoviews.tile_sources as gts
gv.Layout([ts.relabel(name) for name, ts in gts.tile_sources.items()]).options(
'WMTS', xaxis=None, yaxis=None, width=225, height=225
).cols(4)
# -
# The tile sources that are defined as part of GeoViews are simply instances of the ``gv.WMTS`` and ``gv.Tiles`` elements, which accept tile source URLs of three formats:
#
# 1. Web mapping tile sources: ``{X}``, ``{Y}`` defining the location and a ``{Z}`` parameter defining the zoom level
# 2. Bounding box tile source: ``{XMIN}``, ``{XMAX}``, ``{YMIN}``, and ``{YMAX}`` parameters defining the bounds
# 3. Quad-key tile source: a single ``{Q}`` parameter
#
# Additional, freely available tile sources can be found at [wiki.openstreetmap.org](http://wiki.openstreetmap.org/wiki/Tile_servers).
#
# A tile source may also be drawn at a different ``level`` allowing us to overlay a regular tile source with a set of labels. Valid options for the 'level' option include 'image', 'underlay', 'glyph', 'annotation' and 'overlay':
gts.EsriImagery.options(width=600, height=570, global_extent=True) * gts.StamenLabels.options(level='annotation')
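# The ``{X}``, ``{Y}`` and ``{Z}`` placeholders in web mapping tile URLs are standard slippy-map tile indices; how a longitude/latitude pair maps onto them can be sketched in plain Python (an illustration independent of GeoViews; the URL below is just an example OpenStreetMap tile server):

```python
import math

def deg2tile(lon, lat, zoom):
    # convert lon/lat (degrees) to slippy-map tile indices at a zoom level
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

x, y = deg2tile(-0.13, 51.51, 6)  # roughly central London
url = f"https://tile.openstreetmap.org/6/{x}/{y}.png"
print(url)
```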
# ## Plotting data
#
# One of the main benefits of plotting data with Bokeh is the interactivity it allows. Here we will load a dataset of all the major cities in the world with their population counts over time:
cities = pd.read_csv('../assets/cities.csv', encoding="ISO-8859-1")
population = gv.Dataset(cities, kdims=['City', 'Country', 'Year'])
cities.head()
# Now we can convert this dataset to a set of points mapped by the latitude and longitude and containing the population, country and city as values. The longitudes and latitudes in the dataframe are supplied in simple Plate Carree coordinates, which we will need to declare (as the values are not stored with any associated units). The ``.to`` conversion interface lets us do this succinctly. Note that since we did not assign the Year dimension to the points key or value dimensions, it is automatically assigned to a HoloMap, rendering the data as an animation using a slider widget:
points = population.to(gv.Points, ['Longitude', 'Latitude'], ['Population', 'City', 'Country'])
(gts.Wikipedia * points.options(width=600, height=350, tools=['hover'], size_index=2, color_index=2, size=0.005, cmap='viridis'))
# And because this is a fully interactive Bokeh plot, you can now hover over each datapoint to see all of the values associated with it (name, location, etc.), and you can zoom and pan using the tools provided. Each time, the map tiles should seamlessly update to provide additional detail appropriate for that zoom level.
#
#
# ## Choropleths
# The tutorial on [Geometries](Geometries.ipynb) covers working with shapefiles in more detail but here we will quickly combine a shapefile with a pandas DataFrame to plot the results of the EU Referendum in the UK. We begin by loading the shapefile and then use ``pd.merge`` to combine it with some CSV data containing the referendum results:
import geopandas as gpd
geometries = gpd.read_file('../assets/boundaries/boundaries.shp')
referendum = pd.read_csv('../assets/referendum.csv')
gdf = gpd.GeoDataFrame(pd.merge(geometries, referendum))
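``pd.merge`` with no keys specified joins on whichever columns the two frames share. A toy sketch of that behavior (the column names here are hypothetical, not the actual columns of the boundaries shapefile):

```python
import pandas as pd

# hypothetical ward identifiers shared by both tables
geoms = pd.DataFrame({'code': ['E1', 'E2'], 'name': ['Ward A', 'Ward B']})
votes = pd.DataFrame({'code': ['E1', 'E2'], 'leaveVoteshare': [52.1, 47.3]})
merged = pd.merge(geoms, votes)  # inner join on the shared 'code' column
print(merged)
```

Wrapping the merged result in a ``gpd.GeoDataFrame``, as above, preserves the geometry column so GeoViews can plot it.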
# Now we can easily pass the GeoDataFrame to a Polygons object and declare the ``leaveVoteshare`` as the first value dimension which it will color by:
gv.Polygons(gdf, vdims=['name', 'leaveVoteshare']).options(
tools=['hover'], width=450, height=600, color_index='leaveVoteshare',
colorbar=True, toolbar='above', xaxis=None, yaxis=None
)
# ### Images
# The Bokeh backend also provides basic support for working with images. In this example we will load a very simple gridded temperature dataset and display it overlaid with the coastlines feature from Cartopy. Note that the Bokeh backend does not project the image directly into the web Mercator projection, instead relying on regridding, i.e. resampling the data using a new grid. This means the actual display may be subtly different from the more powerful image support for the matplotlib backend, which will project each of the pixels into the chosen display coordinate system without regridding.
dataset = xr.open_dataset('../data/pre-industrial.nc')
air_temperature = gv.Dataset(dataset, ['longitude', 'latitude'], 'air_temperature',
group='Pre-industrial air temperature')
air_temperature.to.image().options(tools=['hover'], cmap='viridis') * gf.coastline.options(line_color='black', width=600, height=500)
| examples/user_guide/Working_with_Bokeh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 2: Training a spiking neural network on a simple vision dataset
#
# <NAME> (https://fzenke.net)
# > For more details on surrogate gradient learning, please see:
# > <NAME>., <NAME>., and <NAME>. (2019). Surrogate Gradient Learning in Spiking Neural Networks.
# > https://arxiv.org/abs/1901.09948
# In Tutorial 1, we saw how to train a simple multi-layer spiking neural network on a small synthetic dataset. In this tutorial, we will apply what we have learned so far to a slightly larger dataset.
# Concretely, we will use the [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist).
# +
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import seaborn as sns
import torch
import torch.nn as nn
import torchvision
# -
torch.__version__
# +
# The coarse network structure is dictated by the Fashion MNIST dataset.
nb_inputs = 28*28
nb_hidden = 100
nb_outputs = 10
time_step = 1e-3
nb_steps = 100
batch_size = 256
# +
dtype = torch.float
# Check whether a GPU is available
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
# -
# Here we load the Dataset
root = os.path.expanduser("~/data/datasets/torch/fashion-mnist")
train_dataset = torchvision.datasets.FashionMNIST(root, train=True, transform=None, target_transform=None, download=True)
test_dataset = torchvision.datasets.FashionMNIST(root, train=False, transform=None, target_transform=None, download=True)
# +
# Standardize data
# x_train = torch.tensor(train_dataset.train_data, device=device, dtype=dtype)
x_train = np.array(train_dataset.data, dtype=float)  # np.float was removed in NumPy 1.24; use the builtin float
x_train = x_train.reshape(x_train.shape[0],-1)/255
# x_test = torch.tensor(test_dataset.test_data, device=device, dtype=dtype)
x_test = np.array(test_dataset.data, dtype=float)
x_test = x_test.reshape(x_test.shape[0],-1)/255
# y_train = torch.tensor(train_dataset.train_labels, device=device, dtype=dtype)
# y_test = torch.tensor(test_dataset.test_labels, device=device, dtype=dtype)
y_train = np.array(train_dataset.targets, dtype=int)  # np.int was removed in NumPy 1.24; use the builtin int
y_test = np.array(test_dataset.targets, dtype=int)
# -
# Here we plot one of the raw data points as an example
data_id = 1
plt.imshow(x_train[data_id].reshape(28,28), cmap=plt.cm.gray_r)
plt.axis("off")
# Since we are working with spiking neural networks, we ideally want to use a temporal code to make use of spike timing. To that end, we will use a spike latency code to feed spikes to our network.
# +
def current2firing_time(x, tau=20, thr=0.2, tmax=1.0, epsilon=1e-7):
""" Computes first firing time latency for a current input x assuming the charge time of a current based LIF neuron.
Args:
x -- The "current" values
Keyword args:
tau -- The membrane time constant of the LIF neuron to be charged
thr -- The firing threshold value
tmax -- The maximum time returned
epsilon -- A generic (small) epsilon > 0
Returns:
Time to first spike for each "current" x
"""
idx = x<thr
x = np.clip(x,thr+epsilon,1e9)
T = tau*np.log(x/(x-thr))
T[idx] = tmax
return T
def sparse_data_generator(X, y, batch_size, nb_steps, nb_units, shuffle=True ):
""" This generator takes datasets in analog format and generates spiking network input as sparse tensors.
Args:
        X: The analog-valued data (sample x input units)
y: The labels
"""
    labels_ = np.array(y,dtype=int)
number_of_batches = len(X)//batch_size
sample_index = np.arange(len(X))
# compute discrete firing times
tau_eff = 20e-3/time_step
    firing_times = np.array(current2firing_time(X, tau=tau_eff, tmax=nb_steps), dtype=int)
unit_numbers = np.arange(nb_units)
if shuffle:
np.random.shuffle(sample_index)
total_batch_count = 0
counter = 0
while counter<number_of_batches:
batch_index = sample_index[batch_size*counter:batch_size*(counter+1)]
coo = [ [] for i in range(3) ]
for bc,idx in enumerate(batch_index):
c = firing_times[idx]<nb_steps
times, units = firing_times[idx][c], unit_numbers[c]
batch = [bc for _ in range(len(times))]
coo[0].extend(batch)
coo[1].extend(times)
coo[2].extend(units)
i = torch.LongTensor(coo).to(device)
v = torch.FloatTensor(np.ones(len(coo[0]))).to(device)
X_batch = torch.sparse.FloatTensor(i, v, torch.Size([batch_size,nb_steps,nb_units])).to(device)
y_batch = torch.tensor(labels_[batch_index],device=device)
yield X_batch.to(device=device), y_batch.to(device=device)
counter += 1
# -
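As a quick sanity check of the latency code, the rule `T = tau*log(x/(x-thr))` implemented in `current2firing_time` can be evaluated on a few example intensities (a pure-numpy sketch): sub-threshold inputs map to `tmax`, and stronger inputs fire earlier.

```python
import numpy as np

tau, thr, tmax = 20.0, 0.2, 1.0
x = np.array([0.1, 0.3, 0.9])            # example "current" intensities
safe = np.clip(x, thr + 1e-7, None)      # avoid log of non-positive arguments
T = np.where(x < thr, tmax, tau * np.log(safe / (safe - thr)))
print(T)  # sub-threshold input -> tmax; larger x -> shorter latency
```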
# ### Setup of the spiking network model
# +
tau_mem = 10e-3
tau_syn = 5e-3
alpha = float(np.exp(-time_step/tau_syn))
beta = float(np.exp(-time_step/tau_mem))
# +
weight_scale = 7*(1.0-beta) # this should give us some spikes to begin with
w1 = torch.empty((nb_inputs, nb_hidden), device=device, dtype=dtype, requires_grad=True)
torch.nn.init.normal_(w1, mean=0.0, std=weight_scale/np.sqrt(nb_inputs))
w2 = torch.empty((nb_hidden, nb_outputs), device=device, dtype=dtype, requires_grad=True)
torch.nn.init.normal_(w2, mean=0.0, std=weight_scale/np.sqrt(nb_hidden))
print("init done")
# -
def plot_voltage_traces(mem, spk=None, dim=(3,5), spike_height=5):
gs=GridSpec(*dim)
if spk is not None:
dat = 1.0*mem
dat[spk>0.0] = spike_height
dat = dat.detach().cpu().numpy()
else:
dat = mem.detach().cpu().numpy()
for i in range(np.prod(dim)):
if i==0: a0=ax=plt.subplot(gs[i])
else: ax=plt.subplot(gs[i],sharey=a0)
ax.plot(dat[i])
ax.axis("off")
# We can now run this code and plot the output layer "membrane potentials" below. As desired, these potentials do not have spikes riding on them.
# ## Training the network
# +
class SurrGradSpike(torch.autograd.Function):
"""
Here we implement our spiking nonlinearity which also implements
the surrogate gradient. By subclassing torch.autograd.Function,
we will be able to use all of PyTorch's autograd functionality.
Here we use the normalized negative part of a fast sigmoid
as this was done in Zenke & Ganguli (2018).
"""
scale = 100.0 # controls steepness of surrogate gradient
@staticmethod
def forward(ctx, input):
"""
In the forward pass we compute a step function of the input Tensor
and return it. ctx is a context object that we use to stash information which
we need to later backpropagate our error signals. To achieve this we use the
ctx.save_for_backward method.
"""
ctx.save_for_backward(input)
out = torch.zeros_like(input)
out[input > 0] = 1.0
return out
@staticmethod
def backward(ctx, grad_output):
"""
In the backward pass we receive a Tensor we need to compute the
surrogate gradient of the loss with respect to the input.
Here we use the normalized negative part of a fast sigmoid
as this was done in Zenke & Ganguli (2018).
"""
input, = ctx.saved_tensors
grad_input = grad_output.clone()
grad = grad_input/(SurrGradSpike.scale*torch.abs(input)+1.0)**2
return grad
# here we overwrite our naive spike function by the "SurrGradSpike" nonlinearity which implements a surrogate gradient
spike_fn = SurrGradSpike.apply
# -
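The backward pass above scales incoming gradients by the derivative of a normalized fast sigmoid. Its shape can be inspected without autograd (a numpy sketch using the same `scale` constant): the surrogate gradient peaks where the membrane potential sits exactly at threshold and decays on either side.

```python
import numpy as np

scale = 100.0                     # same steepness constant as SurrGradSpike.scale
u = np.linspace(-0.1, 0.1, 5)     # membrane potential relative to threshold
surr_grad = 1.0 / (scale * np.abs(u) + 1.0) ** 2
print(surr_grad)  # maximal at u = 0, symmetric decay on both sides
```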
def run_snn(inputs):
h1 = torch.einsum("abc,cd->abd", (inputs, w1))
syn = torch.zeros((batch_size,nb_hidden), device=device, dtype=dtype)
mem = torch.zeros((batch_size,nb_hidden), device=device, dtype=dtype)
mem_rec = []
spk_rec = []
# Compute hidden layer activity
for t in range(nb_steps):
mthr = mem-1.0
out = spike_fn(mthr)
rst = out.detach() # We do not want to backprop through the reset
new_syn = alpha*syn +h1[:,t]
new_mem = (beta*mem +syn)*(1.0-rst)
mem_rec.append(mem)
spk_rec.append(out)
mem = new_mem
syn = new_syn
mem_rec = torch.stack(mem_rec,dim=1)
spk_rec = torch.stack(spk_rec,dim=1)
# Readout layer
h2= torch.einsum("abc,cd->abd", (spk_rec, w2))
flt = torch.zeros((batch_size,nb_outputs), device=device, dtype=dtype)
out = torch.zeros((batch_size,nb_outputs), device=device, dtype=dtype)
out_rec = [out]
for t in range(nb_steps):
new_flt = alpha*flt +h2[:,t]
new_out = beta*out +flt
flt = new_flt
out = new_out
out_rec.append(out)
out_rec = torch.stack(out_rec,dim=1)
other_recs = [mem_rec, spk_rec]
return out_rec, other_recs
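The per-step recurrence in `run_snn` can be illustrated for a single neuron in plain Python (a sketch of the same leaky integrate-and-fire updates, without autograd or the surrogate gradient; the `alpha`/`beta` values here are merely illustrative):

```python
alpha, beta = 0.82, 0.90          # illustrative synaptic/membrane decay factors
inp = [0.0] * 100
inp[10] = 5.0                     # one strong input pulse at t = 10
syn = mem = 0.0
spikes = []
for t in range(100):
    out = 1.0 if mem > 1.0 else 0.0           # hard threshold at 1.0
    new_syn = alpha * syn + inp[t]            # synaptic current decays, receives input
    new_mem = (beta * mem + syn) * (1.0 - out)  # membrane integrates, resets on spike
    spikes.append(out)
    mem, syn = new_mem, new_syn
print(sum(spikes))
```

The pulse charges the synaptic variable, the membrane integrates it across subsequent steps, and the neuron fires (and resets) until the synaptic current has decayed below threshold.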
# +
def train(x_data, y_data, lr=2e-3, nb_epochs=10):
params = [w1,w2]
optimizer = torch.optim.Adam(params, lr=lr, betas=(0.9,0.999))
log_softmax_fn = nn.LogSoftmax(dim=1)
loss_fn = nn.NLLLoss()
loss_hist = []
for e in range(nb_epochs):
local_loss = []
for x_local, y_local in sparse_data_generator(x_data, y_data, batch_size, nb_steps, nb_inputs):
output,_ = run_snn(x_local.to_dense())
m,_=torch.max(output,1)
log_p_y = log_softmax_fn(m)
loss_val = loss_fn(log_p_y, y_local)
optimizer.zero_grad()
loss_val.backward()
optimizer.step()
local_loss.append(loss_val.item())
mean_loss = np.mean(local_loss)
print("Epoch %i: loss=%.5f"%(e+1,mean_loss))
loss_hist.append(mean_loss)
return loss_hist
def compute_classification_accuracy(x_data, y_data):
""" Computes classification accuracy on supplied data in batches. """
accs = []
for x_local, y_local in sparse_data_generator(x_data, y_data, batch_size, nb_steps, nb_inputs, shuffle=False):
output,_ = run_snn(x_local.to_dense())
m,_= torch.max(output,1) # max over time
_,am=torch.max(m,1) # argmax over output units
tmp = np.mean((y_local==am).detach().cpu().numpy()) # compare to labels
accs.append(tmp)
return np.mean(accs)
# -
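The decision rule used in `compute_classification_accuracy` — maximum over time, then argmax over output units — can be sketched in numpy on a tiny fake readout tensor:

```python
import numpy as np

# fake readout traces: (batch=2, time=4, output units=3)
out = np.array([[[0.1, 0.0, 0.3],
                 [0.2, 0.9, 0.1],
                 [0.0, 0.4, 0.2],
                 [0.1, 0.1, 0.1]],
                [[0.5, 0.0, 0.0],
                 [0.1, 0.2, 0.6],
                 [0.9, 0.1, 0.0],
                 [0.2, 0.0, 0.1]]])
m = out.max(axis=1)          # max over time  -> shape (batch, units)
pred = m.argmax(axis=1)      # argmax over output units -> predicted class
print(pred)  # -> [1 0]
```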
loss_hist = train(x_train, y_train, lr=2e-4, nb_epochs=30)
plt.figure(figsize=(3.3,2),dpi=150)
plt.plot(loss_hist)
plt.xlabel("Epoch")
plt.ylabel("Loss")
sns.despine()
print("Training accuracy: %.3f"%(compute_classification_accuracy(x_train,y_train)))
print("Test accuracy: %.3f"%(compute_classification_accuracy(x_test,y_test)))
def get_mini_batch(x_data, y_data, shuffle=False):
for ret in sparse_data_generator(x_data, y_data, batch_size, nb_steps, nb_inputs, shuffle=shuffle):
return ret
x_batch, y_batch = get_mini_batch(x_test, y_test)
output, other_recordings = run_snn(x_batch.to_dense())
mem_rec, spk_rec = other_recordings
fig=plt.figure(dpi=100)
plot_voltage_traces(mem_rec, spk_rec)
fig=plt.figure(dpi=100)
plot_voltage_traces(output)
# +
# Let's plot the hidden layer spiking activity for some input stimuli
nb_plt = 4
gs = GridSpec(1,nb_plt)
fig= plt.figure(figsize=(7,3),dpi=150)
for i in range(nb_plt):
plt.subplot(gs[i])
plt.imshow(spk_rec[i].detach().cpu().numpy().T,cmap=plt.cm.gray_r, origin="lower" )
if i==0:
plt.xlabel("Time")
plt.ylabel("Units")
sns.despine()
# -
# In conclusion, we see that even this simple spiking network solves the classification problem with ~85% accuracy, and there is plenty of room left for tweaking. However, the hidden layer activities do not look very biological. Although the network displays population sparseness in that only a subset of neurons are active at any given time, the individual neurons' firing rates are pathologically high. This pathology is not too surprising since we have not incentivized low activity levels in any way. We will create such an incentive to address this issue by activity regularization in one of the next tutorials.
# <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
| notebooks/SpyTorchTutorial2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <div style="text-align: center;">
# <FONT size="8">
# <BR><BR><b>
# Stochastic Processes: <BR><BR>Data Analysis and Computer Simulation
# </b>
# </FONT>
# <BR><BR><BR>
#
# <FONT size="7">
# <b>
# Brownian motion 2: computer simulation
# </b>
# </FONT>
# <BR><BR><BR>
#
# <FONT size="7">
# <b>
# -Making Animations-
# </b>
# </FONT>
# <BR>
# </div>
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 1
#
# - In the previous lesson, we wrote and used a very simple Python code to simulate the motion of Brownian particles.
# - Although the code is enough to produce trajectory data that can be used for later analysis, the strong graphic capability of the Jupyter notebook allows us to perform simulations with on-the-fly animations quite easily.
# - Today, I will show you how to take advantage of this graphics capability by modifying our previous simulation code to display the results in real time.
# + [markdown] slideshow={"slide_type": "slide"}
# # Simulation code with on-the-fly animation
#
# ## Import libraries
# + slideshow={"slide_type": "-"}
% matplotlib nbagg
import numpy as np # import numpy library as np
import matplotlib.pyplot as plt # import pyplot library as plt
import matplotlib.mlab as mlab # import mlab module to use MATLAB commands with the same names
import matplotlib.animation as animation # import animation modules from matplotlib
from mpl_toolkits.mplot3d import Axes3D # import Axes3D from mpl_toolkits.mplot3d
plt.style.use('ggplot') # use "ggplot" style for graphs
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 2
#
# - As always, we begin by importing the necessary numerical and plotting libraries.
# - Compared to the previous code example, we import two additional libraries, the `mlab` and `animation` modules from the `matplotlib` library.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Define `init` function for `FuncAnimation`
# + slideshow={"slide_type": "-"}
def init():
global R,V,W,Rs,Vs,Ws,time
R[:,:] = 0.0 # initialize all the variables to zero
V[:,:] = 0.0 # initialize all the variables to zero
W[:,:] = 0.0 # initialize all the variables to zero
Rs[:,:,:] = 0.0 # initialize all the variables to zero
Vs[:,:,:] = 0.0 # initialize all the variables to zero
Ws[:,:,:] = 0.0 # initialize all the variables to zero
time[:] = 0.0 # initialize all the variables to zero
title.set_text(r'') # empty title
line.set_data([],[]) # set line data to show the trajectory of particle n in 2d (x,y)
line.set_3d_properties([]) # add z-data separately for 3d plot
particles.set_data([],[]) # set position current (x,y) position data for all particles
particles.set_3d_properties([]) # add current z data of particles to get 3d plot
return particles,title,line # return listed objects that will be drawn by FuncAnimation
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 3
# - For this lesson, we will perform a simulation of Brownian particles and we wish to see how their positions evolve in time. In addition, we want to visualize the trajectory of one chosen particle, to see how it moves in space.
# - The easiest way to animate your data in Python is to use the "FuncAnimation" function provided by matplotlib.
# - To use this, we must define two basic functions that tell the library how to update and animate our data.
# - The first of these functions is "init". As its name implies, it is used to initialize the figure.
# - "init" will only be called once, at the beginning of the animation procedure.
# - It should define the different objects or "artists" that will be drawn.
# - Notice how we declare global variables explicitly in the function definition.
# - This allows us to modify variables which are declared outside of the function.
# - R,V,W will contain the current position,velocity and Wiener increment for each of the particles
# - Rs,Vs,Ws the corresponding values for all time steps
# - time will contain the time values.
# - We initialize all the variables to zero
# - We will define three different objects to draw, "particles", "line", and "title".
# - "particles" is used to display the particles as points in 3d space
# - "line" is used to display the trajectory of a given particle
# - "title" is used to display the current time
# - Here, the particles and line data are just empty arrays and time is set as an empty string.
# - These three objects will be modified later, when we call the "animate" function
# + [markdown] slideshow={"slide_type": "slide"}
# ## Define `animate` function for `FuncAnimation`
# + slideshow={"slide_type": "-"}
def animate(i):
global R,V,W,Rs,Vs,Ws,time # define global variables
time[i]=i*dt # store time in each step in an array time
    W = std*np.random.randn(nump,dim) # generate an array of random forces according to Eqs.(F10) and (F11)
V = V*(1-zeta/m*dt)+W/m # update velocity via Eq.(F9)
R = R+V*dt # update position via Eq.(F5)
Rs[i,:,:]=R # accumulate particle positions at each step in an array Rs
    Vs[i,:,:]=V # accumulate particle velocities at each step in an array Vs
Ws[i,:,:]=W # accumulate random forces at each step in an array Ws
title.set_text(r"t = "+str(time[i])) # set the title to display the current time
line.set_data(Rs[:i+1,n,0],Rs[:i+1,n,1]) # set the line in 2D (x,y)
line.set_3d_properties(Rs[:i+1,n,2]) # add z axis to set the line in 3D
particles.set_data(R[:,0],R[:,1]) # set the current position of all the particles in 2d (x,y)
particles.set_3d_properties(R[:,2]) # add z axis to set the particle in 3D
return particles,title,line # return listed objects that will be drawn by FuncAnimation
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 4
#
# - The "animate" function is the main funciton used by "FuncAnimation". It is called at every step in order to update the figure and create the animation.
# - Thus, the animate procedure should be responsible for performing the integration in time. It udpates the positions and velocities by propagating the solution to the Langevin equation over $\Delta t$.
# - After the updated configuration is found, we udpate the trajectory variables Rs,Vs,and Ws.
# - Next, we udpate the objects in our animation.
# - We set the title to display the current time
# - We set the line, which displays the trajectory of particle n, to contain all the x,y, and z points until step i
# - Finally, we set the current position of all the particles to be R
# - It is important that animate, as well as init, return the objects that are redrawn (in this case particles, title, line)
# - Notice that we used "n" even though it was not declared as global; this works because we only read its value and never modify it.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Set parameters and initialize variables
# + slideshow={"slide_type": "-"}
dim = 3 # system dimension (x,y,z)
nump = 1000 # number of independent Brownian particles to simulate
nums = 1024 # number of simulation steps
dt = 0.05 # set time increment, \Delta t
zeta = 1.0 # set friction constant, \zeta
m = 1.0 # set particle mass, m
kBT = 1.0 # set temperature, k_B T
std = np.sqrt(2*kBT*zeta*dt) # calculate std for \Delta W via Eq.(F11)
np.random.seed(0) # initialize random number generator with a seed=0
R = np.zeros([nump,dim]) # array to store current positions and set initial condition Eq.(F12)
V = np.zeros([nump,dim]) # array to store current velocities and set initial condition Eq.(F12)
W = np.zeros([nump,dim]) # array to store current random forces
Rs = np.zeros([nums,nump,dim]) # array to store positions at all steps
Vs = np.zeros([nums,nump,dim]) # array to store velocities at all steps
Ws = np.zeros([nums,nump,dim]) # array to store random forces at all steps
time = np.zeros([nums]) # an array to store time at all steps
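Before running the animated version, the velocity update from the `animate` function above can be exercised in isolation (a numpy sketch using the same parameters). The fluctuation-dissipation relation predicts that after many steps the velocity variance equilibrates near $k_B T/m = 1$ per component:

```python
import numpy as np

np.random.seed(0)
dim, nump, nums, dt = 3, 1000, 1024, 0.05
zeta, m, kBT = 1.0, 1.0, 1.0
std = np.sqrt(2 * kBT * zeta * dt)       # std of \Delta W via Eq.(F11)
V = np.zeros((nump, dim))
for _ in range(nums):
    W = std * np.random.randn(nump, dim)  # random forces, Eqs.(F10)-(F11)
    V = V * (1 - zeta / m * dt) + W / m   # velocity update via Eq.(F9)
print(V.var())  # should be close to kBT/m = 1 (up to O(dt) discretization bias)
```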
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 5
#
# - Here, we define the parameters of our simulations.
# - We will work in 3d, with 1000 particles.
# - We use a time step of 0.05 and will simulate over a total of 1024 steps.
# - We set the friction constant, mass, and thermal energy equal to one.
# - We define the standard deviation of the Wiener process in order to satisfy the fluctuation dissipation theorem.
# - Finally, we create the necessary arrays. R,V,W will store the current position, velocity, and Wiener updates for each of the 1000 particles.
# - Rs,Vs,Ws will store the corresponding values for all 1024 time steps.
# - and the time array will contain the time value for each step
# + [markdown] slideshow={"slide_type": "slide"}
# ## Perform and animate the simulation using `FuncAnimation`
# + slideshow={"slide_type": "-"}
fig = plt.figure(figsize=(10,10)) # set fig with its size 10 x 10 inch
ax = fig.add_subplot(111,projection='3d') # creates an additional axis to the standard 2D axes
box = 40 # set draw area as box^3
ax.set_xlim(-box/2,box/2) # set x-range
ax.set_ylim(-box/2,box/2) # set y-range
ax.set_zlim(-box/2,box/2) # set z-range
ax.set_xlabel(r"x",fontsize=20) # set x-label
ax.set_ylabel(r"y",fontsize=20) # set y-label
ax.set_zlabel(r"z",fontsize=20) # set z-label
ax.view_init(elev=12,azim=120) # set view point
particles, = ax.plot([],[],[],'ro',ms=8,alpha=0.5) # define object particles
title = ax.text(-180.,0.,250.,r'',transform=ax.transAxes,va='center') # define object title
line, = ax.plot([],[],[],'b',lw=1,alpha=0.8) # define object line
n = 0 # trajectory line is plotted for the n-th particle
anim = animation.FuncAnimation(fig,func=animate,init_func=init,
frames=nums,interval=5,blit=True,repeat=False)
## If you have ffmpeg installed on your machine
## you can save the animation by uncomment the last line
## You may install ffmpeg by typing the following command in command prompt
## conda install -c menpo ffmpeg
##
# anim.save('movie.mp4',fps=50,dpi=100)
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 6
#
# - Now we can run the simulation and visualize the results.
# - First, we create a figure of size 10 inches by 10 inches, and we add an axis to this figure.
# - When we draw, we draw on the axis, not on the figure directly.
# - Notice how the axis was explicitly set to be '3d'
# - Next, we set the limits for each of the x, y, and z axes, as well as the labels.
# - Using the view_init command we specify the initial position of the camera. However, this is not fixed, as you are able to pan and zoom by clicking on the figure.
# - The main code here creates the empty objects or "artists" that will later be updated by the animate function.
# - These are particles, title, and line, which are all set to be empty. Notice however, that we specify how these objects will be drawn. That is, we can specify the line or marker type, as well as the color. These parameters will be used throughout the simulation, even though the underlying data will change as the particle positions change.
# - Finally, we call the "FuncAnimation" function and specify where to draw (fig), how to initialize (init), and how to update the animation (animate). We must also specify how many frames, or steps to take.
# - Initially, all the particles are set to be at the origin. Notice how this droplet starts expanding radially outward.
# - This is basically how a drop of ink will diffuse through a container of water if given enough time.
# + [markdown] slideshow={"slide_type": "skip"}
# ## Homework
#
# - Perform a simulation for a single Brownian particle (nump=1) and plot its trajectory on a x-y, x-z, and y-z planes.
#
# - Compare the present results with the previous results obtained by the simple simulation code.
#
| edx-stochastic-data-analysis/downloaded_files/04/.ipynb_checkpoints/Stochastic_Processes_week04_3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="-c0vWATuQ_Dn" colab_type="text"
# # Lambda School Data Science - Loading Data
#
# Data comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.
#
# Data set sources:
#
# - https://archive.ics.uci.edu/ml/datasets.html
# - https://github.com/awesomedata/awesome-public-datasets
# - https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)
#
# Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags).
# + [markdown] id="wxxBTeHUYs5a" colab_type="text"
# ## Lecture example - flag data
# + id="nc-iamjyRWwe" colab_type="code" outputId="6874f5d3-e5ff-4d07-cf10-46882c434303" colab={"base_uri": "https://localhost:8080/", "height": 3386}
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
# !curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# + id="UKfOq1tlUvbZ" colab_type="code" colab={}
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# + id="exKPtcJyUyCX" colab_type="code" outputId="9ce9dfa4-92cd-4219-eb15-0254bb45e8fa" colab={"base_uri": "https://localhost:8080/", "height": 273}
# Step 3 - verify we've got *something*
flag_data.head()
# + id="rNmkv2g8VfAm" colab_type="code" outputId="2ec0a7de-c857-4bf8-da08-2288d92b9c0f" colab={"base_uri": "https://localhost:8080/", "height": 555}
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
# + id="iqPEwx3aWBDR" colab_type="code" outputId="9488bd96-ccd0-48b3-e548-ed8a19e4c1bb" colab={"base_uri": "https://localhost:8080/", "height": 86}
# !curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# + id="5R1d1Ka2WHAY" colab_type="code" outputId="c08d622a-ead1-4f21-93a5-aad7d3241660" colab={"base_uri": "https://localhost:8080/", "height": 4813}
# So we have 193 observations with funny names, but the file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
# + id="o-thnccIWTvc" colab_type="code" outputId="049754ca-6b49-47c5-a60c-2edca831fce9" colab={"base_uri": "https://localhost:8080/", "height": 236}
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
# + id="iG9ZOkSMWZ6D" colab_type="code" outputId="2caf55a9-95c8-4bd4-a188-02569dd036d4" colab={"base_uri": "https://localhost:8080/", "height": 555}
flag_data.count()
# + id="gMcxnWbkWla1" colab_type="code" outputId="5f44d3c1-4377-45ca-a786-31097ae2c7a0" colab={"base_uri": "https://localhost:8080/", "height": 555}
flag_data.isna().sum()
# + [markdown] id="AihdUkaDT8We" colab_type="text"
# ### Yes, but what does it *mean*?
#
# This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).
#
# ```
# 1. name: Name of the country concerned
# 2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania
# 3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW
# 4. area: in thousands of square km
# 5. population: in round millions
# 6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others
# 7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others
# 8. bars: Number of vertical bars in the flag
# 9. stripes: Number of horizontal stripes in the flag
# 10. colours: Number of different colours in the flag
# 11. red: 0 if red absent, 1 if red present in the flag
# 12. green: same for green
# 13. blue: same for blue
# 14. gold: same for gold (also yellow)
# 15. white: same for white
# 16. black: same for black
# 17. orange: same for orange (also brown)
# 18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)
# 19. circles: Number of circles in the flag
# 20. crosses: Number of (upright) crosses
# 21. saltires: Number of diagonal crosses
# 22. quarters: Number of quartered sections
# 23. sunstars: Number of sun or star symbols
# 24. crescent: 1 if a crescent moon symbol present, else 0
# 25. triangle: 1 if any triangles present, 0 otherwise
# 26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 0
# 27. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise
# 28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise
# 29. topleft: colour in the top-left corner (moving right to decide tie-breaks)
# 30. botright: Colour in the bottom-right corner (moving left to decide tie-breaks)
# ```
#
# Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...
# + [markdown] id="nPbUK_cLY15U" colab_type="text"
# ## Your assignment - pick a dataset and do something like the above
#
# This is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar "clean" source.
#
# If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck).
#
# If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
# + id="NJdISe69ZT7E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="a13c2ec8-3ff0-4749-8159-ed2cb821df41"
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
import pandas as pd
#I am reading from a car model - mpg dataset
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data', sep='\t')
print(df.shape)
df.head(5)
# + id="eTiJ7-znode7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="47ea7da9-8f39-4bd7-8b59-ae63b61ce7cc"
#The data isn't shaped correctly
#I noticed it wasn't all tab separated - I used stackoverflow to find the delim_whitespace=True argument. This counts both spaces and tabs as a delimiter
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data', delim_whitespace=True, header=None)
df.head()
# + id="wyAdEQ-5r_f0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="16b01f66-f3f7-404f-a429-baa62e15eec3"
df.columns
# + id="m9o2-XdUqLzX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1599} outputId="f45ca86c-bb9c-4610-fbca-0999121294a7"
# I'd like to restructure the dataset so that car model is on the left hand side
# I'd also like to label the columns
labels1= ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'Model Year', 'Origin', 'Car Model']
df.columns = labels1
df.head(50)
# + id="61bgvtUTtooM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="b7bf2699-1406-4f00-a40e-5544c5c51a02"
#lets clean the data
df.isnull().sum()
# + id="JyfBsohct5b5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="769a7408-3e6e-48a7-be01-864a998cf7eb"
#The dataset's description said it had missing values. After examining the dataset, it uses '?' to denote missing values
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data', delim_whitespace=True, header=None, na_values=['?'])
df.columns = labels1
df.isnull().sum()
# + id="o8q-kSj7uwPp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="bc08f17c-db1a-4bed-bd25-fce3056f0107"
#Horsepower is the only column with missing values - it is numeric and only 6 samples need to be filled. Filling them with the median is a reasonable choice.
import numpy as np
df['Horsepower'] = df['Horsepower'].fillna(df['Horsepower'].median())
df.isnull().sum()
# + id="-s13igKTwo2G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="652e62e2-2120-4f97-8aaa-1621d32f627b"
df.head(10)
# + id="GdaOIhPGxNB1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="d86d782b-5572-4384-888f-9fd8a362e695"
#Reformatted dataset, it is loaded and ready!
df = df[['Car Model', 'MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'Model Year', 'Origin']]
df.head(10)
# + [markdown] id="MZCxTwKuReV9" colab_type="text"
# ## Stretch Goals - Other types and sources of data
#
# Not all data comes in a nice single file - for example, image classification involves handling lots of image files. You still will probably want labels for them, so you may have tabular data in addition to the image blobs - and the images may be reduced in resolution and even fit in a regular csv as a bunch of numbers.
#
# If you're interested in natural language processing and analyzing text, that is another example where, while it can be put in a csv, you may end up loading much larger raw data and generating features that can then be thought of in a more standard tabular fashion.
#
# Overall, in the course of learning data science you will deal with loading data in a variety of ways. Another common way to get data is from a database - most modern applications are backed by one or more databases, which you can query to get data to analyze. We'll cover this more in our data engineering unit.
#
# How does data get in the database? Most applications generate logs - text files with lots and lots of records of each use of the application. Databases are often populated based on these files, but in some situations you may directly analyze log files. The usual way to do this is with command line (Unix) tools - command lines are intimidating, so don't expect to learn them all at once, but depending on your interests it can be useful to practice.
#
# One last major source of data is APIs: https://github.com/toddmotto/public-apis
#
# API stands for Application Programming Interface, and while it originally referred to things like the way an application interfaced with the GUI or other aspects of an operating system, it now largely refers to online services that let you query and retrieve data. You can essentially think of most of them as "somebody else's database" - you have (usually limited) access.
#
# *Stretch goal* - research one of the above extended forms of data/data loading. See if you can get a basic example working in a notebook. I suggest image, text, or (public) API - databases are interesting, but there aren't many publicly accessible and they require a great deal of setup.
| module2-loadingdata/LS_DS_112_Loading_Data.ipynb |
# Notedown example
# ----------------
#
# This is an example of a notebook generated from markdown using
# [notedown]. The markdown source is on [github] and the generated
# output can be viewed using [nbviewer].
#
# [notedown]: http://github.com/aaren/notedown
# [github]: https://github.com/aaren/notedown/blob/master/example.md
# [nbviewer]: http://nbviewer.ipython.org/github/aaren/notedown/blob/master/example.ipynb
#
# Try opening `example.ipynb` as a notebook and running the cells to
# see how notedown works.
# !ipython notebook example.ipynb
# Notedown is simple. It converts markdown to the IPython notebook
# JSON format. It does this by splitting the markdown source into code
# blocks and not-code blocks and then creates a notebook using these
# as the input for the new cells (code and markdown).
#
# We make use of the IPython api:
# +
import re
import sys
import argparse
from IPython.nbformat.v3.rwbase import NotebookReader
from IPython.nbformat.v3.nbjson import JSONWriter
import IPython.nbformat.v3.nbbase as nbbase
# -
# We create a new class `MarkdownReader` that inherits from
# `NotebookReader`. The only requirement on this new class is that it
# has a `.reads(self, s, **kwargs)` method that returns a notebook
# JSON string.
#
# We search for code blocks using regular expressions, making use of
# named groups:
# +
fenced_regex = r"""
\n* # any number of newlines followed by
^(?P<fence>`{3,}|~{3,}) # a line starting with a fence of 3 or more ` or ~
(?P<language> # followed by the group 'language',
[\w+-]*) # a word of alphanumerics, _, - or +
[ ]* # followed by spaces
(?P<options>.*) # followed by any text
\n # followed by a newline
(?P<content> # start a group 'content'
[\s\S]*?) # that includes anything
\n(?P=fence)$ # up until the same fence that we started with
\n* # followed by any number of newlines
"""
# indented code
indented_regex = r"""
\n* # any number of newlines
(?P<icontent> # start group 'icontent'
(?P<indent>^([ ]{4,}|\t)) # an indent of at least four spaces or one tab
[\s\S]*?) # any code
\n* # any number of newlines
^(?!(?P=indent)) # stop when there is a line without at least
# the indent of the first one
"""
code_regex = r"({}|{})".format(fenced_regex, indented_regex)
code_pattern = re.compile(code_regex, re.MULTILINE | re.VERBOSE)
# -
# Then say we have some input text:
text = u"""### Create IPython Notebooks from markdown
This is a simple tool to convert markdown with code into an IPython
Notebook.
Usage:
```
notedown input.md > output.ipynb
```
It is really simple and separates your markdown into code and not
code. Code goes into code cells, not-code goes into markdown cells.
Installation:
pip install notedown
"""
# We can parse out the code block matches with
code_matches = [m for m in code_pattern.finditer(text)]
# Each of the matches has a `start()` and `end()` method telling us
# the position in the string where each code block starts and
# finishes. We use this and do some rearranging to get a list of all
# of the blocks (text and code) in the order in which they appear in
# the text:
# +
code = u'code'
markdown = u'markdown'
python = u'python'
def pre_process_code_block(block):
"""Preprocess the content of a code block, modifying the code
block in place.
Remove indentation and do magic with the cell language
if applicable.
"""
# homogenise content attribute of fenced and indented blocks
block['content'] = block.get('content') or block['icontent']
# dedent indented code blocks
if 'indent' in block and block['indent']:
indent = r"^" + block['indent']
content = block['content'].splitlines()
dedented = [re.sub(indent, '', line) for line in content]
block['content'] = '\n'.join(dedented)
# alternate descriptions for python code
python_aliases = ['python', 'py', '', None]
# ensure one identifier for python code
if 'language' in block and block['language'] in python_aliases:
block['language'] = python
# add alternate language execution magic
if 'language' in block and block['language'] != python:
code_magic = "%%{}\n".format(block['language'])
block['content'] = code_magic + block['content']
# determine where the limits of the non code bits are
# based on the code block edges
text_starts = [0] + [m.end() for m in code_matches]
text_stops = [m.start() for m in code_matches] + [len(text)]
text_limits = zip(text_starts, text_stops)
# list of the groups from the code blocks
code_blocks = [m.groupdict() for m in code_matches]
# update with a type field
code_blocks = [dict(d.items() + [('type', code)]) for d in
code_blocks]
# remove indents, add code magic, etc.
map(pre_process_code_block, code_blocks)
text_blocks = [{'content': text[i:j], 'type': markdown} for i, j
in text_limits]
# create a list of the right length
all_blocks = range(len(text_blocks) + len(code_blocks))
# cells must alternate in order
all_blocks[::2] = text_blocks
all_blocks[1::2] = code_blocks
# remove possible empty first, last text cells
all_blocks = [cell for cell in all_blocks if cell['content']]
# -
# Once we've done that it is easy to convert the blocks into IPython
# notebook cells and create a new notebook:
# +
cells = []
for block in all_blocks:
if block['type'] == code:
kwargs = {'input': block['content'],
'language': block['language']}
code_cell = nbbase.new_code_cell(**kwargs)
cells.append(code_cell)
elif block['type'] == markdown:
kwargs = {'cell_type': block['type'],
'source': block['content']}
markdown_cell = nbbase.new_text_cell(**kwargs)
cells.append(markdown_cell)
else:
raise NotImplementedError("{} is not supported as a cell"
"type".format(block['type']))
ws = nbbase.new_worksheet(cells=cells)
nb = nbbase.new_notebook(worksheets=[ws])
# -
# `JSONWriter` gives us nicely formatted JSON output:
writer = JSONWriter()
print writer.writes(nb)
| example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# [](https://www.pythonista.io)
#
# # Iterations with ```for```... ```in```.
#
#
# ## Iterable objects.
#
# One of Python's great strengths is the ability to iterate dynamically over a variety of "iterable" objects, which are collections of objects with methods capable of returning their elements one at a time.
#
# Some iterable types are:
#
# * ```str```.
# * ```list```.
# * ```tuple```.
# * ```dict```.
# * ```set```.
# * ```frozenset```.
# * ```bytes```.
# * ```bytearray```.
# ## Iterators and generators.
#
# Besides iterable objects, there are other objects capable of returning objects one at a time. These are iterators and generators, which will be studied later on.
# ## The ```for``` ... ```in``` structure.
#
# To iterate over an iterable object, the following syntax is used:
#
# ```
# for <name> in <iterable>:
#     ...
#     ...
# ```
# Where:
#
# * ```<iterable>``` may be an iterable object, an iterator, or a generator.
#
# * ```<name>``` is the name to which each object delivered by the iterable is bound on each iteration. When the loop ends, that name remains bound to the last object delivered.
# **Examples:**
# * The following cell will bind the name ```letra``` to each character of the object ```'Chapultepec'```, which will be displayed by the ```print()``` function.
for letra in "Chapultepec":
print(letra)
# * The name ```letra``` ends up bound to the object ```'c'```, which is the last element of the object ```'Chapultepec'```.
letra
# * The following cell will bind the name ```item``` to each element contained in the object ```['uno', 'dos', 3]```, which will be displayed by the ```print()``` function.
for item in ['uno', 'dos', 3]:
print(item)
# * The name ```item``` ends up bound to the object ```3```, which is the last element of the object ```['uno', 'dos', 3]```.
item
# ## Iterations with the ```range()``` function.
#
# The most common way to perform iterations in other programming languages is over numeric ranges, similar to what the ```range()``` function produces.
#
# This function returns an object that yields a succession of numbers within a defined range.
#
# ```
# range(<m>, <n>, <s>)
# ```
#
# Where:
#
# * ```<m>``` is the first number of the range.
# * ```<n>``` is the limit value of the range. The succession stops at the number just before ```<n>```.
# * ```<s>``` is the magnitude of the increments or decrements of the range.
#
# If only one argument is given, that value is assigned to ```<n>```, while ```<m>``` defaults to ```0``` and ```<s>``` defaults to ```1```.
#
# If only two arguments are given, the first is assigned to ```<m>```, the second to ```<n>```, and ```<s>``` defaults to ```1```.
#
# The value of ```<s>``` may be negative, provided that ```<m>``` is greater than ```<n>```.
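# A quick check of these rules (a minimal sketch; each list shows exactly the values the loop would produce):

```python
# One argument: <m> defaults to 0 and <s> to 1; the count stops before <n>.
print(list(range(8)))           # [0, 1, 2, 3, 4, 5, 6, 7]

# Two arguments: <m>=5, <n>=9, <s> defaults to 1.
print(list(range(5, 9)))        # [5, 6, 7, 8]

# Three arguments with a positive step.
print(list(range(3, 11, 2)))    # [3, 5, 7, 9]

# A negative step only yields values when <m> is greater than <n>.
print(list(range(26, 10, -4)))  # [26, 22, 18, 14]
print(list(range(10, 26, -4)))  # []
```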
# **Examples:**
# * The following cell will display the succession of numbers from ```0``` to ```7``` in increments of ```1```.
""" Cuenta del 0 hasta 7 en incrementos de a 1."""
for contador in range(8):
print(contador)
print()
# * The following cell will display the succession of numbers from ```5``` to ```8``` in increments of ```1```.
""" Counts from 5 up to (but not including) 9 in increments of 1. """
for contador in range(5, 9):
print(contador)
print()
# * The following cell will display the succession of numbers from ```3``` to ```9``` in increments of ```2```.
""" Counts from 3 up to (but not including) 11 in increments of 2. """
for contador in range(3, 11, 2):
print(contador)
print()
# * The following cell will display the succession of numbers from ```26``` to ```14``` in steps of ```-4```.
""" Counts down from 26, stopping before 10, in steps of -4. """
for contador in range(26, 10, -4):
print(contador)
# ## Unpacking.
#
# Python allows expressions that assign values to more than one name from a collection.
#
# If the number of names matches the number of objects contained in the collection, the first value is assigned to the first name, and so on.
#
# ```
# <name 1>, <name 2>, ..., <name n> = <collection of size n>
# ```
#
# If the number of names and the size of the collection do not match, a ```ValueError``` exception is raised.
# **Example:**
# * The following cell will assign to each of the 3 defined names the value of each element of the 3-element collection.
nombre, apellido, calificacion = 'Juan', 'Pérez', 7.5
nombre
apellido
calificacion
# * The following cell will attempt to assign 3 names from a collection with 4 elements, which raises a ```ValueError``` exception.
nombre, apellido, calificacion = ['Juan', 'Pérez', 'Erroneo', 7.5]
# ### Unpacking with ```for```... ```in```.
#
# The ```for``` ... ```in``` statement can unpack the collections contained within an iterable object when more than one name is defined. The condition is that the size of each collection returned by the iterable equals the number of names defined after the ```for```.
#
# ```
# for <name 1>, <name 2>, ..., <name n> in <iterable>:
#     ...
#     ...
#
# ```
#
# Where:
#
# * ```<iterable>``` is an object capable of returning collections of size ```n``` on each iteration.
# * ```<name x>``` is the name to which the element with index ```x``` of each size-```n``` collection is assigned.
# **Example:**
# * The object ```palabras``` is an object of type ```list``` containing 4-letter ```str``` objects.
palabras = ["gato", "pato", "zeta", "cita"]
# * The following cell will perform one iteration for each object contained in ```palabras```, which is bound to ```item``` on each iteration.
for item in palabras:
print(item)
# * The following cell will perform one iteration for each element contained in ```palabras```.
#    * For each element obtained in the iteration, it unpacks the 4 characters it contains and assigns them, in order, to:
#        * ```primera```
#        * ```segunda```
#        * ```tercera```
#        * ```cuarta```
for primera, segunda, tercera, cuarta in palabras:
print(primera)
print(segunda)
print(tercera)
print(cuarta)
print("----------")
# <p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Licencia Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />Esta obra está bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>
# <p style="text-align: center">© <NAME>. 2021.</p>
| 14_iteraciones.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %run ../proofs/notebook_setup.py
# +
"""Europa-Io occultation from the PHEMU campaign."""
import numpy as np
from matplotlib import pyplot as plt
from astropy.time import Time
import astropy.units as u
from astropy.timeseries import TimeSeries
from astroquery.jplhorizons import Horizons
import os
import starry
from matplotlib.patches import ConnectionPatch
def get_body_ephemeris(times, body_id="501", step="1m"):
start = times.isot[0]
# Because Horizons time range doesn't include the endpoint
# we need to add some extra time
if step[-1] == "m":
padding = 2 * float(step[:-1]) / (60 * 24)
elif step[-1] == "h":
padding = 2 * float(step[:-1]) / 24
elif step[-1] == "d":
padding = 2 * float(step[:-1])
else:
raise ValueError(
"Unrecognized JPL Horizons step size. Use '1m' or '1h' for example."
)
end = Time(times.mjd[-1] + padding, format="mjd").isot
# Query JPL Horizons
epochs = {"start": start, "stop": end, "step": step}
obj = Horizons(id=body_id, epochs=epochs, id_type="id")
eph = obj.ephemerides(extra_precision=True)
times_jpl = Time(eph["datetime_jd"], format="jd")
# Store all data in a TimeSeries object
data = TimeSeries(time=times)
data["RA"] = np.interp(times.mjd, times_jpl.mjd, eph["RA"]) * eph["RA"].unit
data["DEC"] = np.interp(times.mjd, times_jpl.mjd, eph["DEC"]) * eph["DEC"].unit
data["ang_width"] = (
np.interp(times.mjd, times_jpl.mjd, eph["ang_width"]) * eph["ang_width"].unit
)
data["phase_angle"] = (
np.interp(times.mjd, times_jpl.mjd, eph["alpha_true"]) * eph["alpha_true"].unit
)
# Boolean flags for occultations/eclipses
occ_sunlight = eph["sat_vis"] == "O"
umbra = eph["sat_vis"] == "u"
occ_umbra = eph["sat_vis"] == "U"
partial = eph["sat_vis"] == "p"
occ_partial = eph["sat_vis"] == "P"
occulted = np.any([occ_umbra, occ_sunlight], axis=0)
data["ecl_par"] = np.array(
np.interp(times.mjd, times_jpl.mjd, partial), dtype=bool,
)
data["ecl_tot"] = np.array(np.interp(times.mjd, times_jpl.mjd, umbra), dtype=bool,)
data["occ_umbra"] = np.array(
np.interp(times.mjd, times_jpl.mjd, occ_umbra), dtype=bool,
)
data["occ_sun"] = np.array(
np.interp(times.mjd, times_jpl.mjd, occ_sunlight), dtype=bool,
)
# Helper functions for dealing with angles and discontinuities
subtract_angles = lambda x, y: np.fmod((x - y) + np.pi * 3, 2 * np.pi) - np.pi
def interpolate_angle(x, xp, yp):
"""
Interpolate an angular quantity on domain [-pi, pi) and avoid
discontinuities.
"""
cosy = np.interp(x, xp, np.cos(yp))
siny = np.interp(x, xp, np.sin(yp))
return np.arctan2(siny, cosy)
# Inclination of the starry map = 90 - latitude of the central point of
# the observed disc
data["inc"] = interpolate_angle(
times.mjd, times_jpl.mjd, np.pi / 2 * u.rad - eph["PDObsLat"].to(u.rad),
).to(u.deg)
# Rotational phase of the starry map is the observer longitude
data["theta"] = (
interpolate_angle(
times.mjd, times_jpl.mjd, eph["PDObsLon"].to(u.rad) - np.pi * u.rad,
).to(u.deg)
) + 180 * u.deg
# Obliquity of the starry map is the CCW angle from the celestial
# NP to the NP of the target body
data["obl"] = interpolate_angle(
times.mjd, times_jpl.mjd, eph["NPole_ang"].to(u.rad),
).to(u.deg)
# Compute the location of the subsolar point relative to the central
# point of the disc
lon_subsolar = subtract_angles(
np.array(eph["PDSunLon"].to(u.rad)), np.array(eph["PDObsLon"].to(u.rad)),
)
lon_subsolar = 2 * np.pi - lon_subsolar # positive lon. is to the east
lat_subsolar = subtract_angles(
np.array(eph["PDSunLat"].to(u.rad)), np.array(eph["PDObsLat"].to(u.rad)),
)
# Location of the subsolar point in cartesian Starry coordinates
xs = np.array(eph["r"]) * np.cos(lat_subsolar) * np.sin(lon_subsolar)
ys = np.array(eph["r"]) * np.sin(lat_subsolar)
zs = np.array(eph["r"]) * np.cos(lat_subsolar) * np.cos(lon_subsolar)
data["xs"] = np.interp(times.mjd, times_jpl.mjd, xs) * u.AU
data["ys"] = np.interp(times.mjd, times_jpl.mjd, ys) * u.AU
data["zs"] = np.interp(times.mjd, times_jpl.mjd, zs) * u.AU
return data
def get_phemu_data(file="data/G20091204_2o1_JHS_0.txt"):
y, m, d = file[6:10], file[10:12], file[12:14]
date_mjd = Time(f"{y}-{m}-{d}", format="isot", scale="utc").to_value("mjd")
data = np.genfromtxt(file)
times_mjd = date_mjd + data[:, 0] / (60 * 24)
time, flux, phemu_model = np.vstack([times_mjd, data[:, 1], data[:, 2]])
return Time(time, format="mjd"), flux, phemu_model
def get_starry_args(time):
# Ephemeris
eph_io = get_body_ephemeris(time, step="1m")
eph_europa = get_body_ephemeris(time, body_id="502", step="1m",)
# Get occultation parameters
obl = np.mean(eph_io["obl"].value)
inc = np.mean(eph_io["inc"].value)
theta = np.mean(eph_io["theta"].value)
ro = np.mean((eph_europa["ang_width"] / eph_io["ang_width"]).value)
rel_ra = (eph_europa["RA"] - eph_io["RA"]).to(u.arcsec) / (
0.5 * eph_io["ang_width"].to(u.arcsec)
)
rel_dec = (eph_europa["DEC"] - eph_io["DEC"]).to(u.arcsec) / (
0.5 * eph_io["ang_width"].to(u.arcsec)
)
xo = -rel_ra.value
yo = rel_dec.value
xs = np.mean(eph_io["xs"].value)
ys = np.mean(eph_io["ys"].value)
zs = np.mean(eph_io["zs"].value)
rs = np.sqrt(xs ** 2 + ys ** 2 + zs ** 2)
xs /= rs
ys /= rs
zs /= rs
return inc, obl, xo, yo, dict(theta=theta, ro=ro, xs=xs, ys=ys, zs=zs)
# +
# Grab the PHEMU light curve
time, flux, phemu_model = get_phemu_data()
# A very rough estimate of the errorbars
ferr = 0.02
# Get geometrical parameters
inc, obl, xo0, yo0, kwargs = get_starry_args(time)
# -
import exoplanet
import pymc3 as pm
import theano.tensor as tt
with pm.Model() as model:
# Instantiate a starry map & get geometrical parameters
map = starry.Map(ydeg=15, reflected=True)
map.inc = inc
map.obl = obl
# Load the Galileo SSI / Voyager composite
# https://astrogeology.usgs.gov/search/map/Io/
# Voyager-Galileo/Io_GalileoSSI-Voyager_Global_Mosaic_1km
map.load("data/io_mosaic.jpg")
# Free parameters
dx = pm.Uniform("dx", lower=-0.5, upper=0.5, testval=0.0)
dy = pm.Uniform("dy", lower=-0.5, upper=0.5, testval=0.0)
amp = pm.Uniform("amp", lower=0.01, upper=100.0, testval=1.0)
europa_amp = pm.Uniform("europa_amp", lower=0.3, upper=0.7, testval=0.50)
roughness = pm.Uniform("roughness", lower=0.0, upper=60.0, testval=5.0)
# Compute the flux model
map.roughness = roughness
flux_model = europa_amp + amp * map.flux(
xo=tt.as_tensor_variable(xo0) + dx, yo=tt.as_tensor_variable(yo0) + dy, **kwargs
)
# Track some values for plotting later
pm.Deterministic("flux_model", flux_model)
# Save our initial guess
flux_model_guess = exoplanet.eval_in_model(flux_model)
# The likelihood function
pm.Normal("obs", mu=flux_model, sd=ferr, observed=flux)
with model:
map_soln = exoplanet.optimize()
t = time.value - 55169
plt.plot(t, flux, "k.", alpha=0.75, ms=4, label="data")
plt.plot(t, map_soln["flux_model"], label="model")
plt.plot(t, phemu_model, label="phemu model")
plt.xlabel("time")
plt.ylabel("flux")
plt.legend();
for param in ["dx", "dy", "amp", "europa_amp", "roughness"]:
print("{} = {}".format(param, map_soln[param]))
map.show(colorbar=True, model=model, point=map_soln)
| tex/figures/io_europa.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu in forward()
class BasicBlock(nn.Module):
multiplier=1
def __init__(self, input_num_planes, num_planes, strd=1):
super(BasicBlock, self).__init__()
        self.conv_layer1 = nn.Conv2d(in_channels=input_num_planes, out_channels=num_planes, kernel_size=3, stride=strd, padding=1, bias=False)
self.batch_norm1 = nn.BatchNorm2d(num_planes)
self.conv_layer2 = nn.Conv2d(in_channels=num_planes, out_channels=num_planes, kernel_size=3, stride=1, padding=1, bias=False)
self.batch_norm2 = nn.BatchNorm2d(num_planes)
        self.res_connection = nn.Sequential()
        if strd > 1 or input_num_planes != self.multiplier*num_planes:
            self.res_connection = nn.Sequential(
                nn.Conv2d(in_channels=input_num_planes, out_channels=self.multiplier*num_planes, kernel_size=1, stride=strd, bias=False),
                nn.BatchNorm2d(self.multiplier*num_planes)
            )
    def forward(self, inp):
        op = F.relu(self.batch_norm1(self.conv_layer1(inp)))
        op = self.batch_norm2(self.conv_layer2(op))
        op += self.res_connection(inp)
        op = F.relu(op)
        return op
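# A minimal smoke test of this residual block (assuming PyTorch is installed; the class is repeated here, with the constructor's strd argument wired to stride=, so the cell runs on its own — the input sizes are arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    multiplier = 1
    def __init__(self, input_num_planes, num_planes, strd=1):
        super().__init__()
        self.conv_layer1 = nn.Conv2d(input_num_planes, num_planes, kernel_size=3,
                                     stride=strd, padding=1, bias=False)
        self.batch_norm1 = nn.BatchNorm2d(num_planes)
        self.conv_layer2 = nn.Conv2d(num_planes, num_planes, kernel_size=3,
                                     stride=1, padding=1, bias=False)
        self.batch_norm2 = nn.BatchNorm2d(num_planes)
        # Identity shortcut by default; a 1x1 conv when shape/stride changes.
        self.res_connection = nn.Sequential()
        if strd > 1 or input_num_planes != self.multiplier * num_planes:
            self.res_connection = nn.Sequential(
                nn.Conv2d(input_num_planes, self.multiplier * num_planes,
                          kernel_size=1, stride=strd, bias=False),
                nn.BatchNorm2d(self.multiplier * num_planes))
    def forward(self, inp):
        op = F.relu(self.batch_norm1(self.conv_layer1(inp)))
        op = self.batch_norm2(self.conv_layer2(op))
        op = op + self.res_connection(inp)
        return F.relu(op)

x = torch.randn(2, 64, 32, 32)            # batch of 2 feature maps
same = BasicBlock(64, 64, strd=1)(x)      # identity shortcut, shape preserved
down = BasicBlock(64, 128, strd=2)(x)     # 1x1-conv shortcut, channels x2, spatial /2
print(same.shape, down.shape)
```

# The stride-2 branch halves the spatial resolution (32 -> 16) while the shortcut's 1x1 convolution matches both the channel count and the stride, so the two tensors can be summed.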
| Chapter03/ResNetBlock.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from keras.models import Model
from keras.applications import Xception
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import Adam
from keras.applications import imagenet_utils
from keras.utils import np_utils
from keras.callbacks import EarlyStopping
# -
from keras.datasets import cifar10
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# +
n_classes = len(np.unique(y_train))
y_train = np_utils.to_categorical(y_train, n_classes)
y_test = np_utils.to_categorical(y_test, n_classes)
X_train = X_train.astype('float32')/255.
X_test = X_test.astype('float32')/255.
# -
xception_model = Xception(weights='imagenet', include_top=False)
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
out = Dense(10, activation='softmax')(x)
model = Model(inputs=xception_model.input, outputs=out)
for layer in xception_model.layers:
layer.trainable = False
opt = Adam()
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
callbacks = [EarlyStopping(monitor='val_acc', patience=5, verbose=0)]
n_epochs = 10
batch_size = 50
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, validation_split=0.2, verbose=1, callbacks=callbacks)
for i, layer in enumerate(model.layers):
print(i, layer.name)
for layer in model.layers[:115]:
layer.trainable = False
for layer in model.layers[115:]:
layer.trainable = True
opt_finetune = Adam()
model.compile(optimizer=opt_finetune, loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
history_finetune = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, validation_split=0.2, verbose=1, callbacks=callbacks)
| Section06/Fine-tuning with Xception.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cas9 Mutational Analysis
#
# ### Imports
import sys
import os
import matplotlib.pyplot as plt
import scipy
from scipy import stats
import pandas as pd
import math
import random
import numpy
# ### Load Mutational Data
# There are two experiments to analyze. e4 C PAMs, e5 T PAMs. File headers have the ID and fullname columns as sample descriptors, and then the amino acid and position for each following column. In the file on each line, the actual substitution for those positions is listed.
#
# Data saved into a list of dictionaries. Each dict contains the id and the mutations as a list. The list positions correspond to the locations list also returned by the function.
def process_mutation_file(filename):
mutation_data = []
locations = []
with open(filename) as f:
first_line = True
for line in f:
line_data = line.strip('\n').split('\t')
if first_line:
locations = line_data[2:]
first_line = False
continue
id = line_data[0]
mutations = line_data[2:]
mutation_data.append({'id': id, 'mutations': mutations})
return locations, mutation_data
e4locations, e4mutations = process_mutation_file('e4mutdata.txt')
e5locations, e5mutations = process_mutation_file('e5mutdata.txt')
# ### Mutation frequency analysis
# Attempting to determine locations that are most commonly mutated. This will not tell us which are most important, just what happens most frequently.
#
# Co-correlated mutations? Mutually exclusive? w/ significance.
# +
cumulative_data = {}
e4_data = {}
e5_data = {}
number_of_samples = 0
for sample in e4mutations:
number_of_samples += 1
for i, mutation in enumerate(sample['mutations']):
if mutation != '':
if e4locations[i][1:] not in cumulative_data:
cumulative_data[e4locations[i][1:]] = 0
e4_data[e4locations[i][1:]] = 0
cumulative_data[e4locations[i][1:]] = cumulative_data[e4locations[i][1:]] + 1
e4_data[e4locations[i][1:]] = e4_data[e4locations[i][1:]] + 1
for sample in e5mutations:
number_of_samples += 1
for i, mutation in enumerate(sample['mutations']):
if mutation != '':
if e5locations[i][1:] not in cumulative_data:
cumulative_data[e5locations[i][1:]] = 0
if e5locations[i][1:] not in e5_data:
e5_data[e5locations[i][1:]] = 0
cumulative_data[e5locations[i][1:]] = cumulative_data[e5locations[i][1:]] + 1
e5_data[e5locations[i][1:]] = e5_data[e5locations[i][1:]] + 1
# +
locations = [i for i in range(1080)]
counts = [0] * 1080
e4counts = [0] * 1080
e5counts = [0] * 1080
colors = []
for l in locations:
if l < 55:
colors.append('purple')
elif l < 91:
colors.append('blue')
elif l < 247:
colors.append('gray')
elif l < 455:
colors.append('green')
elif l < 510:
colors.append('cyan')
elif l < 541:
colors.append('magenta')
elif l < 655:
colors.append('yellow')
elif l < 667:
colors.append('teal')
elif l < 842:
colors.append('purple')
elif l < 946:
colors.append('blue')
else:
colors.append('black')
total_count = 0
for l, c in cumulative_data.items():
counts[int(l)-1] = float(c) / number_of_samples
total_count += 1
# 182 total mutations, 37 samples
# print(total_count)
# print(number_of_samples)
for l,c in e4_data.items():
e4counts[int(l)-1] = float(c)
for l,c in e5_data.items():
e5counts[int(l)-1] = float(c)
#print("Positions mutated in > 50% of samples (E4 + E5)")
#for i, c in enumerate(counts):
# if c > 0.5:
# print(i+1)
#
#print()
#
#print("Positions mutated in >40% of samples E4 only")
#for i,c in enumerate(e4counts):
# if c > 0.4:
# print(i+1)
#
#print()
#
#print("Positions mutated in >80% of samples E5 only")
#for i,c in enumerate(e5counts):
# if c > 0.8:
# print(i+1)
# -
# Plotting the counts of mutations at each position along the protein.
#
# +
fig = plt.figure(figsize=(12,8))
ax = fig.add_axes([0,0,1,1])
ax.bar(locations, counts, width=3)
plt.ylabel('Frequency')
plt.xlabel('Position')
plt.title('Frequency of mutation by position (E4 + E5)')
plt.show()
# +
fig = plt.figure(figsize=(12,8))
# add color code for domains
ax = fig.add_axes([0,0,1,1])
ax.bar(locations, e4counts, width = 3, color=colors)
plt.ylabel('Number of Mutations')
plt.xlabel('Position')
plt.title('Number of mutations by position (E4 only)')
plt.show()
# +
fig = plt.figure(figsize=(12,8))
ax = fig.add_axes([0,0,1,1])
ax.bar(locations, e5counts, width = 3)
plt.ylabel('Number of Mutations')
plt.xlabel('Position')
plt.title('Number of mutations by position (E5 only)')
plt.show()
# -
# ### Mutual Exclusivity / Co-occurrence Analysis
# For each position (ignoring the specific mutations here for now)
#
# | A | B | Neither | A not B | B not A | Both | Log2 Odds Ratio | p-value | q-value | tendency |
# | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | --------------|
# | p1 | p2 | count | count | count | count | odds ratio | pv | qv | tendency |
#
# Log2 Odds Ratio: quantifies how strongly the presence or absence of alterations in A is associated with alterations in B:
# (Neither * Both) / (A not B * B not A), with the Haldane-Anscombe correction applied (add 0.5 to each cell).
#
# p-value: one-sided Fisher exact test
# q-value: from Benjamini-Hochberg FDR correction
#
# Tendency: log2(OR) > 0: co-occurrence. log2(OR) <= 0: mutual exclusivity. q-value < 0.05: significant
#
# TODO: Look at the variant with the highest activity. For its mutations, look at co-occurring/mutually exclusive mutations in other samples and compare activity levels. If activity levels are high and the mutations co-occur, they are likely important.
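# As a hedged, self-contained sketch of the statistics described above (toy indicator vectors of my own invention, not the real e4 data; `alternative='less'` would test mutual exclusivity instead of co-occurrence):

```python
import math
from scipy import stats

# hypothetical per-sample mutation indicators for two positions
a = [1, 1, 0, 0, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 1, 0, 0]

both = sum(1 for x, y in zip(a, b) if x and y)
anotb = sum(1 for x, y in zip(a, b) if x and not y)
bnota = sum(1 for x, y in zip(a, b) if not x and y)
neither = sum(1 for x, y in zip(a, b) if not x and not y)

# Haldane-Anscombe correction: add 0.5 to each cell before taking the odds ratio
log2_or = math.log2(((neither + 0.5) * (both + 0.5)) /
                    ((anotb + 0.5) * (bnota + 0.5)))
# one-sided Fisher exact test on the uncorrected counts ('greater' = co-occurrence)
oddsratio, pvalue = stats.fisher_exact([[both, anotb], [bnota, neither]],
                                       alternative='greater')
print(log2_or, pvalue)
```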
# + tags=[]
mutex_list = []
for i,location1 in enumerate(e4locations):
for j,location2 in enumerate(e4locations):
mutex = {}
if i <= j:
continue
mutex['a'] = location1
mutex['b'] = location2
# get the counts by iterating through the samples to see if they have mut at this location
# Adding 0.5 for Haldane-Anscombe correction (deals with 0 values in the matrix)
mutex['neither'] = 0.5
mutex['anotb'] = 0.5
mutex['bnota'] = 0.5
mutex['both'] = 0.5
        for sample in e4mutations:
            a = sample['mutations'][i] != ''
            b = sample['mutations'][j] != ''
            if a and not b:
                mutex['anotb'] = mutex['anotb'] + 1
            elif b and not a:
                mutex['bnota'] = mutex['bnota'] + 1
            elif a and b:
                mutex['both'] = mutex['both'] + 1
            else:
                mutex['neither'] = mutex['neither'] + 1
        # run the one-sided Fisher exact test once the 2x2 table is complete,
        # on the uncorrected integer counts arranged as [[Both, A not B], [B not A, Neither]];
        # alternative='less' tests for mutual exclusivity (odds ratio < 1)
        contingency = [[int(mutex['both'] - 0.5), int(mutex['anotb'] - 0.5)],
                       [int(mutex['bnota'] - 0.5), int(mutex['neither'] - 0.5)]]
        oddsratio, pvalue = scipy.stats.fisher_exact(contingency, alternative='less')
        mutex['log2'] = math.log2((mutex['neither'] * mutex['both']) / (mutex['anotb'] * mutex['bnota']))
        mutex['pval'] = pvalue
        mutex_list.append([mutex['a'], mutex['b'], str(mutex['neither'] - 0.5), str(mutex['anotb'] - 0.5), str(mutex['bnota'] - 0.5), str(mutex['both'] - 0.5), str(round(mutex['log2'], 2)), str(round(pvalue, 5))])
# -
e4muts = ['P6', 'E33', 'K104', 'D152', 'F260', 'A263', 'A303', 'D451', 'E520', 'R646', 'F696', 'G711', 'I758', 'H767', 'E932', 'N1031', 'R1033', 'K1044', 'Q1047', 'V1056']
to_list = []
pd.set_option('display.max_rows', None)
for mutex in mutex_list:
# if (mutex[0] in e4muts or mutex[1] in e4muts) and float(mutex[7]) <= .05 and float(mutex[6]) < -3:
if float(mutex[7]) <= .02 and float(mutex[6]) < -2:
to_list.append(mutex)
#for l in to_list:
# print(l)
pd.DataFrame(to_list, columns = ["Position A", "Position B", "Neither", "A not B", "B not A", "Both", "Log2 Odds Ratio", "p-Value"])
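# The table above reports raw p-values only; the q-values mentioned earlier would come from a Benjamini-Hochberg FDR correction. A minimal illustrative sketch (statsmodels' `multipletests` provides the same functionality):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted q-values for a list of p-values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    # scale each sorted p-value by n / rank
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downward
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(q, 0, 1)
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.50]))
```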
# ### Load editing data
# MiSeq Data
# +
e4_pooled_all_7_positions_all_average = [19.4, 24.4,26.8, 40.9, 21.5, 26.3, 24.2, 22.6, 16.9, 23, 22.7, 21.3, 20, 18.6, 24.5, 23.3]
e4_pooled_all_7_positions_cpam_average = [64.48, 68.53, 68.59, 80.84, 68.01, 77.07, 70.91, 69.06, 53.15, 64.77, 71.02, 64.49, 56.33, 59.07, 61.66, 62.53]
e4_pooled_all_7_NCA = [73.74, 73.21, 68.75, 83.83, 70.44, 78.65, 73.69, 71.84, 46.49, 58.55, 74.98, 68.43, 63.57, 64.18, 66.99, 67.83];
e4_pooled_all_7_NCC = [66.61053243,69.49755142,69.95180207,80.4903751,71.95210255,80.5674104,73.92105949,74.429117,54.73933221,66.66920381,73.82650912,67.77429867,55.78605375,60.33931426,63.62912191,62.14925265];
e4_pooled_all_7_NCG = [56.55287791,63.27192605,70.9581768,78.57496302,61.87731455,70.69780974,64.75108786,60.7550885,61.15544185,73.41523203,65.13846724,58.93432484,52.0882119,54.39511595,57.73256788,61.33347971];
e4_pooled_all_7_NCT = [62.00788161,68.13080439,64.68944824,80.45063312,67.76559429,78.38102063,71.2837539,69.21367084,49.81703005,60.46215528,70.1309541,62.80081794,53.87129792,57.35949054,58.28577853,58.82172111];
e4_pooled_all_7_NAN = [7.4523,14.4409,18.0626,35.8614,10.3150,15.6185,14.1783,11.4205,8.1421,13.8988,11.0546,11.8430,11.5479,8.4213,18.6789,16.5240]
e4_pooled_all_7_NCN = [64.4777,68.5280,68.5878,80.8372,68.0083,77.0745,70.9115,69.0617,53.1533,64.7745,71.0181,64.4854,56.3296,59.0675,61.6599,62.5325]
e4_pooled_all_7_NGN = [2.3716,6.4856,11.2507,21.6210,3.1141,5.5312,4.8325,3.9880,3.9494,8.1158,3.5239,3.8291,6.7732,2.8810,8.9444,7.1171]
e4_pooled_all_7_NTN = [3.1748,7.9689,9.2492,25.1456,4.4795,6.8398,6.7655,5.9716,2.5258,5.3864,5.3975,4.9487,5.3465,3.9032,8.7444,7.0196]
e5_poled_all_7_all = [65.92,14.09,44.12,30.65,69.23,65.01,54.97,60.94,46.45,43.21,46.16,55.15,49.66,53.25,67.46,53.72,59.31]
e5_poled_all_7_tpam = [75.02,16.60,39.03,25.16,73.46,68.98,64.74,69.88,54.34,47.22,50.72,61.52,56.15,62.46,74.19,62.48,67.16]
e5_poled_all_7_NTA = [74.87852908,15.25212911,36.96283386,22.52329024,69.375605,66.00801628,62.41386083,68.95271462,49.79998762,44.166857,48.2120514,58.93802954,49.74323956,61.83875159,74.1842161,59.76101072,67.31782779]
e5_poled_all_7_NTC = [73.96951883,22.33873769,32.96501141,19.95899175,77.27669471,71.67083648,87.21908271,88.95441287,77.75242105,67.4399637,71.00563257,82.1109943,62.55560811,60.90708145,70.17189336,78.6549213,62.35564776]
e5_poled_all_7_NTG = [75.87832094,13.48640813,56.60132466,41.93045133,77.18208193,71.93333644,48.92122886,55.07598677,40.63892455,33.18595698,36.28242646,46.3440386,60.71490295,63.95220284,78.16375391,52.68676198,72.11253139]
e5_poled_all_7_NTT = [75.34861761,15.33550616,29.58848532,16.21934849,69.98939396,66.31009838,60.39407802,66.55384095,49.15344201,44.08081596,47.39059408,58.70018526,51.5904855,63.12857217,74.23081007,58.82158882,66.87181211]
e5_poled_all_7_NAN = [62.60,12.19,44.95,30.79,64.12,60.71,48.24,54.18,41.06,35.95,39.58,46.86,43.45,50.02,66.01,47.73,56.78]
e5_poled_all_7_NCN = [63.30,15.55,39.81,27.19,71.32,67.05,60.48,66.84,53.53,57.22,58.52,67.95,52.72,49.34,61.42,57.96,55.55]
e5_poled_all_7_NGN = [62.76,12.00,52.69,39.48,68.03,63.29,46.41,52.87,36.89,32.46,35.82,44.26,46.32,51.17,68.25,46.69,57.77]
e5_poled_all_7_NTN = [75.02,16.60,39.03,25.16,73.46,68.98,64.74,69.88,54.34,47.22,50.72,61.52,56.15,62.46,74.19,62.48,67.16]
e5_ANA = [62.11,11.11,33.87,21.61,60.35,57.43,49.13,55.64,41.70,39.33,41.45,50.29,39.43,50.96,63.89,44.52,53.74]
e5_CNA = [70.03,13.89,36.29,21.18,70.71,65.09,55.84,62.70,45.38,41.88,44.33,53.15,48.23,57.54,70.73,52.87,62.47]
e5_GNA = [60.91,10.35,34.35,23.08,58.67,57.38,45.37,53.10,38.34,33.71,37.42,45.09,38.03,48.67,65.15,43.64,54.81]
e5_TNA = [74.96,15.91,46.17,32.67,75.13,69.70,61.00,67.03,49.82,48.25,51.48,61.92,55.14,62.28,74.49,58.78,67.66]
e5_ANC = [58.47,19.32,36.00,24.21,68.27,64.08,77.96,82.78,66.90,60.93,63.93,75.60,47.84,46.51,59.48,69.02,50.29]
e5_CNC = [65.73,22.09,41.93,26.96,75.29,69.31,86.04,88.47,75.45,66.34,69.66,80.32,56.99,53.50,64.40,76.93,56.18]
e5_GNC = [58.38,17.84,36.56,22.59,69.06,64.44,73.63,78.95,62.34,53.79,56.80,67.60,50.06,45.49,58.77,64.93,50.93]
e5_TNC = [69.85,21.52,50.16,37.09,77.86,72.04,83.38,88.04,76.14,69.43,70.30,81.94,63.28,57.80,67.67,77.21,60.81]
e5_ANG = [61.97,9.13,51.75,39.28,68.28,65.05,34.25,40.67,29.27,29.17,33.03,39.03,48.56,50.35,67.37,37.76,60.45]
e5_CNG = [68.19,11.31,57.09,42.56,76.68,70.82,39.91,45.50,33.03,30.16,33.27,40.38,57.70,54.38,70.92,43.97,65.02]
e5_GNG = [61.68,8.92,52.81,37.40,68.02,63.89,32.19,38.83,26.79,25.24,27.41,33.96,46.86,46.97,68.05,37.11,59.28]
e5_TNG = [71.98,13.24,65.08,52.95,79.00,73.57,44.54,51.25,38.13,35.18,38.83,47.18,63.29,60.52,75.47,51.94,69.25]
e5_ANT = [63.77,11.10,34.95,22.40,60.12,57.90,43.51,50.06,35.74,37.19,41.14,48.44,39.51,50.96,64.90,44.27,54.58]
e5_CNT = [69.89,13.69,41.17,25.98,67.57,63.95,51.90,58.57,42.45,39.79,43.02,51.92,47.62,55.67,68.86,53.99,61.14]
e5_GNT = [65.47,11.02,34.72,21.20,61.10,58.47,46.34,52.26,36.77,35.68,38.61,47.57,38.93,50.23,67.57,45.24,57.17]
e5_TNT = [71.37,14.93,53.06,39.29,71.60,67.02,54.46,61.26,45.02,45.34,47.87,58.00,53.10,60.13,71.70,57.28,65.23]
e5_NNA = [67.00,37.67,24.64,66.21,62.40,52.84,59.62,43.81,40.79,52.61,54.86,68.57,49.95,59.67];
e5_NNC = [63.11,41.16,27.71,72.62,67.47,80.25,84.56,70.21,62.62,76.36,50.83,62.58,72.02,54.55];
e5_NNG = [65.96,56.68,43.05,72.99,68.33,37.72,44.06,31.81,29.94,40.14,53.05,70.46,42.69,63.50];
e5_NNT = [67.63,40.98,27.22,65.10,61.83,49.05,55.54,39.99,39.50,51.48,54.25,68.26,50.19,59.53];
e5_names = [1, 13, 17, 11, 12, 22, 24, 19, 28, 26, 34, 40, 5, 36]
e4_NNA = [20.87,23.23,26.23,40.31,20.51,24.18,22.32,20.88,13.27,18.31,21.79,20.44,21.77,18.94,25.62,24.55]
e4_NNC = [22.91,33.14,29.16,49.86,27.53,34.78,32.90,31.20,17.78,25.25,29.45,27.70,24.30,22.79,29.17,26.69]
e4_NNG = [16.21,19.81,28.24,36.05,18.03,21.94,19.49,17.90,22.19,29.25,18.99,17.82,16.16,16.00,21.39,21.33]
e4_NNT = [17.49,21.24,23.51,37.24,19.84,24.16,21.97,20.46,14.53,19.36,20.77,19.14,17.76,16.54,21.85,20.62]
#datas = [e4_pooled_all_7_positions_all_average, e4_pooled_all_7_positions_cpam_average, e4_pooled_all_7_NCA, e4_pooled_all_7_NCC, e4_pooled_all_7_NCG,e4_pooled_all_7_NCT, e4_pooled_all_7_NAN, e4_pooled_all_7_NCN, e4_pooled_all_7_NGN, e4_pooled_all_7_NTN]
#names = ['All-e4', 'All C-PAM-e4', 'NCA', 'NCC', 'NCG', 'NCT', 'NAN', 'NCN', 'NGN', 'NTN']
#datas = [e5_poled_all_7_all, e5_poled_all_7_tpam, e5_poled_all_7_NTA, e5_poled_all_7_NTC, e5_poled_all_7_NTG, e5_poled_all_7_NTT, e5_poled_all_7_NAN, e5_poled_all_7_NCN, e5_poled_all_7_NGN, e5_poled_all_7_NTN]
#names = ['All-e5', 'All T-PAM-e5', 'NTA', 'NTC', 'NTG', 'NTT', 'NAN', 'NCN', 'NGN', 'NTN']
#datas = [e5_ANA, e5_CNA, e5_GNA, e5_TNA, e5_ANC, e5_CNC, e5_GNC, e5_TNC, e5_ANG, e5_CNG, e5_GNG, e5_TNG, e5_ANT, e5_CNT, e5_GNT, e5_TNT]
#names = ['ANA', 'CNA','GNA', 'TNA', 'ANC', 'CNC', 'GNC', 'TNC', 'ANG', 'CNG', 'GNG', 'TNG', 'ANT', 'CNT', 'GNT', 'TNT']
datas = [e5_NNA, e5_NNC, e5_NNG, e5_NNT]
names = ['NNA', 'NNC', 'NNG', 'NNT']
#datas = [e4_NNA, e4_NNC, e4_NNG, e4_NNT]
#names = ['NNA', 'NNC', 'NNG', 'NNT']
all_datas_zipped = zip(datas, names)
# e4locations, e4mutations
position_map = {}
first = True
locs_all = []
vs_all = []
for datas, name in all_datas_zipped:
max_activity = 0
locations_to_plot = []
location_activity_data = {}
for i,location in enumerate(e5locations):
activity_average = {}
mutcount = {}
mutcount_total = 0
for sample in e5mutations:
#print(sample['id'])
            sample_id = int(sample['id'].split('-')[-1])  # avoid shadowing the builtin id()
            #sample_id = int(sample['id'])
            activity = datas[e5_names.index(sample_id)]
if sample['mutations'][i] != '':
if sample['mutations'][i] not in activity_average:
activity_average[sample['mutations'][i]] = 0
mutcount[sample['mutations'][i]] = 0
activity_average[sample['mutations'][i]] += activity
mutcount_total += 1
mutcount[sample['mutations'][i]] += 1
for aa, activity in activity_average.items():
try:
activity_average[aa] = activity_average[aa] / mutcount[aa]
if activity_average[aa] > max_activity:
max_activity = activity_average[aa]
except ZeroDivisionError:
activity_average[aa] = 0
if mutcount_total > 1:
location_activity_data[location + aa] = activity_average[aa]
if first:
a = sorted(location_activity_data.items(), key = lambda x: x[1])
for i,v in enumerate(a):
position_map[v[0]] = i
else:
# use map
a = [('', 0)] * len(position_map.keys())
for k, v in location_activity_data.items():
a[position_map[k]] = (k, v)
locs = []
vs = []
for l,v in a:
if v > 0:
locs.append(l)
#vs.append(v/max_activity)
vs.append(v)
locs_all.append(locs)
vs_all.append(vs)
fig = plt.figure(figsize=(20,8))
ax = fig.add_axes([0,0,1,1])
ax.bar(locs, vs)
plt.ylabel('average activity of lagoons with mutation at position')
plt.xlabel('Position')
plt.title('Activity of lagoons with mutations at given positions ' + name)
    plt.xticks(rotation=90, fontsize=13)
plt.show()
fig.savefig('activity_by_mutations_' + name + '.svg', bbox_inches='tight')
first = False
with open('heatmap_data.txt', 'w') as f:
f.write('\t'.join(locs_all[0]) + '\n')
for vs in vs_all:
f.write('\t'.join(list(map(str, vs))) + '\n')
# -
| mutational_analysis/mutanalysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # requireJS -- load a define/require style javascript module
#
# We want to load the following module:
# + deletable=true editable=true
import jp_proxy_widget
from jp_proxy_widget import js_context
# + deletable=true editable=true
require_fn="js/simple_define.js"
print(js_context.get_text_from_file_name(require_fn))
# + deletable=true editable=true
requireJS = jp_proxy_widget.JSProxyWidget()
# + deletable=true editable=true
# callback for storing the styled element color
requireJSinfo = {}
def require_answer_callback(answer):
requireJSinfo["answer"] = answer
module_identifier = "three_primes"
requireJS.require_js(module_identifier, require_fn)
# initialize the element using code that requires the loaded module
requireJS.js_init("""
console.log("js init calling requirejs " + module_identifier);
element.html("<em>loading " + module_identifier + "</em>")
element.requirejs([module_identifier], function(module_value) {
console.log("js init using value for " + module_identifier);
element.html('<b>First three primes: ' + module_value.first_3_primes + '</b>')
require_answer_callback(module_value);
});
""", require_answer_callback=require_answer_callback, module_identifier=module_identifier)
#requireJS.uses_require(after_load)
#after_load()
def validate_requireJS():
expect = {"first_3_primes": [2,3,5]}
assert expect == requireJSinfo["answer"], repr((expect, requireJSinfo))
assert requireJS._require_checked
print ("Loaded requirejs value is correct!")
# + deletable=true editable=true
#requireJS.commands_awaiting_render
# + deletable=true editable=true
requireJS
# + deletable=true editable=true
from jp_proxy_widget import notebook_test_helpers
validators = notebook_test_helpers.ValidationSuite()
validators.add_validation(requireJS, validate_requireJS)
# + [markdown] deletable=true editable=true
# # A more realistic example `FileSaver.js`
#
# # WARNING: Running the widget will cause the browser to download a small text file to your downloads area
#
# + deletable=true editable=true
class SaverWidget(jp_proxy_widget.JSProxyWidget):
def __init__(self, *pargs, **kwargs):
super(SaverWidget, self).__init__(*pargs, **kwargs)
# Wiring: set up javascript callables and python callbacks.
self.require_js("saveAs", "js/FileSaver.js")
self.js_init("""
debugger;
element.html("Requiring saveAs...");
element.requirejs(["saveAs"], function(saveAs) {
element.html("saveAs = " + saveAs);
element.download = function(text, name, type) {
if (!type) {
type="text/plain;charset=utf-8";
}
var blob = new Blob([text], {type: type});
element.html("Now saving " + text.length + " as " + name + " with type " + type);
saveAs(blob, name);
confirm(text, name, type);
};
ready();
});
""", confirm=self.confirm, ready=self.ready)
is_ready = False
def ready(self):
"call this when the widget has fully initialized. Download a very small text file."
self.is_ready = True
saverWidget.element.download("Not very interesting text file content.", "Save_as_test.txt")
confirmed = None
def confirm(self, *args):
self.confirmed = args
saverWidget = SaverWidget()
saverWidget
# + deletable=true editable=true
expected_confirmation = [
'Not very interesting text file content.',
'Save_as_test.txt',
'text/plain;charset=utf-8']
def validateSaver():
assert saverWidget.is_ready
assert saverWidget.confirmed is not None, "confirmation hasn't happened yet."
assert list(saverWidget.confirmed) == list(expected_confirmation)
print ("file saver apparently worked!")
validators.add_validation(saverWidget, validateSaver)
# + deletable=true editable=true
validators.run_all_in_widget()
# + deletable=true editable=true
| notebooks/notebook_tests/requirejs test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting Started Notebook
from utils import *
# ## Loading Dataset
# If you don't want to load the already preprocessed features, you can reprocess the dataset with ```!python dataset_creation.py```
#
# Otherwise, you can directly use the provided preprocessed signals. With the frequential features, it goes as follows:
# +
path_to_dataset = '../../../Desktop/PhyDAA/Dataset' # insert here the path to the dataset
lab = np.load(os.path.join(path_to_dataset, 'Label.npy'))
participant = np.load(os.path.join(path_to_dataset, 'participant.npy'))
print('Label has a shape of : ',lab.shape)
print('Participant has a shape of : ', participant.shape)
# +
feature_array = np.load(os.path.join(path_to_dataset, 'Array', 'freq_band.npy'))
feature_image = np.load(os.path.join(path_to_dataset, 'Img', 'freq_img.npy'))
print('Feature array has a shape of : ', feature_array.shape)
print('Feature Images have a shape of : ', feature_image.shape)
# -
# We have here:
# - ```lab``` representing the binary attention state (0 ~ distracted ; 1 ~ focused).
# - ```participant``` assigning a participant id to each signal of the dataset.
# - ```feature_array``` representing the preprocessed feature vector for each trial.
# - ```feature_image``` representing an array of images as described in the paper, each image corresponding to a trial's feature vector.
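# Since signals from the same participant are correlated, a participant-aware split is usually safer than a random one. A hedged sketch of a leave-one-participant-out split using stand-in arrays with the same roles as ```lab```, ```participant``` and ```feature_array``` above (toy values, not the real dataset):

```python
import numpy as np

# stand-ins for the loaded arrays
participant = np.array([1, 1, 2, 2, 3, 3])
lab = np.array([0, 1, 0, 1, 0, 1])
feature_array = np.arange(12, dtype=float).reshape(6, 2)

held_out = 2  # participant id reserved for testing
train_mask = participant != held_out
X_train, y_train = feature_array[train_mask], lab[train_mask]
X_test, y_test = feature_array[~train_mask], lab[~train_mask]
print(X_train.shape, X_test.shape)
```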
# ## Deep Learning Models Implementation
#
# coming soon
| Getting Start.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluating Forecasts
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
# Set figure size to (14,6)
plt.rcParams['figure.figsize'] = (14,6)
# -
# # Step 1 - Load the Data
flights = pd.read_csv('flights_train.csv', index_col=0, parse_dates=True)
flights.head()
# Inspect the size of the data
flights.shape
flights.describe()
flights.info()
# # Plot the data
def plot_flights(df, title='Monthly Passenger Numbers in 1000 over Time', ylim=True):
'''
Custom plotting function for plotting the flights dataset
Parameters
----------
df : pd.DataFrame
The data to plot.
title : str
The title of the plot
ylim : bool
Whether to fix the minimum value of y; default is True
Returns
-------
Plots the data
'''
df.plot()
plt.title(title)
plt.ylabel('# of Passengers in 1000')
    if ylim:
        plt.ylim(bottom=0)  # ymin= was removed in newer matplotlib versions
plt.show()
plot_flights(flights)
# # Step 2 - Clean the Data - this may be 80% of your job as a DS!!
#
# Fortunately we do not have to do that in case of the flights data.
# # Step 3 - Extract the Timestep and the Seasonal Dummies for the whole Dataset
# Create a timestep variable - if you had missing values or dirty data, then the below assumption wouldn't hold
#flights['timestep'] = list(range(len(flights)))
flights['timestep'] = range(len(flights))
flights.head()
# ### Q: Why can we use Matthias' suggestion of a range object instead of a list built from a range object?
#
# ### A: A range object is a lazy sequence (it produces values on demand)
# * A range object can create a list of numbers, but in its nascent state it isn't a list
# * to extract a list from a range object, you need to pull the values out of it
# * how do you pull values out of a range object? - you need to iterate over them
iterator = iter(range(len(flights)))
# we can run this cell 131 (len(flights)) times before we hit a StopIteration error
next(iterator)
flights.head()
# Q: why does pandas accept list(range(len(flights))) or range(len(flights)) ?
# A: I don't know exactly, but there'll be something like the below in the pandas codebase somewhere
def make_a_column(input_):
    if isinstance(input_, list):
        pass  # make a column of that list
    elif isinstance(input_, range):
        pass  # iterate over the range object, store the results in a list and proceed
# +
# Create the seasonal dummies
seasonal_dummies = pd.get_dummies(flights.index.month,
prefix='month',
drop_first=True).set_index(flights.index)
flights = flights.join(seasonal_dummies)
flights.head()
# -
# ## Q: what does drop_first=True do?
# ### A: lets think about 3 breakfast_drinks
# * coffee
# * tea
# * water
df1 = pd.get_dummies(['coffee', 'tea', 'water'])
df1
df2 = pd.get_dummies(['coffee', 'tea', 'water'], drop_first=True)
# the dropped first category ('coffee') is encoded implicitly: a row where tea and water are both 0 means coffee
df2
# # 4) Train-Test-Split
#
# Fortunately not necessary for the flights data.
# * How would you train-test split a time-series? would you use train_test_split in sklearn? or some other method?
# * you can't use a random splitter, we can time-series split, or you can do it manually
#
# # 5) Model the Trend_Seasonal model
# Define X and y
X = flights.drop(columns=['passengers'])
y = flights['passengers']
# Create and fit the model
m = LinearRegression()
m.fit(X, y)
# Create a new column with the predictions of the trend_seasonal model
flights['trend_seasonal'] = m.predict(X)
flights.head()
# # Plot the original data and preliminary model
plot_flights(flights[['passengers', 'trend_seasonal']])
# # 6) - Extract the remainder
# +
# Fast - fourier transform - which decomposes a time-series into subcomponents
# -
# We want to extract the part of the model that the trend_seasonal is not able to explain
flights['remainder'] = flights['passengers'] - flights['trend_seasonal']
plot_flights(flights['remainder'], title='Remainder after modelling trend and seasonality', ylim=False)
# # 7) - Inspect the remainder to decide how many lags to include
#
# For now, I will include one lag only - you might want to look at autocorrelations (e.g. an ACF plot) to help you decide.
#
# # 8) - Add the lags of the remainder to the training data
flights['lag1'] = flights['remainder'].shift(1)
flights.dropna(inplace=True)
flights.head()
# # 9) Run the full model
# Assign X
X_full = flights.drop(columns=['passengers', 'trend_seasonal', 'remainder'])
y_full = flights['passengers']
X_full.head()
m_full = LinearRegression()
m_full.fit(X_full, y_full)
# Create a new predictions column
flights['predictions_full_model'] = m_full.predict(X_full)
# # 10) - Plot the prediction vs passengers for the training data
plot_flights(flights[['passengers', 'trend_seasonal', 'predictions_full_model']])
# # Is this model good?
#
# # 10) - Evaluate our model
#
# We want to understand how good our model would work on data it has not been trained on. We can get an estimate of that by using cross-validation.
#
# Cross-validation so far:
#
# - Dividing training data into subsets (folds)
# - in each iteration singled out one fold as validation set
# - trained on the remaining training data and evaluated the fit on the validation set.
#
# Cross-validation for time series:
#
# - Dividing training data into subsets (folds)
# - in the first iteration, use the first fold to evaluate the second fold
# - in the second iteration, use the first and the second fold to evaluate the third fold
# - ...
# Create a TimeSeriesSplit object
ts_split = TimeSeriesSplit(n_splits=5)
ts_split.split(X_full, y_full)
# Split the training data into folds
for i, (train_index, validation_index) in enumerate(ts_split.split(X_full, y_full)):
print(f'The training data for the {i+1}th iteration are the observations {train_index}')
print(f'The validation data for the {i+1}th iteration are the observations {validation_index}')
print()
# Create the time series split
time_series_split = ts_split.split(X_full, y_full)
# Do the cross validation
result = cross_val_score(estimator=m_full, X=X_full, y=y_full, cv=time_series_split)
result
result.mean()
result_ordinary_cv = cross_val_score(estimator=m_full, X=X_full, y=y_full, cv=5)
result_ordinary_cv
result_ordinary_cv.mean()
# ---
# # I'm talking about 2 different things when I talk about metrics
# * Cost function - is the fuel for gradient descent
# * Score on the data - how you evaluate a fitted model
#
# * Cost - MSE
# * Score - R^2
# # Evaluation Metrics
#
# # Cost
#
# ### 1. Mean-Squared-Error (MSE)
#
# $\frac{1}{n} \sum (y_t - \hat{y_t}) ^2$
#
# #### Advantages:
# - Is widely implemented
#
# #### Disadvantages:
# - Strong penalty on outliers - preprocess to remove outliers (what is an outlier?)
# - Unit hardly interpretable
# - Not comparable across models with different units
# ### 2. Mean Absolute Error (MAE)
#
# $\frac{1}{n} \sum |y_t - \hat{y}_t|$
#
# #### Advantages:
#
# - Error is in the unit of interest
# - Does not overly value outliers
#
# #### Disadvantages:
#
# - Ranges from 0 to infinity
# - Not comparable across models with different units
# ### 3. Root-Mean-Squared-Error (RMSE)
#
# $\sqrt{\frac{1}{n} \sum (y_t - \hat{y_t}) ^2}$
#
# #### Advantages:
# - Errors in the unit of interest
# - Does not overly value outliers
#
# #### Disadvantages:
# - Can only be compared between models whose errors are measured in the same unit
# ### 4. Mean Absolute Percent Error (MAPE)
#
# $\frac{1}{n} \sum |\frac{y_t - \hat{y}_t}{y_t}| * 100$
#
# #### Advantages:
# - Comparable over different models
#
# #### Disadvantages:
# - Is not defined for 0 values
# ### 5. Root Mean Squared Log Error (RMSLE)
#
# $\sqrt{\frac{1}{n} \sum (log(y_t + 1) - log(\hat{y_t} + 1)) ^2}$
#
# #### Advantages:
# - Captures relative error
# - Penalizes underestimation stronger than overestimation
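# A quick numeric sanity check of the formulas above against sklearn's implementations (toy numbers; MAPE is computed by hand since older sklearn versions don't ship it):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, mean_squared_log_error

y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 330.0])

mse = np.mean((y_true - y_pred) ** 2)
mae = np.mean(np.abs(y_true - y_pred))
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
rmsle = np.sqrt(np.mean((np.log(y_true + 1) - np.log(y_pred + 1)) ** 2))

# the hand-rolled versions agree with sklearn
assert np.isclose(mse, mean_squared_error(y_true, y_pred))
assert np.isclose(mae, mean_absolute_error(y_true, y_pred))
assert np.isclose(rmsle, np.sqrt(mean_squared_log_error(y_true, y_pred)))
print(mse, mae, mape, rmsle)
```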
# # Score
#
# ### 6. $R^2$
#
#
# $1 - \frac{\sum{(y_i - \hat{y_i})^2}}{\sum{(y_i - \bar{y})^2}}$
# ### 7. $R_{adj}^2$
#
#
# $1 - (1-R^2)\frac{n-1}{n-p-1} $
#
# * n = no.of data points
# * p = no. of features
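# A worked example of the two scores above (toy numbers, one feature assumed):

```python
import numpy as np

y = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.8, 5.1, 7.2, 8.9])
n, p = len(y), 1  # 4 observations, 1 feature

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r2, adj_r2)  # adjusted R^2 is always <= R^2
```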
# ---
from sklearn.metrics import mean_squared_error, mean_squared_log_error, mean_absolute_error, r2_score
# paraphrased from Stack Overflow - link to follow
# note: n and p come from the full feature frame `df`, not from the validation fold
def adj_r2(df, r2_score, y_test, y_pred):
    adj_r2 = (1 - (1 - r2_score(y_test, y_pred)) * ((df.shape[0] - 1) /
                                                    (df.shape[0] - df.shape[1] - 1)))
    return adj_r2
mses = []
maes = []
rmse = []
mape = []
rmsle = []
r2 = []
ar2 = []
for i, (train_index, validation_index) in enumerate(ts_split.split(X_full, y_full)):
model = LinearRegression()
model.fit(X_full.iloc[train_index], y_full.iloc[train_index])
ypred = model.predict(X_full.iloc[validation_index])
mses.append(mean_squared_error(y_full.iloc[validation_index], ypred))
maes.append(mean_absolute_error(y_full.iloc[validation_index], ypred))
rmse.append(np.sqrt(mean_squared_error(y_full.iloc[validation_index], ypred)))
mape.append(sum(abs((y_full.iloc[validation_index] - ypred) / y_full.iloc[validation_index])) * 100 / len(y_full.iloc[validation_index]))
rmsle.append(np.sqrt(mean_squared_log_error(y_full.iloc[validation_index], ypred)))
r2.append(r2_score(y_full.iloc[validation_index], ypred))
ar2.append(adj_r2(X_full,r2_score,y_full.iloc[validation_index], ypred))
#create a descriptive index labelling each time-series split %
index = [f'{x}%' for x in range(20,120,20)]
evaluations = pd.DataFrame(dict(mse=mses, mae=maes, rmse=rmse, mape=mape, rmsle=rmsle, r2=r2, adj_r2=ar2), index=index)
evaluations
# # if you feel you need to change the cost function in your regression model, the `make_scorer` function in sklearn can help you!!
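# A hedged sketch: wrap a custom MAPE in `make_scorer` so `cross_val_score` can report it (toy linear data and helper names of my own choosing):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# greater_is_better=False: MAPE is an error, so sklearn negates it internally
mape_scorer = make_scorer(mape, greater_is_better=False)

X = np.arange(20, dtype=float).reshape(-1, 1)
y = 3 * X.ravel() + 5  # perfectly linear, so the scores should be ~0
scores = cross_val_score(LinearRegression(), X, y, scoring=mape_scorer, cv=3)
print(scores)
```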
# ---
# # Out of scope - AIC!
| week_07/Evaluating_Forecasts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import logging
import json
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.backends.backend_pdf import PdfPages
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# -
# %matplotlib inline
# +
tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
for i in range(len(tableau20)):
r, g, b = tableau20[i]
tableau20[i] = (r / 255., g / 255., b / 255.)
# +
MORPH_ANALYSIS_PATH = os.path.join(os.environ['HOME'], "projects/nlg/experiments/morph-analysis")
def get_all_errors():
all_errors = {}
for lang in ['en', 'ru']:
d = os.path.join(MORPH_ANALYSIS_PATH, lang)
all_errors[lang] = {}
        with open(os.path.join(d, 'mlp')) as f:
            mlp_data = f.read().split('\n')
        with open(os.path.join(d, 'hard')) as f:
            hard_data = f.read().split('\n')
        with open(os.path.join(d, 'soft')) as f:
            soft_data = f.read().split('\n')
        with open(os.path.join(d, 'ref')) as f:
            ref_data = f.read().split('\n')
all_errors[lang]['mlp'] = [(x,ref_data[i]) for i, x in enumerate(mlp_data) if x != ref_data[i]]
all_errors[lang]['hard'] = [(x,ref_data[i]) for i, x in enumerate(hard_data) if x != ref_data[i]]
all_errors[lang]['soft'] = [(x,ref_data[i]) for i, x in enumerate(soft_data) if x != ref_data[i]]
assert len(mlp_data) == len(soft_data) == len(hard_data)
return all_errors
all_errors = get_all_errors()
# +
def get_indices(data_size, num_samples):
indices = np.random.choice(np.arange(data_size), data_size, replace=False)
return indices[:num_samples]
def analyze_one_model(model_errors):
num_total = len(model_errors)
logger.info('Total errors: %d', num_total)
logger.info('Errors breakdown:')
nocase_errors = [x for x in model_errors if x[0].lower() != x[1].lower()]
case_err_num = num_total - len(nocase_errors)
logger.info(' Case errors: %d', case_err_num)
nocase_err_num = len(nocase_errors)
logger.info(' Nocase errors: %d', nocase_err_num)
sample_nocase_err_indices = get_indices(nocase_err_num, 100)
sample_nocase_errors = [nocase_errors[i] for i in sample_nocase_err_indices]
return sample_nocase_errors
# -
en_mlp_errors = analyze_one_model(all_errors['en']['mlp'])
en_soft_errors = analyze_one_model(all_errors['en']['soft'])
with open('en_soft_errors.json', 'w') as errout:
json.dump(en_soft_errors, errout)
en_hard_errors = analyze_one_model(all_errors['en']['hard'])
with open('en_hard_errors.json', 'w') as errout:
json.dump(en_hard_errors, errout)
# +
def plot_en_morph_mlp_analysis_results():
# NOTE: hardcoded values, since the analysis was done manually
pp = PdfPages('morph_mlp_errors.pdf')
sizes = [42,29,29]
error_types = ['wrong lemma', 'wrong form', 'alt. form']
plot_labels = ['%s, %1.1f%%' % (l,s) for l,s in zip(error_types, sizes)]
colors = tableau20[:3]
plt.gca().axis("equal")
patches, texts = plt.pie(sizes, colors=colors, shadow=False, startangle=90)
plt.legend(patches, plot_labels,
bbox_to_anchor=(1,0.7), loc="upper right", fontsize=10,
bbox_transform=plt.gcf().transFigure)
plt.subplots_adjust(left=0.0, bottom=0.1, right=0.65)
plt.axis('equal')
pp.savefig()
pp.close()
def plot_en_morph_soft_analysis_results():
# NOTE: hardcoded values, since the analysis was done manually
pp = PdfPages('morph_soft_errors.pdf')
sizes = [8,17, 29, 27, 13]
error_types = ['wrong form', 'alt. form', 'non-existing form', 'proper noun err', 'wrong digit seq']
plot_labels = ['%s, %1.1f%%' % (l,s) for l,s in zip(error_types, sizes)]
colors = tableau20[:5]
patches, texts = plt.pie(sizes, colors=colors, shadow=False, startangle=90)
plt.legend(patches, plot_labels,
bbox_to_anchor=(1,0.7), loc="upper right", fontsize=10,
bbox_transform=plt.gcf().transFigure)
plt.subplots_adjust(left=0.0, bottom=0.1, right=0.65)
plt.axis('equal')
pp.savefig()
pp.close()
def plot_en_morph_hard_analysis_results():
# NOTE: hardcoded values, since the analysis was done manually
pp = PdfPages('morph_hard_errors.pdf')
sizes = [1, 25, 57, 4]
error_types = ['wrong lemma', 'wrong form', 'alt. form', 'non-existing form']
plot_labels = ['%s, %1.1f%%' % (l,s) for l,s in zip(error_types, sizes)]
colors = tableau20[:4]
patches, texts = plt.pie(sizes, colors=colors, shadow=False, startangle=90)
plt.legend(patches, plot_labels,
bbox_to_anchor=(1,0.7), loc="upper right", fontsize=10,
bbox_transform=plt.gcf().transFigure)
plt.subplots_adjust(left=0.0, bottom=0.1, right=0.65)
plt.axis('equal')
pp.savefig()
pp.close()
# -
plot_en_morph_mlp_analysis_results()
plot_en_morph_soft_analysis_results()
plot_en_morph_hard_analysis_results()
| components/utils/morph_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''algo'': conda)'
# name: python3
# ---
# +
import sqlalchemy
import pandas as pd
import numpy as np
import mplfinance as mpf
import utils.ta_lib_indicators as ti
import ipywidgets as widgets
import talib
# +
db_connection_string = 'sqlite:///./Resources/products.db'
engine = sqlalchemy.create_engine(db_connection_string)
inspector = sqlalchemy.inspect(engine)
table_names = inspector.get_table_names()
print(table_names)
# -
# Update table names by looking at the list created above
# MSFT was used to create the example, replace it with the symbol you used
stock_ticker = 'TSLA'
daily_df = pd.read_sql_table(
stock_ticker + '_1_Day_Candles',
con=engine,
index_col='Datetime',
)
minutely_df = pd.read_sql_table(
stock_ticker + '_1_Min_Candles',
con=engine,
index_col='Datetime'
)
minutely_slice = minutely_df.iloc[-30:]
minutely_candle_plot, ax = mpf.plot(
# portfolio_list[0],
minutely_slice,
type='candle',
volume=True,
returnfig=True,
mav=(7,20),
)
pattern_list = []
pattern_df = pd.DataFrame(list(ti.pattern_recognition.items()), columns=['Index', 'Pattern'])
pattern_df = pattern_df.set_index('Index')
# print(pattern_df)
for pattern, p_name in ti.pattern_recognition.items():
pattern_list.append(pattern)
print(pattern_list)
sel_pattern = widgets.SelectMultiple(
options=pattern_df['Pattern'],
value=['Doji'],
rows=min(25, len(pattern_list)),
description='Candle Pattern:'
)
overlap_list = []
items = ti.overlap_studies.items()
overlap_df = pd.DataFrame(list(items), columns=['Index', 'Overlap'])
overlap_df = overlap_df.set_index('Index')
for i, name in items:
overlap_list.append(i)
print(overlap_list)
sel_overlap = widgets.SelectMultiple(
options=overlap_df['Overlap'],
# value=['Doji'],
rows=min(25, len(overlap_list)),
description='Overlap Studies:'
)
momentum_list = []
items = ti.momentum_indicators.items()
momentum_df = pd.DataFrame(list(items), columns=['Index', 'Momentum'])
momentum_df = momentum_df.set_index('Index')
for i, name in items:
momentum_list.append(i)
print(momentum_list)
sel_momentum = widgets.SelectMultiple(
options=momentum_df['Momentum'],
# value=['Doji'],
rows=min(25, len(momentum_list)),
description='Momentum Indicators:'
)
volume_list = []
items = ti.volume_indicators.items()
volume_df = pd.DataFrame(list(items), columns=['Index', 'Volume'])
volume_df = volume_df.set_index('Index')
for i, name in items:
volume_list.append(i)
print(volume_list)
sel_volume = widgets.SelectMultiple(
options=volume_df['Volume'],
# value=['Doji'],
rows=min(25, len(volume_list)),
description='Volume Indicators:'
)
volatility_list = []
items = ti.volatility_indicators.items()
volatility_df = pd.DataFrame(list(items), columns=['Index', 'Volatility'])
volatility_df = volatility_df.set_index('Index')
for i, name in items:
volatility_list.append(i)
print(volatility_list)
sel_volatility = widgets.SelectMultiple(
options=volatility_df['Volatility'],
# value=['Doji'],
rows=min(25, len(volatility_list)),
description='Volatility Indicators:'
)
price_transform_list = []
items = ti.price_transform.items()
price_transform_df = pd.DataFrame(list(items), columns=['Index', 'Price Transform'])
price_transform_df = price_transform_df.set_index('Index')
for i, name in items:
price_transform_list.append(i)
print(price_transform_list)
sel_price_transform = widgets.SelectMultiple(
options=price_transform_df['Price Transform'],
# value=['Doji'],
rows=min(25, len(price_transform_list)),
description='Price Transform:'
)
cycle_list = []
items = ti.cycle_indicators.items()
cycle_df = pd.DataFrame(list(items), columns=['Index', 'Cycle'])
cycle_df = cycle_df.set_index('Index')
for i, name in items:
cycle_list.append(i)
print(cycle_list)
sel_cycle = widgets.SelectMultiple(
options=cycle_df['Cycle'],
# value=['Doji'],
rows=min(25, len(cycle_list)),
description='Cycle Indicators:'
)
statistic_functions_list = []
items = ti.statistic_functions.items()
statistic_functions_df = pd.DataFrame(list(items), columns=['Index', 'Statistic Functions'])
statistic_functions_df = statistic_functions_df.set_index('Index')
for i, name in items:
statistic_functions_list.append(i)
print(statistic_functions_list)
sel_statistic_functions = widgets.SelectMultiple(
options=statistic_functions_df['Statistic Functions'],
# value=['Doji'],
rows=min(25, len(statistic_functions_list)),
description='Statistic Functions:'
)
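Each indicator category above repeats the same "dict → DataFrame → name list → `SelectMultiple`" pattern with only the dict, column label, and description changing. A hypothetical helper (not part of the original notebook) could build the table and name list once; the widget construction would then reuse its output:

```python
# Hypothetical helper consolidating the repeated per-category setup above.
# e.g. overlap_df, overlap_list = build_indicator_table(ti.overlap_studies, 'Overlap')
import pandas as pd

def build_indicator_table(indicator_dict, column_name):
    """Return (DataFrame indexed by TA-Lib function name, list of those names)."""
    df = pd.DataFrame(list(indicator_dict.items()), columns=['Index', column_name])
    df = df.set_index('Index')
    return df, list(indicator_dict)

demo_df, demo_list = build_indicator_table(
    {'SMA': 'Simple Moving Average', 'EMA': 'Exponential Moving Average'},
    'Overlap')
```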
df = minutely_df.copy()
output = widgets.Output()
button_submit = widgets.Button(
description='Submit',
disabled=False,
)
display(
sel_overlap,
sel_momentum,
sel_volume,
sel_volatility,
sel_price_transform,
sel_cycle,
sel_pattern,
sel_statistic_functions,
button_submit,
output,
)
with output:
print(list(pattern_df[pattern_df['Pattern'].isin(list(sel_pattern.value))].index))
# +
def show_df(b):
df = minutely_df.copy()
output.clear_output()
sel_overlap_list = list(overlap_df[overlap_df['Overlap'].isin(list(sel_overlap.value))].index)
sel_momentum_list = list(momentum_df[momentum_df['Momentum'].isin(list(sel_momentum.value))].index)
sel_volume_list = list(volume_df[volume_df['Volume'].isin(list(sel_volume.value))].index)
sel_volatility_list = list(volatility_df[volatility_df['Volatility'].isin(list(sel_volatility.value))].index)
sel_price_transform_list = list(price_transform_df[price_transform_df['Price Transform'].isin(list(sel_price_transform.value))].index)
sel_cycle_list = list(cycle_df[cycle_df['Cycle'].isin(list(sel_cycle.value))].index)
sel_pattern_list = list(pattern_df[pattern_df['Pattern'].isin(list(sel_pattern.value))].index)
sel_statistic_functions_list = list(statistic_functions_df[statistic_functions_df['Statistic Functions'].isin(list(sel_statistic_functions.value))].index)
print('printing')
for overlap in sel_overlap_list:
print(overlap)
pattern_function = getattr(talib, overlap)
try:
result = pattern_function(df['Open'], df['High'], df['Low'], df['Close'])
df[overlap] = result
except Exception as e:
print(f"{type(e)} Exception! {e}")
for pattern in sel_pattern_list:
print(pattern)
pattern_function = getattr(talib, pattern)
try:
result = pattern_function(df['Open'], df['High'], df['Low'], df['Close'])
df[pattern] = result
except Exception as e:
print(f"{type(e)} Exception! {e}")
# print(df.head())
# return df
with output:
# print(f"{sel_pattern_list}")
print(df.head())
# return sel_pattern_list
button_submit.on_click(show_df)
# -
# df = minutely_df.copy()
sel_pattern_list = list(pattern_df[pattern_df['Pattern'].isin(list(sel_pattern.value))].index)
for pattern in sel_pattern_list:
pattern_function = getattr(talib, pattern)
try:
result = pattern_function(df['Open'], df['High'], df['Low'], df['Close'])
df[pattern] = result
except Exception as e:
print(f"{type(e)} Exception! {e}")
print(df.head())
len(sel_pattern_list)
df['Sum Patterns'] = df.iloc[:, -(len(sel_pattern_list)):].sum(axis=1)
# df.drop(columns=sel_pattern_list, inplace=True)
# +
atr_function = getattr(talib, 'ATR')
atr_result = atr_function(df['High'], df['Low'], df['Close'], timeperiod=14)
atr_factor = 2.5
df['Trailing Stop'] = df['Close'] - (atr_result * atr_factor)
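`talib.ATR` computes the average true range: the true range smoothed with Wilder's recursive average. A rough NumPy re-derivation of the idea — a sketch for intuition, not a drop-in replacement for TA-Lib — looks like:

```python
# Rough sketch of what talib.ATR computes: true range (largest of high-low and
# the gaps to the previous close) smoothed with Wilder's recursive average.
import numpy as np

def atr(high, low, close, timeperiod=14):
    high, low, close = map(np.asarray, (high, low, close))
    prev_close = np.concatenate(([close[0]], close[:-1]))
    tr = np.maximum.reduce([high - low,
                            np.abs(high - prev_close),
                            np.abs(low - prev_close)])
    out = np.full(len(tr), np.nan)
    out[timeperiod - 1] = tr[:timeperiod].mean()  # seed with a simple average
    for i in range(timeperiod, len(tr)):
        out[i] = (out[i - 1] * (timeperiod - 1) + tr[i]) / timeperiod
    return out

# Constant 1-point range -> ATR settles at 1.0
demo_atr = atr([2.0] * 20, [1.0] * 20, [1.5] * 20, timeperiod=14)
```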
# +
df['Trade Signal'] = 0.0
threshold_value = 0.0
def check_sum_value(sum_value):
    if sum_value > threshold_value:
        return 1.0
    elif sum_value < -threshold_value:
        return -1.0
    else:
        return 0.0
df['Trade Signal'] = df['Sum Patterns'].apply(check_sum_value)
df.drop(columns='Sum Patterns', inplace=True)
# -
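With `threshold_value = 0.0`, `check_sum_value` reduces to the sign function; a quick illustrative check on hypothetical pattern sums:

```python
# With a zero threshold, the trade signal above is just the sign of the
# pattern sum (TA-Lib pattern functions emit +/-100 or 0 per bar).
import numpy as np

sums = np.array([200.0, -100.0, 0.0, 300.0])
signals = np.sign(sums)
```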
df['Pct Change'] = df['Close'].pct_change()
df['Stop Loss'] = np.where(df['Trade Signal']==1,df['Trailing Stop'],0.0)
print(df.iloc[:, -4:].tail(20))
print(df['Trade Signal'].value_counts())
last_stop = 0.0
# df['Recalculated Stop Loss'] = pd.NA
for index, row in df.iterrows():
    if row['Pct Change'] > 0.0 and row['Trailing Stop'] > last_stop:
        df.loc[index, 'Stop Loss'] = row['Trailing Stop']
    else:
        df.loc[index, 'Stop Loss'] = last_stop
    last_stop = df.loc[index, 'Stop Loss']  # read back the just-written value; `row` still holds the pre-loop value
print(df.iloc[:, -5:].head(20))
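The loop above only ever ratchets the stop upward. Ignoring the `Pct Change` gate, the same ratchet can be expressed without a Python loop as a running maximum — a simplified sketch, not an exact replacement:

```python
# Simplified, vectorized version of the ratcheting stop: a running maximum of
# the trailing-stop series. This ignores the Pct Change condition in the loop
# above, so it is an approximation of that logic.
import pandas as pd

trailing = pd.Series([10.0, 11.0, 10.5, 12.0, 11.0])
ratcheted_stop = trailing.cummax()
```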
df.dropna().to_sql('Indicators', con=engine, if_exists='replace')
| products.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
pd.set_option("display.max_columns", 500)
pd.set_option("display.max_rows", 500)
# ## Table description generation
# Enter the following info:
# - Table name
# - Location
# - Separator
# - Encoding (optional)
# - Decimal mark (optional)
table = "DSS_SINIESTROS_AUTOS.csv "
location = "../../data/raw"
sep = ';'
encoding = 'latin1'
decimal = ','
# ### Make a first view of the dataset to check most interesting columns
# **Run this if it's a big file**
for chunk in pd.read_csv(f"{location}/{table}",
sep=sep,
encoding=encoding,
decimal=decimal,
chunksize=1000000):
df = chunk
break
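`chunksize` makes `pd.read_csv` return an iterator of DataFrames, so taking only the first chunk (as above) previews a big file without loading all of it. A self-contained illustration with an in-memory CSV:

```python
# Chunked reading: pd.read_csv with chunksize yields DataFrames of at most
# `chunksize` rows each; here 10 rows split into chunks of 4.
import io
import pandas as pd

csv_text = "a;b\n" + "\n".join(f"{i};{i * 2}" for i in range(10))
chunk_lengths = [len(chunk) for chunk in pd.read_csv(io.StringIO(csv_text),
                                                     sep=';', chunksize=4)]
```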
# **Run this if it's a relatively small file**
df = pd.read_csv(f"{location}/{table}",
sep=sep,
encoding=encoding,
decimal=decimal)
df.head(15)
df.dtypes
", ".join(sorted(df.columns))
df.isna().mean().sort_values(ascending=False)
# *Based on last output, fill this list to mark most relevant columns*
to_use = [col for col in df.columns if df[col].isna().mean() < 0.7]
sorted(to_use)
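The 70% null-rate cutoff used to build `to_use` can be seen on a toy frame:

```python
# Toy demonstration of the null-rate column filter used to build `to_use`:
# drop columns where more than 70% of the values are missing.
import numpy as np
import pandas as pd

toy = pd.DataFrame({'mostly_full': [1, 2, 3, 4],
                    'mostly_empty': [np.nan, np.nan, np.nan, 4]})
null_rates = toy.isna().mean()
kept = [col for col in toy.columns if null_rates[col] < 0.7]
```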
# ### Now write the file
# **If it was a big file, read it completely with this line**
chunks = pd.read_csv(f"{location}/{table}",
sep=sep,
encoding=encoding,
decimal=decimal,
chunksize=1000000)
df = pd.concat(chunks)
# +
f = open(f'../../docs/{table} feature description.csv','w')
f.write('Column;Used;Null Rate; Type; Unique values; Values\n')
for column in df.columns:
null_rate = round(df[column].isna().mean()*100,2)
unique_vals = df[column].nunique()
if (column in to_use) and null_rate < 50 and unique_vals > 1:  # null_rate is in percent
used = 'X'
else:
used=''
dtype = df[column].dtype
if(dtype == 'object'):
values = f"Top 10:\n{df[column].value_counts(dropna=False).head(10).to_string()}"
else:
values = f'[{df[column].min()};{df[column].max()}]'
f.write(f'{column};{used};{null_rate};{dtype};{unique_vals};"{values}"\n')
f.close()
# -
| notebooks/1. Analysis/Generate table description - DSS_SINIESTROS_AUTOS.tsv .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false
# 
#
# Exercise material of the MSc-level course **Numerical Methods in Geotechnical Engineering**.
# Held at Technische Universität Bergakademie Freiberg.
#
# Comments to:
#
# *Prof. Dr. <NAME>
# Chair of Soil Mechanics and Foundation Engineering
# Geotechnical Institute
# Technische Universität Bergakademie Freiberg.*
#
# https://tu-freiberg.de/en/soilmechanics
#
#
# -
| TUBAF_header.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bOChJSNXtC9g" colab_type="text"
# # Convolutional Neural Networks
# + [markdown] id="OLIxEDq6VhvZ" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
#
# In this lesson we will learn the basics of Convolutional Neural Networks (CNNs) applied to text for natural language processing (NLP) tasks. CNNs are traditionally used on images and there are plenty of [tutorials](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) that cover this. But we're going to focus on using CNN on text data which yields amazing results.
#
#
# + [markdown] id="VoMq0eFRvugb" colab_type="text"
# # Overview
#
# The diagram below from this [paper](https://arxiv.org/abs/1510.03820) shows how 1D convolution is applied to the words in a sentence.
# + [markdown] id="ziGJNhiQeiGN" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/cnn_text.png" width=500>
# + [markdown] id="qWro5T5qTJJL" colab_type="text"
# * **Objective:** Detect spatial substructure from input data.
# * **Advantages:**
# * Small number of weights (shared)
# * Parallelizable
# * Detects spatial substructures (feature extractors)
# * Interpretable via filters
# * Used in images/text/time-series, etc.
# * **Disadvantages:**
# * Many hyperparameters (kernel size, strides, etc.)
# * Inputs have to be of the same width (image dimensions, text length, etc.)
# * **Miscellaneous:**
# * Lots of deep CNN architectures, constantly updated for SOTA performance
# + [markdown] id="8nCsZGyWhI9f" colab_type="text"
# # Filters
# + [markdown] id="lxpgRzIjiVHv" colab_type="text"
# At the core of CNNs are filters (weights, kernels, etc.) which convolve (slide) across our input to extract relevant features. The filters are initialized randomly but learn to pick up meaningful features from the input that aid in optimizing for the objective. We're going to teach CNNs in an unorthodox way, focusing entirely on applying them to 2D text data. Each input is composed of words, and we will represent each word as a one-hot encoded vector, which gives us our 2D input. The intuition here is that each filter represents a feature, and we will use this filter on other inputs to capture the same feature. This is known as parameter sharing.
#
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conv.gif" width=400>
# + id="1kTABJyYj91S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5f7c9497-63bf-46e3-e01b-74b5902ae569"
# Loading PyTorch library
# !pip3 install torch
# + id="kz9D2rrdmSl9" colab_type="code" colab={}
import torch
import torch.nn as nn
# + [markdown] id="1q1FiiIHXjI_" colab_type="text"
# Our inputs are a batch of 2D text data. Let's make an input with 64 samples, where each sample has 8 words and each word is represented by an array of 10 values (one-hot encoded with a vocab size of 10). This gives our inputs the size (64, 8, 10). The [PyTorch CNN modules](https://pytorch.org/docs/stable/nn.html#convolution-functions) prefer inputs to have the channel dim (the one-hot vector dim in our case) in the second position, so our inputs are of shape (64, 10, 8).
# + [markdown] id="tFfYwCcjZj79" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/cnn_text1.png" width=400>
# + id="b6G2nBvOxR-e" colab_type="code" outputId="00f935dc-b4be-472c-8a9c-a3dc608955e9" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Assume all our inputs have the same # of words
batch_size = 64
sequence_size = 8 # words per input
one_hot_size = 10 # vocab size (num_input_channels)
x = torch.randn(batch_size, one_hot_size, sequence_size)
print("Size: {}".format(x.shape))
# + [markdown] id="GJmtay_UZohM" colab_type="text"
# We want to convolve on this input using filters. For simplicity we will use just 5 filters, each of size (1, 2) with the same depth as the number of channels (one_hot_size). This gives our filters a shape of (5, 2, 10), but recall that PyTorch CNN modules prefer the channel dim (the one-hot vector dim in our case) in the second position, so the filter is of shape (5, 10, 2).
# + [markdown] id="ZJF0l88qb-21" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/cnn_text2.png" width=400>
# + id="WMK2TzgDxR8B" colab_type="code" outputId="32609363-1d3d-4437-c709-57bbac524064" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Create filters for a conv layer
out_channels = 5 # of filters
kernel_size = 2 # filters size 2
conv1 = nn.Conv1d(in_channels=one_hot_size, out_channels=out_channels, kernel_size=kernel_size)
print("Size: {}".format(conv1.weight.shape))
print("Filter size: {}".format(conv1.kernel_size[0]))
print("Padding: {}".format(conv1.padding[0]))
print("Stride: {}".format(conv1.stride[0]))
# + [markdown] id="lAcYxhDIbeWE" colab_type="text"
# When we apply this filter on our inputs, we receive an output of shape (64, 5, 7). We get 64 for the batch size, 5 for the channel dim because we used 5 filters and 7 for the conv outputs because:
#
# $\frac{W - F + 2P}{S} + 1 = \frac{8 - 2 + 2(0)}{1} + 1 = 7$
#
# where:
# * W: width of each input
# * F: filter size
# * P: padding
# * S: stride
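The arithmetic can be checked with a small helper implementing the formula above in plain Python:

```python
# Output-width formula for a 1D convolution/pooling layer: (W - F + 2P) / S + 1,
# with integer (floor) division for strides that don't divide evenly.
def conv_out_width(W, F, P=0, S=1):
    return (W - F + 2 * P) // S + 1

conv_width = conv_out_width(W=8, F=2, P=0, S=1)  # the conv example above
pool_width = conv_out_width(W=7, F=2, P=0, S=2)  # the max-pool example later on
```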
# + [markdown] id="2c_KKtP4hrJx" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/cnn_text3.png" width=400>
# + id="yjxtrM89xR5a" colab_type="code" outputId="d7787d5f-8450-4cdc-b234-03b68452c739" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Convolve using filters
conv_output = conv1(x)
print("Size: {}".format(conv_output.shape))
# + [markdown] id="vwTtF7bBuZvF" colab_type="text"
# # Pooling
# + [markdown] id="VXBbKPs1ua9G" colab_type="text"
# The result of convolving filters on an input is a feature map. Due to the nature of convolution and overlaps, our feature map will have lots of redundant information. Pooling is a way to summarize a high-dimensional feature map into a lower dimensional one for simplified downstream computation. The pooling operation can be the max value, average, etc. in a certain receptive field.
#
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/pool.jpeg" width=450>
# + id="VCag6lk2mSwU" colab_type="code" outputId="d60fdc17-1791-41ba-f7ae-86e78553e20a" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Max pooling
kernel_size = 2
pool1 = nn.MaxPool1d(kernel_size=kernel_size, stride=2, padding=0)
pool_output = pool1(conv_output)
print("Size: {}".format(pool_output.shape))
# + [markdown] id="c_e4QRFwvTt8" colab_type="text"
# $\left\lfloor\frac{W-F}{S}\right\rfloor + 1 = \left\lfloor\frac{7-2}{2}\right\rfloor + 1 = \lfloor 2.5\rfloor + 1 = 3$
# + [markdown] id="l9rL1EWIfi-y" colab_type="text"
# # CNNs on text
# + [markdown] id="aWtHDOJgHZvk" colab_type="text"
# We're going to use convolutional neural networks on text data, which typically involves convolving on the character-level representation of the text to capture meaningful n-grams.
#
# You can easily use this setup for [time series](https://arxiv.org/abs/1807.10707) data or [combine it](https://arxiv.org/abs/1808.04928) with other networks. For text data, we will create filters of varying kernel sizes (1,2), (1,3), and (1,4) which act as feature selectors of varying n-gram sizes. The outputs are concatenated and fed into a fully-connected layer for class predictions. In our example, we will be applying 1D convolutions on the letters in a word. In the [embeddings notebook](https://colab.research.google.com/github/GokuMohandas/practicalAI/blob/master/notebooks/12_Embeddings.ipynb), we will apply 1D convolutions on the words in a sentence.
#
# **Word embeddings**: capture the temporal correlations among
# adjacent tokens so that similar words have similar representations. Ex. "New Jersey" is close to "NJ" is close to "Garden State", etc.
#
# **Char embeddings**: create representations that map words at a character level. Ex. "toy" and "toys" will be close to each other.
# + [markdown] id="bVBZxbaAtS9u" colab_type="text"
# # Set up
# + id="y8QSdEcDtXUs" colab_type="code" colab={}
import os
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import torch
# + id="VADCXjMwtXYN" colab_type="code" colab={}
# Set Numpy and PyTorch seeds
def set_seeds(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
# Creating directories
def create_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
# + id="mpiCYECstXbT" colab_type="code" outputId="0ef00d64-963e-4e23-cbea-d2e8187f5553" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Arguments
args = Namespace(
seed=1234,
cuda=False,
shuffle=True,
data_file="names.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="names",
train_size=0.7,
val_size=0.15,
test_size=0.15,
num_epochs=20,
early_stopping_criteria=5,
learning_rate=1e-3,
batch_size=64,
num_filters=100,
dropout_p=0.1,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)
# Create save dir
create_dirs(args.save_dir)
# Expand filepaths
args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir, args.model_state_file)
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
# + [markdown] id="ptb4hJ4Bw8YU" colab_type="text"
# # Data
# + id="bNxZQUqfmS0B" colab_type="code" colab={}
import re
import urllib
# + id="MBdQpUTQtMgu" colab_type="code" colab={}
# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/surnames.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(args.data_file, 'wb') as fp:
fp.write(html)
# + id="6PYCeGrStMj7" colab_type="code" outputId="849f86d5-1bdc-404f-bb45-44b3532a5069" colab={"base_uri": "https://localhost:8080/", "height": 204}
# Raw data
df = pd.read_csv(args.data_file, header=0)
df.head()
# + id="pbfVM-YatMnD" colab_type="code" outputId="58746131-868b-4ab6-cef3-147f137e04e6" colab={"base_uri": "https://localhost:8080/", "height": 323}
# Split by nationality
by_nationality = collections.defaultdict(list)
for _, row in df.iterrows():
by_nationality[row.nationality].append(row.to_dict())
for nationality in by_nationality:
print ("{0}: {1}".format(nationality, len(by_nationality[nationality])))
# + id="KdGOoKFjtMpz" colab_type="code" colab={}
# Create split data
final_list = []
for _, item_list in sorted(by_nationality.items()):
if args.shuffle:
np.random.shuffle(item_list)
n = len(item_list)
n_train = int(args.train_size*n)
n_val = int(args.val_size*n)
n_test = int(args.test_size*n)
# Give data point a split attribute
for item in item_list[:n_train]:
item['split'] = 'train'
for item in item_list[n_train:n_train+n_val]:
item['split'] = 'val'
for item in item_list[n_train+n_val:]:
item['split'] = 'test'
# Add to final list
final_list.extend(item_list)
# + id="DyDwlzzKtMsz" colab_type="code" outputId="ba9fc7a2-819f-4889-941b-05b2868cb342" colab={"base_uri": "https://localhost:8080/", "height": 85}
# df with split datasets
split_df = pd.DataFrame(final_list)
split_df["split"].value_counts()
# + id="17aHMQOwtMvh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="eba52fd0-e276-459c-de8a-633f6ed9e05a"
# Preprocessing
def preprocess_text(text):
text = ' '.join(word.lower() for word in text.split(" "))
text = re.sub(r"([.,!?])", r" \1 ", text)
text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)
return text
split_df.surname = split_df.surname.apply(preprocess_text)
split_df.head()
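A quick trace of `preprocess_text` on a hypothetical surname shows the effect of each substitution:

```python
# Self-contained copy of preprocess_text, traced on a hypothetical input.
import re

def preprocess_text(text):
    text = ' '.join(word.lower() for word in text.split(" "))
    text = re.sub(r"([.,!?])", r" \1 ", text)     # pad punctuation with spaces
    text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)  # collapse everything else to one space
    return text

# "O'Neil!" -> lowercase -> "o'neil ! " -> apostrophe collapsed to a space
cleaned = preprocess_text("O'Neil!")
```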
# + [markdown] id="6nZBgfQTuAA8" colab_type="text"
# # Vocabulary
# + id="TeRVQlRZuBgA" colab_type="code" colab={}
class Vocabulary(object):
def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"):
# Token to index
if token_to_idx is None:
token_to_idx = {}
self.token_to_idx = token_to_idx
# Index to token
self.idx_to_token = {idx: token \
for token, idx in self.token_to_idx.items()}
# Add unknown token
self.add_unk = add_unk
self.unk_token = unk_token
if self.add_unk:
self.unk_index = self.add_token(self.unk_token)
def to_serializable(self):
return {'token_to_idx': self.token_to_idx,
'add_unk': self.add_unk, 'unk_token': self.unk_token}
@classmethod
def from_serializable(cls, contents):
return cls(**contents)
def add_token(self, token):
if token in self.token_to_idx:
index = self.token_to_idx[token]
else:
index = len(self.token_to_idx)
self.token_to_idx[token] = index
self.idx_to_token[index] = token
return index
def add_tokens(self, tokens):
return [self.add_token(token) for token in tokens]
def lookup_token(self, token):
if self.add_unk:
index = self.token_to_idx.get(token, self.unk_index)
else:
index = self.token_to_idx[token]
return index
def lookup_index(self, index):
if index not in self.idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self.idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self.token_to_idx)
# + id="bH8LMH9wuBi9" colab_type="code" outputId="8293745c-751e-40d9-9993-78e84093e203" colab={"base_uri": "https://localhost:8080/", "height": 68}
# Vocabulary instance
nationality_vocab = Vocabulary(add_unk=False)
for index, row in df.iterrows():
nationality_vocab.add_token(row.nationality)
print (nationality_vocab) # __str__
index = nationality_vocab.lookup_token("English")
print (index)
print (nationality_vocab.lookup_index(index))
# + [markdown] id="57a1lzHPuHHm" colab_type="text"
# # Vectorizer
# + id="MwS5BEV-uBlt" colab_type="code" colab={}
class SurnameVectorizer(object):
def __init__(self, surname_vocab, nationality_vocab):
self.surname_vocab = surname_vocab
self.nationality_vocab = nationality_vocab
def vectorize(self, surname):
one_hot_matrix_size = (len(surname), len(self.surname_vocab))
one_hot_matrix = np.zeros(one_hot_matrix_size, dtype=np.float32)
for position_index, character in enumerate(surname):
character_index = self.surname_vocab.lookup_token(character)
one_hot_matrix[position_index][character_index] = 1
return one_hot_matrix
def unvectorize(self, one_hot_matrix):
len_name = len(one_hot_matrix)
indices = np.zeros(len_name, dtype=int)  # integer dtype so lookup_index receives int keys
for i in range(len_name):
indices[i] = np.where(one_hot_matrix[i]==1)[0][0]
surname = [self.surname_vocab.lookup_index(index) for index in indices]
return surname
@classmethod
def from_dataframe(cls, df):
surname_vocab = Vocabulary(add_unk=True)
nationality_vocab = Vocabulary(add_unk=False)
# Create vocabularies
for index, row in df.iterrows():
for letter in row.surname: # char-level tokenization
surname_vocab.add_token(letter)
nationality_vocab.add_token(row.nationality)
return cls(surname_vocab, nationality_vocab)
@classmethod
def from_serializable(cls, contents):
surname_vocab = Vocabulary.from_serializable(contents['surname_vocab'])
nationality_vocab = Vocabulary.from_serializable(contents['nationality_vocab'])
return cls(surname_vocab, nationality_vocab)
def to_serializable(self):
return {'surname_vocab': self.surname_vocab.to_serializable(),
'nationality_vocab': self.nationality_vocab.to_serializable()}
# + id="zq7RoFAXuBo9" colab_type="code" outputId="3555b683-841a-4e2a-a246-c8ae4cb74636" colab={"base_uri": "https://localhost:8080/", "height": 221}
# Vectorizer instance
vectorizer = SurnameVectorizer.from_dataframe(split_df)
print (vectorizer.surname_vocab)
print (vectorizer.nationality_vocab)
vectorized_surname = vectorizer.vectorize(preprocess_text("goku"))
print (np.shape(vectorized_surname))
print (vectorized_surname)
print (vectorizer.unvectorize(vectorized_surname))
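`unvectorize` recovers indices with `np.where` row by row; `argmax` along axis 1 does the same in one call. A small equivalence check on a hypothetical one-hot matrix:

```python
# One-hot index recovery: row-wise np.where (as in unvectorize) vs. one argmax.
import numpy as np

one_hot = np.zeros((3, 4), dtype=np.float32)
for row, col in enumerate([2, 0, 3]):
    one_hot[row][col] = 1

where_indices = [int(np.where(one_hot[i] == 1)[0][0]) for i in range(len(one_hot))]
argmax_indices = one_hot.argmax(axis=1).tolist()
```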
# + [markdown] id="wwQ8MNp5ZfeG" colab_type="text"
# **Note**: Unlike the bagged one-hot encoding method in the MLP notebook, we are able to preserve the semantic structure of the surnames. We can use one-hot encoding here because we are working with characters, but when we process text with large vocabularies this method simply can't scale. We'll explore embedding-based methods in subsequent notebooks.
# + [markdown] id="Mnf7gXgKuOgp" colab_type="text"
# # Dataset
# + id="YYqzM53fuBrf" colab_type="code" colab={}
from torch.utils.data import Dataset, DataLoader
# + id="gjolk855uPrA" colab_type="code" colab={}
class SurnameDataset(Dataset):
def __init__(self, df, vectorizer):
self.df = df
self.vectorizer = vectorizer
# Data splits
self.train_df = self.df[self.df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.df[self.df.split=='val']
self.val_size = len(self.val_df)
self.test_df = self.df[self.df.split=='test']
self.test_size = len(self.test_df)
self.lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.val_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
# Class weights (for imbalances)
class_counts = df.nationality.value_counts().to_dict()
def sort_key(item):
return self.vectorizer.nationality_vocab.lookup_token(item[0])
sorted_counts = sorted(class_counts.items(), key=sort_key)
frequencies = [count for _, count in sorted_counts]
self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)
@classmethod
def load_dataset_and_make_vectorizer(cls, df):
train_df = df[df.split=='train']
return cls(df, SurnameVectorizer.from_dataframe(train_df))
@classmethod
def load_dataset_and_load_vectorizer(cls, df, vectorizer_filepath):
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(df, vectorizer)
@staticmethod
def load_vectorizer_only(vectorizer_filepath):
with open(vectorizer_filepath) as fp:
return SurnameVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
with open(vectorizer_filepath, "w") as fp:
json.dump(self.vectorizer.to_serializable(), fp)
def set_split(self, split="train"):
self.target_split = split
self.target_df, self.target_size = self.lookup_dict[split]
def __str__(self):
return "<Dataset(split={0}, size={1})>".format(
self.target_split, self.target_size)
def __len__(self):
return self.target_size
def __getitem__(self, index):
row = self.target_df.iloc[index]
surname_vector = self.vectorizer.vectorize(row.surname)
nationality_index = self.vectorizer.nationality_vocab.lookup_token(row.nationality)
return {'surname': surname_vector, 'nationality': nationality_index}
def get_num_batches(self, batch_size):
return len(self) // batch_size
def generate_batches(self, batch_size, collate_fn, shuffle=True,
drop_last=True, device="cpu"):
dataloader = DataLoader(dataset=self, batch_size=batch_size,
collate_fn=collate_fn, shuffle=shuffle,
drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
# + id="hvy-CJVSuPuS" colab_type="code" outputId="878b8447-2345-4918-d621-182583e88b7e" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Dataset instance
dataset = SurnameDataset.load_dataset_and_make_vectorizer(split_df)
print (dataset) # __str__
print (np.shape(dataset[5]['surname'])) # __getitem__
print (dataset.class_weights)
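The inverse-frequency class weights can be computed without PyTorch; a NumPy sketch of the same idea with hypothetical class counts:

```python
# Inverse-frequency class weights, as in SurnameDataset above but with NumPy
# instead of torch. The counts here are hypothetical.
import numpy as np

class_counts = {'english': 300, 'russian': 100, 'arabic': 50}
frequencies = np.array([class_counts[c] for c in sorted(class_counts)])
class_weights = 1.0 / frequencies  # rarer classes get larger weights
```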
# + [markdown] id="XY0CqM2Rd3Im" colab_type="text"
# # Model
# + id="pWGpAzKPd32f" colab_type="code" colab={}
import torch.nn as nn
import torch.nn.functional as F
# + id="d7Q0_nkjd30L" colab_type="code" colab={}
class SurnameModel(nn.Module):
def __init__(self, num_input_channels, num_output_channels, num_classes, dropout_p):
super(SurnameModel, self).__init__()
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_output_channels,
kernel_size=f) for f in [2,3,4]])
self.dropout = nn.Dropout(dropout_p)
# FC weights
self.fc1 = nn.Linear(num_output_channels*3, num_classes)
def forward(self, x, channel_first=False, apply_softmax=False):
# Rearrange input so num_input_channels is in dim 1 (N, C, L)
if not channel_first:
x = x.transpose(1, 2)
# Conv outputs
z = [conv(x) for conv in self.conv]
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
z = [F.relu(zz) for zz in z]
# Concat conv outputs
z = torch.cat(z, 1)
z = self.dropout(z)
# FC layer
y_pred = self.fc1(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
# + [markdown] id="7XlJwSKQkL_C" colab_type="text"
# # Training
# + [markdown] id="rh_1heUNSUYN" colab_type="text"
# **Padding:** the inputs in a particular batch must all have the same shape. Our vectorizer converts the tokens into vectorized form, but within a batch the inputs can have various lengths. The solution is to determine the longest input in the batch and pad all the other inputs to match that length. Usually, the shorter inputs in the batch are padded with zero vectors.
#
# We do this with the pad_seq function in the Trainer class, which is invoked by the collate_fn passed to the generate_batches function in the Dataset class. Essentially, the batch generator collects samples into a batch, and we use collate_fn to find the largest input and pad all the other inputs in the batch to get a uniform input shape.
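The idea can be sketched in isolation with NumPy (the `pad_batch` helper, the toy vocabulary size, and the shapes below are illustrative, not the Trainer's actual code):

```python
import numpy as np

def pad_batch(sequences, vocab_size):
    """Pad variable-length one-hot sequences with zero rows to a common length."""
    max_len = max(len(seq) for seq in sequences)
    padded = np.zeros((len(sequences), max_len, vocab_size), dtype=np.int64)
    for i, seq in enumerate(sequences):
        padded[i, :len(seq)] = seq  # shorter inputs keep trailing zero vectors
    return padded

# Two one-hot encoded "surnames" of lengths 2 and 4 over a 5-token vocabulary
short = np.eye(5, dtype=np.int64)[[1, 3]]
long_ = np.eye(5, dtype=np.int64)[[0, 2, 2, 4]]
batch = pad_batch([short, long_], vocab_size=5)
print(batch.shape)  # (2, 4, 5) -- both sequences padded to length 4
```

Once padded, the whole batch can be stacked into a single tensor, which is exactly what collate_fn needs to produce.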
# + id="wLLmIuKRkNYW" colab_type="code" colab={}
import torch.optim as optim
# + id="sV-Dc_5ykNgS" colab_type="code" colab={}
class Trainer(object):
def __init__(self, dataset, model, model_state_file, save_dir, device, shuffle,
num_epochs, batch_size, learning_rate, early_stopping_criteria):
self.dataset = dataset
self.class_weights = dataset.class_weights.to(device)
self.model = model.to(device)
self.save_dir = save_dir
self.device = device
self.shuffle = shuffle
self.num_epochs = num_epochs
self.batch_size = batch_size
self.loss_func = nn.CrossEntropyLoss(self.class_weights)
self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)
self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
optimizer=self.optimizer, mode='min', factor=0.5, patience=1)
self.train_state = {
'done_training': False,
'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'early_stopping_criteria': early_stopping_criteria,
'learning_rate': learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': model_state_file}
def update_train_state(self):
# Verbose
print ("[EPOCH]: {0} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format(
self.train_state['epoch_index'], self.train_state['learning_rate'],
self.train_state['train_loss'][-1], self.train_state['train_acc'][-1],
self.train_state['val_loss'][-1], self.train_state['val_acc'][-1]))
# Save one model at least
if self.train_state['epoch_index'] == 0:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
self.train_state['stop_early'] = False
# Save model if performance improved
elif self.train_state['epoch_index'] >= 1:
loss_tm1, loss_t = self.train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= self.train_state['early_stopping_best_val']:
# Update step
self.train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < self.train_state['early_stopping_best_val']:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
# Reset early stopping step
self.train_state['early_stopping_step'] = 0
# Stop early ?
self.train_state['stop_early'] = self.train_state['early_stopping_step'] \
>= self.train_state['early_stopping_criteria']
return self.train_state
def compute_accuracy(self, y_pred, y_target):
_, y_pred_indices = y_pred.max(dim=1)
n_correct = torch.eq(y_pred_indices, y_target).sum().item()
return n_correct / len(y_pred_indices) * 100
def pad_seq(self, seq, length):
vector = np.zeros((length, len(self.dataset.vectorizer.surname_vocab)),
dtype=np.int64)
for i in range(len(seq)):
vector[i] = seq[i]
return vector
def collate_fn(self, batch):
# Make a deep copy
batch_copy = copy.deepcopy(batch)
processed_batch = {"surname": [], "nationality": []}
# Get max sequence length
max_seq_len = max([len(sample["surname"]) for sample in batch_copy])
# Pad
for i, sample in enumerate(batch_copy):
seq = sample["surname"]
nationality = sample["nationality"]
padded_seq = self.pad_seq(seq, max_seq_len)
processed_batch["surname"].append(padded_seq)
processed_batch["nationality"].append(nationality)
# Convert to appropriate tensor types
processed_batch["surname"] = torch.FloatTensor(
processed_batch["surname"]) # need float for conv operations
processed_batch["nationality"] = torch.LongTensor(
processed_batch["nationality"])
return processed_batch
def run_train_loop(self):
for epoch_index in range(self.num_epochs):
self.train_state['epoch_index'] = epoch_index
# Iterate over train dataset
# initialize batch generator, set loss and acc to 0, set train mode on
self.dataset.set_split('train')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# zero the gradients
self.optimizer.zero_grad()
# compute the output
y_pred = self.model(batch_dict['surname'])
# compute the loss
loss = self.loss_func(y_pred, batch_dict['nationality'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute gradients using loss
loss.backward()
# use optimizer to take a gradient step
self.optimizer.step()
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['nationality'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['train_loss'].append(running_loss)
self.train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# initialize batch generator, set loss and acc to 0; set eval mode on
self.dataset.set_split('val')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.
running_acc = 0.
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = self.model(batch_dict['surname'])
# compute the loss
loss = self.loss_func(y_pred, batch_dict['nationality'])
loss_t = loss.to("cpu").item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['nationality'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['val_loss'].append(running_loss)
self.train_state['val_acc'].append(running_acc)
self.train_state = self.update_train_state()
self.scheduler.step(self.train_state['val_loss'][-1])
if self.train_state['stop_early']:
break
def run_test_loop(self):
# initialize batch generator, set loss and acc to 0; set eval mode on
self.dataset.set_split('test')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = self.model(batch_dict['surname'])
# compute the loss
loss = self.loss_func(y_pred, batch_dict['nationality'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['nationality'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['test_loss'] = running_loss
self.train_state['test_acc'] = running_acc
def plot_performance(self):
# Figure size
plt.figure(figsize=(15,5))
# Plot Loss
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(trainer.train_state["train_loss"], label="train")
plt.plot(trainer.train_state["val_loss"], label="val")
plt.legend(loc='upper right')
# Plot Accuracy
plt.subplot(1, 2, 2)
plt.title("Accuracy")
plt.plot(trainer.train_state["train_acc"], label="train")
plt.plot(trainer.train_state["val_acc"], label="val")
plt.legend(loc='lower right')
# Save figure
plt.savefig(os.path.join(self.save_dir, "performance.png"))
# Show plots
plt.show()
def save_train_state(self):
self.train_state["done_training"] = True
with open(os.path.join(self.save_dir, "train_state.json"), "w") as fp:
json.dump(self.train_state, fp)
# + id="OkeOQRwckNd1" colab_type="code" outputId="77032240-5a41-4461-8767-89827357f6f7" colab={"base_uri": "https://localhost:8080/", "height": 170}
# Initialization
dataset = SurnameDataset.load_dataset_and_make_vectorizer(split_df)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.vectorizer
model = SurnameModel(num_input_channels=len(vectorizer.surname_vocab),
num_output_channels=args.num_filters,
num_classes=len(vectorizer.nationality_vocab),
dropout_p=args.dropout_p)
print (model.named_modules)
# + id="3JJdOO4ZkNb3" colab_type="code" outputId="c80d4f6a-a05e-4f58-fb4f-91ccd95f68a3" colab={"base_uri": "https://localhost:8080/", "height": 357}
# Train
trainer = Trainer(dataset=dataset, model=model,
model_state_file=args.model_state_file,
save_dir=args.save_dir, device=args.device,
shuffle=args.shuffle, num_epochs=args.num_epochs,
batch_size=args.batch_size, learning_rate=args.learning_rate,
early_stopping_criteria=args.early_stopping_criteria)
trainer.run_train_loop()
# + id="0QLZfEyznVpT" colab_type="code" outputId="67d28ed8-2252-4947-ffc5-71c5c4f4fedd" colab={"base_uri": "https://localhost:8080/", "height": 335}
# Plot performance
trainer.plot_performance()
# + id="BWGzMSaBnYMb" colab_type="code" outputId="9fb5ed9a-8da0-4fc4-df7b-87a0ce67796a" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Test performance
trainer.run_test_loop()
print("Test loss: {0:.2f}".format(trainer.train_state['test_loss']))
print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc']))
# + id="5672VEginYnY" colab_type="code" colab={}
# Save all results
trainer.save_train_state()
# + [markdown] id="HN1g2vP3nad_" colab_type="text"
# # Inference
# + id="Myr8QQjKnZ7k" colab_type="code" colab={}
class Inference(object):
def __init__(self, model, vectorizer, device="cpu"):
self.model = model.to(device)
self.vectorizer = vectorizer
self.device = device
def predict_nationality(self, dataset):
# Batch generator
batch_generator = dataset.generate_batches(
batch_size=len(dataset), shuffle=False, device=self.device)
self.model.eval()
# Predict
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = self.model(batch_dict['surname'], apply_softmax=True)
# Top k nationalities
y_prob, indices = torch.topk(y_pred, k=len(self.vectorizer.nationality_vocab))
probabilities = y_prob.detach().to('cpu').numpy()[0]
indices = indices.detach().to('cpu').numpy()[0]
results = []
for probability, index in zip(probabilities, indices):
nationality = self.vectorizer.nationality_vocab.lookup_index(index)
results.append({'nationality': nationality, 'probability': probability})
return results
# + id="-VVn_zxkRcbf" colab_type="code" colab={}
# Load vectorizer
with open(args.vectorizer_file) as fp:
vectorizer = SurnameVectorizer.from_serializable(json.load(fp))
# + id="Wx46FK2YRchi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="26a09df1-1548-469e-df7f-6613a644512c"
# Load the model
model = SurnameModel(num_input_channels=len(vectorizer.surname_vocab),
num_output_channels=args.num_filters,
num_classes=len(vectorizer.nationality_vocab),
dropout_p=args.dropout_p)
model.load_state_dict(torch.load(args.model_state_file))
print (model.named_modules)
# + id="LZE2Ov4xRcfq" colab_type="code" colab={}
# Initialize
inference = Inference(model=model, vectorizer=vectorizer, device=args.device)
# + id="kpPDszLpRfww" colab_type="code" colab={}
class InferenceDataset(Dataset):
def __init__(self, df, vectorizer):
self.df = df
self.vectorizer = vectorizer
self.target_size = len(self.df)
def __str__(self):
        return "<Dataset(size={0})>".format(self.target_size)
def __len__(self):
return self.target_size
def __getitem__(self, index):
row = self.df.iloc[index]
surname_vector = self.vectorizer.vectorize(row.surname)
return {'surname': surname_vector}
def get_num_batches(self, batch_size):
return len(self) // batch_size
def generate_batches(self, batch_size, shuffle=True, drop_last=False, device="cpu"):
dataloader = DataLoader(dataset=self, batch_size=batch_size,
shuffle=shuffle, drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
# + id="LDpg2LPKRf0c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 340} outputId="8f7a26c1-ab27-4b9b-cac2-28937d5d910b"
# Inference
surname = input("Enter a surname to classify: ")
infer_df = pd.DataFrame([surname], columns=['surname'])
infer_df.surname = infer_df.surname.apply(preprocess_text)
infer_dataset = InferenceDataset(infer_df, vectorizer)
results = inference.predict_nationality(dataset=infer_dataset)
results
# + [markdown] id="HQSsKNRSxjRB" colab_type="text"
# # Batch normalization
# + [markdown] id="r3EamVazx2hx" colab_type="text"
# Even though we standardized our inputs to have zero mean and unit variance to aid convergence, our inputs change during training as they pass through the different layers and nonlinearities. This is known as internal covariate shift; it slows down training and forces us to use smaller learning rates. The solution is [batch normalization](https://arxiv.org/abs/1502.03167) (batchnorm), which makes normalization a part of the model's architecture. This allows us to use much higher learning rates and get better performance, faster.
#
# $ BN = \frac{a - \mu_{x}}{\sqrt{\sigma^2_{x} + \epsilon}} * \gamma + \beta $
#
# where:
# * $a$ = activation | $\in \mathbb{R}^{N \times H}$ ($N$ is the number of samples, $H$ is the hidden dim)
# * $\mu_{x}$ = mean of each hidden unit | $\in \mathbb{R}^{1 \times H}$
# * $\sigma^2_{x}$ = variance of each hidden unit | $\in \mathbb{R}^{1 \times H}$
# * $\epsilon$ = small constant added for numerical stability
# * $\gamma$ = scale parameter (learned)
# * $\beta$ = shift parameter (learned)
#
#
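A quick numerical check of the formula against PyTorch's `nn.BatchNorm1d` (a minimal sketch; $\gamma$ and $\beta$ are left at their initial values of 1 and 0, and the batch shape is illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
a = torch.randn(8, 4)  # N=8 samples, H=4 hidden units

# Manual batchnorm: per-hidden-unit mean/variance computed over the batch
eps = 1e-5
mu = a.mean(dim=0, keepdim=True)                   # (1, H)
var = a.var(dim=0, unbiased=False, keepdim=True)   # (1, H)
manual = (a - mu) / torch.sqrt(var + eps)          # gamma=1, beta=0

bn = nn.BatchNorm1d(num_features=4)  # default eps=1e-5
bn.train()
out = bn(a)
print(torch.allclose(out, manual, atol=1e-5))  # True
```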
# + [markdown] id="9koMITOdzfZB" colab_type="text"
# But what does it mean for our activations to have zero mean and unit variance before the nonlinearity? It doesn't mean that the entire activation matrix has this property; instead, batchnorm is applied along the hidden dimension (num_output_channels in our case), so each hidden unit's mean and variance are calculated using all samples across the batch. During training, batchnorm uses the mean and variance calculated from the current batch of activations. During testing, however, the sample statistics could be skewed, so the model uses the population mean and variance saved during training. PyTorch's [BatchNorm](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d) class takes care of all of this for us automatically.
#
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/batchnorm.png" width=400>
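The train/eval difference is easy to observe directly (a small sketch with illustrative shapes and values):

```python
import torch
import torch.nn as nn

torch.manual_seed(1)
bn = nn.BatchNorm1d(num_features=3)
x = torch.randn(16, 3) * 5 + 2  # a batch far from zero mean / unit variance

# Training mode: normalize with the current batch statistics
bn.train()
y_train = bn(x)
print(y_train.mean(dim=0))  # ~0 per channel

# Eval mode: normalize with the running (population) statistics instead,
# which were only partially updated above (momentum=0.1 by default)
bn.eval()
y_eval = bn(x)
print(torch.allclose(y_train, y_eval))  # False
```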
# + id="RsWdAKVEHvyV" colab_type="code" colab={}
# Model with batch normalization
class SurnameModel_BN(nn.Module):
def __init__(self, num_input_channels, num_output_channels, num_classes, dropout_p):
super(SurnameModel_BN, self).__init__()
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_output_channels,
kernel_size=f) for f in [2,3,4]])
self.conv_bn = nn.ModuleList([nn.BatchNorm1d(num_output_channels) # define batchnorms
for i in range(3)])
self.dropout = nn.Dropout(dropout_p)
# FC weights
self.fc1 = nn.Linear(num_output_channels*3, num_classes)
def forward(self, x, channel_first=False, apply_softmax=False):
# Rearrange input so num_input_channels is in dim 1 (N, C, L)
if not channel_first:
x = x.transpose(1, 2)
# Conv outputs
z = [F.relu(conv_bn(conv(x))) for conv, conv_bn in zip(self.conv, self.conv_bn)]
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
# Concat conv outputs
z = torch.cat(z, 1)
z = self.dropout(z)
# FC layer
y_pred = self.fc1(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
# + id="s_QcGx4vN3bQ" colab_type="code" outputId="0371a89e-fd9a-4e31-d5bc-033f1f99f9a9" colab={"base_uri": "https://localhost:8080/", "height": 255}
# Initialization
dataset = SurnameDataset.load_dataset_and_make_vectorizer(split_df)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.vectorizer
model = SurnameModel_BN(num_input_channels=len(vectorizer.surname_vocab),
num_output_channels=args.num_filters,
num_classes=len(vectorizer.nationality_vocab),
dropout_p=args.dropout_p)
print (model.named_modules)
# + [markdown] id="tBXzxtiaxmXi" colab_type="text"
# You can train this model with batch normalization and you'll notice that the validation results improve by ~2-5%.
# + id="ERMGiPgAPssx" colab_type="code" outputId="7fe57758-e8d8-4d90-b2b3-f4b83ae832ee" colab={"base_uri": "https://localhost:8080/", "height": 357}
# Train
trainer = Trainer(dataset=dataset, model=model,
model_state_file=args.model_state_file,
save_dir=args.save_dir, device=args.device,
shuffle=args.shuffle, num_epochs=args.num_epochs,
batch_size=args.batch_size, learning_rate=args.learning_rate,
early_stopping_criteria=args.early_stopping_criteria)
trainer.run_train_loop()
# + id="iiAW6AL0QAJ8" colab_type="code" outputId="b287547c-4193-4044-c673-0b1b5ea0d43d" colab={"base_uri": "https://localhost:8080/", "height": 335}
# Plot performance
trainer.plot_performance()
# + id="GPQH0NVwQAO3" colab_type="code" outputId="2c969a71-a95a-4793-b4cf-7ccec9866643" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Test performance
trainer.run_test_loop()
print("Test loss: {0:.2f}".format(trainer.train_state['test_loss']))
print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc']))
# + [markdown] id="w6WRq-O3d1ba" colab_type="text"
# # TODO
# + [markdown] id="oEcbaRswd1d0" colab_type="text"
# * image classification example
# * segmentation
# * deep CNN architectures
# * small 3X3 filters
# * details on padding and stride (control receptive field, make every pixel the center of the filter, etc.)
# * network-in-network (1x1 conv)
# * residual connections / residual block
# * interpretability (which n-grams fire)
| notebooks/11_Convolutional_Neural_Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import io
import sqlalchemy as sa
import urllib
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter
import numpy as np
# +
import os
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
SERVER = os.environ.get("SERVER")
DATABASE = os.environ.get("DATABASE")
USERNAME = os.environ.get("USERNAME")
PASSWORD = <PASSWORD>("PASSWORD")
# -
conn = urllib.parse.quote_plus('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+str(SERVER)+';DATABASE='+str(DATABASE)+';UID='+str(USERNAME)+';PWD='+str(PASSWORD))
engine = sa.create_engine('mssql+pyodbc:///?odbc_connect={}'.format(conn),fast_executemany=True)
query = """
SELECT [KodStacji]
,[KodMiedzynarodowy]
,[NazwaStacji]
,[StaryKodStacji]
,[DataUruchomienia]
,[DataZamkniecia]
,[TypStacji]
,[TypObszaru]
,[RodzajStacji]
,[Wojewodztwo]
,[Miejscowosc]
,[Ulica]
,[Latitude]
,[Longitude]
FROM [SmogoliczkaArchive].[dbo].[stacje]
"""
# +
geolocator = Nominatim(user_agent="<EMAIL>")
# convenient helper that enforces a delay between geocoding calls
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)
df_orig = pd.read_sql(query, engine)
df=df_orig.copy()
df['adres']=df['Ulica']+", "+df['Miejscowosc']
mask=df['Longitude']<0
df=df[mask]
df['lokalizacja'] = df['adres'].apply(lambda row: geocode(row))
df['lokalizacja_2'] = df['Miejscowosc'].apply(lambda row: geocode(row))
df['lokalizacja'].fillna(df['lokalizacja_2'],inplace=True)
df['punkt'] = df['lokalizacja'].apply(lambda loc: tuple(loc.point) if loc else None)
df[['latitude_new', 'longitude_new', 'altitude']] = pd.DataFrame(df['punkt'].tolist(), index=df.index)
df['Latitude'] = df['Latitude'].replace(-999, np.nan)
df['Longitude'] = df['Longitude'].replace(-999, np.nan)
df['Latitude'].fillna(df['latitude_new'],inplace=True)
df['Longitude'].fillna(df['longitude_new'],inplace=True)
df.drop(columns=['adres','lokalizacja','punkt','latitude_new','lokalizacja_2', 'longitude_new', 'altitude'],inplace=True)
df_orig.update(df)
# -
df_orig[df_orig['Longitude']<0]
#send data back to sql
df_orig.to_sql("stacje_v2", engine, if_exists="replace",index=False)
| notebooks/Fixing missing lat and lon in stations .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="UKAaPqvOdXxD" colab_type="text"
# Neural Style Transfer with PyTorch
# ============================
# **Author**: `<NAME> <https://alexis-jacq.github.io>` <br />
# **Adapted translation**: `<NAME> <https://github.com/nestyme>` <br />
# Introduction
# ------------
#
# This notebook explains and demonstrates how the style transfer algorithm
#
# `Neural-Style <https://arxiv.org/abs/1508.06576>`
#
# by <NAME>, <NAME> and <NAME> works.
#
#
# **Neural style transfer** is an algorithm that takes a content image (for example, a turtle) and a style image (for example, a painting by a famous artist) and returns an image that looks as if it were painted by that artist:
#
#
#
# **How does it work?**
#
# There are three images in total: the input, the style, and the content.
# We define two distances:
# - $D_S$ measures how different the styles of two arbitrary images are.
# - $D_C$ measures how different the content of two arbitrary images is.
#
# The network's task is to minimize $D_S$ between the input image and the style image, and $D_C$ between the input image and the content image.<br />
# A noisy copy of the content image is usually used as the input.
#
#
# Here is everything we will need:
# + id="dBUuNcGgehLy" colab_type="code" outputId="69d2bda8-bacd-46a9-aabf-04dc5f3f3806" executionInfo={"status": "ok", "timestamp": 1545411530835, "user_tz": -180, "elapsed": 7351, "user": {"displayName": "\u041a\u0438\u0440\u0438\u043b\u043b \u0418\u0433\u043e\u0440\u0435\u0432\u0438\u0447 \u0413\u043e\u043b\u0443\u0431\u0435\u0432", "photoUrl": "", "userId": "04104925801798209408"}} colab={"base_uri": "https://localhost:8080/", "height": 156}
# !pip3 install torch torchvision
# !pip3 install pillow==4.1.1
# + id="Kej6kCnCdXxL" colab_type="code" outputId="3cbd2100-4e8f-475c-a9d2-c22670e9828b" executionInfo={"status": "ok", "timestamp": 1545411531087, "user_tz": -180, "elapsed": 7553, "user": {"displayName": "\u041a\u0438\u0440\u0438\u043b\u043b \u0418\u0433\u043e\u0440\u0435\u0432\u0438\u0447 \u0413\u043e\u043b\u0443\u0431\u0435\u0432", "photoUrl": "", "userId": "04104925801798209408"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# %matplotlib inline
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
from google.colab import drive
drive.mount("/content/drive")
# + id="OfHBOzUXkuAk" colab_type="code" outputId="3ddb8fbd-12fb-4c59-fef9-095c6be812bc" executionInfo={"status": "ok", "timestamp": 1545411533104, "user_tz": -180, "elapsed": 9529, "user": {"displayName": "\u041a\u0438\u0440\u0438\u043b\u043b \u0418\u0433\u043e\u0440\u0435\u0432\u0438\u0447 \u0413\u043e\u043b\u0443\u0431\u0435\u0432", "photoUrl": "", "userId": "04104925801798209408"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# !ls drive/My\ Drive/11/images#choose your own path
# + [markdown] id="b9rv8Zs0dXxa" colab_type="text"
# **Loading the images**
# + [markdown] id="AFBx9ThldXxe" colab_type="text"
# We will need the style and content images, so let's load them.<br />
# To simplify the implementation, we start with content and style images of the same size. We then scale them to the required output size.
#
# Example images are in the `Images` folder on Google Drive
#
# You can add your own images there -- the main thing is that they are the same size
# + id="ncG6aOckdXxk" colab_type="code" colab={}
imsize = 128
loader = transforms.Compose([
    transforms.Resize(imsize),      # normalize the image size
    transforms.CenterCrop(imsize),
    transforms.ToTensor()])         # convert the image to a tensor
# + id="161rb0RWdXxu" colab_type="code" colab={}
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def image_loader(image_name):
image = Image.open(image_name)
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
style_img = image_loader("drive/My Drive/11/images/picasso.jpg")  # change this path to your own
content_img = image_loader("drive/My Drive/11/images/lisa.jpg")   # change this path to your own
# + [markdown] id="EewC5ycwdXx2" colab_type="text"
# Let's display what was loaded
# + id="e9l8XsSEdXx5" colab_type="code" outputId="92fc6010-1d0c-4b72-b2ef-4b3c2b2ecd90" executionInfo={"status": "ok", "timestamp": 1545411534064, "user_tz": -180, "elapsed": 10400, "user": {"displayName": "\u041a\u0438\u0440\u0438\u043b\u043b \u0418\u0433\u043e\u0440\u0435\u0432\u0438\u0447 \u0413\u043e\u043b\u0443\u0431\u0435\u0432", "photoUrl": "", "userId": "04104925801798209408"}} colab={"base_uri": "https://localhost:8080/", "height": 543}
unloader = transforms.ToPILImage()  # tensor to image
plt.ion()
def imshow(tensor, title=None):
image = tensor.cpu().clone()
    image = image.squeeze(0)  # remove the batch dimension added in image_loader
image = unloader(image)
plt.imshow(image)
if title is not None:
plt.title(title)
plt.pause(0.001)
# display the images
plt.figure()
imshow(style_img, title='Style Image')
plt.figure()
imshow(content_img, title='Content Image')
# + [markdown] id="sctlCEBIdXyG" colab_type="text"
# Now we need to create the functions that will compute the distances ($D_C$ and $D_S$). <br />
# They are implemented as layers, so that autograd can differentiate through them.
# + [markdown] id="bjOwZSZEdXyJ" colab_type="text"
# $D_C$ is the mean squared error between the input and the target
# + id="hHaDH8n3dXyP" colab_type="code" colab={}
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used
# to dynamically compute the gradient: this is a stated value,
# not a variable. Otherwise the forward method of the criterion
# will throw an error.
        self.target = target.detach()  # this is a constant; detach it from the computation graph
        self.loss = F.mse_loss(self.target, self.target)  # initialize with a placeholder value
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
# + [markdown] id="5mYBU18HdXyZ" colab_type="text"
# The Gram matrix lets us account not only for the feature map values themselves, but also for the correlation of the features with each other. <br /> This shifts the emphasis to which features occur together, rather than to their geometric position. <br />
# A full understanding of this point can be gained from [this paper](https://arxiv.org/pdf/1508.06576.pdf) and [this article](https://m.habr.com/company/mailru/blog/306916/).
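This position-insensitivity is easy to verify: shifting the features spatially leaves the Gram matrix unchanged. A standalone sketch (the `gram` helper, the toy shapes, and the normalization constant are illustrative choices):

```python
import torch

def gram(feat):
    # feat: (C, H, W) -> flatten the spatial dims, then take channel inner products
    c, h, w = feat.size()
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

torch.manual_seed(0)
feat = torch.randn(3, 4, 4)
shifted = torch.roll(feat, shifts=2, dims=2)  # move the features horizontally

# Same co-occurrence statistics, different positions -> identical Gram matrices
print(torch.allclose(gram(feat), gram(shifted)))  # True
```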
# + [markdown] id="YdvMTJJddXye" colab_type="text"
# Thus:
#
# $D_S = \sum_{i,j} \left( G_{ij}(img_1) - G_{ij}(img_2) \right)^{2}$
# + [markdown] id="2_nro8m7dXyi" colab_type="text"
# First we define how the Gram matrix is computed: it is simply the tensor product of a layer's output vector with itself.<br /> However, our output is not a vector. The operation is still possible in that case,<br /> but we would get a rank-3 tensor. So before the multiplication, the output must be reshaped into vector form.<br />
# + id="lTexbIdXdXyp" colab_type="code" colab={}
def gram_matrix(input):
    batch_size, f_map_num, h, w = input.size()  # batch size (=1)
    # f_map_num = number of feature maps
    # (h, w) = dimensions of a feature map (N = h * w)
    features = input.view(batch_size * f_map_num, h * w)  # reshape F_XL into \hat F_XL
G = torch.mm(features, features.t()) # compute the gram product
# we 'normalize' the values of the gram matrix
# by dividing by the number of element in each feature maps.
return G.div(batch_size * h * w * f_map_num)
# + [markdown] id="V_DIJ3S8dXyz" colab_type="text"
# The Gram matrix is ready; now we only need to implement the MSE
# + id="7z80JWBmdXy5" colab_type="code" colab={}
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
        self.loss = F.mse_loss(self.target, self.target)  # initialize with a placeholder value
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
# + [markdown] id="ObRsgLONdXzA" colab_type="text"
# When VGG was trained, every image it was trained on was normalized across all channels (RGB). If we want to use it for our model, we must apply the same normalization to our images as well.
#
# + id="WVlITINLdXzD" colab_type="code" colab={}
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# + id="RGVU8rQXdXzM" colab_type="code" colab={}
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std to make them [C x 1 x 1] so that they can
# directly work with image Tensor of shape [B x C x H x W].
# B is batch size. C is number of channels. H is height and W is width.
self.mean = torch.tensor(mean).view(-1, 1, 1)
self.std = torch.tensor(std).view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
# + [markdown] id="B9BW_I9HdXzV" colab_type="text"
# Now let's gather all of this into one function that returns the model and the two loss functions
# + [markdown] id="3VTU8EsldXzZ" colab_type="text"
# Let's define after which layers we will compute the style losses and after which the content losses
# + id="sxPtDONYdXzf" colab_type="code" colab={}
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
# + [markdown] id="WtmVji8kdXzn" colab_type="text"
# Let's define the pretrained model
# + id="N96-cg1ZdXzq" colab_type="code" colab={}
cnn = models.vgg19(pretrained=True).features.to(device).eval()
# + id="eew82cBedXzw" colab_type="code" colab={}
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers=content_layers_default,
style_layers=style_layers_default):
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have an iterable access to or list of content/syle
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, so we make a new nn.Sequential
# to put in modules that are supposed to be activated sequentially
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# The in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below. So we replace with out-of-place
# ones here.
# Redefine the ReLU layer
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
# throw away all layers after the last style loss or content loss
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
return model, style_losses, content_losses
# + id="S0KkriPkdXz9" colab_type="code" colab={}
def get_input_optimizer(input_img):
# this line shows that the input image is a parameter that requires a gradient
# it adds the image tensor to the list of parameters the optimizer updates
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
# + id="ywteNtULg1x6" colab_type="code" colab={}
# + [markdown] id="04v5D5TSdX0G" colab_type="text"
# Next comes the standard training loop, but what is this closure?<br /> It is a function called on every pass to recompute the loss; without it nothing would work, since we have our own loss function
# + id="lxQadeAcdX0J" colab_type="code" colab={}
def run_style_transfer(cnn, normalization_mean, normalization_std,
content_img, style_img, input_img, num_steps=500,
style_weight=100000, content_weight=1):
"""Run the style transfer."""
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn,
normalization_mean, normalization_std, style_img, content_img)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values
# this keeps the image tensor values within [0, 1]
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
# weight the losses
style_score *= style_weight
content_score *= content_weight
loss = style_score + content_score
loss.backward()
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run[0]))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
optimizer.step(closure)
# a last correction...
input_img.data.clamp_(0, 1)
return input_img
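L-BFGS needs to re-evaluate the loss several times per parameter update, which is why `optimizer.step` takes a `closure` rather than a precomputed loss value. The pattern can be sketched with plain gradient descent on a toy 1-D problem (a hypothetical illustration, no torch required):

```python
# Toy stand-in for optimizer.step(closure): the optimizer calls the closure
# whenever it needs a fresh (loss, gradient) pair for the current parameter.
def step(closure, param, lr=0.1):
    loss, grad = closure(param)
    return param - lr * grad, loss

def closure(w):
    # loss = (w - 3)^2, gradient = 2 * (w - 3)
    return (w - 3.0) ** 2, 2.0 * (w - 3.0)

w = 0.0
for _ in range(50):
    w, loss = step(closure, w)
# w converges toward the minimum at 3.0
```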
# + id="Ldsnc-stdX0P" colab_type="code" outputId="7d5139d0-5230-4800-fb24-b21f74ad3d2d" executionInfo={"status": "ok", "timestamp": 1545411709750, "user_tz": -180, "elapsed": 185815, "user": {"displayName": "\u041a\u0438\u0440\u0438\u043b\u043b \u0418\u0433\u043e\u0440\u0435\u0432\u0438\u0447 \u0413\u043e\u043b\u0443\u0431\u0435\u0432", "photoUrl": "", "userId": "04104925801798209408"}} colab={"base_uri": "https://localhost:8080/", "height": 925}
input_img = content_img.clone()
# if you want to use white noise instead uncomment the below line:
# input_img = torch.randn(content_img.data.size(), device=device)
# add the original input image to the figure:
plt.figure()
imshow(input_img, title='Input Image')
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_img, style_img, input_img)
# + id="yZOyG0kzdX0e" colab_type="code" outputId="96e7afef-9111-4cb6-ebc3-3c2c5ca25452" executionInfo={"status": "ok", "timestamp": 1545411709755, "user_tz": -180, "elapsed": 185759, "user": {"displayName": "\u041a\u0438\u0440\u0438\u043b\u043b \u0418\u0433\u043e\u0440\u0435\u0432\u0438\u0447 \u0413\u043e\u043b\u0443\u0431\u0435\u0432", "photoUrl": "", "userId": "04104925801798209408"}} colab={"base_uri": "https://localhost:8080/", "height": 280}
plt.figure()
imshow(output, title='Output Image')
#plt.imsave('output.png', output)
# sphinx_gallery_thumbnail_number = 4
plt.ioff()
plt.show()
# + id="3tBZzDJDdX0r" colab_type="code" colab={}
| deep-learning-school/[14]style_transfer_gan/style transfer/[seminar]style_transfer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/therealthaibinh/reference-creator/blob/master/FactCheck_references.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_uAebY8gTm_M" colab_type="text"
# # Run this cell
# + id="6EeqbiqiTIyJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 138, "referenced_widgets": ["ebc8a35f56d94e60bfaf4d563d6df050", "f2ad49648dbe4846b756f1631d69950b", "1ad9f9f0e73e4253b1d69d3d300adf13", "91f430f257e94cf5915824904aa9a60d", "8285219a21a8488fa9612b6cd414aff7", "19f7b38951c34e5d9275e0ccca6b4467", "4b0065ca8f7e4bbe9bce5f28af75cd72"]} outputId="43449b6f-7f7e-4682-919e-7d58a8733922"
from IPython.core.display import display, HTML
import ipywidgets as widgets
# from ipywidgets import interact, interactive, fixed, interact_manual, Layout
from ipywidgets import interact, Layout
from datetime import datetime
try:
import requests
except:
# !pip install requests
import requests
try:
from bs4 import BeautifulSoup
except:
# !pip install beautifulsoup4
from bs4 import BeautifulSoup
def create_factcheck_ref(url):
page = BeautifulSoup(requests.get(url).content, 'html.parser')
# Author
lstAuthors = []
for strAuthor in page.findAll(attrs={'class' : 'author url fn'}):
lstAuthors.append(strAuthor.contents[0])
if len(lstAuthors)==1:
strFirstAuthor = lstAuthors[0]
strAuthor = strFirstAuthor[strFirstAuthor.find(' ')+1:] + ', ' + strFirstAuthor[:strFirstAuthor.find(' ')]
elif len(lstAuthors)==2:
strFirstAuthor = lstAuthors[0]
strSecondAuthor = lstAuthors[1]
strAuthor = strFirstAuthor[strFirstAuthor.find(' ')+1:] + ', ' + strFirstAuthor[:strFirstAuthor.find(' ')] + ' and ' +\
strSecondAuthor[strSecondAuthor.find(' ')+1:] + ', ' + strSecondAuthor[:strSecondAuthor.find(' ')]
else:
strFirstAuthor = lstAuthors[0]
strAuthor = strFirstAuthor[strFirstAuthor.find(' ')+1:] + ', ' + strFirstAuthor[:strFirstAuthor.find(' ')] + ' et al.'
# Title
strTitle = page.find(attrs={'class' : 'entry-title'}).contents[0]
link_tagged = "<a href='{href}'>"+strTitle+"</a>"
# Date
strDate = page.find(attrs={'class' : 'entry-date published updated'}).contents[0]
strDate = datetime.strptime(strDate, '%B %d, %Y').strftime("%-d %b %Y")  # note: %-d is POSIX-only (use %#d on Windows)
return {'AUTHOR':strAuthor, 'TITLE':strTitle, 'DATE':strDate}
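The repeated slicing that turns "First Last" into "Last, First" could be factored into a small helper (a hypothetical refactoring sketch, mirroring the `find(' ')` logic above but also handling single-word names):

```python
def flip_name(full_name):
    """Turn 'First Rest' into 'Rest, First', splitting at the first space."""
    first, _, rest = full_name.partition(' ')
    return '{}, {}'.format(rest, first) if rest else full_name

flip_name('Jane Doe')       # 'Doe, Jane'
flip_name('Jane Mary Doe')  # 'Mary Doe, Jane'
```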
def f(strURLs):
lstURL = strURLs.split('\n')
for url in lstURL:
try:
dict_extract = create_factcheck_ref(url)
link_tagged = "<a href='{href}'>"+dict_extract['TITLE']+"</a>"
full_tagged = dict_extract['AUTHOR'] +'. "' +\
link_tagged + '." FactCheck.org. ' +\
dict_extract['DATE'] + '.'
html = HTML(full_tagged.format(href=url))
display(html)
except:
print("Misformatted URL or maybe some other funky error")
widget_URL = widgets.Textarea(
value='',
placeholder='Paste URLs, one per line (no need for commas in between)',
description='URLs',
disabled=False,
layout=Layout(width='80%', height='100px')
)
interactive_plot = interact(f, strURLs=widget_URL)
# + id="QO1kvsChP-9g" colab_type="code" colab={}
| FactCheck_references.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.6 64-bit (''sct_clone'': conda)'
# name: python3
# ---
# default_exp vsc_test
# # vsc_test
#
# > I created this notebook to test the vs_editor bug
#hide
from nbdev.showdoc import *
# +
#exports
import fastcore.test as fct
# +
# This cell should be ignored by nbdev
x = 1
y = 2
print(x+y)
# +
#exporti
#This cell should be exported but not included in __all__
def test_funct_1(x, y):
print("Test_1")
return x + y
# -
fct.equals(test_funct_1(1, 2), 3)
# +
#export
def test_exp_funct(x, y):
'''The function adds x and y.
It returns the result + 1
'''
print('TEST 2')
return test_funct_1(x, y) + 1
# -
fct.equals(test_exp_funct(1, 1), 3)
#hide
from nbdev.export import notebook2script
notebook2script()
| tests/00_vsc_test.ipynb |