# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1><center>Introduction to Computational Neuroscience</center></h1>
#
# <h1><center>Practice Session III: Machine Learning</center></h1>
# <center><NAME>, <NAME>, <NAME></center>
# My **Pseudonym** is: <font color='green'>[YOUR ANSWER]</font> and it took me approximately: <font color='green'>[YOUR ANSWER]</font> hours to complete the homework.
#
# The data on how long it took you to complete the homework will help us keep the homeworks balanced.
# **Before** you start you need to install an extra library.
#
# After activating your conda environment (`activate Py3_ICNS` or `source activate Py3_ICNS`), execute `conda install scikit-learn`.
# <p>The main purpose of the fields of neural encoding and neural decoding is to learn the relation
# between a stimulus and the neural response elicited by this stimulus. When studying encoding
# we want to predict how the brain would respond to a given stimulus: we estimate the
# probability p(brain response|stimulus). In neural decoding we do the inverse - we read the brain
# activity using imaging techniques - and ask the question "which stimulus caused this activity?";
# we look for p(stimulus|brain response).</p>
#
# <p>In a decoding paradigm we present the test subjects with different stimuli while at the same
# time recording their brain responses. The task is to create a model which takes any piece of
# the recorded brain data as an input and predicts which stimulus was responsible for producing
# this piece of data.</p>
#
# <p>In the last practice session we kind of manually created one such model: by looking at
# the firings of 72 neurons we might pretty accurately guess the orientation of the bar on the
# screen. We would be able to predict which stimulus was shown, given the neural responses.</p>
#
# <p>Nevertheless, the data can often be too massive or too complex to find such a direct relation
# between activity and stimulus by simple observation. That's where machine learning comes to
# our aid.</p>
#
# <p>The main goal of machine learning can be summarized as providing an automatic way of
# finding the dependencies between a set of features (neural data) and the corresponding labels
# (the stimuli).</p>
# Before we continue, let us go through some vocabulary:
# + A dataset is a structure which contains all the data we have.
# + A dataset consists of instances, often also referred to as data points.
# + In the case of classification, instances consist of features and a class label.
# + Features are parameters which describe our data. For example, the average spiking rate
# of a neuron might be a feature, the power of EEG in a certain frequency band might be
# a feature, etc. Each instance has its own values for each of the features.
# + All feature values of an instance put together form a feature vector.
# + A feature vector is a representation of the instance in the feature space.
# + Each instance belongs to a certain class - the name/ID number of the stimulus that caused
# the features.
# + The goal of a machine learning algorithm is to create a model which can guess the class
# of (classify) an instance given only its feature vector. A good model should be able to
# do this also on previously unseen feature vectors (generalize).
# + The model is created from examples. Those are instances for which the class is known. A
# set of such examples is called the training set, because we train our model on it.
# + The test set is another set of instances for which we also know the true class, but we do not
# share this knowledge with the model. Instead we ask the model to guess the class of each
# instance.
# + We can then count how many instances from the test set the model has identified correctly;
# the rate:
#
# <center> $ \dfrac{Number\ of\ correctly\ classified\ instances}{Total\ number\ of\ instances} $ </center>
#
# is called accuracy and it is used to evaluate the model's performance.
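As a quick illustration of this formula (with made-up labels, not the course data), accuracy is just the fraction of predictions that match the true classes:

```python
import numpy as np

# Hypothetical true classes and model predictions for 10 instances
y_true = np.array([0, 1, 1, 0, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 0, 0, 2, 1, 1, 0, 2, 1])

# Accuracy = number of correctly classified instances / total number of instances
accuracy = np.mean(y_true == y_pred)
print(accuracy)  # 0.8 (8 of the 10 predictions match)
```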
# ## Exercise 1.1: Where is the rat? (2.5pt)
#
# In this task we will try to predict in which part of space a rat is located based on its neural
# activity. When collecting the data, electrodes were inserted into the rat's hippocampus,
# where neurons responsible for navigation are located. At the same time the location of the rat
# was tracked by a camera. We have preprocessed the data so that:
# 1. Features are spike counts in different neurons during a 500ms interval
# 2. Class corresponds to which of the 16 areas the rat was located in during these 500ms
# 3. Your goal is to create a model that predicts the area based on the neural activity.
#
#  <center> Figure 1: Find in which zone the rat is! </center>
#
#
# In the data folder you find the following files:
# 1. spikes `spike_counts.txt` - counts of spikes from 71 neurons at each timestep
# 2. blocks `location_areas.txt` - region where the rat is at each timestep
# 1. Read the data and make sure you understand what is what.
# +
# Your code
# -
# 2. Randomly divide the instances into training and test sets, so that 80 percent of the data is in training set (You can use `sklearn.model_selection.train_test_split` from scikit-learn library).
# +
# Your code
# -
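A minimal usage sketch of `train_test_split` on made-up random data (the real features and labels come from the files above):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix (100 instances, 5 features) and labels (16 classes)
X = np.random.rand(100, 5)
y = np.random.randint(0, 16, size=100)

# 80/20 split, as required by the task
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape, X_test.shape)  # (80, 5) (20, 5)
```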
# 3. Use LDA (linear discriminant analysis) as the classifier (you can use `sklearn.discriminant_analysis.LinearDiscriminantAnalysis` from `scikit-learn` library).
# +
# Your code
# -
# 4. Learn a model using the examples in the training set. Predict the locations for instances
# in the test set and compare them with the true locations.
#
# +
# Your code
# -
# 5. What is the overall accuracy on the test set? What is the precision for each class
# separately (report 16 values)?
#
# <center> $ precision\ of\ class\ i = \dfrac{Points\ in\ zone\ i\ correctly\ classified\ as\ i}{Total\ number\ of\ points\ classified\ as\ zone\ i} $ </center>
#
# **HINT**: per-class precisions can be calculated by dividing the diagonal values of the confusion
# matrix by the column sums.
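A minimal sketch of the hint above, using a made-up 3-class confusion matrix rather than the 16-class one you will compute:

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class
cm = np.array([[5, 1, 0],
               [2, 6, 2],
               [0, 1, 3]])

# Precision of class i = diagonal entry i / sum of column i
precision = np.diag(cm) / cm.sum(axis=0)
print(precision)
```

For this toy matrix the column sums are 7, 8, and 5, so the precisions come out to roughly 0.714, 0.75, and 0.6.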
#
# +
# Your code
# -
# 6. Draw a confusion matrix. What additional information (beyond per-class precisions) does this matrix
# provide? (you can use `sklearn.metrics.confusion_matrix`)
#
# You need to produce a confusion matrix, but it does not need to be a nice colourful drawing. It can also be just a readable (!!) printout of the matrix.
#
# +
# Your code
# -
# ## Exercise 1.2: A few questions to keep in mind (1.5pt) (answer in more than 1 phrase)
# Q1 : Why do you need to separate training and test datasets, that is, why can't you evaluate
# your model on the training set? What does it mean if your accuracies on the training set and test
# set are very different?
# <font color=green>Your Answer</font>
# Q2 : If our model in the previous exercise would just predict class labels completely randomly, what would be the
# average prediction accuracy?
# <font color=green>Your Answer</font>
# Q3 : If our model in the previous exercise would always predict the class label that is the most common label in our
# dataset, what would the average accuracy be?
# <font color=green>Your Answer</font>
# ## Exercise 2: Which picture was shown? (2pt)
#
# In this exercise we will work with an fMRI dataset by Haxby et al. As you may recall, fMRI measures the
# blood oxygen level in the brain with high spatial precision. The data was recorded while the test subject was
# presented with images from 9 categories:
#
# (1) house
# (2) scrambled
# (3) cat
# (4) shoe
# (5) bottle
# (6) scissors
# (7) chair
# (8) face
# (9) something else.
#
# You can see them (except for the “something else” category) on the Figure:
#  <center> Figure 2: Examples of stimuli. </center>
# The data we have is already preprocessed ([1](https://openfmri.org/dataset/ds000105)), so instead of ≈25000 voxels in the whole brain we only
# use 577 voxels from relevant brain areas. In machine learning terminology this means that each instance has
# 577 features and belongs to one of the 9 classes. You will find the feature data in `data/voxels.txt` and class
# information in `data/labels.txt`. The first is a 1452×577 matrix (1452 instances with 577 features each) and the second
# is a vector of length 1452 (each instance has a class). The question we want to answer is: is it possible to
# decode from the fMRI signal which image the test subject was looking at? Your task will once again be to build a predictive model.
# Perform the following steps and report the results:
# 1. Load the data. (Look at it. Always look at the data)
# 2. Split the data into training and test sets, 80% for training and 20% for testing.
# +
# Your code
# -
# 3. Use the training set to train an LDA (Linear Discriminant Analysis) model on this data.
# +
# Your code
# -
# 4. With the model, predict the classes of the test set.
# +
# Your code
# -
# 5. For each class, calculate the precision on the test set. Which class of images was the easiest
# to predict based on the fMRI data? Which one was the hardest?
# +
# Your code
# -
# ***
# # <center> End of obligatory exercises </center>
# ***
# ## Exercise 3: Precision, Recall, F1-score (Bonus 1pt)
# Sometimes accuracy can fail us if we are dealing with unbalanced datasets. If we classify images
# of cats and dogs and our test set contains 90% cats, it is possible to achieve an accuracy of 0.9
# by simply always answering "cat". One possible countermeasure is to look not at the accuracy
# of the model, but at its *precision* and *recall*.
#
# Imagine you have 2 classes: "cat" and "dog". Precision is calculated for each class separately
# and shows how many of the instances the model has identified as cats really are cats. For example,
# if out of 10 instances classified as "cat" two turn out to be dogs, we say that the precision is 0.8.
# Recall is also calculated separately for each class. It shows how many of all the cats present in
# the test set your model correctly identified as such. For example, if there were 100 cats and 100
# dogs and our model correctly classified only 78 cats (the other 22 it guessed as dogs), we say that its recall is 0.78.
# The F1 score is a convenient way to combine precision and recall into one number:
#
# <center> $F1 = \dfrac{2 \cdot precision \cdot recall}{precision + recall}$ </center>
#
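To make the formulas concrete, here is a toy computation for the "cat" class. The 78/22 split of cats comes from the recall example above; the false-positive count `fp = 12` is a made-up number for illustration:

```python
# Hypothetical counts for the "cat" class
tp = 78   # cats correctly classified as cats
fn = 22   # cats wrongly classified as dogs
fp = 12   # dogs wrongly classified as cats (made-up number)

precision = tp / (tp + fp)   # 78/90 ≈ 0.867
recall = tp / (tp + fn)      # 78/100 = 0.78
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.821
print(round(precision, 3), round(recall, 3), round(f1, 3))
```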
# Your task is to calculate the precision, recall and F1-score for each of the 9 classes. Use your own code to calculate them. You may only use the corresponding functions from a library to check your results.
# +
# Your code
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "skip"}
# %matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from jupyterthemes import jtplot
jtplot.style(theme='onedork', context='talk', fscale=1.8, spines=False, gridlines='--', ticks=True, grid=False, figsize=(12, 8))
from os.path import join
import pandas as pd
from matplotlib.ticker import FuncFormatter
# + [markdown] slideshow={"slide_type": "slide"}
# ### Load Transaction Data
# + hide_input=false slideshow={"slide_type": "fragment"}
baskets = pd.read_csv('grocery_transactions.csv', header=None)
baskets.iloc[:10, :10]
# + slideshow={"slide_type": "slide"}
baskets.info()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Plot Basket Size Breakdown
# + slideshow={"slide_type": "fragment"}
baskets.count(axis=1).value_counts().plot.bar(title='Basket Size')
plt.xlabel('# Items in Basket')
plt.ylabel('# Transactions')
plt.tight_layout();
# + [markdown] slideshow={"slide_type": "slide"}
# ### Reshape data to create transaction-product matrix
# + slideshow={"slide_type": "fragment"}
baskets_stacked = baskets.stack()
baskets_stacked.index.names = ['tx_id', 'basket_id']
baskets_stacked.head()
# -
baskets_stacked.reset_index('basket_id', drop=True, inplace=True)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Explore Item Frequencies
# + slideshow={"slide_type": "fragment"}
baskets_stacked.value_counts().head()
# + slideshow={"slide_type": "fragment"}
baskets_stacked.value_counts().tail()
# + slideshow={"slide_type": "fragment"}
baskets_stacked.nunique()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Finalize Transaction-Product Matrix
# + slideshow={"slide_type": "fragment"}
items = pd.get_dummies(baskets_stacked, prefix='', prefix_sep='')
items.info()
# + slideshow={"slide_type": "fragment"}
items.head()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Sum indicator variables by transaction
# + slideshow={"slide_type": "fragment"}
items = items.groupby(level='tx_id').sum()
items.head()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Create function from data transform steps
# + slideshow={"slide_type": "fragment"}
def get_transaction_data():
    """Load groceries transaction data into a transaction-item DataFrame"""
    # header=None, as when the file was first loaded above
    df = pd.read_csv('grocery_transactions.csv', header=None)
    df = df.stack().reset_index(-1, drop=True)
    df.index.names = ['tx_id']
    df = pd.get_dummies(df, prefix='', prefix_sep='')
    return df.groupby(level='tx_id').sum()
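As a sanity check on the stack / one-hot / groupby pipeline wrapped up in `get_transaction_data`, here is the same transform applied to a tiny hand-made basket table (the item names are invented for illustration):

```python
import numpy as np
import pandas as pd

# Three hypothetical transactions of varying length (NaN = empty slot)
toy = pd.DataFrame([['milk', 'bread', np.nan],
                    ['milk', np.nan, np.nan],
                    ['bread', 'eggs', 'milk']])

# Stack drops the NaNs; keep only the transaction id as the index
stacked = toy.stack().reset_index(-1, drop=True)
stacked.index.names = ['tx_id']

# One-hot encode items, then sum the indicators per transaction
matrix = pd.get_dummies(stacked, prefix='', prefix_sep='').groupby(level='tx_id').sum()
print(matrix)
```

The result is a 3×3 transaction-item matrix with one row per basket and one indicator column per distinct item.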
# + [markdown] slideshow={"slide_type": "slide"}
# ### Explore Item Support
# + slideshow={"slide_type": "fragment"}
support = items.sum().div(len(items)).sort_values(ascending=False)
ax = support.plot.bar(title='Item Support')
ax.locator_params(nbins=30, axis='x')
plt.xticks(rotation=60)
plt.tight_layout();
# + [markdown] slideshow={"slide_type": "slide"}
# ### Display Transaction-Product Matrix
# + slideshow={"slide_type": "fragment"}
sns.heatmap(items.loc[:, support.index], cbar=False,
cmap='Blues', xticklabels=10)
plt.gca().set_title('Transaction-Item Matrix')
plt.xticks(rotation=60)
plt.tight_layout();
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="copyright-notice"
# #### Copyright 2017 Google LLC.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="copyright-notice2"
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="PTaAdgy3LS8W" colab_type="text"
# # Introduction to Sparse Data and Embeddings
#
# **Learning objectives:**
# * convert movie-review string data into a sparse feature vector
# * implement a sentiment-analysis linear model using a sparse feature vector
# * implement a sentiment-analysis DNN model using an embedding that projects the data into two dimensions
# * visualize the embedding to see what the model has learned about the relationships between words
#
# In this exercise, we will explore sparse data and work with embeddings using text data from movie reviews (from the [ACL 2011 IMDB dataset](http://ai.stanford.edu/~amaas/data/sentiment/)). This data has already been processed into `tf.Example` format.
# + [markdown] id="2AKGtmwNosU8" colab_type="text"
# ## Setup
#
# Let's import our dependencies and download the training and test data. [`tf.keras`](https://www.tensorflow.org/api_docs/python/tf/keras) includes a file download and caching tool that we can use to retrieve the datasets.
# + id="jGWqDqFFL_NZ" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
from __future__ import print_function
import collections
import io
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython import display
from sklearn import metrics
tf.logging.set_verbosity(tf.logging.ERROR)
train_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/train.tfrecord'
train_path = tf.keras.utils.get_file(train_url.split('/')[-1], train_url)
test_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/test.tfrecord'
test_path = tf.keras.utils.get_file(test_url.split('/')[-1], test_url)
# + [markdown] id="6W7aZ9qspZVj" colab_type="text"
# ## Building a Sentiment Analysis Model
# + [markdown] id="jieA0k_NLS8a" colab_type="text"
# Let's train a sentiment-analysis model on this data that predicts whether a review is generally *favorable* (label of 1) or *unfavorable* (label of 0).
#
# To do so, we'll turn our string-valued `terms` into feature vectors using a *vocabulary*, a list of each term we expect to see in our data. For the purposes of this exercise, we've created a small vocabulary focused on a limited set of terms. Most of these were found to be strong indicators of *favorable* or *unfavorable*, but some were added simply because they are interesting.
#
# Each term in the vocabulary is mapped to a coordinate of our feature vector. To convert the string-valued `terms` for an example into this vector format, we encode them so that each coordinate gets a value of 0 if the vocabulary term does not appear in the example string, and a value of 1 if it does. Terms in an example that do not appear in the vocabulary are discarded.
# + [markdown] id="2HSfklfnLS8b" colab_type="text"
# **NOTE:** *We could of course use a larger vocabulary, and there are special tools for creating vocabularies. In addition, instead of simply discarding terms that are not in the vocabulary, we can introduce a small number of OOV (out-of-vocabulary) buckets into which terms outside the vocabulary are hashed. We could also use a __feature hashing__ approach that hashes each term, instead of creating an explicit vocabulary. This works well in practice, but hurts interpretability, which is useful for this exercise. See the tf.feature_column module for tools to handle this.*
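The vocabulary encoding described above can be sketched in plain Python; the mini-vocabulary here is made up and much smaller than the 50-term list used later:

```python
# Made-up mini-vocabulary for illustration only
vocabulary = ["bad", "great", "boring", "amazing"]

def encode(terms, vocab):
    """1.0 where a vocabulary term appears among the example's terms,
    0.0 otherwise; out-of-vocabulary terms ("plot" below) are simply dropped."""
    present = set(terms)
    return [1.0 if term in present else 0.0 for term in vocab]

print(encode(["great", "plot", "amazing"], vocabulary))  # [0.0, 1.0, 0.0, 1.0]
```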
# + [markdown] id="Uvoa2HyDtgqe" colab_type="text"
# ## Building the Input Pipeline
# + [markdown] id="O20vMEOurDol" colab_type="text"
# First, let's set up the input pipeline to import our data into a TensorFlow model. We can use the following function to parse the training and test data (which are in [TFRecord](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data) format) and return a dict of the features and the corresponding labels.
# + id="SxxNIEniPq2z" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
def _parse_function(record):
  """Extracts features and labels.

  Args:
    record: File path to a TFRecord file
  Returns:
    A `tuple` `(labels, features)`:
      features: A dict of tensors representing the features
      labels: A tensor with the corresponding labels.
  """
  features = {
    "terms": tf.VarLenFeature(dtype=tf.string), # terms are strings of varying lengths
    "labels": tf.FixedLenFeature(shape=[1], dtype=tf.float32) # labels are 0 or 1
  }

  parsed_features = tf.parse_single_example(record, features)

  terms = parsed_features['terms'].values
  labels = parsed_features['labels']

  return {'terms':terms}, labels
# + [markdown] id="SXhTeeYMrp-l" colab_type="text"
# To confirm that our function is working as expected, let's construct a `TFRecordDataset` for the training data and map the data to features and labels using the function above.
# + id="oF4YWXR0Omt0" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# Create the Dataset object.
ds = tf.data.TFRecordDataset(train_path)
# Map features and labels with the parse function.
ds = ds.map(_parse_function)
ds
# + [markdown] id="bUoMvK-9tVXP" colab_type="text"
# Run the following cell to retrieve the first example from the training dataset.
# + id="Z6QE2DWRUc4E" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
n = ds.make_one_shot_iterator().get_next()
sess = tf.Session()
sess.run(n)
# + [markdown] id="jBU39UeFty9S" colab_type="text"
# Now, let's build a formal input function that we can pass to the `train()` method of a TensorFlow Estimator object.
# + id="5_C5-ueNYIn_" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# Create an input_fn that parses the tf.Examples from the given files,
# and split them into features and targets.
def _input_fn(input_filenames, num_epochs=None, shuffle=True):
  # Same code as above; create a dataset and map features and labels.
  ds = tf.data.TFRecordDataset(input_filenames)
  ds = ds.map(_parse_function)

  if shuffle:
    ds = ds.shuffle(10000)

  # Our feature data is variable-length, so we pad and batch
  # each field of the dataset structure to whatever size is necessary.
  ds = ds.padded_batch(25, ds.output_shapes)

  ds = ds.repeat(num_epochs)

  # Return the next batch of data.
  features, labels = ds.make_one_shot_iterator().get_next()
  return features, labels
# + [markdown] id="Y170tVlrLS8c" colab_type="text"
# ## Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary
#
# For our first model, we'll build a [`LinearClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier) model using 50 informative terms; always start simple!
#
# The following code builds the feature column for our terms. The [`categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list) function creates a feature column with the string-to-feature-vector mapping.
# + id="B5gdxuWsvPcx" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# 50 informative terms that compose our model vocabulary.
informative_terms = ("bad", "great", "best", "worst", "fun", "beautiful",
"excellent", "poor", "boring", "awful", "terrible",
"definitely", "perfect", "liked", "worse", "waste",
"entertaining", "loved", "unfortunately", "amazing",
"enjoyed", "favorite", "horrible", "brilliant", "highly",
"simple", "annoying", "today", "hilarious", "enjoyable",
"dull", "fantastic", "poorly", "fails", "disappointing",
"disappointment", "not", "him", "her", "good", "time",
"?", ".", "!", "movie", "film", "action", "comedy",
"drama", "family")
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms", vocabulary_list=informative_terms)
# + [markdown] id="eTiDwyorwd3P" colab_type="text"
# Next, we'll construct the `LinearClassifier`, train it on the training set, and evaluate it on the evaluation set. Once you've read through the code, run it and see how you do.
# + id="HYKKpGLqLS8d" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
feature_columns = [ terms_feature_column ]
classifier = tf.estimator.LinearClassifier(
  feature_columns=feature_columns,
  optimizer=my_optimizer,
)

classifier.train(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)

evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
  print(m, evaluation_metrics[m])
print("---")

evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([test_path]),
    steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
  print(m, evaluation_metrics[m])
print("---")
# + [markdown] id="J0ubn9gULS8g" colab_type="text"
# ## Task 2: Use a Deep Neural Network (DNN) Model
#
# The above model is a linear model, and it works quite well. But can we do better with a DNN model?
#
# Let's swap in a [`DNNClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier) for the `LinearClassifier`. Run the following cell and see how you do.
# + id="jcgOPfEALS8h" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
##################### Here's what we changed ##################################
classifier = tf.estimator.DNNClassifier(                                      #
  feature_columns=[tf.feature_column.indicator_column(terms_feature_column)], #
  hidden_units=[20,20],                                                       #
  optimizer=my_optimizer,                                                     #
)                                                                             #
###############################################################################

try:
  classifier.train(
      input_fn=lambda: _input_fn([train_path]),
      steps=1000)

  evaluation_metrics = classifier.evaluate(
      input_fn=lambda: _input_fn([train_path]),
      steps=1)
  print("Training set metrics:")
  for m in evaluation_metrics:
    print(m, evaluation_metrics[m])
  print("---")

  evaluation_metrics = classifier.evaluate(
      input_fn=lambda: _input_fn([test_path]),
      steps=1)
  print("Test set metrics:")
  for m in evaluation_metrics:
    print(m, evaluation_metrics[m])
  print("---")
except ValueError as err:
  print(err)
# + [markdown] id="cZz68luxLS8j" colab_type="text"
# ## Task 3: Use an Embedding with a DNN Model
#
# In this task, we'll implement our DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output.
# + [markdown] id="AliRzhvJLS8k" colab_type="text"
# **NOTE:** *An embedding_column is usually the computationally most efficient option for training a model on sparse data. In an [optional section](#scrollTo=XDMlGgRfKSVz) at the end of this exercise, we'll discuss in more depth the implementation differences between using an `embedding_column` and an `indicator_column`, and the trade-offs of selecting one or the other.*
# + [markdown] id="F-as3PtALS8l" colab_type="text"
# In the code below, do the following:
#
# * Define the feature columns for the model using an `embedding_column` that projects the data into 2 dimensions (see the [TF docs](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) for more details on the function signature for `embedding_column`).
# * Define a `DNNClassifier` with the following specifications:
#     * Two hidden layers of 20 units each
#     * AdaGrad optimization with a learning rate of 0.1
#     * A `gradient_clip_norm` of 5.0
# + [markdown] id="UlPZ-Q9bLS8m" colab_type="text"
# **NOTE:** *In practice, we might project into a higher number of dimensions, such as 50 or 100. But for now, 2 dimensions are easy to visualize.*
# + [markdown] id="mNCLhxsXyOIS" colab_type="text"
# ### Hint
# + id="L67xYD7hLS8m" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# Here's an example code snippet you might use to define the feature columns:
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
# + [markdown] id="iv1UBsJxyV37" colab_type="text"
# ### Complete the Code Below
# + id="5PG_yhNGLS8u" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
########################## YOUR CODE HERE ######################################
terms_embedding_column = # Define the embedding column
feature_columns = # Define the feature columns
classifier = # Define the DNNClassifier
################################################################################
classifier.train(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)

evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
  print(m, evaluation_metrics[m])
print("---")

evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([test_path]),
    steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
  print(m, evaluation_metrics[m])
print("---")
# + [markdown] id="eQS5KQzBybTY" colab_type="text"
# ### Solution
#
# Click below to see the solution.
# + id="R5xOdYeQydi5" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
########################## SOLUTION CODE ########################################
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
  feature_columns=feature_columns,
  hidden_units=[20,20],
  optimizer=my_optimizer
)
#################################################################################

classifier.train(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)

evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
  print(m, evaluation_metrics[m])
print("---")

evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([test_path]),
    steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
  print(m, evaluation_metrics[m])
print("---")
# + [markdown] id="aiHnnVtzLS8w" colab_type="text"
# ## Task 4: Convince Yourself There's Actually an Embedding in There
#
# The above model used an `embedding_column` and seemed to work well, but this doesn't tell us much about what's going on internally. How can we check that the model is actually using an embedding inside?
#
# To start, let's look at the tensors in the model:
# + id="h1jNgLdQLS8w" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
classifier.get_variable_names()
# + [markdown] id="Sl4-VctMLS8z" colab_type="text"
# Okay, we can see that there is an embedding layer in there: `'dnn/input_from_feature_columns/input_layer/terms_embedding/...'`. (What's interesting here, by the way, is that this layer is trainable along with the rest of the model, just like any hidden layer.)
#
# Is the embedding layer the correct shape? Run the following code to find out.
# + [markdown] id="JNFxyQUiLS80" colab_type="text"
# **NOTE:** *Remember that, in our case, the embedding is a matrix that allows us to project a 50-dimensional vector into 2 dimensions.*
# + id="1xMbpcEjLS80" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights').shape
# + [markdown] id="MnLCIogjLS82" colab_type="text"
# Spend some time manually checking the various layers and shapes to make sure everything is connected the way you would expect.
# + [markdown] id="rkKAaRWDLS83" colab_type="text"
# ## Task 5: Examine the Embedding
#
# Let's now take a look at the actual embedding space and see where the terms end up in it. Do the following:
# 1. Run the code below to see the embedding we trained in **Task 3**. Do things end up where you'd expect?
#
# 2. Re-train the model by re-running the code in **Task 3**, and then run the embedding visualization below again. What stays the same? What changes?
#
# 3. Finally, re-train the model using only 10 steps (which will yield a terrible model). Run the embedding visualization below again. What do you see now, and why?
# + id="s4NNu7KqLS84" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
import numpy as np
import matplotlib.pyplot as plt
embedding_matrix = classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights')
for term_index in range(len(informative_terms)):
# Create a one-hot encoding for our term. It has 0s everywhere, except for
# a single 1 in the coordinate that corresponds to that term.
term_vector = np.zeros(len(informative_terms))
term_vector[term_index] = 1
# We'll now project that one-hot vector into the embedding space.
embedding_xy = np.matmul(term_vector, embedding_matrix)
plt.text(embedding_xy[0],
embedding_xy[1],
informative_terms[term_index])
# Do a little setup to make sure the plot displays nicely.
plt.rcParams["figure.figsize"] = (15, 15)
plt.xlim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())
plt.ylim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())
plt.show()
# + [markdown] id="pUb3L7pqLS86" colab_type="text"
# ## Task 6: Try to improve the model's performance
#
# See if you can refine the model to improve performance. A couple of things you may want to try:
#
# * **Changing hyperparameters** or **using a different optimizer**, such as Adam (you may only gain one or two accuracy percentage points following these strategies).
# * **Adding additional terms to `informative_terms`.** There is a full vocabulary file with all 30,716 terms for this dataset that you can use at: https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt. You can pick out additional terms from this vocabulary file, or use the whole thing via the `categorical_column_with_vocabulary_file` feature column.
# + id="6-b3BqXvLS86" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# Download the vocabulary file.
terms_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt'
terms_path = tf.keras.utils.get_file(terms_url.split('/')[-1], terms_url)
# + id="0jbJlwW5LS8-" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# Create a feature column from "terms", using a full vocabulary file.
informative_terms = None
with io.open(terms_path, 'r', encoding='utf8') as f:
# Convert it to a set first to remove duplicates.
informative_terms = list(set(f.read().split()))
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms",
vocabulary_list=informative_terms)
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[10,10],
optimizer=my_optimizer
)
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
# + [markdown] id="ew3kwGM-LS9B" colab_type="text"
# ## Conclusion
#
# We may have gotten a DNN solution with an embedding that outperformed our original linear model, but the linear model was also pretty good, and was quite a bit faster to train. Linear models train faster because they do not have nearly as many parameters to update, or layers to backprop through.
#
# In some applications, the speed of linear models may be a game changer. Sometimes linear models may also be quite sufficient from a quality standpoint. And in other areas, the additional model complexity and capacity provided by DNNs might be more important. When defining your model architecture, remember to explore your problem well enough to know which space you're in.
# + [markdown] id="9MquXy9zLS9B" colab_type="text"
# ### *Optional discussion:* Trade-offs between `embedding_column` and `indicator_column`
#
# Conceptually, when training a `LinearClassifier` or a `DNNClassifier`, we need an adapter in order to use a sparse column. TF provides two options: an `embedding_column` or an `indicator_column`.
#
# When training a linear classifier (as in **Task 1**), an `embedding_column` is used under the hood. As seen in **Task 2**, when training a `DNNClassifier`, you must explicitly choose either `embedding_column` or `indicator_column`. This section looks at a simple example to discuss the difference between the two, as well as the trade-offs of using one versus the other.
# + [markdown] id="M_3XuZ_LLS9C" colab_type="text"
# Imagine we have sparse data containing the values `"great"`, `"beautiful"`, and `"excellent"`. Since the vocabulary size we're using here is $V = 50$, each unit (neuron) in the first layer will have 50 weights. We denote the number of terms in a sparse input by $s$, so for this example sparse data, $s = 3$. For an input layer with $V$ possible values, a hidden layer with $d$ units needs to do a vector-matrix multiply: $(1 \times V) * (V \times d)$. This has a computational cost of $O(V * d)$. Note that this cost is proportional to the number of weights in the hidden layer, and independent of $s$.
#
# If the inputs are one-hot encoded (a Boolean vector of length $V$, with a 1 for the terms that are present and a 0 for the rest) using an [`indicator_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/indicator_column), this means multiplying and adding a lot of zeros.
# + [markdown] id="I7mR4Wa2LS9C" colab_type="text"
# We achieve exactly the same results by using an [`embedding_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) of size $d$, where we look up and add up only the embeddings corresponding to the three features present in our example input, `"great"`, `"beautiful"`, and `"excellent"`: $(1 \times d) + (1 \times d) + (1 \times d)$. Since the weights for the absent features are multiplied by zero in the vector-matrix multiply, they do not contribute to the result. The weights for the features that are present are multiplied by 1 in the vector-matrix multiply, so adding the weights obtained via the embedding lookup gives exactly the same result as the vector-matrix multiply.
#
# When using an embedding, computing the embedding lookup is an $O(s * d)$ computation, which is much more computationally efficient than the $O(V * d)$ cost of the `indicator_column` for sparse data where $s$ is much smaller than $V$. (Remember that these embeddings are being learned, so in any given training iteration it is the current weights that are looked up.)
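The lookup-versus-multiply equivalence and the cost argument above can be sketched with NumPy. This is a toy illustration, not the TF internals; the matrix values and term indices are made up:

```python
import numpy as np

# Toy setup: V = 50 vocabulary entries, d = 2 embedding dimensions,
# s = 3 terms present in the sparse input.
rng = np.random.default_rng(0)
V, d = 50, 2
embedding_matrix = rng.standard_normal((V, d))
present_terms = [3, 17, 42]  # made-up indices for "great", "beautiful", "excellent"

# indicator_column style: multi-hot vector times the matrix, O(V * d) work.
one_hot = np.zeros(V)
one_hot[present_terms] = 1.0
dense_result = one_hot @ embedding_matrix

# embedding_column style: look up and sum only the s rows present, O(s * d) work.
lookup_result = embedding_matrix[present_terms].sum(axis=0)

print(np.allclose(dense_result, lookup_result))  # True
```

The two paths produce identical vectors; only the amount of arithmetic differs.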
# + [markdown] id="etZ9qf0kLS9D" colab_type="text"
# As we saw in **Task 3**, by using an `embedding_column` when training the `DNNClassifier`, our model learns a low-dimensional representation of the features, in which the dot product defines a similarity metric tailored to the desired task. In this example, terms that are used similarly in the context of movie reviews (e.g., `"great"` and `"excellent"`) will be closer together in the embedding space (i.e., have a large dot product), and dissimilar terms (e.g., `"great"` and `"bad"`) will be farther apart in the embedding space (i.e., have a small dot product).
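To make the similarity claim concrete, here is a sketch with hypothetical 2-d embedding vectors; the values are made up for illustration, not taken from the trained model:

```python
import numpy as np

# Hypothetical 2-d embedding vectors (illustrative values only).
emb = {
    "great":     np.array([1.0, 0.8]),
    "excellent": np.array([0.9, 0.9]),
    "bad":       np.array([-1.0, -0.7]),
}

# Similar terms have a large dot product...
print(round(float(np.dot(emb["great"], emb["excellent"])), 2))  # 1.62
# ...dissimilar terms a small (here negative) one.
print(round(float(np.dot(emb["great"], emb["bad"])), 2))        # -1.56
```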
| ml/cc/exercises/estimators/es-419/intro_to_sparse_data_and_embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Azure Kubernetes Service (AKS) Deep MNIST
# In this example we will deploy a tensorflow MNIST model in the Azure Kubernetes Service (AKS).
#
# This tutorial breaks down into the following sections:
#
# 1) Train a tensorflow model to predict mnist locally
#
# 2) Containerise the tensorflow model with our docker utility
#
# 3) Send some data to the docker model to test it
#
# 4) Install and configure Azure tools to interact with your cluster
#
# 5) Use the Azure tools to create and set up an AKS cluster with Seldon
#
# 6) Push and run docker image through the Azure Container Registry
#
# 7) Test our Azure Kubernetes deployment by sending some data
#
# Let's get started! 🚀🔥
#
# ## Dependencies:
#
# * Helm v3.0.0+
# * A Kubernetes cluster running v1.13 or above (minikube / docker-for-windows work well if given enough RAM)
# * kubectl v1.14+
# * az CLI v2.0.66+
# * Python 3.6+
# * Python DEV requirements
#
# ## 1) Train a tensorflow model to predict mnist locally
# We will load the mnist images, together with their labels, and then train a tensorflow model to predict the right labels
# +
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
if __name__ == "__main__":
x = tf.placeholder(tf.float32, [None, 784], name="x")
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b, name="y")
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(
-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])
)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
saver = tf.train.Saver()
saver.save(sess, "model/deep_mnist_model")
# -
# ## 2) Containerise the tensorflow model with our docker utility
# First you need to make sure that you have added the .s2i/environment configuration file in this folder with the following content:
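For reference, a typical `.s2i/environment` file for the Seldon Python wrapper looks like the sketch below; the exact values (in particular `MODEL_NAME`, which must match the Python model class in this folder) are assumptions, so check them against the `cat` output in the next cell:

```
MODEL_NAME=DeepMnist
API_TYPE=REST
SERVICE_TYPE=MODEL
PERSISTENCE=0
```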
# !cat .s2i/environment
# Now we can build a docker image named "deep-mnist" with the tag 0.1
# !s2i build . seldonio/seldon-core-s2i-python36:1.14.0-dev deep-mnist:0.1
# ## 3) Send some data to the docker model to test it
# We first run the docker image we just created as a container called "mnist_predictor"
# !docker run --name "mnist_predictor" -d --rm -p 5000:5000 deep-mnist:0.1
# Send some random features that conform to the contract
# +
import matplotlib.pyplot as plt
import numpy as np
# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap="gray")
plt.show()
print("Expected label: ", np.sum(range(0, 10) * y), ". One hot encoding: ", y)
# +
import math
import numpy as np
from seldon_core.seldon_client import SeldonClient
# We now test the REST endpoint expecting the same result
endpoint = "0.0.0.0:5000"
batch = x
payload_type = "ndarray"
sc = SeldonClient(microservice_endpoint=endpoint)
# We use the microservice, instead of the "predict" function
client_prediction = sc.microservice(
data=batch, method="predict", payload_type=payload_type, names=["tfidf"]
)
for proba, label in zip(
client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1],
range(0, 10),
):
print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")
# -
# !docker rm mnist_predictor --force
# ## 4) Install and configure Azure tools
# First we install the azure cli - follow specific instructions at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
# !curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# ### Configure the azure CLI so it can talk to your server
# (if you run into issues, make sure you have the permissions to create clusters)
#
# You must run this through a terminal and follow the instructions:
# ```
# az login
# ```
#
# Once you are logged in, we can create our cluster. Run the following command, it may take a while so feel free to get a ☕.
# + language="bash"
# # We'll create a resource group
# az group create --name SeldonResourceGroup --location westus
# # Now we create the cluster
# az aks create \
# --resource-group SeldonResourceGroup \
# --name SeldonCluster \
# --node-count 1 \
# --enable-addons monitoring \
#     --generate-ssh-keys \
#     --kubernetes-version 1.13.5
# -
# Once it's created we can authenticate our local `kubectl` to make sure we can talk to the azure cluster:
# !az aks get-credentials --resource-group SeldonResourceGroup --name SeldonCluster
# And now we can check that this has been successful by making sure that our `kubectl` context is pointing to the cluster:
# !kubectl config get-contexts
# ## Setup Seldon Core
#
# Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
# ## Push docker image
# In order for the AKS Seldon deployment to access the image we just built, we need to push it to the Azure Container Registry (ACR) - you can check whether it has been created successfully in the dashboard https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.ContainerRegistry%2Fregistries
#
# If you have any issues please follow the official Azure documentation: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli
# ### First we create a registry
# Make sure you keep the `loginServer` value in the output dictionary as we'll use it below.
# !az acr create --resource-group SeldonResourceGroup --name SeldonContainerRegistry --sku Basic
# ### Make sure your local docker instance has access to the registry
# !az acr login --name SeldonContainerRegistry
# ### Now prepare docker image
# We need to first tag the docker image before we can push it.
#
# NOTE: if you named your registry differently, make sure you change the value of `seldoncontainerregistry.azurecr.io`
# !docker tag deep-mnist:0.1 seldoncontainerregistry.azurecr.io/deep-mnist:0.1
# ### And push the image
#
# NOTE: if you named your registry differently, make sure you change the value of `seldoncontainerregistry.azurecr.io`
# !docker push seldoncontainerregistry.azurecr.io/deep-mnist:0.1
# ## Running the Model
# We will now run the model. As you can see we have a placeholder `"REPLACE_FOR_IMAGE_AND_TAG"`, which we'll replace to point to our registry.
#
# Let's first have a look at the file we'll be using to trigger the model:
# !cat deep_mnist.json
# Now let's trigger seldon to run the model.
#
# ### Run the deployment in your cluster
#
# NOTE: In order for this to work you need to make sure that your cluster has the permissions to pull the images. You can do this by:
#
# 1) Go into the Azure Container Registry
#
# 2) Select the SeldonContainerRegistry you created
#
# 3) Click on "Add a role assignment"
#
# 4) Select the AcrPull role
#
# 5) Select service principal
#
# 6) Find the SeldonCluster
#
# 7) Wait until the role has been added
#
# We basically have a JSON manifest, where we want to replace the value "REPLACE_FOR_IMAGE_AND_TAG" with the image you pushed
# + language="bash"
# # Change accordingly if your registry is called differently
# sed 's|REPLACE_FOR_IMAGE_AND_TAG|seldoncontainerregistry.azurecr.io/deep-mnist:0.1|g' deep_mnist.json | kubectl apply -f -
# -
# And let's check that it's been created.
#
# You should see a pod named "deep-mnist-single-model...".
#
# We'll wait until STATUS changes from "ContainerCreating" to "Running"
# !kubectl get pods
# ## Test the model
# Now we can test the model. Let's first find out the URL that we'll have to use:
# !kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# We'll use a random example from our dataset
# +
import matplotlib.pyplot as plt
# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap="gray")
plt.show()
print("Expected label: ", np.sum(range(0, 10) * y), ". One hot encoding: ", y)
# -
# We can now add the URL above to send our request:
# +
import math
import numpy as np
from seldon_core.seldon_client import SeldonClient
host = "172.16.31.10"
port = "80" # Make sure you use the port above
batch = x
payload_type = "ndarray"
sc = SeldonClient(
gateway="ambassador", ambassador_endpoint=host + ":" + port, namespace="default"
)
client_prediction = sc.predict(
data=batch, deployment_name="deep-mnist", names=["text"], payload_type=payload_type
)
print(client_prediction)
# -
# ### Let's visualise the probability for each label
# It seems that it correctly predicted the number 7
for proba, label in zip(
client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1],
range(0, 10),
):
print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")
| examples/models/azure_aks_deep_mnist/azure_aks_deep_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import the libraries used
from bs4 import BeautifulSoup as bs
import requests
import pandas as pd
from functools import reduce
# https://lunaf.com/astrology/zodiac-sign/
# The links to the relevant pages on this site were collected separately.
# There are 12 zodiac signs.
signs = {
"Aries": "https://lunaf.com/astrology/zodiac-sign/aries/",
"Taurus": "https://lunaf.com/astrology/zodiac-sign/taurus/",
"Gemini": "https://lunaf.com/astrology/zodiac-sign/gemini/",
"Cancer": "https://lunaf.com/astrology/zodiac-sign/cancer/",
"Leo": "https://lunaf.com/astrology/zodiac-sign/leo/",
"Virgo": "https://lunaf.com/astrology/zodiac-sign/virgo/",
"Libra": "https://lunaf.com/astrology/zodiac-sign/libra/",
"Scorpio": "https://lunaf.com/astrology/zodiac-sign/scorpio/",
"Sagittarius": "https://lunaf.com/astrology/zodiac-sign/sagittarius/",
"Capricorn": "https://lunaf.com/astrology/zodiac-sign/capricorn/",
"Aquarius": "https://lunaf.com/astrology/zodiac-sign/aquarius/",
"Pisces": "https://lunaf.com/astrology/zodiac-sign/pisces/"
}
# +
# create empty list for multiple df
dfs = []
# Pulling the data of all the signs in the signs separately
for sign in signs:
result = requests.get(signs[sign])
soup = bs(result.text, "html.parser")
    # ntr: subtitle
    # val: value of the subtitle
proftags_ntr = soup.findAll("span", {"class":"ntr"})
proftags_val = soup.findAll("span", {"class":"val"})
data_ntr = [i.get_text() for i in proftags_ntr]
data_val = [i.get_text() for i in proftags_val]
temp = {}
for index, i in enumerate(data_ntr):
temp[i] = data_val[index]
temp = pd.DataFrame([temp]).T.rename_axis("ntr/val")
dfs.append(temp)
# -
# merge multiple dfs in dfs
result = reduce(lambda i, j: i.merge(j, how="inner", on="ntr/val"), dfs)
result.columns = signs.keys()
result
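The `reduce`-based merge used above can be illustrated on toy frames; the column names and values below are made up for the example:

```python
from functools import reduce

import pandas as pd

# Three single-column frames sharing an index named "ntr/val",
# mirroring the per-sign frames built by the scraper.
a = pd.DataFrame({"ntr/val": ["Element", "Color"], "A": ["Fire", "Red"]}).set_index("ntr/val")
b = pd.DataFrame({"ntr/val": ["Element", "Color"], "B": ["Earth", "Green"]}).set_index("ntr/val")
c = pd.DataFrame({"ntr/val": ["Element", "Color"], "C": ["Air", "Blue"]}).set_index("ntr/val")

# Each merge joins on the shared index name; reduce chains it across the list.
merged = reduce(lambda i, j: i.merge(j, how="inner", on="ntr/val"), [a, b, c])
print(merged)
```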
| web_scraping_zodiac_sign.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Starting
# <h3 align="center">!! Part 03-Module 01-Lesson 01_Linear Regression/15. Linear Regression in scikit-learn !!</h3>
# * For your linear regression model, you'll be using scikit-learn's **LinearRegression** class. This class provides the function ``` fit()``` to fit the model to your data.
# ``` python
# from sklearn.linear_model import LinearRegression
# model = LinearRegression()
# model.fit(x_values, y_values)
# ```
# * In the example above, the **model** variable is a linear regression model that has been fitted to the data **x_values** and **y_values**. Fitting the model means finding the best line that fits the training data. Let's make two predictions using the model's ```predict()``` function.
# ``` python
# print(model.predict([[127], [248]]))
# ```
# The model returned an array of predictions, one prediction for each input array. The first input, `[127]`, got a prediction of `438.94308857`. The second input, `[248]`, got a prediction of `127.14839521`. The reason for predicting on an array like `[127]` and not just `127` is that a model can make predictions using multiple features.
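As a minimal sketch of this shape requirement (toy data assumed, with y = 2x so the numbers are easy to check by eye):

```python
from sklearn.linear_model import LinearRegression

# Toy data: y = 2 * x. Each inner list is one sample's feature vector.
x_values = [[1], [2], [3]]
y_values = [2, 4, 6]

model = LinearRegression()
model.fit(x_values, y_values)

# predict() also takes a list of samples, even for a single prediction.
print(model.predict([[4]]))       # -> [8.]
print(model.predict([[4], [5]]))  # one prediction per input row
```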
# <br>
# <h3 align='center'> Linear Regression Example </h3>
# <br>
# <br>
#
#
# * **Country** – The country the person was born in.
# * **Life expectancy** – The average life expectancy at birth for a person in that country.
# * **BMI** – The mean BMI of males in that country.
#
#
# #### You'll need to complete each of the following steps:
#
# **1. Load the data**
#
# * The data is in the file called `"bmi_healthcare.csv"`.
# * Use *pandas* `read_csv` to load the data into a dataframe (don't forget to import pandas!)
# * Assign the dataframe to the variable `bmi_data`.
#
# **2. Build a linear regression model**
#
# * Create a regression model using scikit-learn's `LinearRegression` and assign it to `bmi_model`.
# * Fit the model to the data.
#
# **3. Predict using the model**
#
# * Predict using a BMI of `21.07931` and assign it to the variable `laos_life_exp`.
#
# +
# STEP 0
import pandas as pd
from sklearn.linear_model import LinearRegression
# +
# STEP 1
bmi_data = pd.read_csv("./data/bmi_healthcare.csv")
print(bmi_data)
# +
# STEP 2
bmi_model = LinearRegression()
bmi_model.fit(bmi_data[['BMI']], bmi_data[['Life expectancy']])
# +
# STEP 3
# INPUT = BMI | OUTPUT = LIFE EXPECTANCY
laos_life_exp = bmi_model.predict([[21.07931]])
print("Predict: {:.3f}".format(laos_life_exp[0][0]))
# +
# Testing Model
# IRAN: Life expectancy = 73.1 | BMI = 25.31003
iran_life_exp = bmi_model.predict([[25.31003]])
print("Testing for Iran: {:.3f}".format(iran_life_exp[0][0]))
# -
# <h3 align="center">!! Part 03-Module 01-Lesson 01_Linear Regression/17. Multiple Linear Regression !!</h3>
# +
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
boston_data = load_boston()
x = boston_data['data']
y = boston_data['target']
model = LinearRegression()
model.fit(x, y)
sample_house = [[0.2296, 0.000, 10.59, 0.000, 0.4890, 6.326, 52.50, 4.354, 4.000, 277.0, 18.60, 394.8, 10.97]]
prediction = model.predict(sample_house)
print("Prediction: {:.3f}".format(prediction[0]))
# +
import matplotlib.pyplot as plt
plt.plot(y, 'r.');
plt.show();
plt.plot(x);
# -
# <br>
# <h3 align="center">!! Part 03-Module 01-Lesson 01_Linear Regression/12. Regularization !!</h3>
# <br>
# <img src='./data/Regulariz00.jpg' align='center' width=650 height=300>
# <br>
# <img src='./data/Regulariz01.jpg' align='center' width=650 height=300>
# <br>
# <img src='./data/Regulariz02.jpg' align='center' width=650 height=300>
# <h3 align="center">!! Part 03-Module 01-Lesson 02_Perceptron Algorithm/01. Intro !!</h3>
# * Classification is based on **Yes** or **No** decisions, and is mostly used as the building block of neural network layers.
# <img src='./data/classification00.jpg' align='center' width=650 height=300>
# * Given the table in the video above, what would the dimensions be for the input features (**x**), the weights (**W**), and the bias (**b**) to satisfy (**Wx + b**)?
#
# > **`W: (1xn), x: (nx1), b: (1x1)`**
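A quick NumPy check of those dimensions (the concrete `n` and the all-ones values are arbitrary, chosen only to make the shapes visible):

```python
import numpy as np

n = 4                 # number of input features
W = np.ones((1, n))   # weights: (1 x n)
x = np.ones((n, 1))   # input features: (n x 1)
b = np.ones((1, 1))   # bias: (1 x 1)

score = np.matmul(W, x) + b
print(score.shape)  # (1, 1): a single perceptron score
```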
# <h3 align="center">!! Part 03-Module 01-Lesson 02_Perceptron Algorithm/06. Perceptrons !!</h3>
# <br>
# <img src='./data/perceptron.jpg' align='center' width=650 height=300>
# <br>
#
# * Similar in spirit to a single neural network unit.
# > In the perceptron algorithm we repeatedly adjust the line so that it moves closer to any misclassified points.
#
# > Suppose a point is misclassified: it is *Red* but lies above the line, where the region above is **Blue** and the region below is **Red**, so we need to move the line closer to the red point. Say the model is $2x_{1} + x_{2} + 10 = 0$, the red point is at $(6, 9)$, the **learning rate** is $0.1$, and the bias input is $1$. We subtract the learning rate times the point's coordinates (and the bias input) from the line's parameters: $2 - 6 \cdot 0.1 = 1.4$, $1 - 9 \cdot 0.1 = 0.1$, and $10 - 1 \cdot 0.1 = 9.9$, so the new line is $1.4x_{1} + 0.1x_{2} + 9.9 = 0$. For a misclassified *Blue* point we do the reverse: multiply the learning rate by the point's coordinates and *add* the result, which moves the line closer to the blue point.
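The single update step described above can be written out directly as arithmetic, using the same numbers:

```python
# Line: 2*x1 + 1*x2 + 10 = 0; misclassified red point at (6, 9); learning rate 0.1.
w1, w2, bias_term = 2.0, 1.0, 10.0
x1, x2 = 6.0, 9.0
learn_rate = 0.1

# Move the line toward the red point: subtract learn_rate * (x1, x2, 1).
w1 -= learn_rate * x1
w2 -= learn_rate * x2
bias_term -= learn_rate * 1.0

print(round(w1, 4), round(w2, 4), round(bias_term, 4))  # 1.4 0.1 9.9
```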
# <h3 align="center">!! Part 03-Module 01-Lesson 02_Perceptron Algorithm/09. Perceptron Algorithm !!</h3>
#
# +
import pandas as pd
from numpy import array
perceptron_data = pd.read_csv('./data/perceptron_data.csv')
X_per = array(perceptron_data[['x1', 'x2']])
Y_per = array(perceptron_data[['y']])
print(perceptron_data)
# +
import matplotlib.pyplot as plt
plt.plot(X_per, 'b.');
# -
# > * **And now we need perceptron with this code:**
#
#
#
#
# ``` python
# def perceptronStep(X, y, W, b, learn_rate = 0.01):
# for i in range(len(X)):
# y_hat = prediction(X[i],W,b)
# if y[i]-y_hat == 1:
# W[0] += X[i][0]*learn_rate
# W[1] += X[i][1]*learn_rate
# b += learn_rate
# elif y[i]-y_hat == -1:
# W[0] -= X[i][0]*learn_rate
# W[1] -= X[i][1]*learn_rate
# b -= learn_rate
# return W, b
# ```
#
#
#
# > * **Or this complete code:**
#
#
#
#
# ```python
# import numpy as np
# # Setting the random seed, feel free to change it and see different solutions.
# np.random.seed(42)
#
# def stepFunction(t):
# if t >= 0:
# return 1
# return 0
#
# def prediction(X, W, b):
# return stepFunction((np.matmul(X,W)+b)[0])
#
# # TODO: Fill in the code below to implement the perceptron trick.
# # The function should receive as inputs the data X, the labels y,
# # the weights W (as an array), and the bias b,
# # update the weights and bias W, b, according to the perceptron algorithm,
# # and return W and b.
# def perceptronStep(X, y, W, b, learn_rate = 0.01):
# for i in range(len(X)):
# y_hat = prediction(X[i], W, b)
# if y[i]-y_hat == 1:
# W[0] += X[i][0]*learn_rate
# W[1] += X[i][1]*learn_rate
# b += learn_rate
# elif y[i]-y_hat == -1:
# W[0] -= X[i][0]*learn_rate
# W[1] -= X[i][1]*learn_rate
# b -= learn_rate
#
# # Fill in code
# return W, b
#
# # This function runs the perceptron algorithm repeatedly on the dataset,
# # and returns a few of the boundary lines obtained in the iterations,
# # for plotting purposes.
# # Feel free to play with the learning rate and the num_epochs,
# # and see your results plotted below.
# def trainPerceptronAlgorithm(X, y, learn_rate = 0.01, num_epochs = 25):
# x_min, x_max = min(X.T[0]), max(X.T[0])
# y_min, y_max = min(X.T[1]), max(X.T[1])
# W = np.array(np.random.rand(2,1))
# b = np.random.rand(1)[0] + x_max
# # These are the solution lines that get plotted below.
# boundary_lines = []
# for i in range(num_epochs):
# # In each epoch, we apply the perceptron step.
# W, b = perceptronStep(X, y, W, b, learn_rate)
# boundary_lines.append((-W[0]/W[1], -b/W[1]))
# return boundary_lines
# ```
# <h3 align="center">!! Part 03-Module 01-Lesson 03_Decision Trees/01. Intro !!</h3>
#
# <img src='./data/decisionTree00.jpg' align='center' width=650 height=300>
#
# > * **Recommending App Example by Decision Trees**
# <img src='./data/decisionTree01.jpg' align='center' width=650 height=300>
#
# > * **Student Admissions Example by Decision Trees**
# <h3 align="center">!! Part 03-Module 01-Lesson 03_Decision Trees/11. Multiclass Entropy !!</h3>
# <br>
# <img src='./data/Entropy00.jpg' align='center' width=650 height=300>
# <br>
#
# > * **Entropy is based on probabilities. For example, take 4 balls where 3 are _Red_ and 1 is _Blue_: the probability of drawing red is $0.75$ and of drawing blue is $0.25$. The probability of a particular sequence, e.g. red, red, red, blue, is $\left(0.75 \cdot 0.75 \cdot 0.75\right) \cdot \left(0.25\right) \approx 0.1054$; we take logarithms so that the product becomes a sum.**
# <img src='./data/Entropy01.jpg' align='center' width=650 height=300>
# <br>
# <img src='./data/Entropy02.jpg' align='center' width=650 height=300>
# <br>
# <img src='./data/Entropy03.jpg' align='center' width=650 height=300>
# <br>
# <img src='./data/Entropy04.jpg' align='center' width=650 height=300>
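The entropy of the 4-ball example above can be computed directly:

```python
from math import log2

# 4 balls: 3 red, 1 blue, so p_red = 0.75 and p_blue = 0.25.
p = [0.75, 0.25]

# Multiclass entropy: -(sum over classes of p_i * log2(p_i)).
entropy = -sum(p_i * log2(p_i) for p_i in p)
print(round(entropy, 4))  # 0.8113

# The sequence probability from the note: 0.75^3 * 0.25
print(0.75 ** 3 * 0.25)  # 0.10546875 (≈ 0.1054)
```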
# **You'll need to complete each of the following steps:**
#
# * 1. Build a decision tree model
#
# * Create a decision tree classification model using scikit-learn's **`DecisionTreeClassifier`** and assign it to the variable **`model`**.
#
# * 2. Fit the model to the data
#
# * You won't need to specify any of the hyperparameters, since the default ones will fit the data with an accuracy of 100% in the dataset. However, we encourage you to play with hyperparameters such as **`max_depth`** and **`min_samples_leaf`**, and try to find the simplest possible model, i.e., the least likely one to overfit!
#
# * 3. Predict using the model
#
# * Predict the labels for the training set, and assign this list to the variable **`y_pred`**.
#
# * 4. Calculate the accuracy of the model
#
# * For this, use sklearn's **`accuracy_score`** function.
#
#
#
#
# **Hyperparameters**
#
# When we define the model, we can specify the hyperparameters. In practice, the most common ones are
#
# * **`max_depth`**: The maximum number of levels in the tree.
# * **`min_samples_leaf`**: The minimum number of samples allowed in a leaf.
# * **`min_samples_split`**: The minimum number of samples required to split an internal node.
# * **`max_features`**: The number of features to consider when looking for the best split.
#
# For example, here we define a model where the maximum depth of the trees **`max_depth`** is $7$, and the minimum number of elements in each leaf **`min_samples_leaf`** is $10$.
#
# ``` python
# >>> model = DecisionTreeClassifier(max_depth = 7, min_samples_leaf = 10)
# ```
# +
import pandas as pd
import numpy as np
hyper_data = np.asarray(pd.read_csv('./data/hyperparameters.csv', header=None))
X = hyper_data[:, 0:2]
Y = hyper_data[:, 2]
from matplotlib.pyplot import plot
plot(X, 'g.');
# +
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
hyper_model = DecisionTreeClassifier()
hyper_model.fit(X, Y)
y_pred = hyper_model.predict(X)
acc = accuracy_score(Y, y_pred)
print(hyper_model.predict([[0.59793, 0.095029]]))
# -
# <h3 align="center">!! Part 03-Module 01-Lesson 04_Naive Bayes/05. Bayes Theorem !!</h3>
# <img src='./data/BayesTheorem00.jpg' align="center" width=650 height=300>
# **Overview**
#
# This project has been broken down in to the following steps:
#
# * Step 0: Introduction to the Naive Bayes Theorem
# * Step 1.1: Understanding our dataset
# * Step 1.2: Data Preprocessing
# * Step 2.1: Bag of Words(BoW)
# * Step 2.2: Implementing BoW from scratch
# * Step 2.3: Implementing Bag of Words in scikit-learn
# * Step 3.1: Training and testing sets
# * Step 3.2: Applying Bag of Words processing to our dataset.
# * Step 4.1: Bayes Theorem implementation from scratch
# * Step 4.2: Naive Bayes implementation from scratch
# * Step 5: Naive Bayes implementation using scikit-learn
# * Step 6: Evaluating our model
# * Step 7: Conclusion
#
#
# > * **Naive Bayes is mostly used in `NLP`. This project is in the [GitHub repo](https://github.com/udacity/NLP-Exercises/tree/master/1.5-spam-classifier)**
#
# > * **MainPage: [IPYNB](https://github.com/udacity/NLP-Exercises/blob/master/1.5-spam-classifier/Bayesian_Inference.ipynb) | SolutionPage: [IPYNB](https://github.com/udacity/NLP-Exercises/blob/master/1.5-spam-classifier/Bayesian_Inference_solution.ipynb)**
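# As a quick numeric sketch of Bayes' theorem above (all probabilities below are made up for illustration):

```python
# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
# All numbers here are illustrative, not from the spam dataset.
p_spam = 0.3              # prior: fraction of messages that are spam
p_word_given_spam = 0.6   # "free" appears in 60% of spam messages
p_word_given_ham = 0.05   # "free" appears in 5% of non-spam messages

# Law of total probability for the evidence term P(word)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

posterior = p_word_given_spam * p_spam / p_word
print(round(posterior, 3))  # 0.837
```

# Even though the word is only moderately spam-specific, the posterior is much larger than the 0.3 prior.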
# <h3 align="center">!! Part 03-Module 01-Lesson 05_Support Vector Machines/04. Error Function Intuition !!</h3>
# <img src="./data/minimizingSVM.jpg" align="center" width=650 heigth=300>
# <h3 align="center">!! Part 03-Module 01-Lesson 05_Support Vector Machines/17. SVMs in sklearn !!</h3>
# **Hyperparameters**
#
# When we define the model, we can specify the hyperparameters. As we've seen in this section, the most common ones are
#
# * **`C`**: The C parameter.
# * **`kernel`**: The kernel. The most common ones are 'linear', 'poly', and 'rbf'.
# * **`degree`**: If the kernel is polynomial, this is the maximum degree of the monomials in the kernel.
# * **`gamma`**: If the kernel is rbf, this is the gamma parameter.
#
# ``` python
# >>> model = SVC(kernel='poly', degree=4, C=0.1)
#
# ```
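# Rather than tuning these by hand, a small grid can be searched automatically; a hedged sketch with scikit-learn's **`GridSearchCV`** (the synthetic data and grid values are illustrative choices, not from the lesson):

```python
# Illustrative sketch: grid-searching SVC hyperparameters on synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X = rng.randn(120, 2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)   # circular decision boundary

param_grid = {'kernel': ['linear', 'rbf'], 'C': [0.1, 1, 10]}
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)   # an rbf kernel should win on this nonlinear boundary
```

# As the hint above says, not every kernel works: a linear kernel cannot separate a circular boundary, and the search reflects that.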
#
# You'll need to complete each of the following steps:
#
# **1. Build a support vector machine model**
#
# * Create a support vector machine classification model using scikit-learn's **`SVC`** and assign it to the variable **`model`**.
#
# **2. Fit the model to the data**
#
# * If necessary, specify some of the hyperparameters. The goal is to obtain an accuracy of 100% in the dataset. Hint: Not every kernel will work well.
#
# **3. Predict using the model**
#
# * Predict the labels for the training set, and assign this list to the variable **`y_pred`**.
#
# **4. Calculate the accuracy of the model**
#
# * For this, use sklearn's **`accuracy_score`** function.
#
#
# ``` python
# from sklearn.svm import SVC
# model = SVC()
# model.fit(x_values, y_values)
# print(model.predict([ [0.2, 0.8], [0.5, 0.4] ]))
#
# # Output
# [0., 1.]
#
#
#
# ```
# +
from matplotlib.pyplot import plot
import numpy as np
import pandas as pd
svm_data = np.asarray(pd.read_csv('data/svm_data.csv', header=None))
X = svm_data[:, 0:2]
y = svm_data[:, 2]
plot(X[:, 0], X[:, 1], 'y.');  # scatter the two feature columns against each other
# +
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
model_svm = SVC(kernel='rbf', gamma=27)
model_svm.fit(X, y)
y_pred_svm = model_svm.predict(X)
acc_svm = accuracy_score(y, y_pred_svm)
print(model_svm.predict([[0.26, 0.99]]))
# -
# <h3 align="center">!! Part 03-Module 01-Lesson 06_Ensemble Methods/01. Intro !!</h3>
# <img src='./data/ADABoost00.jpg' align='center' width=650 heigth=300>
# <br>
# <img src='./data/ADABoost01.jpg' align='center' width=650 heigth=300>
#
#
# > **In this model, the numerator of the fraction counts the correctly classified points and the denominator counts the misclassified points.**
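# That weight rule can be sketched numerically (assuming the `ln(correct/incorrect)` form shown in the lesson slides; the counts below are made up):

```python
# AdaBoost learner weight: weight = ln(#correct / #incorrect),
# equivalently ln(accuracy / (1 - accuracy)). Counts below are illustrative.
import math

def learner_weight(n_correct, n_incorrect):
    return math.log(n_correct / n_incorrect)

print(round(learner_weight(7, 1), 3))   # strong learner     ->  1.946
print(round(learner_weight(4, 4), 3))   # coin flip          ->  0.0
print(round(learner_weight(2, 6), 3))   # worse than chance  -> -1.099
```

# A worse-than-chance learner gets a negative weight, i.e., its vote is flipped.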
# <br>
# <img src='./data/ADABoost02.jpg' align='center' width=650 heigth=300>
# <br>
# <img src='./data/ADABoost03.jpg' align='center' width=650 heigth=300>
# <br>
# <img src='./data/ADABoost04.jpg' align='center' width=650 heigth=300>
# <br>
# <img src='./data/ADABoost05.jpg' align='center' width=650 heigth=300>
# <br>
# <img src='./data/ADABoost06.jpg' align='center' width=650 heigth=300>
# <br>
# <img src='./data/ADABoost07.jpg' align='center' width=650 heigth=300>
# <br>
# <br>
# <h3 align='center'>!! Part 03-Module 01-Lesson 06_Ensemble Methods/09. AdaBoost in sklearn !!</h3>
# <br>
# <br>
#
# Building an AdaBoost model in sklearn is no different than building any other model. You can use scikit-learn's **`AdaBoostClassifier`** class. This class provides the functions to define and fit the model to your data.
#
# ``` python
# >>> from sklearn.ensemble import AdaBoostClassifier
# >>> model = AdaBoostClassifier()
# >>> model.fit(x_train, y_train)
# >>> model.predict(x_test)
# ```
#
# The **`model`** variable is an AdaBoost model that has been fitted to the data **`x_train`** and **`y_train`**. The functions **`fit`** and **`predict`** work exactly as before.
#
#
# **Hyperparameters**
#
# When we define the model, we can specify the hyperparameters. In practice, the most common ones are
#
# * **`base_estimator`**: The model utilized for the weak learners (**Warning**: Don't forget to import the model that you decide to use for the weak learner).
# * **`n_estimators`**: The maximum number of weak learners used.
#
# For example, here we define a model which uses decision trees of max_depth 2 as the weak learners, and it allows a maximum of 4 of them.
#
# ```python
#
# from sklearn.tree import DecisionTreeClassifier
# model = AdaBoostClassifier(base_estimator = DecisionTreeClassifier(max_depth=2), n_estimators = 4)
# ```
#
#
# <br>
# <br>
#
#
# **Why is the term "Naive" used in the Naive Bayes Classifier?**
#
# > We are assuming the features are independent events, when they may not be so.
#
# **What is an advantage of L1 regularization over L2 regularization?**
#
# > It tends to turn a lot of weights into zero, while leaving only the most important ones, thus helping in feature selection.
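# A quick sketch of that sparsity effect, comparing Lasso (L1) and Ridge (L2) on synthetic data where only two of ten features matter (the data and alpha are arbitrary choices):

```python
# Illustrative sketch: L1 zeroes out irrelevant weights, L2 only shrinks them.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.randn(200)   # only features 0 and 1 matter

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

print(int(np.sum(lasso.coef_ == 0)))   # most of the 8 irrelevant weights hit exactly 0
print(int(np.sum(ridge.coef_ == 0)))   # ridge weights shrink but rarely reach exactly 0
```

# Inspecting which Lasso coefficients survive is a simple form of feature selection.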
#
# **Which one of the following could reduce overfitting in a random forest classifier?**
#
# > Increase the number of trees
#
# **Which models can be used for non-binary classification?(select all that apply)**
#
# > * Decision Trees
# > * Random Forests
# > * Boosted Trees
#
#
# <h3 align='center'>!! Part 03-Module 01-Lesson 08_Supervised Learning Project/01. Overview !!</h3>
#
# **Starting the Project**
#
#
#
# For this assignment, you can find the **`finding_donors`** folder containing the necessary project files on the **[Machine Learning projects GitHub](https://github.com/udacity/machine-learning)**, under the projects folder. You may download all of the files for projects we'll use in this Nanodegree program directly from this repo. Please make sure that you use the most recent version of project files when completing a project!
#
# This project contains three files:
#
# * **`finding_donors.ipynb`**: This is the main file where you will be performing your work on the project.
# * **`census.csv`**: The project dataset. You'll load this data in the notebook.
# * **`visuals.py`**: This Python script provides supplementary visualizations for the project. Do not modify.
#
# In the Terminal or Command Prompt, navigate to the folder on your machine where you've put the project files, and then use the command **`jupyter notebook finding_donors.ipynb`** to open up a browser window or tab to work with your notebook. Alternatively, you can use the command jupyter notebook or ipython notebook and navigate to the notebook file in the browser window that opens. Follow the instructions in the notebook and answer each question presented to successfully complete the project. A README file has also been provided with the project files which may contain additional necessary information or instruction for the project.
#
#
# > **[FOLDER](https://github.com/udacity/machine-learning/tree/master/projects/finding_donors) | [IPYNB](https://github.com/udacity/machine-learning/blob/master/projects/finding_donors/finding_donors.ipynb) | [CSV](https://github.com/udacity/machine-learning/blob/master/projects/finding_donors/census.csv) | [MD](https://github.com/udacity/machine-learning/blob/master/projects/finding_donors/project_description.md) | [PYTHON](https://github.com/udacity/machine-learning/blob/master/projects/finding_donors/visuals.py)**
| MachineLearningNanoDegree/Part03/Part03_MLNaNoDeUdacity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(12,4))
# set width of bar
barWidth = 0.15
models = ['simple', 'basic', 'VGG16', 'ResNet', 'DenseNet']
# set height of bars
bars1 = [0.59, 0.66, 0.71, 0.71, 0.73]
bars2 = [0.50, 0.50, 0.14, 0.15, 0.19]
bars3 = [0.59, 0.58, 0.74, 0.70, 0.77]
# Set position of bar on X axis
r1 = np.arange(5)
r2 = [x + barWidth for x in r1]
r3 = [x + barWidth for x in r2]
plt.bar(r1, bars1, color='#003f5c', width=barWidth, edgecolor='white', label='Adam')
plt.bar(r2, bars2, color='#bc5090', width=barWidth, edgecolor='white', label='SGD')
plt.bar(r3, bars3, color='#ffa600', width=barWidth, edgecolor='white', label='RMSprop')
plt.xticks([r + barWidth for r in range(5)], models)
plt.ylim(bottom=0.1)
plt.title("Experiments on Optimizer")
plt.ylabel("Classification Accuracy")
plt.legend(loc=0)
plt.grid(axis="y", linestyle="--")
plt.savefig('./model_comparison.png')
plt.show()
| Lab 3/Bar-Graphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## MA-VELP: Machine-Learning Assisted Virtual Exfoliation via Liquid Phase
# +
# MAVELP_DATA_PATH must be set for this gui module to import correctly.
# When installed on nanoHUB this should be set by the invoke script.
# If testing in the Jupyter notebook Tool, something like the following should be un-commented.
# (edit "BIN" to desired location)
# #HOME=%env HOME
#BIN='/devel/VELP/bin'
#datapath = HOME + BIN + '/data/data.dat'
# #%env MAVELP_DATA_PATH = $datapath
from mavelp.gui import *
# %matplotlib inline
tablist = ['Material Selection', 'Method Selection', 'Design', 'About']
materials = ['BPY', 'TF']
tab = widgets.Tab()
a = tabs(tablist, materials, tab)
md = """
## MA-VELP: Machine-Learning Assisted Virtual Exfoliation via Liquid Phase
Based on the dataset obtained from high-throughput computational study, MAVELP employs machine learning
algorithms to screen for an optimal solvent based on the user's material selection for exfoliation process
via liquid phase.
Currently, MAVELP uses the free energy barrier as the selection criterion.
MAVELP was developed at University of Illinois at Urbana-Champaign under the Nano-Manufacturing Group in
order to push the boundaries of exfoliation process solvent design!
### Contact us
<a href=https://github.com/nanoMFG/VELP target="_blank">See the code on GitHub</a>
<a href=https://github.com/nanoMFG/VELP/issues/new?assignees=&labels=bug&template=bug_report.md&title= target="_blank">Report a bug</a>
<a href=https://github.com/nanoMFG/VELP/issues/new?assignees=&labels=Feature+Request&template=feature_request.md&title= target="_blank">Request a feature</a>
For further assistance contact: <EMAIL>
Developers: <NAME>, <NAME>, and <NAME>
"""
tab.children = [
a.generate_materials(),
a.generate_method(),
a.generate_design(),
HTML(markdown.markdown(md)),
]
for cnt, name in enumerate(tablist):
tab.set_title(cnt, name)
display(tab)
# -
| bin/MA-VELP-App.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# Import the relevant libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Load the Dataset
df = pd.read_csv( 'C:\\Users\\<NAME>\\OneDrive\\Desktop\\Python\\PROJECT_1\\number-of-deaths-by-risk-factor.csv')
df.head()
df.info()
# ## India Analysis
# ## Linear Regression
# ## Analyzing Deaths due to Alcohol use in India
df_Ind = df[df['Entity']=='India']
df_Ind
df_Ind.columns
# Understand the null values
df.isnull().sum()
# Create X and Y and split the dataset into train and test sets
from sklearn.model_selection import train_test_split
X = df_Ind.iloc[:,2].values
y = df_Ind.iloc[:,14].values
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state = 1 )
# Training the Simple Linear Regression model on the Training set
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train.reshape(-1, 1), y_train)
# Predict the values
y_pred = regressor.predict(X_test.reshape(-1,1))
# Visualising the Training set results
plt.scatter(X_train, y_train, color = 'red')
plt.plot(X_train, regressor.predict(X_train.reshape(-1,1)), color = 'blue')
plt.title('Linear Regression (Training set)',fontdict ={'fontsize':20})
plt.xlabel('Year',fontdict ={'fontsize':20})
plt.ylabel('Deaths',fontdict ={'fontsize':20})
plt.show()
# Visualising the Test set results
plt.scatter(X_test, y_test, color = 'red')
plt.plot(X_test, regressor.predict(X_test.reshape(-1,1)), color = 'blue')
plt.title('Linear Regression(Test set)',fontdict ={'fontsize':20})
plt.xlabel('Year',fontdict ={'fontsize':20})
plt.ylabel('Deaths',fontdict ={'fontsize':20})
plt.show()
# +
# As seen above, it's quite clear that deaths due to alcohol use in India have risen with time,
# and Linear Regression comes close to predicting accurate results.
# -
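# Beyond eyeballing the plots, the fit can be quantified with a metric such as R²; a self-contained sketch on synthetic year/death counts (not the actual dataset):

```python
# Illustrative sketch: scoring a linear fit with R^2 on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.RandomState(1)
years = np.arange(1990, 2018).reshape(-1, 1)
deaths = 1000 + 50 * (years.ravel() - 1990) + rng.normal(0, 40, len(years))

reg = LinearRegression().fit(years, deaths)
r2 = r2_score(deaths, reg.predict(years))
print(round(r2, 2))   # close to 1.0 for a near-linear trend
```

# The same `r2_score` call works on the notebook's `y_test`/`y_pred` pair.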
# ## Polynomial Regression
# Training the Polynomial Regression model on the whole dataset
from sklearn.preprocessing import PolynomialFeatures
poly_reg = PolynomialFeatures(degree = 3)
X_poly = poly_reg.fit_transform(X.reshape(-1, 1))
lin_reg_2 = LinearRegression()
lin_reg_2.fit(X_poly, y)
# Visualising the Polynomial Regression results (for higher resolution and smoother curve)
X_grid = np.arange(min(X.reshape(-1, 1)), max(X.reshape(-1, 1)), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X.reshape(-1, 1), y, color = 'red')
plt.plot(X_grid, lin_reg_2.predict(poly_reg.fit_transform(X_grid)), color = 'blue')
plt.title('Polynomial Regression',fontdict ={'fontsize':20})
plt.xlabel('Year',fontdict ={'fontsize':20})
plt.ylabel('Deaths',fontdict ={'fontsize':20})
plt.show()
# ## Multiple Regression
# +
# Predicting lower Birth weights due to diet low in nuts, seeds, wholegrains and sodium
# -
# Analyzing the contribution of Alcohol, Drugs and Cigarettes in total deaths
from sklearn.model_selection import train_test_split
X_m = df_Ind.iloc[:,31:34].values
Y_m = df_Ind.iloc[:,12].values
X_train_m, X_test_m, y_train_m, y_test_m = train_test_split(X_m,Y_m, test_size = 0.2, random_state = 1)
# Training the Multiple Linear Regression model on the Training set
from sklearn.linear_model import LinearRegression
regressor_m = LinearRegression()
regressor_m.fit(X_train_m, y_train_m)  # X_train_m already has shape (n_samples, 3), so no reshape
# Predicting the Test set results
y_pred_m = regressor_m.predict(X_test_m)
# Visualising the Training set results (against the first predictor column)
plt.scatter(X_train_m[:, 0], y_train_m, color = 'red')
plt.scatter(X_train_m[:, 0], regressor_m.predict(X_train_m), color = 'blue')
plt.title('Multiple Linear Regression (Training set)',fontdict ={'fontsize':20})
plt.xlabel('First predictor',fontdict ={'fontsize':20})
plt.ylabel('Deaths',fontdict ={'fontsize':20})
plt.show()
# Visualising the Test set results (against the first predictor column)
plt.scatter(X_test_m[:, 0], y_test_m, color = 'red')
plt.scatter(X_test_m[:, 0], y_pred_m, color = 'blue')
plt.title('Multiple Linear Regression (Test set)',fontdict ={'fontsize':20})
plt.xlabel('First predictor',fontdict ={'fontsize':20})
plt.ylabel('Deaths',fontdict ={'fontsize':20})
plt.show()
# ## Decision Tree Regression
X_d = df_Ind.iloc[:,31:34].values
y_d = df_Ind.iloc[:,12].values
# Training the Decision Tree Regression model on the whole dataset
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state = 0)
regressor.fit(X_d, y_d)
y_pred_d = regressor.predict(X_d)  # predict on the training features (X_grid_d was never defined)
# Visualising the Decision Tree Regression results (against the first predictor column)
plt.scatter(X_d[:, 0], y_d, color = 'red')
plt.scatter(X_d[:, 0], y_pred_d, color = 'blue')
plt.title('Decision Tree Regression',fontdict ={'fontsize':20})
plt.xlabel('First predictor',fontdict ={'fontsize':20})
plt.ylabel('Deaths',fontdict ={'fontsize':20})
plt.show()
# ## SVR
X_s = df_Ind.iloc[:,31:32].values.astype(float)  # use a single feature so the fitted curve can be plotted
y_s = df_Ind.iloc[:,12:13].values.astype(float)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
sc_y = StandardScaler()
X_s = sc_X.fit_transform(X_s)
y_s = sc_y.fit_transform(y_s)
# Training the SVR model on the whole dataset
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
regressor.fit(X_s, y_s.ravel())  # SVR expects a 1-D target
# Plotting the output
X_grid = np.arange(X_s.min(), X_s.max(), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X_s, y_s, color = 'green')
plt.plot(X_grid, regressor.predict(X_grid), color = 'orange')
plt.title('Support Vector Regression Model (High Resolution)')
plt.show()
| Death EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Working with Large Data Sources in S3
#
# Today we're going to be working with two large public datasets that are hosted in S3: the [NYC Taxi dataset](https://registry.opendata.aws/nyc-tlc-trip-records-pds/) and [GDELT](https://registry.opendata.aws/gdelt/). We could theoretically download this data and work with it locally, but it is enormous and it is not practical to query and subset it on a local machine. It makes sense to work with the data in the AWS cloud, where it already lives. To this end, we will use Python's `boto3` library to run AWS Athena and S3 Select resources. With these resources, we can begin to access and understand these large datasets in a serverless capacity, without needing to set up database servers. We'll explore formal large-scale database solutions on Wednesday and further methods for analyzing and fitting models on large-scale data in Weeks 7 and 8 when we learn more about Spark and Dask, but this should whet your appetite and help you see how you can use `boto3` to do large-scale cloud computing tasks within a Jupyter notebook.
#
# If you haven't already, please install the `boto3` package (via Anaconda, this can be achieved by running `conda install boto3` on the command line). The package will allow you to work with AWS resources using the credentials you have provided as a part of your AWS CLI configuration. So, you should confirm that these credentials are correct (recall that your AWS Educate session token changes every 3 hours) in order to ensure the code runs.
import boto3
import time
import pandas as pd
# First, let's initialize our S3 resource via Boto3.
s3 = boto3.client('s3')
s3_resource = boto3.resource('s3')
# And we can identify data that fits particular criteria within an S3 bucket by cycling through all the objects within the bucket. Here, we identify files that record Yellow Cab data from 2019:
bucket = 'nyc-tlc'
bucket_resource = s3_resource.Bucket(bucket)
[obj.key for obj in bucket_resource.objects.all() if '2019' in obj.key and 'yellow' in obj.key]
# So, we seem to have a number of CSV files that fit these criteria. How do we actually see what's inside these files and begin to analyze them? First of all, we could assess the general content of a file by previewing it. For instance, we could generate a URL for the file in S3 and read only a subset of the CSV's rows into a Pandas DataFrame. You'll notice, though, that this takes a while to run. This strategy will not scale up very well for downloading large amounts of data locally.
# +
def s3_csv_preview(bucket, key, rows=10):
'''
Preview CSV in S3 Bucket as Pandas DataFrame
'''
data_source = {
'Bucket': bucket,
'Key': key
}
url = s3.generate_presigned_url(ClientMethod='get_object',
Params=data_source)
data = pd.read_csv(url, nrows=rows)
return data
t0 = time.time()
df = s3_csv_preview(bucket='nyc-tlc',
key='trip data/yellow_tripdata_2019-12.csv',
rows=10)
print(time.time() - t0, 'seconds')
df.head()
# -
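# The preview relies on nothing S3-specific: pandas' `nrows` parses only the first rows of any file-like source. A tiny local sketch (in-memory CSV, no AWS credentials needed):

```python
# Illustrative sketch: nrows reads just the first rows of a CSV source.
import io
import pandas as pd

csv_text = "passenger_count,trip_distance\n1,2.5\n4,10.1\n2,0.9\n5,3.3\n"
preview = pd.read_csv(io.StringIO(csv_text), nrows=2)

print(len(preview))            # 2
print(list(preview.columns))   # ['passenger_count', 'trip_distance']
```

# Swapping the `StringIO` for the presigned S3 URL gives exactly the behavior of `s3_csv_preview` above; the slow part is the network transfer, not the parsing.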
# Alternatively, we might use S3 Select (via Boto3) to gather only a subset of the data that meets the criteria we're interested in. S3 Select is a serverless approach, which effectively spins up compute cores for us, searches for particular criteria in parallel, and then returns the results of the search to our local machine.
#
# As a demonstration, let's select 100 passenger_count and trip_distance datapoints, where the number of passengers in the cab was greater than 3.
#
# You'll notice that S3 Select uses SQL syntax to specify this subset of data. If you don't already know SQL, don't worry about it for this class. I would recommend, though, that you eventually learn (at least a bit of) SQL, as this is the standard language for any database querying (as well as serverless database querying in AWS). If you're interested, you can [learn the basics in a couple of hours on DataCamp](https://learn.datacamp.com/courses/introduction-to-sql).
#
# If we run our S3 Select query and save the result into a DataFrame, we can see that it took a lot less time to use S3 Select than our preview function above.
# +
import io
def s3_select(Bucket, Key, Expression):
s3_select_results = s3.select_object_content(
Bucket=Bucket,
Key=Key,
Expression=Expression,
ExpressionType='SQL',
InputSerialization={'CSV': {"FileHeaderInfo": "Use"}},
OutputSerialization={'JSON': {}})
df = pd.DataFrame()
for event in s3_select_results['Payload']:
if 'Records' in event:
df = pd.read_json(io.StringIO(event['Records']['Payload'].decode('utf-8')),
lines=True)
return df
t0 = time.time()
df = s3_select(Bucket='nyc-tlc',
Key='trip data/yellow_tripdata_2019-12.csv',
Expression='''
SELECT passenger_count, trip_distance
FROM s3object s
WHERE s.passenger_count > '3'
LIMIT 100
'''
)
print(time.time() - t0, 'seconds')
df
# -
# Then, with this subset of data, we can use our standard Pandas tools to calculate summary statistics and make plots.
(df.groupby('passenger_count')
.describe()
)
(df.groupby('passenger_count').count()
.plot
.bar(legend=True,
title='12/2019 Yellow Cab Rides for > 3 Passengers (Selection)')
);
# One limitation to S3 Select, though, is that it can only search over and return a limited amount of data at one time. If you have larger queries you would like to perform (i.e. over many gigabytes or terabytes of data) on data in an S3 bucket, it would be better to use AWS Athena, which you can also access via the Boto3 Python package. Athena additionally allows you to run queries over all of the available files in the bucket being queried (for instance, over all of the Yellow Cab CSV files in the NYC Taxi Bucket).
#
# The only catch with Athena is that, before you can run any queries, you will need to specify your data schema -- how your data is structured and the data types that are used -- so that it knows how to read the data. After you have specified this information, Athena can run standard SQL queries in a serverless fashion. Alternatively, you can also use AWS Glue crawlers to discover the schematization for you if you want to abstract the process even further. We will not be using AWS Glue today, though, and will just manually specify the schema of our data (typically, this is already defined for AWS' public datasets and does not require too much effort).
#
# Let's transition over to another large public dataset on AWS for our work with Athena -- the Global Database of Events, Language and Tone (GDELT) Project. The project "monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, counts, themes, sources, emotions, quotes, images and events driving our global society every second of every day" ([Registry of Open Data on AWS](https://registry.opendata.aws/gdelt/)).
#
# First, let's specify a function to run an Athena query using Boto3. Then, we'll establish our data's schema using the `create_db` and `create_table` queries. From here, we should be ready to begin learning more about the data itself in the GDELT bucket!
# +
#Function for starting athena query
def run_query(query, database, s3_output):
client = boto3.client('athena')
response = client.start_query_execution(
QueryString=query,
QueryExecutionContext={
'Database': database
},
ResultConfiguration={
'OutputLocation': s3_output,
}
)
print('Execution ID: ' + response['QueryExecutionId'])
# Wait until query is done running to return response
running = True
while running:
execution = client.get_query_execution(QueryExecutionId=response['QueryExecutionId'])
execution_status = execution["QueryExecution"]["Status"]["State"]
if execution_status == 'QUEUED' or execution_status == 'RUNNING':
pass
else:
running = False
print('QUERY', execution_status)
return response
# Athena saves the results of each query to an S3 bucket, so we need to specify this bucket:
# You will need to choose a unique bucket name to run this yourself (e.g. not jclindaniel-athena)
s3_output = 's3://jclindaniel-athena/'
create_db = 'CREATE DATABASE IF NOT EXISTS gdelt;'
create_table = \
"""
CREATE EXTERNAL TABLE IF NOT EXISTS gdelt.events (`globaleventid` INT,`day` INT,`monthyear` INT,`year` INT,`fractiondate` FLOAT,`actor1code` string,`actor1name` string,`actor1countrycode` string,`actor1knowngroupcode` string,`actor1ethniccode` string,`actor1religion1code` string,`actor1religion2code` string,`actor1type1code` string,`actor1type2code` string,`actor1type3code` string,`actor2code` string,`actor2name` string,`actor2countrycode` string,`actor2knowngroupcode` string,`actor2ethniccode` string,`actor2religion1code` string,`actor2religion2code` string,`actor2type1code` string,`actor2type2code` string,`actor2type3code` string,`isrootevent` BOOLEAN,`eventcode` string,`eventbasecode` string,`eventrootcode` string,`quadclass` INT,`goldsteinscale` FLOAT,`nummentions` INT,`numsources` INT,`numarticles` INT,`avgtone` FLOAT,`actor1geo_type` INT,`actor1geo_fullname` string,`actor1geo_countrycode` string,`actor1geo_adm1code` string,`actor1geo_lat` FLOAT,`actor1geo_long` FLOAT,`actor1geo_featureid` INT,`actor2geo_type` INT,`actor2geo_fullname` string,`actor2geo_countrycode` string,`actor2geo_adm1code` string,`actor2geo_lat` FLOAT,`actor2geo_long` FLOAT,`actor2geo_featureid` INT,`actiongeo_type` INT,`actiongeo_fullname` string,`actiongeo_countrycode` string,`actiongeo_adm1code` string,`actiongeo_lat` FLOAT,`actiongeo_long` FLOAT,`actiongeo_featureid` INT,`dateadded` INT,`sourceurl` string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' WITH SERDEPROPERTIES ('serialization.format' = '\t','field.delim' = '\t') LOCATION 's3://gdelt-open-data/events/';
"""
queries = [create_db, create_table]
for q in queries:
print("Executing query: %s" % (q))
res = run_query(q, 'gdelt', s3_output)
# -
# OK, let's actually run some queries on the data in the bucket. First of all, how many events are recorded in the dataset?
#
# **Important Note: Only run the following queries once (or not at all). Athena charges by the amount of data you query (~200 GB for this dataset), and you will quickly be out of AWS Educate credits if you run these too many times.**
query = '''
SELECT COUNT(*) as nb_events
FROM gdelt.events;
'''
res = run_query(query, 'gdelt', s3_output)
s3_csv_preview(bucket='jclindaniel-athena', key=res['QueryExecutionId']+'.csv')
# We can also select a subset of our data for further analysis. Here, we count the number of events that occur each year in the dataset and plot those values once we have this subset of the data back on our local machine.
query = '''
SELECT year,
COUNT(globaleventid) AS nb_events
FROM gdelt.events
GROUP BY year
ORDER BY year ASC;
'''
res = run_query(query, 'gdelt', s3_output)
df = s3_csv_preview(bucket='jclindaniel-athena', key=res['QueryExecutionId']+'.csv', rows=100)
df.plot('year', 'nb_events');
# We can also, of course, write even more complicated queries, such as this one where we identify the number of events that involved "<NAME>" as an actor.
query = '''
SELECT year, COUNT(globaleventid) AS nb_events
FROM gdelt.events
WHERE actor1name='<NAME>'
GROUP BY year
ORDER BY year ASC;
'''
res = run_query(query, 'gdelt', s3_output)
df = s3_csv_preview(bucket='jclindaniel-athena', key=res['QueryExecutionId']+'.csv', rows=100)
df.plot('year', 'nb_events')
# If you run all of these queries, you'll notice that the number of files in your S3 bucket piles up pretty quickly:
bucket = 'jclindaniel-athena'
bucket_resource = s3_resource.Bucket(bucket)
[obj.key for obj in bucket_resource.objects.all()]
# You can quickly delete all of the files in your bucket (so that you don't have to pay for them) by running the following `cleanup` function:
# +
def cleanup(bucket_name):
s3 = boto3.resource('s3')
bucket = s3.Bucket(bucket_name)
for item in bucket.objects.all():
item.delete()
cleanup('jclindaniel-athena')
| in-class-activities/05_Storage/s3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pynq import Overlay
import bitpack_lib
pl = Overlay("bitpack_sample.bit")
core = pl.bitpack_top_0
core._A.values = [0.9, 0.8, 0.7, 0.6]
core._SEL.values = [0.5, 0.5]
core.cycle = 10000
prods = []
avgs = []
result = [['product', prods], ['average', avgs]]
for i in range(5):
core.resetseeds()
core.start()
prods.append(core._PROD.value)
avgs.append(core._AVG.value)
result
# -
| pynq/sample/bitpack_sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ralsouza/python_fundamentos/blob/master/src/05_desafio/05_missao05.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="6HYwM-bgcfmf" colab_type="text"
# ## **Mission: Analyze Consumer Purchasing Behavior.**
# ### Difficulty Level: High
#
# You have been tasked with analyzing purchase data from a website! The data is in JSON format and is available along with this notebook.
#
# On the site, each user logs in with a personal account and can purchase products while browsing the list of products on offer. Each product has a sale price. Age and gender data were collected for each user and are provided in the JSON file.
#
# Your job is to deliver an analysis of consumer purchasing behavior. This is a common type of task performed by Data Scientists, and the result of this work can be used, for example, to feed a Machine Learning model and make predictions about future behavior.
#
# In this mission you will analyze consumer purchasing behavior using the Pandas package of the Python language, and your final report must include each of the following items:
#
#
# **Consumer Count**
# * Total number of consumers
#
# **Overall Purchase Analysis**
# * Number of unique items
# * Average purchase price
# * Total number of purchases
# * Total revenue (Total Value)
#
# **Demographic Information by Gender**
# * Percentage and count of male buyers
# * Percentage and count of female buyers
# * Percentage and count of other / undisclosed
#
# **Purchase Analysis by Gender**
# * Number of purchases
# * Average purchase price
# * Total purchase value
# * Purchases by age group
#
# **Identify the top 5 buyers by total purchase value, then list (in a table):**
# * Login
# * Number of purchases
# * Average purchase price
# * Total purchase value
# * Most popular items
#
# **Identify the 5 most popular items by purchase count, then list (in a table):**
# * Item ID
# * Item name
# * Number of purchases
# * Average item price
# * Total purchase value
# * Most profitable items
#
# **Identify the 5 most profitable items by total purchase value, then list (in a table):**
# * Item ID
# * Item name
# * Number of purchases
# * Average item price
# * Total purchase value
#
# **Final considerations:**
# * Your script must work for the provided dataset.
# * You must use the Pandas library and Jupyter Notebook.
#
#
# + id="7JMf2QPAcWne" colab_type="code" colab={}
# Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt  # needed for the histograms below
# Load file from Drive
from google.colab import drive
drive.mount('/content/drive')
# Load file to Dataframe
load_file = "/content/drive/My Drive/dados_compras.json"
purchase_file = pd.read_json(load_file, orient = "records")
# + [markdown] id="kEjZJXapSjFe" colab_type="text"
# ## **1. Exploratory Analysis**
# + [markdown] id="hCkJJLYQYR05" colab_type="text"
# ### **1.1 Checking the first rows**
# + id="C67576fXYYi2" colab_type="code" outputId="98f205b6-0d36-4994-871d-5f36b2c93964" colab={"base_uri": "https://localhost:8080/", "height": 419}
# Note that logins repeat.
purchase_file.sort_values('Login')
# + [markdown] id="-FinGk2_Yonh" colab_type="text"
# ### **1.2 Checking the data types**
# + id="lvGjYvS4YxkC" colab_type="code" outputId="1d78ec84-08a6-481a-c349-faca48230dd0" colab={"base_uri": "https://localhost:8080/", "height": 131}
purchase_file.dtypes
# + [markdown] id="M5IyGnBvY-ia" colab_type="text"
# ### **1.3 Checking for null values**
# + id="6sLi_lf0ZKAc" colab_type="code" outputId="3230fe96-ade8-4030-a889-b1a6009bac43" colab={"base_uri": "https://localhost:8080/", "height": 131}
purchase_file.isnull().sum().sort_values(ascending = False)
# + [markdown] id="zRALpHx5ZbZL" colab_type="text"
# ### **1.4 Checking for zero values**
# + id="EzLNf5KSZmae" colab_type="code" outputId="bfdeeb93-e376-41b1-baa1-ab5105fc510a" colab={"base_uri": "https://localhost:8080/", "height": 131}
(purchase_file == 0).sum()
# + [markdown] id="9A6T-er_Z6Xv" colab_type="text"
# ### **1.5 Age distribution**
# The most representative group in this sample is between 19 and 26 years of age.
# + id="gpb8ss5daQr-" colab_type="code" outputId="79c59a34-fa2d-4912-c2ed-46a172f5078e" colab={"base_uri": "https://localhost:8080/", "height": 294}
plt.hist(purchase_file['Idade'], histtype='bar', rwidth=0.8)
plt.title('Sales distribution by age')
plt.xlabel('Age')
plt.ylabel('Number of buyers')
plt.show()
# + [markdown] id="ZwEgrg8bctl9" colab_type="text"
# ### **1.6 Distribution of values**
# Most sales come from products priced at `R$ 2.30`, `R$ 3.40` and `R$ 4.20`.
# + id="ezfCbcD-c0MX" colab_type="code" outputId="12728e21-a24c-4d08-c2b9-2ca6c733903b" colab={"base_uri": "https://localhost:8080/", "height": 294}
plt.hist(purchase_file['Valor'], histtype='bar', rwidth=0.8)
plt.title('Distribution of Values')
plt.xlabel('Price (R$)')
plt.ylabel('Number of sales')
plt.show()
# + [markdown] id="7LcWJdQlfvEp" colab_type="text"
# ## **2. Consumer Information**
# * Total number of consumers
# + id="c5ChXwxsfEoS" colab_type="code" outputId="1651cb02-2f7b-455c-ed2f-d44bf768528a" colab={"base_uri": "https://localhost:8080/", "height": 32}
# Count the number of logins, dropping duplicate rows.
total_consumidores = purchase_file['Login'].drop_duplicates().count()
print('Total number of consumers in the sample: {}'.format(total_consumidores))
# + [markdown] id="Pe3psLqngdP6" colab_type="text"
# ## **3. Overall Purchase Analysis**
# * Number of unique items
# * Average purchase price
# * Total number of purchases
# * Total revenue (Total Value)
# + id="OZC6MkETglox" colab_type="code" outputId="4058fa33-9f49-49c7-e2c0-94af775f3f40" colab={"base_uri": "https://localhost:8080/", "height": 80}
# Number of unique items
itens_exclusivos = purchase_file['Item ID'].drop_duplicates().count()
preco_medio = np.average(purchase_file['Valor'])
total_compras = purchase_file['Nome do Item'].count()
valor_total = np.sum(purchase_file['Valor'])
analise_geral = pd.DataFrame({
'Itens Exclusivos':[itens_exclusivos],
'Preço Médio (R$)':[np.round(preco_medio, decimals=2)],
'Qtd. Compras':[total_compras],
'Valor Total (R$)':[valor_total]
})
analise_geral
# + [markdown] id="oiLlG1Ovgm4J" colab_type="text"
# ## **4. Demographic Analysis by Gender**
# * Percentage and count of male buyers
# * Percentage and count of female buyers
# * Percentage and count of other / undisclosed
# + id="v0vRJxiSKhka" colab_type="code" colab={}
# Select the buyers' unique attributes for deduplication
info_compradores = purchase_file.loc[:,['Login','Sexo','Idade']]
# Deduplicate the data
info_compradores = info_compradores.drop_duplicates()
# + id="TJU_HSD3NeOH" colab_type="code" colab={}
# Number of buyers by gender
qtd_compradores = info_compradores['Sexo'].value_counts()
# Percentage of buyers by gender
perc_compradores = round(info_compradores['Sexo'].value_counts(normalize=True) * 100, 2)
# Store the data in a DataFrame
analise_demografica = pd.DataFrame(
{'Percentual':perc_compradores,
'Qtd. Compradores':qtd_compradores
}
)
# + id="1PF0mcHLSgTy" colab_type="code" outputId="22a4beaa-c789-4fc7-c0be-750a87a788fb" colab={"base_uri": "https://localhost:8080/", "height": 142}
# Print the table
analise_demografica
# + id="v7tbyFaWSkau" colab_type="code" outputId="af937002-4939-4853-aee9-ae9bb6808f5a" colab={"base_uri": "https://localhost:8080/", "height": 264}
plot = analise_demografica['Percentual'].plot(kind='pie',
                                              title='Purchase Percentage by Gender',
                                              autopct='%.2f')
# + id="gCVZ5hmdVI8Q" colab_type="code" outputId="f50b7053-bf0c-45e8-d1e7-6a4118930e2d" colab={"base_uri": "https://localhost:8080/", "height": 281}
plot = analise_demografica['Qtd. Compradores'].plot(kind='barh',
                                                    title='Number of Buyers by Gender')
# Add labels
for i in plot.patches:
    plot.text(i.get_width()+.1, i.get_y()+.31, \
              str(round((i.get_width()), 2)), fontsize=10)
# + [markdown] id="sJmALdZcgsPB" colab_type="text"
# ## **5. Purchase Analysis by Gender**
#
# * Number of purchases
# * Average purchase price
# * Total purchase value
# * Purchases by age bracket
#
#
# + id="HBOZq8z9g1wv" colab_type="code" colab={}
# Number of purchases by gender
nro_compras_gen = purchase_file['Sexo'].value_counts()
# Average purchase price by gender
media_compras_gen = round(purchase_file.groupby('Sexo')['Valor'].mean(), 2)
# Total purchases by gender
total_compras_gen = purchase_file.groupby('Sexo')['Valor'].sum()
analise_compras = pd.DataFrame(
{'Qtd. de Compras':nro_compras_gen,
'Preço Médio (R$)':media_compras_gen,
'Total Compras (R$)':total_compras_gen}
)
# + id="3EdDlaR-ak4C" colab_type="code" outputId="e66ccf5a-b1e8-4190-f646-c96cdc47ddd6" colab={"base_uri": "https://localhost:8080/", "height": 142}
# Print the table
analise_compras
# + id="sWLcQJKjfJxV" colab_type="code" outputId="e00e99af-71be-473e-8bd4-c2cdb85a4d55" colab={"base_uri": "https://localhost:8080/", "height": 419}
# Use the deduplicated dataframe
info_compradores
# + id="U2-Xk_H-hBpP" colab_type="code" colab={}
# Purchases by age bracket
age_bins = [0, 9.99, 14.99, 19.99, 24.99, 29.99, 34.99, 39.99, 999]
seg_idade = ['Menor de 10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', 'Maior de 39']
info_compradores['Intervalo Idades'] = pd.cut(info_compradores['Idade'], age_bins, labels=seg_idade)
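# To illustrate how `pd.cut` assigns ages to these brackets (a sketch with toy ages; same bins and labels as above, and note that the intervals are right-closed by default):

```python
import pandas as pd

age_bins = [0, 9.99, 14.99, 19.99, 24.99, 29.99, 34.99, 39.99, 999]
seg_idade = ['Menor de 10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', 'Maior de 39']

# Toy ages chosen to hit different brackets, including an edge case at 14
binned = pd.cut(pd.Series([7, 14, 22, 40]), age_bins, labels=seg_idade).tolist()
print(binned)
```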
# + id="1gWtDQ32rBO-" colab_type="code" outputId="7519031a-666c-4dc9-8ca7-2b30fefa1710" colab={"base_uri": "https://localhost:8080/", "height": 369}
df_hist_compras = pd.DataFrame(info_compradores['Intervalo Idades'].value_counts(), index=seg_idade)
hist = df_hist_compras.plot(kind='bar', legend=False)
hist.set_title('Purchases by age bracket', fontsize=15)
hist.set_ylabel('Frequency')
hist.set_xlabel('Age brackets')
# + [markdown] id="7fDsh7Ohg2hC" colab_type="text"
# ## **6. Top Buyers (Top 5)**
# Identify the top 5 buyers by total purchase value, then list (in a table):
#
# * Login
# * Number of purchases
# * Average purchase price
# * Total purchase value
# * Most popular items
# + id="rUprkK20g5-u" colab_type="code" colab={}
consumidores_populares = purchase_file[['Login','Nome do Item','Valor']]
# + id="oBv9wF17BSqQ" colab_type="code" outputId="0f677cc9-c170-4569-f9a0-e9c9a2f6dc74" colab={"base_uri": "https://localhost:8080/", "height": 204}
consumidores_populares.head(5)
# + id="xJ_zqNv2-cvU" colab_type="code" colab={}
top_por_compras = consumidores_populares.groupby(['Login']).count()['Nome do Item']
top_por_valor_medio = round(consumidores_populares.groupby('Login').mean()['Valor'], 2)
top_por_valor_total = consumidores_populares.groupby('Login').sum()['Valor']
top_consumidores = pd.DataFrame({'Número de Compras': top_por_compras,
'Preço Médio(R$)': top_por_valor_medio,
'Valor Total(R$)': top_por_valor_total}) \
.sort_values(by=['Valor Total(R$)'], ascending=False) \
.head(5)
top_itens = consumidores_populares['Nome do Item'].value_counts().head(5)
# + id="T3aeD9ZCUrlw" colab_type="code" outputId="40630e1b-ddf3-449f-c2e4-8bfbae555e3a" colab={"base_uri": "https://localhost:8080/", "height": 235}
top_consumidores
# + id="3Y9KTvzk4gnH" colab_type="code" outputId="27274b68-2dba-40c4-a939-89878b0a6f99" colab={"base_uri": "https://localhost:8080/", "height": 204}
itens_populares = pd.DataFrame(consumidores_populares['Nome do Item'].value_counts().head(5))
itens_populares
# + [markdown] id="3PRmboAkh9-0" colab_type="text"
# ## **7. Most Popular Items**
# Identify the 5 most popular items **by purchase count**, then list (in a table):
# * Item ID
# * Item name
# * Number of purchases
# * Average item price
# * Total purchase value
# * Most profitable items
# + id="0Fnky5GS56So" colab_type="code" colab={}
itens_populares = purchase_file[['Item ID','Nome do Item','Valor']]
# + id="F8SCIJM6CMDf" colab_type="code" outputId="17c0900f-afb6-4e80-89f3-d0b5c8f26002" colab={"base_uri": "https://localhost:8080/", "height": 235}
num_compras = itens_populares.groupby('Nome do Item').count()['Item ID']
media_preco = round(itens_populares.groupby('Nome do Item').mean()['Valor'], 2)
total_preco = itens_populares.groupby('Nome do Item').sum()['Valor']
df_itens_populares = pd.DataFrame({
'Numero de Compras': num_compras,
'Preço Médio do Item': media_preco,
'Valor Total da Compra': total_preco})
df_itens_populares.sort_values(by=['Numero de Compras'], ascending=False).head(5)
# + [markdown] id="7tz_LinAg6rp" colab_type="text"
# ## **8. Most Profitable Items**
# Identify the 5 most profitable items by **total purchase value**, then list (in a table):
# * Item ID
# * Item name
# * Number of purchases
# * Average item price
# * Total purchase value
# + id="FeiYoUmFhD95" colab_type="code" colab={}
itens_lucrativos = purchase_file[['Item ID','Nome do Item','Valor']]
# + id="8hCpMRii_IX2" colab_type="code" outputId="cdfc4199-aa95-4ade-f9f3-13664a1ad1ae" colab={"base_uri": "https://localhost:8080/", "height": 204}
itens_lucrativos.head(5)
# + id="FQw02CDd_T5P" colab_type="code" colab={}
qtd_compras = itens_lucrativos.groupby(['Nome do Item']).count()['Valor']
avg_compras = itens_lucrativos.groupby(['Nome do Item']).mean()['Valor']
sum_compras = itens_lucrativos.groupby(['Nome do Item']).sum()['Valor']
# + id="p8NdJUOEAqKY" colab_type="code" colab={}
df_itens_lucrativos = pd.DataFrame({
'Número de Compras': qtd_compras,
'Preço Médio do Item (R$)': round(avg_compras, 2),
'Valor Total de Compra (R$)': sum_compras
})
# + id="sgVj0XCUBglG" colab_type="code" outputId="9b1b579e-4cdc-42ea-cf78-491c05358e5f" colab={"base_uri": "https://localhost:8080/", "height": 235}
df_itens_lucrativos.sort_values(by='Valor Total de Compra (R$)', ascending=False).head(5)
# + id="33bzQWbSHKEn" colab_type="code" outputId="2fc91be1-b0f8-40dd-feb6-c4ceb815b21e" colab={"base_uri": "https://localhost:8080/", "height": 419}
itens_lucrativos.sort_values('Nome do Item')
| src/05_desafio/05_missao05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Relationships Between Presidents' Twitter and Political Issues
#
# # Introduction
# The purpose of this project is to analyze the Twitter behavior of D<NAME> and Barack Obama and compare it to what they did in government. We explore their tweets with a Natural Language Processing approach and identify their characteristics, then compare their tweets to find changes in political concerns.
#
# First, we briefly explore both presidents' Twitter usage, for example tweet frequency and popularity changes. Then we process the tweet content with NLTK to explore aggregate text statistics and visualize some of their properties. We also analyze the relationships between tweet properties and time, and provide a novel visualization of the relevant information. Lastly, we use a Latent Dirichlet Allocation model to find changes in political concerns between President Obama's last 10 months of presidency and President Trump's first 10 months of presidency on Twitter, in order to match them to the actual changes of political concerns between the two presidents in the real world.
# # Visualization of the Raw Data
# ### Frequency of tweeting histograms
# We gather the frequency of tweets of President Obama and President Trump by month and visualize the distributions by histograms.
import pandas as pd
from PIL import Image
obama = Image.open('fig/obama_month.png')
obama
trump = Image.open('fig/trump_month.png')
trump
# We can connect tweeting frequency with political events. The 2016 presidential election was held on November 8, and Donald Trump's tweet count in October is much higher than in the other months; political events seem to influence the tweet volume of <NAME>. Barack Obama's tweeting frequency is fairly even, so we cannot generate insights from it.
# ### Popularity of both presidents in Twitter
obamapopul = Image.open('fig/obama_popularity_time.png')
obamapopul
trumppopul = Image.open('fig/trump_popularity_time.png')
trumppopul
# President Obama's popularity is much higher than President Trump's. Trump's likes peaked twice, once in Nov 2016 and once in July 2017, when tweets received nearly 600,000 likes. Obama's popularity changed suddenly after Jan 2017 — we can guess he did not tweet as actively before then. His likes rose rapidly after Jan 2017, regularly exceeding 2,000,000, and peaked in Aug 2018 at more than 4,000,000 likes, over six times Trump's peak.
# # Text Processing and Novel Visualization of the Tweets
# We processed and cleaned up the content of their tweets with NLTK, and counted the number of words, unique words, and characters of all tweets. We then compute descriptive statistics (e.g. mean, count, standard deviation) of the cleaned tweets and look for insights.
# - ***n_words - number of words***
# - ***n_wwords - number of unique words***
# - ***n_chars - number of characters***
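# As a small sketch of how such per-tweet counts can be computed (the tweet text here is invented for illustration):

```python
def tweet_stats(text):
    """Per-tweet counts matching the columns described above."""
    words = text.split()
    return {
        "n_words": len(words),        # number of words
        "n_wwords": len(set(words)),  # number of unique words
        "n_chars": len(text),         # number of characters
    }

stats = tweet_stats("make america great again great")
print(stats)
```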
# ##### Descriptive statistics of Obama
obama_stats.describe()  # assumes a DataFrame of per-tweet text statistics named `obama_stats` (hypothetical name; `obama` above holds a PIL image)
# ##### Descriptive statistics of Trump
trump_stats.describe()  # assumes a DataFrame of per-tweet text statistics named `trump_stats` (hypothetical name; `trump` above holds a PIL image)
# ### Aggregate text statistics with NLTK
#
# We processed the text with NLTK to compute aggregate text statistics and visualized their properties.
nword = Image.open('fig/n_word_comparison.png')
nword
nuword = Image.open('fig/n_uword_comparison.png')
nuword
nchar = Image.open('fig/n_char_comparison.png')
nchar
# We can see both presidents' tweeting behaviors when we compare their aggregate text statistics. Trump uses more words and characters than Obama on average, but the standard deviations of Trump's text statistics are much higher than Obama's. This means Trump either writes a long tweet or a short one, while Obama is somewhere in the middle.
# # Change of Political Concerns Between President Obama's Last 10 Months and President Trump's First 10 Months on Twitter
#
# We will use LDA to analyze the political concerns of President Obama's last 10 months and President Trump's first 10 months. Latent Dirichlet Allocation (LDA) is a topic model that generates topics based on word frequencies from a set of documents.
topicobama = pd.read_hdf('result/n3.h5','topicobama')
topicobama
topictrump = pd.read_hdf('result/n3.h5','topictrump')
topictrump
# ### Compare the topic change by LDA
#
# ###### The important words from the top 5 topics of Obama's political concerns
# - The first topic is **fair, suprem, peopl, continu**
# - The second topic is **court, time, chang, live, economi**
# - The third topic is **senat, will, leader, think**
# - The fourth topic is **judg, climat, american, obamacar**
# - The fifth topic is **doyourjob, new, women, gun**
#
#
# ###### The important words from the top 5 topics of Trump's political concerns
# - The first topic is **great, border, presid, iran, terribl**
# - The second topic is **fake, new, interview, tax, administr, hard**
# - The third topic is **countri, just, come, made**
# - The fourth topic is **job, democrat, vote, white**
# - The fifth topic is **work, execut, women, help**
#
# Most topics from Obama's tweets are related to social issues and the community, e.g. peopl, fair, obamacar, economi, senat, climat, women, gun. From the LDA analysis we can guess his topics are mostly about the economy, climate change, women's rights, Obamacare, and gun control — and he did in fact work on these topics during his presidency. Most topics from Trump's tweets are related to national security, tax, the media, and his slogan "Make America Great Again", e.g. great, border, iran, fake, new, interview, tax. From the LDA analysis we can guess his topics relate to his slogan, building a wall along the Mexican border, blaming the media, and the Iran nuclear deal — these were indeed his political concerns. From the above results, we can say that LDA extracted the topics of the Twitter data well.
# ## WordCloud - Most Frequent Stemmed Words
#
obamaclond = Image.open('fig/word_clond_obama.png')
obamaclond
trumpclond = Image.open('fig/word_clond_trump.png')
trumpclond
# ## Compare the word cloud
# We can see that the most frequent stemmed words of Obama's Twitter are *presid*, *senat*, *Obama*, *need*, *change*, *job*, *leader*, *American*, while the most frequent stemmed words of Trump's Twitter changed to *will*, *new*, *great*, *thank*, *job*, *fake*, *presid*, *today*. We know 2008 was a year of huge challenges for the United States: jobs, terrorism, and climate were what Mr. Obama tried to focus on. And for Mr. Trump, "Make America Great Again" (MAGA) is a campaign slogan in American politics popularized by President <NAME> in his 2016 presidential campaign.
# # Sentiment Analysis
# An analysis of Trump's and Obama's Twitter data using the Sentiment Analyzer from the NLTK package. We use sentiment analysis to score the positive and negative sentiment of their tweets, and compare their sentiment based on the Twitter data.
#
# #### The VADER algorithm outputs sentiment scores in 4 classes
# The positive, negative, and neutral scores range from 0 to 1 and describe the proportion of the text falling in each class; the compound score is a normalized aggregate in the range [-1, 1].
# - Positive: the score measuring positive words
# - Negative: the score measuring negative words
# - Neutral: the score measuring neutral words
# - Compound: the aggregated score, a normalized sum of all scores.
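# The compound score's normalization can be sketched as follows (VADER's reference implementation maps the unbounded sum of lexicon valences into [-1, 1] with a constant alpha = 15; the example sums here are invented):

```python
import math

def vader_normalize(score_sum, alpha=15.0):
    """Map an unbounded sum of lexicon valences into [-1, 1],
    as VADER's compound score does (alpha=15 in the reference implementation)."""
    return score_sum / math.sqrt(score_sum * score_sum + alpha)

# A strongly positive sum approaches +1; a zero sum stays at 0
print(vader_normalize(4.0))
print(vader_normalize(0.0))
```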
# ## Visualization of the Sentiment Analysis
# #### Positive and Negative Tweets of Both Presidents
Image.open('fig/comparison.png')
# #### Change of Sentiment Over Time
Image.open('fig/Obama_Sentiment_Analysis.png')
Image.open('fig/Trump_Sentiment_Analysis.png')
# We can see differences between the two presidents' sentiment in the graphs above. President Trump appears to be the more emotional tweeter: he has more strongly negative and positive tweets than Obama, and the negative and positive curves fluctuate widely in his sentiment analysis graph.
# #### Correlation matrix of Sentiments of Obama
Image.open('fig/obama_cor.png')
# #### Correlation matrix of Sentiments of Trump
Image.open('fig/trump_cor.png')
# ### Summary of the Sentiment Analysis for Obama
pd.read_hdf('result/n4.h5','obama_describe')
# ### Summary of the Sentiment Analysis for Trump
pd.read_hdf('result/n4.h5','trump_describe')
# ##### Comparing the Sentiment Analysis of Trump and Obama
# From the above results, the mean compound value for Trump is 0.218 while Obama's is 0.221. The mean positive and negative values of Trump's tweets are higher than Obama's, and Obama's mean neutral value is higher than Trump's. Therefore, we can see that Obama's tweets tend to be more neutral.
| Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating Shapefiles for Web App - CA_DAUCO
#
# Inputs:
# - Pagg_ReportingUnit.csv. Contains reporting unit info from the WaDE database.
# - Detailed Analysis Units by County source shapefile.
# Needed libraries
import os
import numpy as np
import pandas as pd
from datetime import datetime
import geopandas as gpd # the library that lets us read in shapefiles
pd.set_option('display.max_columns', 999) # How to display all columns of a Pandas DataFrame in Jupyter Notebook
# +
# Set working directory
workingDir = "C:/Users/rjame/Documents/RShinyAppPractice/CreateAppShapefiles/App2_AggregatedShape"
os.chdir(workingDir)
# Grab the AggregatedAmounts ReportingUnit.csv file.
reportingunits_input = pd.read_csv('SourceFiles/Pagg_ReportingUnit.csv')
df_1RU = pd.DataFrame(reportingunits_input)
df_1RU.head(3)
# -
# ### California - Detailed Analysis Units by County
# +
# Grab the CA Planning Area Shapefile.
# Pairing RU_ID to ReportingUnitNativeID
CADAUCOshapefile_input = gpd.read_file('C:/Users/rjame/Documents/RShinyAppPractice/CreateAppShapefiles/App2_AggregatedShape/SourceFiles/Custom/CA/WaDECADAU.shp')
dfs_CADAUCO = pd.DataFrame(CADAUCOshapefile_input)
dfs_CADAUCO.head(3)
# +
# Custom
# State: CA, Detailed Analysis Units by County
###########################################################################
# Create temporary dataframes for state-specific and reporting-unit-type storage
df_1RU_Custom_CAdauco = df_1RU[(df_1RU.ReportingUnitTypeCV == 'Detailed Analysis Units by County') & (df_1RU.StateCV == 'CA')]
# Retrieve ReportingUnitUUID.
ReportingUnitUUIDdict = pd.Series(df_1RU_Custom_CAdauco.ReportingUnitUUID.values, index = df_1RU_Custom_CAdauco.ReportingUnitNativeID).to_dict()
def retrieveReportingUnitUUID(colrowValue):
    # Look up the ReportingUnitUUID for a given RU_ID; empty string if missing
    if colrowValue == '' or pd.isnull(colrowValue):
        outList = ''
    else:
        try:
            outList = ReportingUnitUUIDdict[colrowValue]
        except KeyError:
            outList = ''
    return outList
dfs_CADAUCO['ReportingUnitUUID'] = dfs_CADAUCO.apply(lambda row: retrieveReportingUnitUUID(row['RU_ID']), axis=1)
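# An equivalent, more idiomatic way to do this dictionary lookup is pandas `Series.map` (a sketch on a toy frame; `.fillna('')` reproduces the empty-string fallback of the apply-based version):

```python
import pandas as pd

# Toy frame standing in for the shapefile dataframe
toy = pd.DataFrame({'RU_ID': ['a', 'b', 'missing']})
lookup = {'a': 'UUID-1', 'b': 'UUID-2'}

# Map each RU_ID through the dictionary; unmatched keys become '' instead of NaN
toy['ReportingUnitUUID'] = toy['RU_ID'].map(lookup).fillna('')
print(toy['ReportingUnitUUID'].tolist())
```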
# Merge the temporary dataframes into one, using a left join.
dfs_CADAUCO = pd.merge(dfs_CADAUCO, df_1RU_Custom_CAdauco, left_on='ReportingUnitUUID', right_on='ReportingUnitUUID', how='left')
# Creating new output state specific dataframe with fields of interest.
dfs_2CADAUCO = pd.DataFrame() #empty dataframe
dfs_2CADAUCO['OBJECTID'] = dfs_CADAUCO.index
dfs_2CADAUCO['Shape'] = 'Polygon'
dfs_2CADAUCO['UnitID'] = dfs_CADAUCO['ReportingUnitID']
dfs_2CADAUCO['UnitUUID'] = dfs_CADAUCO['ReportingUnitUUID']
dfs_2CADAUCO['NativeID'] = dfs_CADAUCO['ReportingUnitNativeID']
dfs_2CADAUCO['Name'] = dfs_CADAUCO['ReportingUnitName']
dfs_2CADAUCO['TypeCV'] = dfs_CADAUCO['ReportingUnitTypeCV']
dfs_2CADAUCO['StateCV'] = dfs_CADAUCO['StateCV']
dfs_2CADAUCO['geometry'] = dfs_CADAUCO['geometry']
# view output
dfs_2CADAUCO.head(3)
# -
# ### Concatenate and Export
# Merge dataframes
frames = [dfs_2CADAUCO]
outdf = pd.concat(frames)
outdf.head(3)
# drop NA rows
outdf = outdf.dropna(subset=['UnitID'])
outdf
# Export the dataframe to a shapefile.
dfsOut = gpd.GeoDataFrame(outdf, crs="EPSG:4326", geometry='geometry') # convert to a GeoDataFrame
dfsOut.to_file("Processed_Shapefiles/CA_DAUCO.shp") # export shape file
| CreateAppShapefiles/App2_AggregatedShape/.ipynb_checkpoints/App2ShapeFiles_CA_DAUCO-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Exercise 5.03: Examining Relationships between Predictors and Outcome
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('wrangled_transactions.csv', index_col='CustomerID')
df.plot.scatter(x="days_since_first_purchase", y="revenue_2020",figsize=[6,6])
plt.show()
# +
import seaborn as sns
# %matplotlib inline
sns.pairplot(df)
plt.show()
# -
sns.pairplot(df,y_vars="revenue_2020")
plt.show()
df.corr()
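# A toy illustration of what `df.corr()` returns — a symmetric matrix of pairwise Pearson correlations (the data here is invented, not from the transactions file):

```python
import pandas as pd

# x and y are perfectly correlated; z is perfectly anti-correlated with x
toy = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [2, 4, 6, 8], 'z': [4, 3, 2, 1]})
corr = toy.corr()
print(corr)
```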
| Chapter05/Exercise 5.03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numerical solution to the diffusion equation
#
# Please indicate your name below, since you will need to submit this notebook completed no later than the day after the datalab.
#
# Don't forget to save your progress during the datalab to avoid any loss due to crashes.
name=''
# In realistic reactor calculations the analytical solution of the transport, or even the diffusion equation is usually not possible (for example due to the heterogeneities in the reactor, such as fuel rods, cladding, moderator channels, etc).
#
# We have previously seen how to handle the neutron transport problem with Monte Carlo methods. However, Monte Carlo particle transport has a very significant flaw: it is time consuming. In practical, industrial applications neutron transport is solved with deterministic methods, and Monte Carlo is usually used as a validation method (although as computers become more powerful, the role of Monte Carlo calculations is becoming ever more significant).
#
# Deterministic neutron transport has many parts (from handling slowing down and thermalization to obtain group constants, to performing full core neutron diffusion calculations), and it is outside the scope of this course. Nevertheless, during this datalab we will look at the basic idea of solving neutron diffusion with numerical methods. For this we will limit ourselves to a very simple case: we will assume that our reactor is a 1-dimensional slab, where all the neutrons travel with the same speed (ie. one-group theory), and we will also not bother much with boundary conditions. The intention of the lab is to provide an impression of the deterministic solutions, and not to get lost in details (for readers who would like to go in depth, we can recommend the [Duderstadt & Hamilton book](https://deepblue.lib.umich.edu/handle/2027.42/89079) as a starter, then the [Computational Nuclear Engineering book of McClarren](https://www.elsevier.com/books/computational-nuclear-engineering-and-radiological-science-using-python/mcclarren/978-0-12-812253-2), and finally for the brave the [Applied Reactor Physics book of Hébert](https://www.polymtl.ca/merlin/downloads/5784_depliant-AppliedReactor_ep3.pdf)).
#
# We will first solve the neutron diffusion problem for a non multiplying medium (thus $\Sigma_f=0$), and then try to tackle a multiplying medium. The general recipe will be that we try to convert our differential equation into linear algebra. Finally we will make a simplified approach to obtain group cross sections necessary for such numerical methods.
#
# Note that you can imagine a slab reactor as infinite along the $y$ and $z$ directions, but finite in the $x$ direction. In such a reactor it does not matter where we are along the $y$ and $z$ axes, therefore we can solve for the flux in 1D.
#
# ## Experiment 1 : non multiplying medium
#
#
# Let's consider that we have a slab geometry, in which a neutron source is placed. In this case we know that we are going to have a steady state flux. We can describe the system as
#
# $$-D\frac{d^2\phi}{dx^2}+\Sigma_a\phi(x)=S(x)$$
#
# with boundary conditions
#
# $$\phi(0)=\phi(a)=0$$
#
# (let's ignore now the extrapolation length).
#
# We can discretize the spatial variable $x$ by choosing a set of $N+1$ discrete points ($x_0,x_1,...,x_i,...x_N$) which are equally spaced. The distance between neighbouring points is $\Delta=a/N$.
#
# If we want to rewrite the above diffusion equation at each discrete $x_i$ point in the form of difference equations, we need an approximation for the term $\frac{d^2\phi}{dx^2}$. We can Taylor expand $\phi$ at $x_{i\pm1}$:
#
# $$\phi_{i+1}=\phi(x_{i+1})=\phi_i+\Delta\frac{d\phi}{dx}\Big\rvert_i+\frac{\Delta^2}{2}\frac{d^2\phi}{dx^2}\Big\rvert_i+...$$
#
# and
#
# $$\phi_{i-1}=\phi(x_{i-1})=\phi_i-\Delta\frac{d\phi}{dx}\Big\rvert_i+\frac{\Delta^2}{2}\frac{d^2\phi}{dx^2}\Big\rvert_i-...$$
#
# Upon adding these one arrives to
#
# $$\frac{d^2\phi}{dx^2}\Big\rvert_i\approx \frac{\phi_{i+1}-2\phi_i+\phi_{i-1}}{\Delta^2}$$
#
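# As a quick numerical sanity check of this central-difference formula (an illustrative aside, not part of the derivation), we can apply it to $\phi(x)=\sin(x)$, whose second derivative is $-\sin(x)$:

```python
import numpy as np

# Central-difference approximation of the second derivative, tested on sin(x)
x = 1.0
for Delta in (0.1, 0.01):
    approx = (np.sin(x + Delta) - 2 * np.sin(x) + np.sin(x - Delta)) / Delta**2
    exact = -np.sin(x)
    print(Delta, abs(approx - exact))
```

# The error shrinks roughly as $\Delta^2$, as the truncated Taylor terms suggest.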
# with that our diffusion equation becomes
#
# $$-D\Bigg(\frac{\phi_{i+1}-2\phi_i+\phi_{i-1}}{\Delta^2}\Bigg)+\Sigma_a\phi_i=S_i \quad i=1,2,...$$
#
# We can rearrange this
#
# $$-\frac{D}{\Delta^2}\phi_{i-1}+\Big(\frac{2D}{\Delta^2}+\Sigma_a\Big)\phi_i-\frac{D}{\Delta^2}\phi_{i+1}=S_i$$
#
# so by introducing constants
#
# $$a_{i,i-1}\phi_{i-1}+a_{i,i}\phi_i+a_{i,i+1}\phi_{i+1}=S_i \quad i=1,2,\ldots,N-1$$
#
# where $a_{i,i\pm1}=-D/\Delta^2$ and $a_{i,i}=2D/\Delta^2+\Sigma_a$
#
# (we don't have equations for $i=0$ and $i=N$, since we have boundary conditions there, which present themselves through the fact that for $i=1$ the term $\phi_{0}$ drops out, and for $i=N-1$ the term $\phi_N$ drops out).
#
# Now, one can see that this is a matrix multiplied with a vector, which results in a vector, in the form of
#
# $$\underline{\underline{A}}\underline{\phi}=\underline{S}$$
#
# where $\underline{\underline{A}}$ is made of the above defined coefficients:
#
# \begin{equation}
# \begin{pmatrix}
# a_{1,1} & a_{1,2} & 0 & 0 & 0 & \cdots \\
# a_{2,1} & a_{2,2} & a_{2,3} & 0 & 0 & \cdots \\
# 0 & a_{3,2} & a_{3,3} & a_{3,4} & 0 & \cdots \\
# 0 & 0 & a_{4,3} & a_{4,4} & a_{4,5} & \cdots \\
# \vdots & \vdots & \vdots & \vdots & \vdots & \vdots
# \end{pmatrix}
# \end{equation}
#
# and the vector $\underline{\phi}$ is simply the flux at the discrete locations
#
# \begin{equation}
# \begin{pmatrix}
# \phi_1 \\ \phi_2 \\ \phi_3 \\ \vdots \\ \phi_{N-2} \\ \phi_{N-1}
# \end{pmatrix}
# \end{equation}
#
#
# this is what we seek as the solution!
#
# And $\underline{S}$ is the source at different locations.
#
# \begin{equation}
# \begin{pmatrix}
# S_1 \\ S_2 \\ S_3 \\ \vdots \\ S_{N-2} \\ S_{N-1}
# \end{pmatrix}
# \end{equation}
#
#
# which is an input. For example we could have just a plane source (only one non-zero value).
#
# Thus by inverting the matrix one can solve for the flux.
#
# $$\underline{\phi}=\underline{\underline{A}}^{-1}\underline{S}$$
#
# The problem could be further developed by assuming that $D$ and $\Sigma_a$ also depend on the spatial coordinate. And the problem is similar in multiple dimensions. However for the current exercise we can consider the simplest case, with constant $D$ and $\Sigma_a$ in a finite one-dimensional geometry.
#
# Once we implement matrix $\underline{\underline{A}}$, we can use numpy's linear algebra module to solve such matrix equations: `flux=np.linalg.solve(A,S)`. So let's do that! Implement the matrix below, then execute the code block!
# +
import numpy as np
import matplotlib.pyplot as plt
def createA(Sigma_a, D, a, N):
    """Function to create matrix A

    Parameters
    ----------
    Sigma_a : float
        Macroscopic absorption cross section
    D : float
        Diffusion coefficient
    a : float
        Width of the slab
    N : int
        Number of discrete points
    """
    A = np.zeros((N-1, N-1))
    Delta = a/N
    for i in range(1, N):
        pass  # write code to create matrix A
    return A
# -
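# For reference, one possible way to fill in `createA` (a sketch, not the only valid solution): it simply places the coefficients of the discretized equation above on the three diagonals.

```python
import numpy as np

def createA(Sigma_a, D, a, N):
    """Tridiagonal diffusion matrix for the N-1 interior points (sketch)."""
    A = np.zeros((N - 1, N - 1))
    Delta = a / N
    for i in range(N - 1):
        A[i, i] = 2 * D / Delta**2 + Sigma_a   # diagonal term
        if i > 0:
            A[i, i - 1] = -D / Delta**2        # coupling to phi_{i-1}
        if i < N - 2:
            A[i, i + 1] = -D / Delta**2        # coupling to phi_{i+1}
    return A
```

# Note that the matrix is symmetric, which follows from the symmetric second-difference stencil.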
# And now let's use this. We can first solve for a case when there is a source plane at the center of the slab. Execute the code below.
# +
#Input data
a=100
D=0.9
Sigma_a=0.066
N=50
#create matrix A
A=createA(Sigma_a,D,a,N)
#create space coordinates
x=np.linspace(-a/2,a/2,N+1)
#Define the source and place a source in the middle
S=np.zeros(N-1)
S[int(N/2-1)]=1000
#Solve for the flux
flux=np.linalg.solve(A,S)
plt.figure()
plt.plot(x[1:-1],flux/(np.max(flux)))
plt.xlabel('x')
plt.ylabel(r'$\phi(x)/\phi_0$')
plt.show()
# -
# Now define a source which is uniform all over the slab. What shape do you expect for the flux?
# +
#Define a source which is uniform over the whole slab
S = ...  # complete the line
#Solve for the flux
flux=np.linalg.solve(A,S)
plt.figure()
plt.plot(x[1:-1],flux/(np.max(flux)))
plt.xlabel('x')
plt.ylabel(r'$\phi(x)/\phi_0$')
plt.show()
# -
# Let's see what happens when the absorption cross section is lowered in case of a planar source. Try to lower it in several steps with 10%, 1% and 0.1% of the original value, what is your expectation, and what have you found? Conclude your results!
# +
#your code comes here
# -
# Change this line to your conclusion!
# # Experiment 2: multiplying medium (k-eigenvalue problem)
#
# Now let's consider that the system does not have an external source, but is instead made of a multiplying medium. In this case the equation is slightly different, since instead of a fixed source we now have a fission source:
#
# $$-D\frac{d^2\phi}{dx^2}+\Sigma_a\phi(x)=\frac{\nu}{k}\Sigma_f\phi(x)$$
#
# where we renormalized the fission source with the k-eigenvalue. The reason is that we would like to obtain a steady-state equation. If a system is supercritical, the neutron population increases in time, so normalizing with $k>1$ reduces the neutron production term due to fission. Conversely, for a subcritical system, where the neutron population in reality decreases, normalizing with $k<1$ increases the number of neutrons produced in fission to the level at which the system is at steady state.
#
# After similar considerations as before for the non-multiplying system, we arrive at the discretized form
#
# $$-\frac{D}{\Delta^2}\phi_{i-1}+\Big(\frac{2D}{\Delta^2}+\Sigma_a\Big)\phi_i-\frac{D}{\Delta^2}\phi_{i+1}=\frac{\nu}{k}\Sigma_f\phi_i$$
#
# which we can write as an eigenvalue problem:
#
# $$\underline{\underline{A}}\underline{\phi}=\frac{1}{k}\underline{\underline{B}}\underline{\phi}$$
#
# where $\underline{\underline{A}}$ is as defined before, and $\underline{\underline{B}}$ is:
#
# \begin{equation}
# \begin{pmatrix}
# \nu\Sigma_f & 0 & 0 & 0 & 0 & \cdots \\
# 0 & \nu\Sigma_f & 0 & 0 & 0 & \cdots \\
# 0 & 0 & \nu\Sigma_f & 0 & 0 & \cdots \\
# 0 & 0 & 0 & \nu\Sigma_f & 0 & \cdots \\
# \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
# \end{pmatrix}
# \end{equation}
#
# We can arrange this into a standard eigenvalue problem
#
# $$\underline{\underline{A}}^{-1}\underline{\underline{B}}\underline{\phi}=\underline{\underline{C}}\underline{\phi}=k\underline{\phi}$$
#
# for which the largest eigenvalue will be the k-effective. Such problems can be solved with the [inverse power method](https://pythonnumericalmethods.berkeley.edu/notebooks/chapter15.02-The-Power-Method.html) (also detailed in the previously mentioned McClarren book). The idea is that
#
# 1. We take a random guess of $\underline{\phi}^{(0)}$ (where the upper index is the iteration number)
# 2. Perform $\underline{\underline{C}}\underline{\phi}^{(0)}$ to obtain $\underline{\phi}^{(1)}$
# 3. Normalize $\underline{\phi}^{(1)}$; the normalization factor is the current estimate of $k$
# 4. We repeat steps 2-3 with the new $\underline{\phi}$ until $k$ has converged.
#
# We have implemented the inverse power algorithm in function `invPow()`. You will only need to create matrix $\underline{\underline{B}}$ and run the code!
#
# **Note** within function `invPow()` we use `np.linalg.inv()` to invert the matrix and `@` stands for matrix multiplication. And `np.dot()` performs the dot product of the matrix and the vector.
# +
def createB(nuSigma_f, N):
    """Function to create matrix B

    Parameters
    ----------
    nuSigma_f : float
        nu*Sigma_f fission source
    N : int
        Number of discrete points
    """
    B = np.zeros((N-1, N-1))
    for i in range(1, N):
        pass  # complete the function
    return B

def invPow(A, B, tol=1e-6):
    phi = np.random.random((A.shape[0]))  # initial guess
    C = np.linalg.inv(A) @ B
    converged = False
    kold = 0.0
    while not converged:
        phi = np.dot(C, phi)
        k = np.max(np.abs(phi))  # the normalization factor is the estimate of k
        phi = phi / np.max(phi)
        if abs(kold - k) < tol:
            converged = True
        kold = k
    return k, phi
# -
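# If you want to check your `createB`: since the fission term $\nu\Sigma_f\phi_i$ acts pointwise, $\underline{\underline{B}}$ is diagonal, so one possible implementation (a sketch) is simply $\nu\Sigma_f$ times the identity.

```python
import numpy as np

def createB(nuSigma_f, N):
    # Diagonal fission-source matrix: nu*Sigma_f at each of the N-1 interior points
    return nuSigma_f * np.eye(N - 1)
```
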
# Now let's try this with some values. What is your expectation for the flux shape?
# +
a=100
D=3.85
Sigma_a=0.1532
nuSigma_f=0.157
N=50
x=np.linspace(-a/2,a/2,N+1)
A = createA(Sigma_a, D, a, N)  # create matrix A
B = createB(nuSigma_f, N)      # create matrix B
k,phi=invPow(A,B)
print(k)
plt.figure()
plt.plot(x[1:-1],phi)
plt.xlabel('x')
plt.ylabel(r'$\phi(x)/\phi_0$')
plt.show()
# -
# Note that with little work you can modify the `invPow()` to see how the flux shape converges during the iteration. If you are interested, feel free to add that.
#
# What happens if we put a control rod into the reactor? Think, what modifications do you need to do to represent a control rod? (Note that you do not need to modify `createA()` or `createB()` for this, it is enough to modify one or more elements of the already created matrices).
# +
#write your code here!
# -
# # Experiment 3
#
# In the previous exercises we relied on 1-group cross sections which were given in the exercise. However, in practice, in order to perform a diffusion calculation (either 1-group or multi-group) we first need to obtain the group cross sections.
#
# The group cross sections are the spectrum-weighted averages of the continuous cross sections. (For this homogeneous case we can consider the material to be a mixture of nuclides, e.g. uranium nuclides and water nuclides; we would first compute the weighted cross sections of each nuclide, and then determine the cross sections of the mixture.)
#
# $$\sigma_g=\frac{\int_{E_{g}}^{E_{g-1}}\sigma(E)\Phi(E)dE}{\int_{E_{g}}^{E_{g-1}}\Phi(E)dE}$$
#
# where $g$ stands for the group. In the case of a one-group approximation the integral is performed from $0$ to $\infty$ (or some appropriate lower and upper energy bounds). Also remember that because neutrons slow down in energy, the group labeling goes in reverse order (see lecture notes).
#
# Note that the energy-dependent flux needs to be known a priori for calculating the group cross sections. That is one of the reasons why deterministic methods solve the transport problem in multiple steps: first, at the cell level, we obtain the group cross sections, and then we perform a full-core calculation based on them. Today a common method is to generate group constants with Monte Carlo codes (which is feasible at the assembly level), and then use them in fast deterministic calculations.
#
# Now we will evaluate the above integration to obtain the 1 group fission cross section of U-235. In the following code block we load in some data:
#
# - `energy` and `xs` will store the continuous-energy fission cross section of U-235.
# - `enlow`, `enhigh` and `flux` store the information on the neutron spectrum which we obtained from a separate openMC calculation (this is the type of data we have tallied during the previous datalab).
#
# Let's load it, and see what we can do with them.
# +
import urllib.request
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
website='http://atom.kaeri.re.kr/nuchart/getData.jsp?target=jeff3.2,92,235,9228,3,18'
with urllib.request.urlopen(website) as response:
    content235 = response.readlines()

def getXS(content):
    """Function to extract data from the html content provided by KAERI.
    The content has a shape as follows:
    Energy(eV) XS(b)<br>
    1.00000E-05 3.07139<br>"""
    energy = []
    xs = []
    flag = False
    for line in content:
        x = line.strip().split()
        if x[0] == b'Energy(eV)':
            flag = True
            continue
        if x[0] == b'</span>':
            flag = False
        if flag:
            energy.append(float(x[0]))
            xs.append(float(x[1][:-4]))  # strip the trailing <br>
    return np.array(energy), np.array(xs)

energy, xs = getXS(content235)
enlow=np.array([1.00000000e-02, 1.10917482e-02, 1.23026877e-02, 1.36458314e-02,
1.51356125e-02, 1.67880402e-02, 1.86208714e-02, 2.06538016e-02,
2.29086765e-02, 2.54097271e-02, 2.81838293e-02, 3.12607937e-02,
3.46736850e-02, 3.84591782e-02, 4.26579519e-02, 4.73151259e-02,
5.24807460e-02, 5.82103218e-02, 6.45654229e-02, 7.16143410e-02,
7.94328235e-02, 8.81048873e-02, 9.77237221e-02, 1.08392691e-01,
1.20226443e-01, 1.33352143e-01, 1.47910839e-01, 1.64058977e-01,
1.81970086e-01, 2.01836636e-01, 2.23872114e-01, 2.48313311e-01,
2.75422870e-01, 3.05492111e-01, 3.38844156e-01, 3.75837404e-01,
4.16869383e-01, 4.62381021e-01, 5.12861384e-01, 5.68852931e-01,
6.30957344e-01, 6.99841996e-01, 7.76247117e-01, 8.60993752e-01,
9.54992586e-01, 1.05925373e+00, 1.17489755e+00, 1.30316678e+00,
1.44543977e+00, 1.60324539e+00, 1.77827941e+00, 1.97242274e+00,
2.18776162e+00, 2.42661010e+00, 2.69153480e+00, 2.98538262e+00,
3.31131121e+00, 3.67282300e+00, 4.07380278e+00, 4.51855944e+00,
5.01187234e+00, 5.55904257e+00, 6.16595002e+00, 6.83911647e+00,
7.58577575e+00, 8.41395142e+00, 9.33254301e+00, 1.03514217e+01,
1.14815362e+01, 1.27350308e+01, 1.41253754e+01, 1.56675107e+01,
1.73780083e+01, 1.92752491e+01, 2.13796209e+01, 2.37137371e+01,
2.63026799e+01, 2.91742701e+01, 3.23593657e+01, 3.58921935e+01,
3.98107171e+01, 4.41570447e+01, 4.89778819e+01, 5.43250331e+01,
6.02559586e+01, 6.68343918e+01, 7.41310241e+01, 8.22242650e+01,
9.12010839e+01, 1.01157945e+02, 1.12201845e+02, 1.24451461e+02,
1.38038426e+02, 1.53108746e+02, 1.69824365e+02, 1.88364909e+02,
2.08929613e+02, 2.31739465e+02, 2.57039578e+02, 2.85101827e+02,
3.16227766e+02, 3.50751874e+02, 3.89045145e+02, 4.31519077e+02,
4.78630092e+02, 5.30884444e+02, 5.88843655e+02, 6.53130553e+02,
7.24435960e+02, 8.03526122e+02, 8.91250938e+02, 9.88553095e+02,
1.09647820e+03, 1.21618600e+03, 1.34896288e+03, 1.49623566e+03,
1.65958691e+03, 1.84077200e+03, 2.04173794e+03, 2.26464431e+03,
2.51188643e+03, 2.78612117e+03, 3.09029543e+03, 3.42767787e+03,
3.80189396e+03, 4.21696503e+03, 4.67735141e+03, 5.18800039e+03,
5.75439937e+03, 6.38263486e+03, 7.07945784e+03, 7.85235635e+03,
8.70963590e+03, 9.66050879e+03, 1.07151931e+04, 1.18850223e+04,
1.31825674e+04, 1.46217717e+04, 1.62181010e+04, 1.79887092e+04,
1.99526231e+04, 2.21309471e+04, 2.45470892e+04, 2.72270131e+04,
3.01995172e+04, 3.34965439e+04, 3.71535229e+04, 4.12097519e+04,
4.57088190e+04, 5.06990708e+04, 5.62341325e+04, 6.23734835e+04,
6.91830971e+04, 7.67361489e+04, 8.51138038e+04, 9.44060876e+04,
1.04712855e+05, 1.16144861e+05, 1.28824955e+05, 1.42889396e+05,
1.58489319e+05, 1.75792361e+05, 1.94984460e+05, 2.16271852e+05,
2.39883292e+05, 2.66072506e+05, 2.95120923e+05, 3.27340695e+05,
3.63078055e+05, 4.02717034e+05, 4.46683592e+05, 4.95450191e+05,
5.49540874e+05, 6.09536897e+05, 6.76082975e+05, 7.49894209e+05,
8.31763771e+05, 9.22571427e+05, 1.02329299e+06, 1.13501082e+06,
1.25892541e+06, 1.39636836e+06, 1.54881662e+06, 1.71790839e+06,
1.90546072e+06, 2.11348904e+06, 2.34422882e+06, 2.60015956e+06,
2.88403150e+06, 3.19889511e+06, 3.54813389e+06, 3.93550075e+06,
4.36515832e+06, 4.84172368e+06, 5.37031796e+06, 5.95662144e+06,
6.60693448e+06, 7.32824533e+06, 8.12830516e+06, 9.01571138e+06])
enhigh=np.array([1.10917482e-02, 1.23026877e-02, 1.36458314e-02, 1.51356125e-02,
1.67880402e-02, 1.86208714e-02, 2.06538016e-02, 2.29086765e-02,
2.54097271e-02, 2.81838293e-02, 3.12607937e-02, 3.46736850e-02,
3.84591782e-02, 4.26579519e-02, 4.73151259e-02, 5.24807460e-02,
5.82103218e-02, 6.45654229e-02, 7.16143410e-02, 7.94328235e-02,
8.81048873e-02, 9.77237221e-02, 1.08392691e-01, 1.20226443e-01,
1.33352143e-01, 1.47910839e-01, 1.64058977e-01, 1.81970086e-01,
2.01836636e-01, 2.23872114e-01, 2.48313311e-01, 2.75422870e-01,
3.05492111e-01, 3.38844156e-01, 3.75837404e-01, 4.16869383e-01,
4.62381021e-01, 5.12861384e-01, 5.68852931e-01, 6.30957344e-01,
6.99841996e-01, 7.76247117e-01, 8.60993752e-01, 9.54992586e-01,
1.05925373e+00, 1.17489755e+00, 1.30316678e+00, 1.44543977e+00,
1.60324539e+00, 1.77827941e+00, 1.97242274e+00, 2.18776162e+00,
2.42661010e+00, 2.69153480e+00, 2.98538262e+00, 3.31131121e+00,
3.67282300e+00, 4.07380278e+00, 4.51855944e+00, 5.01187234e+00,
5.55904257e+00, 6.16595002e+00, 6.83911647e+00, 7.58577575e+00,
8.41395142e+00, 9.33254301e+00, 1.03514217e+01, 1.14815362e+01,
1.27350308e+01, 1.41253754e+01, 1.56675107e+01, 1.73780083e+01,
1.92752491e+01, 2.13796209e+01, 2.37137371e+01, 2.63026799e+01,
2.91742701e+01, 3.23593657e+01, 3.58921935e+01, 3.98107171e+01,
4.41570447e+01, 4.89778819e+01, 5.43250331e+01, 6.02559586e+01,
6.68343918e+01, 7.41310241e+01, 8.22242650e+01, 9.12010839e+01,
1.01157945e+02, 1.12201845e+02, 1.24451461e+02, 1.38038426e+02,
1.53108746e+02, 1.69824365e+02, 1.88364909e+02, 2.08929613e+02,
2.31739465e+02, 2.57039578e+02, 2.85101827e+02, 3.16227766e+02,
3.50751874e+02, 3.89045145e+02, 4.31519077e+02, 4.78630092e+02,
5.30884444e+02, 5.88843655e+02, 6.53130553e+02, 7.24435960e+02,
8.03526122e+02, 8.91250938e+02, 9.88553095e+02, 1.09647820e+03,
1.21618600e+03, 1.34896288e+03, 1.49623566e+03, 1.65958691e+03,
1.84077200e+03, 2.04173794e+03, 2.26464431e+03, 2.51188643e+03,
2.78612117e+03, 3.09029543e+03, 3.42767787e+03, 3.80189396e+03,
4.21696503e+03, 4.67735141e+03, 5.18800039e+03, 5.75439937e+03,
6.38263486e+03, 7.07945784e+03, 7.85235635e+03, 8.70963590e+03,
9.66050879e+03, 1.07151931e+04, 1.18850223e+04, 1.31825674e+04,
1.46217717e+04, 1.62181010e+04, 1.79887092e+04, 1.99526231e+04,
2.21309471e+04, 2.45470892e+04, 2.72270131e+04, 3.01995172e+04,
3.34965439e+04, 3.71535229e+04, 4.12097519e+04, 4.57088190e+04,
5.06990708e+04, 5.62341325e+04, 6.23734835e+04, 6.91830971e+04,
7.67361489e+04, 8.51138038e+04, 9.44060876e+04, 1.04712855e+05,
1.16144861e+05, 1.28824955e+05, 1.42889396e+05, 1.58489319e+05,
1.75792361e+05, 1.94984460e+05, 2.16271852e+05, 2.39883292e+05,
2.66072506e+05, 2.95120923e+05, 3.27340695e+05, 3.63078055e+05,
4.02717034e+05, 4.46683592e+05, 4.95450191e+05, 5.49540874e+05,
6.09536897e+05, 6.76082975e+05, 7.49894209e+05, 8.31763771e+05,
9.22571427e+05, 1.02329299e+06, 1.13501082e+06, 1.25892541e+06,
1.39636836e+06, 1.54881662e+06, 1.71790839e+06, 1.90546072e+06,
2.11348904e+06, 2.34422882e+06, 2.60015956e+06, 2.88403150e+06,
3.19889511e+06, 3.54813389e+06, 3.93550075e+06, 4.36515832e+06,
4.84172368e+06, 5.37031796e+06, 5.95662144e+06, 6.60693448e+06,
7.32824533e+06, 8.12830516e+06, 9.01571138e+06, 1.00000000e+07])
flux=np.array([0.02577856, 0.03066058, 0.03603053, 0.04240431, 0.05002994,
0.05839512, 0.06726075, 0.0774172 , 0.08861314, 0.10032227,
0.11204404, 0.1241643 , 0.13569794, 0.14679644, 0.15612227,
0.163737 , 0.1679735 , 0.16947752, 0.16634144, 0.16142205,
0.15277212, 0.14084592, 0.12866013, 0.11470102, 0.10077805,
0.08728958, 0.07620083, 0.06686899, 0.0595781 , 0.05401424,
0.0500681 , 0.04688048, 0.04562412, 0.04461072, 0.04429055,
0.04411934, 0.04360341, 0.04294853, 0.04230007, 0.04170813,
0.04151998, 0.04102959, 0.04036848, 0.03990512, 0.03981038,
0.03873371, 0.0388913 , 0.03930658, 0.03864539, 0.03847107,
0.03782591, 0.03827199, 0.03778458, 0.0375978 , 0.03731462,
0.03743993, 0.03766099, 0.03748085, 0.03692993, 0.03679025,
0.03594861, 0.03429717, 0.01577556, 0.03367109, 0.03985187,
0.04026397, 0.04114586, 0.04139404, 0.04003358, 0.04117315,
0.04142176, 0.04097295, 0.04057454, 0.02738534, 0.04108266,
0.04348966, 0.04373484, 0.04306126, 0.04201883, 0.03272645,
0.04509743, 0.04554253, 0.04547996, 0.04555959, 0.04138011,
0.04519736, 0.04433602, 0.04654428, 0.04765958, 0.04234967,
0.0454248 , 0.04665341, 0.04755707, 0.04616193, 0.05216002,
0.04278341, 0.04762769, 0.04762244, 0.04718167, 0.04804163,
0.04886103, 0.04887618, 0.04842858, 0.04834029, 0.04909005,
0.04867371, 0.04908572, 0.04886949, 0.04949842, 0.04942243,
0.05127609, 0.04863334, 0.0495057 , 0.0499865 , 0.05046035,
0.05118903, 0.04976062, 0.05068816, 0.05117331, 0.05018697,
0.05132272, 0.05110207, 0.05182998, 0.0513591 , 0.05255648,
0.05254036, 0.05264878, 0.05271078, 0.0528482 , 0.0533772 ,
0.05363962, 0.05414106, 0.05401488, 0.05520938, 0.05584227,
0.05598427, 0.05620829, 0.05657112, 0.05761279, 0.05755317,
0.05936508, 0.06002431, 0.06015312, 0.06145382, 0.06266386,
0.06410618, 0.06537679, 0.06627329, 0.06798494, 0.06997738,
0.07110513, 0.07421602, 0.07665185, 0.07888876, 0.08224286,
0.08476568, 0.08807564, 0.09017473, 0.09477286, 0.09989802,
0.10462593, 0.11046211, 0.1163072 , 0.12171737, 0.12945136,
0.13813527, 0.15085467, 0.15762304, 0.14900812, 0.133717 ,
0.15720701, 0.18154412, 0.19507056, 0.20602718, 0.21957512,
0.24308601, 0.22094504, 0.18701514, 0.19871816, 0.22715274,
0.21390225, 0.22701861, 0.22410287, 0.22002921, 0.21619758,
0.23041776, 0.23530943, 0.22175108, 0.20061146, 0.16351667,
0.1452543 , 0.13621642, 0.1142683 , 0.0940825 , 0.0703381 ,
0.05378389, 0.03569159, 0.02324831, 0.01404995, 0.00785918])
# -
# The spectrum is integrated over energy bins; it contains 200 bins. Execute the following code block to plot it.
plt.figure()
plt.loglog((enlow+enhigh)/2,flux)
plt.xlabel('Energy (eV)')
plt.ylabel('Group flux per source particle')
plt.show()
# Therefore you will need to
#
# - calculate the midpoints of the energy bins
# - divide the bin-integrated spectrum by the width of the bins
# - interpolate the continuous cross section at the energy midpoints (you can use the `np.interp()` function)
# - calculate the one group cross section by weighting the continuous cross section with the spectrum. You can use the `np.trapz()` function for the integration.
#
# Your final result will be in barns. If you needed macroscopic group cross sections, you would have to multiply by the number density of the nuclide. If you then multiplied by the total flux (for which one needs to know the power of the reactor), you would obtain the fission rate in U-235.
# +
energyC = ...  # complete the line to calculate the midpoint energies
fluxpE = ...  # complete the line to calculate the spectrum
plt.figure()
# plot the spectrum
plt.show()
xs5interp = ...  # interpolate the continuous cross sections at the values of energyC
xs51g = ...  # calculate the 1-group cross section
print('1G xs: {:.2f}'.format(xs51g))
# -
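# One possible completion of the cell above, shown here with small toy arrays so the snippet is self-contained (in the notebook, `enlow`, `enhigh`, `flux`, `energy` and `xs` come from the cells above; the toy values below are illustrative only):

```python
import numpy as np

# toy stand-ins for the notebook's enlow/enhigh/flux/energy/xs arrays
enlow  = np.array([1.0, 2.0, 4.0])
enhigh = np.array([2.0, 4.0, 8.0])
flux   = np.array([0.2, 0.5, 0.3])    # bin-integrated spectrum
energy = np.array([0.5, 1.0, 10.0])   # pointwise energy grid of the cross section
xs     = np.array([10.0, 5.0, 1.0])   # pointwise cross section (barn)

energyC = (enlow + enhigh) / 2              # bin midpoints
fluxpE = flux / (enhigh - enlow)            # spectrum per unit energy
xs5interp = np.interp(energyC, energy, xs)  # xs interpolated at the midpoints
# spectrum-weighted average over the whole energy range
xs51g = np.trapz(xs5interp * fluxpE, energyC) / np.trapz(fluxpE, energyC)
```

# Note that the 1-group value always lies between the minimum and maximum of the pointwise cross section.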
# # Experiment 4 (optional)
#
# As mentioned before, it is possible to use Monte Carlo codes to obtain group cross sections. One needs to realize that evaluating group constants is not that different than using tallies with the right *scores* and *filters*. In fact openMC has a multi-group cross section module called `openmc.mgxs`.
#
# In the separate '7b-openmc_mgxs.ipynb' notebook you can find a demonstration for a simple slab reactor on how to use openMC for group cross section generation.
| Datalabs/Datalab07/7-NumericalDiffusion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Machine Learning Foundation Specialization
# ### University of Washington - Seattle
# <br/>
# #### Week 4 Quiz#1 Clustering and Similarity
# +
## Created by <NAME>
## Created on 03/30/2020 10:34
## <EMAIL>
## Mayo Clinic College of Medicine and Sciences
# -
# <br/>
# ##### **Problem 1.** A country, called Simpleland, has a language with a small vocabulary of just “the”, “on”, “and”, “go”, “round”, “bus”, and “wheels”. For a word count vector with indices ordered as the words appear above, what is the word count vector for a document that simply says “the wheels on the bus go round and round.”
# ##### Please enter the vector of counts as follows: If the counts were ["the"=1, “on”=3, "and"=2, "go"=1, "round"=2, "bus"=1, "wheels"=1], enter 1321211.
# ##### **Answer:** 2111211
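# A quick sanity check of the count vector (a throwaway snippet, not part of the quiz):

```python
# vocabulary in the order given in the problem statement
vocab = ["the", "on", "and", "go", "round", "bus", "wheels"]
doc = "the wheels on the bus go round and round".split()
counts = [doc.count(w) for w in vocab]
print("".join(str(c) for c in counts))  # prints 2111211
```
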
# <br/>
# ##### **Problem 2.** In Simpleland, a reader is enjoying a document with a representation: [1 3 2 1 2 1 1]. Which of the following articles would you recommend to this reader next?
# ##### **A.** [7 0 2 1 0 0 1]
# ##### **B.** [1 7 0 0 2 0 1]
# ##### **C.** [1 0 0 0 7 1 2]
# ##### **D.** [0 2 0 0 7 1 1]
# ##### **Answer:** B
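# Assuming plain dot-product similarity on the raw count vectors (the simplest choice consistent with the course material), a quick check:

```python
import numpy as np

reader = np.array([1, 3, 2, 1, 2, 1, 1])
articles = {
    "A": np.array([7, 0, 2, 1, 0, 0, 1]),
    "B": np.array([1, 7, 0, 0, 2, 0, 1]),
    "C": np.array([1, 0, 0, 0, 7, 1, 2]),
    "D": np.array([0, 2, 0, 0, 7, 1, 1]),
}
# dot product of the reader's vector with each candidate article
sims = {name: int(reader @ vec) for name, vec in articles.items()}
best = max(sims, key=sims.get)  # 'B' (similarity 27)
```
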
# <br/>
# ##### **Problem 3.** A corpus in Simpleland has 99 articles. If you pick one article and perform 1-nearest neighbor search to find the closest article to this query article, how many times must you compute the similarity between two articles?
# ##### **A.** 98
# ##### **B.** 98 * 2 = 196
# ##### **C.** 98 / 2 = 49
# ##### **D.** (98)^2
# ##### **E.** 99
# ##### **Answer:** A
# <br/>
# ##### **Problem 4.** For the TF-IDF representation, does the relative importance of words in a document depend on the base of the logarithm used? For example, take the words "bus" and "wheels" in a particular document. Is the ratio between the TF-IDF values for "bus" and "wheels" different when computed using log base 2 versus log base 10?
# ##### **Answer:** No
# <br/>
# ##### **Problem 5.** Which of the following statements are true? (Check all that apply):
# ##### **A.** Deciding whether an email is spam or not spam using the text of the email and some spam/no spam labels is a supervised learning problem
# ##### **B.** Dividing emails into two groups based on the text of each email is a supervised learning problem
# ##### **C.** If we are performing clustering, we typically assume we either do not have or do not use class labels in training the model
# ##### **Answer:** A and C
# <br/>
# ##### **Problem 6.** Which of the following pictures represents the best k-means solution? (Squares represent observations, plus signs are cluster centers, and colors indicate assignments of observations to cluster centers.)
# [Figure A](img/6A.png) [Figure B](img/6B.png) [Figure C](img/6C.png)
# ##### **Answer:** B
| Week_4/Quiz_1/W4Q1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p36)
# language: python
# name: conda_tensorflow_p36
# ---
# +
import keras
import keras.backend as K
from keras.datasets import mnist
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, CuDNNLSTM, CuDNNGRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional
from keras.layers import Concatenate, Reshape, Conv2DTranspose, Embedding, Multiply, Activation
from functools import partial
from collections import defaultdict
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
import pandas as pd
class MySequence:
    # Dummy stand-in; keras.utils.Sequence is replaced below
    # (likely a workaround for a keras/isolearn version incompatibility)
    def __init__(self):
        self.dummy = 1

keras.utils.Sequence = MySequence
import isolearn.keras as iso
from sequence_logo_helper_protein import plot_protein_logo, letterAt_protein
class IdentityEncoder(iso.SequenceEncoder):
    def __init__(self, seq_len, channel_map):
        super(IdentityEncoder, self).__init__('identity', (seq_len, len(channel_map)))
        self.seq_len = seq_len
        self.n_channels = len(channel_map)
        self.encode_map = channel_map
        self.decode_map = {
            val: key for key, val in channel_map.items()
        }

    def encode(self, seq):
        encoding = np.zeros((self.seq_len, self.n_channels))
        for i in range(len(seq)):
            if seq[i] in self.encode_map:
                channel_ix = self.encode_map[seq[i]]
                encoding[i, channel_ix] = 1.
        return encoding

    def encode_inplace(self, seq, encoding):
        for i in range(len(seq)):
            if seq[i] in self.encode_map:
                channel_ix = self.encode_map[seq[i]]
                encoding[i, channel_ix] = 1.

    def encode_inplace_sparse(self, seq, encoding_mat, row_index):
        raise NotImplementedError()

    def decode(self, encoding):
        seq = ''
        for pos in range(0, encoding.shape[0]):
            argmax_nt = np.argmax(encoding[pos, :])
            max_nt = np.max(encoding[pos, :])
            if max_nt == 1:
                seq += self.decode_map[argmax_nt]
            else:
                seq += "0"
        return seq

    def decode_sparse(self, encoding_mat, row_index):
        # reshape to the actual channel count (was hard-coded to 4, a DNA leftover)
        encoding = np.array(encoding_mat[row_index, :].todense()).reshape(-1, self.n_channels)
        return self.decode(encoding)
class NopTransformer(iso.ValueTransformer):
    def __init__(self, n_classes):
        super(NopTransformer, self).__init__('nop', (n_classes, ))
        self.n_classes = n_classes

    def transform(self, values):
        return values

    def transform_inplace(self, values, transform):
        transform[:] = values

    def transform_inplace_sparse(self, values, transform_mat, row_index):
        transform_mat[row_index, :] = np.ravel(values)
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage():
    # Let TensorFlow grow GPU memory usage on demand instead of grabbing it all
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    set_session(sess)

contain_tf_gpu_mem_usage()
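# The `IdentityEncoder` above is essentially a zero-padded one-hot encoder. A minimal standalone equivalent (hypothetical helper name, for illustration only):

```python
import numpy as np

def one_hot(seq, seq_len, channel_map):
    # One-hot encode `seq` into a (seq_len, n_channels) matrix; positions past
    # the end of `seq`, or characters missing from the map, stay all-zero
    enc = np.zeros((seq_len, len(channel_map)))
    for i, ch in enumerate(seq[:seq_len]):
        if ch in channel_map:
            enc[i, channel_map[ch]] = 1.0
    return enc

x = one_hot("ACA", 5, {"A": 0, "C": 1})  # shape (5, 2); last two rows all zero
```
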
# +
#Re-load cached dataframe (shuffled)
dataset_name = "coiled_coil_binders"
experiment = "baker_big_set_5x_negatives"
pair_df = pd.read_csv("pair_df_" + experiment + "_in_shuffled.csv", sep="\t")
print("len(pair_df) = " + str(len(pair_df)))
print(pair_df.head())
#Generate training and test set indexes
valid_set_size = 0.0005
test_set_size = 0.0995
data_index = np.arange(len(pair_df), dtype=np.int)
train_index = data_index[:-int(len(pair_df) * (valid_set_size + test_set_size))]
valid_index = data_index[train_index.shape[0]:-int(len(pair_df) * test_set_size)]
test_index = data_index[train_index.shape[0] + valid_index.shape[0]:]
print('Training set size = ' + str(train_index.shape[0]))
print('Validation set size = ' + str(valid_index.shape[0]))
print('Test set size = ' + str(test_index.shape[0]))
# +
#Sub-select smaller dataset
n_train_pos = 40000
n_train_neg = 0
n_test_pos = 4000
n_test_neg = 0
orig_n_train = train_index.shape[0]
orig_n_valid = valid_index.shape[0]
orig_n_test = test_index.shape[0]
train_index_pos = np.nonzero((pair_df.iloc[train_index]['interacts'] == 1).values)[0][:n_train_pos]
train_index_neg = np.nonzero((pair_df.iloc[train_index]['interacts'] == 0).values)[0][:n_train_neg]
train_index = np.concatenate([train_index_pos, train_index_neg], axis=0)
np.random.shuffle(train_index)
test_index_pos = np.nonzero((pair_df.iloc[test_index]['interacts'] == 1).values)[0][:n_test_pos] + orig_n_train + orig_n_valid
test_index_neg = np.nonzero((pair_df.iloc[test_index]['interacts'] == 0).values)[0][:n_test_neg] + orig_n_train + orig_n_valid
test_index = np.concatenate([test_index_pos, test_index_neg], axis=0)
np.random.shuffle(test_index)
print('Training set size = ' + str(train_index.shape[0]))
print('Test set size = ' + str(test_index.shape[0]))
# +
#Calculate sequence lengths
pair_df['amino_seq_1_len'] = pair_df['amino_seq_1'].str.len()
pair_df['amino_seq_2_len'] = pair_df['amino_seq_2'].str.len()
# -
pair_df.head()
# +
#Initialize sequence encoder
seq_length = 81
residue_map = {'D': 0, 'E': 1, 'V': 2, 'K': 3, 'R': 4, 'L': 5, 'S': 6, 'T': 7, 'N': 8, 'H': 9, 'A': 10, 'I': 11, 'G': 12, 'P': 13, 'Q': 14, 'Y': 15, 'W': 16, 'M': 17, 'F': 18, '#': 19}
encoder = IdentityEncoder(seq_length, residue_map)
# +
#Construct data generators
class CategoricalRandomizer:
    def __init__(self, case_range, case_probs):
        self.case_range = case_range
        self.case_probs = case_probs
        self.cases = 0

    def get_random_sample(self, index=None):
        if index is None:
            return self.cases
        else:
            return self.cases[index]

    def generate_random_sample(self, batch_size=1, data_ids=None):
        self.cases = np.random.choice(self.case_range, size=batch_size, replace=True, p=self.case_probs)

def get_amino_seq(row, index, flip_randomizer, homodimer_randomizer, max_seq_len=seq_length):
    is_flip = True if flip_randomizer.get_random_sample(index=index) == 1 else False
    is_homodimer = True if homodimer_randomizer.get_random_sample(index=index) == 1 else False
    amino_seq_1, amino_seq_2 = row['amino_seq_1'], row['amino_seq_2']
    if is_flip:
        amino_seq_1, amino_seq_2 = row['amino_seq_2'], row['amino_seq_1']
    if is_homodimer and row['interacts'] < 0.5:
        amino_seq_2 = amino_seq_1
    return amino_seq_1, amino_seq_2
flip_randomizer = CategoricalRandomizer(np.arange(2), np.array([0.5, 0.5]))
homodimer_randomizer = CategoricalRandomizer(np.arange(2), np.array([0.95, 0.05]))
batch_size = 32
data_gens = {
gen_id : iso.DataGenerator(
idx,
{ 'df' : pair_df },
batch_size=(idx.shape[0] // batch_size) * batch_size,
inputs = [
{
'id' : 'amino_seq_1',
'source_type' : 'dataframe',
'source' : 'df',
#'extractor' : lambda row, index, flip_randomizer=flip_randomizer, homodimer_randomizer=homodimer_randomizer: (get_amino_seq(row, index, flip_randomizer, homodimer_randomizer)[0] + "#" * seq_length)[:seq_length],
'extractor' : lambda row, index, flip_randomizer=flip_randomizer, homodimer_randomizer=homodimer_randomizer: get_amino_seq(row, index, flip_randomizer, homodimer_randomizer)[0],
'encoder' : IdentityEncoder(seq_length, residue_map),
'dim' : (1, seq_length, len(residue_map)),
'sparsify' : False
},
{
'id' : 'amino_seq_2',
'source_type' : 'dataframe',
'source' : 'df',
#'extractor' : lambda row, index, flip_randomizer=flip_randomizer, homodimer_randomizer=homodimer_randomizer: (get_amino_seq(row, index, flip_randomizer, homodimer_randomizer)[1] + "#" * seq_length)[:seq_length],
'extractor' : lambda row, index, flip_randomizer=flip_randomizer, homodimer_randomizer=homodimer_randomizer: get_amino_seq(row, index, flip_randomizer, homodimer_randomizer)[1],
'encoder' : IdentityEncoder(seq_length, residue_map),
'dim' : (1, seq_length, len(residue_map)),
'sparsify' : False
},
{
'id' : 'amino_seq_1_len',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index, flip_randomizer=flip_randomizer, homodimer_randomizer=homodimer_randomizer: len(get_amino_seq(row, index, flip_randomizer, homodimer_randomizer)[0]),
'encoder' : lambda t: t,
'dim' : (1,),
'sparsify' : False
},
{
'id' : 'amino_seq_2_len',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index, flip_randomizer=flip_randomizer, homodimer_randomizer=homodimer_randomizer: len(get_amino_seq(row, index, flip_randomizer, homodimer_randomizer)[1]),
'encoder' : lambda t: t,
'dim' : (1,),
'sparsify' : False
}
],
outputs = [
{
'id' : 'interacts',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['interacts'],
'transformer' : NopTransformer(1),
'dim' : (1,),
'sparsify' : False
}
],
randomizers = [flip_randomizer, homodimer_randomizer],
shuffle = True
) for gen_id, idx in [('train', train_index), ('valid', valid_index), ('test', test_index)]
}
#Load data matrices
[x_1_train, x_2_train, l_1_train, l_2_train], [y_train] = data_gens['train'][0]
print("x_1_train.shape = " + str(x_1_train.shape))
print("x_2_train.shape = " + str(x_2_train.shape))
print("l_1_train.shape = " + str(l_1_train.shape))
print("l_2_train.shape = " + str(l_2_train.shape))
print("y_train.shape = " + str(y_train.shape))
# +
#Define sequence templates
sequence_templates = [
'$' * i + '@' * (seq_length - i)
for i in range(seq_length+1)
]
sequence_masks = [
np.array([1 if sequence_templates[i][j] == '$' else 0 for j in range(len(sequence_templates[i]))])
for i in range(seq_length+1)
]
# +
#Calculate background distributions
pseudo_count = 0.1
x_means = []
x_mean_logits = []
for i in range(seq_length + 1):
    x_train_len = x_1_train[np.ravel(l_1_train) == i, ...]
    if x_train_len.shape[0] > 0:
        x_mean_len = (np.sum(x_train_len, axis=(0, 1)) + pseudo_count) / (np.sum(x_train_len, axis=(0, 1, 3)).reshape(-1, 1) + 20. * pseudo_count)
        x_mean_logits_len = np.log(x_mean_len)
        x_means.append(x_mean_len)
        x_mean_logits.append(x_mean_logits_len)
    else:
        x_means.append(np.zeros((x_1_train.shape[2], x_1_train.shape[3])))
        x_mean_logits.append(np.zeros((x_1_train.shape[2], x_1_train.shape[3])))
# +
#Visualize a few background sequence distributions
visualize_len = 67
plot_protein_logo(residue_map, np.copy(x_means[visualize_len]), sequence_template=sequence_templates[visualize_len], figsize=(12, 1), logo_height=1.0, plot_start=0, plot_end=81)
visualize_len = 72
plot_protein_logo(residue_map, np.copy(x_means[visualize_len]), sequence_template=sequence_templates[visualize_len], figsize=(12, 1), logo_height=1.0, plot_start=0, plot_end=81)
visualize_len = 81
plot_protein_logo(residue_map, np.copy(x_means[visualize_len]), sequence_template=sequence_templates[visualize_len], figsize=(12, 1), logo_height=1.0, plot_start=0, plot_end=81)
# +
#Calculate global background distribution
pseudo_count = 0.1
x_mean = (np.sum(x_1_train, axis=(0, 1)) + pseudo_count) / (np.sum(x_1_train, axis=(0, 1, 3)).reshape(-1, 1) + 20. * pseudo_count)
x_mean_logit = np.log(x_mean)
# +
#Visualize background sequence distribution
plot_protein_logo(residue_map, np.copy(x_mean), sequence_template="$" * seq_length, figsize=(12, 1), logo_height=1.0, plot_start=0, plot_end=81)
# +
#Load cached dataframe (shuffled)
dataset_name = "coiled_coil_binders"
experiment = "coiled_coil_binders_alyssa"
data_df = pd.read_csv(experiment + ".csv", sep="\t")
print("len(data_df) = " + str(len(data_df)))
test_df = data_df.copy().reset_index(drop=True)
batch_size = 32
test_df = test_df.iloc[:(len(test_df) // batch_size) * batch_size].copy().reset_index(drop=True)
print("len(test_df) = " + str(len(test_df)))
print(test_df.head())
# +
#Construct test data
batch_size = 32
test_gen = iso.DataGenerator(
np.arange(len(test_df), dtype=int),
{ 'df' : test_df },
batch_size=(len(test_df) // batch_size) * batch_size,
inputs = [
{
'id' : 'amino_seq_1',
'source_type' : 'dataframe',
'source' : 'df',
#'extractor' : lambda row, index: (row['amino_seq_1'] + "#" * seq_length)[:seq_length],
'extractor' : lambda row, index: row['amino_seq_1'],
'encoder' : IdentityEncoder(seq_length, residue_map),
'dim' : (1, seq_length, len(residue_map)),
'sparsify' : False
},
{
'id' : 'amino_seq_2',
'source_type' : 'dataframe',
'source' : 'df',
#'extractor' : lambda row, index: (row['amino_seq_2'] + "#" * seq_length)[:seq_length],
'extractor' : lambda row, index: row['amino_seq_2'],
'encoder' : IdentityEncoder(seq_length, residue_map),
'dim' : (1, seq_length, len(residue_map)),
'sparsify' : False
},
{
'id' : 'amino_seq_1_len',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: len(row['amino_seq_1']),
'encoder' : lambda t: t,
'dim' : (1,),
'sparsify' : False
},
{
'id' : 'amino_seq_2_len',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: len(row['amino_seq_2']),
'encoder' : lambda t: t,
'dim' : (1,),
'sparsify' : False
}
],
outputs = [
{
'id' : 'interacts',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['interacts'],
'transformer' : NopTransformer(1),
'dim' : (1,),
'sparsify' : False
}
],
randomizers = [],
shuffle = False
)
#Load data matrices
[x_1_test, x_2_test, l_1_test, l_2_test], [y_test] = test_gen[0]
print("x_1_test.shape = " + str(x_1_test.shape))
print("x_2_test.shape = " + str(x_2_test.shape))
print("l_1_test.shape = " + str(l_1_test.shape))
print("l_2_test.shape = " + str(l_2_test.shape))
print("y_test.shape = " + str(y_test.shape))
# +
def get_shared_model() :
#gru_1 = Bidirectional(GRU(64, activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0, unroll=False, use_bias=True, reset_after=True, return_sequences=False), merge_mode='concat')
gru_1 = Bidirectional(CuDNNGRU(64, return_sequences=False), merge_mode='concat')
drop_1 = Dropout(0.25)
def shared_model(inp) :
gru_1_out = gru_1(inp)
drop_1_out = drop_1(gru_1_out)
return drop_1_out
return shared_model
shared_model = get_shared_model()
#Inputs
res_both = Input(shape=(1, seq_length * 2, 19 + 1))
[res_1, res_2] = Lambda(lambda x: [x[:, 0, :seq_length, :], x[:, 0, seq_length:, :]])(res_both)
#Outputs
true_interacts = Input(shape=(1,))
#Interaction model definition
dense_out_1 = shared_model(res_1)
dense_out_2 = shared_model(res_2)
layer_dense_pair_1 = Dense(128, activation='relu')
dense_out_pair = layer_dense_pair_1(Concatenate(axis=-1)([dense_out_1, dense_out_2]))
pred_interacts = Dense(1, activation='linear', kernel_initializer='zeros')(dense_out_pair)
pred_interacts_sigm = Activation('sigmoid')(pred_interacts)
predictor = Model(
inputs=[
res_both
],
outputs=pred_interacts_sigm
)
predictor.load_weights('saved_models/ppi_rnn_baker_big_set_5x_negatives_classifier_symmetric_drop_25_5x_negatives_balanced_partitioned_data_epoch_10.h5', by_name=False)
predictor.trainable = False
predictor.compile(
optimizer=keras.optimizers.SGD(lr=0.1),
loss='mean_squared_error'
)
# +
#Plot distribution of positive binding prediction and calculate percentiles
x_test = np.concatenate([
x_1_test,
x_2_test
], axis=2)
y_pred_test = predictor.predict(x=[x_test], batch_size=32)[:, 0]
perc_50 = round(np.quantile(y_pred_test, q=0.5), 2)
perc_80 = round(np.quantile(y_pred_test, q=0.8), 2)
perc_90 = round(np.quantile(y_pred_test, q=0.9), 2)
f = plt.figure(figsize=(6, 4))
plt.hist(y_pred_test, bins=50, edgecolor='black', color='blue', linewidth=2)
plt.axvline(x=perc_50, color='green', linewidth=2, linestyle="--")
plt.axvline(x=perc_80, color='orange', linewidth=2, linestyle="--")
plt.axvline(x=perc_90, color='red', linewidth=2, linestyle="--")
plt.xlabel("Predicted Binding Prob.", fontsize=12)
plt.ylabel("Pair Count", fontsize=12)
t = np.sort(np.concatenate([
np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0]),
np.array([perc_50, perc_80, perc_90])
], axis=0))
plt.xticks(t, t, fontsize=12, rotation=45)
plt.yticks(fontsize=12)
plt.xlim(0, 1)
plt.ylim(0)
plt.tight_layout()
plt.show()
# +
#Pre-sample background onehots
n_bg_samples = 100
bg_samples = []
for len_ix in range(len(x_means)) :
print("Processing length = " + str(len_ix))
if np.sum(x_means[len_ix]) <= 0. :
bg_samples.append(None)
continue
samples = []
for sample_ix in range(n_bg_samples) :
bg = x_means[len_ix]
sampled_template = np.zeros(bg.shape)
for j in range(bg.shape[0]) :
sampled_ix = np.random.choice(np.arange(20), p=bg[j, :] / np.sum(bg[j, :]))
sampled_template[j, sampled_ix] = 1.
samples.append(np.expand_dims(sampled_template, axis=0))
bg_samples.append(np.concatenate(samples, axis=0))
# +
import sis
#Run SIS on test set
fixed_threshold = 0.7
dynamic_threshold_scale = 0.8
n_samples_per_step = 32
n_seqs_to_test = x_1_test.shape[0]
importance_scores_1_test = []
importance_scores_2_test = []
predictor_calls_test = []
def _mask_and_template(onehot, bg_samples) :
indicator = np.min(onehot, axis=-1)
onehot[indicator == -1, :] = 0.
sampled_ix = np.random.choice(np.arange(bg_samples.shape[0]))
onehot[indicator == -1, :] = bg_samples[sampled_ix, indicator == -1, :]
return onehot
for data_ix in range(n_seqs_to_test) :
print("Processing example " + str(data_ix) + "...")
threshold = fixed_threshold if y_pred_test[data_ix] >= fixed_threshold * (1. / dynamic_threshold_scale) else dynamic_threshold_scale * y_pred_test[data_ix]
print("Threshold = " + str(round(threshold, 3)))
x_curr = np.concatenate([
x_1_test[data_ix, 0, ...],
x_2_test[data_ix, 0, ...]
], axis=0)
bg_samples_1 = bg_samples[l_1_test[data_ix, 0]]
bg_samples_2 = bg_samples[l_2_test[data_ix, 0]]
bg_samples_curr = np.concatenate([bg_samples_1, bg_samples_2], axis=1)
seq_mask = np.concatenate([
np.max(x_1_test[data_ix, 0, ...], axis=-1, keepdims=True),
np.max(x_2_test[data_ix, 0, ...], axis=-1, keepdims=True)
], axis=0)
predictor_counter = { 'acc' : 0 }
def _temp_pred_func(batch, mask=seq_mask, bg_sample=bg_samples_curr, predictor_counter=predictor_counter) :
temp_data = np.concatenate([np.expand_dims(np.expand_dims(_mask_and_template(np.copy(arr), bg_sample) * mask, axis=0), axis=0) for arr in batch for sample_ix in range(n_samples_per_step)], axis=0)
predictor_counter['acc'] += temp_data.shape[0]
temp_out = np.mean(np.reshape(predictor.predict(x=[temp_data], batch_size=64)[:, 0], (len(batch), n_samples_per_step)), axis=-1)
return temp_out
F_PRED = lambda batch: _temp_pred_func(batch)
x_fully_masked = np.ones(x_curr.shape) * -1
initial_mask = sis.make_empty_boolean_mask_broadcast_over_axis(x_curr.shape, 1)
collection = sis.sis_collection(F_PRED, threshold, x_curr, x_fully_masked, initial_mask=initial_mask)
importance_scores_test = np.expand_dims(np.expand_dims(np.zeros(x_curr.shape), axis=0), axis=0)
if collection[0].sis.shape[0] > 0 :
imp_index = collection[0].sis[:, 0].tolist()
importance_scores_test[0, 0, imp_index, :] = 1.
importance_scores_test[0, 0, ...] = importance_scores_test[0, 0, ...] * x_curr
importance_scores_1_test_temp = importance_scores_test[:, :, :81, :]
importance_scores_2_test_temp = importance_scores_test[:, :, 81:, :]
importance_scores_1_test.append(importance_scores_1_test_temp)
importance_scores_2_test.append(importance_scores_2_test_temp)
predictor_calls_test.append(predictor_counter['acc'])
importance_scores_1_test = np.concatenate(importance_scores_1_test, axis=0)
importance_scores_2_test = np.concatenate(importance_scores_2_test, axis=0)
predictor_calls_test = np.array(predictor_calls_test)
# +
#Gradient saliency/backprop visualization
import matplotlib.collections as collections
import operator
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
def plot_protein_logo(residue_map, pwm, sequence_template=None, figsize=(12, 3), logo_height=1.0, plot_start=0, plot_end=164) :
inv_residue_map = {
i : sp for sp, i in residue_map.items()
}
#Slice according to seq trim index
pwm = pwm[plot_start: plot_end, :]
sequence_template = sequence_template[plot_start: plot_end]
entropy = np.zeros(pwm.shape)
entropy[pwm > 0] = pwm[pwm > 0] * -np.log2(np.clip(pwm[pwm > 0], 1e-6, 1. - 1e-6))
entropy = np.sum(entropy, axis=1)
conservation = np.log2(len(residue_map)) - entropy
fig = plt.figure(figsize=figsize)
ax = plt.gca()
height_base = (1.0 - logo_height) / 2.
for j in range(0, pwm.shape[0]) :
sort_index = np.argsort(pwm[j, :])
for ii in range(0, len(residue_map)) :
i = sort_index[ii]
if pwm[j, i] > 0 :
nt_prob = pwm[j, i] * conservation[j]
nt = inv_residue_map[i]
color = None
if sequence_template[j] != '$' :
color = 'black'
if ii == 0 :
letterAt_protein(nt, j + 0.5, height_base, nt_prob * logo_height, ax, color=color)
else :
prev_prob = np.sum(pwm[j, sort_index[:ii]] * conservation[j]) * logo_height
letterAt_protein(nt, j + 0.5, height_base + prev_prob, nt_prob * logo_height, ax, color=color)
plt.xlim((0, plot_end - plot_start))
plt.ylim((0, np.log2(len(residue_map))))
plt.xticks([], [])
plt.yticks([], [])
plt.axis('off')
plt.axhline(y=0.01 + height_base, color='black', linestyle='-', linewidth=2)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96, save_figs=False, fig_name=None) :
end_pos = ref_seq.find("#")
fig = plt.figure(figsize=figsize)
ax = plt.gca()
if score_clip is not None :
importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)
max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01
for i in range(0, len(ref_seq)) :
mutability_score = np.sum(importance_scores[:, i])
letterAt_protein(ref_seq[i], i + 0.5, 0, mutability_score, ax, color=None)
plt.sca(ax)
plt.xlim((0, len(ref_seq)))
plt.ylim((0, max_score))
plt.axis('off')
plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
if save_figs :
plt.savefig(fig_name + ".png", transparent=True, dpi=300)
plt.savefig(fig_name + ".eps")
plt.show()
# +
#Visualize importance for binder 1
for plot_i in range(0, 5) :
print("Test sequence " + str(plot_i) + ":")
sequence_template = sequence_templates[l_1_test[plot_i, 0]]
plot_protein_logo(residue_map, x_1_test[plot_i, 0, :, :], sequence_template=sequence_template, figsize=(12, 1), plot_start=0, plot_end=81)
plot_importance_scores(importance_scores_1_test[plot_i, 0, :, :].T, encoder.decode(x_1_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=None, sequence_template=sequence_template, plot_start=0, plot_end=81)
#Visualize importance for binder 2
for plot_i in range(0, 5) :
print("Test sequence " + str(plot_i) + ":")
sequence_template = sequence_templates[l_2_test[plot_i, 0]]
plot_protein_logo(residue_map, x_2_test[plot_i, 0, :, :], sequence_template=sequence_template, figsize=(12, 1), plot_start=0, plot_end=81)
plot_importance_scores(importance_scores_2_test[plot_i, 0, :, :].T, encoder.decode(x_2_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=None, sequence_template=sequence_template, plot_start=0, plot_end=81)
# +
#Save predicted importance scores
model_name = "sufficient_input_subsets_" + dataset_name + "_zeropad_thresh_07_sampled_32"
np.save(model_name + "_importance_scores_1_test", importance_scores_1_test)
# +
#Save predicted importance scores
model_name = "sufficient_input_subsets_" + dataset_name + "_zeropad_thresh_07_sampled_32"
np.save(model_name + "_importance_scores_2_test", importance_scores_2_test)
# +
#Save number of predictor calls consumed per pattern
model_name = "sufficient_input_subsets_" + dataset_name + "_zeropad_thresh_07_sampled_32"
np.save(model_name + "_predictor_calls_test", predictor_calls_test)
# +
#Print predictor call statistics
print("Total number of predictor calls = " + str(np.sum(predictor_calls_test)))
print("Average number of predictor calls = " + str(np.mean(predictor_calls_test)))
# +
#Binder DHD_154
#seq_1 = ("TAEELLEVHKKSDRVTKEHLRVSEEILKVVEVLTRGEVSSEVLKRVLRKLEELTDKLRRVTEEQRRVVEKLN" + "#" * seq_length)[:81]
#seq_2 = ("DLEDLLRRLRRLVDEQRRLVEELERVSRRLEKAVRDNEDERELARLSREHSDIQDKHDKLAREILEVLKRLLERTE" + "#" * seq_length)[:81]
seq_1 = "TAEELLEVHKKSDRVTKEHLRVSEEILKVVEVLTRGEVSSEVLKRVLRKLEELTDKLRRVTEEQRRVVEKLN"[:81]
seq_2 = "DLEDLLRRLRRLVDEQRRLVEELERVSRRLEKAVRDNEDERELARLSREHSDIQDKHDKLAREILEVLKRLLERTE"[:81]
print("Seq 1 = " + seq_1)
print("Seq 2 = " + seq_2)
encoder = IdentityEncoder(81, residue_map)
test_onehot_1 = np.tile(np.expand_dims(np.expand_dims(encoder(seq_1), axis=0), axis=0), (batch_size, 1, 1, 1))
test_onehot_2 = np.tile(np.expand_dims(np.expand_dims(encoder(seq_2), axis=0), axis=0), (batch_size, 1, 1, 1))
test_len_1 = np.tile(np.array([[len(seq_1)]]), (batch_size, 1))
test_len_2 = np.tile(np.array([[len(seq_2)]]), (batch_size, 1))
pred_interacts = predictor.predict(x=[np.concatenate([test_onehot_1, test_onehot_2], axis=2)])[0, 0]
print("Predicted interaction prob = " + str(round(pred_interacts, 4)))
# +
x_1_test = test_onehot_1[:1]
x_2_test = test_onehot_2[:1]
l_1_test = test_len_1[:1]
l_2_test = test_len_2[:1]
import sis
#Run SIS on test set
fixed_threshold = 0.7
dynamic_threshold_scale = 0.8
n_samples_per_step = 4
n_seqs_to_test = x_1_test.shape[0]
importance_scores_1_test = []
importance_scores_2_test = []
def _mask_and_template(onehot, bg_samples) :
indicator = np.min(onehot, axis=-1)
onehot[indicator == -1, :] = 0.
sampled_ix = np.random.choice(np.arange(bg_samples.shape[0]))
onehot[indicator == -1, :] = bg_samples[sampled_ix, indicator == -1, :]
return onehot
for data_ix in range(n_seqs_to_test) :
print("Processing example " + str(data_ix) + "...")
threshold = fixed_threshold if y_pred_test[data_ix] >= fixed_threshold * (1. / dynamic_threshold_scale) else dynamic_threshold_scale * y_pred_test[data_ix]
print("Threshold = " + str(round(threshold, 3)))
x_curr = np.concatenate([
x_1_test[data_ix, 0, ...],
x_2_test[data_ix, 0, ...]
], axis=0)
bg_samples_1 = bg_samples[l_1_test[data_ix, 0]]
bg_samples_2 = bg_samples[l_2_test[data_ix, 0]]
bg_samples_curr = np.concatenate([bg_samples_1, bg_samples_2], axis=1)
seq_mask = np.concatenate([
np.max(x_1_test[data_ix, 0, ...], axis=-1, keepdims=True),
np.max(x_2_test[data_ix, 0, ...], axis=-1, keepdims=True)
], axis=0)
predictor_counter = { 'acc' : 0 }
def _temp_pred_func(batch, mask=seq_mask, bg_sample=bg_samples_curr, predictor_counter=predictor_counter) :
temp_data = np.concatenate([np.expand_dims(np.expand_dims(_mask_and_template(np.copy(arr), bg_sample) * mask, axis=0), axis=0) for arr in batch for sample_ix in range(n_samples_per_step)], axis=0)
predictor_counter['acc'] += temp_data.shape[0]
temp_out = np.mean(np.reshape(predictor.predict(x=[temp_data], batch_size=64)[:, 0], (len(batch), n_samples_per_step)), axis=-1)
return temp_out
F_PRED = lambda batch: _temp_pred_func(batch)
x_fully_masked = np.ones(x_curr.shape) * -1
initial_mask = sis.make_empty_boolean_mask_broadcast_over_axis(x_curr.shape, 1)
collection = sis.sis_collection(F_PRED, threshold, x_curr, x_fully_masked, initial_mask=initial_mask)
importance_scores_test = np.expand_dims(np.expand_dims(np.zeros(x_curr.shape), axis=0), axis=0)
if collection[0].sis.shape[0] > 0 :
imp_index = collection[0].sis[:, 0].tolist()
importance_scores_test[0, 0, imp_index, :] = 1.
importance_scores_test[0, 0, ...] = importance_scores_test[0, 0, ...] * x_curr
importance_scores_1_test_temp = importance_scores_test[:, :, :81, :]
importance_scores_2_test_temp = importance_scores_test[:, :, 81:, :]
importance_scores_1_test.append(importance_scores_1_test_temp)
importance_scores_2_test.append(importance_scores_2_test_temp)
importance_scores_1_test = np.concatenate(importance_scores_1_test, axis=0)
importance_scores_2_test = np.concatenate(importance_scores_2_test, axis=0)
# +
save_figs = False
model_name = "sufficient_input_subsets_" + dataset_name + "_zeropad_thresh_07_sampled_32"
pair_name = "DHD_154"
#Visualize importance for binder 1
for plot_i in range(0, 1) :
print("Test sequence " + str(plot_i) + ":")
sequence_template = sequence_templates[l_1_test[plot_i, 0]]
plot_protein_logo(residue_map, x_1_test[plot_i, 0, :, :], sequence_template=sequence_template, figsize=(12, 1), plot_start=0, plot_end=81)
plot_importance_scores(importance_scores_1_test[plot_i, 0, :, :].T, encoder.decode(x_1_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=None, sequence_template=sequence_template, plot_start=0, plot_end=81, save_figs=save_figs, fig_name=model_name + "_scores_" + pair_name + "_binder_1")
#Visualize importance for binder 2
for plot_i in range(0, 1) :
print("Test sequence " + str(plot_i) + ":")
sequence_template = sequence_templates[l_2_test[plot_i, 0]]
plot_protein_logo(residue_map, x_2_test[plot_i, 0, :, :], sequence_template=sequence_template, figsize=(12, 1), plot_start=0, plot_end=81)
plot_importance_scores(importance_scores_2_test[plot_i, 0, :, :].T, encoder.decode(x_2_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=None, sequence_template=sequence_template, plot_start=0, plot_end=81, save_figs=save_figs, fig_name=model_name + "_scores_" + pair_name + "_binder_2")
# +
#Binder DHD_154
test_onehot_1 = np.tile(np.expand_dims(np.expand_dims(encoder(seq_1), axis=0), axis=0), (batch_size, 1, 1, 1))
test_onehot_2 = np.tile(np.expand_dims(np.expand_dims(encoder(seq_2), axis=0), axis=0), (batch_size, 1, 1, 1))
test_len_1 = np.tile(np.array([[len(seq_1)]]), (batch_size, 1))
test_len_2 = np.tile(np.array([[len(seq_2)]]), (batch_size, 1))
bg = np.tile(np.expand_dims(np.expand_dims(np.concatenate([
x_means[test_len_1[0, 0]],
x_means[test_len_2[0, 0]]
], axis=0), axis=0), axis=0), (batch_size, 1, 1, 1))
seq_mask = np.concatenate([
np.max(test_onehot_1[0, 0, ...], axis=-1, keepdims=True),
np.max(test_onehot_2[0, 0, ...], axis=-1, keepdims=True)
], axis=0)
x_curr = np.concatenate([test_onehot_1, test_onehot_2], axis=2)[0, 0, ...]
bg_curr = bg[0, 0, ...]
x_curr[np.sum(importance_scores_test, axis=(0, 1, 3)) <= 0.,:] = -1
def _mask_and_template_proper(onehot, bg) :
indicator = np.min(onehot, axis=-1)
sampled_mask = np.ones(onehot.shape)
sampled_template = np.zeros(onehot.shape)
for j in range(indicator.shape[0]) :
if indicator[j] == -1 :
sampled_mask[j, :] = 0.
sampled_ix = np.random.choice(np.arange(20), p=bg[j, :])
sampled_template[j, sampled_ix] = 1.
new_onehot = onehot * sampled_mask + sampled_template
return new_onehot
sample_curr = np.expand_dims(np.expand_dims(_mask_and_template_proper(x_curr, bg_curr), axis=0), axis=0)
sample_curr = sample_curr * np.expand_dims(np.expand_dims(seq_mask, axis=0), axis=0)
pred_interacts = predictor.predict(x=[sample_curr])[0, 0]
print("Predicted interaction prob = " + str(round(pred_interacts, 4)))
#Re-do test a number of times
n_test_samples = 1000
pred_interacts = []
for i in range(n_test_samples) :
sample_curr = np.expand_dims(np.expand_dims(_mask_and_template_proper(x_curr, bg_curr), axis=0), axis=0)
sample_curr = sample_curr * np.expand_dims(np.expand_dims(seq_mask, axis=0), axis=0)
pred_interacts.append(predictor.predict(x=[sample_curr])[0, 0])
pred_interacts = np.array(pred_interacts)
# +
#Plot distribution of binding predictions on samples
target_prob = 0.8533
mean_kl = target_prob * np.log(target_prob / pred_interacts) + (1. - target_prob) * np.log((1. - target_prob) / (1. - pred_interacts))
print("Mean predicted prob = " + str(round(np.mean(pred_interacts), 3)))
print("Mean KL = " + str(round(np.mean(mean_kl), 3)))
f = plt.figure(figsize=(6, 4))
plt.hist(pred_interacts, bins=50, edgecolor='black', color='red', linewidth=2)
plt.xlabel("Predicted Binding Prob.", fontsize=12)
plt.ylabel("Sample Count", fontsize=12)
plt.xticks(fontsize=12, rotation=45)
plt.yticks(fontsize=12)
plt.xlim(0, 1)
plt.ylim(0)
plt.tight_layout()
plt.show()
# +
#Re-do test a number of times with mean predictions
n_test_samples_outer = 512
n_test_samples_inner = 32
pred_interacts = []
for i in range(n_test_samples_outer) :
batch_inner = []
for j in range(n_test_samples_inner) :
sample_curr = np.expand_dims(np.expand_dims(_mask_and_template_proper(x_curr, bg_curr), axis=0), axis=0)
sample_curr = sample_curr * np.expand_dims(np.expand_dims(seq_mask, axis=0), axis=0)
batch_inner.append(sample_curr)
batch_inner = np.concatenate(batch_inner, axis=0)
pred_interacts.append(np.mean(predictor.predict(x=[batch_inner], batch_size=n_test_samples_inner)[:, 0]))
pred_interacts = np.array(pred_interacts)
# +
#Plot distribution of binding predictions on samples
f = plt.figure(figsize=(6, 4))
plt.hist(pred_interacts, bins=50, edgecolor='black', color='red', linewidth=2)
plt.xlabel("Mean Predicted Prob. (" + str(n_test_samples_inner) + " samples)", fontsize=12)
plt.ylabel("Sample Count", fontsize=12)
plt.xticks(fontsize=12, rotation=45)
plt.yticks(fontsize=12)
plt.xlim(0, 1)
plt.ylim(0)
plt.tight_layout()
plt.show()
# -
| analysis/coiled_coil_binders/sufficient_input_subsets_coiled_coil_binders_zeropad_sampled_mask_32_samples_w_predictor_calls.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: text_renderer
# language: python
# name: text_renderer
# ---
import cv2
import matplotlib.pyplot as plt
import numpy as np
import keras_ocr
import pandas as pd
import os
# pylint:disable=too-many-locals
def warpBox(image,
box,
target_height=None,
target_width=None,
margin=0,
cval=None,
return_transform=False,
skip_rotate=False):
"""Warp a boxed region in an image given by a set of four points into
a rectangle with a specified width and height. Useful for taking crops
of distorted or rotated text.
Args:
image: The image from which to take the box
box: A list of four points starting in the top left
corner and moving clockwise.
target_height: The height of the output rectangle
target_width: The width of the output rectangle
return_transform: Whether to return the transformation
matrix with the image.
"""
if cval is None:
cval = (0, 0, 0) if len(image.shape) == 3 else 0
if not skip_rotate:
box, _ = keras_ocr.tools.get_rotated_box(box)
w, h = keras_ocr.tools.get_rotated_width_height(box)
margin = h*margin
assert (
(target_width is None and target_height is None)
or (target_width is not None and target_height is not None)), \
'Either both or neither of target width and height must be provided.'
if target_width is None and target_height is None:
target_width = w
target_height = h
scale = min(target_width / w, target_height / h)
M = cv2.getPerspectiveTransform(src=box,
dst=np.array([[margin, margin], [scale * w - margin, margin],
[scale * w - margin, scale * h - margin],
[margin, scale * h - margin]]).astype('float32'))
crop = cv2.warpPerspective(image, M, dsize=(int(scale * w), int(scale * h)))
target_shape = (target_height, target_width, 3) if len(image.shape) == 3 else (target_height,
target_width)
full = (np.zeros(target_shape) + cval).astype('uint8')
full[:crop.shape[0], :crop.shape[1]] = crop
if return_transform:
return full, M
return full
def calc_box(img_path, detector):
low_text_options = [0.3, 0.2, 0.1, 0.05, 0.01, 0.001, 0]
low_text_index = 0
img = keras_ocr.tools.read(img_path)
image = keras_ocr.detection.compute_input(img)
bboxes = keras_ocr.detection.getBoxes(detector.model.predict(np.array([image])),
text_threshold=0.9)
while bboxes[0].shape[0] > 1 and low_text_index < len(low_text_options):
bboxes = keras_ocr.detection.getBoxes(detector.model.predict(np.array([image])),
text_threshold=low_text_options[low_text_index],
detection_threshold = 0.01)
low_text_index +=1
return bboxes, img
detector = keras_ocr.detection.Detector()
bboxes, image = calc_box(img_path='output/default/00000155.jpg', detector=detector)
bboxes
boxed = warpBox(image, bboxes[0][0], margin = 0.1)
plt.imshow(boxed)
# +
bboxes, image = calc_box(img_path='output/default/00000155.jpg', detector=detector)
boxed = warpBox(image, bboxes[0][0], margin = 0.1)
plt.imshow(keras_ocr.tools.drawBoxes(image, bboxes[0][0]))
plt.figure()
plt.imshow(boxed)
# -
def combine_single_generated_set(folder):
df = pd.read_table(folder +'/tmp_labels.txt', sep = ' ',header = None, names=['id', 'transcription'], dtype={'id': object})
files = pd.DataFrame({'path':os.listdir(folder)})
files['id'] = [x.replace('.jpg','') for x in files.path]
files['path'] = [ folder + '/'+ x for x in files['path']]
df = df.merge(files, how='left', on='id')
return df
df = combine_single_generated_set('output/default')
df.head()
print(['./data/fonts/'+ i for i in os.listdir('data/fonts/1001fonts')])
for i in ['./data/fonts/1001fonts/'+ i for i in os.listdir('data/fonts/1001fonts')]:
print(i)
for i in ['./1001fonts/'+ i for i in os.listdir('data/fonts/ef_1001fonts')]:
print(i)
| 07_ji_keras_ocr_cut_single_word_customized.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Figure 5: Cross-border model generalization
#
# This notebook generates individual panels of Figure 5 in "Combining satellite imagery and machine learning to predict poverty".
# +
from fig_utils import *
import matplotlib.pyplot as plt
import time
# %matplotlib inline
# -
# ## Out-of-country performance
#
# In this experiment, we compare the performance of models trained in-country with models trained out-of-country.
#
# The parameters needed to produce the plots for Panels A and B are as follows:
#
# - country_names: Names of survey data countries
# - country_paths: Paths of directories containing pooled survey data
# - survey: Either 'lsms' or 'dhs'
# - dimension: Number of dimensions to reduce image features to using PCA
# - k: Number of cross validation folds
# - trials: Number of trials to average over
# - points: Number of regularization parameters to try
# - alpha_low: Log of smallest regularization parameter to try
# - alpha_high: Log of largest regularization parameter to try
# - cmap: Color scheme to use for plot, e.g., 'Blues' or 'Greens'
#
# For 10 trials, the LSMS plot should take around 5 minutes and the DHS plot should take around 15 minutes.
#
# Each data directory should contain the following 4 files:
#
# - conv_features.npy: (n, 4096) array containing image features corresponding to n clusters
# - nightlights.npy: (n,) vector containing the average nightlights value for each cluster
# - households.npy: (n,) vector containing the number of households for each cluster
# - image_counts.npy: (n,) vector containing the number of images available for each cluster
#
# Each data directory should also contain one of the following:
#
# - consumptions.npy: (n,) vector containing average cluster consumption expenditures for LSMS surveys
# - assets.npy: (n,) vector containing average cluster asset index for DHS surveys
#
# Exact results may differ slightly with each run due to randomly splitting data into training and test sets.
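# As a sketch, the per-country directory layout described above could be loaded like this (`load_country_data` is a hypothetical helper, not part of `fig_utils`; the log-spaced grid reflects how `points`, `alpha_low`, and `alpha_high` are presumably used to build the regularization sweep):

```python
import numpy as np

def load_country_data(path, survey='lsms'):
    # Per-cluster arrays that every data directory is expected to contain.
    names = ['conv_features', 'nightlights', 'households', 'image_counts']
    # LSMS directories add consumptions.npy; DHS directories add assets.npy.
    names.append('consumptions' if survey == 'lsms' else 'assets')
    return {name: np.load(path + name + '.npy') for name in names}

# Regularization grid implied by points / alpha_low / alpha_high:
# `points` values spaced evenly in log10-space between the two exponents.
alphas = np.logspace(-2, 5, 30)
```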
# #### Panel A: LSMS consumption expenditures
# Parameters
country_names = ['nigeria', 'tanzania', 'uganda', 'malawi', 'pooled']
country_paths = ['../data/output/LSMS/nigeria/',
'../data/output/LSMS/tanzania/',
'../data/output/LSMS/uganda/',
'../data/output/LSMS/malawi/',
'../data/output/LSMS/pooled/']
survey = 'lsms'
dimension = 100
k = 10
trials = 10
points = 30
alpha_low = -2
alpha_high = 5
cmap = 'Greens'
t0 = time.time()
performance_matrix = evaluate_models(country_names, country_paths, survey,
dimension, k, trials, points,
alpha_low, alpha_high, cmap)
t1 = time.time()
print 'Time elapsed: {} seconds'.format(t1-t0)
print 'Corresponding values:'
print performance_matrix
# #### Panel B: DHS assets
# Parameters
country_names = ['nigeria', 'tanzania', 'uganda', 'malawi', 'rwanda',
'pooled']
country_paths = ['../data/output/DHS/nigeria/',
'../data/output/DHS/tanzania/',
'../data/output/DHS/uganda/',
'../data/output/DHS/malawi/',
'../data/output/DHS/rwanda/',
'../data/output/DHS/pooled/']
survey = 'dhs'
dimension = 100
k = 10
trials = 10
points = 30
alpha_low = -2
alpha_high = 5
cmap = 'Blues'
t0 = time.time()
performance_matrix = evaluate_models(country_names, country_paths, survey,
dimension, k, trials, points,
alpha_low, alpha_high, cmap)
t1 = time.time()
print 'Time elapsed: {} seconds'.format(t1-t0)
print 'Corresponding values:'
print performance_matrix
| figures/Figure 5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="_RcJx15kL_y6"
#
# # Week 3 Lab - Exploratory Data Analysis (EDA)
#
# <img align="right" style="padding-right:10px;" src="figures_wk3/curious.png" width=300><br>
# This week's assignment will focus on EDA techniques and practices for a given dataset.
#
# ## Dataset for Week3:
# **Dataset Name::** <br>
# Use any dataset that is of interest to you for this assignment.
#
# * https://archive.ics.uci.edu/ml/datasets.php
# * https://www.data.gov/
# * https://www.kaggle.com/datasets
#
# + [markdown] id="1g6ks2wEL_y_"
# # Assignment Requirements
#
# Complete an Exploratory data analysis for the dataset of your choice. Define a few questions that you wish to discover about your dataset to guide your EDA effort. Your analysis should include the items listed below. Make sure you have clearly indicated each assignment requirement within your notebook. For each of the items, document your approach, providing reasoning for your treatment of the data and insights or conclusions that you have reached. Use the [Lab Format](https://colab.research.google.com/drive/1VeVhtWCSG5s4ifEP45hAy3Bk7ncEDG4Y#scrollTo=1g6ks2wEL_y_&line=3&uniqifier=1) document as a guideline for your write up.
#
# Define a few questions that you wish to discover about your dataset to guide your EDA effort.
#
# **Important:** Make sure you provide complete and thorough explanations for all of your analysis steps. You need to defend your thought processes and reasoning.
# -
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set()
df = pd.read_csv('adult.csv',
names=[
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-range'
],
index_col=False)
# ## 1. Describe the data within the dataset.
# - Data types: Categorical vs Continuous variables
# - Statistical summary, etc.
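# As a quick illustration of a statistical summary (on a tiny hypothetical frame, not the census data itself), `describe` reports count, mean, spread, and quartiles for the numeric columns:

```python
import pandas as pd

# Tiny illustrative frame (hypothetical values, not the census data)
demo = pd.DataFrame({'age': [25, 38, 52, 46],
                     'hours-per-week': [40, 50, 40, 60]})

summary = demo.describe()
print(summary.loc['mean'])  # mean of each numeric column
```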
# The data is already clean (: This is a dataset from an analysis pulling data from the 1994 census to document people who make under and over $50K a year.
#
# From:
# <NAME>. and <NAME>. (1996). UCI Machine Learning Repository [https://archive.ics.uci.edu/ml/datasets/Census+Income]. Irvine, CA: University of California, School of Information and Computer Science.
#
# <NAME>. and <NAME>. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
#
# Data Set name: Census Income Data Set
# Dataset URL: https://archive.ics.uci.edu/ml/datasets/Census+Income
# Data Set Abstract: "Predict whether income exceeds $50K/yr based on census data. Also known as "Adult" dataset." (Kohavi, 1996)
type(df)
df.info()
# I had to input the column names from the [adult_names.csv](adult_names.csv)
#
# **Continuous Variables:** age, fnlwgt, education-num, capital-gain, capital-loss, hours-per-week
#
# **Categorical Variables:** workclass, education, marital-status, occupation, relationship, race, sex, native-country
df.shape
df.head()
# ## 2. Data Cleaning
# - Identify and handle missing values
# - Identify and handle outliers
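# A minimal sketch of the missing-value check and one common treatment (shown on a toy frame so it stands alone; the census data above happens to have no NaNs):

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({'a': [1, np.nan, 3], 'b': ['x', 'y', None]})

# Count missing values per column
print(toy.isna().sum())

# One common treatment: drop rows with any missing value
print(toy.dropna().shape)
```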
# Outliers <--- use set_index to create index row on dataset.
# +
#df['index_col'] = df.index
# +
#df.set_index(keys='index_col', inplace=True)
# -
# I just wanted to add an index key to separate instances if need be. Now I will look for outliers since the data doesn't have any missing values _(God bless 🙏)_
#
# ***I have various questions:***
# - What is the difference in education and income levels between males and females?
sns.boxplot(data=df, orient='h')
# I don't really know what is contributing to fnlwgt and it isn't useful to my questions so I am just going to drop it.
df
df.nunique()
df.drop(columns='fnlwgt', inplace=True)
df.boxplot(column=['capital-gain', 'capital-loss'], figsize=(10,6))
df.loc[df['capital-gain'] == 0].value_counts(df['income-range'])
df.loc[df['capital-loss'] > 0].value_counts(df['income-range'])
# I am more interested in the information about people who are making $50K without capital gains or losses. For example, I want to know how much their annual salary is, not how much they make playing the stock market, etc. I will only keep information about people with 0 capital gain and 0 capital loss.
df = df[df['capital-gain'] == 0]
df = df[df['capital-loss'] == 0]
df.shape
df['workclass'].value_counts()
# I really don't care about "?", "Without-pay", and "Never-worked" for my questions. I'm going to remove them as well.
df = df[df.workclass != ' ?']
df = df[df.workclass != ' Without-pay']
df = df[df.workclass != ' Never-worked']
df['workclass'].value_counts()
df['native-country'].value_counts()
# Let's stick to the continental US for now
df = df[df['native-country'] == ' United-States']
df.shape
# Now that all instances are without capital gains or losses and are all in the US, I will remove those columns for clarity
df = df.drop(columns=['capital-gain','capital-loss','native-country'])
df.boxplot()
# Very interesting that there are people over the age of 80 still working. Overall I am happy with the results. There are quite a few outliers for age, education, and hours per week. The majority of ages are from about 30-50, with the longer whisker being above the mean. Hours per week has dense outliers past the whiskers, from almost 0 hours worked to 100 hours worked per week. Overall I find this to be valuable information for context on the conditions of our labor force from the 1994 census data.
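# One standard way to quantify the outliers visible in the boxplots is the 1.5×IQR rule; a sketch on toy hours-per-week values (the 1.5 multiplier is a convention, not something dictated by this dataset):

```python
import pandas as pd

hours = pd.Series([40, 38, 45, 40, 42, 99, 2])  # toy hours-per-week values

q1, q3 = hours.quantile(0.25), hours.quantile(0.75)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Anything outside [low, high] is flagged as an outlier
outliers = hours[(hours < low) | (hours > high)]
print(outliers.tolist())
```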
df.to_csv('adult0_cleaned.csv',index=False)
df.to_csv('adult_cleaned.csv',index=False)
df = pd.read_csv('adult_cleaned.csv')
df.nunique()
df.head()
# I want to assign numerical values to my categorical variables
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['sex'] = le.fit_transform(df['sex'])
df['race'] = le.fit_transform(df['race'])
df['workclass'] = le.fit_transform(df['workclass'])
df['marital-status'] = le.fit_transform(df['marital-status'])
df['occupation'] = le.fit_transform(df['occupation'])
df['relationship'] = le.fit_transform(df['relationship'])
df['education'] = le.fit_transform(df['education'])
df.rename(columns={"sex":"male"}, inplace=True)
df
df.to_csv('adult_cleaned_1.csv',index=False)
df = pd.read_csv('adult_cleaned.csv')
df1 = pd.read_csv('adult_cleaned_1.csv')
# One-hot encode each categorical column in a single pass: drop the original column and append its dummy columns
for col in ['race', 'workclass', 'education', 'marital-status', 'occupation', 'relationship', 'income-range']:
    df = pd.concat([df.drop(col, axis=1), pd.get_dummies(df[col], prefix=col)], axis=1)
adultcorr_1 = df1.corr()
df
# ## 3. Feature Selection
# - Graphical visualization of features
# - Examine the relationships within the dataset - using 2 different methods
# - Reduction of the dimensionality of the dataset
df0 = pd.read_csv('adult0_cleaned.csv')
corr = df0.corr()
f, ax = plt.subplots(figsize=(12,10))
sns.heatmap(corr, annot=True)
f, ax = plt.subplots(figsize=(12,10))
sns.heatmap(adultcorr_1, annot=True)
adultcorr = df.corr()
f, ax = plt.subplots(figsize=(12,10))
sns.heatmap(adultcorr, vmin = 0, square=True)
# +
k = 10
cols = adultcorr.nlargest(k, 'income-range_ >50K')['income-range_ >50K'].index
cm = np.corrcoef(df[cols].values.T)
sns.set(font_scale = 1.25)
f, ax = plt.subplots(figsize=(10,8))
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size':14},yticklabels=cols.values, xticklabels=cols.values)
plt.show()
# -
X = df1[adultcorr_1.columns[:-1]]
X
y = df1['occupation'].values
y
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(X,y)
importance_list = list(zip(X.columns, model.feature_importances_))
sorted_importance = sorted(importance_list, key=lambda x: x[1], reverse=True)
sorted_importance
max_feature_len = len(max(X.columns, key=len))
for feature, rank in sorted_importance:
dots = max_feature_len - len(feature)
print(f'{feature}: {"."*dots} {rank*100:.2f}%')
# This is a poor dataset because it doesn't show the actual value of their income. I needed a continuous variable. I suppose using capital gains would work but it wouldn't answer the questions I needed. I could look for the original census dataset and concat the two. I'm not going to.
# ## 4. Insights and Findings
# - Describe any insights and/or findings from within the dataset.
# Overall I found that the obviously related variables were correlated. For example, being a husband and having a married status was an almost 100% correlation. Though, I found it strange that the education and relationship variables were very correlated. I would have to very clearly narrow down variables to answer specific questions with this dataset, but it would be much more insightful with additional variables.
# ## 5. Bonus: Feature Engineering
# - Create a new feature based on findings.
#
# Average education by gender
edu_gen_wage = df0.groupby(['income-range','sex']).agg({'education-num':['min','max', 'mean', 'median', 'count']})
edu_gen_wage.reset_index()
# The average education level of a female making less than or equal to $50K per year is approaching 10th grade of high school. Surprisingly, the average education level of a female that makes over $50K a year is only nearing graduation from high school.
| week3/Assignment/kristinscully_week3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
from matplotlib import pyplot as plt
from IPython.display import Markdown as md
import os
from datetime import datetime
import pickle
import numpy as np
from sklearn.metrics import classification_report
# %matplotlib inline
ROOT = ".."
# +
now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
display(md('''
# Model Validation Report
(generated **{now}**)
'''.format(now=now)))
# -
# ## 1. Overview
#
# ### Project description
# The current project aims to predict the genre of a movie given the overview text that describes the movie. For example, the overview for *The Matrix* is as follows:
# >Set in the 22nd century, The Matrix tells the story of a computer hacker who joins a group of underground insurgents fighting the vast and powerful computers who now rule the earth.
#
# From the above text, we would like to predict that the movie belongs to the "Action" and "Science Fiction" genres.
#
# ### Business objective
#
# We are an internet-based movie distributing company, _NetFlux_. For new movies and original content movies, we want to make sure our staff writes overviews that represent the correct genre of the movie. This will make our recommender system work better and ultimately give our users more insight into what movies they want to see.
#
#
# ## 2. Data
# +
movies_with_overviews_path = f'{ROOT}/data/processed/movies_with_overviews.pkl'
date_refreshed_unix = os.path.getmtime(movies_with_overviews_path)
date_refreshed = datetime.utcfromtimestamp(date_refreshed_unix).strftime('%Y-%m-%d %H:%M:%S')
with open(movies_with_overviews_path,'rb') as f:
movies_with_overviews=pickle.load(f)
num_movies = len(movies_with_overviews)
display(md('''
### Dataset Size & Freshness
Movie overviews and genres were scraped from TMDB.
- The dataset was last refreshed at **{date_refreshed}**.
- The dataset contains **{num_movies}** movie overviews.
'''.format(date_refreshed=date_refreshed, num_movies=num_movies, now=now)))
# -
import pandas as pd
from matplotlib import pyplot as plt
from collections import Counter
mwo = pd.DataFrame(movies_with_overviews)
# ### Data Sample
mwo.head()
# ### Genre distribution
#
# The distribution of the genres in these movies is shown in the chart below:
# +
with open(f'{ROOT}/data/processed/genre_id_to_name_dict.pkl','rb') as f:
genre_id_to_name=pickle.load(f)
genre_names=list(genre_id_to_name.values())
genre_ids_series = mwo['genre_ids']
flat_genre_ids = [st for row in genre_ids_series for st in row]
flat_genre_names = [genre_id_to_name[id] for id in flat_genre_ids]
genre_counts = Counter(flat_genre_names)
df = pd.DataFrame.from_dict(genre_counts, orient='index')
ax = df.plot(kind='bar')
ax.set_ylabel('Counts of each genre')
ax.legend().set_visible(False)
# -
# ## 3. Features
#
# ### Input Features
#
# We currently apply a Bag of Words featurization to the movie overviews. This means the input to the model is a vector of the number of occurrences of each word in the overview.
#
#
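# A minimal sketch of a Bag of Words featurization using scikit-learn's `CountVectorizer` (toy overviews, not the project's actual pipeline):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy overviews (hypothetical, not this project's data)
overviews = [
    "a hacker joins underground insurgents",
    "insurgents fight powerful computers",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(overviews)  # sparse matrix of word counts

print(sorted(vectorizer.vocabulary_))  # learned vocabulary (single-char "a" is dropped by default)
print(counts.toarray())                # one row of counts per overview
```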
with open(f'{ROOT}/data/processed/raw_count_features_test.pkl','rb') as f:
raw_count_features_test=pickle.load(f)
# **Sample of Processed Features**
(raw_count_features_test[:40, :20]).toarray()
# ### Target
#
# We apply binarization to convert the genres into a vector for prediction.
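# The binarization step can be sketched with scikit-learn's `MultiLabelBinarizer` (toy genre lists; assuming this mirrors how the stored targets were built):

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Toy genre assignments, one list per movie (hypothetical)
genres = [["Action", "Science Fiction"], ["Drama"], ["Action", "Drama"]]

mlb = MultiLabelBinarizer()
target = mlb.fit_transform(genres)  # one indicator column per genre

print(mlb.classes_)
print(target)
```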
with open(f'{ROOT}/data/processed/target_test.pkl','rb') as f:
target_test=pickle.load(f)
# **Sample of target**
target_test[:20, :20]
# ## 4. Model
# We use Naive Bayes with the Raw Bag of Word Feature input to make predictions.
# +
with open(f'{ROOT}/data/processed/genre_id_to_name_dict.pkl','rb') as f:
genre_id_to_name=pickle.load(f)
genre_names=list(genre_id_to_name.values())
# +
import json
with open(f"{ROOT}/models/model_scores.json", "r") as f:
scores = json.load(f)
# -
# ### Performance in each Genre
# +
with open(f'{ROOT}/models/classifier_nb.pkl','rb') as f:
classifnb = pickle.load(f)
predsnb=classifnb.predict(raw_count_features_test)
print (classification_report(target_test, predsnb, target_names=genre_names))
# -
# ##### Overall Precision and Recall
# +
nb_scores = scores["naive_bayes"]
md('''
Precision: {prec_mean}
Recall: {rec_mean}
'''.format(prec_mean=nb_scores["prec"], rec_mean=nb_scores["rec"]))
| notebooks/validate-best.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SVD for Movie Recommendations
# In this notebook, you'll detail a basic version of model-based collaborative filtering for recommendations by employing it on the MovieLens 1M dataset.
#
# Earlier, you used user-based and item-based collaborative filtering to make movie recommendations from users' ratings data. You could only try them on a very small data sample (20,000 ratings), and ended up getting a pretty high Root Mean Squared Error (i.e. bad recommendations). Memory-based collaborative filtering approaches that compute distance relationships between items or users have two major issues:
#
# 1. They don't scale particularly well to massive datasets, especially for real-time recommendations based on user behavior similarities, which take a lot of computation.
# 2. Ratings matrices may be overfitting to noisy representations of user tastes and preferences. When we use distance-based "neighborhood" approaches on raw data, we match to sparse low-level details that we assume represent the user's preference vector instead of the vector itself.
#
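# For contrast, a memory-based (user-based) similarity computation can be sketched on a toy ratings matrix; the scaling issue is visible in the pairwise loop over all users (toy numbers, not MovieLens):

```python
import numpy as np

# rows = users, cols = movies, 0 = unrated (toy data)
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 4.0],
              [1.0, 1.0, 5.0]])

def cosine_sim(u, v):
    """Cosine similarity between two rating vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Pairwise user-user similarities: O(n_users^2) comparisons
sims = np.array([[cosine_sim(R[i], R[j]) for j in range(3)] for i in range(3)])
print(np.round(sims, 2))
```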
# Thus you will need to apply a **Dimensionality Reduction** technique to derive the tastes and preferences from the raw data, otherwise known as doing low-rank matrix factorization. Why reduce dimensions?
#
# * You can discover hidden correlations / features in the raw data.
# * You can remove redundant and noisy features that are not useful.
# * You can interpret and visualize the data easier.
# * You can also access easier data storage and processing.
#
# With that goal in mind, you'll be introduced to Singular Value Decomposition (SVD), a powerful dimensionality reduction technique that is used heavily in modern model-based CF recommender systems.
#
# 
# ## Loading the Dataset
# Let's load the 3 data files just like last time.
# +
# Import libraries
import numpy as np
import pandas as pd
# Reading ratings file
ratings = pd.read_csv('ratings.csv', sep='\t', encoding='latin-1', usecols=['user_id', 'movie_id', 'rating', 'timestamp'])
# Reading users file
users = pd.read_csv('users.csv', sep='\t', encoding='latin-1', usecols=['user_id', 'gender', 'zipcode', 'age_desc', 'occ_desc'])
# Reading movies file
movies = pd.read_csv('movies.csv', sep='\t', encoding='latin-1', usecols=['movie_id', 'title', 'genres'])
# -
# Let's take a look at the movies and ratings dataframes.
movies.head()
ratings.head()
# Also let's count the number of unique users and movies.
n_users = ratings.user_id.unique().shape[0]
n_movies = ratings.movie_id.unique().shape[0]
print('Number of users = ' + str(n_users) + ' | Number of movies = ' + str(n_movies))
# Now, the format of the ratings matrix ought to be one row per user and one column per movie. To do so, you'll pivot *ratings* to get that and call the new variable *Ratings* (with a capital *R*).
Ratings = ratings.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0)
Ratings.head()
# Last but not least, you need to de-mean the data (subtract each user's mean rating) and convert it from a dataframe to a numpy array.
R = Ratings.values
user_ratings_mean = np.mean(R, axis = 1)
Ratings_demeaned = R - user_ratings_mean.reshape(-1, 1)
# With the ratings matrix properly formatted and de-meaned, you are ready to do some dimensionality reduction. But first, let's go over the math.
# ## Model-Based Collaborative Filtering
# *Model-based Collaborative Filtering* is based on *matrix factorization (MF)* which has received greater exposure, mainly as an unsupervised learning method for latent variable decomposition and dimensionality reduction. Matrix factorization is widely used for recommender systems where it can deal better with scalability and sparsity than Memory-based CF:
#
# * The goal of MF is to learn the latent preferences of users and the latent attributes of items from known ratings (learn features that describe the characteristics of ratings) to then predict the unknown ratings through the dot product of the latent features of users and items.
# * When you have a very sparse matrix, with a lot of dimensions, by doing matrix factorization, you can restructure the user-item matrix into low-rank structure, and you can represent the matrix by the multiplication of two low-rank matrices, where the rows contain the latent vector.
# * You fit this matrix to approximate your original matrix, as closely as possible, by multiplying the low-rank matrices together, which fills in the entries missing in the original matrix.
#
# For example, let's check the sparsity of the ratings dataset:
sparsity = round(1.0 - len(ratings) / float(n_users * n_movies), 3)
print('The sparsity level of MovieLens1M dataset is ' + str(sparsity * 100) + '%')
# ## Singular Value Decomposition (SVD)
# A well-known matrix factorization method is *Singular value decomposition (SVD)*. At a high level, SVD is an algorithm that decomposes a matrix $A$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $A$. Mathematically, it decomposes $A$ into two unitary matrices and a diagonal matrix:
#
# 
#
# where $A$ is the input data matrix (users' ratings), $U$ is the left singular vectors (user "features" matrix), $\Sigma$ is the diagonal matrix of singular values (essentially weights/strengths of each concept), and $V^{T}$ is the right singular vectors (movie "features" matrix). $U$ and $V^{T}$ are column orthonormal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie.
#
# To get the lower rank approximation, you take these matrices and keep only the top $k$ features, which can be thought of as the underlying tastes and preferences vectors.
# ### Setting Up SVD
# Scipy and Numpy both have functions to do the singular value decomposition. You will be using the Scipy function *svds* because it lets us choose how many latent factors we want to use to approximate the original ratings matrix (instead of having to truncate it after).
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(Ratings_demeaned, k = 50)
# As we are going to leverage matrix multiplication to get predictions, you'll convert $\Sigma$ (currently returned as a 1-D array of values) to diagonal matrix form.
sigma = np.diag(sigma)
# ### Making Predictions from the Decomposed Matrices
# You now have everything you need to make movie ratings predictions for every user. You can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $A$.
#
# But first, you need to add the user means back to get the actual star ratings prediction.
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
# With the predictions matrix for every user, you can build a function to recommend movies for any user. Return the list of movies the user has already rated, for the sake of comparison.
preds = pd.DataFrame(all_user_predicted_ratings, columns = Ratings.columns)
preds.head()
# Now write a function to return the movies with the highest predicted rating that the specified user hasn't already rated. Though you didn't use any explicit movie content features (such as genre or title), you'll merge in that information to get a more complete picture of the recommendations.
def recommend_movies(predictions, userID, movies, original_ratings, num_recommendations):
# Get and sort the user's predictions
user_row_number = userID - 1 # User ID starts at 1, not 0
    sorted_user_predictions = predictions.iloc[user_row_number].sort_values(ascending=False)
# Get the user's data and merge in the movie information.
user_data = original_ratings[original_ratings.user_id == (userID)]
user_full = (user_data.merge(movies, how = 'left', left_on = 'movie_id', right_on = 'movie_id').
sort_values(['rating'], ascending=False)
)
    print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
    print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies[~movies['movie_id'].isin(user_full['movie_id'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'movie_id',
right_on = 'movie_id').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
# Let's try to recommend 20 movies for user with ID 1310.
already_rated, predictions = recommend_movies(preds, 1310, movies, ratings, 20)
# Top 20 movies that User 1310 has rated
already_rated.head(20)
# Top 20 movies that User 1310 hopefully will enjoy
predictions
# These look like pretty good recommendations. It's good to see that, although you didn't actually use the genre of the movie as a feature, the truncated matrix factorization features "picked up" on the underlying tastes and preferences of the user. You've recommended some comedy, drama, and romance movies - all of which were genres of some of this user's top rated movies.
# ### Model Evaluation
# Can't forget to evaluate our model, can we?
#
# Instead of doing it manually like last time, you will use the *[Surprise](https://pypi.python.org/pypi/scikit-surprise)* library, which provides various ready-to-use powerful prediction algorithms (including SVD), to evaluate its RMSE (Root Mean Squared Error) on the MovieLens dataset. It is a Python scikit for building and analyzing recommender systems.
# +
# Import libraries from Surprise package
from surprise import Reader, Dataset, SVD
from surprise.model_selection import cross_validate
# Load Reader library
reader = Reader()
# Load ratings dataset with Dataset library
data = Dataset.load_from_df(ratings[['user_id', 'movie_id', 'rating']], reader)
# +
# Use the SVD algorithm.
svd = SVD()
# Compute the RMSE of the SVD algorithm with 5-fold cross-validation.
cross_validate(svd, data, measures=['RMSE'], cv=5, verbose=True)
# -
# You get a mean *Root Mean Square Error* of 0.8736 which is pretty good. Let's now train on the dataset and arrive at predictions.
trainset = data.build_full_trainset()
svd.fit(trainset)
# Let's pick again user with ID 1310 and check the ratings he has given.
ratings[ratings['user_id'] == 1310]
# Now let's use SVD to predict the rating that User with ID 1310 will give to a random movie (let's say with Movie ID 1994).
svd.predict(1310, 1994)
# For movie with ID 1994, you get an estimated prediction of 3.349. The recommender system works purely on the basis of an assigned movie ID and tries to predict the rating based on how the other users have rated the movie.
#
# ## Conclusion
# In this notebook, you attempted to build a model-based Collaborative Filtering movie recommendation system based on latent features from a low-rank matrix factorization method called SVD. As it captures the underlying features driving the raw data, it can scale significantly better to massive datasets as well as make better recommendations based on users' tastes.
#
# However, we still likely lose some meaningful signals by using a low-rank approximation. Specifically, there's an interpretability problem, as a singular vector specifies a linear combination of all input columns or rows, and there's a lack of sparsity when the singular vectors are quite dense. Thus, the SVD approach is limited to linear projections.
| MovieRecomenderSystem(MovieLens)/SVD_Model_Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Manual Classification for Fractional Cover of Water Project <img align="right" src="../../Supplementary_data/dea_logo.jpg">
#
# * **Compatibility:** Notebook currently compatible with the `DEA Sandbox` environment
# * **Products used:**
# This notebook uses no DEA products. It uses either example drone data or uploaded drone data, in [WGS84](https://www.ga.gov.au/scientific-topics/positioning-navigation/wgs84) and [GeoTIFF format](https://earthdata.nasa.gov/esdis/eso/standards-and-references/geotiff)
# ## Background <img align="right" src="./Data_and_figures/Fractional_cover_of_water.png" style="width: 400px;"/>
# The Fractional Cover of Water Project is a [Geoscience Australia](https://www.ga.gov.au/) - [Australian Rivers Institute](https://www.griffith.edu.au/australian-rivers-institute/our-people) project. The project requires the collection of drone field data to validate the development of a new algorithm that will classify a [Landsat](https://landsat.gsfc.nasa.gov/) or [Sentinel 2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) image pixel into sub-pixel fractions of water and wet vegetation.
#
# This notebook allows a user to manually classify collected drone imagery as input to the algorithm development and validation process. Load the data in, run all the cells, and a [bokeh](https://docs.bokeh.org/en/latest/index.html) interactive plot allows the user to zoom in, click 1m x 1m cells, select the land cover type, and save the results to file. Open water cells can have a [Forel-Ule](http://forel-ule-scale.com/) water color recorded for the cell, where this was measured at the site using the [EyeOnWater](https://www.eyeonwater.org/) application.
#
# The image at right shows the variations in wet vegetation fractional cover type that will be collected by drone: `Floating, Emergent, OpenWater, GreenVeg(PV), DryVeg(NPV), BareSoil(BS), and Forel-Ule water colour (if the type is OpenWater)`. Water height is additionally recorded, though it may not be retrievable from the algorithm due to the inherent limitations of top-down satellite imagery.
#
# Additional documentation is provided below.
#
# ## Description
# The notebook uses drone imagery to classify the land cover type, including `Floating, Emergent, OpenWater, GreenVeg(PV), DryVeg(NPV), BareSoil(BS), Forel-Ule water colour (if the type is OpenWater)`
#
# * first bring in the drone data in resolution (1, -1) (unit meter) as a GeoTiff raster image. This can be copied into the [DEA Sandbox](https://app.sandbox.dea.ga.gov.au/) by dragging and dropping the file.
# * classify the pixels into categories listed above
# * save the results as raster into `geotiff`
#
# To use the notebook, please refer the instruction video and doc linked below
# - [Written Workflow Instructions](https://drive.google.com/file/d/1CS7zQYdTUoMFBwbiOj6x74lxHGdLpcdZ/view?usp=sharing)
# - [Video Instructions](https://drive.google.com/file/d/1nygm8rFe1frTR91kRiGzhhwxYALLg5mH/view?usp=sharing)
# ***
# ## Get started
# ### 1. Upload drone image
#
# Set the variable `drone_tif_path` accordingly
# e.g. `drone_tif_path = "./Data_and_figures/drone_wetland_sml4.tif"`
# change the file name if necessary
drone_tif_path = "./Data_and_figures/Drone_wetland_sml.tif"
# ### 2. Name your output file
# The default file name is results_tif_path = `./test.tif`
# This sends the output file to the current directory and a file named `test.tif`.
# You may want to rename this file
results_tif_path = './test.tif'
# ### 3. Run the rest of the notebook
# Now you should be all set up; run the cells below by pressing `Shift + Enter` until the drone image shows up near the end. Alternatively, you can select `Run -->> Run All Cells` from the drop-down menu in JupyterLab.
# +
import sys
import os
from os import path
import datacube
import rasterio.features
import numpy as np
import geopandas as gpd
import matplotlib.pyplot as plt
from shapely.geometry import shape
import xarray as xr
import re
from datetime import datetime
import urllib
from datacube.utils.cog import write_cog
from datacube.utils.geometry import assign_crs
from datacube.utils.geometry import GeoBox
from odc.algo import xr_reproject
sys.path.append("../../Scripts")
from rasterio import warp
from rasterio.features import geometry_mask
from rasterio.transform import from_bounds
from shapely.geometry import Polygon
from bokeh.io import curdoc, output_notebook, show
from bokeh.layouts import layout, column, row
from bokeh.models import (CheckboxGroup, Select, ColumnDataSource, HoverTool, YearsTicker, Legend,
CustomJS, LegendItem, field, Range1d, Circle, Button, RadioGroup, TextInput, WheelZoomTool,
ResetTool, BoxZoomTool, SaveTool, LinearColorMapper, CategoricalColorMapper,
Label, PreText, FileInput, Toggle, MultiPolygons)
from bokeh.models.formatters import DatetimeTickFormatter
from bokeh.events import SelectionGeometry
from bokeh.models.glyphs import Text
from bokeh.palettes import Blues256
from bokeh.colors import RGB, named
from bokeh.plotting import figure
# -
# only sandbox requires next two lines
from dea_dask import create_local_dask_cluster
create_local_dask_cluster()
# required by bokeh
output_notebook()
# drone imagery resample resolution
# note to user: don't change it unless required
# note to editor: change it however you want
drone_res_tgt = (1, 1)
# This is the Forel-Ule color scale
furgb = np.array([
[1,33,88,188],
[2,49,109,197],
[3,50,124,187],
[4,75,128,160],
[5,86,143,150],
[6,109,146,152],
[7,105,140,134],
[8,117,158,114],
[9,123,166,84],
[10,125,174,56],
[11,149,182,69],
[12,148,182,96],
[13,165,188,118],
[14,170,184,109],
[15,173,181,95],
[16,168,169,101],
[17,174,159,92],
[18,179,160,83],
[19,175,138,68],
[20,164,105,5],
[21,161,44,4]], dtype='uint8')
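# The table above maps a Forel-Ule index (1-21) to an RGB triple. A small hypothetical helper (not part of the project code) turns a row into a hex colour string for plotting; a demo subset of the table keeps the sketch self-contained, and in the notebook it can be called with the full `furgb` array:

```python
import numpy as np

# First three rows of the Forel-Ule table above: [index, r, g, b]
fu_demo = np.array([[1, 33, 88, 188],
                    [2, 49, 109, 197],
                    [3, 50, 124, 187]], dtype='uint8')

def fu_to_hex(fu_index, scale):
    """Hex colour string for a Forel-Ule index, given rows of [index, r, g, b]."""
    row = scale[fu_index - 1]
    return '#{:02x}{:02x}{:02x}'.format(int(row[1]), int(row[2]), int(row[3]))

print(fu_to_hex(1, fu_demo))  # deep blue
```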
# +
def load_drone_tif(fname, res_tgt):
"""
load drone imagery with given file name and resolution
input:
fname: file name of drone imagery
res_tgt: resample resolution of output, type: tuple, e.g. res_tgt=(1, 1)
output:
xarray.DataSet of drone imagery
"""
drone = xr.open_rasterio(fname, parse_coordinates=True, chunks={'band': 1, 'x': 1024, 'y': 1024})
drone = assign_crs(drone)
affine, width, height = warp.calculate_default_transform(drone.crs, 'EPSG:3577', drone.shape[1], drone.shape[2],
*drone.geobox.extent.boundingbox)
tgt_affine, tgt_width, tgt_height = warp.aligned_target(affine, width, height, res_tgt)
drone_geobox = GeoBox(tgt_width, tgt_height, tgt_affine, 'EPSG:3577')
drone_tgt = xr_reproject(drone, drone_geobox, resampling= 'bilinear' )
return drone_tgt.load()
def load_results_tif(fname):
"""
load geotiff of classification results
input:
fname: file name of classification results
output:
xarray.DataSet of geotiff
"""
results = xr.open_rasterio(fname, parse_coordinates=True, chunks={'band': 1, 'x': 1024, 'y': 1024})
return results.load()
# -
# note to user: just run it
# note to editor: load drone imagery, convert to rgba image, set up coordinates and affine
drone_tgt = load_drone_tif(drone_tif_path, drone_res_tgt)
rgba_image = np.empty((drone_tgt.shape[1], drone_tgt.shape[2]), dtype='uint32')
view = rgba_image.view(dtype='uint8').reshape(drone_tgt.shape[1], drone_tgt.shape[2], drone_tgt.shape[0])
for i in range(3):
view[:, :, i] = (drone_tgt.data[i]).astype('uint8')
view[:, :, 3] = 255
scale_factor = np.array([drone_tgt.x.data.min(),
drone_tgt.y.data.min() if drone_tgt.y.data.min() > 0 else drone_tgt.y.data.max()])
xr_results = xr.DataArray(data=[np.zeros(rgba_image.shape, dtype='uint8')] * 2, dims=['band', 'y', 'x'],
coords={'band': [1, 2],
'y': drone_tgt.coords['y'], 'x':drone_tgt.coords['x']}, attrs={'nodata':0})
select_poly_affine = from_bounds(drone_tgt.x.data.min()-scale_factor[0],
drone_tgt.y.data.max()-scale_factor[1],
drone_tgt.x.data.max()-scale_factor[0],
drone_tgt.y.data.min()-scale_factor[1],
drone_tgt.shape[2], drone_tgt.shape[1])
# note to user: run it
# note to editor: monstrous function required to redirect the bokeh interactive plot to a random jupyterlab port
# or the jupyter notebook port via proxy
def plot_doc(doc):
drone_source = ColumnDataSource(data={'img': [np.flip(rgba_image, axis=0)]})
results_source = ColumnDataSource(data={'category': [np.zeros(rgba_image.shape, dtype='uint8')],
'forel_ule': [np.zeros(rgba_image.shape, dtype='uint8')]})
drone_file_input = TextInput(value=drone_tif_path, title="Load drone imagery", height=50, width=600,
sizing_mode='fixed')
def open_drone_tif(attrname, old, new):
fname = drone_file_input.value
if not path.exists(fname):
print("file doesn't exist!")
return
drone_tgt = load_drone_tif(fname, drone_res_tgt)
rgba_image = np.empty((drone_tgt.shape[1], drone_tgt.shape[2]), dtype='uint32')
view = rgba_image.view(dtype='uint8').reshape(drone_tgt.shape[1], drone_tgt.shape[2], drone_tgt.shape[0])
for i in range(3):
view[:, :, i] = (drone_tgt.data[i]).astype('uint8')
view[:, :, 3] = 255
scale_factor = np.array([drone_tgt.x.data.min(),
drone_tgt.y.data.min() if drone_tgt.y.data.min() > 0 else drone_tgt.y.data.max()])
drone_source.data['img'] = [np.flip(rgba_image, axis=0)]
global xr_results
xr_results = xr.DataArray(data=[np.zeros(rgba_image.shape, dtype='uint8')] * 2, dims=['band', 'y', 'x'],
coords={'band': [1, 2],
'y': drone_tgt.coords['y'], 'x':drone_tgt.coords['x']}, attrs={'nodata':0})
results_source.data['category'] = [np.flip(xr_results[0].data, axis=0)]
results_source.data['forel_ule'] = [np.flip(xr_results[1].data, axis=0)]
global select_poly_affine
select_poly_affine = from_bounds(drone_tgt.x.data.min()-scale_factor[0],
drone_tgt.y.data.max()-scale_factor[1],
drone_tgt.x.data.max()-scale_factor[0],
drone_tgt.y.data.min()-scale_factor[1],
drone_tgt.shape[2], drone_tgt.shape[1])
drone_file_input.on_change('value', open_drone_tif)
result_file_input = TextInput(value=results_tif_path, title="Results tiff file", height=50, width=600,
sizing_mode='fixed')
result_save_button = Button(label="Save results", button_type="success")
result_load_button = Button(label="Load results", button_type="success")
def open_result_tif(event):
fname = result_file_input.value
if not path.exists(fname):
print("file doesn't exist!")
return
global xr_results
xr_results = load_results_tif(fname)
results_source.data['category'] = [np.flip(xr_results[0].data, axis=0)]
results_source.data['forel_ule'] = [np.flip(xr_results[1].data, axis=0)]
def save_result_tif(event):
fname = result_file_input.value
if path.exists(fname):
time_now = str(datetime.now()).replace(':', '').replace(' ', '').replace('-', '')
os.rename(fname, fname + '.bak' + time_now)
write_cog(xr_results, fname)
result_load_button.on_click(open_result_tif)
result_save_button.on_click(save_result_tif)
    drone_fig = figure(tooltips=[('x-coord', "$x{0.0}"), ('y-coord', "$y{0.0}")], title="image %s" %("drone"),
x_axis_type='auto', y_axis_type='auto', x_minor_ticks=10, y_minor_ticks=10,
x_axis_label="x origin %s" % scale_factor[0],
y_axis_label="y origin %s" % scale_factor[1],
tools="box_zoom, wheel_zoom, pan, tap, poly_select, reset")
drone_fig.toolbar.active_scroll = drone_fig.select_one(WheelZoomTool)
drone_tag = ['drone', 1]
drone_fig.image_rgba(image='img', source=drone_source,
x=drone_tgt.x.data.min()-scale_factor[0],
y=drone_tgt.y.data.min()-scale_factor[1],
dw=drone_tgt.shape[2], dh=drone_tgt.shape[1],
tags=drone_tag,
level="image")
transparent_white = RGB(255, 255, 255, 0)
cats_color = [named.violet.to_rgb(), named.indigo.to_rgb(), named.tomato.to_rgb(),
named.deepskyblue.to_rgb(), named.yellow.to_rgb(), named.beige.to_rgb(),
named.brown.to_rgb()]
cats_color_mapper = LinearColorMapper(cats_color, low=1, high=len(cats_color), low_color=transparent_white)
water_color = [RGB(f[1], f[2], f[3], 255) for f in furgb]
water_color_mapper = LinearColorMapper(water_color, low=1, high=21, low_color=transparent_white)
water_tag = ['forel_ule', 21]
cats_tag = ['cats', 10]
water_image = drone_fig.image(image='forel_ule', source=results_source, x=drone_tgt.x.data.min()-scale_factor[0],
y=drone_tgt.y.data.min()-scale_factor[1],
dw=drone_tgt.shape[2], dh=drone_tgt.shape[1],
color_mapper=water_color_mapper,
global_alpha=0.8,
level="image", tags=water_tag)
cats_image = drone_fig.image(image='category', source=results_source, x=drone_tgt.x.data.min()-scale_factor[0],
y=drone_tgt.y.data.min()-scale_factor[1],
dw=drone_tgt.shape[2], dh=drone_tgt.shape[1],
color_mapper=cats_color_mapper,
global_alpha=0.8,
level="image", tags=cats_tag)
coords_label = PreText(text="null", width=100, sizing_mode='fixed')
select_poly_source = ColumnDataSource(data=dict(x=[[[[]]]], y=[[[[]]]]))
js_code = """
if (cb_obj.final == false)
{
return;
}
const geometry = cb_obj.geometry;
const data = {'x': [[[[]]]], 'y': [[[[]]]]};
if (geometry.type == "point")
{
var ind_x = Math.floor(geometry.x);
var ind_y = Math.floor(geometry.y);
console.log("x:", ind_x);
console.log("y:", ind_y);
label.text = "x=" + ind_x.toString() +";" + "y=" + ind_y.toString();
}
else if (geometry.type == "poly")
{
var array_len = geometry.x.length;
for (var i=0; i<array_len; i++)
{
data['x'][0][0][0].push(Math.floor(cb_obj.geometry.x[i]));
data['y'][0][0][0].push(Math.floor(cb_obj.geometry.y[i]));
}
label.text = "null";
console.log("y:", poly.data);
}
poly.data = data;
"""
js_callback = CustomJS(args={'label': coords_label, 'poly': select_poly_source},
code=js_code)
drone_fig.js_on_event(SelectionGeometry, js_callback)
select_poly = MultiPolygons(xs="x", ys="y", line_width=2)
drone_fig.add_glyph(select_poly_source, select_poly)
def get_ind_from_coords():
ind_list = []
poly_list = []
if coords_label.text != "null":
coords = coords_label.text.split(';')
ind_list += [[abs(int(coords[1].split('=')[1])), abs(int(coords[0].split('=')[1]))]]
return ind_list
elif len(select_poly_source.data['x'][0][0][0]) > 0:
for x, y in zip(select_poly_source.data['x'][0][0][0], select_poly_source.data['y'][0][0][0]):
poly_list += [[x, y]]
poly_shape = Polygon(poly_list)
poly_mask = geometry_mask([poly_shape], out_shape=xr_results[0].data.shape,
transform=select_poly_affine, invert=True)
return np.flip(poly_mask, axis=0)
else:
return None
cats = ["NA", "Overstory","Emergent","Floating","OpenWater","GreenVeg","DryVeg","Bare"]
radio_group = RadioGroup(labels=cats, active=0, height=800, height_policy="fixed", aspect_ratio=0.1)
forel_ule_scale = TextInput(value="0", title="Forel-Ule Water Colour", width=100, sizing_mode='fixed')
def choose_cat(attrname, old, new):
ind_list = get_ind_from_coords()
if ind_list is None:
return
if attrname == 'active':
if isinstance(ind_list, list):
for ind_y, ind_x in ind_list:
xr_results[0][ind_y, ind_x] = radio_group.active
else:
xr_results[0].data[ind_list] = radio_group.active
results_source.data['category'] = [np.flip(xr_results[0].data, axis=0)]
if attrname == 'value':
check_numbers = re.match(r'^[0-9]+$', forel_ule_scale.value)
if check_numbers is None:
print("only input numbers", forel_ule_scale.value)
forel_ule_scale.value = "0"
elif int(forel_ule_scale.value) < 0 or int(forel_ule_scale.value) > 21:
print("invalid value, please check!")
forel_ule_scale.value = "0"
elif radio_group.active != 4 and int(forel_ule_scale.value) > 0:
forel_ule_scale.value = "0"
print("cannot set value for non-water")
if isinstance(ind_list, list):
for ind_y, ind_x in ind_list:
xr_results[1][ind_y, ind_x] = int(forel_ule_scale.value)
else:
xr_results[1].data[ind_list] = int(forel_ule_scale.value)
results_source.data['forel_ule'] = [np.flip(xr_results[1].data, axis=0)]
radio_group.on_change('active', choose_cat)
forel_ule_scale.on_change('value', choose_cat)
def coords_change(attrname, old, new):
ind_list = get_ind_from_coords()
if ind_list is None:
return
if isinstance(ind_list, list):
radio_group.active = xr_results[0].data[ind_list[0][0], ind_list[0][1]]
forel_ule_scale.value = str(xr_results[1].data[ind_list[0][0], ind_list[0][1]])
else:
radio_group.active = xr_results[0].data[ind_list][0]
forel_ule_scale.value = str(xr_results[1].data[ind_list][0])
coords_label.on_change('text', coords_change)
select_poly_source.on_change('data', coords_change)
overlay_toggle_category = Toggle(label="Overlay category", button_type="success",
height=50, width=150, sizing_mode='fixed', active=True)
overlay_toggle_water_color = Toggle(label="Overlay water color", button_type="success",
height=50, width=150, sizing_mode='fixed', active=True)
def overlay_results(event):
if (overlay_toggle_water_color.active):
water_image.visible = True
else:
water_image.visible = False
if (overlay_toggle_category.active):
cats_image.visible = True
else:
cats_image.visible = False
overlay_toggle_category.on_click(overlay_results)
overlay_toggle_water_color.on_click(overlay_results)
control_group = column(coords_label, forel_ule_scale, radio_group)
result_group = row(result_file_input, column(result_load_button, result_save_button))
layouts = layout([drone_file_input, result_group, [overlay_toggle_category, overlay_toggle_water_color],
[control_group, drone_fig]], sizing_mode='scale_height')
doc.add_root(layouts)
def remote_jupyter_proxy_url(port):
"""
Callable to configure Bokeh's show method when a proxy must be
configured.
If port is None we're asking about the URL
for the origin header.
"""
base_url = "https://app.sandbox.dea.ga.gov.au/"
host = urllib.parse.urlparse(base_url).netloc
# If port is None we're asking for the URL origin
# so return the public hostname.
if port is None:
return host
service_url_path = os.environ['JUPYTERHUB_SERVICE_PREFIX']
proxy_url_path = 'proxy/%d' % port
user_url = urllib.parse.urljoin(base_url, service_url_path)
full_url = urllib.parse.urljoin(user_url, proxy_url_path)
return full_url
# ### 4. If everything above worked, you should see your drone imagery below the next cell as well as an output file path and some green buttons.
# - When selecting polygons of multiple pixels, double-click to complete the polygon selection
# - Completing a new polygon will hide the old polygon
# - Hit the green `Save results` button frequently to make sure your results are saved during your session
# - You can switch between water color and category being displayed over your imagery by clicking `Overlay category` or `Overlay water color` buttons
# - You probably want to have your imagery open in a GIS on another screen to compare with this imagery while classifying, otherwise you'll be scrolling in and out a lot
# if you know your url
# notebook_url = "http://localhost:8888"
show(plot_doc, notebook_url=remote_jupyter_proxy_url)
# ***
#
# ## Additional information
#
# **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
# Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
#
# **Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
# If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).
#
# **Last modified:** October 2020
#
# **Compatible datacube version:**
print(datacube.__version__)
# ## Tags
# Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
# + raw_mimetype="text/restructuredtext" active=""
# **Tags**: :index:`sandbox compatible`, :index:`bokeh`, :index:`COG`, :index:`classification`, :index:`DEA Sandbox`, :index:`drones`, :index:`Forel-Ule`, :index:`GeoTIFF`, :index:`inland water`, :index:`water`
| Scientific_workflows/Fractional_Cover_of_Water/Manual_classification_workflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Use OSMnx to plot street network over place shape
#
# This example uses Portland, Maine - a city with several islands within its municipal boundaries. Thus, we set `retain_all=True` when getting the network so that we keep all the graph components, not just the largest connected component.
#
# - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
# - [GitHub repo](https://github.com/gboeing/osmnx)
# - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
# - [Documentation](https://osmnx.readthedocs.io/en/stable/)
import osmnx as ox, matplotlib.pyplot as plt
from descartes import PolygonPatch
from shapely.geometry import Polygon, MultiPolygon
ox.config(log_console=True, use_cache=True)
# +
# get the place shape
gdf = ox.gdf_from_place('Portland, Maine')
gdf = ox.project_gdf(gdf)
# get the street network, with retain_all=True to retain all the disconnected islands' networks
G = ox.graph_from_place('Portland, Maine', network_type='drive', retain_all=True)
G = ox.project_graph(G)
# -
# plot the network, but do not show it or close it yet
fig, ax = ox.plot_graph(G, fig_height=10, show=False, close=False, edge_color='#777777')
plt.close()
# to this matplotlib axis, add the place shape as descartes polygon patches
for geometry in gdf['geometry'].tolist():
if isinstance(geometry, (Polygon, MultiPolygon)):
if isinstance(geometry, Polygon):
geometry = MultiPolygon([geometry])
for polygon in geometry:
patch = PolygonPatch(polygon, fc='#cccccc', ec='k', linewidth=3, alpha=0.1, zorder=-1)
ax.add_patch(patch)
# optionally set up the axes extents all nicely
margin = 0.02
west, south, east, north = gdf.unary_union.bounds
margin_ns = (north - south) * margin
margin_ew = (east - west) * margin
ax.set_ylim((south - margin_ns, north + margin_ns))
ax.set_xlim((west - margin_ew, east + margin_ew))
fig
# Notice this municipal boundary is an administrative boundary, not a physical boundary, so it represents jurisdictional bounds, not individual physical features like islands.
| notebooks/07-example-plot-network-over-shape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from logicqubit.logic import *
# +
logicQuBit = LogicQuBit(3)
A = Qubit()
B = Qubit()
C = Qubit()
C.H() # control qubit
A.X()
#B.H()
C.Fredkin(A, B)
C.H()
# -
logicQuBit.Measure([C])
logicQuBit.Plot()
| comparing two qubits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sense2vec import Sense2Vec
import re
from termcolor import colored
from JOSS_PDF_Cleaner import Clean_PDF
import gensim
from gensim import utils
import numpy as np
import sys
from sklearn.datasets import fetch_20newsgroups
from nltk import word_tokenize
from nltk import download
from nltk.corpus import stopwords
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# %matplotlib inline
import spacy
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import glob
import string
import glob
from tqdm import tqdm
#import pdfminer
from pdfminer.high_level import extract_text
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
nltk.download('stopwords')
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
import nltk
nltk.download('punkt')
from nltk import sent_tokenize
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
from nltk.probability import FreqDist
s2v = Sense2Vec().from_disk("../../s2v_old")
Q = 0
PAPER_OF_INTEREST_FNAME = glob.glob('/Volumes/Seagate Backup Plus Drive/JOSS Project/joss-papers-master/*/*/*.pdf')
print(PAPER_OF_INTEREST_FNAME[Q])
Paper_interest = PAPER_OF_INTEREST_FNAME[Q]
text = ''
arr = []
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer
for page_layout in extract_pages(Paper_interest):
for element in page_layout:
if isinstance(element, LTTextContainer):
score = Clean_PDF(element.get_text().lower())
#print(score)
if score == 0:
print(colored(element.get_text().lower(), 'green'))
arr.append(element.get_text())
text = text + element.get_text() + ' '
else:
print(colored(element.get_text().lower(), 'red'))
#arr.append(element.get_text())
#text = text + element.get_text() + ' '
#arr = np.array(arr)
text
text = text.replace('-\\n','')
text = text.replace('\\n',' ')
text = text.replace('\n',' ')
text
model = spacy.load('en_core_web_lg')
doc = model(text)
# +
doc_arr = []
pos_arr = []
tag_arr = []
dep_arr = []
for token in doc:
if token.is_alpha == True:
if token.is_stop == False:
doc_arr.append(str(token.lemma_).lower())
pos_arr.append(str(token.pos_))
tag_arr.append(str(token.tag_))
dep_arr.append(str(token.dep_))
doc_arr = np.array(doc_arr)
pos_arr = np.array(pos_arr)
tag_arr = np.array(tag_arr)
dep_arr = np.array(dep_arr)
# -
word_vec = np.zeros((128))
counter = 0
for P in range(len(doc_arr)):
best = s2v.get_best_sense(doc_arr[P])
if best != None:
vector = s2v[best]
word_vec = word_vec + vector
counter = counter + 1
average_word_vec = word_vec / counter
# + tags=[]
average_word_vec
# -
df_reviewers = pd.read_csv('../Data/JOSS Table Test.csv')
from sklearn.metrics.pairwise import cosine_similarity
def GetReviewerSample_Sense2Vec(paper_vec, df_reviewers=df_reviewers):
all_usernames = []
all_domains = []
all_cosine_sims = []
for j in range(df_reviewers.shape[0]-1):
if pd.isna(df_reviewers.iloc[j+1]['Domains/topic areas you are comfortable reviewing']) == False:
reviewer_interests = df_reviewers.iloc[j+1]['Domains/topic areas you are comfortable reviewing'].lower()
reviewer_interests.replace('/',' ')
doc = model(reviewer_interests)
reviewer_arr = []
for token in doc:
if token.is_alpha == True:
if token.is_stop == False:
reviewer_arr.append(str(token.lemma_).lower())
reviewer_arr = np.array(reviewer_arr)
word_vec = np.zeros((128))
counter = 0
for P in range(len(reviewer_arr)):
best = s2v.get_best_sense(reviewer_arr[P])
if best != None:
vector = s2v[best]
word_vec = word_vec + vector
counter = counter + 1
if counter > 0:
average_reviewer_vec = word_vec / counter
all_usernames.append(df_reviewers.username.iloc[j+1])
all_domains.append(reviewer_interests)
all_cosine_sims.append(cosine_similarity(np.array([paper_vec]), np.array([average_reviewer_vec]))[0,0])
return np.array(all_usernames), np.array(all_domains), np.array(all_cosine_sims)
all_usernames, all_domains, all_cosine_sims = GetReviewerSample_Sense2Vec(average_word_vec)
def TopReviewers(number=5, all_usernames=all_usernames, all_domains=all_domains, all_cosine_sims=all_cosine_sims):
message = 'Hello.\nI have found ' +str(number) + ' possible reviewers for this paper.'+ '\n\n'
for J in range(number):
index = np.argsort(all_cosine_sims)[-1-J]
#print(index)
        ps = 'I believe ' + str(all_usernames[index]) + ' will be a good reviewer for this paper. Their domain interests and this paper have a cosine similarity score of ' + str(all_cosine_sims[index])[:6] + '. This reviewer\'s domain interests are ' + str(all_domains[index].replace('\n', ','))
message = message + ps + '\n\n'
print(message)
TopReviewers()
| Idea 9/JOSS_Reviewer_Idea_9_Reviewers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
def isPalindrome(s):
    # compare characters from the two ends moving toward the middle
    for i in range(len(s) // 2):
        if s[i] != s[len(s) - i - 1]:
            return False
    return True
s = input('Enter the String')
ans = isPalindrome(s)
if (ans):
print("Yes")
else:
print("No")
# -
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
| Python Programs/Palindrome.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:practicas]
# language: python
# name: conda-env-practicas-py
# ---
# ## Convolutional Neural Network
#
# In the following exercise we will build a classification model for the MNIST dataset. To do so, we will design a 2-layer convolutional neural network using the Keras framework. The network we are going to design does the following:
#
# - We use two convolutional layers with kernel filters of size 3 (3x3)
# - The activation function is ReLU
# - Finally, we "flatten" the output of the convolutional layers to connect it to one last simple layer
# - A simple output layer over the 10 possible digits
#
# As we already know, the data are 28x28 pixel images and there are 10 digits to classify, from 0 to 9. Since the pixel values are grayscale from 0 to 255, we normalize them to the range 0 to 1.
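The 0-to-1 normalization mentioned above could be sketched as follows (note that the cells below actually feed the raw 0-255 pixel values to the model, so this step is an optional addition here):

```python
import numpy as np

def normalize_images(images):
    # scale grayscale pixel values from the 0-255 range down to 0-1
    return images.astype("float32") / 255.0
```

Applied to `X_train` and `X_test` right after loading, this usually speeds up and stabilizes training.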
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
import matplotlib.pyplot as plt
# Plot a sample
plt.imshow(X_train[0])
# This time, rather than adding a reshape step inside the network, we reshape the input data before feeding it to the model
#reshape data to fit model
X_train = X_train.reshape(60000,28,28,1)
X_test = X_test.reshape(10000,28,28,1)
# +
from keras.utils import to_categorical
# One-hot encode the target variable
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
y_train[0]
# -
# Create the model
# +
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
# model creation
model = Sequential()
# Add layers
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(28,28,1)))
model.add(Conv2D(32, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
# -
# Compile the model, tracking its accuracy
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train it
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3)
# Make a few predictions and finally compute the model's accuracy
# +
plt.plot(history.history['acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper right')
plt.show()
# -
# First 4 samples
model.predict(X_test[:4])
# First 4 values
y_test[:4]
model.evaluate(X_test, y_test)
| keras_tf_pytorch/Keras/MNIST Convolutional NN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Canonical Classification
#
# ## $k$ Nearest-Neighbor
# ### Distance Measure
# The $k$ nearest neighbor is based on the simple idea that when a sample is close to other samples, chances are that they share the same label. To know how close one sample is to another, there needs to be a notion of closeness. For this application we will use the Euclidean distance, also known as the $L^2$ norm
#
# $$ \left\| x \right\|_2 \overset{\textrm{def}}= \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$$
#
# ### Classification
# In classification we would like to assign certain samples to a class. To do so we would like to implement an algorithm that assigns the class based on the classes of the $k$ nearest neighbors. Try to do so in `code/knn.py`.
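A minimal sketch of what `knn.predict` could look like (assuming a NumPy-based signature matching how it is called later; this is one possible solution, not the reference implementation in `code/knn.py`):

```python
import numpy as np

def predict(X_train, X_test, y_train, k):
    """Label each test sample by majority vote among its k nearest
    training samples, measured with the Euclidean (L2) distance."""
    predictions = []
    for x in X_test:
        # L2 distance from x to every training sample
        distances = np.linalg.norm(X_train - x, axis=1)
        # indices of the k closest training samples
        nearest = np.argsort(distances)[:k]
        # majority vote over the neighbors' labels
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        predictions.append(labels[np.argmax(counts)])
    return np.array(predictions)
```

With two well-separated clusters, held-out points recover their cluster's label.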
# +
import sys
sys.path.append('code/')
import knn
import numpy as np
from sklearn import datasets, model_selection
from sklearn.metrics import adjusted_rand_score
# import some data to play with
X, y = datasets.load_iris(return_X_y=True)
# inspect the shape of the data
print(X.shape)
print(y.shape)
# split data into train and test
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y)
# and predict
k=10
y_predicted = knn.predict(X_train, X_test, y_train, k)
# evaluate predictions using y_predicted and y_test
ARI = adjusted_rand_score(np.squeeze(y_predicted), np.squeeze(y_test))
print("Adjusted Rand Index is: {0}".format(ARI))
# -
# ## Naive Bayes
# #### TODO: probably this exercise is too extensive; rewrite it using libraries.
# ### Dataset
# Naive Bayes classifier is often used to filter emails for spam, such as in [this article](https://www.aclweb.org/anthology/E03-1059.pdf). For this exercise we will use a dataset containing samples from email messages. This dataset is available on [openml](https://www.openml.org/d/42673) and can be read in using sklearn:
# +
from sklearn import datasets
import numpy as np
spam = datasets.fetch_openml('auml_eml_1_a')
# lets inspect the dataset
np.set_printoptions(precision=3, suppress=True)
print("Dataset Dictionary Keys: \n{0}\n".format(spam.keys()))
print("Dataset Feature Names: \n{0}\n".format(spam['feature_names']))
# print a few examples of spam and non-spam
print("Example spam data entries: \n{0}\n".format(spam['data'][spam['target']==0][:8,:].T))
print("Example spam targets: \n{0}\n".format(spam['target'][spam['target']==0][:8].T))
print("Example non-spam data entries: \n{0}\n".format(spam['data'][spam['target']==1][:8,:].T))
print("Example non-spam targets: \n{0}\n".format(spam['target'][spam['target']==1][:8].T))
# -
# This dataset contains 3 features. The first is a binary feature named 'F1_text_html_found', the second is a nominal feature named 'F1_number_of_links', and the last is again a binary feature named 'F1_contains_javascript'. Using these three features, we are going to try to predict the chance of a sample being spam. To do so we use:
#
# \begin{align}
# P(Y=y_k | X=x_i) = \frac{P(Y=y_k) P(X=x_i | Y=y_k)}{P(X=x_i)}
# \end{align}
#
# This is an alternative way of writing:
#
# \begin{align}
# P(Y=y_k | X=x_i) = \frac{P(Y=y_k) P(X_1=x_{i,1} \wedge \ldots \wedge X_d=x_{i,d} | Y=y_k)}{P(X_1=x_{i,1} \wedge \ldots \wedge X_d=x_{i,d})}
# \end{align}
#
# The denominator is often omitted, because it scales all classes the same, so when comparing classes it does not matter to omit it. Furthermore, it simplifies the calculation. Thus we will ultimately use:
#
# \begin{align}
# P(Y=y_k | X=x_i) \propto P(Y=y_k) P(X_1=x_{i,1} \wedge \ldots \wedge X_d=x_{i,d} | Y=y_k)
# \end{align}
#
# To use this formula, we first need to define what the probability distribution is. For a binary feature this is relatively simple. For the count feature 'F1_number_of_links', we will use a probability distribution based on a histogram with bins of width $10$. Implement these probabilities in `code/utils.py`.
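As a hedged sketch, the two per-feature probability estimates could look like this (`binary_prob` and `histogram_prob` are hypothetical names, not the actual `code/utils.py` API):

```python
import numpy as np

def binary_prob(feature_values, x):
    # P(X = x): fraction of training samples with this exact binary value
    return np.mean(feature_values == x)

def histogram_prob(feature_values, x, bin_width=10):
    # P(X falls in the bin containing x), using fixed-width histogram bins
    return np.mean(feature_values // bin_width == x // bin_width)
```

Per class $y_k$, the score $P(Y=y_k)\prod_j P(X_j=x_{i,j}|Y=y_k)$ is then the class prior times these per-feature estimates, computed over that class's training samples only.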
# +
import sys
sys.path.append('code/')
import naive_bayes
import pprint
from sklearn.model_selection import train_test_split
X = spam.data
y = spam.target
X_train, X_test, y_train, y_test = train_test_split(X,y, stratify=y)
feature_types = ['binary', 'histogram', 'binary']
model = naive_bayes.fit(X_train, y_train, feature_types)
# inspect model
#pprint.pprint(model)
y_predict = naive_bayes.predict(X_test, model)
# -
print(y_predict)
| solutions/canonical_classification/canonical_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"} id="AnCU8-Mal4fV"
# # The Credit Card Fraud Dataset - Synthesizing the Minority Class
#
# In this notebook a practical exercise is presented to showcase the usage of the YData Synthetic library along with
# GANs to synthesize tabular data.
# For the purpose of this exercise, a credit card fraud dataset from Kaggle is used, which can be found here:
# https://www.kaggle.com/mlg-ulb/creditcardfraud
# -
# ## 1. Setup the library
# ### 1.1 Install `ydata-synthetic`
# + pycharm={"name": "#%%\n"} id="x0u2qegKl4fY" outputId="51b00474-09de-4e9a-dbf9-9535f056fbd0" colab={"base_uri": "https://localhost:8080/", "height": 247}
# Install ydata-synthetic lib
# # ! pip install ydata-synthetic
# -
# ### 1.2 Import libraries and GAN module
# + pycharm={"name": "#%%\n"} id="oX2OK2fbl4fZ"
import importlib
import os
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from ydata_synthetic.synthesizers.regular import WGAN_GP
from ydata_synthetic.preprocessing.regular.credit_fraud import *
model = WGAN_GP
# -
# ## 2. Read the dataset
# + pycharm={"name": "#%%\n"} id="P1Rcz4RPl4fZ"
# Read the original data and have it preprocessed
data = pd.read_csv('./data/creditcard.csv', index_col=[0])
# + id="ceNe1Ofbl4fZ" outputId="f8f9fece-e7d3-454f-d4c9-d6cd116ca68a" colab={"base_uri": "https://localhost:8080/"}
# Extract list of columns
data_cols = list(data.columns)
print('Dataset columns: {}'.format(data_cols))
# + pycharm={"name": "#%%\n"} id="3o4V8-ypl4fa" outputId="39fabdb7-b3e4-492f-85f0-cd6232b45609" colab={"base_uri": "https://localhost:8080/"}
# For the purpose of this example we will only synthesize the minority class
# train_data contains 492 rows which had 'Class' value as 1 (which were very few)
train_data = data.loc[ data['Class']==1 ].copy()
# Before training the GAN, do not forget to apply the required data transformations
# For simplicity we've applied a PowerTransformation here, which makes the data distribution more Gaussian-like.
data = transformations(train_data)
print("Dataset info: Number of records - {} Number of variables - {}".format(train_data.shape[0], train_data.shape[1]))
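The `transformations` helper above comes from `ydata_synthetic`; as an illustration only (my own sketch, not the library's internals), a comparable Gaussian-izing step can be done with scikit-learn's `PowerTransformer`:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

# Yeo-Johnson power transform: reshapes each skewed column toward a
# Gaussian-like distribution; standardize=True also gives zero mean, unit variance
pt = PowerTransformer(method="yeo-johnson", standardize=True)
X = np.array([[1.0], [2.0], [4.0], [8.0], [100.0]])
X_gaussian = pt.fit_transform(X)
```

The fitted transformer can later be inverted with `pt.inverse_transform` to map synthetic samples back to the original scale.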
# + [markdown] pycharm={"name": "#%% md\n"} id="3ezlIjKbl4fb"
# ## 3. GAN Training
# -
# ### 3.1 Parameters Settings
# + pycharm={"name": "#%%\n"} id="7FMDs5eql4fb"
# Define the GAN and training parameters
noise_dim = 32
dim = 128
batch_size = 128
log_step = 20
epochs = 60+1
learning_rate = 5e-4
beta_1 = 0.5
beta_2 = 0.9
train_sample = train_data.copy().reset_index(drop=True)
# All columns except 'Class'
data_cols = [i for i in train_sample.columns if i != "Class"]
# Scale down the data
# train_sample[ data_cols ] = train_sample[ data_cols ] / 10 # scale to random noise size, one less thing to learn
gan_args = [batch_size, learning_rate, beta_1, beta_2, noise_dim, train_sample.shape[1], dim]
train_args = ['', epochs, log_step]
# -
# ### 3.2 Training
# + pycharm={"name": "#%%\n"} id="qgMDmyall4fc" outputId="ae669bdf-01b6-49d9-a254-cc0776508f7b" colab={"base_uri": "https://localhost:8080/"}
# Training the GAN model chosen: Vanilla GAN, CGAN, DCGAN, etc.
synthesizer = model(gan_args, n_critic=5)
synthesizer.train(train_sample, train_args)
# -
# ### 3.3 Generator Summary
# + id="tDjYWJPyl4fc" outputId="8a5c7afb-74ee-44ee-8902-048250d04061" colab={"base_uri": "https://localhost:8080/"}
# Generator description
synthesizer.generator.summary()
# -
# ### 3.4 Discriminator Summary
# + pycharm={"name": "#%%\n"} id="9zyfNK8Gl4fd" outputId="634297a1-dbeb-4fd0-fe52-24b181711336" colab={"base_uri": "https://localhost:8080/"}
# Discriminator description
synthesizer.discriminator.summary()
# -
# ## 4. Save the Model to Disk
# + pycharm={"name": "#%%\n"} id="C3cs_LKEl4fd" outputId="bdb0af49-7e29-480e-cb83-56ad2f192ae0" colab={"base_uri": "https://localhost:8080/", "height": 185}
# You can easily save the trained generator and load it afterwards
if not os.path.exists("./saved/gan"):
os.makedirs("./saved/gan")
synthesizer.save(path="./saved/gan/generator_fraud.pkl")
# + [markdown] pycharm={"name": "#%% md\n"}
# ## 5. Plot Results
# + id="5mvCYNH5l4fd"
# Dictionary of our trained models
# Model -> [model_name, with_class, generator_model]
models = {'GAN': ['GAN', False, synthesizer.generator]}
# Set up the visualization parameters
seed = 17
test_size = 492 # number of fraud cases
noise_dim = 32
np.random.seed(seed)
z = np.random.normal(size=(test_size, noise_dim))
# real = synthesizer.get_data_batch(train=train_sample, batch_size=test_size, seed=seed)
# real_samples = pd.DataFrame(real, columns=data_cols + ["Class"])
labels = train_sample['Class']
model_name = 'GAN'
colors = ['deepskyblue']
markers = 'o'
class_labels = 'Class 1'
col1, col2 = 'V17', 'V10'
base_dir = 'cache/'
# Actual fraud data visualization
model_steps = [ 0,
20,
40,
60
]
rows = len(model_steps)
columns = 1 + len(models)
axarr = [[]]*len(model_steps)
fig = plt.figure(figsize=(14,rows*3))
# Go through each of the model_step values -> 0, 20, 40, 60
for model_step_ix, model_step in enumerate(model_steps):
axarr[model_step_ix] = plt.subplot(rows, columns, model_step_ix*columns + 1)
plt.scatter( train_sample[[col1]], train_sample[[col2]],
label=class_labels, marker=markers, edgecolors=colors, facecolors='none' )
plt.title('Actual Fraud Data')
plt.ylabel(col2) # Only add y label to left plot
plt.xlabel(col1)
xlims, ylims = axarr[model_step_ix].get_xlim(), axarr[model_step_ix].get_ylim()
if model_step_ix == 0:
legend = plt.legend()
legend.get_frame().set_facecolor('white')
[model_name, with_class, generator_model] = models[model_name]
generator_model.load_weights( base_dir + '_generator_model_weights_step_'+str(model_step)+'.h5')
ax = plt.subplot(rows, columns, model_step_ix*columns + 1 + 1 )
g_z = generator_model.predict(z)
gen_samples = pd.DataFrame(g_z, columns=data_cols+['label'])
gen_samples.to_csv('./data/Generated_sample.csv')
plt.scatter( gen_samples[[col1]], gen_samples[[col2]],
label=class_labels, marker=markers, edgecolors=colors[0], facecolors='none' )
plt.title(model_name)
plt.xlabel(col1)
ax.set_xlim(xlims), ax.set_ylim(ylims)
plt.suptitle('Comparison of GAN outputs', size=16, fontweight='bold')
plt.tight_layout(rect=[0.075,0,1,0.95])
# Adding text labels for training steps
vpositions = np.array([ i._position.bounds[1] for i in axarr ])
vpositions += ((vpositions[0] - vpositions[1]) * 0.35 )
for model_step_ix, model_step in enumerate( model_steps ):
fig.text( 0.05, vpositions[model_step_ix], 'training\nstep\n'+str(model_step), ha='center', va='center', size=12)
if not os.path.exists("./img"):
os.makedirs("./img")
plt.savefig('img/Comparison_of_GAN_outputs.png', dpi=100)
# + pycharm={"name": "#%%\n"}
| tabular-data/gan_example_simplified.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="YGEPN7BYQfMc"
# # Exercises 1: Python
#
# Author: <NAME>
#
# Repository: https://github.com/jelic98/raf_pp_materials
#
# + id="oTyStBg8RCjY"
print('Hello world!')
# + id="wMDD1aX2RFBf"
message = 'Hello world!'
print('Text:', message)
message = 1
print('Number:', message)
message = 2
print('Number:', message, end=', ')
message = 3
print('Number', message, sep=': ')
# + id="smmfqL4cSNMF"
first_name = 'Linus'
last_name = 'Torvalds'
full_name = first_name + ' ' + last_name
print(full_name)
# + id="SAlARKKHSPIB"
quote = 'Any program is only as good as it is useful'
print(f"{full_name} once said\n\t'{quote}'")
print("{} once said\n\t'{}'".format(full_name, quote))
# + id="c9lshgzKS2t-"
def print_math(expr, result):
print(expr, result, sep=' = ')
print_math('3 + 2', 3 + 2)
print_math('3 - 2', 3 - 2)
print_math('3 * 2', 3 * 2)
print_math('3 / 2', 3 / 2)
print_math('3 // 2', 3 // 2)
print_math('3 ^ 2', 3 ** 2)
# + id="c_2ff8CaWwA9"
students = ['Alice', 'Bob', 'Oscar']
total = len(students)
print(f'There are {total} students')
# + id="UVQPjut2TyCQ"
for s in students:
print('Hello, ' + s + '!')
# + id="2y1H8FSJWmQ8"
for s in reversed(students):
print('Hello, ' + s + '!')
# + id="dfkEqwrGT9J7"
print('First:', students[0])
print('Second:', students[1])
print('Last:', students[-1])
print('Second-last:', students[-2])
# + id="KAXtOj9VUwcI"
i = 0
for s in students:
i += 1
print('{}. student is {}'.format(i, s))
# + id="nEGR7y1rUdd8"
for i, s in enumerate(students):
print('{}. student is {}'.format(i + 1, s))
# + id="9Fbgkmr2U_Qh"
print('Before:', students)
students[2] = 'Caprice'
print('After:', students)
# + id="_VQ_AhHOVO-L"
# Assigning to an index past the end of a list raises an IndexError
students[99] = 'Oscar'
# + id="uP116AAVVcFn"
print('Caprice' in students)
print('Oscar' in students)
# + id="aCvrR8dVVnqr"
print('Before:', students)
students.append('Danny')
print('After:', students)
# + id="8DGaDDs9VyjM"
even = []
odd = []
for i, s in enumerate(students):
if i % 2 == 0:
even.append(s)
else:
odd.append(s)
print(even)
print(odd)
# + id="3uo2lzh-eTnG"
all = [s for s in students]
print(all)
enums = [(i, s) for i, s in enumerate(all)]
print(enums)
even = [s for i, s in enums if i % 2 == 0]
print(even)
odd = [s for i, s in enums if i % 2 != 0]
print(odd)
# + id="oh30Re9RXCzq"
print(students)
print(students[1:3])
print(students[1:])
print(students[:3])
print(students[:-1])
print(students[:-2])
# + id="k3Jh4fvWXnlC"
for i in range(10):
print(i, end=' ')
# + id="WKvEAF_tXx-f"
for i in range(3, 10):
print(i, end=' ')
# + id="Wie9L_o7X0d4"
for i in range(3, 10, 2):
print(i, end=' ')
# + id="Sni-EM5DX6l9"
print('Before:', students)
del students[1]
print('After:', students)
# + id="Rkju3Xq1fciL"
grades = {'Alice': 10, 'Bob': 9, 'Caprice': 5, 'Danny': 9}
print(grades)
# + id="PWpOQTOEfze_"
grades['Oscar'] = 7
print(grades)
# + id="QqsovmsUgBTa"
grades['Oscar'] = 99
print(grades)
# + id="v0KemFPOf-Mq"
del grades['Oscar']
print(grades)
# + id="hJSX0mFugLBE"
for key, value in grades.items():
print('Student {} has {}'.format(key, value))
# + id="E99BVD_LgdlS"
for key, value in grades.items():
if value == 10:
print('{} is an excellent student'.format(key))
elif value == 5:
print('{} did not pass'.format(key))
else:
print('{} is a good student'.format(key))
# + id="NqYFcNI7hXGR"
def flexible_sum(a, b = 0):
return a + b
print(flexible_sum(3, 2))
print(flexible_sum(3))
print(flexible_sum(b=2, a=3))
# + id="fyBLM0Tdh7hu"
def div(a, b):
try:
return a / b
except ZeroDivisionError:
print('Divisor is zero')
return None
div(3, 0)
# + id="s9On8RhdidXS"
class Animal:
def __init__(self, sound):
self.sound = sound
def play_sound(self):
print(self.sound)
class Dog(Animal):
def __init__(self):
super(Dog, self).__init__('Bark!')
# + id="hhCLIq2qjQI4"
d = Dog()
d.play_sound()
# + id="dQce-bqfiiAZ"
path = '/content/drive/Shared drives/Materijali 2020 2021/5. semestar/Programski prevodioci/Vezbe/data/wiki.txt'
# !cat '{path}'
# + id="5en9mYc6i_0J"
with open(path, 'r') as source:
text = source.read()
length = len(text)
i = 0
buffer = ''
flag_number = False
flag_upper = False
flag_lower = False
while i <= length:
if i < length:
curr = text[i]
ascii = ord(curr[0])
i += 1
if curr == ' ' or curr == '\n' or curr == '-' or i == length:
if flag_number:
print(buffer, 'NUMBER')
elif flag_upper:
print(buffer, 'UPPER')
elif flag_lower:
print(buffer, 'LOWER')
buffer = ''
flag_number = False
flag_upper = False
flag_lower = False
continue
elif curr == ',' or curr == '.' or curr == '!':
continue
elif ascii >= 48 and ascii <= 57:
flag_number = True
elif ascii >= 65 and ascii <= 90:
flag_upper = True
elif ascii >= 97 and ascii <= 122:
flag_lower = True
buffer += curr
print(buffer)
# + id="qWEHyCcXdzbz"
with open(path, 'r') as source:
text = source.read()
for chunk in text.split():
chunk = chunk.replace(',', '').replace('.', '').replace('!', '')
for token in chunk.split('-'):
if token.isnumeric():
print(token, 'NUMBER')
elif token[0].isupper():
print(token, 'UPPER')
else:
print(token, 'LOWER')
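A third variant (a sketch following the same classification rules as the split-based version above) does the splitting with a single regular expression:

```python
import re

def classify_tokens(text):
    """Classify tokens as NUMBER, UPPER, or LOWER, splitting on whitespace
    and hyphens while dropping the punctuation characters , . and !"""
    labels = []
    for token in re.findall(r'[^\s\-,.!]+', text):
        if token.isnumeric():
            labels.append((token, 'NUMBER'))
        elif token[0].isupper():
            labels.append((token, 'UPPER'))
        else:
            labels.append((token, 'LOWER'))
    return labels

print(classify_tokens('Python 3 was released in 2008, replacing python-2!'))
```

The character class excludes whitespace, hyphens, and the punctuation `,.!`, so splitting and cleaning happen in one step.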
| Notebooks/01_Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/hilbert_space_gp_birthdays_numpyro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="E6nNPgz6BhXa"
# https://num.pyro.ai/en/stable/examples/hsgp.html
# + colab={"base_uri": "https://localhost:8080/"} id="ArIltwNiBceO" outputId="1dd00510-0228-4eb5-9939-f427585e69e9"
# %matplotlib inline
# !pip install numpyro
# + [markdown] id="ovlXp3i_BceR"
#
# # Example: Hilbert space approximation for Gaussian processes.
#
# This example replicates a few of the models in the excellent case
# study by <NAME> [1] (originally written using R and Stan).
# The case study uses approximate Gaussian processes [2] to model the
# relative number of births per day in the US from 1969 to 1988.
# The Hilbert space approximation is way faster than the exact Gaussian
# processes because it circumvents the need for inverting the
# covariance matrix.
#
# The original case study presented by Aki also emphasizes the iterative
# process of building a Bayesian model, which is excellent as a pedagogical
# resource. Here, however, we replicate only 4 out of all the models available in [1].
# There are a few minor differences in the mathematical details of our models,
# which we had to make in order for the chains to mix properly. We have clearly
# commented on the places where our models are different.
#
# **References:**
#
# 1. <NAME>, Simpson, et al (2020), `"Bayesian workflow book - Birthdays"
# <https://avehtari.github.io/casestudies/Birthdays/birthdays.html>`.
#
# 2. <NAME>, <NAME>, <NAME>, et al (2020),
# "Practical Hilbert Space Approximate Bayesian Gaussian Processes for Probabilistic Programming".
# https://arxiv.org/pdf/2004.11408.pdf
#
# <img src="file://../_static/img/examples/hsgp.png" align="center">
#
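Before the full implementation, here is a small, self-contained sketch (pure Python, mirroring the `spectral_density` and `phi` helpers defined further down) of the core idea: the kernel is approximated by a truncated eigenfunction expansion, k(x, x') ≈ Σⱼ S(√λⱼ) φⱼ(x) φⱼ(x'), which is accurate for points well inside [-L, L]:

```python
import math

def se_kernel(x, xp, length=0.5):
    # Exact squared exponential kernel with unit variance
    return math.exp(-0.5 * (x - xp) ** 2 / length ** 2)

def approx_se_kernel(x, xp, length=0.5, L=2.0, M=40):
    # Truncated expansion: sum over the first M Laplacian eigenfunctions,
    # weighted by the spectral density evaluated at the eigenvalue roots
    total = 0.0
    for j in range(1, M + 1):
        w = j * math.pi / (2 * L)  # square root of the j-th eigenvalue
        S = math.sqrt(2 * math.pi) * length * math.exp(-0.5 * (length * w) ** 2)
        phi_x = math.sin(w * (x + L)) / math.sqrt(L)
        phi_xp = math.sin(w * (xp + L)) / math.sqrt(L)
        total += S * phi_x * phi_xp
    return total

# For points well inside [-L, L] the two agree closely
print(se_kernel(0.1, -0.2), approx_se_kernel(0.1, -0.2))
```

Because the basis functions do not depend on the kernel hyperparameters, inference only re-evaluates the spectral density weights, avoiding repeated n×n covariance inversions.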
# + id="fwCqt6qHBceT"
import argparse
import functools
import operator
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import jax
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp
import numpyro
from numpyro import deterministic, plate, sample
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
# --- utility functions
def load_data():
URL = "https://raw.githubusercontent.com/avehtari/casestudies/master/Birthdays/data/births_usa_1969.csv"
data = pd.read_csv(URL, sep=",")
day0 = pd.to_datetime("31-Dec-1968")
dates = [day0 + pd.Timedelta(f"{i}d") for i in data["id"]]
data["date"] = dates
data["births_relative"] = data["births"] / data["births"].mean()
return data
def save_samples(out_path, samples):
"""
Save a dictionary of arrays using numpy's compressed binary format:
fast reading and writing, and efficient storage.
"""
np.savez_compressed(out_path, **samples)
class UnivariateScaler:
"""
Standardizes the data to have mean 0 and unit standard deviation.
"""
def __init__(self):
self._mean = None
self._std = None
def fit(self, x):
self._mean = np.mean(x)
self._std = np.std(x)
return self
def transform(self, x):
return (x - self._mean) / self._std
def inverse_transform(self, x):
return x * self._std + self._mean
def _agg(*args, scaler=None):
"""
Custom function for aggregating the samples
and transforming back to the desired scale.
"""
total = functools.reduce(operator.add, args)
return (100 * scaler.inverse_transform(total)).mean(axis=0)
# --- modelling functions
def modified_bessel_first_kind(v, z):
v = jnp.asarray(v, dtype=float)
return jnp.exp(jnp.abs(z)) * tfp.math.bessel_ive(v, z)
def spectral_density(w, alpha, length):
c = alpha * jnp.sqrt(2 * jnp.pi) * length
e = jnp.exp(-0.5 * (length**2) * (w**2))
return c * e
def diag_spectral_density(alpha, length, L, M):
"""spd for squared exponential kernel"""
sqrt_eigenvalues = jnp.arange(1, 1 + M) * jnp.pi / 2 / L
return spectral_density(sqrt_eigenvalues, alpha, length)
def phi(x, L, M):
"""
The first `M` eigenfunctions of the laplacian operator in `[-L, L]`
evaluated at `x`. These are used for the approximation of the
squared exponential kernel.
"""
m1 = (jnp.pi / (2 * L)) * jnp.tile(L + x[:, None], M)
m2 = jnp.diag(jnp.linspace(1, M, num=M))
num = jnp.sin(m1 @ m2)
den = jnp.sqrt(L)
return num / den
def diag_spectral_density_periodic(alpha, length, M):
"""
Not actually a spectral density but these are used in the same
way. These are simply the first `M` coefficients of the Taylor
expansion approximation for the periodic kernel.
"""
a = length ** (-2)
J = jnp.arange(1, M + 1)
q2 = (2 * alpha**2 / jnp.exp(a)) * modified_bessel_first_kind(J, a)
return q2
def phi_periodic(x, w0, M):
"""
Basis functions for the approximation of the periodic kernel.
"""
m1 = jnp.tile(w0 * x[:, None], M)
m2 = jnp.diag(jnp.linspace(1, M, num=M))
mw0x = m1 @ m2
return jnp.cos(mw0x), jnp.sin(mw0x)
# --- models
class GP1:
"""
Long term trend Gaussian process
"""
def __init__(self):
self.x_scaler = UnivariateScaler()
self.y_scaler = UnivariateScaler()
def model(self, x, L, M, y=None):
# intercept
intercept = sample("intercept", dist.Normal(0, 1))
# long term trend
ρ = sample("ρ", dist.LogNormal(-1.0, 1.0))
α = sample("α", dist.HalfNormal(1.0))
eigenfunctions = phi(x, L, M)
spd = jnp.sqrt(diag_spectral_density(α, ρ, L, M))
with plate("basis1", M):
β1 = sample("β1", dist.Normal(0, 1))
f1 = deterministic("f1", eigenfunctions @ (spd * β1))
μ = deterministic("μ", intercept + f1)
σ = sample("σ", dist.HalfNormal(0.5))
with plate("n_obs", x.shape[0]):
sample("y", dist.Normal(μ, σ), obs=y)
def get_data(self):
data = load_data()
x = data["id"].values
y = data["births_relative"].values
self.x_scaler.fit(x)
self.y_scaler.fit(y)
xsd = jnp.array(self.x_scaler.transform(x))
ysd = jnp.array(self.y_scaler.transform(y))
return dict(
x=xsd,
y=ysd,
L=1.5 * max(xsd),
M=10,
)
def make_figure(self, samples):
data = load_data()
dates = data["date"]
y = 100 * data["births_relative"]
μ = 100 * self.y_scaler.inverse_transform(samples["μ"]).mean(axis=0)
f = plt.figure(figsize=(15, 5))
plt.axhline(100, color="k", lw=1, alpha=0.8)
plt.plot(dates, y, marker=".", lw=0, alpha=0.3)
plt.plot(dates, μ, color="r", lw=2)
plt.ylabel("Relative number of births")
plt.xlabel("")
return f
class GP2:
"""
Long term trend with a yearly seasonality component.
"""
def __init__(self):
self.x_scaler = UnivariateScaler()
self.y_scaler = UnivariateScaler()
def model(self, x, w0, J, L, M, y=None):
intercept = sample("intercept", dist.Normal(0, 1))
# long term trend
ρ1 = sample("ρ1", dist.LogNormal(-1.0, 1.0))
α1 = sample("α1", dist.HalfNormal(1.0))
eigenfunctions = phi(x, L, M)
spd = jnp.sqrt(diag_spectral_density(α1, ρ1, L, M))
with plate("basis", M):
β1 = sample("β1", dist.Normal(0, 1))
# year-periodic component
ρ2 = sample("ρ2", dist.HalfNormal(0.1))
α2 = sample("α2", dist.HalfNormal(1.0))
cosines, sines = phi_periodic(x, w0, J)
spd_periodic = jnp.sqrt(diag_spectral_density_periodic(α2, ρ2, J))
with plate("periodic_basis", J):
β2_cos = sample("β2_cos", dist.Normal(0, 1))
β2_sin = sample("β2_sin", dist.Normal(0, 1))
f1 = deterministic("f1", eigenfunctions @ (spd * β1))
f2 = deterministic("f2", cosines @ (spd_periodic * β2_cos) + sines @ (spd_periodic * β2_sin))
μ = deterministic("μ", intercept + f1 + f2)
σ = sample("σ", dist.HalfNormal(0.5))
with plate("n_obs", x.shape[0]):
sample("y", dist.Normal(μ, σ), obs=y)
def get_data(self):
data = load_data()
x = data["id"].values
y = data["births_relative"].values
self.x_scaler.fit(x)
self.y_scaler.fit(y)
xsd = jnp.array(self.x_scaler.transform(x))
ysd = jnp.array(self.y_scaler.transform(y))
w0 = 2 * jnp.pi / (365.25 / self.x_scaler._std)
return dict(
x=xsd,
y=ysd,
w0=w0,
J=20,
L=1.5 * max(xsd),
M=10,
)
def make_figure(self, samples):
data = load_data()
dates = data["date"]
y = 100 * data["births_relative"]
y_by_day_of_year = 100 * data.groupby("day_of_year2")["births_relative"].mean()
μ = 100 * self.y_scaler.inverse_transform(samples["μ"]).mean(axis=0)
f1 = 100 * self.y_scaler.inverse_transform(samples["f1"]).mean(axis=0)
f2 = 100 * self.y_scaler.inverse_transform(samples["f2"]).mean(axis=0)
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].plot(dates, y, marker=".", lw=0, alpha=0.3)
axes[0].plot(dates, μ, color="r", lw=2, alpha=1, label="Total")
axes[0].plot(dates, f1, color="C2", lw=3, alpha=1, label="Trend")
axes[0].set_ylabel("Relative number of births")
axes[0].set_title("All time")
axes[1].plot(y_by_day_of_year.index, y_by_day_of_year, marker=".", lw=0, alpha=0.5)
axes[1].plot(y_by_day_of_year.index, f2[:366], color="r", lw=2, label="Year seasonality")
axes[1].set_ylabel("Relative number of births")
axes[1].set_xlabel("Day of year")
for ax in axes:
ax.axhline(100, color="k", lw=1, alpha=0.8)
ax.legend()
return fig
class GP3:
"""
Long term trend with yearly seasonality and a slowly varying day-of-week effect.
"""
def __init__(self):
self.x_scaler = UnivariateScaler()
self.y_scaler = UnivariateScaler()
def model(self, x, day_of_week, w0, J, L, M, L3, M3, y=None):
intercept = sample("intercept", dist.Normal(0, 1))
# long term trend
ρ1 = sample("ρ1", dist.LogNormal(-1.0, 1.0))
α1 = sample("α1", dist.HalfNormal(1.0))
eigenfunctions = phi(x, L, M)
spd = jnp.sqrt(diag_spectral_density(α1, ρ1, L, M))
with plate("basis", M):
β1 = sample("β1", dist.Normal(0, 1))
# year-periodic component
ρ2 = sample("ρ2", dist.HalfNormal(0.1))
α2 = sample("α2", dist.HalfNormal(1.0))
cosines, sines = phi_periodic(x, w0, J)
spd_periodic = jnp.sqrt(diag_spectral_density_periodic(α2, ρ2, J))
with plate("periodic_basis", J):
β2_cos = sample("β2_cos", dist.Normal(0, 1))
β2_sin = sample("β2_sin", dist.Normal(0, 1))
# day of week effect
with plate("plate_day_of_week", 6):
β_week = sample("β_week", dist.Normal(0, 1))
# next enforce sum-to-zero -- this is slightly different from Aki's model,
# which instead imposes Monday's effect to be zero.
β_week = jnp.concatenate([jnp.array([-jnp.sum(β_week)]), β_week])
# long term variation of week effect
α3 = sample("α3", dist.HalfNormal(0.1))
ρ3 = sample("ρ3", dist.LogNormal(1.0, 1.0)) # prior: very long-term effect
eigenfunctions_3 = phi(x, L3, M3)
spd_3 = jnp.sqrt(diag_spectral_density(α3, ρ3, L3, M3))
with plate("week_trend", M3):
β3 = sample("β3", dist.Normal(0, 1))
# combine
f1 = deterministic("f1", eigenfunctions @ (spd * β1))
f2 = deterministic("f2", cosines @ (spd_periodic * β2_cos) + sines @ (spd_periodic * β2_sin))
g3 = deterministic("g3", eigenfunctions_3 @ (spd_3 * β3))
μ = deterministic("μ", intercept + f1 + f2 + jnp.exp(g3) * β_week[day_of_week])
σ = sample("σ", dist.HalfNormal(0.5))
with plate("n_obs", x.shape[0]):
sample("y", dist.Normal(μ, σ), obs=y)
def get_data(self):
data = load_data()
x = data["id"].values
y = data["births_relative"].values
self.x_scaler.fit(x)
self.y_scaler.fit(y)
xsd = jnp.array(self.x_scaler.transform(x))
ysd = jnp.array(self.y_scaler.transform(y))
w0 = 2 * jnp.pi / (365.25 / self.x_scaler._std)
dow = jnp.array(data["day_of_week"].values) - 1
return dict(
x=xsd,
day_of_week=dow,
w0=w0,
J=20,
L=1.5 * max(xsd),
M=10,
L3=1.5 * max(xsd),
M3=5,
y=ysd,
)
def make_figure(self, samples):
data = load_data()
dates = data["date"]
y = 100 * data["births_relative"]
y_by_day_of_year = 100 * (data.groupby("day_of_year2")["births_relative"].mean())
year_days = y_by_day_of_year.index.values
μ = samples["μ"]
intercept = samples["intercept"][:, None]
f1 = samples["f1"]
f2 = samples["f2"]
g3 = samples["g3"]
β_week = samples["β_week"]
β_week = np.concatenate([-β_week.sum(axis=1)[:, None], β_week], axis=1)
fig, axes = plt.subplots(2, 2, figsize=(15, 8), sharey=False, sharex=False)
axes[0, 0].plot(dates, y, marker=".", lw=0, alpha=0.3)
axes[0, 0].plot(
dates,
_agg(μ, scaler=self.y_scaler),
color="r",
lw=0,
label="Total",
marker=".",
alpha=0.5,
)
axes[0, 1].plot(dates, y, marker=".", lw=0, alpha=0.3)
axes[0, 1].plot(dates, _agg(f1, scaler=self.y_scaler), color="r", lw=2, label="Trend")
axes[1, 0].plot(year_days, y_by_day_of_year, marker=".", lw=0, alpha=0.3)
axes[1, 0].plot(
year_days,
_agg(f2[:, :366], scaler=self.y_scaler),
color="r",
lw=2,
label="Year seasonality",
)
axes[1, 1].plot(dates, y, marker=".", lw=0, alpha=0.3)
for day in range(7):
dow_trend = (jnp.exp(g3).T * β_week[:, day]).T
fit = _agg(intercept, f1, dow_trend, scaler=self.y_scaler)
axes[1, 1].plot(dates, fit, lw=2, color="r")
axes[0, 0].set_title("Total")
axes[0, 1].set_title("Long term trend")
axes[1, 0].set_title("Year seasonality")
axes[1, 1].set_title("Weekly effects with long term trend")
for ax in axes.flatten():
ax.axhline(100, color="k", lw=1, alpha=0.8)
ax.legend()
return fig
class GP4:
"""
Long term trend with yearly seasonality, a slowly varying day-of-week effect,
and a special-day effect, including floating special days.
"""
def __init__(self):
self.x_scaler = UnivariateScaler()
self.y_scaler = UnivariateScaler()
def model(
self,
x,
day_of_week,
day_of_year,
memorial_days_indicator,
labour_days_indicator,
thanksgiving_days_indicator,
w0,
J,
L,
M,
L3,
M3,
y=None,
):
intercept = sample("intercept", dist.Normal(0, 1))
# long term trend
ρ1 = sample("ρ1", dist.LogNormal(-1.0, 1.0))
α1 = sample("α1", dist.HalfNormal(1.0))
eigenfunctions = phi(x, L, M)
spd = jnp.sqrt(diag_spectral_density(α1, ρ1, L, M))
with plate("basis", M):
β1 = sample("β1", dist.Normal(0, 1))
# year-periodic component
ρ2 = sample("ρ2", dist.HalfNormal(0.1))
α2 = sample("α2", dist.HalfNormal(1.0))
cosines, sines = phi_periodic(x, w0, J)
spd_periodic = jnp.sqrt(diag_spectral_density_periodic(α2, ρ2, J))
with plate("periodic_basis", J):
β2_cos = sample("β2_cos", dist.Normal(0, 1))
β2_sin = sample("β2_sin", dist.Normal(0, 1))
# day of week effect
with plate("plate_day_of_week", 6):
β_week = sample("β_week", dist.Normal(0, 1))
# next enforce sum-to-zero -- this is slightly different from Aki's model,
# which instead imposes Monday's effect to be zero.
β_week = jnp.concatenate([jnp.array([-jnp.sum(β_week)]), β_week])
# long term variation of week effects
ρ3 = sample("ρ3", dist.LogNormal(1.0, 1.0))
α3 = sample("α3", dist.HalfNormal(0.1))
eigenfunctions_3 = phi(x, L3, M3)
spd_3 = jnp.sqrt(diag_spectral_density(α3, ρ3, L3, M3))
with plate("week_trend", M3):
β3 = sample("β3", dist.Normal(0, 1))
# Finnish horseshoe prior on day of year effect
# Aki uses slab_df=100 instead, but chains didn't mix
# in our case for some reason, so we lowered it to 50.
slab_scale = 2
slab_df = 50
scale_global = 0.1
τ = sample("τ", dist.HalfCauchy(scale=scale_global * 2))
c_aux = sample("c_aux", dist.InverseGamma(0.5 * slab_df, 0.5 * slab_df))
c = slab_scale * jnp.sqrt(c_aux)
with plate("plate_day_of_year", 366):
λ = sample("λ", dist.HalfCauchy(scale=1))
λ_tilde = jnp.sqrt(c) * λ / jnp.sqrt(c + (τ * λ) ** 2)
β4 = sample("β4", dist.Normal(loc=0, scale=τ * λ_tilde))
# floating special days
β5_labour = sample("β5_labour", dist.Normal(0, 1))
β5_memorial = sample("β5_memorial", dist.Normal(0, 1))
β5_thanksgiving = sample("β5_thanksgiving", dist.Normal(0, 1))
# combine
f1 = deterministic("f1", eigenfunctions @ (spd * β1))
f2 = deterministic("f2", cosines @ (spd_periodic * β2_cos) + sines @ (spd_periodic * β2_sin))
g3 = deterministic("g3", eigenfunctions_3 @ (spd_3 * β3))
μ = deterministic(
"μ",
intercept
+ f1
+ f2
+ jnp.exp(g3) * β_week[day_of_week]
+ β4[day_of_year]
+ β5_labour * labour_days_indicator
+ β5_memorial * memorial_days_indicator
+ β5_thanksgiving * thanksgiving_days_indicator,
)
σ = sample("σ", dist.HalfNormal(0.5))
with plate("n_obs", x.shape[0]):
sample("y", dist.Normal(μ, σ), obs=y)
def _get_floating_days(self, data):
x = data["id"].values
memorial_days = data.loc[
data["date"].dt.month.eq(5) & data["date"].dt.weekday.eq(0) & data["date"].dt.day.ge(25),
"id",
].values
labour_days = data.loc[
data["date"].dt.month.eq(9) & data["date"].dt.weekday.eq(0) & data["date"].dt.day.le(7),
"id",
].values
labour_days = np.concatenate((labour_days, labour_days + 1))
thanksgiving_days = data.loc[
data["date"].dt.month.eq(11)
& data["date"].dt.weekday.eq(3)
& data["date"].dt.day.ge(22)
& data["date"].dt.day.le(28),
"id",
].values
thanksgiving_days = np.concatenate((thanksgiving_days, thanksgiving_days + 1))
md_indicators = np.zeros_like(x)
md_indicators[memorial_days - 1] = 1
ld_indicators = np.zeros_like(x)
ld_indicators[labour_days - 1] = 1
td_indicators = np.zeros_like(x)
td_indicators[thanksgiving_days - 1] = 1
return {
"memorial_days_indicator": md_indicators,
"labour_days_indicator": ld_indicators,
"thanksgiving_days_indicator": td_indicators,
}
def get_data(self):
data = load_data()
x = data["id"].values
y = data["births_relative"].values
self.x_scaler.fit(x)
self.y_scaler.fit(y)
xsd = jnp.array(self.x_scaler.transform(x))
ysd = jnp.array(self.y_scaler.transform(y))
w0 = 2 * jnp.pi / (365.25 / self.x_scaler._std)
dow = jnp.array(data["day_of_week"].values) - 1
doy = jnp.array((data["day_of_year2"] - 1).values)
return dict(
x=xsd,
day_of_week=dow,
day_of_year=doy,
w0=w0,
J=20,
L=1.5 * max(xsd),
M=10,
L3=1.5 * max(xsd),
M3=5,
y=ysd,
**self._get_floating_days(data),
)
def make_figure(self, samples):
special_days = {
"Valentine's": pd.to_datetime("1988-02-14"),
"Leap day": pd.to_datetime("1988-02-29"),
"Halloween": pd.to_datetime("1988-10-31"),
"Christmas eve": pd.to_datetime("1988-12-24"),
"Christmas day": pd.to_datetime("1988-12-25"),
"New year": pd.to_datetime("1988-01-01"),
"New year's eve": pd.to_datetime("1988-12-31"),
"April 1st": pd.to_datetime("1988-04-01"),
"Independence day": pd.to_datetime("1988-07-04"),
"Labour day": pd.to_datetime("1988-09-05"),
"Memorial day": pd.to_datetime("1988-05-30"),
"Thanksgiving": pd.to_datetime("1988-11-24"),
}
β4 = samples["β4"]
β5_labour = samples["β5_labour"]
β5_memorial = samples["β5_memorial"]
β5_thanksgiving = samples["β5_thanksgiving"]
day_effect = np.array(β4)
md_idx = special_days["Memorial day"].day_of_year - 1
day_effect[:, md_idx] = day_effect[:, md_idx] + β5_memorial
ld_idx = special_days["Labour day"].day_of_year - 1
day_effect[:, ld_idx] = day_effect[:, ld_idx] + β5_labour
td_idx = special_days["Thanksgiving"].day_of_year - 1
day_effect[:, td_idx] = day_effect[:, td_idx] + β5_thanksgiving
day_effect = 100 * day_effect.mean(axis=0)
fig = plt.figure(figsize=(12, 5))
plt.plot(np.arange(1, 367), day_effect)
for name, day in special_days.items():
xs = day.day_of_year
ys = day_effect[day.day_of_year - 1]
plt.plot(xs, ys, marker="o", mec="k", c="none", ms=10)
plt.text(xs - 3, ys, name, horizontalalignment="right")
plt.title("Special day effect")
plt.ylabel("Relative number of births")
plt.xlabel("Day of year")
plt.xlim([-40, None])
return fig
# + id="x3laVk7ZDt9X"
NAME_TO_MODEL = {
"t": GP1,
"ty": GP2,
"tyw": GP3,
"tywd": GP4,
}
def main(args):
is_sphinxbuild = "NUMPYRO_SPHINXBUILD" in os.environ
model = NAME_TO_MODEL[args.model]()
data = model.get_data()
mcmc = MCMC(
NUTS(model.model),
num_warmup=args.num_warmup,
num_samples=args.num_samples,
num_chains=args.num_chains,
progress_bar=False if is_sphinxbuild else True,
)
mcmc.run(jax.random.PRNGKey(0), **data)
if not is_sphinxbuild:
mcmc.print_summary()
posterior_samples = mcmc.get_samples()
if args.save_samples:
print(f"Saving samples at {args.save_samples}")
save_samples(args.save_samples, posterior_samples)
if args.save_figure:
print(f"Saving figure at {args.save_figure}")
fig = model.make_figure(posterior_samples)
fig.savefig(args.save_figure)
plt.close()
return model, data, mcmc, posterior_samples
# + colab={"base_uri": "https://localhost:8080/"} id="8qnaEHyKCEJr" outputId="f629463e-08ca-49fe-9a42-a2e6d501f386"
def parse_arguments():
parser = argparse.ArgumentParser(description="Hilbert space approx for GPs")
parser.add_argument("--num-samples", nargs="?", default=1000, type=int)
parser.add_argument("--num-warmup", nargs="?", default=1000, type=int)
parser.add_argument("--num-chains", nargs="?", default=1, type=int)
parser.add_argument(
"--model",
nargs="?",
default="tywd",
help="one of"
'"t" (Long term trend),'
'"ty" (t + year seasonality),'
'"tyw" (t + y + slowly varying weekday effect),'
'"tywd" (t + y + w + special days effect)',
)
parser.add_argument("--device", default="cpu", type=str, help='use "cpu" or "gpu".')
parser.add_argument("--x64", action="store_true", help="Enable float64 precision")
parser.add_argument(
"--save-samples",
default="",
type=str,
help="Path where to store the samples. Must be '.npz' file.",
)
parser.add_argument(
"--save-figure",
default="",
type=str,
help="Path where to save the plot with matplotlib.",
)
# args = parser.parse_args() # error in colab
args, unused = parser.parse_known_args()
return args
# + colab={"base_uri": "https://localhost:8080/"} id="M5zMG17iM0mE" outputId="4ad7cd78-b60d-47ec-8906-65daf41b7aa7"
# !pwd
# + colab={"base_uri": "https://localhost:8080/"} id="DCFuHti8B9QB" outputId="dc003ef6-cd95-453d-9cae-697dfd76c71b"
args = parse_arguments()
args.device = 'gpu'
args.num_warmup = 200
args.num_samples = 200
args.save_figure = "/content/birthdays.png"  # savefig needs a file path, not a directory
args.save_samples = "/content/samples.npz"   # must be an '.npz' file per the argument help
print(args)
if args.x64:
numpyro.enable_x64()
numpyro.set_platform(args.device)
numpyro.set_host_device_count(args.num_chains)
model, data, mcmc, posterior_samples = main(args)
| notebooks/misc/hilbert_space_gp_birthdays_numpyro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
#
# # Extracting the time series of activations in a label
#
#
# We first apply a dSPM inverse operator to get signed activations
# in a label (with positive and negative values) and we then
# compare different strategies to average the time series
# in a label. We compare a simple average with an average
# using the dipoles' normals (flip mode), and then a PCA,
# also using a sign flip.
#
#
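As a toy illustration of the sign-flip idea (not part of the MNE example itself): when two dipoles carry the same waveform with opposite signs, a plain mean cancels it, while flipping the signs first preserves it:

```python
import math

# Two simulated dipole time courses carrying the same waveform,
# but with opposite signs because their normals point in opposite directions
times = [t / 100.0 for t in range(100)]
waveform = [math.sin(2 * math.pi * 5 * t) for t in times]
source_a = [+1.0 * w for w in waveform]
source_b = [-1.0 * w for w in waveform]

# Plain mean: the signed activations cancel almost completely
plain_mean = [(a + b) / 2 for a, b in zip(source_a, source_b)]

# Flip mode: align the signs (here, flip source_b) before averaging
flipped_mean = [(a + (-1.0) * b) / 2 for a, b in zip(source_a, source_b)]

print(max(abs(v) for v in plain_mean), max(abs(v) for v in flipped_mean))
```

In the real example, the flip signs come from the source-space normals, and `extract_label_time_course(..., mode='mean_flip')` handles this automatically.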
# +
# Author: <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
label = 'Aud-lh'
label_fname = data_path + '/MEG/sample/labels/%s.label' % label
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
# Compute inverse solution
pick_ori = "normal" # Get signed values to see the effect of sign flip
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
label = mne.read_label(label_fname)
stc_label = stc.in_label(label)
mean = stc.extract_label_time_course(label, src, mode='mean')
mean_flip = stc.extract_label_time_course(label, src, mode='mean_flip')
pca = stc.extract_label_time_course(label, src, mode='pca_flip')
print("Number of vertices : %d" % len(stc_label.data))
# View source activations
plt.figure()
plt.plot(1e3 * stc_label.times, stc_label.data.T, 'k', linewidth=0.5)
h0, = plt.plot(1e3 * stc_label.times, mean.T, 'r', linewidth=3)
h1, = plt.plot(1e3 * stc_label.times, mean_flip.T, 'g', linewidth=3)
h2, = plt.plot(1e3 * stc_label.times, pca.T, 'b', linewidth=3)
plt.legend([h0, h1, h2], ['mean', 'mean flip', 'PCA flip'])
plt.xlabel('Time (ms)')
plt.ylabel('Source amplitude')
plt.title('Activations in Label : %s' % label)
plt.show()
| 0.12/_downloads/plot_label_source_activations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import mlflow
import matplotlib.pyplot as plt
mlflow.tracking.set_tracking_uri("http://127.0.0.1:5000") # Just in case you didn't set MLFLOW_TRACKING_URI
mlflow.set_experiment("predicting_wind_solar")
# You can access the mlflow data using the high level API like so:
runs = mlflow.search_runs()
runs
mask = (runs["tags.mlflow.runName"] == "keras") & (runs["status"] == "FINISHED")
keras_ids = runs.loc[mask, "run_id"]
keras_ids
for run_id in keras_ids:
run = mlflow.get_run(run_id).data.to_dictionary()
print(run_id)
print(run["metrics"])
print(run["params"])
# Or you can go one layer down
client = mlflow.tracking.MlflowClient()
for run_id in keras_ids:
history = client.get_metric_history(run_id, "loss")
epochs = [h.step for h in history]
loss = [h.value for h in history]
plt.plot(epochs, loss, label=run_id)
plt.legend(), plt.xlabel("epoch"), plt.ylabel("loss");
| 2_ml_tracking/compare_runs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PythonData] *
# language: python
# name: conda-env-PythonData-py
# ---
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import google_api_key
# Configure gmaps
gmaps.configure(api_key=google_api_key)
# -
city_file = "city_info.csv"
#Read in CSV file
city_df = pd.read_csv(city_file)
city_df.head()
#Finding max humidity for our max_intensity
city_df.sort_values(by = "Humidity", ascending = False).head()
# +
# Store latitude and longitude in locations
locations = city_df[["Lat", "Lng"]]
#Plot Heatmap
fig = gmaps.figure()
#Create heat layer
heat_map = gmaps.heatmap_layer(locations, weights=city_df["Humidity"],
dissipating=True, max_intensity=100,
point_radius=10)
# Add layer
fig.add_layer(heat_map)
# Display figure
fig
# +
#Converting Max Temp column to float so we can use it
city_df["Max Temp"].dtypes
city_df = city_df.astype({"Max Temp": "float"})  # astype returns a copy, so reassign
#Finding ideal weather condition
weather_condition_df = city_df.loc[(city_df["Max Temp"] > 24) & (city_df["Max Temp"] < 29)]
weather_condition_df = weather_condition_df.loc[weather_condition_df["Wind Speed"] < 10]
weather_condition_df = weather_condition_df.loc[weather_condition_df["Cloudiness"] == 0]
weather_condition_df = weather_condition_df.dropna()
weather_condition_df
# -
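# The three chained `.loc` filters above can also be written as one combined boolean
# mask. A minimal sketch on a toy frame (column names match the CSV, the values are
# made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "City": ["a", "b", "c"],
    "Max Temp": [26.0, 30.0, 25.0],
    "Wind Speed": [5.0, 3.0, 12.0],
    "Cloudiness": [0, 0, 0],
})
# Combine the three conditions into a single mask and filter once.
mask = (
    toy["Max Temp"].between(24, 29, inclusive="neither")
    & (toy["Wind Speed"] < 10)
    & (toy["Cloudiness"] == 0)
)
ideal = toy[mask]
print(list(ideal["City"]))  # → ['a']
```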
#Creating blank column for Hotel Name
weather_condition_df["Hotel Name"] = ""
weather_condition_df
# +
#Finding the closest hotel to each set of coordinates
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
#Looping through pandas dataframe to find closest hotel
for index, row in weather_condition_df.iterrows():
#Coordinates for finding closest hotel
#marker_locations = weather_condition_df[['Lat', 'Lng']]
lat = row["Lat"]
lng = row["Lng"]
#Params for our API request
params = {
"location": f"{lat},{lng}",
"rankby": "distance",
"type": "lodging",
"key": google_api_key,
}
# Create url and make API request
response = requests.get(base_url, params=params).json()
results = response['results']
try:
#Placing hotel name in our dataframe from our API
weather_condition_df.loc[index, 'Hotel Name'] = results[0]['name']
except (KeyError, IndexError):
#print("Missing field/result... skipping.")
pass
# -
weather_condition_df
# +
#Convert Hotel Name to list for info box
hotel_names = weather_condition_df["Hotel Name"].tolist()
city_names = weather_condition_df["City"].tolist()
country_names = weather_condition_df["Country"].tolist()
#Contents of our info box
info_box_content = [
f"Name: {hotel} City: {city} Country: {country}"
for hotel, city, country
in zip(hotel_names, city_names, country_names)
]
# +
# Create a map using state centroid coordinates to set markers
marker_locations = weather_condition_df[['Lat', 'Lng']]
# Create a marker_layer using the poverty list to fill the info box
#fig_2 = gmaps.figure()
markers = gmaps.marker_layer(marker_locations,
info_box_content = info_box_content)
fig.add_layer(markers)
fig
# -
| VacationPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# To install the libraries; this only needs to be done once
if False:
# !pip install PyMuPDF
# !pip install Pillow
# !pip install opencv-python
# !pip install imutils
import os
import numpy as np
import pandas as pd
import re
from matplotlib import pyplot as plt
import fitz # From PyMuPDF
from PIL import Image
import cv2
import argparse
import imutils
import sys
#os.chdir(r'D:\Dropbox\Proyectos\python\formatos_elecciones')
os.chdir(r'C:\Users\gd.orbegozo10\Dropbox\Cosas_Programacion\Python\python\formatos_elecciones')
png_dir = 'formularios_e_14/pdf_crop/'
png_list = os.listdir(png_dir)
png_list[1:11]
png_dir + png_list[1]
img = cv2.imread(png_dir + png_list[1])
img.shape
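# A note on `img.shape`: OpenCV returns images as `(height, width, channels)` arrays,
# and cropping is plain NumPy slicing with rows (y) first. A small sketch on a fake
# image (the sizes are arbitrary):

```python
import numpy as np

# OpenCV loads images as (height, width, channels) arrays,
# so img.shape[0] is the vertical size and img.shape[1] the horizontal one.
img = np.zeros((600, 800, 3), dtype=np.uint8)  # fake 800x600 BGR image
h, w = img.shape[0], img.shape[1]

# Cropping is NumPy slicing: rows (y) first, then columns (x).
x0, y0, x1, y1 = 100, 50, 180, 120
crop = img[y0:y1, x0:x1]
print(h, w, crop.shape)  # → 600 800 (70, 80, 3)
```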
# # Loop over the images in the folder to get the horizontal and vertical pixel counts of all of them
#
# Define the progress-bar function
def progressBar(value, endvalue, msg='', bar_length=40):
    percent = float(value) / endvalue
    arrow = '-' * int(round(percent * bar_length)-1) + '>'
    spaces = ' ' * (bar_length - len(arrow))
    sys.stdout.write(f"\rProgress: [{arrow + spaces}] {int(round(percent * 100))}% ({value} of {endvalue} images)")
    sys.stdout.flush()
# +
ancho = []
largo = []
for i, x in enumerate(png_list):
img = cv2.imread(png_dir + x)
largo.append(img.shape[0])
ancho.append(img.shape[1])
    progressBar(i+1, len(png_list))  # i+1 so the bar reaches 100% on the last image
# -
# # Check the distribution
plt.hist(ancho)
plt.show()
df_ancho = pd.DataFrame(ancho)
df_ancho.describe()
plt.hist(largo)
plt.show()
df_largo = pd.DataFrame(largo)
df_largo.describe()
df = pd.DataFrame({"ancho":ancho,"largo":largo})
df
df["perc_25_75_ancho"] = (df.ancho >= np.percentile(ancho,25)) & (df.ancho <= np.percentile(ancho,75))
df["perc_25_75_largo"] = (df.largo >= np.percentile(largo,25)) & (df.largo <= np.percentile(largo,75))
df["perc_25_75_ambos"] = (df.perc_25_75_ancho == True) & (df.perc_25_75_largo == True)
df.perc_25_75_ancho.value_counts()
df.perc_25_75_largo.value_counts()
df.perc_25_75_ambos.value_counts()
df
# # Crop the individual digits out of the images
png_name = png_list[5267]
img = cv2.imread(png_dir + png_name)
img_rect = cv2.imread(png_dir + png_name)
png_name
# Define the function that crops the digits out of each image
def crop_digits(png_name, save_dir, png_dir='formularios_e_14/pdf_crop/', debug=False):
    # Load the image
    img = cv2.imread(png_dir + png_name)
    img_rect = cv2.imread(png_dir + png_name)
    contador = 0
    # Loop to crop the total-count digits (section 1)
    row = 160
    col = 0
    for horizontal_move in range(3):
        for horizontal_move_2 in range(3):
            row_temp = row
            x0, y0 = col, row_temp
            row_temp += 49
            col += 70
            extra_row = 15
            extra_col = 10
            x1, y1 = col + extra_col, row_temp + extra_row
            # Draw the rectangle on the image
            img_rect = cv2.rectangle(img_rect, (x0, y0), (x1, y1), (0, 0, 250), 1)
            # Crop the digit
            digito = img[y0:y1, x0:x1]
            cv2.imwrite(f'{save_dir}/{contador}_{png_name}', digito)
            contador += 1
        col += 70
    # Loop to crop the vote-count digits (section 2)
    row = 285
    col = 555
    for vertical_move in range(8):
        col_temp = col
        for horizontal_move in range(3):
            row_temp = row
            x0, y0 = col_temp, row_temp
            row_temp += 66
            col_temp += 71
            extra_row = 15
            extra_col = 10
            # The original added extra_col to y1 and left extra_row unused;
            # using extra_row keeps the vertical margin consistent with section 1
            x1, y1 = col_temp + extra_col, row_temp + extra_row
            # Draw the rectangle on the image
            img_rect = cv2.rectangle(img_rect, (x0, y0), (x1, y1), (0, 250, 0), 1)
            # Crop the digit
            digito = img[y0:y1, x0:x1]
            cv2.imwrite(f'{save_dir}/{contador}_{png_name}', digito)
            contador += 1
        row = row_temp + 5
    # Loop to crop the blank/null/etc. vote digits (section 3)
    row = 850
    col = 555
    for vertical_move in range(4):
        col_temp = col
        for horizontal_move in range(3):
            row_temp = row
            x0, y0 = col_temp, row_temp
            row_temp += 58
            col_temp += 71
            extra_row = 18
            extra_col = 10
            x1, y1 = col_temp + extra_col, row_temp + extra_row
            # Draw the rectangle on the image
            img_rect = cv2.rectangle(img_rect, (x0, y0), (x1, y1), (250, 0, 0), 1)
            # Crop the digit
            digito = img[y0:y1, x0:x1]
            cv2.imwrite(f'{save_dir}/{contador}_{png_name}', digito)
            contador += 1
        row = row_temp + 5
    if debug:
        # Write out the image with the boxes drawn
        cv2.imwrite(f'{save_dir}/{png_name}', img_rect)
# +
png_name = png_list[666]
crop_digits(png_name, save_dir='formularios_e_14/digits')  # save_dir is required; this output folder is just an example
# -
png_name = png_list[5267]
png_name
| scripts/crop_numbers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.4 64-bit
# name: python3
# ---
# # Intersight Python SDK
#
# If you are familiar with Python, you can take advantage of the Intersight Python SDK available for install from the [Python Package Index](https://pypi.org/project/intersight/).
#
# 
#
# ## Installation
#
# Notice the `pip install intersight` command. That's the command used to install the Python SDK in your environment. Be sure you are using Python >= 3.7, earlier versions of Python are not supported.
# !pip install intersight
#
# > Be sure to uninstall any conflicting versions of the SDK if you have previous installs. You can check installed versions with pip list.
# !pip list | grep -i intersight
# > If you see Intersight-OpenAPI installed, you can run `pip uninstall Intersight-OpenAPI`
# ## Using the Intersight Python SDK
#
# To use the SDK, you must import required modules. You can view help once you've imported any required modules.
#
# +
import intersight
# View the help
help(intersight)
# -
# ### API Keys and User Authentication
#
# Now let's use the Intersight Python SDK to connect to our Intersight environment. First, you'll need an Intersight API Key ID and secret (private) key from your Intersight account. From the Settings menu in Intersight, Select API Keys and Generate API Key.
#
# 
#
# A version 3 key can be used with the SDK and is recommended for long term compatibility with Intersight's API.
#
# 
#
# The Generated API Key ID can be copied to the clipboard and used in API authentication or can be placed into a file. The example below uses a file for lookup so the ApiKeyId is not hardcoded in this notebook.
#
# Be sure to save the Secret (private) Key to a local file that only you have access to (a download option is provided by Intersight).
#
# 
# ### Intersight API Configuration
#
# We'll create an API client and use it to connect with Intersight.
#
# > Note that in DevNet labs the values below can be used as is. For other environments, be sure to enter the values for your account in the `key_id` variable below as well as the `private_key_path`. Also be sure that you are using v3 API keys from your Intersight account.
# + dotnet_interactive={"language": "pwsh"}
import intersight
with open('./ApiKeyIdFile', 'r') as file:
api_key_id = file.read().rstrip()
configuration = intersight.Configuration(
signing_info=intersight.HttpSigningConfiguration(
# key_id='596cc79e5d91b400010d15ad/60ede8d07564612d3335edac/623354c07564612d33d9068d',
key_id=api_key_id,
private_key_path='./SecretKey.txt',
signing_scheme=intersight.signing.SCHEME_HS2019,
signed_headers=[intersight.signing.HEADER_HOST,
intersight.signing.HEADER_DATE,
intersight.signing.HEADER_DIGEST,
intersight.signing.HEADER_REQUEST_TARGET
]
)
)
api_client = intersight.ApiClient(configuration)
# -
# ### Intersight Python SDK Query Examples
#
# Now that you are authenticated, let's use the SDK to query server and related resources in Intersight. You can read more on [Intersight's API query language](https://www.intersight.com/apidocs/introduction/query/), and to get an idea of the resources used by the API you can look at the URIs your browser uses (compute/physical-summaries below).
#
# 
#
# The Python example below sets up a query filter to get a count of servers and then uses the returned count to page through all servers in the account. When getting server summaries, the select query parameter is used to only return certain server attributes.
# +
import intersight.api.compute_api
# Create physical summary (server) class instance
api_instance = intersight.api.compute_api.ComputeApi(api_client)
# Find the count of servers in the account
server_query = api_instance.get_compute_physical_summary_list(count=True)
print(server_query)
# Intersight limits the number of items returned. Page through returned results and select Name, Model, Serial
per_page = 50
query_select = "Name,Model,Serial"
for i in range(0, server_query.count, per_page):
page_query = api_instance.get_compute_physical_summary_list(top=per_page, skip=i, select=query_select)
print(page_query)
# -
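# The `top`/`skip` paging above is plain offset arithmetic. An offline sketch with
# made-up totals (no Intersight connection required):

```python
# Offline sketch of the top/skip paging arithmetic used above:
# each request skips what was already fetched and asks for at most per_page items.
total, per_page = 123, 50
pages = [(min(per_page, total - skip), skip) for skip in range(0, total, per_page)]
print(pages)  # → [(50, 0), (50, 50), (23, 100)]
```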
# ### Detailed Inventory
#
# Building on the server inventory example above, we'll now look at Intersight's resource model to get information on server components like physical disks. We can search in the API for specific disk Model numbers or search for Models that contain a certain substring. Below we will search for physical disk Model numbers that contain a certain substring.
# +
import intersight.api.storage_api
# Create storage class instance
api_instance = intersight.api.storage_api.StorageApi(api_client)
# Setup query options
query_filter = "contains(Model,'UCS')"
query_select = "Dn,Model,RegisteredDevice,RunningFirmware"
query_expand = "RegisteredDevice($select=DeviceHostname),RunningFirmware($select=Version)"
# Get physical disks that contain the Model substring defined above
storage_query = api_instance.get_storage_physical_disk_list(filter=query_filter, select=query_select, expand=query_expand)
print(storage_query)
# -
# ## Configure Resources
#
# ### Create a new BIOS Policy
# +
from intersight.api import bios_api
from intersight.model.bios_policy import BiosPolicy
from intersight.model.organization_organization_relationship import OrganizationOrganizationRelationship
organization = OrganizationOrganizationRelationship(moid="5deea1d16972652d33ba886b",
# organization = OrganizationOrganizationRelationship(selector="Name eq 'Demo-DevNet'",
object_type="organization.Organization",
class_id="mo.MoRef",
)
print(organization)
bios_policy = BiosPolicy()
bios_policy.name = "CLive2022"
bios_policy.organization = organization
bios_policy.cpu_energy_performance = "performance"
bios_policy.cpu_performance = "hpc"
# Create a bios.Policy resource
bios_policy_instance = bios_api.BiosApi(api_client)
api_response = bios_policy_instance.create_bios_policy(bios_policy)
print("Name: %s, CPU Energy %s, CPU Perf %s" % (api_response.name, api_response.cpu_energy_performance, api_response.cpu_performance))
# -
# ### Update a BIOS Policy
api_response.cpu_performance = "enterprise"
api_response = bios_policy_instance.update_bios_policy(moid=api_response.moid, bios_policy=api_response)
print("Name: %s, CPU Energy %s, CPU Perf %s" % (api_response.name, api_response.cpu_energy_performance, api_response.cpu_performance))
# ### Delete a BIOS Policy
api_response = bios_policy_instance.delete_bios_policy(moid=api_response.moid)
print(api_response)
# ## Additional Examples
#
# Several additional examples of using the SDK are on GitHub in the [Intersight Python Utilities repo](https://github.com/CiscoDevNet/intersight-python-utils). The repo also contains a credentials.py module to simplify authentication across all of the example scripts.
| src/clive_intersight_python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kmaciver/Ryerson_Capstone/blob/master/FinalModel/Step3-DailySummary/DailySummaryPrediction_(Final_Model).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="whpMCjGYGwlP" colab_type="code" colab={}
from google.colab import drive
drive.mount('/content/drive')
# + id="4k_TOa4YGrjz" colab_type="code" colab={}
# %matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
import os
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# + id="mdZTX6eHGrj4" colab_type="code" colab={}
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Input, Dense, GRU, Embedding, LSTM, TimeDistributed, Lambda, Dropout
from tensorflow.python.keras.optimizers import RMSprop, Adam
from tensorflow.python.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau
from tensorflow.python.keras import backend as K
from tensorflow.python.keras import losses
import warnings
warnings.filterwarnings('ignore')
import random as rand
from random import randint
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
# + id="ONcAVRTrGrj_" colab_type="code" outputId="9907d577-c39f-4ab8-a952-e4f82868cbed" colab={"base_uri": "https://localhost:8080/", "height": 339}
file_path = "/content/drive/My Drive/Capstone/Data Exploration/DSFD.csv"
DaySummary = pd.read_csv(file_path, index_col='date')
DaySummary = DaySummary.drop([DaySummary.columns[0]] , axis='columns')
DaySummary.head()
# + [markdown] id="L9NE5M0EGrkE" colab_type="text"
# Dropping Volume Currency and Close_RoC as discussed in the Feature Selection phase
# + id="tCLXWQPUGrkF" colab_type="code" colab={}
DaySummary = DaySummary.drop(columns=['Volume_Currency','Close_RoC'])
# + id="iPCaUTrqGrkJ" colab_type="code" outputId="e5aefa9d-f736-4909-ff55-591f21d4ef82" colab={"base_uri": "https://localhost:8080/", "height": 233}
#We need to create target data, which is basically a copy of the data that will later be shifted
target_data = DaySummary.copy()
target_data = target_data.loc[:,['High','Low']]
target_data.tail()
# + [markdown] id="UqhMK-RiGrkN" colab_type="text"
# There are 2322 days between the first day of the data, 2013-04-03, and the last day, 2019-08-11
# + id="noNqCIG4GrkO" colab_type="code" outputId="b4efda61-10ef-4b7e-c0ad-232a252e5a11" colab={"base_uri": "https://localhost:8080/", "height": 35}
DaySummary.shape
# + [markdown] id="MXm9QXobGrkU" colab_type="text"
# The objective of the model is to predict the High and Low values of the following day
# + id="pBDjRUeJGrkV" colab_type="code" colab={}
# Predict 1 day in the future
shift_steps = 1
# Now that the target_data has been created we need to shift it so that the target values
# of 1 day later align with our input data
target_data = target_data.shift(-shift_steps)
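# What `shift(-1)` does on a tiny example: each row now holds the next day's value,
# and the final row becomes NaN (which is why it is dropped below).

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40])
# shift(-1) moves every value one row up, so row i now holds tomorrow's value;
# the last row becomes NaN and has to be dropped before training.
print(s.shift(-1).tolist())  # → [20.0, 30.0, 40.0, nan]
```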
# + [markdown] id="0Rpz6pkEGrkY" colab_type="text"
# Double-check: because we shifted the target values, we now have NaN values at the end
# + id="KcySrJ9HGrkZ" colab_type="code" outputId="30f2ef70-9b21-40f9-8a26-bdb56614f8f9" colab={"base_uri": "https://localhost:8080/", "height": 233}
target_data.tail()
# + id="L-2C2lJuGrkg" colab_type="code" colab={}
# Remove the rows with NaN target values, which also means dropping
# the last line of the DaySummary data
DaySummary_clean = DaySummary.iloc[:-1,:]
target_data_clean = target_data.iloc[:-1,:]
# + id="CZXC3DThGrkl" colab_type="code" outputId="57001b81-9285-4dea-b7b1-d690d0911582" colab={"base_uri": "https://localhost:8080/", "height": 35}
DaySummary_clean.shape, target_data_clean.shape
# + [markdown] id="5j8PSX8VGrkp" colab_type="text"
# Separating the days between test and training
# + id="x5GthVebGrkq" colab_type="code" colab={}
# For the use of Neural Networks we need to convert the data to a numpy array
X_data = np.array(DaySummary_clean)
Y_data = np.array(target_data_clean)
# + id="kH8isLd7Grkt" colab_type="code" outputId="3baa3175-1038-4d78-9ed1-a3a1e410bd24" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Using 90% of the data for training and the rest for validation
train_split = 0.9
n_train_rows = int(X_data.shape[0]*train_split)
# Because the sliding-window batching below uses fixed-length sequences, the train split should divide cleanly into them
X_train = X_data[0:n_train_rows]
X_test = X_data[n_train_rows:]
Y_train = Y_data[0:n_train_rows]
Y_test = Y_data[n_train_rows:]
print(X_train.shape, Y_train.shape, X_test.shape, Y_test.shape)
# + [markdown] id="m_9Jd9mLGrky" colab_type="text"
# Now we need to Scale the data to feed it to the Neural Network
# + id="GcF3lVQfGrk0" colab_type="code" outputId="802f956e-92d4-46bf-999e-15a2b4bb8bc6" colab={"base_uri": "https://localhost:8080/", "height": 35}
x_scaler = MinMaxScaler()
X_train_scale = x_scaler.fit_transform(X_train)
X_test_scale = x_scaler.transform(X_test)
y_scaler = MinMaxScaler()
Y_train_scale = y_scaler.fit_transform(Y_train)
Y_test_scale = y_scaler.transform(Y_test)
# Keras expects the input data to have a 3-D (samples, time-steps, features) shape
X_train_scale = X_train_scale.reshape(1,X_train_scale.shape[0],X_train_scale.shape[1])
Y_train_scale = Y_train_scale.reshape(1,Y_train_scale.shape[0],Y_train_scale.shape[1])
X_test_scale = X_test_scale.reshape(1,X_test_scale.shape[0],X_test_scale.shape[1])
Y_test_scale = Y_test_scale.reshape(1,Y_test_scale.shape[0],Y_test_scale.shape[1])
print(X_train_scale.shape, Y_train_scale.shape, X_test_scale.shape, Y_test_scale.shape)
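# `y_scaler` is kept around because the model's outputs later need `inverse_transform`
# to get back to price units. A minimal round-trip sketch on toy values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0], [5.0], [9.0]])
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)           # maps min -> 0, max -> 1
X_back = scaler.inverse_transform(X_scaled)  # recovers the original units
print(X_scaled.ravel().tolist())  # → [0.0, 0.5, 1.0]
```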
# + id="WHsd-qk6Grk4" colab_type="code" colab={}
def batch_reshape(sequence_length, X_train_scale, Y_train_scale, num_x_signal, num_y_signal):
"""
Generator function for creating random batches of training-data.
"""
batch_size = X_train_scale.shape[1] // sequence_length
# Allocate a new array for the batch of input-signals.
x_shape = (batch_size, sequence_length, num_x_signal)
x_batch = np.zeros(shape=x_shape, dtype=np.float16)
# Allocate a new array for the batch of output-signals.
y_shape = (batch_size, num_y_signal)
y_batch = np.zeros(shape=y_shape, dtype=np.float16)
#print(x_batch.shape, y_batch.shape, X_train_scale.shape, Y_train_scale.shape) #debugging
# Create Sequence for sliding window
seq = []
for i in range(batch_size):
seq.append(i*sequence_length)
# Fill the batch with sequences of data.
for i in range(0,len(seq)-1):
# Copy the sequences of data starting at this index.
x_batch[i] = X_train_scale[0][seq[i]:seq[i]+sequence_length][:]
y_batch[i] = Y_train_scale[0][seq[i]+sequence_length-1][:]
#print("iteration: ",i,"-OK") #debugging
    #print(x_batch.shape,y_batch.shape) #debugging
return (x_batch, y_batch)
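# The `batch_reshape` function above chops the training series into non-overlapping
# windows of `sequence_length` steps, exactly like the `seq = [0, sequence_length, ...]`
# offsets it builds. A toy sketch of the same indexing:

```python
import numpy as np

series = np.arange(10)  # toy 1-D signal
sequence_length = 3
n_windows = len(series) // sequence_length
# Non-overlapping windows, like the seq offsets in batch_reshape.
windows = [series[i*sequence_length:(i+1)*sequence_length] for i in range(n_windows)]
print(windows[0], windows[-1])  # → [0 1 2] [6 7 8]
```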
# + id="NXy9oDIIGrk9" colab_type="code" colab={}
def batch_generator(batch_size, sequence_length, num_x_signal, num_y_signal, X_train_scale, Y_train_scale):
X_reshaped , Y_reshaped = batch_reshape(sequence_length, X_train_scale, Y_train_scale, num_x_signal, num_y_signal)
# Infinite loop.
while True:
# Allocate a new array for the batch of input-signals.
x_shape = (batch_size, sequence_length, num_x_signal)
x_batch = np.zeros(shape=x_shape, dtype=np.float16)
# Allocate a new array for the batch of output-signals.
y_shape = (batch_size, num_y_signal)
y_batch = np.zeros(shape=y_shape, dtype=np.float16)
# Fill the batch with random continuous sequences of data.
# Get a random start-index.
# This points somewhere into the training-data.
idx = np.random.randint(X_reshaped.shape[0] - batch_size)
# Copy the sequences of data starting at this index.
x_batch = X_reshaped[idx:idx+batch_size]
y_batch = Y_reshaped[idx:idx+batch_size]
yield (x_batch, y_batch)
# + id="Gnz3xCE7GrlA" colab_type="code" colab={}
num_x_signal = 13 # number of input features
num_y_signal = 2 # number of label classes
batch_size = 60 # tuning parameter
sequence_length = 25 # Number of time-steps to look back for the next-day prediction
# + id="lA36zFWiGrlC" colab_type="code" colab={}
generator = batch_generator(batch_size,sequence_length, num_x_signal, num_y_signal, X_train_scale, Y_train_scale)
# + id="COv6FxyqGrlF" colab_type="code" outputId="f85d5867-3874-48bb-f0ba-0609a7f54b94" colab={"base_uri": "https://localhost:8080/", "height": 35}
x_batch, y_batch = next(generator)
print(x_batch.shape, y_batch.shape)
# + id="ydYAC0cqGrlI" colab_type="code" colab={}
def batch_validation(sequence_length, num_x_signal, num_y_signal, X_test_scale, Y_test_scale):
"""
Generator function for creating random batches of validation data.
"""
# Allocate a new array for the batch of input-signals.
batch_size_val = X_test_scale.shape[1] - sequence_length
x_shape = (batch_size_val, sequence_length, num_x_signal)
x_batch = np.zeros(shape=x_shape, dtype=np.float16)
# Allocate a new array for the batch of output-signals.
y_shape = (batch_size_val, num_y_signal)
y_batch = np.zeros(shape=y_shape, dtype=np.float16)
# Fill the batch with random sequences of data.
for i in range(batch_size_val):
# Copy the sequences of data starting at this index.
x_batch[i] = X_test_scale[0][i:i+sequence_length][:]
y_batch[i] = Y_test_scale[0][i+sequence_length-1][:]
return (x_batch, y_batch)
# + id="_LmBMOraGrlM" colab_type="code" outputId="1fa0af13-5e77-4297-9952-225f04990c2e" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_val, Y_val = batch_validation(sequence_length,num_x_signal, num_y_signal, X_test_scale, Y_test_scale)
print(X_val.shape, Y_val.shape)
# + id="x39IvOs-GrlQ" colab_type="code" colab={}
validation_data = (X_val, Y_val)
# + [markdown] id="UgsnKwYUGrlS" colab_type="text"
# ## Create Recurrent Neural Network
# + id="JqbFT1W9GrlT" colab_type="code" outputId="5dd7d724-f3c5-402a-8da0-55f2dd97c2b5" colab={"base_uri": "https://localhost:8080/", "height": 329}
from tensorflow.keras.layers import BatchNormalization
#from keras.constraints import max_norm
model = Sequential()
model.add(LSTM(units=200,
return_sequences=True,
input_shape=(None,num_x_signal,)))
model.add(LSTM(units=150, return_sequences=False))
model.add(Dropout(0.4))
model.add(BatchNormalization())
model.add(Dense(num_y_signal,activation='linear'))
model.summary()
# + id="BipzhXW4GrlW" colab_type="code" colab={}
optimizer = Adam(lr=1e-3)
model.compile(loss=losses.logcosh, optimizer=optimizer)
#model.save_weights('initial_weights.h5')
# + id="1SEdgxu7Grla" colab_type="code" colab={}
#es = EarlyStopping(monitor='val_loss', patience=10, verbose=1, min_delta=1e-6)
model_file = "DailySummary_LSTM_trained.h5"
mc = ModelCheckpoint('/content/drive/My Drive/Capstone/FinalModels/Step3-DailySummary/'+model_file, monitor="val_loss", mode="min", save_best_only=True)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
patience=4, min_lr=1e-4)
# + id="zcKjlC6CGrlc" colab_type="code" colab={}
# %%time
#model.load_weights('initial_weights.h5')
history = model.fit_generator(generator=generator,
epochs=100,
steps_per_epoch=50,
validation_data=validation_data,
callbacks=[ mc, reduce_lr])
#callbacks=[es, reduce_lr])
# + id="z0DLvjZsGrlg" colab_type="code" colab={}
hist_df = pd.DataFrame(history.history)
hist_csv_file = '/content/drive/My Drive/Capstone/FinalModels/Step3-DailySummary/'+model_file.split('.')[0]+'.csv'
with open(hist_csv_file, mode='w') as f:
hist_df.to_csv(f)
# + id="o-YFo1m8Grlj" colab_type="code" outputId="e6072a60-f19b-46af-a5b4-679189ea2345" colab={"base_uri": "https://localhost:8080/", "height": 295}
plt.plot(history.history['val_loss'])
plt.title('model_Val_loss')
plt.ylabel('Val loss')
plt.xlabel('epoch')
plt.legend(['test'], loc='upper left')
plt.show()
# + id="VfRcee_sGrlm" colab_type="code" outputId="14ac17c4-61ac-4609-f418-6a6251e4a6bb" colab={"base_uri": "https://localhost:8080/", "height": 295}
plt.plot(history.history['loss'])
plt.title('model_loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
# + id="0XCXzmN8Grlp" colab_type="code" outputId="cc0ace9a-b049-4c08-f2d3-1ebb332bc59c" colab={"base_uri": "https://localhost:8080/", "height": 35}
# The algorithm uses data of the previous 25 time-steps to forecast the following day into the future.
#Create a Dataframe to hold the true predicted values for each day.
DailySummaryPredictionData = DaySummary.copy()
DailySummaryPredictionData = DailySummaryPredictionData.loc[:,['High','Low']]
DailySummaryPredictionData = DailySummaryPredictionData[:-1]
DailySummaryPredictionData = DailySummaryPredictionData[n_train_rows:]# Filter only the testing days
DailySummaryPredictionData = DailySummaryPredictionData[25:]
DailySummaryPredictionData.shape
# + id="zXqQZLLeGrlt" colab_type="code" colab={}
def predict(modelFilename, predictionsData):
column_name = modelFilename.split('.')[0]
#Step 1 - Create the columns to hold predictions
predictionsData[column_name + "_High"] = np.nan
predictionsData[column_name + "_Low"] = np.nan
    #Step 2 - use the in-memory trained model (alternatively, load it from modelFilename)
    loaded_model = model
    #Step 3 - Generate the prediction
    ypred = loaded_model.predict(X_val)
    ypred_rescaled = y_scaler.inverse_transform(ypred)
    #Step 4 - Copy the prediction values to the corresponding day in the predictionsData
predictionsData[column_name + "_High"] = ypred_rescaled[:,0]
predictionsData[column_name + "_Low"] = ypred_rescaled[:,1]
return(predictionsData)
# + id="2fU5aP2blsO5" colab_type="code" colab={}
saveDrivePath = '/content/drive/My Drive/Capstone/FinalModels/Step3-DailySummary'  # was undefined; reuse the checkpoint directory from above
modelfilePath = saveDrivePath + '/' + model_file
# + id="buQxqlFRGrly" colab_type="code" colab={}
predict(modelfilePath, DailySummaryPredictionData)
# + id="s49G2-L4wTOK" colab_type="code" colab={}
prediction_file = 'predictions_' + model_file.split('.')[0] + '.csv'
prediction_filePath = '/content/' + prediction_file
DailySummaryPredictionData.to_csv(prediction_file)
# + id="VgoSnGyswuJV" colab_type="code" colab={}
destinationDir = '/content/drive/My Drive/Capstone/FinalModels/Step3-DailySummary'
oscmd = "mv "+'"'+prediction_filePath+'"' + " " + '"'+destinationDir+'"'
oscmd
# + id="XSHWElxsIWJV" colab_type="code" colab={}
os.system(oscmd)
# + id="ud9T-Pf5Ia0E" colab_type="code" colab={}
DailySummaryPredictionData.head()
# + id="lkNJiS0OlZKm" colab_type="code" outputId="666d2e6b-fd82-4bca-b828-d70600086922" colab={"base_uri": "https://localhost:8080/", "height": 35}
from sklearn.metrics import mean_squared_error
mse_High = mean_squared_error(DailySummaryPredictionData.iloc[:,0],DailySummaryPredictionData.iloc[:,2])
mse_High
# + id="4PzzkcCpl2qD" colab_type="code" outputId="53f75812-9910-4736-d9da-75eb3c07182d" colab={"base_uri": "https://localhost:8080/", "height": 35}
mse_Low = mean_squared_error(DailySummaryPredictionData.iloc[:,1],DailySummaryPredictionData.iloc[:,3])
mse_Low
# + id="Q6BAyOVFl-4-" colab_type="code" colab={}
| FinalModel/Step3-DailySummary/DailySummaryPrediction_(Final_Model).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="VyeEr4YSJxys" colab_type="code" outputId="33e511d9-4848-4560-a26d-834ef147cf4c" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
# + id="J_O_OwtCJs3X" colab_type="code" colab={}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + id="i4psK3HdJs3h" colab_type="code" outputId="5538bd88-b258-431d-f1a6-11cf17abd17f" colab={"base_uri": "https://localhost:8080/", "height": 416}
dataset=pd.read_csv(r'/content/drive/My Drive/datasets/Tensorflow community challenge /Datasets /ratings data/ratings.csv')
dataset
# + id="SoyPX8ZSJs3p" colab_type="code" outputId="a097a26f-3cf2-4e04-b49b-3a24d54b4d60" colab={"base_uri": "https://localhost:8080/", "height": 416}
from sklearn.preprocessing import LabelEncoder
encoder=LabelEncoder()
dataset['num_mfr']=encoder.fit_transform(dataset['mfr'])
dataset['num_type']=encoder.fit_transform(dataset['type'])
dataset
# + id="enP1lU7gJs3x" colab_type="code" colab={}
# + id="gJ2AeUv5Js33" colab_type="code" outputId="a01e770e-f855-491c-c219-fa7dabbc1fc1" colab={"base_uri": "https://localhost:8080/", "height": 297}
sns.barplot(x='num_type',y='rating',data=dataset)
# + id="R8ejfqAlJs39" colab_type="code" outputId="480df280-a1cf-4cba-c7a9-a655aa7a3043" colab={"base_uri": "https://localhost:8080/", "height": 297}
sns.barplot(x='num_mfr',y='rating',data=dataset)
# + id="dyXogUY9Js4H" colab_type="code" colab={}
target=dataset.pop('rating') # extract the target first so that 'rating' does not leak into the features
training_data=dataset.drop(['name','mfr','type'],axis=1)
# + id="vJXAELwJJs4N" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(training_data,target,test_size=0.2,random_state=20)
# + id="Sb-g6yErJs4S" colab_type="code" colab={}
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer, r2_score
# + id="rkrWtR0_Js4X" colab_type="code" colab={}
def custom_metric(target_test, target_predicted):
# Calculate r-squared score
r2 = r2_score(target_test, target_predicted)
# Return r-squared score
return r2
# + id="EaHzGUZ4Js4e" colab_type="code" outputId="b93dfda8-75e8-4c05-8823-e0ea9fb3f6c6" colab={"base_uri": "https://localhost:8080/", "height": 34}
regressor = Ridge()  # Ridge is a regression model, so we name the variable accordingly
model = regressor.fit(x_train, y_train)
score = make_scorer(custom_metric, greater_is_better=True)
score(model, x_test, y_test)
# + id="-fonEDpTJs4j" colab_type="code" outputId="40f5f015-d305-4ddb-c124-d78d42a8d31f" colab={"base_uri": "https://localhost:8080/", "height": 34}
target_predicted = model.predict(x_test)
r2_score(y_test, target_predicted)
# + id="DLh6Ro53Js5E" colab_type="code" outputId="5ea9bd8b-d2ee-44bd-ed51-588d5d59c4b5" colab={"base_uri": "https://localhost:8080/", "height": 284}
plt.scatter(y_test, target_predicted,c='b')
| day_3/day_3_ratings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Demo notebook for Kamodo Flythrough "FakeFlight" function
# The FakeFlight function flies a user-designed trajectory through the chosen model data. The sample trajectory is created using a few input parameters as described in the output of block 5.
# You may run the notebook as is if you have the sample data file, but you must
# change the 'file_dir', 'output_name', and 'plot_output' variables in block 6 to have the correct file path.
#import satellite flythrough code
from kamodo_ccmc.flythrough import SatelliteFlythrough as SF
import kamodo_ccmc.flythrough.model_wrapper as MW
#The testing data file is available at https://drive.google.com/file/d/1pHx9Q8v4vO59_RUMX-SJqYv_-dE3h-st/view?usp=sharing
#What models are possible?
MW.Choose_Model('')
#Choose which model to view the example for, then execute the notebook
model = 'TIEGCM'
#What are the variable names available from that model?
MW.Model_Variables(model)
#variable name, description, variable number, coordinate type, coordinate grid, list of coordinate names, units of data
#What are the time ranges available in my data?
file_dir = 'C:/Users/rringuet/Kamodo_WinDev1/TIEGCM/Data/' #full file path to where the model output data is stored
#Change file_dir to match the file path for your data.
MW.File_Times(model, file_dir)
#This function also automatically performs any data preparation needed.
#What are the variable names available in my data?
MW.File_Variables(model, file_dir)
#variable name, description, variable number, coordinate type, coordinate grid, list of coordinate names, units of data
help(SF.FakeFlight)
# +
#Choosing input values for FakeFlight function call
#----------------------------
start_utcts, end_utcts, n = 1068771600, 1069632000, 100.
#The chosen time range should match the length of time in the model data files.
#See the times.csv file in the directory where the model data is stored for the available time ranges
#The file will appear after attempting to execute a flythrough function.
#Time values found not to be contained in the model data are automatically discarded (see output of next block).
variable_list = ['rho','u_n','T_e'] #list of desired variable names from above list
#not all variables in the list will be available in the file(s) found.
#choose naming convention for output files
output_type = 'csv' #chosen file format for data output
output_name = 'C:/Users/rringuet/Kamodo_NasaTest/FakeFlightExample_TIEGCM' #filename for DATA output without extension
plot_output = 'C:/Users/rringuet/Kamodo_NasaTest/FakeFlightExample_TIEGCM' #filename for PLOT outputs without extension
plot_coord = 'GSE' #coordinate system chosen for output plots
#See https://sscweb.gsfc.nasa.gov/users_guide/Appendix_C.shtml for a description of coordinate types
#Choose from any option available in SpacePy.
# -
#run FakeFlight with sample trajectory
results = SF.FakeFlight(start_utcts, end_utcts, model, file_dir, variable_list, n=n,
output_type=output_type, output_name=output_name, plot_output=plot_output, plot_coord=plot_coord)
#open plots in separate internet browser window for interactivity. Nothing will open here.
| notebooks/SF_FakeFlight_TIEGCM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jamellasagelliv/linearpubfiles2021/blob/main/Assignment9.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="SLUubLCzHs4V"
# # Lab 2 - Plotting Vector using NumPy and MatPlotLib
# + [markdown] id="QSDFae7hHs4Z"
# In this laboratory we will be discussing the basics of numerical and scientific programming by working with Vectors using NumPy and MatPlotLib.
# + [markdown] id="Wg4cezD8Hs4b"
# ### Objectives
# At the end of this activity you will be able to:
# 1. Be familiar with the libraries in Python for numerical and scientific programming.
# 2. Visualize vectors through Python programming.
# 3. Perform simple vector operations through code.
# + [markdown] id="UQD7DoC2Hs4b"
# ## Discussion
# + [markdown] id="A6grbAIbHs4c"
# ### NumPy
# + [markdown] id="OXixP-e6Hs4d"
# #### Representing Vectors
# + [markdown] id="y-nXPTTuHs4e"
# Now that you know how to represent vectors in their component and matrix forms, we can hard-code them in Python. Let's say that you have the vectors:
# + [markdown] id="LlEfOzbtHs4e"
# $$ A = 4\hat{x} + 3\hat{y} \\
# B = 2\hat{x} - 5\hat{y}\\
# C = 4ax + 3ay - 2az \\
# D = 2\hat{i} - 2\hat{j} + 3\hat{k}$$
# + [markdown] id="l6LHUvnRHs4f"
# Their matrix equivalents are:
# + [markdown] id="YKAjjPnYHs4g"
# $$ A = \begin{bmatrix} 4 \\ 3\end{bmatrix} , B = \begin{bmatrix} 2 \\ -5\end{bmatrix} , C = \begin{bmatrix} 4 \\ 3 \\ -2 \end{bmatrix}, D = \begin{bmatrix} 2 \\ -2 \\ 3\end{bmatrix}
# $$
# $$ A = \begin{bmatrix} 4 & 3\end{bmatrix} , B = \begin{bmatrix} 2 & -5\end{bmatrix} , C = \begin{bmatrix} 4 & 3 & -2\end{bmatrix} , D = \begin{bmatrix} 2 & -2 & 3\end{bmatrix}
# $$
# + [markdown] id="lkrX9lHBHs4g"
# We can then start writing this in NumPy code:
# + id="Vi6yz53gHs4h"
## Importing necessary libraries
import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname.
# + colab={"base_uri": "https://localhost:8080/"} id="KGr4fBg0Hs4j" outputId="e41fc070-119d-43ae-e2cb-c1f50634040d"
A = np.array([4, 3])
B = np.array([2, -5])
C = np.array([
[4],
[3],
[-2]
])
D = np.array ([[2],
[-2],
[3]])
print('Vector A is ', A)
print('Vector B is ', B)
print('Vector C is ', C)
print('Vector D is ', D)
# + [markdown] id="DQhfqdq_Hs4l"
# #### Describing vectors in NumPy
# + colab={"base_uri": "https://localhost:8080/"} id="HRz2gx6JHs4n" outputId="6944f2c6-f37d-4453-c8df-6da3bd0b1823"
### Checking shapes
### The shape tells us how many elements there are along each dimension (rows and columns)
print(A.shape)
H = np.array([1, 0, 2, 5, -0.2, 0])
print(H.shape)
print(C.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="dlDEokrAHs4o" outputId="65c5b591-c713-44e4-af31-db6bba47e7f4"
### Checking size
### The size tells us the total number of elements in the vector
D.size
# + colab={"base_uri": "https://localhost:8080/"} id="lwAQGA_JHs4p" outputId="e66ddd00-3102-4c81-b05c-5258de7f1b91"
### Checking dimensions
### The dimensions (or rank) of a vector tells us how many dimensions the vector has.
D.ndim
# + [markdown] id="tMdCFWyUHs4r"
# #### Addition
# + id="X1D_izU4Hs4t"
R = np.add(A, B) ## this is the functional method using the numpy library
P = np.add(C, D)
# + colab={"base_uri": "https://localhost:8080/"} id="jzH3b6dtHs4t" outputId="39ecf2b5-d94d-489c-c23f-0b00930c909e"
R = A + B ## this is the operator method; since A and B are NumPy arrays,
          ## the + operator performs element-wise array addition.
R
# + colab={"base_uri": "https://localhost:8080/"} id="lFpzDkUbVE-s" outputId="283eef22-8c57-4eb6-f25a-237b430c4d6e"
pos1 = np.array([0,0,0])
pos2 = np.array([0,1,3])
pos3 = np.array([1,5,-2])
pos4 = np.array([5,-3,3])
#R = pos1 + pos2 + pos3 + pos4
#R = np.multiply(pos3, pos4)
R = pos3 / pos4
R
# + [markdown] id="oeSxmIS8Hs4v"
# Try to implement subtraction, multiplication, and division with vectors $A$ and $B$!
# + colab={"base_uri": "https://localhost:8080/"} id="GAvlXjtCqjUy" outputId="730bea2e-a072-4476-cc8e-2a212a062801"
### Try out your code here!
A = np.array([7, 6, 5])
B = np.array([5, 4, 3])
SubtractVectors = np.subtract(A, B)
print(SubtractVectors)
MultiplyVectors = np.multiply(A, B)
print(MultiplyVectors)
DivisionVectors = np.divide(A, B)
print(DivisionVectors)
# + [markdown] id="3otcBr7tHs4w"
# ### Scaling
# + [markdown] id="FVwvCklOHs4x"
# Scaling or scalar multiplication takes a scalar value and performs multiplication with a vector. Let's take the example below:
# + [markdown] id="8el8_cMUHs4x"
# $$S = 5 \cdot A$$
# + [markdown] id="Oh9sk6lKHs4y"
# We can do this in numpy through:
# + colab={"base_uri": "https://localhost:8080/"} id="QNruUX1pHs4y" outputId="deb85cfa-ab6f-46b4-ee3e-ac10f7c9bcf7"
#S = 5 * A
S = np.multiply(5,A)
S
# + [markdown] id="owN3j6rMWr-3"
# Try to implement scaling with two vectors.
# + colab={"base_uri": "https://localhost:8080/"} id="CPSJTJqItluc" outputId="e353a9bb-5609-424e-efc9-d645a6be7d07"
SN = 10 #ScalingNumber
Vector1 = np.array([4,5,6])
Vector2 = np.array([6,7,8])
Scale = SN * Vector1, SN * Vector2 # scale each vector; the result is a tuple of the two scaled vectors
print(Scale)
# + [markdown] id="j04WSo4YHs4z"
# ### MatPlotLib
# + [markdown] id="ZM6_LZWWHs42"
# #### Visualizing Data
# + id="B2U78WnhHs43"
import matplotlib.pyplot as plt
import matplotlib
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="38rzjx3jXkK_" outputId="efe2d5ba-5bc7-4dd3-aad4-3022c5c14658"
A = [1, -1]
B = [5, -1]
plt.scatter(A[0], A[1], label='A', c='green')
plt.scatter(B[0], B[1], label='B', c='magenta')
plt.grid()
plt.legend()
plt.show()
# + id="KZgiMGCEZpJn"
A = np.array([1, -1])
B = np.array([1, 5])
R = A + B
Magnitude = np.sqrt(np.sum(R**2)) # compute the magnitude before it is used in the title
plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.quiver(0, 0, A[0], A[1], angles='xy', scale_units='xy', scale=1, color='red')
plt.quiver(A[0], A[1], B[0], B[1], angles='xy', scale_units='xy', scale=1, color='green')
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='black')
plt.grid()
plt.show()
print(R)
print(Magnitude)
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
# + id="0NZnAZckHs44"
n = A.shape[0]
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1)
plt.show()
# + [markdown] id="yWijJqoggcva"
# Try plotting three vectors and show the resultant vector.
# Use the head-to-tail method.
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="1UtHbpqKuXcC" outputId="bcab5dbf-be41-49b1-e9f5-d78ce76fdb09"
A = np.array([1, -5])
B = np.array([1, 5])
C = np.array([1, 7])
plt.xlim(-9, 9)
plt.ylim(-9, 9)
Q1 = plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1, color="red")
Q2 = plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1, color="cyan")
Q3 = plt.quiver(2,0, C[0], C[1], angles='xy', scale_units='xy',scale=1, color="blue")
R = A + B + C
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1, color="yellow")
plt.grid()
plt.show()
print("The Resultant of the three Vectors is", R)
| Assignment9.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Fine-Tuning BERT for Sequence-Level and Token-Level Applications
# :label:`sec_finetuning-bert`
#
# In the previous sections of this chapter, we designed different models for natural language processing applications, such as those based on RNNs, CNNs, attention, and MLPs. These models are helpful when there are space or time constraints; however, handcrafting a specific model for every natural language processing task is practically infeasible. In :numref:`sec_bert`, we introduced a pretrained model, BERT, that requires minimal architecture changes for a wide range of natural language processing tasks. On the one hand, at the time of its proposal, BERT improved the state of the art on various natural language processing tasks. On the other hand, as noted in :numref:`sec_bert-pretraining`, the two versions of the original BERT model come with 110 million and 340 million parameters, respectively. Thus, when there are sufficient computational resources, we may consider fine-tuning BERT for downstream natural language processing applications.
#
# In the following, we generalize a subset of natural language processing applications as sequence-level and token-level. On the sequence level, we introduce how to transform the BERT representation of the text input into the output label in single text classification and in text pair classification (or regression). On the token level, we briefly introduce new applications such as text tagging and question answering, and explain how BERT can represent their inputs and be transformed into output labels. During fine-tuning, the "minimal architecture changes" required by BERT across different applications are the extra fully connected layers. During supervised learning of a downstream application, the parameters of the extra layers are learned from scratch while all the parameters in the pretrained BERT model are fine-tuned.
#
# ## Single Text Classification
#
# *Single text classification* takes a single text sequence as input and outputs its classification result.
# Besides the sentiment analysis that we have studied in this chapter, the Corpus of Linguistic Acceptability (CoLA) is also a dataset for single text classification, judging whether a given sentence is grammatically acceptable or not :cite:`Warstadt.Singh.Bowman.2019`. For instance, "I should study." is acceptable but "I should studying." is not.
#
# 
# :label:`fig_bert-one-seq`
#
# :numref:`sec_bert` describes the input representation of BERT. A BERT input sequence unambiguously represents both single text and text pairs, where the special classification token "<cls>" is used for sequence classification and the special token "<sep>" marks the end of a single text or separates a pair of texts. As shown in :numref:`fig_bert-one-seq`, in single text classification applications, the BERT representation of the special classification token "<cls>" encodes the information of the entire input text sequence. As the representation of the single input text, it is fed into a small MLP consisting of fully connected (dense) layers to output the distribution over all the discrete label values.
#
# ## Text Pair Classification or Regression
#
# We have also examined natural language inference in this chapter. It belongs to *text pair classification*, a type of application that classifies a pair of texts.
#
# Taking a pair of texts as input but outputting a continuous value, *semantic textual similarity* is a popular *text pair regression* task.
# This task measures the semantic similarity of sentences. For instance, in the Semantic Textual Similarity Benchmark dataset, the similarity score of a pair of sentences is on a scale from 0 (no meaning overlap) to 5 (meaning equivalence) :cite:`Cer.Diab.Agirre.ea.2017`. The goal is to predict these scores. Examples from the Semantic Textual Similarity Benchmark dataset include (sentence 1, sentence 2, similarity score):
#
# * "A plane is taking off.", "An air plane is taking off.", 5.000;
# * "A woman is eating something.", "A woman is eating meat.", 3.000;
# * "A woman is dancing.", "A man is talking.", 0.000.
#
# 
# :label:`fig_bert-two-seqs`
#
# Compared with single text classification in :numref:`fig_bert-one-seq`, fine-tuning BERT for text pair classification in :numref:`fig_bert-two-seqs` differs in the input representation. For text pair regression tasks such as semantic textual similarity, trivial changes can be applied, such as outputting a continuous label value and using the mean squared loss: they are common for regression.
#
# ## Text Tagging
#
# Now let's consider token-level tasks, such as *text tagging*, where each token is assigned a label. Among text tagging tasks, *part-of-speech tagging* assigns each word a part-of-speech tag (e.g., adjective or determiner) according to the role of the word in the sentence. For example, according to the Penn Treebank II tag set, the sentence "<NAME>'s car is new" should be tagged as "NNP (noun, proper singular) NNP POS (possessive ending) NN (noun, singular or mass) VB (verb, base form) JJ (adjective)".
#
# 
# :label:`fig_bert-tagging`
#
# Fine-tuning BERT for text tagging applications is illustrated in :numref:`fig_bert-tagging`. Compared with :numref:`fig_bert-one-seq`, the only difference is that in text tagging, the BERT representation of *every token* of the input text is fed into the same extra fully connected layers to output the label of the token, such as a part-of-speech tag.
#
# ## Question Answering
#
# As another token-level application, *question answering* reflects the capability of reading comprehension.
# For example, the Stanford Question Answering Dataset (SQuAD v1.1) consists of reading passages and questions, where the answer to every question is just a segment of text (a text span) from the passage :cite:`Rajpurkar.Zhang.Lopyrev.ea.2016`. As an example, consider the passage "Some experts report that a mask's efficacy is inconclusive. However, mask makers insist that their products, such as N95 respirator masks, can guard against the virus." and the question "Who say that N95 respirator masks can guard against the virus?". The answer should be the text span "mask makers" in the passage. Thus, the goal in SQuAD v1.1 is to predict the start and the end of the text span in the passage given a question and a passage.
#
# 
# :label:`fig_bert-qa`
#
# To fine-tune BERT for question answering, the question and the passage are packed as the first and the second text sequence, respectively, in the input of BERT. To predict the position of the start of the text span, the same additional fully connected layer transforms the BERT representation of any token at position $i$ of the passage into a scalar score $s_i$. The scores of all the passage tokens are further transformed by softmax into a probability distribution, so that each token position $i$ in the passage is assigned a probability $p_i$ of being the start of the text span. Predicting the end of the text span is the same as above, except that the parameters in its additional fully connected layer are independent from those for predicting the start. When predicting the end, any passage token at position $i$ is transformed by the same fully connected layer into a scalar score $e_i$. :numref:`fig_bert-qa` depicts fine-tuning BERT for question answering.
#
# For question answering, the training objective of supervised learning is as straightforward as maximizing the log-likelihood of the ground-truth start and end positions. When predicting the span, we can compute the score $s_i + e_j$ of a valid span from position $i$ to position $j$ ($i \leq j$), and output the span with the highest score.
#
# ## Summary
#
# * For sequence-level and token-level natural language processing applications, BERT requires only minimal architecture changes (extra fully connected layers), such as in single text classification (e.g., sentiment analysis and testing linguistic acceptability), text pair classification or regression (e.g., natural language inference and semantic textual similarity), text tagging (e.g., part-of-speech tagging), and question answering.
# * During supervised learning of a downstream application, the parameters of the extra layers are learned from scratch while all the parameters in the pretrained BERT model are fine-tuned.
#
# ## Exercises
#
# 1. Let's design a search engine algorithm for news articles. When the system receives a query (e.g., "oil industry during the coronavirus outbreak"), it should return a ranked list of the news articles most relevant to the query. Suppose that we have a huge pool of news articles and a large number of queries. To simplify the problem, suppose that the most relevant article has been labeled for each query. How can we apply negative sampling (see :numref:`subsec_negative-sampling`) and BERT in the algorithm design?
# 1. How can we leverage BERT to train language models?
# 1. Can we leverage BERT in machine translation?
#
# [Discussions](https://discuss.d2l.ai/t/5729)
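# As a rough illustration of the paragraph above (not from the original text; all shapes and weights here are made-up toy values), the following NumPy sketch shows how only the "<cls>" token's representation is fed through one extra dense layer and a softmax to yield label probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden, num_classes = 8, 16, 3
encoded = rng.normal(size=(seq_len, hidden))   # hypothetical BERT output, one row per token
W = rng.normal(size=(hidden, num_classes))     # extra fully connected layer, learned from scratch
b = np.zeros(num_classes)

cls_repr = encoded[0]                          # representation of the "<cls>" token only
logits = cls_repr @ W + b
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the discrete label values
```

# The rest of the sequence's representations are ignored here: for single text classification only the "<cls>" position is used.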
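# The token-level variant can be sketched the same way (again with made-up toy shapes, not code from the original text): the *same* dense layer is applied to every token's representation, producing one tag distribution per token.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden, num_tags = 6, 16, 5
encoded = rng.normal(size=(seq_len, hidden))  # hypothetical BERT output, one row per token
W = rng.normal(size=(hidden, num_tags))       # one shared fully connected layer for all tokens
b = np.zeros(num_tags)

logits = encoded @ W + b                                        # same layer applied to every token
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
predicted_tags = probs.argmax(axis=1)                           # one tag per token
```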
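# The span selection rule in the last paragraph can be sketched directly (the scores below are toy numbers, not from the text): compute $s_i + e_j$ for every valid pair with $i \leq j$ and return the pair with the highest score.

```python
import numpy as np

def best_span(start_scores, end_scores):
    """Return (i, j) with i <= j maximizing start_scores[i] + end_scores[j]."""
    n = len(start_scores)
    best, best_score = (0, 0), -np.inf
    for i in range(n):
        for j in range(i, n):                      # enforce the i <= j constraint
            score = start_scores[i] + end_scores[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best

# Toy scores for a 4-token passage.
s = np.array([0.1, 2.0, 0.3, -1.0])
e = np.array([0.5, 0.2, 1.0, 1.5])
```

# Note that without the $i \leq j$ constraint, independently taking the argmax of each score vector could produce an invalid span whose end precedes its start.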
#
| submodules/resource/d2l-zh/tensorflow/chapter_natural-language-processing-applications/finetuning-bert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # 📝 Exercise M3.02
#
# The goal is to find the best set of hyperparameters which maximize the
# generalization performance on a training set.
#
# Here again we limit the size of the training set to make the computation
# run faster. Feel free to increase the `train_size` value if your computer
# is powerful enough.
# +
import numpy as np
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name, "education-num"])
from sklearn.model_selection import train_test_split
data_train, data_test, target_train, target_test = train_test_split(
data, target, train_size=0.2, random_state=42)
# -
# In this exercise, we will progressively define the classification pipeline
# and later tune its hyperparameters.
#
# Our pipeline should:
# * preprocess the categorical columns using a `OneHotEncoder` and use a
# `StandardScaler` to normalize the numerical data.
# * use a `LogisticRegression` as a predictive model.
#
# Start by defining the columns and the preprocessing pipelines to be applied
# on each group of columns.
# +
from sklearn.compose import make_column_selector as selector
# Write your code here.
# +
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
# Write your code here.
# -
# Subsequently, create a `ColumnTransformer` to dispatch each group of columns
# to its preprocessing pipeline.
# +
from sklearn.compose import ColumnTransformer
# Write your code here.
# -
# Assemble the final pipeline by combining the above preprocessor
# with a logistic regression classifier. Force the maximum number of
# iterations to `10_000` to ensure that the model will converge.
# +
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
# Write your code here.
# -
# Use `RandomizedSearchCV` with `n_iter=20` to find the best set of
# hyperparameters by tuning the following parameters of the `model`:
#
# - the parameter `C` of the `LogisticRegression` with values ranging from
# 0.001 to 10. You can use a log-uniform distribution
# (i.e. `scipy.stats.loguniform`);
# - the parameter `with_mean` of the `StandardScaler` with possible values
# `True` or `False`;
# - the parameter `with_std` of the `StandardScaler` with possible values
# `True` or `False`.
#
# Once the computation has completed, print the best combination of parameters
# stored in the `best_params_` attribute.
# +
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import loguniform
# Write your code here.
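# For reference, here is one possible sketch of how the pieces of this exercise could fit together. It runs on a tiny synthetic stand-in for the census data, so the column names ("workclass", "age") and transformer names are illustrative assumptions, not the exercise's official solution.

```python
import numpy as np
import pandas as pd
from scipy.stats import loguniform
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny synthetic dataset: one categorical and one numerical column.
rng = np.random.default_rng(0)
data = pd.DataFrame({"workclass": rng.choice(["a", "b"], size=200),
                     "age": rng.normal(40, 10, size=200)})
target = pd.Series(rng.choice([" <=50K", " >50K"], size=200))

preprocessor = ColumnTransformer([
    ("cat_preprocessor", OneHotEncoder(handle_unknown="ignore"), ["workclass"]),
    ("num_preprocessor", StandardScaler(), ["age"]),
])
model = make_pipeline(preprocessor, LogisticRegression(max_iter=10_000))

# Nested parameter names follow the pipeline's step names.
param_distributions = {
    "logisticregression__C": loguniform(0.001, 10),
    "columntransformer__num_preprocessor__with_mean": [True, False],
    "columntransformer__num_preprocessor__with_std": [True, False],
}
model_random_search = RandomizedSearchCV(
    model, param_distributions=param_distributions, n_iter=20, random_state=0)
model_random_search.fit(data, target)
print(model_random_search.best_params_)
```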
| notebooks/parameter_tuning_ex_03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hamzafarooq/pycaret/blob/master/TimeSeries_Forecasting.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="KKRvdotGoCVs" colab_type="text"
# #Data Set Overview
#
# The tutorial uses the [Iowa Liquor Retail Sales dataset](https://console.cloud.google.com/marketplace/details/iowa-department-of-commerce/iowa-liquor-sales). We will be using the dataset to predict future sales for one of the stores.
#
# This dataset contains every wholesale purchase of liquor in the State of Iowa by retailers for sale to individuals since January 1, 2012.
#
# The State of Iowa controls the wholesale distribution of liquor intended for retail sale, which means this dataset offers a complete view of retail liquor sales in the entire state. The dataset contains every wholesale order of liquor by all grocery stores, liquor stores, convenience stores, etc., with details about the store and location, the exact liquor brand and size, and the number of bottles ordered.
# + id="oxappFRWR6qm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="014a4b8e-162f-4ef2-8ea2-adf75fafc0a1"
#importing necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
import statsmodels.api as sm
import xgboost as xgb
from sklearn.metrics import mean_squared_error, mean_absolute_error
import imageio
import os
from statsmodels.graphics.tsaplots import plot_acf
# + [markdown] id="MI44fdd8Jtt5" colab_type="text"
# # Reading Data
#
# + [markdown] id="zp4xNDv3o5wy" colab_type="text"
# ## Using GCP and Biq Query
# To setup a project on Google Cloud Platform and use Big Query go to: http://console.cloud.google.com
#
# You can also watch my tutorial here: https://www.youtube.com/watch?v=m5qQ5GLmcZs
# + id="UnJynIZ72ypg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="054666ca-31f3-4591-c022-dc82b3d68663"
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
# + [markdown] id="cMa6SVLgp9wD" colab_type="text"
# ## Pulling data for one store.
# This dataset has quite a few dimensions; however, for now we will focus on just one store and on sales that have occurred after 1st Jan 2018
# + id="qeVGiMch8Wnb" colab_type="code" colab={}
# %%bigquery --project bold-sorter-281506 df2 # bold-sorter-281506 is the project id; df2 is the destination dataframe name
SELECT *
FROM `bigquery-public-data.iowa_liquor_sales.sales`
where store_number = '2633'
and date > '2018-01-01'
# + [markdown] id="W3hSsmL0qOsj" colab_type="text"
# ## Using a direct link
# A version of this dataset is also saved on my google drive. We can use it to pull the dataset
#
# + id="jCqoAXcjISWe" colab_type="code" colab={}
import pandas as pd
url = 'https://drive.google.com/file/d/1g3UG_SWLEqn4rMuYCpTHqPlF0vnIDRDB/view?usp=sharing'
path = 'https://drive.google.com/uc?export=download&id='+url.split('/')[-2]
df2 = pd.read_csv(path)
# + id="dMhpB4RxIp7i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 37} outputId="212b9ba2-d2aa-4518-b853-b9dc18ed74bd"
path #save this path, just in case
# + [markdown] id="9js2qCkSq5Dd" colab_type="text"
# # Data Overview
# + id="sUNtTaXH_zod" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394} outputId="53af867d-3d65-4bdf-909f-7ef50ff396ee"
df2.head(5)
# + id="mpDyU--L3Wxb" colab_type="code" colab={}
df2_ds = df2[['date','sale_dollars']] # selecting the needed columns
# + id="H0pw8xouAEPo" colab_type="code" colab={}
df2_ds=df2_ds.sort_index(axis=0)
# + id="YbKw9JvyCjJn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="3a7ac0f0-b7b0-4de5-fd2f-7fe21145526b"
df2_ds.tail(5)
# + id="FlVg7MRD6zm4" colab_type="code" colab={}
aggregated=df2_ds.groupby('date',as_index=True).sum()
# + id="1Pph6mGw7K4A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="9e3d206a-851c-410f-c0a8-f05c5750e067"
print(min(aggregated.index))
print(max(aggregated.index))
# + id="bez5t6XJ0Y59" colab_type="code" colab={}
aggregated.index=pd.to_datetime(aggregated.index)
# + [markdown] id="2YYxAbSAAMc5" colab_type="text"
# #Create Features
# There are multiple ways of creating features; here we will explore simpler ones - a few others are commented out for now
# + id="ANeLnv9IAvgz" colab_type="code" colab={}
def create_features(df):
"""
Creates time series features from datetime index
"""
df['date'] = df.index
df['dayofweek'] = df['date'].dt.dayofweek
df['quarter'] = df['date'].dt.quarter
df['month'] = df['date'].dt.month
df['year'] = df['date'].dt.year
df['dayofyear'] = df['date'].dt.dayofyear
df['dayofmonth'] = df['date'].dt.day
df['weekofyear'] = df['date'].dt.weekofyear
df['flag'] = pd.Series(np.where(df['date'] >= np.datetime64('2020-03-03'), 1, 0), index=df.index) #flag for COVID-19
#df['rolling_mean_7'] = df['sale_dollars'].shift(7).rolling(window=7).mean()
#df['lag_7'] = df['sale_dollars'].shift(7)
#df['lag_15']=df['sale_dollars'].shift(15)
#df['lag_last_year']=df['sale_dollars'].shift(52).rolling(window=15).mean()
X = df[['dayofweek','quarter','month','year',
'dayofyear','dayofmonth','weekofyear','flag','sale_dollars']]
X.index=df.index
return X
# + id="Pq0YTx94Axo8" colab_type="code" colab={}
def split_data(data, split_date):
return data[data.index <= split_date].copy(), \
data[data.index > split_date].copy()
# + id="Yd5OdAW5BDWv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 606} outputId="2e4666f8-b896-488f-f5dc-3853ea4c8014"
aggregated=create_features(aggregated)
train, test = split_data(aggregated, '2020-06-15') # splitting the data for training before 15th June
plt.figure(figsize=(20,10))
plt.xlabel('date')
plt.ylabel('sales')
plt.plot(train.index,train['sale_dollars'],label='train')
plt.plot(test.index,test['sale_dollars'],label='test')
plt.legend()
plt.show()
# + [markdown] id="_r8WQT9H-nL9" colab_type="text"
# There is a lot of variation within the data; also, the dates are not continuous, that is, there are gaps. We can do two things here: impute the missing dates or leave them be. A major reason we will not create the missing dates is that we are using this data for predictive modeling rather than time series forecasting, hence the target does not depend on the immediate past but on the relationship of the features with sales over time
# + id="59jHH0I_CBHh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="3e760938-bd67-411a-f99a-d0d664ad3021"
train.tail(4)
# + [markdown] id="sNwbBhkkipSL" colab_type="text"
# # Run PyCaret
# + id="8hz1b-ViELta" colab_type="code" colab={}
# #!pip install pycaret
# + id="0G24V-M4EWzs" colab_type="code" colab={}
from pycaret.regression import *
# + [markdown] id="lA_XyhCt_7K9" colab_type="text"
# Setting up the model is extremely easy
# + id="Pb1KuIswE4Q-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 956, "referenced_widgets": ["fe4ff1a8494c4db18b6f62cc48ebdf33", "cc1f9d5ed0fc4df8b06cc33fcfb12433", "a6657fb7e6c74968946ddf3c13aa852b"]} outputId="56e01544-b229-47b7-9ac6-ed33061293f5"
reg = setup(data = train,
target = 'sale_dollars',
numeric_imputation = 'mean',
categorical_features = ['dayofweek','quarter','month','year','dayofyear','dayofmonth','weekofyear',
'flag'] ,
transformation = True, transform_target = True,
combine_rare_levels = True, rare_level_threshold = 0.1,
remove_multicollinearity = True, multicollinearity_threshold = 0.95,
silent = True)
# + [markdown] id="9aqlIk56AauE" colab_type="text"
# As a data scientist, I can't emphasize enough the usefulness of the function below - instead of pulling every single model, we need just one line to compare 20 different models! **This is insane!**
# + id="2i0TiH95F6E-" colab_type="code" colab={}
# returns best models - takes a little time to run
top3 = compare_models(n_select = 3)
# + [markdown] id="qAa4eHGtAPF2" colab_type="text"
# ## Creating baseline model
# + id="BHH6Yiq1GaQo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292, "referenced_widgets": ["ef1b8c8523d14bae8063cc50962be563", "32735450faed4860bba41860067140e7", "d0ddb74c942f4fbba405cc4d74ed9bcf"]} outputId="58e43d60-f1ef-4745-e525-a197eca55bc3"
#we create a model using light gbm
lightgbm = create_model('lightgbm')
# + [markdown] id="IB0PkaSHBotE" colab_type="text"
# Being able to tune seamlessly and hardly writing a line is extremely useful
# + id="WYO3g_5tJ0ze" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292, "referenced_widgets": ["a88069478f044e8b86163623106c6e16", "a6aaf5a7b8e94654bd9b734bfa90ae06", "33db443f1200442281a5064b139386cc"]} outputId="30bedc3f-475a-47b6-f599-4a85952c84c5"
tuned_lightgbm = tune_model(lightgbm)
# + id="d4Iatz7WKR-l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 376, "referenced_widgets": ["587e993da97d40e3929f3115a581afb0", "b05841eeccb641e4bf8f0616dee789d3", "8abce8a944c041b980edb46b327b7d7e"]} outputId="7332db8f-3bc7-4899-c4aa-13aac6b8af25"
plot_model(lightgbm)
# + id="0Hv6iMsZKVg0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 378} outputId="c3003fd8-76c8-4ea3-8228-6657fdd0027d"
plot_model(lightgbm, plot = 'error')
# + id="olNQjb0XKWTh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 349} outputId="4ef70395-74bc-46fc-b33d-1ebf8a45cda3"
plot_model(tuned_lightgbm, plot='feature') # looks like COVID-19 has played a huge role in sales
# + id="b_Fd_xJ1Lgs5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 80} outputId="c5fdaee8-a602-4e7d-b445-22323cbc4cfa"
predict_model(tuned_lightgbm);
# + id="JV_UhNbfLifp" colab_type="code" colab={}
final_lightgbm = finalize_model(tuned_lightgbm)
# + id="URUPWTwkLoRs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="abf319c4-ba69-43e2-aba0-f52d8e30da99"
#Final Light Gradient Boosting Machine parameters for deployment
print(final_lightgbm)
# + id="s8eWLjcrLuAY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 80} outputId="d1dfcc3c-f0e4-4766-b4b2-6cb37a9be06a"
predict_model(final_lightgbm);
# + id="wUpeBVkIL5zF" colab_type="code" colab={}
unseen_predictions = predict_model(final_lightgbm, data=test)
unseen_predictions.head()
unseen_predictions.loc[unseen_predictions['Label'] < 0, 'Label'] = 0 #removing any negative values
# + id="SzVa5sgyO3Ao" colab_type="code" colab={}
def plot_series(time, series,i, format="-", start=0, end=None):
#plt.figure(figsize=(20,10))
plt.plot(time[start:end], series[start:end], format,label=i)
plt.xlabel("Date")
plt.ylabel("Sales (Dollar)")
plt.legend()
# + id="EiznLosEO4L3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 606} outputId="fddb144d-cd6f-40a2-dd60-f1b59820c9e5"
plt.figure(figsize=(20,10))
plot_series(test.index, test['sale_dollars'],"True")
#plot_series(train['ds'],train['y'])
plot_series(test.index, unseen_predictions['Label'],"Baseline")
# + [markdown] id="TI864pAXCRqI" colab_type="text"
# Introducing a new metric, SMAPE - this works really well when there are a lot of 0's in the data - like this one. Please note, 0 is not a missing value
# + id="ofNoqfhAWlsj" colab_type="code" colab={}
def calc_smape(y_hat, y):
return 100/len(y) * np.sum(2 * np.abs(y_hat - y) / (np.abs(y) + np.abs(y_hat)))
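As a quick sanity check on the SMAPE definition (the function is repeated below so the snippet stands alone, and the sample values are made up): a perfect prediction scores 0, while each missed point contributes at most 200, so zero-sales days cannot blow the metric up the way they would with MAPE.

```python
import numpy as np

# repeat of calc_smape from the cell above, so this check is self-contained
def calc_smape(y_hat, y):
    return 100 / len(y) * np.sum(2 * np.abs(y_hat - y) / (np.abs(y) + np.abs(y_hat)))

# made-up sales figures: one zero-sales day predicted as 10
y_true = np.array([0.0, 100.0, 50.0])
y_pred = np.array([10.0, 100.0, 50.0])

# the zero-sales miss contributes 2*|10-0|/(0+10) = 2 (the 200% cap),
# and the two exact predictions contribute nothing
print(calc_smape(y_pred, y_true))  # 100/3 * 2 ≈ 66.67
```

One caveat: a point where both the prediction and the truth are exactly 0 produces 0/0, so in practice such points may need a guard.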
# + id="1innmkebCPwn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d66e0cbc-6f0c-432f-9d91-a2801ea4557d"
calc_smape(test['sale_dollars'].values,unseen_predictions['Label'].values)
# + [markdown] id="hcrc0OUdCezq" colab_type="text"
# We will consider 78.3 as our baseline SMAPE
# + [markdown] id="Ucb1692uCnPa" colab_type="text"
# ## Blending Models
# We will now create a blend model using four algorithms: Huber regression, random forest, XGBoost, and LightGBM.
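Before running PyCaret's `blend_models`, here is a hedged sketch of what blending amounts to: averaging the predictions of the individual regressors. The arrays below are illustrative numbers, not output from the models in this notebook.

```python
import numpy as np

# toy per-model predictions for three dates (made-up numbers)
preds = {
    "rf":       np.array([120.0,  0.0, 80.0]),
    "lightgbm": np.array([100.0, 10.0, 90.0]),
    "xgboost":  np.array([110.0,  5.0, 70.0]),
}

# a simple blend is the element-wise mean across the models' predictions
blended = np.mean(list(preds.values()), axis=0)
print(blended)  # [110.   5.  80.]
```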
# + id="auWCWPD1RJOV" colab_type="code" colab={}
huber = create_model('huber', verbose = False)  # needed below for tune_model(huber)
rf = create_model('rf', verbose = False)
lightgbm = create_model('lightgbm', verbose = False)
xgb = create_model('xgboost', verbose = False)
# + id="1Wuh06ZZc5x6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292, "referenced_widgets": ["d895c9fc343641e78aa92389880f7df5", "3dbefe6edba64e1896335a8e743b6c7f", "e51c625ae253409dbb5e4d001a8f2267", "ef5a88c7e4ef4c57a61865b6ac7996a8", "6bd7926964404537beac298835857861", "134db3d23e6e4eb8b84e9e87db30b6b9", "97ed6d6cb2bf481ab396e214f30a1635", "fd23b88b2f4149d2adccd0693c9f7d1a", "<KEY>", "1b3083c1b17a4a3cbbb0202142b8e593", "f5e95438c1d1498b8590421fe26065a6", "b3d482acf10e4aa380970a4d77cb3a52"]} outputId="dd7fa321-9133-4c75-9b4d-051cbba52930"
tuned_rf = tune_model(rf)
tuned_huber = tune_model(huber)
tuned_lightgbm = tune_model(lightgbm)
tuned_xgb = tune_model(xgb)
# + id="4m9PKmv1gieh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 376} outputId="310b5295-464a-4d46-a74b-f022bdfe0551"
plot_model(tuned_huber)
# + [markdown] id="SAwngrt3Dz5m" colab_type="text"
# The script below blends all four tuned models into one - the time savings are phenomenal.
# + id="tkvnGBfdTrGr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292, "referenced_widgets": ["b55cc85d024843eaa9bb5c2851e5e5b0", "0222b1b298c047e389f7475901bad635", "00a73ea3bb85411c979a72ff1ef88855"]} outputId="06364d18-6a9f-4874-c63a-76c5c6ad1552"
blend_specific = blend_models(estimator_list = [tuned_rf,tuned_lightgbm,tuned_xgb,tuned_huber])
# + id="6gjtmrgjUGSE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 80} outputId="ad925eb1-bf42-4f3d-d662-907c96e59c99"
predict_model(blend_specific);
# + id="fF1esVyJUExv" colab_type="code" colab={}
final_model = finalize_model(blend_specific)
# + id="exxFJKhDUGBv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="ed7dc17a-a7a0-43a4-f184-7051746b37ff"
unseen_predictions_2 = predict_model(final_model, data=test, round=0)
unseen_predictions_2.loc[unseen_predictions_2['Label'] < 0, 'Label'] = 0
unseen_predictions_2.head()
# + id="BMYv_OMwUFhN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 334} outputId="b83d6493-f9e8-44d3-c254-e10da4a181ff"
plt.figure(figsize=(20,5))
plot_series(test.index, test['sale_dollars'],"True")
plot_series(test.index, unseen_predictions_2['Label'],'Blend')
# + id="sru6fAMrWnSN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="363bc012-0f30-46b9-e18e-6e165063948d"
calc_smape(test['sale_dollars'].values,unseen_predictions_2['Label'].values)
# + [markdown] id="ebE92ZFGFuEL" colab_type="text"
# The blend model is a major improvement over the baseline model.
# + [markdown] id="-T5E2_VQF1Zt" colab_type="text"
# ## Stacking
# Let's try one more technique, stacking, and see if it improves our results.
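A hedged sketch of the idea behind stacking, with made-up numbers: the base models' predictions become the feature matrix for a meta-learner. Here the meta-learner is plain least squares for illustration; PyCaret's `stack_models` uses its own meta-estimator and cross-validated predictions.

```python
import numpy as np

# columns = predictions from three base models on four dates (made-up values)
base_preds = np.array([
    [120.0, 110.0, 100.0],
    [  0.0,   5.0,  10.0],
    [ 80.0,  70.0,  90.0],
    [ 60.0,  65.0,  55.0],
])
y = np.array([115.0, 4.0, 82.0, 61.0])  # made-up true sales

# meta-learner: least-squares weights over the base predictions
w, *_ = np.linalg.lstsq(base_preds, y, rcond=None)
stacked = base_preds @ w
print(w, stacked)
```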
# + id="NHslO6ficCto" colab_type="code" colab={}
stack_1 = stack_models([tuned_rf,tuned_xgb, tuned_lightgbm])
predict_model(stack_1);
final_stack_1 = finalize_model(stack_1)
unseen_predictions_3 = predict_model(final_stack_1, data=test, round=0)
# + id="kfrbaM4OHPEc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="6611e55e-ef80-4a1e-be0d-0c4465b4cf16"
unseen_predictions_3.loc[unseen_predictions_3['Label'] < 0, 'Label'] = 0
unseen_predictions_3.head(4)
# + id="fX5cIpwrcjkh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d0dffad4-9c12-4f5c-f308-b1d07af27183"
calc_smape(test['sale_dollars'].values,unseen_predictions_3['Label'].values)
# + [markdown] id="v-3MxDSZGcS2" colab_type="text"
# Stacking definitely did not improve the model
# + id="q6_iG787HAhY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 334} outputId="4cbe513f-af9d-4dac-9ca4-45dc180d364d"
plt.figure(figsize=(20,5))
plot_series(test.index, test['sale_dollars'],"True")
plot_series(test.index, unseen_predictions['Label'],'Baseline')
plot_series(test.index, unseen_predictions_2['Label'],'Blend')
plot_series(test.index, unseen_predictions_3['Label'],'Stacking')
# + [markdown] id="TKOmyM1cGgK6" colab_type="text"
# #Next Steps
# The model isn't complete yet - we can always go back and try new combinations of models and features.
# + id="cnZDDXFkkxV6" colab_type="code" colab={}
| examples/TimeSeries_Forecasting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import ipyleaflet as lf
# +
import json
from operator import itemgetter
from traitlets import (CaselessStrEnum, Unicode, Tuple, List, Bool, CFloat, Enum,
Float, CInt, Int, Instance, Dict, Bytes, Any, Union, Undefined)
import ipywidgets as widgets
from ipywidgets import Color
from ipywidgets.widgets.trait_types import TypedTuple, ByteMemoryView
from ipywidgets.widgets.widget_link import Link
# -
classes = {"map": lf.Map, "map-style": lf.MapStyle, "search-control": lf.SearchControl, "legend-control": lf.LegendControl,
"attribution-control": lf.AttributionControl, "scale-control": lf.ScaleControl, "zoom-control": lf.ZoomControl,
"draw-control": lf.DrawControl, "split-map-control": lf.SplitMapControl, "measure-control": lf.MeasureControl,
"layers-control": lf.LayersControl, "full-screen-control": lf.FullScreenControl, "widget-control": lf.WidgetControl,
"control": lf.Control, "choropleth": lf.Choropleth, "geo-data": lf.GeoJSON, "geo-json": lf.GeoJSON,
"feature-group": lf.FeatureGroup, "layer-group": lf.LayerGroup, "marker-cluster": lf.MarkerCluster, "circle": lf.Circle,
"circle-marker": lf.CircleMarker, "rectangle": lf.Rectangle, "polygon": lf.Polygon, "polyline": lf.Polyline,
"ant-path": lf.AntPath, "path": lf.Path, "vector-layer": lf.VectorLayer, "vector-tile-layer": lf.VectorTileLayer,
"heatmap": lf.Heatmap, "video-overlay": lf.VideoOverlay, "image-overlay": lf.ImageOverlay, "wms-layer": lf.WMSLayer,
"local-tile-layer": lf.TileLayer, "tile-layer": lf.TileLayer, "raster-layer": lf.RasterLayer, "popup": lf.Popup,
"marker": lf.Marker, "awesome-icon": lf.AwesomeIcon, "icon": lf.Icon, "ui-layer": lf.UILayer, "layer": lf.Layer}
# Copied from https://github.com/jupyter-widgets/ipywidgets/blob/master/packages/schema/generate-spec.py
def trait_type(trait, widget_list):
attributes = {}
if isinstance(trait, (CaselessStrEnum, Enum)):
w_type = 'string'
attributes['enum'] = trait.values
elif isinstance(trait, Unicode):
w_type = 'string'
elif isinstance(trait, (Tuple, List)):
w_type = 'array'
elif isinstance(trait, TypedTuple):
w_type = 'array'
attributes['items'] = trait_type(trait._trait, widget_list)
elif isinstance(trait, Bool):
w_type = 'bool'
elif isinstance(trait, (CFloat, Float)):
w_type = 'float'
elif isinstance(trait, (CInt, Int)):
w_type = 'int'
elif isinstance(trait, Color):
w_type = 'color'
elif isinstance(trait, Dict):
w_type = 'object'
elif isinstance(trait, (Bytes, ByteMemoryView)):
w_type = 'bytes'
elif isinstance(trait, Instance) and issubclass(trait.klass,
widgets.Widget):
w_type = 'reference'
attributes['widget'] = trait.klass.__name__
# ADD the widget to this documenting list
if (trait.klass not in [i[1] for i in widget_list]
and trait.klass is not widgets.Widget):
widget_list.append((trait.klass.__name__, trait.klass))
elif isinstance(trait, Union):
w_type = 'union'
attributes['types'] = [trait_type(t, widget_list) for t in trait.trait_types]
elif isinstance(trait, Any):
# In our case, these all happen to be values that are converted to
# strings
w_type = 'label'
else:
w_type = trait.__class__.__name__
attributes['type'] = w_type
if trait.allow_none:
attributes['allow_none'] = True
return attributes
def jsdefault(trait):
if isinstance(trait, Instance):
default = trait.make_dynamic_default()
if issubclass(trait.klass, widgets.Widget):
return 'reference to new instance'
else:
default = trait.default_value
if isinstance(default, bytes) or isinstance(default, memoryview) or isinstance(default, type(Undefined)):
default = trait.default_value_repr()
return default
def jsonify(identifier, widget, widget_list):
n = identifier
attributes = []
for name, trait in widget.traits(sync=True).items():
if name == '_view_count':
# don't document this since it is totally experimental at this point
continue
attribute = dict(
name=name,
help=trait.help or '',
default=jsdefault(trait)
)
attribute.update(trait_type(trait, widget_list))
attributes.append(attribute)
return dict(name=n, attributes=attributes)
def create_spec(widget_list):
widget_data = []
for widget_name, widget_cls in widget_list:
if issubclass(widget_cls, Link):
widget = widget_cls((widgets.IntSlider(), 'value'),
(widgets.IntSlider(), 'value'))
elif issubclass(widget_cls, (widgets.SelectionRangeSlider,
widgets.SelectionSlider)):
widget = widget_cls(options=[1])
elif issubclass(widget_cls, lf.LegendControl):
widget = widget_cls({})
elif issubclass(widget_cls, lf.SearchControl):
widget = widget_cls(marker=lf.Marker())
elif issubclass(widget_cls, lf.WidgetControl):
widget = widget_cls(widget=widgets.DOMWidget())
else:
widget = widget_cls()
widget_data.append(jsonify(widget_name, widget, widget_list))
return widget_data
sort_cl = sorted([[k, classes[k]] for k in classes])
specs = create_spec(sort_cl)
with open('../../resources/leaflet-schema.json', "w") as file:
json.dump(specs, file, sort_keys=True, indent=2, separators=(',', ': '))
bm = lf.basemaps
with open('../../resources/basemaps.json', "w") as file:
json.dump(bm, file, sort_keys=True, indent=2, separators=(',', ': '))
| src/py_export/python-model-export.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jproctor-rebecca/lambdaprojects/blob/master/Spotify_recommender_NN_explore_RJProctor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="F7XvTMN9Ljij"
# ## Imports and environment setup
# + id="L1wPWHwpLXSP"
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors, BallTree
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, RNN, Conv2D, MaxPooling2D, GlobalAveragePooling2D, Embedding
from tensorflow.keras.preprocessing import sequence
# random state
rs = 42
# + [markdown] id="MQAysJuQLlIj"
# ## Load and clean data
# + id="hagFotJ0Llvq" outputId="02925a7e-e8d7-458a-92c5-6bb351d2b2f7" colab={"base_uri": "https://localhost:8080/", "height": 389}
# load data
# URL = 'https://github.com/Build-Week-Track-Team-7/explore/blob/main/data.csv.zip?raw=true'
# path_to_zip = tf.keras.utils.get_file('./data.zip', origin=URL, extract=True)
# PATH = os.path.join(os.path.dirname(path_to_zip), 'data')
df_raw = pd.read_csv('./data.csv')
df_raw.head()
# + id="KhwZ79GUgW8y" outputId="dc0408d3-78a0-4b7c-b698-4b72e96523b0" colab={"base_uri": "https://localhost:8080/", "height": 373}
def clean_data(df):
'''
a function that drops columns that add either
bias or noise to the dataset
---
input - dataframe object/ndarray
output - dataframe object/ndarray
'''
# bias
df_cleaned = df.drop('artists', axis=1)
# duplicate/triplicate information may disproportionately weight results
df_cleaned = df_cleaned.drop('release_date', axis=1)
# duplicate/triplicate information may disproportionately weight results
# noise
df_cleaned = df_cleaned.drop('id', axis=1)
# has no meaningful contribution to results
# drop rows with missing values
df_cleaned = df_cleaned.dropna()
# cast column(s) to integer dtype
# df_cleaned = df_cleaned.astype({'acousticness': 'int64',
# 'danceability': 'int64',
# 'energy': 'int64',
# 'instrumentalness': 'int64',
# 'liveness': 'int64',
# 'liveness': 'int64',
# 'loudness': 'int64',
# 'speechiness': 'int64',
# 'tempo': 'int64',
# 'valence': 'int64'}).dtypes
return df_cleaned
data_cleaned = clean_data(df_raw)
print(data_cleaned.shape)
data_cleaned.head()
# + id="JjZEV6wva7tK"
# shuffle dataset to avoid bias
num_rows = len(data_cleaned)  # shuffle all rows rather than hard-coding the row count
data_cleaned = data_cleaned.sample(n=num_rows,
replace=False,
random_state=rs
)
# reindex
data_cleaned = data_cleaned.reset_index(drop=True)
# + id="I6ohND9elGMh" outputId="2f0c9f61-98b3-4af3-b656-387532e05ec0" colab={"base_uri": "https://localhost:8080/", "height": 355}
data_cleaned.head()
# + [markdown] id="S1G4G0ATuDCD"
# #### EDA
# + id="MpEk7mYruCiz" outputId="2a2b8d13-185f-4c5e-c4fa-88960e32cefa" colab={"base_uri": "https://localhost:8080/", "height": 310}
data_cleaned.describe()
# + id="yI2gvcYYuuvg" outputId="e1287650-2e43-4a1b-80e0-ad67261ba20a" colab={"base_uri": "https://localhost:8080/", "height": 169}
data_cleaned.describe(exclude='number')
# + id="3E44IB4H5fPO" outputId="79fc67ac-619c-4c72-8b2f-0d80241c52dc" colab={"base_uri": "https://localhost:8080/"}
data_cleaned.info()
# + id="y2CaowZ62E-j" outputId="79d89aad-3ae0-4ae1-e094-fab28646c047" colab={"base_uri": "https://localhost:8080/", "height": 231}
# scatter plot of features as they relate to sampling of individual songs
# features within the -25 to 0 range
plt.scatter(data_cleaned['loudness'][:20], data_cleaned['name'][:20], c='navy')
plt.show()
# + id="rI7LTGnkwS1R" outputId="5b3265c7-7a20-4390-ada4-23f76fd389d6" colab={"base_uri": "https://localhost:8080/", "height": 233}
# features within the 0-1 range
plt.scatter(data_cleaned['acousticness'][:20], data_cleaned['name'][:20], c='red')
plt.scatter(data_cleaned['danceability'][:20], data_cleaned['name'][:20], c='blue')
plt.scatter(data_cleaned['energy'][:20], data_cleaned['name'][:20], c='orange')
plt.scatter(data_cleaned['liveness'][:20], data_cleaned['name'][:20], c='lightgreen')
plt.scatter(data_cleaned['valence'][:20], data_cleaned['name'][:20], c='tomato')
plt.show()
# + id="qgTY05hf00xN" outputId="5a325e70-4919-41a9-c02a-a45deaf3c38a" colab={"base_uri": "https://localhost:8080/", "height": 233}
# features that may have some correlation
plt.scatter(data_cleaned['instrumentalness'][:20], data_cleaned['name'][:20], c='pink')
plt.scatter(data_cleaned['speechiness'][:20], data_cleaned['name'][:20], c='plum')
plt.show()
# + id="l3rmY6XN0fI4" outputId="45ddb8b0-0162-437f-f7da-98e1211f622b" colab={"base_uri": "https://localhost:8080/", "height": 233}
# boolean values
plt.scatter(data_cleaned['explicit'][:20], data_cleaned['name'][:20], c='yellow')
plt.scatter(data_cleaned['mode'][:20], data_cleaned['name'][:20], c='peachpuff')
plt.show()
# + id="yOSAyaDf1zPw" outputId="c0bbfaf4-1d5c-4d70-de0b-d16a3fb23bc9" colab={"base_uri": "https://localhost:8080/", "height": 233}
# features within the 1 - 11 range
plt.scatter(data_cleaned['key'][:20], data_cleaned['name'][:20], c='gray')
plt.show()
# + id="jKp9Y5mD2xM-" outputId="661b0850-9b44-43af-8422-0e5ce126f081" colab={"base_uri": "https://localhost:8080/", "height": 232}
# features within the 70 - 170 range
plt.scatter(data_cleaned['tempo'][:20], data_cleaned['name'][:20], c='goldenrod')
plt.show()
# + id="uyT4-ayP29xW" outputId="952ad571-c9c2-4175-c145-26ec9356d6f3" colab={"base_uri": "https://localhost:8080/", "height": 231}
# features within years 1921 - 2020
plt.scatter(data_cleaned['year'][:20], data_cleaned['name'][:20], c='blueviolet')
plt.show()
# + id="xBuvRHVj1ko2" outputId="6d61374a-1886-4b96-9600-4cf8f525beb8" colab={"base_uri": "https://localhost:8080/", "height": 265}
# features within the 200000 - 800000 range
plt.scatter(data_cleaned['duration_ms'][:20], data_cleaned['name'][:20], c='green')
plt.show()
# + [markdown] id="d4ppl106Lmgg"
# ## Additional preprocessing of data
# Pipeline 1 - (instantiate, fit_transform)
#
#
# * StandardScaler
#
# Pipeline 2a, 2b, 2c, and 2d - (instantiate, fit, predict)
#
# 2a:
# * K-NN - (exploring in isolation for similarity clustering efficiencies)
#
# 2b:
#
# * K-NN - (exploring for similarity clustering efficiencies)
# * PCA - (exploring for efficiency boost not dimensionality reduction b/c this is not needed; already done by limiting features)
#
#
# 2c:
# * K-NN - (exploring for similarity clustering efficiencies)
# * t_SNE - (exploring for additional efficiencies in dimensionality reduction)
#
# 2d:
# * t_SNE - (exploring in isolation for efficiencies in dimensionality reduction)
#
# + [markdown] id="uN4WmSbp-rfL"
# #### Pipeline 1
# * StandardScaler
# + id="NCc1AeXicUcv"
# create feature matrix
X = data_cleaned.drop('name', axis=1)
y = data_cleaned['name']
# + id="y50QQ-LNzBRj" outputId="a0ed8f68-3f23-4646-900e-c8f96ad65d16" colab={"base_uri": "https://localhost:8080/"}
# normalize/standardize data using a pipeline (instantiate, fit, transform)
# onehotencoder, ordinal encoder, standard scaler in order
# instantiate pipeline
pipe_1 = Pipeline([
#('onehotencoder', OneHotEncoder()),
#('ordinalencoder', OrdinalEncoder()),
('scaler', StandardScaler()),
])
# fit data to transformer and transform data
# NOTE: the transformed result is discarded here; assign it back
# (X = pipe_1.fit_transform(X)) to actually scale the features
pipe_1.fit_transform(X, y=None)
# check data
print(X.shape)
print(y.shape)
# + id="BHfXdgl6LnA5" outputId="b3c62ca2-3d5a-4d8e-cefd-22bee5a24156" colab={"base_uri": "https://localhost:8080/"}
# split data - train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.20,
random_state=42
)
# X_arr = np.asarray(X)
# y_arr = np.asarray(y)
# # split data - train/test split
# X_train, X_test, y_train, y_test = train_test_split(X_arr, y_arr,
# test_size=0.20,
# random_state=42
# )
# check data
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# + [markdown] id="chx4RAkO-wbv"
# #### Pipeline *2a*
# 2a:
#
# K-NN - (exploring in isolation for similarity clustering efficiencies)
# + id="XPAn39Vt3cAX"
# hyperparameters for piplines
# K-NN estimator
n_neigh = 25
# must be an integer or it will throw an error
alg = 'ball_tree'
#TFIDF Vectorizer
max_df=5
max_fea=10000
min_df=1
st_wds='english'
use_idf=True
# SVD
n_comp=14
# must be an integer
_alg='arpack'
# as an eigensolver for greater efficiency while maintaining accuracy when interfacing with the Spotify API
# can switch to 'randomized' for greater efficiency if needed
# KMeans
n_clus=10
init='k-means++'
max_iter=100
n_init=1
verbose=2
# + id="mtV4JomQ-0V1" outputId="26e0eac5-57c2-4c9c-cdfb-f318055d48a2" colab={"base_uri": "https://localhost:8080/"}
# instantiate model
nbrs = NearestNeighbors(n_neighbors=n_neigh,
algorithm=alg)
# fit data
nbrs = nbrs.fit(X_train)
# check data
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# + id="j7fOdS1qDT9v"
#dir(nbrs)
# + [markdown] id="o298AkdJ-zzV"
# #### Pipeline *2b*
# 2b:
#
# Tfidf - (vectorizer)
#
# t-SVD - (exploring for efficiency boost not dimensionality reduction b/c this is not needed; already done by limiting features)
#
# KMeans - (exploring similarities through clustering)
# + id="40sPWILM-28H" outputId="25c237a8-ae86-47cb-ab6d-e0dfcd37a32b" colab={"base_uri": "https://localhost:8080/"}
# instantiate pipeline
pipe_2b = Pipeline([
('tfidf', TfidfVectorizer(max_df=max_df,
max_features=max_fea,
min_df=min_df,
stop_words=st_wds,
use_idf=True)
),
('svd', TruncatedSVD(n_components=n_comp,
algorithm=_alg)
),
('km', KMeans(n_clusters=n_clus,
init=init,
max_iter=max_iter,
n_init=n_init,
verbose=verbose)
),
])
# fit data to transformer and transform data
km = pipe_2b.fit(X_train)
# check data
print()
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# + id="totjPFeA-57b"
# + [markdown] id="UwipvKhWMm0l"
# #### Pipeline *2c*
# 2c:
#
# K-NN - (exploring for similarity clustering efficiencies)
#
# t_SNE - (exploring for additional efficiencies in dimensionality reduction)
# + id="zMvuBQ8jMndl"
# + id="ChwHlQ9YMn58"
# + [markdown] id="NUKhkr5iMoSs"
# #### Pipeline *2d*
# 2d:
#
# t_SNE - (exploring for additional efficiencies in dimensionality reduction)
# + id="4BOP0J5AMot3"
# + id="kqinIt4_Mo9Q"
# + [markdown] id="hTtXVJRQLnYu"
# ## Neural Network model(s)
# NN 1 - (instantiate, compile, fit)
# * CNN
#
#
# NN 2 - (instantiate, compile, fit)
# * LSTM
#
# NN 3 - (instantiate, compile, fit)
# * RNN (deep and wide)
# + [markdown] id="AKQAU1fxQL3w"
# #### NN 1 - (instantiate, compile, fit)
#
# CNN
# + id="ENLaByJIQpX4"
# define and tune hyperparameters within NN
# CNN
batch_size = 32
epochs = 5
# LSTM layer(s)
# cast to int: float.is_integer() returns a bool, not a usable unit count
units_lstm = int(len(np.unique(y_train)) * .75)
do = 0.2
# Embedding layer
max_feat = 10000
# cast to int, as with units_lstm above
units_em = int(len(np.unique(y_train)) * .75)
# Dense hidden layer(s)
act_hid = 'relu'
# Dense output layer
units_out = len(np.unique(y_train))
act_out = 'softmax'
# for use with multiclass
# general use
maxlen = 80
shape = X_train.shape[1]
# compile and fit
loss='sparse_categorical_crossentropy'
batch_sz = 32
opt = 'nadam'
metric='accuracy'
X_train = np.array([np.expand_dims(x_, 1) for x_ in X_train.values])
X_test = np.array([np.expand_dims(x_, 1) for x_ in X_test.values])
# + id="KkH1einZLoL3"
# check data
print()
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# + id="21nj1GYPDjIR"
# number of labels in the output layer
len(np.unique(y_train))
# + id="zNmzjdZmDjLc"
# Setup Architecture
# instantiate model
model = Sequential()
# -- preprocessing portion of CNN network --
# hidden pre-processing layer 1 with implicit input layer
# creates feature map
# activation function used to produce updated weights (allows backpropagation)
model.add(Conv2D(32, (3,3), activation='relu', input_shape=(32,32, 15)))
# hidden preprocessing layer 2
# subsample values
model.add(MaxPooling2D((2,2)))
# hidden preprocessing layer 3
# creates feature map
# activation function used to produce updated weights (allows backpropagation)
model.add(Conv2D(64, (3,3), activation='relu'))
# hidden preprocessing layer 4
# subsample values
model.add(MaxPooling2D((2,2)))
# hidden preprocessing layer 5
# creates feature map
# activation function used to produce updated weights (allows backpropagation)
model.add(Conv2D(64, (3,3), activation='relu'))
# hidden preprocessing layer 6
# standardization/normalization
model.add(GlobalAveragePooling2D())
# a Fancy version of Flatten with subsample values
# -- fully connected portion of CNN network --
# hidden layer 7
model.add(Dense(64, activation='relu'))
# output layer 8
model.add(Dense(10, activation='softmax'))
model.summary()
# + id="sX44kzpvDjfj"
# Compile Model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# + id="xBMv04p0Djbo"
# NOTE: train_data_gen, total_train, val_data_gen, and total_val are never
# defined in this notebook; this cell assumes image-style data generators
results = model.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
# + id="_msFkFGuDjFV"
# + [markdown] id="_7TipkBKQMu7"
# #### NN 2 - (instantiate, compile, fit)
#
# LSTM
# + id="jNCOUh09i_0R"
# check data
print()
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# + id="LWXWGmGcUNSM" outputId="8da7af4d-59ac-4ba5-da15-a4e8dc03e8ca" colab={"base_uri": "https://localhost:8080/"}
# number of labels in the output layer
len(np.unique(y_train))
# + id="Bsz4qeki5Vgp"
## TODO...tokenize lemmas
# + id="GOFmiLAsQNn1"
# instantiating a Sequential class
model = Sequential()
# explicit input layer
#model.add(Embedding(max_feat, units_em))
# hidden layer 1
# input_shape must be a (timesteps, features) tuple; return_sequences is
# dropped because this layer feeds the output layer directly
model.add(LSTM(units=units_lstm, dropout=do, input_shape=(shape, 1)))
# hidden layer 2
#model.add(LSTM(units=units_lstm, dropout=do, return_sequences=True))
# hidden layer 3
#model.add(LSTM(units=units_lstm, dropout=do))
# hidden layer 4
#model.add(Dense(units=units_out, activation=act_hid))
# output layer
model.add(Dense(units_out, activation=act_out))
model.compile(loss=loss,
optimizer=opt,
metrics=[metric])
# fit model
results = model.fit(X_train, y_train,
batch_size=32,
epochs=5,
steps_per_epoch=1000,
validation_data=(X_test, y_test)
)
# display model
model.summary()
# + id="MUgNDlX_ZMaX"
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(results.history['loss'])
plt.plot(results.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show();
# + id="aYLHlwxMZd6Z"
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(results.history['accuracy'])
plt.plot(results.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show();
# + [markdown] id="_uHaIL82QON_"
# #### NN 3 - (instantiate, compile, fit)
#
# RNN (deep and wide)
# + id="3W1qTCmiQOol"
# + [markdown] id="v0D88AKDLoiZ"
# ## Evaluate model
# + id="-DoFaBDwLo5r"
# + [markdown] id="PGNUFy8hLpUK"
# ## Visualize results
# + id="u5MGl6KJLqMR"
# + [markdown] id="XzdVQ9ZKLqt6"
# ## Conclusions
# + id="YQ5UXtFmLrGN"
# + [markdown] id="F3B-_uYaOoHe"
# ## Export to application
# + id="T7HWZoZhOjpu"
# convert to json
| Spotify_recommender_NN_explore_RJProctor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from scipy.optimize import fsolve
import pickle
# %matplotlib notebook
import matplotlib.pyplot as plt
# -
# To understand the effects of noise (or limiting mag) on one's ability to recover the time of explosion (or better, the time of first light for a young SN), we construct a simple generative model to simulate the flux from the SN.
#
# In [Zheng & Filippenko (2017)](http://adsabs.harvard.edu/abs/2017ApJ...838L...4Z), a broken power-law parameterization of SN Ia light curves is introduced. This parameterization is somewhat physically motivated, though some of its assumptions break down for certain SNe Ia. A major advantage of this method is that it covers the peak and turn-over of SN light curves, so there is no need to artificially truncate the data (to ~5 d after explosion, for example) in order to fit only the early rise.
#
# The formulation is:
#
# $$L = A' \left(\frac{t - t_0}{t_b}\right)^{\alpha_\mathrm{r}} \left[1 + \left(\frac{t - t_0}{t_b}\right)^{s\alpha_\mathrm{d}}\right]^{-2/s}$$
#
# which has a peak value when:
#
# $$ t_p = t_b \times \left(-\frac{\alpha_1 + 1}{\alpha_2 + 1}\right)^{1/[s(\alpha_1 - \alpha_2)]} $$
#
# and:
#
# $$ \begin{align} \alpha_\mathrm{r} & = & 2(\alpha_1 + 1) \\ \alpha_\mathrm{d} & = & \alpha_1 - \alpha_2 \end{align}. $$
#
# With this parameterization, the challenge is to figure out reasonable values from which to draw the various parameters.
#
# From theoretical arguments, we expect that $\alpha_\mathrm{r} \approx 2$, and empirical results largely show this to be true, however, a small handful of SNe have been observed to have $\alpha_\mathrm{r} \approx 1$. Therefore we select $\alpha_\mathrm{r}$ from $[0.75, 2.5]$.
#
# $t_b$ is related to the rise time, and we draw this from $\mathcal{N}(18,1)$, based on Ganeshalingam et al. (2011).
#
# $s$ is a smoothing parameter that does not have a strong physical prior. It should be of order unity, and we draw this from a truncated Gaussian $\mathcal{N}(1.2,0.3)$ that does not go below 0. The precise choice of $s$ is not super important as this is largely degenerate with $\alpha_\mathrm{d}$, which is selected based on $\Delta{m}_{15}$.
#
# We draw the absolute magnitude and decline rate from a multivariate normal distribution:
#
# $$X \sim \mathcal{N}\left(\begin{bmatrix} -19.3 \\ 1.1 \end{bmatrix}, \begin{bmatrix} 0.04 & 0.042 \\ 0.042 & 0.09\end{bmatrix} \right)$$
#
# From this, we can determine $\alpha_\mathrm{d}$ by setting $\Delta{m}_{15} = 2.5 \log\left(\frac{L(t=t_p)}{L(t=t_p+15)}\right)$, which in turn, allows a determination of $A'$ from the distance to the SN and $M$.
#
# We assume a Hubble constant $H_0 = 72 \, \mathrm{km \, s^{-1} \, Mpc^{-1}}$, and limit our analysis to $d < 100 \mathrm{Mpc}$. This corresponds to $z = (H_0\, d)/c \approx 0.024$.
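Before drawing random parameters, it is worth sanity-checking the parameterization itself. The snippet below evaluates the broken power law for one illustrative parameter set (not drawn from the priors above) and confirms that the numerical maximum of $L(t)$ lands at the analytic $t_p$.

```python
import numpy as np

# illustrative parameters, chosen only to exercise the formulas above
alpha_1, alpha_2, s, t_b, t_0 = 1.0, -2.0, 1.0, 20.0, 0.0
alpha_r = 2 * (alpha_1 + 1)   # rise index
alpha_d = alpha_1 - alpha_2   # decline index

def luminosity(t, A=1.0):
    x = (t - t_0) / t_b
    return A * x**alpha_r * (1 + x**(s * alpha_d))**(-2 / s)

# analytic peak time from the expression above
t_p = t_b * (-(alpha_1 + 1) / (alpha_2 + 1))**(1 / (s * (alpha_1 - alpha_2)))

# numerical peak on a fine grid
t = np.linspace(0.1, 60, 200001)
t_num = t[np.argmax(luminosity(t))]
print(t_p, t_num)  # both ≈ 20 * 2**(1/3) ≈ 25.2
```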
# +
def delta_m15_root(alpha_d, t_p=18, alpha_r=2, s=1, dm15=1.1):
'''Root solver for alpha_d based on Delta m15
Using Eqn. 4 from Zheng & Filippenko (2017), ApJL, 838, 4, it is
possible to calculate the ratio of flux from a SN Ia at t = t_peak
and t = t_peak + 15. If t_p, alpha_r, and s are known, then the
ratio of flux should equal Delta m15. The scipy.optimize root
finder fsolve is used to solve for the value of alpha_d.
Parameters
----------
alpha_d : float
Power-law index for the late-time decline of the SN
t_p : float, optional (default=18)
Time to peak of the SN light curve
alpha_r : float, optional (default=2)
Power-law index for initial rise of the SN light curve
s : float, optional (default=1)
Smoothing parameter for the light curve
dm15 : float, optional (default=1.1)
Delta m15
Returns
-------
alpha_d_root
The value of alpha_d that results in a SN light curve
with a 15 day decline rate = Delta m15
'''
t_b = t_p/((-alpha_r/2)/(alpha_r/2 - alpha_d))**(1/(s*(alpha_d)))
Ltp = (t_p/t_b)**alpha_r * (1 + (t_p/t_b)**(s*alpha_d))**(-2/s)
Ltp_15 = ((t_p + 15)/t_b)**alpha_r * (1 + ((t_p + 15)/t_b)**(s*alpha_d))**(-2/s)
return 2.5*np.log10(Ltp/Ltp_15) - dm15
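A self-contained check of the root solve described above (`delta_m15_root` is repeated so the cell runs on its own): for the default `t_p`, `alpha_r`, `s`, and `dm15 = 1.1`, `fsolve` recovers an `alpha_d` whose light curve declines by exactly 1.1 mag over the 15 days after peak. The starting guess of 3.0 is an arbitrary choice near the expected root.

```python
import numpy as np
from scipy.optimize import fsolve

# repeat of the residual function defined above, for self-containment
def delta_m15_root(alpha_d, t_p=18, alpha_r=2, s=1, dm15=1.1):
    t_b = t_p / ((-alpha_r / 2) / (alpha_r / 2 - alpha_d))**(1 / (s * alpha_d))
    Ltp = (t_p / t_b)**alpha_r * (1 + (t_p / t_b)**(s * alpha_d))**(-2 / s)
    Ltp_15 = ((t_p + 15) / t_b)**alpha_r * (1 + ((t_p + 15) / t_b)**(s * alpha_d))**(-2 / s)
    return 2.5 * np.log10(Ltp / Ltp_15) - dm15

# solve for the decline index that reproduces Delta m15 = 1.1
alpha_d = fsolve(delta_m15_root, x0=3.0)[0]
print(alpha_d, delta_m15_root(alpha_d))  # residual should be ~0
```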
class SimSnIa():
def __init__(self, name=None):
'''initialize the simulated SN
Attributes
----------
name_ : str (default=None)
Name of the SN object
'''
self.name_ = name
def draw_dist_in_volume(self, d_max=100, H_0=72):
'''simulate SN at a random distance within a fixed volume
Parameters
----------
d_max : int, optional (default=100)
Maximum distance for the simulated SNe, units in Mpc
H_0 : float, optional (default=72)
Value of the Hubble constant (in km/s/Mpc) used to convert the
distance to the SN to a redshift, z.
Attributes
----------
dist_ : float
Distance to the SN in Mpc
z_ : float
Redshift to the SN
mu_ : float
distance modulus to the SN
'''
self.dist_ = np.random.uniform()**(1/3)*d_max
self.z_ = H_0*self.dist_/2.997942e5
self.mu_ = 5*np.log10(self.dist_) + 25
def draw_alpha_r(self, alpha_low=1, alpha_high=2.5):
'''draw random value for early rise power-law index
Select a random value from a flat distribution between
alpha_low and alpha_high to determine the power-law index
for the initial rise of the SN light curve.
Parameters
----------
alpha_low : float, optional (default=1)
Minimum value for the power-law index of the early rise
alpha_high : float, optional (default=2.5)
Maximum value for the power-law index of the early rise
Attributes
----------
alpha_r_ : float
Power-law index for initial rise of the SN light curve
'''
self.alpha_r_ = np.random.uniform(alpha_low, alpha_high)
def draw_rise_time(self, mu_rise=18, sig_rise=1):
'''draw random value for the light curve rise time
Select a random value from a gaussian distribution with
mean, mu_rise (default=18), and standard deviation,
sig_rise (default=1). The defaults are selected based on the
results from Ganeshalingam et al. 2011, MNRAS, 416, 2607
which found that the rise time for SNe Ia can be described
as ~ N(18.03, 0.0576).
Parameters
----------
mu_rise : float, optional (default=18)
Mean value for the rise time of SN Ia
sig_rise : float, optional (default=1)
Standard deviation of the rise time distribution for
SNe Ia
Attributes
----------
t_p_ : float
Time for the light curve to reach peak brightness
'''
self.t_p_ = np.random.normal(mu_rise, sig_rise)
def draw_smoothing_parameter(self, mu_s=2, sig_s=0.5):
'''draw random value for the smoothing parameter
Select a random value from a truncated gaussian distribution
with mean, mu_s (default=2), and standard deviation,
sig_s (default=0.5). This parameter is not physical, and
is largely degenerate with alpha_decline. It is drawn from
a gaussian distribution while alpha_decline is selected to
ensure a physical value of delta m15.
Parameters
----------
mu_s : float, optional (default=2)
Mean value for the smoothing parameter
sig_s : float, optional (default=0.5)
Standard deviation of the smoothing parameter
Attributes
----------
s_ : float
Smoothing parameter for the light curve
'''
s = -1
while s < 0:
s = np.random.normal(mu_s, sig_s)
self.s_ = s
def draw_mb_deltam15(self, pkl_file='phillips_kde.pkl'):
'''Draw random M_b and Delta m15 values
Draw from a KDE estimate based on Burns et al. 2018 to get
M_b and Delta m15 for a "normal" SN Ia.
Parameters
----------
pkl_file : str, filename (default='phillips_kde.pkl')
Pickle file that contains the KDE estimate of the
Phillips relation
Attributes
----------
M_b_ : float
Rest-frame absolute magnitude in the B band at the
time of peak brightness
dm15_ : float
Delta m15 for the SN
'''
with open(pkl_file, 'rb') as file:
sn_tuple = pickle.load(file)
kde, phillips_scaler = sn_tuple
scaled_sample = kde.sample(1)[0]
self.dm15_, self.M_b_= phillips_scaler.inverse_transform(scaled_sample)
def calc_alpha_d(self, alpha_d_guess=2):
'''Calculate the value of alpha_d based on Delta m15
Parameters
----------
alpha_d_guess : float, optional (default=2)
Initial guess to solve for the root of the alpha_d eqn
Attributes
----------
alpha_d_ : float
Power-law index for the late-time decline of the SN
'''
if not (hasattr(self, 't_p_') and hasattr(self, 'alpha_r_') and
hasattr(self, 's_') and hasattr(self, 'dm15_')):
self.draw_alpha_r()
self.draw_rise_time()
self.draw_smoothing_parameter()
self.draw_mb_deltam15()
alpha_d = fsolve(delta_m15_root, alpha_d_guess,
args=(self.t_p_, self.alpha_r_,
self.s_, self.dm15_))
self.alpha_d_ = float(alpha_d)
def calc_a_prime(self):
'''Calculate the value of Aprime
Determine the normalization constant to generate a
SN light curve with peak flux equal to the luminosity
associated with M_b.
Attributes
----------
t_b_ : float
"break time" for the broken power-law model
a_prime_ : float
Amplitude for the SN light curve
'''
if not (hasattr(self, 'alpha_d_') and hasattr(self, 'mu_')):
self.draw_dist_in_volume()
self.calc_alpha_d()
m_peak = self.M_b_ + self.mu_
f_peak = 10**(0.4*(25-m_peak))
t_b = self.t_p_/((-self.alpha_r_/2)/(self.alpha_r_/2 - self.alpha_d_))**(1/(self.s_*(self.alpha_d_)))
model_peak = ((self.t_p_)/t_b)**self.alpha_r_ * (1 + ((self.t_p_)/t_b)**(self.s_*self.alpha_d_))**(-2/self.s_)
a_prime = f_peak/model_peak
self.t_b_ = t_b
self.a_prime_ = a_prime
def calc_ft(self, t_obs, t_exp=0):
'''Calculate the model flux at input times t_obs
Use Eqn. 4 of Zheng & Filippenko 2017 to determine the
flux from the SN at all input times t_obs.
Parameters
----------
t_obs : array-like of shape = [n_obs]
Times at which to calculate the flux from the SN
t_exp : float, optional (default=0)
Time of explosion for the SN model
Attributes
----------
t_obs_ : array-like of shape = [n_obs]
Times at which the SN flux is measured
t_exp_ : float
SN time of explosion
model_flux : array-like of shape = [n_obs]
The model flux at all times t_obs, assuming no noise
contributes to the signal from the SN
'''
if not hasattr(self, 'a_prime_'):
self.calc_a_prime()
pre_explosion = np.logical_not(t_obs > t_exp)
model_flux = np.empty_like(t_obs, dtype=float)  # force float so integer t_obs does not truncate the flux
model_flux[pre_explosion] = 0
t_rest = t_obs[~pre_explosion]/(1 + self.z_)
model_flux[~pre_explosion] = self.a_prime_ * (((t_rest - t_exp)/self.t_b_)**self.alpha_r_ *
(1 + ((t_rest - t_exp)/self.t_b_)**(self.s_*self.alpha_d_))**(-2/self.s_))
self.t_obs_ = t_obs
self.t_exp_ = t_exp
self.model_flux_ = model_flux
def calc_noisy_lc(self, sigma_sys=20):
'''Calculate SN light curve with systematic and statistical noise
Parameters
----------
sigma_sys : float, optional (default=20)
Systematic noise term to noisify the light curve. Telescope
system is assumed to have a zero-point of 25, such that
m = 25 - 2.5*log10(flux). Thus,
sigma_sys(5-sigma limiting mag) = 10**(0.4*(25 - m_lim))/5.
Default corresponds to a limiting mag of 20.
Attributes
----------
cnts : array-like of shape = [n_obs]
noisy flux from the SN light curve
cnts_unc : array-like of shape = [n_obs]
uncertainty on the noisy flux measurements
'''
if not hasattr(self, 'model_flux_'):
# calc_ft() has no default for t_obs, so it cannot be called here automatically
raise AttributeError('call calc_ft(t_obs) before calc_noisy_lc()')
cnts = np.zeros_like(self.t_obs_, dtype=float)
cnts_unc = np.zeros_like(self.t_obs_, dtype=float)
pre_explosion = np.logical_not(self.t_obs_ > self.t_exp_)
cnts[pre_explosion] = np.random.normal(0, sigma_sys, size=sum(pre_explosion))
cnts_unc[pre_explosion] = np.ones_like(self.t_obs_)[pre_explosion]*sigma_sys
sn_flux = self.model_flux_[~pre_explosion]
sn_with_random_noise = sn_flux + np.random.normal(np.zeros_like(sn_flux), np.sqrt(sn_flux))
sn_with_random_plus_sys = sn_with_random_noise + np.random.normal(0, sigma_sys, size=len(sn_flux))
# total uncertainty = systematic + Poisson
sn_uncertainties = np.hypot(np.sqrt(np.maximum(sn_with_random_noise,
np.zeros_like(sn_with_random_noise))),
sigma_sys)
cnts[~pre_explosion] = sn_with_random_plus_sys
cnts_unc[~pre_explosion] = sn_uncertainties
self.cnts_ = cnts
self.cnts_unc_ = cnts_unc
# -
# To account for the noise in the telescope system, a systematic contribution is added to the SN flux, where the magnitude of the systematic term is related to the limiting magnitude of the telescope. For example, when we adopt:
# $$ m = 25 - 2.5\log(\mathrm{counts}),$$
# for an $m = 20\,\mathrm{mag}$ $5\sigma$ limit, counts = 100 and therefore `sigma_sys` = 20.
#
# Using this generative model, we can incorporate the effects of a detection limit via the `sigma_sys` variable. In particular, smaller-aperture telescopes will have larger values of `sigma_sys`, as follows:
#
#
#
# | $m_\mathrm{lim}$ | counts | `sigma_sys` |
# | ---- | ---- | ---- |
# | 22.0 | 15.8 | 3.17 |
# | 21.5 | 25.1 | 5.02 |
# | 21.0 | 39.8 | 7.96 |
# | 20.0 | 100.0 | 20.00 |
# | 19.0 | 251.2 | 50.23 |
# | 18.0 | 631.0 | 126.19 |
# | 17.0 | 1584.9 | 316.98 |
# | 16.0 | 3981.1 | 796.21 |
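# The table values follow directly from the zero-point relation above, `sigma_sys` = counts at the limiting magnitude divided by the detection significance. A minimal sketch (the function name is ours):

```python
import numpy as np

def sigma_sys_from_mlim(m_lim, zp=25.0, n_sigma=5.0):
    # counts at the limiting magnitude for zero-point zp,
    # divided by the significance of the detection threshold
    return 10**(0.4*(zp - m_lim))/n_sigma

for m_lim in [22.0, 21.0, 20.0, 18.0]:
    print(m_lim, round(sigma_sys_from_mlim(m_lim), 2))
```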
# +
sn1 = SimSnIa()
sn1.calc_a_prime()
sn1.calc_ft(np.arange(-10,35))
sn1.calc_noisy_lc(sigma_sys=100)
plt.plot(np.arange(-10,35), sn1.model_flux_)
plt.errorbar(np.arange(-10,35), sn1.cnts_, sn1.cnts_unc_, fmt='o')
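# `draw_dist_in_volume` uses the inverse-CDF trick d = u**(1/3) * d_max so that SNe are distributed uniformly per unit volume (p(d) proportional to d**2). A quick numerical check, independent of the class: the implied median distance should sit at d_max/2**(1/3), not d_max/2.

```python
import numpy as np

rng = np.random.default_rng(42)
d_max = 100  # Mpc
dists = rng.uniform(size=200_000)**(1/3)*d_max

# for p(d) proportional to d**2, the median distance is d_max/2**(1/3) ~ 0.794*d_max
print(np.median(dists))
```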
# +
np.random.seed(42)
fig, ax = plt.subplots(figsize=(10,6))
for i in range(10):
sn = SimSnIa()
sn.draw_dist_in_volume(d_max=600)
# override a few of the random distances with fixed values (in Mpc)
dist_overrides = {1: 329, 4: 380, 6: 750, 7: 200, 9: 400}
if i in dist_overrides:
sn.dist_ = dist_overrides[i]
sn.z_ = 72*sn.dist_/2.9979e5
sn.mu_ = 5*np.log10(sn.dist_)+25
sn.draw_alpha_r()
sn.draw_rise_time()
sn.draw_smoothing_parameter()
sn.draw_mb_deltam15()
sn.calc_alpha_d()
sn.calc_a_prime()
if i == 0:
t_start = np.random.randint(3)
else:
t_start += np.random.randint(10)
t_obs = np.arange(0, 75, 3, dtype=float) + np.random.uniform(-0.25/24,0.25/24,size=25)
sn.calc_ft(t_obs)
sn.calc_noisy_lc(sigma_sys=8)
mag = 25 - 2.5*np.log10(sn.cnts_)
mag_unc = 1.0857*sn.cnts_unc_/sn.cnts_
print(np.nanmin(mag))
if np.nanmin(mag) < 18.7:
ax.plot((t_obs+t_start)*(1+sn.z_), mag, 'o',
c='0.8', mec='k', mew=1, ms=10)
ax.plot((t_obs+t_start)*(1+sn.z_), mag, '-',
lw=0.7, alpha=0.6, zorder=-10)
else:
ax.plot((t_obs+t_start)*(1+sn.z_), mag, '+',
c='0.2', mew=2, ms=10)
ax.plot((t_obs+t_start)*(1+sn.z_), mag, '-',
lw=0.7, alpha=0.6, zorder=-10)
ax.hlines(18.7, 0, 100, linestyles='--')
ax.set_ylim(21.5, 17)
ax.set_xlim(0,100)
ax.minorticks_on()
ax.tick_params(which='both',top=True, right=True, labelsize=14)
fig.tight_layout()
# fig.savefig("/Users/adamamiller/Desktop/BTS.pdf")
# -
| playground/simulate_lc.ipynb |
# ---
# jupyter:
# accelerator: GPU
# colab:
# collapsed_sections: []
# name: 'ECCV 2020: Habitat-sim Interactivity'
# provenance: []
# jupytext:
# cell_metadata_filter: -all
# formats: nb_python//py:percent,colabs//ipynb
# notebook_metadata_filter: all
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# %% [markdown]
# <a href="https://colab.research.google.com/github/facebookresearch/habitat-sim/blob/main/examples/tutorials/colabs/ECCV_2020_Interactivity.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# %% [markdown]
# #Habitat-sim Interactivity
#
# This use-case driven tutorial covers Habitat-sim interactivity, including:
# - Adding new objects to a scene
# - Kinematic object manipulation
# - Physics simulation API
# - Sampling valid object locations
# - Generating a NavMesh including STATIC objects
# - Agent embodiment and continuous control
# %%
# @title Installation { display-mode: "form" }
# @markdown (double click to show code).
# !curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/main/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s
# %%
# @title Path Setup and Imports { display-mode: "form" }
# @markdown (double click to show code).
# %cd /content/habitat-sim
## [setup]
import math
import os
import random
import sys
import git
import magnum as mn
import numpy as np
# %matplotlib inline
from matplotlib import pyplot as plt
from PIL import Image
import habitat_sim
from habitat_sim.utils import common as ut
from habitat_sim.utils import viz_utils as vut
try:
import ipywidgets as widgets
from IPython.display import display as ipydisplay
# For using jupyter/ipywidget IO components
HAS_WIDGETS = True
except ImportError:
HAS_WIDGETS = False
if "google.colab" in sys.modules:
os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/bin/ffmpeg"
repo = git.Repo(".", search_parent_directories=True)
dir_path = repo.working_tree_dir
# %cd $dir_path
data_path = os.path.join(dir_path, "data")
output_directory = "examples/tutorials/interactivity_output/" # @param {type:"string"}
output_path = os.path.join(dir_path, output_directory)
if not os.path.exists(output_path):
os.mkdir(output_path)
# define some globals the first time we run.
if "sim" not in globals():
global sim
sim = None
global obj_attr_mgr
obj_attr_mgr = None
global prim_attr_mgr
prim_attr_mgr = None
global stage_attr_mgr
stage_attr_mgr = None
global rigid_obj_mgr
rigid_obj_mgr = None
# %%
# @title Define Configuration Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# @markdown This cell defines a number of utility functions used throughout the tutorial to make simulator reconstruction easy:
# @markdown - make_cfg
# @markdown - make_default_settings
# @markdown - make_simulator_from_settings
def make_cfg(settings):
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.gpu_device_id = 0
sim_cfg.scene_id = settings["scene"]
sim_cfg.enable_physics = settings["enable_physics"]
# Note: all sensors must have the same resolution
sensor_specs = []
if settings["color_sensor_1st_person"]:
color_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
color_sensor_1st_person_spec.uuid = "color_sensor_1st_person"
color_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.COLOR
color_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
color_sensor_1st_person_spec.position = [0.0, settings["sensor_height"], 0.0]
color_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
color_sensor_1st_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(color_sensor_1st_person_spec)
if settings["depth_sensor_1st_person"]:
depth_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
depth_sensor_1st_person_spec.uuid = "depth_sensor_1st_person"
depth_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.DEPTH
depth_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
depth_sensor_1st_person_spec.position = [0.0, settings["sensor_height"], 0.0]
depth_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
depth_sensor_1st_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(depth_sensor_1st_person_spec)
if settings["semantic_sensor_1st_person"]:
semantic_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
semantic_sensor_1st_person_spec.uuid = "semantic_sensor_1st_person"
semantic_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.SEMANTIC
semantic_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
semantic_sensor_1st_person_spec.position = [
0.0,
settings["sensor_height"],
0.0,
]
semantic_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
semantic_sensor_1st_person_spec.sensor_subtype = (
habitat_sim.SensorSubType.PINHOLE
)
sensor_specs.append(semantic_sensor_1st_person_spec)
if settings["color_sensor_3rd_person"]:
color_sensor_3rd_person_spec = habitat_sim.CameraSensorSpec()
color_sensor_3rd_person_spec.uuid = "color_sensor_3rd_person"
color_sensor_3rd_person_spec.sensor_type = habitat_sim.SensorType.COLOR
color_sensor_3rd_person_spec.resolution = [
settings["height"],
settings["width"],
]
color_sensor_3rd_person_spec.position = [
0.0,
settings["sensor_height"] + 0.2,
0.2,
]
color_sensor_3rd_person_spec.orientation = [-math.pi / 4, 0, 0]
color_sensor_3rd_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(color_sensor_3rd_person_spec)
# Here you can specify the amount of displacement in a forward action and the turn angle
agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = sensor_specs
return habitat_sim.Configuration(sim_cfg, [agent_cfg])
def make_default_settings():
settings = {
"width": 720, # Spatial resolution of the observations
"height": 544,
"scene": "./data/scene_datasets/mp3d_example/17DRP5sb8fy/17DRP5sb8fy.glb", # Scene path
"default_agent": 0,
"sensor_height": 1.5, # Height of sensors in meters
"sensor_pitch": -math.pi / 8.0, # sensor pitch (x rotation in rads)
"color_sensor_1st_person": True, # RGB sensor
"color_sensor_3rd_person": False, # RGB sensor 3rd person
"depth_sensor_1st_person": False, # Depth sensor
"semantic_sensor_1st_person": False, # Semantic sensor
"seed": 1,
"enable_physics": True, # enable dynamics simulation
}
return settings
def make_simulator_from_settings(sim_settings):
cfg = make_cfg(sim_settings)
# clean-up the current simulator instance if it exists
global sim
global obj_attr_mgr
global prim_attr_mgr
global stage_attr_mgr
global rigid_obj_mgr
if sim is not None:
sim.close()
# initialize the simulator
sim = habitat_sim.Simulator(cfg)
# Managers of various Attributes templates
obj_attr_mgr = sim.get_object_template_manager()
obj_attr_mgr.load_configs(str(os.path.join(data_path, "objects/example_objects")))
obj_attr_mgr.load_configs(str(os.path.join(data_path, "objects/locobot_merged")))
prim_attr_mgr = sim.get_asset_template_manager()
stage_attr_mgr = sim.get_stage_template_manager()
# Manager providing access to rigid objects
rigid_obj_mgr = sim.get_rigid_object_manager()
# %%
# @title Define Simulation Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# @markdown - remove_all_objects
# @markdown - simulate
# @markdown - sample_object_state
def simulate(sim, dt=1.0, get_frames=True):
# simulate dt seconds at 60Hz to the nearest fixed timestep
print("Simulating " + str(dt) + " world seconds.")
observations = []
start_time = sim.get_world_time()
while sim.get_world_time() < start_time + dt:
sim.step_physics(1.0 / 60.0)
if get_frames:
observations.append(sim.get_sensor_observations())
return observations
# Set an object transform relative to the agent state
def set_object_state_from_agent(
sim,
obj,
offset=np.array([0, 2.0, -1.5]),
orientation=mn.Quaternion(((0, 0, 0), 1)),
):
agent_transform = sim.agents[0].scene_node.transformation_matrix()
ob_translation = agent_transform.transform_point(offset)
obj.translation = ob_translation
obj.rotation = orientation
# sample a random valid state for the object from the scene bounding box or navmesh
def sample_object_state(
sim, obj, from_navmesh=True, maintain_object_up=True, max_tries=100, bb=None
):
# check that the object is not STATIC
if obj.motion_type is habitat_sim.physics.MotionType.STATIC:
print("sample_object_state : Object is STATIC, aborting.")
return False
if from_navmesh:
if not sim.pathfinder.is_loaded:
print("sample_object_state : No pathfinder, aborting.")
return False
elif not bb:
print(
"sample_object_state : from_navmesh not specified and no bounding box provided, aborting."
)
return False
tries = 0
valid_placement = False
# Note: following assumes sim was not reconfigured without close
scene_collision_margin = stage_attr_mgr.get_template_by_id(0).margin
while not valid_placement and tries < max_tries:
tries += 1
# initialize sample location to random point in scene bounding box
sample_location = np.array([0, 0, 0])
if from_navmesh:
# query random navigable point
sample_location = sim.pathfinder.get_random_navigable_point()
else:
sample_location = np.random.uniform(bb.min, bb.max)
# set the test state
obj.translation = sample_location
if maintain_object_up:
# random rotation only on the Y axis
y_rotation = mn.Quaternion.rotation(
mn.Rad(random.random() * 2 * math.pi), mn.Vector3(0, 1.0, 0)
)
obj.rotation = y_rotation * obj.rotation
else:
# unconstrained random rotation
obj.rotation = ut.random_quaternion()
# raise object such that lowest bounding box corner is above the navmesh sample point.
if from_navmesh:
obj_node = obj.root_scene_node
xform_bb = habitat_sim.geo.get_transformed_bb(
obj_node.cumulative_bb, obj_node.transformation
)
# also account for collision margin of the scene
obj.translation += mn.Vector3(
0, xform_bb.size_y() / 2.0 + scene_collision_margin, 0
)
# test for penetration with the environment
if not sim.contact_test(obj.object_id):
valid_placement = True
if not valid_placement:
return False
return True
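# The placement loop above is plain rejection sampling: draw a candidate pose, test it for collisions, and give up after `max_tries`. Stripped of the simulator specifics, the pattern looks like this (the `collides` predicate below is a stand-in for `sim.contact_test`, and the bounding box is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
bb_min, bb_max = np.array([-1.0, 0.0, -1.0]), np.array([1.0, 2.0, 1.0])

def collides(p):
    # stand-in for sim.contact_test: reject points inside a blocked sphere
    return np.linalg.norm(p) < 0.5

def sample_valid_point(max_tries=100):
    for _ in range(max_tries):
        p = rng.uniform(bb_min, bb_max)  # candidate point in the AABB
        if not collides(p):
            return p
    return None  # no valid placement found within max_tries

p = sample_valid_point()
print(p)
```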
# %%
# @title Define Visualization Utility Function { display-mode: "form" }
# @markdown (double click to show code)
# @markdown - display_sample
# Change to do something like this maybe: https://stackoverflow.com/a/41432704
def display_sample(
rgb_obs, semantic_obs=np.array([]), depth_obs=np.array([]), key_points=None
):
from habitat_sim.utils.common import d3_40_colors_rgb
rgb_img = Image.fromarray(rgb_obs, mode="RGBA")
arr = [rgb_img]
titles = ["rgb"]
if semantic_obs.size != 0:
semantic_img = Image.new("P", (semantic_obs.shape[1], semantic_obs.shape[0]))
semantic_img.putpalette(d3_40_colors_rgb.flatten())
semantic_img.putdata((semantic_obs.flatten() % 40).astype(np.uint8))
semantic_img = semantic_img.convert("RGBA")
arr.append(semantic_img)
titles.append("semantic")
if depth_obs.size != 0:
depth_img = Image.fromarray((depth_obs / 10 * 255).astype(np.uint8), mode="L")
arr.append(depth_img)
titles.append("depth")
plt.figure(figsize=(12, 8))
for i, data in enumerate(arr):
ax = plt.subplot(1, 3, i + 1)
ax.axis("off")
ax.set_title(titles[i])
# plot points on images
if key_points is not None:
for point in key_points:
plt.plot(point[0], point[1], marker="o", markersize=10, alpha=0.8)
plt.imshow(data)
plt.show(block=False)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--no-display", dest="display", action="store_false")
parser.add_argument("--no-make-video", dest="make_video", action="store_false")
parser.set_defaults(show_video=True, make_video=True)
args, _ = parser.parse_known_args()
show_video = args.display
display = args.display
make_video = args.make_video
else:
show_video = False
make_video = False
display = False
# %%
# @title Define Colab GUI Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# Event handler for dropdowns displaying file-based object handles
def on_file_obj_ddl_change(ddl_values):
global sel_file_obj_handle
sel_file_obj_handle = ddl_values["new"]
return sel_file_obj_handle
# Event handler for dropdowns displaying prim-based object handles
def on_prim_obj_ddl_change(ddl_values):
global sel_prim_obj_handle
sel_prim_obj_handle = ddl_values["new"]
return sel_prim_obj_handle
# Event handler for dropdowns displaying asset handles
def on_prim_ddl_change(ddl_values):
global sel_asset_handle
sel_asset_handle = ddl_values["new"]
return sel_asset_handle
# Build a dropdown list holding obj_handles and set its event handler
def set_handle_ddl_widget(obj_handles, handle_types, sel_handle, on_change):
sel_handle = obj_handles[0]
descStr = handle_types + " Template Handles:"
style = {"description_width": "300px"}
obj_ddl = widgets.Dropdown(
options=obj_handles,
value=sel_handle,
description=descStr,
style=style,
disabled=False,
layout={"width": "max-content"},
)
obj_ddl.observe(on_change, names="value")
return obj_ddl, sel_handle
def set_button_launcher(desc):
button = widgets.Button(
description=desc,
layout={"width": "max-content"},
)
return button
def make_sim_and_vid_button(prefix, dt=1.0):
if not HAS_WIDGETS:
return
def on_sim_click(b):
observations = simulate(sim, dt=dt)
vut.make_video(
observations, "color_sensor_1st_person", "color", output_path + prefix
)
sim_and_vid_btn = set_button_launcher("Simulate and Make Video")
sim_and_vid_btn.on_click(on_sim_click)
ipydisplay(sim_and_vid_btn)
def make_clear_all_objects_button():
if not HAS_WIDGETS:
return
def on_clear_click(b):
rigid_obj_mgr.remove_all_objects()
clear_objs_button = set_button_launcher("Clear all objects")
clear_objs_button.on_click(on_clear_click)
ipydisplay(clear_objs_button)
# Builds widget-based UI components
def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
# Holds the user's desired file-based object template handle
global sel_file_obj_handle
sel_file_obj_handle = ""
# Holds the user's desired primitive-based object template handle
global sel_prim_obj_handle
sel_prim_obj_handle = ""
# Holds the user's desired primitive asset template handle
global sel_asset_handle
sel_asset_handle = ""
# Construct DDLs and assign event handlers
# All file-based object template handles
file_obj_handles = obj_attr_mgr.get_file_template_handles()
prim_obj_handles = obj_attr_mgr.get_synth_template_handles()
prim_asset_handles = prim_attr_mgr.get_template_handles()
if not HAS_WIDGETS:
sel_file_obj_handle = file_obj_handles[0]
sel_prim_obj_handle = prim_obj_handles[0]
sel_asset_handle = prim_asset_handles[0]
return
file_obj_ddl, sel_file_obj_handle = set_handle_ddl_widget(
file_obj_handles,
"File-based Object",
sel_file_obj_handle,
on_file_obj_ddl_change,
)
# All primitive asset-based object template handles
prim_obj_ddl, sel_prim_obj_handle = set_handle_ddl_widget(
prim_obj_handles,
"Primitive-based Object",
sel_prim_obj_handle,
on_prim_obj_ddl_change,
)
# All primitive asset handles template handles
prim_asset_ddl, sel_asset_handle = set_handle_ddl_widget(
prim_asset_handles, "Primitive Asset", sel_asset_handle, on_prim_ddl_change
)
# Display DDLs
ipydisplay(file_obj_ddl)
ipydisplay(prim_obj_ddl)
ipydisplay(prim_asset_ddl)
# %%
# @title Initialize Simulator and Load Scene { display-mode: "form" }
# convenience functions defined in the Utility cells manage global variables
sim_settings = make_default_settings()
# set globals: sim,
make_simulator_from_settings(sim_settings)
# %% [markdown]
# #Interactivity in Habitat-sim
#
# This tutorial covers how to configure and use the Habitat-sim object manipulation API to setup and run physical interaction simulations.
#
# ## Outline:
# This section is divided into four use-case driven sub-sections:
# 1. Introduction to Interactivity
# 2. Physical Reasoning
# 3. Generating Scene Clutter on the NavMesh
# 4. Continuous Embodied Navigation
#
# For more tutorial examples and details see the [Interactive Rigid Objects tutorial](https://aihabitat.org/docs/habitat-sim/rigid-object-tutorial.html) also available for Colab [here](https://github.com/facebookresearch/habitat-sim/blob/main/examples/tutorials/colabs/rigid_object_tutorial.ipynb).
#
#
#
#
# %% [markdown]
# ## Introduction to Interactivity
#
# ####Easily add an object and simulate!
#
#
# %%
# @title Select a Simulation Object Template: { display-mode: "form" }
# @markdown Use the dropdown menu below to select an object template for use in the following examples.
# @markdown File-based object templates are loaded from and named after an asset file (e.g. banana.glb), while Primitive-based object templates are generated programmatically (e.g. uv_sphere) with handles (name/key for reference) uniquely generated from a specific parameterization.
# @markdown See the Advanced Features tutorial for more details about asset configuration.
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# %%
# @title Add either a File-based or Primitive Asset-based object to the scene at a user-specified location.{ display-mode: "form" }
# @markdown Running this will add a physically-modelled object of the selected type to the scene at the location specified by user, simulate forward for a few seconds and save a movie of the results.
# @markdown Choose either the primitive or file-based template recently selected in the dropdown:
obj_template_handle = sel_file_obj_handle
asset_template_handle = sel_asset_handle
object_type = "File-based" # @param ["File-based","Primitive-based"]
if "File" in object_type:
# Handle File-based object handle
obj_template_handle = sel_file_obj_handle
elif "Primitive" in object_type:
# Handle Primitive-based object handle
obj_template_handle = sel_prim_obj_handle
else:
# Unknown - defaults to file-based
pass
# @markdown Configure the initial object location (local offset from the agent body node):
# default : offset=np.array([0,2.0,-1.5]), orientation=np.quaternion(1,0,0,0)
offset_x = 0.5 # @param {type:"slider", min:-2, max:2, step:0.1}
offset_y = 1.4 # @param {type:"slider", min:0, max:3.0, step:0.1}
offset_z = -1.5 # @param {type:"slider", min:-3, max:0, step:0.1}
offset = np.array([offset_x, offset_y, offset_z])
# @markdown Configure the initial object orientation via local Euler angle (degrees):
orientation_x = 0 # @param {type:"slider", min:-180, max:180, step:1}
orientation_y = 0 # @param {type:"slider", min:-180, max:180, step:1}
orientation_z = 0 # @param {type:"slider", min:-180, max:180, step:1}
# compose the rotations
rotation_x = mn.Quaternion.rotation(mn.Deg(orientation_x), mn.Vector3(1.0, 0, 0))
rotation_y = mn.Quaternion.rotation(mn.Deg(orientation_y), mn.Vector3(0, 1.0, 0))
rotation_z = mn.Quaternion.rotation(mn.Deg(orientation_z), mn.Vector3(0, 0, 1.0))
orientation = rotation_z * rotation_y * rotation_x
# Add object instantiated by desired template using template handle
obj_1 = rigid_obj_mgr.add_object_by_template_handle(obj_template_handle)
# @markdown Note: agent local coordinate system is Y up and -Z forward.
# Move object to be in front of the agent
set_object_state_from_agent(sim, obj_1, offset=offset, orientation=orientation)
# display a still frame of the scene after the object is added if RGB sensor is enabled
observations = sim.get_sensor_observations()
if display and sim_settings["color_sensor_1st_person"]:
display_sample(observations["color_sensor_1st_person"])
example_type = "adding objects test"
make_sim_and_vid_button(example_type)
make_clear_all_objects_button()
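# The Euler-angle cell above composes `rotation_z * rotation_y * rotation_x`, i.e. the X rotation is applied to the object first, in the fixed frame. A magnum-free check of that ordering with plain numpy rotation matrices (helper names are ours) shows the composition order matters:

```python
import numpy as np

def rot_x(deg):
    a = np.deg2rad(deg)
    return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

# Rz @ Rx applies the X rotation first: +Y tilts to +Z, which Rz then leaves fixed
v = np.array([0.0, 1.0, 0.0])
print(rot_z(90) @ rot_x(90) @ v)   # X first, then Z -> [0, 0, 1]
print(rot_x(90) @ rot_z(90) @ v)   # reversed order -> [-1, 0, 0]
```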
# %% [markdown]
#
#
#
#
# ## Physical Reasoning
# %% [markdown]
# This section demonstrates simple setups for physical reasoning tasks in Habitat-sim with a fixed camera position collecting data:
# - Scripted vs. Dynamic Motion
# - Object Permanence
# - Physical plausibility classification
# - Trajectory Prediction
# %%
# @title Select object templates from the GUI: { display-mode: "form" }
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# %%
# @title Scripted vs. Dynamic Motion { display-mode: "form" }
# @markdown A quick script to generate video data for AI classification of dynamically dropping vs. kinematically moving objects.
rigid_obj_mgr.remove_all_objects()
# @markdown Set the scene as dynamic or kinematic:
scenario_is_kinematic = True # @param {type:"boolean"}
# add the selected object
obj_1 = rigid_obj_mgr.add_object_by_template_handle(sel_file_obj_handle)
# place the object
set_object_state_from_agent(
sim, obj_1, offset=np.array([0, 2.0, -1.0]), orientation=ut.random_quaternion()
)
if scenario_is_kinematic:
# use the velocity control struct to setup a constant rate kinematic motion
obj_1.motion_type = habitat_sim.physics.MotionType.KINEMATIC
vel_control = obj_1.velocity_control
vel_control.controlling_lin_vel = True
vel_control.linear_velocity = np.array([0, -1.0, 0])
# simulate and collect observations
example_type = "kinematic vs dynamic"
observations = simulate(sim, dt=2.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
rigid_obj_mgr.remove_all_objects()
# %%
# @title Object Permanence { display-mode: "form" }
# @markdown This example script demonstrates a possible object permanence task.
# @markdown Two objects are dropped behind an occluder. One is removed while occluded.
rigid_obj_mgr.remove_all_objects()
# @markdown 1. Add the two dynamic objects.
# add the selected objects
obj_1 = rigid_obj_mgr.add_object_by_template_handle(sel_file_obj_handle)
obj_2 = rigid_obj_mgr.add_object_by_template_handle(sel_file_obj_handle)
# place the objects
set_object_state_from_agent(
sim, obj_1, offset=np.array([0.5, 2.0, -1.0]), orientation=ut.random_quaternion()
)
set_object_state_from_agent(
sim,
obj_2,
offset=np.array([-0.5, 2.0, -1.0]),
orientation=ut.random_quaternion(),
)
# @markdown 2. Configure and add an occluder from a scaled cube primitive.
# Get a default cube primitive template
cube_handle = obj_attr_mgr.get_template_handles("cube")[0]
cube_template_cpy = obj_attr_mgr.get_template_by_handle(cube_handle)
# Modify the template's configured scale.
cube_template_cpy.scale = np.array([0.32, 0.075, 0.01])
# Register the modified template under a new name.
obj_attr_mgr.register_template(cube_template_cpy, "occluder_cube")
# Instance and place the occluder object from the template.
occluder_obj = rigid_obj_mgr.add_object_by_template_handle("occluder_cube")
set_object_state_from_agent(sim, occluder_obj, offset=np.array([0.0, 1.4, -0.4]))
occluder_obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
# fmt: off
# @markdown 3. Simulate at 60Hz, removing one object when its center of mass drops below that of the occluder.
# fmt: on
# Simulate and remove object when it passes the midpoint of the occluder
dt = 2.0
print("Simulating " + str(dt) + " world seconds.")
observations = []
# simulate at 60Hz to the nearest fixed timestep
start_time = sim.get_world_time()
while sim.get_world_time() < start_time + dt:
sim.step_physics(1.0 / 60.0)
# remove the object once it passes the occluder center and it still exists/hasn't already been removed
if obj_2.is_alive and obj_2.translation[1] <= occluder_obj.translation[1]:
rigid_obj_mgr.remove_object_by_id(obj_2.object_id)
observations.append(sim.get_sensor_observations())
example_type = "object permanence"
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
rigid_obj_mgr.remove_all_objects()
# %%
# @title Physical Plausibility Classification { display-mode: "form" }
# @markdown This example demonstrates a physical plausibility experiment. A sphere
# @markdown is dropped onto the back of a couch to roll onto the floor. Optionally,
# @markdown an invisible plane is introduced for the sphere to roll onto producing
# @markdown non-physical motion.
introduce_surface = True # @param{type:"boolean"}
rigid_obj_mgr.remove_all_objects()
# add a rolling object
obj_attr_mgr = sim.get_object_template_manager()
sphere_handle = obj_attr_mgr.get_template_handles("uvSphereSolid")[0]
obj_1 = rigid_obj_mgr.add_object_by_template_handle(sphere_handle)
set_object_state_from_agent(sim, obj_1, offset=np.array([1.0, 1.6, -1.95]))
if introduce_surface:
# optionally add invisible surface
cube_handle = obj_attr_mgr.get_template_handles("cube")[0]
cube_template_cpy = obj_attr_mgr.get_template_by_handle(cube_handle)
# Modify the template.
cube_template_cpy.scale = np.array([1.0, 0.04, 1.0])
surface_is_visible = False # @param{type:"boolean"}
    cube_template_cpy.is_visible = surface_is_visible
# Register the modified template under a new name.
obj_attr_mgr.register_template(cube_template_cpy, "invisible_surface")
# Instance and place the surface object from the template.
surface_obj = rigid_obj_mgr.add_object_by_template_handle("invisible_surface")
set_object_state_from_agent(sim, surface_obj, offset=np.array([0.4, 0.88, -1.6]))
surface_obj.motion_type = habitat_sim.physics.MotionType.STATIC
example_type = "physical plausibility"
observations = simulate(sim, dt=3.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
rigid_obj_mgr.remove_all_objects()
# %%
# @title Trajectory Prediction { display-mode: "form" }
# @markdown This example demonstrates setup of a trajectory prediction task.
# @markdown Boxes are placed in a target zone and a sphere is given an initial
# @markdown velocity with the goal of knocking the boxes off the counter.
# @markdown ---
# @markdown Configure Parameters:
rigid_obj_mgr.remove_all_objects()
seed = 2 # @param{type:"integer"}
random.seed(seed)
sim.seed(seed)
np.random.seed(seed)
# setup agent state manually to face the bar
agent_state = sim.agents[0].state
agent_state.position = np.array([-1.97496, 0.072447, -2.0894])
agent_state.rotation = ut.quat_from_coeffs([0, -1, 0, 0])
sim.agents[0].set_state(agent_state)
# load the target objects
cheezit_handle = obj_attr_mgr.get_template_handles("cheezit")[0]
# create range from center and half-extent
target_zone = mn.Range3D.from_center(
mn.Vector3(-2.07496, 1.07245, -0.2894), mn.Vector3(0.5, 0.05, 0.1)
)
num_targets = 9 # @param{type:"integer"}
for _target in range(num_targets):
obj = rigid_obj_mgr.add_object_by_template_handle(cheezit_handle)
# rotate boxes off of their sides
obj.rotation = mn.Quaternion.rotation(
mn.Rad(-mn.math.pi_half), mn.Vector3(1.0, 0, 0)
)
# sample state from the target zone
if not sample_object_state(sim, obj, False, True, 100, target_zone):
rigid_obj_mgr.remove_object_by_id(obj.object_id)
show_target_zone = False # @param{type:"boolean"}
if show_target_zone:
# Get and modify the wire cube template from the range
cube_handle = obj_attr_mgr.get_template_handles("cubeWireframe")[0]
cube_template_cpy = obj_attr_mgr.get_template_by_handle(cube_handle)
cube_template_cpy.scale = target_zone.size()
cube_template_cpy.is_collidable = False
# Register the modified template under a new name.
obj_attr_mgr.register_template(cube_template_cpy, "target_zone")
# instance and place the object from the template
target_zone_obj = rigid_obj_mgr.add_object_by_template_handle("target_zone")
target_zone_obj.translation = target_zone.center()
target_zone_obj.motion_type = habitat_sim.physics.MotionType.STATIC
# print("target_zone_center = " + str(target_zone_obj.translation))
# @markdown ---
# @markdown ###Ball properties:
# load the ball
sphere_handle = obj_attr_mgr.get_template_handles("uvSphereSolid")[0]
sphere_template_cpy = obj_attr_mgr.get_template_by_handle(sphere_handle)
# @markdown Mass:
ball_mass = 5.01 # @param {type:"slider", min:0.01, max:50.0, step:0.01}
sphere_template_cpy.mass = ball_mass
obj_attr_mgr.register_template(sphere_template_cpy, "ball")
ball_obj = rigid_obj_mgr.add_object_by_template_handle("ball")
set_object_state_from_agent(sim, ball_obj, offset=np.array([0, 1.4, 0]))
# @markdown Initial linear velocity (m/sec):
lin_vel_x = 0 # @param {type:"slider", min:-10, max:10, step:0.1}
lin_vel_y = 1 # @param {type:"slider", min:-10, max:10, step:0.1}
lin_vel_z = 5 # @param {type:"slider", min:0, max:10, step:0.1}
ball_obj.linear_velocity = mn.Vector3(lin_vel_x, lin_vel_y, lin_vel_z)
# @markdown Initial angular velocity (rad/sec):
ang_vel_x = 0 # @param {type:"slider", min:-100, max:100, step:0.1}
ang_vel_y = 0 # @param {type:"slider", min:-100, max:100, step:0.1}
ang_vel_z = 0 # @param {type:"slider", min:-100, max:100, step:0.1}
ball_obj.angular_velocity = mn.Vector3(ang_vel_x, ang_vel_y, ang_vel_z)
example_type = "trajectory prediction"
observations = simulate(sim, dt=3.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
rigid_obj_mgr.remove_all_objects()
# %% [markdown]
# ## Generating Scene Clutter on the NavMesh
#
# The NavMesh can be used to place objects on surfaces in the scene. Once objects are placed they can be set to MotionType::STATIC, indicating that they are not movable (kinematics and dynamics are disabled for STATIC objects). The NavMesh can then be recomputed, including STATIC object meshes in the voxelization.
#
# This example demonstrates using the NavMesh to generate a cluttered scene for navigation. In this script we will:
#
# - Place objects off the NavMesh
# - Set them to MotionType::STATIC
# - Recompute the NavMesh including STATIC objects
# - Visualize the results
# %%
# @title Initialize Simulator and Load Scene { display-mode: "form" }
# @markdown (load the apartment_1 scene for clutter generation in an open space)
sim_settings = make_default_settings()
sim_settings["scene"] = "./data/scene_datasets/habitat-test-scenes/apartment_1.glb"
sim_settings["sensor_pitch"] = 0
make_simulator_from_settings(sim_settings)
# %%
# @title Select clutter object from the GUI: { display-mode: "form" }
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# %%
# @title Clutter Generation Script
# @markdown Configure some example parameters:
seed = 2 # @param {type:"integer"}
random.seed(seed)
sim.seed(seed)
np.random.seed(seed)
# position the agent
sim.agents[0].scene_node.translation = mn.Vector3(0.5, -1.60025, 6.15)
print(sim.agents[0].scene_node.rotation)
agent_orientation_y = -23 # @param{type:"integer"}
sim.agents[0].scene_node.rotation = mn.Quaternion.rotation(
mn.Deg(agent_orientation_y), mn.Vector3(0, 1.0, 0)
)
num_objects = 10 # @param {type:"slider", min:0, max:20, step:1}
object_scale = 5 # @param {type:"slider", min:1.0, max:10.0, step:0.1}
# scale up the selected object
sel_obj_template_cpy = obj_attr_mgr.get_template_by_handle(sel_file_obj_handle)
sel_obj_template_cpy.scale = mn.Vector3(object_scale)
obj_attr_mgr.register_template(sel_obj_template_cpy, "scaled_sel_obj")
# add the selected object
sim.navmesh_visualization = True
rigid_obj_mgr.remove_all_objects()
fails = 0
for _obj in range(num_objects):
obj_1 = rigid_obj_mgr.add_object_by_template_handle("scaled_sel_obj")
# place the object
placement_success = sample_object_state(
sim, obj_1, from_navmesh=True, maintain_object_up=True, max_tries=100
)
if not placement_success:
fails += 1
rigid_obj_mgr.remove_object_by_id(obj_1.object_id)
else:
# set the objects to STATIC so they can be added to the NavMesh
obj_1.motion_type = habitat_sim.physics.MotionType.STATIC
print("Placement fails = " + str(fails) + "/" + str(num_objects))
# recompute the NavMesh with STATIC objects
navmesh_settings = habitat_sim.NavMeshSettings()
navmesh_settings.set_defaults()
navmesh_success = sim.recompute_navmesh(
sim.pathfinder, navmesh_settings, include_static_objects=True
)
# simulate and collect observations
example_type = "clutter generation"
observations = simulate(sim, dt=2.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
obj_attr_mgr.remove_template_by_handle("scaled_sel_obj")
rigid_obj_mgr.remove_all_objects()
sim.navmesh_visualization = False
# %% [markdown]
# ## Embodied Continuous Navigation
# %% [markdown]
# The following example demonstrates setup and execution of an embodied navigation and interaction scenario. An object and an agent embodied by a rigid locobot mesh are placed randomly on the NavMesh. A path for the agent to reach the object is computed and executed by a continuous path-following controller. The object is then kinematically gripped by the agent and a second path is computed for the agent to reach a goal location, also executed by a continuous controller. The gripped object is then released and thrown in front of the agent.
#
# Note: for a more detailed explanation of the NavMesh see Habitat-sim Basics tutorial.
# %%
# @title Select target object from the GUI: { display-mode: "form" }
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# %%
# @title Continuous Path Follower Example { display-mode: "form" }
# @markdown A python Class to provide waypoints along a path given agent states
class ContinuousPathFollower:
def __init__(self, sim, path, agent_scene_node, waypoint_threshold):
self._sim = sim
self._points = path.points[:]
assert len(self._points) > 0
self._length = path.geodesic_distance
self._node = agent_scene_node
self._threshold = waypoint_threshold
self._step_size = 0.01
self.progress = 0 # geodesic distance -> [0,1]
self.waypoint = path.points[0]
# setup progress waypoints
_point_progress = [0]
_segment_tangents = []
_length = self._length
for ix, point in enumerate(self._points):
if ix > 0:
segment = point - self._points[ix - 1]
segment_length = np.linalg.norm(segment)
segment_tangent = segment / segment_length
_point_progress.append(
segment_length / _length + _point_progress[ix - 1]
)
# t-1 -> t
_segment_tangents.append(segment_tangent)
self._point_progress = _point_progress
self._segment_tangents = _segment_tangents
# final tangent is duplicated
self._segment_tangents.append(self._segment_tangents[-1])
print("self._length = " + str(self._length))
print("num points = " + str(len(self._points)))
print("self._point_progress = " + str(self._point_progress))
print("self._segment_tangents = " + str(self._segment_tangents))
def pos_at(self, progress):
if progress <= 0:
return self._points[0]
elif progress >= 1.0:
return self._points[-1]
path_ix = 0
for ix, prog in enumerate(self._point_progress):
if prog > progress:
path_ix = ix
break
segment_distance = self._length * (progress - self._point_progress[path_ix - 1])
return (
self._points[path_ix - 1]
+ self._segment_tangents[path_ix - 1] * segment_distance
)
def update_waypoint(self):
if self.progress < 1.0:
wp_disp = self.waypoint - self._node.absolute_translation
wp_dist = np.linalg.norm(wp_disp)
node_pos = self._node.absolute_translation
step_size = self._step_size
threshold = self._threshold
while wp_dist < threshold:
self.progress += step_size
self.waypoint = self.pos_at(self.progress)
if self.progress >= 1.0:
break
wp_disp = self.waypoint - node_pos
wp_dist = np.linalg.norm(wp_disp)
def setup_path_visualization(path_follower, vis_samples=100):
vis_objs = []
sphere_handle = obj_attr_mgr.get_template_handles("uvSphereSolid")[0]
sphere_template_cpy = obj_attr_mgr.get_template_by_handle(sphere_handle)
sphere_template_cpy.scale *= 0.2
template_id = obj_attr_mgr.register_template(sphere_template_cpy, "mini-sphere")
print("template_id = " + str(template_id))
if template_id < 0:
return None
vis_objs.append(rigid_obj_mgr.add_object_by_template_handle(sphere_handle))
for point in path_follower._points:
cp_obj = rigid_obj_mgr.add_object_by_template_handle(sphere_handle)
if cp_obj.object_id < 0:
print(cp_obj.object_id)
return None
cp_obj.translation = point
vis_objs.append(cp_obj)
for i in range(vis_samples):
cp_obj = rigid_obj_mgr.add_object_by_template_handle("mini-sphere")
if cp_obj.object_id < 0:
print(cp_obj.object_id)
return None
cp_obj.translation = path_follower.pos_at(float(i / vis_samples))
vis_objs.append(cp_obj)
for obj in vis_objs:
if obj.object_id < 0:
print(obj.object_id)
return None
for obj in vis_objs:
obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
return vis_objs
def track_waypoint(waypoint, rs, vc, dt=1.0 / 60.0):
angular_error_threshold = 0.5
max_linear_speed = 1.0
max_turn_speed = 1.0
glob_forward = rs.rotation.transform_vector(mn.Vector3(0, 0, -1.0)).normalized()
glob_right = rs.rotation.transform_vector(mn.Vector3(-1.0, 0, 0)).normalized()
to_waypoint = mn.Vector3(waypoint) - rs.translation
u_to_waypoint = to_waypoint.normalized()
angle_error = float(mn.math.angle(glob_forward, u_to_waypoint))
new_velocity = 0
if angle_error < angular_error_threshold:
# speed up to max
new_velocity = (vc.linear_velocity[2] - max_linear_speed) / 2.0
else:
# slow down to 0
new_velocity = (vc.linear_velocity[2]) / 2.0
vc.linear_velocity = mn.Vector3(0, 0, new_velocity)
# angular part
rot_dir = 1.0
if mn.math.dot(glob_right, u_to_waypoint) < 0:
rot_dir = -1.0
angular_correction = 0.0
if angle_error > (max_turn_speed * 10.0 * dt):
angular_correction = max_turn_speed
else:
angular_correction = angle_error / 2.0
vc.angular_velocity = mn.Vector3(
0, np.clip(rot_dir * angular_correction, -max_turn_speed, max_turn_speed), 0
)
# grip/release and sync gripped object state kinematically
class ObjectGripper:
def __init__(
self,
sim,
agent_scene_node,
end_effector_offset,
):
self._sim = sim
self._node = agent_scene_node
self._offset = end_effector_offset
self._gripped_obj = None
        self._gripped_obj_buffer = 0  # extra y offset: half the gripped object's bounding-box height
def sync_states(self):
if self._gripped_obj is not None:
agent_t = self._node.absolute_transformation_matrix()
agent_t.translation += self._offset + mn.Vector3(
0, self._gripped_obj_buffer, 0.0
)
self._gripped_obj.transformation = agent_t
def grip(self, obj):
if self._gripped_obj is not None:
print("Oops, can't carry more than one item.")
return
self._gripped_obj = obj
obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
object_node = obj.root_scene_node
self._gripped_obj_buffer = object_node.cumulative_bb.size_y() / 2.0
self.sync_states()
def release(self):
if self._gripped_obj is None:
print("Oops, can't release nothing.")
return
self._gripped_obj.motion_type = habitat_sim.physics.MotionType.DYNAMIC
self._gripped_obj.linear_velocity = (
self._node.absolute_transformation_matrix().transform_vector(
mn.Vector3(0, 0, -1.0)
)
+ mn.Vector3(0, 2.0, 0)
)
self._gripped_obj = None
# %%
# @title Embodied Continuous Navigation Example { display-mode: "form" }
# @markdown This example cell runs the object retrieval task.
# @markdown First the Simulator is re-initialized with:
# @markdown - a 3rd person camera view
# @markdown - modified 1st person sensor placement
sim_settings = make_default_settings()
# fmt: off
sim_settings["scene"] = "./data/scene_datasets/mp3d_example/17DRP5sb8fy/17DRP5sb8fy.glb" # @param{type:"string"}
# fmt: on
sim_settings["sensor_pitch"] = 0
sim_settings["sensor_height"] = 0.6
sim_settings["color_sensor_3rd_person"] = True
sim_settings["depth_sensor_1st_person"] = True
sim_settings["semantic_sensor_1st_person"] = True
make_simulator_from_settings(sim_settings)
default_nav_mesh_settings = habitat_sim.NavMeshSettings()
default_nav_mesh_settings.set_defaults()
inflated_nav_mesh_settings = habitat_sim.NavMeshSettings()
inflated_nav_mesh_settings.set_defaults()
inflated_nav_mesh_settings.agent_radius = 0.2
inflated_nav_mesh_settings.agent_height = 1.5
recompute_successful = sim.recompute_navmesh(sim.pathfinder, inflated_nav_mesh_settings)
if not recompute_successful:
print("Failed to recompute navmesh!")
# @markdown ---
# @markdown ### Set other example parameters:
seed = 24 # @param {type:"integer"}
random.seed(seed)
sim.seed(seed)
np.random.seed(seed)
sim.config.sim_cfg.allow_sliding = True # @param {type:"boolean"}
print(sel_file_obj_handle)
# load a selected target object and place it on the NavMesh
obj_1 = rigid_obj_mgr.add_object_by_template_handle(sel_file_obj_handle)
# load the locobot_merged asset
locobot_template_handle = obj_attr_mgr.get_file_template_handles("locobot")[0]
# add robot object to the scene with the agent/camera SceneNode attached
locobot_obj = rigid_obj_mgr.add_object_by_template_handle(
locobot_template_handle, sim.agents[0].scene_node
)
# set the agent's body to kinematic since we will be updating position manually
locobot_obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
# create and configure a new VelocityControl structure
# Note: this is NOT the object's VelocityControl, so it will not be consumed automatically in sim.step_physics
vel_control = habitat_sim.physics.VelocityControl()
vel_control.controlling_lin_vel = True
vel_control.lin_vel_is_local = True
vel_control.controlling_ang_vel = True
vel_control.ang_vel_is_local = True
# reset observations and robot state
locobot_obj.translation = sim.pathfinder.get_random_navigable_point()
observations = []
# get shortest path to the object from the agent position
found_path = False
path1 = habitat_sim.ShortestPath()
path2 = habitat_sim.ShortestPath()
while not found_path:
if not sample_object_state(
sim, obj_1, from_navmesh=True, maintain_object_up=True, max_tries=1000
):
print("Couldn't find an initial object placement. Aborting.")
break
path1.requested_start = locobot_obj.translation
path1.requested_end = obj_1.translation
path2.requested_start = path1.requested_end
path2.requested_end = sim.pathfinder.get_random_navigable_point()
found_path = sim.pathfinder.find_path(path1) and sim.pathfinder.find_path(path2)
if not found_path:
print("Could not find path to object, aborting!")
vis_objs = []
recompute_successful = sim.recompute_navmesh(sim.pathfinder, default_nav_mesh_settings)
if not recompute_successful:
print("Failed to recompute navmesh 2!")
gripper = ObjectGripper(sim, locobot_obj.root_scene_node, np.array([0.0, 0.6, 0.0]))
continuous_path_follower = ContinuousPathFollower(
sim, path1, locobot_obj.root_scene_node, waypoint_threshold=0.4
)
show_waypoint_indicators = False # @param {type:"boolean"}
time_step = 1.0 / 30.0
for i in range(2):
if i == 1:
gripper.grip(obj_1)
continuous_path_follower = ContinuousPathFollower(
sim, path2, locobot_obj.root_scene_node, waypoint_threshold=0.4
)
if show_waypoint_indicators:
for vis_obj in vis_objs:
rigid_obj_mgr.remove_object_by_id(vis_obj.object_id)
vis_objs = setup_path_visualization(continuous_path_follower)
# manually control the object's kinematic state via velocity integration
start_time = sim.get_world_time()
max_time = 30.0
while (
continuous_path_follower.progress < 1.0
and sim.get_world_time() - start_time < max_time
):
continuous_path_follower.update_waypoint()
if show_waypoint_indicators:
vis_objs[0].translation = continuous_path_follower.waypoint
if locobot_obj.object_id < 0:
print("locobot_id " + str(locobot_obj.object_id))
break
previous_rigid_state = locobot_obj.rigid_state
# set velocities based on relative waypoint position/direction
track_waypoint(
continuous_path_follower.waypoint,
previous_rigid_state,
vel_control,
dt=time_step,
)
# manually integrate the rigid state
target_rigid_state = vel_control.integrate_transform(
time_step, previous_rigid_state
)
# snap rigid state to navmesh and set state to object/agent
end_pos = sim.step_filter(
previous_rigid_state.translation, target_rigid_state.translation
)
locobot_obj.translation = end_pos
locobot_obj.rotation = target_rigid_state.rotation
        # Check if a collision occurred
dist_moved_before_filter = (
target_rigid_state.translation - previous_rigid_state.translation
).dot()
dist_moved_after_filter = (end_pos - previous_rigid_state.translation).dot()
# NB: There are some cases where ||filter_end - end_pos|| > 0 when a
# collision _didn't_ happen. One such case is going up stairs. Instead,
        # we check to see if the amount moved after the application of the filter
# is _less_ than the amount moved before the application of the filter
EPS = 1e-5
collided = (dist_moved_after_filter + EPS) < dist_moved_before_filter
gripper.sync_states()
# run any dynamics simulation
sim.step_physics(time_step)
# render observation
observations.append(sim.get_sensor_observations())
# release
gripper.release()
start_time = sim.get_world_time()
while sim.get_world_time() - start_time < 2.0:
sim.step_physics(time_step)
observations.append(sim.get_sensor_observations())
# video rendering with embedded 1st person view
video_prefix = "fetch"
if make_video:
overlay_dims = (int(sim_settings["width"] / 5), int(sim_settings["height"] / 5))
print("overlay_dims = " + str(overlay_dims))
overlay_settings = [
{
"obs": "color_sensor_1st_person",
"type": "color",
"dims": overlay_dims,
"pos": (10, 10),
"border": 2,
},
{
"obs": "depth_sensor_1st_person",
"type": "depth",
"dims": overlay_dims,
"pos": (10, 30 + overlay_dims[1]),
"border": 2,
},
{
"obs": "semantic_sensor_1st_person",
"type": "semantic",
"dims": overlay_dims,
"pos": (10, 50 + overlay_dims[1] * 2),
"border": 2,
},
]
print("overlay_settings = " + str(overlay_settings))
vut.make_video(
observations=observations,
primary_obs="color_sensor_3rd_person",
primary_obs_type="color",
video_file=output_path + video_prefix,
fps=int(1.0 / time_step),
open_vid=show_video,
overlay_settings=overlay_settings,
depth_clip=10.0,
)
# remove locobot while leaving the agent node for later use
rigid_obj_mgr.remove_object_by_id(locobot_obj.object_id, delete_object_node=False)
rigid_obj_mgr.remove_all_objects()
# #### Robotic Arm
#
# ### 1a)
#
# #### The given equation:
#
# $\frac{d^2\theta}{dt^2} = -\frac{g}{l}\sin \theta + C \cos \theta \sin(\Omega t)$
#
# #### Can be made dimensionless by setting:
# $\omega ^2= \frac{g}{l}$ ; $ \beta = \frac{\Omega}{\omega}$ ; $\gamma = \frac{C}{\omega ^2}$ and changing the variable to $ x = \omega t$.
#
# #### First, differentiate $ x = \omega t$ twice:
# $ x = \omega t$
#
# $\frac{dx}{dt} = \omega$ (1)
#
# $\frac{d^2x}{dt^2} = 0$ (2)
# #### Then by the chain rule;
#
# $ \frac{d\theta}{dt} = \frac{d\theta}{dx} \frac{dx}{dt} = \frac{d\theta}{dx} \omega$ (3)
#
# #### Therefore using the product rule:
#
# $ \frac{d^2\theta}{dt^2} = \frac{dx}{dt} \frac{d^2\theta}{dt\,dx} + \frac{d \theta}{dx}\frac{d^2x}{dt^2} = \frac{dx}{dt} \cdot \frac{d}{dx}\left(\frac{d\theta}{dt}\right) + \frac{d \theta}{dx}\frac{d^2x}{dt^2}$ (4)
#
# #### Substituting (1) and (2) into (4):
#
# $ \frac{d^2\theta}{dt^2} = \omega \cdot \frac{d}{dx}(\omega \frac{d\theta}{dx}) + \frac{d \theta}{dx}\cdot 0 = \omega^2 \frac{d^2\theta}{dx^2}$
#
# #### Finally, reconstructing the equation with the new constants and variable change it becomes:
#
# $\omega^2 \frac{d^2\theta}{dx^2} = -\omega^2 \sin \theta + \omega^2 \gamma \cos \theta \sin(\beta \omega t) = -\omega^2 \sin \theta + \omega^2 \gamma \cos \theta \sin(\beta x) \implies \frac{d^2\theta}{dx^2} = -\sin \theta + \gamma \cos \theta \sin(\beta x) $
#
# #### Now separate this second-order equation into two first-order D.E.s by introducing a new variable:
#
# $ z = \frac{d\theta}{dx} \rightarrow \frac{dz}{dx} = \frac{d^2\theta}{dx^2} = -\sin \theta + \gamma \sin(\beta x) \cos \theta $
#
# #### So:
# $ z = \frac{d\theta}{dx}$
#
# $\frac{dz}{dx}= -\sin \theta + \gamma \sin(\beta x) \cos \theta $
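# The reduced first-order system can be handed straight to a standard ODE solver. A minimal sketch of this (my own aside; the values of $\beta$ and $\gamma$ below are illustrative assumptions, not given in the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 2.0, 1.0  # assumed illustrative values, not specified above

def pendulum(x, y):
    # y[0] = theta, y[1] = z = dtheta/dx
    theta, z = y
    return [z, -np.sin(theta) + gamma * np.sin(beta * x) * np.cos(theta)]

# integrate from x = 0 to x = 20, starting near the bottom and at rest
sol = solve_ivp(pendulum, (0.0, 20.0), [0.1, 0.0], max_step=0.05)
theta, z = sol.y
```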
# +
# Import the required modules
import numpy as np
import scipy
from printSoln import *
from run_kut4 import *
import pylab as pl
a=100.0
b=15.0
# First set up the right-hand side (RHS) of the equation
def Eqs(x, y):  # y[0] = theta, y[1] = dtheta/dt
    f = np.zeros(2)  # sets up the RHS as a vector
    f[0] = y[1]
    # f[1] = -np.sin(y[0]) + Gamma*np.sin(x*beta)*np.cos(y[0])  # driven-pendulum RHS from part 1a (unused here)
    f[1] = (a*(b - y[0]) - y[0]*y[1]**2)/(1 + y[0]**2)
    return f
# Using Runge-Kutta of 4th order
y = np.array([2*np.pi, 0.0]) # Initial values
#start at t=0 -> x=0 (as omega*t when t=0 is 0)
x = 0.0 # Start of integration (Always use floats)
# Integrate out to xStop (increase this to follow the motion for longer)
xStop = 2.0 # End of integration
h = 0.5 # Step size
X,Y = integrate(Eqs,x,y,xStop,h) # call the RK4 solver
ThetaSol1=Y[:,0]
dThetaSol1=Y[:,1]
print (ThetaSol1)
print (dThetaSol1)
# +
import scipy as sci
from scipy import integrate
import numpy as np
a,b=100,15
def f(t, y):
    dydt = np.zeros(2)
    dydt[0] = y[1]
    dydt[1] = (a*(b - y[0]) - y[0]*y[1]**2)/(1 + y[0]**2)
    return dydt
y = [2*np.pi, 0]  # initial conditions for theta and dtheta/dt
# we want to integrate over a period of time that starts at t=0, where theta is defined,
# and covers the times we care about; use linspace to create a vector of all the times
t = np.linspace(0, 10, 100)
# this creates a vector of 100 evenly spaced times from 0 to 10
# now solve for theta using SciPy's odeint; note the argument order (f, y0, t)
# and tfirst=True, because f is written with signature f(t, y)
y_Sol = integrate.odeint(f, y, t, tfirst=True, rtol=1e-3, atol=1e-3)
# odeint returns an array of shape (100, 2): column 0 is theta, column 1 is dtheta
# set each column to its own variable
Theta=y_Sol[:,0]
dTheta=y_Sol[:,1]
print (Theta,"\n")
print (dTheta)
# # Iteration
#
# This notebook is intended to provide an introduction to the idea of iteration as a tool for addressing problems in physics and engineering that have no simple analytic or closed-form solution. Almost every significant computational problem I have addressed in my learning has involved some form of an iterative model.
#
# There are a lot of descriptions of iteration as a computational technique out there, but this [mathematical definition](https://www2.edc.org/makingmath/mathtools/iteration/iteration.asp) fits best with my experience. The previous link has some lovely examples from mathematics, including the Mandelbrot set.
#
# <blockquote> Iteration is the repeated application of a function or process in which the output of each step is used as the input for the next iteration. </blockquote>
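# As a tiny concrete illustration of that definition (my own aside, not part of the physics below), the Babylonian square-root rule repeatedly feeds its output back in as the next input:

```python
def heron_step(x, n):
    # one iteration: average the guess with n divided by the guess
    return 0.5 * (x + n / x)

x = 1.0  # crude first guess for sqrt(2)
for _ in range(6):
    x = heron_step(x, 2.0)  # the output of each step is the input of the next
# after a handful of iterations, x agrees with sqrt(2) to machine precision
```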
#
# #### Note about Jupyterlab toolbar issue:
#
# There is a toolbar issue for some users in Jupyterlab where the cell type dropdown menu in the tool bar above doesn't allow me to select the cell type. It acts like I've double clicked my mouse or something. After trying to resolve this with various updates I am currently using keyboard methods to set the cell type. This [keyboard guide for Jupyterlab](https://nocomplexity.com/documents/jupyterlab/keyboardshortcuts.html) was very helpful.
# ## Periodic Motion with Drag:
#
# For the lab I have asked you to work with air drag in projectile motion. In this example I will explore finding a numerical solution to our damped periodic motion differential equation. The thought process is the same just in a different context.
#
# Here is the differential equation:
#
# $$ \large \frac{d^2x}{dt^2} + \frac{D}{m}\frac{dx}{dt} + \frac{k}{m} x = 0 $$
#
# The Newtonian analysis that we used to arrive at this equation was based on an ideal spring and a model for drag forces in the air. Those models are (magnitude only):
#
# $$ \large F_{spring} = k x$$
#
# $$ \large F_{air \; drag} = D v = D \frac{dx}{dt}$$
#
# Such a system is started by pulling the spring back to some initial location ($x_0$) and releasing it from rest ($v=0$).
# ## Formal Solution:
#
# The differential equation above has an analytic solution given by:
#
# $$ \large x(t) = C e^{-bt} cos(\omega t + \phi)$$
#
# ...where...
#
# $$\large b = \frac{D}{2m}$$
#
# $$\large \omega = \sqrt{\frac{k}{m} - \frac{D^2}{4m^2}}$$
#
# Knowing that there is a formal solution allows us to explore how an iterative numerical solution compares to the analytic solution.
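# Before building the full model, the iterative idea itself fits in a few lines. Here is my own rough sketch (a simple Euler-Cromer update, not necessarily the scheme used later in this notebook) compared against the analytic solution at the final time:

```python
import numpy as np

D, k, m = 0.22, 11.0, 0.11  # the same constants chosen later in this notebook
x, v = 1.0, 0.0             # released from rest at x = 1 m
dt = 0.0004                 # fixed time step
xs = []
for _ in range(5000):       # about 2 s of motion
    a = -(D / m) * v - (k / m) * x  # acceleration from the ODE
    v = v + a * dt                  # update velocity first (Euler-Cromer)...
    x = x + v * dt                  # ...then position, using the new velocity
    xs.append(x)

# analytic solution at the same final time, for comparison
b = D / (2 * m)
w = np.sqrt(k / m - b**2)
phi = np.arctan(-b / w)
C = 1.0 / np.cos(phi)
t_end = 5000 * dt
x_exact = C * np.exp(-b * t_end) * np.cos(w * t_end + phi)
```

# At a small enough step size even this crude scheme tracks the analytic curve closely.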
# ### Setup/Dependencies
#
# These are the libraries and setups that I am using in this notebook
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# ## Initial Conditions and Constants
#
# In this differential equation the initial conditions are given previously and depend only on 3 factors which are D, k, and m. These are the constraints on our solution. I am going to assume no phase shift $\phi$.
#
# ### Iteration Constants
#
# One way to set up an iteration is to choose the total length of time you want to explore and the number of incremental steps in that interval. One can also choose the length of each iterative step and the number of steps, which gets you to the same place by a different path.
#
# ### Sample System:
#
# The damped periodic system that stimulated this discussion is the one shown in the [Damped and Driven breadcrumb](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH213/PH213Materials/PH213Breadcrumbs/PH213BCDampDrive.html). It has an initial x position of 1., a decay constant of 1., an $\omega_0 =10$, and a damped period of 0.632 s. This is equivalent to a damped angular frequency given by:
#
# $$ f_{damped} = \frac{1}{T} = 1.5823 Hz$$
#
# $$ \omega_{damped} = 2 \pi f = 9.942$$
#
# Matching this system feels like a good way to test this iterative model.
#
# I found it took a little fooling with the constants to get $\gamma = 1.$ and $\omega_0 = 10$. It is important to note that my $b$ is their $\gamma$
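# A quick sanity check (my own) that the quoted numbers hang together for $\gamma = 1$ and $\omega_0 = 10$:

```python
import numpy as np

gamma, omega0 = 1.0, 10.0
omega_damped = np.sqrt(omega0**2 - gamma**2)  # sqrt(99), within ~0.1% of the 9.942 quoted
T_damped = 2.0 * np.pi / omega_damped         # matches the ~0.632 s damped period
f_damped = 1.0 / T_damped                     # matches the ~1.5823 Hz quoted
```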
# +
# initial conditions
initialVelocity = 0.0 # in m/s
initialPosition = 1. # in m
# constants
dragCoef = .22 # D is the drag coefficient
springConst = 11. # the spring constant
massObject = .11 # mass in kg
decayConst = dragCoef/(2*massObject) # b in exponential term
naturalFreq = np.sqrt(springConst/massObject) # omega_0
dampedFreq = np.sqrt(naturalFreq**2 - (dragCoef**2/(4.*massObject**2)))
# phase shift from the initial condition v(0) = 0 (valid only when v_0 = 0!)
phaseShift = np.arctan(-decayConst/dampedFreq)
amplitude = initialPosition/np.cos(phaseShift)
# iteration constants
maxTime = 2.0 # in s
numPoints = 5000 # number data points in the array
# define plot limits so I don't have to reset them every time I change conditions.
vertLim = 1.1*initialPosition
horizLim = 1.1*maxTime
print("Initial Velocity: %.2f m/s; Initial Position: %.2f m" % (initialVelocity,initialPosition))
print("Drag Coef: %.2f ; Spring Constant %.2f ; mass %.2f" % (dragCoef, springConst, massObject))
print("decay const: %.4f ; natural freq %.4f ; damped freq %.4f" % (decayConst, naturalFreq, dampedFreq))
print("phase shift: %.4f rad; amplitude (C) %.4f m" % (phaseShift,amplitude))
# -
# ## Analytic solution for Comparison:
#
# First we create a set of times that we will use to calculate the x position of the mass using the [np.linspace](https://numpy.org/doc/stable/reference/generated/numpy.linspace.html) function. Then apply the equations stated above. When we use the form of the analytic solution given above with no phase shift then C is the initial position.
#
# From a code writing perspective I first verified that I could get the exponential part of this to plot the way I wanted and then I added in the periodic behavior.
#
# ```
# xenvelope = initialPosition*np.exp(-decayConst*modelTime)
# ```
#
# Then...
#
# ```
# xposition = initialPosition*np.exp(-decayConst*modelTime)*np.cos(dampedFreq*modelTime)
# ```
#
# I had to play with this a bit to be confident that the plot was showing what I wanted it to. This raises an important point: at every step in a model we have to look and think about whether what we see in the plot is what we 'expect', or is correct. One of the beauties of a notebook is that I can go change various parameters all over the map and see what happens. Feel free to explore yourself.
# ### First Errors:
#
# **#1 Amplitude:** My first overly casual assumption was that I could take the phase shift $\phi$ to be 0. Generally all the phase shift does is move the solution left and right but in this case it's a little more complex. Consider our analytic solution at t=0:
#
# $$ \large x\rvert_{t=0} = C e^{0}cos(0 + \phi) = 1.0 m $$
#
# $$ \large \implies C = \frac{x_0}{cos(\phi)}\; \therefore C\ne x_0 $$
#
# **#2 Phase Shift:** This got me wondering how to find $\phi$. So I differentiated $x(t)$ to get a velocity expression and then set $t=0$.
#
# $$\large \frac{dx}{dt} = Ce^{-bt}[-b cos(\omega t + \phi) -\omega sin(\omega t + \phi)]$$
#
# It does NOT help that the first time I did this I lost the minus sign on the derivative of the cosine. Sheesh! Plugging in $t=0$ we get:
#
# $$\large v\rvert_{t=0} = 0 = Ce^{0}[-b cos(\phi) -\omega sin(\phi)]$$
#
# The term in the square brackets must be 0 which after a little algebra gives us:
#
# $$ \large tan(\phi) = -\frac{b}{\omega} $$
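# A quick numeric check of this algebra (hypothetical values for $b$, $\omega$ and $x_0$ — the notebook's real values are set in an earlier cell): with $\phi = \arctan(-b/\omega)$ and $C = x_0/\cos(\phi)$, the position at $t=0$ should come out to exactly $x_0$ and the velocity should vanish.

```python
import numpy as np

b, omega, x0 = 0.3, 4.0, 1.0  # hypothetical decay constant, damped frequency, initial position
phi = np.arctan(-b / omega)
C = x0 / np.cos(phi)

x = lambda t: C * np.exp(-b * t) * np.cos(omega * t + phi)
v = lambda t: C * np.exp(-b * t) * (-b * np.cos(omega * t + phi)
                                    - omega * np.sin(omega * t + phi))
print(x(0.0), v(0.0))  # x(0) = x_0 and v(0) = 0, up to rounding
```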
# ## <span style = "color:red">Learning Task:</span>
#
# In the previous errors I first lost a minus sign during my differentiation. Change the sign of $\phi$ in the previous code cell (above) and see if you can see the difference in the plot. Change...
#
# ```phaseShift = np.arctan(-decayConst/dampedFreq)```
#
# to
#
# ```phaseShift = np.arctan(decayConst/dampedFreq)```
#
# Look carefully at the very beginning of the plot and think about slopes...This inconsistency in my plot kept bothering me until I found these errors. What is that inconsistency?
# +
modelTime = np.linspace(0.,maxTime,numPoints)
xposition = amplitude*np.exp(-decayConst*modelTime)*np.cos(dampedFreq*modelTime+phaseShift)
xenvelope = initialPosition*np.exp(-decayConst*modelTime)
# analytic description of velocity if desired
xvelocity = amplitude*np.exp(-decayConst*modelTime)*(
-decayConst*np.cos(dampedFreq*modelTime+phaseShift) -
dampedFreq*np.sin(dampedFreq*modelTime+phaseShift))
# modelTime
# -
# ## Plot
#
# Now we plot it....
# +
fig1, ax1 = plt.subplots()
ax1.plot(modelTime, xposition,
color = 'red', linestyle = '-',alpha = 0.9,
linewidth = 1., label = "analytic")
ax1.plot(modelTime, xenvelope,
color = 'blue', linestyle = '--',alpha = 0.6,
linewidth = 1., label = "envelope")
plt.rcParams.update({'font.size': 16}) # make labels easier to read
ax1.set(xlabel='t (s)', ylabel='position (m)',
title='Analytic Solution')
# set axes through 0,0
ax1.spines['left'].set_position(('data', 0))
ax1.spines['bottom'].set_position(('data', 0))
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
plt.xlim([0., horizLim])
plt.ylim([-vertLim, vertLim])
fig1.set_size_inches(12, 8)
plt.legend(loc= 1)
plt.show()
# -
# ## Now the iterative numerical solution.
#
# Notice in the DE that if I know the position of the object initially I know what the spring force is. If I know velocity I also know the drag force. Here is the mental sequence....
#
# * known: x and $v_0=0$ -- I can determine a
# * knowing a and v I can predict where the object will be a little later AND its velocity after a short time
# * now I have a new position and velocity - determine a new a
# * knowing the new a and v I can once again predict the new position and velocity a moment later...
# * rinse and repeat...
# $$ \large \bar{F}_{net} = \bar{F}_{spring} + \bar{F}_{drag} $$
# ### Next Errors:
#
# **Algebra:** The first two steps in the iterative process come directly from our work in PH211 under the assumption that if we consider a short enough $\Delta t$ then the acceleration can be considered constant. In that case our normal kinematic tools apply. The following two lines of code are the same as the physics equations we derived from graphs.
#
# ```
# xPos[i] = xPos[i-1] + xVel[i-1]*deltaTime + 0.5*xAccel[i-1]*deltaTime**2
# xVel[i] = xVel[i-1] + xAccel[i-1]*deltaTime
# ```
#
# $$ \large x_f = x_0 + v_0 \Delta t + \frac{1}{2} a_{cnst} (\Delta t)^2 $$ and
#
# $$ \large v_f = v_0 + a_{cnst} \Delta t $$
# Once the next position and velocity are known then I can calculate, from Newton's Law and my freebody diagram, the acceleration in the next moment.
#
# $$ \large \bar{F}_{net} = \bar{F}_{spring} + \bar{F}_{drag} = - kx - D\frac{dx}{dt} $$
#
# $$ \large \implies a = \frac{-kx - D\frac{dx}{dt}}{m} $$
#
# When I was working on this part I was thinking about how this system is started by pulling the spring back and then releasing it from rest. As it first starts moving the acceleration created by the spring will be 'inward' (-) and the acceleration created by the drag force will be 'outward' (+). This led me to define the acceleration with a '+' sign between the terms. When I looked at the output of the model it seemed apparent there was a problem. This is what you will explore in the next **Learning Task**.
# ## <span style = "color:red">Learning Task:</span>
#
# In the errors above I described how I initially defined the acceleration with a '+' sign between the spring and drag terms. Change the sign of the drag term in the code cell below and see if you can see the difference in the plot. Change...
#
# ```xAccel[i] = -springConst*xPos[i]/massObject - dragCoef*xVel[i]/massObject```
#
# to
#
# ```xAccel[i] = -springConst*xPos[i]/massObject + dragCoef*xVel[i]/massObject```
#
# A couple of cells down in the debugging cell uncomment the following lines so you can actually look at the data being generated. These statements print out the first 10 or so positions, velocities, and accelerations.
#
# ```
# print("analytic position: ",xposition[0:nData])
# print("iterative position: ",xPos[0:nData])
# print("iterative velocity: ",xVel[0:nData])
# print("iterative acceleration: ",xAccel[0:nData])
# ```
# You will also need to increase the y limits of the plot in the plotting cell to be able to see the data.
#
# ```
# plt.ylim([-.8, 1.]) -> plt.ylim([-5., 5.])
# ```
#
# Look at both the plot and the first 10 data points and describe what you notice about each. Particularly in the data points it should be clear what is going wrong. Change the sign between the acceleration terms back to a '-' and look at the data again. Make sense now?
#
# Before you leave this task comment out the print statements since they are not needed any more and put the plot limits back to where they were.
# +
itertime = maxTime # in s
iterpoints = numPoints # same number of points as analytic soln
iterationTime = np.linspace(0.,itertime,iterpoints)
deltaTime = iterationTime[1]-iterationTime[0] # determines the size of the delta t between points
# create the x(t) and y(t) arrays to be like iterationTime and fill with 0's
xPos = np.full_like(iterationTime,0)
xVel = np.full_like(iterationTime,0)
xAccel = np.full_like(iterationTime,0)
# set the first point of array to initial position and velocity
xPos[0] = initialPosition
xVel[0] = initialVelocity
xAccel[0] = -springConst*xPos[0]/massObject - dragCoef*xVel[0]/massObject
for i in range (1,iterpoints): # taking it one step at a time
# how far will it move in deltaTime assuming constant acceleration
xPos[i] = xPos[i-1] + xVel[i-1]*deltaTime + 0.5*xAccel[i-1]*deltaTime**2
# how fast will it be going after deltaTime
xVel[i] = xVel[i-1] + xAccel[i-1]*deltaTime
# Determine the acceleration at this moment from the spring force and the drag force.
xAccel[i] = -springConst*xPos[i]/massObject - dragCoef*xVel[i]/massObject
# -
# # debugging:
#
# For debugging I need to print out data points that span the data set but not all the data points. This bit of code allows me to do that in useful ways.
# +
print("delta time: %.6f s" % deltaTime)
print("")
# first n data points
nData = 10 # n data points
print("first %.i data points" % nData)
print("")
#print("analytic position: ",xposition[0:nData])
#print("iterative position: ",xPos[0:nData])
#print("iterative velocity: ",xVel[0:nData])
#print("iterative acceleration: ",xAccel[0:nData])
# every nth data point
print("")
dataSpace = 10 # set nth spacing
print("every %.i th data point" % dataSpace)
print("")
#print("analytic position: ",xposition[0::dataSpace])
#print("iterative position: ",xPos[0::dataSpace])
#print("iterative velocity: ",xVel[0::dataSpace])
#print("iterative acceleration: ",xAccel[0::dataSpace])
# -
# ## Plot Iterative Model and Analytic Model: NOTICE DIFFERENCES!
#
# For me this step is essential to exploring a model because of the errors that always seem to creep into my coding. Some of these errors are large and some are small. Most are just oversights in setting up the math but some are actual conceptual errors. I will document the full spectrum of errors later in this notebook to keep the flow of the calculations smoother at this point.
#
# What happens at this step is that I look at the expected behavior of the system (the analytic solution) compared to my iterative solution and try to understand what is different. Once I perceive an error I then look for possible causes of that error in the code. As I make changes to the code I try to remember to document the changes in the markdown cells so I don't forget. I don't always follow my own advice but it's still good advice.
# +
fig2, ax2 = plt.subplots()
ax2.plot(modelTime, xposition,
color = 'red', linestyle = '-',alpha = 0.9,
linewidth = 1., label = "analytic")
ax2.plot(iterationTime, xPos,
color = 'green', linestyle = '-',alpha = 0.9,
linewidth = 1., label = "iteration")
ax2.plot(modelTime, xenvelope,
color = 'blue', linestyle = '--',alpha = 0.6,
linewidth = 1., label = "envelope")
plt.rcParams.update({'font.size': 16}) # make labels easier to read
ax2.set(xlabel='t (s)', ylabel='position (m)',
title='Iterative and Analytic Solutions')
# set axes through 0,0
ax2.spines['left'].set_position(('data', 0))
ax2.spines['bottom'].set_position(('data', 0))
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
plt.xlim([0., horizLim])
plt.ylim([-vertLim, vertLim])
fig2.set_size_inches(12, 8)
plt.legend(loc= 1)
#plt.savefig("images/damped.jpg")
plt.show()
# -
# ### Now that it works?
#
# Sometimes this process I just went through is called testing a 'toy model'. The idea is that we need to make sure that the calculation we are doing gives us the correct answer to a known problem. After that we can go forward and try it out on calculations where it's unclear what the result would be.
#
# ### True Air Drag:
#
# True air drag depends on the square of the velocity which leads to the following DE which is [not easily solved analytically](https://math.stackexchange.com/questions/2166648/second-order-non-linear-differential-equation-airdrag).
#
# $$ \large \frac{d^2x}{dt^2} + \frac{D}{m}(\frac{dx}{dt})^2 + \frac{k}{m} x = 0 $$
#
# On the other hand our numerical technique should be able to handle it just fine. It turns out to be a little more complex since $v^2$ loses the information about the direction of the velocity and hence the impact on the acceleration. I am out of time and that will have to wait for another day.
#
# **For Reference:** I introduced a new set of arrays which have the same names as my previous arrays except with a NL (non-linear) appended at the end, i.e. xPos becomes xPosNL. I do this to retain as much consistency in my code as possible and keep it readable.
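# Whatever form the drag takes, it can only remove mechanical energy, so one self-contained sanity check for any drag implementation is that $E = \frac{1}{2}mv^2 + \frac{1}{2}kx^2$ decays over time. A minimal sketch with hypothetical parameter values (not the ones set earlier in the notebook):

```python
import numpy as np

# Hypothetical parameters: k = spring constant, D = drag coefficient,
# m = mass; released from rest at x = 1 m.
k, D, m = 5.0, 0.5, 0.2
x, v = 1.0, 0.0
dt, steps = 4e-4, 5000

energy = []
for _ in range(steps):
    a = (-k * x - np.sign(v) * D * v**2) / m  # sign-aware quadratic drag
    x = x + v * dt + 0.5 * a * dt**2
    v = v + a * dt
    energy.append(0.5 * m * v**2 + 0.5 * k * x**2)

print(energy[0], energy[-1])  # drag removes energy, so the total decays
```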
# ## <span style = "color:red">Learning Task:</span>
#
# The obvious thing to do in order to implement a $v^2$ drag model is to just change the $v \rightarrow v^2$ and see what happens. The first thing that happened is that the oscillation gained amplitude which indicated the acceleration was increasing instead of decreasing. In the earlier 'toy model' the sign of the velocity was important in determining the direction of the drag force relative to the location. When you square the velocity you lose that information. Turns out numpy has a tool called [numpy.sign](https://numpy.org/doc/stable/reference/generated/numpy.sign.html?highlight=sign) that allows you to extract the sign of a value. Notice how I had to modify the acceleration calculation to be sure the force and acceleration from drag were in the correct direction.
#
# ```xAccelNL[i] = -springConst*xPosNL[i]/massObject - np.sign(xVelNL[i])*dragCoef*(xVelNL[i]**2)/massObject```
#
# For an initial position of 1. m look at the cyan (non-linear) line in the plot below. Describe why the difference between the linear and non-linear models makes or doesn't make sense to you.
#
# Then go back up to the top and change the initial position to 0.1 m. Rerun all the cells and look at the relationship between the linear and non-linear plots now. What is different? Is this bothersome? Do you have a possible explanation?
# +
# create the x(t) and y(t) arrays to be like iterationTime and fill with 0's
xPosNL = np.full_like(iterationTime,0)
xVelNL = np.full_like(iterationTime,0)
xAccelNL = np.full_like(iterationTime,0)
# set the first point of array to initial position and velocity
xPosNL[0] = initialPosition
xVelNL[0] = initialVelocity
xAccelNL[0] = -springConst*xPosNL[0]/massObject - np.sign(xVelNL[0])*dragCoef*(xVelNL[0]**2)/massObject
for i in range (1,iterpoints): # taking it one step at a time
# how far will it move in deltaTime assuming constant acceleration
xPosNL[i] = xPosNL[i-1] + xVelNL[i-1]*deltaTime + 0.5*xAccelNL[i-1]*deltaTime**2
# how fast will it be going after deltaTime
xVelNL[i] = xVelNL[i-1] + xAccelNL[i-1]*deltaTime
# Determine the acceleration at this moment from the spring force and the drag force.
xAccelNL[i] = -springConst*xPosNL[i]/massObject - np.sign(xVelNL[i])*dragCoef*(xVelNL[i]**2)/massObject
# -
# +
fig3, ax3 = plt.subplots()
ax3.plot(modelTime, xposition,
color = 'red', linestyle = '-',alpha = 0.9,
linewidth = 1., label = "analytic")
ax3.plot(iterationTime, xPos,
color = 'green', linestyle = '-',alpha = 0.9,
linewidth = 1., label = "iteration")
ax3.plot(iterationTime, xPosNL,
color = 'cyan', linestyle = '-',alpha = 0.9,
linewidth = 1., label = "non-linear")
ax3.plot(modelTime, xenvelope,
color = 'blue', linestyle = '--',alpha = 0.6,
linewidth = 1., label = "envelope")
plt.rcParams.update({'font.size': 16}) # make labels easier to read
ax3.set(xlabel='t (s)', ylabel='position (m)',
title='Iterative and Analytic Solutions')
# set axes through 0,0
ax3.spines['left'].set_position(('data', 0))
ax3.spines['bottom'].set_position(('data', 0))
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
plt.xlim([0., horizLim])
plt.ylim([-vertLim, vertLim])
fig3.set_size_inches(12, 8)
plt.legend(loc= 1)
#plt.savefig("images/damped.jpg")
plt.show()
# -
| .ipynb_checkpoints/DENumerical-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#importar bibliotecas
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from IPython.display import display
import matplotlib.pyplot as plt
# %matplotlib inline
# +
data = pd.read_csv('titanic.csv', delimiter=',')
display(data.head())
# res = what we want to predict: whether the person survived or not.
# classe = the class in which the person embarked.
# sexo = 0 for male and 1 for female
# irmao/conjuges = number of siblings or spouses aboard.
# pais/filho = number of parents or children aboard.
# ataxa = fare paid for the ticket
# +
# Total number of records
total = data.shape[0]
# Number of columns excluding 'res'
features = data.shape[1]-1
sobreviveu = len(data[data.res == 1])
faleceu = len(data[data.res == 0])
val = [sobreviveu, faleceu]
rate = (float(sobreviveu*100)/total)
print('Total people aboard: ',total)
print('Number of feature columns: ',features)
print('Number of survivors: ',sobreviveu)
print('Number of deaths: ',faleceu)
print('Percentage of survivors: {:.2f}%'.format(rate))
# +
# Visualizing the data
x = np.arange(2)
plt.bar(x, val)
plt.xticks(x, ('Survivors', 'Deaths'))
plt.show()
# +
# Drop the target column
features = data.drop(['res'],1)
labels = data['res']
# Convert 'idade' (age) to integer
features['idade'] = features['idade'].astype('int64')
print('Features: ')
display(features.head())
print('Labels')
print(labels.head())
# +
# Get the data into the right format
features_array = features.to_numpy()
print(features_array)
# +
# Splitting the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(features_array,labels)
print(X_train.shape)
print(X_test.shape)
# -
# Creating the model
knn = KNeighborsClassifier(n_neighbors = 1)
knn.fit(X_train, y_train)
# Creating a test sample (2nd class, male, 26 years old, no relatives, $35 fare)
X_new = np.array([[2, 0, 26, 0, 0, 35.0]])
X_new.shape
# +
prediction = knn.predict(X_new)
'Would survive' if prediction[0] == 1 else 'Would die'
# Based on the data, the model predicts that this test passenger would survive.
# +
# Creating another test sample
X2 = np.array([[3,1,78,2,1,78.8]])
prediction = knn.predict(X2)
'Would survive' if prediction[0] == 1 else 'Would die'
# +
# Model accuracy (the closer to 1, the better)
rate = knn.score(X_test, y_test)
print(f'Score: {rate:.4f}')
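# The imports at the top of this notebook pull in several classifiers that are never used. A sketch of how they could be compared on one split — on synthetic stand-in data, since 'titanic.csv' is not bundled here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data with the same shape as the Titanic features (6 columns)
rng = np.random.RandomState(0)
X = rng.rand(300, 6)
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    'logistic regression': LogisticRegression(max_iter=1000),
    'decision tree': DecisionTreeClassifier(random_state=0),
    'naive bayes': GaussianNB(),
    'svm': SVC(),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, s in scores.items():
    print(f'{name}: {s:.3f}')
```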
| titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: torch
# language: python
# name: torch
# ---
# # Preprocessing of some books by <NAME>
import numpy as np
import preprocessing as prep
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler, random_split
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
# +
dataset = prep.austen_preprocessing(debug=False)
# dataset is a dictionary
#dataset.keys()
# I saved it just once, since every time the numerical encoding changes
#np.save("dataset", dataset)
# +
dataset = np.load("dataset.npy", allow_pickle=True).item()
word2index = dataset['word2index']
index2word = dataset['index2word']
num_sentences = dataset['num_sentences']
print("word2index size: ", len(word2index))
num_w = word2index['bingley']
retrieved = index2word[num_w]
print("Encoded name: ", num_w)
print("Retrieved name: ", retrieved)
print("Number of sentences in the text: ", len(num_sentences))
# -
class AustenDataset(Dataset):
def __init__(self, filepath, transform=None):
dataset = np.load(filepath, allow_pickle=True).item()
self.word2index = dataset['word2index']
self.index2word = dataset['index2word']
self.num_sentences = dataset['num_sentences']
self.transform = transform
sorted_seq = sorted(self.num_sentences, key=lambda x: len(x), reverse=True)
# save original lengths of each sentence (subtract 1 for slicing in x and y)
self.seq_len = [len(s) -1 for s in sorted_seq]
enc_sentences = [torch.LongTensor(enc_s) for enc_s in sorted_seq]
x = [s[:-1] for s in enc_sentences] # last word of each sentence doesn't have a label
y = [s[1:] for s in enc_sentences] # first word of each sentence doesn't have an x associated
x_padded = pad_sequence(x, batch_first=True)
y_padded = pad_sequence(y, batch_first=True)
seq_len = np.array(self.seq_len)
mask = (seq_len > 0)
x_padded = x_padded[mask]
y_padded = y_padded[mask]
seq_len = torch.LongTensor(seq_len[mask])
self.x_padded = x_padded
self.y_padded = y_padded
self.seq_len = seq_len
self.vocab_len = len(self.word2index)
def __len__(self):
return len(self.x_padded)
def __getitem__(self, idx):
# Get text
x = self.x_padded[idx]
y = self.y_padded[idx]
length = self.seq_len[idx]
return (x, y, length)
# +
full_dataset = AustenDataset("dataset.npy")
training_size = int(0.9 * len(full_dataset))
test_size = len(full_dataset) - training_size
train_size = int(0.9 * training_size)
val_size = training_size - train_size
train_dataset, val_dataset, test_dataset = random_split(full_dataset, [train_size, val_size, test_size])
train_sampler = SubsetRandomSampler(np.arange(train_size))
val_sampler = SubsetRandomSampler(np.arange(val_size))
test_sampler = SubsetRandomSampler(np.arange(test_size))
train_loader = DataLoader(train_dataset, batch_size = 32, num_workers=4, drop_last=True, sampler = train_sampler)
val_loader = DataLoader(val_dataset, batch_size = 128, num_workers=4, drop_last=False, sampler = val_sampler)
test_loader = DataLoader(test_dataset, batch_size = 128, num_workers=4, drop_last=False, sampler = test_sampler)
print("Number of batches in train set: ", len(train_loader))
# -
# # Training
class Network(nn.Module):
def __init__(self, vocab_size, emb_dim, hidden_units, layers_num, dropout_prob=0, linear_size=512):
super().__init__()
# Define recurrent layer
self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
self.rnn = nn.LSTM(input_size=emb_dim,
hidden_size=hidden_units,
num_layers=layers_num,
dropout=dropout_prob,
batch_first=True)
self.l1 = nn.Linear(hidden_units,linear_size)
self.out = nn.Linear(linear_size,vocab_size) # leave out the '<PAD>' label
def forward(self, x, seq_lengths, state=None):
# Embedding of x
x = self.embedding(x)
# packing for efficient processing
packed_input = pack_padded_sequence(x, seq_lengths, batch_first=True)
# propagate through the LSTM
packed_output, state = self.rnn(packed_input, state)
# unpack for linear layers processing
output, _ = pad_packed_sequence(packed_output, batch_first=True)
#print("Unpacked output: ", output.size(), '\n')
# Linear layer
output = F.leaky_relu(self.l1(output))
# Linear layer with log_softmax (useful for custom cross-entropy)
output = F.log_softmax(self.out(output), dim=2)
return output, state
def masked_cross_ent_loss(y_true, y_pred, pad_token=0, debug=False):
"""
y_true should be of size (batch_size, seq_len)
y_pred should be of size (batch_size, seq_len, vocab_size)
where vocab_size does not count the token '<PAD>' entry.
"""
y_t_flat = y_true.reshape(-1)
y_p_flat = y_pred.view(-1, y_pred.size()[-1])
if debug:
print("y_t_flat.size() : ", y_t_flat.size())
print("y_p_flat.size() : ", y_p_flat.size())
mask = (y_t_flat>0).float() # consider only the non-padded words
    num_tokens = int(torch.sum(mask).item()) # number of non-padded tokens
if debug:
print("Num tokens : ", num_tokens)
# for each word choose the log of the prob predicted for the right label
y_p_masked = y_p_flat[range(y_p_flat.shape[0]), y_t_flat] * mask
# compute cross entropy
cross_ent_loss = - torch.sum(y_p_masked) / num_tokens
if debug:
print("Cross-entropy : ", cross_ent_loss)
return cross_ent_loss
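# The masking logic above is easier to see in a plain-numpy sketch (tiny hypothetical batch; token 0 plays the role of '<PAD>'):

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = np.array([[3, 1, 0],        # last position is padding
                   [2, 2, 1]])       # shape (batch=2, seq_len=3); 0 = '<PAD>'
vocab = 4
logits = rng.standard_normal((2, 3, vocab))
log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))  # log-softmax

flat_true = y_true.reshape(-1)                 # (6,)
flat_pred = log_probs.reshape(-1, vocab)       # (6, 4)
mask = (flat_true > 0)                         # drop the padded position
picked = flat_pred[np.arange(flat_true.size), flat_true]
loss = -(picked * mask).sum() / mask.sum()     # mean NLL over the 5 real tokens
print(loss)
```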
params = dict(vocab_size=len(word2index), emb_dim=100, hidden_units=128, layers_num=2, dropout_prob=0.2)
net = Network(**params)
optimizer = torch.optim.Adamax(net.parameters(), lr=0.1, weight_decay=1e-4)
# +
import time
def train_one_epoch(net, optimizer, train_loader, val_loader, debug=False, verbose=True):
verbose_print = print if verbose else lambda *args, **kwargs : None
train_loss = 0
val_loss = 0
n_batches = len(train_loader)
print_every = n_batches // 10
start_time = time.time()
net.train()
for i, data in enumerate(train_loader,0):
x, y, lengths = data
sorted_x = torch.LongTensor(sorted(x.numpy(), key=lambda x: np.count_nonzero(x), reverse=True))
sorted_y = torch.LongTensor(sorted(y.numpy(), key=lambda x: np.count_nonzero(x), reverse=True))
sorted_lengths = torch.LongTensor(sorted(lengths.numpy(), key=lambda x: x, reverse=True))
L_max = sorted_lengths.max().item()
y_trunc = sorted_y[:,:L_max]
optimizer.zero_grad()
        try:
            output, _ = net(sorted_x,sorted_lengths) # returns (output, state)
        except RuntimeError:
            print("Lengths: ", sorted_lengths)
            raise # re-raise after logging: output is undefined here, so continuing would fail anyway
loss = masked_cross_ent_loss(y_trunc, output, pad_token=0, debug=debug)
loss.backward()
optimizer.step()
train_loss += loss.item()
if ((i+1) % (print_every) == 0) or (i == n_batches - 1):
verbose_print('\r'+"Batch {}/{}, {:d}% \t Train loss: {:.3f} took: {:.2f}s ".format(
i+1, n_batches, int(100 * (i+1) / n_batches), train_loss / (i+1),
time.time() - start_time), end=' ')
net.eval()
with torch.no_grad():
for i, data in enumerate(val_loader,0):
x, y, lengths = data
sorted_x = torch.LongTensor(sorted(x.numpy(), key=lambda x: np.count_nonzero(x), reverse=True))
sorted_y = torch.LongTensor(sorted(y.numpy(), key=lambda x: np.count_nonzero(x), reverse=True))
sorted_lengths = torch.LongTensor(sorted(lengths.numpy(), key=lambda x: x, reverse=True))
L_max = sorted_lengths.max().item()
y_trunc = sorted_y[:,:L_max]
output, _ = net(sorted_x, sorted_lengths) # returns (output, state)
loss = masked_cross_ent_loss(y_trunc, output, pad_token=0, debug=debug)
val_loss += loss.item()
return net, optimizer, train_loss/len(train_loader), val_loss/len(val_loader)
# -
# first cycle of training
n_epochs = 20
train_log = []
val_log = []
for e in range(n_epochs):
start_time = time.time()
net, optimizer, train_loss, val_loss = train_one_epoch(net, optimizer, train_loader, val_loader, debug=False)
epoch_time = time.time() - start_time
torch.save(net.state_dict(), 'params'+str(e)+'.pth')
train_log.append(train_loss)
val_log.append(val_loss)
print("\nEpoch: %d - time: %.2f - train loss: %.4f - val loss: %.4f"%((e+1), epoch_time, train_loss, val_loss))
torch.save(net.state_dict(), 'final_params.pth')
train_log = list(np.load("train_log.npy"))
val_log = list(np.load("val_log.npy"))
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(np.array(train_log), label = 'train loss')
plt.plot(np.array(val_log), label='val loss')
plt.xlabel('Number of epochs', fontsize=16)
plt.ylabel('Cross-entropy loss', fontsize=16)
plt.xticks(np.arange(0,5)*5)
plt.legend()
plt.show()
np.save("train_log", train_log)
np.save("val_log", val_log)
# ## Text generation
trained_net = Network(**params)
trained_net.load_state_dict(torch.load('final_params.pth'))
seed = "You are mr. Darcy I suppose"
def generate_sentence_v0(seed, trained_net, word2index, index2word, len_generated_seq = 10, debug=False, T=1):
# preprocess seed
import re
special_chars = ['!','?','&','(',')','*','-','_',':',';','"','\'','1','2','3','4','5','6','7','8','9','0']
for x in special_chars:
seed = seed.replace(x,' ')
full_text = seed.lower()
full_text = full_text.replace('mr.','mr')
full_text = full_text.replace('mrs.','mrs')
full_text = re.sub('à', 'a', full_text)
full_text = re.sub('ê', 'e', full_text)
full_text = re.sub(r'[.]',' .\n', full_text)
full_text = full_text.replace(',',' ,')
full_text = full_text.replace(' ',' ')
num_sentence = prep.numerical_encoder(full_text, word2index)
if debug:
for i,num_w in enumerate(num_sentence):
print(i,num_w)
enc_sentence = torch.LongTensor(num_sentence).view(1,-1)
context = enc_sentence[:,:-1]
length_context = torch.LongTensor(np.array([len(num_sentence)-1])).view(1)
last_word = enc_sentence[:,-1].view(1,1)
length_last = torch.LongTensor([1]).view(1)
if debug:
print("enc_sentence : ", enc_sentence.size())
print("context : ", context.size(), context)
print("length_context: ", length_context.size(), length_context)
print("last_word: ", last_word.size(), last_word)
    with torch.no_grad():
        trained_net.eval()
        _, hidden_context = trained_net(context, length_context)
        gen_words = []
        for i in range(len_generated_seq):
            last_word_ohe, hidden_context = trained_net(last_word, length_last, state=hidden_context)
prob_last_word = np.exp(last_word_ohe.numpy().flatten()/T)
prob_last_word = prob_last_word/ prob_last_word.sum()
if debug:
print("prob_last_word (shape): ", prob_last_word.shape)
print("Sum of probabilities: ", prob_last_word.sum())
print("'<PAD>' probability: ", prob_last_word[0])
last_word_np = np.random.choice(np.arange(len(prob_last_word)), p=prob_last_word)
gen_words.append(last_word_np)
last_word = torch.LongTensor([last_word_np]).view(1,1)
if debug:
print("Last word: ", last_word_np)
gen_words = np.array(gen_words).flatten()
decoded_sentence = prep.numerical_decoder(gen_words, index2word)
output_string = ' '.join(decoded_sentence)
print("Seed: ", seed, '\n')
print("Generated sentence: ", output_string)
print("\nAll together: ", seed, output_string)
generate_sentence_v0(seed, trained_net, word2index, index2word, len_generated_seq=100, debug=False, T=1)
| Nicola_Dainese_Ex3_NN/Austen_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# TODO: handle rgb or bw images
# TODO: slice edges and get median of the rows to find ridges
# TODO see how uri calculated the ridges
# +
# %matplotlib inline
import matplotlib.image as img
import matplotlib.pyplot as plt
import numpy as np
def crop(image, ymin, ymax, xmin, xmax):
return image[ymin:ymax, xmin:xmax]
def thresholded(image, val):
    return image > val # boolean mask of pixels brighter than the threshold
def find_min_max_without_orphand_pixels(nonzero_dimension, crop_filter=20):
sorted = np.sort(nonzero_dimension)
prev=-1
min_val = sorted[0]
for i, x in enumerate(sorted[:100]):
if prev >= 0 and x - prev > crop_filter:
min_val = x
prev = x
prev=-1
max_val = sorted[-1]
for i, x in enumerate(sorted[-100:]):
if prev >= 0 and x - prev > crop_filter:
max_val = prev
break
prev = x
return min_val, max_val
def crop_thresholded(image, crop_val=50):
temp = crop(image, 600, 4300, 1000, 6000)
temp = thresholded(temp, crop_val)
temp = temp * 1
temp = np.nonzero(temp)
ymin, ymax = find_min_max_without_orphand_pixels(temp[0])
xmin,xmax = find_min_max_without_orphand_pixels(temp[1])
temp = crop(image, 600+ymin, 600+ymax, 1000+xmin, 1000+xmax)
return temp
# TODO: fix performance!!! http://scikit-image.org/docs/dev/user_guide/tutorial_parallelization.html
def combine_3_images_to_RGB(red, green, blue):
new_image = np.empty((blue.shape[0],blue.shape[1],3))
for x in range(0, blue.shape[0]):
for y in range(0, blue.shape[1]):
new_image[x,y,0] = red[x,y]
new_image[x,y,1] = green[x,y]
new_image[x,y,2] = blue[x,y]
return new_image
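# The per-pixel loops above are what the performance TODO refers to; an equivalent vectorized version (note the output dtype follows the inputs, whereas np.empty above defaults to float64):

```python
import numpy as np

def combine_3_images_to_RGB_fast(red, green, blue):
    # Stack the three channel planes along a new last axis -> shape (H, W, 3)
    return np.stack([red, green, blue], axis=-1)
```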
# +
blue = img.imread("/Users/il239838/Downloads/private/Thesis/Papyrus/jm_4a36716c764b6d6b4c442f464b3342347436653838673d3d/P598-Fg009-R/P598-Fg009-R-C01-R01-D07012014-T124136-LR445__001.jpg")
green = img.imread("/Users/il239838/Downloads/private/Thesis/Papyrus/jm_4a36716c764b6d6b4c442f464b3342347436653838673d3d/P598-Fg009-R/P598-Fg009-R-C01-R01-D07012014-T124150-LR540__004.jpg")
red = img.imread("/Users/il239838/Downloads/private/Thesis/Papyrus/jm_4a36716c764b6d6b4c442f464b3342347436653838673d3d/P598-Fg009-R/P598-Fg009-R-C01-R01-D07012014-T124206-LR656__007.jpg")
blue = crop_thresholded(blue)
green = crop_thresholded(green)
red = crop_thresholded(red)
plt.imshow(green)
img.imsave("/Users/il239838/Downloads/private/Thesis/Papyrus/outputs/pseudo_red_crop", red)
img.imsave("/Users/il239838/Downloads/private/Thesis/Papyrus/outputs/pseudo_green_crop", green)
img.imsave("/Users/il239838/Downloads/private/Thesis/Papyrus/outputs/pseudo_blue_crop", blue)
# -
new_image = combine_3_images_to_RGB(red,green,blue)
new_image.shape
plt.imshow(new_image)
img.imsave("/Users/il239838/Downloads/private/Thesis/Papyrus/outputs/pseudo_RGB_crop", new_image)
all_black = np.zeros((100,100))
all_black
plt.imshow(all_black)
img.imsave("/Users/il239838/Downloads/private/Thesis/Papyrus/all_black", all_black)
all_white = np.full((100,100), 145, dtype=int)
all_white
plt.imshow(all_white)
# img.imsave("/Users/il239838/Downloads/private/Thesis/Papyrus/all_white", all_white)
# +
mixed = np.empty((100,100), dtype=int)
for x in range(0, 100):
for y in range(0, 100):
mixed[x,y] = x+y
plt.imshow(mixed)
mixed
# -
| src/others/1_basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Looping Techniques
dict_ = {
"name" : "Jai",
"age" : 12,
"sex" : "M",
}
print(dict_["name"])
print(dict_.items()) # dict.items() returns a view of (key, value) pairs
for key, item in dict_.items():
print("{0} : {1}".format(key, item))
nums = [5,6,1,3,8,5,7]  # named `nums` rather than `list`, which would shadow the built-in
for i in sorted(nums):
    print(i, end = " ")
for i in reversed(nums):
    print(i, end = " ")
str1, str2, str3 = "", "Jai", "Singhal"
str1 or str2 or str3
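# `or` short-circuits and returns the first truthy operand (not a bool), so
# the empty string above is skipped and "Jai" is the result:
truthy = "" or "Jai" or "Singhal"
print(truthy)  # Jai
fallback = 0 or [] or "default"
print(fallback)  # default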
# +
## Comparing sequences of the same type
print([1,2,3] <= [1,2,4])
print((1,6) < (2,4))
print("Abashhas" >= "aaasdsdaasd")
print( (1, 4, [6,3]) > (1, 4, [6]))
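# Comparison is lexicographic: elements are compared pairwise left to right
# and the first unequal pair decides; a sequence that is a prefix of a longer
# one compares as smaller:
print([1, 2, 3] <= [1, 2, 4])        # True: decided by 3 < 4
print((1, 4, [6]) < (1, 4, [6, 3]))  # True: [6] is a prefix of [6, 3]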
| python_basics/Looping and Conditions Techniques.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Download-and-Clean-Data" data-toc-modified-id="Download-and-Clean-Data-1"><span class="toc-item-num">1 </span>Download and Clean Data</a></span></li><li><span><a href="#Making-Recommendations" data-toc-modified-id="Making-Recommendations-2"><span class="toc-item-num">2 </span>Making Recommendations</a></span><ul class="toc-item"><li><span><a href="#BERT" data-toc-modified-id="BERT-2.1"><span class="toc-item-num">2.1 </span>BERT</a></span></li><li><span><a href="#Doc2vec" data-toc-modified-id="Doc2vec-2.2"><span class="toc-item-num">2.2 </span>Doc2vec</a></span></li><li><span><a href="#LDA" data-toc-modified-id="LDA-2.3"><span class="toc-item-num">2.3 </span>LDA</a></span></li><li><span><a href="#TFIDF" data-toc-modified-id="TFIDF-2.4"><span class="toc-item-num">2.4 </span>TFIDF</a></span></li><li><span><a href="#WikilinkNN" data-toc-modified-id="WikilinkNN-2.5"><span class="toc-item-num">2.5 </span>WikilinkNN</a></span></li><li><span><a href="#Weighted-Model" data-toc-modified-id="Weighted-Model-2.6"><span class="toc-item-num">2.6 </span>Weighted Model</a></span></li></ul></li></ul></div>
# -
# **rec_books**
#
# Downloads an English Wikipedia dump and parses it for all available books. All available models are then run to compare recommendation efficacy.
#
# If using this notebook in [Google Colab](https://colab.research.google.com/github/andrewtavis/wikirec/blob/main/examples/rec_books.ipynb), you can activate GPUs by following `Edit > Notebook settings > Hardware accelerator` and selecting `GPU`.
# +
# pip install wikirec -U
# -
# The following gensim update might be necessary in Google Colab, as the preinstalled version is outdated.
# +
# pip install gensim -U
# -
# In Colab you'll also need to download nltk's names data.
# +
# import nltk
# nltk.download("names")
# +
import os
import json
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
sns.set(rc={"figure.figsize": (15, 5)})
from wikirec import data_utils, model, utils
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:99% !important; }</style>"))
# -
# # Download and Clean Data
files = data_utils.download_wiki(
language="en", target_dir="./enwiki_dump", file_limit=-1, dump_id=False
)
len(files)
topic = "books"
data_utils.parse_to_ndjson(
topics=topic,
output_path="./enwiki_books.ndjson",
input_dir="./enwiki_dump",
partitions_dir="./enwiki_book_partitions",
limit=None,
delete_parsed_files=True,
multicore=True,
verbose=True,
)
# +
with open("./enwiki_books.ndjson", "r") as fin:
books = [json.loads(l) for l in fin]
print(f"Found a total of {len(books)} books.")
# -
titles = [b[0] for b in books]
texts = [b[1] for b in books]
wikilinks = [b[2] for b in books]
# +
if os.path.isfile("./book_corpus_idxs.pkl"):
print(f"Loading book corpus and selected indexes")
with open(f"./book_corpus_idxs.pkl", "rb") as f:
text_corpus, selected_idxs = pickle.load(f)
selected_titles = [titles[i] for i in selected_idxs]
else:
print(f"Creating book corpus and selected indexes")
text_corpus, selected_idxs = data_utils.clean(
texts=texts,
language="en",
min_token_freq=5, # 0 for Bert
min_token_len=3, # 0 for Bert
min_tokens=50,
max_token_index=-1,
min_ngram_count=3,
remove_stopwords=True, # False for Bert
ignore_words=None,
remove_names=True,
sample_size=1,
verbose=True,
)
selected_titles = [titles[i] for i in selected_idxs]
with open("./book_corpus_idxs.pkl", "wb") as f:
print("Pickling book corpus and selected indexes")
pickle.dump([text_corpus, selected_idxs], f, protocol=4)
# -
# # Making Recommendations
single_input_0 = "<NAME> and the Philosopher's Stone"
single_input_1 = "The Hobbit"
multiple_inputs = ["<NAME> and the Philosopher's Stone", "The Hobbit"]
def load_or_create_sim_matrix(
method,
corpus,
metric,
topic,
path="./",
bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
**kwargs,
):
"""
    Loads or creates a similarity matrix used to deliver recommendations.
    NOTE: the resulting .pkl files can be 5-10 GB or more in size.
"""
if os.path.isfile(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl"):
print(f"Loading {method} {topic} {metric} similarity matrix.")
with open(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl", "rb") as f:
sim_matrix = pickle.load(f)
else:
print(f"Creating {method} {topic} {metric} similarity matrix.")
embeddings = model.gen_embeddings(
method=method, corpus=corpus, bert_st_model=bert_st_model, **kwargs,
)
sim_matrix = model.gen_sim_matrix(
method=method, metric=metric, embeddings=embeddings,
)
with open(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl", "wb") as f:
print(f"Pickling {method} {topic} {metric} similarity matrix.")
pickle.dump(sim_matrix, f, protocol=4)
return sim_matrix
# ## BERT
# Remove n-grams for BERT training
corpus_no_ngrams = [
" ".join([t for t in text.split(" ") if "_" not in t]) for text in text_corpus
]
# We can pass kwargs for sentence_transformers.SentenceTransformer.encode
bert_sim_matrix = load_or_create_sim_matrix(
method="bert",
corpus=corpus_no_ngrams,
metric="cosine", # euclidean
topic=topic,
path="./",
bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
show_progress_bar=True,
batch_size=32,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=bert_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=bert_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=bert_sim_matrix,
n=10,
metric="cosine",
)
# ## Doc2vec
# We can pass kwargs for gensim.models.doc2vec.Doc2Vec
doc2vec_sim_matrix = load_or_create_sim_matrix(
method="doc2vec",
corpus=text_corpus,
metric="cosine", # euclidean
topic=topic,
path="./",
vector_size=100,
epochs=10,
alpha=0.025,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=doc2vec_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=doc2vec_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=doc2vec_sim_matrix,
n=10,
metric="cosine",
)
# ## LDA
# +
topic_nums_to_compare = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
# We can pass kwargs for gensim.models.ldamulticore.LdaMulticore
utils.graph_lda_topic_evals(
corpus=text_corpus,
num_topic_words=10,
topic_nums_to_compare=topic_nums_to_compare,
metrics=True,
verbose=True,
)
plt.show()
# -
# We can pass kwargs for gensim.models.ldamulticore.LdaMulticore
lda_sim_matrix = load_or_create_sim_matrix(
method="lda",
corpus=text_corpus,
metric="cosine", # euclidean not an option at this time
topic=topic,
path="./",
num_topics=90,
passes=10,
decay=0.5,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=lda_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=lda_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=lda_sim_matrix,
n=10,
metric="cosine",
)
# ## TFIDF
# We can pass kwargs for sklearn.feature_extraction.text.TfidfVectorizer
tfidf_sim_matrix = load_or_create_sim_matrix(
method="tfidf",
corpus=text_corpus,
metric="cosine", # euclidean
topic=topic,
path="./",
max_features=None,
norm='l2',
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=tfidf_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=tfidf_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=tfidf_sim_matrix,
n=10,
metric="cosine",
)
# ## WikilinkNN
# We can pass kwargs for the WikilinkNN Keras model
wikilink_sim_matrix = load_or_create_sim_matrix(
method="wikilinknn",
corpus=text_corpus,
metric="cosine", # euclidean
topic=topic,
path="./",
path_to_json="./enwiki_books.ndjson",
path_to_embedding_model="books_embedding_model.h5",
embedding_size=50,
epochs=20,
verbose=True,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=wikilink_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=wikilink_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=wikilink_sim_matrix,
n=10,
metric="cosine",
)
# ## Weighted Model
# +
# wikilink_sims_copy = wikilink_sims.copy()
# not_selected_idxs = [i for i in range(len(titles)) if i not in selected_idxs]
# wikilink_sims_copy = np.delete(wikilink_sims_copy, not_selected_idxs, axis=0)
# wikilink_sims_copy = np.delete(wikilink_sims_copy, not_selected_idxs, axis=1)
# -
tfidf_weight = 0.35
bert_weight = 1.0 - tfidf_weight
bert_tfidf_sim_matrix = tfidf_weight * tfidf_sim_matrix + bert_weight * bert_sim_matrix
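# The blend above is a convex combination of the two matrices; a toy sketch
# with made-up 2x2 similarity matrices shows how each entry is mixed:
# +
import numpy as np

toy_tfidf = np.array([[1.0, 0.2], [0.2, 1.0]])
toy_bert = np.array([[1.0, 0.6], [0.6, 1.0]])
toy_blend = 0.35 * toy_tfidf + 0.65 * toy_bert
# off-diagonal: 0.35 * 0.2 + 0.65 * 0.6 = 0.46; the diagonal stays at 1.0
# -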
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=bert_tfidf_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=bert_tfidf_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=bert_tfidf_sim_matrix,
n=10,
metric="cosine",
)
| examples/rec_books.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Preparatory Step
# +
# Prerequisite package imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
# %matplotlib inline
# The `solutions_univ.py` file, available on the Notebook server, contains solutions to the TO DO tasks.
# The solution to each task is in a separate function in the `solutions_univ.py` file.
# Do not refer to the file until you have attempted to write the code yourself.
from solutions_univ import histogram_solution_1
# -
# ### About the Dataset
# We'll continue working with the Pokémon dataset in this workspace. The data was assembled from the database of information found in this [GitHub repository](https://github.com/veekun/pokedex/tree/master/pokedex/data/csv).
#
pokemon = pd.read_csv('./data/pokemon.csv')
pokemon.head()
# ### **TO DO Task**
# Pokémon have a number of different statistics that describe their combat capabilities. Here, create a _histogram_ that depicts the distribution of 'special-defense' values taken.
#
# **Hint**: Try playing around with different bin width sizes to see what best depicts the data.
# +
plt.figure(figsize=[20, 10])
plt.subplot(2,2,1)
bins = np.arange(0, pokemon['special-defense'].max()+1, 1)
plt.hist(data=pokemon, x='special-defense', bins=bins);
plt.subplot(2, 2, 2)
bins = np.arange(0, pokemon['special-defense'].max()+5, 5)
plt.hist(data=pokemon, x='special-defense', bins=bins);
plt.subplot(2,2,3)
bins = np.arange(0, pokemon['special-defense'].max()+10, 10)
plt.hist(data=pokemon, x='special-defense', bins=bins);
plt.subplot(2, 2, 4)
bins = np.arange(0, pokemon['special-defense'].max()+15, 15)
plt.hist(data=pokemon, x='special-defense', bins=bins);
# -
sb.displot(data=pokemon, x='special-defense', kde=True);
bin_edges = np.arange(0, pokemon['special-defense'].max()+5, 5)
sb.displot(data=pokemon, x='special-defense', bins=bin_edges);
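# np.arange here produces bin *edges*, not bin counts; the stop value is
# padded past the data maximum so the largest observation still lands in the
# final bin:
import numpy as np

edges = np.arange(0, 21, 5)
print(edges)  # [ 0  5 10 15 20]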
# ### Expected Output
# **Your visualization does not need to be exactly the same as ours, but it should support the same conclusions.**
# run this cell to check your work against ours
histogram_solution_1()
| histogram_practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# nuclio: ignore
import nuclio
# %nuclio config kind = "job"
# %nuclio config spec.image = "python:3.6-jessie"
# %%nuclio cmd -c
pip install requests
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import os
import json
import requests
from mlrun.execution import MLClientCtx
from typing import List
def slack_notify(
context: MLClientCtx,
webhook_url: str = "URL",
slack_blocks: List[str] = [],
notification_text: str = "Notification"
) -> None:
"""Summarize a table
:param context: the function context
:param webhook_url: Slack incoming webhook URL. Please read: https://api.slack.com/messaging/webhooks
:param notification_text: Notification text
:param slack_blocks: Message blocks list. NOT IMPLEMENTED YET
"""
data = {
'text': notification_text
}
print("====",webhook_url)
response = requests.post(webhook_url, data=json.dumps(
data), headers={'Content-Type': 'application/json'})
print('Response: ' + str(response.text))
print('Response code: ' + str(response.status_code))
# +
# nuclio: end-code
# -
# ### mlconfig
# +
from mlrun import mlconf
import os
mlconf.dbpath = 'http://mlrun-api:8080'
mlconf.artifact_path = mlconf.artifact_path or f'{os.environ["HOME"]}/artifacts'
# -
# ### save
# +
from mlrun import code_to_function
# create job function object from notebook code
fn = code_to_function("slack_notify")
# add metadata (for templates and reuse)
fn.spec.default_handler = "slack_notify"
fn.spec.description = "Send Slack notification"
fn.metadata.categories = ["ops"]
fn.metadata.labels = {"author": "mdl"}
fn.export("function.yaml")
# -
# ## tests
from mlrun import import_function
func = import_function("hub://slack_notify")
# +
from mlrun import NewTask, run_local
#Slack incoming webhook URL. Please read: https://api.slack.com/messaging/webhooks
task_params = {
"webhook_url" : "<KEY>",
"notification_text" : "Test Notification"
}
# -
task = NewTask(
name="tasks slack notify",
params = task_params,
handler=slack_notify)
# ### run local where artifact path is fixed
run = run_local(task, artifact_path=mlconf.artifact_path)
# ### run remote where artifact path includes the run id
func.deploy()
func.run(task, params=task_params, workdir=mlconf.artifact_path)
func.doc()
| slack_notify/slack_notify.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.0 ('uda')
# language: python
# name: python3
# ---
import pandas as pd
train_df=pd.read_csv('../yelp5/train.csv',header=None)
test_df=pd.read_csv('../yelp5/test.csv',header=None)
train_df.columns=['label','sentence']
test_df.columns=['label','sentence']
print(train_df.shape,test_df.shape)
train_df['label'].unique()  # columns were renamed above, so index by name rather than 0
| yelp5/process.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
# %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from astropy import constants as const
from astropy import units
# for nicer looking plots. You can comment this if you don't have seaborn
import seaborn as sns; sns.set(context="poster")
# -
# # Activity Description
# see: https://docs.google.com/document/d/1ALSBqY4wv7BfCtdi1uFhnFNNlMcIHSg1VooC6rgxvds/edit?usp=sharing
# # Define and test functions
def equilibrium_temperature(luminosity, distance, albedo=0):
"""
Calculates the equilibrium temperature of a planet, assuming blackbody radiation
and thermodynamic equilibrium.
Parameters
----------
luminosity : float
luminosity of the host star [ergs s**-1]
distance : float
distance between the star and the planet [cm]
(assumes circular orbits)
albedo : float
albedo of the planet (0 for perfect absorber, 1 for perfect reflector)
Returns
-------
T_eq : float
Equilibrium temperature of the planet
"""
T_eq = (luminosity / distance**2)**.25 \
* ((1-albedo) / (16 * np.pi * const.sigma_sb.cgs.value))**.25
return T_eq
# ### Test `equilibrium_temperature`:
# (there are better ways to do testing using libraries like `pytest`, `nose` or `unittest`)
T_eq_earth = equilibrium_temperature(const.L_sun.cgs.value, units.AU.to(units.cm))
print("Earth equilibrium temperature: ", T_eq_earth, " K")
# ### Mass-luminosity relation
# +
def luminosity_from_mass(mass):
"""
Convert mass to luminosity using a standard mass-luminosity relation
Parameters
----------
mass : float
stellar mass [g]
Returns
-------
luminosity : float
stellar luminosity [ergs s**-1]
Notes
-----
not vectorized.
`mass` cannot be an array
Sources
-------
    Salaris & Cassisi (2005) Evolution of stars and stellar populations
http://books.google.com/books?id=r1dNzr8viRYC&lpg=PA138&dq=Mass-Luminosity%20relation&lr=&client=firefox-a&pg=PA138#v=onepage&q=&f=false
Nebojsa (2004) Advanced astrophysics
http://books.google.com/books?id=-ljdYMmI0EIC&lpg=PA19&ots=VdMUIiCdP_&dq=Mass-luminosity%20relation&pg=PA19#v=onepage&q=&f=false
"""
# convert to solar units
mass /= const.M_sun.cgs.value
if mass < .43:
luminosity = .23 * (mass)**2.3
elif mass < 2:
luminosity = (mass)**4
elif mass < 20:
luminosity = 1.5 * (mass)**3.5
else:
luminosity = 3200
# convert back into cgs units
luminosity *= const.L_sun.cgs.value
return luminosity
# -
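# The relation above is scalar-only; a vectorized sketch (an alternative, not
# part of the original activity) using np.select accepts arrays of masses
# already expressed in solar units:
# +
import numpy as np

def luminosity_from_mass_vec(mass_msun):
    conds = [mass_msun < .43, mass_msun < 2, mass_msun < 20]
    lums = [.23 * mass_msun**2.3, mass_msun**4, 1.5 * mass_msun**3.5]
    # np.select picks, per element, the first choice whose condition is True
    return np.select(conds, lums, default=3200.0)
# -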
# ### test `luminosity_from_mass`:
if np.isclose(luminosity_from_mass(const.M_sun.cgs.value) / const.L_sun.cgs.value, 1 ):
print("OK")
else:
print("Error: luminosity_from_mass gave wrong answer")
# ### Equilibrium temperature as a function of mass
#
def equilibrium_temperature_from_mass(mass, distance, albedo=0):
return equilibrium_temperature(luminosity_from_mass(mass),
distance,
albedo = albedo)
# ### test `equilibrium_temperature_from_mass`
if np.isclose(equilibrium_temperature_from_mass(const.M_sun.cgs.value, units.AU.to(units.cm)),
equilibrium_temperature(const.L_sun.cgs.value, units.AU.to(units.cm))):
print("OK")
else:
print("Error: equilibrium_temperature_from_mass gave wrong answer")
# # Results for activities
# +
print("albedo = 0")
T_eq_earth = equilibrium_temperature(const.L_sun.cgs.value, units.AU.to(units.cm))
print("Earth equilibrium temperature: ", T_eq_earth, " K")
print()
Earth_albedo = .3
print("albedo = ", Earth_albedo)
T_eq_earth = equilibrium_temperature(const.L_sun.cgs.value,
units.AU.to(units.cm),
albedo=Earth_albedo)
print("Earth equilibrium temperature: ", T_eq_earth, " K")
# +
mass = .96 * const.M_sun.cgs.value
luminosity = luminosity_from_mass(mass)
T_eq_earth = equilibrium_temperature(luminosity, units.AU.to(units.cm))
print("Earth equilibrium temperature from lower mass sun: ", T_eq_earth)
print("For a 10 K change in T_eq, the sun would need to be ",
round(100*(mass / const.M_sun.cgs.value)),
"% of its current mass")
# -
# ### Get equilibrium temperatures of all the planets
# +
# the better way to do this is with objects,
# but we're not going to teach object-oriented programming in this workshop
# tuples should be ("name", distance [in cm])
planets = [
("Mercury", 5.79e12),
("Venus", 1.08e13),
("Earth", 1.50e13),
("Mars", 2.28e13),
("Jupiter", 7.78e13),
("Saturn", 1.43e14),
("Uranus", 2.87e14),
("Neptune", 4.50e14),
("Pluto", 5.90e14)
]
solar_system_T_eqs = dict()
for planet in planets:
solar_system_T_eqs[planet[0]] = equilibrium_temperature(const.L_sun.cgs.value,
planet[1])
# +
solar_system_T_eqs
# if you wanted to keep the solar system in order,
# you could have used collections.OrderedDict instead of dict
# -
# # Calculate T_eq for all exoplanets
import exoplanets
exoplanets.download_data()
data = exoplanets.parse_data()
print("column names of `data`: ")
data.dtype.names
luminosities = const.L_sun.cgs.value * 10**data["st_lum"]
distances = units.AU.to(units.cm) * data["pl_orbsmax"]
# +
plt.scatter(distances / units.AU.to(units.cm),
luminosities / const.L_sun.cgs.value)
plt.xscale("log")
plt.yscale("log")
plt.ylim(10**-3, 10**3)
plt.xlabel("Star-Planet distance [AU]")
plt.ylabel("Star luminosity [L_sun]")
# -
equilibrium_temperatures = equilibrium_temperature(luminosities, distances)
plt.hist(equilibrium_temperatures)
plt.xlabel("Equilibrium Temperature [K]")
plt.ylabel("Number of planets")
| day2/equilibrium_temperatures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="rbf6cSl0rdKM" colab_type="text"
# Let's set up a Pavlovian conditioning experiment.
#
# Agent A
#
# Sensory input
# * Vision: 10 bit vector. A specific sparse pattern indicates sight of food.
# * Taste: 10 bit vector. A specific sparse pattern indicates taste of food.
#
# Among uncorrelated sensory activity, the following pattern is hidden.
#
# * Taste of food -- time delay t1 --> reward
#
# Would this agent associate high reward prediction for Taste of food?
#
# Later, the following pattern is presented.
#
# * Sight of food --- time delay t0 --> Taste of food --- time delay t1 --> reward
#
# Would the agent now associate high reward prediction for Sight of food? Will it stop associating Taste with reward (as seen from animal studies)?
#
# ### New
# * Each modality gets a layer Lm
# * A cross-modal layer has neurons with receptive fields (RFs) from each Lm layer.
# * Exc and inh lateral connectivity
#
#
# + id="QhJarmg1Ucs6" colab_type="code" colab={}
import logging
import numpy as np
import os
import random
import torch
import torch.utils.data
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Bernoulli
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
# %matplotlib inline
plt.style.use('classic')
# + id="-3bYAT4erRbf" colab_type="code" outputId="edf0b4d2-d703-4234-e7e4-a576a3e11849" executionInfo={"status": "ok", "timestamp": 1563228078335, "user_tz": 420, "elapsed": 1573, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "07278259258766517376"}} colab={"base_uri": "https://localhost:8080/", "height": 887}
VISION_SIZE = 10
TASTE_SIZE = 5
REWARD_SIZE = 4
BASELINE_ACTIVATION = B = 0.3
HIGH_ACTIVATION = H = 0.8
class SpatialPattern:
def __init__(self, values):
self.poisson = Bernoulli(probs=values)
def sample(self, shape):
return self.poisson.sample(shape)
class TemporalPattern:
def __init__(self, config):
self.config = config
def sample(self):
all_samples = {}
for modality_name, modality_config in self.config.items():
modality_samples = torch.cat([row[2].sample([row[1] - row[0]]) for row in modality_config])
all_samples[modality_name] = modality_samples
return all_samples
# for i in range(len(self.durations)):
# all_modalities_patterns = self.spatial_patterns[i]
# duration = self.durations[i]
# samples = [torch.tensor(all_modalities_patterns[j].sample([duration])) for j in range(len(all_modalities_patterns))]
# samples = torch.cat(samples, 1)
# all_samples.append(samples)
# return torch.cat(all_samples, 0)
class SensoryModality:
def __init__(self, name, size, baseline_activation):
self.name = name
self.baseline_pattern = SpatialPattern(torch.ones(size) * baseline_activation)
def baseline_sample(self, shape):
return self.baseline_pattern.sample(shape)
def create_spatial_pattern(self, values):
return SpatialPattern(torch.tensor(values))
taste = SensoryModality("taste", TASTE_SIZE, BASELINE_ACTIVATION)
reward = SensoryModality("reward", REWARD_SIZE, BASELINE_ACTIVATION)
modalities = [taste, reward]
# durations are computed as end - start, so contiguous half-open [start, end)
# rows keep both modalities the same total length (900 steps)
experiment1 = TemporalPattern(
    {
        taste.name: [
            [0, 200, taste.create_spatial_pattern([B,B,B,H,H])],
            [200, 900, taste.baseline_pattern],
        ],
        reward.name: [
            [0, 250, reward.baseline_pattern],
            [250, 450, reward.create_spatial_pattern([H,H,H,H])],
            [450, 900, reward.baseline_pattern],
        ]
    }
)
data = experiment1.sample()
# print(data)
def trace(data, decay_rate=0.99):
trace = torch.ones(data.shape[-1]) * B
data_trace = [trace]
for i in range(data.shape[0]):
row = data[i]
trace = trace * decay_rate + row * (1 - decay_rate)
data_trace.append(trace)
data_trace = torch.stack(data_trace)
#print(data_trace)
return data_trace
data_trace = {}
for modality in modalities:
data_trace[modality.name] = trace(data[modality.name])
plt.plot(data_trace[modality.name].numpy())
plt.title(modality.name)
plt.show()
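# The trace above is an exponential moving average: each step keeps
# `decay_rate` of the running value and mixes in (1 - decay_rate) of the new
# input, so a constant input pulls the trace toward that constant:
# +
t, decay = 0.0, 0.9
for _ in range(100):
    t = t * decay + 1.0 * (1 - decay)
print(t)  # approaches 1.0
# -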
# + id="VceZ8dBu8j9V" colab_type="code" outputId="1eaea6ca-44c9-4b93-e628-1164ea09f005" executionInfo={"status": "ok", "timestamp": 1563228229193, "user_tz": 420, "elapsed": 668, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "07278259258766517376"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
class LocalEnsemble(nn.Module):
def __init__(self, input_size, ensemble_size):
super(LocalEnsemble, self).__init__()
self.logger = logging.getLogger(self.__class__.__name__)
self.logger.setLevel(logging.DEBUG)
self.input_size = input_size
self.ensemble = nn.Linear(input_size, ensemble_size)
self.initialize_sparse_weights(self.ensemble)
self.lateral_excitatory = nn.Linear(ensemble_size, ensemble_size)
self.initialize_sparse_weights(self.lateral_excitatory)
self.lateral_inhibitory = nn.Linear(ensemble_size, ensemble_size)
self.initialize_sparse_weights(self.lateral_inhibitory)
self.previous_activation = torch.zeros(ensemble_size)
def initialize_sparse_weights(self, layer):
layer.weight.data = F.dropout(layer.weight.data, 0.5)
self.normalize_weights(layer)
def normalize_weights(self, layer):
layer.weight.data[layer.weight.data < 0] = 0
print("layer.weight.data", layer.weight.data)
den = layer.weight.data.sum(dim=1)
den[den < 0.0001] = 1
layer.weight.data = layer.weight.data / den[:, None]
print("layer.weight.data2", layer.weight.data)
def forward(self, x):
afferent_activation = self.ensemble(x)
lateral_excitatory_activation = torch.tanh(self.lateral_excitatory(self.previous_activation))
lateral_inhibitory_activation = torch.tanh(self.lateral_inhibitory(self.previous_activation))
activation = torch.sigmoid(afferent_activation + lateral_excitatory_activation - lateral_inhibitory_activation)
self.previous_activation = activation
return activation
modality_models = {}
for modality in modalities:
dt = data_trace[modality.name]
modality_models[modality.name] = model = LocalEnsemble(dt.shape[-1], 10)
    model.forward(dt)  # warm-up passes to populate previous_activation
    model.forward(dt)
# + id="1i5fy94tucMF" colab_type="code" outputId="1bf2d12b-3c5f-4896-e4d2-6f9d2452a225" executionInfo={"status": "ok", "timestamp": 1563238676055, "user_tz": 420, "elapsed": 1629, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "07278259258766517376"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
reward_activations = None
for modality in modalities:
model = modality_models[modality.name]
activations = [model.forward(data_trace[modality.name]) for _ in range(5)]
activations = torch.stack(activations)
# trace(activation.detach())
activations = activations.view(-1, activations.shape[-1])
if modality.name == 'reward':
reward_activations = activations
#plt.plot(trace(activations.detach()).numpy())
plt.plot(activations.detach().numpy())
plt.title(modality.name)
plt.show()
total_reward_trace = reward_activations.sum(dim=1)
plt.plot(total_reward_trace.detach().numpy())
plt.show()
# + id="OSm7rZBIWXVe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="95b20df7-39b5-47d9-8316-5001d6a8136a" executionInfo={"status": "ok", "timestamp": 1563238101203, "user_tz": 420, "elapsed": 551, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "07278259258766517376"}}
# + id="Q4RUuEfHOSSK" colab_type="code" colab={}
| Attractor Learning/02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import random
import numpy as np
from rdkit import Chem, DataStructs
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats
from sklearn.svm import LinearSVC
from sklearn.svm import LinearSVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_score
import pickle
random.seed(2)
# -
# Here, we import our TMPRSS2 QSAR Dataset, Dark Chemical Matter Dataset, and Screening Library
# +
# collect dataset
assays = pd.read_pickle('../processed_data/combined_dataset.pkl')
assays = assays[assays.activity_target.isin(['Active', 'Inactive'])] # get rid of any 'Inconclusive'
dcm = pd.read_pickle('../processed_data/DarkChemicalMatter_processed.pkl.gz')
# testing data:
screening_data = pd.read_pickle('../processed_data/screening_data_processed.pkl')
# -
screening_data
# Here, we combine our assay data and dark chemical matter data, then perform an 80%/20% split into a training set and a testing/validation set.
# +
X_assays = np.stack(assays.morgan_fingerprint)
y_assays = np.ones(len(X_assays))
X_dcm = np.stack(dcm.morgan_fingerprint)
y_dcm = np.zeros(len(X_dcm))
X_combined = np.append(X_assays, X_dcm, axis = 0)
y_combined = np.append(y_assays, y_dcm)
X_train, X_test, y_train, y_test = train_test_split(X_combined, y_combined, test_size=0.2)
# -
# Here we use SKLearn GridSearch CV function to identify optimal C parameter for our preliminary SVM Classifier (trained on training set only)
Cs = np.logspace(-6, 2, 16)
clf = GridSearchCV(estimator=LinearSVC(random_state=0, tol=1e-5, max_iter = 10000, dual = False), param_grid=dict(C=Cs), n_jobs=-1)
clf.fit(X_train, y_train)
c_param_SVC_train = clf.best_estimator_.C
c_param_SVC_train
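# np.logspace(-6, 2, 16) lays the 16 candidate C values out evenly in
# log-space, from 1e-6 up to 1e2 inclusive:
import numpy as np

grid = np.logspace(-6, 2, 16)
print(grid[0], grid[-1])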
# Using the C parameter calculated above, we determine the Total Accuracy, False Positive Rate, False Negative Rate of our SVM Classifier
# +
SVM_validation = make_pipeline(StandardScaler(), LinearSVC(random_state=0, tol=1e-5, C=c_param_SVC_train, max_iter = 10000, dual = False))
SVM_validation.fit(X_train, y_train)
pred = SVM_validation.predict(X_test)
accuracy = np.sum(pred == y_test)/y_test.size
accuracy
# +
i = 0
false_positive = 0
total_positive = 0
false_negative = 0
total_negative = 0
while(i < len(pred)):
if(y_test[i] == 0):
total_negative += 1
if(pred[i] == 1):
false_positive += 1
elif(y_test[i] == 1):
total_positive += 1
if(pred[i] == 0):
false_negative += 1
i = i + 1
false_positive/total_negative  # false positive rate = FP / actual negatives
# -
false_negative/total_positive  # false negative rate = FN / actual positives
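# The same rates can be read off sklearn's confusion matrix; a toy sketch
# with made-up labels (not the notebook's predictions):
# +
import numpy as np
from sklearn.metrics import confusion_matrix

toy_true = np.array([0, 0, 1, 1])
toy_pred = np.array([0, 1, 1, 0])
tn, fp, fn, tp = confusion_matrix(toy_true, toy_pred).ravel()
fpr = fp / (fp + tn)  # false positive rate: FP over all actual negatives
fnr = fn / (fn + tp)  # false negative rate: FN over all actual positives
# -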
# Here, we use SKLearn GridSearch CV function to identify optimal C parameter for our full SVM Classifier (trained on training set and testing set)
Cs = np.logspace(-6, 2, 16)
clf = GridSearchCV(estimator=LinearSVC(random_state=0, tol=1e-5, max_iter = 10000, dual = False), param_grid=dict(C=Cs), n_jobs=-1)
clf.fit(X_combined, y_combined)
c_param_SVC_test = clf.best_estimator_.C
c_param_SVC_test
# Here, we use our full SVM Classifier to identify potentially-active compounds from our screening library
# +
SVM_testing = make_pipeline(StandardScaler(), LinearSVC(random_state=0, tol=1e-5, C=c_param_SVC_test, max_iter = 10000, dual = False))
SVM_testing.fit(X_combined, y_combined)
screening_compounds = np.stack(screening_data.morgan_fingerprint)
pred = SVM_testing.predict(screening_compounds)
screening_data['predictions'] = pred
inactiveCompounds = screening_data[(screening_data['predictions'] == 0)].index
active_screening_compounds = screening_data.drop(inactiveCompounds)
# -
len(active_screening_compounds)
# + tags=[]
#split training and testing data for each dataset, fill nan with acvalue_target
#y_assays_logKi = np.log10(assays.acvalue_scaled_to_tmprss2.fillna(assays.acvalue_target))
#train_X, test_X, train_y, test_y = train_test_split(X_assays, y_assays_logKi, test_size=0.2)
# -
# Next, we identify the subset of the training data for which Ki values can be scaled to TMPRSS2 for use in regression analysis. This data is split into a training set (80%) and a testing/validation set (20%).
# +
y_assays_logKi_raw = np.log10(assays.acvalue_scaled_to_tmprss2)
nan_array = np.isnan(y_assays_logKi_raw)
not_nan = ~nan_array
y_assays_logKi = y_assays_logKi_raw[not_nan]
X_assays = X_assays[not_nan]
train_X, test_X, train_y, test_y = train_test_split(X_assays, y_assays_logKi, test_size=0.2)
# -
# Next, we use scikit-learn's GridSearchCV to identify the optimal C parameter for our preliminary Support Vector Regressor (trained on the training set only)
# + tags=[]
# Use SKLearn GridSearch CV function to identify optimal C parameter for SVM regression (training set)
Cs = np.logspace(-6, 2, 16)
clf = GridSearchCV(estimator=LinearSVR(random_state=0, tol=1e-5, max_iter = 10000, dual = True), param_grid=dict(C=Cs), n_jobs=-1)
clf.fit(train_X, train_y)
c_param_SVR_test = clf.best_estimator_.C
# -
c_param_SVR_test
# Using the C parameter found above, we calculate the RMSE of our regressor and the correlation coefficient between our predicted and ground-truth values.
# + tags=[]
#Run SVM regression using SKLearn on test set. Linear regression for prediction accuracy
svmReg = make_pipeline(StandardScaler(), LinearSVR(random_state=0, tol=1e-5, C=c_param_SVR_test, max_iter = 10000, dual = True))
svmReg.fit(train_X, train_y)
pred = svmReg.predict(test_X)
MSE = mean_squared_error(test_y, pred)
RMSE = np.sqrt(MSE)
print("SVR RMSE:{}".format(RMSE))
plt.scatter(test_y, pred)
plt.xlabel('log10(Actual Ki), μM')
plt.ylabel('log10(Predicted Ki), μM')
plt.title('SVM Validation Data')
corr = scipy.stats.pearsonr(test_y, pred)
print(corr)
# -
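# RMSE and the Pearson correlation reported above can be cross-checked by hand. A quick numpy sketch on illustrative numbers (not the assay data):

```python
import numpy as np

actual = np.array([1.0, 2.0, 3.0, 4.0])
predicted = np.array([1.5, 1.5, 3.5, 4.5])

# RMSE: square root of the mean squared residual
rmse = np.sqrt(np.mean((actual - predicted) ** 2))

# Pearson r: covariance of the two series over the product of their stds
r = np.corrcoef(actual, predicted)[0, 1]
print(rmse)  # 0.5
```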
# Next, we use scikit-learn's GridSearchCV to identify the optimal C parameter for our full Support Vector Regressor (trained on the training set and the testing set)
# + tags=[]
#SKLearn C parameter optimization
Cs = np.logspace(-6, 2, 16)
clf_full = GridSearchCV(estimator=LinearSVR(random_state=0, tol=1e-5, max_iter = 10000, dual = True), param_grid=dict(C=Cs), n_jobs=-1)
clf_full.fit(X_assays, y_assays_logKi)
c_param_full = clf_full.best_estimator_.C
# -
c_param_full
# Finally, using this C parameter, we screen the active compounds identified by our SVM classifier to find the compounds that are predicted to bind most effectively to TMPRSS2.
# +
#Run regressor (trained on full dataset)
test_compounds = np.stack(active_screening_compounds.morgan_fingerprint)
svmReg_full = make_pipeline(StandardScaler(), LinearSVR(random_state=0, tol=1e-5, C=c_param_full, max_iter = 10000, dual = True))
svmReg_full.fit(X_assays, y_assays_logKi)
pred_values = svmReg_full.predict(test_compounds)
# -
#identify top hits
active_screening_compounds['pred_value'] = pred_values
active_screening_compounds.sort_values(by='pred_value').head(20)
plt.hist(active_screening_compounds.pred_value, bins = 20)
plt.xlabel('log10(Predicted Ki of test compound), μM')
plt.ylabel('Abundance of Compounds in Bin')
plt.title('Predicted Ki Values of Potentially-Active Compounds')
# Here, we save raw results, as well as our results with duplicates removed
active_screening_compounds_sorted = active_screening_compounds.sort_values(by='pred_value')
active_screening_compounds_sorted['RMSE'] = RMSE
active_screening_compounds_sorted.drop(columns=['morgan_fingerprint', 'predictions'], inplace=True)
active_screening_compounds_sorted.to_csv('../results/svm_screening_results_raw.csv')
active_screening_compounds_sorted["name"] = active_screening_compounds_sorted["name"].str.lower()
active_screening_compounds_sorted.drop_duplicates(subset=['name'], keep='first', inplace=True)
active_screening_compounds_sorted.to_csv('../results/svm_screening_results_no_duplicate_names.csv')
| notebooks/TMPRSS2_SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# <small><i>This notebook was prepared by [<NAME>](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).</i></small>
# # Spark
#
# * IPython Notebook Setup
# * Python Shell
# * RDDs
# * Pair RDDs
# * Running Spark on a Cluster
# * Viewing the Spark Application UI
# * Working with Partitions
# * Caching RDDs
# * Checkpointing RDDs
# * Writing and Running a Spark Application
# * Configuring Spark Applications
# * Streaming
# * Streaming with States
# * Broadcast Variables
# * Accumulators
# ## IPython Notebook Setup
# Follow the instructions provided [here](http://ramhiser.com/2015/02/01/configuring-ipython-notebook-support-for-pyspark/) to configure IPython Notebook Support for PySpark with Python 2.
#
# To run Python 3 with Spark 1.4+, check out the following posts on [Stack Overflow](http://stackoverflow.com/questions/30279783/apache-spark-how-to-use-pyspark-with-python-3) or [Reddit](http://www.reddit.com/r/datascience/comments/3ar1bd/continually_updated_data_science_python_notebooks/).
# ## Python Shell
# Start the pyspark shell (REPL):
# !pyspark
# View the spark context, the main entry point to the Spark API:
sc
# ## RDDs
#
# Resilient Distributed Datasets (RDDs) are the fundamental unit of data in Spark. RDDs can be created from a file, from data in memory, or from another RDD. RDDs are immutable.
#
# There are two types of RDD operations:
# * Actions: return values; data in an RDD is not processed until an action is performed
# * Transformations: define a new RDD based on the current one
#
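# Outside Spark, the action/transformation split can be mimicked with Python generators, which are also lazy until consumed (a local analogy only, not Spark code):

```python
# "Transformations": build a lazy pipeline, nothing is computed yet
lines = (l for l in ["a.txt", "b.log", "c.txt"])
txt_lines = (l for l in lines if ".txt" in l)

# "Action": consuming the generator finally triggers the work
result = list(txt_lines)
print(result)  # ['a.txt', 'c.txt']
```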
# Create an RDD from the contents of a directory:
my_data = sc.textFile("file:/path/*")
# Count the number of lines in the data:
my_data.count()
# Return all the elements of the dataset as an array--this is usually more useful after a filter or other operation that returns a sufficiently small subset of the data:
my_data.collect()
# Return the first 10 lines in the data:
my_data.take(10)
# Create an RDD with lines matching the given filter:
my_data.filter(lambda line: ".txt" in line)
# Chain a series of commands:
sc.textFile("file:/path/file.txt") \
.filter(lambda line: ".txt" in line) \
.count()
# Create a new RDD mapping each line to an array of words, taking only the first word of each array:
first_words = my_data.map(lambda line: line.split()[0])
# Output each word in first_words:
for word in first_words.take(10):
    print word
# Save the first words to a text file:
first_words.saveAsTextFile("file:/path/file")
# ## Pair RDDs
#
# Pair RDDs contain elements that are key-value pairs. Keys and values can be any type.
# Given a log file with the following space delimited format: [date_time, user_id, ip_address, action], map each request to (user_id, 1):
# +
DATE_TIME = 0
USER_ID = 1
IP_ADDRESS = 2
ACTION = 3
log_data = sc.textFile("file:/path/*")
user_actions = log_data \
.map(lambda line: line.split()) \
.map(lambda words: (words[USER_ID], 1)) \
.reduceByKey(lambda count1, count2: count1 + count2)
# -
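# What the map/reduceByKey pair above computes can be checked locally with plain Python on a few made-up log lines (a sketch assuming the same space-delimited format, not Spark code):

```python
from collections import Counter

USER_ID = 1
log_lines = [
    "2015-01-01T10:00 u1 1.2.3.4 view",
    "2015-01-01T10:01 u2 1.2.3.5 view",
    "2015-01-01T10:02 u1 1.2.3.4 click",
]

# Same per-key sum that reduceByKey performs, done eagerly
counts = Counter(line.split()[USER_ID] for line in log_lines)
print(dict(counts))  # {'u1': 2, 'u2': 1}
```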
# Show the top 5 users by count, sorted in descending order:
user_actions.map(lambda pair: (pair[1], pair[0])).sortByKey(False).take(5)
# Group IP addresses by user id:
user_ips = log_data \
.map(lambda line: line.split()) \
.map(lambda words: (words[IP_ADDRESS],words[USER_ID])) \
.groupByKey()
# Given a user table with the following csv format: [user_id, user_info0, user_info1, ...], map each line to (user_id, [user_info...]):
# +
user_data = sc.textFile("file:/path/*")
user_profile = user_data \
.map(lambda line: line.split(',')) \
.map(lambda words: (words[0], words[1:]))
# -
# Inner join the user_actions and user_profile RDDs:
user_actions_with_profile = user_actions.join(user_profile)
# Show the joined table:
for (user_id, (count, user_info)) in user_actions_with_profile.take(10):
    print user_id, count, user_info
# ## Running Spark on a Cluster
# Start the standalone cluster's Master and Worker daemons:
# !sudo service spark-master start
# !sudo service spark-worker start
# Stop the standalone cluster's Master and Worker daemons:
# !sudo service spark-master stop
# !sudo service spark-worker stop
# Restart the standalone cluster's Master and Worker daemons:
# !sudo service spark-master restart
# !sudo service spark-worker restart
# View the Spark standalone cluster UI:
http://localhost:18080/
# Start the Spark shell and connect to the cluster:
# !MASTER=spark://localhost:7077 pyspark
# Confirm you are connected to the correct master:
sc.master
# ## Viewing the Spark Application UI
# From the following [reference](http://spark.apache.org/docs/1.2.0/monitoring.html):
#
# Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:
#
# * A list of scheduler stages and tasks
# * A summary of RDD sizes and memory usage
# * Environmental information
# * Information about the running executors
#
# You can access this interface by simply opening http://<driver-node>:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc).
#
# Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set spark.eventLog.enabled to true before starting the application. This configures Spark to log Spark events that encode the information displayed in the UI to persisted storage.
http://localhost:4040/
# ## Working with Partitions
# From the following [reference](http://blog.cloudera.com/blog/2014/09/how-to-translate-from-mapreduce-to-apache-spark/):
#
# The Spark map() and flatMap() methods only operate on one input at a time, and provide no means to execute code before or after transforming a batch of values. It looks possible to simply put the setup and cleanup code before and after a call to map() in Spark:
val dbConnection = ...
lines.map(... dbConnection.createStatement(...) ...)
dbConnection.close() // Wrong!
# However, this fails for several reasons:
#
# * It puts the object dbConnection into the map function’s closure, which requires that it be serializable (for example, by implementing java.io.Serializable). An object like a database connection is generally not serializable.
# * map() is a transformation, rather than an action, and is lazily evaluated. The connection can’t be closed immediately here.
# * Even so, it would only close the connection on the driver, not necessarily freeing resources allocated by serialized copies.
# In fact, neither map() nor flatMap() is the closest counterpart to a Mapper in Spark — it’s the important mapPartitions() method. This method does not map just one value to one other value, but rather maps an Iterator of values to an Iterator of other values. It’s like a “bulk map” method. This means that the mapPartitions() function can allocate resources locally at its start, and release them when done mapping many values.
# +
def count_txt(partIter):
    txt_count = 0
    for line in partIter:
        if ".txt" in line: txt_count += 1
    yield (txt_count)

my_rdd = sc.textFile("file:/path/*") \
    .mapPartitions(count_txt)
my_data = my_rdd.collect()

# Show the partitioning (toDebugString is an RDD method, so call it before collect)
print "Data partitions: ", my_rdd.toDebugString()
# -
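# The per-partition counting pattern can be mimicked locally by treating each inner list as one partition (plain Python, just to show what a count_txt-style generator yields per partition):

```python
def count_txt(part_iter):
    # One counter per partition, yielded once the partition is exhausted
    txt_count = 0
    for line in part_iter:
        if ".txt" in line:
            txt_count += 1
    yield txt_count

# Two fake "partitions"
partitions = [["a.txt", "b.log"], ["c.txt", "d.txt"]]
per_partition_counts = [n for part in partitions for n in count_txt(iter(part))]
print(per_partition_counts)  # [1, 2]
```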
# ## Caching RDDs
# Caching an RDD saves the data in memory. Caching is a suggestion to Spark as it is memory dependent.
#
# By default, every RDD operation executes the entire lineage. Caching can boost performance for datasets that are likely to be re-used, by saving this expensive recomputation; it is ideal for iterative algorithms and machine learning.
#
# * cache() stores data in memory
# * persist() stores data in MEMORY_ONLY, MEMORY_AND_DISK (spill to disk), and DISK_ONLY
#
# Disk memory is stored on the node, not on HDFS.
#
# Replication is possible by using MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. If a cached partition becomes unavailable, Spark recomputes the partition through the lineage.
#
# Serialization is possible with MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This is more space efficient but less time efficient, as it uses Java serialization by default.
# +
from pyspark import StorageLevel

# Cache RDD to memory
my_data.cache()
# Persist RDD to both memory and disk (if memory is not enough), with replication of 2
my_data.persist(StorageLevel.MEMORY_AND_DISK_2)
# Unpersist RDD, removing it from memory and disk
my_data.unpersist()
# Change the persistence level after unpersist
my_data.persist(StorageLevel.MEMORY_AND_DISK)
# -
# ## Checkpointing RDDs
#
# Caching maintains RDD lineage, providing resilience. If the lineage is very long, it is possible to get a stack overflow.
#
# Checkpointing saves the data to HDFS, which provides fault-tolerant storage across nodes. HDFS is not as fast as local storage for both reading and writing. Checkpointing is good for long lineages and for very large data sets that might not fit on local storage. Checkpointing removes lineage.
# Create a checkpoint and perform an action by calling count() to materialize the checkpoint and save it to the checkpoint file:
# +
# Enable checkpointing by setting the checkpoint directory,
# which will contain all checkpoints for the given data:
sc.setCheckpointDir("checkpoints")
my_data = sc.parallelize([1,2,3,4,5])
# Long loop that may cause a stack overflow
for i in range(1000):
    my_data = my_data.map(lambda myInt: myInt + 1)
    if i % 10 == 0:
        my_data.checkpoint()
        my_data.count()
my_data.collect()
# Display the lineage
for rddstring in my_data.toDebugString().split('\n'):
    print rddstring.strip()
# -
# ## Writing and Running a Spark Application
# Create a Spark application to count the number of text files:
# +
import sys
from pyspark import SparkContext
def count_text_files():
    sc = SparkContext()
    logfile = sys.argv[1]
    text_files_count = sc.textFile(logfile) \
        .filter(lambda line: '.txt' in line)
    text_files_count.cache()
    print("Number of text files: ", text_files_count.count())

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print >> sys.stderr, "Usage: App Name <file>"
        exit(-1)
    count_text_files()
# -
# Submit the script to Spark for processing:
# !spark-submit --properties-file dir/myspark.conf script.py data/*
# ## Configuring Spark Applications
# Run a Spark app and set the configuration options in the command line:
# !spark-submit --master spark://localhost:7077 --name 'App Name' script.py data/*
# Configure spark.conf:
spark.app.name App Name
spark.ui.port 4141
spark.master spark://localhost:7077
# Run a Spark app and set the configuration options through spark.conf:
# !spark-submit --properties-file spark.conf script.py data/*
# Set the config options programmatically:
sconf = SparkConf() \
.setAppName("Word Count") \
.set("spark.ui.port","4141")
sc = SparkContext(conf=sconf)
# Set logging levels located in the following file, or place a copy in your pwd:
$SPARK_HOME/conf/log4j.properties.template
# ## Streaming
# Start the Spark Shell locally with at least two threads (need a minimum of two threads for streaming, one for receiving, one for processing):
# !spark-shell --master local[2]
# Create a StreamingContext (similar to SparkContext in core Spark) with a batch duration of 1 second:
val ssc = new StreamingContext(new SparkConf(), Seconds(1))
val my_stream = ssc.socketTextStream(hostname, port)
# Get a DStream from a streaming data source (text from a socket):
val logs = ssc.socketTextStream(hostname, port)
# DStreams support regular transformations such as map, flatMap, and filter, and pair transformations such as reduceByKey, groupByKey, and joinByKey.
#
# Apply a DStream operation to each batch of RDDs (count up requests by user id, reduce by key to get the count):
val requests = my_stream
.map(line => (line.split(" ")(2), 1))
.reduceByKey((x, y) => x + y)
# The transform(function) method creates a new DStream by executing the input function on the RDDs.
val sorted_requests = requests
.map(pair => pair.swap)
.transform(rdd => rdd.sortByKey(false))
# foreachRDD(function) performs a function on each RDD in the DStream (map is like a shortcut not requiring you to get the RDD first before doing an operation):
sorted_requests.foreachRDD((rdd, time) => {
    println("Top users @ " + time)
    rdd.take(5).foreach(
        pair => printf("User: %s (%s)\n", pair._2, pair._1))
})
# Save the DStream result part files with the given folder prefix, the actual folder will be /dir/requests-timestamp0/:
requests.saveAsTextFiles("/dir/requests")
# Start the execution of all DStreams:
ssc.start()
# Wait for all background threads to complete before ending the main thread:
ssc.awaitTermination()
# ## Streaming with States
# Enable checkpointing to prevent infinite lineages:
ssc.checkpoint("dir")
# Compute a DStream based on the previous states plus the current state:
# +
def updateCount = (newCounts: Seq[Int], state: Option[Int]) => {
    val newCount = newCounts.foldLeft(0)(_ + _)
    val previousCount = state.getOrElse(0)
    Some(newCount + previousCount)
}
val totalUserReqs = requests.updateStateByKey(updateCount)
# -
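# The stateful update can be imitated locally with a dict of running totals: each batch folds its new counts into the previous state (plain Python illustrating the fold, not Spark Streaming):

```python
def update_count(new_counts, state):
    # Mirrors updateCount above: sum the batch, add the previous total
    return sum(new_counts) + (state if state is not None else 0)

state = {}
batches = [{"u1": [1, 1], "u2": [1]}, {"u1": [1]}]
for batch in batches:
    for user, counts in batch.items():
        state[user] = update_count(counts, state.get(user))
print(state)  # {'u1': 3, 'u2': 1}
```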
# Compute a DStream based Sliding window, every 30 seconds, count requests by user over the last 5 minutes:
val reqcountsByWindow = logs.map(line => (line.split(' ')(2), 1))
.reduceByKeyAndWindow((x: Int, y: Int) => x + y, Minutes(5), Seconds(30))
# Collect statistics with the StreamingListener API:
# +
// define listener
class MyListener extends StreamingListener {
    override def onReceiverStopped(...) {
        streamingContext.stop()
    }
}
// attach listener
streamingContext.addStreamingListener(new MyListener())
# -
# ## Broadcast Variables
# Read in list of items to broadcast from a local file:
broadcast_file = "broadcast.txt"
broadcast_list = list(map(lambda l: l.strip(), open(broadcast_file)))
# Broadcast the target list to all workers:
broadcast_list_sc = sc.broadcast(broadcast_list)
# Filter based on the broadcast list:
# +
log_file = "hdfs://localhost/user/logs/*"
filtered_data = sc.textFile(log_file)\
.filter(lambda line: any(item in line for item in broadcast_list_sc.value))
filtered_data.take(10)
# -
# ## Accumulators
# Create an accumulator:
txt_count = sc.accumulator(0)
# Count the number of txt files in the RDD:
my_data = sc.textFile(filePath)
my_data.foreach(lambda line: txt_count.add(1) if '.txt' in line else None)
# Count the number of file types encountered:
# +
jpg_count = sc.accumulator(0)
html_count = sc.accumulator(0)
css_count = sc.accumulator(0)
def countFileType(s):
    if '.jpg' in s: jpg_count.add(1)
    elif '.html' in s: html_count.add(1)
    elif '.css' in s: css_count.add(1)
filename="hdfs://logs/*"
logs = sc.textFile(filename)
logs.foreach(lambda line: countFileType(line))
print 'File Type Totals:'
print '.css files: ', css_count.value
print '.html files: ', html_count.value
print '.jpg files: ', jpg_count.value
| spark/spark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib widget
import os
import sys
sys.path.insert(0, os.getenv('HOME')+'/pycode/MscThesis/')
import pandas as pd
from amftrack.util import get_dates_datetime, get_dirname, get_plate_number, get_postion_number,get_begin_index
import ast
from amftrack.plotutil import plot_t_tp1
from scipy import sparse
from datetime import datetime
from amftrack.pipeline.functions.node_id import orient
import pickle
import scipy.io as sio
from pymatreader import read_mat
from matplotlib import colors
import cv2
import imageio
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
from skimage.filters import frangi
from skimage import filters
from random import choice
import scipy.sparse
import os
from amftrack.pipeline.functions.extract_graph import from_sparse_to_graph, generate_nx_graph, sparse_to_doc
from skimage.feature import hessian_matrix_det
from amftrack.pipeline.functions.experiment_class_surf import Experiment, Edge, Node, plot_raw_plus
from amftrack.pipeline.paths.directory import run_parallel, find_state, directory_scratch, directory_project
from amftrack.notebooks.analysis.util import *
from scipy import stats
from scipy.ndimage.filters import uniform_filter1d
from statsmodels.stats import weightstats as stests
# -
# ***Three ways of loading a plate***
# 1. After hyphae have been extracted (and the network has been cleaned). In that case the instance name is needed: refer to `amftrack/notebooks/analysis/data_info.py`, where all analysed instances are referenced. Use the `get_exp` function with arguments corresponding to the instance and the directory where you expect to find the analysed plate (most often `directory_project`). If you ask me I can also analyse a plate for you.
# 2. Before hyphae are extracted but after node identification. In that case, manually choose the dates that you want to load using `get_dates_datetime`, selecting the right begin and end depending on which dates you need. Then create an experiment instance and load the dates using the `.load()` method.
# 3. If you don't care about the labelling of the nodes you can follow the same procedure but setting the labeled flag in the `.load()` method to `False`.
# Method 1
directory = directory_project
# Method 2 and 3, find the dates of interest.
plate_number = 40
i,date = get_begin_index(plate_number,directory)
plate_number = 40
plate = get_postion_number(plate_number)
# plate = 3
print(plate)
# directory = directory_scratch
directory = directory_project
listdir = os.listdir(directory)
list_dir_interest = [name for name in listdir if name.split('_')[-1]==f'Plate{0 if plate<10 else ""}{plate}']
dates_datetime = get_dates_datetime(directory,plate)
len(list_dir_interest)
get_dirname(dates_datetime[60], plate)
plate = get_postion_number(plate_number)
begin = i + 104
end = i + 104
dates_datetime = get_dates_datetime(directory,plate)
dates = dates_datetime[begin:end+1]
print(dates[0],dates[-1])
# exp = get_exp((9,0,11),directory)
exp = Experiment(plate,directory)
exp.load(dates) #for method 2
# exp.load(dates, labeled= False) # for method 3
# ***Load the skeletons for visualisation purposes***
#
# This may take some time, go grab a coffee
exp.load_compressed_skel()
# ***Let's look at the network***
exp.plot_raw(0)
nodes = [node.label for node in exp.nodes]
times = [0]
exp.plot(times,[nodes]*len(times))
plot_raw_plus(exp,0,nodes)
node = Node(113,exp)
node.show_source_image(0,1)
begin = Node(115,exp)
end = Node(110,exp)
edge = Edge(begin,end,exp)
# edge.get_length_um(0)
def get_length_um(edge, t):
    pixel_conversion_factor = 1.725
    length_edge = 0
    pixels = edge.pixel_list(t)
    for i in range(len(pixels) // 10 + 1):
        if i * 10 <= len(pixels) - 1:
            length_edge += np.linalg.norm(
                np.array(pixels[i * 10])
                - np.array(pixels[min((i + 1) * 10, len(pixels) - 1)])
            )
    # length_edge += np.linalg.norm(np.array(pixels[len(pixels)//10-1*10-1]) - np.array(pixels[-1]))
    return length_edge * pixel_conversion_factor
get_length_um(edge,0)
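# On a straight horizontal run of pixels, the every-10th-pixel sampling above should recover the full Euclidean length times the 1.725 conversion factor. A standalone toy check (copying the sampling logic rather than importing the notebook's classes):

```python
import numpy as np

def approx_length_um(pixels, pixel_conversion_factor=1.725):
    # Sum distances between every 10th pixel, same scheme as get_length_um
    length = 0.0
    for i in range(len(pixels) // 10 + 1):
        if i * 10 <= len(pixels) - 1:
            length += np.linalg.norm(
                np.array(pixels[i * 10])
                - np.array(pixels[min((i + 1) * 10, len(pixels) - 1)])
            )
    return length * pixel_conversion_factor

straight_line = [(x, 0) for x in range(31)]  # 30 pixel-lengths long
print(approx_length_um(straight_line))  # 51.75
```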
edge.width(0)
nx.shortest_path(exp.nx_graph[0],113,100)
| amftrack/notebooks/Philippe/Demo_Philippe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regression and Other Stories: Height and weight
# Predict weight from height. See Chapters 3, 9 and 10 in Regression and Other Stories.
import arviz as az
from bambi import Model
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
np.random.seed(0)
earnings = pd.read_csv("https://raw.githubusercontent.com/avehtari/ROS-Examples/master/Earnings/data/earnings.csv")
earnings.head()
# TODO: Figure out what stan_glm does with na
na_filter = earnings["weight"].notnull()
model = Model(earnings[na_filter])
results = model.fit('weight ~ height', samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(results, stat_funcs=func_dict, extend=False, round_to=2)
coefs
a_hat = coefs.loc["Intercept[0]", "Median"]
b_hat = coefs.loc["height[0]", "Median"]
predicted_1 = a_hat + b_hat*66
np.round(predicted_1, 2)
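# The point prediction is just the fitted line evaluated at height 66. The same intercept/slope/prediction arithmetic can be sanity-checked with ordinary least squares on toy numbers (numpy only; illustrative data, not the earnings survey):

```python
import numpy as np

heights = np.array([60.0, 64.0, 66.0, 70.0, 74.0])
weights = np.array([120.0, 140.0, 150.0, 170.0, 190.0])

# Fit weight = intercept + slope * height by least squares
slope, intercept = np.polyfit(heights, weights, 1)
pred_at_66 = intercept + slope * 66
print(round(pred_at_66, 2))  # 150.0
```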
# # TODO: Fill in posterior predictive of predict
# ### Center Heights
earnings["c_height"] = earnings["height"] - 66
model = Model(earnings[na_filter])
fit_2 = model.fit('weight ~ c_height', samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(fit_2, stat_funcs=func_dict, extend=False, round_to=2)
coefs
a_hat = coefs.loc["Intercept[0]", "Median"]
b_hat = coefs.loc["c_height[0]", "Median"]
predicted_1 = a_hat + b_hat*4
np.round(predicted_1, 2)
# ### Posterior Simulations
# ## Indicator Variables
# ### Predict weight (in pounds) from height (in inches)
# +
# TODO: Add string here
# -
# ### Including a binary variable in a regression
earnings["c_height"] = earnings["height"] - 66
model = Model(earnings[na_filter])
fit_3 = model.fit('weight ~ c_height + male', samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(fit_3, stat_funcs=func_dict, extend=False, round_to=2)
coefs
# +
a_hat = coefs.loc["Intercept[0]", "Median"]
b_hat_1 = coefs.loc["c_height[0]", "Median"]
b_hat_2 = coefs.loc["male[0]", "Median"]
predicted_1 = a_hat + b_hat_1*4
np.round(predicted_1, 2)
# -
# ### Using indicator variables for multiple levels of a categorical predictor
# In patsy a categorical factor is wrapped in `C()` (which can also take a contrast), hence the C
earnings["c_height"] = earnings["height"] - 66
model = Model(earnings[na_filter])
fit_4 = model.fit('weight ~ c_height + male + C(ethnicity)', samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(fit_4, stat_funcs=func_dict, extend=False, round_to=2)
coefs
# ### Choose the baseline category by setting the levels
model = Model(earnings[na_filter])
fit_5 = model.fit("weight ~ c_height + male + C(ethnicity, Treatment(reference='White'))", samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(fit_5, stat_funcs=func_dict, extend=False, round_to=2)
coefs
# #### Alternatively create indicators for the four ethnic groups directly
# The `pd.get_dummies` method is very handy here.
earnings_dummies = pd.get_dummies(earnings, prefix="eth", columns=["ethnicity"])
earnings_dummies.head()
model = Model(earnings_dummies[na_filter])
fit_6 = model.fit("weight ~ c_height + male + eth_Black + eth_Hispanic + eth_Other", samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(fit_6, stat_funcs=func_dict, extend=False, round_to=2)
coefs
| ROS/Earnings/height_and_weight.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### Load the training data
# The network needs a big txt file as an input.
# The content of the file will be used to train the network.
data = open('kafka.txt', 'r').read()
## this will return a set of unique chars
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print 'data has %d chars, %d unique' % (data_size, vocab_size)
# ### Encode/Decode char/vector
# Neural networks operate on vectors (a vector is an array of floats), so we need a way to encode and decode a char as a vector.
# We'll count the number of unique chars (vocab_size). That will be the size of the vector. The vector contains only zeros, except at the position of the char, where the value is 1.
## these are 2 dicts to convert a char to int and int to char
char_to_ix = { ch:i for i,ch in enumerate(chars)}
ix_to_char = { i:ch for i, ch in enumerate(chars)}
print char_to_ix
print ix_to_char
# The dictionaries defined above allow us to create a vector of size 61 instead of 256.
# Here is an example for the char 'a'.
# The vector contains only zeros, except at position char_to_ix['a'] where we put a 1.
# +
import numpy as np
vector_for_char_a = np.zeros((vocab_size, 1))
vector_for_char_a[char_to_ix['a']] = 1
print vector_for_char_a.ravel()
# +
#model parameters
hidden_size = 100
seq_length = 25
learning_rate = 1e-1
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01 #input to hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01 #hidden to hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01 #hidden to output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))
# -
# The parameters are:
# * Wxh are parameters to connect a vector that contain one input to the hidden layer.
# * Whh are parameters to connect the hidden layer to itself. This is the Key of the Rnn: Recursion is done by injecting the previous values from the output of the hidden state, to itself at the next iteration.
# * Why are parameters to connect the hidden layer to the output
# * bh contains the hidden bias
# * by contains the output bias
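# The recursion through Whh boils down to one tanh step per character. A minimal sketch of a single forward step with tiny random matrices (shapes as defined above; standalone numpy, written in Python 3 syntax):

```python
import numpy as np

rng = np.random.RandomState(0)
vocab_size, hidden_size = 4, 3

Wxh = rng.randn(hidden_size, vocab_size) * 0.01  # input to hidden
Whh = rng.randn(hidden_size, hidden_size) * 0.01  # hidden to hidden
Why = rng.randn(vocab_size, hidden_size) * 0.01  # hidden to output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))

x = np.zeros((vocab_size, 1))
x[2] = 1                             # one-hot encoded input char
h_prev = np.zeros((hidden_size, 1))  # previous hidden state

h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h_prev) + bh)  # new hidden state
y = np.dot(Why, h) + by                                 # unnormalized scores
p = np.exp(y) / np.sum(np.exp(y))                       # next-char probabilities

print(h.shape, p.shape)  # (3, 1) (4, 1)
```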
# #### Define the loss function
# The loss is a key concept in all neural network training. It is a value that describes how good our model is.
# The smaller the loss, the better our model is.
# (A good model is a model where the predicted output is close to the training output)
# During the training phase we want to minimize the loss.
# The loss function calculates the loss but also the gradients (by backward pass):
# * It performs a forward pass: calculate the next char given a char from the training set.
# * It calculates the loss by comparing the predicted char to the target char. (The target char is the char following the input char in the training set)
# * It performs a backward pass to calculate the gradients
#
# This function takes as input:
# * a list of input char
# * a list of target char
# * and the previous hidden state
#
# This function outputs:
# * the loss
# * the gradients for each set of parameters between layers
# * the last hidden state
def lossFun(inputs, targets, hprev):
"""
inputs,targets are both list of integers.
hprev is Hx1 array of initial hidden state
returns the loss, gradients on model parameters, and last hidden state
"""
#stores our inputs, hidden states, outputs, and probability values
xs, hs, ys, ps, = {}, {}, {}, {} #Empty dicts
# Each of these are going to be SEQ_LENGTH(Here 25) long dicts i.e. 1 vector per time(seq) step
# xs will store 1 hot encoded input characters for each of 25 time steps (26, 25 times)
# hs will store hidden state outputs for 25 time steps (100, 25 times)) plus a -1 indexed initial state
# to calculate the hidden state at t = 0
# ys will store targets i.e. expected outputs for 25 times (26, 25 times), unnormalized probabs
# ps will take the ys and convert them to normalized probab for chars
# We could have used lists BUT we need an entry with -1 to calc the 0th hidden layer
# -1 as a list index would wrap around to the final element
xs, hs, ys, ps = {}, {}, {}, {}
#init with previous hidden state
# Using "=" would create a reference, this creates a whole separate copy
# We don't want hs[-1] to automatically change if hprev is changed
hs[-1] = np.copy(hprev)
#init loss as 0
loss = 0
# forward pass
for t in xrange(len(inputs)):
xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation (we place a 0 vector as the t-th input)
xs[t][inputs[t]] = 1 # Inside that t-th input we use the integer in "inputs" list to set the correct
hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
# backward pass: compute gradients going backwards
#initalize vectors for gradient values for each set of weights
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(hs[0])
    for t in reversed(range(len(inputs))):
#output probabilities
dy = np.copy(ps[t])
#derive our first gradient
dy[targets[t]] -= 1 # backprop into y
#compute output gradient - output times hidden states transpose
#When we apply the transpose weight matrix,
#we can think intuitively of this as moving the error backward
#through the network, giving us some sort of measure of the error
#at the output of the lth layer.
#output gradient
dWhy += np.dot(dy, hs[t].T)
#derivative of output bias
dby += dy
#backpropagate!
dh = np.dot(Why.T, dy) + dhnext # backprop into h
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
dbh += dhraw #derivative of hidden bias
dWxh += np.dot(dhraw, xs[t].T) #derivative of input to hidden layer weight
dWhh += np.dot(dhraw, hs[t-1].T) #derivative of hidden layer to hidden layer weight
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]
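# As a sanity check, here is a self-contained sketch of the forward half of `lossFun` on hypothetical tiny parameters (the sizes, the seed, and the helper name `loss_fun_forward` are made up for illustration). With freshly initialized near-zero weights, the softmax is nearly uniform, so the initial loss should sit close to `len(inputs) * ln(vocab_size)`:

```python
import numpy as np

np.random.seed(0)
vocab_size, hidden_size = 5, 8          # hypothetical tiny sizes
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))

def loss_fun_forward(inputs, targets, hprev):
    """Forward pass only: returns the loss and the last hidden state."""
    xs, hs, ps = {}, {-1: np.copy(hprev)}, {}
    loss = 0.0
    for t in range(len(inputs)):               # Python 3: range, not xrange
        xs[t] = np.zeros((vocab_size, 1))
        xs[t][inputs[t]] = 1                   # one-hot input
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)
        ys = Why @ hs[t] + by                  # unnormalized scores
        ps[t] = np.exp(ys) / np.sum(np.exp(ys))  # softmax
        loss += -np.log(ps[t][targets[t], 0])  # cross-entropy
    return loss, hs[len(inputs) - 1]

inputs, targets = [0, 1, 2, 3], [1, 2, 3, 4]   # "predict the next character"
loss, h_last = loss_fun_forward(inputs, targets, np.zeros((hidden_size, 1)))
# With near-zero weights the loss is close to 4 * ln(5) ≈ 6.44
print(loss)
```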
# File: NeuralNets/RecurrentNeuralNet.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Feature Engineering
#
# In this notebook we test out several feature engineering techniques. In particular, we will try out the following:
#
# 1. Feature Selection
# 2. Row statistics (static features)
# 3. TargetEncoding
# 4. KMeans Clustering
#
# In each case we will compare it with the baseline LightGBM model and score it using cross-validation. For each technique we use the following parameters:
#
# * `n_estimators = 10000` with `early_stopping_rounds = 150`
# * `learning_rate = 0.03`
# * `random_state = 0` to ensure reproducible results
# Global variables for testing changes to this notebook quickly
NUM_TREES = 10000
EARLY_STOP = 150
NUM_FOLDS = 3
RANDOM_SEED = 0
SUBMIT = True
# +
# Essential imports
import numpy as np
import pandas as pd
import matplotlib
import pyarrow
import time
import os
import gc
# feature engineering
import scipy.stats as stats
from category_encoders import MEstimateEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from functools import partial
# Model evaluation
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.feature_selection import mutual_info_classif
# LightGBM
from lightgbm import LGBMClassifier, plot_importance
# Mute warnings
import warnings
warnings.filterwarnings('ignore')
# display options
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
# -
# ## Loading Function
#
# We create a function that recreates the training and holdout sets since some of our methods may overwrite the original data and we need a reproducible way to get the same data.
# Generate training and holdout set
def get_training_data():
train = pd.read_feather("../data/train.feather")
train, holdout = train_test_split(
train,
train_size = 500000,
stratify = train['target'],
shuffle = True,
random_state = RANDOM_SEED,
)
train.reset_index(drop = True, inplace = True)
holdout.reset_index(drop = True, inplace = True)
return train, holdout
# +
# %%time
train, holdout = get_training_data()
# save important features
features = [x for x in train.columns if x not in ['id','target']]
# -
# ## Scoring Function
#
# For each feature engineering technique we create a function that accepts the training, validation and test data as arguments and returns the appropriately transformed data (taking care to avoid leakage). This function is passed to the scoring function below as the argument `preprocessing`.
def score_lightgbm(preprocessing = None):
start = time.time()
holdout_preds = np.zeros((holdout.shape[0],))
print('')
skf = StratifiedKFold(n_splits = NUM_FOLDS, shuffle = True, random_state = 0)
for fold, (train_idx, valid_idx) in enumerate(skf.split(train, train['target'])):
# train, valid split for cross-validation
X_train, y_train = train[features].iloc[train_idx].copy(), train['target'].iloc[train_idx].copy()
X_valid, y_valid = train[features].iloc[valid_idx].copy(), train['target'].iloc[valid_idx].copy()
X_test, y_test = holdout[features].copy(), holdout['target'].copy()
        # preprocessing function should return a copy
        if preprocessing:
            try:
                X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test, y_train)
            except TypeError:
                # fall back for preprocessing functions that don't take y_train
                X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test)
# model with params
model = LGBMClassifier(
n_estimators = NUM_TREES,
random_state = RANDOM_SEED,
learning_rate = 0.03,
)
model.fit(
X_train, y_train,
eval_set = [(X_valid, y_valid)],
eval_metric = 'auc',
early_stopping_rounds = EARLY_STOP,
verbose = False,
)
holdout_preds += model.predict_proba(X_test)[:,1] / NUM_FOLDS
valid_preds = model.predict_proba(X_valid)[:,1]
fold_auc = roc_auc_score(y_valid, valid_preds)
print(f"Fold {fold} (AUC):", fold_auc)
end = time.time()
return roc_auc_score(holdout['target'], holdout_preds), round(end-start, 2), model
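# For clarity, a do-nothing example of the `preprocessing` callable this scoring function expects (the name `identity_preprocessing` is hypothetical): it must accept the three frames, optionally `y_train`, and return transformed copies.

```python
import pandas as pd

def identity_preprocessing(X_train, X_valid, X_test, y_train=None):
    # Return copies so the caller's frames are never mutated
    return X_train.copy(), X_valid.copy(), X_test.copy()

X = pd.DataFrame({"f0": [1.0, 2.0, 3.0]})
a, b, c = identity_preprocessing(X, X, X)
print(a.equals(X), a is X)   # same content, different object
```

# Passing it as `score_lightgbm(identity_preprocessing)` should reproduce the baseline score.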
# # 0. Baseline (LightGBM)
#
# We start with computing a baseline score for LightGBM using the raw data with no feature engineering.
# +
baseline_score, baseline_time, model = score_lightgbm()
print("\nTraining Time:", baseline_time)
print("Holdout (AUC):", baseline_score)
# -
# # 1. Feature Selection
#
# In this section we experiment with dropping certain features deemed unimportant by various feature selection techniques. We consider two methods for determining unimportant features:
#
# * LightGBM feature importance
# * Mutual Information
# Data structure for comparing
data = dict(
scores = [baseline_score],
times = [baseline_time]
)
index = ["Baseline"]
# ## 1.1 Feature Importance
#
# We define a bad feature as one with a feature importance below 3, using the model's built-in `feature_importances_` attribute:
# Determine good columns
good_columns = list()
for score, col in zip(model.feature_importances_, train[features].columns):
if score >= 3:
good_columns.append(col)
def feature_selection_importance(X_train, X_valid, X_test):
return X_train[good_columns], X_valid[good_columns], X_test[good_columns]
# +
# Feature selection with 'feature importance'
print(f'Removed {len(features) - len(good_columns)} features.')
fi_score, fi_time, model = score_lightgbm(feature_selection_importance)
del model
gc.collect()
print("\nTraining Time:", fi_time)
print("Holdout (AUC):", fi_score)
data['times'].append(fi_time)
data['scores'].append(fi_score)
index.append('Feature Importance')
# -
# ## 1.2 Mutual Information
#
# In this section we remove features which have zero [mutual information](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.mutual_info_classif.html#sklearn.feature_selection.mutual_info_classif) scores.
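# To make "zero mutual information" concrete, here is a hand-rolled computation for binary variables (a sketch; for discrete features, `mutual_info_classif` essentially reduces to this plug-in contingency estimate):

```python
import numpy as np

def binary_mutual_info(x, y):
    """Plug-in estimate of I(X;Y) in nats for two binary arrays."""
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
x_copy = y.copy()                               # perfectly informative
x_indep = np.array([0, 0, 1, 1, 0, 0, 1, 1])    # independent of y here
print(binary_mutual_info(x_copy, y))   # ln(2) ≈ 0.693: knowing x determines y
print(binary_mutual_info(x_indep, y))  # 0.0: dropping such a feature loses nothing
```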
def remove_uninformative(X_train, X_valid, X_test, y_train, verbose = False):
# 0. categoricals
binary_features = [X_train[x].dtype.name.startswith("int") for x in X_train.columns]
# 1. Determine uninformative columns
scores = mutual_info_classif(
X_train, y_train,
discrete_features = binary_features,
)
cols = [x for i, x in enumerate(X_train.columns) if scores[i] == 0]
# 2. Drop the uninformative columns
X_train.drop(cols, axis = 1, inplace = True)
X_valid.drop(cols, axis = 1, inplace = True)
X_test.drop(cols, axis = 1, inplace = True)
if verbose:
print("Dropped columns:", *cols)
return X_train, X_valid, X_test
# +
mi_score, mi_time, model = score_lightgbm(remove_uninformative)
del model
gc.collect()
print("\nTraining Time:", mi_time)
print("Holdout (AUC):", mi_score)
data['times'].append(mi_time)
data['scores'].append(mi_score)
index.append('Mutual Information')
# -
# # 2. Row Statistics
#
# In this section, we calculate several row statistics as features and see which (if any) result in improvements over the original features.
# +
def create_row_stats(data):
cont_cols, cat_cols = list(), list()
for col in data.columns:
if data[col].dtype.name.startswith("int"):
cat_cols.append(col)
else:
cont_cols.append(col)
new_data = data.copy()
new_data['binary_count'] = data[cat_cols].sum(axis=1)
new_data['binary_std'] = data[cat_cols].std(axis=1)
new_data['min'] = data[cont_cols].min(axis=1)
new_data['std'] = data[cont_cols].std(axis=1)
new_data['max'] = data[cont_cols].max(axis=1)
new_data['median'] = data[cont_cols].median(axis=1)
new_data['mean'] = data[cont_cols].mean(axis=1)
#new_data['var'] = data[cont_cols].var(axis=1)
#new_data['sum'] = data[cont_cols].sum(axis=1)
#new_data['sem'] = data[cont_cols].sem(axis=1)
new_data['skew'] = data[cont_cols].skew(axis=1)
new_data['median_abs_dev'] = stats.median_abs_deviation(data[cont_cols], axis=1)
new_data['zscore'] = (np.abs(stats.zscore(data[cont_cols]))).sum(axis=1)
return new_data
def row_stats(X_train, X_valid, X_test, y_train):
X_train = create_row_stats(X_train)
X_valid = create_row_stats(X_valid)
X_test = create_row_stats(X_test)
return X_train, X_valid, X_test
# -
features = [x for x in train.columns if x not in ['id','target']]
# +
stats_score, stats_time, model = score_lightgbm(row_stats)
print("\nTraining Time:", stats_time)
print("Holdout (AUC):", stats_score)
data['times'].append(stats_time)
data['scores'].append(stats_score)
index.append('Row Stats')
# -
# The model found some of these variables moderately important, but there is no noticeable benefit to the overall accuracy, and training is much slower.
# # 3. Target Encoding
#
# In this section, we target encode all the binary variables. Target encoding is generally used for higher cardinality categorical data but we'll try it here anyways.
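# Before running it, a hand-rolled sketch of what an m-estimate encoder computes (the toy data and column name are made up; the formula is the standard m-estimate smoothing that `MEstimateEncoder` with `m = 1.0` applies per category):

```python
import pandas as pd

# encoded(category) = (n_cat * mean_cat + m * global_mean) / (n_cat + m)
df = pd.DataFrame({"f": [1, 1, 1, 0, 0], "target": [1, 1, 0, 0, 0]})
m = 1.0
global_mean = df["target"].mean()                          # 0.4
grp = df.groupby("f")["target"].agg(["count", "mean"])
encoding = (grp["count"] * grp["mean"] + m * global_mean) / (grp["count"] + m)
print(encoding[1])   # (3 * 2/3 + 1 * 0.4) / 4 = 0.6
print(encoding[0])   # (2 * 0   + 1 * 0.4) / 3 ≈ 0.133
```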
# +
# %%time
train, holdout = get_training_data()
features = [x for x in train.columns if x not in ['id','target']]
binary_features = [x for x in features if train[x].dtype.name.startswith("int")]
# -
def target_encode(X_train, X_valid, X_test, y_train):
encoder = MEstimateEncoder(
cols = binary_features,
m = 1.0,
)
X_train = encoder.fit_transform(X_train, y_train)
X_valid = encoder.transform(X_valid)
X_test = encoder.transform(X_test)
return X_train, X_valid, X_test
# +
target_score, target_time, model = score_lightgbm(target_encode)
# don't need the model
del model
gc.collect()
print("\nTraining Time:", target_time)
print("Holdout (AUC):", target_score)
data['times'].append(target_time)
data['scores'].append(target_score)
index.append('Target Encoding')
# -
# As noted above, target encoding works best on high-cardinality variables, so it is not particularly surprising that this didn't improve our models. It also significantly slowed down training.
# # 4. KMeans Clustering
#
# We test cluster labels as categorical features and cluster distances as numerical features separately and see if either results in better models.
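# The two subsections use the same fitted KMeans in different ways, which a toy example makes clear: `fit_predict`/`predict` yield one cluster label per row (a categorical feature), while `transform` yields the distance to each centroid (numeric features).

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0], [0.1], [10.0], [10.1]])   # two obvious clusters
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.predict(X)    # one cluster index per row
dists = km.transform(X)   # distance from each row to each of the 2 centroids
print(labels)
print(dists.shape)        # (4, 2)
```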
# ## 4.1 Cluster Labels
def generate_cluster_labels(X_train, X_valid, X_test, name, features, scale = True):
# 1. normalize based on training data
if scale:
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_train[features])
X_valid_scaled = scaler.transform(X_valid[features])
X_test_scaled = scaler.transform(X_test[features])
else:
# no scaling
X_scaled = X_train[features]
X_valid_scaled = X_valid[features]
X_test_scaled = X_test[features]
# 2. create cluster labels (use predict)
kmeans = KMeans(
n_clusters = 10,
n_init = 10,
random_state = RANDOM_SEED
)
X_train[name + "_Cluster"] = kmeans.fit_predict(X_scaled)
X_valid[name + "_Cluster"] = kmeans.predict(X_valid_scaled)
X_test[name + "_Cluster"] = kmeans.predict(X_test_scaled)
return X_train, X_valid, X_test
def cluster_label_features(X_train, X_valid, X_test, y_train):
# get variables correlated with target
corr = train.corr()
corr = corr.loc['target':'target']
corr = corr.drop(['id','target'],axis=1)
corr = abs(corr)
corr = corr.sort_values(by='target',axis=1, ascending=False)
cols = [x for x in corr.columns][:15]
return generate_cluster_labels(X_train, X_valid, X_test, "Top15", cols)
# +
clusterlabel_score, clusterlabel_time, model = score_lightgbm(cluster_label_features)
# don't need the model
del model
gc.collect()
print("\nTraining Time:", clusterlabel_time)
print("Holdout (AUC):", clusterlabel_score)
data['times'].append(clusterlabel_time)
data['scores'].append(clusterlabel_score)
index.append("Cluster Labels")
# -
# ## 4.2 Cluster Distances
def generate_cluster_distances(X_train, X_valid, X_test, name, features, scale = True):
# 1. normalize based on training data
if scale:
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_train[features])
X_valid_scaled = scaler.transform(X_valid[features])
X_test_scaled = scaler.transform(X_test[features])
else:
# no scaling
X_scaled = X_train[features]
X_valid_scaled = X_valid[features]
X_test_scaled = X_test[features]
# 2. generate cluster distances (use transform)
kmeans = KMeans(n_clusters = 10, n_init = 10, random_state=0)
X_cd = kmeans.fit_transform(X_scaled)
X_valid_cd = kmeans.transform(X_valid_scaled)
X_test_cd = kmeans.transform(X_test_scaled)
# 3. column labels
X_cd = pd.DataFrame(X_cd, columns=[name + "_Centroid_" + str(i) for i in range(X_cd.shape[1])])
X_valid_cd = pd.DataFrame(X_valid_cd, columns=[name + "_Centroid_" + str(i) for i in range(X_valid_cd.shape[1])])
X_test_cd = pd.DataFrame(X_test_cd, columns=[name + "_Centroid_" + str(i) for i in range(X_test_cd.shape[1])])
return X_train.join(X_cd), X_valid.join(X_valid_cd), X_test.join(X_test_cd)
def cluster_distance_features(X_train, X_valid, X_test, y_train):
# get variables correlated with target
corr = train.corr()
corr = corr.loc['target':'target']
corr = corr.drop(['id','target'],axis=1)
corr = abs(corr)
corr = corr.sort_values(by='target',axis=1, ascending=False)
cols = [x for x in corr.columns][:15]
return generate_cluster_distances(X_train, X_valid, X_test, "Top15", cols)
# +
clusterdist_score, clusterdist_time, model = score_lightgbm(cluster_distance_features)
# don't need the model
del model
gc.collect()
print("\nTraining Time:", clusterdist_time)
print("Holdout (AUC):", clusterdist_score)
data['times'].append(clusterdist_time)
data['scores'].append(clusterdist_score)
index.append('Cluster Distances')
# -
# # Evaluation
pd.DataFrame(data = data, index = index).T
# None of these methods looks particularly promising: each provides little or no gain and/or significantly increases training time. We may still experiment with some of them during ensembling to increase the variance among models.
# File: tps-2021-10/notebooks/Notebook 3 - Feature Engineering.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import SQL Alchemy
from sqlalchemy import create_engine
# Import and establish Base for which classes will be constructed
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
# Import modules to declare columns and column data types
from sqlalchemy import Column, Integer, String, Float
# +
# Create Surfer and Board classes
# ----------------------------------
class Surfer(Base):
__tablename__ = 'surfers'
id = Column(Integer, primary_key=True)
name = Column(String(255))
hometown = Column(String(255))
wipeouts = Column(Integer)
rank = Column(Integer)
class Board(Base):
__tablename__ = 'surfboards'
id = Column(Integer, primary_key=True)
surfer_id = Column(Integer)
board_name = Column(String(255))
color = Column(String(255))
length = Column(Integer)
# -
# Create specific instances of the Surfer and Board classes
# ----------------------------------
# Create a new surfer named "Bruno"
surfer = Surfer(name='Bruno', hometown="LA", rank=10)
# Create a new board and associate it with a surfer's ID
board = Board(surfer_id=1, board_name="Awwwyeah", color="Blue", length=68)
# Create Database Connection
# ----------------------------------
# Establish Connection
engine = create_engine("sqlite:///surfer.sqlite")
conn = engine.connect()
# Create both the Surfer and Board tables within the database
Base.metadata.create_all(conn)
# To push the objects made and query the server we use a Session object
from sqlalchemy.orm import Session
session = Session(bind=engine)
# Add "Bruno" to the current session
session.add(surfer)
# Add "Awwwyeah" to the current session
session.add(board)
# Commit both objects to the database
session.commit()
# Query the database and collect all of the surfers in the Surfer table
surfer_list = session.query(Surfer)
for bro in surfer_list:
print(bro.name)
print(bro.hometown)
print(bro.rank)
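# Beyond iterating over every row, `query` can also filter. A self-contained sketch, not part of the original activity (it uses an in-memory database so nothing is written to disk; the try/except import covers SQLAlchemy versions before and after 1.4):

```python
from sqlalchemy import create_engine, Column, Integer, String
try:
    from sqlalchemy.orm import declarative_base   # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Surfer(Base):
    __tablename__ = "surfers"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    rank = Column(Integer)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add(Surfer(name="Bruno", rank=10))
session.commit()

# filter_by matches keyword arguments against columns; first() returns
# one matching object, or None if nothing matches
bruno = session.query(Surfer).filter_by(name="Bruno").first()
print(bruno.rank)
```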
# File: 10-Advanced-Data-Storage-and-Retrieval/1/Activities/11-Stu_Surfer_SQL/Solved/.ipynb_checkpoints/Surfer_SQL-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # SageMaker Bring Your Own Container (BYOC)
#
#
# The figure below gives an overview of BYOC.
# 1. Create a docker image (for training, serving, or both). The steps are:
#     - Write the train/serve script
#     - Write the Dockerfile
#     - Build the docker container
#     - [Optional] Test the docker container locally
# 2. Push the docker container to ECR
# 3. For training, create an Estimator, passing it the location of the docker container registered in ECR and the input data in S3
# 4. After training, the model artifacts (training outputs) are stored in S3
# 5. Deploy the prediction (inference) model
# 6. Create an endpoint
# 7. Send inference requests to the endpoint and receive the results
#
#
#
# ## BYOC Guide Documents <br>
#
# 1. For a high-level overview, see the attached file.
#     - [BYOC Overview](Sagemaker-BYOC-seongmoon.pdf)
#
#
# 2. See "Get Started" in the developer guide.
#    [Get Started: Build Your Custom Training Container with Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/build-container-to-train-script-get-started.html) has a detailed walkthrough.
#
# 3. This is the entry page on bringing your own algorithm. <br>
#    From a BYOC perspective, see "Extend a pre-built container image" and "Build your own custom container image".
#     - Use Your Own Algorithms or Models with Amazon SageMaker<br>
#     https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html
#
#
# ## Example Notebooks
#
# #### 1. Using ubuntu:16.04 as the base image
#
# The notebook below implements a Decision Tree with Scikit-Learn on the iris dataset, walks through the BYOC steps, creates an endpoint, and runs inference. The custom docker container it builds is used for both training and inference.
#
# - Git: [Building your own algorithm container](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb)
# - Blog: [Train and host Scikit-Learn models in Amazon SageMaker by building a Scikit Docker container](https://aws.amazon.com/ko/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/#building)
#
#
# #### 2. Using a prebuilt SageMaker container image as the base image
#
# **If you want a quick example, see "churn-prediction-workshop2" under (3) below.**
#
# (1) The notebook below writes a TensorFlow algorithm for the cifar10 image data, walks through the BYOC steps, creates an endpoint, and runs inference. The custom docker container it builds is used for both training and inference.
# - [Building your own TensorFlow container](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/tensorflow_bring_your_own/tensorflow_bring_your_own.ipynb)
# - In the inference step you can use an inference-only docker container; for TensorFlow, the tensorflow-serving inference image is commonly used: [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container)
#
#
# (2) The notebook below shows "how to extend a prebuilt Amazon SageMaker PyTorch container image": it writes a Dockerfile on top of the provided pytorch framework docker image, then trains on the cifar10 images and runs inference.
# - [pytorch_extending_our_containers](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/pytorch_extending_our_containers/pytorch_extending_our_containers.ipynb)
#
#
# (3) The following two notebooks extend prebuilt Scikit-Learn and TensorFlow docker images.
# - [[Module 3.2] Create a Custom PCA Docker Image, push to ECR, and train the model](https://github.com/gonsoomoon-ml/churn-prediction-workshop2/blob/master/3.2.Make-BYOC-Custom-PCA-Docker.ipynb) in churn-prediction-workshop2 extends the pre-built Scikit-Learn docker image.
# - [[Module 3.4.1] Create the training Docker image and publish to ECR](https://github.com/gonsoomoon-ml/RecommendEmoticon/blob/master/Tweet-BERT/3.4.1.Make-Train-Image-ECR.ipynb) in "Emoticon recommendation based on Tweet sentiment" extends the pre-built tensorflow2.1 docker image.
#
# (4) This is from the official developer documentation; some of the examples above come from the link below.
# [Example Notebooks: Use Your Own Algorithm or Model](https://docs.aws.amazon.com/sagemaker/latest/dg/adv-bring-own-examples.html)
#
# ### Amazon SageMaker Custom Training containers
# - Types of SageMaker-compatible training containers, with example code.<br>
# - The types are "Basic Training Container", "Script Mode Container", "Script Mode Container2", and "Framework Container".
# https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/custom-training-containers
#
#
# ## References:
#
# - Prebuilt Amazon SageMaker Docker Images for Scikit-learn and Spark ML
#     - List of built-in SKLearn images<br>
#     https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-docker-containers-frameworks.html
#
#
# - Available Deep Learning Containers Images
#     - List of built-in Tensorflow, PyTorch, and MXNet images<br>
#     https://github.com/aws/deep-learning-containers/blob/master/available_images.md
#
#
# - SageMaker Inference Toolkit
#     - A guide to building custom images for inference<br>
#     https://github.com/aws/sagemaker-inference-toolkit
#
#
# - Bring Your Own Model (XGboost)
#     - BYOM (build the model locally, then deploy it on SageMaker) <br>
#     https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/xgboost_bring_your_own_model
#
#
# - Using Scikit-learn with the SageMaker Python SDK
#     - The official SageMaker Python SDK documentation.<br>
#     https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html#id2
#
#
# - A Docker guide for beginners (in Korean): building and deploying images
#     https://subicura.com/2017/02/10/docker-guide-for-beginners-create-image-and-deploy.html
# File: BYOC/README.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predicting House Prices
# ## Objectives
# Predict sales prices of residential homes in Ames, Iowa. Practice feature engineering with RFE and regression techniques like OLS and regularization (Lasso regression). I am using the [Ames Housing dataset](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/overview) available on Kaggle.
# +
#Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from scipy import stats
from scipy.stats import pearsonr
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OrdinalEncoder
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.metrics import r2_score
import warnings
warnings.filterwarnings("ignore")
# %matplotlib inline
# -
# ## Loading data
#Loading train dataset
train=pd.read_csv('train.csv')
# +
# Checking the train dataset
print("\n Train dataset dimensions", train.shape)
print("\n Columns info", train.info())
# -
# This first look at the data shows that some features have many missing values. Comparing the data types in the dataset with the data description document, we can see that many variables are typed incorrectly. Also, categorical variables must be presented to algorithms as numeric values, not as labels.
# Loading test dataset
test=pd.read_csv('test.csv')
# Checking test dataset
print('\n Test dataset dimensions', test.shape)
print("\n Columns info", test.info())
# ## Data Wrangling
# I'll concatenate the train and test datasets because I'll be doing a lot of data transformations, and every change made to the training dataset should also be applied to the test dataset. To be sure I can separate them exactly as they were provided, I'll add an identifier column so I can split the dataset back before modeling.
#Adding identifier
train['identifier']='train'
test['identifier']='test'
#concatenating
houses= pd.concat((train.loc[:,'MSSubClass':'identifier'],
test.loc[:,'MSSubClass':'identifier']))
houses.shape
# There are some categorical features that need to be transformed. Some appear as objects in the dataset; in other cases a feature appears as numeric but is actually categorical. Also, some of these categorical variables have NAs that can be recoded because they carry important information. Finally, some variables have a LOT of categories. Some of them can be regrouped, others can't. Those that I believe cannot be regrouped I'll leave as they are and decide during feature extraction whether they are worth using.
#
# So here are the steps for the data wrangling:
#
# 1. Recode features that are worth recoding;
# 2. Transforming the categorical features
# <strong>Step 1: Recoding features</strong>
# +
## Feature: Alley
print('Count by category:',houses['Alley'].value_counts())
print('\nCount of NAs:', houses['Alley'].isnull().sum())
# -
#NA indicates that the house has no alley access. It is the largest
#'category', but its count is so big that the variable may have very
#little variance. It probably won't be important for the model, but
#I'll recode it anyway and decide whether to include it in the model
#during feature extraction
houses['Alley']=houses['Alley'].fillna('no_alley')
print('Count by category:',houses['Alley'].value_counts())
# +
##Features: OverallQual & OverallCond. I'll regroup these variables.
#Creating a dictionary with the recoding
overall_dic={'OverallQual':{10:'excelent', 9:'excelent',8:'good',
7:'good', 6:'above_average', 5:'average',
4:'poor', 3:'poor', 2:'poor', 1:'poor'},
'OverallCond':{10:'excelent', 9:'excelent',8:'good',
7:'good', 6:'above_average', 5:'average',
4:'poor', 3:'poor', 2:'poor', 1:'poor'}}
#replacing
houses=houses.replace(overall_dic)
# +
#Features: YearBuilt & YearRemodAdd. These variables go back to the
# nineteenth and twentieth centuries. I'll create categories for each of
#them.
#function to create groups
def yearbuilt_group(year):
if year <= 1900:
return "1900_or_older"
elif 1900 < year <= 1950:
return "1901-1950"
elif 1950 < year < 1970:
return "1951 - 1969"
elif 1970 <= year < 2000:
return "1970 - 1999"
elif 2000<= year:
return "2000's"
#applying the function
houses['YearBuilt']=houses['YearBuilt'].apply(yearbuilt_group)
# +
#YearRemodAdd
#function to code groups
def yearremod_group(year):
if year < 1960:
return "1950-1959"
elif 1960 <= year < 1970:
        return "1960 - 1969"
elif 1970 <= year < 1980:
return "1970-1979"
elif 1980 <= year < 1990:
return "1980 - 1989"
elif 1990 <= year < 2000:
return "1990 - 1999"
elif 2000<= year:
return "2000's"
#applying function
houses['YearRemodAdd']=houses['YearRemodAdd'].apply(yearremod_group)
# +
#Features: BsmtQual, BsmtCond, BsmtExposure & BsmtFinType1. NAs
#indicates that the house has no basement. I'll replace them to
# a 'no basement' category
for column in houses[['BsmtQual','BsmtCond', 'BsmtExposure',
'BsmtFinType1','BsmtFinType2']]:
houses[column]=houses[column].fillna('no_basement')
# +
#Functional - there's not a lot of variance in this feature. Most cases
#are categorized as "Typical". Minor and major deductions are in such
# a small number that it's worth just grouping them all in one category
#for deductions.
#creating the dictionary
deductions_dic={'Functional':{'Typ':'Typ', 'Min1':'deduc',
'Min2':'deduc', 'Mod':'deduc',
'Maj1':'deduc', 'Maj2':'deduc',
'Sev':'Sev'}}
#replacing
houses=houses.replace(deductions_dic)
# +
## FireplaceQu: transforming NAs to category 'no_fireplace'
houses['FireplaceQu']=houses['FireplaceQu'].fillna('no_fireplace')
#Checking:
print('Count by category:',houses['FireplaceQu'].value_counts())
# +
#Creating a for loop to fill NAs on variables about garages. In these
#cases NA indicates that there's no garage in the house.
#Features:GarageType,GarageFinish,GarageQual,GarageCond
for column in houses[['GarageType','GarageFinish',
'GarageQual','GarageCond']]:
houses[column]=houses[column].fillna('no_garage')
# +
## Filling NAs for PoolQC, Fence, MiscFeature
houses['PoolQC']=houses['PoolQC'].fillna('no_pool')
houses['Fence']=houses['Fence'].fillna('no_fence')
houses['MiscFeature']=houses['MiscFeature'].fillna('no_miscellaneous')
# -
## Checking the dataset to see if there are more changes to be done
houses.info()
# +
## Features that still have a lot of null cells: LotFrontage,
#MasVnrType, MasVnrArea, GarageYrBlt.
#For LotFrontage I'll input the mean value of this variable
#I'll fill GarageYrBlt with the category '0'
#For MasVnrType and MasVnrArea we actually have NAs, meaning that
#we don't have any information about what the missing values
#could be. I'll just leave the NAs as they are.
#LotFrontage:
mean_LotFrontage=houses['LotFrontage'].mean()
houses['LotFrontage']=houses['LotFrontage'].fillna(mean_LotFrontage)
# GarageYrBlt
houses['GarageYrBlt']=houses['GarageYrBlt'].fillna(0)
# -
#Features to be transformed as categoricals
cat=['MSSubClass','MSZoning','Street', 'Alley','LotShape','LandContour',
'Utilities', 'LotConfig', 'LandSlope','Neighborhood','Condition1',
'Condition2','BldgType','HouseStyle', 'OverallQual', 'OverallCond',
'YearBuilt', 'YearRemodAdd','RoofStyle','Exterior1st','Exterior2nd',
'MasVnrType','ExterQual','ExterCond','Foundation','BsmtQual',
'BsmtCond','BsmtExposure','BsmtFinType2', 'Heating','HeatingQC',
'CentralAir','Electrical', 'KitchenQual','FireplaceQu','GarageType',
'GarageFinish','GarageQual','GarageCond','GarageYrBlt','PavedDrive',
'MoSold','YrSold','SaleType','SaleCondition','RoofMatl','BsmtFinType1',
'Functional', 'PoolQC','Fence','MiscFeature']
#Saving a list of numeric features
num=['LotFrontage','LotArea','MasVnrArea','BsmtFinSF1','BsmtFinSF2',
'BsmtUnfSF','TotalBsmtSF','1stFlrSF','2ndFlrSF','LowQualFinSF',
'GrLivArea','BsmtFullBath','BsmtHalfBath','FullBath','HalfBath',
'BedroomAbvGr','KitchenAbvGr','TotRmsAbvGrd','Fireplaces',
'GarageCars','GarageArea','WoodDeckSF','OpenPorchSF',
'EnclosedPorch','3SsnPorch','ScreenPorch','PoolArea','MiscVal','SalePrice']
# <strong>Step 2: Transforming categorical features</strong>
# Iterate over the columns to change those that are categories
for column in houses[cat]:
houses[column] = houses[column].astype("category")
# ## Exploratory Data Analysis (EDA)
# <strong>Dependent variable</strong>
# Verifying the distribution of the target variable
#Comparing price and log of price.
new_price = {'price':houses["SalePrice"], 'log(price + 1)':np.log1p(houses['SalePrice'])}
prices= pd.DataFrame(new_price)
prices.hist()
#Summarizing price and log of price
prices.describe()
# Price is not normally distributed, so I'll model the log of price: linear regression assumes Gaussian (normally distributed) errors, and the log transform removes most of the right skew in the target.
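# A quick illustration of the transform and its exact inverse (the prices here are made-up numbers): `np.log1p` compresses the right-skewed dollar scale, and `np.expm1` maps log-scale predictions back to dollars.

```python
import numpy as np

prices = np.array([50_000.0, 180_000.0, 755_000.0])
logged = np.log1p(prices)          # log(1 + price), stable near zero
restored = np.expm1(logged)        # exact inverse: exp(x) - 1
print(logged)
print(restored)   # recovers the original prices
```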
#Transforming SalePrice to log of SalePrice
houses["SalePrice"] = np.log1p(houses["SalePrice"])
# <strong>Independent Variables</strong>
# <strong>Checking numeric variables for outliers</strong>
#Creating separate dataset just with numeric features
houses_num=houses[num]
#For loop to create boxplots for all features so I can look for
#outliers
for columns in houses_num:
plt.figure()
sns.boxplot(x=houses_num[columns])
# I've looked at every feature closely and noticed that some have
# a great number of zeros. Those features show little variation, so I
# believe they'll probably be dropped during feature extraction. I'm
# removing outliers only from variables whose distributions show real
# variation, which suggests they may be relevant for the model.
#Taking off outliers
houses= houses[houses['LotFrontage']<300]
houses= houses[houses['LotArea']<100000]
houses= houses[houses['BsmtUnfSF']<2336]
houses= houses[houses['TotalBsmtSF']<5000]
houses= houses[houses['1stFlrSF']<4000]
houses= houses[houses['GrLivArea']<4000]
# ## Feature Engineering and Selection
# I'll standardize the numeric features, subtracting each feature's mean and dividing by its standard deviation, so that all features are on the same scale. For the categorical features I'll one-hot encode the variables whose categories are independent of each other and convert to ordinal those whose categories have a natural order.
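The two encoding strategies described above can be sketched on a toy frame (hypothetical values; `Quality` has a natural order, `RoofStyle` does not):

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Toy frame: Quality is ordered (poor < fair < good), RoofStyle is nominal
toy = pd.DataFrame({'Quality': ['Po', 'Fa', 'Gd'],
                    'RoofStyle': ['Gable', 'Hip', 'Gable']})

# Ordinal encoding maps ordered categories to integers; passing the
# categories explicitly fixes the order (it is alphabetical otherwise)
enc = OrdinalEncoder(categories=[['Po', 'Fa', 'Gd']])
toy['Quality'] = enc.fit_transform(toy[['Quality']]).ravel()

# One-hot encoding expands the nominal column into 0/1 indicator columns
toy = pd.get_dummies(toy, columns=['RoofStyle'])
print(toy)
```

One caveat worth noting: fitting `OrdinalEncoder` without an explicit `categories` argument (as the cell below does) orders labels lexicographically, which may not match the true quality scale.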
#Scaling numeric features
scaler = StandardScaler()
houses[num] = pd.DataFrame(scaler.fit_transform(houses[num]), index=houses.index, columns=num)
#Checking to see if there's any remaining NAN
print("Is there any NAN?", houses.isnull().values.any())
print("How many?", houses.isnull().sum().sum())
#Dropping NAN
houses=houses.dropna()
# +
#Separating ordinal and nominal categorical variables
cat_ordinal=['OverallQual','OverallCond','ExterQual','ExterCond',
'BsmtQual','BsmtCond','BsmtFinType1','BsmtFinType2',
'HeatingQC','KitchenQual','FireplaceQu','GarageQual',
'GarageCond','PoolQC']
cat_nominal=[i for i in cat if i not in cat_ordinal]
# -
# define ordinal encoding
encoder_ord = OrdinalEncoder()
# transform data
houses[cat_ordinal] = pd.DataFrame(encoder_ord.fit_transform(houses[cat_ordinal]), index=houses.index, columns=cat_ordinal)
#One-hot encoding on nominal categorical features
houses= pd.get_dummies(houses,columns=cat_nominal)
#Splitting dataframe into train and test
train=houses[houses['identifier']=='train']
test=houses[houses['identifier']=='test']
#Dropping identifier from both dataframes
train.drop('identifier',axis='columns',inplace=True)
test.drop('identifier',axis='columns',inplace=True)
# +
### I HAVE TO TAKE OFF SALE PRICE BECAUSE IT WASN'T PRESENT
## IN THE ORIGINAL DATASET!!
# -
#Separating X and y
X_train=train.loc[:, train.columns != 'SalePrice']
y_train=train['SalePrice']
X_test=test.loc[:, test.columns != 'SalePrice']
y_test=test['SalePrice']
# I have too many features. To decide which ones I'll use in the first model, a multiple linear regression, I'll do feature selection with RFE (recursive feature elimination) with cross-validation (RFECV). Later I'll run a Lasso regression to see which features that model keeps and compare them to those selected here by the RFECV.
#specifying model
lm=LinearRegression()
#defining the rfecv
rfecv=RFECV(estimator=lm, step=1, scoring='r2')
#fitting the rfecv to the training datasets
rfecv.fit(X_train,y_train)
#How many features were selected?
rfecv.n_features_
# summarize all features. Here I'll search for the 24 variables
#selected by the rfecv that are ranked as 1. These will be the features
#I'll use in the first model
for i in range(X_train.shape[1]):
    print('Column: %d, Selected %s, Rank: %d' % (i, rfecv.support_[i], rfecv.ranking_[i]))
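Rather than hard-coding column positions (as the `feat` index list below does), the rank-1 features can be pulled straight from `rfecv.support_` by name. A self-contained sketch on synthetic stand-in data:

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression

# Toy regression problem standing in for the housing data:
# 10 candidate features, 5 of them informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=5, random_state=0)
X = pd.DataFrame(X, columns=[f'feat_{i}' for i in range(10)])

rfecv = RFECV(estimator=LinearRegression(), step=1, scoring='r2').fit(X, y)

# support_ is a boolean mask over the columns, so the selected features
# can be taken by name; this avoids hard-coded integer indices that
# silently break if the column order ever changes
selected = X.columns[rfecv.support_].tolist()
print(selected)
```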
# +
#finding the index for SalePrice
train.columns.get_loc('SalePrice')
# +
#list with selected features and the target variable
feat=[16,17,18,19,63,64,76,77,118,125,126,157,162,177,182,231,232,
368,369,370,371,372,373,374,42]
# +
#saving datasets only with the selected features
train_new=train.iloc[:, feat]
test_new=test.iloc[:, feat]
# -
# ## Prediction
# <strong> Model 1: Multiple Linear Regression, Ordinary Least Squares (OLS) </strong>
# +
#Separating X and y
X_train_new=train_new.loc[:,train_new.columns != 'SalePrice']
y_train_new=train_new['SalePrice']
X_test_new=test_new.loc[:,test_new.columns != 'SalePrice']
y_test_new=test_new['SalePrice']
# -
#Creating the model
linear_reg= LinearRegression(normalize= False, fit_intercept= True)
#Training the model
model1=linear_reg.fit(X_train_new, y_train_new)
# +
# getting the importance of the variables (checking the coefficients)
importance_mod1 = model1.coef_
# summarize feature importance
for i, v in enumerate(importance_mod1):
    print('Feature: %0d, Score: %.5f' % (i, v))
# -
#Taking off features that presented score=0
#saving datasets only with the selected features
feat_drop=[15,17]
train_new.drop(train_new.iloc[:,feat_drop], axis = 1, inplace=True)
test_new.drop(test_new.iloc[:,feat_drop], axis = 1, inplace=True)
# +
#Separating X and y again
X_train_new=train_new.loc[:,train_new.columns != 'SalePrice']
y_train_new=train_new['SalePrice']
X_test_new=test_new.loc[:,test_new.columns != 'SalePrice']
y_test_new=test_new['SalePrice']
# -
#Training the model again
model1=linear_reg.fit(X_train_new, y_train_new)
#feature names
features_mod1=X_train_new.columns
#R-Square
r2_score(y_test_new, model1.predict(X_test_new))
#OLS Coefficients
coef_mod1=pd.DataFrame(model1.coef_, index = X_train_new.columns,
columns=['mod1_coefficients'])
coef_mod1.head()
# <strong> Model 2: Lasso Regression </strong>
# Creating LASSO model with the complete datasets
model2 = LassoCV(alphas = [1, 0.1, 0.001, 0.0005]).fit(X_train, y_train)
#R2 of lasso model
r2_score(y_test, model2.predict(X_test))
#Lasso model coefficients
coef_mod2 = pd.DataFrame(model2.coef_, index = X_train.columns,
columns=['mod2_coefficients'])
coef_mod2.head()
#feature names
features=X_train.columns
#saving an array with the absolute values of the coefficients
importance_mod2=np.abs(model2.coef_)
#features that survived Lasso regression:
lasso_feat=np.array(features)[importance_mod2!=0]
#How many features survived the lasso regression?
len(lasso_feat)
# The problem with this model is that it still has too many variables, which can make generalization difficult. I may also have some overfitting in this model, since the R2 is quite high.
#
# The RFE determined that only 24 features would be enough. Let's see which were the 24 most important features in this model:
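A quick way to check the overfitting worry raised above is to compare train and test R²; a large gap between the two is the classic signal. A sketch on synthetic stand-in data (not the housing set):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the housing data
X, y = make_regression(n_samples=300, n_features=40, n_informative=10,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LassoCV(alphas=[1, 0.1, 0.001, 0.0005]).fit(X_tr, y_tr)

# A large gap between train and test R^2 suggests overfitting
r2_train = r2_score(y_tr, model.predict(X_tr))
r2_test = r2_score(y_te, model.predict(X_te))
print(f'train R^2 = {r2_train:.3f}, test R^2 = {r2_test:.3f}')
```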
# What are the 24 most important coefficients? Saving as a dataframe
top_24=pd.DataFrame(np.abs(coef_mod2['mod2_coefficients'].sort_values(ascending = False)).head(24))
top_24
# <strong> Model 3: Multiple Linear Regression with features selected from Lasso Regression </strong>
#Creating list with the features I'll use in this model
feat_mod3=list(top_24.index)
# +
#Separating X and y
X_train_mod3=train[feat_mod3]
y_train_mod3=train['SalePrice']
X_test_mod3=test[feat_mod3]
y_test_mod3=test['SalePrice']
# -
#Training model 3
model3=linear_reg.fit(X_train_mod3, y_train_mod3)
#R-Square of model 3
r2_score(y_test_mod3, model3.predict(X_test_mod3))
# Model 3 gives a much better prediction than the previous models. I'll rerun it with the statsmodels package to get the summary; I want to check the model statistics to be sure that I am only selecting features that are statistically significant.
# Collecting X
X_stats = houses[feat_mod3]
#with statsmodels a constant needs to be created and included in
#the model
Xc_stats = sm.add_constant(X_train_mod3)
model_stats= sm.OLS(y_train_mod3, Xc_stats)
model_check = model_stats.fit()
model_check.summary()
# +
#Creating a list of variables that were not significant to take off from
#from the model
feat_off=['GarageYrBlt_1939.0', 'LandContour_Bnk','GarageType_BuiltIn',
'GarageYrBlt_1958.0', 'GarageYrBlt_1979.0', 'Neighborhood_BrDale',
'YearBuilt_1901-1950','LandSlope_Gtl', 'Neighborhood_NridgHt']
# -
# <strong> Model 4: Multiple Linear Regression taking off features that were not statistically significant </strong>
# +
#New list of features for model 4. I'll use the list of features
#for model 3 and take off the ones in the feat_off list
feat_mod4=[i for i in feat_mod3 if i not in feat_off]
#how many features will I have in the new model?
len(feat_mod4)
# +
#Separating X and y for model 4
X_train_mod4=train[feat_mod4]
y_train_mod4=train['SalePrice']
X_test_mod4=test[feat_mod4]
y_test_mod4=test['SalePrice']
# -
#Training model 4
model4=linear_reg.fit(X_train_mod4, y_train_mod4)
#R-Square of model 4
r2_score(y_test_mod4, model4.predict(X_test_mod4))
# I even got a slight improvement after taking out the irrelevant variables!
# ## Interpreting results
# Now that I've done the predictions I'll run an OLS model with the dataset houses (that contains both training and test datasets) with the features used on model 4 in order to interpret the relationship between the sale price and these features. In other words, I want to understand what drives the prices of the houses in Ames.
#
# In order to do that I'll use the statsmodels package because it gives a better summary of the regression outcome.
# Collecting X and y
X = houses[feat_mod4]
y = houses['SalePrice'].values
# +
#creating the constant
Xc = sm.add_constant(X)
#model for interpretation
model_interpret= sm.OLS(y, Xc)
model5 = model_interpret.fit()
# -
model5.summary()
# After removing some of the variables in model 4, three other features became irrelevant to the model. I'll take them out to check whether other variables change again. It may be worth removing them to get an even more concise model.
# Also, I suspect that GarageArea and GarageCars may be correlated, since both are measures of the size of the garage. If they are, I should drop one of them from the final model to avoid multicollinearity.
# +
### pearson correlation between GarageArea and GarageCars
corr, _ = pearsonr(houses['GarageArea'], houses['GarageCars'])
print('Pearsons correlation: %.2f' % corr)
# -
# As I imagined, both features are highly correlated. I'll drop GarageArea from the model.
# +
#Second list of variables to take off:
feat_off2= ['GarageYrBlt_1934.0', 'MSZoning_FV', 'Fence_GdPrv',
'GarageArea']
# -
#List of features for model 6
feat_mod6=[i for i in feat_mod4 if i not in feat_off2]
# +
# Running model 6
# Collecting X again; y is the same
X= houses[feat_mod6]
#creating constant with new values of X
Xc= sm.add_constant(X)
#running model 6
model6_interpret= sm.OLS(y, Xc)
model6 = model6_interpret.fit()
#checking summary
model6.summary()
# -
# This will be my final model for prediction. First I'll interpret these results and then run the final prediction model at the end.
# First, let's check the features that have the largest effect on Sale Price. Since the coefficients are standardized, meaning they are all on the same scale, they are comparable. Once I unstandardize them I won't be able to compare them anymore.
standardized_coef=model6.params.sort_values(ascending=False)
standardized_coef
# In my final model I found 11 features that were the most important in driving the prices of residential homes in Ames. These are the features in order of importance. I define importance as the impact on the target variable Sale Price. These features were responsible for explaining most of the variance on Sale Price.
#
#
# 1. GrLivArea - above grade (ground) living area square feet
# 2. TotalBsmtSF - total square feet of basement area
# 3. GarageYrBlt_1972.0 - year that garage was built
# 4. GarageCars - size of garage in car capacity
# 5. FullBath - full bathrooms above grade
# 6. Exterior2nd_Stucco - exterior covering on house (if more than one material)
# 7. HalfBath - half baths above grade
# 8. Fireplaces - number of fire places
# 9. BsmtFinSF1 - Basement type 1 finished square feet
# 10. TotRmsAbvGrd - Total rooms above grade (does not include bathrooms)
# 11. BsmtFullBath - Basement full bathrooms
# In order to interpret these results I first have to unstandardize the coefficients to get their actual values, and then exponentiate them, since I used the log of the target variable.
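The back-transformation described here can be sketched numerically: for a model of log(1 + price) on z-scored features, the raw-scale coefficient is the standardized one divided by the feature's standard deviation, and exponentiating it gives the multiplicative effect of a one-unit increase. The numbers below are hypothetical:

```python
import numpy as np

# Hypothetical standardized coefficient and feature standard deviation
beta_std = 0.12      # effect of a 1-sd increase on log(1 + price)
sd_feature = 500.0   # e.g. square feet

# undo the z-scoring: effect per raw unit of the feature
beta_raw = beta_std / sd_feature

# on the log scale, a one-unit increase multiplies (1 + price) by exp(beta_raw)
multiplier = np.exp(beta_raw)
print(f'each extra unit multiplies (1 + price) by {multiplier:.6f}')
```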
# +
#unstandardizing coefficients from the numeric features
#collecting standard deviations
original_stds = scaler.scale_
# +
#list with the indices of the numeric features from the houses
#dataset
indices=[10,6,13,19,14,18,3,17,11]
# +
#I have to take the mean and standard deviation of these features
#collecting the standard deviation
stds_num=[]
for index in indices:
    stds_num.append(original_stds[index])
stds_num
# +
#I'll have to separate numeric and categorical features from
#the standardized_coef series since only the numeric ones were
#standardized. I'll separate the coefficients of both types of
#variables in order to unstandardize the numeric coefficients and
#then put the list of all features back together to exponentiate them
num_feat=['GrLivArea','TotalBsmtSF','GarageCars','FullBath','HalfBath',
'Fireplaces','BsmtFinSF1','TotRmsAbvGrd','BsmtFullBath']
cat_feat=['GarageYrBlt_1972.0','Exterior2nd_Stucco']
coef_num=standardized_coef[num_feat]
#separate pd.Series with the categorical features' coefficients;
#it will be appended to the unstandardized series later
coef_cat=pd.Series(standardized_coef[cat_feat])
# -
#transforming coef_num and stds_num to arrays so I can do the calculations
coef_num_array=np.array(coef_num)
stds_num_array=np.array(stds_num)
#transforming numeric coefficients to their real values
unstandardized_coef=coef_num_array/stds_num_array
# +
#Transforming unstandardized_coef into a pandas Series and appending
#the series with the categorical features' coefficients
unstandardized_coef=pd.Series(unstandardized_coef, index=num_feat).append(coef_cat)
# +
#Calculating exponential values of the coefficients
coef_final=np.exp(unstandardized_coef)
coef_final
# +
#unstandardizing the target variable
unst_target = y / original_stds[29]
# -
# <strong> Next steps </strong>
#
#
# 1. Interpret coefficients
# 2. Make final model with predictions
| .ipynb_checkpoints/house-prices-prediction-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
#
# <h1 align=center><font size = 5>Lab: Connect to Db2 database on Cloud using Python</font></h1>
# # Introduction
#
# This notebook illustrates how to access a DB2 database on Cloud using Python by following the steps below:
# 1. Import the `ibm_db` Python library
# 1. Enter the database connection credentials
# 1. Create the database connection
# 1. Close the database connection
#
#
#
# __Note:__ Please follow the instructions given in the first Lab of this course to Create a database service instance of Db2 on Cloud and retrieve your database Service Credentials.
#
# ## Import the `ibm_db` Python library
#
# The `ibm_db` [API ](https://pypi.python.org/pypi/ibm_db/) provides a variety of useful Python functions for accessing and manipulating data in an IBM® data server database, including functions for connecting to a database, preparing and issuing SQL statements, fetching rows from result sets, calling stored procedures, committing and rolling back transactions, handling errors, and retrieving metadata.
#
#
# We first import the ibm_db library into our Python Application
#
# Execute the following cell by clicking within it and then
# press `Shift` and `Enter` keys simultaneously
#
import ibm_db
# When the command above completes, the `ibm_db` library is loaded in your notebook.
#
#
# ## Identify the database connection credentials
#
# Connecting to dashDB or DB2 database requires the following information:
# * Driver Name
# * Database name
# * Host DNS name or IP address
# * Host port
# * Connection protocol
# * User ID (or username)
# * User Password
#
#
#
# __Notice:__ To obtain credentials please refer to the instructions given in the first Lab of this course
#
# Now enter your database credentials below and execute the cell with `Shift` + `Enter`
#
# +
#Replace the placeholder values with your actual Db2 hostname, username, and password:
dsn_hostname = "YourDb2Hostname" # e.g.: "dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net"
dsn_uid = "YourDb2Username" # e.g. "abc<PASSWORD>"
dsn_pwd = "<PASSWORD>" # e.g. "<PASSWORD>"
dsn_driver = "{IBM DB2 ODBC DRIVER}"
dsn_database = "BLUDB" # e.g. "BLUDB"
dsn_port = "50000" # e.g. "50000"
dsn_protocol = "TCPIP" # i.e. "TCPIP"
# -
# ## Create the DB2 database connection
#
# The `ibm_db` API uses the IBM Data Server Driver for ODBC and CLI APIs to connect to IBM DB2 and Informix.
#
#
# Let's build the dsn connection string using the credentials you entered above
#
# +
#DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter
#Create the dsn connection string
dsn = (
"DRIVER={0};"
"DATABASE={1};"
"HOSTNAME={2};"
"PORT={3};"
"PROTOCOL={4};"
"UID={5};"
"PWD={6};").format(dsn_driver, dsn_database, dsn_hostname, dsn_port, dsn_protocol, dsn_uid, dsn_pwd)
#print the connection string to check correct values are specified
print(dsn)
# -
# Now establish the connection to the database
# +
#DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter
#Create database connection
try:
    conn = ibm_db.connect(dsn, "", "")
    print("Connected to database: ", dsn_database, "as user: ", dsn_uid, "on host: ", dsn_hostname)
except:
    print("Unable to connect: ", ibm_db.conn_errormsg())
# -
# Congratulations if you were able to connect successfully. Otherwise, check the error and try again.
# +
#Retrieve Metadata for the Database Server
server = ibm_db.server_info(conn)
print ("DBMS_NAME: ", server.DBMS_NAME)
print ("DBMS_VER: ", server.DBMS_VER)
print ("DB_NAME: ", server.DB_NAME)
# +
#Retrieve Metadata for the Database Client / Driver
client = ibm_db.client_info(conn)
print ("DRIVER_NAME: ", client.DRIVER_NAME)
print ("DRIVER_VER: ", client.DRIVER_VER)
print ("DATA_SOURCE_NAME: ", client.DATA_SOURCE_NAME)
print ("DRIVER_ODBC_VER: ", client.DRIVER_ODBC_VER)
print ("ODBC_VER: ", client.ODBC_VER)
print ("ODBC_SQL_CONFORMANCE: ", client.ODBC_SQL_CONFORMANCE)
print ("APPL_CODEPAGE: ", client.APPL_CODEPAGE)
print ("CONN_CODEPAGE: ", client.CONN_CODEPAGE)
# -
# ## Close the Connection
# We free all resources by closing the connection. Remember that it is always important to close connections so that we can avoid unused connections taking up resources.
ibm_db.close(conn)
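When queries sit between the connect and close calls, a `try`/`finally` block guarantees the close runs even if a query raises. A minimal sketch of the pattern with a stand-in connection object (with `ibm_db` the closing call would be `ibm_db.close(conn)`):

```python
# Stand-in connection object to illustrate the try/finally pattern
class FakeConnection:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

conn = FakeConnection()
try:
    pass  # issue queries here
finally:
    conn.close()  # runs even if a query raises

assert conn.closed
```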
# ## Summary
#
# In this tutorial you established a connection to a Db2 database on Cloud from a Python notebook using the `ibm_db` API.
# Copyright © 2017 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
#
| labs/DB0201EN/.ipynb_checkpoints/DB0201EN-Week3-1-1-Connecting-v4-py-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Comparing, Cleaning, and Merging DataFrames
#
# ## Objective
#
# We'll replace the IDs with the names of the products, customers, and stores, so that our analyses are more intuitive later on. For that, we'll create a data frame with all the details.
#
# - We'll use the merge method for this and, afterwards, if we want, we can keep only the columns we need from the final dataframe.
# ### Creating our dataframes
# +
import pandas as pd
#sometimes we'll need to change the encoding. Possible values to try:
#encoding='latin1', encoding='ISO-8859-1', encoding='utf-8' or encoding='cp1252'
vendas_df = pd.read_csv(r'Contoso - Vendas - 2017.csv', sep=';')
produtos_df = pd.read_csv(r'Contoso - Cadastro Produtos.csv', sep=';')
lojas_df = pd.read_csv(r'Contoso - Lojas.csv', sep=';')
clientes_df = pd.read_csv(r'Contoso - Clientes.csv', sep=';')
#we'll use display to show all the dataframes
display(vendas_df)
display(produtos_df)
display(lojas_df)
display(clientes_df)
# -
# ### Let's remove the useless columns from clientes_df, or keep only the columns we want
# + active=""
# .drop(columns=[column1, column2, column3]) -> removes the columns column1, column2, column3 (note: without columns= or axis=1, .drop removes rows)
# -
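The difference between dropping columns and selecting the ones to keep can be sketched on a toy frame with hypothetical columns:

```python
import pandas as pd

# Toy frame with hypothetical columns
df = pd.DataFrame({'a': [1], 'b': [2], 'c': [3]})

# .drop removes rows by default; pass columns= (or axis=1) to drop columns
dropped = df.drop(columns=['b', 'c'])

# selecting the columns to keep is equivalent and often clearer
kept = df[['a']]

assert dropped.equals(kept)
```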
clientes_df = clientes_df[['ID Cliente', 'E-mail']]
produtos_df = produtos_df[['ID Produto', 'Nome do Produto']]
lojas_df = lojas_df[['ID Loja', 'Nome da Loja']]
display(produtos_df)
# ### Now let's merge the dataframes to get a single, tidy dataframe
# + active=""
# new_dataframe = dataframe1.merge(dataframe2, on='column')
# -
# - Note: merge needs the key columns to have the same name to work. If they don't, you need to change the column name with .rename
# + active=""
# dataframe.rename(columns={'column1': 'new_column_1'})
# +
#merging the dataframes
vendas_df = vendas_df.merge(produtos_df, on='ID Produto')
vendas_df = vendas_df.merge(lojas_df, on='ID Loja')
vendas_df = vendas_df.merge(clientes_df, on='ID Cliente')
#displaying the final dataframe
display(vendas_df)
# -
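If the key columns did not share a name, an equivalent merge could be written with `left_on`/`right_on` instead of renaming first. A sketch with toy frames and hypothetical columns:

```python
import pandas as pd

# Toy frames whose key columns have different names
sales = pd.DataFrame({'ProductID': [1, 2, 1], 'Qty': [3, 5, 2]})
products = pd.DataFrame({'ID': [1, 2], 'Name': ['Pen', 'Book']})

# left_on/right_on lets merge match differently named key columns
merged = sales.merge(products, left_on='ProductID', right_on='ID')
print(merged)
```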
#let's rename E-mail to make it clear it belongs to the customer
vendas_df = vendas_df.rename(columns={'E-mail': 'E-mail do Cliente'})
display(vendas_df)
| pandas/renomear-merge-retirar-criar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.10 64-bit (''PythonData'': conda)'
# language: python
# name: python361064bitpythondataconda22b9033498814472bf281a4c68ca027c
# ---
import pandas as pd
from bs4 import BeautifulSoup
import requests
from splinter import Browser
url_mars_images = "https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced"
response = requests.get(url_mars_images)
response
response.status_code
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.prettify())
[type(item) for item in list(soup.children)]
html = list(soup.children)[2]
large_image = soup.find_all(class_="wide-image")
for image in large_image:
    print(image)
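To actually download the picture, the matched tag's `src` attribute (typically a relative path) would need to be resolved against the page URL. A sketch with a hypothetical relative path standing in for `image.get('src')`:

```python
from urllib.parse import urljoin

page_url = "https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced"
# in the loop above, each image is a bs4 Tag whose URL would come from
# image.get('src'); a hypothetical relative path stands in for it here
src = "/cache/images/full.jpg"

full_url = urljoin(page_url, src)
print(full_url)
```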
| Missions_to_Mars/image1_cerberus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Face Recognition
# Face recognition -
#
# -Creating Data
# - Face detection
# - cascadeclassifier
# - Giving label
# - lable as input of person name
# -Loading Data to model(Knn)
# -Predict Data
# ### For single photo
# +
# importing module
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
import cv2
# loading test case
image = cv2.imread("dustbin.jpg")
# loading cascadeclassifier
obj = cv2.CascadeClassifier(r"C:\Users\Aditya\AppData\Local\Programs\Python\Python39\Lib\site-packages\cv2\data//haarcascade_frontalface_default.xml")
face= obj.detectMultiScale(image)
# getting face axis
x,y,w,h = face[0]
# cutting face from entire pic
actual_face = image[y:y+h,x:x+w]
# converting face to gray scale
grey_actual_face = cv2.cvtColor(actual_face,cv2.COLOR_BGR2GRAY)
# resizing image so all data has proper same dimensions
actual_face = cv2.resize(grey_actual_face,(100,100))
# loading saved data
data = np.load("face_data.npy")
# fetching features and labels
features = data[:, 1:].astype(int)
labels = data[:, 0]
# initialize model
model = KNeighborsClassifier()
# give the data to train the model
model.fit(features, labels)
# flattening image
flatten_img = actual_face.flatten()
# predicting image name
prediction = model.predict([flatten_img])
prediction
# -
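One caveat of this approach: KNN always answers with one of the trained labels, even for a face it has never seen. A sketch of flagging unknowns by thresholding the nearest-neighbor distance (toy data; the threshold is a guess that would be tuned on real data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
features = rng.integers(0, 256, size=(10, 100)).astype(float)  # toy face vectors
labels = np.array(['alice'] * 5 + ['bob'] * 5)

model = KNeighborsClassifier().fit(features, labels)

query = rng.integers(0, 256, size=(1, 100)).astype(float)
dist, _ = model.kneighbors(query, n_neighbors=1)

THRESHOLD = 1e6  # hypothetical; tune on real distances
name = model.predict(query)[0] if dist[0, 0] < THRESHOLD else 'unknown'
print(name)
```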
# ### For realtime video capturing
# A video is just multiple frames (images), so by slightly modifying the code above we get real-time video capturing.
# +
# importing useful modules
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
import cv2
# open the video capture stream
vdo = cv2.VideoCapture(0)
# initialize the loop which captures multiple frames so it acts like a video
while True:
    # taking a flag and an image from this stream
    # flag is just a boolean variable which is True if there is no problem with video capturing, else False
    flag, image = vdo.read()
    # so if there is no problem with video capturing, we do our work
    if flag:
        # initialize the cascade classifier
        obj = cv2.CascadeClassifier(r"C:\Users\Aditya\AppData\Local\Programs\Python\Python39\Lib\site-packages\cv2\data//haarcascade_frontalface_default.xml")
        # fetching faces
        faces = obj.detectMultiScale(image)
        # looping over each face from all faces
        for face in faces:
            x, y, w, h = face
            # extracting the face from the frame
            actual_face = image[y:y+h, x:x+w]
            # drawing a rectangle just to make sure that detection is happening right
            cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 1)
            # convert the actual face to gray scale
            actual_face = cv2.cvtColor(actual_face, cv2.COLOR_BGR2GRAY)
            # resize the dimensions
            actual_face = cv2.resize(actual_face, (100, 100))
            # loading data for training
            data = np.load("face_data.npy")
            # fetching features and labels
            features = data[:, 1:].astype(int)
            labels = data[:, 0]
            # building the model
            model = KNeighborsClassifier()
            # train the model with these data
            model.fit(features, labels)
            # flatten the face image
            flatten_img = actual_face.flatten()
            # getting a prediction from the model
            prediction = model.predict([flatten_img])
            # show this prediction as text on the window
            cv2.putText(image, prediction[0], (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2)
        # showing the window
        cv2.imshow("image", image)
        # press "q" to quit the window
        key = cv2.waitKey(1)
        if key == ord("q"):
            break
vdo.release()
cv2.destroyAllWindows()
| KNN-CLASSIFICATION/FACE-RECOGNITION/Face-recognition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
import os
import random
import itertools
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
# -
class ResidualBlock(nn.Module):
    def __init__(self, in_channels):
        super(ResidualBlock, self).__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),  # padding, keep the image size constant after next conv2d
            nn.Conv2d(in_channels, in_channels, 3),
            nn.InstanceNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(in_channels, in_channels, 3),
            nn.InstanceNorm2d(in_channels)
        )

    def forward(self, x):
        return x + self.block(x)
class GeneratorResNet(nn.Module):
    def __init__(self, in_channels, num_residual_blocks=9):
        super(GeneratorResNet, self).__init__()
        # Initial convolution 3*224*224 -> 64*224*224
        out_channels = 64
        self.conv = nn.Sequential(
            nn.ReflectionPad2d(in_channels),  # padding, keep the image size constant after next conv2d
            nn.Conv2d(in_channels, out_channels, 2*in_channels+1),
            nn.InstanceNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        channels = out_channels
        # Downsampling 64*224*224 -> 128*112*112 -> 256*56*56
        self.down = []
        for _ in range(2):
            out_channels = channels * 2
            self.down += [
                nn.Conv2d(channels, out_channels, 3, stride=2, padding=1),
                nn.InstanceNorm2d(out_channels),
                nn.ReLU(inplace=True),
            ]
            channels = out_channels
        self.down = nn.Sequential(*self.down)
        # Transformation (ResNet) 256*56*56
        self.trans = [ResidualBlock(channels) for _ in range(num_residual_blocks)]
        self.trans = nn.Sequential(*self.trans)
        # Upsampling 256*56*56 -> 128*112*112 -> 64*224*224
        self.up = []
        for _ in range(2):
            out_channels = channels // 2
            self.up += [
                nn.Upsample(scale_factor=2),  # nearest-neighbor interpolation (nn.Upsample's default mode)
                nn.Conv2d(channels, out_channels, 3, stride=1, padding=1),
                nn.InstanceNorm2d(out_channels),
                nn.ReLU(inplace=True),
            ]
            channels = out_channels
        self.up = nn.Sequential(*self.up)
        # Out layer 64*224*224 -> 3*224*224
        self.out = nn.Sequential(
            nn.ReflectionPad2d(in_channels),
            nn.Conv2d(channels, in_channels, 2*in_channels+1),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.conv(x)
        x = self.down(x)
        x = self.trans(x)
        x = self.up(x)
        x = self.out(x)
        return x
class Discriminator(nn.Module):
    def __init__(self, in_channels):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            # why normalize=False?
            *self.block(in_channels, 64, normalize=False),  # 3*224*224 -> 64*112*112
            *self.block(64, 128),   # 64*112*112 -> 128*56*56
            *self.block(128, 256),  # 128*56*56 -> 256*28*28
            *self.block(256, 512),  # 256*28*28 -> 512*14*14
            # Why padding first then convolution?
            nn.ZeroPad2d((1, 0, 1, 0)),  # padding left and top 512*14*14 -> 512*15*15
            nn.Conv2d(512, 1, 4, padding=1)  # 512*15*15 -> 1*14*14
        )
        self.scale_factor = 16

    @staticmethod
    def block(in_channels, out_channels, normalize=True):
        layers = [nn.Conv2d(in_channels, out_channels, 4, stride=2, padding=1)]
        if normalize:
            layers.append(nn.InstanceNorm2d(out_channels))
        layers.append(nn.LeakyReLU(0.2, inplace=True))
        return layers

    def forward(self, x):
        return self.model(x)
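The shape comments in the discriminator can be sanity-checked with the standard convolution output-size formula, floor((n + 2p - k) / s) + 1, applied layer by layer:

```python
# Verify the discriminator's shape comments with the conv output-size formula
def conv_out(n, k, s, p):
    return (n + 2 * p - k) // s + 1

n = 224
for _ in range(4):          # four stride-2 blocks: kernel 4, stride 2, padding 1
    n = conv_out(n, 4, 2, 1)
assert n == 14              # 224 -> 112 -> 56 -> 28 -> 14

n += 1                      # ZeroPad2d((1, 0, 1, 0)) adds one row and one column: 14 -> 15
n = conv_out(n, 4, 1, 1)    # final conv: kernel 4, stride 1, padding 1
print(n)                    # 14: a 1 x 14 x 14 patch output
```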
G_AB = GeneratorResNet(3, num_residual_blocks=9)
G_BA = GeneratorResNet(3, num_residual_blocks=9)
checkpoint = torch.load("../input/trainedmodel1/melanomagan_config_3_1.pth", map_location=torch.device('cpu'))
G_AB.load_state_dict(checkpoint['G_AB_state_dict'])
G_BA.load_state_dict(checkpoint['G_BA_state_dict'])
cuda = torch.cuda.is_available()
print(f'cuda: {cuda}')
if cuda:
    G_AB = G_AB.cuda()
    G_BA = G_BA.cuda()
G_AB.eval()
G_BA.eval()
benign_dir = '../input/melanoma/Melanoma/train/benign'
malign_dir = '../input/melanoma/Melanoma/train/malignant'
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
imgs = os.listdir(benign_dir)
# +
for i in imgs:
    plt.figure(figsize=(15, 12))
    path = benign_dir + '/' + i
    img = Image.open(path)
    plt.subplot(1, 3, 1)
    plt.imshow(img)
    plt.axis('off')
    transformed_img = transform(img)
    transformed_img = torch.unsqueeze(transformed_img, 0)
    generated_img = G_AB(transformed_img)
    reconstructed_img = G_BA(generated_img)
    generated_img = torch.squeeze(generated_img.detach(), 0)
    generated_img = ((generated_img * 0.5) + 0.5)
    plt.subplot(1, 3, 2)
    plt.imshow(generated_img.permute(1, 2, 0))
    plt.axis('off')
    reconstructed_img = torch.squeeze(reconstructed_img.detach(), 0)
    reconstructed_img = ((reconstructed_img * 0.5) + 0.5)
    plt.subplot(1, 3, 3)
    plt.imshow(reconstructed_img.permute(1, 2, 0))
    plt.axis('off')
# -
import os
path = './generated_malign'
if not os.path.exists(path):
    os.mkdir(path)
for i in imgs:
    path_ip = benign_dir + '/' + i
    img = Image.open(path_ip)
    transformed_img = transform(img)
    transformed_img = torch.unsqueeze(transformed_img, 0)
    generated_img = G_AB(transformed_img)
    generated_img = torch.squeeze(generated_img.detach(), 0)
    generated_img = ((generated_img * 0.5) + 0.5)
    generated_img = generated_img.permute(1, 2, 0).numpy()
    generated_img = generated_img * 255.0
    generated_img = generated_img.astype(np.uint8)
    save_img = Image.fromarray(generated_img)
    save_img.save(path + '/generated_' + i)
import shutil
shutil.make_archive('generated_malign', 'zip', path)
# shutil.rmtree(folderlocation)
| Notebooks/generate-malign-samples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.021224, "end_time": "2021-08-14T14:01:00.746353", "exception": false, "start_time": "2021-08-14T14:01:00.725129", "status": "completed"} tags=[]
# # <center>Diabetic Retinopathy Detection using PyTorch<center>
# + [markdown] papermill={"duration": 0.019832, "end_time": "2021-08-14T14:01:00.786448", "exception": false, "start_time": "2021-08-14T14:01:00.766616", "status": "completed"} tags=[]
# <img src = 'https://raw.githubusercontent.com/dimitreOliveira/APTOS2019BlindnessDetection/master/Assets/banner.png' >
# + [markdown] papermill={"duration": 0.019958, "end_time": "2021-08-14T14:01:00.826801", "exception": false, "start_time": "2021-08-14T14:01:00.806843", "status": "completed"} tags=[]
# ### What is Diabetic Retinopathy?
# Diabetic retinopathy is a condition that can cause vision loss and blindness in people who have diabetes. It affects the blood vessels in the retina and is one of the leading causes of blindness across the world. Diabetic retinopathy may not have any symptoms at first, but finding it early can help you take steps to protect your vision.
# ### Problem Statement :
# The problem is to perform image classification. Given an image (a fundus photograph), you have to predict which class the image belongs to. The classes to predict are:
# * 0 - Normal <br>
# * 1 - Mild <br>
# * 2 - Moderate <br>
# * 3 - Severe <br>
# * 4 - Proliferative <br>
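A minimal sketch of turning a model's predicted class index into the human-readable label from the list above. The `class_names` dictionary and `label_of` helper are illustrative assumptions, not part of the training code below.

```python
# Hypothetical mapping from class index to diagnosis label,
# following the problem statement above.
class_names = {0: 'Normal', 1: 'Mild', 2: 'Moderate', 3: 'Severe', 4: 'Proliferative'}

def label_of(index):
    """Map a model's argmax index to its diagnosis label."""
    return class_names[index]
```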
# + [markdown] papermill={"duration": 0.019964, "end_time": "2021-08-14T14:01:00.866886", "exception": false, "start_time": "2021-08-14T14:01:00.846922", "status": "completed"} tags=[]
# <img src = 'https://repository-images.githubusercontent.com/195603342/63983100-b4a6-11e9-846c-99b9465f7b3b'>
# + [markdown] papermill={"duration": 0.019953, "end_time": "2021-08-14T14:01:00.907032", "exception": false, "start_time": "2021-08-14T14:01:00.887079", "status": "completed"} tags=[]
# ## Import Libraries
# + papermill={"duration": 1.402735, "end_time": "2021-08-14T14:01:02.329884", "exception": false, "start_time": "2021-08-14T14:01:00.927149", "status": "completed"} tags=[]
import pandas as pd #For reading csv files.
import numpy as np
import matplotlib.pyplot as plt #For plotting.
import PIL.Image as Image #For working with image files.
#Importing torch
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset,DataLoader #For working with data.
from torchvision import models,transforms #For pretrained models,image transformations.
# + papermill={"duration": 0.071576, "end_time": "2021-08-14T14:01:02.423193", "exception": false, "start_time": "2021-08-14T14:01:02.351617", "status": "completed"} tags=[]
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') #Use GPU if it's available or else use CPU.
print(device) #Prints the device we're using.
# + [markdown] papermill={"duration": 0.020833, "end_time": "2021-08-14T14:01:02.465698", "exception": false, "start_time": "2021-08-14T14:01:02.444865", "status": "completed"} tags=[]
# ## EDA
# + [markdown] papermill={"duration": 0.019861, "end_time": "2021-08-14T14:01:02.505588", "exception": false, "start_time": "2021-08-14T14:01:02.485727", "status": "completed"} tags=[]
# I am using only the data from https://www.kaggle.com/c/aptos2019-blindness-detection/data. If you have more computing resources, you can also use the data from https://www.kaggle.com/c/diabetic-retinopathy-detection/data.
# + papermill={"duration": 0.058978, "end_time": "2021-08-14T14:01:02.584716", "exception": false, "start_time": "2021-08-14T14:01:02.525738", "status": "completed"} tags=[]
path = "/kaggle/input/aptos2019-blindness-detection/"
train_df = pd.read_csv(f"{path}train.csv")
print(f'No.of.training_samples: {len(train_df)}')
test_df = pd.read_csv(f'{path}test.csv')
print(f'No.of.testing_samples: {len(test_df)}')
# + papermill={"duration": 0.179467, "end_time": "2021-08-14T14:01:02.784971", "exception": false, "start_time": "2021-08-14T14:01:02.605504", "status": "completed"} tags=[]
#Histogram of label counts.
train_df.diagnosis.hist()
plt.xticks([0,1,2,3,4])
plt.grid(False)
plt.show()
# + papermill={"duration": 5.016532, "end_time": "2021-08-14T14:01:07.824378", "exception": false, "start_time": "2021-08-14T14:01:02.807846", "status": "completed"} tags=[]
#As you can see,the data is imbalanced.
#So we've to calculate weights for each class,which can be used in calculating loss.
from sklearn.utils import class_weight #For calculating weights for each class.
class_weights = class_weight.compute_class_weight(class_weight='balanced',classes=np.array([0,1,2,3,4]),y=train_df['diagnosis'].values)
class_weights = torch.tensor(class_weights,dtype=torch.float).to(device)
print(class_weights) #Prints the calculated weights for the classes.
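For reference, sklearn's `'balanced'` mode computes each class weight as `n_samples / (n_classes * count_per_class)`, so rarer classes get larger weights. A small self-contained sketch of that formula (the toy labels here are illustrative, not from the dataset):

```python
import numpy as np

def balanced_weights(y, n_classes):
    # sklearn's 'balanced' heuristic: n_samples / (n_classes * count_per_class)
    counts = np.bincount(y, minlength=n_classes)
    return len(y) / (n_classes * counts)

y = np.array([0, 0, 0, 1, 2, 2])  # toy labels: class 0 is over-represented
w = balanced_weights(y, 3)        # rarest class (1) gets the largest weight
```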
# + papermill={"duration": 0.825229, "end_time": "2021-08-14T14:01:08.671423", "exception": false, "start_time": "2021-08-14T14:01:07.846194", "status": "completed"} tags=[]
#For getting a random image from our training set.
num = int(np.random.randint(0,len(train_df))) #Picks a random index (randint's upper bound is exclusive).
sample_image = (f'{path}train_images/{train_df["id_code"][num]}.png')#Image file.
sample_image = Image.open(sample_image)
plt.imshow(sample_image)
plt.axis('off')
plt.title(f'Class: {train_df["diagnosis"][num]}') #Class of the random image.
plt.show()
# + [markdown] papermill={"duration": 0.023723, "end_time": "2021-08-14T14:01:08.719081", "exception": false, "start_time": "2021-08-14T14:01:08.695358", "status": "completed"} tags=[]
# ## Preprocess the Data
# + papermill={"duration": 0.033772, "end_time": "2021-08-14T14:01:08.776882", "exception": false, "start_time": "2021-08-14T14:01:08.743110", "status": "completed"} tags=[]
class dataset(Dataset): # Inherits from the Dataset class.
'''
dataset class overloads the __init__, __len__, __getitem__ methods of the Dataset class.
Attributes :
df: DataFrame object for the csv file.
data_path: Location of the dataset.
image_transform: Transformations to apply to the image.
train: A boolean indicating whether it is a training_set or not.
'''
def __init__(self,df,data_path,image_transform=None,train=True): # Constructor.
        super().__init__() #Calls the constructor of the Dataset class.
self.df = df
self.data_path = data_path
self.image_transform = image_transform
self.train = train
def __len__(self):
return len(self.df) #Returns the number of samples in the dataset.
def __getitem__(self,index):
image_id = self.df['id_code'][index]
image = Image.open(f'{self.data_path}/{image_id}.png') #Image.
if self.image_transform :
image = self.image_transform(image) #Applies transformation to the image.
if self.train :
label = self.df['diagnosis'][index] #Label.
return image,label #If train == True, return image & label.
else:
return image #If train != True, return image.
# + papermill={"duration": 0.034764, "end_time": "2021-08-14T14:01:08.835449", "exception": false, "start_time": "2021-08-14T14:01:08.800685", "status": "completed"} tags=[]
image_transform = transforms.Compose([transforms.Resize([512,512]),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) #Transformations to apply to the image.
data_set = dataset(train_df,f'{path}train_images',image_transform=image_transform)
#Split the data_set so that valid_set holds roughly 10% of the samples.
train_set,valid_set = torch.utils.data.random_split(data_set,[3302,360])
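The split sizes above are hard-coded for the 3662-sample APTOS training set. A sketch of computing them from the dataset length instead, so the split always sums to the total (the helper name is an assumption):

```python
def split_sizes(n, valid_frac=0.1):
    # Compute [train, valid] lengths that always sum to n,
    # instead of hard-coding values like 3302/360.
    n_valid = int(n * valid_frac)
    return [n - n_valid, n_valid]
```

This could then be passed directly: `torch.utils.data.random_split(data_set, split_sizes(len(data_set)))`.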
# + papermill={"duration": 0.031069, "end_time": "2021-08-14T14:01:08.890652", "exception": false, "start_time": "2021-08-14T14:01:08.859583", "status": "completed"} tags=[]
train_dataloader = DataLoader(train_set,batch_size=32,shuffle=True) #DataLoader for train_set.
valid_dataloader = DataLoader(valid_set,batch_size=32,shuffle=False) #DataLoader for validation_set.
# + [markdown] papermill={"duration": 0.023786, "end_time": "2021-08-14T14:01:08.938549", "exception": false, "start_time": "2021-08-14T14:01:08.914763", "status": "completed"} tags=[]
# ## Build the Model
# + papermill={"duration": 1.588248, "end_time": "2021-08-14T14:01:10.550514", "exception": false, "start_time": "2021-08-14T14:01:08.962266", "status": "completed"} tags=[]
#Since we have little data, we'll use transfer learning.
model = models.resnet34(pretrained=True) #Downloads the resnet34 model which is pretrained on the ImageNet dataset.
#Replace the final fully connected layer of the pretrained resnet34 with 4 new linear layers.
model.fc = nn.Sequential(nn.Linear(512,256),nn.Linear(256,128),nn.Linear(128,64),nn.Linear(64,5))
# + papermill={"duration": 0.062933, "end_time": "2021-08-14T14:01:10.638390", "exception": false, "start_time": "2021-08-14T14:01:10.575457", "status": "completed"} tags=[]
model = model.to(device) #Moves the model to the device.
# + [markdown] papermill={"duration": 0.025107, "end_time": "2021-08-14T14:01:10.688213", "exception": false, "start_time": "2021-08-14T14:01:10.663106", "status": "completed"} tags=[]
# ## Create functions for Training & Validation
# + papermill={"duration": 0.034907, "end_time": "2021-08-14T14:01:10.747861", "exception": false, "start_time": "2021-08-14T14:01:10.712954", "status": "completed"} tags=[]
def train(dataloader,model,loss_fn,optimizer):
'''
train function updates the weights of the model based on the
loss using the optimizer in order to get a lower loss.
Args :
dataloader: Iterator for the batches in the data_set.
model: Given an input produces an output by multiplying the input with the model weights.
loss_fn: Calculates the discrepancy between the label & the model's predictions.
optimizer: Updates the model weights.
Returns :
Average loss per batch which is calculated by dividing the losses for all the batches
with the number of batches.
'''
model.train() #Sets the model for training.
total = 0
correct = 0
running_loss = 0
for batch,(x,y) in enumerate(dataloader): #Iterates through the batches.
output = model(x.to(device)) #model's predictions.
loss = loss_fn(output,y.to(device)) #loss calculation.
running_loss += loss.item()
total += y.size(0)
predictions = output.argmax(dim=1).cpu().detach() #Index for the highest score for all the samples in the batch.
correct += (predictions == y.cpu().detach()).sum().item() #No.of.cases where model's predictions are equal to the label.
optimizer.zero_grad() #Gradient values are set to zero.
loss.backward() #Calculates the gradients.
optimizer.step() #Updates the model weights.
avg_loss = running_loss/len(dataloader) # Average loss for a single batch
print(f'\nTraining Loss per batch = {avg_loss:.6f}',end='\t')
print(f'Accuracy on Training set = {100*(correct/total):.6f}% [{correct}/{total}]') #Prints the Accuracy.
return avg_loss
# + papermill={"duration": 0.034128, "end_time": "2021-08-14T14:01:10.806321", "exception": false, "start_time": "2021-08-14T14:01:10.772193", "status": "completed"} tags=[]
def validate(dataloader,model,loss_fn):
'''
validate function calculates the average loss per batch and the accuracy of the model's predictions.
Args :
dataloader: Iterator for the batches in the data_set.
model: Given an input produces an output by multiplying the input with the model weights.
loss_fn: Calculates the discrepancy between the label & the model's predictions.
Returns :
Average loss per batch which is calculated by dividing the losses for all the batches
with the number of batches.
'''
model.eval() #Sets the model for evaluation.
total = 0
correct = 0
running_loss = 0
with torch.no_grad(): #No need to calculate the gradients.
for x,y in dataloader:
output = model(x.to(device)) #model's output.
loss = loss_fn(output,y.to(device)).item() #loss calculation.
running_loss += loss
total += y.size(0)
predictions = output.argmax(dim=1).cpu().detach()
correct += (predictions == y.cpu().detach()).sum().item()
avg_loss = running_loss/len(dataloader) #Average loss per batch.
print(f'\nValidation Loss per batch = {avg_loss:.6f}',end='\t')
print(f'Accuracy on Validation set = {100*(correct/total):.6f}% [{correct}/{total}]') #Prints the Accuracy.
return avg_loss
# + [markdown] papermill={"duration": 0.024312, "end_time": "2021-08-14T14:01:10.854931", "exception": false, "start_time": "2021-08-14T14:01:10.830619", "status": "completed"} tags=[]
# ## Optimize the Model
# + papermill={"duration": 0.033381, "end_time": "2021-08-14T14:01:10.912706", "exception": false, "start_time": "2021-08-14T14:01:10.879325", "status": "completed"} tags=[]
def optimize(train_dataloader,valid_dataloader,model,loss_fn,optimizer,nb_epochs):
'''
optimize function calls the train & validate functions for (nb_epochs) times.
Args :
train_dataloader: DataLoader for the train_set.
valid_dataloader: DataLoader for the valid_set.
model: Given an input produces an output by multiplying the input with the model weights.
loss_fn: Calculates the discrepancy between the label & the model's predictions.
optimizer: Updates the model weights.
nb_epochs: Number of epochs.
Returns :
Tuple of lists containing losses for all the epochs.
'''
#Lists to store losses for all the epochs.
train_losses = []
valid_losses = []
for epoch in range(nb_epochs):
print(f'\nEpoch {epoch+1}/{nb_epochs}')
print('-------------------------------')
train_loss = train(train_dataloader,model,loss_fn,optimizer) #Calls the train function.
train_losses.append(train_loss)
valid_loss = validate(valid_dataloader,model,loss_fn) #Calls the validate function.
valid_losses.append(valid_loss)
print('\nTraining has completed!')
return train_losses,valid_losses
# + papermill={"duration": 14567.186613, "end_time": "2021-08-14T18:03:58.123921", "exception": false, "start_time": "2021-08-14T14:01:10.937308", "status": "completed"} tags=[]
loss_fn = nn.CrossEntropyLoss(weight=class_weights) #CrossEntropyLoss with class_weights.
optimizer = torch.optim.SGD(model.parameters(),lr=0.001)
nb_epochs = 30
#Call the optimize function.
train_losses, valid_losses = optimize(train_dataloader,valid_dataloader,model,loss_fn,optimizer,nb_epochs)
# + papermill={"duration": 0.183372, "end_time": "2021-08-14T18:03:58.348350", "exception": false, "start_time": "2021-08-14T18:03:58.164978", "status": "completed"} tags=[]
#Plot the graph of train_losses & valid_losses against nb_epochs.
epochs = range(nb_epochs)
plt.plot(epochs, train_losses, 'g', label='Training loss')
plt.plot(epochs, valid_losses, 'b', label='validation loss')
plt.title('Training and Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + [markdown] papermill={"duration": 0.041297, "end_time": "2021-08-14T18:03:58.431485", "exception": false, "start_time": "2021-08-14T18:03:58.390188", "status": "completed"} tags=[]
# ## Testing the model
# + papermill={"duration": 0.048753, "end_time": "2021-08-14T18:03:58.522176", "exception": false, "start_time": "2021-08-14T18:03:58.473423", "status": "completed"} tags=[]
test_set = dataset(test_df,f'{path}test_images',image_transform = image_transform,train = False )
test_dataloader = DataLoader(test_set, batch_size=32, shuffle=False) #DataLoader for test_set.
# + papermill={"duration": 0.049879, "end_time": "2021-08-14T18:03:58.613988", "exception": false, "start_time": "2021-08-14T18:03:58.564109", "status": "completed"} tags=[]
def test(dataloader,model):
'''
    test function predicts the labels for batches of images.
Args :
dataloader: DataLoader for the test_set.
model: Given an input produces an output by multiplying the input with the model weights.
Returns :
List of predicted labels.
'''
model.eval() #Sets the model for evaluation.
labels = [] #List to store the predicted labels.
with torch.no_grad():
for batch,x in enumerate(dataloader):
output = model(x.to(device))
predictions = output.argmax(dim=1).cpu().detach().tolist() #Predicted labels for an image batch.
labels.extend(predictions)
print('Testing has completed')
return labels
# + papermill={"duration": 152.811132, "end_time": "2021-08-14T18:06:31.466658", "exception": false, "start_time": "2021-08-14T18:03:58.655526", "status": "completed"} tags=[]
labels = test(test_dataloader,model) #Calls the test function.
# + [markdown] papermill={"duration": 0.042447, "end_time": "2021-08-14T18:06:31.734238", "exception": false, "start_time": "2021-08-14T18:06:31.691791", "status": "completed"} tags=[]
# We can increase the accuracy of the model in various ways, like increasing the dataset size, increasing the model complexity, using ensemble models, and increasing the number of epochs.
| DR Detection using PyTorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DKRZ PyNGL example
#
# ## - Filled circles instead of grid cells; the size depends on a quality value.
#
# ### Description:
# Draw two plots, first plot is a raster contour plot and the second shows
# the data using filled circles which are sized by a quality variable.
#
# ### Effects illustrated:
# o Creating a contour cellFill plot
# o Using markers
# o Using dummy data
# o Create a legend
# o Create a labelbar
# o Add text
#
# Author: <NAME>
# Load modules
from __future__ import print_function
import numpy as np
import Ngl,Nio
# Global variables
VALUES = True #-- turn on/off value annotation of first plot
GRID = True #-- turn on/off the data grid lines of second plot
# Create dummy data and coordinates
# +
minlat, maxlat = 47.0, 55.0 #-- minimum and maximum latitude of map
minlon, maxlon = 5.0, 16.0 #-- minimum and maximum longitude of map
#-- generate dummy data and coordinates
nlat, nlon = 16, 22
lat = np.linspace(minlat, maxlat, num=nlat)
lon = np.linspace(minlon, maxlon, num=nlon)
#-- generate dummy data with named dimensions
tempmin, tempmax, tempint = -2.0, 2.0, 0.5
temp = np.random.uniform(tempmin,tempmax,[nlat,nlon])
temp1d = temp.flatten()
ncells = len(temp1d)
#-- generate random dummy quality data
minqual, maxqual = 1, 4
quality = np.floor(np.random.uniform(minqual,maxqual+0.5,[nlat,nlon])).astype(int)
quality1d = quality.flatten()
# -
# Open graphics output (workstation)
# +
wkres = Ngl.Resources()
wkres.wkBackgroundColor = "gray85" #-- set background color to light gray
wks = Ngl.open_wks('png','plot_quality_per_cell',wkres)
# -
# Set color map, levels and which color indices to be used
# +
#-- set color map
colmap = "BlueDarkRed18"
Ngl.define_colormap(wks,colmap)
#-- contour levels and color indices
cmap = Ngl.retrieve_colormap(wks)
ncmap = len(cmap[:,0])
levels = np.arange(tempmin,tempmax+tempint,tempint)
nlevels = len(levels)
colors = np.floor(np.linspace(2,ncmap-1,nlevels+1)).astype(int)
ncolors = len(colors)
# -
# Set contour plot resources for RasterFill
# +
res = Ngl.Resources()
res.nglDraw = False
res.nglFrame = False
res.nglMaximize = False #-- don't maximize plot output, yet
res.vpXF = 0.09 #-- viewport x-position
res.vpYF = 0.95 #-- viewport y-position
res.vpWidthF = 0.8 #-- viewport width
res.vpHeightF = 0.8 #-- viewport height
res.cnFillMode = "RasterFill" #-- use raster fill for contours
res.cnFillOn = True #-- filled contour areas
res.cnLinesOn = False
res.cnLineLabelsOn = False
res.cnInfoLabelOn = False
res.cnLevelSelectionMode = "ExplicitLevels" #-- set manual data levels
res.cnLevels = levels
res.cnMonoFillColor = False
res.cnFillColors = colors
res.lbBoxMinorExtentF = 0.15 #-- decrease height of labelbar
res.lbOrientation = "horizontal" #-- horizontal labelbar
res.mpOutlineBoundarySets = "National" #-- draw national map outlines
res.mpOceanFillColor = "gray90"
res.mpLandFillColor = "gray75"
res.mpInlandWaterFillColor = "gray75"
res.mpDataBaseVersion = "MediumRes" #-- alias to Ncarg4_1
res.mpDataSetName = "Earth..4" #-- higher map resolution
res.mpGridAndLimbOn = False
res.mpLimitMode = "LatLon" #-- map limit mode
res.mpMinLatF = minlat-1.0
res.mpMaxLatF = maxlat+1.0
res.mpMinLonF = minlon-1.0
res.mpMaxLonF = maxlon+1.0
res.sfXArray = lon
res.sfYArray = lat
# -
# Draw the contour plot and advance the first frame
# +
contour = Ngl.contour_map(wks,temp,res)
Ngl.draw(contour)
Ngl.frame(wks)
# -
# Add values to the grid cells, draw the contour plot and advance the second frame
# +
if(VALUES):
txres = Ngl.Resources()
txres.gsFontColor = "black"
txres.txFontHeightF = 0.01
for j in range(0,nlat):
for i in range(0,nlon):
m = i+j
txid = "txid"+str(m)
txid = Ngl.add_text(wks, contour,""+str(quality[j,i]),lon[i],lat[j],txres)
#-- draw the second plot
Ngl.draw(contour)
Ngl.frame(wks)
# -
# Create the third plot using a map plot and add grid lines of the data region
# +
plot = Ngl.map(wks,res)
#-----------------------------------------------------------------------------------
#-- draw grid lines of data region if set by GRID global variable
#-----------------------------------------------------------------------------------
if(GRID):
gres = Ngl.Resources()
gres.gsLineColor = "black"
for i in range(0,nlat):
lx = [minlon,maxlon]
ly = [lat[i],lat[i]]
lid = "lidy"+str(i)
lid = Ngl.add_polyline(wks,plot,lx,ly,gres) #-- add grid lines to plot
for j in range(0,nlon):
lx = [lon[j],lon[j]]
ly = [minlat,maxlat]
lid = "lidx"+str(j)
lid = Ngl.add_polyline(wks,plot,lx,ly,gres) #-- add grid lines to plot
Ngl.draw(plot)
# -
# Now, create the marker size for each cell - marker sizes for quality 1-4
# +
marker_sizes = np.linspace(0.01,0.04,4)
ms_array = np.ones(ncells,float) #-- create array for marker sizes depending
#-- on quality1d
for ll in range(minqual,maxqual+1):
indsize = np.argwhere(quality1d == ll)
ms_array[indsize] = marker_sizes[ll-1]
#-- marker resources
plmres = Ngl.Resources()
plmres.gsMarkerIndex = 16 #-- filled circles
# -
# Now, create the color array for each cell from temp1d
# +
gscolors = np.zeros(ncells,int)
#-- set color for data less than given minimum value
vlow = np.argwhere(temp1d < levels[0]) #-- get the indices of values less levels(0)
gscolors[vlow] = colors[0] #-- choose color
#-- set colors for all cells in between tempmin and tempmax
for i in range(1,nlevels):
vind = np.argwhere(np.logical_and(temp1d >= levels[i-1],temp1d < levels[i])) #-- get the indices of 'middle' values
gscolors[vind] = colors[i] #-- choose the colors
#-- set color for data greater than given maximum
vhgh = np.argwhere(temp1d > levels[nlevels-1]) #-- get indices of values greater levels(nl)
gscolors[vhgh] = colors[ncolors-1] #-- choose color
#-- add the marker to the plot
n=0
for ii in range(0,nlat):
for jj in range(0,nlon):
k = jj+ii
plmres.gsMarkerSizeF = ms_array[n] #-- marker size
plmres.gsMarkerColor = gscolors[n] #-- marker color
plm = "plmark"+str(k)
plm = Ngl.add_polymarker(wks,plot,lon[jj],lat[ii],plmres) #-- add marker to plot
n = n + 1
# -
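The `argwhere` loop above maps each temperature value to a color index by bin. As a side note, the same binning can be expressed in one step with `np.digitize`, which also returns bin 0 for values below the first edge and the last bin for values at or above the last edge. A self-contained sketch with dummy color indices (the actual colors come from the colormap retrieved earlier):

```python
import numpy as np

levels = np.arange(-2.0, 2.5, 0.5)          # same bin edges as above (9 edges)
colors = np.arange(2, 2 + len(levels) + 1)  # dummy color indices, one per bin

temp1d = np.array([-3.0, -1.9, 0.0, 2.5])   # below min, interior x2, above max
gscolors = colors[np.digitize(temp1d, levels)]
```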
# Add a labelbar to the plot
# +
vpx = Ngl.get_float(plot,"vpXF") #-- retrieve viewport x-position
vpy = Ngl.get_float(plot,"vpYF") #-- retrieve viewport y-position
vpw = Ngl.get_float(plot,"vpWidthF") #-- retrieve viewport width
vph = Ngl.get_float(plot,"vpHeightF") #-- retrieve viewport height
lbx, lby = vpx, vpy-vph-0.07
lbres = Ngl.Resources()
lbres.vpWidthF = vpw #-- width of labelbar
lbres.vpHeightF = 0.08 #-- height of labelbar
lbres.lbOrientation = "horizontal" #-- labelbar orientation
lbres.lbLabelFontHeightF = 0.014 #-- labelbar label font size
lbres.lbAutoManage = False #-- we control label bar
lbres.lbFillColors = colors #-- box fill colors
lbres.lbPerimOn = False #-- turn off labelbar perimeter
lbres.lbMonoFillPattern = True #-- turn on solid pattern
lbres.lbLabelAlignment = "InteriorEdges" #-- write labels below box edges
#-- create the labelbar
pid = Ngl.labelbar_ndc(wks, ncolors, list(levels.astype('str')), lbx, lby, lbres)
# -
# Add a legend to the plot
# +
legres = Ngl.Resources() #-- legend resources
legres.gsMarkerIndex = 16 #-- filled dots
legres.gsMarkerColor = "gray50" #-- legend marker color
txres = Ngl.Resources() #-- text resources
txres.txFontColor = "black"
txres.txFontHeightF = 0.01
txres.txFont = 30
x, y, ik = 0.94, 0.47, 0
for il in range(minqual,maxqual):
legres.gsMarkerSizeF = marker_sizes[ik]
Ngl.polymarker_ndc(wks, x, y, legres)
Ngl.text_ndc(wks, ""+str(il), x+0.03, y, txres)
y = y + 0.05
ik = ik + 1
txres.txFontHeightF = 0.012
Ngl.text_ndc(wks,"Quality",x,y,txres) #-- legend title
# -
# Add title and center string to the plot
# +
xpos = (vpw/2)+vpx
title1 = "Draw data values at lat/lon location as circles"
title2 = "the size is defined by the quality variable"
txres.txFont = 21
txres.txFontHeightF = 0.018
Ngl.text_ndc(wks, title1, xpos, 0.96, txres)
Ngl.text_ndc(wks, title2, xpos, 0.935, txres)
#-----------------------------------------------------------------------------------
#-- add center string
#-----------------------------------------------------------------------------------
y = vpy+0.035 #-- y-position
txcenter = "Quality: min = "+str(quality.min())+" max = "+str(quality.max())
txres.txFontHeightF = 0.008 #-- font size for string
txres.txJust = "CenterCenter" #-- text justification
Ngl.text_ndc(wks, txcenter, 0.5, y, txres) #-- add text to wks
# -
# Draw the third plot and advance the frame
Ngl.draw(plot)
Ngl.frame(wks)
# Show the plots in this notebook
from IPython.display import Image
Image(filename='plot_quality_per_cell.000001.png')
Image(filename='plot_quality_per_cell.000002.png')
Image(filename='plot_quality_per_cell.000003.png')
| Visualization/PyNGL/PyEarthScience_filled_markers_instead_grid_cells_PyNGL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Kaggle Dataset Source
#https://www.kaggle.com/jessemostipak/hotel-booking-demand/data
# +
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from IPython.core import display as ICD
import matplotlib.pyplot as plt
# -
#Placed my download into same folder as other datasets
file_path = 'C:/Users/<NAME>/Documents/School/IE 4910 Python ML/Assignments/PCC5/PCC5 - Datasets/'
hotels = pd.read_csv(file_path + 'hotel_bookings.csv')
print(hotels.dtypes)
orig_shape = hotels.shape
print('shape: {}'.format(orig_shape))
display(hotels.head(5))
#Check nulls
null = pd.DataFrame(hotels.isnull().sum())
print('Count of Null Data Points:')
display(null)
# +
#drop agent, company, and country
hotels = hotels.drop(['agent','company','country'], axis=1)
#drop remaining nan rows
hotels.dropna(axis=0, how='any', inplace=True)
#compare shape
print('Original shape: {}\nNew shape: {}'.format(orig_shape,hotels.shape))
# -
#Stat summary
hotels.describe(include= 'all')
#hist of arrival month
hotels['arrival_date_month'].value_counts().plot(kind='bar')
plt.xlabel('month')
plt.ylabel('number of bookings')
plt.title('Bookings per month: Ranked')
#hist of deposit type
hotels['deposit_type'].value_counts().plot(kind='bar')
plt.ylabel('number of bookings')
plt.title('Type of deposits')
#hist of lead_time
hotels['lead_time'].plot(kind='hist')
plt.xlabel('lead time [days]')
plt.ylabel('number of bookings')
plt.title('Booking lead times')
# +
#create x and y
x = hotels[['arrival_date_month','lead_time','deposit_type']]
y = hotels['is_canceled']
#Map months to numbers
monthmap = {'January':1 , 'February':2 , 'March':3 ,
'April':4 , 'May':5 , 'June':6 ,
'July':7 , 'August':8 , 'September':9,
'October':10, 'November':11, 'December':12}
x.arrival_date_month = x.arrival_date_month.map(monthmap)
# -
#convert deposit type to binary rows
x = pd.get_dummies(x, columns = ['deposit_type'])
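`get_dummies` expands a categorical column into one indicator column per category. A tiny self-contained sketch of the behavior on a toy frame (the values mirror two of the dataset's deposit types):

```python
import pandas as pd

df_toy = pd.DataFrame({'deposit_type': ['No Deposit', 'Non Refund', 'No Deposit']})
onehot = pd.get_dummies(df_toy, columns=['deposit_type'])
# One column per category, named '<column>_<category>'.
```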
# +
#scale other columns
from sklearn.preprocessing import scale
x[['arrival_date_month','lead_time']] = scale(x[['arrival_date_month','lead_time']])
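With its defaults, `sklearn.preprocessing.scale` standardizes each column to zero mean and unit variance using the population standard deviation. A minimal NumPy sketch of the same transform (the helper name is an assumption):

```python
import numpy as np

def standardize(col):
    # Equivalent to sklearn.preprocessing.scale with defaults:
    # subtract the mean, divide by the population std (ddof=0).
    col = np.asarray(col, dtype=float)
    return (col - col.mean()) / col.std()

z = standardize([10, 20, 30])  # zero mean, unit variance
```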
# +
#SVM parameter analysis
from sklearn import model_selection
from sklearn.svm import LinearSVC
#testing params at 20% test size
X_train,X_test,y_train,y_test = model_selection.train_test_split(x, y,
test_size = 0.2,
random_state = 42)
SVC_score = {}
for c in range(0,101,10):
if c==0:
c=1
svm_model = LinearSVC(C=c, loss='hinge', random_state = 42)
svm_model.fit(X_train,y_train)
SVC_score[c] = svm_model.score(X_test,y_test)
Cx=[]
Cy=[]
for key in SVC_score.keys():
print('(C = {}) score: {}'.format(key,SVC_score[key]))
Cx.append(key)
Cy.append(SVC_score[key])
#plot scores
plt.plot(Cx,Cy)
plt.title('SVM score with varied C')
plt.xlabel('C value')
plt.ylabel('model score')
# -
#Set svm model to C=1 (C had no effect on the score)
svm_model = LinearSVC(C=1, loss='hinge', random_state = 42)
svm_model.fit(X_train,y_train)
svm_model.score(X_test,y_test)
# +
#Decision Tree parameter analysis
from sklearn.tree import DecisionTreeClassifier
DT_score = {}
for depth in range(1,51):
dt_model = DecisionTreeClassifier(max_depth = depth,
random_state = 42)
dt_model.fit(X_train,y_train)
DT_score[depth] = dt_model.score(X_test,y_test)
depths = []
dscores = []
for key in DT_score.keys():
depths.append(key)
dscores.append(DT_score[key])
plt.plot(depths,dscores)
plt.xlabel('max depth')
plt.ylabel('model score')
plt.title('Max depth parameter analysis (test size: 20%)')
# -
#Set DT max depth to 25 (peak value)
dt_model = DecisionTreeClassifier(max_depth = 25,
random_state = 42)
dt_model.fit(X_train,y_train)
dt_model.score(X_test,y_test)
# +
#RF parameter analysis
#WARNING: THIS TAKES QUITE A WHILE TO RUN
from sklearn.ensemble import RandomForestClassifier
n_ = []
mn_ = []
score_ = []
for n in range (0,101,50):
if n == 0:
n=1
for max_l_n in range (5,26,5):
n_.append(n)
mn_.append(max_l_n)
rf_model = RandomForestClassifier(n_estimators = n,
max_leaf_nodes = max_l_n,
n_jobs = 1,
random_state = 42)
rf_model.fit(X_train, y_train)
score_.append(rf_model.score(X_test,y_test))
# +
#plot RF parameters
from mpl_toolkits import mplot3d
ax = plt.axes(projection='3d')
ax.plot_trisurf(n_, mn_, score_, cmap='RdYlGn')
plt.xlabel('num estimators')
plt.ylabel('max leaf nodes')
plt.title('RF model score')
# +
#Set rf model to num estimators = 1, max leaf nodes = 25
rf_model = RandomForestClassifier(n_estimators = 1,
max_leaf_nodes = 25,
n_jobs = 1,
random_state = 42)
rf_model.fit(X_train, y_train)
# +
#calc sensitivity analysis for all methods
sen_x = []
sen_svm = []
sen_dt = []
sen_rf = []
for test_ratio in range(10,100,10):
sen_x.append(test_ratio)
X_train,X_test,y_train,y_test = model_selection.train_test_split(x, y,
test_size=test_ratio/100,
random_state=42)
svm_model.fit(X_train,y_train)
sen_svm.append(svm_model.score(X_test,y_test))
dt_model.fit(X_train,y_train)
sen_dt.append(dt_model.score(X_test,y_test))
rf_model.fit(X_train,y_train)
sen_rf.append(rf_model.score(X_test,y_test))
# -
#plot sensitivity analysis results
sen_all = [sen_x, sen_svm, sen_dt, sen_rf]
sen_df = pd.DataFrame(sen_all)
sen_df = sen_df.transpose()
names = ['Test Ratio','SVM score','DT score','RF score']
sen_df.rename(columns = {0:'test ratio',
1:'SVM score',
2:'DT score',
3:'RF score'},
inplace = True)
sen_df = sen_df.set_index('test ratio')
sen_df.plot()
plt.title('Sensitivity at ideal model parameters')
plt.ylabel('model score')
#Set test ratio to 80
X_train,X_test,y_train,y_test = model_selection.train_test_split(x, y,
test_size=0.8,
random_state=42)
# +
#Report all for each model type with best params
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
models = [svm_model, dt_model, rf_model]
name = {svm_model:'SVM',
dt_model:'DT',
rf_model:'RF'}
confusion = {}
report = {}
model_score = {}
for model in models:
prediction = model.predict(X_test)
confusion[name[model]] = confusion_matrix(y_test, prediction)
report[name[model]] = classification_report(y_test,prediction)
model_score[name[model]] = model.score(X_test, y_test)
for model in name.values():
print('{} model:'.format(model))
print('Confusion matrix:\n{}'.format(confusion[model]))
print('Classification report:\n{}\n\n\n'.format(report[model]))
# +
#Generate 10 random instances and predict with each method
from random import gauss as norm
from statistics import mean
from statistics import stdev
# len_x = len(x[1,:])
len_x = 5 #number of feature columns: month, lead_time, and 3 deposit-type dummies
rand_x = []
for j in range(0,10): #10 random instances
rand_row = []
for i in range(0,len_x):
med_x = 0
dev_x = 1
rand_row.append(norm(med_x,dev_x))
rand_x.append(rand_row)
#predictions
for model in models:
rand_predict = model.predict(rand_x,)
print('{} predictions:\nCanceled? \ny:1/n:0:{}\n'.format(name[model],rand_predict))
# -
| PCCs/PCC5/PCC5Final/PCC5 My Dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Explore diagnostic labels
# Stefan/<NAME>
# Since Thur. Nov. 11th, 2021
#
# Select datasets with labels, for downstream-task classification
#
#
# ## Setup
#
#
# + pycharm={"name": "#%%\n"}
import glob
import wfdb
from icecream import ic
from util import *
# -
# ## PTB-XL
# Looks like the labels are the `scp_codes` column
#
# +
dnm = 'PTB_XL'
rec = get_record_eg(dnm)
ic(vars(rec))
d_dset = config(f'{DIR_DSET}.{dnm}')
dir_nm = d_dset['dir_nm']
path = f'{DIR_DSET}/{dir_nm}'
df = pd.read_csv(f'{PATH_BASE}/{path}/ptbxl_database.csv')
df.head(2)
ic(df['scp_codes'].value_counts())
# -
# ## INCART
# For PVC localization
#
# Looks like the localization results are in `comments`
#
#
# + pycharm={"name": "#%%\n"}
dnm = 'INCART'
rec = get_record_eg(dnm)
ic(vars(rec))
ic(rec.comments)
| notebook/pre-processing/explore_label.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="Pi1eQpFJbo4U"
# hide
# # !pip install seirsplus
# # !pip install geopandas
# # !pip install geopatra
# # !pip install -U plotly
# # !pip install xlrd
import contextlib
import io
import json
import random
import sys
import warnings
from pathlib import Path
from typing import List, Union
import folium
import geopandas as gpd
import geopatra
import networkx
import pandas as pd
from branca.colormap import linear
from IPython.display import display
from IPython.utils import io
from ipywidgets import (
HTML,
FloatLogSlider,
FloatSlider,
GridBox,
HBox,
IntSlider,
Label,
Layout,
Output,
SelectionSlider,
VBox,
interactive,
)
from seirsplus.models import *
warnings.filterwarnings("ignore")
# + colab={} colab_type="code" id="r0igYZLjbo4a"
# hide
assam_travel_history = Path("AssamTravelHistory.xlsx").resolve()
xl = pd.ExcelFile(assam_travel_history)
# + colab={} colab_type="code" id="eFc9efsjbo4f"
# hide
def read_assam_excel_to_df(filename: str) -> pd.DataFrame:
xl = pd.ExcelFile(filename)
df_list = []
for sheet_name in xl.sheet_names:
district_df = xl.parse(sheet_name)
district_df["district"] = sheet_name
district_df.drop(columns=["S.No."], inplace=True)
df_list.append(district_df)
return pd.concat(df_list, sort=False)
df = read_assam_excel_to_df(assam_travel_history)
# + colab={} colab_type="code" id="NhQyhdSGbo4i"
# hide
df["DateOfArrival"] = pd.to_datetime(df["Date of arrival"])
df["DateOfReturn"] = pd.to_datetime(df["Date of Receipt"])
df.drop(columns=["Date of arrival", "Date of Receipt"], inplace=True)
# + colab={} colab_type="code" id="HAydw6rvbo4l"
# hide
df_copy = df.copy()  # an actual copy, so the original df is not mutated below
df_copy["Inflow"] = 1
assam_traveller_count_df = df_copy.groupby("district").agg({"Inflow": "sum"})
assam_traveller_count_df.reset_index(inplace=True)
# + colab={} colab_type="code" id="3kyXYiVSbo4p"
# hide
def clean_district_names(dname: str):
input_to_output_mapping = {
"Cacher": "Cachar",
"Kamrup_M": "Kamrup Metropolitan",
"Kamrup_R": "Kamrup",
"KarbiAnlong": "Karbi Anglong",
"Baksha": "Baksa",
}
return input_to_output_mapping.get(dname, dname)
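The `.get(key, default)` idiom above returns the canonical name when a correction exists and passes any other name through unchanged — a self-contained sketch of the same pattern:

```python
def clean_name(dname: str) -> str:
    # known misspellings -> canonical spellings; everything else passes through
    mapping = {"Cacher": "Cachar", "Baksha": "Baksa"}
    return mapping.get(dname, dname)

print(clean_name("Cacher"))     # Cachar (mapped)
print(clean_name("Dibrugarh"))  # Dibrugarh (unchanged)
```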
# + colab={} colab_type="code" id="emaBa1iwbo4t"
# hide
assam_traveller_count_df["district"] = assam_traveller_count_df.district.apply(
clean_district_names
)
# + colab={} colab_type="code" id="eoMeoa7dbo4x"
# hide
assam_pop_web_raw = pd.read_html(
"https://www.census2011.co.in/census/state/districtlist/assam.html"
)
assam_pop_web_raw = assam_pop_web_raw[0][["District", "Population"]]
assam_pop_df = assam_pop_web_raw[
~(assam_pop_web_raw["District"].apply(lambda x: len(x)) > 21)
]
# + colab={} colab_type="code" id="UlFjC8uCbo41"
# hide
assam_pop_df_rename = assam_pop_df.rename(columns={"District": "district"})
# + colab={} colab_type="code" id="4DYJTLZSbo44"
# hide
assam_df = pd.merge(
assam_pop_df_rename, assam_traveller_count_df, on="district", how="left"
)
assam_df.to_csv("AssamDistrictInfo.csv", index=False)
| others/2020-04-02-PreProcessingAssam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="595b6845-1ac0-42e3-be9e-29c15442ccdc" _uuid="300eb65ae193e447250f8d7d0ca2b0867a9ff707"
import nltk
import gensim
import numpy as np
import keras.backend as K
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
# #Pretrained Model - Word Embedding - Word2Vec
# from nltk.data import find
# word2vec_sample = str(find('models/word2vec_sample/pruned.word2vec.txt'))
# word_embedding_model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_sample, binary=False)
# Load Google's pre-trained Word2Vec model.
word_embedding_model = gensim.models.KeyedVectors.load_word2vec_format('W2V_Model/GoogleNews-vectors-negative300.bin/data', binary=True)
# + _cell_guid="1a7aa1ee-594e-42d7-9f30-0538b7e88919" _uuid="aac70ce16fd4d9cc2393727e29f098300fc77d8d"
# Importing Train and Test Data
import csv
emotions = ['anger',
# 'anticipation',
'disgust',
'fear',
'joy',
# 'love',
'optimism',
'pessimism',
'sadness',
'surprise'] #,
# 'trust']
x_train_raw = []
y_train_raw = []
x_test_raw = []
y_test_raw = []
with open('Data/train.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
x_train_raw.append(row['Tweet'])
y_train_raw_temp = []
for emotion in emotions:
y_train_raw_temp.append(row[emotion])
y_train_raw.append(y_train_raw_temp)
with open('Data/test.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
x_test_raw.append(row['Tweet'])
y_test_raw_temp = []
for emotion in emotions:
y_test_raw_temp.append(row[emotion])
y_test_raw.append(y_test_raw_temp)
train_size = len(y_train_raw)
test_size = len(y_test_raw)
print("Train Size:", train_size, " samples")
print("Test Size:", test_size, " samples")
# + _cell_guid="55e3ea36-17fa-436e-a5d9-6f5d3de1401b" _uuid="b5a045613fbd611fd0153ad9d76b1633087d35bc"
SentiWords = {}
with open('Data/words.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
w = row['Words']
SentiWords_temp = np.zeros(len(emotions), dtype=K.floatx())
for i, emotion in enumerate(emotions):
SentiWords_temp[i] = row[emotion]
SentiWords[w] = SentiWords_temp
SentiWords["QQ"] = np.zeros(len(emotions), dtype=K.floatx())
# + _cell_guid="b1378401-da11-4a63-bc5f-173ce56c4cb1" _uuid="6626402b6485e0d5e609fed2f47df3d7a5578a1c"
# initialization Block
max_n_words = 40
ignore_words = ['?', '@', '-', '.', '_', '/', ' ', '.', '!',
"you'll", 'itself', 'some', 'same', 'off', 'any', 'having',
'and', 'theirs', 'your', 'should', 'after', 'out', 'in',
"you'd", 'd', 'its', 'had', 'myself','from', 'ourselves',
'here', 'an', 'all', 'yours', 'as', 'hers', 'they', 'll',
"she's", 'through', 'you', 'then', 'once', 'my', 'am', 'who',
'being', 'of', 'shan', 'that', 'so', 'with', 'yourselves',
'both', 't', 'his', 'we', 'more', 'did', 'our', 'he', 'o',
'them', 'than', 'it', 'y', 'her', 'up', 'about', 'this',
'himself', 'just', 'if', 'own', 'has', 'how', 'because',
'him', 'doing', 'at', 'm', 'is', 'each', 's', 'too', 'those',
'such', 'have', 'above', "you've", 'most', 'on', 'under',
'by', 'few', 'where', 'when', 'were', "you're", "it's",
'been', 'the', 'before', 'do', 'these', 'other', 'to', 'i',
'can', 'themselves', 'what', 'are', 'while', 'which', 'me',
'ma', "that'll", 've', 'for', 'why', 'a', 'during', 'yourself',
'below', 'now', 'only', 'their', 'herself', 'will', 'does',
'she', 'be', 'there', "should've", 'was', 're', 'ours',
'whom', 'further']
# ignore_words = ['?', '@', '-', '.', '_', '/', ' ', '.', '!']
vector_size = len(word_embedding_model['I'])
print ("Vocabulary size: ",len(word_embedding_model.vocab), " words")
print ("Word-Vector size used: ", vector_size)
# + _cell_guid="0cfac649-e15d-499b-ab3e-7456c884fc8e" _uuid="cb20f38c98fa0df96bfddfd3d70a9f7c324c949a"
# Cleaning and vectorizing the data
max_n_words=40
vector_size = len(word_embedding_model['I'])
# Lemmatizing Functions
def get_wordnet_pos(tag):
if (tag == ''):
return ''
elif tag.startswith('J'):
return wn.ADJ
elif tag.startswith('V'):
return wn.VERB
elif tag.startswith('N'):
return wn.NOUN
elif tag.startswith('R'):
return wn.ADV
else:
return ''
def lemmatize_w(word, tag):
wn_tag = get_wordnet_pos(tag)
if (wn_tag == ''):
return word
else:
return WordNetLemmatizer().lemmatize(word, wn_tag)
def lemmatize_s(sentence):
# sent = []
# word_tag = nltk.pos_tag(sentence)
# for w, t in word_tag:
# sent.append(lemmatize_w(w, t))
# return sent
return sentence
# Filtering the data
def filter_sent (sentence, embedding_model = word_embedding_model):
# tokenize each word in the sentence
s_words = word_tokenize(sentence.lower())
filtered_sentence = []
l = lemmatize_s(s_words)
for w in l:
# Remove words not in Vocab
if w in embedding_model.vocab:
filtered_sentence.append(w)
return filtered_sentence
# Converting data to vectors
x_train = np.zeros((train_size, max_n_words, vector_size + len(emotions)), dtype=K.floatx())
y_train = np.zeros((train_size, len(emotions)), dtype=np.int32)
x_test = np.zeros((test_size, max_n_words, vector_size + len(emotions)), dtype=K.floatx())
y_test = np.zeros((test_size, len(emotions)), dtype=np.int32)
for i in range(train_size):
x = filter_sent(x_train_raw[i])
for index, word in enumerate(x):
if word in SentiWords:
vec = SentiWords[word]
else:
vec = SentiWords["QQ"]
x_train[i, index, :] = np.concatenate((word_embedding_model[word], vec), axis=0)
for i, y in enumerate(y_train_raw):
y_temp = np.zeros(len(emotions), dtype=np.int32)
y_temp = np.array(y, dtype=np.int32)
y_train[i, :] = y_temp
for i in range(test_size):
x = filter_sent(x_test_raw[i])
for index, word in enumerate(x):
if word in SentiWords:
vec = SentiWords[word]
else:
vec = SentiWords["QQ"]
x_test[i, index, :] = np.concatenate((word_embedding_model[word], vec), axis=0)
for i, y in enumerate(y_test_raw):
y_temp = np.zeros(len(emotions), dtype=np.int32)
y_temp = np.array(y, dtype=np.int32)
y_test[i, :] = y_temp
print ("Data Vectorized")
print ("Input shape: ", x_train.shape)
print ("Output shape: ", y_train.shape)
# + _cell_guid="ab46f426-ab80-4ef1-bf84-f88985d1cada" _uuid="9167aac618c1fdd532f0be934babae15fb48bf31"
from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.optimizers import Adam, RMSprop, SGD
from keras.preprocessing import sequence
from keras.layers import LSTM, Embedding
# Keras model
batch_size = 32 # 10
nb_epochs = 10 # 20
model = Sequential()
# CNN
model.add(Conv1D(64, kernel_size=3, activation='relu', padding='same',
input_shape=(max_n_words, vector_size + len(emotions))))
model.add(Dropout(0.5))
model.add(Conv1D(64, kernel_size=3, activation='relu', padding='same'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(2))
model.add(Conv1D(32, kernel_size=2, activation='relu', padding='same'))
model.add(Dropout(0.5))
model.add(Conv1D(32, kernel_size=2, activation='relu', padding='same'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(2))
model.add(Flatten())
model.add(Dense(64, activation='tanh'))
model.add(Dense(32, activation='tanh')) # tanh # relu
model.add(Dropout(0.25))
model.add(Dense(len(emotions), activation='sigmoid')) # softmax #
# Compile the model
model.compile(loss = 'binary_crossentropy',
optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=1e-7),
metrics = ['accuracy'])
print("MODEL:")
print(model.summary(), "\n")
# + _cell_guid="3fc6c0f1-8cec-4e29-b87d-2e60dd2e6fe4" _uuid="326343034e5d41dfe020414d07705a708ba7afd6"
# Fit the model
#x_train = np.zeros((train_size, max_n_words, vector_size + len(emotions)), dtype=K.floatx())
#y_train = np.zeros((train_size, len(emotions)), dtype=np.int32)
#x_test = np.zeros((test_size, max_n_words, vector_size + len(emotions)), dtype=K.floatx())
#y_test = np.zeros((test_size, len(emotions)), dtype=np.int32)
history = model.fit(x_train, y_train,
batch_size=batch_size,
shuffle=False, #False,
epochs=nb_epochs,
verbose=2,
validation_data=(x_test, y_test),
callbacks=[EarlyStopping(min_delta=1e-7, patience=3)]) # min_delta=0.00025, patience=2
# Fit the model (without early stop)
# history = model.fit(x_train, y_train,
# batch_size=batch_size,
# shuffle=False, #False
# epochs=nb_epochs,
# verbose=2,
# validation_data=(x_test, y_test))
print ("\n================================== Model Trained =================================\n")
# + _cell_guid="7744a9f0-b6c2-4d26-826b-f7a5cd6377b2" _uuid="5c7366dd052ced9ad982fcd02655647c55083ba1"
# Plotting
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + _cell_guid="f41b9902-6aa4-4828-871c-48799e47a761" _uuid="6aebee321ad13e91de4fbee609f2db5b74c154a0"
# Classification
ERROR_THRESHOLD = 0.40
def classify(sentence, model, show_details=False):
inp = np.zeros((1, max_n_words, vector_size + len(emotions)), dtype=K.floatx())  # a single sentence, not train_size rows
filtered_sentence = filter_sent(sentence)
x = filtered_sentence
for index, word in enumerate(x):
if word in SentiWords:
vec = SentiWords[word]
else:
vec = SentiWords["QQ"]
inp[0, index, :] = np.concatenate((word_embedding_model[word], vec), axis=0)
results = model.predict(x = inp, batch_size=None)
results2 = [[i,r] for i,r in enumerate(results[0]) if r > ERROR_THRESHOLD ]
results2.sort(key=lambda x: x[1], reverse=True)
return_results = [[emotions[r[0]],r[1]] for r in results2]
if show_details:
print (emotions)
print (results[0])
if len(return_results) == 0:
return_results.append("Neutral")
return return_results
# + _cell_guid="cfc7bc28-0b35-4474-84f1-eb5d390572bd" _uuid="423cd271212af027632efc60bbfeb458366b678d"
# Manual Testing
test_x = "He is so bad"
print (test_x, "\n")
print ("Prediction: ", classify(test_x, model), "\n")
# + _cell_guid="0da0ac88-a1b6-489a-813a-b9aa286deb6f" _uuid="9485fac2da0ed8b1a44ebefc0f9687e81c9ee701"
# Random manual Testing
n = np.random.randint(test_size - 1)
test_x = x_test_raw[n]
# n = np.random.randint(train_size - 1)
# test_x = x_train_raw[n]
print (test_x,"\n")
tags = []
for i, val in enumerate(y_test_raw[n]):
if val == '1':
tags.append(emotions[i])
print ("Actual Tag: ", tags, "\n")
# for i, val in enumerate(y_train_raw[n]):
# if val == '1':
# tags.append(emotions[i])
# print ("Actual Tag: ", tags)
print ("Prediction: ", classify(test_x, model), "\n")
# + _cell_guid="42365206-fd96-4a3f-9505-a48b1d10a46d" _uuid="263c057433f25a01c5d3037109f57f6a2b90f119"
# Saving Model
# serialize model to JSON
model_json = model.to_json()
with open("Models/model_CNN_miltilabel.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("Models/model_CNN_miltilabel.h5")
print ("Saved model to disk")
# + _cell_guid="3ab20cbc-994b-4241-a323-c79521ba9cbb" _uuid="51485e82b2563ac075b2fd4d7d74ac9c42ffb4ca"
# Loading Model
from keras.models import model_from_json
# load json and create model
json_file = open('Models/model_CNN_miltilabel.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("Models/model_CNN_miltilabel.h5")
print ("Loaded model from disk")
# + _cell_guid="7be73888-b59e-4269-8d55-22565df872c6" _uuid="1eafbf127139ea05050e3080f9fa87be273a7b27"
# Evaluate loaded model on test data
loaded_model.compile(loss = 'binary_crossentropy',
optimizer = RMSprop(lr = 0.002, rho = 0.9, epsilon = None, decay = 1e-7),
metrics=['accuracy'])
score = loaded_model.evaluate(x_test, y_test, verbose=1)
print ("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
# + _cell_guid="4acbb9ca-c26c-419d-b0fc-a2f55467d328" _uuid="457d1daf8838e7d633d4b0aca25777907bd16629"
# Manual Testing
test_x = input("Enter the text in English: ")
print (test_x, "\n")
print ("Prediction: ", classify(test_x, loaded_model),"\n")
# -
| CNNmodel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pyEX as p
p.symbolsDF().head()
df = p.chartDF('aapl', '5y')
df = df[['open', 'close', 'high', 'low']].set_index(df['date'])
df.head()
# %matplotlib inline
df.plot()
| examples/pyex.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spell Checker
# #### Edit 1: Naive approach (code deleted; it simply looped over everything and might have taken a year to finish)
# #### Edit 2: <NAME>'s spell checker
# #### Edit 3: Added Trie
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
# Required Distance
# !pip install textdistance
from sklearn.model_selection import train_test_split
# -
import textdistance
import time
import csv
import tqdm
import operator
word_file = '/kaggle/input/itmo-spelling-correction-autumn-2019/words.csv'
train_file = '/kaggle/input/itmo-spelling-correction-autumn-2019/no_fix.submission.csv'
truth_file = '/kaggle/input/itmo-spelling-correction-autumn-2019/train.csv'
# +
# Load Word Dict
words = {}
with open(word_file, newline='') as file:
buffer = csv.DictReader(file)
for r in tqdm.tqdm(buffer):
words[r['Id']] = int(r['Freq'])
# +
# Load Truth File
truth_values = set()
with open(truth_file, newline='') as file:
buffer = csv.DictReader(file)
for r in tqdm.tqdm(buffer):
truth_values.add(r['Expected'])
# -
len(truth_values)
distance_to_keep = [1, 2]
distance_function = textdistance.levenshtein
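`distance_function` is set here, but the correction logic below enumerates candidate edits directly rather than measuring distances. For intuition, this is the metric it refers to — a minimal pure-Python Levenshtein (standard library only; the notebook itself relies on the `textdistance` package):

```python
def levenshtein(a: str, b: str) -> int:
    # dynamic programming over a rolling row: insert / delete / substitute each cost 1
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein('КИТАЦ', 'КИТАЙ'))  # 1: one substitution away
```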
# +
WORDS = words
def correction(initial_word, threshold=1000):
"Most probable spelling correction for word."
probability_dict = {}
# Probability for just the word
for word in known([initial_word]):
if word in WORDS:
if WORDS[word] > threshold:
probability_dict[word] = WORDS[word]
# Probability for one edit
for word in known(edits1(initial_word)):
if word in WORDS:
if WORDS[word] > threshold:
probability_dict[word] = WORDS[word]
# # Probability for two edits
# for word in known(edits2(word)):
# probability_dict[word] = WORDS[word]
# print(f'Prob dictionary: {probability_dict}')
if probability_dict:
correction = sorted(probability_dict.items(), key=operator.itemgetter(1), reverse=True)[0][0]
else:
correction = initial_word
return correction
def known(words):
"The subset of `words` that appear in the dictionary of WORDS."
if words:
return set(w for w in words if w and w in truth_values)
return set()
def edits1(word):
"All edits that are one edit away from `word`."
letters = 'бвгджзклмнпрстфхцчшщаэыуояеёюийьъ'.upper() # Cyrillic only; English letters abcdefghijklmnopqrstuvwxyz removed
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [L + R[1:] for L, R in splits if R]
transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
inserts = [L + c + R for L, R in splits for c in letters]
return set(deletes + transposes + replaces + inserts)
def edits2(word):
"All edits that are two edits away from `word`."
return (e2 for e1 in edits1(word) for e2 in edits1(e1))
# -
correction('АНІМЕ')
correction('UNIZOOI')
# ### Testing This
# It works; let's try it on the whole test set
train = []
with open(train_file) as file:
buffer = csv.DictReader(file)
for r in tqdm.tqdm(buffer):
train.append(r['Id'])
with open('submission_pn.csv', 'w', newline='') as file:
buffer = csv.writer(file)
buffer.writerow(['Id', 'Predicted'])
for word in tqdm.tqdm(train):
corr = correction(word, 15000)
buffer.writerow([word, corr])
# # **Trie**
# +
"""
My Trie implementation: https://github.com/shivammehta007/pythondsa/blob/master/structures/trie.py
This is an add-on to that basic Trie
"""
class Trie:
"""
Trie
A trie is a tree-like data structure used for prefix-based suggestions; here is a
simple implementation of it with add and search functionality.
It has a root node at the head of the trie and a node_count tracking the
total number of nodes currently in the trie
"""
class _Node:
"""
Node
A trie node holds a hashmap of child characters, with an `end` flag
indicating whether this node terminates a word.
One additional field that I added is a frequency count, kept for
further probabilistic calculations if required.
"""
def __init__(self, end=False):
self.characters = {}
self.frequency = 0
self.end = end
def __init__(self):
self.root = self._Node()
self.node_count = 1
def add_string(self, string):
"""
Adds a string to the trie
Parameters:
string: String
"""
node = self.root
for c in string:
if c not in node.characters:
node.characters[c] = self._Node()
self.node_count += 1
node = node.characters[c]
node.end = True
if string in WORDS:
node.frequency = WORDS[string]
def search_word(self, string):
"""
Searches for a word in the trie
Parameters:
string: String
"""
node = self.root
for c in string:
if c not in node.characters:
return False, 0
node = node.characters[c]
if node.end:
return True, node.frequency
return False, 0
def one_edit_distance(self, word):
"""
All edits that are one edit away from word
Parameters:
word: String
"""
letters = 'бвгджзклмнпрстфхцчшщаэыуояеёюийьъ'.upper() # Cyrillic only; English letters abcdefghijklmnopqrstuvwxyz removed
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [L + R[1:] for L, R in splits if R]
transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
inserts = [L + c + R for L, R in splits for c in letters]
return set(deletes + transposes + replaces + inserts)
def correct_spelling(self, initial_word, threshold=1000):
"""
Checks and Returns the Correct Spelling of the Word with one edit distance
"""
probability_dict = {}
present, _ = self.search_word(initial_word)
if present:
return initial_word
for word in self.one_edit_distance(initial_word):
present, freq = self.search_word(word)
if not present:
continue
if freq > threshold:
probability_dict[word] = freq
if probability_dict:
correction = sorted(probability_dict.items(), key=operator.itemgetter(1), reverse=True)[0][0]
else:
correction = initial_word
return correction
# -
trie = Trie()
for word in tqdm.tqdm(truth_values):
trie.add_string(word)
# #### Some Tests
trie.search_word('АНИМЕ')
trie.correct_spelling('КИТАЦ', 20000) # КИТАЙ
# ### Let's create the final submission
with open('submission_trie.csv', 'w', newline='') as file:
buffer = csv.writer(file)
buffer.writerow(['Id', 'Predicted'])
for word in tqdm.tqdm(train):
corr = trie.correct_spelling(word, 20000)
buffer.writerow([word, corr])
# ## Code below is for testing only
# Load Truth File
X_test = []
y_test = []
with open(truth_file, newline='') as file:
buffer = csv.DictReader(file)
for r in tqdm.tqdm(buffer):
X_test.append(r['Id'])
y_test.append(r['Expected'])
X_train, X_test, y_train, y_test = train_test_split(X_test, y_test, test_size=0.2)
accuracy_pn = 0
accuracy_trie = 0
comparator_accuracy = 0
for i in tqdm.tqdm(range(len(X_test))):
corr_pn = correction(X_test[i])
corr_trie = trie.correct_spelling(X_test[i], 20000)
if corr_pn in WORDS and corr_trie in WORDS:
if WORDS[corr_pn] > WORDS[corr_trie]:
corr_comp = corr_pn
else:
corr_comp = corr_trie
if corr_comp == y_test[i]:
comparator_accuracy += 1
if corr_pn == y_test[i]:
accuracy_pn += 1
if corr_trie == y_test[i]:
accuracy_trie += 1
print('PN Accuracy: {:.4f}'.format(accuracy_pn/ len(X_test) * 100))
print('Trie Accuracy: {:.4f}'.format(accuracy_trie/ len(X_test) * 100))
print('Comparator Accuracy: {:.4f}'.format(comparator_accuracy/ len(X_test) * 100))
# +
# TQDM fixing code :3
# list(getattr(tqdm.tqdm, '_instances'))
# for instance in list(tqdm.tqdm._instances):
# tqdm.tqdm._decr_instances(instance)
# -
| final_submission.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this notebook, we will carry out the championship match transformations using the previously trained and selected models for each feature and each season. These are one-variable linear models, so applying them is just a matter of using the intercept and coefficient from each trained model.
# To reproduce the steps in this notebook:
# 1. make sure the steps from the previous notebooks have been followed first
# 1. run "make championship_transformation" in a terminal opened at the location of the repository
# * this will carry out the match transformations & generate a single combined dataset of premier and premier equivalent championship matches
# common notebook config
# %run notebook-config.ipynb
from src.models import championship_transformation
interim_filepath = "../data/interim"
models_filepath = "../models"
processed_filepath = "../data/processed"
# ## Transform matches
#
# 1. Duplicate championship matches into home & away records, with the opponent name appended with "(Premier)". This indicates that the opponent does not really exist and is just an estimate of how the team might play against a Premier League level opponent.
# 2. Transform feature values using the selected models for each season from the previous notebook
# 3. Combine the transformed championship matches with the premier matches to form a single combined dataset
championship_transformation.transform_championship_matches(interim_filepath, models_filepath, processed_filepath)
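Since each selected model is a one-variable linear regression, the transformation applied inside `transform_championship_matches` reduces to `y = intercept + coef * x` per feature and season. A minimal sketch with hypothetical coefficients (the real values are loaded from the trained models under `models_filepath`):

```python
def to_premier_equivalent(values, intercept, coef):
    # premier-equivalent feature value = intercept + coef * championship value
    return [intercept + coef * x for x in values]

# hypothetical "shots for" model for one season
print(to_premier_equivalent([10.0, 14.0], intercept=2.0, coef=0.5))  # [7.0, 9.0]
```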
# ## Check transformed matches
#
# Sanity check the transformed matches by looking for null values and checking the distribution of values.
# +
all_matches = pd.read_csv("{}/premier_equivalent_matches.csv".format(processed_filepath))
championship_matches = all_matches[
(all_matches.division == "championship")
]
print("Number of championship matches: {}".format(championship_matches.shape[0]))
print("Number of match columns: {}".format(championship_matches.shape[1]))
# -
# Check for null values
championship_matches.isnull().sum()
# +
# Drop empty columns - i.e. bookmaker odds which are no longer relevant on transformed championship matches
championship_matches = championship_matches.dropna(axis="columns")
championship_matches.describe()
# +
# inspect distribution of transformed features
# just show "for" columns because the "against" are duplicated values for the opponent
columns = [col for col in championship_matches if col.endswith("_for")]
championship_matches[columns].hist(figsize=(20,15))
plt.show()
| notebooks/003-jah-championship-transformation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Calculating Receiver Position from Android Raw GNSS Measurements
#
# *This notebook was written by [<NAME>](https://www.johnsonmitchelld.com/). The source code, as well as the gnssutils Python package containing the EphemerisManager module, can be found on [Github](https://github.com/johnsonmitchelld/gnss-analysis).*
#
# Satellite navigation is absolutely fascinating. The GPS receiver in your smartphone uses concepts as diverse as orbital mechanics, digital signal processing, convex optimization, and the theory of relativity to determine your location anywhere on Earth within seconds. From the development of GPS in the 80's until the early 2010s, satellite navigation was a specialized tool, used primarily by the military, airlines, surveyors, and private citizens fortunate enough to own an expensive personal receiver. Widespread GNSS receiver adoption in smartphones over the past decade, combined with the development of new GNSS constellations from China and the EU, have commoditized satellite navigation like never before.
#
# Despite these changes, GNSS receivers were mostly black boxes until recently. Most standalone receivers output only a final position, velocity, and time solution, with dilution of precision and satellite signal-to-noise ratios included if you're lucky.
#
# That changed in 2016 with Google's introduction of the [raw GNSS measurement API](https://developer.android.com/guide/topics/sensors/gnss) in Android 7. Several applications have since been developed to make use of this data, including Google's own [GnssLogger](https://play.google.com/store/apps/details?id=com.google.android.apps.location.gps.gnsslogger&hl=en_US&gl=US), which provides an interface for logging raw GNSS measurements to a text file. It is now possible for any of the billions of Android device owners worldwide to experiment with raw GNSS measurement data for themselves.
#
# What's still missing is an approachable, easy-to-use platform for processing this data. Google's recent paper, Android Raw GNSS Measurement Datasets for Precise Positioning, contains the following table listing publicly available software packages for GNSS positioning:
#
# 
#
# Notice that all of the open source options are either written in C/C++ or MATLAB. C and C++ are highly performant languages, but neither is particularly approachable without a software engineering background. MATLAB is much more familiar to engineers and scientists, but I believe its widespread use is harmful for reasons I outline [here](https://www.johnsonmitchelld.com/opinion/2020/12/20/stop-teaching-matlab.html).
#
# To address this void in approachable GNSS software packages built on open source platforms, I decided to develop my own. This Jupyter notebook, along with the accompanying EphemerisManager class, provide everything a user needs to calculate the position of their Android phone using recorded GNSS measurements. All that's necessary (in addition to the phone, of course) is an environment capable of running Jupyter notebooks.
#
# I would like to thank the developers of the tools listed above. I used Google's [GPS Measurement Tools](https://github.com/google/gps-measurement-tools) and the European Space Agency's [GNSS resources](https://gage.upc.edu/tutorials/) as references while developing this project, in addition to many other sources mentioned below.
#
# ## Data Acquisition
#
# The experiment below was conducted with a sample dataset I collected with Google's GnssLogger app on my Samsung Galaxy S9 Plus while driving through downtown Seattle. If you'd like to collect your own data and process it with this notebook, you can find the code and documentation [here](https://github.com/johnsonmitchelld/gnss-analysis).
#
# The import process and data directory determination below assume that the Python interpreter's working directory is the same as this notebook, and that the gnssutils package and data repository are located in the parent directory. If that is not the case, you'll have to modify accordingly.
import sys, os, csv
parent_directory = os.path.split(os.getcwd())[0]
ephemeris_data_directory = os.path.join(parent_directory, 'data')
sys.path.insert(0, parent_directory)
from datetime import datetime, timezone, timedelta
import pandas as pd
import numpy as np
import navpy
from gnssutils import EphemerisManager
# Let's read in a GNSS log file, format the satellite IDs to match up with the RINEX 3.0 standard, and convert the columns we need to calculate receiver position to numeric values. We'll also filter the data so that we're only working with GPS measurements to simplify the rest of the analysis.
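The satellite-ID step can be illustrated in isolation: a RINEX 3 satellite name is a constellation letter followed by a two-digit SVID. A minimal sketch covering only the two codes used below (Android's `ConstellationType` defines several more):

```python
CONSTELLATION_LETTERS = {'1': 'G', '3': 'R'}  # 1 = GPS, 3 = GLONASS in Android's ConstellationType

def rinex_sv_name(constellation_type: str, svid: str) -> str:
    # zero-pad the SVID to two digits and prefix the constellation letter
    return CONSTELLATION_LETTERS[constellation_type] + svid.zfill(2)

print(rinex_sv_name('1', '5'))   # G05
print(rinex_sv_name('3', '12'))  # R12
```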
# + tags=[]
# Get path to sample file in data directory, which is located in the parent directory of this notebook
input_filepath = os.path.join(parent_directory, 'data', 'sample', 'gnss_log_2020_12_02_17_19_39.txt')
with open(input_filepath) as csvfile:
reader = csv.reader(csvfile)
for row in reader:
if row[0][0] == '#':
if 'Fix' in row[0]:
android_fixes = [row[1:]]
elif 'Raw' in row[0]:
measurements = [row[1:]]
else:
if row[0] == 'Fix':
android_fixes.append(row[1:])
elif row[0] == 'Raw':
measurements.append(row[1:])
android_fixes = pd.DataFrame(android_fixes[1:], columns = android_fixes[0])
measurements = pd.DataFrame(measurements[1:], columns = measurements[0])
# Format satellite IDs
measurements.loc[measurements['Svid'].str.len() == 1, 'Svid'] = '0' + measurements['Svid']
measurements.loc[measurements['ConstellationType'] == '1', 'Constellation'] = 'G'
measurements.loc[measurements['ConstellationType'] == '3', 'Constellation'] = 'R'
measurements['SvName'] = measurements['Constellation'] + measurements['Svid']
# Remove all non-GPS measurements
measurements = measurements.loc[measurements['Constellation'] == 'G']
# Convert columns to numeric representation
measurements['Cn0DbHz'] = pd.to_numeric(measurements['Cn0DbHz'])
measurements['TimeNanos'] = pd.to_numeric(measurements['TimeNanos'])
measurements['FullBiasNanos'] = pd.to_numeric(measurements['FullBiasNanos'])
measurements['ReceivedSvTimeNanos'] = pd.to_numeric(measurements['ReceivedSvTimeNanos'])
measurements['PseudorangeRateMetersPerSecond'] = pd.to_numeric(measurements['PseudorangeRateMetersPerSecond'])
measurements['ReceivedSvTimeUncertaintyNanos'] = pd.to_numeric(measurements['ReceivedSvTimeUncertaintyNanos'])
# A few measurement values are not provided by all phones
# We'll check for them and initialize them with zeros if missing
if 'BiasNanos' in measurements.columns:
measurements['BiasNanos'] = pd.to_numeric(measurements['BiasNanos'])
else:
measurements['BiasNanos'] = 0
if 'TimeOffsetNanos' in measurements.columns:
measurements['TimeOffsetNanos'] = pd.to_numeric(measurements['TimeOffsetNanos'])
else:
measurements['TimeOffsetNanos'] = 0
print(measurements.columns)
# -
# ## Pre-processing
#
# Those with experience in GNSS data processing will probably notice that we're missing a few important fields for calculating receiver position. For each satellite the process requires:
#
# * **Time of signal transmission** - in order to calculate the satellite's location when the signal was transmitted
# * **Measured pseudorange** - a rough estimate of the distance between each satellite and the receiver before correcting for clock biases, ionospheric delay, and other phenomena which we'll explain later
# * **Ephemeris parameters** - a set of parameters required to calculate the satellite's position in space
#
# Of these, the only one provided directly by Android's raw GNSS measurement API is the time of signal transmission, labeled `ReceivedSvTimeNanos`. Fortunately, the European Global Navigation Satellite Systems Agency (GSA) published a very helpful [white paper](https://www.gsa.europa.eu/system/files/reports/gnss_raw_measurement_web_0.pdf) explaining the parameters reported by Android and how they can be used to generate the values we need.
#
# Ephemeris parameters are an exception. The Android API technically has commands for obtaining these, but they don't seem to work on my Galaxy S9 - at least not in the GnssLogger app. Instead we'll get them from the [International GNSS Service](https://igs.org/mgex/data-products/#data) (IGS), which maintains an array of GNSS data products available to the public through NASA's Crustal Dynamics Data Information System, the German Bundesamt für Kartographie und Geodäsie (BKG), and the French Institut Géographique National (IGN). More on this later.
#
# ### Timestamp Generation
#
# First things first: what time were these measurements taken?
#
# It turns out this question isn't as simple as it might sound. The GPS system has its own time scale, which differs from UTC by some number of leap seconds. Leap seconds are used to adjust UTC for changes in the rotation of the Earth, and are decided by the International Earth Rotation and Reference Systems Service. If you're interested in reading more about leap seconds and international time standards, you can check out this [FAQ](https://www.nist.gov/pml/time-and-frequency-division/nist-time-frequently-asked-questions-faq) from the US National Institute of Standards and Technology.
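#
# As a concrete illustration, here is a sketch of the GPS-to-UTC conversion. The 18 s offset has been in effect since the leap second at the end of 2016 (check the IERS bulletins for updates), and applying today's offset to dates near the 1980 epoch is only illustrative, since leap seconds accumulated gradually over the intervening decades:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS - UTC offset as of 2017; not historically valid for early dates

def gps_seconds_to_utc(gps_seconds):
    """Convert seconds since the GPS epoch to a leap-second-corrected UTC datetime."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

# one full GPS week (604800 s) after the epoch
print(gps_seconds_to_utc(604800))
```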
#
# We're going to need a Python `datetime` timestamp for each measurement in order to determine which ephemeris files to retrieve from IGS. To get one, let's calculate the GPS time in nanoseconds using the equations from Section 2.4 of the GSA's white paper. Then we'll convert it to a normal Unix timestamp using the `Pandas.to_datetime()` function with the GPS reference epoch.
#
# Note that the UTC flag is set to True in the `Pandas.to_datetime()` method. The timestamp column needs to be in UTC because the ephemeris files provided by IGS are labeled by UTC date and time. Since Python's datetime library defaults to local time on some systems, it was necessary to keep everything timezone-aware to avoid mix-ups.
#
# This nomenclature is a bit confusing since we're using the `to_datetime()` method to create a column called `UnixTime`, not `Utc`. Unix time and UTC are technically in the same time zone, but UTC includes leap seconds, while Unix time does not. Since we're creating a timestamp from the number of seconds elapsed since the GPS epoch, the resulting timestamp could differ from UTC by some number of leap seconds. That's close enough to ensure we get the correct ephemeris file, and it won't affect the actual position calculations because we don't use the `UnixTime` column for those.
#
# Finally, let's split the data into measurement epochs. We do this by creating a new column and setting it to 1 whenever the difference between a timestamp and the previous timestamp is greater than 200 milliseconds using the `DataFrame.shift()` command. Then we use the `Series.cumsum()` method to generate unique numbers for the individual epochs.
# +
measurements['GpsTimeNanos'] = measurements['TimeNanos'] - (measurements['FullBiasNanos'] - measurements['BiasNanos'])
gpsepoch = datetime(1980, 1, 6, 0, 0, 0)
measurements['UnixTime'] = pd.to_datetime(measurements['GpsTimeNanos'], utc = True, origin=gpsepoch)
# Split data into measurement epochs
measurements['Epoch'] = 0
measurements.loc[measurements['UnixTime'] - measurements['UnixTime'].shift() > timedelta(milliseconds=200), 'Epoch'] = 1
measurements['Epoch'] = measurements['Epoch'].cumsum()
# -
# Next let's calculate the estimated signal transmission and reception times, once again using the equations found in Section 2.4 of the GSA white paper mentioned above. Once we've determined these, we can calculate a measured pseudorange value for each satellite by taking the difference between the two and multiplying by the speed of light:
#
# $$
# \rho_{measured} = \frac{(t_{Rx} - t_{Tx})}{1E9} \cdot c
# $$
#
# +
WEEKSEC = 604800
LIGHTSPEED = 2.99792458e8
# This should account for rollovers since it uses a week number specific to each measurement
measurements['tRxGnssNanos'] = measurements['TimeNanos'] + measurements['TimeOffsetNanos'] - (measurements['FullBiasNanos'].iloc[0] + measurements['BiasNanos'].iloc[0])
measurements['GpsWeekNumber'] = np.floor(1e-9 * measurements['tRxGnssNanos'] / WEEKSEC)
measurements['tRxSeconds'] = 1e-9*measurements['tRxGnssNanos'] - WEEKSEC * measurements['GpsWeekNumber']
measurements['tTxSeconds'] = 1e-9*(measurements['ReceivedSvTimeNanos'] + measurements['TimeOffsetNanos'])
# Calculate pseudorange in seconds
measurements['prSeconds'] = measurements['tRxSeconds'] - measurements['tTxSeconds']
# Convert to meters
measurements['PrM'] = LIGHTSPEED * measurements['prSeconds']
measurements['PrSigmaM'] = LIGHTSPEED * 1e-9 * measurements['ReceivedSvTimeUncertaintyNanos']
# -
# Now that we have pseudorange values, we can begin the standard process for calculating receiver position. First we need to retrieve the ephemeris data for each satellite from one of the International GNSS Service (IGS) analysis centers. These include NASA's [CDDIS](https://cddis.nasa.gov/Data_and_Derived_Products/GNSS/orbit_products.html) and the German [BKG](https://igs.bkg.bund.de/dataandproducts/index).
#
# This process is, frankly, a pain, so I wrote a Python module to handle the details. You can view the code and documentation [here](https://github.com/johnsonmitchelld/gnss-analysis/tree/main/gnssutils), but all you have to do is initialize an instance of the `EphemerisManager` class with a path to the directory where you want it to cache downloaded files. Then, provide the `get_ephemeris()` method with a Python datetime object and a list of satellites, and it will return a Pandas dataframe of valid ephemeris parameters for the requested satellites.
#
# Broadcast ephemerides are good for four hours from the time they are issued, but the GPS control segment generally uploads new values every two hours. The `EphemerisManager` class downloads a whole day's worth of data at a time and then returns the most recent parameters at the requested time for each satellite. Because of this we're going to run the `get_ephemeris()` method for every epoch in case new parameters were issued since the last measurement was taken.
#
# For the sake of demonstration, let's start by walking through the position solution for just one measurement epoch. We'll use the first epoch in the dataset with measurements from at least five satellites.
# +
manager = EphemerisManager(ephemeris_data_directory)
epoch = 0
num_sats = 0
while num_sats < 5:
one_epoch = measurements.loc[(measurements['Epoch'] == epoch) & (measurements['prSeconds'] < 0.1)].drop_duplicates(subset='SvName')
timestamp = one_epoch.iloc[0]['UnixTime'].to_pydatetime(warn=False)
one_epoch.set_index('SvName', inplace=True)
num_sats = len(one_epoch.index)
epoch += 1
sats = one_epoch.index.unique().tolist()
ephemeris = manager.get_ephemeris(timestamp, sats)
print(timestamp)
print(one_epoch[['UnixTime', 'tTxSeconds', 'GpsWeekNumber']])
# -
# Okay, we've got valid ephemeris parameters and signal transmission time in seconds of the GPS week for one measurement epoch's worth of data. Time to figure out where these satellites are (or where they were at approximately 01:19:57.432 UTC on December 3rd, 2020).
#
# ## Coordinate Systems Primer
#
# A brief geometry lesson is in order before I continue. If you ask someone on the street what their location is, they will most likely provide you with some sort of named landmark for reference, e.g. a street name, intersection, or address. Not every location on Earth has a named landmark nearby for reference, so most children learn about the concept of latitude and longitude in school.
#
# ### LLA
#
# Latitude, longitude, and altitude (LLA from here on) together make up a [spherical coordinate system](https://en.wikipedia.org/wiki/Spherical_coordinate_system), which was developed over millennia by geographers for use in celestial navigation. Latitude, longitude and altitude are handy because they quickly give you an idea of where you are generally on Earth - positive latitudes are in the northern hemisphere, positive longitudes east of the Prime Meridian, etc. However, spherical coordinates tend to complicate the math involved with satellite navigation because they are not linear. One degree of longitude equals roughly 69 miles (111 km) at the equator but only 49 miles (79 km) at 45 degrees north or south.
#
# ### ECEF
#
# The Earth-centered, Earth-fixed ([ECEF](https://en.wikipedia.org/wiki/ECEF)) coordinate system solves this issue. It is a Cartesian coordinate system with coordinates on X, Y, and Z axes which rotate with the Earth. Satellite navigation systems perform most of their calculations in ECEF coordinates before converting to LLA for user output; hence our function for calculating satellite position will output ECEF coordinates.
#
# ### ENU/NED
#
# Another challenge arises when calculating the vector **between** two points on Earth. A linear coordinate system is also desired in this case. However, ECEF doesn't quite work for relative positioning on Earth's surface, because, for example, the X and Y axes can point east, west, up, down, or somewhere in between depending on your location. Hence a third system called [local tangent plane coordinates](https://en.wikipedia.org/wiki/Local_tangent_plane_coordinates) is used to define vectors between coordinates. The most common of these are east, north, up (ENU) and north, east, down (NED), which we will use later.
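#
# To make the LLA-ECEF relationship concrete, here is a minimal WGS-84 LLA-to-ECEF conversion in plain NumPy. This is a sketch of the same math that the `navpy` functions used later in this notebook perform internally; the ellipsoid constants below are the standard WGS-84 values.

```python
import numpy as np

# WGS-84 ellipsoid constants
A_E = 6378137.0          # semi-major axis [m]
E2 = 6.69437999014e-3    # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic latitude, longitude [deg] and altitude [m] to ECEF [m]."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A_E / np.sqrt(1.0 - E2 * np.sin(lat)**2)  # prime vertical radius of curvature
    x = (n + alt_m) * np.cos(lat) * np.cos(lon)
    y = (n + alt_m) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * np.sin(lat)
    return np.array([x, y, z])

# On the equator at the Prime Meridian, ECEF is one Earth radius down the X axis
print(lla_to_ecef(0.0, 0.0, 0.0))
# Seattle is at roughly 47.6 N, 122.3 W
print(lla_to_ecef(47.6, -122.3, 0.0))
```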
#
# Calculating satellite position from ephemeris data is a bit involved, but the details are conveniently laid out in Tables 20-III and 20-IV of the GPS [Interface Specification Document](https://www.gps.gov/technical/icwg/IS-GPS-200L.pdf). The function below performs this calculation, taking a Pandas dataframe with the ephemeris parameters for every satellite and a dataseries containing the transmit time for all received signals in seconds since the start of the current GPS week. It also calculates the satellite clock offsets for the given transmission time.
# +
def calculate_satellite_position(ephemeris, transmit_time):
mu = 3.986005e14
OmegaDot_e = 7.2921151467e-5
F = -4.442807633e-10
sv_position = pd.DataFrame()
sv_position['sv']= ephemeris.index
sv_position.set_index('sv', inplace=True)
sv_position['t_k'] = transmit_time - ephemeris['t_oe']
A = ephemeris['sqrtA'].pow(2)
n_0 = np.sqrt(mu / A.pow(3))
n = n_0 + ephemeris['deltaN']
M_k = ephemeris['M_0'] + n * sv_position['t_k']
E_k = M_k
err = pd.Series(data=[1]*len(sv_position.index))
i = 0
    while err.abs().max() > 1e-8 and i < 10:  # iterate Kepler's equation until every satellite's E_k converges
new_vals = M_k + ephemeris['e']*np.sin(E_k)
err = new_vals - E_k
E_k = new_vals
i += 1
sinE_k = np.sin(E_k)
cosE_k = np.cos(E_k)
    # relativistic clock correction (IS-GPS-200): delT_r = F * e * sqrt(A) * sin(E_k)
    delT_r = F * ephemeris['e'] * ephemeris['sqrtA'] * sinE_k
    delT_oc = transmit_time - ephemeris['t_oc']
    sv_position['delT_sv'] = ephemeris['SVclockBias'] + ephemeris['SVclockDrift'] * delT_oc + ephemeris['SVclockDriftRate'] * delT_oc.pow(2) + delT_r
v_k = np.arctan2(np.sqrt(1-ephemeris['e'].pow(2))*sinE_k,(cosE_k - ephemeris['e']))
Phi_k = v_k + ephemeris['omega']
sin2Phi_k = np.sin(2*Phi_k)
cos2Phi_k = np.cos(2*Phi_k)
du_k = ephemeris['C_us']*sin2Phi_k + ephemeris['C_uc']*cos2Phi_k
dr_k = ephemeris['C_rs']*sin2Phi_k + ephemeris['C_rc']*cos2Phi_k
di_k = ephemeris['C_is']*sin2Phi_k + ephemeris['C_ic']*cos2Phi_k
u_k = Phi_k + du_k
r_k = A*(1 - ephemeris['e']*np.cos(E_k)) + dr_k
i_k = ephemeris['i_0'] + di_k + ephemeris['IDOT']*sv_position['t_k']
x_k_prime = r_k*np.cos(u_k)
y_k_prime = r_k*np.sin(u_k)
Omega_k = ephemeris['Omega_0'] + (ephemeris['OmegaDot'] - OmegaDot_e)*sv_position['t_k'] - OmegaDot_e*ephemeris['t_oe']
sv_position['x_k'] = x_k_prime*np.cos(Omega_k) - y_k_prime*np.cos(i_k)*np.sin(Omega_k)
sv_position['y_k'] = x_k_prime*np.sin(Omega_k) + y_k_prime*np.cos(i_k)*np.cos(Omega_k)
sv_position['z_k'] = y_k_prime*np.sin(i_k)
return sv_position
# Run the function and check out the results:
sv_position = calculate_satellite_position(ephemeris, one_epoch['tTxSeconds'])
print(sv_position)
# -
# We now have raw pseudorange values, satellite positions, and satellite clock offsets.
#
# These pseudoranges can be represented by the equation ([Misra and Enge, 2006](https://www.gpstextbook.com/)):
#
# $$
# \begin{equation}
# \rho^{(k)}_{measured} = \sqrt{(x^{(k)} - x_r)^2 + (y^{(k)} - y_r)^2 + (z^{(k)} - z_r)^2} + c \cdot \delta t_r - c \cdot \delta t^{(k)} + I^{(k)}(t) + T^{(k)}(t) + \epsilon_\rho^{(k)}
# \end{equation}
# $$
#
# where $x_r$, $y_r$, $z_r$, and $ \delta t_r $ are the receiver position (in ECEF coordinates) and clock bias (in seconds), $x^{(k)}$, $y^{(k)}$, $z^{(k)}$, and $ \delta t^{(k)} $ are the position and clock bias of the $k$th satellite, $c$ is the speed of light, and $I^{(k)}(t)$ and $T^{(k)}(t)$ are the ionospheric and tropospheric delay terms for the $k$th satellite. $\epsilon_\rho^{(k)}$ accounts for all remaining measurement and modeling errors. Following convention in the GNSS community, a superscript is used to identify measurements from a specific satellite, with parentheses to distinguish superscripts from exponents.
#
# Since we've calculated the satellite clock offsets, let's go ahead and apply them to generate a corrected pseudorange. We're also going to ignore ionospheric and tropospheric delays, thus lumping them into the error term $\epsilon_\rho^{(k)}$. The expression for the corrected pseudorange then becomes:
#
# $$
# \rho^{(k)}_{corrected} = \rho^{(k)} + c \cdot \delta t^{(k)} = \sqrt{(x^{(k)} - x_r)^2 + (y^{(k)} - y_r)^2 + (z^{(k)} - z_r)^2} + c \cdot \delta t_r + \epsilon_\rho^{(k)}
# $$
#
# +
#initial guesses of receiver clock bias and position
b0 = 0
x0 = np.array([0, 0, 0])
xs = sv_position[['x_k', 'y_k', 'z_k']].to_numpy()
# Apply satellite clock bias to correct the measured pseudorange values
pr = one_epoch['PrM'] + LIGHTSPEED * sv_position['delT_sv']
pr = pr.to_numpy()
# -
# The following is adapted from ([Blewitt, 1997](http://www.nbmg.unr.edu/staff/pdfs/blewitt%20basics%20of%20gps.pdf)).
#
# Let's drop the subscript for the corrected pseudorange measurements and refer to them simply as $\rho^{(k)}$. With four or more linearly independent pseudorange measurements, we end up with the system of equations:
#
# $$
# \begin{gather*}
# \rho^{(1)} = \sqrt{(x^{(1)} - x_r)^2 + (y^{(1)} - y_r)^2 + (z^{(1)} - z_r)^2} + c \cdot \delta t_r + \epsilon_\rho^{(1)} \\
# \rho^{(2)} = \sqrt{(x^{(2)} - x_r)^2 + (y^{(2)} - y_r)^2 + (z^{(2)} - z_r)^2} + c \cdot \delta t_r + \epsilon_\rho^{(2)} \\
# \rho^{(3)} = \sqrt{(x^{(3)} - x_r)^2 + (y^{(3)} - y_r)^2 + (z^{(3)} - z_r)^2} + c \cdot \delta t_r + \epsilon_\rho^{(3)} \\
# \cdots \\
# \rho^{(k)} = \sqrt{(x^{(k)} - x_r)^2 + (y^{(k)} - y_r)^2 + (z^{(k)} - z_r)^2} + c \cdot \delta t_r + \epsilon_\rho^{(k)}
# \end{gather*}
# $$
#
# where $x_r, y_r, z_r$, and $\delta t_r$ are the unknowns. We will substitute the receiver clock bias term with an equivalent value in meters:
#
# $$
# b = c \cdot \delta t_r
# $$
#
# Since the system is nonlinear, we must approximate the solution using the [Gauss-Newton algorithm](https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm) for solving [non-linear least squares](https://en.wikipedia.org/wiki/Non-linear_least_squares) problems. Let us define a function, $P$, for calculating an estimated or modeled pseudorange $\hat{\rho}^{(k)}$ based on estimates $\hat{x}, \hat{y}, \hat{z}, \hat{b}$ of the real position and clock offset values $x, y, z, b$:
#
# $$
# \hat{\rho}^{(k)} = P(\hat{x}, \hat{y}, \hat{z}, \hat{b}, k) = \sqrt{(x^{(k)} - \hat{x})^2 + (y^{(k)} - \hat{y})^2 + (z^{(k)} - \hat{z})^2} + \hat{b}
# $$
#
# We perform a first-order Taylor series expansion about our estimated values:
#
# $$
# P(x, y, z, b, k) = P(\hat{x}, \hat{y}, \hat{z}, \hat{b}, k) + P'(\hat{x}, \hat{y}, \hat{z}, \hat{b}, k)(x - \hat{x}, y - \hat{y}, z - \hat{z}, b - \hat{b})
# $$
#
# where $P(x, y, z, b, k)$ is equal to our measured or observed psedurange $\rho^{(k)}$. The difference between the observed and computed pseudorange is therefore:
#
# $$
# \Delta \rho = P(x, y, z, b, k) - P(\hat{x}, \hat{y}, \hat{z}, \hat{b}, k) = P'(\hat{x}, \hat{y}, \hat{z}, \hat{b}, k)(x - \hat{x}, y - \hat{y}, z - \hat{z}, b - \hat{b})
# $$
#
# Converting to matrix form and adding the error term $\epsilon$, we have:
#
# $$
# \Delta \rho= \begin{pmatrix} \frac{\delta \rho}{\delta x} & \frac{\delta \rho}{\delta y} &
# \frac{\delta \rho}{\delta z} & \frac{\delta \rho}{\delta b} \end{pmatrix}
# \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \\ \Delta b \end{pmatrix} + \epsilon
# $$
#
# Thus for $k$ satellites, we now have the system of $k$ linear equations:
#
# $$
# \begin{pmatrix} \Delta \rho^{(1)} \\
# \Delta \rho^{(2)} \\
# \Delta \rho^{(3)} \\
# \vdots \\
# \Delta \rho^{(k)}
# \end{pmatrix}
# =
# \begin{pmatrix} \frac{\delta \rho^{(1)}}{\delta x} & \frac{\delta \rho^{(1)}}{\delta y} &
# \frac{\delta \rho^{(1)}}{\delta z} & \frac{\delta \rho^{(1)}}{\delta b} \\
# \frac{\delta \rho^{(2)}}{\delta x} & \frac{\delta \rho^{(2)}}{\delta y} &
# \frac{\delta \rho^{(2)}}{\delta z} & \frac{\delta \rho^{(2)}}{\delta b} \\
# \frac{\delta \rho^{(3)}}{\delta x} & \frac{\delta \rho^{(3)}}{\delta y} &
# \frac{\delta \rho^{(3)}}{\delta z} & \frac{\delta \rho^{(3)}}{\delta b} \\
# \vdots & \vdots & \vdots & \vdots \\
# \frac{\delta \rho^{(k)}}{\delta x} & \frac{\delta \rho^{(k)}}{\delta y} &
# \frac{\delta \rho^{(k)}}{\delta z} & \frac{\delta \rho^{(k)}}{\delta b}
# \end{pmatrix}
# \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \\ \Delta b \end{pmatrix} +
# \begin{pmatrix}
# \epsilon^{(1)} \\
# \epsilon^{(2)} \\
# \epsilon^{(3)} \\
# \vdots \\
# \epsilon^{(k)} \\
# \end{pmatrix}
# $$
#
# In matrix notation, this equation is commonly written
#
# $$
# \mathbf{b} = \mathbf{Ax} + \mathbf{\epsilon}
# $$
#
# where vectors are denoted with boldface lower case characters and matrices with boldface upper case. GNSS resources typically refer to the matrix $\mathbf{A}$ as the geometry matrix $\mathbf{G}$. We will follow this convention here. We will call the pseudorange residual matrix $\Delta \mathbf{\rho}$. It also improves clarity to separate the receiver position from the clock offset, so we will continue to denote the clock offset $b$ while writing the position $\Delta x, \Delta y, \Delta z$ in vector form $\Delta \mathbf{x}$. Thus our system of linear equations becomes,
#
# $$
# \Delta \mathbf{\rho} = \mathbf{G} \begin{pmatrix} \Delta \mathbf{x} \\ \Delta b \end{pmatrix} + \mathbf{\epsilon}
# $$
#
# It turns out the first three columns of $\mathbf{G}$ are just the negatives of the unit vectors pointing from the estimated receiver position to the $k$th satellite - negative because moving the receiver toward a satellite shortens the modeled range. Each unit vector effectively represents the proportion of the pseudorange which is tangent to each dimension. These unit vectors are found by normalizing the vectors $\mathbf{x}^{(k)} - \mathbf{\hat{x}}$ by their magnitudes $r^{(k)}$, where
#
# $$
# r^{(k)} = \lVert \mathbf{x}^{(k)} - \mathbf{\hat{x}} \rVert = \sqrt{(x^{(k)} - \hat{x})^2 + (y^{(k)} - \hat{y})^2 + (z^{(k)} - \hat{z})^2}
# $$
#
# All values in the fourth column equal 1, because the clock offset adds directly to the pseudorange with no geometric scaling factor required. Thus the geometry matrix G becomes:
#
# $$
# \mathbf{G} =
# \begin{pmatrix}
# -\frac{x^{(1)} - \hat{x}}{r^{(1)}} & -\frac{y^{(1)} - \hat{y}}{r^{(1)}} &
# -\frac{z^{(1)} - \hat{z}}{r^{(1)}} & 1 \\
# -\frac{x^{(2)} - \hat{x}}{r^{(2)}} & -\frac{y^{(2)} - \hat{y}}{r^{(2)}} &
# -\frac{z^{(2)} - \hat{z}}{r^{(2)}} & 1 \\
# -\frac{x^{(3)} - \hat{x}}{r^{(3)}} & -\frac{y^{(3)} - \hat{y}}{r^{(3)}} &
# -\frac{z^{(3)} - \hat{z}}{r^{(3)}} & 1 \\
# \vdots & \vdots & \vdots & \vdots \\
# -\frac{x^{(k)} - \hat{x}}{r^{(k)}} & -\frac{y^{(k)} - \hat{y}}{r^{(k)}} &
# -\frac{z^{(k)} - \hat{z}}{r^{(k)}} & 1 \\
# \end{pmatrix}
# $$
#
#
# For a full derivation of these partial derivatives, refer to Misra and Enge, [Raquet (2013)](http://indico.ictp.it/event/a12180/session/21/contribution/12/material/0/0.pdf), or <NAME>'s excellent [blog post](https://www.telesens.co/2017/07/17/calculating-position-from-raw-gps-data/).
#
# The matrix $\Delta \mathbf{\rho}$ holds the differences between the measured pseudoranges and those modeled from the estimated parameters, calculated as follows:
#
# $$
# \Delta \mathbf{\rho} =
# \begin{pmatrix} \Delta \rho^{(1)} \\
# \Delta \rho^{(2)} \\
# \Delta \rho^{(3)} \\
# \vdots \\
# \Delta \rho^{(k)}
# \end{pmatrix}
# =
# \begin{pmatrix}
# \rho^{(1)} - \hat{\rho}^{(1)} \\
# \rho^{(2)} - \hat{\rho}^{(2)} \\
# \rho^{(3)} - \hat{\rho}^{(3)} \\
# \vdots \\
# \rho^{(k)} - \hat{\rho}^{(k)} \\
# \end{pmatrix}
# =
# \begin{pmatrix}
# \rho^{(1)} - r^{(1)} - b \\
# \rho^{(2)} - r^{(2)} - b \\
# \rho^{(3)} - r^{(3)} - b \\
# \vdots \\
# \rho^{(k)} - r^{(k)} - b \\
# \end{pmatrix}
# $$
#
# Solving the linear system to minimize the sum of the squared residuals $\epsilon^{(k)}$ results in the solution to the normal equations,
#
# $$
# \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \\ \Delta b \end{pmatrix}
# =
# \begin{pmatrix} \Delta \mathbf{x} \\ \Delta b \end{pmatrix}
# =
# \begin{pmatrix}
# \mathbf{G}^T \mathbf{G}
# \end{pmatrix}^{-1} \mathbf{G}^T \Delta \mathbf{\rho}
# $$
#
# See p. 17 of [Blewitt](http://www.nbmg.unr.edu/staff/pdfs/blewitt%20basics%20of%20gps.pdf) (1997) for a full derivation of this solution. After performing each linear least squares update, the estimated parameters $\mathbf{\hat{x}}, \mathbf{b}$ are updated:
#
# $$
# \begin{pmatrix}
# \hat{x}_{i + 1} \\
# \hat{y}_{i + 1} \\
# \hat{z}_{i + 1} \\
# \hat{b}_{i + 1} \\
# \end{pmatrix}
# =
# \begin{pmatrix}
# \hat{x}_{i} \\
# \hat{y}_{i} \\
# \hat{z}_{i} \\
# \hat{b}_{i} \\
# \end{pmatrix}
# +
# \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \\ \Delta b \end{pmatrix}
# $$
#
# This process is performed iteratively until the magnitude of the change in position, $\lVert \Delta \mathbf{x} \rVert$, falls below a set threshold. In practice it usually only takes a few iterations for the solution to converge.
#
# Below is the function to calculate rough receiver position. Its inputs are Numpy arrays containing satellite positions in the ECEF frame, pseudorange measurements in meters, and initial estimates of receiver position and clock bias. The unit for clock bias is meters, so you need to divide by the speed of light to convert to seconds.
#
# This function is adapted and simplified from the MATLAB code in <NAME>'s [blog](https://www.telesens.co/2017/07/17/calculating-position-from-raw-gps-data/). It's a simple least squares iteration with no correction for ionospheric delay or satellite movement during signal time of flight.
# +
def least_squares(xs, measured_pseudorange, x0, b0):
dx = 100*np.ones(3)
b = b0
# set up the G matrix with the right dimensions. We will later replace the first 3 columns
# note that b here is the clock bias in meters equivalent, so the actual clock bias is b/LIGHTSPEED
G = np.ones((measured_pseudorange.size, 4))
iterations = 0
while np.linalg.norm(dx) > 1e-3:
r = np.linalg.norm(xs - x0, axis=1)
phat = r + b0
deltaP = measured_pseudorange - phat
G[:, 0:3] = -(xs - x0) / r[:, None]
sol = np.linalg.inv(np.transpose(G) @ G) @ np.transpose(G) @ deltaP
dx = sol[0:3]
db = sol[3]
x0 = x0 + dx
b0 = b0 + db
norm_dp = np.linalg.norm(deltaP)
return x0, b0, norm_dp
x, b, dp = least_squares(xs, pr, x0, b0)
print(navpy.ecef2lla(x))
print(b/LIGHTSPEED)
print(dp)
# -
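# As an aside: `least_squares()` forms $(\mathbf{G}^T\mathbf{G})^{-1}$ explicitly, which can be numerically fragile when the satellite geometry is poor. Below is a sketch of the same Gauss-Newton step using `np.linalg.lstsq`, which solves the least-squares problem through a more stable factorization. The satellite geometry in the demonstration is synthetic, not taken from the dataset above:

```python
import numpy as np

def least_squares_step(xs, pr, x0, b0):
    """One Gauss-Newton update via np.linalg.lstsq instead of an explicit inverse.

    xs: (k, 3) satellite ECEF positions [m]; pr: (k,) corrected pseudoranges [m];
    x0: (3,) receiver position estimate [m]; b0: receiver clock bias estimate [m].
    """
    r = np.linalg.norm(xs - x0, axis=1)          # modeled ranges
    G = np.ones((pr.size, 4))
    G[:, 0:3] = -(xs - x0) / r[:, None]          # geometry matrix
    deltaP = pr - (r + b0)                       # pseudorange residuals
    sol, *_ = np.linalg.lstsq(G, deltaP, rcond=None)
    return x0 + sol[0:3], b0 + sol[3]

# Synthetic check: 8 satellites on a 26,600 km shell, receiver near the origin
rng = np.random.default_rng(0)
dirs = rng.normal(size=(8, 3))
dirs[:, 2] = np.abs(dirs[:, 2])                  # keep satellites above the horizon
sats = 2.66e7 * dirs / np.linalg.norm(dirs, axis=1)[:, None]
truth = np.array([100.0, -200.0, 50.0])
ranges = np.linalg.norm(sats - truth, axis=1) + 30.0   # 30 m clock bias
x, b = np.zeros(3), 0.0
for _ in range(10):
    x, b = least_squares_step(sats, ranges, x, b)
print(x, b)
```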
# Looks like it worked! The pseudorange residual is in the tens of meters, which is reasonable considering we didn't correct for ionospheric delay or the rotation of the Earth during signal transmission.
#
# Now let's wrap this entire process into a loop to calculate receiver position for every measurement epoch in the dataset with more than 4 satellite measurements.
#
#
# + tags=[]
ecef_list = []
for epoch in measurements['Epoch'].unique():
one_epoch = measurements.loc[(measurements['Epoch'] == epoch) & (measurements['prSeconds'] < 0.1)]
one_epoch = one_epoch.drop_duplicates(subset='SvName').set_index('SvName')
if len(one_epoch.index) > 4:
timestamp = one_epoch.iloc[0]['UnixTime'].to_pydatetime(warn=False)
sats = one_epoch.index.unique().tolist()
ephemeris = manager.get_ephemeris(timestamp, sats)
sv_position = calculate_satellite_position(ephemeris, one_epoch['tTxSeconds'])
xs = sv_position[['x_k', 'y_k', 'z_k']].to_numpy()
pr = one_epoch['PrM'] + LIGHTSPEED * sv_position['delT_sv']
pr = pr.to_numpy()
x, b, dp = least_squares(xs, pr, x, b)
ecef_list.append(x)
ecef_array = np.stack(ecef_list, axis=0)
lla_array = np.stack(navpy.ecef2lla(ecef_array), axis=1)
ref_lla = lla_array[0, :]
ned_array = navpy.ecef2ned(ecef_array, ref_lla[0], ref_lla[1], ref_lla[2])
# -
# Finally, we'll convert the list of NED translations into a Pandas dataframe and create a plot of change in position relative to the first measurement epoch.
# +
import matplotlib.pyplot as plt
ned_df = pd.DataFrame(ned_array, columns=['N', 'E', 'D'])
plt.style.use('dark_background')
plt.plot(ned_df['E'], ned_df['N'])
# Add titles
plt.title('Position Offset From First Epoch')
plt.xlabel("East (m)")
plt.ylabel("North (m)")
plt.gca().set_aspect('equal', adjustable='box')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Physics 256
# ## Simple Harmonic Oscillators
# <img src="http://i.imgur.com/l2WMuTN.gif">
import style
style._set_css_style('../include/bootstrap.css')
# ## Last Time
#
# ### [Notebook Link: 15_Baseball.ipynb](./15_Baseball.ipynb)
#
# - motion of a pitched ball
# - drag and the magnus force
# - surface roughness of a projectile
#
# ## Today
#
# - The simple harmonic pendulum
# ## Setting up the Notebook
# + jupyter={"outputs_hidden": false}
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
plt.style.use('../include/notebook.mplstyle');
# %config InlineBackend.figure_format = 'svg'
# -
# ## Equation of Motion
#
# The equation of motion for a simple linear pendulum of length $\ell$ and mass $m$ is given by:
#
# $$ m \frac{d \vec{v}}{d t} = \vec{F}_{\rm g} = -m g \hat{y}$$
#
# Measuring $x$ and $y$ from the equilibrium position we have
# \begin{align}
# x &= \ell \sin \theta \\
# y &= \ell (1-\cos\theta)
# \end{align}
#
# The kinetic and potential energy are:
#
# \begin{align}
# T &= \frac{1}{2} m \dot{r}^2 \\
# &= \frac{1}{2} m (\dot{x}^2 + \dot{y}^2) \\
# &= \frac{1}{2} m \ell^2 \dot{\theta}^2
# \end{align}
#
# \begin{equation}
# V = m g \ell (1-\cos\theta).
# \end{equation}
#
# Thus, the Lagrangian is:
# \begin{align}
# \mathcal{L} &= T - V \\
# &= \frac{1}{2} m \ell^2 \dot{\theta}^2 - m g \ell (1-\cos\theta)
# \end{align}
# and the equation of motion is given by the Euler-Lagrange formula
#
# \begin{align}
# \frac{\partial \mathcal{L}}{\partial \theta} - \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{\theta}} &= 0 \\
# -m g \ell \sin \theta - \frac{d}{dt} (m\ell^2 \dot{\theta}) &= 0
# \end{align}
#
# which yields the familiar equation:
# \begin{equation}
# \ddot{\theta} = -\frac{g}{\ell} \sin\theta .
# \end{equation}
#
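# If you have SymPy installed (it is not otherwise used in this notebook), the Euler-Lagrange derivation above can be verified symbolically. This is just a sketch to check the algebra:

```python
import sympy as sp

t = sp.symbols('t')
m, g, ell = sp.symbols('m g ell', positive=True)
θ = sp.Function('theta')(t)

# coordinates measured from the equilibrium position, as above
x = ell * sp.sin(θ)
y = ell * (1 - sp.cos(θ))
T = sp.Rational(1, 2) * m * (sp.diff(x, t)**2 + sp.diff(y, t)**2)
V = m * g * y
L = T - V

# Euler-Lagrange: dL/dθ - d/dt (dL/dθ̇) = 0
eom = sp.diff(L, θ) - sp.diff(sp.diff(L, sp.diff(θ, t)), t)
# dividing by -m ℓ² should recover θ̈ + (g/ℓ) sin θ
print(sp.simplify(-eom / (m * ell**2)))
```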
# To solve this analytically, we usually consider only small-angle oscillations, allowing us to replace $\sin\theta \simeq \theta$ for $\theta \ll 1$. For $\theta(0) = \theta_0 \ll 1$ and $\dot{\theta}(0) = 0$ the equation can then be integrated to give
#
# $$ \theta(t) = \theta_0 \cos \left( \sqrt{\frac{g}{\ell}} t \right).$$
#
# <div class="span alert alert-success">
# <h2> Programming challenge </h2>
# Use the Euler method to directly integrate the full equation of motion and compare with the analytical expression for $\theta_0 = \pi/12$ and $\dot{\theta}(0) =0$ for $\ell = 0.25$ m.
#
# \begin{align}
# \theta_{n+1} &= \theta_n + \omega_n \Delta t \\
# \omega_{n+1} &= \omega_n - \frac{g}{\ell} \sin\theta_n \Delta t \\
# \end{align}
# </div>
#
# <!--
# θ[n+1] = θ[n] + ω[n]*Δt
# ω[n+1] = ω[n] -(g/ℓ)*np.sin(θ[n])*Δt
# -->
# + jupyter={"outputs_hidden": false}
from scipy.constants import pi as π
from scipy.constants import g
# constants and initial conditions
ℓ = 0.25 # m
Δt = 0.001 # s
t = np.arange(0.0,5.0,Δt)
θ,ω = np.zeros_like(t),np.zeros_like(t)
θ[0] = π/12.0 # rad
for n in range(t.size-1):
pass
# the small angle solution
plt.plot(t, θ[0]*np.cos(np.sqrt(g/ℓ)*t), label='Small angle solution')
# the Euler method
plt.plot(t,θ, label='Euler method')
plt.legend(loc='lower left')
plt.xlabel('Time [s]')
plt.ylabel('θ(t) [rad]')
# -
# ## What went wrong?
#
# The oscillations are **growing** with time! This is our first encounter with a numerical procedure that is **unstable**.
#
# Let's examine the total energy of the system where we can approximate $\cos\theta \simeq 1 - \theta^2/2$:
#
# \begin{align}
# E &= \frac{1}{2} m \ell^2 \omega^2 + m g \ell (1-\cos\theta) \\
# &\simeq \frac{1}{2}m \ell^2 \left(\omega^2 + \frac{g}{\ell}\theta^2 \right).
# \end{align}
#
# Writing things in terms of our Euler variables:
#
# \begin{align}
# E_{n+1} &= \frac{1}{2}m\ell^2 \left[\left(\omega_n - \frac{g}{\ell}\theta_n \Delta t\right)^2 + \frac{g}{\ell}\left(\theta_n + \omega_n\Delta t\right)^2 \right] \\
# &= E_{n} + \frac{1}{2}mg \ell \left(\omega_n^2 + \frac{g}{\ell} \theta_n^2\right) \Delta t^2.
# \end{align}
#
# This tells us the origin of the problem: **the energy is increasing without bound, regardless of the size of $\Delta t$**.
#
# ### Question: Why didn't we encounter this problem previously?
#
# <!-- With the exception of constant acceleration, we always had it, we just never noticed it on the timescales we were interested in. -->
#
# ### How do we fix it?
#
# We can consider alternative higher-order ODE solvers (as described in Appendix A of the textbook). However, there is a very simple fix that works here:
#
# ### Euler-Cromer Method
# Looking at our original discretized equations:
#
# \begin{align}
# \theta_{n+1} &= \theta_n + \omega_n \Delta t \\
# \omega_{n+1} &= \omega_n - \frac{g}{\ell} \sin\theta_n \Delta t
# \end{align}
#
# we can make the simple observation that we can reverse the order of evaluation and use the updated value of $\omega$ in our calculation of $\theta$.
#
# \begin{align}
# \omega_{n+1} &= \omega_n - \frac{g}{\ell} \sin\theta_n \Delta t \\
# \theta_{n+1} &= \theta_n + \omega_{n+1} \Delta t
# \end{align}
#
# This leads to the energy being *approximately* conserved at each step:
#
# \begin{equation}
# E_{n+1} = E_{n} + \frac{1}{2}m g \ell \left(\omega_n^2 - \frac{g}{\ell}\theta_n^2 \right)\Delta t^2 + \mathrm{O}(\Delta t^3).
# \end{equation}
#
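# A quick numerical check of this energy argument (a self-contained sketch with its own constants, not part of the original notebook):

```python
import numpy as np

g, L = 9.81, 0.25        # gravitational acceleration (m/s^2) and pendulum length (m)
m = 1.0                  # mass (kg); only sets the energy scale
dt, n_steps = 0.001, 5000

def energy(theta, omega):
    # small-angle energy: E ~ (1/2) m L^2 (omega^2 + (g/L) theta^2)
    return 0.5 * m * L**2 * (omega**2 + (g / L) * theta**2)

def integrate(euler_cromer):
    theta, omega = np.pi / 12, 0.0
    for _ in range(n_steps):
        if euler_cromer:
            omega -= (g / L) * np.sin(theta) * dt  # update omega first ...
            theta += omega * dt                    # ... then use the new omega
        else:
            theta_next = theta + omega * dt        # plain Euler: both updates
            omega -= (g / L) * np.sin(theta) * dt  # use the old values
            theta = theta_next
    return energy(theta, omega)

E0 = energy(np.pi / 12, 0.0)
print(integrate(False) / E0)   # plain Euler: energy has grown
print(integrate(True) / E0)    # Euler-Cromer: energy stays near its initial value
```

# Plain Euler gains energy every step, as derived above, while Euler-Cromer's gain term oscillates in sign and averages away.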
# + jupyter={"outputs_hidden": false}
from scipy.constants import pi as π
from scipy.constants import g
# constants and initial conditions
ℓ = 0.25 # m
Δt = 0.001 # s
t = np.arange(0.0,4.0,Δt)
θ,ω = np.zeros_like(t),np.zeros_like(t)
θ[0] = π/12.0 # rad
for n in range(t.size-1):
    ω[n+1] = ω[n] - (g/ℓ)*np.sin(θ[n])*Δt
    θ[n+1] = θ[n] + ω[n+1]*Δt
# the small angle solution
plt.plot(t, θ[0]*np.cos(np.sqrt(g/ℓ)*t), label='Small angle solution')
# the Euler-Cromer method
plt.plot(t,θ, label='Euler-Cromer method')
plt.legend(loc='lower left',frameon=True)
plt.xlabel('Time [s]')
plt.ylabel('θ(t) [rad]')
# -
# ## There are still some noticeable deviations, thoughts?
#
#
# <!--Non-linear corrections. -->
#
# ## Turning on Non-Linearity
#
# An analytical solution exists without the small-angle approximation, but it is considerably more complicated:
#
# \begin{eqnarray}
# \theta(t) &=& 2 \sin^{-1} \left\{ k\, \mathrm{sn}\left[K(k^2)-\sqrt{\frac{g}{\ell}} t; k^2\right] \right\} \newline
# k &=& \sin \frac{\theta_0}{2} \newline
# K(m) &=& \int_0^1 \frac{d z}{\sqrt{(1-z^2)(1-m z^2)}}
# \end{eqnarray}
#
# <!--
#
#
#
# # the exact solution
# plt.plot(t,non_linear_θ(ℓ,θ[0],t), label='Exact solution')
#
# -->
def non_linear_θ(ℓ,θₒ,t):
    '''The solution for θ for the non-linear pendulum.'''
    # use special functions
    from scipy import special
    k = np.sin(θₒ/2.0)
    K = special.ellipk(k*k)
    (sn,cn,dn,ph) = special.ellipj(K-np.sqrt(g/ℓ)*t,k*k)
    return 2.0*np.arcsin(k*sn)
# + jupyter={"outputs_hidden": false}
# the small angle solution
plt.plot(t, θ[0]*np.cos(np.sqrt(g/ℓ)*t), label='Small angle solution')
# the Euler-Cromer method
plt.plot(t,θ,label='Euler-Cromer method')
# the exact solution in terms of special functions
plt.plot(t,non_linear_θ(ℓ,θ[0],t), label='Exact', alpha=0.5)
plt.legend(loc='lower left',frameon=True)
plt.xlabel('Time [s]')
plt.ylabel('θ(t) [rad]')
# -
| 4-assets/BOOKS/Jupyter-Notebooks/Overflow/16_SimpleHarmonicMotion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="8AdUt3HiUrc3"
# # Querying Multiple Data Formats
# In this demo we join a CSV, a Parquet file, and a GPU DataFrame (GDF) in a single query using BlazingSQL.
#
# In this notebook, we will cover:
# - How to set up [BlazingSQL](https://blazingsql.com) and the [RAPIDS AI](https://rapids.ai/) suite.
# - How to create and then join BlazingSQL tables from CSV, Parquet, and GPU DataFrame (GDF) sources.
#
# 
# + [markdown] colab_type="text" id="_h26epJpUeZP"
# ## Setup
# ### Environment Sanity Check
#
# RAPIDS packages (BlazingSQL included) require Pascal+ architecture to run. For Colab, this translates to a T4 GPU instance.
#
# The cell below will let you know what type of GPU you've been allocated, and how to proceed.
# + colab={"base_uri": "https://localhost:8080/", "height": 312} colab_type="code" id="_lf6yKBoRYGy" outputId="b4b13e6e-dd62-4ed6-d261-1195e7d2b2e1"
# !wget https://github.com/BlazingDB/bsql-demos/raw/master/utils/colab_env.py
# !python colab_env.py
# + [markdown] colab_type="text" id="xM8xTlqeRi-g"
# ## Installs
# The cell below pulls our Google Colab install script from the `bsql-demos` repo then runs it. The script first installs miniconda, then uses miniconda to install BlazingSQL and RAPIDS AI. This takes a few minutes to run.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="roG-52LWRsRC" outputId="288c24b8-3713-481b-afb9-f3780bdbec6a"
# !wget https://github.com/BlazingDB/bsql-demos/raw/master/utils/bsql-colab.sh
# !bash bsql-colab.sh
import sys, os, time
sys.path.append('/usr/local/lib/python3.6/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
import subprocess
subprocess.Popen(['blazingsql-orchestrator', '9100', '8889', '127.0.0.1', '8890'],
                 stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess.Popen(['java', '-jar', '/usr/local/lib/blazingsql-algebra.jar', '-p', '8890'])
import pyblazing.apiv2.context as cont
cont.runRal()
time.sleep(1)
# + [markdown] colab_type="text" id="xEFX-xVGzxmJ"
# ## Grab Data
#
# The data is in our public S3 bucket so we can use wget to grab it
# + colab={"base_uri": "https://localhost:8080/", "height": 609} colab_type="code" id="55nBTddNz-Py" outputId="fb658a23-49f7-48b8-f406-3cb3e69e56c9"
# !wget 'https://blazingsql-colab.s3.amazonaws.com/cancer_data/cancer_data_00.csv'
# !wget 'https://blazingsql-colab.s3.amazonaws.com/cancer_data/cancer_data_01.parquet'
# !wget 'https://blazingsql-colab.s3.amazonaws.com/cancer_data/cancer_data_02.csv'
# + [markdown] colab_type="text" id="aMwNKxePSwOp"
# ## Import packages and create Blazing Context
# You can think of the BlazingContext much like a Spark Context (i.e. where information such as FileSystems you have registered and Tables you have created will be stored). If you have issues running this cell, restart runtime and try running it again.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="azZ7l2q7odYT" outputId="a5302d6e-307e-45c5-a682-c786cc999a40"
from blazingsql import BlazingContext
# start up BlazingSQL
bc = BlazingContext()
# + [markdown] colab_type="text" id="N2bqpDEnZyQf"
# ### Create Table from CSV
# Here we create a BlazingSQL table directly from a comma-separated values (CSV) file.
# + colab={} colab_type="code" id="HhRhj-ZvZygH"
# define column names and types
column_names = ['diagnosis_result', 'radius', 'texture', 'perimeter']
column_types = ['float32', 'float32', 'float32', 'float32']
# create table from CSV file
bc.create_table('data_00', '/content/cancer_data_00.csv', dtype=column_types, names=column_names)
# + [markdown] colab_type="text" id="HJFz-mqZTJ5Z"
# ### Create Table from Parquet
# Here we create a BlazingSQL table directly from an Apache Parquet file.
# + colab={} colab_type="code" id="HJuvtJDYTMyb"
# create table from Parquet file
bc.create_table('data_01', '/content/cancer_data_01.parquet')
# + [markdown] colab_type="text" id="98HJFrt5TRa0"
# ### Create Table from GPU DataFrame
# Here we use cuDF to create a GPU DataFrame (GDF), then use BlazingSQL to create a table from that GDF.
#
# The GDF is the standard memory representation for the RAPIDS AI ecosystem.
# + colab={} colab_type="code" id="14GwxmLsTV_p"
import cudf
# define column names and types
column_names = ['compactness', 'symmetry', 'fractal_dimension']
column_types = ['float32', 'float32', 'float32']
# make GDF with cuDF
gdf_02 = cudf.read_csv('/content/cancer_data_02.csv', dtype=column_types, names=column_names)
# create BlazingSQL table from GDF
bc.create_table('data_02', gdf_02)
# + [markdown] colab_type="text" id="9DAZShZ2y-Nx"
# # Join Tables Together
#
# Now we can use BlazingSQL to join all three data formats in a single federated query.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="HOYSFebvzGcX" outputId="ad133dfd-540e-4142-8f12-a4a70d803bb6"
# define a query
sql = '''
SELECT
a.*,
b.area, b.smoothness,
c.*
FROM data_00 as a
LEFT JOIN data_01 as b
ON (a.perimeter = b.perimeter)
LEFT JOIN data_02 as c
ON (b.compactness = c.compactness)
'''
# join the tables together
gdf = bc.sql(sql)
# display results
gdf
# + [markdown] colab_type="text" id="wygAeTIFTm2X"
# # You're Ready to Rock
# And... that's it! You are now live with BlazingSQL.
#
# Check out our [docs](https://docs.blazingdb.com) to get fancy or to learn more about how BlazingSQL works with the rest of [RAPIDS AI](https://rapids.ai/).
| colab_notebooks/federated_query_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from keras.layers import Input, Convolution2D, Dense, Dropout, Flatten, concatenate, BatchNormalization
from keras.models import Model # basic class for specifying and training a neural network
from keras import losses
import keras
from keras import callbacks
from collections import defaultdict
import tensorflow as tf
from src.learner.game_to_features import FeatureSet_v1_1
from src.core.game_record import GameRecord
from src.core.board import Board
# -
# !gsutil ls gs://itd-aia-ds-dproc-staging/q-gomoku/
# +
def make_features(row):
    gamestring = row.gamestring
    fs = FeatureSet_v1_1(gamestring)
    return zip(fs.q_features, fs.q_labels, fs.p_features, fs.p_labels)
gamestrings = spark.read.parquet('gs://itd-aia-ds-dproc-staging/q-gomoku/games2019-06-22-18-51/').rdd.map(lambda x : x.gamestring).collect()
#q_features, q_labels, p_features, p_labels =\
# map(list, zip(*spark.read.parquet('gs://itd-aia-ds-dproc-staging/q-gomoku/games2019-06-22-08-18/').repartition(10) \
# .rdd.flatMap(make_features))).collect()
# -
step = 2
q_features = []
q_labels = []
p_features = []
p_labels = []
game_records = []
for gamestring in gamestrings:
    fs = FeatureSet_v1_1(gamestring)
    game_records.append(GameRecord.parse(gamestring))
    q_features.extend(fs.q_features)
    q_labels.extend(fs.q_labels)
    p_features.extend(fs.p_features)
    p_labels.extend(fs.p_labels)
# +
results = defaultdict(int)
for record in game_records:
    results[record.get_winning_player()] += 1
print("Games", len(gamestrings), "Results", results, "Q_features", len(q_features))
# -
SIZE = q_features[0].shape[0]
CHANNELS = q_features[0].shape[2]
# +
inp = Input(shape=(SIZE, SIZE, CHANNELS))
# key difference between this and conv network is padding
conv_1 = Convolution2D(64, (3, 3), padding='valid', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(inp)
bn2 = BatchNormalization()(conv_1)
conv_2 = Convolution2D(32, (3, 3), padding='valid', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn2)
bn3 = BatchNormalization()(conv_2)
conv_3 = Convolution2D(16, (3, 3), padding='valid', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn3)
bn4 = BatchNormalization()(conv_3)
conv_4 = Convolution2D(16, (3, 3), padding='valid', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn4)
bn5 = BatchNormalization()(conv_4)
conv_5 = Convolution2D(8, (3, 3), padding='valid', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn5)
bn6 = BatchNormalization()(conv_5)
flat = Flatten()(bn6)
hidden = Dense(10, activation='relu', kernel_initializer='random_normal', use_bias=False)(flat)
#bn_hidden = BatchNormalization()(hidden)
#hidden_2 = Dense(50, activation='relu', kernel_initializer='random_normal', use_bias=False)(bn_hidden)
bn_final = BatchNormalization()(hidden)
out = Dense(1, use_bias=False)(bn_final)
q_model = Model(inputs=[inp], outputs=out)
q_model.compile(loss=losses.mean_squared_error, optimizer='adam', metrics=['mean_squared_error'])
# +
es = callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2)
checkpoint = callbacks.ModelCheckpoint('../models/v2_' + str(step) + '_value.net', monitor='val_loss', verbose=1, save_best_only=True, mode='min', period=2)
callback_list = [es, checkpoint]
with tf.device('/gpu:0'):
    q_model.fit(x=np.array(q_features),
                y=np.array(q_labels),
                callbacks=callback_list,
                shuffle=True,
                epochs=100,
                verbose=1,
                batch_size=500,
                validation_split=0.1)
# +
inp = Input(shape=(SIZE, SIZE, CHANNELS))
conv_1 = Convolution2D(64, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(inp)
bn2 = BatchNormalization()(conv_1)
conv_2 = Convolution2D(64, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn2)
bn3 = BatchNormalization()(conv_2)
conv_3 = Convolution2D(64, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn3)
bn4 = BatchNormalization()(conv_3)
conv_4 = Convolution2D(32, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn4)
bn5 = BatchNormalization()(conv_4)
conv_5 = Convolution2D(16, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_normal', use_bias=False)(bn5)
bn6 = BatchNormalization()(conv_5)
flat = Flatten()(bn6)
hidden = Dense(SIZE ** 2, activation='relu', kernel_initializer='random_normal', use_bias=False)(flat)
bn_final = BatchNormalization()(hidden)
out = Dense(SIZE ** 2, activation='softmax')(bn_final)
p_model = Model(inputs=[inp], outputs=out)
p_model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# +
es = callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2)
checkpoint = callbacks.ModelCheckpoint('../models/v2_' + str(step) + '_policy.net', monitor='val_loss', verbose=1, save_best_only=True, mode='min', period=2)
callback_list = [es, checkpoint]
with tf.device('/gpu:0'):
    p_model.fit(x=np.array(p_features),
                y=np.array(p_labels),
                callbacks=callback_list,
                shuffle=True,
                epochs=100,
                verbose=1,
                batch_size=500,
                validation_split=0.1)
# -
| notebooks/train_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shahd1995913/Tahalf-Mechine-Learning-DS3/blob/main/Exercise/ML1_S5_Exercise_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="yzVjwL7dxy4p"
# # ML1-S3 (K Nearest Neighbor (`KNN`) Algorithm)👨🏻💻
# ---
#
# ### Agenda
# - [Introduction](#Introduction)
# - [How does KNN Work?](#How)
# - [Model Representation](#Representation)
# - [Required Data Preparation](#Preparation)
# - [sklearn Implementation](#Implementation)
# - [Implementation from scratch](#scratch)
# - [Compare models](#Compaire)
# + [markdown] id="uBhcQp5owWL8"
# ## <a id='Introduction'></a> `Introduction`
# ---
# **K Nearest Neighbor** algorithm falls under the `Supervised` Learning category and is used for **classification** (most commonly) and **regression**. It is a versatile algorithm also used for imputing missing values and resampling datasets. As the name (K Nearest Neighbor) suggests, it considers the K nearest neighbors (data points) to predict the class or continuous value for the new data point.
# <img src=https://machinelearningknowledge.ai/wp-content/uploads/2018/08/KNN-Classification.gif width= 500>
# + [markdown] id="jEKS7-SmwZXT"
# **The algorithm’s learning is**:
#
# ---
# 1. Instance-based learning: Here we do not learn weights from training data to predict output (as in model-based algorithms) but use entire training instances to predict output for unseen data.
#
#
# 2. Lazy Learning: Model is not learned using training data prior and the learning process is postponed to a time when prediction is requested on the new instance.
#
#
# 3. Non -Parametric: In KNN, there is no predefined form of the mapping function.
# ---
# + [markdown] id="w1XbIunDwktG"
# ## <a id='How'></a> `How does KNN Work?`
# ---
# 👉**Principle**:
#
# Consider the following figure. Let us say we have plotted data points from our training set on a two-dimensional feature space. As shown, we have a total of 6 data points (3 red and 3 blue).
#
# Red data points belong to `class1` and blue data points belong to `class2`.
#
# And yellow data point in a feature space represents the new point for which a class is to be predicted. Obviously, we say it belongs to ‘class1’ (red points)
#
# **Why?** 🤔➨ Because its nearest neighbors belong to that class!
# <img src=https://editor.analyticsvidhya.com/uploads/17303KNN%20working.png width=400>
# + [markdown] id="uBHI524w0EL8"
# Yes, this is the principle behind K Nearest Neighbors. Here, nearest neighbors are those data points that have minimum distance in feature space from our new data point, and K is the number of such data points we consider in our implementation of the algorithm. Therefore, the distance metric and the K value are two important considerations when using the KNN algorithm. Euclidean distance is the most popular distance metric; you can also use Hamming, Manhattan, or Minkowski distance as per your need. To predict the class/continuous value for a new data point, the algorithm considers all the data points in the training dataset, finds the new data point's K nearest neighbors in feature space, and collects their class labels or continuous values.
#
# Then:
#
# - For classification: A class label assigned to the majority of K Nearest Neighbors from the training dataset is considered as a predicted class for the new data point.
#
# - For regression: Mean or median of continuous values assigned to K Nearest Neighbors from training dataset is a predicted continuous value for our new data point
# ---
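# The two prediction rules above can be sketched numerically (the neighbor labels and values here are hypothetical):

```python
from collections import Counter
import numpy as np

# labels and target values of the K=5 nearest neighbors (hypothetical data)
neighbor_classes = ["class1", "class1", "class2", "class1", "class2"]
neighbor_values  = [2.0, 2.5, 3.0, 2.2, 2.8]

# classification: the majority class among the K neighbors
predicted_class = Counter(neighbor_classes).most_common(1)[0][0]
print(predicted_class)   # class1 (3 of 5 votes)

# regression: the mean (or median) of the K neighbors' values
predicted_value = float(np.mean(neighbor_values))
print(predicted_value)
```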
# + [markdown] id="zaDQkdkI1HyV"
# ## <a id='Representation'></a> `Model Representation`
# ---
#
# Here, we do not learn weights and store them; instead, the entire training dataset is stored in memory. Therefore, the model representation for KNN is the entire training dataset.
#
# ### How to choose the value for K? 🤔
# ---
#
# **K is a crucial parameter in the KNN algorithm. Some suggestions for choosing K Value are**:
#
# 1. Using error curves: The figure below shows error curves for different values of K for training and test data.
#
# At low K values, there is overfitting of data/high variance. Therefore test error is high and train error is low. At K=1 in train data, the error is always zero, because the nearest neighbor to that point is that point itself. Therefore though training error is low test error is high at lower K values. This is called overfitting. As we increase the value for K, the test error is reduced.
#
# But after a certain K value, bias/ underfitting is introduced and test error goes high. So we can say initially test data error is high(due to variance) then it goes low and stabilizes and with further increase in K value, it again increases(due to bias). The K value when test error stabilizes and is low is considered as optimal value for K. From the above error curve we can choose K=8 for our KNN algorithm implementation.
#
# <img src=https://editor.analyticsvidhya.com/uploads/47280kvalue.png width=400>
#
#
# 2. Also, domain knowledge is very useful in choosing the K value.
#
# 3. K value should be odd while considering binary(two-class) classification.
# ---
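# Point 3 above is easy to see with a quick vote-counting sketch (hypothetical labels):

```python
from collections import Counter

def majority_vote(labels):
    counts = Counter(labels).most_common()
    # a tie: the two most common classes have the same count
    tie = len(counts) > 1 and counts[0][1] == counts[1][1]
    return counts[0][0], tie

print(majority_vote([1, 1, 2, 2]))  # K=4, two classes: a tie is possible
print(majority_vote([1, 1, 2]))     # K=3, two classes: a tie is impossible
```

# With an odd K and two classes, the vote can never split evenly, so the majority class is always well defined.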
# + [markdown] id="53oC1RDl1cRJ"
# ## <a id='Preparation'></a> `Required Data Preparation`
# ---
# 1. Data Scaling: To locate the data point in multidimensional feature space, it would be helpful if all features are on the same scale. Hence normalization or standardization of data will help.
#
# 2. Dimensionality Reduction: KNN may not work well if there are too many features. Hence dimensionality reduction techniques like feature selection, principal component analysis can be implemented.
#
# 3. Missing value treatment: If out of M features one feature data is missing for a particular example in the training set then we cannot locate or calculate distance from that point. Therefore deleting that row or imputation is required.
# + [markdown] id="VvlR5OfA1lbd"
# ## <a id='Implementation'></a> `sklearn Implementation`
# ---
# + id="62uWhHwx1u2q"
#First, let’s import all required libraries.
#============================================
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# + [markdown] id="ri5NCJ8u1wDT"
# After loading important libraries, we create our data using sklearn.datasets with 200 samples, 8 features, and 2 classes. Then the data is split into train (80%) and test (20%) sets and scaled using StandardScaler.
# + colab={"base_uri": "https://localhost:8080/"} id="wEqfGKRQ1yCn" outputId="0a99e02a-f695-4896-f711-02ca2f39c724"
X,Y=make_classification(n_samples= 200,n_features=8,n_informative=8,n_redundant=0,n_repeated=0,n_classes=2,random_state=14)
X_train, X_test, y_train, y_test= train_test_split(X, Y, test_size= 0.2,random_state=32)
sc= StandardScaler()
sc.fit(X_train)
X_train= sc.transform(X_train)
X_test= sc.transform(X_test)
X.shape
# + [markdown] id="SZxDaCZQ15Nq"
# For choosing the K value, we use error curves and pick the K that balances variance and bias error. From the error curve plotted below, we choose K=7 for the prediction.
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="b97eXTO6134F" outputId="9d182fa9-b97b-404a-b2a1-bfb4a65f09fe"
# Step 2: Find the value for K
#====================================
error1= []
error2= []
for k in range(1,15):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train,y_train)
    y_pred1 = knn.predict(X_train)
    error1.append(np.mean(y_train != y_pred1))
    y_pred2 = knn.predict(X_test)
    error2.append(np.mean(y_test != y_pred2))
# plt.figure(figsize=(10,5))
plt.plot(range(1,15),error1,label="train")
plt.plot(range(1,15),error2,label="test")
plt.xlabel('k Value')
plt.ylabel('Error')
plt.legend()
# + [markdown] id="BE0MaFwN2AER"
# In step 2, we have chosen the K value to be 7. Now we substitute that value and get the accuracy score = 0.9 for the test data.
# + colab={"base_uri": "https://localhost:8080/"} id="GJwncSr22CTC" outputId="187a1022-38bc-4fd9-93c8-08622840733f"
#Step 3: Predict:
#======================
knn= KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train,y_train)
y_pred= knn.predict(X_test)
metrics.accuracy_score(y_test,y_pred)
# + [markdown] id="uqxCimH42DNV"
# ## <a id='scratch'></a> `Implementation from scratch`
# ---
# + id="aTdM5lMRfKZe"
# load dataset
#======================
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn.datasets import load_iris
#Loading the Data
iris= load_iris()
# + id="yP1dTVFe3QID"
# Split X and Y
#======================
# Store features matrix in X
X= iris.data
#Store target vector in
Y= iris.target
X_train, X_test, y_train, y_test= train_test_split(X, Y, test_size= 0.2,random_state=32)
# + [markdown] id="Y-njvTec3Uib"
# ### Pseudocode
# + [markdown] id="1o4WA3C-0PUH"
# K-nearest neighbors (KNN) algorithm uses ‘feature similarity’ to predict the values of new datapoints which further means that the new data point will be assigned a value based on how closely it matches the points in the training set. We can understand its working with the help of following steps −
#
# - Step 1 ➨ For implementing any algorithm, we need dataset. So during the first step of KNN, we must load the training as well as test data.
#
# - Step 2 ➨ Next, we need to choose the value of K i.e. the nearest data points. K can be any integer.
#
# - Step 3 ➨ For each point in the test data do the following:
#
# * 3.1 − Calculate the distance between test data and each row of training data with the help of any of the method namely: Euclidean, Manhattan or Hamming distance. The most commonly used method to calculate distance is Euclidean.
#
# * 3.2 − Now, based on the distance value, sort them in ascending order.
#
# * 3.3 − Next, it will choose the top K rows from the sorted array.
#
# * 3.4 − Now, it will assign a class to the test point based on most frequent class of these rows.
#
# - Step 4 ➨ End
# + id="uZEcrKy9fMt1"
class MyKNN():
    def __init__(self,X,Y,dist):
        self.X_train = X
        self.Y_train = Y
        self.metric = dist
    def euclidean_distance(self,p1,p2):
        ''' this helper function calculates the euclidean distance between two given points '''
        # the equation : sqrt(sum((x - y)^2))
        dist = np.sqrt(np.sum( (p1-p2)**2 ))
        return dist
    def manhattan_distance(self,p1,p2):
        ''' this helper function calculates the manhattan distance between two given points '''
        # the equation : sum(|x - y|)
        dist = sum(abs(p1-p2))
        return dist
    def calc_distance(self,p1,p2,metric="euclidean"):
        ''' This function calculates the distance between two points based on the given distance metric
        INPUTS :
            p1 : first point or vector (numpy array)
            p2 : second point or vector (numpy array)
            metric : the distance metric (string)
        Returns :
            distance value as a float number
        '''
        if metric == "euclidean":
            dist = self.euclidean_distance(p1,p2)
        elif metric == "manhattan":
            dist = self.manhattan_distance(p1,p2)
        else:
            print("you have entered an invalid metric value")
            print("please choose between 'euclidean' and 'manhattan'")
            raise ValueError
        return dist
    def predict(self,X,y,X_,neighbors):
        ''' This function is the KNN algorithm implementation
        Inputs :
            X : the training feature vectors
            y : the training targets/labels
            X_ : the input features we want to predict the class for
            neighbors : the number of neighbors to vote on
        Output :
            y_ : list of the predicted classes for our input
        '''
        # create output list
        y_ = []
        # iterate over inputs
        for p in X_:
            dists = []
            # calculate the distance between this point and each training point
            for px in X:
                dist = self.calc_distance(p,px,metric=self.metric)
                dists.append(dist)
            # get the indices of the k smallest distances
            mins = np.argsort(dists)[:neighbors]
            # get the most repeated class among those neighbors
            classes = [y[i] for i in mins]
            pred = max(set(classes), key=classes.count)
            y_.append(pred)
        return y_
# + id="QRCSfZhAfSZO"
#create model and predict
#==============================
knn = MyKNN(X,Y,"euclidean")
y_ = knn.predict(X_train,y_train,X_test,7)
# + id="KXIFEYVx905K" outputId="96677cf7-a598-4e07-8def-cb953f45eec0"
# Accuracy
#==========================
from sklearn.metrics import accuracy_score
custom_model_score = accuracy_score(y_test,y_)
print(custom_model_score)
# + [markdown] id="XYspSMZf-XZd"
# ## <a id='Compaire'></a> `Compare models`
# ---
# + [markdown] id="t4mQc_fA-2gQ"
# #### sklearn model
# + id="w-s1ntTlfTmQ"
# 7 kneighbors distance euclidean
knn= KNeighborsClassifier(n_neighbors=7,metric="euclidean")
# fit model
knn.fit(X_train,y_train)
#predict on test dataset
y_pred= knn.predict(X_test)
#get accuracy
sklearn_model_score = metrics.accuracy_score(y_test,y_pred)
# + colab={"base_uri": "https://localhost:8080/"} id="vWCI4Mo3fX1a" outputId="b41280db-b6cd-480a-c55c-de958d28503b"
print (sklearn_model_score == custom_model_score)
# + [markdown] id="6aEg5px6fnp4"
# ---
# # Homework
# ---
# Tie break :
#
#
# Assume that you have selected the number of neighbours to be an even number, e.g., 2. For one of the neighbours, the suggested class is 1, and for the other neighbour the suggested class is 2. How would you break the tie? Write example pseudocode that does this.
| Exercise/ML1_S5_Exercise_.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Document embeddings in BigQuery for document similarity and clustering tasks
#
# This notebook shows how to use a pre-trained embedding as a vector representation of a natural language text column.
# Given this embedding, we can load it as a BQ-ML model and then carry out document similarity or clustering.
#
# This notebook accompanies the following Medium blog post:
# https://medium.com/@lakshmanok/how-to-do-text-similarity-search-and-document-clustering-in-bigquery-75eb8f45ab65
# ## Embedding model for documents
#
# We're going to use a model that has been pretrained on Google News. Here's an example of how it works in Python. We will use it directly in BigQuery, however.
# +
import tensorflow as tf
import tensorflow_hub as tfhub
model = tf.keras.Sequential()
model.add(tfhub.KerasLayer("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1",
                           output_shape=[20], input_shape=[], dtype=tf.string))
model.summary()
model.predict(["""
Long years ago, we made a tryst with destiny; and now the time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially. At the stroke of the midnight hour, when the world sleeps, India will awake to life and freedom.
A moment comes, which comes but rarely in history, when we step out from the old to the new -- when an age ends, and when the soul of a nation, long suppressed, finds utterance.
"""])
# -
# ## Loading model into BigQuery
#
# The Swivel model above is already available in SavedModel format. But we need it on Google Cloud Storage before we can load it into BigQuery.
# + language="bash"
# BUCKET=ai-analytics-solutions-kfpdemo # CHANGE AS NEEDED
#
# rm -rf tmp
# mkdir tmp
# FILE=swivel.tar.gz
# wget --quiet -O tmp/swivel.tar.gz https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1?tf-hub-format=compressed
# cd tmp
# tar xvfz swivel.tar.gz
# cd ..
# mv tmp swivel
# gsutil -m cp -R swivel gs://${BUCKET}/swivel
# rm -rf swivel
#
# echo "Model artifacts are now at gs://${BUCKET}/swivel/*"
# -
# Let's load the model into a BigQuery dataset named advdata (create it if necessary)
# %%bigquery
CREATE OR REPLACE MODEL advdata.swivel_text_embed
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/swivel/*')
# From the BigQuery web console, click on "schema" tab for the newly loaded model. We see that the input is called sentences and the output is called output_0:
# <img src="swivel_schema.png" />
# %%bigquery
SELECT output_0 FROM
ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT "Long years ago, we made a tryst with destiny; and now the time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially." AS sentences))
# ## Document search
#
# Let's use the embeddings to return similar strings. We'll use the comments field of a storm reports table from NOAA.
# %%bigquery
SELECT
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
ST_GeogPoint(longitude, latitude) AS location,
comments
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
WHERE EXTRACT(YEAR from timestamp) = 2019
LIMIT 10
# Let's define a distance function and then search for documents matching the string "power line down on a home". Note that the top matches include "house" as a synonym for "home", and even the weaker matches all contain "power line", the more distinctive term.
# +
# %%bigquery
CREATE TEMPORARY FUNCTION td(a ARRAY<FLOAT64>, b ARRAY<FLOAT64>, idx INT64) AS (
(a[OFFSET(idx)] - b[OFFSET(idx)]) * (a[OFFSET(idx)] - b[OFFSET(idx)])
);
CREATE TEMPORARY FUNCTION term_distance(a ARRAY<FLOAT64>, b ARRAY<FLOAT64>) AS ((
SELECT SQRT(SUM( td(a, b, idx))) FROM UNNEST(GENERATE_ARRAY(0, 19)) idx
));
WITH search_term AS (
SELECT output_0 AS term_embedding FROM ML.PREDICT(MODEL advdata.swivel_text_embed,(SELECT "power line down on a home" AS sentences))
)
SELECT
term_distance(term_embedding, output_0) AS termdist,
comments
FROM ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT comments, LOWER(comments) AS sentences
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
WHERE EXTRACT(YEAR from timestamp) = 2019
)), search_term
ORDER By termdist ASC
LIMIT 10
# -
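# The temporary SQL functions above simply compute a Euclidean distance over the 20-dimensional embedding vectors. A sketch of the same computation in plain Python (for intuition only, not part of the BigQuery workflow):

```python
import math

def term_distance(a, b):
    """Euclidean distance between two equal-length embedding vectors,
    mirroring the td/term_distance SQL functions above."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# A smaller distance means the two sentences are more similar
print(term_distance([1.0, 2.0], [4.0, 6.0]))  # 5.0
```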
# ## Document clustering
#
# We can use the embeddings as input to a K-Means clustering model. To make things interesting, let's also include the day and location.
# K-Means at present doesn't accept arrays as input, so I'm defining a function that converts the embedding array into a struct with named fields.
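# The 20-field struct definition is verbose and has to be repeated in every query that uses the clustering model, so it can be handy to generate the SQL text programmatically. A sketch (the function and field names match the arr_to_input_20 definition used in this notebook):

```python
def make_arr_to_struct_sql(n=20):
    """Build the CREATE TEMPORARY FUNCTION statement that unpacks an
    ARRAY<FLOAT64> into a STRUCT with named fields p1..pn."""
    fields = ", ".join(f"p{i + 1} FLOAT64" for i in range(n))
    values = "\n    , ".join(f"arr[OFFSET({i})]" for i in range(n))
    return (
        f"CREATE TEMPORARY FUNCTION arr_to_input_{n}(arr ARRAY<FLOAT64>)\n"
        f"RETURNS STRUCT<{fields}>\n"
        f"AS (STRUCT(\n      {values}\n));"
    )

print(make_arr_to_struct_sql(20))
```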
# +
# %%bigquery
CREATE TEMPORARY FUNCTION arr_to_input_20(arr ARRAY<FLOAT64>)
RETURNS
STRUCT<p1 FLOAT64, p2 FLOAT64, p3 FLOAT64, p4 FLOAT64,
p5 FLOAT64, p6 FLOAT64, p7 FLOAT64, p8 FLOAT64,
p9 FLOAT64, p10 FLOAT64, p11 FLOAT64, p12 FLOAT64,
p13 FLOAT64, p14 FLOAT64, p15 FLOAT64, p16 FLOAT64,
p17 FLOAT64, p18 FLOAT64, p19 FLOAT64, p20 FLOAT64>
AS (
STRUCT(
arr[OFFSET(0)]
, arr[OFFSET(1)]
, arr[OFFSET(2)]
, arr[OFFSET(3)]
, arr[OFFSET(4)]
, arr[OFFSET(5)]
, arr[OFFSET(6)]
, arr[OFFSET(7)]
, arr[OFFSET(8)]
, arr[OFFSET(9)]
, arr[OFFSET(10)]
, arr[OFFSET(11)]
, arr[OFFSET(12)]
, arr[OFFSET(13)]
, arr[OFFSET(14)]
, arr[OFFSET(15)]
, arr[OFFSET(16)]
, arr[OFFSET(17)]
, arr[OFFSET(18)]
, arr[OFFSET(19)]
));
CREATE OR REPLACE MODEL advdata.storm_reports_clustering
OPTIONS(model_type='kmeans', NUM_CLUSTERS=10) AS
SELECT
arr_to_input_20(output_0) AS comments_embed,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
longitude, latitude
FROM ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT timestamp, longitude, latitude, LOWER(comments) AS sentences
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
WHERE EXTRACT(YEAR from timestamp) = 2019
))
# -
# The resulting clusters look like this
# <img src="storm_reports_clusters.png"/>
# Show a few of the comments from cluster #1
# +
# %%bigquery
CREATE TEMPORARY FUNCTION arr_to_input_20(arr ARRAY<FLOAT64>)
RETURNS
STRUCT<p1 FLOAT64, p2 FLOAT64, p3 FLOAT64, p4 FLOAT64,
p5 FLOAT64, p6 FLOAT64, p7 FLOAT64, p8 FLOAT64,
p9 FLOAT64, p10 FLOAT64, p11 FLOAT64, p12 FLOAT64,
p13 FLOAT64, p14 FLOAT64, p15 FLOAT64, p16 FLOAT64,
p17 FLOAT64, p18 FLOAT64, p19 FLOAT64, p20 FLOAT64>
AS (
STRUCT(
arr[OFFSET(0)]
, arr[OFFSET(1)]
, arr[OFFSET(2)]
, arr[OFFSET(3)]
, arr[OFFSET(4)]
, arr[OFFSET(5)]
, arr[OFFSET(6)]
, arr[OFFSET(7)]
, arr[OFFSET(8)]
, arr[OFFSET(9)]
, arr[OFFSET(10)]
, arr[OFFSET(11)]
, arr[OFFSET(12)]
, arr[OFFSET(13)]
, arr[OFFSET(14)]
, arr[OFFSET(15)]
, arr[OFFSET(16)]
, arr[OFFSET(17)]
, arr[OFFSET(18)]
, arr[OFFSET(19)]
));
SELECT sentences
FROM ML.PREDICT(MODEL `ai-analytics-solutions.advdata.storm_reports_clustering`,
(
SELECT
sentences,
arr_to_input_20(output_0) AS comments_embed,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
longitude, latitude
FROM ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT timestamp, longitude, latitude, LOWER(comments) AS sentences
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
WHERE EXTRACT(YEAR from timestamp) = 2019
))))
WHERE centroid_id = 1
LIMIT 10
# -
# As you can see, these are basically uninformative comments. How about centroid #3?
# +
# %%bigquery
CREATE TEMPORARY FUNCTION arr_to_input_20(arr ARRAY<FLOAT64>)
RETURNS
STRUCT<p1 FLOAT64, p2 FLOAT64, p3 FLOAT64, p4 FLOAT64,
p5 FLOAT64, p6 FLOAT64, p7 FLOAT64, p8 FLOAT64,
p9 FLOAT64, p10 FLOAT64, p11 FLOAT64, p12 FLOAT64,
p13 FLOAT64, p14 FLOAT64, p15 FLOAT64, p16 FLOAT64,
p17 FLOAT64, p18 FLOAT64, p19 FLOAT64, p20 FLOAT64>
AS (
STRUCT(
arr[OFFSET(0)]
, arr[OFFSET(1)]
, arr[OFFSET(2)]
, arr[OFFSET(3)]
, arr[OFFSET(4)]
, arr[OFFSET(5)]
, arr[OFFSET(6)]
, arr[OFFSET(7)]
, arr[OFFSET(8)]
, arr[OFFSET(9)]
, arr[OFFSET(10)]
, arr[OFFSET(11)]
, arr[OFFSET(12)]
, arr[OFFSET(13)]
, arr[OFFSET(14)]
, arr[OFFSET(15)]
, arr[OFFSET(16)]
, arr[OFFSET(17)]
, arr[OFFSET(18)]
, arr[OFFSET(19)]
));
SELECT sentences
FROM ML.PREDICT(MODEL `ai-analytics-solutions.advdata.storm_reports_clustering`,
(
SELECT
sentences,
arr_to_input_20(output_0) AS comments_embed,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
longitude, latitude
FROM ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT timestamp, longitude, latitude, LOWER(comments) AS sentences
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
WHERE EXTRACT(YEAR from timestamp) = 2019
))))
WHERE centroid_id = 3
# -
# These are all reports that were validated in some way by radar!
# Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
#
#
| 09_bqml/text_embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Imports
# +
# # Python standard library
import itertools
import os
# # Third-party libraries
import pandas as pd
import re
import screed
from tqdm import tqdm
# # Local python files
from path_constants import DATA_FOLDER, ORPHEUM_BENCHMARKING_FOLDER, QFO_EUKARYOTA_FOLDER
# -
reads_dir = os.path.join(ORPHEUM_BENCHMARKING_FOLDER, "simulated", "human")
# # Subset to only reads from complete protein sequences -- no fragments
# ## Get good uniprot ids, starting with M and ATG
# +
uniprot_protein_starts_with_m = []
protein_fasta = os.path.join(
QFO_EUKARYOTA_FOLDER,
"UP000005640_9606.fasta",
)
cdna_fasta = os.path.join(
QFO_EUKARYOTA_FOLDER,
"UP000005640_9606_DNA.fasta",
)
with screed.open(protein_fasta) as records:
for record in records:
if record["sequence"].startswith("M"):
uniprot_protein_starts_with_m.append(
"|".join(record["name"].split()[0].split("|")[:2])
)
print("uniprot_protein_starts_with_m", len(uniprot_protein_starts_with_m))
uniprot_dna_starts_with_atg = []
with screed.open(cdna_fasta) as records:
for record in records:
if record["sequence"].startswith("ATG"):
uniprot_dna_starts_with_atg.append(
"|".join(record["name"].split()[0].split("|")[:2])
)
print("uniprot_dna_starts_with_atg", len(uniprot_dna_starts_with_atg))
# -
# ! tail $protein_fasta
uniprot_dna_starts_with_atg[:3]
uniprot_protein_starts_with_m[:3]
uniprot_starts_with_atg_and_m = set(uniprot_dna_starts_with_atg).intersection(set(uniprot_protein_starts_with_m))
len(uniprot_starts_with_atg_and_m)
uniprot_starts_with_atg_and_m_list = list(uniprot_starts_with_atg_and_m)
uniprot_starts_with_atg_and_m_list[:5]
# ! grep -c '>' $cdna_fasta
# ! grep -c '>' $protein_fasta
uniprot_dna_starts_with_atg[:3]
# ## Write good uniprot ids to file
good_uniprot_records = []
with screed.open(cdna_fasta) as records:
for record in records:
clean_uniprot_id = '|'.join(record['name'].split('|')[:2])
if clean_uniprot_id in uniprot_starts_with_atg_and_m:
good_uniprot_records.append(record)
len(good_uniprot_records)
good_uniprot_records[:3]
record['name']
# +
protein_fasta_good_uniprot_ids = os.path.join(
QFO_EUKARYOTA_FOLDER,
"UP000005640_9606_DNA__startswith_atg_and_protein_startswith_m.fasta",
)
with open(protein_fasta_good_uniprot_ids, "w") as f:
for record in good_uniprot_records:
f.write(f'>{record["name"]}\n{record["sequence"]}\n')
# -
good_uniprot_records_dict = {'|'.join(r['name'].split('|')[:2]): r['sequence'] for r in good_uniprot_records}
len(good_uniprot_records_dict)
good_uniprot_records_series = pd.Series(good_uniprot_records_dict)
good_uniprot_records_dict['tr|A0A024R1R8']
uniprot_dna_starts_with_atg[:3]
# ### Grep dna fasta for the sequence
# ! grep -A 1 'sp|A0A075B6K2|ENSP00000374848' $cdna_fasta
# ! zgrep -A 3 'read1000/sp|A0A075B6K2|ENSP00000374848;mate1:5-154;mate2:37-186' $reads_dir/*.fq.gz
# ## Get read IDs of reads that don't have an `N`
# +
fastq = os.path.join(reads_dir, "Homo_sapiens_9606_qfo_dna_01.fq.gz")
read_ids_without_n = []
with screed.open(fastq) as records:
for record in records:
if "N" not in record["sequence"]:
read_ids_without_n.append(record["name"])
print(len(read_ids_without_n))
read_ids_without_n[:3]
# -
# # Infer reading frame from read start -- assume all reads start with ATG
# ## Hamming distance function
#
# from http://claresloggett.github.io/python_workshops/improved_hammingdist.html
# +
# Return the Hamming distance between string1 and string2.
# string1 and string2 should be the same length.
def hamming_distance(string1, string2):
# Start with a distance of zero, and count up
distance = 0
# Loop over the indices of the string
L = len(string1)
for i in range(L):
# Add 1 to the distance if these two characters are not equal
if string1[i] != string2[i]:
distance += 1
# Return the final count of differences
return distance
# Reverse complement
old_chars = "ACGT"
replace_chars = "TGCA"
tab = str.maketrans(old_chars, replace_chars)
def reverse_complement(sequence):
return sequence.translate(tab)[::-1]
# Alternative dict-based implementation
def rev_compl(st):
nn = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}
return "".join(nn[n] for n in reversed(st))
# -
# ### read 51 is coding in negative frame, and mate2!!
#
# ```
# @read51/sp|A0A024RBG1|ENSP00000492425;mate1:130-279;mate2:281-430
# GCTTTTCCAGATACTCTGCATGTACAGGTTTATGACACTGGAGAACTTTGATAGCATCTTCTACTTTGAACCACTCTCTCTTCCTTCCAATATTAACAGAATCTTCCCAATCTTCTAATATTTCAGTGACTGTTAGAACATAAACATATG
# +
# IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
# ```
hamming_distance(good_uniprot_records_dict['sp|A0A024RBG1'][280:429],
'GCTTTTCCAGATACTCTGCATGTACAGGTTTATGACACTGGAGAACTTTGATAGCATCTTCTACTTTGAACCACTCTCTCTTCCTTCCAATATTAACAGAATCTTCCCAATCTTCTAATATTTCAGTGACTGTTAGAACATAAACATATG')
# ### Read 52 is coding in positive frame
#
# ```
# @read52/sp|A0A024RBG1|ENSP00000492425;mate1:125-274;mate2:193-342
# ACCCAGACCAGTGGATTGTCCCAGGAGGAGGAATGGAACCCGAGGAGGAACCTGGCGGTGCTGCCGTGAGGGAAGTTTATGAGGAGGCTGGAGTCAAAGGAAAACTAGGCAGACTTCTGGGCATATTTGAGCAGAACCAAGACCGAAAGC
# +
# IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
# ```
#
hamming_distance(good_uniprot_records_dict['sp|A0A024RBG1'][124:273],
'ACCCAGACCAGTGGATTGTCCCAGGAGGAGGAATGGAACCCGAGGAGGAACCTGGCGGTGCTGCCGTGAGGGAAGTTTATGAGGAGGCTGGAGTCAAAGGAAAACTAGGCAGACTTCTGGGCATATTTGAGCAGAACCAAGACCGAAAGC')
# ## Function to actually infer translation frame
# +
# # %%time
# https://regex101.com/r/WNtXD8/1/
interval_patterns = r'mate1:(?P<mate1_start>\d+)-(\d+);mate2:(\d+)-(\d+)'
def get_strand(canonical_seq, record_seq):
# import pdb ; pdb.set_trace()
try:
n_mismatches = hamming_distance(canonical_seq, record_seq)
except IndexError:
# Lengths don't match, ignore this read
return None
if n_mismatches > 10:
# Make sure it's really the reverse complement
revcomp = reverse_complement(record_seq)
try:
n_mismatches = hamming_distance(canonical_seq, revcomp)
except IndexError:
# Lengths don't match, ignore this read
return None
# Not too many mismatches
if n_mismatches <= 10:
strand = -1
else:
strand = None
else:
strand = 1
# if strand is None:
# raise ValueError
return strand
def get_correct_reading_frame(record, required_length=150, verbose=False):
name = record['name']
if 'mate1Start' in name:
frame = 1
else:
# Subtract 1 since the fastq file uses 1-based indexing for the start/stop but python is 0-based
try:
start1, end1, start2, end2 = map(lambda x: int(x) - 1, re.findall(interval_patterns, name)[0])
except IndexError:
# Read id has negative values and otherwise doesn't match my mental model --> ignore
return None
end1 += 1
# start2 += 1
end2 += 1
uniprot_id = '|'.join(name.split(';')[0].split('/')[-1].split('|')[:2])
try:
canonical_sequence = good_uniprot_records_dict[uniprot_id]
except KeyError:
# Uniprot record doesn't have clear start/stop site, so difficult to infer frame --> skip
return None
canonical_length = len(canonical_sequence)
if end1 > canonical_length or end2 > canonical_length:
# Read extends past the boundary of the source sequence --> skip
return None
mate1 = canonical_sequence[start1:end1]
mate2 = canonical_sequence[start2:end2]
assert len(mate1) == required_length
assert len(mate2) == required_length
if verbose:
print(name)
print(f'start1: {start1} -- end1: {end1}')
print(f'start2: {start2} -- end2: {end2}')
if verbose > 1:
print(f'>mate1\n{mate1}')
print(f'>mate1_rc\n{reverse_complement(mate1)}')
if verbose > 1:
print(f'>mate2\n{mate2}')
print(f'>mate2_rc\n{reverse_complement(mate2)}')
frame_number = 3 - ((start1 - 1) % 3)
if verbose > 1:
print(f'{frame_number} = 3 - (({start1} - 1) % 3)')
# frame_number = ((start1)% 3) + 1
record_seq = record['sequence']
if verbose > 1:
print(f'>record\n{record_seq}')
print(f'>record_rc\n{reverse_complement(record_seq)}')
if verbose:
print('--- Trying mate 1 ---')
strand = get_strand(mate1, record_seq)
if verbose and strand is not None:
if strand > 0:
print('mate1')
if strand < 0:
print('mate1, reverse complement')
if strand is None:
if verbose:
print('--- Not mate1, trying mate 2 ---')
# Maybe it's mate2?
# strand = -1
strand = get_strand(mate2, record_seq)
frame_number = 3 - ((start2 - 1) % 3)
if verbose and strand is not None:
print(f'{frame_number} = 3 - (({start2} - 1) % 3)')
if strand > 0:
print('mate2')
if strand < 0:
print('mate2, reverse complement')
# Multiply the frame number by the strand multiplier
try:
frame = frame_number * strand
if verbose:
print(f'{frame} = {frame_number} * {strand}')
except TypeError:
# Strand is still none, don't know what's going on so skip this read
frame = None
return frame
def fastq_per_read_frame(fastq, verbose=False):
read_id_to_frame = {}
with screed.open(fastq) as records:
for record in tqdm(records):
# if 'read52/' in record['name']:
# break
if verbose:
print('\n---')
frame = get_correct_reading_frame(record, required_length=150, verbose=verbose)
if verbose:
print(f'frame: {frame}')
if frame is not None:
read_id_to_frame[record['name']] = frame
read_id_to_frame_series = pd.Series(read_id_to_frame, name='translation_frame')
print(read_id_to_frame_series.shape)
read_id_to_frame_series.head()
return read_id_to_frame_series
# -
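# The frame arithmetic buried inside get_correct_reading_frame can be pulled out into a small helper for clarity. This sketch mirrors the notebook's formula exactly (start coordinates as adjusted in the function, strand as +1/-1); it is an illustration, not a general-purpose frame caller:

```python
def signed_frame(start, strand):
    """Map a start coordinate to a reading frame in {1, 2, 3},
    signed by strand, using the same formula as the notebook."""
    frame_number = 3 - ((start - 1) % 3)
    return frame_number * strand

# starts 1, 2, 3 cycle through frames 3, 2, 1 on the forward strand
print([signed_frame(s, 1) for s in (1, 2, 3)])  # [3, 2, 1]
print(signed_frame(2, -1))  # -2
```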
# ## Make mini fastq for testing
# +
# for read_id in protein_k11_good_uniprot_ids_no_ns_coding.sample(5).read_id.values:
# # ! zgrep -A 3 "$read_id" $reads_dir/*
# +
# %%file mini.fastq
@read51/sp|A0A024RBG1|ENSP00000492425;mate1:130-279;mate2:281-430__frame=-3
GCTTTTCCAGATACTCTGCATGTACAGGTTTATGACACTGGAGAACTTTGATAGCATCTTCTACTTTGAACCACTCTCTCTTCCTTCCAATATTAACAGAATCTTCCCAATCTTCTAATATTTCAGTGACTGTTAGAACATAAACATATG
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read52/sp|A0A024RBG1|ENSP00000492425;mate1:125-274;mate2:193-342__frame=3
ACCCAGACCAGTGGATTGTCCCAGGAGGAGGAATGGAACCCGAGGAGGAACCTGGCGGTGCTGCCGTGAGGGAAGTTTATGAGGAGGCTGGAGTCAAAGGAAAACTAGGCAGACTTCTGGGCATATTTGAGCAGAACCAAGACCGAAAGC
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read302822/sp|P49760|ENSP00000460443;mate1:755-904;mate2:890-1039__frame=3
ATTTCCTCAAAGACAACAACTACCTGCCCTACCCCATCCACCAAGTGCGCCACATGGCCTTCCAGCTGTGCCAGGCTGTCAAGTTCCTCCATGATAACAAGCTGACACATACAGACCTCAAGCCTGAAAATATTCTGATTGTGAATTCAG
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read780376/sp|Q96PL2|ENSP00000494896;mate1:747-896;mate2:829-978__frame=2
CCNGTTCCAGAACATCCCCAAACTCTCCAAGGTGTGGTTACACTGTGAGACGTTCATCTGCGACAGTGAGAAACTCTCCTGCCCAGTGACCTGCGATAAACGGAAGCGCCTCCTGCGAGACCAGACCGGGGGAGTCCTGGTCGTGGAGCT
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read192484/sp|P09629|ENSP00000239165;mate1:19-168;mate2:125-274__frame=1
GCGAATACTTTATTTTCTAAATATCCAGCCTCAAGTTCGGTTTTCGCTACCGGAGCCTTCCCAGAACAAACTTCTTGTGCGTTTGCTTCCAACCCCCAGCGCCCGGGCTATGGAGCGGGTTCGGGCGCTTCCTTCGCCGCCTCGATGCAG
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read335141/sp|P60368|ENSP00000375479;mate1:141-290;mate2:251-400__frame=2
CACCCCAGTGAGCTGTGTGTCCAGCCCCTGCTGCCAGGCGGCCTGTGAGCCCAGCGCCTGCCAATCAGGCTGCACCAGCTCCTGCACGCCCTCGTGCTGCCAGCAGTCTAGCTGCCAGCCGGCTTGCTGCACCTCCTCCCCCTGCCAGCA
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read494460/sp|Q5VWX1|ENSP00000281156;mate1:81-230;mate2:193-342__frame=2
TTTGGCAGAAGAAATTGAAAAGTTTCAAGGTTCTGATGGAAAAAAGGAAGACGAAGAAAAGAAGTATCTTGATGTCATCAGCAACAAAAACATAAAGCTCTCAGAAAGAGTACTGATTCCTGTCAAGCAGTATCCAAAGTTCAATTTTGT
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read734191/sp|Q96A65|ENSP00000376868;mate1:842-991;mate2:949-1098__frame=-1
AGTGTCCTGCAGGTATCCCAGGACCACAGAGTGTGCAGCGGCTACAGCATTAAACTTGTCAAACAGTAACTCCAGCAGTTCTAGAAGCAACCTTGGTTGGTTCTCCACAGTAACGTTCTCCCCCCGCTGATAGCCACTGTCTGCCACCTG
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
# @read640286/sp|Q8N7Q2|BAC05176;mate1:30-179;mate2:140-289__frame=?
TTTGGCCAACTTCGCCTCTTCAATTAAAAGGACACATGCTGTTAACGGGTGCTGTGGATTACAGATGATCGCACTCTGGGCACAGTCCTCTGGAAATGCAGATGCCCGTGTGGAGGAAATTCTGGCGGGAGAGGAGCGGCGACTCGCCGC
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read325914/sp|P56278|ENSP00000358488;mate1:12-161;mate2:125-274__frame=2
GGATGTGGGGGCTCCACCCGATCACCTCTGGGTTCACCAAGAGGGTATCTACCGCGACGAATACCAGCGCACGTGGGTGGCCGTCGTGGAAGAGGAGACGAGTTTCCTAAGGGCACGAGTCCAGCAAATTCAGGTTCCCTTAGGTGACGC
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read714894/sp|Q8WYR1|ENSP00000485280;mate1:570-719;mate2:662-811__frame=2
GAGCCAGACGCCCTCACCCCCGACAGACTCCCCTAGGCACGCCAGCCCTGGAGAGCTGGGCACCACCCCATGGGAGGAGAGCACCAATGACATCTCCCACTACCTCGGCATGCTGGACCCCTGGTATGAGCGCAATGTACTGGGCCTCAT
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
# -
# ### Test on mini dataset
mini_results = fastq_per_read_frame('mini.fastq', verbose=2)
mini_results
# ## Make ground truth dataframe for testing
from io import StringIO
s = '''read_id true_mate true_frame guessed_frame
read51/sp|A0A024RBG1|ENSP00000492425;mate1:130-279;mate2:281-430 mate2_rc -3 -3
read52/sp|A0A024RBG1|ENSP00000492425;mate1:125-274;mate2:193-342 mate1 3 2
read302822/sp|P49760|ENSP00000460443;mate1:755-904;mate2:890-1039 mate1 3 2
read780376/sp|Q96PL2|ENSP00000494896;mate1:747-896;mate2:829-978 mate1 2 3
read192484/sp|P09629|ENSP00000239165;mate1:19-168;mate2:125-274 mate1 1 1
read335141/sp|P60368|ENSP00000375479;mate1:141-290;mate2:251-400 mate1 2 3
read494460/sp|Q5VWX1|ENSP00000281156;mate1:81-230;mate2:193-342 mate1 2 3
read734191/sp|Q96A65|ENSP00000376868;mate1:842-991;mate2:949-1098 mate2_rc -1 -2
read640286/sp|Q8N7Q2|BAC05176;mate1:30-179;mate2:140-289 mate1 3 2
read325914/sp|P56278|ENSP00000358488;mate1:12-161;mate2:125-274 mate1 2 3
'''
mini_df = pd.read_csv(StringIO(s), sep='\t')
mini_df
mini_df['transcript_id'] = mini_df.read_id.map(lambda x: x.split(';')[0].split('/')[-1])
mini_df['uniprot_id'] = mini_df.transcript_id.map(lambda x: '|'.join(x.split('|')[:2]))
mini_df
for i, row in mini_df.iterrows():
uniprot_id = row['uniprot_id']
print(f'\n---\n{row.read_id}')
print(good_uniprot_records_dict[uniprot_id])
# ### Spot check some reading frames
# # Run code to assign correct reading frame to all read ids
fastq
# %%time
read_id_to_frame_series = fastq_per_read_frame(fastq)
print(read_id_to_frame_series.shape)
read_id_to_frame_series.head()
# ## Write correct reading frames to file!
# human_busco_dir = "/mnt/ibm_sm/home/olga/pipeline-results/human-simulated/nf-predictorthologs--busco-mammalia-human"
csv = os.path.join(ORPHEUM_BENCHMARKING_FOLDER, "correct_reading_frames.csv")
read_id_to_frame_series.to_csv(csv, index=True, header=True)
# # Create gold standard classification data for all reading frames
# ## Read gold standard series
# +
read_id_to_frame_series.index.name = 'read_id'
read_id_to_frame = read_id_to_frame_series.reset_index()
read_id_to_frame['is_coding'] = True
read_id_to_frame['read_id_frame'] = read_id_to_frame.read_id.astype(str) + '__frame=' + read_id_to_frame.translation_frame.astype(str)
read_id_to_frame = read_id_to_frame.set_index('read_id_frame')
print(read_id_to_frame.shape)
read_id_to_frame.head()
# -
# ## Make cartesian product of read id and frames with `itertools`
frames = (1, 2, 3, -1, -2, -3)
all_read_id_frames = [
f"{read_id}__frame={frame}"
for read_id, frame in itertools.product(read_id_to_frame["read_id"], frames)
]
len(all_read_id_frames)
# ## Make true coding frame series
true_coding_frame = pd.Series(False, index=all_read_id_frames, name='is_coding')
true_coding_frame[read_id_to_frame.index] = True
true_coding_frame.sum()
true_coding_frame.head()
# ## Write to file
# +
basename = "true_reading_frames"
parquet = os.path.join(ORPHEUM_BENCHMARKING_FOLDER, f"{basename}.parquet")
csv = os.path.join(ORPHEUM_BENCHMARKING_FOLDER, f"{basename}.csv")
true_coding_frame.to_frame().to_parquet(parquet)
true_coding_frame.to_csv(csv)
# -
true_coding_frame.head()
| notebooks/figure_2_00_create_orpheum_ground_truth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Normalize a lipids phenotype for a GWAS study
#
# In this notebook we use the UK Biobank data to normalize a prepared lipids phenotype for use in a GWAS study.
#
#
# Note that this work is part of a larger project to [Demonstrate the Potential for Pooled Analysis of All of Us and UK Biobank Genomic Data](https://github.com/all-of-us/ukb-cross-analysis-demo-project). Specifically this is for the portion of the project that is the **siloed** analysis.
# # Setup
#
# <div class="alert alert-block alert-warning">
# <b>Cloud Environment</b>: This notebook was written for use on the UK Biobank Research Analysis Platform.
# <ul>
# <li>Use compute type 'Single Node' with sufficient CPU and RAM (e.g. start with 4 CPUs and 15 GB RAM, increase if needed).</li>
# <li>This notebook is pretty fast, but in general it is recommended to be run in the background via <kbd>dx run dxjupyterlab</kbd> to capture provenance.</li>
# </ul>
# </div>
#
# ```
# dx run dxjupyterlab \
# --instance-type=mem2_ssd1_v2_x4 \
# -icmd="papermill 07_ukb_lipids_phenotype_for_gwas.ipynb 07_ukb_lipids_phenotype_for_gwas_$(date +%Y%m%d).ipynb" \
# -iin=07_ukb_lipids_phenotype_for_gwas.ipynb \
# --folder=outputs/r-prepare-phenotype-for-gwas/$(date +%Y%m%d)/
# ```
# See also https://platform.dnanexus.com/app/dxjupyterlab
lapply(c('lubridate', 'skimr', 'tidyverse'),
function(pkg) { if(! pkg %in% installed.packages()) { install.packages(pkg)} } )
library(lubridate)
library(skimr)
library(tidyverse)
# +
## Plot setup.
theme_set(theme_bw(base_size = 16)) # Default theme for plots.
#' Returns a data frame with a y position and a label, for use annotating ggplot boxplots.
#'
#' @param d A data frame.
#' @return A data frame with column y as max and column label as length.
get_boxplot_fun_data <- function(df) {
return(data.frame(y = max(df), label = stringr::str_c('N = ', length(df))))
}
# -
# ## Define constants
# + tags=["parameters"]
# Papermill parameters. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html
#---[ Inputs ]---
# This was created via ukb_rap_siloed_analyses/02_ukb_lipids_phenotype.ipynb
PHENOTYPES = '/mnt/project/outputs/r-prepare-phenotype/20220308/ukb_200kwes_lipids_phenotype.tsv'
# This was created via ukb_rap_siloed_analyses/06_ukb_plink_ld_and_pca.ipynb
PCS = '/mnt/project/outputs/plink-ld-pca/20220308/ukb_200kwes_lipids_plink_pca.eigenvec'
#---[ Outputs ]---
GWAS_PHENOTYPE_FILENAME = 'ukb_200kwes_lipids_gwas_phenotype.tsv'
# -
# # Load data
system(str_glue('cp {PHENOTYPES} .'), intern=TRUE)
pheno <- read_tsv(basename(PHENOTYPES))
skim(pheno)
system(str_glue('cp {PCS} .'), intern=TRUE)
pcs <- read_tsv(basename(PCS))
head(pcs)
# # Add the ancestry covariates
# Confirm that the id sets are identical.
stopifnot(sort(pcs$IID) == sort(pheno$IID))
pheno <- left_join(pheno, pcs, by = c('FID' = '#FID', 'IID'))
# # Normalize lipids values
pheno$TC_adj_mg_dl_resid = resid(lm(TC_adj_mg_dl ~ sex+age+age2+PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9+PC10, data=pheno, na.action=na.exclude))
pheno$LDL_adj_mg_dl_resid = resid(lm(LDL_adj_mg_dl ~ sex+age+age2+PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9+PC10, data=pheno, na.action=na.exclude))
pheno$HDL_mg_dl_resid = resid(lm(HDL_mg_dl ~ sex+age+age2+PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9+PC10, data=pheno, na.action=na.exclude))
pheno$TG_log_mg_dl_resid = resid(lm(TG_log_mg_dl ~ sex+age+age2+PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9+PC10, data=pheno, na.action=na.exclude))
pheno$TC_adj_mg_dl_norm <- sd(pheno$TC_adj_mg_dl_resid, na.rm = TRUE) * scale(qnorm(
(rank(pheno$TC_adj_mg_dl_resid, na.last = 'keep') - 0.5) / sum(!is.na(pheno$TC_adj_mg_dl_resid)) ))
pheno$LDL_adj_mg_dl_norm <- sd(pheno$LDL_adj_mg_dl_resid, na.rm = TRUE) * scale(qnorm(
(rank(pheno$LDL_adj_mg_dl_resid, na.last = 'keep') - 0.5) / sum(!is.na(pheno$LDL_adj_mg_dl_resid)) ))
pheno$HDL_mg_dl_norm <- sd(pheno$HDL_mg_dl_resid, na.rm = TRUE) * scale(qnorm(
(rank(pheno$HDL_mg_dl_resid, na.last = 'keep') - 0.5) / sum(!is.na(pheno$HDL_mg_dl_resid)) ))
pheno$TG_log_mg_dl_norm <- sd(pheno$TG_log_mg_dl_resid, na.rm = TRUE) * scale(qnorm(
(rank(pheno$TG_log_mg_dl_resid, na.last = 'keep') - 0.5) / sum(!is.na(pheno$TG_log_mg_dl_resid)) ))
# ## Check that NAs were handled correctly
head(pheno %>% filter(!is.na(LDL_adj_mg_dl)) %>% select(starts_with('LDL'), starts_with('TG')))
head(pheno %>% filter(is.na(LDL_adj_mg_dl)) %>% select(starts_with('LDL'), starts_with('TG')))
# ## Convert matrix columns to vectors
class(pheno$TC_adj_mg_dl_resid)
dim(pheno$TC_adj_mg_dl_resid)
class(pheno$TC_adj_mg_dl_norm)
dim(pheno$TC_adj_mg_dl_norm)
dim(pheno$LDL_adj_mg_dl_norm)
dim(pheno$HDL_mg_dl_norm)
dim(pheno$TG_log_mg_dl_norm)
class(pheno$TC_adj_mg_dl_norm[,1])
dim(pheno$TC_adj_mg_dl_norm[,1])
length((pheno$TC_adj_mg_dl_norm[,1]))
# +
pheno <- pheno %>%
mutate(
TC_adj_mg_dl_norm = TC_adj_mg_dl_norm[,1],
LDL_adj_mg_dl_norm = LDL_adj_mg_dl_norm[,1],
HDL_mg_dl_norm = HDL_mg_dl_norm[,1],
TG_log_mg_dl_norm = TG_log_mg_dl_norm[,1]
)
head(pheno)
# -
# # Write the prepared phenotype to TSV for regenie
skim(pheno)
write_tsv(pheno, GWAS_PHENOTYPE_FILENAME)
# # Provenance
devtools::session_info()
| ukb_rap_siloed_analyses/07_ukb_lipids_phenotype_for_gwas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Django Shell-Plus
# language: python
# name: django_extensions
# ---
import numpy as np
import pandas as pd
from django.utils import timezone
# create a Series
s = pd.Series([1, 3, 5, np.nan, 6, 8])
s
# Create a 12-day date range starting January 1 with pandas date_range (daily frequency by default)
start_year = f"{timezone.now().year}0101"
print(start_year)
dates = pd.date_range(start_year, periods=12)
dates
np.random.randn(12, 4)
# create a dataframe
# index = the row labels
df = pd.DataFrame(np.random.randn(12, 8), index=dates, columns=list("ABCDEFGH"))
df
# select the row for January 10 by its label
# careful: this US-style format puts the month before the day :)
df.loc["20210110"]
# row selection by position with iloc
df.iloc[8:10]
# df["B":"E"] ==> error, can't slice by columns
# slicing rows (indexes)
df["20210102":"20210104"]
# subset: rows from Jan 7 through Jan 10, and only columns B, A and E
df.loc["20210107":"20210110", ['B', 'A', 'E']]
# only Jan 7 and Jan 10 (a list of labels, not a slice), all columns from B to E
df.loc[["20210107","20210110"], "B":"E"]
# access a column as an attribute
df.D
# access a single element
df.D["20210108"]
# same as:
df["D"]["20210108"]
df["D"]["20210108"] = "Hello"
df["D"]["20210108"]
df
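# A note on the cell above: `df["D"]["20210108"] = "Hello"` uses chained indexing, which can hit a copy (pandas may warn with SettingWithCopyWarning) and, because "Hello" is a string, upcasts column D to object dtype. A single indexer with `.at` or `.loc` is the safer idiom; a minimal sketch on a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({"D": [1.0, 2.0, 3.0]}, index=["a", "b", "c"])

# Preferred: one indexing operation, guaranteed to write into `toy` itself
toy.at["b", "D"] = 1000.0
print(toy.loc["b", "D"])  # 1000.0
```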
# add a new column
df['NEW'] = list(range(len(df.index)))
df
# # copy
dfa = df.copy()
# swap B and NEW values
dfa.loc[:, ['B', 'NEW']] = dfa[['NEW', 'B']].to_numpy()
dfa
# check df is unchanged
df
# select only the rows where the value in column A is greater than 1
df[df["A"] > 1]
# replace the "Hello" value with a numeric one
# using .at
df.at["20210108", "D"] = 1000
# keep only the values greater than 0 (the others become NaN)
df[df > 0]
# transpose the dataframe
df.T
# # copy only a subset of the dataframe (6 first lines)
df2 = df.iloc[0:6].copy()
# replace column value
df2["E"] = ["one", "one", "two", "three", "four", "three"]
# select only rows if value in column E is two or four
df2[df2["E"].isin(["two", "four"])]
# the boolean Series behind that selection
df2["E"].isin(["two", "four"])
# select the odd rows (1st, 3rd, 5th)
df2[[i%2 == 0 for i in range(6)]]
# create a series that only partially covers df's index
s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range("20210102", periods=6))
s1
# add this column to df; note the values are placed at the matching dates (alignment on the index)
df["I"] = s1
df
# in-place modification of the data
# using a boolean "where"-style selection
df2 = df.iloc[0:4, 0:4].copy().fillna(0)
df2[df2 > 0] = -df2
df2
| notebooks/Tuto panda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sentinel-1
#
# ```{admonition} Learning Objectives
# *A 30 minute guide to Sentinel-1 data for SnowEX*
# - understand key characteristics of Sentinel-1 Synthetic Aperture Radar
# - find, visualize, interpret Sentinel-1 data products
# - use Python raster libraries [rioxarray](https://corteva.github.io/rioxarray) and [hvplot](https://hvplot.holoviz.org)
# ```
#
# :::{figure-md} sentinel1
# <img src="../../img/sentinel1_radar_vision.jpg" alt="sentinel-1 cartoon" width="800px">
#
# Artist's view of Sentinel-1. Image source: `https://www.esa.int/ESA_Multimedia/Images/2014/01/Sentinel-1_radar_vision`
# :::
#
# ```{seealso}
# this tutorial is a quick practical guide and *will not cover InSAR processing*; check out [UNAVCO InSAR Short Courses](https://www.unavco.org/education/professional-development/short-courses/short-courses.html) if you're interested in learning custom processing of SAR data.
# ```
# + tags=["remove-input"]
# Import all Python libraries required for this notebook
import hvplot.xarray
import os
import pandas as pd
import rioxarray
import s3fs
# -
# ## Dive right in
#
# Synthetic Aperture Radar (SAR) is an active imaging technique that records microwave reflections off of Earth's surface. **Unlike passive optical imagers that require cloud-free, sunny days, SAR can operate at night, and microwaves penetrate clouds.** At first glance, a SAR 'image' might look a lot like a black-and-white image of the Earth, but these observations contain more than just color values and can be used to query many physical properties and processes!
#
# But before getting into theory and caveats, let's visualize some data. An easy way to get started with Sentinel-1 SAR over a SnowEx field site is to use the Radiometric Terrain Corrected (RTC) backscatter amplitude data on AWS: https://registry.opendata.aws/sentinel-1-rtc-indigo/
# +
# GDAL environment variables to efficiently read remote data
os.environ['GDAL_DISABLE_READDIR_ON_OPEN']='EMPTY_DIR'
os.environ['AWS_NO_SIGN_REQUEST']='YES'
# Data is stored in a public S3 Bucket
url = 's3://sentinel-s1-rtc-indigo/tiles/RTC/1/IW/12/S/YJ/2016/S1B_20161121_12SYJ_ASC/Gamma0_VV.tif'
# These Cloud-Optimized-Geotiff (COG) files have 'overviews', low-resolution copies for quick visualization
da = rioxarray.open_rasterio(url, overview_level=3).squeeze('band')
da.hvplot.image(clim=(0,0.4), cmap='gray',
x='x', y='y',
aspect='equal', frame_width=400,
title='S1B_20161121_12SYJ_ASC',
rasterize=True # send rendered image to browser, rather than full array
)
# -
# ```{admonition} Interpretation
# The above image is in UTM coordinates, with linear power units. Dark zones (such as Grand Mesa) correspond to low radar amplitude returns, which can result from wet snow. High-amplitude bright zones occur on dry slopes that are perpendicular to the radar incidence and in urban areas like Grand Junction, CO.
# ```
# ```{admonition} Exercise
# :class: dropdown
#
# Visualizations of SAR data are often better on a different scale.
# Convert the linear power values to amplitude or decibels and replot.
# Need a hint? Check out this [short article](https://storymaps.arcgis.com/stories/73b6af082e1f44bca8a0c5fb6bf09f37)
# ```
# ## Quick facts
#
# Sentinel-1 is a constellation of two C-band satellites operated by the European Space Agency (ESA). *It is the first SAR system with a global monitoring strategy and fully open data policy!* S1A launched 3 April 2014, and S1B launched 25 April 2016. There are many observation modes for SAR, over land the most common mode is 'Interferometric Wideswath' (IW), which has the following characteristics:
#
# | wavelength | resolution | posting | frame size | incidence | orbit repeat |
# | - | - | - | - | - | - |
# | *(cm)* | *rng x azi (m)* | *rng x azi (m)* | *w x h (km)* | *(deg)* | *(days)* |
# | 5.547 | 3x22 | 2.3x14 | 250x170 | 33-43 | 12 |
#
#
# Unlike most optical satellite observations, SAR antennas are pointed to the side, resulting in a "line-of-sight" (LOS) incidence angle with respect to the ellipsoid normal vector. A consequence of this viewing geometry is that radar images can have significant distortions known as shadow (for example, behind tall mountains) and layover (where large regions perpendicular to the incident energy all map to the same pixel). Also note that *resolution* != *pixel posting*: *resolution* is the minimal separation of distinguishable targets on the ground, while *posting* is the raster grid spacing of a Sentinel-1 image.
#
# The [global observation plan](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario) for Sentinel-1 has changed over time, but generally over Europe, you have repeat acquisitions every 6 days, and elsewhere every 12 or 24 days.
#
# Satellite radar instruments record the *amplitude* and *phase* of microwaves that bounce off Earth's surface. These values can be stored as a single complex number, so you'll often encounter SAR images that store complex-valued arrays. {term}`InSAR`, or 'Interferometric Synthetic Aperture Radar', is a common processing pipeline to generate phase-change maps or 'interferograms' (which can be related to tectonic surface displacements or snow properties) using two different SAR acquisitions.
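# The amplitude and phase of a single complex-valued SAR sample can be recovered with NumPy (the sample value below is purely illustrative):

```python
import numpy as np

z = 0.3 + 0.4j           # one complex SAR sample (hypothetical value)
amplitude = np.abs(z)    # magnitude of the return
phase = np.angle(z)      # phase in radians
```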
#
# :::{figure-md} sar-schematic
# <img src="../../img/insar_schematic.jpg" alt="insar cartoon" width="800px">
#
# InSAR schematic depicting how phase shifts can result from earthquake displacements. [Source](https://www.ga.gov.au/scientific-topics/positioning-navigation/geodesy/geodetic-techniques/interferometric-synthetic-aperture-radar)
# :::
#
#
# ```{seealso}
# - [ESA Documentation](https://sentinel.esa.int/web/sentinel/missions/sentinel-1)
# - [Alaska Satellite Facility Documentation](https://asf.alaska.edu/data-sets/sar-data-sets/sentinel-1/)
# ```
# ## Search and Discovery
#
# ### Public Archives
# The most common data products you'll encounter for Sentinel-1 are {term}`GRD` (just amplitude) and {term}`SLC` (amplitude+phase). These are [level-1](https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels) products that many higher-level scientific products are derived from.
#
# Level-1 data is typically stored in *radar* coordinates. For SLCs, pixels are rectangular with a ratio of about 6:1 for range:azimuth, given the different resolutions in these orthogonal directions. You might notice ground control points (GCPs) stored in GRD and SLC metadata for [approximate geocoding](https://docs.qgis.org/3.16/en/docs/user_manual/working_with_raster/georeferencer.html#available-transformation-algorithms), but you need to use advanced processing pipelines to accurately geocode and generate higher level products such as interferograms {cite:p}`YagueMartinez2016`.
#
# ### Higher-level products
# There are also a growing number of cloud-hosted archives of processed Sentinel-1 data, including global $\sigma_0$ radiometric terrain corrected (RTC) data in [Google Earth Engine](https://developers.google.com/earth-engine/guides/sentinel1) and $\gamma_0$ RTC data as an [AWS public dataset](https://registry.opendata.aws/sentinel-1-rtc-indigo/). Generating higher-level products often requires a digital elevation model, so it's important to be aware of the *resolution* and *acquisition time* of that model. For example, two common free global digital elevation models used for processing include:
#
# | Product| Acquisition Date | Coverage | Resolution (m) | Open Access |
# | - | - | - | - | - |
# | [SRTM](https://lpdaac.usgs.gov/products/srtmgl1v003/) | 2000-02-11 | 60$^\circ$N to 60$^\circ$S | 30 | [link](https://lpdaac.usgs.gov/products/srtmgl1v003/) |
# | [Copernicus DEM](https://portal.opentopography.org/datasetMetadata?otCollectionID=OT.032021.4326.1) | 2019-12-01 -> | global | 30 | [link](https://registry.opendata.aws/copernicus-dem/) |
#
#
# ### On-demand processing
# SAR processing algorithms are complicated and resource intensive, so when possible it is nice to take advantage of services that generate higher-level products. For example, ASF's HyP3 service can generate RTC and interferometric products: https://hyp3-docs.asf.alaska.edu
# ## Amplitude
#
# SAR backscatter variations can be related to melting snow {cite:p}`Small2011`. Cross-polarized Sentinel-1 backscatter variation has even been shown to relate to {term}`SWE` {cite:p}`Lievens2019`. Let's use the public RTC data from earlier and interpret values over Grand Mesa.
#
# In order to work with a multidimensional time series stack of imagery we'll construct a [GDAL VRT file](https://gdal.org/drivers/raster/vrt.html) based on the following organization: `s3://sentinel-s1-rtc-indigo/tiles/RTC/1/[MODE]/[MGRS UTM zone]/[MGRS latitude label]/[MGRS Grid Square ID]/[YEAR]/[SATELLITE]_[DATE]_[TILE ID]_[ORBIT DIRECTION]/[ASSET]`. Since the code takes a few minutes to run, we only run it if the VRT doesn't already exist:
# + tags=["hide-input"]
zone = 12
latLabel = 'S'
square = 'YJ'
year = '202*' #>=2020
date = '*' #all acquisitions
polarization = 'VV'
s3Path = f's3://sentinel-s1-rtc-indigo/tiles/RTC/1/IW/{zone}/{latLabel}/{square}/{year}/{date}/Gamma0_{polarization}.tif'
# Find imagery according to S3 path pattern
s3 = s3fs.S3FileSystem(anon=True)
keys = s3.glob(s3Path[5:]) #strip s3://
print(f'Located {len(keys)} images matching {s3Path}:')
vrtName = f'stack{zone}{latLabel}{square}.vrt'
if not os.path.exists(vrtName):
    with open('s3paths.txt', 'w') as f:
        for key in keys:
            f.write("/vsis3/%s\n" % key)
    cmd = f'gdalbuildvrt -overwrite -separate -input_file_list s3paths.txt {vrtName}'
    print(cmd)
    os.system(cmd)
# -
# Load the time series stack from the VRT we created with GDAL
da3 = rioxarray.open_rasterio(vrtName, overview_level=3, chunks='auto')
da3; #omit output
# +
# Need to add time coordinates to this data
datetimes = [pd.to_datetime(x[55:63]) for x in keys]
# add new coordinate to existing dimension
da = da3.assign_coords(time=('band', datetimes))
# make 'time' active coordinate instead of integer band
da = da.swap_dims({'band':'time'})
# Name the dataset (helpful for hvplot calls later on)
da.name = 'Gamma0VV'
da
# +
#use a small bounding box over grand mesa (UTM coordinates)
xmin,xmax,ymin,ymax = [739186, 742748, 4.325443e+06, 4.327356e+06]
daT = da.sel(x=slice(xmin, xmax),
y=slice(ymax, ymin))
# NOTE: this can take a while on slow internet connections, we're reading over 100 images!
all_points = daT.where(daT!=0).hvplot.scatter('time', groupby=[], dynspread=True, datashade=True)
mean_trend = daT.where(daT!=0, drop=True).mean(dim=['x','y']).hvplot.line(title='North Grand Mesa', color='red')
(all_points * mean_trend)
# -
# ```{admonition} Interpretation
# The background backscatter for this area of interest is approximately 0.5. A distinct dip in backscatter is observed during Spring snow melt April through May, with max backscatter in June corresponding to bare ground conditions. Decreasing backscatter amplitude from June onwards is likely due to increasing vegetation.
# ```
# ## Phase
#
# InSAR phase delays can be due to a number of factors including tectonic motions, atmospheric water vapor changes, and ionospheric conditions. The theory relating InSAR phase delays due to propagation in dry snow is described in a classic study by {cite:p}`Guneriussen2001`.
#
# A first-order approximation of Snow-Water-Equivalent from InSAR phase change from {cite}`Leinss2015` is:
#
# $$
# \Delta SWE = \frac{\Delta \Phi \lambda_i}{2 \pi (1.59 + \theta_i^{5/2})}
# $$ (phase_swe)
#
# Where $\Delta\Phi$ is the measured change in phase, $\lambda_i$ is the radar wavelength, and $\theta_i$ is the incidence angle. This approximation assumes dry, homogeneous snow with a depth of less than 3 meters. Note that phase delays can also be caused by changes in atmospheric water vapor, ionospheric conditions, and tectonic displacements, so care must be taken to isolate phase changes arising from SWE changes. Isolating these signals is complicated, and more studies like SnowEx are necessary to validate satellite-based SWE retrievals against in-situ sensors.
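# As a rough numerical sketch of the approximation above (the wavelength constant and the incidence angle below are assumed illustrative values, not read from a real product):

```python
import numpy as np

WAVELENGTH_C_BAND = 0.05547  # Sentinel-1 radar wavelength in meters (5.547 cm)

def delta_swe(delta_phi, incidence_deg, wavelength=WAVELENGTH_C_BAND):
    """First-order dry-snow SWE change (meters) for a phase change (radians)."""
    theta = np.deg2rad(incidence_deg)  # incidence angle in radians
    return delta_phi * wavelength / (2 * np.pi * (1.59 + theta ** 2.5))

# one full phase cycle at a mid-swath incidence angle of ~38 degrees
dswe = delta_swe(2 * np.pi, 38)
```

# For C-band this puts one fringe at roughly a few centimeters of SWE change, which is why even modest snowfall events can be visible in interferograms.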
#
# The following cell gets you started with plotting phase data generated by ASF's on-demand InSAR processor. It takes about an hour to process an interferogram, so we've done that ahead of time (see scripts in this repository: https://github.com/snowex-hackweek/hyp3SAR).
# + tags=["remove-input"]
if not os.path.exists('/tmp/tutorial-data'):
    os.chdir('/tmp')
    os.system('git clone --depth 1 https://github.com/snowex-hackweek/tutorial-data.git')
# -
path = '/tmp/tutorial-data/sar/S1AA_20201030T131820_20201111T131820_VVP012_INT80_G_ueF_EBD2/S1AA_20201030T131820_20201111T131820_VVP012_INT80_G_ueF_EBD2_unw_phase.tif'
da = rioxarray.open_rasterio(path, masked=True).squeeze('band')
da.hvplot.image(x='x', y='y', aspect='equal', rasterize=True, cmap='bwr', title='2020/10/30_2020/11/11 Unwrapped Phase (radians)')
# ```{admonition} Interpretation
# :class: danger
# Single Sentinel-1 interferograms are usually dominated by atmospheric noise, so be cautious about interpreting surface properties such as surface displacements or SWE changes. In theory, positive changes correspond to path-length increases (or propagation delays). Phase changes are expected to be minimal for short durations, so it's common to apply a DC offset so that the image has a mean of 0, or to normalize the image to a ground-control point with an assumed zero phase shift.
# ```
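# Removing the DC offset mentioned above is a one-liner once the unwrapped phase is loaded; a minimal NumPy sketch (the sample array is made up, standing in for a real interferogram):

```python
import numpy as np

# toy unwrapped-phase array standing in for a real interferogram
phase = np.array([[1.2, 1.5], [0.9, 1.4]])
# subtract the scene mean so the image is centered on zero phase change
phase_zero_mean = phase - np.nanmean(phase)
```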
# ```{admonition} Exercises
# :class: dropdown
#
# - Plot a histogram of phase change
# - Convert the phase change into a map of SWE change
# - Process another interferogram between separate dates for the same area and average the two
# ```
# ## Next steps
#
# This tutorial just scratched the surface of Sentinel-1 SAR! Hopefully you're now eager to explore more and utilize this dataset for SnowEx projects. Below are additional references and project ideas:
#
# ```{seealso}
# - expanded tutorial on [AWS RTC public dataset](https://github.com/scottyhq/sentinel1-rtc)
# - documentation for open source [ISCE2](https://github.com/isce-framework/isce2-docs) SAR processing software
# - [UAF Microwave Remote Sensing Course](https://radar.community.uaf.edu)
# - [SERVIR SAR Handbook](https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation)
# ```
#
# ```{admonition} Project Ideas
# :class: tip
# - Explore RTC backscatter over different areas of interest and time ranges
# - Apply the C-Snow algorithm for SWE retrieval from cross-polarized backscatter with the AWS RTC public dataset
# - Process interferograms during dry-snow conditions and convert time series of phase changes due to snow accumulation. This would require looking at historical weather and other in-situ sensors
# - Compare Sentinel-1 and UAVSAR amplitude and phase products (different wavelengths and viewing geometries)
# ```
| book/tutorials/sar/sentinel1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Solving the 'SBOX_FATAL_MEMORY_EXCEEDED' error
# When trying to reopen a Jupyter notebook after a long working session, once many cells and outputs have been saved, the following error can appear: 'SBOX_FATAL_MEMORY_EXCEEDED'.
# This happened to me with a file of 104MB. You might think that file was huge, but it's not that big compared with
# other files over 500MB that I opened without hitting this problem or any other errors.
# The first thing to try is to close the notebook server and run the command below [inside](https://stackoverflow.com/questions/52730839/how-do-i-change-notebookapp-iopub-data-rate-limit-for-jupyter) the command prompt (inside Anaconda Prompt if you're using it):
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
# The command above increases the data rate limit, which resolves the issue in many cases, as it did for me before. But it did not fix this specific problem. If it does not work for you either, another [way](https://mindtrove.info/jupyter-tidbit-clear-outputs/) is to remove all cell outputs from the notebook, printing the cleaned notebook and redirecting it into another file as shown below:
jupyter nbconvert my_input_notebook.ipynb --to notebook --ClearOutputPreprocessor.enabled=True --stdout > my_output_notebook.ipynb
# A file 'my_output_notebook.ipynb' is created without any of the cell outputs, which decreases the file size dramatically and gets rid of the 'SBOX_FATAL_MEMORY_EXCEEDED'
# error. In my case this worked, and the file size dropped from 104MB to 1.3MB, a huge difference. Another option is to convert the notebook to an executable script:
jupyter nbconvert --to script my_notebook.ipynb
# The command above also works nicely: it produces a .py file from the .ipynb without having to open the notebook. If I tried to open the original .ipynb file to download it as a .py file, it would crash before even loading, so this can be a good option. Other options that might also work for you are listed [here](https://nbconvert.readthedocs.io/en/latest/usage.html#convert-script). Do not forget that a .ipynb file is a JSON file, so yet another way is to open the .ipynb file as JSON and copy out the lines of code you want. The [code](https://stackoverflow.com/questions/43334836/how-can-we-convert-ipynb-file-to-json) is below:
import json
with open('data.ipynb', mode='r', encoding='utf-8') as f:
    myfile = json.loads(f.read())
myfile
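# Once the notebook is loaded as JSON, the code cells can be pulled out directly. A minimal self-contained sketch (the notebook dict below is a made-up stand-in for `myfile`; real files follow the same schema):

```python
# stand-in for a notebook loaded with json.loads()
notebook = {
    "cells": [
        {"cell_type": "markdown", "source": ["# Title\n"]},
        {"cell_type": "code", "source": ["x = 1\n", "print(x)\n"]},
    ]
}
# collect the source text of every code cell
code_cells = [
    "".join(cell["source"])
    for cell in notebook["cells"]
    if cell["cell_type"] == "code"
]
```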
# I hope this is useful; I'm sure other people have come across the same problem. The first three cells with commands should be run inside the command prompt (Anaconda Prompt is preferred), and the last cell must be run inside a Jupyter notebook input cell instead. It was not run here because the JSON of my .ipynb file is enormous and contains restricted information, but I ran the same code in another notebook file and it works fine. Thanks for reading!
| sbox_fatal_memory_exceeded.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
import azureml.core
from azureml.core import Workspace, Experiment, Run, Dataset
from azureml.core import ScriptRunConfig
ws = Workspace.from_config()
#TODO: load variables from config file
exp_name = ""
dataset_name = ""
dataset = Dataset.get_by_name(ws, dataset_name)
exp = Experiment(workspace=ws, name=exp_name)
with exp.start_logging() as run:
    run.log(name="message", value="my message")
    run.log('Accuracy', 0.99)
| notebooks/01_train_local.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/pedroescobedob/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/Pedro_Escobedo_LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="11OzdxWTM7UR" colab_type="text"
# ## Assignment - Build a confidence interval
#
# A confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.
#
# 52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.
#
# In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.
#
# But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.
#
# How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the computed interval to contain the true population value ~95 times."
#
# For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.
#
# Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.
#
# Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):
#
#
# ### Confidence Intervals:
# 1. Generate and numerically represent a confidence interval
# 2. Graphically (with a plot) represent the confidence interval
# 3. Interpret the confidence interval - what does it tell you about the data and its distribution?
#
# ### Chi-squared tests:
# 4. Take a dataset that we have used in the past in class that has **categorical** variables. Pick two of those categorical variables and run a chi-squared tests on that data
# - By hand using Numpy
# - In a single line using Scipy
#
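# As a quick illustration of the chi-squared mechanics requested above (the 2x2 table is made-up data, not the voting records):

```python
import numpy as np
from scipy import stats

observed = np.array([[20, 30], [30, 20]])

# "by hand" with NumPy: expected counts from the marginal totals
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row * col / observed.sum()
chi2_manual = ((observed - expected) ** 2 / expected).sum()

# in a single line with SciPy (correction=False matches the manual computation)
chi2_scipy, p, dof, exp = stats.chi2_contingency(observed, correction=False)
```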
# + id="Ckcr4A4FM7cs" colab_type="code" colab={}
# TODO - your code!
# + id="AQ9cyrSWFlUG" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import random
from matplotlib import style
# + id="<KEY>" colab_type="code" colab={}
from google.colab import files
uploaded = files.upload()
# + id="7xhM4vZYGIRY" colab_type="code" colab={}
columns = ['Democrat or Republican',
'Handicapped infants', 'Water project','Adoption of the budget resolution',
'Physician fee','El Salvador Aid','Religious groups in schools',
'Anti satellite test ban', 'Aid to Nicaraguan contras','Mx missile',
'Immigration','Synfuels corporation cutback','Education spending',
'Superfund right to sue','Crime','Duty free exports',
'Export Administration Act South Africa']
# + id="d0aT5N7SO9oz" colab_type="code" colab={}
house_votes = pd.read_csv('house-votes-84.data', names=columns)
# + id="j6mmCGa2SjPb" colab_type="code" outputId="01f69833-a13e-40af-c93f-3c4d11c7ecce" colab={"base_uri": "https://localhost:8080/", "height": 34}
house_votes.shape
# + id="DOeAP9fIPXvw" colab_type="code" outputId="c4520e07-557e-4f27-cafc-50de864b048b" colab={"base_uri": "https://localhost:8080/", "height": 430}
house_votes.head(10)
# + id="TlUEZNAoW3UA" colab_type="code" colab={}
# defining what a missing values is
missing_values = ['?']
# + id="lcLnBQJ2XEkw" colab_type="code" colab={}
# Replacing the missing values with N/A values
df = pd.read_csv('house-votes-84.data', names=columns, na_values=missing_values)
# + id="jeFaBk4aXnjg" colab_type="code" outputId="ba2ea050-db80-40ef-b118-4680c10180e9" colab={"base_uri": "https://localhost:8080/", "height": 275}
df.head()
# + id="klOKZOwAYWby" colab_type="code" outputId="9d0aed2b-d0e1-4215-a42b-743fa6975134" colab={"base_uri": "https://localhost:8080/", "height": 34}
df.shape
# + id="sjspye-xX-WR" colab_type="code" outputId="8ea66e2d-6e6f-4455-8e04-0028298460b6" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Total number of N/A values
print(df.isnull().sum().sum())
# + id="HvC_AAVPb2E3" colab_type="code" colab={}
# Converting strings into integers
y_or_n = {'y': 1, 'n': 2}
# + id="_XHjKKTIeDiY" colab_type="code" outputId="18f5ba8b-6969-43ef-f0d3-4de836da6002" colab={"base_uri": "https://localhost:8080/", "height": 275}
new_df = df.replace(y_or_n)
new_df.head()
# + id="ysKpSv1ifpYP" colab_type="code" colab={}
pd.options.display.float_format = '{:,.0f}'.format  # display floats without decimals
# + id="UKztxANaUPzQ" colab_type="code" colab={}
democrat = new_df[new_df['Democrat or Republican'] == 'democrat']
# + id="RhiuZCAgUdDF" colab_type="code" outputId="9924da30-47f3-4ee2-a295-35692d7a49ec" colab={"base_uri": "https://localhost:8080/", "height": 34}
democrat.shape
# + id="BPCcM8vsUgwL" colab_type="code" colab={}
republican = new_df[new_df['Democrat or Republican'] == 'republican']
# + id="ntb3UMY2Uk-O" colab_type="code" outputId="a9c0f528-3d8d-475c-df57-88ae8e287103" colab={"base_uri": "https://localhost:8080/", "height": 34}
republican.shape
# + id="1z5bPXIhUpm1" colab_type="code" outputId="e4bcf396-1ed7-47d3-b5d6-400046e0e62c" colab={"base_uri": "https://localhost:8080/", "height": 306}
democrat_1 = democrat.median()
democrat_1
# + id="9Mx6ONqJLjDw" colab_type="code" outputId="15c03b1f-9182-4e52-9812-b101367b4128" colab={"base_uri": "https://localhost:8080/", "height": 1000}
democrat.fillna(democrat_1, inplace=True)
democrat
# + id="NxZAxCMtK67f" colab_type="code" outputId="8a10634a-c763-4ada-dbc1-7efb858e59e4" colab={"base_uri": "https://localhost:8080/", "height": 306}
republican_1 = republican.median()
republican_1
# + id="jvTZKt7t8-1n" colab_type="code" outputId="80b855be-544c-4407-d2f4-588c312dbb0d" colab={"base_uri": "https://localhost:8080/", "height": 1000}
republican.fillna(republican_1, inplace=True)
republican
# + id="jyLxW1wNKpr1" colab_type="code" colab={}
new_df.update(democrat)
# + id="SITbClY1JSLB" colab_type="code" outputId="a99e92db-a3d2-4eb2-f435-dc9f2cd65d2f" colab={"base_uri": "https://localhost:8080/", "height": 1000}
new_df
# + id="ZLBBPSf4N9GK" colab_type="code" colab={}
new_df.update(republican)
# + id="zW3htMe3OE2Z" colab_type="code" outputId="4009934b-7acb-42c9-b371-8816d89618ef" colab={"base_uri": "https://localhost:8080/", "height": 1000}
new_df
# + id="GI4Dcb5uweuM" colab_type="code" colab={}
# + [markdown] id="xpWSwNRdwfE3" colab_type="text"
# # **Confidence Interval**
# + id="mN9QZI3ueyVz" colab_type="code" colab={}
import numpy as np
import scipy.stats

def mean_confidence_interval(new_df, confidence=0.95):
    a = 1.0 * np.array(new_df)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    return m, m - h, m + h
# + id="aCjEbiXSfbcF" colab_type="code" colab={}
from scipy import stats
def confidence_interval(data, confidence=0.95):
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
t = stats.t.ppf((1 + confidence) / 2.0, n - 1)
interval = stderr * t
return (mean, mean - interval, mean + interval)
# + id="8i-ADdYLkQ7O" colab_type="code" colab={}
d_handicap = democrat['Handicapped infants']
# + id="r3yOrnZRl8ZG" colab_type="code" outputId="acaefc9a-1e23-4558-fec8-a76405d1c280" colab={"base_uri": "https://localhost:8080/", "height": 119}
sample_size = 100
sample = d_handicap.sample(sample_size)
sample.head()
# + id="6SW7NFVzqT8p" colab_type="code" outputId="ca4d856f-6a34-482a-eff8-32131663966b" colab={"base_uri": "https://localhost:8080/", "height": 34}
sample_mean = sample.mean()
sample_std = np.std(sample, ddof=1)
print(sample_mean, sample_std)
# + id="3gGglM_kqaX4" colab_type="code" outputId="6ee482cb-5450-4277-eec9-a19ea075cabd" colab={"base_uri": "https://localhost:8080/", "height": 34}
standard_error = sample_std/np.sqrt(sample_size)
standard_error
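# An illustrative aside (made-up numbers, not the house-votes sample): the
# standard error shrinks with the square root of the sample size, so
# quadrupling n halves it.

```python
import numpy as np

sd = 0.5  # hypothetical sample standard deviation
se_100 = sd / np.sqrt(100)  # standard error with n = 100
se_400 = sd / np.sqrt(400)  # standard error with n = 400
print(se_100, se_400)
```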
# + id="0KXad7H2qehM" colab_type="code" outputId="e2d98f00-fc9f-4e2e-b598-4a157ce79b76" colab={"base_uri": "https://localhost:8080/", "height": 34}
t = 1.984  # t critical value for 95% confidence with df = n - 1 = 99
(sample_mean, sample_mean - t*standard_error, sample_mean + t*standard_error)
# + id="GY54xr8LrVYp" colab_type="code" outputId="e103c8a7-c4da-41d7-a023-c65fc9ce6ed0" colab={"base_uri": "https://localhost:8080/", "height": 34}
d_handicap_confidence = confidence_interval(sample, confidence=0.95)
d_handicap_confidence
# + [markdown] id="srCHhAOBwm5S" colab_type="text"
# # **Confidence interval (Graph)**
# + id="vGzzgrofwyee" colab_type="code" outputId="262cdfab-fc1e-4eff-ea97-feda6b9ca528" colab={"base_uri": "https://localhost:8080/", "height": 269}
democrat['Handicapped infants'].hist(bins=20);
# + id="XGBaFurR1IUi" colab_type="code" outputId="789c6571-195f-4b12-c811-7a85948367e7" colab={"base_uri": "https://localhost:8080/", "height": 286}
# plot the sample mean as a single point, with the standard error as its error bar
plt.errorbar(x=sample_mean, y=sample_mean, yerr=standard_error, fmt='o');
# + id="qQkBd2d672AC" colab_type="code" outputId="3a04ef6c-aa4a-4ec0-facd-6ed6b7898641" colab={"base_uri": "https://localhost:8080/", "height": 283}
sns.distplot(democrat['Handicapped infants'], color='r')
sns.distplot(republican['Handicapped infants'], color='b');
# + [markdown] id="pyuaQ3Mj51Mx" colab_type="text"
# # Interpret the confidence interval
# + [markdown] id="9ur3j2UO8qTy" colab_type="text"
# We are 95% confident that the interval above contains the true mean democrat vote on the Handicapped Infants bill; equivalently, if we repeated this sampling many times, about 95% of the intervals built this way would contain the true mean. Since votes are coded y = 1 and n = 2, the position of the interval within [1, 2] indicates how strongly democrats leaned toward supporting or opposing the bill; here it suggests democrats leaned against supporting it.
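# As a cross-check (on synthetic 1/2-coded data, not the actual house-votes
# sample), scipy's built-in t.interval reproduces the manual
# mean +/- t * SE construction used above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.integers(1, 3, size=100)  # values in {1, 2}, mimicking the y/n coding

mean = data.mean()
se = stats.sem(data)          # standard error with ddof=1, as above
n = len(data)
t_crit = stats.t.ppf(0.975, n - 1)
manual = (mean - t_crit * se, mean + t_crit * se)

# scipy computes the same interval in one call
scipy_ci = stats.t.interval(0.95, n - 1, loc=mean, scale=se)
print(manual, scipy_ci)
```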
# + id="Arm1Qhok9c-t" colab_type="code" colab={}
# + [markdown] id="n6fk1oYt9N0g" colab_type="text"
# # Another Dataset
# + id="Ph1ShDULCfBg" colab_type="code" colab={}
import pandas as pd
import numpy as np
# + id="zwCtQcBi9ZwM" colab_type="code" outputId="376a453e-c80e-40db-e316-c388cb0c8fd6" colab={"base_uri": "https://localhost:8080/", "height": 204}
exercise_df = pd.read_csv('https://raw.githubusercontent.com/pedroescobedob/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
exercise_df.head()
# + id="0pLl3wEXAO1J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="9bd0a33a-5337-4690-ea5b-9e9117e303ad"
# !pip install pandas==0.23.4
# + id="owx7FZxz_rUZ" colab_type="code" outputId="b9073feb-fd60-402e-f7c3-f58b761a9241" colab={"base_uri": "https://localhost:8080/", "height": 266}
pd.crosstab(exercise_df['weight'], exercise_df['exercise_time'])
time_e = pd.cut(exercise_df['exercise_time'], 5, labels=[-0.3, 60.0, 120.0, 180.0, 300.0])
weight_e = pd.cut(exercise_df['weight'], 5)
observed = pd.crosstab(weight_e, time_e, margins=True)
observed
# + colab_type="code" outputId="70760d2f-c746-4af2-c2ba-7ecab367564b" id="spkbvLwXDs7H" colab={"base_uri": "https://localhost:8080/", "height": 51}
row_sums = observed.iloc[0:6, 5].values
col_sums = observed.iloc[5, 0:6].values
print(row_sums)
print(col_sums)
# + colab_type="code" id="ttEtO_esDssp" colab={}
# expected frequency for each cell = (row total * column total) / grand total
total = row_sums[-1]  # grand total (the "All" margin)
expected = []
for row_sum in row_sums:
    expected_row = []
    for column in col_sums:
        expected_val = column * row_sum / total
        expected_row.append(expected_val)
    expected.append(expected_row)
expected = np.array(expected)
# + id="aVwvypcU-ty6" colab_type="code" outputId="80e65afa-a4b2-4304-b94f-03683596c0e9" colab={"base_uri": "https://localhost:8080/", "height": 153}
chi_square = ((observed - expected) ** 2 / expected).sum()
chi_square
# + [markdown] id="XJytbat_H7bp" colab_type="text"
# # Scipy
# + id="NZtecTt6H9Ik" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="550b77b5-5c56-470c-e212-9f0f29e6e505"
# use the table without the "All" margins, otherwise the margin row and column
# would be treated as extra categories
chi_squared, p_value, dof, expected = stats.chi2_contingency(pd.crosstab(weight_e, time_e))
print(chi_squared, p_value, dof, expected)
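# A sanity check on a tiny made-up 2x2 table (not the exercise data): the
# expected-frequency formula row_total * col_total / grand_total, and the
# resulting chi-square statistic, match scipy's chi2_contingency output.

```python
import numpy as np
from scipy import stats

observed = np.array([[10, 20],
                     [30, 40]])
row_totals = observed.sum(axis=1)
col_totals = observed.sum(axis=0)
grand_total = observed.sum()

# expected cell counts under independence
expected = np.outer(row_totals, col_totals) / grand_total
chi_square_manual = ((observed - expected) ** 2 / expected).sum()

# correction=False disables Yates' continuity correction so the 2x2
# statistic matches the plain formula above
chi2, p, dof, expected_scipy = stats.chi2_contingency(observed, correction=False)
print(chi_square_manual, chi2)
```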
| Pedro_Escobedo_LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Description
#
# Produce a plot showing what fraction of the genome is covered at different depths.
import phase3_data
v3 = phase3_data.release_data()
import numba
import numpy as np
import dask.array as da
# kubernetes cluster setup
n_workers = 50
from dask_kubernetes import KubeCluster
cluster = KubeCluster()
cluster.scale_up(n_workers)
#cluster.adapt(minimum=1, maximum=n_workers)
cluster
# dask client setup
from dask.distributed import Client, progress
client = Client(cluster)
client
values = 30, 20, 10, 5, 1
all_meta = v3.load_metadata_by_sampleset(v3.all_samplesets)
@numba.njit
def val_sum(block):
    # depth thresholds, highest first; each depth value is counted in exactly
    # one bucket here, and a later cumsum turns these exclusive bucket counts
    # into "covered at depth >= threshold" counts
    values = np.array([30, 20, 10, 5, 1], dtype=np.int32)
    out = np.zeros((block.shape[1], 5), dtype=np.int32)
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            if block[i, j] >= values[0]:
                out[j, 0] += 1
            elif block[i, j] >= values[1]:
                out[j, 1] += 1
            elif block[i, j] >= values[2]:
                out[j, 2] += 1
            elif block[i, j] >= values[3]:
                out[j, 3] += 1
            elif block[i, j] >= values[4]:
                out[j, 4] += 1
    # add a leading axis so dask can stack the per-block results along axis 0
    return out.reshape((1, block.shape[1], 5))
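# A pure-NumPy sketch (no numba or dask, on a tiny made-up depth matrix) of the
# same per-sample bucketing logic, showing how the cumsum turns exclusive
# buckets into "covered at depth >= threshold" counts.

```python
import numpy as np

thresholds = np.array([30, 20, 10, 5, 1])
depths = np.array([[35, 12, 0],
                   [ 7, 25, 3]])  # rows = positions, columns = samples

counts = np.zeros((depths.shape[1], len(thresholds)), dtype=np.int64)
for col in range(depths.shape[1]):
    for d in depths[:, col]:
        for b, thr in enumerate(thresholds):
            if d >= thr:
                counts[col, b] += 1
                break  # each depth lands in exactly one (highest) bucket

# bucket b now counts positions with depth >= thresholds[b]
at_least = counts.cumsum(axis=1)
print(at_least)
```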
chromosomes = "2L", "2R", "3L", "3R", "X"
def compute_coverage_bases(selected_samples, seq_ids=chromosomes):
result = np.empty(
(len(seq_ids), # n contigs
selected_samples.sum(), # n samples
5)) # n values
for ix, chrom in enumerate(seq_ids):
dp = v3.load_calldata_by_sampleset(chrom, v3.all_samplesets, field="AD").sum(axis=2)
print(f"processing {chrom}")
# remove males and arabiensis.
dp_sel = da.compress(selected_samples.values, dp, axis=1)
r = da.map_blocks(
val_sum,
dp_sel,
dtype=np.int32,
chunks=(1, dp_sel.chunks[1], 5),
new_axis=(2,))
co = r.compute()
result[ix] = co.sum(axis=0).cumsum(axis=1)
contig_sizes = np.array([v3.load_mask(sid, "gamb_colu").shape[0] for sid in seq_ids])
all_chroms_norm = result.sum(0) / contig_sizes.sum()
assert all_chroms_norm.max() <= 1.0, "bug, fraction can never exceed 1.0"
return all_chroms_norm, result
sel_samples_gambcolu_f = all_meta.is_gamb_colu & (all_meta.sex_call == 'F')
sel_samples_arab_f = all_meta.is_arabiensis & (all_meta.sex_call == 'F')
# +
r = {}
for label, sel in zip(["gamb_colu", "arabiensis"], [sel_samples_gambcolu_f, sel_samples_arab_f]):
normed_r, _ = compute_coverage_bases(sel)
r[label] = np.percentile(normed_r, [1, 5, 50, 95, 99], axis=0)
# -
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
for key, percentiles in r.items():
fig, ax = plt.subplots(figsize=(4, 4))
plot_col = "purple"
ax.plot(values, percentiles[2], '-o', color=plot_col)
ax.fill_between(values, percentiles[0], percentiles[-1], alpha=0.1, color=plot_col)
ax.fill_between(values, percentiles[1], percentiles[-2], alpha=0.25, color=plot_col)
ax.grid(True)
ax.set_ylabel("fraction genome covered at X")
ax.set_xlabel("X coverage")
ax.set_ylim((0, 1.0))
ax.set_title(f"${key.replace('_', ' ')}$")
fig.savefig(f"../content/images/{key}_frac_covered_X.svg")
sel_samples_gambcolu_f.sum(), sel_samples_arab_f.sum()
# ### Caption
#
# Fig X. Plot showing the median fraction of genome covered by at least X reads, where X is the value on the x-axis. As well as the median, the 5/95 and 1/99 percentiles are displayed as filled ranges. Only female samples are shown, allowing inclusion of the X chromosome, in A) A. gambiae and coluzzii (n=2166), and B) arabiensis (n=365).
cluster.adapt()
| notebooks/sequencing_fraction-by-coverage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Question 1
altitude = input("Enter the altitude: ")
alt = int(altitude)
if alt <= 1000:
    print("You are safe to land")
elif 1000 < alt <= 5000:
    print("Bring down the plane to 1000")
else:
    print("Turn around, Better luck next time")
# # Question 2
for num in range(200):
if num > 1:
for i in range(2,num):
if (num % i) == 0:
break
else:
print(num)
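# The for/else trick above can be packaged as a reusable predicate; this sketch
# (my own helper, not part of the assignment) also skips even numbers and stops
# at sqrt(n) for speed.

```python
def is_prime(n):
    # trial division up to sqrt(n), checking only 2 and odd candidates
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for i in range(3, int(n ** 0.5) + 1, 2):
        if n % i == 0:
            return False
    return True

primes_under_50 = [n for n in range(50) if is_prime(n)]
print(primes_under_50)
```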
| Day 3 Assignnment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
import random
np.random.seed(1)
random.seed(1)
# +
def p(k, i, xi, A, a, h, k2coord, Gt):
return 1 / (1 + math.exp(-2 * I(k, i, xi, A, a, h, k2coord, Gt)))
def I(k, i, xi, A, a, h, k2coord, Gt):
total = 0
zeta = random.uniform(-1,1) # sampled for each unique (k,i)
for j in k2coord[k]: # for each coordinate in cluster k
eta = random.uniform(-1,1) # different for each cell
sigma = Gt[j]
total += ((A*xi[k] + a*eta) * sigma) + h*zeta
return (1 / len(k2coord[k])) * total
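# The activation probability p above is a logistic transform of the local
# field I, so it always lies strictly between 0 and 1. A standalone check of
# that mapping (independent of the simulation state):

```python
import math

def logistic2(x):
    # same squashing function as p(): 1 / (1 + exp(-2x))
    return 1 / (1 + math.exp(-2 * x))

for field in (-10, -1, 0, 1, 10):
    print(field, logistic2(field))
```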
def cluster_info(arr):
""" number of clusters (nonzero fields separated by 0s) in array
and size of cluster
"""
data = []
k2coord = {}
k = 0
if arr[0] != 0: # left boundary
data.append(0) # we will increment later in loop
k2coord[k] = []
else:
k=-1
# print("arr", arr)
# print("data", data)
for i in range(0,len(arr)-1):
if arr[i] == 0 and arr[i+1] != 0:
data.append(0)
k += 1
k2coord[k] = []
if arr[i] != 0:
data[-1] += 1
k2coord[k].append(i)
if arr[-1] != 0:
if data: # if array is not empty
data[-1] += 1 # right boundary
k2coord[k].append(len(arr)-1)
else:
data.append(1)
k2coord[k] = [len(arr)-1]
Ncl = len(data) # number of clusters
Nk = data # Nk[k] = size of cluster k
coord2k = {e:k for k,v in k2coord.items() for e in v}
return Ncl, Nk, k2coord, coord2k
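# An independent cross-check of the cluster bookkeeping on a toy array, using
# itertools.groupby (this helper is my own, for verifying the expected cluster
# sizes; clusters are runs of nonzero entries separated by zeros).

```python
from itertools import groupby

def clusters_by_groupby(arr):
    # sizes of runs of nonzero entries, left to right
    return [len(list(run))
            for nonzero, run in groupby(arr, key=lambda v: v != 0)
            if nonzero]

toy = [1, -1, 0, 0, 1, 0, -1, -1, -1]
print(clusters_by_groupby(toy))
```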
# + tags=[]
# pd = 0.25
# pe = 0.02
# ph = 0.18 # vary
A = 1.8 # interaction strength between agents
a = 2*A # randomness of A
h = 0 # external field reflecting the effects of the environmnet
pd = 0.05 # probability that an active trader diffuses and becomes inactive
pe = 0.01 # probability that a nontrading enters the market
ph = 0.0493 # probability that an active trader can turn one of his inactive neighbors into an active one
pa = 0.1 # active and inactive distribution
N0 = 2000# timepoints
N1 = 200
G = np.zeros(shape=(N0,N1)).astype(int)
G[0] = np.random.choice(a=[-1,0,1], p=[pa/2, 1-pa, pa/2], size=N1, replace=True)
x = np.empty(N0)
for t in range(N0):
Ncl, Nk, k2coord, coord2k = cluster_info(G[t])
# print(k2coord, coord2k)
# break
xi = np.random.uniform(-1, 1, size=Ncl) # unique xi for each cluster k
# print(Ncl, Nk, k2coord, coord2k, xi)
xt = 0
for k, size in enumerate(Nk):
tmp = 0
for i in k2coord[k]:
tmp += G[t,i]
xt += size * tmp
x[t] = xt
if t == N0-1:
# last iteration, we stop
break
for i in range(N1):
# traders update their stance
if G[t,i] != 0:
k = coord2k[i]
# print(k)
pp = p(k, i, xi, A, a, h, k2coord, G[t])
if random.random() < pp:
G[t+1,i] = 1
else:
G[t+1,i] = -1
# trader influences non-active neighbour to join
if G[t,i] != 0:
stance = G[t,i]
if random.random() < ph:
if G[t,(i-1)%N1] == 0 and G[t,(i+1)%N1] == 0:
ni = np.random.choice([-1,1])
G[t+1,(i+ni)%N1] = stance#random.choice([-1,1])
elif G[t,(i-1)%N1] == 0:
G[t+1,(i-1)%N1] = stance#random.choice([-1,1])
elif G[t,(i+1)%N1] == 0:
G[t+1,(i+1)%N1] = stance#random.choice([-1,1])
else:
continue
# active trader diffuses if it has inactive neighbour
# only happens at edge of cluster
if G[t,i] != 0:
if random.random() < pd:
if (G[t,(i-1)%N1] == 0) or (G[t,(i+1)%N1] == 0):
G[t+1,i] = 0
else:
continue
# nontrader enters market
if G[t,i] == 0:
if random.random() < pe:
G[t+1,i] = np.random.choice([-1,1])
fig, (ax1, ax2) = plt.subplots(
ncols=1, nrows=2, figsize=(12,5), sharex=True, gridspec_kw = {'wspace':0, 'hspace':0}
)
ax1.imshow(G.T, cmap="binary", interpolation="None", aspect="auto")
# plt.colorbar()
r = (x - np.mean(x)) / np.std(x)
s = 100
S = np.zeros_like(x)
S[0] = s
for i in range(1,N0):
# S[i] = S[i-1] + (S[i-1] * r[i])
S[i] = S[i-1] + (S[i-1] * r[i]/100)
ax2.plot(S)
ax2.grid(alpha=0.4)
ax2.set_xlabel("time")
# ax2.set_ylabel("standardised log returns")
ax2.set_ylabel("close price")
ax1.set_ylabel("agents")
plt.tight_layout()
plt.show()
# -
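# The incremental price update S[i] = S[i-1] * (1 + r[i]/100) used in the loop
# above is equivalent to a cumulative product; a standalone check on synthetic
# returns (not the simulation output):

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.standard_normal(1000)  # synthetic standardised returns
s0 = 100.0

# loop form, as in the simulation
S_loop = np.empty_like(r)
S_loop[0] = s0
for i in range(1, len(r)):
    S_loop[i] = S_loop[i - 1] + S_loop[i - 1] * r[i] / 100

# vectorised form: cumulative product of the per-step growth factors
S_vec = s0 * np.cumprod(np.concatenate(([1.0], 1 + r[1:] / 100)))
print(np.allclose(S_loop, S_vec))
```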
| code/charel/bartolozzi2004.ipynb |