# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="TsHV-7cpVkyK" colab_type="text"
# ##### Copyright 2019 The TensorFlow Authors.
# + id="atWM-s8yVnfX" colab_type="code" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="TB0wBWfcVqHz" colab_type="text"
# # Hyperparameter Tuning with the HParams Dashboard
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tensorboard/r2/hyperparameter_tuning_with_hparams"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/r2/hyperparameter_tuning_with_hparams.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/r2/hyperparameter_tuning_with_hparams.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] id="elH58gbhWAmn" colab_type="text"
# When building machine learning models, you need to choose various [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)), such as the dropout rate in a layer or the learning rate. These decisions impact model metrics, such as accuracy. Therefore, an important step in the machine learning workflow is to identify the best hyperparameters for your problem, which often involves experimentation. This process is known as "Hyperparameter Optimization" or "Hyperparameter Tuning".
#
# The HParams dashboard in TensorBoard provides several tools to help with this process of identifying the best experiment or most promising sets of hyperparameters.
#
# This tutorial will focus on the following steps:
#
# 1. Experiment setup and HParams summary
# 2. Adapt TensorFlow runs to log hyperparameters and metrics
# 3. Start runs and log them all under one parent directory
# 4. Visualize the results in TensorBoard's HParams dashboard
#
# Note: The HParams summary APIs and dashboard UI are in a preview stage and will change over time.
#
# Start by installing TF 2.0 and loading the TensorBoard notebook extension:
# + id="8p3Tbx8cWEFA" colab_type="code" colab={}
# !pip install -q tf-nightly-2.0-preview
# Load the TensorBoard notebook extension
# %load_ext tensorboard
# + id="lEWCCQYkWIdA" colab_type="code" colab={}
# Clear any logs from previous runs
# !rm -rf ./logs/
# + [markdown] id="9GtR_cTTkf9G" colab_type="text"
# Import TensorFlow and the TensorBoard HParams plugin:
# + id="mVtYvbbIWRkV" colab_type="code" colab={}
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp
# + [markdown] id="XfCa27_8kov6" colab_type="text"
# Download the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset and scale it:
# + id="z8b82G7YksOS" colab_type="code" outputId="8d018f2d-7574-4af2-b8cb-0452f8d54724" colab={"base_uri": "https://localhost:8080/", "height": 153}
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# + [markdown] id="0tsTxO85WaYq" colab_type="text"
# ## 1. Experiment setup and the HParams experiment summary
#
# Experiment with three hyperparameters in the model:
#
# 1. Number of units in the first dense layer
# 2. Dropout rate in the dropout layer
# 3. Optimizer
#
# List the values to try, and log an experiment configuration to TensorBoard. This step is optional: you can provide domain information to enable more precise filtering of hyperparameters in the UI, and you can specify which metrics should be displayed.
# + id="5Euw0agpWb4V" colab_type="code" colab={}
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
hp.hparams_config(
hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
)
# + [markdown] id="T_T95RrSIVO6" colab_type="text"
# If you choose to skip this step, you can use a string literal wherever you would otherwise use an `HParam` value: e.g., `hparams['dropout']` instead of `hparams[HP_DROPOUT]`.
# + [markdown] id="va9XMh-dW4_f" colab_type="text"
# ## 2. Adapt TensorFlow runs to log hyperparameters and metrics
#
# The model will be quite simple: two dense layers with a dropout layer between them. The training code will look familiar, although the hyperparameters are no longer hardcoded. Instead, the hyperparameters are provided in an `hparams` dictionary and used throughout the training function:
# + id="hG-zalNfW5Zl" colab_type="code" colab={}
def train_test_model(hparams):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.compile(
optimizer=hparams[HP_OPTIMIZER],
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
model.fit(x_train, y_train, epochs=1) # Run with 1 epoch to speed things up for demo purposes
_, accuracy = model.evaluate(x_test, y_test)
return accuracy
# + [markdown] id="46oJF8seXM7v" colab_type="text"
# For each run, log an hparams summary with the hyperparameters and final accuracy:
# + id="8j-fO6nEXRfW" colab_type="code" colab={}
def run(run_dir, hparams):
with tf.summary.create_file_writer(run_dir).as_default():
hp.hparams(hparams) # record the values used in this trial
accuracy = train_test_model(hparams)
tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)
# + [markdown] id="2mYdW0VKLbFE" colab_type="text"
# When training Keras models, you can use callbacks instead of writing these directly:
#
# ```python
# model.fit(
# ...,
# callbacks=[
# tf.keras.callbacks.TensorBoard(logdir), # log metrics
# hp.KerasCallback(logdir, hparams), # log hparams
# ],
# )
# ```
# + [markdown] id="u2nOgIKAXdcb" colab_type="text"
# ## 3. Start runs and log them all under one parent directory
#
# You can now try multiple experiments, training each one with a different set of hyperparameters.
#
# For simplicity, use a grid search: try all combinations of the discrete parameters and just the lower and upper bounds of the real-valued parameter. For more complex scenarios, it can be more effective to sample each hyperparameter value at random (a random search). More advanced methods, such as Bayesian optimization, can also be used.
#
# Run a few experiments, which will take a few minutes:
# + id="KbqT5n-AXd0h" colab_type="code" outputId="f906b680-f941-4c7c-9b15-4dcc760bf2bb" colab={"base_uri": "https://localhost:8080/", "height": 649}
session_num = 0
for num_units in HP_NUM_UNITS.domain.values:
for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
for optimizer in HP_OPTIMIZER.domain.values:
hparams = {
HP_NUM_UNITS: num_units,
HP_DROPOUT: dropout_rate,
HP_OPTIMIZER: optimizer,
}
run_name = "run-%d" % session_num
print('--- Starting trial: %s' % run_name)
print({h.name: hparams[h] for h in hparams})
run('logs/hparam_tuning/' + run_name, hparams)
session_num += 1
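# As a point of comparison, a random search draws each hyperparameter independently for a fixed number of trials instead of enumerating the grid. A minimal sketch (the dictionaries below use plain string keys, as described in step 1, and could each be passed to a `run`-style wrapper):

```python
import random

random.seed(0)  # fixed seed so the demo is reproducible

# Same search space as the HParam definitions above
num_units_choices = [16, 32]
dropout_interval = (0.1, 0.2)
optimizer_choices = ['adam', 'sgd']

# Draw a handful of random configurations instead of the full grid
trials = []
for _ in range(4):
    trials.append({
        'num_units': random.choice(num_units_choices),
        'dropout': random.uniform(*dropout_interval),  # any value in the interval
        'optimizer': random.choice(optimizer_choices),
    })

for trial in trials:
    print(trial)
```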
# + [markdown] id="qSyjWQ3mPKT9" colab_type="text"
# ## 4. Visualize the results in TensorBoard's HParams plugin
# + [markdown] id="5VBvplwyP8Vy" colab_type="text"
# The HParams dashboard can now be opened. Start TensorBoard and click on "HParams" at the top.
# + id="Xf4KM-U2bbP_" colab_type="code" colab={}
# %tensorboard --logdir logs/hparam_tuning
# + [markdown] id="UTWg9nXnxWWI" colab_type="text"
# <img class="tfo-display-only-on-site" src="images/hparams_table.png?raw=1"/>
# + [markdown] id="4RPGbR9EWd4o" colab_type="text"
# The left pane of the dashboard provides filtering capabilities that are active across all the views in the HParams dashboard:
#
# - Filter which hyperparameters/metrics are shown in the dashboard
# - Filter which hyperparameter/metric values are shown in the dashboard
# - Filter on run status (running, success, ...)
# - Sort by hyperparameter/metric in the table view
# - Number of session groups to show (useful for performance when there are many experiments)
#
# + [markdown] id="Z3Q5v28XaCt8" colab_type="text"
# The HParams dashboard has three different views, with various useful information:
#
# * The **Table View** lists the runs, their hyperparameters, and their metrics.
# * The **Parallel Coordinates View** shows each run as a line going through an axis for each hyperparameter and metric. Click and drag the mouse on any axis to mark a region which will highlight only the runs that pass through it. This can be useful for identifying which groups of hyperparameters are most important. The axes themselves can be re-ordered by dragging them.
# * The **Scatter Plot View** shows plots comparing each hyperparameter/metric with each metric. This can help identify correlations. Click and drag to select a region in a specific plot and highlight those sessions across the other plots.
#
# A table row, a parallel coordinates line, and a scatter plot marker can be clicked to see a plot of the metrics as a function of training steps for that session (although in this tutorial only one step is used for each run).
# + [markdown] id="fh3p0DtcdOK1" colab_type="text"
# To further explore the capabilities of the HParams dashboard, download a set of pregenerated logs with more experiments:
# + id="oxrSUAnCeFmQ" colab_type="code" colab={} language="bash"
# wget -q 'https://storage.googleapis.com/download.tensorflow.org/tensorboard/hparams_demo_logs.zip'
# unzip -q hparams_demo_logs.zip -d logs/hparam_demo
# + [markdown] id="__8xQhjqgr3D" colab_type="text"
# View these logs in TensorBoard:
# + id="KBHp6M_zgjp4" colab_type="code" colab={}
# %tensorboard --logdir logs/hparam_demo
# + [markdown] id="Po7rTfQswAMT" colab_type="text"
# <img class="tfo-display-only-on-site" src="images/hparams_parallel_coordinates.png?raw=1"/>
# + [markdown] id="IlDz2oXBgnZ9" colab_type="text"
# You can try out the different views in the HParams dashboard.
#
# For example, by going to the parallel coordinates view and clicking and dragging on the accuracy axis, you can select the runs with the highest accuracy. As these runs pass through 'adam' on the optimizer axis, you can conclude that 'adam' performed better than 'sgd' in these experiments.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Project - Natural Language Processing
# - Can you determine who tweeted this?
# ### Description
# - We will analyze a collection of tweets from a single Twitter account
# - Can we figure out the person behind the account?
# ### Step 1: Import libraries
import pandas as pd
from nltk import word_tokenize, ngrams
from collections import Counter
import nltk
nltk.download('punkt')
# ### Step 2: Import data
# - Use Pandas [read_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) method to read files/tweets.csv
data = pd.read_csv('files/tweets.csv')
data.head()
# ### Step 3: Convert content to a list of content
# - Use list on the column **content**
# - You can also apply [to_list()](https://pandas.pydata.org/docs/reference/api/pandas.Series.to_list.html) on the column
content = list(data['content'])
len(content)
# ### Step 4: Create a corpus
# - Create an empty list called **corpus**
# - Iterate over **content**
# - Extend **corpus** with every word, lowercased, that contains at least one alphabetic character.
# - HINT: To lowercase, call **lower()** on the word.
# - HINT: To check whether any character is alpha, use **any(c.isalpha() for c in word)**
corpus = []
for item in content:
corpus.extend([word.lower() for word in word_tokenize(item) if any(c.isalpha() for c in word)])
# ### Step 5: Check corpus
# - Find the length of the corpus
# - Look at the first 10 words in the corpus
len(corpus)
corpus[:10]
# ### Step 6: Display all 3-grams
# - Use **Counter(ngrams(corpus, 3))** and assign it to a variable
# - List the 10 most common 3-grams
# - HINT: call **most_common(10)** on the result from **Counter(...)**
ngram = Counter(ngrams(corpus, 3))
ngram.most_common(10)
# ### Step 7 (Optional): Pretty print
# - Iterate over the result with a for-loop
# - HINT: Each loop gives a **ngram** and **frequency**
for gram, freq in ngram.most_common(10):
print(f'Frequency: {freq} -> {gram}')
# ### Step 8 (Optional): Try it with 4-grams
ngram = Counter(ngrams(corpus, 4))
ngram.most_common(10)
ngram = Counter(ngrams(corpus, 5))
ngram.most_common(10)
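# ### Step 9 (Optional): Filter stopwords
# - Common function words dominate the most frequent n-grams; removing them can make the n-grams more characteristic of the author
# - A minimal sketch on a toy word list (assumptions: a real run would reuse **corpus** from above and a fuller stopword list such as nltk's):

```python
from collections import Counter

# A tiny hand-rolled stopword set (an assumption; nltk.corpus.stopwords is far richer)
STOPWORDS = {'the', 'a', 'an', 'and', 'or', 'of', 'to', 'in', 'is', 'it'}

def word_ngrams(words, n):
    """Yield n-grams from a list of words (same idea as nltk.ngrams)."""
    return zip(*(words[i:] for i in range(n)))

# Toy word list standing in for the tweet corpus built above
toy_corpus = ['make', 'the', 'most', 'of', 'the', 'day',
              'make', 'the', 'most', 'of', 'the', 'day']
filtered = [w for w in toy_corpus if w not in STOPWORDS]
trigrams = Counter(word_ngrams(filtered, 3))
print(trigrams.most_common(1))  # [(('make', 'most', 'day'), 2)]
```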
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.8 64-bit (''tensorflow'': conda)'
# metadata:
# interpreter:
# hash: 9ec957caba7ae6ccc97a7fb0804bf14cbdb1f70a4904cd45a06dd27fe16a3b19
# name: python3
# ---
# # How To Measure Beauty
#
# ## 1. Golden ratio
# > ### - The width of the mouth is 1.618 times the width of the nose
#
# > ### - The width of the mouth is 1.618 times the gap separating it from the edge of the face
#
# > ### - The nose being a triangle, its sides are 1.618 times its base
#
# ## 2. Facial Keypoint Detection (Dataset)
# > ### - The dataset contains 15 FKPs (paper)
#
# > ### - The dataset contains 68 FKPs (another dataset)
#
# > ### - Each FKP has two spatial coordinates (x, y)
#
#
# # Targeted FKPs
#
# ## MOUTH : 49 - 55
# ## NOSE
# > ## PEAK : 28
# > ## BASE : 32 - 36
#
# ## EDGES
# > ## LEFT : 4
# > ## RIGHT : 14
# # WORKFLOW
#
# ## 1. Data Preprocessing
# > ### - Download FKPs dataset
# > ### - Load and visualize data
# > ### - Data Transformation (Normalization, Rotation, Rescale, RandomCrop)
# > ### - Data Iteration and Batching (training & testing)
#
# > ### ```python [[IMAGE ARRAY], [X, Y]]```
#
# > ### Two possibilities
# > 1. ### Train 68 CNNs, each predicting the (x, y) coordinates of one FKP
# - ```python [[IMAGE ARRAY], [X, Y]]```
# > 2. ### Train a single CNN that predicts all 136 coordinates
# - ```python [[IMAGE ARRAY], [X..., Y...]]```
#
#
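# Under option 2, the single CNN's 136 outputs are just the 68 (x, y) pairs flattened; a quick numpy check of the reshape:

```python
import numpy as np

# Option 2: the network emits 136 numbers per image...
flat_prediction = np.arange(136, dtype=np.float32)

# ...which are the 68 (x, y) keypoints laid out flat
key_pts = flat_prediction.reshape(-1, 2)
print(key_pts.shape)  # (68, 2)
```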
# ## 2. Defining and Training a Naimish CNN to predict FKPs
# > ### - Define model with Keras
# > ### - Define Training Parameters (Epoch, Loss Function, Optimizer)
# > ### - Train model
# > ### - Test model
#
# ## 3. Defining a Face Beauty classifier
# > ### 1. Mean Square
# - #### Compute the quadratic mean of the rule accuracies below
#
# > #### 3.1 The width of the mouth is 1.618 times the width of the nose
# - #### mouthWidth = distance(FKP49, FKP55)
# - #### noseWidth = distance(FKP32, FKP36)
# - #### ratioMouthNose = mouthWidth / noseWidth
# - #### accuracy = ratioMouthNose / 1.618, ideally 1.
#
# > #### 3.2 The width of the mouth is 1.618 times the gap separating it from the edge of the face
# - #### mouthWidth = distance(FKP49, FKP55)
# - #### leftMouthEdgeInterval = distance(FKP4, FKP49)
# - #### rightMouthEdgeInterval = distance(FKP55, FKP14)
# - ### Left Mouth
# - #### ratioLeftMouthEdgeInterval = mouthWidth / leftMouthEdgeInterval
# - #### leftAccuracy = ratioLeftMouthEdgeInterval / 1.618, ideally 1.
# - ### Right Mouth
# - #### ratioRightMouthEdgeInterval = mouthWidth / rightMouthEdgeInterval
# - #### rightAccuracy = ratioRightMouthEdgeInterval / 1.618, ideally 1.
#
# > #### 3.3 The nose being a triangle, its sides are 1.618 times its base
# - #### noseWidth = distance(FKP32, FKP36)
# - #### noseLeft = distance(FKP28, FKP32)
# - #### noseRight = distance(FKP28, FKP36)
# - ### Left Nose
# - #### ratioLeftBaseNose = noseLeft / noseWidth
# - #### leftBaseNoseAccuracy = ratioLeftBaseNose / 1.618, ideally 1.
# - ### Right Nose
# - #### ratioRightBaseNose = noseRight / noseWidth
# - #### rightBaseNoseAccuracy = ratioRightBaseNose / 1.618, ideally 1.
#
# > #### FacialBeautyAccuracy = Sum(RuleAccuracies) / numRules
#
# > ### 2. Train an Unsupervised classifier
# - #### KNN (3 classes)
# - Note: this can be used as a labeling algorithm
#
# > ### 3. Train a Supervised Classifier
# - #### KNN (3 classes) as labeling algorithm
# - #### Fully Connected Neural Network as classifier
#
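# Rule 3.1 above can be sketched directly from keypoint coordinates. The coordinates below are made up for illustration (real FKPs come from the trained model), and `distance` / `rule_accuracy` are hypothetical helpers, not part of the project code:

```python
import math

PHI = 1.618  # golden-ratio target used by the rules above

def distance(p, q):
    """Euclidean distance between two (x, y) keypoints."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rule_accuracy(ratio, target=PHI):
    """Closeness of a measured ratio to the target, in (0, 1]; 1.0 is a perfect match."""
    r = ratio / target
    return min(r, 1.0 / r)

# Hypothetical keypoint coordinates, indexed like the 68-FKP scheme above
fkp = {49: (60, 120), 55: (108, 120), 32: (70, 100), 36: (100, 100)}

mouth_width = distance(fkp[49], fkp[55])  # rule 3.1 numerator
nose_width = distance(fkp[32], fkp[36])   # rule 3.1 denominator
accuracy = rule_accuracy(mouth_width / nose_width)
print(round(accuracy, 3))  # 0.989
```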
# ## 4. Deploying model as an API
#
# # Facial Keypoint Detection
#
# This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with.
#
# Let's take a look at some examples of images and corresponding facial keypoints.
#
# <img src='images/key_pts_example.png' width=100% height=100%/>
#
# Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
#
# <img src='images/landmarks_numbered.jpg' width=60% height=60%/>
#
# ---
# ## Load and Visualize Data
#
# The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.
#
# #### Training and Testing Data
#
# This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.
#
# * 3462 of these images are training images, for you to use as you create a model to predict keypoints.
# * 2308 are test images, which will be used to test the accuracy of your model.
#
# The information about the images and keypoints in this dataset are summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).
#
# ---
# +
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
# +
key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')
key_pts_frame.head(5)
# +
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:]
print(key_pts)
key_pts = key_pts.values
print(type(key_pts))
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts:\n{}'.format(key_pts[:4]))
# -
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
# ## Look at some images
#
# Below is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# +
# Display a few different types of images by changing the index n
# select an image by index in our data frame
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
plt.figure(figsize=(5, 5))
show_keypoints(mpimg.imread(os.path.join('data/training/', image_name)), key_pts)
plt.show()
# -
# ## Dataset class and Transformations
#
# To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
#
# #### Dataset class
#
# ``torch.utils.data.Dataset`` is an abstract class representing a
# dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.
#
#
# Your custom dataset should inherit ``Dataset`` and override the following
# methods:
#
# - ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
# - ``__getitem__`` to support the indexing such that ``dataset[i]`` can
# be used to get the i-th sample of image/keypoint data.
#
# Let's create a dataset class for our face keypoints dataset. We will
# read the CSV file in ``__init__`` but leave the reading of images to
# ``__getitem__``. This is memory efficient because all the images are not
# stored in the memory at once but read as required.
#
# A sample of our dataset will be a dictionary
# ``{'image': image, 'keypoints': key_pts}``. Our dataset will take an
# optional argument ``transform`` so that any required processing can be
# applied on the sample. We will see the usefulness of ``transform`` in the
# next section.
#
# +
from torch.utils.data import Dataset, DataLoader
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
# image matrix
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
        key_pts = self.key_pts_frame.iloc[idx, 1:].values
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
# -
# Now that we've defined this class, let's instantiate the dataset and display some images.
# +
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# +
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
# -
# ## Transforms
#
# Now, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors.
#
# Therefore, we will need to write some pre-processing code.
# Let's create four transforms:
#
# - ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
# - ``Rescale``: to rescale an image to a desired size.
# - ``RandomCrop``: to crop an image randomly.
# - ``ToTensor``: to convert numpy images to torch images.
#
#
# We will write them as callable classes instead of simple functions so
# that the parameters of the transform need not be passed every time the
# transform is called. For this, we just need to implement the ``__call__`` method
# and (if we require parameters to be passed in) the ``__init__`` method.
# We can then use a transform like this:
#
# tx = Transform(params)
# transformed_sample = tx(sample)
#
# Observe below how these transforms are generally applied to both the image and its keypoints.
#
#
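# The callable-class pattern itself is plain Python. A toy transform (the `ScaleKeypoints` name and behavior are made up, unrelated to the dataset classes below) shows how parameters are stored once in ``__init__`` and reused on every call:

```python
class ScaleKeypoints:
    """Minimal callable transform: multiply keypoints by a fixed factor."""
    def __init__(self, factor):
        self.factor = factor  # the parameter is stored once, at construction

    def __call__(self, sample):
        image, key_pts = sample['image'], sample['keypoints']
        scaled = [(x * self.factor, y * self.factor) for x, y in key_pts]
        return {'image': image, 'keypoints': scaled}

tx = ScaleKeypoints(2.0)                # tx = Transform(params)
sample = {'image': None, 'keypoints': [(10, 20), (30, 40)]}
transformed_sample = tx(sample)         # transformed_sample = tx(sample)
print(transformed_sample['keypoints'])  # [(20.0, 40.0), (60.0, 80.0)]
```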
import torch
from torchvision import transforms, utils
# +
# transforms
class Normalize(object):
"""Convert a color image to grayscale and normalize the color range to [0,1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
        # assume mean ~ 100 and std ~ 50, so pts become (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
        # output_size must be an int or a tuple
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size # (h, w)
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
class AddBatchChannel(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# batch image: C X H X W
# image = image.transpose((2, 0, 1))
return {'image': image,
'keypoints': key_pts}
# -
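# The axis bookkeeping in ``ToTensor`` is easy to verify on a toy array (numpy only, no torch required):

```python
import numpy as np

# A fake 2x3 grayscale image: H x W, no channel axis yet
img = np.arange(6, dtype=np.float32).reshape(2, 3)

# Add the trailing channel axis, as ToTensor does for grayscale input
img = img.reshape(img.shape[0], img.shape[1], 1)
print(img.shape)  # (2, 3, 1) -> H x W x C

# Swap to channels-first, the layout PyTorch expects
chw = img.transpose(2, 0, 1)
print(chw.shape)  # (1, 2, 3) -> C x H x W
```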
# ## Transform Compose
class TransformCompose(object):
def __init__(self, transforms=[]):
        # make sure we received a list of transforms
assert isinstance(transforms, list)
self.transforms = transforms
def __call__(self, sample):
"""
Applique la combinaison de transformations
:return: tuple (np.array, (x, y))
"""
        # new sample to be transformed
newSample = sample
        # apply the transforms in order
for transform in self.transforms:
newSample = transform(newSample)
        # add channel
image = newSample['image']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# image: C X H X W
image = image.transpose((2, 0, 1))
newSample['image'] = image
return newSample['image'], newSample['keypoints']
# ## Test out the transforms
#
# Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image to a size larger than the original image (and the original images vary in size!), but, if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
# +
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
# -
# ## Create the transformed dataset
#
# Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
# +
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
# +
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
# -
# ## Data Iteration and Batching
#
# Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:
#
# - Batch the data
# - Shuffle the data
# - Load the data in parallel using ``multiprocessing`` workers.
#
# ``torch.utils.data.DataLoader`` is an iterator which provides all these
# features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!
#
# ---
#
#
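# As a rough illustration of what ``DataLoader`` handles for us, the batching-and-shuffling part can be sketched in plain Python. This is a toy stand-in (names like `iterate_minibatches` are my own, not PyTorch API), and it omits the parallel ``multiprocessing`` loading that the real ``torch.utils.data.DataLoader`` provides:

```python
import random

def iterate_minibatches(dataset, batch_size=4, shuffle=True, seed=0):
    """Yield lists of samples, mimicking the batching/shuffling of a DataLoader."""
    indices = list(range(len(dataset)))
    if shuffle:
        # deterministic shuffle so runs are reproducible
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

# toy dataset: ten fake samples shaped like our transformed samples
samples = [{'image': i, 'keypoints': i * 2} for i in range(10)]
batches = list(iterate_minibatches(samples, batch_size=4))
print([len(b) for b in batches])  # three batches: 4, 4, and 2 samples
```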
BATCH_SIZE = 128
IMG_SHAPE = 96 # training images are rescaled to 150 px and then cropped to 96 x 96 pixels
# ## Dataset without any transformation
# +
# create the transformed training dataset
# data_transform = transforms.Compose([Rescale(150),
# RandomCrop(IMG_SHAPE),
# Normalize(),
# AddBatchChannel()])
data_transform = None
trainingDataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
# create the transformed testing dataset
# data_transform = transforms.Compose([Rescale((IMG_SHAPE, IMG_SHAPE)),
# Normalize(),
# AddBatchChannel()])
data_transform = None
testingDataset = FacialKeypointsDataset(csv_file='data/test_frames_keypoints.csv',
root_dir='data/test/',
transform=data_transform)
# -
# ## Dataset with transformation
# +
# create the transformed training dataset
data_transform = transforms.Compose([Rescale(150),
RandomCrop(IMG_SHAPE),
Normalize(),
AddBatchChannel()])
trainingDataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
# create the transformed testing dataset
data_transform = transforms.Compose([Rescale((IMG_SHAPE, IMG_SHAPE)),
Normalize(),
AddBatchChannel()])
testingDataset = FacialKeypointsDataset(csv_file='data/test_frames_keypoints.csv',
root_dir='data/test/',
transform=data_transform)
# +
sample = trainingDataset[0]
print(f"image shape : {sample['image'].shape} and FKPs shape : {sample['keypoints'].shape}")
# +
sample = testingDataset[0]
print(f"image shape : {sample['image'].shape} and FKPs shape : {sample['keypoints'].shape}")
# -
# # Train data and Validation data
# > ### - Train data : 80% of the training dataset
# > ### - Validation data : 20% of the training dataset
# ## Transform dataset to numpy inputs and targets
def transformDataset(dataset):
inputs = []
targets = []
for sample in dataset:
image = sample['image']
keypoints = sample['keypoints']
image = np.array(image)
keypoints = np.array(keypoints)
inputs.append(image)
targets.append(keypoints)
inputs = np.array(inputs)
targets = np.array(targets)
return inputs, targets
# ## Transformation example
# split the data into training and validation sets
from sklearn.model_selection import train_test_split
import numpy as np
VALIDATION_SPLIT = 0.20
# ## Fake data for testing
# +
nb = 3000
inputs = np.random.rand(nb, 96, 96, 1)
targets = np.random.rand(nb,68,2)
print(f"inputs.shape : {inputs.shape}\ttargets.shape : {targets.shape}")
# -
# ## Dataset Transformation
# +
inputs, targets = transformDataset(trainingDataset)
print(f"inputs.shape : {inputs.shape}\ttargets.shape : {targets.shape}")
# -
inputs[0].shape
inputs[1].shape
# ## Split dataset into training and validation
# +
trainDatas, validationDatas, trainLabels, validationLabels = train_test_split(
inputs,targets,
test_size = VALIDATION_SPLIT, # 0.2
shuffle = True
)
# Training data
trainDatas = np.array(trainDatas)
trainLabels = np.array(trainLabels)
# Validation data
validationDatas = np.array(validationDatas)
validationLabels = np.array(validationLabels)
# -
trainDatas.shape
validationDatas.shape
# ## Testing data transformation
# +
# Testing data
testDatas, testLabels = transformDataset(testingDataset)
testDatas = np.array(testDatas)
testLabels = np.array(testLabels)
print(f"inputs.shape : {testDatas.shape}\ttargets.shape : {testLabels.shape}")
# -
testDatas.shape
# ## Custom generator for memory optimization
# +
class DataGenerator(object):
# class variables
TRAINING = "training"
TESTING = "testing"
VALIDATION = "validation"
def __init__(self, targetLabel=0, targetData='training', batchSize=20):
# targetLabel index must be int
assert isinstance(targetLabel, int)
        # targetData must be str
assert isinstance(targetData, str) and targetData in [self.TRAINING, self.TESTING, self.VALIDATION]
# batchSize index must be int
assert isinstance(batchSize, int)
self.targetLabel = targetLabel
self.targetData = targetData
self.batchSize = batchSize
# counter
self.lastbatch = 0
# compute size
self.size = self.shape()[0]
def __len__(self):
return self.size
def __iter__(self):
return self
def __next__(self):
return self.next()
def __repr__(self):
return (
f"DataGenerator\n\n"
f"targetData : {self.targetData}\n"
f"targetLabel : {self.targetLabel}\n"
f"size : {self.size}\n"
f"batchSize : {self.batchSize}\n"
)
def shape(self):
if self.targetData == self.TRAINING:
return trainDatas.shape
elif self.targetData == self.TESTING:
return testDatas.shape
elif self.targetData == self.VALIDATION:
return validationDatas.shape
def makeBatch(self, datas, targets, selectedIndex):
"""
Make a batch
"""
batch = []
for index in range(self.lastbatch, selectedIndex):
# index out of range
if index >= self.size:
break
# sample, image and target FPK
sample = (datas[index], targets[index][self.targetLabel])
batch.append(sample)
return batch
def next(self):
# index out of size
if self.lastbatch >= self.size:
self.lastbatch = 0
raise StopIteration()
# batch size
batch = list()
# last index
lastindex = self.lastbatch + self.batchSize
if self.targetData == self.TRAINING:
batch = self.makeBatch(trainDatas, trainLabels, lastindex)
elif self.targetData == self.TESTING:
batch = self.makeBatch(testDatas, testLabels, lastindex)
elif self.targetData == self.VALIDATION:
batch = self.makeBatch(validationDatas, validationLabels, lastindex)
# next batch
self.lastbatch = lastindex
if len(batch) == 0:
self.lastbatch = 0
raise StopIteration()
return batch
# -
trainingGenTest = DataGenerator(targetData='training', targetLabel=0, batchSize=20)
trainingGenTest
# + tags=[]
for data in trainingGenTest:
print(f"Data Size : {len(data)}\t- Data Type : {type(data)}\n")
sample = data[0]
print(f"Image size : {sample[0].shape}\t- Target Size : {sample[1].shape}\n")
break
# -
# # Defining and Training a Convolutional Neural Network (CNN) to Predict Facial Keypoints
# +
try:
    # Use the %tensorflow_version magic if in colab.
    # %tensorflow_version 2.x
    pass
except Exception:
    pass
import tensorflow as tf
# +
naimishModel = tf.keras.models.Sequential([
# First layer
# Convolution2d1 to Convolution2d4 do not use zero
# padding, have their weights initialized with random numbers drawn from uniform distribution
tf.keras.layers.Conv2D(32, (4,4), input_shape=(96, 96, 1), padding='valid', activation='elu',
kernel_initializer=tf.keras.initializers.RandomUniform(minval=-1., maxval=1.)
),
# Maxpooling2d1 to Maxpooling2d4 use a pool shape of (2, 2), with non-overlapping strides and no zero padding
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid"),
tf.keras.layers.Dropout(0.1),
# second layer
tf.keras.layers.Conv2D(64, (3,3), padding='valid', activation='elu',
kernel_initializer=tf.keras.initializers.RandomUniform(minval=-1., maxval=1.)
),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid"),
tf.keras.layers.Dropout(0.2),
# Third layer
tf.keras.layers.Conv2D(128, (2,2), padding='valid', activation='elu',
kernel_initializer=tf.keras.initializers.RandomUniform(minval=-1., maxval=1.)
),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid"),
tf.keras.layers.Dropout(0.3),
# Fourth layer
tf.keras.layers.Conv2D(256, (1,1), padding='valid', activation='elu',
kernel_initializer=tf.keras.initializers.RandomUniform(minval=-1., maxval=1.)
),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid"),
tf.keras.layers.Dropout(0.4),
# Flatten
tf.keras.layers.Flatten(),
# Dense 1
tf.keras.layers.Dense(1000, activation='elu'),
tf.keras.layers.Dropout(0.5),
# Dense 2
tf.keras.layers.Dense(1000, activation='elu'),
tf.keras.layers.Dropout(0.6),
# Dense 1
tf.keras.layers.Dense(2, activation='elu'),
])
# -
# ## Learning Configuration
# +
optimizer = tf.keras.optimizers.Adam(
learning_rate=0.001,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-08,
amsgrad=False,
name="Adam"
)
naimishModel.compile(
optimizer=optimizer,
    loss=tf.keras.losses.mean_squared_error,
metrics=['accuracy']
)
# -
# ## Model Summary
naimishModel.summary()
# ## Test clone keras model
# +
modelClone = tf.keras.models.clone_model(naimishModel)
print(f"\nmodelClone == naimishModel : {modelClone == naimishModel}\n")
print(f"modelClone is naimishModel : {modelClone is naimishModel}\n")
print(f"\nid(modelClone) : {id(modelClone)}\n\nid(naimishModel) : {id(naimishModel)}")
# -
modelClone.summary()
# # Training
#
# > ### - Train data : 80% train dataset
# > ### - Validation data : 20% train dataset
# > ### - Batch Size : 128
# > ### - Epochs : 300
# > ### - Early Stopping Callback (ESC) : stops training after 30 contiguous epochs without improvement in validation loss (patience level of 30).
# > ### - Model Checkpoint Callback (MCC) : stores the weights of the model with the lowest validation loss.
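# The early-stopping rule above reduces to a simple patience counter. A minimal pure-Python sketch of that logic (illustrative only — the code below uses Keras' built-in `EarlyStopping` callback; `early_stop_epoch` is a made-up helper name):

```python
def early_stop_epoch(val_losses, patience=30):
    """Return the epoch index at which training would stop, or None if it runs out."""
    best = float('inf')
    wait = 0  # contiguous epochs without improvement
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# with patience=2, training stops after two epochs without improvement
print(early_stop_epoch([1.0, 0.8, 0.9, 0.85, 0.95], patience=2))  # -> 3
```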
# ## Callback functions
BATCH_SIZE = 128
EPOCHS = 300
PATIENCE = 30
# +
import os
from datetime import datetime
class CustomCallback(tf.keras.callbacks.Callback):
def __init__(self, cwd, targetFolder):
assert os.path.exists(cwd)
assert isinstance(targetFolder, str)
assert os.path.exists(targetFolder)
self.targetFolder = targetFolder
# self.modelRef = modelRef
self.cwd = cwd
def on_epoch_end(self, epoch, logs=None):
keys = list(logs.keys())
print("\nEnd epoch {} of training; got log keys: {}".format(epoch, keys))
        # metrics
        valLoss = logs['val_loss']
        # the accuracy key name differs across TF versions; fall back gracefully
        valAcc = logs.get('val_acc', logs.get('val_accuracy', 0.0))
try:
# swap cwd
os.chdir(self.targetFolder)
# save model
modelFile = f"model-{datetime.now().timestamp()}-{epoch}-{valLoss:.3f}-{valAcc:.3f}.h5"
print("\nSaving model : '{}'\n".format(modelFile))
self.model.save(modelFile)
except Exception as error:
print(error)
finally:
# swap cwd
os.chdir(self.cwd)
# +
import os
modelClone = tf.keras.models.clone_model(naimishModel)
cwd = os.getcwd()
logDir = os.path.join(cwd, 'modelTest', 'logs')
modelTest = os.path.join(cwd, 'modelTest')
if not os.path.isdir(logDir):
os.makedirs(logDir)
modelFile = 'model.{epoch:02d}-{val_loss:.2f}.h5'
my_callbacks = [
tf.keras.callbacks.EarlyStopping(patience=PATIENCE),
# tf.keras.callbacks.ModelCheckpoint(filepath=modelFile),
tf.keras.callbacks.TensorBoard(log_dir=logDir),
CustomCallback(cwd=cwd, targetFolder=modelTest)
]
# +
# history = modelClone.fit(
# trainDatas,
# trainLabels[:, 0, :],
# epochs=1,
# validation_data = (validationDatas, validationLabels[:, 0, :]),
# callbacks=my_callbacks,
# verbose=1
# )
# -
# ## Always compile a model after cloning
# ## You must compile your model before training/testing. Use `model.compile(optimizer, loss)`.
# +
modelClone.compile(
optimizer=optimizer,
    loss=tf.keras.losses.mean_squared_error,
metrics=['accuracy']
)
history = modelClone.fit(
trainDatas,
trainLabels[:, 0, :],
epochs=1,
validation_data = (validationDatas, validationLabels[:, 0, :]),
callbacks=my_callbacks,
verbose=1
)
# -
# # Global and parallel training
import os
import shutil
# +
def makeFolder(targetPath):
if not os.path.isdir(targetPath):
os.makedirs(targetPath)
def removeFolder(targetPath):
if os.path.isdir(targetPath):
shutil.rmtree(targetPath, ignore_errors=True)
# -
# ## Save history object Helper
# +
import pickle
class HistoryManager(object):
def __init__(self, targetFile, data={}):
assert isinstance(targetFile, str)
self.filename = f"{targetFile}.data"
self.data = data
def save(self, targetObject=None):
if targetObject != None:
self.data = targetObject
with open(self.filename, 'wb') as handle:
pickle.dump(self.data, handle, protocol=pickle.HIGHEST_PROTOCOL)
def load(self):
with open(self.filename, 'rb') as handle:
data = pickle.load(handle)
self.data = data
return data
# +
a = {"a": "b"}
am = HistoryManager("Jonathan", a)
# -
am.save()
b = am.load()
b
# ## Train model Helper
trainLabels.shape
trainLabels[:, 0, :].shape
# +
from threading import RLock
def trainModel(model, fkp, my_callbacks, epochs, fkpFolder):
# lock = RLock()
# with lock:
# fit model
history = model.fit(
trainDatas,
trainLabels[:, fkp, :],
epochs=epochs,
validation_data = (validationDatas, validationLabels[:, fkp, :]),
callbacks=my_callbacks,
verbose=1
)
# save history
historyFile = os.path.join(fkpFolder, 'history')
history = HistoryManager(historyFile, history)
history.save()
# -
# ## Make training Thread
# +
import os
from threading import Thread
# current work directory
cwd = os.getcwd()
cwd
# -
# +
import os
from threading import Thread
# current work directory
cwd = os.getcwd()
# models folder
modelsFolder = os.path.join(cwd, 'models')
# remove folder
removeFolder(modelsFolder)
makeFolder(modelsFolder)
# Thread list
trainingThreads = []
# targetFKPs : indices start at 0 (FKP numbering starts at 1)
targetFKPs = np.array([49, 55, 28, 32, 36, 4, 14]) - 1
# iter FKPS
for fkp in targetFKPs:
# clone model
modelClone = tf.keras.models.clone_model(naimishModel)
# compile clone model
modelClone.compile(
optimizer=optimizer,
        loss=tf.keras.losses.mean_squared_error,
metrics=['accuracy']
)
# define callbacks
fkpFolder = os.path.join(modelsFolder, str(f"FKP{fkp}"))
makeFolder(fkpFolder)
logDir = os.path.join(fkpFolder, 'logs')
makeFolder(logDir)
modelFile = 'model.{epoch:02d}-{val_loss:.2f}.h5'
my_callbacks = [
tf.keras.callbacks.EarlyStopping(patience=PATIENCE),
# tf.keras.callbacks.ModelCheckpoint(filepath=modelFile),
tf.keras.callbacks.TensorBoard(log_dir=logDir),
CustomCallback(cwd=cwd, targetFolder=fkpFolder)
]
# create training thread
# def trainModel(model, fkp, my_callbacks, epochs, fkpFolder):
# trainThread = Thread(
# target=trainModel,
# args=(modelClone, fkp, my_callbacks, 1, fkpFolder),
# name=f"TrainingTread#{fkp}"
# )
# trainModel(modelClone, fkp, my_callbacks, 1, fkpFolder)
history = modelClone.fit(
trainDatas,
trainLabels[:, fkp, :],
validation_data = (validationDatas, validationLabels[:, fkp, :]),
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks=my_callbacks,
verbose=1
)
# save history
historyFile = os.path.join(fkpFolder, 'history')
history = HistoryManager(historyFile, history.history)
history.save()
# add thread to training tread list
# trainingThreads.append(trainThread)
# -
# ## Push results and shut down the VM
# ! git add *
# ! git commit -m "Training result"
# ! git push origin training
# ! shutdown -s -t 100
# ## Result Visualization
# +
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(acc))
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# -
# ## Export model
# +
import time
import os
savedModelPath = os.path.join(os.getcwd(), "saved_models")
if not os.path.exists(savedModelPath):
    os.makedirs(savedModelPath)
# freeze and export the most recently trained model
modelClone.trainable = False
modelClone.save(savedModelPath)
# -
# ## Load saved model
# +
try:
    # Use the %tensorflow_version magic if in colab.
    # %tensorflow_version 2.x
    pass
except Exception:
    pass
import tensorflow as tf
# -
# ## Load Model Helper
# +
import os
cwd = os.getcwd()
def loadFKPModel(fkp, name):
targetFile = os.path.join(cwd, "models", f"FKP{fkp}", name)
# assert that this model file exists
assert os.path.exists(targetFile)
# load model
loadedModel = tf.keras.models.load_model(targetFile)
return loadedModel
# +
saved_model_path = r"D:\FacialBeauty-AI\models\FKP0\model.01-1.80.h5"
loadedModel = loadFKPModel(fkp=0, name="model.01-3.24.h5")
loadedModel.summary()
# -
# ## Load History Helper
# +
import os
cwd = os.getcwd()
def loadModelHistory(fkp):
targetFile = os.path.join(cwd, "models", f"FKP{fkp}", "history")
history = HistoryManager(targetFile=targetFile)
return history.data
# -
history = loadModelHistory(fkp=0)
history
| FacialBeauty-AI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Notebook for homology searches for DNA methylation machinery
#
# This needs Java 11 on the path and should be run, for example, in the pycoMeth environment only
import os
from Bio import SeqIO
import pandas as pd
import numpy as np
import re
notebook_path = os.path.abspath(".")
IN_DIR = os.path.abspath('../../analyses/methylation_machinery/')
OUT_DIR = os.path.abspath('../../analyses/methylation_machinery/')
GENOME_DIR = os.path.abspath('../../data/genomic_resources/')
Pgt_protein_fn = os.path.abspath('../../data/genomic_resources/Puccinia_graminis_tritici_21-0.proteins.fa')
FivemC_seeds_fn = os.path.abspath('../../analyses/methylation_machinery/5mC_methylation_query.fasta')
n_threads = 20
blast_outfmt6_headers = "qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore".split(' ')
### a function that takes the interpro TSV and returns a dict of Interpro accessions per protein ID
def interpro_accession_dict(fn):
header = ['P-ID', 'md5', 'len', 'analysis', 'accession', 'description', 'start', 'stop', 'score', 'status' , 'date',
'Interpro_accession', 'Interpro_description']
df = pd.read_csv(fn, sep='\t', header =None, names= header).dropna()
return dict(zip(df.groupby('P-ID')['Interpro_accession'].unique().index, df.groupby('P-ID')['Interpro_accession'].unique()))
###write a function that takes the interpro TSV and returns a dict of domains for a specific search engine
def interpro_analysis_dict(fn, analysis):
header = ['P-ID', 'md5', 'len', 'analysis', 'accession', 'description', 'start', 'stop', 'score', 'status' , 'date',
'Interpro_accession', 'Interpro_description']
df = pd.read_csv(fn, sep='\t', header =None, names= header).dropna()
grouped = df[df.analysis == analysis].groupby('P-ID')
return dict(zip(grouped['analysis'].unique().index, grouped['accession'].unique()))
# ### Here the blast analysis starts
os.chdir(OUT_DIR)
# !makeblastdb -dbtype prot -in {Pgt_protein_fn}
# !blastp -help
#define file names
FivemC_outfmt_6_fn = 'Puccinia_graminis_tritici_21-0.proteins.5mC_methylation_query.blastp.outfmt6'
FivemC_outfmt_6_fn = os.path.join(OUT_DIR, FivemC_outfmt_6_fn)
#run blast
# !blastp -num_threads 20 -outfmt 6 -query {FivemC_seeds_fn} -db {Pgt_protein_fn} > {FivemC_outfmt_6_fn}
# !head {FivemC_outfmt_6_fn}
# ### Downstream filtering of blast results
FivemC_blast_df = pd.read_csv(FivemC_outfmt_6_fn, header = None, names=blast_outfmt6_headers, sep='\t' )
FivemC_blast_df.head()
#filtering of blast_df
FivemC_stringent_blast_df = FivemC_blast_df[FivemC_blast_df.evalue < 1e-10].copy()
FivemC_stringent_blast_df.groupby('qseqid')['sseqid'].count()
FivemC_stringent_blast_df.sseqid.unique()
FivemC_seeds_ids = []
for seq in SeqIO.parse(FivemC_seeds_fn, 'fasta'):
FivemC_seeds_ids.append(seq.id)
not_present = set(FivemC_seeds_ids) - set(FivemC_stringent_blast_df.qseqid.unique())
not_present
set(FivemC_seeds_ids) - set(FivemC_blast_df[FivemC_blast_df.evalue < 1e-2].qseqid.unique())
##pull out fasta sequence of all the hits
e_value = 0.01
FivemC_Pgt_protein_hit_fn = 'Puccinia_graminis_tritici_21-0.proteins.5mC_methylation_query.blastp-%s.fasta' % e_value
FivemC_Pgt_protein_hit_fn = os.path.join(OUT_DIR, FivemC_Pgt_protein_hit_fn)
blast_df = FivemC_blast_df
###get all the hits once and subset the blast with the e-value selected
hit_ids = blast_df[blast_df.evalue < e_value].sseqid.unique()
hit_list = []
sub_blast_df = blast_df[blast_df.evalue < e_value].copy()
for seq in SeqIO.parse(Pgt_protein_fn, 'fasta'):
if seq.id in hit_ids:
print(seq.id)
hit_list.append(seq)
SeqIO.write(hit_list, FivemC_Pgt_protein_hit_fn, 'fasta')
sub_blast_df
# ### Pull in haplotype information
pgt_gff3_fn = os.path.join('../../data/genomic_resources/Puccinia_graminis_tritici_21-0.gff3')
with open(pgt_gff3_fn, 'r') as fh:
haplotype_dict = {}
for line in fh:
line = line.rstrip()
if any(s in line for s in hit_ids):
for hit in hit_ids:
if hit in line:
haplotype_dict[hit] = line.split('\t')[0][-1]
len(haplotype_dict.values()) == len(hit_ids)
sub_blast_df['shaplotype'] = sub_blast_df.sseqid.map(haplotype_dict)
#get the locus id for loci with multiple transcripts
sub_blast_df['sseqid_locus'] = [x.split('-')[0] for x in sub_blast_df.sseqid]
#only keep the transcript with the best hit
sub_blast_df.drop_duplicates(['qseqid', 'sseqid_locus'], keep='first', inplace = True)
# ### Do Interpro scan on command line
interpro5 = '/home/jamila/anaconda3/downloads/interproscan-5.42-78.0/interproscan.sh'
TMP_DIR = os.path.join(OUT_DIR, 'tmp')
if not os.path.exists(TMP_DIR):
os.mkdir(TMP_DIR)
Pgt_protein_hit_intrpro_fn = os.path.join(TMP_DIR, os.path.basename(FivemC_Pgt_protein_hit_fn).replace('.fasta', '.interpro5.tsv'))
FivemC_seeds_intrpro_fn = os.path.join(TMP_DIR, os.path.basename(FivemC_seeds_fn).replace('.fasta', '.interpro5.tsv'))
# Run interpro on both set of protein files
# !head {FivemC_Pgt_protein_hit_fn}
# !bash {interpro5} -cpu 4 -i {FivemC_Pgt_protein_hit_fn} -f tsv -iprlookup -o {Pgt_protein_hit_intrpro_fn}
# !bash {interpro5} -cpu 4 -i {FivemC_seeds_fn} -f tsv -iprlookup -o {FivemC_seeds_intrpro_fn}
#pull in interpro results and add them to the dataframe
sub_blast_df['q_pfam'] = sub_blast_df.qseqid.map(interpro_analysis_dict(FivemC_seeds_intrpro_fn, 'Pfam'))
sub_blast_df['q_interpro'] = sub_blast_df.qseqid.map(interpro_accession_dict(FivemC_seeds_intrpro_fn))
sub_blast_df['s_pfam'] = sub_blast_df.sseqid.map(interpro_analysis_dict(Pgt_protein_hit_intrpro_fn, 'Pfam'))
sub_blast_df['s_interpro'] = sub_blast_df.sseqid.map(interpro_accession_dict(Pgt_protein_hit_intrpro_fn))
#tidy up the dataframe for proteins without interpro/pfam domains, because pandas is weird sometimes.
for cln in ['q_pfam', 'q_interpro', 's_pfam','s_interpro']:
if sub_blast_df[cln].isna().sum():
sub_blast_df.loc[sub_blast_df[sub_blast_df[cln].isna()].index, cln] = [ [[]] * sub_blast_df[cln].isna().sum() ]
#calculate the fraction of overlapping interpro/pfam domains between query sequences and hits
sub_blast_df['pfam_int'] = sub_blast_df.apply(lambda row: set(row['q_pfam']).intersection(set(row['s_pfam'])) , axis=1)
sub_blast_df['pfam_int_frac'] = sub_blast_df['pfam_int'].apply(lambda x: len(x)) / sub_blast_df['q_pfam'].apply(lambda x: len(x))
sub_blast_df['interpro_int'] = sub_blast_df.apply(lambda row: set(row['q_interpro']).intersection(set(row['s_interpro'])) , axis=1)
sub_blast_df['interpro_int_frac'] = sub_blast_df['interpro_int'].apply(lambda x: len(x)) / sub_blast_df['q_interpro'].apply(lambda x: len(x))
sub_blast_df.iloc[:,[0,1,10, 17, 18,19]].head(30)
#filter the dataframe to have only hits that have the best possible interpro domains fractions
pfam_filt_df = sub_blast_df[sub_blast_df.groupby('qseqid')['interpro_int_frac'].transform(max) == sub_blast_df['interpro_int_frac']]
##look at how many hits per query sequence are still left
pfam_filt_df.groupby('qseqid')['sseqid'].count()
best_sseq_df = pfam_filt_df[pfam_filt_df.groupby('sseqid')['interpro_int_frac'].transform(max) == pfam_filt_df['interpro_int_frac']]
pgt_match_list = []
DNA_seed_list = []
haplotype_list = []
match_type_list = []
for seed_gene, pgt_gene in zip(best_sseq_df.qseqid, best_sseq_df.sseqid):
if not pgt_gene.endswith('-T2'):
DNA_seed_list.append(seed_gene)
pgt_match_list.append(pgt_gene)
match_type_list.append('blast')
pgt_match_series = pd.Series(pgt_match_list, name="Pgt_match")
DNA_seed_series = pd.Series(DNA_seed_list, name='Seed_ID')
haplotype_series = pd.Series(haplotype_list, name='haplotype')
match_type_series = pd.Series(match_type_list, name='Match_type')
out_df = pd.concat([DNA_seed_series, pgt_match_series, haplotype_series, match_type_series], axis =1)
out_fn = os.path.join(OUT_DIR, '%s_orthologs.Pgt21-0.tsv' %os.path.basename(FivemC_seeds_fn).replace('.fasta', '') )
out_df.to_csv(out_fn, sep='\t', index=None)
# !head {out_fn}
| homology_searches/DNA_5mC_methylation_machinery_homology_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ANN to Predict a Single Emotion
# Using modal representation, can we improve on Santamaria-Granados, Luz, et al. "Using deep convolutional neural network for emotion detection on a physiological signals dataset (AMIGOS)." IEEE Access 7 (2018): 57-67.
#
# Another step forward from fingerprints
# Setup: Javascript and ipynb stuff
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# + language="javascript"
# utils.load_extension('collapsible_headings/main')
# utils.load_extension('hide_input/main')
# utils.load_extension('autosavetime/main')
# utils.load_extension('execute_time/ExecuteTime')
# utils.load_extension('code_prettify/code_prettify')
# utils.load_extension('scroll_down/main')
# utils.load_extension('jupyter-js-widgets/extension')
# -
# ## Load Functions
from fastai.vision import *
import os
import numpy as np
import pandas as pd
import pickle
import warnings; warnings.simplefilter('ignore')
path=pathlib.Path('/media/tris/tris_files/EEG_datasets/DMD')
path
# ## Load Original Data Labels
path = '/media/tris/tris_files/EEG_datasets/DEAP_data_preprocessed_python/s01.dat'
df_read = pickle.load(open(path, 'rb'), encoding='latin1')
labels=df_read['labels']
df_tmp = pd.DataFrame(labels, columns=['valence','arousal','dominance','liking'])
df=df_tmp
# subjects 2-32 (s01 was loaded above); file names are zero-padded to two digits
for n in range(2, 33):
    path = '/media/tris/tris_files/EEG_datasets/DEAP_data_preprocessed_python/s%02d.dat' % n
    df_read = pickle.load(open(path, 'rb'), encoding='latin1')
    labels = df_read['labels']
    df_tmp = pd.DataFrame(labels, columns=['valence', 'arousal', 'dominance', 'liking'])
    df = pd.concat([df, df_tmp], ignore_index=True)
df=df.div(9) #normalize
df.head()
# ## Set filenames and labels
filenames=[]
subject_labels=[]
# build file names S{subject}T{trial} and parallel subject labels
for n in range(0, 32):
    for i in range(0, 40):
        filenames.append('S' + str(n + 1) + 'T' + str(i + 1))
        subject_labels.append(n + 1)
df['file_name']=filenames
df['subject_label']=subject_labels
df=df[['file_name','subject_label','valence','arousal']]
df.tail()
# +
# for i in range(1,33):
# for ii in range(1,41):
# os.rename(r'/media/tris/tris_files/github/OoMA-omniscient/data/processed/heatmaps_ind/heatmaps/T'+str(ii)+'S'+str(i)+".png",r'/media/tris/tris_files/github/OoMA-omniscient/data/processed/heatmaps_ind/heatmaps/S'+str(i)+'T'+str(ii)+".png")
# +
# from PIL import Image
# for i in range (0,1280):
# im = Image.open(r"/media/tris/tris_files/github/OoMA-omniscient/data/processed/heatmaps_ind/heatmaps/"+str(df.file_name[i])+".png")
# im1=im.crop((0,0,1450,1030))
# im1.save("/media/tris/tris_files/github/OoMA-omniscient/data/processed/heatmaps_ind/heatmaps_re/"+str(df.file_name[i])+".png")
# +
# df.loc[(df['valence'] >= 0.5) & (df['arousal'] >= 0.5), 'emotion_quad'] = 0 #HVHA
# df.loc[(df['valence'] <= 0.5) & (df['arousal'] >= 0.5), 'emotion_quad'] = 1 #LVHA
# df.loc[(df['valence'] <= 0.5) & (df['arousal'] <= 0.5), 'emotion_quad'] = 2 #LVLA
# df.loc[(df['valence'] >= 0.5) & (df['arousal'] <= 0.5), 'emotion_quad'] = 3 #HVLA
# df.head()
# -
df.loc[(df['valence'] >= 0.5) & (df['arousal'] >= 0.5), 'emotion_quad'] = 'HVHA' #HVHA
df.loc[(df['valence'] <= 0.5) & (df['arousal'] >= 0.5), 'emotion_quad'] = 'LVHA' #LVHA
df.loc[(df['valence'] <= 0.5) & (df['arousal'] <= 0.5), 'emotion_quad'] = 'LVLA' #LVLA
df.loc[(df['valence'] >= 0.5) & (df['arousal'] <= 0.5), 'emotion_quad'] = 'HVLA' #HVLA
df.head()
# 
path=pathlib.Path('/media/tris/tris_files/EEG_datasets/robots')
path
df.to_csv(path/'labels_proc_quads.csv', index=False)
df = pd.read_csv(path/'labels_proc_quads.csv') #load labels
df.tail()
df.hist()
# ## Data loader
src = (ImageList.from_csv(path, 'labels_proc_quads.csv', folder='heatmaps_15', suffix='.png')
.split_by_rand_pct(0.2)
.label_from_df(cols=['emotion_quad'],label_cls=CategoryList))
tfms=get_transforms()
tfms[0][6]
tf_final=[[tfms[0][0],tfms[0][4],tfms[0][5],tfms[0][6]],None]
data = (src.transform(tf_final, size=128)
.databunch(bs=4).normalize())
data.show_batch(rows=4, figsize=(12,9))
# ## Setup Network
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.summary()
learn.lr_find()
learn.recorder.plot()
# ## Train last layers
lr = 1e-4
learn.fit_one_cycle(10, slice(lr))
learn.recorder.plot_losses()
learn.show_results()
# ## Train the whole network
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
lr = 9e-5
learn.fit_one_cycle(5, slice(lr))
# ## Interpret and Results
learn.show_results()
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
interp.plot_confusion_matrix(figsize=(10,10))
interp.plot_top_losses(4)
data = (src.transform(tf_final, size=512)
.databunch(bs=4).normalize())
data.show_batch(rows=4, figsize=(12,9))
learn.data=data
learn.freeze()
learn.lr_find()
learn.recorder.plot()
lr = 3e-3
learn.fit_one_cycle(5, slice(lr))
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
lr = 3e-3
learn.fit_one_cycle(5, slice(lr))
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
interp.plot_confusion_matrix(figsize=(10,10))
# ## Not very good. So quick to pick label 0. Hmmm. I think we need some individualization here. Tabular?
idx=5
x,y = data.valid_ds[idx]
x.show()
m = learn.model.eval();
xb,_ = data.one_item(x)
xb_im = Image(data.denorm(xb)[0])
xb = xb.cuda()
from fastai.callbacks.hooks import *
def hooked_backward(cat=y):
with hook_output(m[0]) as hook_a:
with hook_output(m[0], grad=True) as hook_g:
preds = m(xb)
preds[0,int(cat)].backward()
return hook_a,hook_g
hook_a,hook_g = hooked_backward()
acts = hook_a.stored[0].cpu()
acts.shape
avg_acts = acts.mean(0)
avg_acts.shape
def show_heatmap(hm,filename):
_,ax = plt.subplots(figsize=(20,10))
xb_im.show(ax)
    ax.imshow(hm, alpha=0.7, extent=(0,200,200,0), # this is weird with resize...
interpolation='bilinear', cmap='magma');
plt.savefig(filename)
# +
# show_heatmap(avg_acts,'test.png')
# -
# for idx in range(0,200):
# x,y = src.valid_ds[idx]
# xb,_ = src.one_item(x)
# xb_im = Image(src.denorm(xb)[0])
# xb = xb.cuda()
# hook_a,hook_g = hooked_backward()
# acts = hook_a.stored[0].cpu()
# avg_acts = acts.mean(0)
# filename='heatmap_'+str(idx)+'.png'
# show_heatmap(avg_acts,filename)
| src/models/.ipynb_checkpoints/2020-09-11_DEAP-OMA-quadrants_02-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# -*- coding: utf-8 -*-
import pandas as pd
import numpy as np
import pdb
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# -
df_coat_a = pd.read_csv("mf_coat_alpha.csv", index_col=0)
print(df_coat_a)
df_coat_g = pd.read_csv("mf_coat_gamma.csv", index_col=0)
print(df_coat_g)
df_yahoo_a = pd.read_csv("mf_yahoo_alpha.csv", index_col=0)
print(df_yahoo_a)
df_yahoo_g = pd.read_csv("mf_yahoo_gamma.csv", index_col=0)
print(df_yahoo_g)
# +
def organize_logs(df):
cols = df.columns.values.tolist()
val_list = []
param_list = []
for col in cols:
vals = df[col]
val_list.extend(vals.values.tolist())
param_list.extend(len(vals)*[col])
return val_list, param_list
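# The `organize_logs` helper above flattens a wide DataFrame (one column per hyperparameter value) into two parallel lists for plotting. As a sketch, the same wide-to-long transform can be done with pandas' built-in `melt`; `df_demo` below is a made-up stand-in for the logged AUC tables.

```python
import pandas as pd

# a tiny made-up stand-in for one of the AUC log files
df_demo = pd.DataFrame({"0.1": [0.70, 0.71], "0.5": [0.72, 0.73]})

# melt stacks the columns one after another, producing the long
# format that seaborn's lineplot expects
long_df = df_demo.melt(var_name="alpha", value_name="AUC")
print(long_df)
```

# The column-by-column stacking order matches what `organize_logs` produces, so either form can feed the seaborn plots below.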
font1 = {'family' : 'Times New Roman',
'weight' : 'normal',
'size' : 18,
}
font2 = {'family' : 'Times New Roman',
'weight' : 'normal',
'size' : 14,
}
# -
print("[COAT] Gamma=1e-3 Alpha in [2,1,0.5,0.1]")
auc_list, param_list = organize_logs(df_coat_a)
plot_data = pd.DataFrame({"AUC":auc_list, "alpha": param_list})
ax = sns.lineplot(x="alpha",y="AUC",data=plot_data,ci=95,err_style="band",n_boot=100, markers=True)
# plt.ylim(0.66,0.73)
plt.rc("xtick",labelsize=14)
plt.rc("ytick",labelsize=14)
ax.set_xlabel(r"$\alpha$",font1)
ax.set_ylabel("AUC",font1)
plt.savefig("coat_alpha.pdf",bbox_inches="tight")
plt.show()
print("[YAHOO] Gamma=1e-3 Alpha in [2,1,0.5,0.1]")
auc_list, param_list = organize_logs(df_yahoo_a)
plot_data = pd.DataFrame({"AUC":auc_list, "alpha": param_list})
ax = sns.lineplot(x="alpha",y="AUC",data=plot_data,ci=95,err_style="band",n_boot=100, markers=True)
#plt.ylim(0.7,0.72)
plt.rc("xtick",labelsize=14)
plt.rc("ytick",labelsize=14)
ax.set_xlabel(r"$\alpha$",font1)
ax.set_ylabel("AUC",font1)
plt.savefig("yahoo_alpha.pdf",bbox_inches="tight")
plt.show()
print("[COAT] Alpha=1e-1 Gamma in [1,1e-1,1e-2,1e-3]")
auc_list, param_list = organize_logs(df_coat_g)
plot_data = pd.DataFrame({"AUC":auc_list, "gamma": param_list})
ax = sns.lineplot(x="gamma",y="AUC",data=plot_data,ci=95,err_style="band",n_boot=100, markers=True)
plt.ylim(0.68,0.74)
plt.rc("xtick",labelsize=14)
plt.rc("ytick",labelsize=14)
ax.set_xlabel(r"$\gamma$",font1)
ax.set_ylabel("AUC",font1)
plt.savefig("coat_gamma.pdf",bbox_inches="tight")
plt.show()
print("[YAHOO] Alpha=1e-1 Gamma in [1e-1,1e-2,1e-3,1e-4]")
auc_list, param_list = organize_logs(df_yahoo_g)
plot_data = pd.DataFrame({"AUC":auc_list, "gamma": param_list})
ax = sns.lineplot(x="gamma",y="AUC",data=plot_data,ci=95,err_style="band",n_boot=100, markers=True)
plt.ylim(0.68,0.73)
plt.rc("xtick",labelsize=14)
plt.rc("ytick",labelsize=14)
ax.set_xlabel(r"$\gamma$",font1)
ax.set_ylabel("AUC",font1)
plt.savefig("yahoo_gamma.pdf",bbox_inches="tight")
plt.show()
| demo/plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Introduction
# The first part of this [series](http://pbpython.com/excel-pandas-comp.html) was very well received so I thought I would continue the theme of showing how to do common Excel tasks in pandas.
#
# In the first article, I focused on common math tasks in Excel and how to do them in pandas. In this article, I'll focus on some other Excel tasks related to data selection and how to map them to pandas.
#
# Please refer to [this post](http://pbpython.com/excel-pandas-comp-2.html) for the full article.
# # Getting Set Up
# Import the pandas and numpy modules.
import pandas as pd
import numpy as np
# Load in the Excel data that represents a year's worth of sales.
df = pd.read_excel("../data/sample-salesv3.xlsx")
# Take a quick look at the data types to make sure everything came through as expected.
df.dtypes
# You'll notice that our date column is showing up as a generic `object`. We are going to convert it to a datetime object to make some selections a little easier.
df['date'] = pd.to_datetime(df['date'])
df.head()
df.dtypes
# The date is now a datetime object which will be useful in future steps.
# # Filtering the data
# Similar to the autofilter function in Excel, you can use pandas to filter and select certain subsets of data.
#
# For instance, if we want to just see a specific account number, we can easily do that with pandas.
#
# Note, I am going to use the `head` function to show the top results. This is purely for the purposes of keeping the article shorter.
df[df["account number"]==307599].head()
# You could also do the filtering based on numeric values.
df[df["quantity"] > 22].head()
# If we want to do more complex filtering, we can use `map` to filter. In this example, let's look for items with SKUs that start with B1.
df[df["sku"].map(lambda x: x.startswith('B1'))].head()
# It's easy to chain two statements together using the &.
df[df["sku"].map(lambda x: x.startswith('B1')) & (df["quantity"] > 22)].head()
# Another useful function that pandas supports is called `isin`. It allows us to define a list of values we want to look for.
#
# In this case, we look for all records that include two specific account numbers.
df[df["account number"].isin([714466,218895])].head()
# Pandas supports another function called `query` which allows you to efficiently select subsets of data. It does require the installation of [numexpr](https://github.com/pydata/numexpr) so make sure you have it installed before trying this step.
#
# If you would like to get a list of customers by name, you can do that with a query, similar to the python syntax shown above.
df.query('name == ["Kulas Inc","Barton LLC"]').head()
# The query function allows you to do more than just this simple example, but for the purposes of this discussion, I'm showing it so you are aware that it is out there for you.
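# As a sketch of what else `query` can do, here is a chained condition that also references a local Python variable with the `@` prefix; the tiny DataFrame below is made up so the example is self-contained.

```python
import pandas as pd

# hypothetical stand-in for the sales data
df_q = pd.DataFrame({
    "name": ["Kulas Inc", "Barton LLC", "Kulas Inc"],
    "quantity": [12, 45, 30],
})

threshold = 20  # local variables are referenced inside query via @
result = df_q.query('name == "Kulas Inc" and quantity > @threshold')
print(result)
```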
# # Working with Dates
# Using pandas, you can do complex filtering on dates. Before doing anything with dates, I encourage you to sort by the date column to make sure the results return what you are expecting.
df = df.sort_values(by='date')
df.head()
# The python filtering syntax shown before works with dates.
df[df['date'] >='20140905'].head()
# One of the really nice features of pandas is that it understands dates, so it allows us to do partial filtering. If we want to only look for data more recent than a specific month, we can do so.
df[df['date'] >='2014-03'].head()
# Of course, you can chain the criteria.
df[(df['date'] >='20140701') & (df['date'] <= '20140715')].head()
# Because pandas understands date columns, you can express the date value in multiple formats and it will give you the results you expect.
df[df['date'] >= 'Oct-2014'].head()
df[df['date'] >= '10-10-2014'].head()
# When working with time series data, if we convert the data to use the date as the index, we can do some more filtering.
#
# Set the new index using `set_index`.
df2 = df.set_index(['date'])
df2.head()
# We can slice the data to get a range.
df2["20140101":"20140201"].head()
# Once again, we can use various date representations to remove any ambiguity around date naming conventions.
df2["2014-Jan-1":"2014-Feb-1"].head()
df2["2014-Jan-1":"2014-Feb-1"].tail()
df2["2014"].head()
df2["2014-Dec"].head()
# # Additional String Functions
# Pandas has support for vectorized string functions as well. If we want to identify all the skus that contain a certain value, we can use `str.contains`. In this case, we know that the sku is always represented in the same way, so B1 only shows up in the front of the sku.
df[df['sku'].str.contains('B1')].head()
# We can string queries together and use sort to control how the data is ordered.
df[(df['sku'].str.contains('B1-531')) & (df['quantity']>40)].sort_values(by=['quantity','name'],ascending=[0,1])
# # Bonus Task
# I frequently find myself trying to get a list of unique items in a long list within Excel. It is a multi-step process to do this in Excel but is fairly simple in pandas. We just use the `unique` function on a column to get the list.
df["name"].unique()
# If we wanted to include the account number, we could use `drop_duplicates`.
df.drop_duplicates(subset=["account number","name"]).head()
# We are obviously pulling in more data than we need and getting some non-useful information, so select only the first and second columns using `iloc`.
df.drop_duplicates(subset=["account number","name"]).iloc[:,[0,1]]
# I hope you found this useful. I encourage you to try and apply these ideas to some of your own repetitive Excel tasks and streamline your work flow.
| notebooks/Common-Excel-Part-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
data <- read.table("../Data/RCTs_and_Patients_Nb_local_prop_median_UI_per_region_and_disease.txt")
head(data)
data$Nb_RCTs <- paste(format(round(data$Nb_RCTs_med),nsmall = 0),
" [",format(round(data$Nb_RCTs_low),nsmall = 0),
"--",format(round(data$Nb_RCTs_up),nsmall = 0),"]")
data$Nb_Patients <- paste(format(round(data$Nb_Patients_med),nsmall = 0),
" [",format(round(data$Nb_Patients_low),nsmall = 0),
"--",format(round(data$Nb_Patients_up),nsmall = 0),"]")
data$Prop_RCTs <- paste(format(round(data$Prop_RCTs_med,1),nsmall = 1),
" [",format(round(data$Prop_RCTs_low,1),nsmall = 1),
"--",format(round(data$Prop_RCTs_up,1),nsmall = 1),"]")
data$Prop_Patients <- paste(format(round(data$Prop_Patients_med,1),nsmall = 1),
" [",format(round(data$Prop_Patients_low,1),nsmall = 1),
"--",format(round(data$Prop_Patients_up,1),nsmall = 1),"]")
head(data[,c("Region","Disease","Nb_RCTs","Nb_Patients","Prop_RCTs","Prop_Patients")])
data[data$Disease=="All" & data$Region%in%c("All","High-income","Non-HI"),c("Region","Disease","Nb_RCTs","Nb_Patients")]
dall <- data[data$Region=="All",]
dall[order(dall$Nb_RCTs_low,decreasing=TRUE),c("Region","Disease","Nb_RCTs","Nb_Patients","Prop_RCTs","Prop_Patients")]
| A- Mapping health research effort/.ipynb_checkpoints/A.9- Numbers-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Interpreting nodes and edges with saliency maps in GAT
#
# + [markdown] nbsphinx="hidden" tags=["CloudRunner"]
# <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/interpretability/gat-node-link-importance.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/interpretability/gat-node-link-importance.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
# -
# This demo shows how to use integrated gradients in graph attention networks to obtain accurate importance estimations for both the nodes and edges. The notebook consists of three parts:
#
# * setting up the node classification problem for the Cora citation network
# * training and evaluating a GAT model for node classification
# * calculating node and edge importances for the model's predictions of query ("target") nodes
# + nbsphinx="hidden" tags=["CloudRunner"]
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
# %pip install -q stellargraph[demos]==1.2.1
# + nbsphinx="hidden" tags=["VersionCheck"]
# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
try:
sg.utils.validate_notebook_version("1.2.1")
except AttributeError:
raise ValueError(
f"This notebook requires StellarGraph version 1.2.1, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
) from None
# +
import networkx as nx
import pandas as pd
import numpy as np
from scipy import stats
import os
import time
import sys
import stellargraph as sg
from copy import deepcopy
from stellargraph.mapper import FullBatchNodeGenerator
from stellargraph.layer import GAT, GraphAttention
from tensorflow.keras import layers, optimizers, losses, metrics, models, Model
from sklearn import preprocessing, feature_extraction, model_selection
from tensorflow.keras import backend as K
import matplotlib.pyplot as plt
from stellargraph import datasets
from IPython.display import display, HTML
# %matplotlib inline
# -
# ## Loading the CORA network
# + [markdown] tags=["DataLoadingLinks"]
# (See [the "Loading from Pandas" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.)
# + tags=["DataLoading"]
dataset = datasets.Cora()
display(HTML(dataset.description))
G, subjects = dataset.load()
# -
print(G.info())
# ### Splitting the data
# For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to do this.
#
# Here we're taking 140 node labels for training, 500 for validation, and the rest for testing.
train_subjects, test_subjects = model_selection.train_test_split(
subjects, train_size=140, test_size=None, stratify=subjects
)
val_subjects, test_subjects = model_selection.train_test_split(
test_subjects, train_size=500, test_size=None, stratify=test_subjects
)
# +
from collections import Counter
Counter(train_subjects)
# -
# ### Converting to numeric arrays
# For our categorical target, we will use one-hot vectors that will be fed into a soft-max Keras layer during training. To do this conversion we use scikit-learn's `LabelBinarizer`.
# +
target_encoding = preprocessing.LabelBinarizer()
train_targets = target_encoding.fit_transform(train_subjects)
val_targets = target_encoding.transform(val_subjects)
test_targets = target_encoding.transform(test_subjects)
all_targets = target_encoding.transform(subjects)
# -
# ## Creating the GAT model in Keras
# To feed data from the graph to the Keras model we need a generator. Since GAT is a full-batch model, we use the `FullBatchNodeGenerator` class to feed node features and graph adjacency matrix to the model.
generator = FullBatchNodeGenerator(G, method="gat", sparse=False)
# For training we map only the training nodes returned from our splitter and the target values.
train_gen = generator.flow(train_subjects.index, train_targets)
# Now we can specify our machine learning model. We need a few more parameters for this:
#
# * the `layer_sizes` is a list of hidden feature sizes of each layer in the model. In this example we use two GAT layers with 8-dimensional hidden node features at each layer.
# * `attn_heads` is the number of attention heads in all but the last GAT layer in the model
# * `activations` is a list of activations applied to each layer's output
# * Arguments such as `bias`, `in_dropout`, `attn_dropout` are internal parameters of the model, execute `?GAT` for details.
# To follow the GAT model architecture used for the Cora dataset in the original paper [Graph Attention Networks. P. Veličković et al. ICLR 2018, https://arxiv.org/abs/1710.10903], let's build a 2-layer GAT model, with the second layer being the classifier that predicts paper subject: it thus should have an output size of `train_targets.shape[1]` (7 subjects) and a softmax activation.
gat = GAT(
layer_sizes=[8, train_targets.shape[1]],
attn_heads=8,
generator=generator,
bias=True,
in_dropout=0,
attn_dropout=0,
activations=["elu", "softmax"],
normalize=None,
saliency_map_support=True,
)
# Expose the input and output tensors of the GAT model for node prediction, via the `GAT.in_out_tensors()` method:
x_inp, predictions = gat.in_out_tensors()
# ### Training the model
# Now let's create the actual Keras model with the input tensors `x_inp` and output tensors being the predictions `predictions` from the final dense layer
model = Model(inputs=x_inp, outputs=predictions)
model.compile(
optimizer=optimizers.Adam(lr=0.005),
loss=losses.categorical_crossentropy,
weighted_metrics=["acc"],
)
# Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the validation set (we need to create another generator over the validation data for this)
val_gen = generator.flow(val_subjects.index, val_targets)
# Train the model
N = G.number_of_nodes()
history = model.fit(
train_gen, validation_data=val_gen, shuffle=False, epochs=10, verbose=2
)
sg.utils.plot_history(history)
# Evaluate the trained model on the test set
# +
test_gen = generator.flow(test_subjects.index, test_targets)
test_metrics = model.evaluate(test_gen)
print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
print("\t{}: {:0.4f}".format(name, val))
# -
# Check serialization
# Save model
model_json = model.to_json()
model_weights = model.get_weights()
# Load model from json & set all weights
model2 = models.model_from_json(model_json, custom_objects=sg.custom_keras_layers)
model2.set_weights(model_weights)
model2_weights = model2.get_weights()
pred2 = model2.predict(test_gen)
pred1 = model.predict(test_gen)
print(np.allclose(pred1, pred2))
# ## Node and link importance via saliency maps
# Now we define the importances of node features, nodes, and links in the target node's neighbourhood (ego-net), and evaluate them using our library.
#
# Node feature importance: given a target node $t$ and the model's prediction of $t$'s class, for each node $v$ in its ego-net, feature importance of feature $f$ for node $v$ is defined as the change in the target node's predicted score $s(c)$ for the winning class $c$ if feature $f$ of node $v$ is perturbed.
#
# The overall node importance for node $v$ is defined here as the sum of all feature importances for node $v$, i.e., it is the amount by which the target node's predicted score $s(c)$ would change if we set all features of node $v$ to zeros.
#
# Link importance for link $e=(u, v)$ is defined as the change in target node $t$'s predicted score $s(c)$ if the link $e$ is removed from the graph. Links with high importance (positive or negative) affect the target node prediction more than links with low importance.
#
# Node and link importances can be used to assess the role of neighbour nodes and links in the model's predictions for the node(s) of interest (the target nodes). For datasets like CORA-ML, where the features and edges are binary, vanilla gradients may not perform well, so we use integrated gradients to compute the importances (https://arxiv.org/pdf/1703.01365.pdf).
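# To make the integrated-gradients idea concrete before applying it to the GAT model, here is a minimal self-contained sketch on a toy differentiable function of two variables: the gradient is averaged along the straight path from a baseline to the input and scaled by the input-baseline difference. The function and values are picked purely for illustration.

```python
# toy function and its analytic gradient, chosen for illustration only
def f(x, y):
    return x * y + 2 * x

def grad_f(x, y):
    return (y + 2, x)  # (df/dx, df/dy)

def integrated_gradients(x, y, baseline=(0.0, 0.0), steps=1000):
    bx, by = baseline
    gx_sum = gy_sum = 0.0
    for k in range(1, steps + 1):
        a = k / steps  # interpolate from baseline to input
        gx, gy = grad_f(bx + a * (x - bx), by + a * (y - by))
        gx_sum += gx
        gy_sum += gy
    # average gradient along the path, scaled by (input - baseline)
    return ((x - bx) * gx_sum / steps, (y - by) * gy_sum / steps)

ig = integrated_gradients(3.0, 4.0)
# completeness: the attributions sum to f(input) - f(baseline)
print(sum(ig), f(3.0, 4.0) - f(0.0, 0.0))
```

# The completeness property (attributions summing to the change in the prediction) is what makes the node and link importance scores below add up in an interpretable way.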
from stellargraph.interpretability.saliency_maps import IntegratedGradientsGAT
from stellargraph.interpretability.saliency_maps import GradientSaliencyGAT
# Select the target node whose prediction is to be interpreted.
graph_nodes = list(G.nodes())
all_gen = generator.flow(graph_nodes)
target_nid = 1109199
target_idx = graph_nodes.index(target_nid)
target_gen = generator.flow([target_nid])
# True label of the target node:
y_true = all_targets[target_idx] # true class of the target node
# Predict the class scores for the target node
y_pred = model.predict(target_gen).squeeze()
class_of_interest = np.argmax(y_pred)
print(
"target node id: {}, \ntrue label: {}, \npredicted label: {}".format(
target_nid, y_true, y_pred.round(2)
)
)
# Get the node feature importance by using integrated gradients
int_grad_saliency = IntegratedGradientsGAT(model, train_gen, generator.node_list)
saliency = GradientSaliencyGAT(model, train_gen)
# Get the ego network of the target node.
G_ego = nx.ego_graph(G.to_networkx(), target_nid, radius=len(gat.activations))
# Compute the link importance by integrated gradients.
integrate_link_importance = int_grad_saliency.get_link_importance(
target_nid, class_of_interest, steps=25
)
print("integrated_link_mask.shape = {}".format(integrate_link_importance.shape))
integrated_node_importance = int_grad_saliency.get_node_importance(
target_nid, class_of_interest, steps=25
)
print("\nintegrated_node_importance", integrated_node_importance.round(2))
print(
"integrated self-importance of target node {}: {}".format(
target_nid, integrated_node_importance[target_idx].round(2)
)
)
print(
"\nEgo net of target node {} has {} nodes".format(target_nid, G_ego.number_of_nodes())
)
print(
"Number of non-zero elements in integrated_node_importance: {}".format(
np.count_nonzero(integrated_node_importance)
)
)
# Get the ranks of the edge importance values.
sorted_indices = np.argsort(integrate_link_importance.flatten().reshape(-1))
sorted_indices = np.array(sorted_indices)
integrated_link_importance_rank = [(int(k / N), k % N) for k in sorted_indices[::-1]]
topk = 10
print(
"Top {} most important links by integrated gradients are {}".format(
topk, integrated_link_importance_rank[:topk]
)
)
# print('Top {} most important links by integrated gradients (for potential edges) are {}'.format(topk, integrated_link_importance_rank_add[-topk:]))
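# The flat-index arithmetic above, `(int(k / N), k % N)`, can equivalently be written with numpy's `unravel_index`; a quick self-contained check on a small made-up matrix:

```python
import numpy as np

m = np.array([[0.1, 0.9],
              [0.4, 0.2]])
n = m.shape[0]
order = np.argsort(m.flatten())[::-1]          # flat indices, descending
manual = [(int(k / n), k % n) for k in order]  # the arithmetic used above
via_numpy = list(zip(*np.unravel_index(order, m.shape)))
print(manual[0])  # (row, col) of the largest entry
```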
# In the following, we plot the link and node importance (computed by integrated gradients) of the nodes within the ego graph of the target node.
#
# For nodes, the shape of the node indicates the positive/negative importance the node has. 'round' nodes have positive importance while 'diamond' nodes have negative importance. The size of the node indicates the value of the importance, e.g., a large diamond node has higher negative importance.
#
# For links, the color of the link indicates the positive/negative importance the link has. 'red' links have positive importance while 'blue' links have negative importance. The width of the link indicates the value of the importance, e.g., a thicker blue link has higher negative importance.
nx.set_node_attributes(G_ego, values={x[0]: {"subject": x[1]} for x in subjects.items()})
# +
node_size_factor = 1e2
link_width_factor = 4
nodes = list(G_ego.nodes())
colors = pd.DataFrame(
[v[1]["subject"] for v in G_ego.nodes(data=True)], index=nodes, columns=["subject"]
)
colors = np.argmax(target_encoding.transform(colors), axis=1) + 1
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
pos = nx.spring_layout(G_ego)
# Draw ego as large and red
node_sizes = [integrated_node_importance[graph_nodes.index(k)] for k in nodes]
node_shapes = [
"o" if integrated_node_importance[graph_nodes.index(k)] > 0 else "d" for k in nodes
]
positive_colors, negative_colors = [], []
positive_node_sizes, negative_node_sizes = [], []
positive_nodes, negative_nodes = [], []
# node_size_scale is used for better visualization of nodes
node_size_scale = node_size_factor / np.max(node_sizes)
for k in range(len(node_shapes)):
if list(nodes)[k] == target_nid:
continue
if node_shapes[k] == "o":
positive_colors.append(colors[k])
positive_nodes.append(list(nodes)[k])
positive_node_sizes.append(node_size_scale * node_sizes[k])
else:
negative_colors.append(colors[k])
negative_nodes.append(list(nodes)[k])
negative_node_sizes.append(node_size_scale * abs(node_sizes[k]))
cmap = plt.get_cmap("jet", np.max(colors) - np.min(colors) + 1)
nc = nx.draw_networkx_nodes(
G_ego,
pos,
nodelist=positive_nodes,
node_color=positive_colors,
cmap=cmap,
node_size=positive_node_sizes,
with_labels=False,
vmin=np.min(colors) - 0.5,
vmax=np.max(colors) + 0.5,
node_shape="o",
)
nc = nx.draw_networkx_nodes(
G_ego,
pos,
nodelist=negative_nodes,
node_color=negative_colors,
cmap=cmap,
node_size=negative_node_sizes,
with_labels=False,
vmin=np.min(colors) - 0.5,
vmax=np.max(colors) + 0.5,
node_shape="d",
)
# Draw the target node as a large star colored by its true subject
nx.draw_networkx_nodes(
G_ego,
pos,
nodelist=[target_nid],
node_size=50 * abs(node_sizes[nodes.index(target_nid)]),
node_shape="*",
node_color=[colors[nodes.index(target_nid)]],
cmap=cmap,
vmin=np.min(colors) - 0.5,
vmax=np.max(colors) + 0.5,
label="Target",
)
edges = G_ego.edges()
# link_width_scale is used for better visualization of links
weights = [
integrate_link_importance[graph_nodes.index(u), graph_nodes.index(v)]
for u, v in edges
]
link_width_scale = link_width_factor / np.max(weights)
edge_colors = [
"red"
if integrate_link_importance[graph_nodes.index(u), graph_nodes.index(v)] > 0
else "blue"
for u, v in edges
]
ec = nx.draw_networkx_edges(
G_ego, pos, edge_color=edge_colors, width=[link_width_scale * w for w in weights]
)
plt.legend()
plt.colorbar(nc, ticks=np.arange(np.min(colors), np.max(colors) + 1))
plt.axis("off")
plt.show()
# -
# We then remove the nodes and edges in the ego graph one by one and check how the prediction changes, which gives us the ground-truth importance of each node and edge. Comparing the following figure with the one above shows the effectiveness of integrated gradients: the importance approximations are largely consistent with the ground truth.
# +
[X, _, A], y_true_all = all_gen[0]
N = A.shape[-1]
X_bk = deepcopy(X)
edges = [(graph_nodes.index(u), graph_nodes.index(v)) for u, v in G_ego.edges()]
nodes_idx = [graph_nodes.index(v) for v in nodes]
selected_nodes = np.array([[target_idx]], dtype="int32")
clean_prediction = model.predict([X, selected_nodes, A]).squeeze()
predict_label = np.argmax(clean_prediction)
groud_truth_edge_importance = np.zeros((N, N), dtype="float")
groud_truth_node_importance = []
for node in nodes_idx:
if node == target_idx:
groud_truth_node_importance.append(0)
continue
X = deepcopy(X_bk)
# we set all the features of the node to zero to check the ground truth node importance.
X[0, node, :] = 0
predict_after_perturb = model.predict([X, selected_nodes, A]).squeeze()
prediction_change = (
clean_prediction[predict_label] - predict_after_perturb[predict_label]
)
groud_truth_node_importance.append(prediction_change)
node_shapes = [
"o" if groud_truth_node_importance[k] > 0 else "d" for k in range(len(nodes))
]
positive_colors, negative_colors = [], []
positive_node_sizes, negative_node_sizes = [], []
positive_nodes, negative_nodes = [], []
# node_size_scale is used for better visualization of nodes
node_size_scale = node_size_factor / max(groud_truth_node_importance)
for k in range(len(node_shapes)):
if nodes_idx[k] == target_idx:
continue
if node_shapes[k] == "o":
positive_colors.append(colors[k])
positive_nodes.append(graph_nodes[nodes_idx[k]])
positive_node_sizes.append(node_size_scale * groud_truth_node_importance[k])
else:
negative_colors.append(colors[k])
negative_nodes.append(graph_nodes[nodes_idx[k]])
negative_node_sizes.append(node_size_scale * abs(groud_truth_node_importance[k]))
X = deepcopy(X_bk)
for edge in edges:
original_val = A[0, edge[0], edge[1]]
if original_val == 0:
continue
# we set the weight of a given edge to zero to check the ground truth link importance
A[0, edge[0], edge[1]] = 0
predict_after_perturb = model.predict([X, selected_nodes, A]).squeeze()
groud_truth_edge_importance[edge[0], edge[1]] = (
predict_after_perturb[predict_label] - clean_prediction[predict_label]
) / (0 - 1)
A[0, edge[0], edge[1]] = original_val
# print(groud_truth_edge_importance[edge[0], edge[1]])
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
cmap = plt.get_cmap("jet", np.max(colors) - np.min(colors) + 1)
# Draw the target node as a large star colored by its true subject
nx.draw_networkx_nodes(
G_ego,
pos,
nodelist=[target_nid],
node_size=50 * abs(node_sizes[nodes_idx.index(target_idx)]),
node_color=[colors[nodes_idx.index(target_idx)]],
cmap=cmap,
node_shape="*",
vmin=np.min(colors) - 0.5,
vmax=np.max(colors) + 0.5,
label="Target",
)
# Draw the ego net
nc = nx.draw_networkx_nodes(
G_ego,
pos,
nodelist=positive_nodes,
node_color=positive_colors,
cmap=cmap,
node_size=positive_node_sizes,
with_labels=False,
vmin=np.min(colors) - 0.5,
vmax=np.max(colors) + 0.5,
node_shape="o",
)
nc = nx.draw_networkx_nodes(
G_ego,
pos,
nodelist=negative_nodes,
node_color=negative_colors,
cmap=cmap,
node_size=negative_node_sizes,
with_labels=False,
vmin=np.min(colors) - 0.5,
vmax=np.max(colors) + 0.5,
node_shape="d",
)
edges = G_ego.edges()
# link_width_scale is used for better visualization of links
link_width_scale = link_width_factor / np.max(groud_truth_edge_importance)
weights = [
link_width_scale
* groud_truth_edge_importance[graph_nodes.index(u), graph_nodes.index(v)]
for u, v in edges
]
edge_colors = [
"red"
if groud_truth_edge_importance[graph_nodes.index(u), graph_nodes.index(v)] > 0
else "blue"
for u, v in edges
]
ec = nx.draw_networkx_edges(G_ego, pos, edge_color=edge_colors, width=weights)
plt.legend()
plt.colorbar(nc, ticks=np.arange(np.min(colors), np.max(colors) + 1))
plt.axis("off")
plt.show()
# + [markdown] nbsphinx="hidden" tags=["CloudRunner"]
# <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/interpretability/gat-node-link-importance.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/interpretability/gat-node-link-importance.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
| demos/interpretability/gat-node-link-importance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" colab={} colab_type="code" id="U2D2gTdJVp90"
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# from collections import Counter
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" colab={} colab_type="code" id="oyGfxL3eVp9-"
data = pd.read_csv('movie_bd_v5.csv')
data.sample(5)
# + [markdown] colab_type="text" id="DTIt7ezGVp-G"
# # Preprocessing
# + colab={} colab_type="code" id="jNb40DwKVp-H"
from typing import List, Callable, Optional
from collections import Counter
answers = {} # create a dictionary for the answers
# other column preprocessing steps go here, for example:
##
# My functions here
#
# Create an empty (None) Counter placeholder,
# just to reserve a name; the counter will be used below
counter: Optional[Counter] = None
def split_str_by_pipe(str_pipe: str) -> List[str]:
return split_str_by_char(str_pipe, '|')
def split_str_by_char(str_ch: str, char: str) -> List[str]:
"""
Split the str_pipe (pipe's delimeted string) and return
list of trimed string
"""
if str_ch is not None and len(str_ch) > 0:
return [ s.strip('\n\r\t ') for s in str_ch.split(char)]
return []
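# A quick self-contained check of the pipe-splitting idea together with a Counter, mirroring the commented-out test further below; the helper is redefined locally (with made-up names) so the snippet runs on its own.

```python
from collections import Counter
from typing import List

def split_by_pipe(str_pipe: str) -> List[str]:
    # split on '|' and trim whitespace, as in split_str_by_pipe above
    return [s.strip('\n\r\t ') for s in str_pipe.split('|')] if str_pipe else []

c = Counter()
for cast in ["Alice|Bob|Carol", "Bob|Carol", "Carol"]:
    c.update(split_by_pipe(cast))
print(c.most_common(1))  # [('Carol', 3)]
```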
# def transform_row(r):
# r.animal = 'wild ' + r.type
# r.type = r.animal + ' creature'
# r.age = "{} year{}".format(r.age, r.age > 1 and 's' or '')
# return r
# df.apply(transform_row, axis=1)
# def modify_row(row, field_name, func: Callable):
# print(row[field_name])
# coumn_names = func(str(row[field_name]))
# for c in coumn_names:
# row[c] = 1
# def fill_conter(str_pipe: str, func: Callable, counter:Counter):
# counter.update(func(str_pipe))
# the time given in the dataset is in string format,
# so we need to convert it to datetime format
# ...
# 1.
# Create profit_data DF with special 'profit' column
profit_data = data.copy()
profit_data['profit'] = profit_data['revenue'] - profit_data['budget']
# +
## Test
##
# c = Counter()
# fill_conter('<NAME>|<NAME>|<NAME>|<NAME>', split_str_by_pipe, c)
# fill_conter('<NAME>|<NAME>|<NAME>|<NAME>', split_str_by_pipe, c)
# fill_conter('<NAME>|<NAME>|<NAME>|Just', split_str_by_pipe, c)
# list(c)
# + [markdown] colab_type="text" id="YxZaH-nPVp-L"
# # 1. Which film in the list has the biggest budget?
# + [markdown] colab_type="text" id="Nd-G5gX6Vp-M"
# Using the answer options inside the solution code is not allowed.
# Do you really think you'll get multiple-choice options in real life?)
# + colab={} colab_type="code" id="uVnXAY5RVp-O"
# insert the question number and your answer into the dictionary
# Example:
answers['1'] = '2. Spider-Man 3 (tt0413300)'
# write your own answer here
answers['1'] = '...'
# if you answered correctly, you can add a comment with a "+" mark
# + colab={} colab_type="code" id="dZwb3m28Vp-S"
# write your solution code for this question here:
answers['1'] = data[data['budget']==data['budget'].max()]['original_title'].values[0]
answers['1']
# + [markdown] colab_type="text" id="K7L3nbRXVp-X"
# OPTION 2
# + colab={} colab_type="code" id="OGaoQI7rVp-X"
# you can add different solution approaches
answers['1'] = data.loc[data['budget'].idxmax()]['original_title']
answers['1']
# + [markdown] colab_type="text" id="FNRbbI3vVp-c"
# # 2. Which film is the longest (in minutes)?
# + colab={} colab_type="code" id="vHAoEXNTVp-d"
# I think you've already understood how this dictionary works,
# so I won't duplicate the note again
answers['2'] = data.loc[data['runtime'].idxmax()]['original_title']
answers['2']
# + colab={} colab_type="code" id="ot-VX2XrVp-g"
answers['2'] = data.sort_values('runtime', ascending=False).iloc[0]['original_title']
answers['2']
# + [markdown] colab_type="text" id="bapLlpW8Vp-k"
# # 3. Which film is the shortest (in minutes)?
# + colab={} colab_type="code" id="YBxaSHuAVp-l"
answers['3'] = data.loc[data['runtime'].idxmin()]['original_title']
answers['3']
# + [markdown] colab_type="text" id="TfQbxbfNVp-p"
# # 4. What is the average film runtime?
# + colab={} colab_type="code" id="5K6dKZYVVp-q"
answers['4'] = data['runtime'].mean()
answers['4']
# + [markdown] colab_type="text" id="r5TvbnT_Vp-u"
# # 5. What is the median film runtime?
# + colab={} colab_type="code" id="iBROplKnVp-v"
answers['5'] = data['runtime'].median()
answers['5']
# + [markdown] colab_type="text" id="39P-deDSVp-y"
# # 6. Which film is the most profitable?
# #### Note! Here and below, "profit" or "loss" means the difference between box-office takings and the budget (profit = takings - budget); in our dataset this is (profit = revenue - budget)
# + colab={} colab_type="code" id="UYZh4T9WVp-y"
# it's better to move the profit-column code into the Preprocessing section at the top
# see above: data['profit'] = data[['revenue','budget']].apply(lambda row: row['revenue'] - row['budget'], axis=1 )
answers['6'] = profit_data.loc[profit_data['profit'].idxmax()]['original_title']
answers['6']
# + [markdown] colab_type="text" id="M99JmIX4Vp-2"
# # 7. Which film is the most loss-making?
# + colab={} colab_type="code" id="w-D2m4XPVp-3"
answers['7'] = profit_data.loc[profit_data['profit'].idxmin()]['original_title']
answers['7']
# + [markdown] colab_type="text" id="wEOM5ERVVp-6"
# # 8. For how many films in the dataset did takings exceed the budget?
# + colab={} colab_type="code" id="y00_7HD6Vp-7"
answers['8'] = profit_data[profit_data['profit'] > 0 ].shape[0]
answers['8']
# + [markdown] colab_type="text" id="xhpspA9KVp_A"
# # 9. Which film was the highest-grossing in 2008?
# + colab={} colab_type="code" id="MoUyQr9RVp_B"
answers['9'] = profit_data.loc[profit_data[profit_data['release_year'] == 2008 ]['profit'].idxmax()]['original_title']
answers['9']
# + [markdown] colab_type="text" id="Zi4hDKidVp_F"
# # 10. Which is the most loss-making film for 2012-2014 (inclusive)?
#
# -
answers['10'] = profit_data.loc[profit_data[profit_data['release_year'].between(2012, 2014)]['profit'].idxmin()]['original_title']
answers['10']
# + [markdown] colab_type="text" id="EA7Sa9dkVp_I"
# # 11. Which genre has the most films?
# + colab={} colab_type="code" id="zsJAwJ8QVp_J"
# this task can also be solved with different approaches; try implementing several
# if you add a function, move it into the Preprocessing section at the top
# Step 1: create a counter and get
# the most common element
ganre_counter = Counter()
data['genres'].apply(lambda g: fill_conter(str(g), split_str_by_pipe, ganre_counter))
ganre_counter.most_common(1)
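# The Counter approach can be checked in isolation on toy pipe-delimited
# genre strings (illustrative values, not the real dataset):

```python
from collections import Counter

genre_counter = Counter()
for genres in ['Action|Drama', 'Drama|Comedy', 'Drama']:
    # update() adds one count per genre in the split list
    genre_counter.update(g.strip() for g in genres.split('|'))

print(genre_counter.most_common(1))  # [('Drama', 3)]
```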
# + [markdown] colab_type="text" id="Ax6g2C8SVp_M"
# OPTION 2
# + colab={} colab_type="code" id="otO3SbrSVp_N"
# step 1: copy the existing DF
ganre_data = data.copy()
# step 2: add a 0/1 column per genre:
# 1 if the genre is present in the film's genre list
for g in list(ganre_counter):
ganre_data[g] = ganre_data['genres'].apply(lambda c: 1 if g in split_str_by_pipe(str(c)) else 0)
ganre_data[list(ganre_counter)].sum().idxmax()
# -
# OPTION 3
# step 1: copy the existing DF
ganre_data = data.copy()
# prepare for explode
ganre_data['list_genre'] = ganre_data['genres'].apply(lambda c: split_str_by_pipe(str(c)))
# calculate
ganre_data.explode('list_genre')['list_genre'].value_counts().idxmax()
# + [markdown] colab_type="text" id="T9_bPWpkVp_Q"
# # 12. Which genre's films most often turn out profitable?
# +
# step 1: copy profit_data (it already has the 'profit' column)
ganre_data = profit_data.copy()
# step 2: add a 0/1 column per genre:
# 1 if the genre is present in the film's genre list
for g in list(ganre_counter):
    ganre_data[g] = ganre_data['genres'].apply(lambda c: 1 if g in split_str_by_pipe(str(c)) else 0)
ganre_data[ganre_data['profit']>0][list(ganre_counter)].sum().idxmax()
# + colab={} colab_type="code" id="Tmt8MaK1Vp_R"
# step 1: copy profit_data (it already has the 'profit' column)
ganre_data = profit_data.copy()
# prepare for explode
ganre_data['list_genre'] = ganre_data['genres'].apply(lambda c: split_str_by_pipe(str(c)))
# calculate
ganre_data = ganre_data.explode('list_genre')
ganre_data[ganre_data['profit'] > 0]['list_genre'].value_counts().idxmax()
# + [markdown] colab_type="text" id="0F23bgsDVp_U"
# # 13. Which director has the largest total box-office takings?
# -
# step 1: copy the existing DF
director_data = data.copy()
# prepare for explode
director_data['list_director'] = director_data['director'].apply(lambda c: split_str_by_pipe(str(c)))
# calculate
director_data.explode('list_director').groupby('list_director')['revenue'].sum().idxmax()
# + [markdown] colab_type="text" id="PsYC9FgRVp_a"
# # 14. Which director has made the most films in the Action genre?
# + colab={} colab_type="code" id="wd2M-wHeVp_b"
# step 1: copy the existing DF
director_ganre_data = data.copy()
# prepare for explode
director_ganre_data['list_director'] = director_ganre_data['director'].apply(lambda c: split_str_by_pipe(str(c)))
director_ganre_data['list_genre'] = director_ganre_data['genres'].apply(lambda c: split_str_by_pipe(str(c)))
director_ganre_data = director_ganre_data.explode('list_director').explode('list_genre')
director_ganre_data[director_ganre_data['list_genre']=='Action']['list_director'].value_counts().idxmax()
# + [markdown] colab_type="text" id="PQ0KciD7Vp_f"
# # 15. Films with which actor brought in the highest box-office takings in 2012?
# + colab={} colab_type="code" id="aga62oeKVp_g"
# step 1: copy the existing DF
cast_data = data.copy()
cast_data['list_cast'] = cast_data['cast'].apply(lambda c: split_str_by_pipe(str(c)))
cast_data = cast_data.explode('list_cast')
# cast_data
cast_data[cast_data['release_year']==2012].groupby('list_cast')['revenue'].sum().idxmax()
# + [markdown] colab_type="text" id="mWHyyL7QVp_j"
# # 16. Which actor has appeared in the most high-budget films?
# + colab={} colab_type="code" id="qQtmHKTFVp_k"
# step 1: copy the existing DF
cast_data = data.copy()
cast_data['list_cast'] = cast_data['cast'].apply(lambda c: split_str_by_pipe(str(c)))
cast_data = cast_data.explode('list_cast')
# cast_data
cast_data[cast_data['budget'] > cast_data['budget'].mean()]['list_cast'].value_counts().idxmax()
# + [markdown] colab_type="text" id="NIh6AaW5Vp_n"
# # 17. In which genre of film has Nicolas Cage appeared most often?
# + colab={} colab_type="code" id="H74SJDIBVp_n"
cast_ganre_data = data.copy()
cast_ganre_data['list_cast'] = cast_ganre_data['cast'].apply(lambda c: split_str_by_pipe(str(c)))
cast_ganre_data['list_genre'] = cast_ganre_data['genres'].apply(lambda c: split_str_by_pipe(str(c)))
cast_ganre_data = cast_ganre_data.explode('list_cast').explode('list_genre')
cast_ganre_data[cast_ganre_data['list_cast']=='Nicolas Cage']['list_genre'].value_counts().idxmax()
# + [markdown] colab_type="text" id="RqOmPRfWVp_q"
# # 18. The most loss-making film from Paramount Pictures
# + colab={} colab_type="code" id="9E_B0Y96Vp_r"
production_companies_data = profit_data.copy()  # profit_data has the 'profit' column
production_companies_data['list_companies'] = production_companies_data['production_companies'].apply(lambda c: split_str_by_pipe(str(c)))
production_companies_data = production_companies_data.explode('list_companies')
production_companies_data[(production_companies_data['list_companies']=='Paramount Pictures')]. \
sort_values('profit', ascending=True). \
iloc[0]['original_title']
# + [markdown] colab_type="text" id="vS8Ur6ddVp_u"
# # 19. Which year was the most successful by total box-office takings?
# + colab={} colab_type="code" id="Dnbt4GdIVp_v"
data.groupby('release_year')['revenue'].sum().idxmax()
# + [markdown] colab_type="text" id="JAzJh4QAVp_z"
# # 20. Which year was the most profitable for the <NAME> studio?
# + colab={} colab_type="code" id="wgVu02DEVp_0"
production_companies_data = profit_data.copy()  # profit_data has the 'profit' column
production_companies_data['list_companies'] = production_companies_data['production_companies'].apply(lambda c: split_str_by_pipe(str(c)))
production_companies_data = production_companies_data.explode('list_companies')
# production_companies_data
production_companies_data[(production_companies_data['list_companies'].str.startswith('<NAME>'))]. \
    groupby('release_year')['profit'].sum().idxmax()
# + [markdown] colab_type="text" id="8Im1S2HRVp_4"
# # 21. In which month, summed over all years, were the most films released?
# + colab={} colab_type="code" id="lev6TH7gVp_4"
month_data = data.copy()
month_data['month'] = month_data['release_date'].apply(lambda c: int(split_str_by_char(str(c),'/')[0]))
month_data.groupby('month')['original_title'].nunique().idxmax()
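# The month extraction assumes release_date strings look like 'M/D/YYYY'
# (that is what splitting on '/' and taking index 0 implies). A toy check:

```python
from collections import Counter

dates = ['6/15/2008', '7/1/2008', '6/20/2012', '12/25/2014']
# month = first '/'-separated field, cast to int
month_counts = Counter(int(d.split('/')[0]) for d in dates)
print(month_counts.most_common(1))  # [(6, 2)]
```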
# + [markdown] colab_type="text" id="uAJsZ_NeVp_7"
# # 22. How many films in total were released in summer? (June, July, August)
# + colab={} colab_type="code" id="Aa-hEREoVp_8"
month_data = data.copy()
month_data['month'] = month_data['release_date'].apply(lambda c: int(split_str_by_char(str(c),'/')[0]))
month_data[month_data['month'].isin([6,7,8])].shape[0]
# + [markdown] colab_type="text" id="G94ppOY1VqAA"
# # 23. For which director is winter the most productive season?
# + colab={} colab_type="code" id="RhNTsamuVqAB"
month_director_data = data.copy()
month_director_data['list_director'] = month_director_data['director'].apply(lambda c: split_str_by_pipe(str(c)))
month_director_data = month_director_data.explode('list_director')
month_director_data['month'] = month_director_data['release_date'].apply(lambda c: int(split_str_by_char(str(c),'/')[0]))
month_director_data = month_director_data[month_director_data['month'].isin([1,2,12])]
month_director_data['list_director'].value_counts().idxmax()
# + [markdown] colab_type="text" id="RBo0JVjVVqAF"
# # 24. Which studio gives its films the longest titles by character count?
# + colab={} colab_type="code" id="QRGS8L0iVqAG"
# OK
# original_title
title_data = data.copy()
title_data['title_length'] = title_data['original_title'].apply(lambda c: len(str(c)))
title_data['list_companies'] = title_data['production_companies'].apply(lambda c: split_str_by_pipe(str(c)))
title_data = title_data.explode('list_companies')
title_data.groupby('list_companies')['title_length'].mean().idxmax()
# title_data[title_data['title_length'] > title_data['title_length'].mean()]['list_companies'].value_counts().idxmax()
# + [markdown] colab_type="text" id="9G0hbvR7VqAK"
# # 25. Which studio's film descriptions are on average the longest by word count?
# + colab={} colab_type="code" id="Ge2GsLNxVqAK"
# OK
# overview
overview_data = data.copy()
overview_data['overview_words_cnt'] = overview_data['overview'].apply(lambda c: len(split_str_by_char(str(c), ' ')))
overview_data['list_companies'] = overview_data['production_companies'].apply(lambda c: split_str_by_pipe(str(c)))
overview_data = overview_data.explode('list_companies')
overview_data.groupby('list_companies')['overview_words_cnt'].mean().idxmax()
# + [markdown] colab_type="text" id="FJ1AFt90VqAP"
# # 26. Which films are in the top 1% by rating?
# + colab={} colab_type="code" id="8qmJVq4CVqAQ"
# OK
# vote_average
data[data['vote_average'] >= data['vote_average'].quantile(0.99)] \
    .sort_values('vote_average', ascending=False)[['original_title']] \
    .head(10)
# + [markdown] colab_type="text" id="MdXsUXbCVqAV"
# # 27. Which actors appear together in the same film most often?
#
# + [markdown] colab_type="text" id="4ymnxEVoVqAW"
# OPTION 2
# -
# OK
import itertools
data27=pd.read_csv('movie_bd_v5.csv')
data27['cast'] = data27['cast'].str.split('|')
data_p = data27['cast'].reset_index()
data_p['pairs'] = data_p['cast'].apply(lambda s: list(itertools.combinations(s, 2)))
data_p
data_p = data_p.explode('pairs').reset_index()
Counter(data_p['pairs']).most_common(1)
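# One design note on the pair counting above: itertools.combinations keeps the
# order the names appear in, so ('A', 'B') and ('B', 'A') would count as
# different pairs when cast order varies between films. Sorting each pair
# normalizes that; a toy sketch:

```python
import itertools
from collections import Counter

casts = [['Ann', 'Bob', 'Carl'], ['Bob', 'Ann'], ['Ann', 'Bob', 'Dee']]
pair_counter = Counter()
for cast in casts:
    # sort each pair so ('Bob', 'Ann') and ('Ann', 'Bob') are the same key
    pair_counter.update(tuple(sorted(p)) for p in itertools.combinations(cast, 2))

print(pair_counter.most_common(1))  # [(('Ann', 'Bob'), 3)]
```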
# +
# # step 1 copy exist DF
# cast_data = data.copy()
# cast_data['list_cast'] = cast_data['cast'].apply(lambda c: split_str_by_pipe(str(c)))
# cast_data = cast_data.explode('list_cast')
# # cats_data
# # cast_data[cast_data['budget'] > cast_data['budget'].mean()]['list_cast'].value_counts().idxmax()
# cast_data
# + [markdown] colab_type="text" id="U0nONFnGVqAX"
# # Submission
# + colab={} colab_type="code" id="IfcaRO9-VqAX" outputId="0f132912-32bb-4196-c98c-abfbc4ad5a5f"
# at the end you can review your answers to every question
answers
# + colab={} colab_type="code" id="SiRmHPl8VqAd"
# and make sure nothing was missed :)
len(answers)
# + colab={} colab_type="code" id="uCfuTkRbVqAg"
# ls -al
# -
# + colab={} colab_type="code" id="Vwx3NrkSVqAl"
| module_1/module_1_draft.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: torch_retina
# language: python
# name: torch_retina
# ---
# +
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw, ImageEnhance
import albumentations as A
import albumentations.pytorch
from tqdm.notebook import tqdm
import cv2
import re
import time
import sys
sys.path.append('../')
from retinanet import coco_eval
from retinanet import csv_eval
from retinanet import model
# from retinanet import retina
from retinanet.dataloader import *
from retinanet.anchors import Anchors
from retinanet.losses import *
from retinanet.scheduler import *
from retinanet.parallel import DataParallelModel, DataParallelCriterion
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
#Torch
import torch
import torch.nn as nn
from torch.utils.data import Dataset,DataLoader
from torch.utils.data.sampler import SequentialSampler, RandomSampler
from torch.optim import Adam, lr_scheduler
import torch.optim as optim
# +
# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# print ('Available devices ', torch.cuda.device_count())
# print ('Current cuda device ', torch.cuda.current_device())
# print(torch.cuda.get_device_name(device))
# Change the GPU allocation
GPU_NUM = 4 # enter the desired GPU number
device = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu')
torch.cuda.set_device(device) # change allocation of current GPU
print(device)
print ('Current cuda device ', torch.cuda.current_device()) # check
device_ids = [4,0,2,3]
# +
# os.environ["CUDA_VISIBLE_DEVICES"] = '0, 2, 3, 4'
# print ('Current cuda device ', torch.cuda.current_device()) # check
# +
# device = torch.device('cuda')
# device = torch.device('cpu')
# +
# # %time
# PATH_TO_WEIGHTS = '../coco_resnet_50_map_0_335_state_dict.pt'
# retinanet = model.resnet50(num_classes=5,)
# # retinanet.load_state_dict(torch.load(PATH_TO_WEIGHTS))
# +
# %time
PATH_TO_WEIGHTS = '../coco_resnet_50_map_0_335_state_dict.pt'
pretrained_retinanet = model.resnet50(num_classes=80, device=device)
pretrained_retinanet.load_state_dict(torch.load(PATH_TO_WEIGHTS))
retinanet = model.resnet50(num_classes=5, device=device)
# Copy all pretrained weights except the classification head (it has a
# different number of classes). Note: assigning into state_dict() entries
# does not modify the model's tensors, and zipping parameters() with
# state_dict() keys misaligns on buffers, so load a filtered dict instead.
pretrained_state = pretrained_retinanet.state_dict()
transfer_state = {k: v for k, v in pretrained_state.items() if 'classificationModel' not in k}
retinanet.load_state_dict(transfer_state, strict=False)
# +
# https://pypi.org/project/torch-encoding/
# torch-encoding should be installed
# pip install torch-encoding
# import torch.encoding as encoding
# retinanet = encoding.nn.DataParallelModel(retinanet, device_ids = [4,0,2,3]).to(device)
# +
# retinanet.to(device)
retinanet = torch.nn.DataParallel(retinanet, device_ids = [4,0,2,3], output_device=4).to(device)
# retinanet = DataParallelModel(retinanet, device_ids = device_ids)
retinanet.to(device)
# retinanet.cuda()
# retinanet.module.freeze_bn()
# -
train_info = np.load('../data/train.npy', allow_pickle=True, encoding='latin1').item()
# train_info
# +
batch_size = 32
train_ds = PapsDataset(train_info, transforms)
train_data_loader = DataLoader(
train_ds,
batch_size=batch_size,
shuffle=True,
num_workers=4,
collate_fn=collate_fn
)
# -
criterion = FocalLoss(device)
# criterion = DataParallelCriterion(criterion, device_ids = device_ids)
criterion = criterion.to(device)
# optimizer = optim.Adam(retinanet.parameters(), lr=1e-4)
# scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3, verbose=True)
# retinanet.train()
retinanet.training = True
# retinanet.module.freeze_bn()
# retinanet.freeze_bn()
# https://gaussian37.github.io/dl-pytorch-lr_scheduler/
optimizer = optim.Adam(retinanet.parameters(), lr = 1e-7)
scheduler = CosineAnnealingWarmUpRestarts(optimizer, T_0=20, T_mult=1, eta_max=0.0005, T_up=5, gamma=0.5)
# CosineAnnealingWarmRestarts
scheduler.state_dict()
# # scheduler._last_lr
# optimizer.param_groups[0]["lr"]
# +
#for i, data in enumerate(tqdm(train_data_loader)) :
EPOCH_NUM = 60
loss_per_epoch = 2
for epoch in range(EPOCH_NUM) :
epoch_loss = []
total_loss = 0
tk0 = tqdm(train_data_loader, total=len(train_data_loader), leave=False)
    EPOCH_LEARNING_RATE = optimizer.param_groups[0]["lr"]
    print("*****{}th epoch, learning rate {}".format(epoch, EPOCH_LEARNING_RATE))
for step, data in enumerate(tk0) :
images, _, paths, targets = data
# print(targets)
batch_size = len(images)
# images = list(image.to(device) for image in images)
c, h, w = images[0].shape
images = torch.cat(images).view(-1, c, h, w).to(device)
# targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
targets = [ t.to(device) for t in targets]
# classification_loss, regression_loss = retinanet([images, targets])
outputs = retinanet([images, targets])
classification, regression, anchors, annotations = (outputs)
classification_loss, regression_loss = criterion(classification, regression, anchors, annotations)
# output = retinanet(images)
# features, regression, classification = output
# classification_loss, regression_loss = criterion(classification, regression, modified_anchors, targets)
classification_loss = classification_loss.mean()
regression_loss = regression_loss.mean()
loss = classification_loss + regression_loss
total_loss += loss.item()
epoch_loss.append((loss.item()))
tk0.set_postfix(lr=optimizer.param_groups[0]["lr"], batch_loss=loss.item(), cls_loss=classification_loss.item(),
reg_loss=regression_loss.item(), avg_loss=total_loss/(step+1))
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(retinanet.parameters(), 0.1)
optimizer.step()
print('{}th epochs loss is {}'.format(epoch, np.mean(epoch_loss)))
if loss_per_epoch > np.mean(epoch_loss):
print('best model is saved')
torch.save(retinanet.state_dict(), 'best_model.pt')
loss_per_epoch = np.mean(epoch_loss)
# scheduler.step(np.mean(epoch_loss))
scheduler.step()
# -
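# The best-checkpoint rule inside the loop above can be isolated: save only
# when the epoch's mean loss improves on the best value seen so far. A toy
# sketch with a stub save function (the real loop calls torch.save):

```python
saved = []

def save_model(tag):
    # stand-in for torch.save(retinanet.state_dict(), 'best_model.pt')
    saved.append(tag)

best_loss = float('inf')
for epoch, epoch_loss in enumerate([2.0, 1.5, 1.7, 1.1]):
    if epoch_loss < best_loss:
        save_model(epoch)
        best_loss = epoch_loss

print(saved)  # [0, 1, 3]
```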
torch.save(retinanet.state_dict(), '../trained_models/model.pt')
retinanet.load_state_dict(torch.load('../trained_models/model.pt'))
# +
# retinanet.eval()
# -
test_info = np.load('../data/test.npy', allow_pickle=True, encoding='latin1').item()
# train_info
# +
test_ds = PapsDataset(test_info,val_transforms)
test_data_loader = DataLoader(
test_ds,
batch_size=1,
shuffle=False,
num_workers=4,
collate_fn=collate_fn
)
# -
retinanet.eval()
# retinanet.training = True
tk1 = tqdm(test_data_loader, total=len(test_data_loader),leave=False)
for step, data in enumerate(tk1) :
with torch.no_grad():
images, _, paths, targets = data
batch_size = len(images)
c, h, w = images[0].shape
images = torch.cat(images).view(-1, c, h, w).to(device)
print(images.shape)
targets = [ t.to(device) for t in targets]
scores, labels, boxes = retinanet(images)
print(scores)
print(labels)
print(boxes)
retinanet.eval()
retinanet.training
| notebook/model_loss.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/bijeet2001/18CSE124/blob/main/18CSE124_dmdwlab2_assignment2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="-bCBia4_8aCG"
# Toyota dataset, comma-separated values (CSV)
path = "https://raw.githubusercontent.com/chirudukuru/DMDW/main/Toyota.csv"
# + id="akDOeQhGVwAS"
import pandas as pd
# + id="-cQErTpaVylp"
data = pd.read_csv(path)
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="jxzpGInvV3aZ" outputId="08e688a3-02c4-4101-f4d6-968aec14e368"
data
# + id="WrAzfzNgXZgC"
data = pd.read_csv("https://raw.githubusercontent.com/chirudukuru/DMDW/main/Toyota.csv",index_col="Unnamed: 0")
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="pmVXCVw0X95v" outputId="bfe0eca8-645d-41d3-840d-baf086590c85"
data
# + colab={"base_uri": "https://localhost:8080/"} id="lpfdrWr1YJqy" outputId="dae449df-de7c-417c-c04d-07153e938680"
type(data)
# + colab={"base_uri": "https://localhost:8080/"} id="U1xvfhQPZDre" outputId="212768ba-c4c9-41ec-bdf1-0d1ceae122a2"
data.shape
# + colab={"base_uri": "https://localhost:8080/"} id="MbBMKs9cZInk" outputId="3ad13139-0e19-4de3-d50c-1ca7305a888d"
data.info()
# + colab={"base_uri": "https://localhost:8080/"} id="oPsvREWPZQFI" outputId="1a2c75ea-1f66-4873-bcf8-257e1d28f354"
data.index
# + colab={"base_uri": "https://localhost:8080/"} id="0t_GmcWaZXP6" outputId="74915e5f-8f0c-4b08-f32a-7180df04cc89"
data.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="UNnIQPiXZt9H" outputId="5cf1bb34-0dd2-4014-f188-df01eef05258"
data.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="2PXi24CiZz9R" outputId="102b5da1-b3d8-4196-c260-d10e7719384d"
data.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 136} id="EPbR8w0MZ2t7" outputId="4eca37a2-c352-44a9-888d-531bbda1be56"
data.head(3)
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="AQOJAGUfZ5ze" outputId="0e252651-3118-4ad6-e990-f354a76fec22"
data[['Price',"Age"]].head(10)
# + colab={"base_uri": "https://localhost:8080/"} id="VxmBhOPNaH6m" outputId="30d4c091-52b3-4fb8-c5d0-b6c5048f35e4"
### Data Wrangling (Working With Null Values)
data.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="omhWKSAiaeyS" outputId="738cc046-75b4-403b-f06f-0874a47de8d2"
data.dropna(inplace=True) # 1st method: drop the rows with null values (fine when we have plenty of data)
data.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="faK-cVrPanIk" outputId="ede3fe6f-d764-4ab2-edbb-d7516c18879b"
data.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="77dmfCogap9H" outputId="1ff612d1-6177-41fb-b9e5-37d799dafd52"
data.head(10) # after removing the rows with null values
# + colab={"base_uri": "https://localhost:8080/"} id="etYelZBiavBW" outputId="665e5050-808e-47d6-c902-8f7f1928d92b"
# 2nd method of handling missing values: impute with the mean
data['MetColor'].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="jq6CtRtsaynh" outputId="a8117698-9c9b-4ec6-e121-9fbeb8e98c8a"
data['MetColor'].head()
# + id="He2UQOjVa1F1"
import numpy as np
# + colab={"base_uri": "https://localhost:8080/"} id="WNWCMFaFchf8" outputId="914dac6a-aa5c-4500-ee34-a49b2858a1a5"
data['MetColor'].replace(np.NaN,data['MetColor'].mean()).head()
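# Mean imputation itself is simple: replace each missing entry with the mean
# of the observed values. A stdlib sketch with None standing in for NaN:

```python
from statistics import mean

col = [1.0, None, 0.0, 1.0, None]
col_mean = mean(v for v in col if v is not None)   # mean over observed values
filled = [col_mean if v is None else v for v in col]
print(filled)
```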
# + id="Tld_zg2zdXUV"
| 18CSE124_dmdwlab2_assignment2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: u4-s3-dnn
# kernelspec:
# display_name: U4-S3-DNN (Python 3.7)
# language: python
# name: u4-s3-dnn
# ---
# <img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
# <br></br>
# <br></br>
#
# # Major Neural Network Architectures Challenge
# ## *Data Science Unit 4 Sprint 3 Challenge*
#
# In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures:
# recurrent neural networks (RNNs), long short-term memory networks (LSTMs), convolutional neural networks (CNNs), and Generative Adversarial Networks (GANs). In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy.
#
# __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, double-check your approach!
#
# ## Challenge Objectives
# *You should be able to:*
# * <a href="#p1">Part 1</a>: Train a RNN classification model
# * <a href="#p2">Part 2</a>: Utilize a pre-trained CNN for object detection
# * <a href="#p3">Part 3</a>: Describe the difference between a discriminator and generator in a GAN
# * <a href="#p4">Part 4</a>: Describe yourself as a Data Scientist and elucidate your vision of AI
# + [markdown] colab_type="text" id="-5UwGRnJOmD4"
# <a id="p1"></a>
# ## Part 1 - RNNs
#
# Use an RNN/LSTM to fit a multi-class classification model on reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model.
#
# Your Tasks:
# - Use Keras to fit a predictive model, classifying news articles into topics.
# - Report your overall score and accuracy
#
# For reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.
#
# __*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
# + colab={"base_uri": "https://localhost:8080/", "height": 1114} colab_type="code" id="DS-9ksWjoJit" outputId="0c3512e4-5cd4-4dc6-9cda-baf00c835f59"
from tensorflow.keras.datasets import reuters
(X_train, y_train), (X_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="fLKqFh8DovaN" outputId="64b0d621-7e74-4181-9116-406e8c518465"
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
# + colab={} colab_type="code" id="_QVSlFEAqWJM"
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM
batch_size = 46
max_features = len(word_index.values())
maxlen = 200
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
print('Build model...')
# TODO - your code!
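# What pad_sequences is doing here, as a toy re-implementation (Keras's
# defaults: pad on the left with 0, truncate from the left) -- a sketch,
# not the library code:

```python
def pad(seq, maxlen, value=0):
    if len(seq) >= maxlen:
        return seq[-maxlen:]                       # keep the last maxlen tokens
    return [value] * (maxlen - len(seq)) + seq     # left-pad with the fill value

print(pad([5, 6, 7], 5))            # [0, 0, 5, 6, 7]
print(pad([1, 2, 3, 4, 5, 6], 5))   # [2, 3, 4, 5, 6]
```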
# +
# You should only run this cell once your model has been properly configured
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train,
batch_size=batch_size,
epochs=1,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
# -
# ## Sequence Data Question
# #### *Describe the `pad_sequences` method used on the training dataset. What does it do? Why do you need it?*
#
# Please add your answer in markdown here.
#
# ## RNNs versus LSTMs
# #### *What are the primary motivations for using Long Short-Term Memory (LSTM) cells over traditional Recurrent Neural Networks?*
#
# Please add your answer in markdown here.
#
# ## RNN / LSTM Use Cases
# #### *Name and describe 3 use cases of LSTMs or RNNs, and explain why they are suited to each use case*
#
# Please add your answer in markdown here.
# + [markdown] colab_type="text" id="yz0LCZd_O4IG"
# <a id="p2"></a>
# ## Part 2- CNNs
#
# ### Find the Frog
#
# Time to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:
#
# <img align="left" src="https://d3i6fh83elv35t.cloudfront.net/newshour/app/uploads/2017/03/GettyImages-654745934-1024x687.jpg" width=400>
#
# + colab={"base_uri": "https://localhost:8080/", "height": 245} colab_type="code" id="whIqEWR236Af" outputId="7a74e30d-310d-4a3a-9ae4-5bf52d137bda"
# !pip install google_images_download
# + colab={"base_uri": "https://localhost:8080/", "height": 332} colab_type="code" id="EKnnnM8k38sN" outputId="59f477e9-0b25-4a38-9678-af24e0176535"
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 4, "print_urls": True}
absolute_image_paths = response.download(arguments)
# + [markdown] colab_type="text" id="si5YfNqS50QU"
# At time of writing at least a few do, but since the Internet changes - it is possible yours won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.
#
# *Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`
#
# *Stretch goal* - also check for fish.
# + colab={} colab_type="code" id="FaT07ddW3nHz"
# You've got something to do in this cell. ;)
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
    """ Scans an image for frogs.
    Should return an integer with the number of frogs detected in the
    image.
    Inputs:
    ---------
    img: Preprocessed image ready for prediction
    Returns:
    ---------
    frogs (int): Count of predicted frogs in the image
    """
# Your Code Here
# TODO - your code!
return frogs
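# The counting step can be sketched separately: given decode_predictions-style
# tuples of (class_id, label, score), count top predictions whose label is one
# of the frog classes. The label strings and the toy tuples below are
# assumptions for illustration, not outputs of the real model:

```python
FROG_LABELS = {'bullfrog', 'tree_frog', 'tailed_frog'}

def count_frog_labels(decoded, threshold=0.0):
    # count predictions that are frog labels with a score above threshold
    return sum(1 for _, label, score in decoded
               if label in FROG_LABELS and score > threshold)

toy = [('n01641577', 'bullfrog', 0.81), ('n01644900', 'tailed_frog', 0.07),
       ('n02834397', 'bib', 0.02)]
print(count_frog_labels(toy, threshold=0.05))  # 2
```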
# +
import matplotlib.pyplot as plt
def display_predictions(urls):
image_data = []
frogs = []
for url in urls:
x = process_img_path(url)
x = image.img_to_array(x)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
image_data.append(x)
frogs.append(img_contains_frog(x))
return image_data,frogs
# +
f, axarr = plt.subplots(2, 2)
imgs, frogs = display_predictions(absolute_image_paths[0]['animal pond'])
# Use a running index so all four downloaded images are shown (reusing the
# row index would display only two of them).
for i, (x, y) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    axarr[x, y].imshow(np.squeeze(imgs[i], axis=0) / 255)
    axarr[x, y].set_title(f"Frog: {frogs[i]}")
    axarr[x, y].axis('off')
# + [markdown] colab_type="text" id="XEuhvSu7O5Rf"
# <a id="p3"></a>
# ## Part 3 - Autoencoders
#
# Describe a use case for an autoencoder given that an autoencoder tries to predict its own input.
#
# __*Your Answer:*__
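One classic answer is anomaly detection: train the autoencoder to reconstruct normal data, then flag inputs it reconstructs poorly. As a minimal illustration, the sketch below uses PCA via SVD as a stand-in for a trained *linear* autoencoder (with linear activations and MSE loss, a linear autoencoder learns the same principal subspace); the data and threshold here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# "Normal" data lies near the line y = x: intrinsically 1-D in 2-D space.
t = rng.normal(size=(200, 1))
X = np.hstack([t, t]) + 0.05 * rng.normal(size=(200, 2))
mu = X.mean(axis=0)

# Top singular vector = the 1-D "bottleneck" a linear autoencoder would learn.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
v = Vt[0]

def recon_error(x):
    z = (x - mu) @ v                   # encode: project to the 1-D code
    x_hat = np.outer(z, v) + mu        # decode: map back to 2-D
    return float(np.sum((x_hat - x) ** 2))

print(recon_error(np.array([[1.0, 1.0]])))   # small: on the learned manifold
print(recon_error(np.array([[1.0, -1.0]])))  # large: anomalous direction
```

Inputs with reconstruction error far above what was seen in training are candidates for anomalies; the same logic carries over to deep, nonlinear autoencoders.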
# + [markdown] colab_type="text" id="626zYgjkO7Vq"
# <a id="p4"></a>
# ## Part 4 - More...
# + [markdown] colab_type="text" id="__lDWfcUO8oo"
# Answer the following questions, with a target audience of a fellow Data Scientist:
#
# - What do you consider your strongest area, as a Data Scientist?
# - What area of Data Science would you most like to learn more about, and why?
# - Where do you think Data Science will be in 5 years?
# - What are the threats posed by AI to our society?
# - How do you think we can counteract those threats?
# - Do you think achieving General Artificial Intelligence is ever possible?
#
# A few sentences per answer is fine - only elaborate if time allows.
# + [markdown] colab_type="text" id="_Hoqe3mM_Mtc"
# ## Congratulations!
#
# Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.
#
# +
from IPython.display import HTML
HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""")
# Source notebook: SC/LS_DS_Uni_4_Sprint_3_Challenge.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text"
# This is a companion notebook for the book [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=keras&a_bid=76564dff). For readability, it only contains runnable code blocks and section titles, and omits everything else in the book: text paragraphs, figures, and pseudocode.
#
# **If you want to be able to follow what's going on, I recommend reading the notebook side by side with your copy of the book.**
#
# This notebook was generated for TensorFlow 2.6.
# + [markdown] colab_type="text"
# # Deep learning for timeseries
# + [markdown] colab_type="text"
# ## Different kinds of timeseries tasks
# + [markdown] colab_type="text"
# ## A temperature forecasting example
# + colab_type="code"
# !wget https://s3.amazonaws.com/keras-datasets/jena_climate_2009_2016.csv.zip
# !unzip jena_climate_2009_2016.csv.zip
# + [markdown] colab_type="text"
# **Inspecting the data of the Jena weather dataset**
# + colab_type="code"
import os
fname = os.path.join("jena_climate_2009_2016.csv")
with open(fname) as f:
data = f.read()
lines = data.split("\n")
header = lines[0].split(",")
lines = lines[1:]
print(header)
print(len(lines))
# + [markdown] colab_type="text"
# **Parsing the data**
# + colab_type="code"
import numpy as np
temperature = np.zeros((len(lines),))
raw_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(",")[1:]]
temperature[i] = values[1]
raw_data[i, :] = values[:]
# + [markdown] colab_type="text"
# **Plotting the temperature timeseries**
# + colab_type="code"
from matplotlib import pyplot as plt
plt.plot(range(len(temperature)), temperature)
# + [markdown] colab_type="text"
# **Plotting the first 10 days of the temperature timeseries**
# + colab_type="code"
plt.plot(range(1440), temperature[:1440])
# + [markdown] colab_type="text"
# **Computing the number of samples we'll use for each data split.**
# + colab_type="code"
num_train_samples = int(0.5 * len(raw_data))
num_val_samples = int(0.25 * len(raw_data))
num_test_samples = len(raw_data) - num_train_samples - num_val_samples
print("num_train_samples:", num_train_samples)
print("num_val_samples:", num_val_samples)
print("num_test_samples:", num_test_samples)
# + [markdown] colab_type="text"
# ### Preparing the data
# + [markdown] colab_type="text"
# **Normalizing the data**
# + colab_type="code"
mean = raw_data[:num_train_samples].mean(axis=0)
raw_data -= mean
std = raw_data[:num_train_samples].std(axis=0)
raw_data /= std
# + colab_type="code"
import numpy as np
from tensorflow import keras
int_sequence = np.arange(10)
dummy_dataset = keras.preprocessing.timeseries_dataset_from_array(
data=int_sequence[:-3],
targets=int_sequence[3:],
sequence_length=3,
batch_size=2,
)
for inputs, targets in dummy_dataset:
for i in range(inputs.shape[0]):
print([int(x) for x in inputs[i]], int(targets[i]))
# + [markdown] colab_type="text"
# **Instantiating Datasets for training, validation, and testing.**
# + colab_type="code"
sampling_rate = 6
sequence_length = 120
delay = sampling_rate * (sequence_length + 24 - 1)
batch_size = 256
train_dataset = keras.preprocessing.timeseries_dataset_from_array(
raw_data[:-delay],
targets=temperature[delay:],
sampling_rate=sampling_rate,
sequence_length=sequence_length,
shuffle=True,
batch_size=batch_size,
start_index=0,
end_index=num_train_samples)
val_dataset = keras.preprocessing.timeseries_dataset_from_array(
raw_data[:-delay],
targets=temperature[delay:],
sampling_rate=sampling_rate,
sequence_length=sequence_length,
shuffle=True,
batch_size=batch_size,
start_index=num_train_samples,
end_index=num_train_samples + num_val_samples)
test_dataset = keras.preprocessing.timeseries_dataset_from_array(
raw_data[:-delay],
targets=temperature[delay:],
sampling_rate=sampling_rate,
sequence_length=sequence_length,
shuffle=True,
batch_size=batch_size,
start_index=num_train_samples + num_val_samples)
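As a sanity check on the `delay` arithmetic above: with one raw sample every 10 minutes, `sampling_rate=6` yields hourly steps, and the target temperature sits 24 hours past the last sample of each input window:

```python
sampling_rate = 6          # one raw sample every 10 minutes -> hourly steps
sequence_length = 120      # 120 hourly samples = 5 days of inputs
delay = sampling_rate * (sequence_length + 24 - 1)

last_input_step = sampling_rate * (sequence_length - 1)  # raw offset of the last sample
gap = delay - last_input_step                            # raw steps from last sample to target
print(delay, gap, gap * 10 / 60)  # gap of 144 raw steps = 24 hours
```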
# + [markdown] colab_type="text"
# **Inspecting the output of one of our Datasets.**
# + colab_type="code"
for samples, targets in train_dataset:
print("samples shape:", samples.shape)
print("targets shape:", targets.shape)
break
# + [markdown] colab_type="text"
# ### A common-sense, non-machine-learning baseline
# + [markdown] colab_type="text"
# **Computing the common-sense baseline MAE**
# + colab_type="code"
def evaluate_naive_method(dataset):
total_abs_err = 0.
samples_seen = 0
for samples, targets in dataset:
preds = samples[:, -1, 1] * std[1] + mean[1]
total_abs_err += np.sum(np.abs(preds - targets))
samples_seen += samples.shape[0]
return total_abs_err / samples_seen
print(f"Validation MAE: {evaluate_naive_method(val_dataset):.2f}")
print(f"Test MAE: {evaluate_naive_method(test_dataset):.2f}")
# + [markdown] colab_type="text"
# ### Let's try a basic machine learning model
# + [markdown] colab_type="text"
# **Training and evaluating a densely connected model**
# + colab_type="code"
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.Flatten()(inputs)
x = layers.Dense(16, activation="relu")(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
callbacks = [
keras.callbacks.ModelCheckpoint("jena_dense.keras",
save_best_only=True)
]
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
history = model.fit(train_dataset,
epochs=10,
validation_data=val_dataset,
callbacks=callbacks)
model = keras.models.load_model("jena_dense.keras")
print(f"Test MAE: {model.evaluate(test_dataset)[1]:.2f}")
# + [markdown] colab_type="text"
# **Plotting results**
# + colab_type="code"
import matplotlib.pyplot as plt
loss = history.history["mae"]
val_loss = history.history["val_mae"]
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, "bo", label="Training MAE")
plt.plot(epochs, val_loss, "b", label="Validation MAE")
plt.title("Training and validation MAE")
plt.legend()
plt.show()
# + [markdown] colab_type="text"
# ### Let's try a 1D convolutional model
# + colab_type="code"
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.Conv1D(8, 24, activation="relu")(inputs)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(8, 12, activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(8, 6, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
callbacks = [
keras.callbacks.ModelCheckpoint("jena_conv.keras",
save_best_only=True)
]
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
history = model.fit(train_dataset,
epochs=10,
validation_data=val_dataset,
callbacks=callbacks)
model = keras.models.load_model("jena_conv.keras")
print(f"Test MAE: {model.evaluate(test_dataset)[1]:.2f}")
# + [markdown] colab_type="text"
# ### A first recurrent baseline
# + [markdown] colab_type="text"
# **A simple LSTM-based model**
# + colab_type="code"
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.LSTM(16)(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
callbacks = [
keras.callbacks.ModelCheckpoint("jena_lstm.keras",
save_best_only=True)
]
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
history = model.fit(train_dataset,
epochs=10,
validation_data=val_dataset,
callbacks=callbacks)
model = keras.models.load_model("jena_lstm.keras")
print(f"Test MAE: {model.evaluate(test_dataset)[1]:.2f}")
# + [markdown] colab_type="text"
# ## Understanding recurrent neural networks
# + [markdown] colab_type="text"
# **NumPy implementation of a simple RNN**
# + colab_type="code"
import numpy as np
timesteps = 100
input_features = 32
output_features = 64
inputs = np.random.random((timesteps, input_features))
state_t = np.zeros((output_features,))
W = np.random.random((output_features, input_features))
U = np.random.random((output_features, output_features))
b = np.random.random((output_features,))
successive_outputs = []
for input_t in inputs:
output_t = np.tanh(np.dot(W, input_t) + np.dot(U, state_t) + b)
successive_outputs.append(output_t)
state_t = output_t
final_output_sequence = np.stack(successive_outputs, axis=0)  # shape (timesteps, output_features)
# + [markdown] colab_type="text"
# ### A recurrent layer in Keras
# + [markdown] colab_type="text"
# **An RNN layer that can process sequences of any length**
# + colab_type="code"
num_features = 14
inputs = keras.Input(shape=(None, num_features))
outputs = layers.SimpleRNN(16)(inputs)
# + [markdown] colab_type="text"
# **An RNN layer that returns only its last output step**
# + colab_type="code"
num_features = 14
steps = 120
inputs = keras.Input(shape=(steps, num_features))
outputs = layers.SimpleRNN(16, return_sequences=False)(inputs)
print(outputs.shape)
# + [markdown] colab_type="text"
# **An RNN layer that returns its full output sequence**
# + colab_type="code"
num_features = 14
steps = 120
inputs = keras.Input(shape=(steps, num_features))
outputs = layers.SimpleRNN(16, return_sequences=True)(inputs)
print(outputs.shape)
# + [markdown] colab_type="text"
# **Stacking RNN layers**
# + colab_type="code"
inputs = keras.Input(shape=(steps, num_features))
x = layers.SimpleRNN(16, return_sequences=True)(inputs)
x = layers.SimpleRNN(16, return_sequences=True)(x)
outputs = layers.SimpleRNN(16)(x)
# + [markdown] colab_type="text"
# ## Advanced use of recurrent neural networks
# + [markdown] colab_type="text"
# ### Using recurrent dropout to fight overfitting
# + [markdown] colab_type="text"
# **Training and evaluating a dropout-regularized LSTM**
# + colab_type="code"
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.LSTM(32, recurrent_dropout=0.25)(inputs)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
callbacks = [
keras.callbacks.ModelCheckpoint("jena_lstm_dropout.keras",
save_best_only=True)
]
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
history = model.fit(train_dataset,
epochs=50,
validation_data=val_dataset,
callbacks=callbacks)
# + colab_type="code"
inputs = keras.Input(shape=(sequence_length, num_features))
x = layers.LSTM(32, recurrent_dropout=0.2, unroll=True)(inputs)
# + [markdown] colab_type="text"
# ### Stacking recurrent layers
# + [markdown] colab_type="text"
# **Training and evaluating a dropout-regularized, stacked GRU model**
# + colab_type="code"
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.GRU(32, recurrent_dropout=0.5, return_sequences=True)(inputs)
x = layers.GRU(32, recurrent_dropout=0.5)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
callbacks = [
keras.callbacks.ModelCheckpoint("jena_stacked_gru_dropout.keras",
save_best_only=True)
]
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
history = model.fit(train_dataset,
epochs=50,
validation_data=val_dataset,
callbacks=callbacks)
model = keras.models.load_model("jena_stacked_gru_dropout.keras")
print(f"Test MAE: {model.evaluate(test_dataset)[1]:.2f}")
# + [markdown] colab_type="text"
# ### Using bidirectional RNNs
# + [markdown] colab_type="text"
# **Training and evaluating a bidirectional LSTM**
# + colab_type="code"
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.Bidirectional(layers.LSTM(16))(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
history = model.fit(train_dataset,
epochs=10,
validation_data=val_dataset)
# + [markdown] colab_type="text"
# ### Going even further
# + [markdown] colab_type="text"
# ## Chapter summary
# Source notebook: chapter10_dl-for-timeseries.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
# +
from __future__ import print_function
import tensorflow as tf
from keras.layers import Flatten, Dense, Reshape
from keras.layers import Input,InputLayer, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout
from keras.models import Sequential,Model
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint,LearningRateScheduler
from keras.callbacks import ModelCheckpoint
from keras import losses
from keras.datasets import mnist
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
from keras import models
from keras import layers
import keras
from sklearn.utils import shuffle
from sklearn import preprocessing
import scipy.io
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
import sys
from sklearn.manifold import TSNE
from sklearn.utils import shuffle
from sklearn import preprocessing
import scipy.io
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
from tensorflow import keras
from keras.layers import Conv2D,MaxPool2D,Dense,Dropout,Flatten
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras import regularizers
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras import optimizers
import keras
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, Flatten
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.datasets import cifar10
from keras import losses
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
from sklearn.metrics import confusion_matrix
# +
def tf_print(op, tensors, message=""):
def print_message(x):
sys.stdout.write("\n DEBUG: " + message + " %s\n" % x)
return x
prints = [tf.compat.v1.py_func(print_message, [tensor], tensor.dtype) for tensor in tensors]
with tf.control_dependencies(prints):
op = tf.identity(op)
return op
def tf_print_2(tensor, tensors):
def print_message(x):
message = ""
sys.stdout.write("DEBUG: " + message + " %s" % x)
return x
prints = [tf.compat.v1.py_func(print_message, [tensors], tensor.dtype)]
with tf.control_dependencies(prints):
tensor = tf.identity(tensor)
return tensor
def pairwise_dist(A):
# Taken from https://stackoverflow.com/questions/37009647/compute-pairwise-distance-in-a-batch-without-replicating-tensor-in-tensorflow
r = tf.reduce_sum(A*A, 1)
r = tf.reshape(r, [-1, 1])
D = tf.maximum(r - 2*tf.matmul(A, tf.transpose(A)) + tf.transpose(r), 1e-7)
D = tf.sqrt(D)
return D
def dist_corr(X, Y):
n = tf.cast(tf.shape(X)[0], tf.float32)
a = pairwise_dist(X)
b = pairwise_dist(Y)
A = a - tf.reduce_mean(a, axis=1) - tf.expand_dims(tf.reduce_mean(a, axis=0), axis=1) + tf.reduce_mean(a)
B = b - tf.reduce_mean(b, axis=1) - tf.expand_dims(tf.reduce_mean(b, axis=0), axis=1) + tf.reduce_mean(b)
dCovXY = tf.sqrt(tf.reduce_sum(A*B) / (n ** 2))
dVarXX = tf.sqrt(tf.reduce_sum(A*A) / (n ** 2))
dVarYY = tf.sqrt(tf.reduce_sum(B*B) / (n ** 2))
dCorXY = dCovXY / tf.sqrt(dVarXX * dVarYY)
return dCorXY
def custom_loss1(y_true,y_pred):
dcor = dist_corr(y_true,y_pred)
return dcor
def custom_loss2(y_true,y_pred):
recon_loss = losses.categorical_crossentropy(y_true, y_pred)
return recon_loss
# -
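The `split_layer` loss above is the empirical distance correlation between the split-layer activations and the raw inputs. A NumPy sketch of the same statistic, written with textbook double-centering (the TF version performs the equivalent centering via broadcasting), can be handy for checking values offline:

```python
import numpy as np

def pdist_np(A):
    # Pairwise Euclidean distances, mirroring pairwise_dist above.
    r = np.sum(A * A, axis=1, keepdims=True)
    return np.sqrt(np.maximum(r - 2 * A @ A.T + r.T, 1e-7))

def dist_corr_np(X, Y):
    n = float(len(X))
    a, b = pdist_np(X), pdist_np(Y)
    # Double-center each distance matrix (subtract row means, column means,
    # add back the grand mean).
    A = a - a.mean(axis=1, keepdims=True) - a.mean(axis=0, keepdims=True) + a.mean()
    B = b - b.mean(axis=1, keepdims=True) - b.mean(axis=0, keepdims=True) + b.mean()
    dcov = np.sqrt(np.sum(A * B) / n ** 2)
    dvar_x = np.sqrt(np.sum(A * A) / n ** 2)
    dvar_y = np.sqrt(np.sum(B * B) / n ** 2)
    return dcov / np.sqrt(dvar_x * dvar_y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
print(dist_corr_np(X, X))                          # identical inputs -> 1.0
print(dist_corr_np(X, rng.normal(size=(64, 8))))   # independent -> well below 1
```

Unlike Pearson correlation, distance correlation is zero (in the population) only under full statistical independence, which is why it works as a privacy penalty on the split-layer activations.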
alpha1, alpha2 = 1000., 0.1
stage_num, block_num = 2, 1
experiment_name = "cifar10_{}_{}_{}_{}".format(alpha1, alpha2, stage_num, block_num)
# +
# Training parameters
batch_size = 128 # orig paper trained all networks with batch_size=128
epochs = 10
data_augmentation = False
num_classes = 10
# Subtracting pixel mean improves accuracy
subtract_pixel_mean = False
n = 3
# Computed depth from supplied model parameter n
depth = n * 9 + 2
# Load the CIFAR10 data.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_trainRaw = x_train
x_testRaw = x_test
# Input image dimensions.
input_shape = x_train.shape[1:]
# Normalize data.
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
# If subtract pixel mean is enabled
if subtract_pixel_mean:
x_train_mean = np.mean(x_train, axis=0)
x_train -= x_train_mean
x_test -= x_train_mean
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
def lr_schedule(epoch):
"""Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
"""
lr = 1e-4
if epoch > 180:
lr = 1e-6
elif epoch > 160:
lr = 7e-6
elif epoch > 120:
lr = 2e-5
elif epoch > 80:
lr = 8e-5
return lr
def resnet_layer(inputs,
num_filters=16,
kernel_size=3,
strides=1,
activation='relu',
batch_normalization=True,
conv_first=True):
"""2D Convolution-Batch Normalization-Activation stack builder
# Arguments
inputs (tensor): input tensor from input image or previous layer
num_filters (int): Conv2D number of filters
kernel_size (int): Conv2D square kernel dimensions
strides (int): Conv2D square stride dimensions
activation (string): activation name
batch_normalization (bool): whether to include batch normalization
conv_first (bool): conv-bn-activation (True) or
bn-activation-conv (False)
# Returns
x (tensor): tensor as input to the next layer
"""
conv = Conv2D(num_filters,
kernel_size=kernel_size,
strides=strides,
padding='same',
kernel_initializer='he_normal',
kernel_regularizer=l2(1e-4))
x = inputs
if conv_first:
x = conv(x)
if batch_normalization:
x = BatchNormalization()(x)
if activation is not None:
x = Activation(activation)(x)
else:
if batch_normalization:
x = BatchNormalization()(x)
if activation is not None:
x = Activation(activation)(x)
x = conv(x)
return x
def resnet_v2(input_shape, depth, num_classes=10):
"""ResNet Version 2 Model builder [b]
Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D, also known as
bottleneck layers.
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filter maps is
doubled. Within each stage, the layers have the same number of filters and
the same filter map sizes.
Feature map sizes:
conv1 : 32x32, 16
stage 0: 32x32, 64
stage 1: 16x16, 128
stage 2: 8x8, 256
# Arguments
input_shape (tensor): shape of input image tensor
depth (int): number of core convolutional layers
num_classes (int): number of classes (CIFAR10 has 10)
# Returns
model (Model): Keras model instance
"""
if (depth - 2) % 9 != 0:
raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])')
# Start model definition.
num_filters_in = 16
num_res_blocks = int((depth - 2) / 9)
inputs = Input(shape=input_shape)
# v2 performs Conv2D with BN-ReLU on input before splitting into 2 paths
x = resnet_layer(inputs=inputs,
num_filters=num_filters_in,
conv_first=True)
# Instantiate the stack of residual units
for stage in range(3):
for res_block in range(num_res_blocks):
activation = 'relu'
batch_normalization = True
strides = 1
if stage == 0:
num_filters_out = num_filters_in * 4
if res_block == 0: # first layer and first stage
activation = None
batch_normalization = False
else:
num_filters_out = num_filters_in * 2
if res_block == 0: # first layer but not first stage
strides = 2 # downsample
# bottleneck residual unit
y = resnet_layer(inputs=x,
num_filters=num_filters_in,
kernel_size=1,
strides=strides,
activation=activation,
batch_normalization=batch_normalization,
conv_first=False)
y = resnet_layer(inputs=y,
num_filters=num_filters_in,
conv_first=False)
y = resnet_layer(inputs=y,
num_filters=num_filters_out,
kernel_size=1,
conv_first=False)
if res_block == 0:
# linear projection residual shortcut connection to match
# changed dims
x = resnet_layer(inputs=x,
num_filters=num_filters_out,
kernel_size=1,
strides=strides,
activation=None,
batch_normalization=False)
x = keras.layers.add([x, y])
if stage_num == stage and block_num == res_block:
before_flatten_dims = x.get_shape().as_list()[1:]
split_layer = Flatten(name='split_layer')
split_layer_output = split_layer(x)
x = Reshape(before_flatten_dims)(split_layer_output)
print(before_flatten_dims)
num_filters_in = num_filters_out
# Add classifier on top.
# v2 has BN-ReLU before Pooling
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = AveragePooling2D(pool_size=8)(x)
x = Flatten()(x)
outputs = Dense(num_classes,
activation='softmax',
kernel_initializer='he_normal',
name='softmax')(x)
# Instantiate model.
model = Model(inputs=inputs, outputs=[split_layer_output, outputs])
#model = Model(inputs=inputs, outputs=[outputs])
return model
model = resnet_v2(input_shape=input_shape, depth=depth)
# +
model.compile(loss={'split_layer': custom_loss1, 'softmax': custom_loss2},
loss_weights={'split_layer': alpha1, 'softmax': alpha2},
optimizer=Adam(lr=lr_schedule(0)),
metrics={'softmax':'accuracy'})
#model.compile(loss=custom_loss2, optimizer=Adam(lr=lr_schedule(0)),
# metrics=['accuracy'])
#model.summary()
#print(model_type)
# Prepare model model saving directory.
save_dir = os.path.join(os.getcwd(), '../saved_models')
model_name = '%s_model.{epoch:03d}.h5' % experiment_name
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
filepath = os.path.join(save_dir, model_name)
# Prepare callbacks for model saving and for learning rate adjustment.
checkpoint = ModelCheckpoint(filepath=filepath,
monitor='val_loss',
save_best_only=True)
lr_scheduler = LearningRateScheduler(lr_schedule)
lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1),
cooldown=0,
patience=5,
min_lr=1e-5)
callbacks = [checkpoint, lr_reducer, lr_scheduler]
# Run training, with or without data augmentation.
x_train_flattened = x_train.reshape(50000, 32*32*3)
x_test_flattened = x_test.reshape(10000, 32*32*3)
history = model.fit(x_train, [x_train_flattened, y_train],
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, [x_test_flattened, y_test]),
shuffle=True,
callbacks=callbacks,
verbose=2)
# -
history2 = model.fit(x_train, [x_train_flattened, y_train],
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, [x_test_flattened, y_test]),
shuffle=True,
callbacks=callbacks,
verbose=2)
x_test_encoded = model.predict(x_test)
# +
def plot_train_history(history):
fig, axes = plt.subplots(1, 3, figsize=(25, 6))
axes[0].plot(history.history['split_layer_loss'], label='Split Layer loss')
axes[0].plot(history.history['val_split_layer_loss'], label='Split Val loss')
axes[0].set_xlabel('Epochs')
axes[0].legend()
axes[1].plot(history.history['softmax_loss'], label='Softmax loss')
axes[1].plot(history.history['val_softmax_loss'], label='Validation softmax loss')
axes[1].set_xlabel('Epochs')
axes[1].legend()
axes[2].plot(history.history['softmax_accuracy'], label='Softmax acc')
axes[2].plot(history.history['val_softmax_accuracy'], label='Validation softmax acc')
axes[2].set_xlabel('Epochs')
axes[2].legend()
plot_train_history(history)
# -
experiment_name
# +
#test raw vs smash
n = 20
plt.figure(figsize=(20, 4))
for i in range(10,20):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i])
#plt.imshow((x_test[i] * 255).astype(np.int64))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(x_test_encoded[0][i].reshape(16, 16, 3))
#plt.imshow((x_test_encoded[0][i].reshape(32, 32, 3) * 255).astype(np.int64))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
out_dir = '/tf/datasets/{}/output/'.format(experiment_name)
inp_dir = '/tf/datasets/{}/input/'.format(experiment_name)
os.makedirs(out_dir, exist_ok=True)
os.makedirs(inp_dir, exist_ok=True)
import matplotlib
for i in range(10000):
#np.save('rawCifar10_baseline/'+str(i), x_test[i],allow_pickle = True)
#np.save('noSmashCifar10_baseline/'+str(i), x_test_encoded[0][i].reshape(32, 32, 3),allow_pickle = True)
np.save('{}/{}'.format(out_dir, i), x_test_encoded[0][i].reshape(8, 8, 256),allow_pickle = True)
np.save('{}/{}'.format(inp_dir, i), x_testRaw[i].reshape(32, 32, 3),allow_pickle = True)
#matplotlib.image.imsave('rawCifar10/'+str(i)+'.png', x_test[i])
#matplotlib.image.imsave('smashCifar10/'+str(i)+'.png', x_test_encoded[0][i].reshape(32, 32, 3))
import pickle
with open('/tf/datasets/{}/trainHistoryDict'.format(experiment_name), 'wb') as file_pi:
pickle.dump(history.history, file_pi)
with open('/tf/datasets/{}/trainHistoryDict2'.format(experiment_name), 'wb') as file_pi:
pickle.dump(history2.history, file_pi)
# +
#train raw vs smash
x_train_encoded = model.predict(x_train)  # needed below; was never defined
n = 10
plt.figure(figsize=(20, 4))
for i in range(1,n):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_train[i])
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n)
plt.imshow(x_train_encoded[0][i].reshape(32, 32, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# + active=""
#
# Source notebook: noPeekCifar10 .ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/keithferns98/Basics-ComputerVision/blob/main/tf2/training_for_card_detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="bO4YLZGN4uk2" outputId="df134352-569c-43a0-ba0a-80eab1d26e7d"
# !pip install -U --pre tensorflow=="2.*"
# !pip install tf_slim
# + colab={"base_uri": "https://localhost:8080/"} id="oVtMmkaOjF9R" outputId="2eeb7849-8a9a-493e-9440-f7d565997603"
# !pip install opencv-python
# + colab={"base_uri": "https://localhost:8080/"} id="K7NNonjKHZ9v" outputId="c09c4bf1-fd12-4a7b-b856-11b7f7ff67e0"
# !nvidia-smi
# + colab={"base_uri": "https://localhost:8080/"} id="YQVlMbLO5Gd-" outputId="b94db2b8-d65f-40ba-8eb6-7bb4ae1ccab8"
# %cd /content/drive/MyDrive/TF2
# + colab={"base_uri": "https://localhost:8080/"} id="xKK-yIh45bm8" outputId="7ee02f84-5ce8-4ef1-a509-4ca4b106e830"
# !pip install pycocotools
# + id="Hmbt3egG5nIX"
import pycocotools
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="H5kLk_C164H4" outputId="05652905-9718-4b8b-b636-4319e2858db5"
pwd
# + id="B0SQtQpC6T-m"
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
# !git clone --depth 1 https://github.com/tensorflow/models
# + id="hLg4_gvB67Pe" colab={"base_uri": "https://localhost:8080/"} outputId="d87cae4f-dfad-407d-f9c9-5b7c7c5e69fc" language="bash"
# cd models/research/
#
# protoc object_detection/protos/*.proto --python_out=.
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="dIp9_2c7JDrp" outputId="aba36c66-0177-41eb-e807-e50a95e83bda"
pwd
# + colab={"base_uri": "https://localhost:8080/"} id="C13j_SOLSBJF" outputId="4b580594-e766-4b17-9445-75d7dfde0b4f"
# %cd ..
# + colab={"base_uri": "https://localhost:8080/"} id="s5PlVz4G-Ssg" outputId="1cd10069-3323-44e9-a5c8-b3d7cc9672cc" language="bash"
# cd models/research/object_detection/packages/tf2
# pip install .
# + colab={"base_uri": "https://localhost:8080/"} id="mXJ5qsTMI-c4" outputId="45bd054b-f324-4ec0-d27a-aab9f4245fa8"
# %cd models/research
# + colab={"base_uri": "https://localhost:8080/"} id="MZIrcqYZ1cM4" outputId="8f8519f4-d626-41e8-9859-315de9f41420"
# !python xml_to_csv.py
# + colab={"base_uri": "https://localhost:8080/"} id="6dymh2Pg1saY" outputId="91655dbb-1de7-4e02-b8a7-d90822fceb53"
#for training data
# !python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
#for validation data
# !python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
# + id="Dh3zcaIlP9ER" colab={"base_uri": "https://localhost:8080/"} outputId="1a9cee62-4324-440c-80b3-bc0f2d090413"
# !wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
# + colab={"base_uri": "https://localhost:8080/"} id="5pbke4EW3VRP" outputId="369ce775-91c0-46d9-da03-4991fcb7e9e8"
# !tar -xvf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
# + id="O_qTh2gU3nvD"
# !mv ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/pipeline.config /content/drive/MyDrive/TF2/models/research/training
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="2wPjOuyMkpBR" outputId="dfe160c8-762b-4a59-abc8-c61d7fc8dd09"
pwd
# + colab={"base_uri": "https://localhost:8080/"} id="iZ0AX_wbk-yt" outputId="cdc348e0-85b9-47aa-af51-d773126709f3"
# %cd /content/drive/MyDrive/TF2/models/research
# + id="lFwSJPuM5IwZ" colab={"base_uri": "https://localhost:8080/"} outputId="924a69c3-0684-473f-8ff4-46bb4d562cb5"
# !python model_main_tf2.py --model_dir=training/my_model --pipeline_config_path=training/pipeline.config
# + colab={"base_uri": "https://localhost:8080/"} id="dTuIlePnkk_i" outputId="30b2b060-ff42-4b28-8fdd-d0f3310c8100"
# !python model_main_tf2.py --model_dir=models/my_model --pipeline_config_path=training/pipeline.config
# + colab={"base_uri": "https://localhost:8080/"} id="8kRD39u3psKD" outputId="96462ad2-9273-43ec-8be4-1183ee3777e2"
# !pip uninstall opencv-python
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="Y7QuCAZ1P5_j" outputId="a060e6f3-fbf2-4d50-ea7b-461d7adc549a"
pwd
# + colab={"base_uri": "https://localhost:8080/", "height": 510} id="XiQ1UJUsqo89" outputId="75423859-b5ed-44a3-a560-825f095d3da2"
# %load_ext tensorboard
# %tensorboard --logdir=models/my_model
# + colab={"base_uri": "https://localhost:8080/"} id="_lsssu1uPMK2" outputId="36d28ae0-b74c-44b3-a0a6-c135b504e43c"
pip install tensorboard
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="JYTxYvbARPl1" outputId="0aca963b-1846-47fc-d8e6-1d3325329af6"
pwd
# + colab={"base_uri": "https://localhost:8080/"} id="3NM63CL8PzE4" outputId="0a986124-cb02-497d-d3c8-8b929d048286"
# !python exporter_main_v2.py --input_type image_tensor --pipeline_config_path training/pipeline.config --trained_checkpoint_dir models/my_model --output_directory models/export_model
# + id="xyIDmg-jR1U5"
| tf2/training_for_card_detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # import lib
import pandas
import psycopg2
import configparser
# # connect to db
# +
config = configparser.ConfigParser()
config.read('config.ini')
host=config['myaws']['host']
db=config['myaws']['db']
user=config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host=host,
                        user=user,
                        password=pwd,
                        dbname=db)
# -
cur=conn.cursor()
# # q1
df_student=pandas.read_sql_query('select * from gp22.student',conn)
df_student[:]
# # q2
sql_statement = """
select gp22.professor.p_name,
gp22.course.c_name
from gp22.professor
inner join gp22.course
on gp22.professor.p_email = gp22.course.p_email
"""
df_student=pandas.read_sql_query(sql_statement,conn)
df_student[:]
# # q3
sql_statement= """
select c_num,
count(c_num) as enrolled
from gp22.enroll_list
group by c_num
"""
# +
df_price=pandas.read_sql_query(sql_statement,conn)
df_price.plot.bar(x='c_num',y='enrolled')
# -
# # q4
sql_statement= """
select gp22.professor.p_name,
count(gp22.course.c_name) as teaching_number
from gp22.professor
inner join gp22.course
on gp22.professor.p_email = gp22.course.p_email
group by gp22.professor.p_name
"""
# +
df_price=pandas.read_sql_query(sql_statement,conn)
df_price.plot.bar(x='p_name',y='teaching_number')
# -
# # q5
# +
sql_statement = """
insert into gp22.professor(p_email,p_name,p_office)
values('{}','{}','{}')
""".format('<EMAIL>','<NAME>', 'ISAT 1123')
print(sql_statement)
# -
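The insert above builds SQL with `str.format`, which is vulnerable to SQL injection. psycopg2 also supports parameter binding with `%s` placeholders; a hypothetical equivalent (the values and the commented-out `execute` call are illustrative only):

```python
# Parameterized version of the insert: psycopg2 binds the values server-side,
# so quoting and escaping are handled safely. Values below are hypothetical.
sql_param = """
insert into gp22.professor(p_email,p_name,p_office)
values(%s, %s, %s)
"""
params = ('new.prof@example.edu', 'New Prof', 'ISAT 1123')
# cur.execute(sql_param, params)  # requires the live connection from above
```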
cur.execute(sql_statement)
conn.commit()
df_student=pandas.read_sql_query('select * from gp22.professor',conn)
df_student[:]
# +
sql_statement = """
insert into gp22.course(c_num,c_name,p_email)
values('{}','{}','{}')
""".format('IA 378','IA New', '<EMAIL>')
print(sql_statement)
# -
cur.execute(sql_statement)
conn.commit()
df_student=pandas.read_sql_query('select * from gp22.course',conn)
df_student[:]
sql_statement = """
update gp22.course
set p_email = '<EMAIL>'
where p_email = '<EMAIL>'
"""
print(sql_statement)
cur.execute(sql_statement)
conn.commit()
df_student=pandas.read_sql_query('select * from gp22.course',conn)
df_student[:]
sql_statement = """
delete from gp22.professor
where p_email = '<EMAIL>'
"""
print(sql_statement)
cur.execute(sql_statement)
conn.commit()
df_student=pandas.read_sql_query('select * from gp22.professor',conn)
df_student[:]
cur.close()
conn.close()
| Lab 4.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: .NET (C#)
// language: C#
// name: .net-csharp
// ---
// + dotnet_interactive={"language": "csharp"}
#r "nuget:OpenCvSharp4"
#r "nuget:OpenCvSharp4.Extensions"
#r "nuget:OpenCvSharp4.runtime.win"
// + dotnet_interactive={"language": "csharp"}
using System;
using System.IO;
using OpenCvSharp;
// + dotnet_interactive={"language": "csharp"}
var cascade = new CascadeClassifier(@"../Data/haarcascade_frontalface_alt.xml");
var nestedCascade = new CascadeClassifier(@"../Data/haarcascade_eye.xml");
var color = Scalar.FromRgb(0, 255, 0);
using(VideoCapture capture = new VideoCapture(0))
using(Window window = new Window("Webcam"))
using(Mat srcImage = new Mat())
using(var grayImage = new Mat())
using(var detectedFaceGrayImage = new Mat())
{
while (capture.IsOpened())
{
capture.Read(srcImage);
Cv2.CvtColor(srcImage, grayImage, ColorConversionCodes.BGR2GRAY);
Cv2.EqualizeHist(grayImage, grayImage);
var faces = cascade.DetectMultiScale(
image: grayImage,
minSize: new Size(60, 60)
);
foreach (var faceRect in faces)
{
using(var detectedFaceImage = new Mat(srcImage, faceRect))
{
Cv2.Rectangle(srcImage, faceRect, color, 3);
Cv2.CvtColor(detectedFaceImage, detectedFaceGrayImage, ColorConversionCodes.BGR2GRAY);
var nestedObjects = nestedCascade.DetectMultiScale(
image: detectedFaceGrayImage,
minSize: new Size(30, 30)
);
foreach (var nestedObject in nestedObjects)
{
var center = new Point
{
X = (int)(Math.Round(nestedObject.X + nestedObject.Width * 0.5, MidpointRounding.ToEven) + faceRect.Left),
Y = (int)(Math.Round(nestedObject.Y + nestedObject.Height * 0.5, MidpointRounding.ToEven) + faceRect.Top)
};
var radius = Math.Round((nestedObject.Width + nestedObject.Height) * 0.25, MidpointRounding.ToEven);
Cv2.Circle(srcImage, center, (int)radius, color, thickness: 2);
}
}
}
window.ShowImage(srcImage);
int key = Cv2.WaitKey(1);
if (key == 27)
{
break;
}
}
}
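The center/radius arithmetic in the inner loop above can be sketched in Python, to show how an eye rectangle detected inside a face crop maps back to full-frame coordinates (the function name and the sample values are mine):

```python
def eye_circle(face_left, face_top, ex, ey, ew, eh):
    """Map an eye rect (ex, ey, ew, eh), given in face-crop coordinates,
    to a circle center and radius in full-frame coordinates.

    Python's round() uses banker's rounding, matching the C#
    MidpointRounding.ToEven used above.
    """
    cx = int(round(ex + ew * 0.5)) + face_left   # offset by the face rect origin
    cy = int(round(ey + eh * 0.5)) + face_top
    radius = int(round((ew + eh) * 0.25))        # quarter of the mean side length
    return (cx, cy), radius
```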
| Notebooks/03-face-detection-webcam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
img = cv2.imread("../imori.jpg")
H, W, ch = img.shape
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
# +
scale = 1.5
H_big, W_big = int(H * scale), int(W * scale)
output_img = np.zeros((H_big, W_big, ch))
for i in range(H_big):
for j in range(W_big):
output_img[i, j, :] = img[int(i/scale), int(j/scale), :]
output_img = output_img.astype("uint8")
# +
plt.imshow(cv2.cvtColor(output_img, cv2.COLOR_BGR2RGB))
plt.show()
print(H_big, W_big, "vs", H, W)
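The double loop above can be vectorized with NumPy index maps; a sketch of the same nearest-neighbor mapping:

```python
import numpy as np

def resize_nearest(img, scale):
    """Nearest-neighbor resize, vectorized: precompute the source row/column
    for every output row/column, then use fancy indexing."""
    H, W = img.shape[:2]
    rows = (np.arange(int(H * scale)) / scale).astype(int)  # source row per output row
    cols = (np.arange(int(W * scale)) / scale).astype(int)  # source col per output col
    return img[rows][:, cols]

# Tiny demonstration: each pixel of a 2x2 image becomes a 2x2 block.
small = np.array([[1, 2], [3, 4]], dtype=np.uint8)
big = resize_nearest(small, 2)
```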
| Question_21_30/solutions_py/solution_025.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generate test data
import numpy as np
import matplotlib.pyplot as plt
# ### Raw data (values)
# +
# generate data
np.random.seed(42)
n = np.random.normal(40, 2.5, 1000)
np.random.seed(2409)
g = np.random.gamma(10, 7, 1000)
# save data
#np.savetxt("g/test_data_g.csv", g, delimiter=",")
#np.savetxt("n/test_data_n.csv", n, delimiter=",")
# plot
plt.plot(n)
plt.plot(g)
plt.show()
# -
# ### Discretized data (different bins for n & g)
# +
# binning method
bins = 'fd'
# get the bins
bins_n = np.histogram_bin_edges(n, bins)
bins_g = np.histogram_bin_edges(g, bins)
# calculate the empirical probabilities
count_n = np.histogram(n, bins=bins_n)[0]
count_g = np.histogram(g, bins=bins_g)[0]
# save histcounts
np.savetxt("n/test_data_n_histcounts_fd.csv", count_n, delimiter=",")
np.savetxt("g/test_data_g_histcounts_fd.csv", count_g, delimiter=",")
# plot
plt.plot(count_n)
plt.plot(count_g)
plt.show()
# -
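The `'fd'` binning used above is the Freedman-Diaconis rule. A small sketch showing how the documented bin width, $h = 2\,\mathrm{IQR}/n^{1/3}$, relates to the edges NumPy returns (the edge-count check assumes NumPy's round-up-to-cover-the-range behavior):

```python
import numpy as np

# Reproduce the Freedman-Diaconis width by hand and compare with numpy.
rng = np.random.default_rng(0)
x = rng.normal(40, 2.5, 1000)

iqr = np.subtract(*np.percentile(x, [75, 25]))   # interquartile range
h = 2.0 * iqr * x.size ** (-1.0 / 3.0)           # FD bin width

edges = np.histogram_bin_edges(x, bins='fd')
# numpy rounds the bin count up so the bins cover the full data range,
# which makes the realized width slightly smaller than h.
n_bins = int(np.ceil((x.max() - x.min()) / h))
```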
# ### Joint probabilites (same binning for n & g)
# +
# binning method
bins = 'fd'
# get the joint bins
bins_ng = np.histogram_bin_edges([n, g], bins)
# calculate unconditioned histograms
hist_n = np.histogram(n, bins=bins_ng)[0]
hist_g = np.histogram(g, bins=bins_ng)[0]
# calculate the probabilities
p_ng = (hist_n / np.sum(hist_n)) + 1e-15
p_gn = (hist_g / np.sum(hist_g)) + 1e-15
# save
np.savetxt("n/test_data_n_pdf_same_bins_fd.csv", p_ng, delimiter=",")
np.savetxt("g/test_data_g_pdf_same_bins_fd.csv", p_gn, delimiter=",")
# plot
plt.plot(p_ng)
plt.plot(p_gn)
plt.show()
# -
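The `1e-15` offset above keeps an exact zero out of any subsequent logarithm. A minimal sketch of the kind of entropy-style computation such pdfs typically feed (using Shannon entropy here is my assumption, not something this notebook computes):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a probability vector.

    The small offset inside the log mirrors the pdfs above: it guards
    against log(0) while perturbing the result negligibly.
    """
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                            # normalize to a probability vector
    return -np.sum(p * np.log2(p + 1e-15))     # offset guards against log(0)
```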
# ### 2D Histogram counts
# +
# get the bins, n and g get their own bins
bins_joint = [bins_n, bins_g]
# get the joint histogram
joint_counts = np.histogram2d(n, g, bins_joint)[0]
# save
np.savetxt("test_data_n_g_joint_counts_fd.csv", joint_counts, delimiter=",")
| skinfo/test/test_data/generate_test_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 (LSST MAF)
# language: python
# name: lsst_maf
# ---
# # Finding fiducial magnitudes for $t_{\mbox{eff}}$
# ## Introduction
# $t_{\mbox{eff}}$ is a measure of depth, related to limiting magnitude as:
#
# \begin{align}
# t_{\mbox{eff}} & = 10^{\frac{4}{5}(m_{\mbox{lim}} - m_\circ)} \\
# m_{\mbox{lim}} & = m_\circ + 1.25 \log t_{\mbox{eff}}
# \end{align}
#
# such that the limiting magnitude varies with $t_{\mbox{eff}}$ in the same way it varies with exposure time.
#
# The constant $m_\circ$ provides a reference point, defining at what magnitude $t_{\mbox{eff}} = 1$, analogous to the constant reference magnitude in the definition of a magnitude system: accumulated $t_{\mbox{eff}}$ represents the progress made toward reaching a target coadd limiting magnitude, as measured in exposure time under constant conditions. If $m_\circ$ is the typical limiting magnitude for the instrument under real conditions, the difference between the accumulated $t_{\mbox{eff}}$ and the $t_{\mbox{eff}}$ corresponding to the target limiting magnitude is a good approximation of the total exposure time needed to attain the target limiting magnitude. If $m_\circ$ is different from that typical for the instrument, then the remaining $t_{\mbox{eff}}$ will be related to the remaining needed exposure time by a scaling factor.
#
# A reasonable fiducial magnitude for a given instrument is one for which $t_{\mbox{eff}}=1$ for an optimally pointed image under typical conditions: taken at zenith, during dark time, in typical seeing. With this fiducial magnitude, the remaining needed $t_{\mbox{eff}}$ matches the remaining exposure time if optimally scheduled, and the target limiting magnitude varies with the zenith distance at transit.
#
# `opsim` provides a function, `m5_flat_sed`, to calculate the limiting magnitude (including various throughputs, etc.) under a given set of conditions (seeing, airmass, and sky brightness) using parameters provided by system engineering.
#
# The initial section of this notebook determines the seeing from the mean log Fried parameter (according to the model described in RTN-022), evaluates the sky brightness model used by `opsim` under full dark conditions at zenith pointing (airmass=1), and uses `m5_flat_sed` to compute fiducial zero points for $t_{\mbox{eff}}$.
#
# The later section of the notebook generates magnitudes using different assumptions about the throughput, seeing, sky brightness, and airmass to trace the differences between magnitudes derived from the SRD and estimates based on the latest assumptions used by `opsim`.
# For discussion related to this notebook and the reasoning behind it, see the conversations in the `survey-strategy-team` LSSTC slack channel starting April 9, 2021 (around [here](https://lsstc.slack.com/archives/G6MP1HTDW/p1617997908069200)), through April 20th.
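The two relations above can be captured in a couple of lines; `m0 = 24.0` below is an arbitrary stand-in for the fiducial magnitude this notebook sets out to derive:

```python
import numpy as np

# Sketch of the t_eff <-> limiting-magnitude relations; m0 is a placeholder
# fiducial magnitude, not a derived value.
def teff_from_m5(m5, m0=24.0):
    return 10.0 ** (0.8 * (m5 - m0))

def m5_from_teff(teff, m0=24.0):
    return m0 + 1.25 * np.log10(teff)
```

By construction, a visit reaching the fiducial depth has `t_eff = 1`, and the two functions invert each other.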
# ## Notebook setup
# ### Logging
# Do it first so I can use logging to track how long imports take.
import logging
logging.basicConfig(format='%(asctime)s %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel('DEBUG')
logger.info("Starting")
# ### Imports
import os
import os.path
import copy
import pickle
import urllib
from os import path
from tempfile import TemporaryDirectory
logger.debug("Loading common modules")
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import sqlite3
logger.debug("Loading general astronomy modules")
import healpy
import palpy
import astropy
import astropy.coordinates
from astropy.coordinates import EarthLocation
from astropy.time import Time
import astropy.units as u
logger.debug("Loading Rubin Observatory modules")
import rubin_sim
import rubin_sim.site_models
from rubin_sim.utils import m5_flat_sed
from rubin_sim.site_models import SeeingData, SeeingModel
from rubin_sim.site_models import CloudData
from rubin_sim.skybrightness_pre import SkyModelPre
# ### Plot setup
mpl.rcParams['figure.figsize'] = (8, 5)
plt.style.use('ggplot')
np.random.seed(6563)
# ### Constants
sky_model = SkyModelPre()
site = EarthLocation.of_site('Cerro Pachon')
mean_log_r0 = -0.9163 ;# See RTN-022, table 1
mjd_sample_rate = 0.01 ;# Every 0.01 days = every 14.4 minutes
original_median_seeing500 = 0.62
data_dir = '.'
opsim_origin = 'https://lsst.ncsa.illinois.edu/sim-data/sims_featureScheduler_runs1.7/baseline/baseline_nexp2_v1.7_10yrs.db'
bands = pd.DataFrame({
'band': ('u', 'g', 'r', 'i', 'z', 'y'),
'min_wavelength': (350.0, 400.0, 552.0, 691.0, 818.0, 949.0),
'max_wavelength': (400.0, 552.0, 691.2, 818.0, 922.0, 1060.0)}
).set_index('band', drop=False)
bands
sun_cache_fname = 'sun_cache.pickle'
moon_cache_fname = 'moon_cache.pickle'
sky_cache_fname = 'sky_cache.pickle'
sky_x12_cache_fname = 'sky_x12_cache.pickle'
visit_cache = 'opsim_visits.h5'
# ### Retrieving data
if path.isfile(opsim_origin):
    opsim_fname = opsim_origin
elif 'opsim_fname' not in locals():
    # Check for opsim_fname in locals so that if we rerun the notebook without restarting the kernel
    # it will not re-download the data
    opsim_data_dir = TemporaryDirectory(dir=data_dir)
    opsim_fname = path.join(opsim_data_dir.name, opsim_origin.split('/')[-1])
    logger.info("Downloading %s to %s", opsim_origin, opsim_fname)
    opsim_db_file = urllib.request.urlretrieve(opsim_origin, opsim_fname)
    logger.info("Download complete")
    print(f"Opsim data file downloaded to\n\t{opsim_fname}\nwhich will be deleted when the notebook is restarted or closed.")
opsim_name = os.path.splitext(os.path.split(opsim_fname)[-1])[0]
# ### Configuration
pd.set_option('display.float_format', lambda x: '%.2f' % x)
# ## Seeing
# The seeing model of RTN-022 provides a mean log r0 (Fried parameter), which provides an average seeing. Begin by transforming that to a von Karman seeing with an outer scale of 30m:
# +
def vk_seeing(r0, outer_scale=30.0, wavelength=5.0e-7):
"""Calculate the seeing using a von Karman model.
See Tokovinin 2002PASP..114.1156T
Args:
r0: the Fried parameter, in meters
outer_scale: the von Karman outer scale, in meters
wavelength: the wavelength of light, in meters
Returns:
The PSF FWHM, in arcseconds
"""
# Calculate the DIMM estimate of the seeing using the Kolmogorov model,
# using eqn 5 of Tokovinin 2002PASP..114.1156T
kol_seeing = 0.98*wavelength/r0
# Calculate the correction factor required to convert the Kolmogorov model
# seeing to the von Karman model seeing,
# using eqn 19 of Tokovinin 2002PASP..114.1156T
vk_correction2 = 1.0 - 2.183*np.power(r0/outer_scale, 0.356)
# Apply the correction factor
seeing_rad = kol_seeing * np.sqrt(vk_correction2)
# Convert to arcseconds
seeing = np.degrees(seeing_rad)*(60.0*60.0)
return seeing
seeing500 = np.round(vk_seeing(10**mean_log_r0, outer_scale=30), 2)
seeing500
# -
# Now, apply the `opsim` seeing model:
seeing_model = SeeingModel()
seeing = pd.Series(seeing_model(seeing500, 1.0)['fwhmEff'], index=bands.band)
seeing
# ## Sky
# Sample the sky at intervals for all dark time in the survey:
# +
# %%time
mjd_range = (astropy.time.Time('2022-10-01T16:00:00Z').mjd, astropy.time.Time('2032-10-01T16:00:00Z').mjd)
all_mjds = np.arange(*mjd_range, mjd_sample_rate)
all_times = astropy.time.Time(all_mjds, scale='utc', format='mjd', location=site)
try:
with open(sun_cache_fname, 'rb') as sun_cache:
sun_coords = pickle.load(sun_cache)
with open(moon_cache_fname, 'rb') as moon_cache:
moon_coords = pickle.load(moon_cache)
except FileNotFoundError:
sun_coords = astropy.coordinates.get_sun(all_times).transform_to(astropy.coordinates.AltAz(obstime=all_times, location=site))
with open(sun_cache_fname, 'wb') as fp:
pickle.dump(sun_coords, fp)
moon_coords = astropy.coordinates.get_moon(all_times).transform_to(astropy.coordinates.AltAz(obstime=all_times, location=site))
with open(moon_cache_fname, 'wb') as moon_cache:
pickle.dump(moon_coords, moon_cache)
dark_idxs = np.logical_and(sun_coords.alt.deg < -18, moon_coords.alt.deg < -18)
mjds = all_mjds[dark_idxs]
times = all_times[dark_idxs]
# -
# Find the healpixels for the standard pointing (zenith) for the sampled times:
def find_zenith_hpixs(mjd, site, nside, north=True):
lst = Time(mjd, format='mjd', location=site).sidereal_time('apparent').to_value(u.degree)
ra = lst
decl = site.lat.deg
hpix = healpy.ang2pix(nside, ra, decl, lonlat=True)
return hpix
# Actually calculate the sky brightnesses:
# +
def compute_zenith_mags(mjds, sky_model, **kwargs):
zenith_hpixs = pd.DataFrame({'mjd': mjds,
'hpix': find_zenith_hpixs(mjds, **kwargs)})
mags = zenith_hpixs.apply(
lambda x: pd.Series({k: m[0] for k, m in sky_model.returnMags(x.mjd, indx=np.array([x.hpix], dtype=int), badval=np.nan, zenith_mask=False).items()}),
axis=1)
mags['mjd'] = mjds
return mags
if __debug__:
df = compute_zenith_mags(np.arange(60000.7, 60001.7, 0.01), sky_model, site=site, nside=sky_model.nside)
ax = None
for band in bands.band:
ax = df.plot('mjd', band, ax=ax)
# -
# %%time
try:
with open(sky_cache_fname, 'rb') as sky_cache:
dark_sky_mags = pickle.load(sky_cache)
except FileNotFoundError:
dark_sky_mags = compute_zenith_mags(mjds, sky_model=sky_model, site=site, nside=sky_model.nside)
with open(sky_cache_fname, 'wb') as sky_cache:
pickle.dump(dark_sky_mags, sky_cache)
# Get the median dark sky in each band:
median_dark_sky = dark_sky_mags.median()
median_dark_sky
# ## Calculate the fiducial magnitudes
# +
try:
# Make sure we are using parameters as defined in the code.
delattr(m5_flat_sed, 'Cm')
except AttributeError:
pass
m0 = bands.copy()
m0.apply(
lambda r: m5_flat_sed(r['band'], median_dark_sky[r['band']], seeing[r['band']], expTime=15, airmass=1, nexp=2), axis=1).round(2)
# -
# # Tracing the origin of differences with the SRD
# ## SRD m5
# Copy values from the SRD:
m5 = pd.DataFrame(
{'SRD spec': [23.9, 25.0, 24.7, 24.0, 23.3, 22.1],
'SRD min': [23.4, 24.6, 24.3, 23.6, 22.9, 21.7],
'SRD stretch': [24.0, 25.1, 24.8, 24.1, 23.4, 22.2]},
index=bands.band)
# ## Reproduce m5 from the overview paper
# Table 2 from p. 26 of the [overview paper](https://arxiv.org/abs/0805.2366).
# +
overview_table2 = pd.DataFrame({'m_sky': [22.99, 22.26, 21.20, 20.48, 19.60, 18.61],
'theta': [0.81, 0.77, 0.73, 0.71, 0.69, 0.68],
'theta_eff': [0.92, 0.87, 0.83, 0.80, 0.79, 0.76],
'gamma': [0.038, 0.039, 0.039, 0.039, 0.039, 0.039],
'k_m': [0.491, 0.213, 0.126, 0.096, 0.069, 0.170],
'C_m': [23.09, 24.42, 24.44, 24.32, 24.16, 23.73],
'm5': [23.78, 24.81, 24.35, 23.92, 23.34, 22.45],
'Delta_C_m_infinity': [0.62, 0.18, 0.10, 0.07, 0.05, 0.04],
'Delta_C_m_2': [0.23, 0.08, 0.05, 0.03, 0.02, 0.02],
'Delta_m5': [0.21, 0.16, 0.14, 0.13, 0.13, 0.14],
}, index=bands.index)
overview_table2.T
# -
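The table values can be tied together with the overview paper's limiting-magnitude formula (its Eq. 6, as I read it); a sketch that reproduces the tabulated `m5` to within rounding for a single 30 s visit at airmass 1:

```python
import numpy as np

# Eq. 6 of the overview paper: m5 from the system constant C_m, sky brightness,
# effective seeing, atmospheric extinction coefficient, exposure time, and airmass.
def m5_eq6(C_m, m_sky, theta_eff, k_m, t_vis=30.0, X=1.0):
    return (C_m + 0.50 * (m_sky - 21.0)
            + 2.5 * np.log10(0.7 / theta_eff)
            + 1.25 * np.log10(t_vis / 30.0)
            - k_m * (X - 1.0))

# Check against two rows of Table 2 (u and r bands).
m5_u = m5_eq6(23.09, 22.99, 0.92, 0.491)   # table value: 23.78
m5_r = m5_eq6(24.44, 21.20, 0.83, 0.126)   # table value: 24.35
```

The small residuals are consistent with the rounding differences noted later in this notebook.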
# Call `m5_flat_sed` once to instantiate default parameters.
# +
try:
# Make sure we are using parameters as defined in the code.
delattr(m5_flat_sed, 'Cm')
except AttributeError:
pass
_ = m5_flat_sed('u', 23, 1.0, 30.0, 1.0)
# -
m5_flat_sed.Cm = overview_table2.C_m.to_dict()
m5_flat_sed.dCm_infinity = overview_table2.Delta_C_m_infinity.to_dict()
m5_flat_sed.kAtm = overview_table2.k_m.to_dict()
m5_flat_sed.msky = overview_table2.m_sky.to_dict()
m5['overview'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'], r['theta_eff'], expTime=30, airmass=1.0, nexp=1), axis=1).values
m5
# Compare m5 as re-derived from the table using `m5_flat_sed` with the actual values from the table:
m5['overview'] - overview_table2['m5']
# Okay, so close, but maybe some rounding difference.
# ## Move to airmass of 1.2
# See if we can reproduce the `theta_eff` from overview table 2 using `SeeingModel`, so we can use it to calculate the effective seeing at other airmasses.
seeing_model = SeeingModel()
test_eff_seeing = pd.DataFrame(
{'table 2': overview_table2.theta_eff.values,
'SeeingModel': seeing_model(original_median_seeing500, 1.0)['fwhmEff']},
index=bands.index)
test_eff_seeing['difference'] = test_eff_seeing['SeeingModel'] - test_eff_seeing['table 2']
test_eff_seeing
# Pre-compute the effective seeing at airmass
original_seeing_12 = pd.Series(seeing_model(original_median_seeing500, 1.2)['fwhmEff'], index=bands.index)
original_seeing_12
m5['airmass'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'], original_seeing_12[r['band']], expTime=30, airmass=1.2, nexp=1), axis=1).values
m5
# Compare change to airmass 1.2 to that reported in table 2:
test_airmass12 = pd.DataFrame({'table 2 Delta_m5': overview_table2.Delta_m5,
'Delta_m5': m5['overview'] - m5['airmass']})
test_airmass12['diff'] = test_airmass12['Delta_m5'] - test_airmass12['table 2 Delta_m5']
test_airmass12
# ## Update to use throughput in `sims_utils`
# Trigger reinitialization of the parameters in `m5_flat_sed` to be defaults when called:
del m5_flat_sed.Cm
m5['throughput'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'], original_seeing_12[r['band']], expTime=30, airmass=1.2, nexp=1), axis=1).values
m5
# ## Switch to two exposures per visit
m5['2exp'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'], original_seeing_12[r['band']], expTime=15, airmass=1.2, nexp=2), axis=1).values
m5
# Compare with the difference reported in overview table 2:
pd.DataFrame({'calculated': m5['throughput'] - m5['2exp'],
'overview': overview_table2['Delta_C_m_2'],
'diff': overview_table2['Delta_C_m_2'] + m5['2exp'] - m5['throughput']})
# ## Change to the latest seeing model
seeing_airmass12 = pd.Series(seeing_model(seeing500, 1.2)['fwhmEff'], index=bands.band)
seeing_airmass12
m5['seeing'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'], seeing_airmass12[r['band']], expTime=15, airmass=1.2, nexp=2), axis=1).values
m5
# ## Change to the latest sky brightness model and airmass=1.2
night_idxs = sun_coords.alt.deg < -14
night_mjds = all_mjds[night_idxs]
night_times = all_times[night_idxs]
# +
def compute_zd_from_airmass(airmass, a=470.0):
mu = (1.0/airmass) + (1/(2.0*a))*((1.0/airmass) - airmass)
zd = np.degrees(np.arccos(mu))
return zd
if __debug__:
# Bemporad values (empirical values for airmass as a function of zenith distance)
assert compute_zd_from_airmass(1)==0
assert np.round(compute_zd_from_airmass(1.154), 1)==30
assert np.round(compute_zd_from_airmass(1.995), 1)==60
assert np.round(compute_zd_from_airmass(2.904), 1)==70
assert np.round(compute_zd_from_airmass(18.8))==88
# -
def compute_transiting_hpixs(mjd, airmass, site, nside, north=True):
zd = compute_zd_from_airmass(airmass)
lst = Time(mjd, format='mjd', location=site).sidereal_time('apparent').to_value(u.degree)
ra = lst
decl = site.lat.deg + zd if north else site.lat.deg - zd
hpix = healpy.ang2pix(nside, ra, decl, lonlat=True)
return hpix
# Actually calculate the sky brightnesses:
# +
def compute_transiting_mags(mjds, sky_model, **kwargs):
transiting_hpixs = pd.DataFrame({'mjd': mjds,
'hpix': compute_transiting_hpixs(mjds, **kwargs)})
mags = transiting_hpixs.apply(
lambda x: pd.Series({k: m[0] for k, m in sky_model.returnMags(x.mjd, indx=np.array([x.hpix], dtype=int), badval=np.nan, zenith_mask=False).items()}),
axis=1)
mags['mjd'] = mjds
return mags
if __debug__:
df = compute_transiting_mags(np.arange(60000.7, 60001.7, 0.01), sky_model, airmass=1.2, site=site, nside=sky_model.nside)
ax = None
for band in bands.band:
ax = df.plot('mjd', band, ax=ax)
# -
# %%time
try:
with open(sky_x12_cache_fname, 'rb') as sky_cache:
sky_mags_x12 = pickle.load(sky_cache)
except FileNotFoundError:
sky_mags_x12 = compute_transiting_mags(night_mjds, sky_model=sky_model, airmass=1.2, site=site, nside=sky_model.nside)
with open(sky_x12_cache_fname, 'wb') as sky_cache:
pickle.dump(sky_mags_x12, sky_cache)
sky_x12_mag_stats = sky_mags_x12.describe()
sky_x12_mag_stats
# The nominal total number of u, g, and r exposures is the same as the nominal total number of z and y exposures.
#
# If we assume u, g, and r are observed in the darkest 50% of the time, z and y in the brightest 50%, and i evenly split between the two, then we get:
typical_m_sky = pd.Series([
sky_x12_mag_stats.loc['75%', 'u'],
sky_x12_mag_stats.loc['75%', 'g'],
sky_x12_mag_stats.loc['75%', 'r'],
sky_x12_mag_stats.loc['50%', 'i'],
sky_x12_mag_stats.loc['25%', 'z'],
sky_x12_mag_stats.loc['25%', 'y']],
index=bands.index
)
typical_m_sky
m5['sky'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], typical_m_sky[r['band']], seeing_airmass12[r['band']], expTime=15, airmass=1.2, nexp=2), axis=1).values
m5
# ## Baseline median and mean
# Look at what a baseline `opsim` simulation actually achieves.
try:
visits = pd.read_hdf(visit_cache, opsim_name)
except (FileNotFoundError, KeyError):
with sqlite3.connect(opsim_fname) as con:
visits = pd.read_sql_query('SELECT * FROM SummaryAllProps', con)
visits.set_index('filter', drop=False, inplace=True)
visits.index.name = 'band'
visits.to_hdf(visit_cache, opsim_name)
m5['b. median'] = visits.groupby('band').agg({'fiveSigmaDepth': 'median'}).loc[bands.index, 'fiveSigmaDepth'].T
m5['b. mean'] = visits.groupby('band').agg({'fiveSigmaDepth': 'mean'}).loc[bands.index, 'fiveSigmaDepth'].T
m5
# ## Baseline mag from mean t_eff
def compute_teff(ref_mags, mags=visits.fiveSigmaDepth):
t_eff = 10**(0.8*(mags-ref_mags.loc[mags.index]))
return t_eff
m5['b. teff mean'] = m5['SRD spec'] + 1.25*np.log10(compute_teff(m5['SRD spec'], visits.fiveSigmaDepth).groupby('band').mean())
m5
print(m5)
# ## Plot how the magnitude limits change with each modification from the SRD
m5_imp = m5[['overview', 'airmass', 'throughput', '2exp', 'seeing', 'sky']].T
m5_imp.index.rename('step', inplace=True)
m5_imp.reset_index(inplace=True)
fig, axes = plt.subplots(2, 3, figsize=(16, 12))
for band, ax in zip(bands.band, axes.flatten()):
ax.plot(m5_imp.index, m5_imp[band], marker='o', linestyle=' ', color='darkblue')
ax.plot(m5_imp.index, m5_imp[band], alpha=0.2, color='darkblue')
ax.set_xticks(np.arange(len(m5_imp['step'])))
ax.set_xticklabels(m5_imp['step'], rotation=20)
ax.set_title(band)
ax.axhline(m5.loc[band, 'SRD spec'], linestyle=':', color='k')
ax.text(0, m5.loc[band, 'SRD spec'], 'SRD spec', horizontalalignment='left', verticalalignment='top', color='k')
ax.axhline(m5.loc[band, 'SRD min'], linestyle=':', color='r')
ax.text(0, m5.loc[band, 'SRD min'], 'SRD min', horizontalalignment='left', verticalalignment='bottom', color='r')
ax.axhline(m5.loc[band, 'SRD stretch'], linestyle=':', color='g')
ax.text(0, m5.loc[band, 'SRD stretch'], 'SRD stretch', horizontalalignment='left', verticalalignment='top', color='g')
ax.axhline(m5.loc[band, 'b. median'], linestyle='--', color='b')
ax.text(0, m5.loc[band, 'b. median'], 'baseline 1.7 sim median', horizontalalignment='left', verticalalignment='top', color='b')
ax.axhline(m5.loc[band, 'b. teff mean'], linestyle='--', color='k')
ax.text(0, m5.loc[band, 'b. teff mean'], 'baseline 1.7 sim t_eff mean', horizontalalignment='left', verticalalignment='top', color='k')
# ## Examine as differences from baseline averages
m5.apply(lambda c: c - m5['b. teff mean'], axis=0)
m5.apply(lambda c: c - m5['b. median'], axis=0)
m5.apply(lambda c: c - m5['b. mean'], axis=0)
# ## Find t_eff stats
def compute_teff_stats(*args, **kwargs):
teff_stats = compute_teff(*args, **kwargs).groupby('band').describe().loc[bands.index].T
return teff_stats
m5.apply(lambda r: compute_teff_stats(r).loc['mean'])
m5.apply(lambda r: compute_teff_stats(r).loc['50%'])
# ## Progression table
m5
# +
m5prog = m5[['overview']].copy()
m5prog['throughput'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'], overview_table2['theta_eff'][r['band']], expTime=15, airmass=1, nexp=2), axis=1).values
m5prog['seeing'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'], r['m_sky'],
pd.Series(seeing_model(seeing500, 1.0)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1, nexp=2), axis=1).values
m5prog['sky'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].median(),
pd.Series(seeing_model(seeing500, 1.0)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1, nexp=2), axis=1).values
m5prog['X_k'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].median(),
pd.Series(seeing_model(seeing500, 1.0)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1.2, nexp=2), axis=1).values
m5prog['X_seeing'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].median(),
pd.Series(seeing_model(seeing500, 1.2)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1.2, nexp=2), axis=1).values
m5prog['X'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
sky_x12_mag_stats.loc['75%', r['band']],
pd.Series(seeing_model(seeing500, 1.2)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1.2, nexp=2), axis=1).values
m5prog['moon'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
typical_m_sky[r['band']],
pd.Series(seeing_model(seeing500, 1.2)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1.2, nexp=2), axis=1).values
print(m5prog)
# -
print(m5.loc[:, ['b. median', 'b. mean']].apply(lambda x: x - m5prog.moon).rename(columns={'b. median': 'sim median - model', 'b. mean': 'sim mean - model'}))
m5p_imp = m5prog.T
m5p_imp.index.rename('step', inplace=True)
m5p_imp.reset_index(inplace=True)
fig, axes = plt.subplots(2, 3, figsize=(16, 12))
for band, ax in zip(bands.band, axes.flatten()):
ax.plot(m5p_imp.index, m5p_imp[band], marker='o', linestyle=' ', color='darkblue')
ax.plot(m5p_imp.index, m5p_imp[band], alpha=0.2, color='darkblue')
ax.set_xticks(np.arange(len(m5p_imp['step'])))
ax.set_xticklabels(m5p_imp['step'], rotation=20)
ax.set_title(band)
ax.hlines(m5.loc[band, ['SRD spec', 'SRD min', 'SRD stretch']], xmin=0, xmax=3, linestyle=':', color='k')
ax.text(0, m5.loc[band, 'SRD spec'], 'SRD spec', horizontalalignment='left', verticalalignment='top', color='k')
ax.text(0, m5.loc[band, 'SRD min'], 'SRD min', horizontalalignment='left', verticalalignment='bottom', color='r')
ax.text(0, m5.loc[band, 'SRD stretch'], 'SRD stretch', horizontalalignment='left', verticalalignment='top', color='g')
ax.axhline(m5.loc[band, 'b. median'], linestyle='--', color='b')
ax.text(0, m5.loc[band, 'b. median'], 'baseline 1.7 sim median', horizontalalignment='left', verticalalignment='top', color='b')
ax.axhline(m5.loc[band, 'b. teff mean'], linestyle='--', color='k')
ax.text(0, m5.loc[band, 'b. teff mean'], 'baseline 1.7 sim t_eff mean', horizontalalignment='left', verticalalignment='top', color='k')
# ## Using median airmass, seeing, sky brightness of baseline visits
m5visits = m5prog[['overview', 'throughput', 'seeing', 'sky']].copy()
m5visits['sim_X'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].median(),
pd.Series(seeing_model(seeing500, 1.0)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=visits.loc[visits['filter']==r['band'],'airmass'].median(), nexp=2), axis=1).values
m5visits['sim_seeing'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].median(),
visits.loc[visits['filter']==r['band'], 'seeingFwhmEff'].median(),
expTime=15, airmass=visits.loc[visits['filter']==r['band'], 'airmass'].median(), nexp=2), axis=1).values
m5visits['sim_sky'] = overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
visits.loc[visits['filter']==r['band'], 'skyBrightness'].median(),
visits.loc[visits['filter']==r['band'], 'seeingFwhmEff'].median(),
expTime=15, airmass=visits.loc[visits['filter']==r['band'],'airmass'].median(), nexp=2), axis=1).values
print(m5visits)
print(m5.loc[:, ['b. median', 'b. mean']].apply(lambda x: x - m5visits.sim_sky).rename(columns={'b. median': 'sim median - model', 'b. mean': 'sim mean - model'}))
m5p_imp = m5visits.T
m5p_imp.index.rename('step', inplace=True)
m5p_imp.reset_index(inplace=True)
fig, axes = plt.subplots(2, 3, figsize=(16, 12))
for band, ax in zip(bands.band, axes.flatten()):
ax.plot(m5p_imp.index, m5p_imp[band], marker='o', linestyle=' ', color='darkblue')
ax.plot(m5p_imp.index, m5p_imp[band], alpha=0.2, color='darkblue')
ax.set_xticks(np.arange(len(m5p_imp['step'])))
ax.set_xticklabels(m5p_imp['step'], rotation=20)
ax.set_title(band)
ax.hlines(m5.loc[band, ['SRD spec', 'SRD min', 'SRD stretch']], xmin=0, xmax=3, linestyle=':', color='k')
ax.text(0, m5.loc[band, 'SRD spec'], 'SRD spec', horizontalalignment='left', verticalalignment='top', color='k')
ax.text(0, m5.loc[band, 'SRD min'], 'SRD min', horizontalalignment='left', verticalalignment='bottom', color='r')
ax.text(0, m5.loc[band, 'SRD stretch'], 'SRD stretch', horizontalalignment='left', verticalalignment='top', color='g')
ax.axhline(m5.loc[band, 'b. median'], linestyle='--', color='b')
ax.text(0, m5.loc[band, 'b. median'], 'baseline 1.7 sim median', horizontalalignment='left', verticalalignment='top', color='b')
ax.axhline(m5.loc[band, 'b. teff mean'], linestyle='--', color='k')
ax.text(0, m5.loc[band, 'b. teff mean'], 'baseline 1.7 sim t_eff mean', horizontalalignment='left', verticalalignment='top', color='k')
# ## Sky brightness and moon
fig, axes = plt.subplots(3, 2, figsize=(13, 8))
for band, ax in zip(bands.band, axes.flatten()):
visits.query(f"band == '{band}'").hist('skyBrightness', bins=100, ax=ax)
text_trans = mpl.transforms.blended_transform_factory(ax.transData, ax.transAxes)
ax.axvline(sky_x12_mag_stats.loc['mean', band], color='darkblue')
ax.text(sky_x12_mag_stats.loc['mean', band], 1, 'mean at X=1.2', transform=text_trans, horizontalalignment='right', verticalalignment='top', rotation=90, color='darkblue')
ax.axvline(sky_x12_mag_stats.loc['min', band], color='black', linestyle=":")
ax.text(sky_x12_mag_stats.loc['min', band], 1, 'min at X=1.2', transform=text_trans, horizontalalignment='right', verticalalignment='top', rotation=90, color='black')
ax.axvline(sky_x12_mag_stats.loc['max', band], color='black', linestyle=":")
ax.axvline(sky_x12_mag_stats.loc['25%', band], color='black', linestyle='--')
ax.text(sky_x12_mag_stats.loc['25%', band], 1, '25% at X=1.2', transform=text_trans, horizontalalignment='right', verticalalignment='top', rotation=90, color='black')
ax.axvline(sky_x12_mag_stats.loc['75%', band], color='black', linestyle='--')
ax.axvline(sky_x12_mag_stats.loc['50%', band], color='black')
ax.text(sky_x12_mag_stats.loc['50%', band], 1, 'median at X=1.2', transform=text_trans, horizontalalignment='right', verticalalignment='top', rotation=90, color='black')
ax.set_title(band)
sky_mags_x12.describe()
# ## Checking parameters from sims_utils
sims_utils_params = pd.DataFrame({'C_m': m5_flat_sed.Cm,
'Delta_C_m_infinity': m5_flat_sed.dCm_infinity})
param_compare = pd.concat([overview_table2[sims_utils_params.columns], sims_utils_params], keys=['overview table 2', 'sims_utils'], axis=1).reorder_levels((1,0), axis=1)
param_compare = param_compare.reindex(sorted(param_compare.columns), axis=1)
print(param_compare)
# ## Coadd-depth based $t_{\mbox{eff}}$
coadd_depth = pd.DataFrame({'mag': [26.1, 27.4, 27.5, 26.8, 26.1, 24.9],
'visits': [56, 80, 184, 184, 160, 160]}, index=bands.band).reindex(visits.index)
coadd_depth['m0'] = coadd_depth.mag - 1.25*np.log10(coadd_depth.visits)
coadd_depth['prog'] = m5prog.moon
coadd_depth['round'] = np.round(m5prog.sky*2)/2
coadd_depth['m0_visit_teff'] = (10**(0.8*(visits.fiveSigmaDepth - coadd_depth.m0))).groupby('band').mean()
coadd_depth['prog_visit_teff'] = (10**(0.8*(visits.fiveSigmaDepth - coadd_depth.prog))).groupby('band').mean()
coadd_depth['round_visit_teff'] = (10**(0.8*(visits.fiveSigmaDepth - coadd_depth['round']))).groupby('band').mean()
coadd_depth.groupby('band').first().loc[bands.band]
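The two depth relations used in this cell can be spot-checked in isolation: coadding N equal visits deepens the limit by 1.25·log10(N) mag, and a visit's effective-time weight relative to a reference depth is 10^(0.8·Δm). A minimal sketch (function names are illustrative, not part of the notebook's API):

```python
import numpy as np

def single_visit_depth(coadd_mag, n_visits):
    # Invert the coadd gain: N equal visits reach coadd_mag, so one visit
    # reaches coadd_mag - 1.25*log10(N).
    return coadd_mag - 1.25 * np.log10(n_visits)

def teff_weight(m5, m5_ref):
    # Effective-time weight of a visit relative to a reference depth m5_ref.
    return 10 ** (0.8 * (m5 - m5_ref))

# A visit exactly at the reference depth carries weight 1;
# a visit 1.25 mag deeper carries weight 10.
print(teff_weight(24.0, 24.0), teff_weight(25.25, 24.0))
# r-band row of the table above: 184 visits to a coadd depth of 27.5 mag.
print(round(single_visit_depth(27.5, 184), 3))  # 24.669
```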
# ## Best possible as reference for $t_{\mbox{eff}}$
overview_table2.reset_index().apply(
lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].max(),
pd.Series(seeing_model(0.0, 1.0)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1, nexp=2), axis=1)
best_m5 = pd.DataFrame({
'm5': overview_table2.reset_index().apply(lambda r: m5_flat_sed(r['band'],
dark_sky_mags[r['band']].max(),
pd.Series(seeing_model(0.0, 1.0)['fwhmEff'], index=bands.index)[r['band']],
expTime=15, airmass=1, nexp=2), axis=1).values},
index=bands.band)
best_m5['round_best'] = np.round(best_m5.m5*2)/2
best_m5['m5 mean teff'] = 30*(10**(0.8*(visits.fiveSigmaDepth - best_m5.reindex(visits.index).m5))).groupby('band').mean()
best_m5['round mean teff'] = 30*(10**(0.8*(visits.fiveSigmaDepth - best_m5.reindex(visits.index).round_best))).groupby('band').mean()
best_m5
# ### Distributions of t_eff in units of ideal visits, unrounded reference points
teff = (10**(0.8*(visits.fiveSigmaDepth - best_m5.reindex(visits.index).m5)))
fig, axes = plt.subplots(3,2, sharex=True, sharey=True)
for band, ax in zip(bands.band, axes.flatten()):
teff[band].hist(bins=np.arange(20)/30, ax=ax, density=True)
ax.text(0.97, 0.95, band, horizontalalignment='right', verticalalignment='top', transform=ax.transAxes)
ax.set_yticklabels([])
plt.tight_layout()
# ### Distributions in units of ideal seconds, unrounded reference points
teff = 30*(10**(0.8*(visits.fiveSigmaDepth - best_m5.reindex(visits.index).m5)))
fig, axes = plt.subplots(3,2, sharex=True, sharey=True)
for band, ax in zip(bands.band, axes.flatten()):
teff[band].hist(bins=np.arange(20), ax=ax, density=True)
ax.text(0.97, 0.95, band, horizontalalignment='right', verticalalignment='top', transform=ax.transAxes)
ax.set_yticklabels([])
plt.tight_layout()
# ### Distributions in units of ideal seconds, rounded reference points
round_teff = 30*(10**(0.8*(visits.fiveSigmaDepth - best_m5.reindex(visits.index).round_best)))
fig, axes = plt.subplots(3,2, sharex=True, sharey=True)
for band, ax in zip(bands.band, axes.flatten()):
round_teff[band].hist(bins=np.arange(20), ax=ax, density=True)
ax.text(0.97, 0.95, band, horizontalalignment='right', verticalalignment='top', transform=ax.transAxes)
ax.set_yticklabels([])
plt.tight_layout()
| notebooks/teff_fiducial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
from time import sleep
import numpy as np
import GCode
import GRBL
# -
# # Inkscape G-Code Laser Playback
# ## Test Setup
# # Code:
cnc = GRBL.GRBL(port="/dev/cnc_3018")
cnc.laser_mode = 1
cnc.serial.setDTR(False)
cnc.serial.flushInput()
cnc.serial.setDTR(True)
print(cnc.laser_mode)
cnc.status
cnc.kill_alarm()
cnc.cmd("M3S1")
cnc.cmd("G1F1")
def laser_aim(grbl):
# Drop to a slow feed and minimal laser power so the spot is visible but
# harmless, then wait for the operator before switching the laser off.
grbl.cmd("G1F1")
grbl.cmd("M3S1")
input("Aim laser and press enter to continue...")
grbl.cmd("M3S0")
grbl.cmd("M5")
laser_aim(cnc)
test_run = GCode.GCode()
test_run.load("/tmp/speaker_0001.gcode")
cnc.run(test_run)
cnc.cmd("M3S255")
cnc.cmd("M3S2")
cnc.cmd("G91")
cnc.cmd("G1F200")
cnc.cmd("M4S255")
cnc.cmd("G1X25")
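For readers less familiar with G-code, the commands sent in this session have standard meanings; the dictionary below paraphrases them (my own summary of standard G-code/GRBL behavior, not pulled from the GRBL package):

```python
import re

# Paraphrased meanings of the commands used in this session (illustrative only).
GCODE_NOTES = {
    "M3": "laser/spindle on at constant power (Sxxx sets the power)",
    "M4": "laser/spindle on with dynamic power (GRBL laser mode scales it with feed)",
    "M5": "laser/spindle off",
    "G0": "rapid move",
    "G1": "linear move at the current feed rate (Fxxx sets the feed)",
    "G91": "switch to relative positioning",
    "G92": "define the current position as the given work offset (G92X0Y0 zeroes X/Y)",
}

def describe(cmd):
    # Look up the leading G/M word of a command line, e.g. "M3S1" -> "M3".
    word = re.match(r"[GM]\d+", cmd.upper())
    return GCODE_NOTES.get(word.group(0) if word else cmd, "unknown")

print(describe("M4S255"))
```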
test_run = GCode.GCode()
test_run.load("/tmp/circle_fill_0001.gcode")
cnc.run(test_run)
cnc.reset()
cnc.status
cnc.cmd("G92X0Y0")
cnc.status
cnc.cmd("G0X-1")
laser_aim(cnc)
cnc.status
cnc.cmd("G92X0Y0")
cnc.status
test_run = GCode.GCode()
test_run.load("/tmp/circle_fill_0003.gcode")
cnc.run(test_run)
laser_aim(cnc)
# +
test_run = GCode.GCode()
test_run.load("/tmp/wf_snuff_0002.gcode")
cnc.cmd("G92X0Y0")
cnc.status
# -
cnc.run(test_run)
cnc.status
cnc.reset()
laser_aim(cnc)
| DevelopmentSandbox/LaserPlayback-Copy2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [Open in Colab](https://colab.research.google.com/github/dopplerchase/DRpy/blob/master/notebooks/Example_one.ipynb)
# # Welcome to DRpy (derpy)
#
# Here is a simple notebook to help you get going.
# ### If you want mapping, please install cartopy
#first install cartopy
# !apt-get -qq install libproj-dev proj-data proj-bin libgeos-dev
# !pip install Cython
# !pip install cartopy
# ### Ok, now go clone the repository
#
# The reason we are cloning the directory is to grab the example file. If you don't need this file, just install this way:
#
# # !pip install git+https://github.com/dopplerchase/DRpy.git
#clone repository
# !git clone https://github.com/dopplerchase/DRpy.git
# ### Install it
# +
import os
# #cd into folder
os.chdir('./DRpy/')
#install
# !python setup.py install
# #cd into notebooks
os.chdir('./notebooks/')
# -
# ## Now we can actually use it!
import drpy
# %pylab inline
filename = '../example_file/2A.GPM.DPR.V820180723.20200117-S181128-E184127.V06A.RT-H5'
dpr = drpy.core.GPMDPR(filename = filename)
dpr.read()
dpr.toxr()
zoom = 10
center_lat = 47.5
center_lon = -55
corners = [center_lon - zoom,center_lon +zoom, center_lat-zoom,center_lat+zoom]
dpr.setboxcoords(corners = corners)
#to reduce size of data, drop empty cross-track sections
dpr.xrds = dpr.xrds.dropna(dim='along_track',how='all')
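The corners list built above is just a ±zoom square around the chosen center; a tiny helper makes the convention explicit (the helper name is illustrative — DRpy's `setboxcoords` simply takes the four-element list):

```python
def make_corners(center_lon, center_lat, half_width_deg):
    # The [lon_min, lon_max, lat_min, lat_max] box passed to setboxcoords:
    # a square of +/- half_width_deg around the chosen center point.
    return [center_lon - half_width_deg, center_lon + half_width_deg,
            center_lat - half_width_deg, center_lat + half_width_deg]

# The box used above: a 20-degree square centered on (lon=-55, lat=47.5).
print(make_corners(-55, 47.5, 10))  # [-65, -45, 37.5, 57.5]
```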
# ### lets take a gander at the xarray datatset (.xrds)
dpr.xrds
# +
#use cartopy for mapping
import cartopy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.patheffects as PathEffects
import cartopy.io.shapereader as shpreader
from cartopy.mpl.geoaxes import GeoAxes
from mpl_toolkits.axes_grid1 import AxesGrid
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
import matplotlib.colors as colors
import matplotlib.patheffects as PathEffects
#make figure
fig = plt.figure(figsize=(10, 10))
#add the map
ax = fig.add_subplot(1, 1, 1,projection=ccrs.PlateCarree())
# corners = [center_lon - 3,center_lon + 3, center_lat-1.9,center_lat+1.9]
# ax.set_extent(corners)
ax.add_feature(cfeature.STATES.with_scale('50m'),lw=0.5)
ax.add_feature(cartopy.feature.OCEAN.with_scale('50m'))
ax.add_feature(cartopy.feature.LAND.with_scale('50m'), edgecolor='black',lw=0.5,facecolor=[0.95,0.95,0.95])
ax.add_feature(cartopy.feature.LAKES.with_scale('50m'), edgecolor='black')
ax.add_feature(cartopy.feature.RIVERS.with_scale('50m'))
# ax.set_xticks(np.arange(corners[0], corners[1], 1), crs=ccrs.PlateCarree())
# ax.set_yticks(np.linspace(corners[2], corners[3], 5), crs=ccrs.PlateCarree())
# lon_formatter = LongitudeFormatter(zero_direction_label=True)
# lat_formatter = LatitudeFormatter()
# ax.xaxis.set_major_formatter(lon_formatter)
# ax.yaxis.set_major_formatter(lat_formatter)
pm = ax.scatter(dpr.xrds.lons,dpr.xrds.lats,c=dpr.xrds.nearsurfaceKu,vmin=12,vmax=40,cmap=drpy.graph.cmaps.HomeyerRainbow,s=0.25)
ax.plot(dpr.xrds.lons,dpr.xrds.lats,'o',fillstyle='none',color='k',markeredgewidth=0.1,ms=np.sqrt(0.25))
plt.colorbar(pm,ax=ax,shrink=0.5)
| notebooks/.ipynb_checkpoints/Example_one-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="845e32ab-4600-4db5-84c6-1847f14aa48d" _uuid="12f808bbb94d2ff51d4b8d531a32f21e179ae4cc"
# It seems like none of the Keras scripts published so far managed to get above 0.26. As written below, this script won't do much better either, but that is with 4 folds, and only two repeated runs and 3 epochs per fold. A proper version of this script with 5 folds and 3 repeated runs has out-of-fold CV of 0.274 and a leaderboard score of 0.270.
#
# Keep on reading for suggestions how to get this script to score better.
# + _cell_guid="be30f312-b2fb-4e5b-a0b8-9ddbbb190d2e" _uuid="7a90bcae1134c958f391542f4d67df93ed020d98"
import numpy as np
np.random.seed()
import pandas as pd
from datetime import datetime
from sklearn.metrics import log_loss, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from keras.models import load_model
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Dropout, Activation
from keras.layers.normalization import BatchNormalization
from keras.callbacks import EarlyStopping, ModelCheckpoint, CSVLogger, Callback
from keras.wrappers.scikit_learn import KerasClassifier
# + [markdown] _cell_guid="cd63fa76-0b6d-4b1b-bd9c-7dcaba0ccd40" _uuid="bc2f0776a6622aa29b628ef37a660f340337f0cc"
# This callback is very important. It calculates roc_auc and gini values so they can be monitored during the run. Also, it creates a log of those parameters so that they can be used for early stopping. A tip of the hat to **[Roberto](https://www.kaggle.com/rspadim)** and **[this kernel](https://www.kaggle.com/rspadim/gini-keras-callback-earlystopping-validation)** for helping me figure out the latter.
#
# *Note that this callback in combination with early stopping doesn't print well if you are using verbose=1 (moving arrow) during fitting. I recommend that you use verbose=2 in fitting.*
# + _cell_guid="8d9c3461-8f27-4754-a204-a4cf82a62450" _uuid="ca16921d40817307195c9567405e4aa7c3765097"
class roc_auc_callback(Callback):
def __init__(self,training_data,validation_data):
self.x = training_data[0]
self.y = training_data[1]
self.x_val = validation_data[0]
self.y_val = validation_data[1]
def on_train_begin(self, logs={}):
return
def on_train_end(self, logs={}):
return
def on_epoch_begin(self, epoch, logs={}):
return
def on_epoch_end(self, epoch, logs={}):
y_pred = self.model.predict_proba(self.x, verbose=0)
roc = roc_auc_score(self.y, y_pred)
logs['roc_auc'] = roc
logs['norm_gini'] = roc * 2 - 1
y_pred_val = self.model.predict_proba(self.x_val, verbose=0)
roc_val = roc_auc_score(self.y_val, y_pred_val)
logs['roc_auc_val'] = roc_val
logs['norm_gini_val'] = roc_val * 2 - 1
print('\rroc_auc: %s - roc_auc_val: %s - norm_gini: %s - norm_gini_val: %s' % (str(round(roc, 5)), str(round(roc_val, 5)), str(round(roc * 2 - 1, 5)), str(round(roc_val * 2 - 1, 5))), end=10*' '+'\n')
return
def on_batch_begin(self, batch, logs={}):
return
def on_batch_end(self, batch, logs={}):
return
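The normalized Gini logged by this callback is just 2·AUC − 1. A dependency-free sanity check using the rank (Mann-Whitney) formulation of AUC, rather than the sklearn call the callback itself uses:

```python
def auc_by_ranks(y_true, scores):
    # Mann-Whitney form of ROC AUC: the probability that a random positive
    # outscores a random negative (ties count as half a win).
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def normalized_gini(y_true, scores):
    # The relation the callback logs: gini = 2*AUC - 1.
    return 2 * auc_by_ranks(y_true, scores) - 1

print(normalized_gini([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.5
```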
# + [markdown] _cell_guid="b4000c8c-8794-45e1-9ce5-ef84f9dc13a7" _uuid="18a2e97141c98c5ea3dfa45c35e75e0257ddf20e"
# Housekeeping utilities.
# + _cell_guid="69074e99-541c-4e93-b4d8-97f7a6407465" _uuid="89ab39c0da583cf503966fa4edccfec9f25c2e50"
def timer(start_time=None):
if not start_time:
start_time = datetime.now()
return start_time
elif start_time:
thour, temp_sec = divmod(
(datetime.now() - start_time).total_seconds(), 3600)
tmin, tsec = divmod(temp_sec, 60)
print('\n Time taken: %i hours %i minutes and %s seconds.' %
(thour, tmin, round(tsec, 2)))
def scale_data(X, scaler=None):
if not scaler:
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler.fit(X)
X = scaler.transform(X)
return X, scaler
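`scale_data` above relies on sklearn's `MinMaxScaler` with `feature_range=(-1, 1)`; the transformation itself is simple enough to sketch in plain NumPy (sketch only: it does not guard against constant columns the way the real scaler does):

```python
import numpy as np

def minmax_scale(X, feature_range=(-1, 1)):
    # Column-wise equivalent of MinMaxScaler for the (-1, 1) range used above.
    X = np.asarray(X, dtype=float)
    lo, hi = feature_range
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    return lo + (X - col_min) * (hi - lo) / (col_max - col_min)

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
print(minmax_scale(X))  # each column maps to [-1, 0, 1]
```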
# + [markdown] _cell_guid="2f346986-75c0-4bed-b4db-e7ccc99f88f8" _uuid="ea7bfad9bf8617aa9182ad7f6cd3ea9e6fb11276"
# I never seem to be able to write a generic routine for data loading where one would just plug in file names and everything else would be done automatically. Still trying.
# + _cell_guid="8665bec9-6b3f-437a-836c-c0ee252338b8" _uuid="217687c584b7af8619705f3cf23e480bcf029e58"
# train and test data path
DATA_TRAIN_PATH = '../input/train.csv'
DATA_TEST_PATH = '../input/test.csv'
def load_data(path_train=DATA_TRAIN_PATH, path_test=DATA_TEST_PATH):
train_loader = pd.read_csv(path_train, dtype={'target': np.int8, 'id': np.int32})
train = train_loader.drop(['target', 'id'], axis=1)
train_labels = train_loader['target'].values
train_ids = train_loader['id'].values
print('\n Shape of raw train data:', train.shape)
test_loader = pd.read_csv(path_test, dtype={'id': np.int32})
test = test_loader.drop(['id'], axis=1)
test_ids = test_loader['id'].values
print(' Shape of raw test data:', test.shape)
return train, train_labels, test, train_ids, test_ids
# + [markdown] _cell_guid="1c0173f8-8f85-468d-97e0-6fa55c9cf640" _uuid="1e112501fd4387f197b98b72f27155981b585178"
# You can ignore most of the parameters below other than the top two. Obviously, more folds means longer running time, but I can tell you from experience that 10 folds with Keras will usually do better than 4. The number of "runs" should be in the 3-5 range. At a minimum, I suggest 5 folds and 3 independent runs per fold (which will eventually get averaged). This is because of the stochastic nature of neural networks: one run per fold may or may not produce the best possible result.
#
# **If you can afford it, 10 folds and 5 runs per fold would be my recommendation. Be warned that it may take a day or two, even if you have a GPU.**
# + _cell_guid="0609e539-110a-4968-bab8-6156e3ebb904" _uuid="e34bced4579cd9b755c33c13a419f29f0ce8354f"
folds = 4
runs = 2
cv_LL = 0
cv_AUC = 0
cv_gini = 0
fpred = []
avpred = []
avreal = []
avids = []
# + [markdown] _cell_guid="988ac526-b38d-46ee-8636-c23128470b0e" _uuid="49a0ff4069fb3e38e1441d907869e3e2457fbb57"
# Loading data. Converting "categorical" variables, even though in this dataset they are actually numeric.
# + _cell_guid="b6cc7153-1c92-47d2-a24f-9b41668a0a90" _uuid="fea20a516b7076d491b9aa4bfe2153243e0974c8"
# Load data set and target values
train, target, test, tr_ids, te_ids = load_data()
n_train = train.shape[0]
train_test = pd.concat((train, test)).reset_index(drop=True)
col_to_drop = train.columns[train.columns.str.endswith('_cat')]
col_to_dummify = train.columns[train.columns.str.endswith('_cat')].astype(str).tolist()
for col in col_to_dummify:
dummy = pd.get_dummies(train_test[col].astype('category'))
columns = dummy.columns.astype(str).tolist()
columns = [col + '_' + w for w in columns]
dummy.columns = columns
train_test = pd.concat((train_test, dummy), axis=1)
train_test.drop(col_to_dummify, axis=1, inplace=True)
train_test_scaled, scaler = scale_data(train_test)
train = train_test_scaled[:n_train, :]
test = train_test_scaled[n_train:, :]
print('\n Shape of processed train data:', train.shape)
print(' Shape of processed test data:', test.shape)
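The dummification loop above calls `pd.get_dummies` once per `_cat` column; the underlying one-hot encoding can be sketched without pandas (like `get_dummies`, this emits one indicator column per distinct value):

```python
import numpy as np

def one_hot(values):
    # Minimal stand-in for pd.get_dummies on one column: an indicator
    # column per distinct value, columns ordered by sorted value.
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(values), len(categories)), dtype=int)
    for row, v in enumerate(values):
        out[row, index[v]] = 1
    return categories, out

cats, encoded = one_hot([2, 0, 2, 1])
print(cats)     # [0, 1, 2]
print(encoded)
```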
# + [markdown] _cell_guid="94b32825-7e74-4f2f-8609-66eae854113f" _uuid="7de3cc91cc55f33ce70ce3acbda04fdb9c618ecc"
# The two parameters below are worth playing with. Larger patience gives the network a better chance to find solutions when it gets close to the local/global minimum. It also means longer training times. Batch size is one of those parameters that can always be optimized for any given dataset. If you have a GPU, larger batch sizes translate to faster training, but that may or may not be better for the quality of training.
# + _cell_guid="631f1bcc-b309-449f-9595-aee7200f0210" _uuid="37113e0d59829e615f475e1ac9316c4d2097534d"
patience = 10
batchsize = 128
# + [markdown] _cell_guid="6b757c87-bdc0-4fd6-a620-9a1361aa7e0b" _uuid="f9c1cbc23ef29cc5573390df6896640d8a1da536"
# There are lots of comments within the code below. I think the callback section is particularly important.
# + _cell_guid="8da63d81-1961-4b91-b934-9dc7fe05d8ea" _uuid="e205d78f5cd47452a1a4fba004f9169534356a9f"
# Let's split the data into folds. I always use the same random number for reproducibility,
# and suggest that you do the same (you certainly don't have to use 1001).
skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=1001)  # shuffle=True so random_state actually takes effect
starttime = timer(None)
for i, (train_index, test_index) in enumerate(skf.split(train, target)):
start_time = timer(None)
X_train, X_val = train[train_index], train[test_index]
y_train, y_val = target[train_index], target[test_index]
train_ids, val_ids = tr_ids[train_index], tr_ids[test_index]
# This is where we define and compile the model. These parameters are not optimal, as they were chosen
# to get a notebook to complete in 60 minutes. Other than leaving BatchNormalization and last sigmoid
# activation alone, virtually everything else can be optimized: number of neurons, types of initializers,
# activation functions, dropout values. The same goes for the optimizer at the end.
#########
# Never move this model definition to the beginning of the file or anywhere else outside of this loop.
# The model needs to be initialized anew every time you run a different fold. If not, it will continue
# the training from a previous model, and that is not what you want.
#########
# This definition must be within the for loop or else it will continue training previous model
def baseline_model():
model = Sequential()
model.add(
Dense(
200,
input_dim=X_train.shape[1],
kernel_initializer='glorot_normal',
))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(100, kernel_initializer='glorot_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Dense(50, kernel_initializer='glorot_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.15))
model.add(Dense(25, kernel_initializer='glorot_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(optimizer='adam', metrics = ['accuracy'], loss='binary_crossentropy')
return model
# This is where we repeat the runs for each fold. If you choose runs=1 above, it will run a
# regular N-fold procedure.
#########
# It is important to leave the call to random seed here, so each run starts with a different seed.
#########
for run in range(runs):
print('\n Fold %d - Run %d\n' % ((i + 1), (run + 1)))
np.random.seed()
# Lots to unpack here.
# The first callback prints out roc_auc and gini values at the end of each epoch. It must be listed
# before the EarlyStopping callback, which monitors gini values saved in the previous callback. Make
# sure to set the mode to "max" because the default value ("auto") will not handle gini properly
# (it will act as if the model is not improving even when roc/gini go up).
# CSVLogger creates a record of all iterations. Not really needed but it doesn't hurt to have it.
# ModelCheckpoint saves a model each time gini improves. Its mode also must be set to "max" for reasons
# explained above.
callbacks = [
roc_auc_callback(training_data=(X_train, y_train),validation_data=(X_val, y_val)), # call this before EarlyStopping
EarlyStopping(monitor='norm_gini_val', patience=patience, mode='max', verbose=1),
CSVLogger('keras-5fold-run-01-v1-epochs.log', separator=',', append=False),
ModelCheckpoint(
'keras-5fold-run-01-v1-fold-' + str('%02d' % (i + 1)) + '-run-' + str('%02d' % (run + 1)) + '.check',
monitor='norm_gini_val', mode='max', # mode must be set to max or Keras will be confused
save_best_only=True,
verbose=1)
]
# The classifier is defined here. Epochs should be set to a very large number (not 3 like below) which
# will never be reached anyway because of early stopping. I usually put 5000 there. Because why not.
nnet = KerasClassifier(
build_fn=baseline_model,
# Epochs need to be set to a very large number; early stopping will keep them from ever being reached
# epochs=5000,
epochs=3,
batch_size=batchsize,
validation_data=(X_val, y_val),
verbose=2,
shuffle=True,
callbacks=callbacks)
fit = nnet.fit(X_train, y_train)
# We want the best saved model - not the last one where the training stopped. So we delete the old
# model instance and load the model from the last saved checkpoint. Next we predict values both for
# validation and test data, and create a summary of parameters for each run.
del nnet
nnet = load_model('keras-5fold-run-01-v1-fold-' + str('%02d' % (i + 1)) + '-run-' + str('%02d' % (run + 1)) + '.check')
scores_val_run = nnet.predict_proba(X_val, verbose=0)
LL_run = log_loss(y_val, scores_val_run)
print('\n Fold %d Run %d Log-loss: %.5f' % ((i + 1), (run + 1), LL_run))
AUC_run = roc_auc_score(y_val, scores_val_run)
print(' Fold %d Run %d AUC: %.5f' % ((i + 1), (run + 1), AUC_run))
print(' Fold %d Run %d normalized gini: %.5f' % ((i + 1), (run + 1), AUC_run*2-1))
y_pred_run = nnet.predict_proba(test, verbose=0)
if run > 0:
scores_val = scores_val + scores_val_run
y_pred = y_pred + y_pred_run
else:
scores_val = scores_val_run
y_pred = y_pred_run
# We average all runs from the same fold and provide a parameter summary for each fold. Unless something
# is wrong, the numbers printed here should be better than any of the individual runs.
scores_val = scores_val / runs
y_pred = y_pred / runs
LL = log_loss(y_val, scores_val)
print('\n Fold %d Log-loss: %.5f' % ((i + 1), LL))
AUC = roc_auc_score(y_val, scores_val)
print(' Fold %d AUC: %.5f' % ((i + 1), AUC))
print(' Fold %d normalized gini: %.5f' % ((i + 1), AUC*2-1))
timer(start_time)
# We add up predictions on the test data for each fold. Create out-of-fold predictions for validation data.
if i > 0:
fpred = pred + y_pred
avreal = np.concatenate((avreal, y_val), axis=0)
avpred = np.concatenate((avpred, scores_val), axis=0)
avids = np.concatenate((avids, val_ids), axis=0)
else:
fpred = y_pred
avreal = y_val
avpred = scores_val
avids = val_ids
pred = fpred
cv_LL = cv_LL + LL
cv_AUC = cv_AUC + AUC
cv_gini = cv_gini + (AUC*2-1)
# + [markdown] _cell_guid="a14d6cf9-abe2-46d0-9d7d-c3d07f555748" _uuid="b0fc6c40b88ab16828fb041bb64510a6dab44d27"
# Here we average all the predictions and provide the final summary.
# + _cell_guid="ffee9fa9-5ff7-41df-8702-e8bf0c386eb6" _uuid="ca37252600dc011cd9bc354fc702efd76e0324f2"
LL_oof = log_loss(avreal, avpred)
print('\n Average Log-loss: %.5f' % (cv_LL/folds))
print(' Out-of-fold Log-loss: %.5f' % LL_oof)
AUC_oof = roc_auc_score(avreal, avpred)
print('\n Average AUC: %.5f' % (cv_AUC/folds))
print(' Out-of-fold AUC: %.5f' % AUC_oof)
print('\n Average normalized gini: %.5f' % (cv_gini/folds))
print(' Out-of-fold normalized gini: %.5f' % (AUC_oof*2-1))
score = str(round((AUC_oof*2-1), 5))
timer(starttime)
mpred = pred / folds
# + [markdown] _cell_guid="fd1bd64e-fdea-4b1b-88ca-d469b7013bed" _uuid="61295f2ab3eabbe49441bd7e1b47736f3c342d6d"
# Save the file with out-of-fold predictions. For easier book-keeping, file names have the out-of-fold gini score and are tagged by date and time.
# + _cell_guid="e9914308-7d3b-4eef-8ea0-65532d1b4c21" _uuid="3e20f234c25aaa7302f8a9ed7b752cdaed415b9a"
print('#\n Writing results')
now = datetime.now()
oof_result = pd.DataFrame(avreal, columns=['target'])
oof_result['prediction'] = avpred
oof_result['id'] = avids
oof_result.sort_values('id', ascending=True, inplace=True)
oof_result = oof_result.set_index('id')
sub_file = 'train_5fold-keras-run-01-v1-oof_' + str(score) + '_' + str(now.strftime('%Y-%m-%d-%H-%M')) + '.csv'
print('\n Writing out-of-fold file: %s' % sub_file)
oof_result.to_csv(sub_file, index=True, index_label='id')
# + [markdown] _cell_guid="72a76239-2c3e-4fd1-b280-3862755106be" _uuid="f4d9e51f7ea16afbb5c18886c3fddd48bbd55674"
# Save the final prediction. This is the one to submit.
# + _cell_guid="30446b64-aec5-45da-bf09-91fac68ae6eb" _uuid="61a7063ce1fb44e4a79a1a52ae12db2600000df5"
result = pd.DataFrame(mpred, columns=['target'])
result['id'] = te_ids
result = result.set_index('id')
print('\n First 10 lines of your 5-fold average prediction:\n')
print(result.head(10))
sub_file = 'submission_5fold-average-keras-run-01-v1_' + str(score) + '_' + str(now.strftime('%Y-%m-%d-%H-%M')) + '.csv'
print('\n Writing submission: %s' % sub_file)
result.to_csv(sub_file, index=True, index_label='id')
| 10 poer sugero safe driver prediction/keras-averaging-runs-gini-early-stopping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import Packages
import os
import matplotlib.pyplot as plt
import scipy.io as sio
import torch
import numpy as np
import pandas as pd
import logging
import re
from train_models import FNO1dComplex, SpectralConv1d, OneStepDataSet
from train_models_no_spacetime import FNO1dComplexNoSpacetime
# + tags=[]
# %load_ext autoreload
# %autoreload 1
# %aimport plotting_utils
# -
# # Load Data and Models
# +
DATA_DIR = '/local/meliao/projects/fourier_neural_operator/data/2021-08-14_NLS_data_files'
MODEL_DIR = '/local/meliao/projects/fourier_neural_operator/experiments/20_investigate_frequency_response/models'
PLOTS_DIR = '/local/meliao/projects/fourier_neural_operator/experiments/20_investigate_frequency_response/plots/Compare_N_X_datasets'
RESULTS_DIR = '/local/meliao/projects/fourier_neural_operator/experiments/20_investigate_frequency_response/results'
# +
model_fp_dd = {'Dataset 0': os.path.join(MODEL_DIR, 'dset_00_time_1_ep_1000'),
'Dataset 1': os.path.join(MODEL_DIR, 'dset_01_time_1_ep_1000'),
'Dataset 2': os.path.join(MODEL_DIR, 'dset_02_time_1_ep_1000'),
'Dataset 3': os.path.join(MODEL_DIR, 'dset_03_time_1_ep_1000'),
'Dataset 4': os.path.join(MODEL_DIR, 'dset_04_time_1_ep_1000'),
}
whole_dset_fp_dd = {'Dataset 0': os.path.join(MODEL_DIR, 'whole_dset_00_ep_1000'),
'Dataset 1': os.path.join(MODEL_DIR, 'whole_dset_01_ep_1000'),
'Dataset 2': os.path.join(MODEL_DIR, 'whole_dset_02_ep_1000'),
'Dataset 3': os.path.join(MODEL_DIR, 'whole_dset_03_ep_1000'),
# 'Dataset 4': os.path.join(MODEL_DIR, 'whole_dset_04_ep_1000'),
}
model_dd = {k: torch.load(v, map_location='cpu') for k,v in model_fp_dd.items()}
whole_model_dd = {k: torch.load(v, map_location='cpu') for k,v in whole_dset_fp_dd.items()}
# -
if not os.path.isdir(PLOTS_DIR):
os.mkdir(PLOTS_DIR)
# + tags=[]
data_fp_dd = {'Dataset 0': os.path.join(DATA_DIR, '00_test.mat'),
'Dataset 1': os.path.join(DATA_DIR, '01_test.mat'),
'Dataset 2': os.path.join(DATA_DIR, '02_test.mat'),
'Dataset 3': os.path.join(DATA_DIR, '03_test.mat'),
'Dataset 4': os.path.join(DATA_DIR, '04_test.mat'),
'Dataset 5': os.path.join(DATA_DIR, '05_test.mat')}
data_dd = {k: sio.loadmat(v) for k,v in data_fp_dd.items()}
dataset_dd = {k: OneStepDataSet(v['output'], v['t'], v['x']) for k,v in data_dd.items()}
# -
# # Prediction Differences Between Similar ICs
def prepare_input(X):
# X has shape (nbatch, 1, grid_size)
s = X.shape[-1]
n_batches = X.shape[0]
# Convert to tensor
X_input = torch.view_as_real(torch.tensor(X, dtype=torch.cfloat))
# FNO code appends the spatial grid to the input as below:
x_grid = torch.linspace(-np.pi, np.pi, s).view(-1,1)
X_input = torch.cat((X_input, x_grid.repeat(n_batches, 1, 1)), axis=2)
return X_input
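`prepare_input` packs the real part, imaginary part, and the spatial grid into channels before the tensor goes to the FNO. A torch-free sketch of the same bookkeeping for a `(nbatch, s)` complex array (the shapes here are my simplification of the torch layout above):

```python
import numpy as np

def prepare_input_np(X):
    # X: complex array of shape (nbatch, s).
    # Returns (nbatch, s, 3) with channels [real, imag, x-grid], mirroring
    # the channel layout the torch version builds.
    n_batches, s = X.shape
    x_grid = np.linspace(-np.pi, np.pi, s)
    return np.stack(
        [X.real, X.imag, np.broadcast_to(x_grid, (n_batches, s))], axis=-1)

X = (np.arange(8) + 1j * np.arange(8)).reshape(2, 4)
print(prepare_input_np(X).shape)  # (2, 4, 3)
```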
def l2_normalized_error(pred, actual):
"""Compute the normalized L2 error along the last axis.

Parameters
----------
pred : array-like
Predicted solution values, same shape as `actual`.
actual : array-like
Ground-truth solution values.

Returns
-------
torch.Tensor
||pred - actual||_2 / ||actual||_2 with the last axis reduced away.
"""
errors = pred - actual
error_norms = torch.linalg.norm(torch.tensor(errors), dim=-1, ord=2)
actual_norms = torch.linalg.norm(torch.tensor(actual), dim=-1, ord=2)
normalized_errors = torch.divide(error_norms, actual_norms)
return normalized_errors
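The same normalized error can be written without torch; a NumPy version, handy for quick checks outside the notebook:

```python
import numpy as np

def l2_normalized_error_np(pred, actual):
    # ||pred - actual||_2 / ||actual||_2 along the last axis:
    # 0 for a perfect prediction, 1 for predicting all zeros.
    err = np.linalg.norm(pred - actual, ord=2, axis=-1)
    ref = np.linalg.norm(actual, ord=2, axis=-1)
    return err / ref

actual = np.array([[3.0, 4.0]])
print(l2_normalized_error_np(np.zeros((1, 2)), actual))  # [1.]
print(l2_normalized_error_np(actual, actual))            # [0.]
```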
# + tags=[]
with torch.no_grad():
preds_dd = {}
errors_dd = {}
for k in model_dd.keys():
model_k = model_dd[k]
dset_k = dataset_dd[k]
input = prepare_input(dset_k.X[:,0])
target = dset_k.X[:,1]
preds_k = model_k(input)
preds_dd[k] = preds_k
errors_dd[k] = l2_normalized_error(preds_k, target)
print("Finished with model ", k)
whole_dset_preds_dd = {}
whole_dset_errors_dd = {}
for k in whole_model_dd.keys():
model_k = whole_model_dd[k]
dset_k = dataset_dd[k]
input = prepare_input(dset_k.X[:,0])
target = dset_k.X[:,1]
preds_k = model_k(input)
whole_dset_preds_dd[k] = preds_k
whole_dset_errors_dd[k] = l2_normalized_error(preds_k, target)
print("Finished with model ", k)
# -
# ## Prediction plots for ICs freq [1, ..., 5]
for i in range(1):
pred_dd = {'FNO predictions': preds_dd['Dataset 0'][i].numpy()}
x_vals_dd = {'FNO predictions': np.linspace(-np.pi, np.pi, 1024)}
soln_x_vals = np.linspace(-np.pi, np.pi, 1024)
soln = dataset_dd['Dataset 0'].X[i,1].numpy()
plotting_utils.quick_prediction_plot(pred_dd, soln, x_vals_dd=x_vals_dd, soln_x_vals=soln_x_vals)
# +
def plot_average_freq_mistakes(preds_dd, solns_dd, show_n_modes=50, fp=None, abs_val_errors=True):
"""
solns has different ICs along axis 0
so does the values in preds_dd
"""
fig, ax = plt.subplots()
for k in preds_dd.keys():
preds_dft = np.abs(np.fft.fft(preds_dd[k], axis=-1))
soln_dft = np.abs(np.fft.fft(solns_dd[k], axis=-1))
if abs_val_errors:
diffs = np.abs(preds_dft - soln_dft)
else:
diffs = preds_dft - soln_dft
means = np.mean(diffs, axis=0)
stds = np.std(diffs, axis=0)
if abs_val_errors:
plt_low = np.clip(means - stds, a_min=0., a_max=None)
else:
plt_low = means - stds
plt_high = means + stds
ax.plot(means[:show_n_modes], label=k)
ax.fill_between(np.arange(show_n_modes), plt_low[:show_n_modes], plt_high[:show_n_modes], alpha=0.2)
ax.hlines(0, xmin=0, xmax=show_n_modes, linestyles='dashed', color='black', alpha=0.5)
ax.legend()
if fp is not None:
plt.savefig(fp)
else:
plt.show()
plt.close(fig)
# -
dd_for_plt = {'ICs freq [1, ..., 5]': preds_dd['Dataset 0'].numpy(),
'ICs freq [6, ..., 10]': preds_dd['Dataset 1'].numpy(),
'ICs freq [11, ..., 15]': preds_dd['Dataset 2'].numpy(), }
soln_dd_for_plt = {'ICs freq [1, ..., 5]': dataset_dd['Dataset 0'].X[:,1].numpy(),
'ICs freq [6, ..., 10]': dataset_dd['Dataset 1'].X[:,1].numpy(),
'ICs freq [11, ..., 15]': dataset_dd['Dataset 2'].X[:,1].numpy()}
plot_average_freq_mistakes(dd_for_plt, soln_dd_for_plt, abs_val_errors=False)
# ## Prediction plots for ICs freq [6, ..., 10]
i = 0
pred_dd = {'FNO predictions': preds_dd['Dataset 1'][i].numpy()}
x_vals_dd = {'FNO predictions': np.linspace(-np.pi, np.pi, 1024)}
soln_x_vals = np.linspace(-np.pi, np.pi, 1024)
soln = dataset_dd['Dataset 1'].X[i,1].numpy()
plotting_utils.quick_prediction_plot(pred_dd, soln, x_vals_dd=x_vals_dd, soln_x_vals=soln_x_vals)
# +
def quick_boxplot(errors_dd, names_dd=None, ref_hline=None, fp=None, title=None):
error_lst = []
key_lst = []
for k, errors in errors_dd.items():
error_lst.append(errors)
key_lst.append(k)
if names_dd is not None:
key_lst = [names_dd[k] for k in key_lst]
fig, ax = plt.subplots()
ax.set_yscale('log')
ax.set_ylabel('L2 Normalized Error')
ax.set_xlabel('FNO Model')
ax.set_title(title)
    if ref_hline is not None:
        ax.hlines(ref_hline, xmin=0.5, xmax=len(key_lst) + 0.5, linestyles='dashed')
    fig.patch.set_facecolor('white')
    ax.boxplot(error_lst)
    # set labels after boxplot, which otherwise resets the tick labels
    ax.set_xticklabels(labels=key_lst, rotation=45, ha='right')
fig.tight_layout()
if fp is not None:
plt.savefig(fp)
else:
plt.show()
plt.close(fig)
# -
names_dd = {'Dataset 0': 'ICs freq [1, ..., 5]',
'Dataset 1': 'ICs freq [6, ..., 10]',
'Dataset 2': 'ICs freq [11, ..., 15]',
'Dataset 3': 'ICs freq [16, ..., 20]'}
t = 'FNO test errors on different datasets'
fp=os.path.join(PLOTS_DIR, 'time_1_test_errors.png')
quick_boxplot(errors_dd, names_dd = names_dd, title=t, fp=fp)
quick_boxplot(whole_dset_errors_dd, names_dd=names_dd, title='FNO test errors on different datasets')
def double_boxplot(errors_dd_1, errors_dd_2, ref_hline=None, fp=None, title=None):
FUDGE_X_PARAM = 0.125
WIDTH_PARAM = 0.25
error_lst_1 = []
error_lst_2 = []
key_lst = []
for k in errors_dd_1.keys():
error_lst_1.append(errors_dd_1[k].numpy())
error_lst_2.append(errors_dd_2[k].numpy())
key_lst.append(k)
fig, ax = plt.subplots()
x_ticks = np.arange(len(error_lst_1))
x_data_1 = x_ticks - FUDGE_X_PARAM
x_data_2 = x_ticks + FUDGE_X_PARAM
bplot1 = ax.boxplot(error_lst_1, widths=WIDTH_PARAM, positions=x_data_1)
bplot2 = ax.boxplot(error_lst_2, widths=WIDTH_PARAM, positions=x_data_2)
ax.set_yscale('log')
ax.set_ylabel('L2 Normalized Error')
ax.set_xlabel('FNO Model')
ax.set_title(title)
ax.set_xticks(ticks=x_ticks)
ax.set_xticklabels(labels=key_lst, rotation=45, ha='right')
if ref_hline is not None:
ax.hlines(ref_hline, xmin=0.5, xmax=len(key_lst)+ 0.5, linestyles='dashed')
fig.patch.set_facecolor('white')
fig.tight_layout()
if fp is not None:
plt.savefig(fp)
else:
plt.show()
plt.close(fig)
# + tags=[]
# errors_dd.pop('Dataset 4')
double_boxplot(errors_dd, whole_dset_errors_dd)
# -
def make_composed_predictions(model, dset):
"""
"""
# print(ones_input.shape)
preds = torch.zeros_like(dset.X)
errors = torch.zeros((dset.X.shape[0], dset.X.shape[1]))
preds[:, 0] = dset.X[:, 0]
inputs_i = prepare_input(dset.X[:, 0])
for t_idx in range(1, dset.n_tsteps+1):
time = dset.t[t_idx]
predictions_i = model(inputs_i)
preds[:, t_idx] = predictions_i
inputs_i = prepare_input(predictions_i)
errors_i = l2_normalized_error(predictions_i, dset.X[:,t_idx])
errors[:,t_idx] = errors_i
return preds, errors
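The composition loop above repeatedly applies the learned one-step map to its own output; a minimal sketch of the same idea with a hypothetical linear stand-in for the model:

```python
import numpy as np

def one_step(u):
    return 0.9 * u  # hypothetical stand-in for the trained one-step operator

u = np.ones(8)
trajectory = [u]
for _ in range(4):
    u = one_step(u)       # feed the prediction back in as the next input
    trajectory.append(u)
# after 4 compositions every entry equals 0.9**4
```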
# + tags=[]
comp_pred_dd = {}
comp_error_dd = {}
with torch.no_grad():
for k in model_dd.keys():
model_k = model_dd[k]
dset_k = dataset_dd[k]
preds_k, errors_k = make_composed_predictions(model_k, dset_k)
comp_pred_dd[k] = preds_k
comp_error_dd[k] = errors_k
print("Finished with model ", k)
# + tags=[]
whole_dset_comp_pred_dd = {}
whole_dset_comp_error_dd = {}
with torch.no_grad():
for k in whole_model_dd.keys():
model_k = whole_model_dd[k]
dset_k = dataset_dd[k]
preds_k, errors_k = make_composed_predictions(model_k, dset_k)
whole_dset_comp_pred_dd[k] = preds_k
whole_dset_comp_error_dd[k] = errors_k
print("Finished with model ", k)
# + tags=[]
error_dd_for_plt = {k: v.numpy()[:, :10] for k,v in comp_error_dd.items()}
error_dd_for_plt.pop('Dataset 2')
plotting_utils.plot_time_errors(error_dd_for_plt)
# -
fp_pattern_dd = {'ICs freq [1, ..., 5]': os.path.join(RESULTS_DIR, "dset_00_time_1_{}.txt"),
'ICs freq [6, ..., 10]': os.path.join(RESULTS_DIR, "dset_01_time_1_{}.txt"),
'ICs freq [11, ..., 15]': os.path.join(RESULTS_DIR, "dset_02_time_1_{}.txt"),
'ICs freq [16, ..., 20]': os.path.join(RESULTS_DIR, "dset_03_time_1_{}.txt")}
train_df_dd = {k: pd.read_table(v.format('train')) for k,v in fp_pattern_dd.items()}
test_df_dd = {k: pd.read_table(v.format('test')) for k,v in fp_pattern_dd.items()}
for k in fp_pattern_dd.keys():
t = "Train/test data: " + k
plotting_utils.make_train_test_plot(train_df_dd[k], test_df_dd[k], title=t, log_scale=True)
with torch.no_grad():
ones_vec = torch.tensor(1., dtype=torch.float).repeat((input1.shape[0], 1,1))
output1_dd = {model_k: model(input1) for model_k, model in model_dd.items()}
no_W_output1_dd = {model_k: model(input1) for model_k, model in no_W_model_dd.items()}
output2_dd = {model_k: model(input2) for model_k, model in model_dd.items()}
no_W_output2_dd = {model_k: model(input2) for model_k, model in no_W_model_dd.items()}
# +
def plot_two_solutions(preds_dd, ic_dd, soln_dd, x_vals, fp=None, title=None, alpha=0.7, show_n_modes=50):
fig, ax = plt.subplots(2,3)
fig.set_size_inches(14, 7.5)
ax[0,0].set_title("$Re(u)$", size=20)
ax[0,1].set_title("$Im(u)$", size=20)
ax[0,2].set_title("$Abs(DFT(u))$", size=20)
ax[0,0].set_ylabel("ICs", size=20)
ax[1,0].set_ylabel("Predictions", size=20)
for k,v in ic_dd.items():
ax[0,0].plot(x_vals, np.real(v), alpha=alpha, label=k)
ax[0,1].plot(x_vals, np.imag(v), alpha=alpha, label=k)
v_dft = np.fft.fft(v)
v_dft_abs = np.abs(v_dft)
ax[0,2].plot(v_dft_abs[:show_n_modes], label=k)
# ax[2].plot(v_dft_abs[:show_n_modes], alpha=.3, label=k)
ax[0,2].set_ylabel("Abs(DFT(u))", size=13)
ax[0,2].set_xlabel("Frequency", size=13)
for k,v in preds_dd.items():
ax[1,0].plot(x_vals, np.real(v), alpha=alpha, label=k)
ax[1,1].plot(x_vals, np.imag(v), alpha=alpha, label=k)
v_dft = np.fft.fft(v)
v_dft_abs = np.abs(v_dft)
ax[1,2].plot(v_dft_abs[:show_n_modes], label=k)
# ax[2].plot(v_dft_abs[:show_n_modes], alpha=.3, label=k)
for k,v in soln_dd.items():
ax[1,0].plot(x_vals, np.real(v), '--', alpha=alpha, label=k)
ax[1,1].plot(x_vals, np.imag(v), '--', alpha=alpha, label=k)
v_dft = np.fft.fft(v)
v_dft_abs = np.abs(v_dft)
ax[1,2].plot(v_dft_abs[:show_n_modes], '--', alpha=alpha, label=k)
ax[1,0].legend()
ax[0,0].legend()
ax[1,2].set_ylabel("Abs(DFT(u))", size=13)
ax[1,2].set_xlabel("Frequency", size=13)
ax[0,2].legend()
ax[1,2].legend()
fig.suptitle(title, size=20)
fig.patch.set_facecolor('white')
fig.tight_layout()
if fp is not None:
plt.savefig(fp)
else:
plt.show()
plt.close(fig)
def plot_two_solutions_only_DFT(preds_dd, ic_dd, soln_dd, x_vals, fp=None, title=None, alpha=0.7, show_n_modes=50):
fig, ax = plt.subplots(1,2)
fig.set_size_inches(14, 7.5)
ax[0].set_title("$|\\mathcal{F}(u(0, x))|$", size=20)
ax[1].set_title("$|\\mathcal{F}(u(1, x))|$", size=20)
# ax[0,0].set_ylabel("ICs", size=20)
# ax[1,0].set_ylabel("Predictions", size=20)
for k,v in ic_dd.items():
# ax[0,0].plot(x_vals, np.real(v), alpha=alpha, label=k)
# ax[0,1].plot(x_vals, np.imag(v), alpha=alpha, label=k)
v_dft = np.fft.fft(v)
v_dft_abs = np.abs(v_dft)
ax[0].plot(v_dft_abs[:show_n_modes], label=k)
# ax[2].plot(v_dft_abs[:show_n_modes], alpha=.3, label=k)
ax[0].set_ylabel("Initial Conditions", size=13)
ax[0].set_xlabel("Frequency", size=13)
for k,v in preds_dd.items():
# ax[1,0].plot(x_vals, np.real(v), alpha=alpha, label=k)
# ax[1,1].plot(x_vals, np.imag(v), alpha=alpha, label=k)
v_dft = np.fft.fft(v)
v_dft_abs = np.abs(v_dft)
ax[1].plot(v_dft_abs[:show_n_modes], label=k)
# ax[2].plot(v_dft_abs[:show_n_modes], alpha=.3, label=k)
ax[1].set_ylabel("Predictions/Solutions", size=13)
ax[1].set_xlabel("Frequency", size=13)
for k,v in soln_dd.items():
# ax[1,0].plot(x_vals, np.real(v), '--', alpha=alpha, label=k)
# ax[1,1].plot(x_vals, np.imag(v), '--', alpha=alpha, label=k)
v_dft = np.fft.fft(v)
v_dft_abs = np.abs(v_dft)
ax[1].plot(v_dft_abs[:show_n_modes], '--', alpha=alpha, label=k)
ax[1].legend()
ax[0].legend()
# ax[1,2].set_ylabel("Abs(DFT(u))", size=13)
# ax[1,2].set_xlabel("Frequency", size=13)
# ax[0,2].legend()
# ax[1,2].legend()
fig.suptitle(title, size=20)
fig.patch.set_facecolor('white')
fig.tight_layout()
if fp is not None:
plt.savefig(fp)
else:
plt.show()
plt.close(fig)
# -
for k in model_dd.keys():
for i in range(5):
preds_dd = {'Preds_1': output1_dd[k].numpy()[i],
'Preds_2': output2_dd[k].numpy()[i]}
        ic_dd = {'IC_1': dset1.X[i,0].numpy(),
                 'IC_2': dset2.X[i,0].numpy()}
soln_dd = {'Soln_1': dset1.X[i,1].numpy(),
'Soln_2': dset2.X[i,1].numpy()}
solns = dset1.X[i, 1].numpy()
title = 'Test case ' + str(i) + ', model trained on ' + k
fp_i = os.path.join(PLOTS_DIR, 'compare_predictions_model_{}_test_case_{}.png'.format(model_name_dd[k], i))
plot_two_solutions_only_DFT(preds_dd, ic_dd, soln_dd, np.linspace(-np.pi, np.pi, 1024), title=title, fp=fp_i)
for k in no_W_model_dd.keys():
for i in range(5):
preds_dd = {'Preds_1': no_W_output1_dd[k].numpy()[i],
'Preds_2': no_W_output2_dd[k].numpy()[i]}
ic_dd = {'IC_1': dset1.X[i,0].numpy(),
'IC_2': dset2.X[i,0].numpy()}
soln_dd = {'Soln_1': dset1.X[i,1].numpy(),
'Soln_2': dset2.X[i,1].numpy()}
# solns = dset1.X[i, 1].numpy()
title = 'Test case ' + str(i) + ', No W channel, model trained on ' + k
fp_i = os.path.join(PLOTS_DIR, 'no_W_compare_predictions_model_{}_test_case_{}.png'.format(model_name_dd[k], i))
plot_two_solutions_only_DFT(preds_dd, ic_dd, soln_dd, np.linspace(-np.pi, np.pi, 1024), title=title, fp=fp_i)
# +
def make_rescaled_predictions(model, dset):
"""
"""
# print(ones_input.shape)
preds = torch.zeros_like(dset.X)
x_vals = torch.zeros((dset.X.shape[1], dset.X.shape[2]))
errors = torch.zeros((dset.X.shape[0], dset.X.shape[1]))
# print(x_vals.shape)
x_vals[0] = dset.x_grid.reshape((1,-1))
preds[:, 0] = dset.X[:, 0]
for t_idx in range(1, dset.n_tsteps+1):
time = dset.t[t_idx]
rescaled_ICs = prepare_input(dset.rescaled_ICs[:,t_idx])
# print(rescaled_ICs.shape)
# x_vals[t_idx] = rescaled_ICs[0, :,2]
predictions_i = model(rescaled_ICs)
# inv_root_t = 1 / torch.sqrt(time)
root_t = torch.sqrt(time)
predictions_i = root_t * predictions_i
preds[:, t_idx] = predictions_i
errors_i = l2_normalized_error(predictions_i, dset.X[:,t_idx])
errors[:,t_idx] = errors_i
# print("Finished predictions at ", t_idx, inv_root_t)
return preds, errors
# + tags=[]
preds_dd = {}
errors_dd = {}
with torch.no_grad():
for k, model in model_dd.items():
preds_i, errors_i = make_rescaled_predictions(model, scaling_dset)
preds_dd[k] = preds_i
errors_dd[k] = errors_i
print("Finished with ", k)
# preds_composed, errors_composed = make_composed_predictions(model, time_dset)
# preds_rescaled, x_vals_rescaled, errors_rescaled = make_rescaled_predictions(model, scaling_dset)
# + tags=[]
errors_dd_i = {k: np.delete(v.numpy(), [59], axis=0) for k,v in errors_dd.items()}
fp_time_errors = os.path.join(PLOTS_DIR, 'scaling_time_errors.png')
plotting_utils.plot_time_errors(errors_dd_i, title='Time-Rescaling Preds with FNO trained on different ICs') #, fp=fp_time_errors)
# + tags=[]
test_cases_for_plot = list(range(3))
for test_case in test_cases_for_plot:
solns = scaling_dset.X.numpy()[test_case]
for k,v in preds_dd.items():
fp_i = os.path.join(PLOTS_DIR, 'model_{}_test_case_{}.png'.format(model_name_dd[k], test_case))
print("Working on model {}, case {}".format(model_name_dd[k], test_case))
preds_dd_i = {k: v.numpy()[test_case]}
plotting_utils.plot_one_testcase_panels(preds_dd_i, solns, plot_errors=True, show_n_timesteps=10, fp=fp_i)
# break
# + tags=[]
pred_arr = preds_dd['Mixed ICs']
print(pred_arr.shape)
plt.plot(np.real(pred_arr[0,2,:].numpy()))
# +
train_pattern = os.path.join(RESULTS_DIR, '{}_train_FNO_train.txt')
test_pattern = os.path.join(RESULTS_DIR, '{}_train_FNO_test.txt')
for k,v in model_name_dd.items():
train_fp_i = train_pattern.format(v)
test_fp_i = test_pattern.format(v)
train_df = pd.read_table(train_fp_i)
test_df = pd.read_table(test_fp_i)
title_i = 'Training set: ' + k
fp_i = os.path.join(PLOTS_DIR, 'train_test_{}.png'.format(v))
plotting_utils.make_train_test_plot(train_df, test_df, log_scale=True, title=title_i, fp=fp_i)
# -
DATA_DIR = '/local/meliao/projects/fourier_neural_operator/data/'
NEW_PLOTS_DIR = '/local/meliao/projects/fourier_neural_operator/experiments/18_train_with_rescaling/plots/mixed_IC_model'
if not os.path.isdir(NEW_PLOTS_DIR):
os.mkdir(NEW_PLOTS_DIR)
test_dset_fp_dd = {'ICs freq [1, ..., 5]': os.path.join(DATA_DIR, '2021-06-24_NLS_data_04_test.mat'),
'ICs freq [6, ..., 10]': os.path.join(DATA_DIR, '2021-07-22_NLS_data_06_test.mat'),
'ICs freq [11, ..., 15]': os.path.join(DATA_DIR, '2021-08-04_NLS_data_09_test.mat'),
'ICs freq [16, ..., 20]': os.path.join(DATA_DIR, '2021-08-04_NLS_data_10_test.mat'),
'Mixed ICs': os.path.join(DATA_DIR, '2021-08-08_NLS_mixed_IC_data_test.mat'),
}
# +
test_data_dd = {k: sio.loadmat(v) for k,v in test_dset_fp_dd.items()}
test_dset_dd = {k: TimeScalingDataSet(v['output'], v['t'], v['x']) for k,v in test_data_dd.items()}
# + tags=[]
preds_dd = {}
errors_dd = {}
mixed_model = model_dd['Mixed ICs']
with torch.no_grad():
for k, dset in test_dset_dd.items():
        preds_i, errors_i = make_rescaled_predictions(mixed_model, dset)
preds_dd[k] = preds_i
errors_dd[k] = errors_i
print("Finished with ", k)
# preds_composed, errors_composed = make_composed_predictions(model, time_dset)
# preds_rescaled, x_vals_rescaled, errors_rescaled = make_rescaled_predictions(model, scaling_dset)
# + tags=[]
errors_dd_i = {k: v.numpy() for k,v in errors_dd.items()}
t = 'Model trained on Mixed ICs and tested on different datasets'
fp = os.path.join(NEW_PLOTS_DIR, 'mixed_ICs_time_errors.png')
plotting_utils.plot_time_errors(errors_dd_i, title=t, fp=fp)
# + tags=[]
test_cases_for_plot = list(range(3))
for test_case in test_cases_for_plot:
for k, dset in test_dset_dd.items():
solns = dset.X.numpy()[test_case]
preds_dd_i = {k: preds_dd[k].numpy()[test_case]}
fp_i = os.path.join(NEW_PLOTS_DIR, 'panels_dset_{}_test_case_{}.png'.format(model_name_dd[k], test_case))
plotting_utils.plot_one_testcase_panels(preds_dd_i, solns, show_n_timesteps=10, fp=fp_i)
print("Finished dset {} and test case {}".format(model_name_dd[k], test_case))
# break
# -
# Source notebook: experiments/20_investigate_frequency_response/Compare_n_x_grid.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using MCC-RGB and MCC
# This is a simple example of applying `pymccrgb`'s ground classification algorithm to a multispectral lidar dataset with colors at each point. It is compared to the MCC algorithm developed by [<NAME>, 2007](https://doi.org/10.1109/TGRS.2006.890412).
import warnings
warnings.filterwarnings('ignore')
from pymccrgb.core import mcc, mcc_rgb
from pymccrgb.plotting import plot_points_3d, plot_results
from pymccrgb.datasets import load_mammoth_lidar
# First, we load the dataset, a lidar point cloud over Horseshoe Lake near Mammoth Mountain, CA. This was acquired by the [National Center for Airborne Laser Mapping](http://ncalm.cive.uh.edu/) with an Optech Titan three-channel lidar scanner.
data = load_mammoth_lidar(npoints=1e6)
plot_points_3d(data)
# We can use the standard MCC algorithm (`mcc`) to classify the points by relative height.
data_mcc, labels_mcc = mcc(data, verbose=True)
# ... or use MCC-RGB (`mcc_rgb`) to classify points by relative height and color.
#
# To do this, we specify a training height tolerance. In this case, points over 0.3 m relative height will be considered vegetation (non-ground) points. These points are used to train a color-based classifier that is used to re-classify low non-ground points.
data_new, labels_new = mcc_rgb(data, verbose=True)
# Finally the `plot_results` convenience function compares the final results of each algorithm visually.
plot_results(data, labels_mcc, labels_new)
# Source notebook: docs/source/examples/mcc-rgb.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # <NAME>
# # IMPORT DATA
# +
import gspread
from oauth2client.service_account import ServiceAccountCredentials
# use creds to create a client to interact with the Google Drive API
scope = ['https://spreadsheets.google.com/feeds']
creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', scope)
client = gspread.authorize(creds)
# Find a workbook by name and open the first sheet
# Make sure you use the right name here.
sheet = client.open("DatosTFG_SistemasRecomendacion").sheet1
# Extract and print all of the values
list_of_hashes = sheet.get_all_records()
print(list_of_hashes)
# -
import pandas as pd
tabla = pd.DataFrame(data=sheet.get_all_records())
# list columns grouped by dtype
lista = list(tabla.groupby(tabla.dtypes, axis=1))
lista
# # SHOW SUM OF RATINGS
import numpy as np
import matplotlib.pyplot as plt
# count how many ratings of each value (1-5 stars) every subject received
l = {}
for asig in tabla.columns:
    stars = [0, 0, 0, 0, 0]
    for i in tabla[asig]:
        if i in (1, 2, 3, 4, 5):
            stars[i - 1] += 1
    l[asig] = stars
import pprint as pp
pp.pprint(l)
# # SHOW RATING CHARTS FOR EACH SUBJECT
# +
res=pd.DataFrame(data=l)
res=res.T
res.columns=['1','2','3','4','5']
res.head(10).plot(kind='bar')
plt.show()
# -
res2=res.T
lis=[]
count=0
for i in res2.columns:
    if i != 'Token' and i != 'Submitted At':
lis.append(i)
if count==9:
newpd = res2[lis]
newpd.T.plot(kind='bar')
plt.show()
lis=[]
count=0
else:
count+=1
print(lis,"listaif")
newpd = res2[lis]
newpd.T.plot(kind='bar')
plt.show()
# store in a dict of dicts keyed by user, whose values are dicts
# (key=subject name, value=rating). We skip the Token and the questionnaire submission date.
asignaturas= list(tabla.columns)
diccAsi={}
diccAsi.setdefault(1,{})
for k in asignaturas:
    if k != 'Token' and k != 'Submitted At':
for i, j in zip(tabla[k], range(len(tabla['Algoritmia'])) ):
diccAsi.setdefault(j,{})
diccAsi[j][k]=i
diccAsi
# access all of one specific user's ratings across all subjects, together with each subject's name
for i in diccAsi[0]:
print(diccAsi[0][i], i)
# # COMPUTE AND STORE THE RECOMMENDER SYSTEM IN A DISTANCE MATRIX
tabla1 = pd.DataFrame.from_dict(diccAsi)
tabla1= tabla1.T
# convert empty strings to NaN to avoid errors when computing the Pearson correlation coefficient between an int and a string ('' counts as a str)
tabla1= tabla1.replace('', np.nan, regex=True)
tabla1
def med_pond(tabla, list1):
return tabla[list1].mean()
def med_pond_ix(tabla, list1):
    return tabla.loc[list1].mean()
def coef_corr_pearson(tabla1, usu1, usu2, rows=True):
if rows:
        # store, as an array of arrays, the lists of non-empty answers from both users
        listUsu = tabla1.loc[[usu1, usu2]].dropna(axis=1).to_numpy()
        usu1_med = med_pond_ix(tabla1, usu1)
        usu2_med = med_pond_ix(tabla1, usu2)
    else:
        listUsu = tabla1[[usu1, usu2]].dropna(axis=0).to_numpy().T
        usu1_med = med_pond(tabla1, usu1)
        usu2_med = med_pond(tabla1, usu2)
if listUsu.any():
correl_pearson= np.dot(listUsu[0,:] - usu1_med, listUsu[1,:]-usu2_med)/(np.sqrt(np.dot(listUsu[0,:]-usu1_med, listUsu[0, :]-usu1_med))*
np.sqrt(np.dot(listUsu[1,:]-usu2_med, listUsu[1, :]-usu2_med)))
if np.isnan(correl_pearson):
return 0
else:
return correl_pearson
else:
return 0
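The hand-rolled Pearson correlation above can be checked against `np.corrcoef` on a small pair of rating vectors (made-up numbers):

```python
import numpy as np

a = np.array([4.0, 5.0, 3.0, 4.0])
b = np.array([2.0, 4.0, 1.0, 3.0])
# same centered dot-product formula as coef_corr_pearson
r = np.dot(a - a.mean(), b - b.mean()) / (
    np.sqrt(np.dot(a - a.mean(), a - a.mean()))
    * np.sqrt(np.dot(b - b.mean(), b - b.mean()))
)
assert np.isclose(r, np.corrcoef(a, b)[0, 1])
```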
def matriz_sim(tabla, correl_pearson=coef_corr_pearson ):
matrizSim={}
listaAsign= tabla.columns.tolist()
for asign in listaAsign:
otrasAsign= listaAsign[listaAsign.index(asign): len(listaAsign)]
matrizSim[asign]= dict([(asign2, correl_pearson(tabla, asign, asign2, rows=False)) for asign2 in otrasAsign])
for asign in listaAsign:
otrasAsign= listaAsign[listaAsign.index(asign): len(listaAsign)]
for asign2 in otrasAsign:
matrizSim[asign2][asign]= matrizSim[asign][asign2]
return matrizSim
matrizSim= pd.DataFrame(matriz_sim(tabla1))
# # PREDICTION AND RECOMMENDATION
def mezl_dat(tabla1, matrizSim, usu, asignatura):
    vot_usu = pd.isnull(tabla1.loc[usu])
    vot_usu[asignatura] = True
    for i, k, j in zip(matrizSim[asignatura], tabla1.loc[usu], vot_usu):
        if not j:
            yield (k, i)
def predicc_usu_asign(listDatos, n):
divis=0.0
div=0.0
for i in listDatos[0: n]:
if i[1]>0:
div+=abs(i[1])
divis+=(i[1]*i[0])
return divis/div if div!=0 else 0
def dev_recom_productos(tabla1, matrizSim, asignatura, usu, n):
listDatos=list(mezl_dat(tabla1,matrizSim,usu, asignatura))
listDatos.sort(key=lambda x:x[1],reverse=True)
return round(predicc_usu_asign(listDatos,n),0)
dev_recom_productos(tabla1, matrizSim, 'Algoritmia', 2, 10)
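A worked example of the similarity-weighted prediction used above, with made-up (rating, similarity) neighbors:

```python
neighbors = [(5, 0.9), (3, 0.5)]  # (rating, similarity), sorted by similarity
num = sum(sim * rating for rating, sim in neighbors if sim > 0)
den = sum(sim for _, sim in neighbors if sim > 0)
pred = round(num / den) if den else 0
print(pred)  # (0.9*5 + 0.5*3) / 1.4 = 4.29 -> 4
```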
# # SAVE DATA
# +
# CSV format
#tabla1.to_csv('datos.data')
# +
# binary format
import pickle
def guardarDatos(nombreArchivo, tabla1):
archivo = open(nombreArchivo, "wb")
pickle.dump(tabla1, archivo)
archivo.close()
# -
guardarDatos('archivoDatos.bin', tabla1)
# # READ DATA
# binary format
def recuperarDatos(nombreArchivo):
archivo = open(nombreArchivo, "rb")
tablaDatos = pickle.load(archivo)
archivo.close()
return tablaDatos
tablaDatos= recuperarDatos('archivoDatos.bin')
# +
# CSV format
#tablaDatosCSV=pd.read_csv('datos.data', index_col=0)
# -
# Source notebook: Prototipo_Sistema_Recomendacion/BasadoEnProductos.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Disable warnings
import warnings
warnings.filterwarnings('ignore')
# + [markdown] pycharm={"name": "#%% md\n"}
# # Stock Market Predictions with a Long Short-Term Memory Neural Network
#
# Say you're planning to invest in the stock market, so you want to model fluctuations in price by looking at the history of a sequence of prices to accurately predict what future prices will be. When analyzing a sequence of data which were observed in some constant increment of time, where each observation depends directly on one or more previous observations (a stock price tomorrow directly depends on its price today), you need a time series model. In this workshop, we'll start by investigating two well-known models, then compare their prediction accuracy to an LSTM neural network.
#
# Adapted from: https://www.datacamp.com/community/tutorials/lstm-python-stock-market
#
# ### Import Necessary Packages
# -
from pandas_datareader import data
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
import urllib.request, json
import os
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
# ### Load Data
#
# The tutorial that this workshop was adapted from outlines two sources of data for use in the remainder of the workshop. For simplicity, we'll stick with the Kaggle data set that was provided. Feel free to tinker with analyzing different stock symbols (i.e., data for different companies' stock prices), although I can't guarantee that everything will work as I've only tested the code with the Kaggle data for Hewlett-Packard (HP). Generally speaking, stock prices can be measured with the following metrics:
# - Open: Opening stock price of a time period
# - Close: Closing stock price of a time period
# - High: Highest stock price of a time period
# - Low: Lowest stock price of a time period
#
# Note that these metrics can be analyzed for various time intervals (e.g., daily, hourly, 15 minutes, 5 minutes, etc.), but in this workshop, we'll focus on daily prices over the course of multiple years. In theory, you could build a model on any interval of time you have data for, but exploring the benefits and drawbacks of such variations are outside of the scope of this workshop.
# + pycharm={"name": "#%%\n"}
data_source = 'kaggle' # alphavantage or kaggle
if data_source == 'alphavantage':
api_key = '<KEY>'
# American Airlines stock market prices
ticker = 'AAL'
# JSON file with all the stock market data for AAL from the last 20 years
url_string = f'https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol={ticker}&outputsize=full&apikey={api_key}'
# Save data to this file
file_to_save = f'../data/raw/stock_market_data-{ticker}.csv'
# If you haven't already saved data,
# Go ahead and grab the data from the url
# And store date, low, high, volume, close, open values to a Pandas DataFrame
if not os.path.exists(file_to_save):
with urllib.request.urlopen(url_string) as url:
data = json.loads(url.read().decode())
# extract stock market data
data = data['Time Series (Daily)']
df = pd.DataFrame(columns=['Date', 'Low', 'High', 'Close', 'Open'])
for k,v in data.items():
date = dt.datetime.strptime(k, '%Y-%m-%d')
data_row = [date.date(), float(v['3. low']), float(v['2. high']), float(v['4. close']), float(v['1. open'])]
df.loc[-1,:] = data_row
df.index = df.index + 1
print(f'Data saved to : {file_to_save}')
df.to_csv(file_to_save)
# If the data is already there, just load it from the CSV
else:
print('File already exists. Loading data from CSV')
df = pd.read_csv(file_to_save)
else:
# You will be using HP's data. Feel free to experiment with other data.
# But while doing so, be careful to have a large enough dataset and also pay attention to the data normalization
df = pd.read_csv(os.path.join('../data/external/Stocks', 'hpq.us.txt'), delimiter=',', usecols=['Date', 'Open', 'High', 'Low', 'Close'])
print('Loaded data from the Kaggle repository')
# -
# ### Sort and Check Data
#
# Note that it is extremely important for time series data to be ordered by time, otherwise you would be training your model on some arbitrary sequence of observations which may be detrimental to its efficacy.
# + pycharm={"name": "#%%\n"}
# Sort DataFrame by date
df = df.sort_values('Date')
df.head()
# + pycharm={"name": "#%%\n"}
plt.figure(figsize=(18, 9))
plt.plot(range(df.shape[0]), (df['Low'] + df['High']) / 2.0)
plt.xticks(range(0, df.shape[0], 500), df['Date'].loc[::500], rotation=45)
plt.xlabel('Date', fontsize=18)
plt.ylabel('Mid Price', fontsize=18)
plt.show()
# -
# ### Split Data
#
# As per usual, you should split your data into training and testing sets, so your model is validated upon its predictions for observations it has never seen before.
# +
# First calculate the mid prices from the highest and lowest
high_prices = df.loc[:, 'High'].to_numpy()
low_prices = df.loc[:, 'Low'].to_numpy()
mid_prices = (high_prices + low_prices) / 2.0
# Split data into training and test sets
train_data = mid_prices[:11000]
test_data = mid_prices[11000:]
# -
# ### Normalize Data
#
# Before training a model, you must normalize the data. Since different time periods of data have different value ranges, we normalize the data by "binning" the full time series into windows of some specified size (in this case it is 2500). We then smooth **only** the training data, using the exponential moving average, to reduce the amount of noise our models encounter.
# +
# Scale the data to be between 0 and 1
# When scaling remember! You normalize both test and train data with respect to training data
# Because you are not supposed to have access to test data
scaler = MinMaxScaler()
train_data = train_data.reshape(-1, 1)
test_data = test_data.reshape(-1, 1)
# Train the Scaler with training data and smooth data
smoothing_window_size = 2500
for di in range(0, 10000, smoothing_window_size):
scaler.fit(train_data[di:di + smoothing_window_size, :])
train_data[di:di + smoothing_window_size, :] = scaler.transform(train_data[di:di + smoothing_window_size, :])
# You normalize the last bit of remaining data
scaler.fit(train_data[di + smoothing_window_size:, :])
train_data[di + smoothing_window_size:, :] = scaler.transform(train_data[di + smoothing_window_size:, :])
# Reshape both train and test data
train_data = train_data.reshape(-1)
# Normalize test data
test_data = scaler.transform(test_data).reshape(-1)
# Now perform exponential moving average smoothing
# So the data will have a smoother curve than the original ragged data
EMA = 0.0
gamma = 0.1
for ti in range(11000):
EMA = gamma * train_data[ti] + (1 - gamma) * EMA
train_data[ti] = EMA
# Used for visualization and test purposes
all_mid_data = np.concatenate([train_data,test_data], axis=0)
# -
# ## One-Step Ahead Prediction via Averaging
#
# We will compare different methods of modeling the stock price time series we have based on Mean Squared Error (MSE), which is calculated by averaging the squared error of each prediction we generate over all observations.
#
# ### Standard Average
# $$x_{t+1}=\frac{1}{N}\sum_{i=t-N}^t x_i$$
# In this case, we're saying the prediction at time $t+1$ is the average of the stock prices observed within a window of time $t-N$ to time $t$.
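On a toy series the formula reads as follows (hypothetical numbers; each prediction is the mean of the previous `window` observations, and MSE averages the squared prediction errors):

```python
import numpy as np

series = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
window = 3
# each prediction is the mean of the preceding `window` values
preds = [series[t - window:t].mean() for t in range(window, len(series))]
# preds predict series[3] and series[4]: [2.0, 3.0]
mse = np.mean((np.array(preds) - series[window:]) ** 2)
print(mse)  # ((2-4)**2 + (3-5)**2) / 2 = 4.0
```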
# +
window_size = 100
N = train_data.size
std_avg_predictions = []
std_avg_x = []
mse_errors = []
for pred_idx in range(window_size, N):
    date = df.loc[pred_idx, 'Date']
std_avg_predictions.append(np.mean(train_data[pred_idx - window_size:pred_idx]))
mse_errors.append((std_avg_predictions[-1] - train_data[pred_idx])**2)
std_avg_x.append(date)
print(f'MSE error for standard averaging: {np.mean(mse_errors):.5f}')
# -
plt.figure(figsize=(18,9))
plt.plot(range(df.shape[0]), all_mid_data, color='b', label='True')
plt.plot(range(window_size, N), std_avg_predictions, color='orange', label='Prediction')
# plt.xticks(range(0, df.shape[0], 50), df['Date'].loc[::50], rotation=45)
plt.xlabel('Date')
plt.ylabel('Mid Price')
plt.legend(fontsize=18)
plt.show()
# Notice that the model's predictions follow the actual behavior of the stock prices fairly accurately, although it seems to lag behind the actual price movement in the market by a few days. It seems as though this model is relatively useful for making short-term price predictions (i.e., a day or two ahead), but we will continue to investigate further.
#
# ### Exponential Moving Average
# $$x_{t+1}=EMA_t=\gamma\times EMA_{t-1}+(1-\gamma)x_t$$
# Here, $EMA_0=0$ and $EMA$ is the exponential moving average value you maintain over time. When predicting price for time $t+1$, $\gamma$ dictates how the immediately preceding observation (time $t$) is weighted against the prior moving average for time $t-1$.
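# A small sketch (with a made-up series) showing that this recurrence is a geometrically weighted average of past observations: unrolling $EMA_t=\gamma EMA_{t-1}+(1-\gamma)x_t$ with $EMA_0=0$ gives $EMA_t=(1-\gamma)\sum_{k=0}^{t-1}\gamma^k x_{t-k}$, so older observations decay geometrically.

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # toy observations
gamma = 0.5

# Recursive form, as used in the loop below
ema = 0.0
for v in x:
    ema = gamma * ema + (1.0 - gamma) * v

# Unrolled closed form: weight for x[t-1], x[t-2], ... decays as gamma**k
t = len(x)
weights = (1.0 - gamma) * gamma ** np.arange(t)
closed_form = np.sum(weights * x[::-1])
assert np.isclose(ema, closed_form)
```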
# +
window_size = 100
N = train_data.size
run_avg_predictions = []
run_avg_x = []
mse_errors = []
running_mean = 0.0
run_avg_predictions.append(running_mean)
decay = 0.5
for pred_idx in range(1, N):
running_mean = running_mean * decay + (1.0 - decay)*train_data[pred_idx - 1]
run_avg_predictions.append(running_mean)
mse_errors.append((run_avg_predictions[-1] - train_data[pred_idx])**2)
    run_avg_x.append(df.loc[pred_idx, 'Date'])
print(f'MSE error for EMA averaging: {np.mean(mse_errors):.5f}')
# -
plt.figure(figsize=(18,9))
plt.plot(range(df.shape[0]), all_mid_data, color='b', label='True')
plt.plot(range(0, N), run_avg_predictions, color='orange', label='Prediction')
#plt.xticks(range(0, df.shape[0], 50), df['Date'].loc[::50], rotation=45)
plt.xlabel('Date')
plt.ylabel('Mid Price')
plt.legend(fontsize=18)
plt.show()
# It is apparent that the line of predictions nearly perfectly mirrors the actual stock price movement, but is it really that useful? In practical applications, you would ideally like to be able to make predictions for times $t+1$, $t+2$, etc. For the two models we just explored, however, you're only ever able to make a single prediction for the subsequent period of time (time $t+1$). What if you instead wanted to make a prediction 30 days in advance? For this purpose, we will explore the use of long short-term memory neural networks.
| notebooks/.ipynb_checkpoints/intro-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# # Video Games Sales Analysis
#
# In this project I will use the video games sales data for analysis. I will try to gain useful insights from this dataset and improve my skills while doing so. This analysis is part of the Zero to Pandas course offered by Jovian ML and FreecodeCamp.
# ## Downloading the Dataset
#
# The dataset can be obtained from this kaggle link: https://www.kaggle.com/gregorut/videogamesales/notebooks
# This dataset contains a list of video games with sales greater than 100,000 copies. It has sales records for North America, Europe, Japan and the rest of the world, in millions of units. Some regions appear to have been excluded, but the dataset does not say which.
# !pip install jovian opendatasets --upgrade --quiet
# Let's begin by downloading the data, and listing the files within the dataset.
# Change this
dataset_url = 'https://www.kaggle.com/gregorut/videogamesales/notebooks'
import opendatasets as od
od.download(dataset_url)
# The dataset has been downloaded and extracted.
# Change this
data_dir = './videogamesales'
import os
os.listdir(data_dir)
# Let us save and upload our work to Jovian before continuing.
project_name = "video-game-sales-analysis" # change this (use lowercase letters and hyphens only)
# !pip install jovian --upgrade -q
import jovian
jovian.commit(project=project_name)
# ## Data Preparation and Cleaning
#
# This is the main step before the data analysis portion. In this step I will make the dataset ready for analysis, i.e. choose useful columns, handle missing or incorrect values, and perform parsing if necessary.
#
#
import pandas as pd
# Loading the dataset into a pandas dataframe
video_sales_df = pd.read_csv('./videogamesales/vgsales.csv')
video_sales_df.head() # Having a look at the first five rows of the dataframe
# Evaluating the shape of our dataframe
video_sales_df.shape
# We can see that our dataset has 16598 rows and 11 columns
# List of columns in our dataframe
video_sales_df.columns
# As the number of columns in our dataset is not very high, we can use all the columns to some extent
# Now let's see some basic information about our dataset
video_sales_df.info()
# As we can observe in the result above, the Year column has the data type float when it should be datetime, so we convert it with the help of pandas functions.
video_sales_df['Year'] = pd.to_datetime(video_sales_df['Year'], format='%Y', errors='coerce')
# Now we can confirm the data type of the Year column
video_sales_df.dtypes
# We can see that the data type of our Year column is now datetime
# Now let us have some stats about the numeric columns of our data set
video_sales_df.describe()
# Our Rank column is also numeric, but it is not meaningful to compute statistics on it, so we remove it from the statistical summary
video_sales_df_copy = video_sales_df.copy() # A copy dataframe to remove Rank column from statistical description
video_sales_df_copy.drop(columns = 'Rank', inplace = True) # dropping the Rank column
video_sales_df_copy.describe() # Now we don't have the Rank column in the statistical info
# There doesn't seem to be any problem with our numeric columns as of now. Note that all the sales figures are in millions.
import jovian
jovian.commit()
# ## Exploratory Analysis and Visualization
#
# Before we can ask interesting questions about the dataset, it helps to understand what the different columns look like.
#
#
# Let's begin by importing `matplotlib.pyplot` and `seaborn`.
# +
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
# -
# ## Genre
# Let's explore the Genre column of our dataframe to see which genres account for the most games in the dataset
video_sales_df.Genre.value_counts()
plt.pie(video_sales_df.Genre.value_counts(), labels=video_sales_df.Genre.value_counts().index, autopct='%1.0f%%', pctdistance=1.1, labeldistance=1.2)
plt.title('Video Game Genre Share')
plt.ylabel(' ')
plt.show()
# Action games have the highest count in the dataset while Puzzle games have the least. This could be because many people are drawn to games involving action, while many consider puzzle games boring.
# ## Platform
# Let's analyse the number of games per platform
video_sales_df.Platform.value_counts()
plt.figure(figsize = (15,9))
plt.title('Number of different games per game platforms')
sns.countplot(y='Platform', data=video_sales_df)
plt.show()
# The most games are available for the DS and PS2 platforms, which might mean that the DS and PS2 have the most users, but we can't be sure about that.
#
# ## Publisher
# Let's analyse the Publisher column to see how many games each company publishes. We consider only publishers with more than 100 games.
plt.figure(figsize=(15,12))
publishers = video_sales_df.Publisher.value_counts()
publishers = publishers[publishers.values > 100]
publishers.plot(kind = 'barh')
plt.xlabel('Count')
plt.ylabel('Publishing Company')
plt.title('Publishing companies with more than 100 games')
plt.show()
# We see that Electronic Arts is the undisputed leader in terms of published game count. One reason may be the popularity of its sports-based games like FIFA and Cricket.
# ## Year
video_sales_df['Year'].value_counts().sort_values(ascending=False)
video_sales_df['year_only'] = video_sales_df['Year'].dt.year
video_sales_df['year_only'] = video_sales_df['year_only'].astype(int, errors='ignore')
plt.figure(figsize = (15,9))
plt.title('Number of games released per year')
sns.countplot(y='year_only', data=video_sales_df)
plt.show()
# We can see that the number of games released was highest between 2006 and 2011, with a decline after that
# ## Sales
# Lets analyse the numeric sales column for North America, Europe, Japan and Rest of the World
#
fig, axes = plt.subplots(2,2, figsize= (18, 14))
axes[0, 0].scatter(video_sales_df.year_only, video_sales_df.NA_Sales, color = 'b')
axes[0, 0].set_title('North America Video Game Sales')
axes[0, 0].set_xlabel('Year')
axes[0, 0].set_ylabel('Sales in Million')
axes[0, 1].scatter(video_sales_df.year_only, video_sales_df.EU_Sales, color = 'r')
axes[0, 1].set_title('Europe Video Game Sales')
axes[0, 1].set_xlabel('Year')
axes[0, 1].set_ylabel('Sales in Million')
axes[1, 0].scatter(video_sales_df.year_only, video_sales_df.JP_Sales, color = 'g')
axes[1, 0].set_title('Japan Video Game Sales')
axes[1, 0].set_xlabel('Year')
axes[1, 0].set_ylabel('Sales in Million')
axes[1, 1].scatter(video_sales_df.year_only, video_sales_df.Other_Sales)
axes[1, 1].set_title('Other Regions Video Game Sales')
axes[1, 1].set_xlabel('Year')
axes[1, 1].set_ylabel('Sales in Million')
plt.show()
# In the visualization above we see video game sales in four different regions; the outliers are few in all of them. Sales are notably higher in the North American region.
# Let us save and upload our work to Jovian before continuing
import jovian
jovian.commit()
# ## Asking and Answering Questions
#
# We've already gained several insights about the dataset and video game sales in different regions. Let us further improve our insights by asking some interesting questions.
#
# #### Q1: What are the top five game genres?
video_sales_df.Genre.value_counts().head(5)
video_sales_df.Genre.value_counts().head(5).plot(kind = 'bar', color = 'g')
plt.xlabel('Genre')
plt.ylabel('Count')
plt.title('Top 5 game genre')
plt.show()
# **Action** and **Sports** are the top genres by game count, followed by **miscellaneous**, **role-playing** and **shooter** games. This also helps explain why Electronic Arts has a large share, since it mainly makes sports-based games.
# #### Q2: Which region has the largest share in the global video games sales?
Total_Video_Games_Sales = video_sales_df.Global_Sales.sum()
North_America_Sales_percent = (video_sales_df.NA_Sales.sum()*100)/Total_Video_Games_Sales
Europe_Sales_percent = (video_sales_df.EU_Sales.sum()*100)/Total_Video_Games_Sales
Japan_Sales_percent = (video_sales_df.JP_Sales.sum()*100)/Total_Video_Games_Sales
Other_Region_Sales_percent = (video_sales_df.Other_Sales.sum()*100)/Total_Video_Games_Sales
plt.figure(figsize=(16,9))
plt.pie([North_America_Sales_percent, Europe_Sales_percent, Japan_Sales_percent, Other_Region_Sales_percent], labels=['North America', 'Europe', 'Japan', 'Other Regions'], autopct='%1.0f%%', pctdistance=1.1, labeldistance=1.2)
plt.title('Video Game Genre Share')
plt.ylabel(' ')
plt.show()
# Through this pie chart we see that the major shareholder is **North America**. This result could be a bit biased because of the nature of the data collection, in which more importance might have been given to the North American market.
# #### Q3: What are top 5 highest grossing games in North America ?
video_sales_NA_sorted_df = video_sales_df.sort_values(by = ['NA_Sales'], ascending = False)
video_sales_NA_sorted_df.head()[['Name', 'NA_Sales']]
# The top grossing game in North America is **Wii Sports**, with total sales of 41.49 million in North America, followed by Super Mario Bros., <NAME>, Tetris and <NAME>.
# #### Q4: Which game has the highest sales in the other regions (excluding North America, Europe and Japan)?
video_sales_Other_sorted_df = video_sales_df.sort_values(by = ['Other_Sales'], ascending = False)
video_sales_Other_sorted_df.head(1)[['Name', 'Other_Sales']]
# **GTA: San Andreas** has the highest sales in the other regions, at 10.57 million.
# #### Q5: Which game has the highest difference in their sales in North America and Europe?
video_sales_df['NA_EU_Sales_Diff'] = abs(video_sales_df['NA_Sales'] - video_sales_df['EU_Sales'])
video_sales_df.head()
video_sales_df.sort_values(by = ['NA_EU_Sales_Diff'], ascending = False).head(1)[['Name', 'NA_EU_Sales_Diff']]
# **<NAME>** has the highest difference in sales between North America and Europe.
# Let us save and upload our work to Jovian before continuing.
import jovian
jovian.commit()
# ## Inferences and Conclusion
#
# - Based on this dataset we can't say that equal importance was given to all regions of the world; it may be that the data was collected mostly from North American stores.
# - Games in the Action genre are the most numerous in the dataset, followed by Sports-based games.
# - Electronic Arts is the publisher with the most titles, likely because of its super popular sports-based games.
# - GTA : San Andreas is the top grossing game in the other Regions of the World.
# - Top grossing game in North America is Wii Sports.
# - Duck Hunt has the highest difference between its Europe and North America sales, which shows it is much more popular in North America than in Europe.
# - These stats can be a bit biased based on the nature of data collection.
import jovian
jovian.commit()
# ## References and Future Work
#
# This dataset has a lot of information about the sales trend of video games which can be further exploited to gain useful insights.
#
# - An in-depth region-based analysis could be performed to compare video game sales trends
#
#
# References:
#
# - Pandas user guide: https://pandas.pydata.org/docs/user_guide/index.html
# - Matplotlib user guide: https://matplotlib.org/3.3.1/users/index.html
# - Seaborn user guide & tutorial: https://seaborn.pydata.org/tutorial.html
# - opendatasets Python library: https://github.com/JovianML/opendatasets
import jovian
jovian.commit()
| Course Project/zerotopandas-course-project.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.0
# language: julia
# name: julia-1.5
# ---
# # RNN language model
# Loosely based on [Zaremba et al. 2014](https://arxiv.org/abs/1409.2329), this example trains a word based RNN language model on Mikolov's PTB data with 10K vocab. It uses the `batchSizes` feature of `rnnforw` to process batches with different sized sentences. The `mb` minibatching function sorts sentences in a corpus by length and tries to group similarly sized sentences together. For an example that uses fixed length batches and goes across sentence boundaries see the [charlm](https://github.com/denizyuret/Knet.jl/blob/master/tutorial/08.charlm.ipynb) notebook. **TODO:** convert to the new RNN interface.
EPOCHS=10
RNNTYPE=:lstm
BATCHSIZE=64
EMBEDSIZE=128
HIDDENSIZE=256
VOCABSIZE=10000
NUMLAYERS=1
DROPOUT=0.5
LR=0.001
BETA_1=0.9
BETA_2=0.999
EPS=1e-08;
# Load data
using Knet
include(Knet.dir("data","mikolovptb.jl"))
(trn,val,tst,vocab) = mikolovptb()
@assert VOCABSIZE == length(vocab)+1 # +1 for the EOS token
for x in (trn,val,tst,vocab); println(summary(x)); end
# Print a sample
println(tst[1])
println(vocab[tst[1]])
@doc mikolovptb
# +
# Minibatch data into (x,y,b) triples. This is the most complicated part of the code:
# for language models x and y contain the same words shifted, x has an EOS in the beginning, y has an EOS at the end
# x,y = [ s11,s21,s31,...,s12,s22,...] i.e. all the first words followed by all the second words etc.
# b = [b1,b2,...,bT] i.e. how many sentences have first words, how many have second words etc.
# length(x)==length(y)==sum(b) and length(b)=length(s1)+1 (+1 because of EOS)
# sentences in batch should be sorted from longest to shortest, i.e. s1 is the longest sentence
function mb(sentences,batchsize)
sentences = sort(sentences,by=length,rev=true)
data = []; eos = VOCABSIZE
for i = 1:batchsize:length(sentences)
j = min(i+batchsize-1,length(sentences))
sij = view(sentences,i:j)
T = 1+length(sij[1])
x = UInt16[]; y = UInt16[]; b = UInt16[]
for t=1:T
bt = 0
for s in sij
if t == 1
push!(x,eos)
push!(y,s[1])
elseif t <= length(s)
push!(x,s[t-1])
push!(y,s[t])
elseif t == 1+length(s)
push!(x,s[t-1])
push!(y,eos)
else
break
end
bt += 1
end
push!(b,bt)
end
push!(data,(x,y,b))
end
return data
end
mbtrn = mb(trn,BATCHSIZE)
mbval = mb(val,BATCHSIZE)
mbtst = mb(tst,BATCHSIZE)
map(length,(mbtrn,mbval,mbtst))
# -
# Define model
function initmodel()
w(d...)=KnetArray(xavier(Float32,d...))
b(d...)=KnetArray(zeros(Float32,d...))
r,wr = rnninit(EMBEDSIZE,HIDDENSIZE,rnnType=RNNTYPE,numLayers=NUMLAYERS,dropout=DROPOUT,atype=KnetArray{Float32})
wx = w(EMBEDSIZE,VOCABSIZE)
wy = w(VOCABSIZE,HIDDENSIZE)
by = b(VOCABSIZE,1)
return r,wr,wx,wy,by
end;
# +
# Define loss and its gradient
function predict(ws,xs,bs;pdrop=0)
r,wr,wx,wy,by = ws
r = value(r)
x = wx[:,xs] # xs=(ΣBt) x=(X,ΣBt)
x = dropout(x,pdrop)
(y,_) = rnnforw(r,wr,x,batchSizes=bs) # y=(H,ΣBt)
y = dropout(y,pdrop)
return wy * y .+ by # return=(V,ΣBt)
end
loss(w,x,y,b;o...) = nll(predict(w,x,b;o...), y)
lossgradient = gradloss(loss);
# +
# Train and test loops
function train(model,data,optim)
Σ,N=0,0
for (x,y,b) in data
grads,loss1 = lossgradient(model,x,y,b;pdrop=DROPOUT)
update!(model, grads, optim)
n = length(y)
Σ,N = Σ+n*loss1, N+n
end
return Σ/N
end
function test(model,data)
Σ,N=0,0
for (x,y,b) in data
loss1 = loss(model,x,y,b)
n = length(y)
Σ,N = Σ+n*loss1, N+n
end
return Σ/N
end;
# -
model = optim = nothing;
Knet.gc() # free gpu memory
if !isfile("rnnlm.jld2")
# Initialize and train model
model = initmodel()
optim = optimizers(model,Adam,lr=LR,beta1=BETA_1,beta2=BETA_2,eps=EPS)
for epoch=1:EPOCHS
@time global j1 = train(model,mbtrn,optim) # ~100 seconds
@time global j2 = test(model,mbval) # ~4 seconds
@time global j3 = test(model,mbtst) # ~4 seconds
println((epoch,exp(j1),exp(j2),exp(j3))); flush(stdout) # prints perplexity = exp(negative_log_likelihood)
end
Knet.save("rnnlm.jld2","model",model)
else
model = Knet.load("rnnlm.jld2","model")
@time global j1 = test(model,mbtrn)
@time global j2 = test(model,mbval)
@time global j3 = test(model,mbtst)
println((EPOCHS,exp(j1),exp(j2),exp(j3))); flush(stdout) # prints perplexity = exp(negative_log_likelihood)
end
summary(model)
| examples/rnnlm/rnnlm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
#
# # Setup
#
# First, import the necessary modules from QGL. This will also import ``numpy`` into the namespace as ``np``.
#
# The AWGDir environment variable indicates where QGL will store its output sequence files; it defaults to a temporary directory as provided by Python's `tempfile` module. First we load the QGL module.
#
from QGL import *
# Next we instantiate the channel library. By default bbndb will use an sqlite database at the location specified by the BBN_DB environment variable, but we override this behavior below in order to use a specific filename. Also shown (but commented out) is the syntax for creating a temporary in-memory database for testing purposes.
#
# +
cl = ChannelLibrary("example")
# This would be a temporary, in memory database
# cl = ChannelLibrary(":memory:")
# -
# The channel library has a number of convenience functions defined to create instruments and qubits, as well as functions to define the relationships between them. Let us create a qubit first:
#
q1 = cl.new_qubit("q1")
# In order to compile the QGL program into pulse sequences, we need to define a minimal hardware configuration. Basically, we need to specify AWG resources for output pulse compilation and digitizer resources for signal measurement.
# +
# Most calls require a label and an address. Let's define
# an AWG for control pulse generation
aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101")
# an AWG for measurement pulse generation
aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102")
# and digitizer for measurement collection
dig_1 = cl.new_X6("X6_1", address=0)
# Qubit q1 is controlled by AWG aps2_1
cl.set_control(q1, aps2_1)
# Qubit q1 is measured by AWG aps2_2 and digitizer dig_1
cl.set_measure(q1, aps2_2, dig_1.ch(1))
# -
# Commit the changes to the channel library.
cl.commit()
# # Basic sequence construction and plotting
# You can construct simple gate sequences by creating `Lists` of `Pulse` objects. These can be constructed by calling various primitives defined for qubits, for example, 90 and 180 degree rotations about X and Y:
seq1 = [[X(q1), Y(q1)]]
seq2 = [[X90(q1),Y90(q1),X(q1),Id(q1),Y(q1)]]
# This sequence of pulses can be plotted for visual review. First, you must compile the QGL program into pulse sequences for the hardware defined above. Since our `Qubit` object is a quadrature channel, you see two colors corresponding to the I and Q control signals.
mf = compile_to_hardware(seq1, 'Test1')
plot_pulse_files(mf)
# Now, let's plot the second sequence.
mf = compile_to_hardware(seq2, 'Test2')
plot_pulse_files(mf)
# # Constructing more sophisticated sequences
# To get rotations of arbitrary angle (i.e., amplitude control of the pulse), you can use the "theta" primitive:
seq = [[Xtheta(q1, 0.2), Xtheta(q1, 0.4), Xtheta(q1, 0.6), Xtheta(q1, 0.8), Xtheta(q1, 1.0)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# To rotate about an arbitrary axis, use the "U" primitives:
seq = [[U(q1, 0.0), U(q1, np.pi/8), U(q1, np.pi/4), U(q1, 3*np.pi/8), U(q1, np.pi/2)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# Z rotations are performed in "software": they act as frame changes on the following pulses.
seq = [[X(q1), Z90(q1), X(q1), Z90(q1), X(q1), Z90(q1), X(q1), Z90(q1), X(q1)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
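# The frame-change claim can be checked with a toy 2×2 matrix identity (plain numpy, not QGL code): conjugating an X rotation by a Z rotation is the same X-angle rotation about an axis rotated within the equatorial plane, which is why a compiler can trade physical Z pulses for phase shifts on subsequent pulses.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

def rot(axis, theta):
    # rotation by theta about a (unit) Pauli axis: exp(-i*theta*axis/2)
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * axis

phi, theta = 0.7, 1.1  # arbitrary example angles
# Z-conjugated X rotation ...
lhs = rot(Z, phi) @ rot(X, theta) @ rot(Z, -phi)
# ... equals a rotation about the axis X rotated by phi toward Y
rhs = rot(np.cos(phi) * X + np.sin(phi) * Y, theta)
assert np.allclose(lhs, rhs)
```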
# Sequences can act on multiple qubits, i.e., channels. Let's create another "logical" qubit channel as well as a "physical" channel.
q2 = cl.new_qubit("q2")
aps2_3 = cl.new_APS2("BBNAPS3", address="192.168.5.103")
cl.set_control(q2, aps2_3)
# When you plot a sequence with multiple logical channels, each channel (both I and Q) is plotted separately.
seq = [[X(q1), X(q2), Y(q1), Y(q2)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# One can express simultaneous operations with the `*` operator (meant to evoke a tensor product). If no operation is specified for a channel in a given time slot, an identity (no-op) operation is inserted.
seq = [[X(q1)*X(q2), X(q1)*Y(q2), Y(q1), X(q2)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# # Constructing sequences with measurements
#
# Measurement pulses can be created with the `MEAS` primitive. Given a qubit X, the compiler finds the associated logical measurement channel and creates a `Pulse` on that channel. Note that a `Trigger` pulse for the digitizer is created along with the qubit measurement pulse.
#
# Remember, measurement channel was defined above with:
# Qubit q1 is measured by AWG aps2_2 and digitizer dig_1
# `cl.set_measure(q1, aps2_2, dig_1.ch(1))`
seq = [[MEAS(q1)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# # Single-sideband modulation
#
# In order to prevent leakage at the source frequency, typically single-sideband (SSB) modulation with IQ mixers is used to generate both control and measurement pulses.
# For control pulses, we define the SSB frequency as
# `q1.frequency`. In the following example, you can set that parameter to a non-zero SSB to modulate the pulse envelope.
q1.frequency = 50e6
seq= [[X(q1), Y(q1)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# For measurement pulses, we define the SSB frequency as
#
# `q1.measure_chan.autodyne_freq`
#
# Note that `autodyne_freq` is different from `frequency`. In the first case, the SSB frequency is baked directly in the pulse waveform, and is independent of the time when the pulse is applied. In the second case (normally used for the control SSB, see above), the actual pulse shape depends on the time when the pulse occurs. This is in order to maintain the phase reference allowing for arbitrary rotations around any transversal axis.
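# A toy numpy sketch (not QGL code, with made-up waveform parameters) of the two conventions: an "autodyne"-style pulse bakes the modulation into the envelope relative to the pulse start, so its samples are identical whenever it fires, while a phase-referenced pulse tracks absolute time, so the same pulse applied later generally starts at a different phase.

```python
import numpy as np

f = 10e6             # modulation frequency (example value)
dt = 1e-9            # sample spacing
n = 100              # samples per pulse
t = np.arange(n) * dt
env = np.hanning(n)  # example pulse envelope

def autodyne(t_start):
    # phase restarts at every pulse: depends only on time since t_start
    return env * np.exp(2j * np.pi * f * t)

def phase_referenced(t_start):
    # phase tracks absolute time, preserving the rotating-frame reference
    return env * np.exp(2j * np.pi * f * (t_start + t))

# Same waveform at any start time in the autodyne case ...
assert np.allclose(autodyne(0.0), autodyne(1.23e-6))
# ... but generally not in the phase-referenced case
assert not np.allclose(phase_referenced(0.0), phase_referenced(1.23e-6))
```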
# +
# set the modulation frequency
cl["q1"].measure_chan.autodyne_freq = 10e6
seq = [[X(q1), MEAS(q1)]]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# -
# # Longer sequences using list comprehensions
# ### Rabi amplitude
# Note the increasing amplitude with each sequence.
seq = [[Xtheta(q1, a), MEAS(q1)] for a in np.linspace(0,2,11)]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# ### T1
# Note the delay between the X pulse and the measurement.
seq = [[X(q1), Id(q1, d), MEAS(q1)] for d in np.linspace(0, 10e-7, 11)]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
# ### Ramsey
# Note the varying delay between the X pulses prior to measurement.
seq = [[X90(q1), Id(q1, delay), X90(q1), MEAS(q1)] for delay in np.linspace(0, 5e-7, 11)]
mf = compile_to_hardware(seq, 'Test')
plot_pulse_files(mf)
| doc/ex1_QGL_basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from tfx.components.base import executor_spec
# Import the generic file loader component.
from tfx.components import FileBasedExampleGen
# Import the Avro-specific executor.
from tfx.components.example_gen.custom_executors import avro_executor
avro_dir_path = "avro_data"
# Override the executor.
example_gen = FileBasedExampleGen(
input_base=avro_dir_path,
custom_executor_spec=executor_spec.ExecutorClassSpec(
avro_executor.Executor))
| chapters/chapter3/3-5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Toulouse/Squidguard Model
#
# +
from __future__ import print_function
import numpy as np
import tensorflow as tf
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,confusion_matrix
# https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
np.random.seed(1)
tf.random.set_seed(2)
NGRAMS = 2
FEATURE_LEN = 128
EPOCHS = 5
# Blacklists
df = pd.read_csv('../train-test/data/blacklists.csv.bz2')
#df = df.sample(100000, random_state=21)
df
# -
df['cat_count'] = df.blacklists_cat.apply(lambda c: len(c.split('|')))
sdf = df[df.cat_count == 1]
sdf
dom_group = sdf.groupby('blacklists_cat').agg({'domain': 'count'})
dom_group
# ### Take out categories that have less than 1000 domains.
filter_cat = list(dom_group[dom_group.domain > 1000].index)
#filter_cat = list(dom_group[dom_group.domain > 100].index)
# ### Take out categories that have recall < 0.3 (based on previous iterations of the model)
excat = ['audio-video', 'blog', 'dating', 'liste_bu', 'sports', 'publicite']
filter_cat = [x for x in filter_cat if x not in excat]
filter_cat
sdf.loc[sdf.blacklists_cat.isin(filter_cat) == False, 'blacklists_cat'] = 'others'
sdf.groupby('blacklists_cat').agg({'domain': 'count'})
# ## Preprocessing the input data
# +
# build n-gram list
#vect = CountVectorizer(analyzer='char', max_df=0.3, min_df=3, ngram_range=(NGRAMS, NGRAMS), lowercase=False)
vect = CountVectorizer(analyzer='char', ngram_range=(NGRAMS, NGRAMS), lowercase=False)
a = vect.fit_transform(sdf.domain)
vocab = vect.vocabulary_
# sort n-gram by freq (highest -> lowest)
words = []
for b in vocab:
c = vocab[b]
#print(b, c, a[:, c].sum())
words.append((a[:, c].sum(), b))
#break
words = sorted(words, reverse=True)
words_list = [w[1] for w in words]
num_words = len(words_list)
print("num_words = %d" % num_words)
def find_ngrams(text, n):
a = zip(*[text[i:] for i in range(n)])
wi = []
for i in a:
w = ''.join(i)
        try:
            idx = words_list.index(w)
        except ValueError:
            idx = 0  # unknown n-grams fall back to index 0
wi.append(idx)
return wi
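# A note on the lookup above: `words_list.index(w)` is a linear scan per n-gram, and unknown n-grams fall back to index 0, which is also the index of the most frequent n-gram. A hypothetical faster variant uses a dict built once (same fallback behavior; the tiny `words_list` here is a stand-in for the real one):

```python
# Stand-in for the frequency-sorted n-gram list built above
words_list = ["ab", "bc", "ca"]

# Build the index map once instead of calling list.index per n-gram
word_to_idx = {w: i for i, w in enumerate(words_list)}

def find_ngrams_fast(text, n):
    grams = (''.join(g) for g in zip(*[text[i:] for i in range(n)]))
    # unknown n-grams still collapse to index 0, as in find_ngrams above
    return [word_to_idx.get(g, 0) for g in grams]

print(find_ngrams_fast("abca", 2))  # [0, 1, 2]
print(find_ngrams_fast("zz", 2))    # [0]
```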
# build X from index of n-gram sequence
X = np.array(sdf.domain.apply(lambda c: find_ngrams(c, NGRAMS)))
# check max/avg feature
X_len = []
for x in X:
X_len.append(len(x))
max_feature_len = max(X_len)
avg_feature_len = int(np.mean(X_len))
# +
print("Max feature len = %d, Avg. feature len = %d" % (max_feature_len, avg_feature_len))
class_labels = sdf.blacklists_cat.astype('category').cat.categories
y = np.array(sdf.blacklists_cat.astype('category').cat.codes)
# Split train and test dataset
X_train_valid, X_test, y_train_valid, y_test = train_test_split(X, y, test_size=0.2, random_state=21, stratify=y)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_valid, y_train_valid, test_size=0.2, random_state=21, stratify=y_train_valid)
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# backup
y_train_lab = y_train
y_test_lab = y_test
# -
unique, counts = np.unique(y_test, return_counts=True)
dict(zip(unique, counts))
unique, counts = np.unique(y_train, return_counts=True)
dict(zip(unique, counts))
# ## Train a LSTM model
# +
import keras
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, Dropout, Activation
from keras.layers import LSTM
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.models import load_model
max_features = num_words # 20000
feature_len = FEATURE_LEN # avg_feature_len # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print(len(X_train_valid), 'train+valid sequences')
print(len(X_train), 'train sequences')
print(len(X_valid), 'valid sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
X_train_valid = sequence.pad_sequences(X_train_valid, maxlen=feature_len)
X_train = sequence.pad_sequences(X_train, maxlen=feature_len)
X_valid = sequence.pad_sequences(X_valid, maxlen=feature_len)
X_test = sequence.pad_sequences(X_test, maxlen=feature_len)
print('X_train_valid shape:', X_train_valid.shape)
print('X_train shape:', X_train.shape)
print('X_valid shape:', X_valid.shape)
print('X_test shape:', X_test.shape)
n_classes = np.max(y_train_valid) + 1
print(n_classes, 'classes')
print('Convert class vector to binary class matrix '
'(for use with categorical_crossentropy)')
y_train_valid = keras.utils.to_categorical(y_train_valid, n_classes)
y_train = keras.utils.to_categorical(y_train, n_classes)
y_valid = keras.utils.to_categorical(y_valid, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
print('y_train_valid shape:', y_train_valid.shape)
print('y_train shape:', y_train.shape)
print('y_valid shape:', y_valid.shape)
print('y_test shape:', y_test.shape)
# -
def create_model():
print('Build model...')
model = Sequential()
model.add(Embedding(num_words, 32, input_length=feature_len))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(n_classes, activation='softmax'))
# try using different optimizers and different optimizer configs
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
#print(model.summary())
return model
# + active=""
# print('Train...')
# model.fit(X_train, y_train, batch_size=batch_size, epochs=EPOCHS,
# validation_split=0.1, verbose=1)
# score, acc = model.evaluate(X_test, y_test,
# batch_size=batch_size, verbose=1)
# print('Test score:', score)
# print('Test accuracy:', acc)
# +
import matplotlib.pyplot as plt
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score, log_loss)
from sklearn.linear_model import LogisticRegression
# +
# Train the uncalibrated LSTM classifier on the combined train and validation
# data and evaluate on the test data
clf = KerasClassifier(build_fn=create_model, epochs=EPOCHS, batch_size=batch_size, verbose=1)
clf.fit(X_train_valid, y_train_valid)
clf_probs = clf.predict_proba(X_test)
score = log_loss(y_test, clf_probs)
# -
# Train the LSTM classifier on the training data, calibrate it on the
# validation data, and evaluate on the test data
clf = KerasClassifier(build_fn=create_model, epochs=EPOCHS, batch_size=batch_size, verbose=1)
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
sig_clf = CalibratedClassifierCV(clf, method="isotonic", cv="prefit")
sig_clf.fit(X_valid, np.argmax(y_valid, axis=1))
sig_clf_probs = sig_clf.predict_proba(X_test)
sig_score = log_loss(y_test, sig_clf_probs)
score, sig_score
sig_clf_pred = sig_clf.predict(X_test)
# +
print("\tPrecision: %1.3f" % precision_score(np.argmax(y_test, axis=1), sig_clf_pred, average='macro'))
print("\tRecall: %1.3f" % recall_score(np.argmax(y_test, axis=1), sig_clf_pred, average='macro'))
print("\tF1: %1.3f\n" % f1_score(np.argmax(y_test, axis=1), sig_clf_pred, average='macro'))
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test[:, 0], sig_clf_probs[:, 0], n_bins=10)
# -
clf_pred = clf.predict(X_test)
# +
print("\tPrecision: %1.3f" % precision_score(np.argmax(y_test, axis=1), clf_pred, average='macro'))
print("\tRecall: %1.3f" % recall_score(np.argmax(y_test, axis=1), clf_pred, average='macro'))
print("\tF1: %1.3f\n" % f1_score(np.argmax(y_test, axis=1), clf_pred, average='macro'))
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test[:, 0], clf_probs[:, 0], n_bins=10)
# -
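`calibration_curve` bins the predicted probabilities and, per bin, reports the observed fraction of positives against the mean predicted value; a well-calibrated model tracks the diagonal. A minimal self-contained sketch on toy data (the labels and scores here are made up for illustration):

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Toy data: two confident negatives and two confident positives
y_true = np.array([0, 0, 1, 1])
y_prob = np.array([0.1, 0.2, 0.8, 0.9])

# With 2 uniform bins, the low bin holds the negatives and the high bin the positives
fraction_of_positives, mean_predicted_value = calibration_curve(y_true, y_prob, n_bins=2)
print(fraction_of_positives)  # observed positive rate per bin
print(mean_predicted_value)   # mean predicted probability per bin
```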
# ## Confusion Matrix
y_pred = clf.predict(X_test)
target_names = list(sdf.blacklists_cat.astype('category').cat.categories)
print(classification_report(np.argmax(y_test, axis=1), y_pred, target_names=target_names))
print(confusion_matrix(np.argmax(y_test, axis=1), y_pred))
sig_y_pred = sig_clf.predict(X_test)
target_names = list(sdf.blacklists_cat.astype('category').cat.categories)
print(classification_report(np.argmax(y_test, axis=1), sig_y_pred, target_names=target_names))
print(confusion_matrix(np.argmax(y_test, axis=1), sig_y_pred))
def brier_multi(targets, probs):
return np.mean(np.sum((probs - targets)**2, axis=1))
brier_multi(y_test, clf_probs)
brier_multi(y_test, sig_clf_probs)
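`brier_multi` is the multi-class Brier score: the mean over samples of the squared error between the one-hot targets and the predicted probability vector. A quick sanity check on toy arrays (a perfect prediction scores 0; a maximally uncertain binary prediction scores 0.5):

```python
import numpy as np

def brier_multi(targets, probs):
    # mean over samples of the per-sample squared error summed across classes
    return np.mean(np.sum((probs - targets) ** 2, axis=1))

targets = np.array([[1.0, 0.0], [0.0, 1.0]])
perfect = targets.copy()            # probabilities exactly match the labels
uniform = np.full((2, 2), 0.5)      # maximally uncertain predictions
print(brier_multi(targets, perfect))  # 0.0
print(brier_multi(targets, uniform))  # 0.5
```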
# +
fig_index = 1
fig = plt.figure(fig_index, figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for name, prob_pos, y_test2, y_pred2, target in [('LSTM', clf_probs, y_test, y_pred, 'adult'),
('LSTM', clf_probs, y_test, y_pred, 'malware'),
('LSTM', clf_probs, y_test, y_pred, 'phishing'),
                                                 ('LSTM + isotonic', sig_clf_probs, y_test, sig_y_pred, 'adult'),
                                                 ('LSTM + isotonic', sig_clf_probs, y_test, sig_y_pred, 'malware'),
                                                 ('LSTM + isotonic', sig_clf_probs, y_test, sig_y_pred, 'phishing')]:
i = target_names.index(target)
clf_score = brier_score_loss(y_test2[:, i], prob_pos[:, i], pos_label=y_test2.max())
print("%s:" % name)
print("\tBrier: %1.3f" % (clf_score))
print("\tPrecision: %1.3f" % precision_score(y_test2[:, i], y_pred2==i))
print("\tRecall: %1.3f" % recall_score(y_test2[:, i], y_pred2==i))
print("\tF1: %1.3f\n" % f1_score(y_test2[:, i], y_pred2==i))
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test2[:, i], prob_pos[:, i], n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f) [%s]" % (name, clf_score, target))
ax2.hist(prob_pos[:, i], range=(0, 1), bins=10, label='%s [%s]' % (name, target),
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
# -
# ## Save model
# + active=""
# model.save('./models/toulouse_cat_lstm_others_2017.h5')
# words_df = pd.DataFrame(words_list, columns=['vocab'])
# words_df.to_csv('./models/toulouse_cat_vocab_others_2017.csv', index=False, encoding='utf-8')
# pd.DataFrame(target_names, columns=['toulouse_cat']).to_csv('./models/toulouse_cat_names_others_2017.csv', index=False)
# -
y_score = clf_probs
# +
import numpy as np
from scipy import interp
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc
# Plot linewidth.
lw = 2
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
fig = plt.figure(1, figsize=(12, 8))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
cmap = plt.get_cmap("tab10")
colors = cycle([cmap(i) for i in range(n_classes)])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
             ''.format(target_names[i], roc_auc[i]))
if i >= 19:
break
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve -- Toulouse LSTM Model')
plt.legend(loc="lower right")
plt.show()
# -
fig.savefig('./roc-toulouse-lstm.eps', format='eps', dpi=300, bbox_inches="tight", orientation='landscape');
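The per-class ROC curves above come straight from `roc_curve`/`auc` on one-vs-rest scores. A minimal, self-contained sketch on toy data (the scores here are invented for illustration):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy binary problem: one positive is scored below one negative,
# so the curve is not perfect
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

fpr_toy, tpr_toy, thresholds = roc_curve(y_true, y_score)
print(auc(fpr_toy, tpr_toy))  # 0.75
```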
y_score = sig_clf_probs
# +
# Plot linewidth.
lw = 2
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
fig = plt.figure(1, figsize=(12, 8))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
cmap = plt.get_cmap("tab10")
colors = cycle([cmap(i) for i in range(n_classes)])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
             ''.format(target_names[i], roc_auc[i]))
if i >= 19:
break
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve -- Toulouse LSTM Model (Calibrated)')
plt.legend(loc="lower right")
plt.show()
# -
fig.savefig('./roc-toulouse-lstm-calibrated.eps', format='eps', dpi=300, bbox_inches="tight", orientation='landscape');
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building ID Data Example
# +
import pyblp
import numpy as np
np.set_printoptions(linewidth=1)
pyblp.__version__
# -
# In this example, we'll build a small panel of market and firm IDs.
id_data = pyblp.build_id_data(T=2, J=5, F=4)
id_data
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1><center>Tutorial for implementing SpaCell on the ALS dataset</center></h1>
# This tutorial provides a step-by-step implementation of using SpaCell to find cell types in a spatial transcriptomics tissue slide from the ALS dataset, and to classify ALS disease stages across all samples in the dataset.
# ## Make your own configurations
# Specify input/output path and model parameters in `config.py`
# +
# path config
META_PATH = '../dataset/metadata/mouse_sample_names_sra.tsv'
IMG_PATH = '../dataset/image/'
CM_PATH = '../dataset/cm/'
ATM_PATH = None
TILE_PATH = '../dataset/tile/'
DATASET_PATH = '../dataset/'
TEMPLATE_IMG = '../dataset/image/CN94_D2_HE.jpg'
# image config
SIZE = 299, 299
N_CHANNEL = 3
NORM_METHOD = 'vahadane'
# count matrix config
THRESHOLD_GENE = 0.01
THRESHOLD_SPOT = 0.01
MIN_EXP = 1
# metadata config
SAMPLE_COLUMN = 'sample_name'
LABEL_COLUMN = 'age'
CONDITION_COLUMN = 'breed'
CONDITION = 'B6SJLSOD1-G93A'
ADDITIONAL_COLUMN = 2 if CONDITION_COLUMN else 1
# reproducibility
seed = 37
# color_map
color_map = ['#ff8aff', '#6fc23f', '#af63ff', '#eaed00', '#f02449', '#00dbeb', '#d19158', '#9eaada', '#89af7c', '#514036']
# classification model config
n_classes = 4
batch_size = 32
epochs = 10
train_ratio = 0.7
# -
# ### 1. Image Preprocessing
# #### 1.1 Run in Command Line
# + language="bash"
# python ../SpaCell/image_normalization.py
# -
# #### 1.2 Detailed run in Python (equivalent to the one-liner SpaCell command above)
import os
from os import path
import numpy as np
import pandas as pd
import shutil
import glob
import random
import staintools
from staintools import StainNormalizer, LuminosityStandardizer
from staintools import ReinhardColorNormalizer
# from multiprocessing import Pool
import PIL
from PIL import Image, ImageDraw
Image.MAX_IMAGE_PIXELS = None
# +
def offset_img(img, r_offset, g_offset, b_offset):
"""
Add colour offset for a Pillow image object.
:param img: Pillow image object
:param r_offset(float): red colour offset pixel value
:param g_offset(float): green colour offset pixel value
:param b_offset(float): blue colour offset pixel value
:return: new Pillow image object with colour offset
"""
new_img = img.copy()
pixels = new_img.load()
for i in range(img.size[0]): #For every column
for j in range(img.size[1]): #For every row
r, g, b = pixels[i,j]
new_r, new_g, new_b = r+r_offset, g+g_offset, b+b_offset
pixels[i,j] = int(new_r), int(new_g), int(new_b)
return new_img
def scale_rgb(img, r_scale, g_scale, b_scale):
"""
Scale colour channels for a Pillow image object.
:param img: Pillow image object
:param r_scale(float): red colour scale factor
:param g_scale(float): green colour scale factor
:param b_scale(float): blue colour scale factor
:return: scaled Pillow image object
"""
source = img.split()
R, G, B = 0, 1, 2
red = source[R].point(lambda i:i*r_scale)
green = source[G].point(lambda i:i*g_scale)
blue = source[B].point(lambda i:i*b_scale)
return Image.merge('RGB', [red, green, blue])
def remove_colour_cast(img):
"""
Select 99th percentile pixels values for each channel for a Pillow image object.
:param img: Pillow image object
:return: Pillow image object without colour cast
"""
img = img.convert('RGB')
img_array = np.array(img)
#Calculate 99th percentile pixels values for each channel
rp = np.percentile(img_array[:,:,0].ravel(), q = 99)
gp = np.percentile(img_array[:,:,1].ravel(), q = 99)
bp = np.percentile(img_array[:,:,2].ravel(), q = 99)
#scale image based on percentile values
return scale_rgb(img, 255/rp, 255/gp, 255/bp)
def tile(img, spots_center_gen, out_dir, atm):
"""
Tile whole slide image based on spot coords and affine transformation matrix. \
One tile contains one spot.
:param img: Pillow image object
    :param spots_center_gen (generator): spot coords (X, Y)
:param out_dir: output path
:param atm (list): affine transformation matrix. None, if not provided.
:return: None
"""
sample = os.path.split(out_dir)[-1]
for x_coord, y_coord in spots_center_gen:
if atm:
x_pixel = float(x_coord) * float(atm[0]) + float(atm[6])
y_pixel = float(y_coord) * float(atm[4]) + float(atm[7])
x_0 = x_pixel - float(atm[0]) * 0.8 / 2
y_0 = y_pixel - float(atm[4]) * 0.8 / 2
x_1 = x_pixel + float(atm[0]) * 0.8 / 2
y_1 = y_pixel + float(atm[4]) * 0.8 / 2
else:
unit_x = float(img.size[0]) / 32
unit_y = float(img.size[0]) / 34
x_pixel = float(x_coord) * unit_x
y_pixel = float(y_coord) * unit_y
x_0 = x_pixel - unit_x * 0.8 / 2
y_0 = y_pixel - unit_y * 0.8 / 2
x_1 = x_pixel + unit_x * 0.8 / 2
y_1 = y_pixel + unit_y * 0.8 / 2
tile = img.crop((x_0, y_0, x_1, y_1))
tile.thumbnail(SIZE, Image.ANTIALIAS)
tile_name = str(sample) + '-' + str(x_coord) + '-' + str(y_coord)
print("generate tile of sample {} at spot {}x{}".format(str(sample), str(x_coord), str(y_coord)))
tile.save(os.path.join(out_dir, tile_name + '.jpeg'), 'JPEG')
def mkdirs(dirs):
if not os.path.exists(dirs):
os.makedirs(dirs)
def spot_gen(cm):
"""
Generate spot coords
:param cm: pandas dataframe, spot coords (XxY) as column name.
:yield: tuple(X, Y)
"""
for spot in [x.split('x') for x in cm]:
x_point = spot[0]
y_point = spot[1]
yield x_point, y_point
def img_cm_gen(img_path, cm_path, sample_name):
"""
Using sample name to pair image and count matrix
:param img_path (list): list of image path
:param cm_path (list): list of count matrix path
:param sample_name (list): list of sample name
:yield: paired (sample name, image path, count matrix path)
"""
for sample in sample_name:
for cm_root, _, cm_files in os.walk(cm_path):
for cm_file in cm_files:
if cm_file.endswith(".txt") and cm_file.startswith(sample):
pattern = "_".join(sample.split("_")[0:2])
for img_root, _, img_files in os.walk(img_path):
for img_file in img_files:
if img_file.endswith(".jpg") and img_file.startswith(pattern):
assert "_".join(img_file.split("_")[0:2]) == "_".join(cm_file.split("_")[0:2])
yield (sample, os.path.join(img_path, img_file), os.path.join(cm_path, cm_file))
# -
# Load template image and create stain normalizer based on template image using staintools package
template = Image.open(TEMPLATE_IMG)
normalizer = staintools.StainNormalizer(method=NORM_METHOD)
template_std = LuminosityStandardizer.standardize(np.array(template))
normalizer.fit(template_std)
# Load metadata and find sample
meta_data = pd.read_csv(META_PATH, header=0, sep='\t')
sample_name = list(meta_data.loc[:, SAMPLE_COLUMN])
# Normalize and tile whole slide images, each tile will contain one spot.
for sample, img_path, cm_path in img_cm_gen(IMG_PATH, CM_PATH, sample_name):
if ATM_PATH:
atm_file = open(ATM_PATH, "r")
atm = atm_file.read().split(" ")
else:
atm = None
img = Image.open(img_path)
# image normalization
img_uncast = remove_colour_cast(img)
img_std = LuminosityStandardizer.standardize(np.array(img_uncast))
transformed = normalizer.transform(img_std)
img = Image.fromarray(transformed)
# find spots
cm = pd.read_csv(cm_path, header=0, sep='\t', index_col=0)
cm = cm.transpose()
spots_center_gen = spot_gen(cm)
tile_out = os.path.join(TILE_PATH, sample)
mkdirs(tile_out)
# tiling
tile(img, spots_center_gen, tile_out, atm)
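When no affine transformation matrix is available, `tile` falls back to a fixed 32 x 34 spot grid and crops a box covering 80% of one grid unit around the spot centre. The arithmetic can be checked in isolation (the 9600-pixel slide width and spot column index below are hypothetical numbers, not values from the dataset):

```python
# Hypothetical slide width and spot column, to trace the crop-box arithmetic
img_width = 9600
unit_x = img_width / 32           # pixels per spot column -> 300.0
x_coord = 10                      # spot column index from the count matrix
x_pixel = x_coord * unit_x        # spot centre in pixels -> 3000.0
x_0 = x_pixel - unit_x * 0.8 / 2  # left edge of the 80%-width tile
x_1 = x_pixel + unit_x * 0.8 / 2  # right edge of the 80%-width tile
print(x_0, x_1)  # 2880.0 3120.0
```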
# ### 2. Count Matrix Preprocessing
# #### 2.1 Run in Command Line
# + language="bash"
# python ../SpaCell/count_matrix_normalization.py
# -
# #### 2.2 Detailed run in Python (equivalent to the one-liner SpaCell command above)
import pandas as pd
import os
import glob
import numpy as np
def add_label(dataframe, label, meta):
"""
    Add a label column to the count matrix for each sample, for classification use.
:param dataframe : pandas data frame object
    :param label (str): name of the label column in the metadata
:return: new dataframe with label as last column
"""
label_list = []
for spot in dataframe.index.values:
sample_id = '_'.join(spot.split('_')[:-1])
spot_label = meta.loc[sample_id, label]
label_list.append(spot_label)
dataframe[label] = label_list
return dataframe
# Merge all count matrices into one and replace missing values with 0
# +
meta_data = pd.read_csv(META_PATH, header=0, sep='\t', index_col=0)
sample_name = list(meta_data.index)
total_counts = pd.DataFrame()
for file in glob.glob(CM_PATH+'*.txt'):
sample_n = '_'.join(os.path.basename(file).split("_")[0:-4])
if sample_n in sample_name:
cm = pd.read_csv(file, header = 0,sep='\t', index_col=0) # column:genes row:spots
# reindex
new_spots = ["{0}_{1}".format(sample_n, spot) for spot in cm.index]
cm.index = new_spots
total_counts = total_counts.append(cm, sort=False)
# replace missing values with 0
total_counts.replace([np.inf, -np.inf], np.nan, inplace=True)
total_counts.fillna(0.0, inplace=True)
# -
num_spots = len(total_counts.index)
num_genes = len(total_counts.columns)
print("Number of total spots: {}".format(num_spots))
print("Number of total genes: {}".format(num_genes))
# Remove low quality spots
min_genes_spot = round((total_counts != 0).sum(axis=1).quantile(THRESHOLD_SPOT))
print("Number of expressed genes a spot must have to be kept (quantile {} of genes per spot): {}".format(THRESHOLD_SPOT, min_genes_spot))
total_counts = total_counts[(total_counts != 0).sum(axis=1) >= min_genes_spot]
print("Dropped {} spots".format(num_spots - len(total_counts.index)))
# Remove low quality genes
# +
# Spots are columns and genes are rows
total_counts = total_counts.transpose()
min_spots_gene = round(len(total_counts.columns) * THRESHOLD_GENE)
print("Removing genes that are expressed in less than {} spots with a count of at least {}".format(min_spots_gene, MIN_EXP))
total_counts = total_counts[(total_counts >= MIN_EXP).sum(axis=1) >= min_spots_gene]
print("Dropped {} genes".format(num_genes - len(total_counts.index)))
# -
# Normalize the count matrix by total counts per spot
# +
# Spots are rows and genes are columns
total_counts = total_counts.transpose()
row_sum = total_counts.sum(axis=1)
normal_total_counts = total_counts.div(row_sum, axis=0)
# -
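Dividing each row by its sum turns raw counts into per-spot proportions, so every spot's gene expression sums to 1 regardless of sequencing depth. A tiny self-contained sketch (the gene names and counts are made up):

```python
import pandas as pd

# Two spots with the same total counts but different gene proportions
counts = pd.DataFrame([[2, 2], [1, 3]], columns=['geneA', 'geneB'])
norm = counts.div(counts.sum(axis=1), axis=0)  # row-wise division by row sums
print(norm.sum(axis=1).tolist())  # every row now sums to 1.0
```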
# Add label and save data
normal_total_counts = add_label(normal_total_counts, LABEL_COLUMN , meta_data)
if CONDITION_COLUMN:
normal_total_counts = add_label(normal_total_counts, CONDITION_COLUMN, meta_data)
normal_total_counts.to_csv(os.path.join(DATASET_PATH,'cm_norm.tsv'), sep='\t')
# ### 3. Generate Paired Image and Gene Count Training Dataset
# #### 3.1 Run in Command Line
# + language="bash"
# python ../SpaCell/dataset_management.py
# -
# #### 3.2 Detailed run in Python (equivalent to the one-liner SpaCell command above)
import glob
import keras
import numpy as np
import pandas as pd
import os
import time
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import xception
from keras.utils import to_categorical
# Load normalized count matrix
cm = pd.read_csv(os.path.join(DATASET_PATH, 'cm_norm.tsv'), header=0, sep='\t', index_col=0)
# Check if label has condition, subset count matrix with condition
if CONDITION_COLUMN:
cm_ = cm[cm[CONDITION_COLUMN] == CONDITION]
cm = cm_
# Pairing each spot with its corresponding gene counts
col_cm = list(cm.index)
img_files = glob.glob(TILE_PATH+'/*/*.jpeg')
sorted_img = []
sorted_cm = []
for img in img_files:
id_img = os.path.splitext(os.path.basename(img))[0].replace("-", "_")
for c in col_cm:
id_c = c.replace("x", "_")
if id_img == id_c:
sorted_img.append(img)
sorted_cm.append(c)
# Generate dataset index file
cm = cm.reindex(sorted_cm)
df = pd.DataFrame(data={'img':sorted_img,
'cm':sorted_cm,
'label':cm[LABEL_COLUMN]})
df.to_csv(os.path.join(DATASET_PATH, 'dataset_age.tsv'), sep='\t')
cm.to_csv(os.path.join(DATASET_PATH, "cm_age.tsv"), sep='\t')
# ### 4. Classification for ALS Disease Stages
# #### 4.1 Run in Command Line
# + language="bash"
# python ../SpaCell/spacell_classification.py
# -
# #### 4.2 Detailed run in Python (equivalent to the one-liner SpaCell command above)
import pandas as pd
import os
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from keras.layers import Dense, GlobalAveragePooling2D, Input, concatenate, Dropout, Lambda
from keras.applications import xception
from keras.utils import multi_gpu_model
from keras.applications.xception import Xception
from sklearn.metrics import confusion_matrix, roc_curve, auc
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from keras.models import Model, Sequential
class DataGenerator(keras.utils.Sequence):
'Generates data for Keras'
def __init__(self, df, cm_df, le, batch_size=32, dim=(299,299), n_channels=3,
cm_len = None, n_classes=n_classes, shuffle=True, is_train=True):
'Initialization'
self.dim = dim
self.batch_size = batch_size
self.df = df
self.list_IDs = cm_df.index
self.n_channels = n_channels
self.cm_len = cm_len
self.n_classes = n_classes
self.shuffle = shuffle
self.cm_df = cm_df
self.le = le
self.is_train = is_train
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.list_IDs) / self.batch_size))
def __getitem__(self, index):
'Generate one batch of data'
# Generate indexes of the batch
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
# Find list of IDs
list_IDs_temp = [self.list_IDs[k] for k in indexes]
# Generate data
X_img, X_cm, y = self.__data_generation(list_IDs_temp)
if self.is_train:
return [X_img, X_cm], y
else:
return [X_img, X_cm]
def on_epoch_end(self):
'Updates indexes after each epoch'
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, list_IDs_temp):
'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
# Initialization
X_img = np.empty((self.batch_size, *self.dim, self.n_channels))
X_cm = np.empty((self.batch_size, self.cm_len))
y = np.empty((self.batch_size, self.n_classes), dtype=int)
# Generate data
for i, ID in enumerate(list_IDs_temp):
# Store img
X_img[i,] = self._load_img(ID)
# Store cm
X_cm[i,] = self._load_cm(ID)
# Store class
y[i,] = self._load_label(ID)
return X_img, X_cm, y
def _load_img(self, img_temp):
img_path = self.df.loc[img_temp, 'img']
X_img = image.load_img(img_path, target_size=self.dim)
X_img = image.img_to_array(X_img)
X_img = np.expand_dims(X_img, axis=0)
X_img = xception.preprocess_input(X_img)
return X_img
    def _load_cm(self, cm_temp):
        spot = self.df.loc[cm_temp, 'cm']
        # .ix is removed in modern pandas; select the row by label, then drop the label columns by position
        X_cm = self.cm_df.loc[spot].iloc[:-ADDITIONAL_COLUMN].values
        return X_cm
    def _load_label(self, label_temp):
        spot = self.df.loc[label_temp, 'cm']
        y = self.cm_df.loc[spot].iloc[[-ADDITIONAL_COLUMN]].values
        y = self.le.transform(y)
        return to_categorical(y, num_classes=self.n_classes)
def get_classes(self):
if not self.is_train:
y = self.cm_df.iloc[:, [-ADDITIONAL_COLUMN]].values
y = self.le.transform(y)
return y
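`DataGenerator.__len__` uses `floor(n / batch_size)`, so any samples beyond the last full batch are silently dropped each epoch, and `__getitem__` slices a contiguous run of the (shuffled) index array. The indexing logic can be traced with plain arrays:

```python
import numpy as np

# Same arithmetic as DataGenerator.__len__ and __getitem__, on bare indices
batch_size = 32
n_samples = 100
indexes = np.arange(n_samples)
n_batches = int(np.floor(n_samples / batch_size))
batch_1 = indexes[1 * batch_size:(1 + 1) * batch_size]  # second batch (index=1)
print(n_batches)                 # 3 -- the last 4 samples are dropped each epoch
print(batch_1[0], batch_1[-1])   # 32 63
```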
# +
def st_comb_nn(tile_shape, cm_shape, output_shape):
"""
Neural network model for classification.
:param tile_shape: dimension of tile image
:param cm_shape: dimension of count matrix
:param output_shape: number of output classes
:return: compiled classification model
"""
#### xception base for tile
tile_input = Input(shape=tile_shape, name = "tile_input")
xception_base = Xception(input_tensor=tile_input, weights='imagenet', include_top=False)
x_tile = xception_base.output
x_tile = GlobalAveragePooling2D()(x_tile)
x_tile = Dense(512, activation='relu', name="tile_fc")(x_tile)
#### NN for count matrix
cm_input = Input(shape=cm_shape, name="count_matrix_input")
x_cm = Dense(512, activation='relu', name="cm_fc")(cm_input)
#### merge
merge = concatenate([x_tile, x_cm], name="merge_tile_cm")
merge = Dense(512, activation='relu', name="merge_fc_1")(merge)
merge = Dense(128, activation='relu', name="merge_fc_2")(merge)
preds = Dense(output_shape, activation='softmax')(merge)
##### compile model
model = Model(inputs=[tile_input, cm_input], outputs=preds)
try:
parallel_model = multi_gpu_model(model, gpus=4, cpu_merge=False)
except:
parallel_model = model
parallel_model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
return parallel_model, model
def model_eval(y_pred, y_true, class_list):
"""
    Generate a ROC plot and confusion matrix to evaluate the model.
:param y_pred: predicted score for each label
:param y_true: true label
:param class_list: list of classes
:return: None
"""
y_true_onehot = np.zeros((len(y_true), len(class_list)))
y_true_onehot[np.arange(len(y_true)), y_true] = 1
y_pred_int = np.argmax(y_pred, axis=1)
confusion_matrix_age = confusion_matrix(y_true, y_pred_int)
plt.figure(figsize = (4,4), dpi = 180)
color = ['blue', 'green', 'red', 'cyan']
for i in range(len(class_list)):
fpr, tpr, thresholds = roc_curve(y_true_onehot[:,i], y_pred[:,i])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, color=color[i], lw=2, label='ROC %s curve (area = %0.2f)' % (class_list[i], roc_auc))
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('roc_auc')
plt.legend(loc="lower right")
plt.savefig('./age_roc_combine.pdf')
plt.tight_layout()
cm_plot = plot_confusion_matrix(confusion_matrix_age, classes = class_list)
def plot_confusion_matrix(cm, classes=None):
"""
    Generate a confusion matrix plot to evaluate the model.
:param cm: confusion matrix
    :param classes: list of classes
:return: confusion matrix plot
"""
#Normalise Confusion Matrix by dividing each value by the sum of that row
cm = cm.astype('float')/cm.sum(axis = 1)[:, np.newaxis]
print(cm)
#Make DataFrame from Confusion Matrix and classes
cm_df = pd.DataFrame(cm, index = classes, columns = classes)
#Display Confusion Matrix
plt.figure(figsize = (4,4), dpi = 180)
cm_plot = sns.heatmap(cm_df, vmin = 0, vmax = 1, annot = True, fmt = '.2f', cmap = 'Blues', square = True)
plt.title('Confusion Matrix', fontsize = 12)
#Display axes labels
plt.ylabel('True label', fontsize = 12)
plt.xlabel('Predicted label', fontsize = 12)
plt.savefig('./age_confusion_matrix_combine.pdf')
plt.tight_layout()
return cm_plot
# -
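`plot_confusion_matrix` normalizes each row by its sum, so every cell becomes the fraction of that true class assigned to each predicted class. The normalization step in isolation, on a made-up 2x2 matrix:

```python
import numpy as np

# Raw counts: 10 samples of each true class
cm_raw = np.array([[8, 2], [1, 9]], dtype=float)
# Divide each row by its sum, as in plot_confusion_matrix
cm_norm = cm_raw / cm_raw.sum(axis=1)[:, np.newaxis]
print(cm_norm.tolist())  # [[0.8, 0.2], [0.1, 0.9]]
```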
# Load dataset index file and count matrix
df = pd.read_csv(os.path.join(DATASET_PATH, "dataset_age.tsv"), header=0, sep='\t', index_col=0)
sorted_cm = pd.read_csv(os.path.join(DATASET_PATH, "cm_age.tsv"), header=0, sep='\t', index_col=0)
# Encode labels as integers
label_encoder = LabelEncoder()
class_list = sorted(set(sorted_cm.loc[:, LABEL_COLUMN]))  # sorted, so indices match LabelEncoder.classes_
label_encoder.fit(class_list)
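`LabelEncoder` assigns integer codes in sorted label order, which is why `class_list` must be sorted for its indices to line up with the encoded classes. A sketch with hypothetical stage labels (these are illustrative names, not the dataset's actual labels):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(['onset', 'early', 'mid', 'late'])  # hypothetical stage labels
print(list(le.classes_))                   # codes follow sorted order
print(le.transform(['onset']).tolist())    # 'onset' sorts last -> code 3
```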
# Split train and test dataset
train_index, test_index = train_test_split(sorted_cm.index, train_size=train_ratio, shuffle=True)
train_cm = sorted_cm.loc[train_index,:]
test_cm = sorted_cm.loc[test_index,:]
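With `train_ratio = 0.7`, `train_test_split` shuffles the index and allocates 70% of the spots to training and the rest to testing. A self-contained sketch on a toy index (the `random_state` is added here only to make the example reproducible):

```python
from sklearn.model_selection import train_test_split

idx = list(range(10))
train_idx, test_idx = train_test_split(idx, train_size=0.7, shuffle=True, random_state=0)
print(len(train_idx), len(test_idx))  # 7 3
```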
# Prepare dataset for training and testing
cm_shape = train_cm.shape[1]-ADDITIONAL_COLUMN
train_gen = DataGenerator(df=df,
cm_df=train_cm,
le=label_encoder,
cm_len=cm_shape,
batch_size=batch_size)
test_gen = DataGenerator(df=df,
cm_df=test_cm,
le=label_encoder,
shuffle=False,
is_train=False,
batch_size=1,
cm_len=cm_shape)
# Compile model and training model
parallel_model_combine, model_ = st_comb_nn((SIZE[0], SIZE[1], N_CHANNEL), (cm_shape,), n_classes)
parallel_model_combine.fit_generator(generator=train_gen,
steps_per_epoch=len(train_gen),
epochs=epochs)
# Predict label for test dataset and evaluate model performance
# Note: for this example run, we use only 4 images in the model (in the manuscript we used >200 images; see Supplementary Figure 2)
y_pred = parallel_model_combine.predict_generator(generator=test_gen, verbose=1)
y_true = sorted_cm.loc[test_index].iloc[:, -ADDITIONAL_COLUMN].values
y_true = label_encoder.transform(y_true)
model_eval(y_pred, y_true, class_list=class_list)
# ### 5. Clustering for Finding Grey Matter and White Matter
# #### 5.1 Run in Command Line
# + language="bash"
# python ../SpaCell/spacell_clustering.py -i ../dataset/image/CN51_C2_HE.jpg \
# -l ../dataset/tile/CN51_C2_2 \
# -c ../dataset/cm/CN51_C2_2_stdata_aligned_counts_IDs.txt \
# -e 10 \
# -k 2 \
# -o ../results/
# -
# #### 5.2 Detailed run in Python (equivalent to the one-liner SpaCell command above)
from keras.applications.resnet50 import ResNet50, preprocess_input as preprocess_resnet, decode_predictions
from keras import backend as K
from keras.optimizers import Adam  # used by the autoencoder models below
from sklearn.cluster import KMeans
import os
import numpy as np
import collections
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import transforms
# +
class ResNet:
"""
pre-trained ResNet50 model
"""
__name__ = "ResNet"
def __init__(self, batch_size=1):
self.model = ResNet50(include_top=False, weights='imagenet', pooling="avg")
self.batch_size = batch_size
self.data_format = K.image_data_format()
def predict(self, x):
if self.data_format == "channels_first":
x = x.transpose(0, 3, 1, 2)
x = preprocess_resnet(x.astype(K.floatx()))
return self.model.predict(x, batch_size=self.batch_size)
def encode(tiles, model):
"""
Generate features for each tile.
:param tiles: tile image (Numpy Array)
:param model: Neural Net model
:return: feature list
"""
features = model.predict(tiles)
features = features.ravel()
return features
def features_gen(tile_and_infor, model, out_path):
"""
Write all tile feature in whole slide image to a pandas dataframe.
:param tile_and_infor: tile information (tile, (img_name, coordx, coordy))
:param model: Neural Network model
:param out_path: path to save tile feature
:return: pandas dataframe
"""
current_file = None
df = pd.DataFrame()
for j, (tile, output_infor) in enumerate(tile_and_infor):
print("generate features for {}th tile".format(j+1))
spot = output_infor[1] + 'x' + output_infor[2]
if current_file is not None:
assert current_file == output_infor[0]
current_file = output_infor[0]
features = encode(tile, model)
df[spot] = features
out_path = os.path.join(out_path, current_file)
assert len(df) > 0
df.to_csv(out_path + '.tsv', header=True, index=False, sep="\t")
def autoencoder(n_input):
"""
Neural network autoencoder for single input.
:param n_input: dimension of input
    :return: compiled (autoencoder, encoder) models
    Example usage:
        model.fit(x_train, x_train, batch_size=32, epochs=500)
        bottleneck_representation = encoder.predict(x_train)
"""
model = Sequential()
# encoder
model.add(Dense(512, activation='relu', input_shape=(n_input,)))
model.add(Dense(256, activation='relu'))
model.add(Dense(64, activation='relu'))
# bottleneck code
model.add(Dense(20, activation='linear', name="bottleneck"))
# decoder
model.add(Dense(64, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(n_input, activation='sigmoid'))
model.compile(loss = 'mean_squared_error', optimizer = Adam())
encoder = Model(model.input, model.get_layer('bottleneck').output)
return model, encoder
def combine_ae(ge_dim, tfv_dim):
"""
Neural network autoencoder for combined data.
:param ge_dim: dimension of count matrix
:param tfv_dim: dimension of tile feature
:return: compiled autoencoder model
combine_ae.fit([X_ge, X_tfv],
[X_ge, X_tfv],
epochs = 100, batch_size = 128,
validation_split = 0.2, shuffle = True)
bottleneck_representation = encoder.predict([X_ge, X_tfv])
"""
# Input Layer
input_dim_ge = Input(shape=(ge_dim,), name="gene_expression")
input_dim_tfv = Input(shape=(tfv_dim,), name="tile_feature_vector")
# Dimensions of Encoder layer
encoding_dim_ge = 256
encoding_dim_tfv = 256
# Encoder layer for each dataset
encoded_ge = Dense(encoding_dim_ge, activation='relu',
name="Encoder_ge")(input_dim_ge)
encoded_tfv = Dense(encoding_dim_tfv, activation='relu',
name="Encoder_tfv")(input_dim_tfv)
# Merging Encoder layers from different dataset
merge = concatenate([encoded_ge, encoded_tfv])
# Bottleneck compression
bottleneck = Dense(20, kernel_initializer='uniform', activation='linear',
name="Bottleneck")(merge)
# Inverse merging
merge_inverse = Dense(encoding_dim_ge + encoding_dim_tfv,
activation='relu', name="Concatenate_Inverse")(bottleneck)
# Decoder layer for each dataset
decoded_ge = Dense(ge_dim, activation='sigmoid',
name="Decoder_ge")(merge_inverse)
decoded_tfv = Dense(tfv_dim, activation='sigmoid',
name="Decoder_tfv")(merge_inverse)
# Combining Encoder and Decoder into an Autoencoder model
autoencoder = Model(inputs=[input_dim_ge, input_dim_tfv], outputs=[decoded_ge, decoded_tfv])
encoder = Model(inputs=[input_dim_ge, input_dim_tfv], outputs=bottleneck)
# Compile Autoencoder
autoencoder.compile(optimizer='adam',
loss={'Decoder_ge': 'mean_squared_error',
'Decoder_tfv': 'mean_squared_error'})
return autoencoder, encoder
def tile_gen(tile_path):
"""
Generate tile and tile information.
:param tile_path: path to tile folder
:return: tile and tile information (tile, (img_name, coordx, coordy))
"""
file_name = []
for tile_root, _, tile_files in os.walk(tile_path):
for tile_file in tile_files:
if tile_file.endswith(".jpeg"):
tile = Image.open(os.path.join(tile_root, tile_file))
tile = np.asarray(tile, dtype="int32")
tile = tile.astype(np.float32)
tile = np.stack([tile])
img_name, coordx, coordy = os.path.splitext(tile_file)[0].split("-")
file_name.append((img_name, coordx, coordy))
yield (tile, (img_name, coordx, coordy))
def k_means(input_x, n_cluster):
"""
Find cluster for each input spot using K-means.
:param input_x: latent variable for each spot
:param n_cluster (int): number of clusters
:return: cluster color for each spot
"""
y_pred = KMeans(init='k-means++', n_clusters=n_cluster, n_init=20, max_iter=1000).fit_predict(input_x)
counter = collections.Counter(y_pred)
sorted_counter = [i[0] for i in sorted(counter.items(), key=lambda x: x[1], reverse=True)]
color = [color_map[j] for j in [sorted_counter.index(i) for i in y_pred]]
return color
def scatter_plot(x_points, y_points, output=None, colors=None,
alignment=None, cmap=None, title='Scatter', xlabel='X',
ylabel='Y', image=None, alpha=1.0, size=10, vmin=None, vmax=None):
"""
Plot each spot with different colour based on assigned cluster on original whole slide image
:param x_points: X pixel coords
:param y_points: Y pixel coords
:return: scatter plot
"""
# Plot spots with the color class in the tissue image
fig, a = plt.subplots(figsize = (4,4), dpi = 180)
base_trans = a.transData
# Extend (left, right, bottom, top)
# The location, in data-coordinates, of the lower-left and upper-right corners.
# If None, the image is positioned such that the pixel centers fall on zero-based (row, column) indices.
extent_size = [1, 33, 35, 1]
# If alignment is None we re-size the image to chip size (1,1,33,35)
# Otherwise we keep the image intact and apply the 3x3 transformation
if alignment is not None and not np.array_equal(alignment, np.identity(3)):
base_trans = transforms.Affine2D(matrix=alignment) + base_trans
extent_size = None
# Create the scatter plot
sc = a.scatter(x_points, y_points, c=colors, edgecolor="none",
cmap=cmap, s=size, transform=base_trans, alpha=alpha,
vmin=vmin, vmax=vmax)
# Plot the image
if image is not None and os.path.isfile(image):
img = plt.imread(image)
a.imshow(img, extent=extent_size)
# Add labels and title
a.set_xlabel(xlabel)
a.set_ylabel(ylabel)
a.set_title(title, size=10)
if output is not None:
fig.savefig("{}.pdf".format(output),
format='pdf', dpi=180)
else:
fig.show()
# -
# Parameter for a certain image
IMG_PATH = '../dataset/image/CN51_C2_HE.jpg'
CM_PATH = '../dataset/cm/CN51_C2_2_stdata_aligned_counts_IDs.txt'
ATM_PATH = None
OUT_PATH = '../results/'
TILE_PATH = '../dataset/tile/CN51_C2_2'
FEATURE_PATH = os.path.join(OUT_PATH, 'feature')
EPOCHS = 10
CLUSTER = 2
BASENAME = os.path.split(TILE_PATH)[-1]
CLUSTER_PATH = os.path.join(os.path.join(OUT_PATH, 'cluster'), BASENAME)
mkdirs(FEATURE_PATH)
mkdirs(CLUSTER_PATH)
# If provided affine transformation matrix
if ATM_PATH:
atm_file = open(ATM_PATH, "r")
atm = atm_file.read().split(" ")
else:
atm = None
# Generate feature for each tile using pre-trained ResNet50 model
model = ResNet()
tiles = tile_gen(TILE_PATH)
features_gen(tiles, model, FEATURE_PATH)
# Load data
df_cm = pd.read_csv(CM_PATH, header=0, sep='\t',index_col=0)
df_tfv = pd.read_csv(os.path.join(FEATURE_PATH, BASENAME)+'.tsv', header=0, sep='\t')
df_tfv_t = df_tfv.transpose()
print("find {} spots in count matrix".format(len(df_cm.index.values.tolist())))
print("find {} spots in tile feature".format(len(df_tfv_t.index.values.tolist())))
assert len(df_tfv_t.index.values.tolist()) == len(df_cm.index.values.tolist())
cm_spot = list(df_cm.index.values)
df_tfv_t = df_tfv_t.reindex(cm_spot)
cm_val = df_cm.values
tfv_val = df_tfv_t.values
if ATM_PATH:
atm = parseAlignmentMatrix(ATM_PATH)
else:
atm = None
x_points = [float(i.split('x')[0]) for i in df_cm.index.values.tolist()]
y_points = [float(i.split('x')[1]) for i in df_cm.index.values.tolist()]
# Compile and train autoencoder
ae, encoder = combine_ae(cm_val.shape[1], tfv_val.shape[1])
ae.fit([cm_val, tfv_val], [cm_val, tfv_val], batch_size=32, epochs=EPOCHS, validation_split=0.2, shuffle=True)
# Find latent variables for each spot
bottleneck_representation = encoder.predict([cm_val, tfv_val])
# Find clusters by performing k-means clustering on latent variables
cluster = k_means(bottleneck_representation, CLUSTER)
# Plot cluster result on original whole slide image.
scatter_plot(x_points, y_points, colors=cluster, alignment=atm, image=IMG_PATH,
output=os.path.join(CLUSTER_PATH, '{}_{}'.format("combined_model", "both_data")))
| tutorial/Tutorial_Spacell_ALS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 1. Install graphviz, otherwise you will get an error
# 2. This example is taken from the following [link](https://github.com/apache/incubator-mxnet/issues/7943)
# +
import mxnet as mx
# -
net = mx.gluon.nn.HybridSequential()
with net.name_scope():
net.add(mx.gluon.nn.Dense(256, activation="relu"))
net.add(mx.gluon.nn.Dense(256, activation="relu"))
net.add(mx.gluon.nn.Dense(10))
# Option 1
print(net)
net.hybridize()
net.collect_params().initialize()
# +
# Option 2 (net must be HybridSequential if you want to plot the whole graph)
x = mx.sym.var('data')
sym = net(x)
mx.viz.plot_network(sym)
# -
from mxnet.gluon.model_zoo import vision
resnet18 = vision.resnet18_v1()
x = mx.sym.var('data')
sym = resnet18(x)
mx.viz.plot_network(sym)
# +
alexnet = vision.alexnet()
# -
x = mx.sym.var('data')
sym = alexnet(x)
mx.viz.plot_network(sym)
squeezenet = vision.squeezenet1_0()
x = mx.sym.var('data')
sym = squeezenet(x)
mx.viz.plot_network(sym)
densenet = vision.densenet121()
x = mx.sym.var('data')
sym = densenet(x)
mx.viz.plot_network(sym)
| notebooks/old-mxnet/mxnet_visualize_network_example1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## **Example. Estimating a population total under simple random sampling using transformed normal models**
# +
# %matplotlib inline
import random
import statistics as stat
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
import theano.tensor as tt
from scipy import stats
plt.style.use('seaborn-darkgrid')
plt.rc('font', size=12)
# %config InlineBackend.figure_format = 'retina'
# -
# List of populations of cities and towns in New York State in 1960,
# used in the first example of Section 7.6 and summarized in Table 7.2
# of Bayesian Data Analysis. Code = 400 if included in Sample 1 only; 300 if
# included in Sample 2 only; 200 if included in both samples; 100 if
# included in neither sample.
new_york = np.genfromtxt('data/newyork.txt', skip_header=7, dtype=(int, int))
print(new_york.shape)
# Let's find the sample 1 and sample 2.
# +
sample1 = []
sample2 = []
for i, j in new_york[:, :]:
if j == 400 or j == 200:
sample1.append(i)
if j == 300 or j == 200:
sample2.append(i)
sample1.sort()
sample2.sort()
# -
# As you may see, the length of `sample2` is 104.
print(len(sample2))
# I delete five entries and I insert a new datum in order to get the mean and the standard deviation right.
# +
sample2.sort()
inds = [1, 2, 3, 4, 5]
for i in sorted(inds, reverse=True):
del sample2[i]
sample2.append(1425350)
# -
print('The mean is {}'.format(stat.mean(sample2)))
print('The standard deviation is {}'.format(int(stat.stdev(sample2))))
print('The length is {}'.format(len(sample2)))
# #### **Sample1: initial analysis.**
# +
def logp_(value):
return tt.log(tt.pow(value, -1))
with pm.Model() as model_1:
mu = pm.Uniform('mu', lower=-5e5, upper=5e5)
sigma = pm.Uniform('sigma', lower=0, upper=5e5)
pm.Potential('sigma_log', logp_(sigma))
y_bar = pm.Normal('y_bar', mu=mu, sd=sigma, observed=sample1)
# -
with model_1:
trace_1 = pm.sample(draws=2_000, tune=2_000)
pm.traceplot(trace_1, varnames=['mu', 'sigma']);
df = pm.summary(trace_1, varnames=['mu', 'sigma'])
df
with model_1:
ppc_1 = pm.sample_posterior_predictive(trace_1, samples=100, vars=[y_bar, mu, sigma])
# The goal is to find the $95\, \%$ posterior distribution for $y_{\text{total}}$. How? Remember the equation (7.18):
#
# $$y_{\text{total}} = N \cdot \overline {y} = n\cdot \overline y_{\text{obs}} + (N − n) \overline y_{\text{mis}}.$$
#
# We need to find $\overline y_{\text{mis}}$. Page 205 says how to find that value (I will try to follow it closely).
# With each `mu` and each `sigma`, I use `np.random.normal` to obtain 100 values, then I get the mean of that array.
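# As an aside, the per-draw loop below can also be written as a single vectorized call using NumPy broadcasting. A sketch with made-up parameter draws (not the actual posterior samples):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in posterior draws of mu and sigma (3 draws for illustration)
mus = np.array([0.0, 1.0, 2.0])
sigmas = np.array([1.0, 1.0, 1.0])

# One (n_draws, 100) matrix of normal samples, then a row-wise mean:
# equivalent to looping over (mu, sigma) pairs and averaging each batch
draws = rng.normal(loc=mus[:, None], scale=sigmas[:, None], size=(len(mus), 100))
y_miss = draws.mean(axis=1)
```

The `[:, None]` reshapes broadcast each parameter across its own row of samples, so the result has one mean per posterior draw, just like the loop.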
# +
y_miss_1 = []
for i, j in zip(ppc_1['mu'], ppc_1['sigma']):
temp = np.random.normal(loc=i, scale=j, size=100)
y_miss_1.append(np.mean(temp))
y_miss_1 = np.array(y_miss_1)
y_miss_1[40:50]
# -
# Now, we find $y_{\text{total}}$. Since there are 100 values, we can find the posterior interval.
N = 804
n = 100
y_total = n * np.mean(ppc_1['y_bar'], axis=1) + (N - n) * y_miss_1
perc_25 = int(np.percentile(y_total, 2.5))
perc_975 = int(np.percentile(y_total, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# Another way of finding that interval is using `ppc_1['y_bar']`, you have to use `np.mean(ppc['y_bar'], axis=1)` to find $y_{\text{total}}$, instead of `y_miss_1`:
#
# `y_total = n * np.mean(ppc_1['y_bar'], axis=1) + (N - n) * np.mean(ppc_1['y_bar'], axis=1)`
y_to = n * np.mean(ppc_1['y_bar'], axis=1) + (N - n) * np.mean(ppc_1['y_bar'], axis=1)
perc_25 = int(np.percentile(y_to, 2.5))
perc_975 = int(np.percentile(y_to, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# Both methods give similar intervals; nevertheless, I use the first method in the following examples. So the interval is $[-9.3 \times 10^6, 45.3 \times 10^6]$. The modified interval is $[1.9 \times 10^6, 45.3 \times 10^6]$. Remember that the numbers can change if you rerun this notebook.
# We repeat the above analysis under the assumption that the $N = 804$ values in the complete data follow a lognormal distribution.
with pm.Model() as model_1_log:
mu = pm.Uniform('mu', lower=0, upper=5e2)
sigma = pm.Uniform('sigma', lower=0, upper=5e2)
# pm.Potential('sigma_log', logp_(sigma))
y_bar = pm.Lognormal('y_bar', mu=mu, sd=sigma, observed=sample1)
with model_1_log:
trace_2 = pm.sample(draws=2000, tune=3000)
pm.traceplot(trace_2, varnames=['mu', 'sigma']);
df2 = pm.summary(trace_2, varnames=['mu', 'sigma'])
df2
with model_1_log:
ppc_1_log = pm.sample_posterior_predictive(trace_2, samples=100, vars=[y_bar, mu, sigma])
# Again, we need to find the posterior interval for $y_{\text{total}}$. Pay attention to the code.
# +
y_miss_2 = []
for i, j in zip(ppc_1_log['mu'], ppc_1_log['sigma']):
temp = np.exp(np.random.normal(loc=i, scale=j, size=100))
y_miss_2.append(np.mean(temp))
y_miss_2 = np.array(y_miss_2)
y_miss_2[50:60]
# -
y_total_2 = n * np.mean(ppc_1_log['y_bar'], axis=1) + (N - n) * y_miss_2
perc_25 = int(np.percentile(y_total_2, 2.5))
perc_975 = int(np.percentile(y_total_2, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# So the interval is $[3.7 \times 10^6, 12.4 \times 10^6]$. Again if you rerun this notebook, the numbers can change.
# #### **Sample 1: checking the lognormal model.**
cond = np.sum(ppc_1_log['y_bar'], axis=1) >= sum(sample1)
np.sum(cond)
# 0 of 100 has a greater sum than `sum(sample1)`. That number can change if you rerun this notebook.
# #### **Sample 1: extended analysis.**
# Using SciPy, I find $\phi$. The function `scipy.stats.boxcox` uses the maximum likelihood estimation, although it is known to have [some issues](https://github.com/scipy/scipy/issues/6873). [This question](https://stats.stackexchange.com/questions/337527/parameter-lambda-of-box-cox-transformation-and-likelihood) explains it better and why it is used [here](https://stats.stackexchange.com/questions/202530/how-is-the-box-cox-transformation-valid).
# +
from scipy import stats
stats.boxcox(sample1)
# -
# So $\phi$ is $-0.1688902053661071$.
# +
phi = stats.boxcox(sample1)[1]
# An inverse Box-Cox transformation is needed
def invbox_cox(data, phi):
if phi == 0:
return np.exp(data)
else:
return np.exp(np.log(data * phi + 1) * 1 / phi)
sample1 = np.array(sample1)
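# As a quick sanity check (not part of the original analysis), the inverse transformation should undo `stats.boxcox` exactly on positive data, since the forward transform is $y = (x^\phi - 1)/\phi$ and therefore $x = (y\phi + 1)^{1/\phi}$:

```python
import numpy as np
from scipy import stats

vals = np.array([10.0, 50.0, 200.0, 1000.0])
transformed, lam = stats.boxcox(vals)  # lam is the MLE of phi

# Inverse of y = (x**lam - 1) / lam, i.e. x = (y*lam + 1)**(1/lam)
recovered = np.exp(np.log(transformed * lam + 1) / lam)
```

Note that `transformed * lam + 1` equals `vals ** lam`, which is always positive, so the log never fails on a valid round trip; the `nan` trouble later in this notebook comes from feeding the inverse values *outside* the range the forward transform can produce.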
# +
data_transformed = stats.boxcox(sample1)[0]
with pm.Model() as model_trans:
mu = pm.Uniform('mu', lower=0, upper=1e2)
sigma = pm.Uniform('sigma', lower=0, upper=5e1)
y_phi = pm.Normal('y_phi', mu=mu, sd=sigma, observed=data_transformed)
# -
with model_trans:
trace_3 = pm.sample(draws=2000, tune=2000)
pm.traceplot(trace_3);
pm.summary(trace_3)
with model_trans:
ppc_trans = pm.sample_posterior_predictive(trace_3, samples=100, vars=[y_phi, mu, sigma])
# Again, we need to find the posterior interval for $y_{\text{total}}$. Pay attention to the code.
# +
y_miss_3 = []
for i, j in zip(ppc_trans['mu'], ppc_trans['sigma']):
temp = np.random.normal(loc=i, scale=j, size=100)
y_miss_3.append(np.mean(temp))
y_miss_3 = np.array(y_miss_3)
y_miss_3[40:50]
# -
y_total_3 = n * np.mean(invbox_cox(ppc_trans['y_phi'], phi), axis=1) + (N - n) * invbox_cox(y_miss_3, phi)
perc_25 = int(np.percentile(y_total_3, 2.5))
perc_975 = int(np.percentile(y_total_3, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# So the interval is $[1.6 \times 10^6, 43.9 \times 10^6]$. The numbers can change if you rerun this notebook.
cond2 = np.sum(invbox_cox(ppc_trans['y_phi'], phi), axis=1) >= sum(sample1)
np.sum(cond2)
# 30 of 100 have a greater sum than `sum(sample1)`. That number can change if you rerun this notebook.
# Everything was done with `sample1`. We need to repeat the analysis with `sample2`.
# #### **Sample2: initial analysis.**
with pm.Model() as model_2:
mu = pm.Uniform('mu', lower=-5e5, upper=5e5)
sigma = pm.Uniform('sigma', lower=0, upper=5e5)
pm.Potential('sigma_log', logp_(sigma))
y_bar = pm.Normal('y_bar', mu=mu, sd=sigma, observed=sample2)
with model_2:
trace_4 = pm.sample(draws=2_000, tune=2_000)
pm.traceplot(trace_4, varnames=['mu', 'sigma']);
df3 = pm.summary(trace_4, varnames=['mu', 'sigma'])
df3
with model_2:
ppc_2 = pm.sample_posterior_predictive(trace_4, samples=100, vars=[y_bar, mu, sigma])
# With each `mu` and each `sigma`, I use `np.random.normal` to obtain 100 values, then I get the mean of that array.
# +
y_miss_4 = []
for i, j in zip(ppc_2['mu'], ppc_2['sigma']):
temp = np.random.normal(loc=i, scale=j, size=100)
y_miss_4.append(np.mean(temp))
y_miss_4 = np.array(y_miss_4)
y_miss_4[40:50]
# -
# Now, we find $y_{\text{total}}$. Since there are 100 values, we can find the posterior interval.
y_total_4 = n * np.mean(ppc_2['y_bar'], axis=1) + (N - n) * y_miss_4
perc_25 = int(np.percentile(y_total_4, 2.5))
perc_975 = int(np.percentile(y_total_4, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# So the interval is $[-18.8 \times 10^6, 45.2 \times 10^6]$. The modified interval is $[3.8 \times 10^6, 45.2 \times 10^6]$. Remember that the numbers can change if you rerun this notebook.
# We repeat the above analysis under the assumption that the $N = 804$ values in the complete data follow a lognormal distribution.
with pm.Model() as model_2_log:
mu = pm.Uniform('mu', lower=0, upper=5e2)
sigma = pm.Uniform('sigma', lower=0, upper=5e2)
# pm.Potential('sigma_log', logp_(sigma))
y_bar = pm.Lognormal('y_bar', mu=mu, sd=sigma, observed=sample2)
with model_2_log:
trace_5 = pm.sample(draws=2000, tune=3000)
pm.traceplot(trace_5, varnames=['mu', 'sigma']);
df4 = pm.summary(trace_5, varnames=['mu', 'sigma'])
df4
with model_2_log:
ppc_2_log = pm.sample_posterior_predictive(trace_5, samples=100, vars=[y_bar, mu, sigma])
# Again, we need to find the posterior interval for $y_{\text{total}}$. Pay attention to the code.
# +
y_miss_5 = []
for i, j in zip(ppc_2_log['mu'], ppc_2_log['sigma']):
temp = np.exp(np.random.normal(loc=i, scale=j, size=100))
y_miss_5.append(np.mean(temp))
y_miss_5 = np.array(y_miss_5)
y_miss_5[50:60]
# -
y_total_5 = n * np.mean(ppc_2_log['y_bar'], axis=1) + (N - n) * y_miss_5
perc_25 = int(np.percentile(y_total_5, 2.5))
perc_975 = int(np.percentile(y_total_5, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# So the interval is $[3.5 \times 10^6, 10.5 \times 10^6]$. Again if you rerun this notebook, the numbers can change.
# #### **Sample 2: checking the lognormal model.**
cond3 = np.sum(ppc_2_log['y_bar'], axis=1) >= sum(sample2)
np.sum(cond3)
# 0 of 100 has a greater sum than `sum(sample2)`. That number could change if you rerun this notebook.
# #### **Sample 2: extended analysis.**
# Again, I use the function `scipy.stats.boxcox`.
# +
from scipy import stats
stats.boxcox(sample2)
# -
#
# So $\phi$ is $-0.25583481227052385$.
phi2 = stats.boxcox(sample2)[1]
sample2 = np.array(sample2)
# +
data_transformed = stats.boxcox(sample2)[0]
with pm.Model() as model_trans_2:
mu = pm.Uniform('mu', lower=0, upper=1e2)
sigma = pm.Uniform('sigma', lower=0, upper=5e1)
y_phi = pm.Normal('y_phi', mu=mu, sd=sigma, observed=data_transformed)
# -
with model_trans_2:
trace_6 = pm.sample(draws=2000, tune=2000)
pm.traceplot(trace_6);
pm.summary(trace_6)
with model_trans_2:
ppc_trans_2 = pm.sample_posterior_predictive(trace_6, samples=100, vars=[y_phi, mu, sigma])
# Again, we need to find the posterior interval for $y_{\text{total}}$. Pay attention to the code.
# +
y_miss_6 = []
for i, j in zip(ppc_trans_2['mu'], ppc_trans_2['sigma']):
temp = np.random.normal(loc=i, scale=j, size=100)
y_miss_6.append(np.mean(temp))
y_miss_6 = np.array(y_miss_6)
y_miss_6[40:50]
# -
y_total_6 = n * np.mean(invbox_cox(ppc_trans_2['y_phi'], phi2), axis=1) + (N - n) * invbox_cox(y_miss_6, phi2)
perc_25 = int(np.percentile(y_total_6, 2.5))
perc_975 = int(np.percentile(y_total_6, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# Ok, so we are getting values so high that `invbox_cox` outputs `nan`.
# Indices where the values are so high
np.argwhere(np.isnan(invbox_cox(ppc_trans_2['y_phi'], phi2)))
# We could remove the `nan` values to obtain the interval (which is not good)
y_total_6 = y_total_6[~np.isnan(y_total_6)]
perc_25 = int(np.percentile(y_total_6, 2.5))
perc_975 = int(np.percentile(y_total_6, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# Another solution is to modify `invbox_cox` a little (which is not good)
def invbox_cox_mod(data, phi):
if phi == 0:
return np.exp(data)
else:
return np.exp(np.log(np.abs(data * phi + 1)) * 1 / phi) # np.abs() here
y_total_6 = n * np.mean(invbox_cox_mod(ppc_trans_2['y_phi'], phi2), axis=1) + (N - n) * invbox_cox_mod(y_miss_6, phi2)
perc_25 = int(np.percentile(y_total_6, 2.5))
perc_975 = int(np.percentile(y_total_6, 97.5))
print(f'The 95% interval is [{perc_25:.2e}, {perc_975:.2e}]')
# The numbers are... crazy.
cond4 = np.sum(invbox_cox(ppc_trans_2['y_phi'], phi2), axis=1) >= sum(sample2)
np.sum(cond4)
# 27 of 100 have a greater sum than `sum(sample2)`. That number can change if you rerun this notebook.
# %load_ext watermark
# %watermark -iv -v -p theano,scipy,matplotlib -m
| BDA3/chap_07.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Challenge: What's a probabilistic estimate for the number of days to drill a well?
#
# Often we use neighboring data points to build an estimate:
# * Number of days to drill a well.
# * Volume of oil produced after 90 days.
# * etc.
#
# To do this work we select data from the surrounding area and look at averages and medians. But there are a lot more insights we can pull from these data if we use different tools. </br>
#
# For this notebook we are going to work with a statistical method called survival analysis, which models the duration of events. Survival analysis is used in the medical world to measure the effectiveness of new drugs, but it can be applied to any type of data that has a duration.
#
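# To demystify what the Kaplan-Meier fitter computes before we use `lifelines`, here is a minimal from-scratch sketch of the estimator (plain NumPy, not part of the original tutorial): at each observed event time it multiplies the running survival probability by $(1 - d/n)$, where $d$ is the number of events at that time and $n$ is the number still at risk.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Product-limit (Kaplan-Meier) estimate of the survival function S(t)."""
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    # Only uncensored durations define event times
    event_times = np.sort(np.unique(durations[observed]))
    surv, s = [], 1.0
    for t in event_times:
        n_at_risk = np.sum(durations >= t)               # still being followed at t
        d_events = np.sum((durations == t) & observed)   # events exactly at t
        s *= 1.0 - d_events / n_at_risk
        surv.append(s)
    return event_times, np.array(surv)

# Three uncensored durations: the curve steps 1 -> 2/3 -> 1/3 -> 0
times, surv = kaplan_meier([1, 2, 3], [True, True, True])
```

Censored observations (e.g. leaders still in office when the data were collected) shrink the risk set without triggering a drop, which is exactly why the method needs the "observed" flag.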
# ## Introduction - Survival Analysis of Political Regimes
# Example taken from:</br>
# https://lifelines.readthedocs.io/en/latest/Survival%20analysis%20with%20lifelines.html
#Import libraries
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from lifelines import KaplanMeierFitter
import scipy.misc
sns.set()
# Let's load and inspect the example dataset of regimes.
from lifelines.datasets import load_dd
data = load_dd()
data.head()
# Interesting stuff! For this analysis we need two columns, "duration" and "observed". The former is the data to make the plot, and the latter filters the data to only leaders that finished their term naturally: no coups or deaths in office.</br>
#
# Let's pull these columns out into their own objects: T and E</br>
#
# Next we'll call a method of survival analysis called the Kaplan-Meier estimator and fit it to the data.
# +
#Select data for analysis
T = data["duration"]
E = data["observed"]
#Initiate model and fit model
kmf = KaplanMeierFitter()
kmf.fit(T, event_observed=E)
# -
# The model is built; let's plot the results.
#Plot a Survival Function
kmf.plot(figsize=(10,10))
plt.title('Survival function of political regimes');
# ##### What this graph is telling you
# * x-axis: duration in office in years
# * y-axis: probability of a leader still around after x years in office
# * The shaded area is the confidence interval of the data.
# * For Example: _There's a 20% chance that a leader will be in office more than 8 years._
#
# ##### However, that's not the whole story . . .
# There are many different types of governments, which behave differently. Let's create another plot, but this time separate Democratic vs. Non-Democratic regimes.
# +
#Survival analysis plots for Democratic vs. Non-Democratic regimes
ax = plt.subplot(111)
dem = (data["democracy"] == "Democracy") #filter for regimes
#Fit two different models
kmf.fit(T[dem], event_observed=E[dem], label="Democratic Regimes")
kmf.plot(ax=ax, figsize=(10,10))
kmf.fit(T[~dem], event_observed=E[~dem], label="Non-democratic Regimes")
kmf.plot(ax=ax)
#plot
plt.ylim(0, 1);
plt.title("Lifespans of different global regimes");
# -
# This plot makes sense, as dictators are more likely to remain in power longer than democratically elected officials.
#
# #### Now let's try this technique with well data.
# ---
# ## Exercise - Survival Analysis of Days Drilling in the Mississippi Canyon Protraction, GOM.
#
# The Mississippi Canyon Protraction Area in the Gulf of Mexico is one of the most prolific parts of the basin, with some of its largest fields (Mars/Ursa, Thunderhorse). Thousands of wells have been drilled here by different operators, and likely many more will be. When planning a well, it is common to look at surrounding wells to estimate the time it will take to drill. Instead of coming up with one number (i.e. average days of drilling), let's calculate a probability distribution.
#
# ## Step 1. Load Data and Generate Calculated Columns
# We will be loading these data from a CSV file downloaded from the U.S. BOEM.
# +
#Load all well drilled in protraction area
df = pd.read_csv('../data/BoreholeMC.csv')
#Show first 5 columns
df.head()
# -
# Next we need to calculate the days drilled for each well using the columns "Spud Date" and "Total Depth Date". We'll also need to filter out empty values, as the Kaplan-Meier method doesn't accept null values.
# +
#Remove empty values
days=df[['Total Depth Date','Spud Date']].apply(pd.to_datetime, errors='coerce').dropna()
#Calculate time difference
days['drill_days']=days['Total Depth Date']-days['Spud Date']
#Convert Date Difference to Days
days['drill_days'] = days['drill_days']/np.timedelta64(1, 'D')
#Show first 5 rows
days.head()
# -
#Initiate model and fit model
kmf = KaplanMeierFitter()
kmf.fit(days.drill_days, event_observed=None)
#Plot a Survival Function
kmf.plot(figsize=(10,10))
plt.title('Drilling Days Mississippi Canyon Protraction Area');
# #### Does this look right?
# No, it does not.
# ## Step 2. Clean and Filter Data
#
# The plot above is wrong; there's no way a well would have been drilling for __13 years! There must be some spurious data.__ Let's investigate and clean.
#Use Describe to look at metrics for dataframe
days.describe().T
# This quick description tells us a lot:
# * The min value is 0 days. A deepwater well can't be drilled in 0 days.
# * As expected, the max value is too high.
# * The P25, P50, P75 look right, implying that the spurious data is at the extremes.
#
# We need to figure out reasonable cutoffs. To do so let's create a histogram of drilling days.
#Histogram of drilling days
fig = plt.subplots(figsize=(10,8))
plt.hist(days.drill_days, range=(0,5000))
plt.xlabel('Drilling Days')
plt.title ('Histogram of Drilling Days')
plt.show()
# __Histograms are terrible,__ especially this one, as a large proportion of the data is <500 days. An ECDF plot, which shows the proportion of data points at or below a certain value, might be more instructive.
# +
#Generate inputs for ECDF
n = len(days.drill_days)
x = np.sort(days.drill_days.values)
y = np.arange (1,n+1)/n
#Plot ECDF
fig = plt.subplots(figsize=(10,8))
plt.plot(x, y, marker='.', linestyle='none')
plt.title('ECDF of Drilling Days in MC')
plt.xlabel('Days Drill')
plt.ylabel('Proportion of Data')
plt.show()
# -
# Still heavily skewed, but if we zoom in to the upper right of the image we can make better sense of it.
#Plot zoom of ECDF upper right
fig = plt.subplots(figsize=(10,8))
plt.plot(x, y, marker='.', linestyle='none');
plt.xlim(100,500)
plt.ylim(0.9,1)
plt.title('Zoom - ECDF of Drilling Days in MC')
plt.xlabel('Days Drill')
plt.ylabel('Proportion of Data')
plt.show()
# Now that we can easily read the plot, we see that __93% of the data is less than 150 days,__ and that 96% of the data is less than 365 days. Let's use this information to filter the data down to a more realistic range. No one plans to drill a well for over a year. Also, it is unlikely that an offshore well can be drilled in <7 days. Let's use the `query` function to reduce the days range.
#
# <br /> _If you would like to experiment with different numbers go ahead and update the code block below._
# +
#Filter data to 7<x<150
days_filtered = days.query("drill_days<150 & drill_days>7")
#Describe filtered data
days_filtered.describe().T
# -
#Plot filtered Survival Function
kmf.fit(days_filtered.drill_days, event_observed=None)
kmf.plot(figsize=(10,10));
# This plot makes more sense. _We can read the plot as: 50% of the wells in MC took about 35 days or fewer to drill._ The narrow confidence interval shows that this distribution is well constrained.
#
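# The 50% reading above can be made precise: given the survival curve as arrays, the median is the first time the curve drops to 0.5 or below. A small helper sketch (illustrative values, not the actual fitted curve):

```python
import numpy as np

def percentile_from_survival(times, surv, p=0.5):
    """First time at which the survival curve falls to probability p or below."""
    surv = np.asarray(surv)
    below = surv <= p
    if not below.any():
        return np.inf  # the curve never reaches p
    return times[np.argmax(below)]  # argmax finds the first True

# Illustrative curve: survival first drops below 0.5 at t = 35
t = np.array([10, 20, 35, 60])
s = np.array([0.9, 0.7, 0.48, 0.2])
median_days = percentile_from_survival(t, s)
```

The same helper reads off any percentile (e.g. `p=0.2` for the time by which 80% of wells are finished), which is handy when a single planning number is needed from the curve.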
# ## Step 3. Breaking out the data - Exploration vs. Development
# The graph above is okay but just like in the introduction we aren't taking account of the differences in the data. One simple division we can make is to separate Exploration and Development wells.
#
# To divide the wells we need to grab the "Type Code" column from the original data source. One way to do that is to merge the original dataframe with the days_filtered dataframe. You may have noticed that pandas has an index column, and that index has remained unchanged through our manipulations and filters. This allows us to match index columns from different dataframes to merge data.
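# A tiny illustration of how merging on the index works (toy frames, not the BOEM data): only rows whose index appears in both frames survive the default inner join.

```python
import pandas as pd

left = pd.DataFrame({'a': [1, 2, 3]}, index=[0, 1, 2])
right = pd.DataFrame({'b': [10, 30]}, index=[0, 2])

# left_index/right_index tell merge to join on the row labels,
# so row 1 (present only in `left`) is dropped by the inner join
merged = pd.merge(left, right, left_index=True, right_index=True)
```

This is why the filtered `drill_days` column lines up with the right wells below: the row labels, not the row positions, drive the match.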
# +
#Merge dataframes
df_filtered = pd.merge(df, days_filtered['drill_days'], left_index=True, right_index=True)
#New dataframe of data for analysis; drop any empty cells
df_filtered=df_filtered[['drill_days','Type Code']].dropna()
#Create separate dataframes for Exploration and Development wells
expl_days = df_filtered['drill_days'][df_filtered["Type Code"] == "E"].dropna()
dev_days = df_filtered['drill_days'][df_filtered["Type Code"] != "E"].dropna()
# -
#Survival plot for Exploration vs. Development
ax = plt.subplot(111)
kmf.fit(expl_days, event_observed=None, label="Exploration Wells")
kmf.plot(ax=ax, figsize=(10,10))
kmf.fit(dev_days, event_observed=None, label="Development Wells")
kmf.plot(ax=ax)
plt.ylim(0, 1);
plt.xlabel('Days Drilling')
plt.title("Drilling Days for Exploration vs. Development Wells");
# This is more informative and it makes sense. Development wells (orange) should take shorter time to drill than Exploration wells (blue).
#
# ## Step 4. Functions and Exploring the data
#
# Now that we have the data in shape, we can ask a lot more questions, like:
# * How do Exploration and Development wells compare for different companies?
# * How do different companies compare in their drill times?
#
# There's an adage that goes:
#
# __"If you've repeated a workflow, it's time to write a function."__
#
# Functions in Python allow us to save a sequence of code and then call it when needed, with the ability to pass in new data or variables.
#
# The function below allows us to compare Exploration and Development wells for a particular company. We'll call this function "company_expl_dev_lifelines" and it has two inputs inside the parentheses:
# 1. df - this is a placeholder for a dataframe with a "drill_days" column
# 2. company - this is a placeholder for the name of a company
# +
#function to compare Exploration and Development wells for a particular company
def company_expl_dev_lifelines(df, company):
#Filter Data
dn= df.loc[df['Company Name'].str.contains(company)]
dk = pd.merge(dn, days_filtered['drill_days'], left_index=True, right_index=True)
dk=dk[['drill_days','Type Code']].dropna()
de = dk['drill_days'][dk["Type Code"] == "E"].dropna()
dd = dk['drill_days'][dk["Type Code"] != "E"].dropna()
#Make Plot
ax = plt.subplot(111)
kmf.fit(de, event_observed=None, label="Exploration Wells")
kmf.plot(ax=ax, figsize=(10,10))
kmf.fit(dd, event_observed=None, label="Development Wells")
kmf.plot(ax=ax)
plt.ylim(0, 1);
plt.title(f"Drilling Days for {company} - Exploration vs. Development Wells");
plt.xlabel('Days Drilling')
plt.show()
# -
# Let's try this function out with Shell.
company_expl_dev_lifelines(df, 'Shell')
# Now it's your turn to pick companies to plot. To help you find names, below is a bar chart of the most prolific drillers in MC Protraction. Note how the confidence intervals widen when there are fewer data points (e.g., Taylor Energy).
#Quick Plot of who's drilled the most in the protraction
comp_counts = df['Company Name'].value_counts()
comp_counts = comp_counts[comp_counts>50]
comp_counts.plot(kind='barh', figsize=(5,5), title='Top Operators in MC (>50 Wells)', label='# Wells');
company_expl_dev_lifelines(df, 'Taylor')
# ### How do different companies compare in their drill times?
#
# Below is a similar-looking function, but it compares wells from two different companies. Note that you now need to supply two company names.
def company_compare_lifelines(df, company1, company2):
#Filter Data
dk = pd.merge(df, days_filtered['drill_days'], left_index=True, right_index=True)
dk=dk[['drill_days','Type Code']].dropna()
dn= dk.loc[df['Company Name'].str.contains(company1)].dropna()
do = dk.loc[df['Company Name'].str.contains(company2)].dropna()
#Make Plot
ax = plt.subplot(111)
kmf.fit(dn.drill_days, event_observed=None, label=company1)
kmf.plot(ax=ax, figsize=(10,10))
kmf.fit(do.drill_days, event_observed=None, label=company2)
kmf.plot(ax=ax)
plt.ylim(0, 1);
plt.title(f"Drilling Days for {company1} vs. {company2}");
plt.xlabel('Days Drilling')
plt.show()
company_compare_lifelines(df, 'Shell', 'Exxon')
# There's lots more to explore with this survival analysis: different cuts of the data, different duration measures, analysis of the distributions, etc.
#
# Where else do you have duration data that might fit well in these kinds of plots?
| notebooks/Drilling_days.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rachana2522/FLAT-LAB/blob/main/FlatpDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="CvVDHDM0qi3e" outputId="3ca23283-e941-475c-df27-57b134714d72"
pip install automata-lib
# + id="1B1BqotgrTLU"
from automata.pda.dpda import DPDA
dpda = DPDA(
states={'q0','q1','qf'},
input_symbols={'a','b'},
stack_symbols={'z','x'},
transitions={
'q0':{
'a':{
'z':('q0',('x','z')),
'x':('q0',('x','x'))
},
'b':{
'x':('q1','')}
},
'q1':{
'b':{'x':('q1','')},
'':{'z':('qf','')}
}
},
initial_state='q0',
initial_stack_symbol='z',
final_states={'qf',},
acceptance_mode='both'
)
# + colab={"base_uri": "https://localhost:8080/"} id="E0JC5iAVxB1_" outputId="0cbfe4a9-0b9c-414c-ea5b-23572d83f6c7"
if dpda.accepts_input('aabb'):
print('accepted')
else:
print('Not accepted')
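As a quick cross-check that does not depend on automata-lib, the language this DPDA accepts (a^n b^n with n >= 1) can be simulated with a plain Python list as the stack. This is an illustrative sketch, not part of the library's API:

```python
# A hand-rolled stack simulation of the same language, a^n b^n with n >= 1.
def accepts_anbn(s):
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:           # an 'a' after a 'b' can never be accepted
                return False
            stack.append('x')    # mirror the DPDA pushing 'x' per 'a'
        elif ch == 'b':
            seen_b = True
            if not stack:        # more b's than a's
                return False
            stack.pop()          # mirror the DPDA popping one 'x' per 'b'
        else:
            return False         # symbol outside the input alphabet {a, b}
    return seen_b and not stack  # every 'a' matched by exactly one 'b'

for s in ['aabb', 'ab', 'aab', 'abb', 'ba']:
    print(s, '->', 'accepted' if accepts_anbn(s) else 'not accepted')
```

The results should agree with `dpda.accepts_input` on every string over {a, b}.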
# + id="6T1cWfucyNMj"
from automata.pda.dpda import DPDA
dpda2 = DPDA(
states={'q0','q1','qf'},
input_symbols={'a','b'},
stack_symbols={'z','x'},
transitions={
'q0':{
'a':{
'z':('q0',('x','z')),
'x':('q0',('x','x'))
},
'b':{
'x':('q1','')},
},
'q1':{
'b':{'x':('q1','')},
'':{'z':('qf','')}
}
},
initial_state='q0',
initial_stack_symbol='z',
final_states={'q0','qf'},
acceptance_mode='both'
)
# + colab={"base_uri": "https://localhost:8080/"} id="N1xKWfNS18Z-" outputId="8b657f8e-b932-4232-8cbf-b8a7b43857d1"
input_str2 = input("Enter: ")
if dpda2.accepts_input(input_str2):
    print("string is accepted")
else:
    print("string is not accepted")
| FlatpDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: wri-env
# language: python
# name: wri-env
# ---
# ### 1. <NAME> data
import sys
sys.path.append("../../")
sys.path.append("../../../")
from tasks.data_loader.src.utils import *
import json
file = load_file("../input/Chile_V1_210108.json")
sent_label_map = labeled_sentences_from_dataset(file)
len(sent_label_map)
sent_label_map['0002998_1']
# ### Sentences that start with a comma
starting_comma = [(sent_id, sent) for sent_id, sent in sent_label_map.items() if len(sent['text'].strip()) != 0 and sent['text'][0] == ","]
len(starting_comma), starting_comma[20:30]
# ### Sentences that are empty
empty = [(sent_id, sent) for sent_id, sent in sent_label_map.items() if len(sent['text'].strip()) == 0]
len(empty), empty[20:30]
# ### Sentences that start with a parenthesis
starting_parenthesis = [(sent_id, sent) for sent_id, sent in sent_label_map.items() if len(sent['text'].strip()) != 0 and sent['text'][0] == ")"]
len(starting_parenthesis), starting_parenthesis[20:30]
# ### Sentences that start with a number
starting_num = [(sent_id, sent) for sent_id, sent in sent_label_map.items() if len(sent['text'].strip()) != 0 and sent['text'][0].isnumeric()]
len(starting_num), starting_num[1], starting_num[20:30]
# ### Sentences that end in a connector like "de", "un", "y", etc (also repeated sentences?)
connectors = {"de", "un", "y", "que", "el", "la", "los", "lo", "las",
"les", "ellos", "ellas", "por", "cual", "una", "unas", "unos",
"en", "es", "esta"}
ending_connector = [(sent_id, sent) for sent_id, sent in sent_label_map.items() if len(sent['text'].strip()) != 0 and sent['text'].split()[-1] in connectors]
len(ending_connector), ending_connector[10:30]
# # Results from data checking 2: Chile_V1_210108.json
# - Total sents: 188046
# - Empty sentences: 1324
# - Sentences that start with a comma: 1201
# - Sentences that start with a ")": 642
# - Sentences that start with a number: 6170 (Could also include good examples, like bullet points!)
# - Sentences being cut: ~2392, tried checking for connectors ("de", "un", "y", "que", etc.) at the end of sentence and also commas
# # Results from data checking 1: Chile.json
# - 154436 sentences instead of 79953
# - Sentences that start with a comma: 1018
# - ", en el punto 4,3"
# - ", de 10 de julio de 2018;"
# - ", Empresa Eléctrica de Arica S"
# - ", y su aplicación será competencia del Juzgado de Policía Local de la comuna"
#
# - Sentences that start with a ")": 1338
# - "), de fecha 19 de diciembre de 2008, y sus modificaciones, que autoriza llamados a postulación para subsidios habitacionales en sistemas y programas habitacionales que indica durante el año 2009 y señala el monto de los recursos destinados, entre otros, a la atención a través del sistema regulado por el DS Nº 145 (V"
# - "), en la letra a"
#
# - Sentences that start with a number: 12389
# - Bad examples:
# - "25 634 5414 Los Muermos 4115-7315"
# - "3.815 de 2003"
# - "11 Aguas Calientes 4 636365 7353860 8 y 10"
# - Unclear examples:
# - "2018 del Director (S) del Servicio de Salud Viña del Mar - Quillota"
# - Good examples:
# - "2º Establécese el trámite: TCB1 "Declaración de Plantas de Biogás" para los siguientes tipos de Plantas:"
# - "1) Que, el artículo 72º-19 de la Ley establece que la Comisión deberá fijar, mediante resolución exenta, las normas técnicas que rijan los aspectos económicos, de seguridad, coordinación, calidad, información y económicos del funcionamiento del sector eléctrico, debiendo establecer un plan de trabajo anual que permita proponer, facilitar y coordinar el desarrollo de tales normas, en el marco de un proceso público y participativo, cuyas normas deben ser establecidas en un reglamento;"
# - "6,5 La madera de los embalajes y los pallets deberá estar libre de corteza y de daños causados por insectos"
#
#
# - Sentences being cut: ~8205, tried checking for connectors ("de", "un", "y", "que", etc.) at the end of sentence and also commas
# - "+ Se lavará el carro, en el área sucia y"
# - "+ Debido al riesgo de los pacientes que en él"
# - "más dos profesionales del área de la"
# - "construcción, un profesional del"
| tasks/text_preprocessing/notebooks/test_sentence_files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AnomieBOT
#
# https://en.wikipedia.org/wiki/User:AnomieBOT
#
# It has a practically endless list of tasks, has been operating since at least 2008, and uses 5 different accounts to make edits
#
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
df = pd.read_csv('../revs_scored_jan.tsv', sep='\t', header=0)
days = df.rev_timestamp.map(lambda ts: datetime.utcfromtimestamp(ts).day)
df['day'] = days
# +
sdf_anomiebot = df[df.day<22]
sdf_anomiebot = sdf_anomiebot[sdf_anomiebot.user_text == "AnomieBOT"]
sdf_reverted = sdf_anomiebot[sdf_anomiebot.is_reverted]
sdf_reverted = sdf_reverted[sdf_reverted.seconds_to_revert.astype('str').astype('int')<86400]
# -
sdf1 = sdf_reverted.loc[sdf_reverted.is_reverted, ["rev_id", "user_text", "revert_id", "page_namespace"]]
sdf1.revert_id = sdf1.revert_id.astype('int')
sdf2 = df[df.user_is_bot == False]
sdf2 = sdf2[sdf2.is_revert]
sdf2['revert_set_size'] = sdf2.revert_set_size.astype('int')
sdf2 = sdf2[["rev_id", 'user_text', 'revert_set_size']]
reverts_by_human = pd.merge(sdf1, sdf2,
how='inner',
left_on='revert_id',
right_on='rev_id',
suffixes=('', '_reverter')
)
reverts_by_human0 = reverts_by_human[reverts_by_human.page_namespace == 0]
# +
# summary of variables
# sdf_anomiebot = all edits by anomiebot w/in time frame
# sdf_reverted = all reverted edits by anomiebot w/in time frame
# reverts_by_human = all edits of AnomieBOT's that were reverted by a human
# reverts_by_human0 = all edits by anomiebot in namespace 0 reverted by a human
# -
len(sdf_anomiebot)
sdf_anomiebot.groupby("day", as_index=False).count()[["day", "rev_id"]]
# daily average within 3 weeks = 1030.7619 edits per day
len(sdf_reverted)
sdf_reverted.groupby("day", as_index=False).count()[["day", "rev_id"]]
# daily average within 3 weeks = 12.0476 reverted edits per day
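The daily averages quoted in the comments come from counting edits per day and then averaging. Here is a self-contained sketch of that groupby pattern on made-up data (the column names mirror the real frame; the numbers do not):

```python
import pandas as pd

# Toy stand-in for the revision frame: one row per edit, tagged with its day.
toy = pd.DataFrame({'day':    [1, 1, 1, 2, 2, 3],
                    'rev_id': [10, 11, 12, 13, 14, 15]})

edits_per_day = toy.groupby('day')['rev_id'].count()  # day -> number of edits
daily_average = edits_per_day.mean()                  # (3 + 2 + 1) / 3
print(daily_average)  # 2.0
```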
# ## sample of edits reverted by humans in namespace 0
len(reverts_by_human0)
reverts_by_human0.sample(n=20, random_state=1).reset_index()
# 0. 877844261 comment: dating maintenance tags {{Cn}}, has minor flag. added date to {{cn}} template. original edit at 1am. reverting user says "revert to last good version before the IP crap started." 3am. also gets rid of other intervening edits. apparently cn template is the same as citation needed template, just abbreviated. ***human conflict***
#
# 1. 879465056 comment: rescuing orphaned refs. adds lots of info to a ref which only has a name, including web url, names, title, date and accessdate. reverting edit 16 hours later removes only AnomieBOT's additions. comment: this page is for the 2019 F3 Asian Championship. Ghiretti is confirmed to run in this series with Hitech, the regular series and the winter series have separate entry lists. I believe that this is ***human error***, because the ref name "Hitech" is left undefined.
#
# 2. 877789628 comment: dating maintenance tags: {{fact}}. AnomieBOT adds date to fact tag. 20 mins later, a user comes along and deletes that whole line, including tag, saying that there is no official announcement for the info yet. ***human conflict***
#
# 3. 877808458 comment: dating maintenance tags: {{Citation needed}}. AnomieBOT adds date to citation needed tag. 5 hours later, a user comes along and deletes the entire sentence, including the tag, basically saying that they couldn't find any supporting info. ***human conflict***
#
# 4. 877003846 comment: Dating maintenance tags: {{Use dmy dates}}. Anomie BOT changes tag date from April 2019 to January 2019. another user 5 hours later leaves no edit summary, but changes Anomie's tag to April 2015 and changes another date in the article to 2015. I'm guessing Anomie changed the date because it hadn't happened yet, and the other user used the wrong date in the first place. it looks like this template is used to tell editors that dates in the article must have a specific format, and the date is used to show editors when the dates were last checked for the correct format by a human or a bot. I think the human user may have been incorrect about the usage of this template and shouldn't have made that change. ***human error?***
#
# 5. 876751654 comment: Dating maintenance tags: {{Cn}}. AnomieBOT adds date to tag. 20 hours later, user says: removing unsourced information about palms which is simply not true. ***human conflict***
#
# 6. 877147243 comment: Rescuing orphaned refs. AnomieBOT adds lots of info to ref that only had name, including url and title, and access-date. reverting edit an hour later says: as discussed on the talk page, these sources do not discuss social media addiction and are therefore WP:SYN. this edit removes multiple paragraphs, including the citation edited by the bot. ***human conflict***
#
# 7. 878689146 comment: fixing reference errors. AnomieBOT removes "\<ref>\</ref>" from the page. reverting edit 20 hours later is completely unrelated, made on line 66 where Anomie's edit was on line 1. routine work.
#
# 8. 876274193 comment: dating maintenance tags: {{2l}}. AnomieBOT adds date to 2l template. reverting edit 7 hours later after intervening edits, comment says: "reverted to revision 875227896 by Drdpw (talk): All names transferred. (TW)". 2l template appears to be short for too long, same as the {{VeryLong}} template. I think the reverting user disagreed that the template and various other edits were necessary, as the revert target was to the edit before the 2l template was added. ***human conflict***
#
# 9. 876737595 comment: dating maintenance tags: {{Cn}} {{Or}}. the bot adds dates to the two tags, which are in the same paragraph. 14 hours later, a user removes the whole paragraph with the comment: remove conspiratorial BS as I have before. ***human conflict***
#
# 10. 878030255 comment: fixing reference errors and rescuing orphaned refs ("nvpi" from rev 877888036; "brit" from rev 877888036). the bot adds info to 2 refs that only have a name. it also adds quotation marks to the names. 2 hours later, same user that made previous edit reverts to revision 877888036 without leaving a specific comment. the edit appears to remove both lines that AnomieBOT edited, among several other things.
#
# 11. 876865419 comment: fixing reference errors. bot removes empty ref template (\<ref>\</ref>). reverting comment: both of these links are already in the article. reverting edit 10 mins later removes two lines from the article, not including the one AnomieBOT edited. the two edits are really unrelated. routine work.
#
# 12. 877638107 comment: dating maintenance tags: {{incoming links}}. bot adds a date to the template. 17 hours later, user removes the whole template with comment: no longer incoming links. seems like routine work.
#
# 13. 877335896 comment: dating maintenance tags: {{use dmy dates}}. bot changes date on template from October 2019 to January 2019, probably because it makes no sense to say October because it hadn't happened yet at the time. reverting edit 9 hours later changes date to September 2018. also changes things to say film is from 2018 rather than 2019 and other small edits. reverting comment simply reads 'lgv'. not sure what's going on here. however, AnomieBOT did not make an error and was functioning as intended. the first user made an error. ***human error***
#
# 14. 878860832 comment: rescuing orphaned refs ("rollingstone.com" from rev 878779391). bot adds link to ref with only name. reverting edit 8 hours later comment says: reverted to revision 878779391 by C.Fred (talk): Rvv. this revert restores the ref that AnomieBOT rescued. ***human error***
#
# 15. 877899322 comment: fixing reference errors. bot removes two empty ref templates. the following edit removes some other nonsense syntax marks and changes a word, no ref summary. ***human error***
#
# 16. 877655493 comment: dating maintenance tags: {{lead too short}}. bot adds date to template. reverting comment: Reverted to revision 872196374 by Cat's Tuxedo (talk): Restore redirect - no indication of notability. ***human conflict*** error here - clearly reverted with other edits as well from page history https://en.wikipedia.org/w/index.php?title=The_Ren_%26_Stimpy_Show:_Time_Warp&action=history
#
# 17. 877687214 comment: dating maintenance tags: {{by whom?}}. bot adds date to template. reverting edit 2 hours later makes a LOT of changes, among which removes the entire template including the date added by the bot. reverting comment reads: rv weird grammar, links to dabs. routine work.
#
# 18. 878685382 comment: rescuing orphaned refs (MaxiRole from Cloak & Dagger). bot adds url, title, author, date, accessdate, and archiveurl to a ref with only name. reverting edit 3 hours later removes a section, including this ref. revert comment: reverted to revision 878085310 by Adamstom.97 (talk): no confirmation for Maxi for THIS season, nor that O'Reilly will go by Mayhem. ***human conflict***
#
# 19. 876865186 comment: rescuing orphaned refs ("everett" from rev 873063733). bot adds url, names, date, title to ref with only name. reverting edit 1 hour later adds in a paragraph above, including the missing ref info, and removes the bits that AnomieBOT added to the ref below. reverting comment: entirely appropriate for the lead to summarize the body. ***human conflict***
#
# ## how many bot edits are reverted in groups?
# percentage of all reverted bot edits, with revert set size of at least 2
df_all_reverted = df[df.is_reverted]
df_all_reverted = df_all_reverted[df_all_reverted.user_is_bot == True]
len(df_all_reverted[(df_all_reverted.revert_set_size != '1') & (df_all_reverted.revert_set_size != 'None')]) / len(df_all_reverted)
# percentage of reverted bot edits in namespace 0, that were reverted with at least one other edit
df_all_reverted0 = df_all_reverted[df_all_reverted.page_namespace ==0]
len(df_all_reverted0[(df_all_reverted0.revert_set_size != '1') & (df_all_reverted0.revert_set_size != 'None')]) / len(df_all_reverted0)
# percentage of AnomieBOT's edits reverted by a human in namespace 0, reverted with at least one other edit
len(reverts_by_human0[reverts_by_human0.revert_set_size > 1]) / len(reverts_by_human0)
len(reverts_by_human)
len(reverts_by_human0)
# ## how many of Anomie's human reverted edits were reverted with a human edit?
reverts_of_anomie_h = reverts_by_human.groupby('revert_id', as_index=False).count()[["revert_id"]]
# +
sdf1 = df.loc[df.is_reverted]
sdf1.revert_id = sdf1.revert_id.astype('str').astype('int')
sdf2 = reverts_of_anomie_h
# -
reverts_of_anomie_h = pd.merge(sdf1, sdf2,
how='inner',
left_on='revert_id',
right_on='revert_id',
suffixes=('', '')
)
a = reverts_of_anomie_h.groupby(["revert_id", "user_is_bot"], as_index=False).count()
# +
# percentage of anomie's human passive reverts reverted with a human edit
# (found by dividing the amount of reverts that reverted a human by the ones reverting anomie)
len(a[a.user_is_bot==False]) / len(a[a.user_is_bot==True])
# -
| reu2021/specific_bots/AnomieBOT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Neural Networks with TensorFlow and Keras
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
# %pylab inline
import pandas as pd
print(pd.__version__)
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
import keras
print(keras.__version__)
# ## First Step: Load Data and disassemble for our purposes
# ### We need a few more data point samples for this approach
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
y=df['group']
df.drop('group', axis='columns', inplace=True)
X = df.values  # df.as_matrix() was removed in newer pandas versions
df.describe()
# ## Second Step: Deep Learning as Alchemy
# +
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fixed=None, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
plt.figure(figsize=(20,10))
if clf and mesh:
grid_X = np.array(np.c_[yy.ravel(), xx.ravel()])
if fixed:
fill_values = np.full((len(grid_X), 1), fixed)
grid_X = np.append(grid_X, fill_values, axis=1)
Z = clf.predict(grid_X)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
if print:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')
plt.xlabel(x_label, fontsize=font_size)
plt.ylabel(y_label, fontsize=font_size)
plt.title(title, fontsize=font_size)
if fname:
plt.savefig(fname)
# -
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# +
# a tiny bit of feature engineering
from keras.utils.np_utils import to_categorical
num_categories = 3
y_train_categorical = to_categorical(y_train, num_categories)
y_test_categorical = to_categorical(y_test, num_categories)
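`to_categorical` one-hot encodes the integer class labels. For illustration only (the notebook keeps using the Keras helper), a minimal NumPy equivalent could look like:

```python
import numpy as np

# Hypothetical minimal equivalent of to_categorical: row i is all zeros
# except for a 1 in column labels[i].
def one_hot(labels, num_classes):
    return np.eye(num_classes)[np.asarray(labels)]

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```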
# +
from keras.layers import Input
from keras.layers import Dense
from keras.models import Model
from keras.layers import Dropout
inputs = Input(name='input', shape=(3, ))
x = Dense(100, name='hidden1', activation='relu')(inputs)
x = Dense(100, name='hidden2', activation='relu')(x)
predictions = Dense(3, name='softmax', activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
# loss function: http://cs231n.github.io/linear-classify/#softmax
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
# -
# %time model.fit(X_train, y_train_categorical, epochs=500, batch_size=100)
train_loss, train_accuracy = model.evaluate(X_train, y_train_categorical, batch_size=100)
train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test_categorical, batch_size=100)
test_accuracy
# ### Look at all the different shapes for different kilometers per year
# * now we have three dimensions, so we need to set one to a certain number
kms_per_year = 20
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 20 km/year")
kms_per_year = 50
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 50 km/year")
kms_per_year = 5
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 5 km/year")
| notebooks/md/4-keras-tensorflow-nn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import tensorflow as tf
import pickle
# ## Data Preprocessing
# Loading formatted data
# I format the data into a pandas DataFrame
# See data_formatting.ipynb for details
train_data = pd.read_pickle("../dataset/train.pickle")
validate_data = pd.read_pickle("../dataset/validate.pickle")
test_data = pd.read_pickle("../dataset/test.pickle")
# ### Tokenize the source code
# #### BoW
# For data batching convenience, the paper trained only on functions with token length $10 \leq l \leq 500$, padded to the maximum length of **500**
# The paper does not mention whether to pad with 0 at the end or at the beginning, so I assume the padding is appended at the end (actually, this is not a big deal in CNN)
# text_to_word_sequence does not work since it expects a single string
# +
# train_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(train_data[0])
# x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
# +
# validate_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(validate_data[0])
# x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
# +
# test_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(test_data[0])
# x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
# -
# #### Init the Tokenizer
# #### BoW
# The paper does not specify the vocabulary size to track; I am using 10000 here
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000)
# fit_on_texts is required before calling texts_to_sequences
# Argument: a list of strings
tokenizer.fit_on_texts(list(train_data[0]))
train_tokenized = tokenizer.texts_to_sequences(train_data[0])
x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
validate_tokenized = tokenizer.texts_to_sequences(validate_data[0])
x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
test_tokenized = tokenizer.texts_to_sequences(test_data[0])
x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
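To make the padding behaviour concrete, here is a plain-Python sketch of what happens to one row (illustrative only; the notebook uses Keras' `pad_sequences`, whose default is to truncate from the front):

```python
# Hypothetical helper mirroring pad_sequences(maxlen=..., padding="post"):
# Keras truncates from the front by default (truncating="pre") and, with
# padding="post", appends zeros until the row reaches maxlen.
def pad_post(seq, maxlen, value=0):
    seq = list(seq)[-maxlen:]                   # keep the last maxlen tokens
    return seq + [value] * (maxlen - len(seq))  # zero-pad at the end

print(pad_post([5, 3, 8], 6))     # [5, 3, 8, 0, 0, 0]
print(pad_post([1, 2, 3, 4], 2))  # [3, 4]
```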
y_train = train_data[train_data.columns[3]].astype(int)
y_validate = validate_data[validate_data.columns[3]].astype(int)
y_test = test_data[test_data.columns[3]].astype(int)
# ## Model Design
# This dataset is highly imbalanced, so I am working on adjusting the train weights
# https://www.tensorflow.org/tutorials/structured_data/imbalanced_data
clear, vulnerable = (train_data[train_data.columns[3]]).value_counts()
total = vulnerable + clear
print("Total: {}\n Vulnerable: {} ({:.2f}% of total)\n".format(total, vulnerable, 100 * vulnerable / total))
# +
weight_for_0 = (1 / clear)*(total)/2.0
weight_for_1 = (1 / vulnerable)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
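The two weights are chosen so that each class contributes the same effective sample mass during training. A small worked check of the formula with made-up counts:

```python
# Made-up counts, just to exercise the weighting formula from the cell above.
clear_n, vulnerable_n = 900, 100
total_n = clear_n + vulnerable_n

w0 = (1 / clear_n) * total_n / 2.0       # weight for the majority class
w1 = (1 / vulnerable_n) * total_n / 2.0  # weight for the minority class

# Each class then contributes the same total weight, total_n / 2:
print(w0 * clear_n, w1 * vulnerable_n)  # 500.0 500.0
```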
# +
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=10000, output_dim=13, input_length=500))
model.add(tf.keras.layers.Conv1D(filters=512, kernel_size=9, activation="relu"))
model.add(tf.keras.layers.MaxPool1D(pool_size=4))
model.add(tf.keras.layers.Dropout(rate=0.5))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=64, activation="relu"))
model.add(tf.keras.layers.Dense(units=16, activation="relu"))
# I am using the sigmoid rather than the softmax mentioned in the paper
model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Adam optimizer with a smaller learning rate
adam = tf.keras.optimizers.Adam(lr=0.0001)
# Define the evaluation metrics
METRICS = [
tf.keras.metrics.TruePositives(name='tp'),
tf.keras.metrics.FalsePositives(name='fp'),
tf.keras.metrics.TrueNegatives(name='tn'),
tf.keras.metrics.FalseNegatives(name='fn'),
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='auc'),
]
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=METRICS)
model.summary()
# -
history = model.fit(x=x_train, y=y_train, batch_size=128, epochs=10, verbose=1, class_weight=class_weight, validation_data=(x_validate, y_validate))
with open('CWE469_trainHistory', 'wb') as history_file:
pickle.dump(history.history, history_file)
model.save("Simple_CNN_CWE469")
results = model.evaluate(x_test, y_test, batch_size=128)
| model/Simple_CNN_CWE469.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # 2-1. Matrix and Vector Operations
# + slideshow={"slide_type": "skip"}
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
style_name = 'bmh' #bmh
mpl.style.use(style_name)
np.set_printoptions(precision=2, linewidth =150)
style = plt.style.library[style_name]
style_colors = [ c['color'] for c in style['axes.prop_cycle'] ]
# + [markdown] slideshow={"slide_type": "slide"}
# ### Why use matrices and vectors?
#
# - Represent data concisely in vector or matrix form
#
# - Carry out operations between data quickly and compactly
#
# ### Lesson overview
#
# - Basic definitions and operations of matrices and vectors
#
# - The relationship between systems of linear equations and matrices
#
# - Coding practice with vectorized operations
#
# - Main reference and examples: Advanced Engineering Mathematics, 10th edition<sup>[kreyszig]</sup>
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Matrices
#
# - A rectangular array of numbers written between square brackets<sup>[andrew]</sup>
#
# $$
# \begin{bmatrix}
# 0.3 & 1 & -5 \\
# 0 & -0.2 & 16
# \end{bmatrix}, \qquad
# \begin{bmatrix}
# a_{11} & a_{12} & a_{13} \\
# a_{21} & a_{22} & a_{23} \\
# a_{31} & a_{32} & a_{33}
# \end{bmatrix}
# $$
#
# - Each number is an entry (element); the horizontal lines are rows, the vertical lines are columns
#
# - A matrix with the same number of rows and columns, like the second one above, is called a square matrix.
#
#
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ### General notation
#
# $$
# \mathbf{A} = [a_{jk}] = \begin{bmatrix}
# a_{11} & a_{12} & \cdots & a_{1n} \\
# a_{21} & a_{22} & \cdots & a_{2n} \\
# \vdots & \vdots & \ddots & \vdots \\
# a_{m1} & a_{m2} & \cdots & a_{mn}
# \end{bmatrix}
# $$
#
# - Written as a bold capital letter
#
# - The size of a matrix is written as [number of rows] x [number of columns]
#
# - The matrix above is an $m \times n$ matrix
#
# - The first index of an entry is its row number, the second its column number
#
# $$
# \mathbf{A} = \begin{bmatrix}
# 0.3 & 1 & -5 \\
# 0 & -0.2 & 16
# \end{bmatrix}
# $$
#
# - In the matrix above, $a_{12}=1$
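The same entry can be read off in NumPy, keeping in mind that textbook indices are 1-based while NumPy's are 0-based, so $a_{12}$ becomes `A[0, 1]` (a small sketch for illustration):

```python
import numpy as np

# The 2 x 3 example matrix from above
A = np.array([[0.3,  1.0, -5.0],
              [0.0, -0.2, 16.0]])

# Textbook a_{12} (row 1, column 2) is A[0, 1] with 0-based indexing.
print(A.shape)   # (2, 3)
print(A[0, 1])   # 1.0
```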
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Representing systems of linear equations
#
# - With the matrices defined above, a system of linear equations can be written purely in matrix form
#
# $$
# \begin{matrix}
# 7x_1 & + & 6x_2 & + & 9x_3 & = & 6 \\
# 6x_1 & & & - & 3x_3 & = & 25 \\
# 5x_1 & - & 8x_2 & + & x_3 & = & 10
# \end{matrix}
# $$
#
# - The system above is a linear system with three unknowns
#
# - Collecting only the coefficients gives the coefficient matrix
#
# $$
# \mathbf{A} = \begin{bmatrix}
# 7 & 6 & 9 \\
# 6 & 0 & -3 \\
# 5 & -8 & 1
# \end{bmatrix}
# $$
#
# - Appending the right-hand-side numbers as one more column produces the augmented matrix<sup>augmented matrix</sup>, which contains all the information of the linear system
#
# $$
# \tilde{\mathbf{A}} = \begin{bmatrix}
# 7 & 6 & 9 & 6 \\
# 6 & 0 & -3 & 25 \\
# 5 & -8 & 1 & 10
# \end{bmatrix}
# $$
#
#
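The coefficient matrix and the augmented matrix can be built and checked numerically with NumPy (an illustrative sketch; `np.linalg.solve` performs the elimination):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above
A = np.array([[7.0,  6.0,  9.0],
              [6.0,  0.0, -3.0],
              [5.0, -8.0,  1.0]])
b = np.array([6.0, 25.0, 10.0])

x = np.linalg.solve(A, b)            # solve A x = b
augmented = np.column_stack([A, b])  # the augmented matrix [A | b]

print(augmented.shape)        # (3, 4)
print(np.allclose(A @ x, b))  # True: x satisfies all three equations
```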
# + [markdown] slideshow={"slide_type": "slide"}
# ## Vectors
#
# - From the matrix point of view, a vector is a matrix with only one row or one column
#
# - In other words, an $n \times 1$ matrix
#
# - Called a row vector or a column vector; unless stated otherwise, a **vector** means a **column vector**
#
# - Written as a bold lowercase letter
#
# - Both matrices and vectors are structures well suited to collecting meaningful numbers into a single object
#
#
# $$
# \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
# $$
#
#
# + slideshow={"slide_type": "subslide"}
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm

# color cycle used below; assumed to have been defined earlier in the notebook
style_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
plt.rcParams["figure.figsize"] = (17, 7)
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax2 = fig.add_subplot(1, 2, 2, projection='3d')
x, y, z = 0, 1, 2
P = (1.5, 1.8, 1)
Q = (2.1, 3.2, 3)
O = (0, 0, 0)
def draw_vector(ax, P, Q, elev, azim, title=''):
    ax.plot3D([P[x], Q[x]], [P[y], Q[y]], [P[z], Q[z]], lw=3, color=style_colors[0])
    ax.plot3D([Q[x], Q[x]], [0, Q[y]], [0,0], '--', color="gray")
    ax.plot3D([P[x], P[x]], [0, P[y]], [0,0], '--', color="gray")
    ax.plot3D([0, Q[x]], [Q[y], Q[y]], [0,0], '--', color="gray")
    ax.plot3D([0, P[x]], [P[y], P[y]], [0,0], '--', color="gray")
    ax.plot3D([P[x], P[x]], [P[y], P[y]], [0, P[z]], '--', color="gray")  # vertical line down from P
    ax.plot3D([Q[x], Q[x]], [Q[y], Q[y]], [0, Q[z]], '--', color="gray")  # vertical line down from Q
    ax.plot3D([0, P[x]], [0, P[y]], [0,0], '--', color="gray")
    ax.plot3D([0, Q[x]], [0, Q[y]], [0,0], '--', color="gray")
    ax.plot3D([0, P[x]], [0, P[y]], [P[z], P[z]], '--', color="gray")
    ax.plot3D([0, Q[x]], [0, Q[y]], [Q[z], Q[z]], '--', color="gray")
    ax.plot3D([P[x], Q[x]], [0, 0], [0,0], lw=3, color=style_colors[0])
    ax.plot3D([0,0], [P[y], Q[y]], [0,0], lw=3, color=style_colors[0])
    ax.plot3D([0,0], [0, 0], [P[z], Q[z]], lw=3, color=style_colors[0])
    ax.plot3D([O[x], 0], [O[y], 0], [O[z], Q[z]+1], color=style_colors[1], alpha=0.4)
    ax.plot3D([O[x], Q[x]+1], [0, 0], [0, 0], color=style_colors[1], alpha=0.4)
    ax.plot3D([0,0], [O[y], Q[y]+1], [0, 0], color=style_colors[1], alpha=0.4)
    ax.plot([O[x]], [O[y]], [O[z]], 'o', color=style_colors[1])
    ax.text(*Q, '$Q$', fontsize=15)
    ax.text(P[x], P[y], P[z]-0.2, '$P$', fontsize=15)
    ax.text(P[x]+(Q[x]-P[x])-0.3, -0.4, 0, '$\mathbf{a}_1$', fontsize=15)
    ax.text(-0.4, P[y]+(Q[y]-P[y])-0.8, 0, '$\mathbf{a}_2$', fontsize=15)
    ax.text(0, 0.1, P[z]+(Q[y]-P[z])/2, '$\mathbf{a}_3$', fontsize=15)
    ax.set_title(title)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('z')
    ax.view_init(elev, azim)
draw_vector(ax1, P, Q, 20, 30, 'Vector and its components')
draw_vector(ax2, O, Q, 20, 30, 'Position vector')
plt.show()
# + [markdown] slideshow={"slide_type": "fragment"}
#
# - Given two points $P(x_1, y_1, z_1)$ and $Q(x_2, y_2, z_2)$ in 3-D space,
#
# - the vector defined by $P$ and $Q$ is $\mathbf{a} = \vec{PQ} = (a_1, a_2, a_3)^{\text{T}}$
#
# - Components of $\mathbf{a}$: the coordinate differences between the start point $P$ and the end point $Q$: $a_1 = x_2 - x_1$, $a_2 = y_2 - y_1$, $a_3 = z_2 - z_1$
#
# - The vector defined by the origin and the point $Q$ is called a position vector<sup>position vector</sup>
#
# - Hence every point in space corresponds to a position vector
#
# - Length of a vector: $\lvert \mathbf{a} \rvert = \sqrt{a_1^2 + a_2^2 + a_3^2} $
#
# - Vector addition: element-wise sum, $\mathbf{a} + \mathbf{b} = [a_1+b_1, a_2+b_2, a_3+b_3]$
#
#
# -
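The component, length, and addition rules above can be sketched in NumPy with the two points from the figure:

```python
import numpy as np

P = np.array([1.5, 1.8, 1.0])
Q = np.array([2.1, 3.2, 3.0])

a = Q - P                        # components: coordinate differences
length = np.sqrt((a ** 2).sum()) # |a| = sqrt(a1^2 + a2^2 + a3^2)
print(a, length)
print(np.linalg.norm(a))         # the same length via numpy
```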
# ### Indices of matrix and vector entries
#
# - Index: the subscript number used to pick out an entry of a matrix or vector
#
# - Two conventions exist: starting from 0 and starting from 1
#
# - Mathematics: starts from 1
#
# - Computer science: starts from 0
#
# $$
# \mathbf{v}= \begin{bmatrix} 23 \\ 13 \\8 \\100 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{bmatrix}
# $$
#
# - Among programming languages, those aimed at numerical computing usually index from 1, general-purpose languages from 0
#
# | Convention | Languages |
# |------------|------------|
# | Index from 1 | FORTRAN, Julia, MATLAB, Octave, ... |
# | Index from 0 | Python, C/C++, Java, JavaScript, ... |
#
# #### Octave
# ```Octave
# >> v = [10;11;12];
# >> v(0)
# error: v(0): subscripts must be either integers 1 to (2^31)-1 or logicals
# >> v(1)
# ans = 10
# ```
#
# #### Python
# ```python
# In [1]: import numpy as np
#
# In [2]: v = np.array([10, 11, 12])
#
# In [3]: v[0]
# Out[3]: 10
#
# In [4]: v[1]
# Out[4]: 11
#
# In [5]:
# ```
#
# ### Vector addition and subtraction
#
# - Addition and subtraction of vectors are performed element-wise
#
# - Addition: computed with the parallelogram formed by the two vectors
#
# - Subtraction: rewrite $\mathbf{a}-\mathbf{b} = \mathbf{a}+(-\mathbf{b})$ and compute as an addition
#
# <img src="imgs/vec_plus_minus.png" width="700"/>
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Norm
#
# - A general definition of the vector length $\lvert \mathbf{a} \rvert = \sqrt{a_1^2 + a_2^2 + a_3^2} $
#
# - A function that assigns a kind of 'length' or 'size' to the elements of a vector space
#
# $$
# \lVert \mathbf{x} \rVert_{p} = \left( \sum_{i=1}^{n} \lvert x_i \rvert^{p} \right)^{1/p}
# $$
#
# - For the two points (0,0) and (6,6):
#
# - $p=1$, the L1 norm (Manhattan distance): sum of the absolute values of the components, $|6|+|6|=12$
#
# - $p=2$, the L2 norm (Euclidean norm): square root of the sum of squares, i.e. the distance between the two points, $\sqrt{6^2 + 6^2}=8.48...$
#
#
# <img align="center" src="imgs/Manhattan_distance.svg">
# <h5 align="center">https://en.wiktionary.org/wiki/Manhattan_distance : Public domain</h5>
#
#
#
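The two norms for the point (6,6) can be checked with `np.linalg.norm` and its `ord` parameter:

```python
import numpy as np

v = np.array([6, 6])
l1 = np.linalg.norm(v, ord=1)  # Manhattan distance: |6| + |6|
l2 = np.linalg.norm(v, ord=2)  # Euclidean distance: sqrt(6^2 + 6^2)
print(l1, l2)
```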
# + [markdown] slideshow={"slide_type": "slide"}
# ## Matrix and vector operations
#
# ### Addition and scalar multiplication of matrices and vectors
#
# #### Matrix equality
#
# - Two matrices $\mathbf{A}$ and $\mathbf{B}$ are equal when they have the same size and every pair of corresponding entries is equal.
#
# $$
# \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix} \quad
# \mathbf{B} = \begin{bmatrix} 5 & 1 \\ 3 & -2 \end{bmatrix}
# $$
#
# - For the two matrices above, if $a_{11}=5$, $a_{12}=1$, $a_{21}=3$, $a_{22}=-2$, then the matrices are equal.
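In NumPy, "same size and all corresponding entries equal" is exactly what `np.array_equal` tests:

```python
import numpy as np

A = np.array([[5, 1], [3, -2]])
B = np.array([[5, 1], [3, -2]])
print(np.array_equal(A, B))  # same shape and all entries equal
```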
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Matrix addition and scalar multiplication
#
# - When two matrices have the same size, $\mathbf{A} + \mathbf{B}$ means element-wise addition
#
# $$
# \mathbf{A} = \begin{bmatrix} 3 & 1 \\ 10 & 12\end{bmatrix} \quad
# \mathbf{B} = \begin{bmatrix} 5 & 1 \\ 3 & -2 \end{bmatrix}
# $$
#
# $$
# \mathbf{A} + \mathbf{B} = \begin{bmatrix} 8 & 2 \\ 13 & 10\end{bmatrix}
# $$
#
#
# - Hence matrices of different sizes cannot be added
#
# - Multiplying a matrix by a scalar multiplies every entry by that scalar
#
# - Example
#
# $$
# \begin{bmatrix} 8 & 2 \\ 16 & 10\end{bmatrix} \times \frac{1}{2} + 3 \times \begin{bmatrix} 1 & 1 \\ 3 & 4\end{bmatrix} = \begin{bmatrix} 4 & 1 \\ 8 & 5\end{bmatrix} + \begin{bmatrix} 3 & 3 \\ 9 & 12\end{bmatrix} = \begin{bmatrix} 7 & 4 \\ 17 & 17\end{bmatrix}
# $$
# +
A = np.asarray([8,2,16,10]).reshape(2,2)
B = np.asarray([1,1,3,4]).reshape(2,2)
print(A*(1/2) + 3*B)
# + [markdown] slideshow={"slide_type": "subslide"}
# - General properties
#
# $$
# \begin{align}
# \mathbf{A} + \mathbf{B} &= \mathbf{B} + \mathbf{A} \\[5pt]
# (\mathbf{A} + \mathbf{B})+ \mathbf{C} &= \mathbf{A} + (\mathbf{B}+ \mathbf{C}) \\[5pt]
# \mathbf{A} + \mathbf{0} &= \mathbf{A} \\[5pt]
# \mathbf{A} + (-\mathbf{A}) & = \mathbf{0} \\[5pt]
# c(\mathbf{A} + \mathbf{B}) &= c\mathbf{A} + c\mathbf{B} \\[5pt]
# (c+k)\mathbf{A} &= c\mathbf{A} + k\mathbf{A} \\[5pt]
# c(k\mathbf{A}) &= (ck)\mathbf{A} \\[5pt]
# 1\mathbf{A} &= \mathbf{A}
# \end{align}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Unit vector
#
# - A vector of length 1
#
# - Normalize: divide a vector by its length
#
# $$
# \hat{\mathbf{u}} = \frac{\mathbf{u}}{\lvert \mathbf{u} \rvert}
# $$
#
#
# <table border="0">
# <tr>
# <td>
# <img src="imgs/unit_vector1.png"/>
# </td>
# <td>
# <img src="imgs/unit_vector2.png"/>
# </td>
# </tr>
# <tr>
# <td colspan="2">https://en.wikipedia.org/wiki/Unit_vector : CC BY-SA 4.0</td>
# </tr>
# </table>
#
#
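Normalization is a one-liner in NumPy; a sketch with a vector of length 5:

```python
import numpy as np

u = np.array([3.0, 4.0])
u_hat = u / np.linalg.norm(u)  # normalize: divide by the length
print(u_hat, np.linalg.norm(u_hat))  # the result has length 1
```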
# + [markdown] slideshow={"slide_type": "slide"}
# ### Matrix multiplication
# - The product $\mathbf{C} = \mathbf{AB}$ of an $m \times n$ matrix $\mathbf{A}$ and an $r \times p$ matrix $\mathbf{B}$ is defined, when $r=n$, as the following
# $m \times p$ matrix
#
# <img src="imgs/matrix-product2.png" width="550" />
#
# $$
# c_{jk} = \sum_{l=1}^{n} a_{jl}b_{lk} = a_{j1}b_{1k} + a_{j2}b_{2k} + \cdots + a_{jn}b_{nk} \\
# j = 1, \cdots, m \\
# k = 1, \cdots, p
# $$
#
# $$
# \begin{matrix}
# \mathbf{A} & \mathbf{B} & = & \mathbf{C} \\
# [m \times n] & [n \times p] && [m \times p]
# \end{matrix}
# $$
#
# - Multiplication is defined only when the adjacent (inner) dimensions match, as above
#
# <img src="imgs/matrix-product.png" width="450" />
#
# - Hence $\mathbf{BA}$ may not even be defined, and in general $\mathbf{AB} \neq \mathbf{BA}$
#
# $$
# \begin{bmatrix} 1 & 2 \\ 23 & 43 \\ 56 & 32 \\ 41 & 55 \end{bmatrix} \begin{bmatrix} 11 & 12 & 13 \\21 & 22 & 23 \end{bmatrix} = \begin{bmatrix} 1\cdot11 + 2\cdot21 & 1\cdot12+2\cdot22 & 1\cdot13+2\cdot23 \\
# 23\cdot11 + 43\cdot21 & 23\cdot12+43\cdot22 & 23\cdot13+43\cdot23 \\
# 56\cdot11 + 32\cdot21 & 56\cdot12+32\cdot22 & 56\cdot13+32\cdot23 \\
# 41\cdot11 + 55\cdot21 & 41\cdot12+55\cdot22 & 41\cdot13+55\cdot23
# \end{bmatrix}
# $$
#
# - The following product is not defined
#
# $$
# \begin{bmatrix} 11 & 12 & 13 \\21 & 22 & 23 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 23 & 43 \\ 56 & 32 \\ 41 & 55 \end{bmatrix}
# $$
# -
# #### NumPy array multiplication is not matrix multiplication<sup>[scipy-lecture]</sup>
#
# - The `*` product of numpy ndarrays is element-wise, not matrix multiplication
#
# - For matrix multiplication use `numpy.dot()` or a `numpy.matrix` object
#
# - `numpy.dot()` is the usual choice (modern NumPy also provides the `@` operator for matrix multiplication)
# +
c = np.ones((3, 3))
C = np.matrix(c)
print("numpy array multiplication")
print(c*c)
print("\n")
print("numpy dot function")
print(c.dot(c))
print("\n")
print("numpy matrix multiplication")
print(C*C)
# -
# ```python
# A = np.asarray([1, 2, 23, 43, 56, 32, 41, 55]).reshape(4,2)
# B = np.asarray([11, 12, 13, 21, 22, 23]).reshape(2,3)
# C = np.dot(A,B)
# print(C)
#
# # Not defined.
# C = np.dot(B,A)
#
# [[ 53 56 59]
# [1156 1222 1288]
# [1288 1376 1464]
# [1606 1702 1798]]
#
# ---------------------------------------------------------------------------
# ValueError Traceback (most recent call last)
# <ipython-input-5-143337631864> in <module>()
# 5
#       6 # Not defined.
# ----> 7 C = np.dot(B,A)
#
# ValueError: shapes (2,3) and (4,2) not aligned: 3 (dim 1) != 4 (dim 0)
#
#
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# #### Matrix–vector product
#
# $$
# \begin{bmatrix} 2 & 2 \\ 3 & 8 \\ 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ 5 \end{bmatrix} =
# \begin{bmatrix} 2 \cdot 3 + 2 \cdot 5 \\ 3 \cdot 3 + 8 \cdot 5 \\ 1\cdot3 + 1\cdot5 \\ 0\cdot3 + 1\cdot5\end{bmatrix} = \begin{bmatrix} 16 \\ 49 \\8 \\ 5\end{bmatrix}
# $$
#
# - The product above equals 3 times the first column $[2 \, 3 \, 1 \, 0]^{\text{T}}$ of the left matrix plus 5 times its second column $[2 \, 8 \, 1 \, 1]^{\text{T}}$
#
# - Hence the result can be viewed as a vector obtained by mixing (summing) the columns of the left matrix, scaled by the numbers in the right-hand column
#
# - The following is not defined
#
# $$
# \begin{bmatrix} 3 \\ 15 \end{bmatrix} \begin{bmatrix}4 & 2 \\ 1 & 8 \end{bmatrix}
# $$
#
# - Viewing the matrix–vector product as a mixture of the left matrix's columns, the matrix–matrix product can be seen in the same way.
#
# $$
# \begin{align}
# \mathbf{A}\mathbf{B} &= \mathbf{A} \begin{bmatrix} \color{RoyalBlue}{\mathbf{b}_1} & \color{OrangeRed}{\mathbf{b}_2} & \color{Goldenrod}{\mathbf{b}_3} \end{bmatrix} \\[5pt]
# &= \begin{bmatrix} 1 & 2 \\ 23 & 43 \\ 56 & 32 \\ 41 & 55 \end{bmatrix} \begin{bmatrix} \color{RoyalBlue}{11} & \color{OrangeRed}{12} & \color{Goldenrod}{13} \\ \color{RoyalBlue}{21} & \color{OrangeRed}{22} & \color{Goldenrod}{23} \end{bmatrix} = \begin{bmatrix} 1 \cdot \color{RoyalBlue}{11} + 2 \cdot \color{RoyalBlue}{21} & 1 \cdot \color{OrangeRed}{12}+2 \cdot \color{OrangeRed}{22} & 1\cdot \color{Goldenrod}{13} + 2\cdot \color{Goldenrod}{23} \\
# 23 \cdot \color{RoyalBlue}{11} + 43\cdot\color{RoyalBlue}{21} & 23\cdot\color{OrangeRed}{12}+43 \cdot \color{OrangeRed}{22} & 23\cdot \color{Goldenrod}{13} + 43\cdot \color{Goldenrod}{23} \\
# 56 \cdot \color{RoyalBlue}{11} + 32\cdot\color{RoyalBlue}{21} & 56\cdot \color{OrangeRed}{12} +32 \cdot \color{OrangeRed}{22} & 56\cdot \color{Goldenrod}{13} + 32\cdot \color{Goldenrod}{23} \\
# 41 \cdot \color{RoyalBlue}{11} + 55\cdot\color{RoyalBlue}{21} & 41\cdot \color{OrangeRed}{12} +55 \cdot \color{OrangeRed}{22} & 41\cdot \color{Goldenrod}{13} + 55\cdot \color{Goldenrod}{23}
# \end{bmatrix} \\[5pt]
# &= \begin{bmatrix} \mathbf{A}\color{RoyalBlue}{\mathbf{b}_1} & \mathbf{A}\color{OrangeRed}{\mathbf{b}_2} & \mathbf{A}\color{Goldenrod}{\mathbf{b}_3} \end{bmatrix}
# \end{align}
# $$
#
# +
A = np.asarray([2, 2, 3, 8, 1, 1, 0, 1]).reshape(4,2)
b = np.asarray([3,5]).reshape(2,1)
C = np.dot(A,b)
print(C)
# Same result as scaling each column by the corresponding entry of b and summing.
print(A[:,0]*b[0] + A[:,1]*b[1])
print( (A*b.reshape(-1)).sum(axis=1) )
# + [markdown] slideshow={"slide_type": "slide"}
# #### Product of a row vector and a column vector
#
# - The product of a row vector and a column vector
#
# - This operation is also called the inner product of the two vectors
#
#
# $$
# \begin{bmatrix} 3 & 6 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} = [19]
# $$
#
# #### Inner product<sup>dot product</sup>
#
# - The inner (dot) product of two vectors is defined as the product of their magnitudes and the cosine of the angle between them
#
# $$
# \begin{align}
# \mathbf{a} \cdot \mathbf{b} &= \lvert \mathbf{a} \rvert \lvert \mathbf{b} \rvert \cos \gamma
# \end{align}
# $$
#
# <img src="imgs/innerprod.png" width="600"/>
#
#
#
# #### Properties of the inner product
#
# $$
# \begin{align}
# (q_1 \mathbf{a} + q_2 \mathbf{b}) \cdot \mathbf{c} = q_1 \mathbf{a} \cdot \mathbf{c} + q_2 \mathbf{b} \cdot \mathbf{c} \quad &\text{Linearity} \\[5pt]
# \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} \quad &\text{Symmetry}\\[5pt]
# (\mathbf{a}+\mathbf{b}) \cdot \mathbf{c}= \mathbf{a}\cdot\mathbf{c} + \mathbf{b}\cdot\mathbf{c} \quad &\text{Distributivity}
# \end{align}
# $$
# -
# #### Vectors in unit basis-vector form<sup>$\dagger$</sup>
#
# - $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ : unit vectors pointing along the positive direction of each axis of the Cartesian coordinate system<sup>Cartesian coordinate system</sup>
#
# <img src="imgs/certesian-unit.png" width="350"/>
#
# $$
# \mathbf{a} = a_1 \mathbf{i} + a_2 \mathbf{j} + a_3 \mathbf{k} \\[5pt]
# \mathbf{b} = b_1 \mathbf{i} + b_2 \mathbf{j} + b_3 \mathbf{k}
# $$
#
# - Writing the vectors in basis-vector form and applying the distributivity of the inner product
#
# $$
# \begin{align}
# \mathbf{a} \cdot \mathbf{b}
# &= (a_1 \mathbf{i} + a_2 \mathbf{j} + a_3 \mathbf{k}) \cdot ( b_1 \mathbf{i} + b_2 \mathbf{j} + b_3 \mathbf{k}) \\[5pt]
# &= \quad a_1 b_1 \mathbf{i} \cdot \mathbf{i} + a_2 b_1 \mathbf{j} \cdot \mathbf{i} + a_3 b_1 \mathbf{k} \cdot \mathbf{i} \\[5pt]
# &+ \quad a_1 b_2 \mathbf{i} \cdot \mathbf{j} + a_2 b_2 \mathbf{j} \cdot \mathbf{j} + a_3 b_2 \mathbf{k} \cdot \mathbf{j} \\[5pt]
# &+ \quad a_1 b_3 \mathbf{i} \cdot \mathbf{k} + a_2 b_3 \mathbf{j} \cdot \mathbf{k} + a_3 b_3 \mathbf{k} \cdot \mathbf{k}
# \end{align}
# $$
#
# - Here $\mathbf{i} \cdot \mathbf{i} = \lvert \mathbf{i} \rvert^2 = 1$, and likewise for $\mathbf{j}$ and $\mathbf{k}$
#
# - The inner products of distinct unit vectors, such as $\mathbf{i} \cdot \mathbf{j}$, are 0
#
# - Therefore
#
# $$
# \mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 \qquad \mathbf{a} = (a_1, a_2, a_3), \mathbf{b} = (b_1, b_2, b_3)
# $$
#
#
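The component formula and the geometric definition give the same number, which can be verified numerically:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

# component formula: a1*b1 + a2*b2 + a3*b3
print(np.dot(a, b))

# geometric definition |a||b|cos(gamma) recovers the same value
cos_gamma = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.linalg.norm(a) * np.linalg.norm(b) * cos_gamma)
```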
# + [markdown] slideshow={"slide_type": "slide"}
# #### General properties of matrix multiplication
#
# $$
# \begin{align}
# (k\mathbf{A})\mathbf{B} &= k(\mathbf{AB}) \\[5pt]
# \mathbf{A}(\mathbf{B}\mathbf{C}) &= (\mathbf{A}\mathbf{B})\mathbf{C} \\[5pt]
# (\mathbf{A} + \mathbf{B})\mathbf{C} &= \mathbf{A}\mathbf{C}+\mathbf{B}\mathbf{C} \\[5pt]
# \mathbf{C}(\mathbf{A} + \mathbf{B}) &= \mathbf{C}\mathbf{A}+\mathbf{C}\mathbf{B}
# \end{align}
# $$
#
# - **Multiplication is not commutative**
#
# #### Index notation for matrix products: the Einstein summation convention<sup>Einstein summation convention</sup>
#
# - With $\mathbf{c} = \mathbf{Ab}$ and $\mathbf{C} = \mathbf{AB}$,
#
# $$
# \mathbf{c}_{i} = \sum_j \mathbf{A}_{ij}\mathbf{b}_{j}, \qquad \mathbf{c}_{i} = \mathbf{A}_{ij}\mathbf{b}_{j} \quad \text{i : free index} \quad \text{j : dummy index}
# $$
#
# $$
# \mathbf{C}_{ij} = \sum_k \mathbf{A}_{ik}\mathbf{B}_{kj}, \qquad \mathbf{C}_{ij} = \mathbf{A}_{ik}\mathbf{B}_{kj} \quad \text{i,j : free index} \quad \text{k : dummy index}
# $$
#
# - A dummy index may be renamed freely; what matters is the free index
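NumPy's `einsum` implements this index notation directly; a small sketch showing that the dummy-index sums match `dot`:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
b = np.array([1, 2, 3])

# C_ij = A_ik B_kj : the dummy index k is summed over
C = np.einsum('ik,kj->ij', A, B)
print(np.array_equal(C, A.dot(B)))

# c_i = A_ij b_j
c = np.einsum('ij,j->i', A, b)
print(np.array_equal(c, A.dot(b)))
```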
# + [markdown] slideshow={"slide_type": "subslide"}
# #### How matrix multiplication arises from linear transformations<sup>$\dagger$</sup>
#
# $$
# y_1 = a_{11} x_1 + a_{12}x_2 \\
# y_2 = a_{21} x_1 + a_{22}x_2
# $$
#
# - The equations above take a vector $(x_1, x_2)$ and, by suitable multiplications and additions, transform it into $(y_1, y_2)$
#
# - Alternatively, if $(y_1, y_2)$ is fixed, they form a system of two linear equations in two unknowns.
#
# - Can the system be written using numbers only?
#
# - As already discussed in "Representing systems of linear equations", form the coefficient matrix
#
# $$
# \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
# $$
#
# - To rebuild the left-hand sides of the equations from this coefficient matrix, multiply it by the variable vector in matrix-product form.
#
# - Accepting the matrix-product form for now and writing it in matrix–vector form,
#
# $$
# \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \mathbf{Ax} =
# \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
# \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
# \begin{bmatrix} a_{11}x_1+a_{12}x_2 \\ a_{21}x_1+a_{22}x_2 \end{bmatrix} \tag{1}
# $$
#
# - If $\mathbf{x}$ is in turn obtained from $\mathbf{w}$ by another transformation,
#
# $$
# \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \mathbf{Bw} =
# \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
# \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} =
# \begin{bmatrix} b_{11}w_1+b_{12}w_2 \\ b_{21}w_1+b_{22}w_2 \end{bmatrix} \tag{2}
# $$
#
# - the linear transformations thus happen in sequence:
#
# $$
# \mathbf{w} \to \mathbf{x} \to \mathbf{y}
# $$
#
# - Considering the direct transformation from $\mathbf{w}$ to $\mathbf{y}$,
#
# $$
# \mathbf{y} = \mathbf{Cw} = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}
# \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}=
# \begin{bmatrix} c_{11}w_1+c_{12}w_2 \\ c_{21}w_1+c_{22}w_2 \end{bmatrix}
# $$
#
# - for some coefficient matrix $\mathbf{C}$. Working out what $\mathbf{C}$ must be:
#
# $$
# \begin{align}
# \mathbf{y} &= \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \mathbf{Ax} \\[5pt]
# &=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
# \begin{bmatrix} b_{11}w_1+b_{12}w_2 \\ b_{21}w_1+b_{22}w_2 \end{bmatrix} \qquad \because (2) \\[5pt]
# &= \begin{bmatrix} a_{11}(b_{11}w_1 + b_{12}w_2) + a_{12}(b_{21}w_1 + b_{22}w_2) \\
# a_{21}(b_{11}w_1 + b_{12}w_2) + a_{22}(b_{21}w_1 + b_{22}w_2) \end{bmatrix} \\[5pt]
# &= \begin{bmatrix} (a_{11}b_{11} + a_{12}b_{21})w_1 + (a_{11}b_{12}+a_{12}b_{22})w_2 \\
# (a_{21}b_{11} + a_{22}b_{21})w_1 + (a_{21}b_{12} + a_{22}b_{22})w_2
# \end{bmatrix} \\[5pt]
# &= \begin{bmatrix} (a_{11}b_{11} + a_{12}b_{21}) & (a_{11}b_{12}+a_{12}b_{22}) \\
# (a_{21}b_{11} + a_{22}b_{21}) & (a_{21}b_{12} + a_{22}b_{22})
# \end{bmatrix}
# \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \\[5pt]
# &= \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \\[5pt]
# &= \mathbf{AB} \mathbf{w} = \mathbf{C}\mathbf{w}
# \end{align}
# $$
#
# - The transformation performing both steps at once is represented by the product of the two matrices.
#
#
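The conclusion — applying $\mathbf{B}$ then $\mathbf{A}$ equals applying the single matrix $\mathbf{C}=\mathbf{AB}$ — can be checked numerically on random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 2))
B = rng.integers(-5, 5, (2, 2))
w = rng.integers(-5, 5, (2, 1))

# applying B then A equals applying the single matrix C = AB
y_two_steps = A.dot(B.dot(w))
C = A.dot(B)
print(np.array_equal(y_two_steps, C.dot(w)))
```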
# + [markdown] slideshow={"slide_type": "slide"}
# ### Matrix transpose
#
# - The rows of the given matrix become columns, and the columns become rows
#
# - Written $\mathbf{A}^{\text{T}}$
#
# $$
# \mathbf{A} = \begin{bmatrix} 5 & -8 & 1 \\ 4 & 0 & 3 \end{bmatrix}
# $$
#
# $$
# \mathbf{A}^{\text{T}} = \begin{bmatrix} 5 & 4 \\ -8 & 0 \\ 1 & 3 \end{bmatrix}
# $$
#
# $$
# \begin{align}
# \left(\mathbf{A}^{\text{T}}\right)^{\text{T}} &= \mathbf{A} \\[5pt]
# \left(\mathbf{A}+\mathbf{B}\right)^{\text{T}} &= \mathbf{A}^{\text{T}} + \mathbf{B}^{\text{T}} \\[5pt]
# \left(c\mathbf{A}\right)^{\text{T}} &= c \mathbf{A}^{\text{T}} \\[5pt]
# \left( \mathbf{A} \mathbf{B} \right)^{\text{T}} &= \mathbf{B}^{\text{T}} \mathbf{A}^{\text{T}}
# \end{align}
# $$
# +
A = np.asarray([5, -8, 1, 4, 0, 3]).reshape(2,3)
print(A.T)
B = np.asarray([3, 1, 0, 1, 2, 8]).reshape(3,2)
print(np.dot(A,B).T)
print(np.dot(B.T, A.T))
# -
# ### Trace of a matrix
#
# $$
# \text{tr}(\mathbf{A}) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}
# $$
#
# - Properties
#
# $$
# \text{tr}(\mathbf{A} + \mathbf{B}) = \text{tr}(\mathbf{A}) + \text{tr}(\mathbf{B})
# $$
#
# $$
# \text{tr}(c\mathbf{A}) = c\,\text{tr}(\mathbf{A})
# $$
#
# $$
# \text{tr}(\mathbf{A}) = \text{tr}\left(\mathbf{A}^{\text{T}} \right)
# $$
#
# $$
# \text{tr}(\mathbf{AB}) = \text{tr}(\mathbf{BA})
# $$
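The trace properties, including the cyclic property $\text{tr}(\mathbf{AB})=\text{tr}(\mathbf{BA})$, can be checked with `np.trace`:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(np.trace(A))                             # 1 + 4 = 5
print(np.trace(A.dot(B)), np.trace(B.dot(A)))  # tr(AB) = tr(BA)
```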
# + [markdown] slideshow={"slide_type": "slide"}
# ### Some special matrices
#
# #### Identity matrix<sup>unit matrix</sup>
#
# - An $n \times n$ square matrix whose diagonal entries are all 1 and whose off-diagonal entries are all 0, written $\mathbf{I}_{n}$ or $\mathbf{I}$
#
# - $\mathbf{AI} = \mathbf{IA} = \mathbf{A}$
#
#
# $$
# \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}
# $$
#
# #### Diagonal matrix
#
# - An $n \times n$ square matrix whose nonzero entries appear only on the diagonal
#
# $$
# \mathbf{D} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 0 \end{bmatrix}
# $$
#
#
#
# #### Symmetric matrix
#
# - A square matrix with $ \mathbf{S}^{\text{T}} = \mathbf{S} $, i.e. $s_{kj} = s_{jk}$
#
# $$
# \mathbf{S} = \begin{bmatrix}
# 10 & 20 & 100 \\ 20 & 10 & 120 \\ 100 & 120 & 20
# \end{bmatrix}
# $$
#
#
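These special matrices have direct NumPy constructors; a quick sketch also verifies $\mathbf{AI}=\mathbf{A}$ and a common way to build a symmetric matrix:

```python
import numpy as np

I = np.eye(3)                       # identity matrix
D = np.diag([2, -3, 0])             # diagonal matrix
A = np.arange(9).reshape(3, 3)

print(np.array_equal(A.dot(I), A))  # AI = A
S = A + A.T                         # a quick way to build a symmetric matrix
print(np.array_equal(S, S.T))
```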
# + [markdown] slideshow={"slide_type": "slide"}
# ### Inverse matrix
#
# - The inverse of an $n \times n$ matrix $\mathbf{A}$ is written $\mathbf{A}^{-1}$ and satisfies
# $$
# \mathbf{AA}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}
# $$
#
# - If the inverse exists, $\mathbf{A}$ is called a nonsingular matrix<sup>nonsingular matrix</sup>; if not, a singular matrix<sup>singular matrix</sup>
#
# - Most languages ship a linear-algebra library that computes inverses, e.g. `numpy.linalg.inv()`
#
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Solving a linear system with the inverse
#
# $$
# \begin{align}
# \mathbf{A}\mathbf{x} &= \mathbf{b} \\[5pt]
# \mathbf{A}^{-1} \mathbf{A} \mathbf{x} &= \mathbf{A}^{-1} \mathbf{b} \\[5pt]
# \mathbf{x} &= \mathbf{A}^{-1} \mathbf{b}
# \end{align}
# $$
#
# $$
# \begin{matrix}
# 2x_1 & + & 1x_2 & = & 3 \\
# -6x_1 & + & 3x_2 & = & -27
# \end{matrix} \Rightarrow
# \begin{bmatrix}
# 2 & 1 \\
# -6 & 3
# \end{bmatrix}
# \begin{bmatrix}
# x_1 \\ x_2
# \end{bmatrix}=
# \begin{bmatrix}
# 3\\ -27
# \end{bmatrix}
# $$
#
# $$
# \begin{bmatrix}
# x_1 \\ x_2
# \end{bmatrix}=
# \begin{bmatrix}
# 2 & 1 \\
# -6 & 3
# \end{bmatrix}^{-1}
# \begin{bmatrix}
# 3\\ -27
# \end{bmatrix}
# $$
# + slideshow={"slide_type": "fragment"}
A = np.array([[2,1],[-6, 3]])
b = np.array([3, -27])
np.dot(np.linalg.inv(A), b)
# -
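As a side note, `numpy.linalg.solve` is usually preferred in practice over forming the inverse explicitly — it solves $\mathbf{Ax}=\mathbf{b}$ directly and is numerically more stable; both give the same answer here:

```python
import numpy as np

A = np.array([[2, 1], [-6, 3]])
b = np.array([3, -27])

x_inv = np.linalg.inv(A).dot(b)   # via the explicit inverse
x_solve = np.linalg.solve(A, b)   # solve Ax = b directly (preferred)
print(x_inv, x_solve)
```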
# ### Why learn matrices?
#
# - Ice-cream sales by temperature
#
# |Temp.|20|21|22|23|24|25|26|
# |----|--|--|--|--|--|--|--|
# |Sales|15|16|21|33|42|60|64|
#
# - It is now June 30; ice cream ordered today costs half price
#
# - The ice cream to be sold in July and August must be ordered before noon today.
#
# +
plt.rcParams["figure.figsize"] = (10,5)
t = np.array([20, 21, 22, 23, 24, 25, 26])
c = np.array([15, 16, 21, 33, 42, 60, 64])
p = np.polyfit(t, c, 1)
ts = np.linspace(20, 35, 100)
print(p)
plt.plot(t, c, 'o-')
plt.plot(ts, p[0]*ts+p[1], alpha=0.5)
plt.xlabel('temp.')
plt.ylabel('sales')
plt.show()
# -
# - Using the regression line and the daily July/August temperatures, predict the purchase quantity for every day.
#
# - The daily July/August temperatures are scraped from the Korea Meteorological Administration website.
#
# - Compare the for-loop approach with the matrix–vector product
#
# - The results are identical, and with far more data the matrix-product form is overwhelmingly more efficient.
# +
t_78 = np.array([30.5,31.1,34.3,32.1,33.1,32.8,29.2,29.7,35.4,32.6,34,35.9,37.2,35.9,30.9,34.5,
33,32,35.2,35.6,36.6,38.4,31.9,35.9,27.9,30.1,33.9,35.4,26.2,30.9,26.1,32.8,
31.3,32.1,35.6,36.5,37,35.5,34.5,29.7,26.2,32.3,30.2,26.7,24.2,25.2,30.4,30.3,
31.1,30.2,29.3,32,33.5,35.9,35.5,33.7,31.4,31,32.3,30.9,25.6,29.1]).reshape(-1,1)
# for-loop version
pred = []
for tt in t_78 :
    pred.append(p[0]*tt + p[1])
print(np.asarray(pred).sum())
# matrix, vector product
T = np.c_[np.ones(t_78.shape[0]), t_78]
print(np.dot(T,p[::-1].reshape(-1,1)).sum())
# -
# ### Implementing matrix and vector operations
#
# - In practice, matrix and vector operations can be coded in two ways:
#
# - #### for-loop computation: intuitive and easy to implement
# - #### Vectorized<sup>vectorization</sup> computation: less intuitive but efficient
#
#
# - Vectorized operations are preferable whenever possible, but they take time to get used to and are easy to mix up
#
# - To close the matrix–vector unit, we write code for a few examples
# ### Example 1. $$
# y_i = \sum_{j} A_{ij} x_j \quad y_i = \sum_{j} A_{ji} x_j
# $$
#
# +
A = np.random.randint(1, 10, 25).reshape(5,5)
x = np.random.randint(1, 10, 5).reshape(-1,1)
print('A')
print(A)
print('x')
print(x)
print("\n")
print("######################")
print("y_i = sum_j A_ij * x_j ")
print("######################")
########################################
# for-loop version
y = []
temp = 0; m = A.shape[0];
for i in range(m) :
    for j in range(A.shape[1]) :
        temp += A[i,j]*x[j]
    y.append(temp)
    temp = 0
print("for-loop version")
y = np.array(y)
print(y)
print("\n")
print("vectorized version")
y = A.dot(x)
print(y)
print("\n")
print("######################")
print("y_i = sum_j A_ji * x_j ")
print("######################")
########################################
# for-loop version
y = []
temp = 0; m = A.shape[0];
for i in range(m) :
    for j in range(A.shape[1]) :
        temp += A[j,i]*x[j]
    y.append(temp)
    temp = 0
print("for-loop version")
y = np.array(y)
print(y)
print("\n")
print("vectorized version")
y = A.T.dot(x)
print(y)
# -
# ### Example 2. $$
# \mathbf{S} = \sum_{i=1}^{m} \mathbf{x}^{(i)} \left(\mathbf{x}^{(i)} \right)^{\text{T}}
# $$
#
# - A superscript on a vector usually denotes the $i$-th of several vectors
#
# - It is written $(i)$ to distinguish it from an exponent
#
# - #### for-loop version
#
# ```python
# X = [x1, x2, x3, x4, x5]
#
# S = np.zeros((x1.shape[0], x1.shape[0]), dtype=int)
# for i in range(len(X)):
#     x_i = X[i]
#     S += np.dot(x_i, x_i.T)
# print(S)
# [[335 370 405]
# [370 410 450]
# [405 450 495]]
# ```
#
# - #### Matrix–vector version
#
# ```python
# X = np.array([x1, x2, x3, x4, x5]).reshape(5,3) # row major
# S = np.dot(X.T, X)
# print(S)
# [[335 370 405]
# [370 410 450]
# [405 450 495]]
# ```
#
# +
# define five column vectors
x1 = np.array([1,2,3])[:,np.newaxis]
x2 = np.array([4,5,6])[:,np.newaxis]
x3 = np.array([7,8,9])[:,np.newaxis]
x4 = np.array([10,11,12])[:,np.newaxis]
x5 = np.array([13,14,15])[:,np.newaxis]
########################################
# for-loop version
print("for-loop version")
X = [x1, x2, x3, x4, x5]
m = len(X)
S = np.zeros((x1.shape[0], x1.shape[0]), dtype=int)
for i in range(m):
    x_i = X[i]
    S += np.dot(x_i, x_i.T)
print(S)
print('\n')
########################################
# vectorized version
print("vectorized version: NumPy dot product")
X = np.array([x1, x2, x3, x4, x5]).reshape(5,3) # row major
S = np.dot(X.T, X)
print(S)
# -
# ### Example 3. $$
# w^{(t+1)}_j = \sum_{i=1}^{m} \left\{ \left(\mathbf{X}_{(i,:)}\cdot\mathbf{w}^{(t)}-y_i\right) X_{ij}\right\}
# $$
#
# - The formula above is the update rule for the regression coefficients in linear regression
#
# - $\mathbf{X}_{(i,:)}$ : the $i$-th row of the matrix $\mathbf{X}$
#
# - The superscripts $(t)$, $(t+1)$ denote the $t$-th and $(t+1)$-th steps of an iterative update
#
# - That is, the formula updates the $t$-th $\mathbf{w}$ to the $(t+1)$-th $\mathbf{w}$
#
# - Performing one update with the data below:
#
#
# $$
# \mathbf{X} = \begin{bmatrix}
# 2.74 & 3.58 \\
# 3.01 & 2.72 \\
# 2.12 & 3.23 \\
# 2.19 & 4.46 \\
# 4.82 & 1.92 \\
# 3.96 & 2.64 \\
# 2.84 & 4.63 \\
# 0.36 & 0.44 \\
# 0.1 & 4.16 \\
# 3.89 & 4.35
# \end{bmatrix} \qquad \mathbf{w} = \begin{bmatrix} 1.2 \\ 3.4 \end{bmatrix} \qquad
# \mathbf{y} = \begin{bmatrix}
# 0.98\\
# 0.8\\
# 0.46\\
# 0.78\\
# 0.12\\
# 0.64\\
# 0.14\\
# 0.94\\
# 0.52\\
# 0.41
# \end{bmatrix}
# $$
#
#
#
# - #### for-loop version
#
# ```python
# w2 = np.zeros_like(w)
# m = X.shape[0]
#
# for j in range(w.shape[0]):
#     for i in range(m):
#         w2[j] += (np.dot(X[i,:],w) - y[i])*X[i,j]
#
# print(w2)
# [[381.35]
# [489.13]]
# ```
#
# - #### Matrix–vector version
#
# ```python
# print(np.dot(X.T, (np.dot(X,w)-y)))
# [[381.35]
# [489.13]]
# ```
# +
np.random.seed(0)
X = (np.random.rand(20)*5).reshape(-1,2)
# print(X)
w = np.array([1.2, 3.4]).reshape(-1,1)
# print(w)
y = np.random.rand(10).reshape(-1,1)
# print(y)
########################################
# for-loop version
print("for-loop computation")
def for_loop():
    w2 = np.zeros_like(w)
    m = X.shape[0]
    for j in range(w.shape[0]):
        for i in range(m):
            w2[j] += (np.dot(X[i,:],w) - y[i])*X[i,j]
    return w2
########################################
# time the for-loop version
# %timeit -n100 -r10 for_loop()
w2 = for_loop()
print(w2)
print("\n")
########################################
# NumPy broadcasting
print("vectorized computation: NumPy broadcasting")
# %timeit -n100 -r10 ((np.dot(X,w)-y) * X).sum(axis=0)
print(((np.dot(X,w)-y) * X).sum(axis=0))
print("\n")
########################################
# NumPy dot product
print("vectorized computation: NumPy dot product")
# %timeit -n100 -r10 np.dot(X.T, (np.dot(X,w)-y))
print(np.dot(X.T, (np.dot(X,w)-y)))
# + [markdown] slideshow={"slide_type": "slide"}
# ## References
#
# 1. [kreyszig] Advanced Engineering Mathematics, 10th ed., <NAME>, <NAME> & SONS
#
# 2. [andrew] Machine Learning, Coursera, <NAME>, https://www.coursera.org/learn/machine-learning
#
# 3. [scipy-lecture] Array multiplication is not matrix multiplication, http://www.scipy-lectures.org/intro/numpy/operations.html
#
# + slideshow={"slide_type": "skip"} language="html"
# <link href='https://fonts.googleapis.com/earlyaccess/notosanskr.css' rel='stylesheet' type='text/css'>
# <!--https://github.com/kattergil/NotoSerifKR-Web/stargazers-->
# <link href='https://cdn.rawgit.com/kattergil/NotoSerifKR-Web/5e08423b/stylesheet/NotoSerif-Web.css' rel='stylesheet' type='text/css'>
# <!--https://github.com/Joungkyun/font-d2coding-->
# <link href="http://cdn.jsdelivr.net/gh/joungkyun/font-d2coding/d2coding.css" rel="stylesheet" type="text/css">
# <style>
# h1 { font-family: 'Noto Sans KR' !important; color:#348ABD !important; }
# h2 { font-family: 'Noto Sans KR' !important; color:#467821 !important; }
# h3, h4 { font-family: 'Noto Sans KR' !important; color:#A60628 !important; }
# p:not(.navbar-text) { font-family: 'Noto Serif KR', 'Nanum Myeongjo'; font-size: 12pt; line-height: 200%; text-indent: 10px; }
# li:not(.dropdown):not(.p-TabBar-tab):not(.p-MenuBar-item):not(.jp-DirListing-item):not(.p-CommandPalette-header):not(.p-CommandPalette-item):not(.jp-RunningSessions-item)
# { font-family: 'Noto Serif KR', 'Nanum Myeongjo'; font-size: 12pt; line-height: 200%; }
# table { font-family: 'Noto Sans KR' !important; font-size: 11pt !important; }
# li > p { text-indent: 0px; }
# li > ul { margin-top: 0px !important; }
# sup { font-family: 'Noto Sans KR'; font-size: 9pt; }
# code, pre { font-family: D2Coding, 'D2 coding' !important; font-size: 12pt !important; line-height: 130% !important;}
# .code-body { font-family: D2Coding, 'D2 coding' !important; font-size: 12pt !important;}
# .ns { font-family: 'Noto Sans KR'; font-size: 15pt;}
# .summary {
# font-family: 'Georgia'; font-size: 12pt; line-height: 200%;
# border-left:3px solid #FF0000;
# padding-left:20px;
# margin-top:10px;
# margin-left:15px;
# }
# .green { color:#467821 !important; }
# .comment { font-family: 'Noto Sans KR'; font-size: 10pt; }
# </style>
| study/mathematics-for-ml/datablocklab/class-03/03-01-matrix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''sdf'': conda)'
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# Add parent directory into system path
import sys, os
sys.path.insert(1, os.path.abspath(os.path.normpath('..')))
import torch
from torch import nn
from torch.nn.init import calculate_gain
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f'CUDA {i}: {torch.cuda.get_device_name(i)}')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.set_default_dtype(torch.float32)
from siren_pytorch.siren_pytorch import Modulator, SirenNet
from utils.helpers import cartesian_to_spherical, spherical_to_cartesian
from models import Siren_PINN
#net = MLP_PINN(N_layers=8, width=32, loss_lambda=[1.0, 1.0], activation=nn.ELU()).to(device)
net = SirenNet(dim_in=3, dim_out=1, num_layers=3, dim_hidden=256).to(device)
# +
import os
from utils.dataset import ImplicitDataset, RandomMeshSDFDataset, batch_loader
dataset_name = '../datasets/box_1f0_gyroid_4pi'
output_stl = dataset_name+'.stl'
train_dataset = ImplicitDataset.from_file(file=dataset_name+'_train.npz', device=device)
#train_dataset.points.requires_grad_(True)
#train_dataset = RandomMeshSDFDataset(output_stl, sampling_method='importance', device=device)
print(train_dataset)
# +
from utils.optimizer import CallbackScheduler
# Optimization
## ADA
optimizer=torch.optim.Adam(net.parameters(), lr=0.001, betas=(0.9, 0.999), eps=1e-6, amsgrad=False)
lr_scheduler = CallbackScheduler([
CallbackScheduler.reduce_lr(0.2),
CallbackScheduler.reduce_lr(0.2),
CallbackScheduler.init_LBFGS(
lr=0.5, max_iter=20, max_eval=40,
tolerance_grad=1e-5, tolerance_change=1e-6,
history_size=100,
line_search_fn=None
),
CallbackScheduler.reduce_lr(0.2)
], optimizer=optimizer, model=net, eps=1e-7, patience=300)
# +
max_epochs = 2500
PRINT_EVERY_EPOCH = 100
points = train_dataset.points
sdfs = train_dataset.sdfs
#points.requires_grad_(True)
try:
    # Training
    epoch = 0
    while epoch < max_epochs:
        # for residual_points in batch_loader(train_dataset.points, num_batches=10):
        #     residual_points.requires_grad_(True)
        #     if train_dataset.points.grad is not None:
        #         train_dataset.points.grad.zero_()
        optimizer.zero_grad()
        y = net(points).squeeze()
        #loss = net.loss(y, residual_x=residual_points, bc_x=train_dataset.bc_points, bc_sdf=train_dataset.bc_sdfs)
        #y = net(train_dataset.points)
        loss = nn.MSELoss(reduction='mean')(y, sdfs)
        loss.backward()
        lr_scheduler.optimizer.step(lambda: loss)
        lr_scheduler.step_when((epoch % 500) == 499)
        lr_scheduler.step_loss(loss)
        if epoch % PRINT_EVERY_EPOCH == 0:
            print(f'#{epoch} Loss: {loss:.6f}')
        epoch += 1
        # if device.type == 'cuda':
        #     torch.cuda.empty_cache()
        # if epoch % 100:
        #     net.loss_lambda[0] = epoch / 2000.0
except KeyboardInterrupt as e:
    print('Bye bye')
# -
from utils import SDFVisualize, plot_model_weight
visualize = SDFVisualize(z_level=0.0, step=0.05, offset=30, nums=100, device=device)
visualize.from_nn(net, bounds_from_mesh=output_stl)
visualize.from_mesh(output_stl)
#visualize.from_dataset(net, dataset_name + '_slice.npz')
from utils.dataset import TestDataset
test_dataset = TestDataset(dataset_name+'_test.npz', device=device)
print('Uniform SDFS: ', net.test(test_dataset.uniform.points, test_dataset.uniform.sdfs).cpu().detach().numpy())
print('Uniform gradient: ', net.test_gradient(test_dataset.uniform.points, test_dataset.uniform.gradients).cpu().detach().numpy())
print('Random SDFS:', net.test(test_dataset.random.points, test_dataset.random.sdfs).cpu().detach().numpy())
| notebooks/2.10_train_Siren.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on="school_name")
school_data_complete
# -
# ## District Summary
#
# * Calculate the total number of schools
#
# * Calculate the total number of students
#
# * Calculate the total budget
#
# * Calculate the average math score
#
# * Calculate the average reading score
#
# * Calculate the percentage of students with a passing math score (70 or greater)
#
# * Calculate the percentage of students with a passing reading score (70 or greater)
#
# * Calculate the percentage of students who passed math **and** reading (% Overall Passing)
#
# * Create a dataframe to hold the above results
#
# * Optional: give the displayed data cleaner formatting
# +
Total_Students = len(student_data)
Columns = ['Type', 'Budget', 'Total Students', 'Per Student Budget',
'Reading score', 'Math score', 'Passing Reading %',
'Passing Math %', 'Passing Both %']
District_Summary_values = {
'Total Schools' : [len(school_data)],
'Total Students' : [Total_Students],
'Total budget' : [school_data['budget'].sum()],
'Avg math score' : [student_data['math_score'].mean()],
'Avg reading score' : [student_data['reading_score'].mean()],
'Passing math %' : [len(student_data[student_data['math_score']>=70])/Total_Students],
'Passing reading %' : [len(student_data[student_data['reading_score']>=70])/Total_Students],
'Passing both %' : [len(student_data[(student_data['math_score']>=70)
& (student_data['reading_score']>=70)])/Total_Students]
}
District_Summary = pd.DataFrame(District_Summary_values)
District_Summary['Total Students'] = District_Summary['Total Students'].map("{:,}".format)
District_Summary['Total budget'] = District_Summary['Total budget'].map("${:,.2f}".format)
District_Summary['Avg math score'] = District_Summary['Avg math score'].map("{:,.2f}".format)
District_Summary['Avg reading score'] = District_Summary['Avg reading score'].map("{:,.2f}".format)
District_Summary['Passing math %'] = District_Summary['Passing math %'].map("{:.2%}".format)
District_Summary['Passing reading %'] = District_Summary['Passing reading %'].map("{:,.2%}".format)
District_Summary['Passing both %'] = District_Summary['Passing both %'].map("{:,.2%}".format)
District_Summary
# -
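The passing percentages above are computed by counting filtered rows; an equivalent, slightly terser pandas idiom averages a boolean mask. A minimal sketch on toy scores (hypothetical numbers, not the school data):

```python
import pandas as pd

# Toy student scores; any numeric Series works the same way.
scores = pd.DataFrame({
    "math_score": [90, 65, 70, 80],
    "reading_score": [70, 60, 95, 50],
})

# Averaging a boolean mask gives the passing fraction directly.
pct_math = (scores["math_score"] >= 70).mean()        # 3 of 4 rows pass
pct_both = ((scores["math_score"] >= 70)
            & (scores["reading_score"] >= 70)).mean() # rows 0 and 2 pass

print(pct_math, pct_both)
```

The mask-mean form avoids the repeated `len(...)/Total_Students` pattern, though both give identical results.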
# ## School Summary
# * Create an overview table that summarizes key metrics about each school, including:
# * School Name
# * School Type
# * Total Students
# * Total School Budget
# * Per Student Budget
# * Average Math Score
# * Average Reading Score
# * % Passing Math
# * % Passing Reading
# * % Overall Passing (The percentage of students that passed math **and** reading.)
#
# * Create a dataframe to hold the above results
#Creating a new DF with the relevant data
School_Summary = school_data[['school_name','type','budget']].set_index('school_name')
#Generate variables with the info needed
School_grouped = school_data_complete.groupby('school_name')
Student_school_count = School_grouped.count()['student_name']
Reading_school_score = School_grouped.sum()['reading_score']
Math_school_score = School_grouped.sum()['math_score']
#Adding the first 4 columns to the DF
School_Summary['Total Students'] = Student_school_count
School_Summary['Per Student Budget'] = School_Summary['budget']/Student_school_count
School_Summary['Reading score'] = School_grouped.mean()['reading_score']
School_Summary['Math score'] = School_grouped.mean()['math_score']
#Generate the passing reading percentage
bool_pass_read = student_data['reading_score']>=70
Pass_read_school = student_data[bool_pass_read].groupby('school_name').count()['student_name']
School_Summary['Passing Reading %'] = (Pass_read_school/Student_school_count)
#Generate the passing math percentage
bool_pass_math = student_data['math_score']>=70
Pass_math_school = student_data[bool_pass_math].groupby('school_name').count()['student_name']
School_Summary['Passing Math %'] = (Pass_math_school/Student_school_count)
#Generate the passing both (overall) percentage
Pass_both_school = student_data[bool_pass_math & bool_pass_read].groupby('school_name').count()['student_name']
School_Summary['Passing Both %'] = Pass_both_school/Student_school_count
#Formatting a new DF for display
School_SummaryD = School_Summary.copy()
School_SummaryD['budget'] = School_SummaryD['budget'].map('${:,.2f}'.format)
School_SummaryD['Total Students'] = School_SummaryD['Total Students'].map('{:,}'.format)
School_SummaryD['Reading score'] = School_SummaryD['Reading score'].map('{:.2f}'.format)
School_SummaryD['Math score'] = School_SummaryD['Math score'].map('{:.2f}'.format)
School_SummaryD['Passing Reading %'] = School_SummaryD['Passing Reading %'].map('{:,.2%}'.format)
School_SummaryD['Passing Math %'] = School_SummaryD['Passing Math %'].map('{:,.2%}'.format)
School_SummaryD['Passing Both %'] = School_SummaryD['Passing Both %'].map('{:,.2%}'.format)
School_SummaryD
# ## Top Performing Schools (By % Overall Passing)
# * Sort and display the top five performing schools by % overall passing.
School_SummaryD.sort_values('Passing Both %',ascending=False).head()
# ## Bottom Performing Schools (By % Overall Passing)
# * Sort and display the five worst-performing schools by % overall passing.
School_SummaryD.sort_values('Passing Both %').head()
# ## Math Scores by Grade
# * Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school.
#
# * Create a pandas series for each grade. Hint: use a conditional statement.
#
# * Group each series by school
#
# * Combine the series into a dataframe
#
# * Optional: give the displayed data cleaner formatting
grade_grouped = school_data_complete.groupby(['grade','school_name'])
Math_Scores_Grade = pd.DataFrame(grade_grouped.mean()['math_score']['9th'])
Math_Scores_Grade.rename(columns={'math_score':'9th'},inplace= True)
Math_Scores_Grade['10th'] = grade_grouped.mean()['math_score']['10th']
Math_Scores_Grade['11th'] = grade_grouped.mean()['math_score']['11th']
Math_Scores_Grade['12th'] = grade_grouped.mean()['math_score']['12th']
#Formatting a new DF for display
Math_Scores_GradeD = Math_Scores_Grade.copy()
Math_Scores_GradeD['9th'] = Math_Scores_GradeD['9th'].map('{:,.2f}'.format)
Math_Scores_GradeD['10th'] = Math_Scores_GradeD['10th'].map('{:,.2f}'.format)
Math_Scores_GradeD['11th'] = Math_Scores_GradeD['11th'].map('{:,.2f}'.format)
Math_Scores_GradeD['12th'] = Math_Scores_GradeD['12th'].map('{:,.2f}'.format)
Math_Scores_GradeD
# ## Reading Score by Grade
# * Perform the same operations as above for reading scores
grade_grouped = school_data_complete.groupby(['grade','school_name'])
Read_Scores_Grade = pd.DataFrame(grade_grouped.mean()['reading_score']['9th'])
Read_Scores_Grade.rename(columns={'reading_score':'9th'},inplace= True)
Read_Scores_Grade['10th'] = grade_grouped.mean()['reading_score']['10th']
Read_Scores_Grade['11th'] = grade_grouped.mean()['reading_score']['11th']
Read_Scores_Grade['12th'] = grade_grouped.mean()['reading_score']['12th']
#Formatting a new DF for display
Read_Scores_GradeD = Read_Scores_Grade.copy()
Read_Scores_GradeD['9th'] = Read_Scores_GradeD['9th'].map('{:,.2f}'.format)
Read_Scores_GradeD['10th'] = Read_Scores_GradeD['10th'].map('{:,.2f}'.format)
Read_Scores_GradeD['11th'] = Read_Scores_GradeD['11th'].map('{:,.2f}'.format)
Read_Scores_GradeD['12th'] = Read_Scores_GradeD['12th'].map('{:,.2f}'.format)
Read_Scores_GradeD
# ## Scores by School Spending
# * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:
# * Average Math Score
# * Average Reading Score
# * % Passing Math
# * % Passing Reading
# * Overall Passing Rate (Average of the above two)
max_budget = int(School_Summary['Per Student Budget'].max()) #655.0
min_budget = int(School_Summary['Per Student Budget'].min()) #578.0
bins = [0, 585, 630, 645, 680]
labels = ['<$585', '$585-630', '$630-645', '$645-680']
School_Summary['Bins'] = pd.cut(School_Summary['Per Student Budget'], bins, labels=labels)
Bins_Grouped = School_Summary.groupby('Bins')
Scores_School_Spending = pd.DataFrame(Bins_Grouped.mean()[Columns[4:9]])
#Formatting a new DF for display
Scores_School_SpendingD = Scores_School_Spending.copy()
Scores_School_SpendingD['Passing Reading %'] = Scores_School_SpendingD['Passing Reading %'].map('{:,.2%}'.format)
Scores_School_SpendingD['Passing Math %'] = Scores_School_SpendingD['Passing Math %'].map('{:,.2%}'.format)
Scores_School_SpendingD['Passing Both %'] = Scores_School_SpendingD['Passing Both %'].map('{:,.2%}'.format)
Scores_School_SpendingD['Reading score'] = Scores_School_SpendingD['Reading score'].map('{:,.2f}'.format)
Scores_School_SpendingD['Math score'] = Scores_School_SpendingD['Math score'].map('{:,.2f}'.format)
Scores_School_SpendingD
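As a self-contained illustration of the binning step, `pd.cut` with one label per interval on hypothetical per-student budgets (toy numbers, not the real data):

```python
import pandas as pd

# Hypothetical per-student budgets; intervals are (0,585], (585,630], (630,645], (645,680].
budgets = pd.Series([580, 600, 640, 660, 590])
bins = [0, 585, 630, 645, 680]
labels = ['<$585', '$585-630', '$630-645', '$645-680']
binned = pd.cut(budgets, bins, labels=labels)
print(binned.value_counts().sort_index())
```

Note that `pd.cut` needs exactly one label per interval (len(labels) == len(bins) - 1), and intervals are right-inclusive by default.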
# ## Scores by School Size
# * Perform the same operations as above, based on school size.
School_grouped = school_data_complete.groupby('school_name')
Student_school_DF = pd.DataFrame(School_grouped.count()['student_name'])
bins = [0,1000,2000,5000]
School_Summary['School Size'] = pd.cut(Student_school_DF['student_name'],bins)
Size_Grouped = School_Summary.groupby('School Size')
Size_School_Spending = pd.DataFrame(Size_Grouped.mean()[Columns[4:9]])
#Formatting a new DF for display
Size_School_SpendingD = Size_School_Spending.copy()
Size_School_SpendingD['Passing Reading %'] = Size_School_SpendingD['Passing Reading %'].map('{:,.2%}'.format)
Size_School_SpendingD['Passing Math %'] = Size_School_SpendingD['Passing Math %'].map('{:,.2%}'.format)
Size_School_SpendingD['Passing Both %'] = Size_School_SpendingD['Passing Both %'].map('{:,.2%}'.format)
Size_School_SpendingD['Reading score'] = Size_School_SpendingD['Reading score'].map('{:,.2f}'.format)
Size_School_SpendingD['Math score'] = Size_School_SpendingD['Math score'].map('{:,.2f}'.format)
Size_School_SpendingD
# ## Scores by School Type
# * Perform the same operations as above, based on school type
Type_grouped = School_Summary.groupby('type')
Type_School = pd.DataFrame(Type_grouped.mean()[Columns[4:9]])
#Formatting a new DF for display
Type_SchoolD = Type_School.copy()
Type_SchoolD['Passing Reading %'] = Type_SchoolD['Passing Reading %'].map('{:,.2%}'.format)
Type_SchoolD['Passing Math %'] = Type_SchoolD['Passing Math %'].map('{:,.2%}'.format)
Type_SchoolD['Passing Both %'] = Type_SchoolD['Passing Both %'].map('{:,.2%}'.format)
Type_SchoolD['Reading score'] = Type_SchoolD['Reading score'].map('{:,.2f}'.format)
Type_SchoolD['Math score'] = Type_SchoolD['Math score'].map('{:,.2f}'.format)
Type_SchoolD
| PyCitySchools/PyCitySchools_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] inputHidden=false outputHidden=false
# # Profiler performance
#
# We use part of the Instacart data available at https://www.instacart.com/datasets/grocery-shopping-2017
#
# Specifically order_products__prior.csv, a 4-column, 33.2-million-row CSV file.
#
# Before 2.2.10:
# it took 355.58 seconds to process the whole Instacart dataset on a Windows 10 machine.
#
# After 2.2.10:
# it took 78 seconds with infer=False.
#
# -
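The before/after timings above come from wall-clock measurements; a generic micro-benchmark helper (standard library only, independent of Optimus, with a toy workload standing in for the real profiling call) looks like:

```python
import time

def benchmark(fn, repeats=3):
    """Return the best wall-clock time in seconds over several runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Toy workload; in practice this would wrap e.g. the profiler call.
elapsed = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f"best of {3}: {elapsed:.4f} s")
```

Taking the best of several runs reduces noise from caching and background load, which matters when comparing versions.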
# %load_ext autoreload
# %autoreload 2
import sys
sys.path.append("..")
# Create optimus
from optimus import Optimus
op = Optimus(master="local[*]", app_name = "optimus" ,verbose =True, checkpoint= True)
# ### Benchmark
df = op.load.csv("C:\\Users\\argenisleon\\Desktop\\order_products__prior.csv")
df.table()
# %%time
df.groupBy("order_id").count().sort("count",ascending=False).show()
# %%time
df.cols.frequency("order_id")
# %%time
op.profiler.to_json(df, "order_id", infer=False, relative_error=1)
a = df.limit(10)  # timings: 1:46, 2:25
# %%time
df.cols.frequency("order_id")
from optimus import Profiler
p = Profiler()
p.run(a,"order_id")
op.profiler.run(a, "order_id", infer=True, relative_error=1)
df.groupBy("order_id").count().sort("count",ascending=False)
| examples/profiler-test-instacart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Subset & Index data
import numpy as np
import pandas as pd
# +
# Load the data into a DataFrame
# data of endangered languages
data = pd.read_csv('endangeredLang.csv')
data.head(n=5)
# -
data[6:] #selecting all entries from index 6 onwards
data[::2] #select entries at even index locations
# +
data.iloc[0:8,0:7] #specify the ranges of rows and columns
#positional indexing
## iloc[row slicing, column slicing]
# -
# select all columns for rows of index values 0 and 10
data.loc[[0, 10], :]
#isolate rows and columns
data[['Countries','Name in English']][4:8]
data[data.Countries == "Italy"] #filter rows by a column value
x=data.loc[data['Countries'].isin(['Italy','Germany'])] #isolate by two row labels
x.head(15)
data[(data.Countries == "India") & (data["Degree of endangerment"]=="Vulnerable")]
df=data.copy() #make a copy of the data frame stored in data
df.rename(columns={'Number of speakers':'Number'}, inplace=True) #replace "Number of Speakers" with "Number"
df.head(5)
df[(df.Countries == "India") & (df.Number<100)] #specify 2 conditions to be fulfilled
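The same two-condition filter can also be written with `DataFrame.query`; a sketch on toy data with simplified column names, since `query` is awkward with spaces in names:

```python
import pandas as pd

# Toy frame mimicking the endangered-languages columns.
toy = pd.DataFrame({
    "Countries": ["India", "India", "Italy"],
    "Number": [50, 500, 80],
})

# Equivalent to toy[(toy.Countries == "India") & (toy.Number < 100)].
small_india = toy.query('Countries == "India" and Number < 100')
print(small_india)
```

`query` reads closer to natural language for chained conditions, while the boolean-mask form works with any column name.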
| section3/data_index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import os
from tqdm import tqdm
from decimal import Decimal, ROUND_HALF_UP
from matplotlib import pyplot as plt
from statistics import mode
from collections import Counter
from keras_vggface.vggface import VGGFace
from keras_vggface import utils
from keras.preprocessing import image
import tensorflow as tf
# Thinning the dataframe
def new_list(l, count = 50):
len_a = len(l)
if count >= len_a:
return l
c = len_a / count
res = []
prev = 1
cnt = 0
for i in l:
if prev >= len_a:
break
cnt += 1
dec = int(Decimal(prev).to_integral_value(rounding = ROUND_HALF_UP))
if cnt == dec:
prev += c
res.append(i)
return res
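The same thinning (picking roughly `count` evenly spaced elements) can be sketched with numpy indexing; edge rounding differs slightly from `new_list`, so this is an illustration rather than a drop-in replacement:

```python
import numpy as np

def thin(seq, count=50):
    """Pick about `count` evenly spaced elements, always keeping first and last."""
    if count >= len(seq):
        return list(seq)
    # Evenly spaced positions from first to last index, de-duplicated after truncation.
    idx = np.unique(np.linspace(0, len(seq) - 1, num=count).astype(int))
    return [seq[i] for i in idx]

print(thin(list(range(10)), count=4))  # [0, 3, 6, 9]
```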
# +
# For every video, the deep embeddings are combined into sequences (windows) of 2 seconds.
# However, here we downsample every video to 5 frames per second (FPS).
# That is, at 25 FPS we select every 5th frame,
# at 30 FPS every 6th frame, and so on.
df_test = pd.read_csv('img/01-01-07-02-02-02-06/df_test.csv')
name_folder = df_test.name_folder.unique().tolist()
fps = 30
need_index = []
for i in tqdm(name_folder):
curr_name = i
curr_df = df_test[df_test.name_folder==i]
all_index = curr_df.index.tolist()
need_frames = round(len(curr_df)/(5*fps/25))
if need_frames!=0:
need_index.extend(new_list(all_index, count = need_frames))
df_test_short = df_test[df_test.index.isin(need_index)]
df_test_short = df_test_short.reset_index(drop=True)
# -
df_test_short.head()
# Building a model for feature extraction
path_model = 'models/EmoAffectnet/weights_66_37.h5'
resnet50_features = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3), pooling='avg')
gaus = tf.keras.layers.GaussianNoise(0.1)(resnet50_features.output)
x = tf.keras.layers.Dense(units=512, kernel_regularizer=tf.keras.regularizers.l2(1e-4), activation = 'relu', name='dense_x')(gaus)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Dense(7, activation = 'softmax')(x)
model_resnet50_features = tf.keras.models.Model(resnet50_features.input, x)
model_resnet50_features.load_weights(path_model)
model_loaded = tf.keras.models.Model(inputs=model_resnet50_features.input, outputs=[model_resnet50_features.get_layer('dense_x').output])
# Extracting features
# +
def get_feature(path, model_loaded, shape = (224,224)):
feature_all = []
for i in tqdm(path):
read_images = []
for image_curr in i:
img = image.load_img(image_curr, target_size=shape)
x = image.img_to_array(img)
x = x.reshape((1, x.shape[0], x.shape[1], x.shape[2]))
x = utils.preprocess_input(x, version=2)  # version=2 corresponds to VGGFace2 preprocessing
x = x.reshape(shape[0],shape[1],3)
read_images.append(x)
feature_all.append(model_loaded.predict(np.asarray(read_images)))
return feature_all
def sequencing(df, unique_name, len_seq=10, step = 2):
path_new = []
name_new = []
labels_new = []
dict_seq_train = {}
for i in tqdm(unique_name):
curr_df = df[df.name_folder==i].copy()
curr_df = curr_df.reset_index(drop=True)
if len(curr_df) > len_seq:
for j in range(0,len(curr_df), round(len_seq/step)):
start = j # start of slice
finish = j+len_seq # end of slice
need_slice = curr_df.loc[start:finish-1]
if len(need_slice) == len_seq:
path_new.append(need_slice.path_images.tolist())
name_new.append(need_slice.name_folder.tolist())
labels_new.append(need_slice.emotion.tolist())
else:
need_duble = len_seq - len(need_slice)
path_new.append(need_slice.path_images.tolist() + [curr_df.path_images.tolist()[-1]]*need_duble)
name_new.append(need_slice.name_folder.tolist() + [curr_df.name_folder.tolist()[-1]]*need_duble)
labels_new.append(need_slice.emotion.tolist() + [curr_df.emotion.tolist()[-1]]*need_duble)
elif len(curr_df) == len_seq:
path_new.append(curr_df.path_images.tolist())
name_new.append(curr_df.name_folder.tolist())
labels_new.append(curr_df.emotion.tolist())
elif len(curr_df) < len_seq:
need_duble = len_seq - len(curr_df)
path_new.append(curr_df.path_images.tolist() + [curr_df.path_images.tolist()[-1]]*need_duble)
name_new.append(curr_df.name_folder.tolist() + [curr_df.name_folder.tolist()[-1]]*need_duble)
labels_new.append(curr_df.emotion.tolist() + [curr_df.emotion.tolist()[-1]]*need_duble)
return path_new, name_new, labels_new
# -
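The core of `sequencing` (fixed-length windows over each video, with a short final window padded by repeating the last element) can be sketched on a plain list; toy values and a hypothetical step size, not the real dataframe logic:

```python
def windows(items, length=4, step=2):
    """Fixed-length windows; a short final window is padded by repeating the last item."""
    out = []
    for start in range(0, len(items), step):
        w = items[start:start + length]
        if len(w) < length:
            w = w + [items[-1]] * (length - len(w))
        out.append(w)
    return out

print(windows(list(range(7)), length=4, step=2))
```

Padding with the last frame keeps every window the same shape, which the LSTM input layer requires.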
# Preparing data for LSTM input
# +
path_new, name_new, labels_new = sequencing(df_test_short, name_folder, len_seq=10, step = 2)
feature = get_feature(path_new, model_loaded)
feature_ar = np.asarray(feature)
feature_ar.shape
# -
# Building the model
def network():
input_lstm = tf.keras.Input(shape=(10, 512))
X = tf.keras.layers.Masking(mask_value=0.)(input_lstm)
X = tf.keras.layers.LSTM(512, return_sequences = True, kernel_regularizer=tf.keras.regularizers.l2(1e-3))(X)
X = tf.keras.layers.Dropout(rate = 0.2)(X)
X = tf.keras.layers.LSTM(256, return_sequences = False, kernel_regularizer=tf.keras.regularizers.l2(1e-3))(X)
X = tf.keras.layers.Dropout(rate = 0.2)(X)
X = tf.keras.layers.Dense(units = 7)(X)
X = tf.keras.layers.Activation('softmax')(X)
model = tf.keras.Model(inputs=input_lstm, outputs=X)
return model
# Getting predictions
# +
def cout_prob(emotion_count, proby, i):
if type(proby[i]) == int:
emotion_count[proby[i]] += 1
else:
emotion_count += proby[i]
return emotion_count
def get_predy_truey(name_x_new, truey, proby):
emotion_count = np.zeros((7))
list_true = []
list_proby = []
name = None
name_list = []
for i in range(len(name_x_new)):
if name == None:
name = name_x_new[i][0]
true = truey[i]
emotion_count = cout_prob(emotion_count, proby, i)
elif name_x_new[i][0] == name:
emotion_count = cout_prob(emotion_count, proby, i)
elif name_x_new[i][0] != name:
list_true.append(true)
list_proby.append(emotion_count/np.sum(emotion_count))
name = name_x_new[i][0]
emotion_count = np.zeros((7))
true = truey[i]
emotion_count = cout_prob(emotion_count, proby, i)
if i == len(name_x_new)-1:
list_true.append(true)
list_proby.append(emotion_count/np.sum(emotion_count))
list_true = np.asarray(list_true)
pred_max = np.argmax(list_proby, axis = 1).tolist()
return list_true, pred_max
# +
# The RAVDESS, CREMA-D, RAMAS, IEMOCAP and SAVEE corpora each carry a single emotion label over the whole interval.
# The label -1 occurs only in the Affwild2 corpus.
def change_labels(labels):
counter = Counter(labels)
if len(counter) > 1:
try:
if int(mode(labels)) == -1:
curr_mode = int(sorted(counter, key=counter.get, reverse=True)[1])
else:
curr_mode = int(mode(labels))
except:
if int(sorted(counter, key=counter.get, reverse=True)[0]) == -1:
curr_mode = int(sorted(counter, key=counter.get, reverse=True)[1])
else:
curr_mode = int(sorted(counter, key=counter.get, reverse=True)[0])
else:
curr_mode = int(mode(labels))
return curr_mode
# -
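The intent of `change_labels` (a majority vote over frame labels that skips the -1 placeholder used by Affwild2) can be sketched directly with `Counter`; this simplification does not reproduce the exact tie-breaking of the function above:

```python
from collections import Counter

def majority_label(labels, ignore=-1):
    """Most frequent label, ignoring the placeholder value."""
    counts = Counter(l for l in labels if l != ignore)
    # most_common is ordered by count; the first entry is the winner.
    return counts.most_common(1)[0][0]

print(majority_label([-1, 3, 3, 5]))  # 3
```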
model = network()
model.load_weights('models/LSTM/for_RAVDESS.h5')
prob = model.predict(feature_ar)
labels_true = []
for i in labels_new:
labels_true.append(change_labels(i))
label_model = {0:'Neutral', 1:'Happiness', 2:'Sadness', 3:'Surprise', 4:'Fear', 5:'Disgust', 6:'Anger'}
truey, predy = get_predy_truey(name_new, labels_true, prob)
print('Ground truth: ', label_model[truey[0]])
print('Prediction class: ', label_model[predy[0]])
# Drawing face areas
# +
fig = plt.figure(figsize=(20, 4))
for i in range(20):
ax = fig.add_subplot(2, 10, 1+i, xticks=[], yticks=[])
frame = int(os.path.basename(df_test_short.path_images[i]).split('.')[0])
img = image.load_img(df_test_short.path_images[i], target_size=(224,224))
ax.imshow(img)
ax.text(35, 35, 'Frame {}'.format(frame), fontsize = 14, color = 'white')
ax.axis('off')
plt.show()
# -
| test_LSTM_RAVDESS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from nustar_pysolar import planning, io
import astropy.units as u
import warnings
warnings.filterwarnings('ignore')
# # Download the list of occultation periods from the MOC at Berkeley.
#
# ## Note that the occultation periods typically only are stored at Berkeley for the *future* and not for the past. So this is only really useful for observation planning.
fname = io.download_occultation_times(outdir='../data/')
print(fname)
# # Download the NuSTAR TLE archive.
#
# This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
#
# The `times`, `line1`, and `line2` elements are now the TLE elements for each epoch.
tlefile = io.download_tle(outdir='../data')
print(tlefile)
times, line1, line2 = io.read_tle_file(tlefile)
# # Here is where we define the observing window that we want to use.
#
# Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
tstart = '2020-09-12T08:30:00'
tend = '2020-09-13T01:00:00'
orbits = planning.sunlight_periods(fname, tstart, tend)
orbits
# +
# Get the solar parameter
from sunpy.coordinates import sun
angular_size = sun.angular_radius(t='now')
dx = angular_size.arcsec
print(dx)
# -
offset = [-dx, 0]*u.arcsec
for ind, orbit in enumerate(orbits):
midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])
sky_pos = planning.get_skyfield_position(midTime, offset, load_path='./data', parallax_correction=True)
print("Orbit: {}".format(ind))
print("Orbit start: {} Orbit end: {}".format(orbit[0].iso, orbit[1].iso))
print(f'Aim time: {midTime.iso} RA (deg): {sky_pos[0]:8.3f} Dec (deg): {sky_pos[1]:8.3f}')
print("")
# # This is where you actually make the Mosaic for Orbit 0
from importlib import reload
reload(planning)
# +
pa = planning.get_nustar_roll(tstart, 0)
print(tstart)
print("NuSTAR Roll angle for Det0 in NE quadrant: {}".format(pa))
# We're actually using a SKY PA of 340. So...we'll need to rotate
target_pa = 150
extra_roll = (target_pa - pa.value) * u.deg
print(f'Extra roll used: {extra_roll}')
# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known
# bug to be fixed.
orbit = orbits[0].copy()
print(orbit)
#...adjust the index above to get the correct orbit. Then uncomment below.
planning.make_mosaic(orbit, make_regions=True, extra_roll = extra_roll, outfile='orbit0_mosaic.txt', write_output=True)
# -
| notebooks/20200912/Planning 20200912.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
# #%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
import torch.nn as nn
from keijzer_exogan import *
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# #%matplotlib inline
# %config InlineBackend.print_figure_kwargs={'facecolor' : "w"} # Make sure the axis background of plots is white, this is usefull for the black theme in JupyterLab
#sns.set()
# -
torch.cuda.device_count()
# + active=""
# %env CUDA_VISIBLE_DEVICES=0,1,2,3
# +
# Root directory for dataset
dataroot = "/datc/opschaler/brian/celeba_complete"
# Number of workers for dataloader
workers = 0 # 0 when to_vram is enabled
# Batch size during training
batch_size = 2**11 # 2**11
print('Batch size: ', batch_size)
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 32
# Number of channels in the training images. For color images this is 3
nc = 1
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 32
# Size of feature maps in discriminator
ndf = 32
# Number of training epochs
num_epochs =1*10**3
# Learning rate for optimizers
lr = 2e-4
lr_G = 2e-4
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
selected_gpus = [0,1,2,3]
ngpu = len(selected_gpus)
# -
# ## Mark the below cell as code to use the celeba dataset
# + active=""
# # We can use an image folder dataset the way we have it setup.
# # Create the dataset
# dataset = dset.ImageFolder(root=dataroot,
# transform=transforms.Compose([
# transforms.Grayscale(1),
# transforms.Resize(image_size),
# transforms.CenterCrop(image_size),
# transforms.ToTensor(),
# transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
# ]))
#
#
#
#
# # Create the dataloader
# dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
# shuffle=True, num_workers=workers, pin_memory=False)
# -
# ## Mark the below cell as code to use the ExoGAN dataset
pwd
# +
##### Creating custom Dataset classes
path = '/datb/16011015/ExoGAN_data/selection/' # note that you don't put the last folder in here
images = np.load(path+'first_chunks_25_percent_images.npy')
shuffle = True
if shuffle:
np.random.shuffle(images) # shuffles the images
images = images[:int(len(images)*0.05)] # use only first ... percent of the data (0.05)
print('Number of images: ', len(images))
dataset = numpy_dataset(data=images, to_vram=True) # to_vram pins it to all GPU's
#dataset = numpy_dataset(data=images, to_vram=True, transform=transforms.Compose([transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])) # to_vram pins it to all GPU's
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers, pin_memory=False)
# +
# Decide which device we want to run on
# This must be the first selected GPU: e.g. cuda:0 won't work when only GPUs [2,3] are selected; in that case use cuda:2.
device = torch.device("cuda:"+str(selected_gpus[0]) if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
# -
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
# +
# Generator Code
class Generator_original(nn.Module):
def __init__(self, ngpu):
        super(Generator_original, self).__init__()
self.ngpu = ngpu
"""
where (in_channels, out_channels,
kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
"""
self.main = nn.Sequential(
#1
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
#4
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
nn.Dropout2d(0.5),
#7
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
nn.Dropout2d(0.5),
#10
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf*1, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf*1),
nn.ReLU(True),
# Go from 1x64x64 to 1x32x32
nn.Conv2d(ngf, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf*1),
nn.ReLU(True),
#10
# state size. (ngf*2) x 16 x 16
#nn.ConvTranspose2d( ngf * 2, ngf*1, 4, 2, 1, bias=False),
#nn.BatchNorm2d(ngf*1),
#nn.ReLU(True),
#13
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d(ngf*1, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
# -
class Generator(nn.Module):
def __init__(self, ngpu, nz=100, ngf=32, nc=1):
super(Generator, self).__init__()
self.ngpu = ngpu
self.nz = nz
self.nc = nc
self.ngf = ngf
"""
where (in_channels, out_channels,
kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
"""
self.main = nn.Sequential(
#1
# input is Z (nz x 1 x 1); each block: upsample x2, reflection-pad, 3x3 conv
# out_channels chosen so each layer's input matches the previous layer's output
# (the original int(.../2) halvings made the channel counts inconsistent and would fail at runtime)
nn.Upsample(scale_factor = 2, mode='bilinear'),
nn.ReflectionPad2d(1),
nn.Conv2d(nz, ngf * 8, kernel_size=3, stride=1, padding=0),
nn.ReLU(True),
nn.Upsample(scale_factor = 2, mode='bilinear'),
nn.ReflectionPad2d(1),
nn.Conv2d(ngf * 8, ngf * 4, kernel_size=3, stride=1, padding=0),
nn.ReLU(True),
nn.Upsample(scale_factor = 2, mode='bilinear'),
nn.ReflectionPad2d(1),
nn.Conv2d(ngf * 4, ngf * 2, kernel_size=3, stride=1, padding=0),
nn.ReLU(True),
nn.Upsample(scale_factor = 2, mode='bilinear'),
nn.ReflectionPad2d(1),
nn.Conv2d(ngf * 2, ngf, kernel_size=3, stride=1, padding=0),
nn.ReLU(True),
nn.Upsample(scale_factor = 2, mode='bilinear'),
nn.ReflectionPad2d(1),
nn.Conv2d(ngf, ngf, kernel_size=3, stride=1, padding=0),
nn.ReLU(True),
nn.Upsample(scale_factor = 2, mode='bilinear'),
nn.ReflectionPad2d(1),
nn.Conv2d(ngf, nc, kernel_size=3, stride=1, padding=0),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
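Each block in this upsample-based generator is `Upsample(x2)` -> `ReflectionPad2d(1)` -> 3x3 valid conv, which nets out to doubling the spatial size (the pad adds 2, the 3x3 conv removes 2). A plain-Python trace of the six blocks (note that PyTorch's `ReflectionPad2d` requires the pad to be smaller than the input size, which holds once the 1x1 latent has been upsampled to 2x2):

```python
def upsample_block_out(h):
    # Upsample(scale_factor=2) doubles h; ReflectionPad2d(1) adds 2;
    # a 3x3 conv with stride 1 and padding 0 then removes 2 again
    h = 2 * h      # upsample
    h = h + 2      # reflection pad
    h = h - 2      # 3x3 valid conv: h - (3 - 1)
    return h

h = 1  # latent z treated as nz x 1 x 1
for _ in range(6):  # six upsample blocks in the Sequential above
    h = upsample_block_out(h)
print(h)  # 64
```

So this variant produces 64x64 outputs, matching the closing comment in the Sequential.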
# +
# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, device_ids=selected_gpus, output_device=device) # select only gpu 0, 2, 3
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
# Print the model
print(netG)
# +
from torchsummary import summary
noise = torch.randn(batch_size, nz, 1, 1, device=device)
noise.shape
summary(netG, (100,1,1))
# -
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 32 x 32 (the dataset images are 32x32; the training loop expects one scalar per image)
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 16 x 16
nn.Conv2d(ndf, ndf * 1, 4, 2, 1, bias=False),
#nn.BatchNorm2d(ndf * 1),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 8 x 8
nn.Conv2d(ndf * 1, ndf * 2, 4, 2, 1, bias=False),
#nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 4 x 4
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
#nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 2 x 2
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
#nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 1 x 1
nn.Conv2d(ndf * 8, 1, 1, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
# +
# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
print('netD to cuda')
netD = nn.DataParallel(netD, device_ids=selected_gpus, output_device=device) # select only gpu 0, 2, 3
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)
# Print the model
print(netD)
# + active=""
# from torchsummary import summary
# summary(netD, (32, 32))
# +
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999)) # should be sgd
optimizerG = optim.Adam(netG.parameters(), lr=lr_G, betas=(beta1, 0.999))
# +
import time as t
# Training Loop
# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
# -
train_D = True
train_G = True
from IPython.display import clear_output
# + active=""
# # Load saved weights
# netG.load_state_dict(torch.load('netG_state_dict')) #net.module..load_... for parallel model , net.load_... for single gpu model
# netD.load_state_dict(torch.load('netD_state_dict'))
# +
iters = 0
t1 = t.time()
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
# For each batch in the dataloader
q = np.random.randint(3, 6)
#q = 1000
for i, data in enumerate(dataloader, 0):
# Format batch
#if i % 11: # show first image of the real dataset every ... iterations
# print('real:')
# plt.imshow(data[0, 0, :, :].reshape(32, 32))
# plt.show()
# real_cpu = data.to(device) # for np array images
real_cpu = data.to(device) # for PIL images
b_size = real_cpu.size(0)
"""
https://github.com/soumith/ganhacks
use noisy labels: draw fake labels from roughly 0.0-0.3 and real labels from 0.7-1.2
(narrowed here to 0.01-0.2 for fake and 0.8-1.0 for real)
"""
low = 0.01
high = 0.2 #0.3
fake_label = ((low - high) * torch.rand(1) + high).item() # uniform sample between low and high; .item() extracts the Python scalar (.data[0] is deprecated)
low = 0.8 #0.7
high = 1.0
real_label = ((low - high) * torch.rand(1) + high).item() # uniform sample between low and high
label = torch.full((b_size,), real_label, device=device)
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
if i % q == 0:
labels_inverted = 'yes'
label.fill_(fake_label)
else:
labels_inverted = 'no'
label.fill_(real_label)
if i > 1:
if D_G_z1 < 0.45: # 45
train_G = True
train_D = False
else:
train_D = True
train_G = False
if train_D:
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Forward pass real batch through D
#print('real_cpu shape: ', real_cpu.shape)
output = netD(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = criterion(output, label) ## make this fake label sometimes
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
#if i % 11: # show first image of the real dataset every ... iterations
# print(fake.shape)
# plt.imshow(fake.reshape(32, 32))
# plt.show()
# swap labels for the discriminator when i % q == 0 (so once every q-th batch)
if i % q == 0:
label.fill_(real_label)
else:
label.fill_(fake_label)
#label.fill_(fake_label) ## make this real label sometimes
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
if train_D:
optimizerD.step()
if train_G:
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = netD(fake).view(-1)
# Calculate G's loss based on this output
errG = criterion(output, label)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
if train_G:
# Update G
optimizerG.step()
t2 = t.time()
# Output training stats
if train_G and train_D:
training_dg = 'D & G'
elif train_G:
training_dg = 'G'
elif train_D:
training_dg = '\t D'
if iters % (1) == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f\t Time: %.2f \t q: %s \t training: %s'
% (epoch, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2, (t2-t1), q, training_dg))
t1 = t.time()
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 13 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
#if (iters % 98 == 0):
# plt.imshow(np.transpose(fake[0],(1,2,0)))
# plt.show()
iters += 1
"""
Plot losses
"""
clear_output() # clears cell output
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(D_losses,label="D")
plt.plot(G_losses,label="G")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
# +
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
# +
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.ylim((0,1))
plt.legend()
plt.show()
# +
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Ground truth image")
#plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Generated images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
#plt.tight_layout()
#plt.savefig('plots/DCGAN_generated.png', dpi=1200)
print('plt saved')
# -
import matplotlib
matplotlib.rcParams['animation.embed_limit'] = 2**128  # effectively unlimited embedded-animation size (the limit is in MB; 20**128 was a typo)
# +
# #%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=500, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
# + active=""
# # save weights
# torch.save(netG.module.state_dict(), 'netG_state_dict')
# torch.save(netD.module.state_dict(), 'netD_state_dict')
# -
# # Inpainting
#
# Working on implementing: https://github.com/lotuswhl/Image-inpainting-with-dcgan-pytorch
| notebooks/old notebooks/ExoGAN_DCGAN.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Apache Toree - Scala
// language: scala
// name: apache_toree_scala
// ---
// ## Solutions - Problem 2
//
// Get the number of flights that departed from each US state in January 2008.
//
// * We have to use the airport codes to determine the state of each US airport.
// * We need to use the airlines data to get departure details.
// * To solve this problem we have to perform an inner join.
// Let us start spark context for this Notebook so that we can execute the code provided.
//
// If you want to use terminal for the practice, here is the command to use.
//
// ```
// spark2-shell \
// --master yarn \
// --name "Joining Data Sets" \
// --conf spark.ui.port=0
// ```
// +
import org.apache.spark.sql.SparkSession
val spark = SparkSession.
builder.
config("spark.ui.port", "0").
appName("Joining Data Sets").
master("yarn").
getOrCreate()
// -
spark.conf.set("spark.sql.shuffle.partitions", "2")
import spark.implicits._
val airlinesPath = "/public/airlines_all/airlines-part/flightmonth=200801"
val airlines = spark.
read.
parquet(airlinesPath)
airlines.select("Year", "Month", "DayOfMonth", "Origin", "Dest", "CRSDepTime").show
airlines.count
val airportCodesPath = "/public/airlines_all/airport-codes"
def getValidAirportCodes(airportCodesPath: String) = {
val airportCodes = spark.
read.
option("sep", "\t").
option("header", true).
option("inferSchema", true).
csv(airportCodesPath).
filter("!(State = 'Hawaii' AND IATA = 'Big') AND Country = 'USA'")
airportCodes
}
val airportCodes = getValidAirportCodes(airportCodesPath)
airportCodes.count
import org.apache.spark.sql.functions.{col, lit, count}
airlines.
join(airportCodes, col("IATA") === col("Origin"), "inner").
groupBy("State").
agg(count(lit(1)).alias("FlightCount")).
orderBy(col("FlightCount").desc).
show
airportCodes.filter("State IS NULL").show
airportCodes.filter(col("State") isNull).show
| 06_joining_data_sets/07_solutions_problem_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py352]
# language: python
# name: conda-env-py352-py
# ---
# # Scikit-Plot - Plotting to assist in model building & selection
#
# These all match (by and large) the example graphs that sklearn demonstrates in its documentation, but simplify the implementation of the graphs slightly.
#
# http://scikit-plot.readthedocs.io/en/stable/index.html
# #### Imports & Versions
# +
import sys
print("Python v.{}".format(sys.version_info[:3]))
print('*'*25)
import numpy as np
print('numpy v.{}'.format(np.__version__))
import pandas as pd
print('pandas v.{}'.format(pd.__version__))
import statsmodels
import statsmodels.api as sm
print('statsmodels v.{}'.format(statsmodels.__version__))
import matplotlib
print('matplotlib v.{}'.format(matplotlib.__version__))
import matplotlib.pyplot as plt
# %matplotlib inline
import patsy
from patsy import dmatrices
print('patsy v.{}'.format(patsy.__version__))
import sklearn
print('sklearn v.{}'.format(sklearn.__version__))
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
print('*'*25)
import scikitplot
print('scikitplot v.{}'.format(scikitplot.__version__))
from scikitplot import classifier_factory
import scikitplot.plotters as skplt
# -
# <h1>Review Scikit-Plot Functions API Plots</h1>
# <h3>Binary Classification, Principal model is Logistic Regression, statsmodels affairs dataset</h3>
dta = sm.datasets.fair.load_pandas().data #statsmodel affairs dataset
dta['affair'] = (dta.affairs > 0).astype(int) #change the affairs datapoint to a single, binary class
y, X = dmatrices('affair ~ rate_marriage + age + yrs_married + children + religious + educ',
dta, return_type='dataframe')
y = np.ravel(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
logreg = LogisticRegression()
logreg.fit(X_train,y_train)
# ## Confusion Matrix
_=skplt.plot_confusion_matrix(y_test, logreg.predict(X_test), figsize=(7,7))  # (y_true, y_pred) argument order
plt.show()
_=skplt.plot_confusion_matrix(y_test, logreg.predict(X_test), figsize=(7,7), normalize = True)
plt.show()
# scikit-plot provides a usable confusion matrix for binary classifiers, though it is not wildly more useful than the textual sklearn.metrics.confusion_matrix version. This is something that I feel becomes more useful for a multi-class classification problem. normalize=True will give % in lieu of whole numbers
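For reference, the counts behind such a plot are easy to compute by hand. This is not scikit-plot's internals, just a hand-rolled sketch illustrating the (y_true, y_pred) row/column convention used by sklearn:

```python
def confusion_matrix_2x2(y_true, y_pred):
    # rows = true class, columns = predicted class (sklearn convention)
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
print(confusion_matrix_2x2(y_true, y_pred))  # [[1, 1], [1, 2]]
```

Row 0/column 0 is the true negatives, row 1/column 1 the true positives; normalizing each row by its sum gives the percentage view.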
# ## ROC Curve
_=skplt.plot_roc_curve(y_test, logreg.predict_proba(X_test), figsize = (7,7))
plt.show()
# The ROC Curve illustrates the performance of a binary classifier system as its discrimination threshold is varied. ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.<sup>(1)</sup>
#
# The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one<sup>(1)</sup>
#
# AUC can be used to compare one model against another, as the model with the highest AUC could be considered to be the best model... but... if the model has a very high AUC, it could be overfitted/over-specified. With this being logreg, and since that is usually used to predict probabilities, the classification doesn't need to be perfect!
#
# ROC shows trade-offs between sensitivity and specificity. The middle/diagonal line is the plot of a truly random classifier. Anything left/up from there is better than random, and below/right is worse than random<sup>(2)</sup>
#
# <sub>1. https://en.wikipedia.org/wiki/Receiver_operating_characteristic<br>2. https://classeval.wordpress.com/introduction/introduction-to-the-roc-receiver-operating-characteristics-plot/</sub>
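The probabilistic reading of AUC quoted above can be checked directly: AUC equals the fraction of (positive, negative) pairs in which the positive example receives the higher score, with ties counting half. A small illustrative sketch (pure Python, not the sklearn implementation):

```python
def auc_pairwise(y_true, scores):
    # AUC = P(score of a random positive > score of a random negative)
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc_pairwise(y_true, scores))  # 0.75
```

A perfectly separating scorer gives 1.0, a random one hovers around 0.5, matching the diagonal "random classifier" line on the ROC plot.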
# ## Kolmogorov–Smirnov (KS) Statistic Plot
_=skplt.plot_ks_statistic(y_test, logreg.predict_proba(X_test), figsize=(7,7))
plt.show()
# The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The Kolmogorov–Smirnov test can be modified to serve as a goodness of fit test.<sup>(1)</sup>
#
# The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. Note that the two-sample test checks whether the two data samples come from the same distribution. This does not specify what that common distribution is (e.g. whether it's normal or not normal).<sup>(1)</sup>
#
# The KS statistic is the absolute max distance between the CDFs, and the closer the value is to 0, the more likely it is that the two samples are drawn from the same distribution.
#
# The Kolmogorov–Smirnov statistic, when used to measure the discriminatory power of a scorecard, looks at how the distribution of the score differs between goods and bads. It measures the maximum point of separation between the CDFs of the two distributions: the graph below shows the cumulative distributions of the observed goods and bads, and the K-S statistic is the maximum separation of these CDFs.<sup>(2)</sup>
#
# K-S or Kolmogorov-Smirnov chart measures performance of classification models. The K-S is 100, if the scores partition the population into two separate groups in which one group contains all the positives and the other all the negatives. On the other hand, If the model cannot differentiate between positives and negatives, then it is as if the model selects cases randomly from the population. The K-S would be 0.<sup>(3)</sup>
#
#
# <sub>1. https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test <br>2. https://ecomathcompstatfinance.wordpress.com/2013/10/22/models-measure-giniksrank-ordering/ <br>3.https://www.analyticsvidhya.com/blog/2016/02/7-important-model-evaluation-error-metrics/</sub>
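The "maximum separation between the two CDFs" idea above is short enough to implement by hand — a minimal sketch, not scikit-plot's implementation:

```python
def ks_statistic(pos_scores, neg_scores):
    # max vertical gap between the two empirical CDFs
    best = 0.0
    for x in sorted(set(pos_scores) | set(neg_scores)):
        cdf_pos = sum(s <= x for s in pos_scores) / len(pos_scores)
        cdf_neg = sum(s <= x for s in neg_scores) / len(neg_scores)
        best = max(best, abs(cdf_pos - cdf_neg))
    return best

# perfectly separated score distributions -> KS = 1.0
print(ks_statistic([0.8, 0.9], [0.1, 0.2]))  # 1.0
# heavily overlapping score distributions -> much smaller KS
print(round(ks_statistic([0.2, 0.6, 0.9], [0.1, 0.4, 0.6]), 4))  # 0.3333
```

This is the quantity the KS plot reports as the height of the widest gap between the two cumulative curves.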
# ## Precision-Recall Curve
_=skplt.plot_precision_recall_curve(y_test, logreg.predict_proba(X_test), figsize = (7,7))
plt.show()
# The precision-recall plot is a model-wide measure for evaluating binary classifiers and closely related to the ROC plot. It is easy to compare several classifiers in the precision-recall plot. Curves close to the perfect precision-recall curve have a better performance level than the ones closest to the baseline. In other words, a curve above the other curve has a better performance level. A precision-recall curve can be noisy (a zigzag curve frequently going up and down) for small recall values. Therefore, precision-recall curves tend to cross each other much more frequently than ROC curves, especially for small recall values. Comparisons with multiple classifiers can be difficult if the curves are too noisy. Similar to ROC curves, the AUC (the area under the precision-recall curve) score can be used as a single performance measure for precision-recall curves.<sup>(1)</sup>
#
# The precision-recall curve of a perfect classifier would be a right angle, with the horizontal and vertical lines at 1.0. Randomness would be a horizontal line across the middle of the plot at 0.5
#
# In information retrieval, precision is a measure of result relevancy, while recall is a measure of how many truly relevant results are returned. Recall that Precision = tp / (tp + fp) and Recall = tp / (tp + fn). A high area under the curve represents both high recall and high precision, where high precision relates to a low false positive rate, and high recall relates to a low false negative rate. High scores for both show that the classifier is returning accurate results (high precision), as well as returning a majority of all positive results (high recall).<sup>(2)</sup>
#
# Precision-recall curves are typically used in binary classification to study the output of a classifier. In order to extend Precision-recall curve and average precision to multi-class or multi-label classification, it is necessary to binarize the output. One curve can be drawn per label, but one can also draw a precision-recall curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging).<sup>(2)</sup>
#
# <sub></sub>
#
# <sub>1.https://classeval.wordpress.com/introduction/introduction-to-the-precision-recall-plot<br>2.http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html</sub>
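The precision and recall formulas quoted above can be evaluated at a single score threshold to produce one point on the curve; sweeping the threshold traces the whole curve. A small sketch (pure Python, illustrative only):

```python
def precision_recall(y_true, scores, threshold):
    # one point on the precision-recall curve, at the given score threshold
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < threshold)
    precision = tp / (tp + fp) if tp + fp else 1.0  # convention when nothing is predicted positive
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(precision_recall(y_true, scores, 0.5))  # (1.0, 0.5) -- strict threshold: precise but low recall
print(precision_recall(y_true, scores, 0.3))  # precision 2/3, recall 1.0 -- lenient threshold
```

Lowering the threshold moves right along the recall axis while (typically) giving up precision — exactly the trade-off the plot visualizes.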
# ## Feature Importance Bar Graph w/ Error Bars
# Doesn't work with a logistic regression model, so creating a quick random forest
rf = RandomForestClassifier()
rf.fit(X, y)
skplt.plot_feature_importances(rf, feature_names=X.columns.values, figsize = (10,10))
plt.show()
# Random forests can be used to measure the importance of features as a task in itself<sup>(2)</sup>
#
# The red bars are the feature importances of the forest, along with their inter-trees variability.<sup>(1)</sup>
#
# <sub>1. http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html<br>2. https://www.statistik.uni-dortmund.de/useR-2008/slides/Strobl+Zeileis.pdf</sub>
# ## Silhouette Analysis
kmeans = KMeans(n_clusters=5, random_state=1)
skplt.plot_silhouette(kmeans, X, figsize = (7,7))
plt.show()
kmeans = KMeans(n_clusters=8, random_state=1)
skplt.plot_silhouette(kmeans, X, figsize = (7,7))
plt.show()
# Silhouette analysis can be used to study the separation distance between the resulting clusters. The silhouette plot displays a measure of how close each point in one cluster is to points in the neighboring clusters and thus provides a way to assess parameters like number of clusters visually.<sup>(1)</sup>
#
# Silhouette coefficients (as these values are referred to as) near +1 indicate that the sample is far away from the neighboring clusters. A value of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters and negative values indicate that those samples might have been assigned to the wrong cluster.<sup>(1)</sup>
#
# Also from the thickness of the silhouette plot the cluster size can be visualized.<sup>(1)</sup>
#
# The 8 cluster set would be bad for two reasons:
# 1. The first cluster (0) has a below average score
# 2. There is a wide fluctuation in the size of the silhouette plots
# *Note: the cluster of 4 might be bad as well*
#
# <sub>1. http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html</sub>
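The silhouette coefficient described above is simple to compute for a single point: with a = mean distance to the point's own cluster and b = mean distance to the nearest other cluster, s = (b − a) / max(a, b). A tiny one-dimensional sketch (illustrative, not the sklearn implementation):

```python
def silhouette(point, own_cluster, other_cluster):
    # a = mean distance to the other members of the point's own cluster
    # b = mean distance to the members of the nearest other cluster
    a = sum(abs(point - q) for q in own_cluster) / len(own_cluster)
    b = sum(abs(point - q) for q in other_cluster) / len(other_cluster)
    return (b - a) / max(a, b)

# point 0.0 sits in a tight cluster with 1.0, far from the cluster {10.0, 11.0}
s = silhouette(0.0, [1.0], [10.0, 11.0])
print(round(s, 3))  # 0.905 -- close to +1: well clustered
```

A value near +1 means the point is far from the neighboring cluster; if the point were closer to the other cluster than to its own, the result would go negative, flagging a likely misassignment.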
# ## Elbow Plot
skplt.plot_elbow_curve(kmeans, X, cluster_ranges=range(1, 11), figsize = (10,10))
plt.show()
# The elbow method looks at the percentage of variance explained as a function of the number of clusters: One should choose a number of clusters so that adding another cluster doesn't give much better modeling of the data. More precisely, if one plots the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion". This "elbow" cannot always be unambiguously identified.<sup>(1)</sup>
#
# <sub>1. https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set</sub>
# ## PCA Component Explained Variances
pca = PCA(random_state=1)
pca.fit(X)
skplt.plot_pca_component_variance(pca, figsize = (10,10))
plt.show()
# In simple words, principal component analysis is a method of extracting important variables (in the form of components) from a large set of variables available in a data set. It extracts a low-dimensional set of features from a high-dimensional data set, with the aim of capturing as much information as possible with fewer variables.<sup>(1)</sup>
#
# This plot shows that 1 PCA component explains 88.4% of the variance, and that 2 would explain ~95%, so we could probably just select 1 PCA component to model from.
#
# <sub>1. https://www.analyticsvidhya.com/blog/2016/03/practical-guide-principal-component-analysis-python/</sub>
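The explained-variance ratios shown in the plot are just the eigenvalues of the data's covariance matrix, normalized to sum to 1. A small numpy sketch (assuming only numpy; the toy data is synthetic, with one deliberately dominant feature):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples, 3 features; feature 0 has much larger variance than the rest
X = rng.normal(size=(200, 3)) * np.array([10.0, 2.0, 1.0])

cov = np.cov(X, rowvar=False)              # 3x3 covariance matrix
eigvals = np.linalg.eigvalsh(cov)[::-1]    # eigenvalues, largest first
ratios = eigvals / eigvals.sum()           # explained variance ratio per component

print(round(float(ratios.sum()), 6))       # 1.0 -- the ratios always sum to 1
print(bool(ratios[0] > 0.9))               # True -- the dominant feature drives the first component
```

This is the same quantity `plot_pca_component_variance` accumulates into its curve, which is why a single dominant direction in the data shows up as one component explaining most of the variance.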
# ## PCA 2-D Projection
skplt.plot_pca_2d_projection(pca, X, y, figsize = (10,10))
plt.show()
# Still working on the full interpretation of this...
# # Scikit-Plot Functions on Multi-Class Classification
# ### Principal model is Random Forest, sklearn digits dataset
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
rf= RandomForestClassifier(n_estimators=5, max_depth=5, random_state=1)
rf.fit(X_train, y_train)
# ### Confusion Matrix of our Random Forest Classifier, fitted on our data
_=skplt.plot_confusion_matrix(y_test, rf.predict(X_test), figsize=(10,10), normalize = True)  # (y_true, y_pred) argument order
plt.show()
# This is an easy, visual method to compare predictive accuracy for all of the different classifications. We might be concerned about 5,8, and 9 based on this.
# ## Feature Importance
skplt.plot_feature_importances(rf, figsize = (10,10))
plt.show()
X.shape
# The feature importance graph is a tough one to use here, since there are so many factors, but it can still give an easy look at some of the more important ones.
# ## Silhouette Analysis
kmeans = KMeans(n_clusters=3, random_state=1)
skplt.plot_silhouette(kmeans, X, figsize = (7,7))
plt.show()
# In this case, 3x clusters seems to nicely fit the data where the clusters are of approximately equal size and all above average, but the next plot will show that this is probably not the best model to use
# ## Elbow Plot
skplt.plot_elbow_curve(kmeans, X, cluster_ranges=range(1, 40), figsize = (10,10))
plt.show()
# This might suggest that k-means is not a good model to use in this case, since there is no clearly defined elbow. If forced, I might go with something along the lines of 9-10 clusters.
# ## PCA Component Explained Variances
pca = PCA(random_state=1)
pca.fit(X)
skplt.plot_pca_component_variance(pca, figsize = (10,10))
plt.show()
# Using PCA we could expect to reduce our 64 original factors down to 11 and still be able to explain 76.2% of the variance with those factors. With 40 factors, as opposed to the original 64, we could explain ~99% of the variance
| Scikit-Plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
bias = np.ones(100)
bias
top_x1 = np.random.normal(12,2,10)
top_x1
top_x2 = np.random.normal(10,2,10)
top_x2
top = np.array([top_x1,top_x2])
top
top.T
top_x1.T
top_x1
bottom_x1 = np.random.normal(5,2,10)
bottom_x1
bottom_x2 = np.random.normal(6,2,10)
bottom_x2
bottom = np.array([bottom_x1,bottom_x2])
bottom
bottom.T
import matplotlib.pyplot as plt
_ , ax = plt.subplots(figsize = (4,4))
topT = top.T
ax.scatter(topT[:,0],topT[:,1],color='r')
_
topT
bottomT = bottom.T
ax.scatter(bottomT[:,0],bottomT[:,1],color='b')
_
_
# ## Final Code
n_points = 10
np.random.seed(0) # fix the seed so the random outputs are reproducible
top_x1 = np.random.normal(12,2,n_points) # center 12, standard deviation 2, so most values fall between 10 and 14
top_x2 = np.random.normal(10,2,n_points)
top = np.array([top_x1,top_x2]).T # .T is the transpose: turns the 2x10 array into 10x2
bottom_x1 = np.random.normal(6,2,n_points)
bottom_x2 = np.random.normal(5,2,n_points)
bottom = np.array([bottom_x1,bottom_x2]).T
top
bottom
_ , ax = plt.subplots(figsize = (4,4))
ax.scatter(top[:,0],top[:,1],color='r')
_
ax.scatter(bottom[:,0],bottom[:,1],color='b')
_
ax.scatter(top[:,0],top[:,1],color='r')
ax.scatter(bottom[:,0],bottom[:,1],color='b')
_
| exercises/dot data PLOTTING.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/stephenbeckr/randomized-algorithm-class/blob/master/Demos/demo02_sorts.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="pvT_TyItieGO"
# # Demo 2: sorting
#
# Demo to show the effect of randomized perturbations on the speed of sorting algorithms
#
# APPM 5650 Randomized Algorithms, Fall 2021
# <NAME> (original MATLAB '19, jupyter '21) & <NAME> (Python, '19)
# + [markdown] id="xw3fIt_Ui0Jz"
# ## Subroutines, sorting code
# + id="4CDw01j5izF0"
# -------------------------------------------------------------------------------------- #
import numpy as np # import numpy package
import time as time # import time package
import matplotlib.pyplot as plt # import matplotlib package
# -------------------------------------------------------------------------------------- #
def bubble_sort(x): # sorts x in increasing order
n = len(x)
for iteration in range(n-1, 0, -1):
alreadySorted = True
for j in range(iteration):
if a_less_than_b( x[j+1], x[j] ) == True: # swap them
tmp = x[j+1]
x[j+1] = x[j]
x[j] = tmp
alreadySorted = False
if alreadySorted == True:
break
return x
# -------------------------------------------------------------------------------------- #
def quick_sort(x): # sorts x in increasing order
# quick_sort and supporting code follows example from...
# http://interactivepython.org/runestone/static/pythonds/SortSearch/TheQuickSort.html
quick_sort_r(x,0,len(x)-1)
return x
# -------------------------------------------------------------------------------------- #
def quick_sort_r(x, first, last): # recursive workhorse for quick_sort
if first < last:
split = partition(x, first, last)
quick_sort_r(x, first, split-1)
quick_sort_r(x, split+1, last)
# -------------------------------------------------------------------------------------- #
def partition(x,first,last): # find the split point and move other items
pivotvalue = x[first]
left = first + 1
right = last
done = False
while not done:
while left <= right and x[left] <= pivotvalue:
a_less_than_b.counter = a_less_than_b.counter + 1
left = left + 1
while x[right] >= pivotvalue and right >= left:
a_less_than_b.counter = a_less_than_b.counter + 1
right = right - 1
if right < left:
done = True
else:
temp = x[left]
x[left] = x[right]
x[right] = temp
temp = x[first]
x[first] = x[right]
x[right] = temp
return right
# -------------------------------------------------------------------------------------- #
# This is done analogously to how we did this in Matlab (with a global or "persistent" variable)
# There are surely more pythonic ways to do this; at the very least, you could define a class
def a_less_than_b(a, b):
y = (a < b)
a_less_than_b.counter += 1 # counter property used to track function calls
return y
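The point of the random permutation in the demo below can be seen directly: with a first-element pivot, an already-sorted (or reverse-sorted) list forces the quadratic worst case of n(n−1)/2 comparisons, while a shuffled list stays near 2·n·ln(n). A self-contained sketch with its own comparison counter (separate from the instrumented functions above):

```python
import random

def quicksort_count(xs):
    """First-element-pivot quicksort (Lomuto partition); sorts xs in place
    and returns the number of element comparisons performed."""
    count = 0
    def qs(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        pivot = xs[lo]
        i = lo + 1
        for j in range(lo + 1, hi + 1):
            count += 1
            if xs[j] < pivot:
                xs[i], xs[j] = xs[j], xs[i]
                i += 1
        xs[lo], xs[i - 1] = xs[i - 1], xs[lo]  # move pivot into place
        qs(lo, i - 2)
        qs(i, hi)
    qs(0, len(xs) - 1)
    return count

n = 64
worst = quicksort_count(list(range(n)))   # sorted input: n*(n-1)/2 = 2016 comparisons
random.seed(0)
shuffled = random.sample(range(n), n)
typical = quicksort_count(shuffled)       # far fewer, on the order of 2*n*ln(n)
print(worst, typical)
```

The randomized permutation is exactly what turns an adversarial input into a typical one, which is why the quicksort curve in the plot below drops from the n² line to the n·log(n) line.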
# + [markdown] id="t6HIk7N1jQmj"
# ### and the main demonstration:
# + colab={"base_uri": "https://localhost:8080/"} id="uf7ZTHAAiXj-" outputId="095e8e28-6672-4749-a671-3fca7efedc39"
np.random.seed(seed = 2) # set seed for reproducibility
nList = np.logspace(1, 2.75, num = 10, dtype=np.int64 );
N = len(nList);
nReps = 5
bubble = np.zeros((N,1)); quick = np.zeros((N,1));
bubbleRandom = np.zeros((N,nReps)); quickRandom = np.zeros((N,nReps));
for ni in range(N):
n = nList[ni]
print(ni+1, 'of', N, 'trials')
# Bubble sort
x = np.linspace(n,1,n); # what we want to sort (adversarial choice!)
a_less_than_b.counter = 0
y = bubble_sort(x);
bubble[ni] = a_less_than_b.counter;
# Quick sort
a_less_than_b.counter = 0
#x = np.linspace(1,n,n);
x = np.linspace(n,1,n);
y = quick_sort(x);
quick[ni] = a_less_than_b.counter;
# Now apply random permutations
# (do this a few times and average)
for r in range(nReps):
x = x[np.random.permutation(int(n))]; # np.int is deprecated/removed in newer NumPy
a_less_than_b.counter = 0;
y = bubble_sort(x);
bubbleRandom[ni, r] = a_less_than_b.counter;
        x = x[np.random.permutation(int(n))];
a_less_than_b.counter = 0;
y = quick_sort(x);
quickRandom[ni, r] = a_less_than_b.counter;
# + [markdown] id="BdYDLfOKkBpz"
# ... and plot it ...
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="cQ_B1XSbjILz" outputId="991ce0d9-5eae-435b-f321-293da44b31e4"
fig, ax = plt.subplots(figsize=(12,8))
line1, = ax.loglog(nList, bubble, label='bubblesort', marker='o',markersize=12)
line2, = ax.loglog(nList, quick, label='quicksort' , marker='s',markersize=12)
line3, = ax.loglog(nList, np.mean(quickRandom,1), label='quicksort random', marker=".",markersize=12)
line4, = ax.loglog(nList, np.mean(bubbleRandom,1), label='bubblesort random', marker="*",markersize=12)
line5, = ax.loglog(nList, 1.3*np.log(nList)*nList, '--',label='n log(n)')
line6, = ax.loglog(nList, nList**2/1.5, '--',label='n^2/2')
ax.legend(loc='upper left')
ax.grid(True)
ax.set_xlabel('list length')
ax.set_ylabel('number of comparisons')
plt.title('comparison test: randomized vs. ordered lists')
plt.show()
# + id="nh7GiOuokC31"
| Demos/demo02_sorts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploring the kpot transformer library (universal transformer gpt model)
#
# Some minor adaptations were needed to the kpot library to make it work...
import sys
sys.path.insert(0,'../src')
sys.path.insert(0,'../src/com/github/kpot/keras-transformer')
# +
from keras import regularizers, optimizers, losses
from keras import backend as K
from keras.models import Model
# kpot transformer library
from example.utils import ( load_optimizer_weights, contain_tf_gpu_mem_usage, CosineLRSchedule )
from example.models import ( universal_transformer_gpt_model )
# -
learning_rate = 2e-4
max_seq_length = 384
word_embedding_size = 512
vocabulary_size=16262
number_of_heads = 32
def compile_new_model():
optimizer = optimizers.Adam(
lr=learning_rate, beta_1=0.6, beta_2=0.999)
_model = universal_transformer_gpt_model(
max_seq_length=max_seq_length,
#vocabulary_size=encoder.vocabulary_size(),
vocabulary_size=vocabulary_size,
word_embedding_size=word_embedding_size,
transformer_depth=7,
num_heads=number_of_heads)
_model.compile(
optimizer,
loss=losses.sparse_categorical_crossentropy,
metrics=[perplexity])
return _model
def perplexity(y_true, y_pred):
"""
Popular metric for evaluating language modelling architectures.
More info: http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf
"""
cross_entropy = K.sparse_categorical_crossentropy(y_true, y_pred)
return K.mean(K.exp(K.mean(cross_entropy, axis=-1)))
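The `perplexity` metric above can be sanity-checked with a plain NumPy version of the same formula (a sketch independent of the Keras backend): a model that predicts a uniform distribution over V tokens has per-token cross-entropy log V, so its perplexity is exactly V.

```python
import numpy as np

def perplexity_np(y_true, y_pred):
    # y_true: (batch, seq) integer labels; y_pred: (batch, seq, vocab) probabilities
    probs = np.take_along_axis(y_pred, y_true[..., None], axis=-1)[..., 0]
    cross_entropy = -np.log(probs)
    return float(np.mean(np.exp(np.mean(cross_entropy, axis=-1))))

vocab = 8
uniform_pred = np.full((2, 5, vocab), 1.0 / vocab)
labels = np.zeros((2, 5), dtype=int)
perplexity_np(labels, uniform_pred)  # equals vocab (= 8) for a uniform model
```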
model = compile_new_model()
model.summary()
| ipynb/exploring_kpot_0x01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Detecting and Analyzing Faces
#
#
# Computer vision solutions often require an artificial intelligence (AI) solution that can detect, analyze, or identify human faces. For example, suppose the retail company Northwind Traders has decided to implement a "smart store", in which AI services monitor the store to identify customers who need assistance and direct employees to help them. One way to achieve this is to perform face detection and analysis; that is, determine whether there are any faces in an image, and if so, analyze their features.
#
# 
#
# ## Detect and analyze faces with the Face service
#
# Suppose the smart system Northwind Traders wants is a tool that can detect and analyze customers' facial features. In Microsoft Azure, you can use **Face**, one of the Azure Cognitive Services, to accomplish this.
#
# ### Create a Cognitive Services resource
#
# Let's create a **Cognitive Services** resource in your Azure subscription.
#
# > **Note**: If you already have a Cognitive Services resource, open it from **Recent resources** in the Azure portal and copy its key and endpoint into the cell below. Otherwise, follow the steps below to create one.
#
# 1. Open a new browser tab, go to the Azure portal (https://portal.azure.com), and sign in with your Microsoft account.
# 2. Click the **+ Create a resource** button, search for *Cognitive Services*, and create a **Cognitive Services** resource with the following settings:
#     - **Name**: *Enter a unique name (preferably letters and digits)*.
#     - **Subscription**: *Select your Azure subscription*.
#     - **Location**: *Any available location (Korea Central recommended)*:
#     - **Pricing tier**: Standard S0
#     - **Resource group**: *Any name you prefer (preferably letters and digits)*.
# 3. Wait for the deployment to complete. Then go to your Cognitive Services resource and, on the **Overview** page, click the link to manage the keys for the service. You will need the endpoint and a key to connect to your Cognitive Services resource from client applications.
#
# ### Get the key and endpoint for your Cognitive Services resource
#
# To use your Cognitive Services resource, client applications need its endpoint and authentication key:
#
# 1. In the Azure portal, on the **Keys and Endpoint** page for your Cognitive Services resource, copy **Key1** and paste it over **YOUR_COG_KEY** below.
# 2. Copy the **endpoint** for your resource and paste it over **YOUR_COG_ENDPOINT** below.
# 3. Select the cell below and run its code by clicking the **Run cell** (▷) button to the left of the cell.
# + gather={"logged": 1599693964655}
cog_key = 'YOUR_COG_KEY'
cog_endpoint = 'YOUR_COG_ENDPOINT'
print('Ready to use cognitive services at {} using key {}'.format(cog_endpoint, cog_key))
# -
# To use the Face service in your Cognitive Services resource, you need to install the Azure Cognitive Services Face package.
# + tags=[]
# ! pip install azure-cognitiveservices-vision-face
# -
# Now that you have a Cognitive Services resource and the SDK package installed, you can use the Face service to detect faces of people in the store.
# Run the code cell below to see an example.
# + gather={"logged": 1599693970079}
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from python_code import faces
import os
# %matplotlib inline
# Create a face detection client.
face_client = FaceClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Open an image.
image_path = os.path.join('data', 'face', 'store_cam2.jpg')
image_stream = open(image_path, "rb")
# Detect faces.
detected_faces = face_client.face.detect_with_stream(image=image_stream)
# Display the faces (using code in python_code/faces.py)
faces.show_faces(image_path, detected_faces)
# -
# Each detected face is assigned a unique ID, so your application can distinguish each individual face it detects.
#
# Run the cell below to see the IDs of the faces of some shoppers.
# + gather={"logged": 1599693970447}
# Open an image.
image_path = os.path.join('data', 'face', 'store_cam3.jpg')
image_stream = open(image_path, "rb")
# Detect faces.
detected_faces = face_client.face.detect_with_stream(image=image_stream)
# Display the faces (using code in python_code/faces.py)
faces.show_faces(image_path, detected_faces, show_id=True)
# -
# ## Analyze facial attributes
#
# The Face service can do much more than simply detect faces. It can also analyze facial features and provide estimates of attributes such as age and emotional state. For example, run the code below to analyze the facial attributes of a shopper.
# + gather={"logged": 1599693971321}
# Open an image
image_path = os.path.join('data', 'face', 'store_cam1.jpg')
image_stream = open(image_path, "rb")
# Detect faces and specify the facial attributes to retrieve.
attributes = ['age', 'emotion']
detected_faces = face_client.face.detect_with_stream(image=image_stream, return_face_attributes=attributes)
# Display the faces and their attributes (using code in python_code/faces.py).
faces.show_face_attributes(image_path, detected_faces)
# -
#
# Based on the emotion scores for the customer in the image, you can judge how happy they are with their shopping experience.
#
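The emotion attribute is a set of per-emotion scores that sum to roughly 1, so a simple way to judge a shopper's state is to take the highest-scoring emotion. A sketch operating on a plain dict (the score values are made up, not real Face service output):

```python
# Pick the dominant emotion from a dict of scores.
# The example scores are illustrative values, not real Face service output.
def dominant_emotion(scores):
    return max(scores, key=scores.get)

example_scores = {'happiness': 0.92, 'neutral': 0.06, 'surprise': 0.02}
dominant_emotion(example_scores)  # 'happiness'
```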
# ## Find similar faces
#
# The face IDs assigned to detected faces can be used to compare faces with one another. You can use these IDs to compare a newly detected face with previously detected faces and find faces with similar features.
#
# For example, run the code in the cell below to compare a shopper in one image with shoppers in another image and find a matching face.
# + gather={"logged": 1599693972555}
# Get the ID of the first face in image 1.
image_1_path = os.path.join('data', 'face', 'store_cam3.jpg')
image_1_stream = open(image_1_path, "rb")
image_1_faces = face_client.face.detect_with_stream(image=image_1_stream)
face_1 = image_1_faces[0]
# Get the face IDs in the second image.
image_2_path = os.path.join('data', 'face', 'store_cam2.jpg')
image_2_stream = open(image_2_path, "rb")
image_2_faces = face_client.face.detect_with_stream(image=image_2_stream)
image_2_face_ids = list(map(lambda face: face.face_id, image_2_faces))
# Find faces in image 2 that are similar to the face in image 1.
similar_faces = face_client.face.find_similar(face_id=face_1.face_id, face_ids=image_2_face_ids)
# Display the face from image 1 and similar faces found in image 2 (using code in python_code/faces.py).
faces.show_similar_faces(image_1_path, face_1, image_2_path, image_2_faces, similar_faces)
# -
# ## Facial recognition
#
# So far you've seen that the Face service can detect faces and facial features, and can identify two faces that are similar to one another. You can take things a step further by implementing a *facial recognition* solution, in which you train Face to recognize a specific person's face. This can be useful in a variety of scenarios, such as automatically tagging photographs of friends in a social media application, or using facial recognition as part of a biometric identity verification system.
# To see how this works, let's suppose Northwind Traders wants to use facial recognition to ensure that only authorized employees in the IT department can access secure systems.
#
# We'll start by creating a *person group* to represent the authorized employees.
# + gather={"logged": 1599693973492}
group_id = 'employee_group_id'
try:
    # Delete the group if it already exists.
face_client.person_group.delete(group_id)
except Exception as ex:
print(ex.message)
finally:
face_client.person_group.create(group_id, 'employees')
print ('Group created!')
# -
# Now that the *person group* exists, we can add the employees we want to include, and then register multiple photographs of each one so that the Face service can learn the distinct facial characteristics of each person. Ideally, the images should show the same person in different poses and with different facial expressions.
# We'll add a single employee called Wendell and register several photographs of him.
# + gather={"logged": 1599693976898}
import matplotlib.pyplot as plt
from PIL import Image
import os
# %matplotlib inline
# Add the employee (Wendell) to the group.
wendell = face_client.person_group_person.create(group_id, 'Wendell')
# Get photographs of Wendell
folder = os.path.join('data', 'face', 'wendell')
wendell_pics = os.listdir(folder)
# Register the photographs.
i = 0
fig = plt.figure(figsize=(8, 8))
for pic in wendell_pics:
    # Add each photograph to the person in the employee group.
img_path = os.path.join(folder, pic)
img_stream = open(img_path, "rb")
face_client.person_group_person.add_face_from_stream(group_id, wendell.person_id, img_stream)
    # Display each image.
img = Image.open(img_path)
i +=1
a=fig.add_subplot(1,len(wendell_pics), i)
a.axis('off')
imgplot = plt.imshow(img)
plt.show()
# -
# With the person added and photographs registered, you can now train the Face service to recognize each person.
# + gather={"logged": 1599693977046}
face_client.person_group.train(group_id)
print('Trained!')
# -
# Now you can use the trained model to identify faces in an image.
# + gather={"logged": 1599693994820}
# Get the face IDs in a second image.
image_path = os.path.join('data', 'face', 'employees.jpg')
image_stream = open(image_path, "rb")
image_faces = face_client.face.detect_with_stream(image=image_stream)
image_face_ids = list(map(lambda face: face.face_id, image_faces))
# Get the names of the recognized faces
face_names = {}
recognized_faces = face_client.face.identify(image_face_ids, group_id)
for face in recognized_faces:
person_name = face_client.person_group_person.get(group_id, face.candidates[0].person_id).name
face_names[face.face_id] = person_name
# Display the recognized faces.
faces.show_recognized_faces(image_path, image_faces, face_names)
# -
# ## Learn more
#
# To find out more about the Face cognitive service, see the [Face documentation](https://docs.microsoft.com/azure/cognitive-services/face/).
#
| 01d - Face Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# * The methodology follows the documentation of The Barra China Equity Model (CNE5)
#
# * Set the `DB_URI` environment variable to point to your database
# +
# %matplotlib inline
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import statsmodels.api as sm
from alphamind.api import *
from PyFin.api import *
from alphamind.analysis.crosssetctions import cross_section_analysis
plt.style.use('ggplot')
# +
"""
Back test parameter settings
"""
start_date = '2016-01-01'
end_date = '2018-02-28'
category = 'sw'
level = 1
freq = '20b'
universe = Universe('ashare_ex')
horizon = map_freq(freq)
ref_dates = makeSchedule(start_date, end_date, freq, 'china.sse')
# -
def risk_factor_analysis(factor_name):
data_source = os.environ['DB_URI']
engine = SqlEngine(data_source)
risk_names = list(set(risk_styles).difference({factor_name}))
industry_names = list(set(industry_styles).difference({factor_name}))
constraint_risk = risk_names + industry_names
df = pd.DataFrame(columns=['ret', 'ic', 't.'], dtype=float)
for ref_date in ref_dates:
df.loc[ref_date, :] = cross_section_analysis(ref_date,
factor_name,
universe,
horizon,
constraint_risk,
engine=engine)
df.index = pd.to_datetime(df.index)
return df
candidates_factors = ['BETA', 'SIZE']
# %%time
res = [risk_factor_analysis(factor) for factor in candidates_factors]
# +
df = pd.DataFrame()
for f_name, data in zip(candidates_factors, res):
data['factor'] = f_name
df = df.append(data)
# -
df['abs t.'] = np.abs(df['t.'])
df[['factor', 'abs t.']].groupby('factor').mean().sort_values('abs t.', ascending=False).head()
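The aggregation above averages the absolute t-statistic per factor; the same ranking applied to a toy frame (with made-up numbers, not backtest output) looks like this:

```python
import numpy as np
import pandas as pd

# Toy cross-section results: two factors over three periods (illustrative values).
toy = pd.DataFrame({
    'factor': ['BETA', 'BETA', 'BETA', 'SIZE', 'SIZE', 'SIZE'],
    't.': [2.1, -1.8, 2.5, -3.0, -2.7, -3.4],
})
toy['abs t.'] = np.abs(toy['t.'])
ranking = toy[['factor', 'abs t.']].groupby('factor').mean().sort_values('abs t.', ascending=False)
# SIZE ranks first: its mean |t| is about 3.03 versus about 2.13 for BETA
```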
| notebooks/Example 5 - Style Factor Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.0
# language: julia
# name: julia-1.3
# ---
# # Versatile Hierarchical Dirichlet Processes
#
# The following code is supplemental to the paper:
#
# ### Scalable and Flexible Clustering of Grouped Data via Parallel and Distributed Sampling in Versatile Hierarchical Dirichlet Processes
#
# It contains the paper code, in addition to 4 Jupyter notebooks (running Julia) featuring 4 of the experiments.
# Due to size constraints, not all experiments are included here; however, they can be replicated using this code.
#
# ### How to run this?
#
# You need to install Julia, with IJulia for Jupyter Notebook support:
# https://julialang.org/
#
# To run the code, in addition, you need to install the following packages:
# * StatsBase
# * Distributed
# * DistributedArrays
# * Distributions
# * SpecialFunctions
# * CatViews
# * LinearAlgebra
# * Random
# * NPZ
# * JLD2
# * Clustering
# * PDMats
# * DPMMSubClusters
# * IJulia
# * Images
# * ImageMagick
#
# You can use the following to add them all at once (1 line):
# ```
# ]add StatsBase Distributed DistributedArrays Distributions SpecialFunctions CatViews LinearAlgebra Random NPZ JLD2 Clustering PDMats DPMMSubClusters IJulia Images ImageMagick
# ```
#
# Note that since Julia needs to build packages, it is recommended after installing to run `using <package>` once in order to precompile everything.
#
# After installing the above, you can run any of the attached Notebooks, or create new ones.
#
# For all the paths to work in the distributed setting we use, you need to run `jupyter-notebook` from within the Notebooks directory, or change each process's path manually.
#
# ### Distributed Aspect
#
# The code must have at least 1 additional worker process (using `addprocs(#)`). You can add processes on different machines if you wish; check https://docs.julialang.org/en/v1/stdlib/Distributed/index.html for instructions.
| examples/README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
data = pd.read_csv("Salary_DataSet.csv - Salary_DataSet.csv.csv")
data
x = data.drop('Salary',axis=1) #iloc
x
y = data['Salary']
y
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2)
y.head(10)
print("x_train = ",x_train.shape)
print("y_train = ",y_train.shape)
print("x_test = ",x_test.shape)
print("y_test = ",y_test.shape)
| ML LAB 23-10-2021.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--BOOK_INFORMATION-->
# <img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
#
# *This notebook is an adaptation by <NAME> of Jake VanderPlas's "[Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp)"; both the [original content](https://github.com/jakevdp/WhirlwindTourOfPython) and the [current adaptation](https://github.com/rrgalvan/PythonIntroMasterMatemat) are available on GitHub.*
#
# *The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).*
#
# <!--NAVIGATION-->
# < [Defining and Using Functions](08-Defining-Functions.ipynb) | [Contents](Index.ipynb) | [Iterators](10-Iterators.ipynb) >
# # Errors and Exceptions
# No matter your skill as a programmer, you will eventually make a coding mistake.
# Such mistakes come in three basic flavors:
#
# - *Syntax errors:* Errors where the code is not valid Python (generally easy to fix)
# - *Runtime errors:* Errors where syntactically valid code fails to execute, perhaps due to invalid user input (sometimes easy to fix)
# - *Semantic errors:* Errors in logic: code executes without a problem, but the result is not what you expect (often very difficult to track down and fix)
#
# Here we're going to focus on how to deal cleanly with *runtime errors*.
# As we'll see, Python handles runtime errors via its *exception handling* framework.
# ## Runtime Errors
#
# If you've done any coding in Python, you've likely come across runtime errors.
# They can happen in a lot of ways.
#
# For example, if you try to reference an undefined variable:
print(Q)
# Or if you try an operation that's not defined:
1 + 'abc'
# Or you might be trying to compute a mathematically ill-defined result:
2 / 0
# Or maybe you're trying to access a sequence element that doesn't exist:
L = [1, 2, 3]
L[1000]
# Note that in each case, Python is kind enough to not simply indicate that an error happened, but to spit out a *meaningful* exception that includes information about what exactly went wrong, along with the exact line of code where the error happened.
# Having access to meaningful errors like this is immensely useful when trying to trace the root of problems in your code.
# ## Catching Exceptions: ``try`` and ``except``
# The main tool Python gives you for handling runtime exceptions is the ``try``...``except`` clause.
# Its basic structure is this:
try:
print("this gets executed first")
except:
print("this gets executed only if there is an error")
# Note that the second block here did not get executed: this is because the first block did not return an error.
# Let's put a problematic statement in the ``try`` block and see what happens:
try:
print("let's try something:")
x = 1 / 0 # ZeroDivisionError
except:
print("something bad happened!")
# Here we see that when the error was raised in the ``try`` statement (in this case, a ``ZeroDivisionError``), the error was caught, and the ``except`` statement was executed.
#
# One way this is often used is to check user input within a function or another piece of code.
# For example, we might wish to have a function that catches zero-division and returns some other value, perhaps a suitably large number like $10^{100}$:
def safe_divide(a, b):
try:
return a / b
except:
return 1E100
safe_divide(1, 2)
safe_divide(2, 0)
# There is a subtle problem with this code, though: what happens when another type of exception comes up? For example, this is probably not what we intended:
safe_divide (1, '2')
# Dividing an integer and a string raises a ``TypeError``, which our over-zealous code caught and assumed was a ``ZeroDivisionError``!
# For this reason, it's nearly always a better idea to catch exceptions *explicitly*:
def safe_divide(a, b):
try:
return a / b
except ZeroDivisionError:
return 1E100
safe_divide(1, 0)
safe_divide(1, '2')
# We're now catching zero-division errors only, and letting all other errors pass through un-modified.
# ## Raising Exceptions: ``raise``
# We've seen how valuable it is to have informative exceptions when using parts of the Python language.
# It's equally valuable to make use of informative exceptions within the code you write, so that users of your code (foremost yourself!) can figure out what caused their errors.
#
# The way you raise your own exceptions is with the ``raise`` statement. For example:
raise RuntimeError("my error message")
# As an example of where this might be useful, let's return to our ``fibonacci`` function that we defined previously:
def fibonacci(N):
L = []
a, b = 0, 1
while len(L) < N:
a, b = b, a + b
L.append(a)
return L
# One potential problem here is that the input value could be negative.
# This will not currently cause any error in our function, but we might want to let the user know that a negative ``N`` is not supported.
# Errors stemming from invalid parameter values, by convention, lead to a ``ValueError`` being raised:
def fibonacci(N):
if N < 0:
raise ValueError("N must be non-negative")
L = []
a, b = 0, 1
while len(L) < N:
a, b = b, a + b
L.append(a)
return L
fibonacci(10)
fibonacci(-10)
# Now the user knows exactly why the input is invalid, and could even use a ``try``...``except`` block to handle it!
N = -10
try:
print("trying this...")
print(fibonacci(N))
except ValueError:
print("Bad value: need to do something else")
# ## Diving Deeper into Exceptions
#
# Briefly, I want to mention here some other concepts you might run into.
# I'll not go into detail on these concepts and how and why to use them, but instead simply show you the syntax so you can explore more on your own.
# ### Accessing the error message
#
# Sometimes in a ``try``...``except`` statement, you would like to be able to work with the error message itself.
# This can be done with the ``as`` keyword:
try:
x = 1 / 0
except ZeroDivisionError as err:
print("Error class is: ", type(err))
print("Error message is:", err)
# With this pattern, you can further customize the exception handling of your function.
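For instance, a function can record the message and then re-raise the same exception so that callers still see it (the name `safe_divide_logged` is illustrative):

```python
def safe_divide_logged(a, b, log):
    try:
        return a / b
    except ZeroDivisionError as err:
        log.append(str(err))  # keep the original message for later inspection
        raise                 # re-raise the same exception unchanged

messages = []
try:
    safe_divide_logged(1, 0, messages)
except ZeroDivisionError:
    pass
messages  # ['division by zero']
```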
# ### Defining custom exceptions
# In addition to built-in exceptions, it is possible to define custom exceptions through *class inheritance*.
# For instance, if you want a special kind of ``ValueError``, you can do this:
# +
class MySpecialError(ValueError):
pass
raise MySpecialError("here's the message")
# -
# This would allow you to use a ``try``...``except`` block that only catches this type of error:
try:
print("do something")
raise MySpecialError("[informative error message here]")
except MySpecialError:
print("do something else")
# You might find this useful as you develop more customized code.
# ## ``try``...``except``...``else``...``finally``
# In addition to ``try`` and ``except``, you can use the ``else`` and ``finally`` keywords to further tune your code's handling of exceptions.
# The basic structure is this:
try:
print("try something here")
except:
print("this happens only if it fails")
else:
print("this happens only if it succeeds")
finally:
print("this happens no matter what")
# The utility of ``else`` here is clear, but what's the point of ``finally``?
# Well, the ``finally`` clause really is executed *no matter what*: I usually see it used to do some sort of cleanup after an operation completes.
# **Note**: Below is an example of using exceptions in a "realistic" case: applying Newton's method for different starting values, looking for convergence.
# +
class ErrorConvergencia(ValueError):
pass
def newton(f, df, x0, tolerancia=1.e-15, maxiter=100):
"""
Realiza iteraciones del método de Newton para f(x)=0.
Parámetros:
* f, df: Función y derivada
* x0: Aproximación inicial
* tolerancia: convergencia si |x_k - x_{k+1}|<tolerancia
* maxiter: número máximo de iteraciones
Devuelve: x_{k+1}
Excepciones: ErrorConvergencia si se sobrepasa el máximo de iter.
"""
diferencia = tolerancia
k = 1
while k <= maxiter and diferencia >= tolerancia:
x = x0 - f(x0)/df(x0)
diferencia = abs(x - x0)
x0 = x
k = k+1
if (diferencia>=tolerancia):
        raise ErrorConvergencia("Newton's method did not converge")
else:
return x0
def f(x): return x**4 - 2*x**2 - 1
def df(x): return 4*x**3 - 4*x
for x0 in (1/3, 1/2, 1, 2):
try:
print("* x0=" + str(x0))
x = newton(f, df, x0)
    except ErrorConvergencia:
        print(" Convergence error for x0=" + str(x0))
    except ZeroDivisionError:
        print(" Division by zero (the hypothesis f'(x)!=0 fails)")
    else:
        print(" Solution via Newton's method: x=" + str(x))
# -
# <!--NAVIGATION-->
# < [Defining and Using Functions](08-Defining-Functions.ipynb) | [Contents](Index.ipynb) | [Iterators](10-Iterators.ipynb) >
| 09-Errors-and-Exceptions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
import os, sys, glob
# %load_ext autoreload
# %autoreload 2
import trako as TKO
sys.path.append('MICCAI/')
import runner, sprinter
from prettytable import PrettyTable
DATA_FOLDER = '/home/d/Dropbox/Fan-test-ADHD/'
OUT_FOLDER = DATA_FOLDER
input = '111_reg_reg_outlier_removed_cluster_00007_orig.vtp'
# +
config = {
'name':'default'
}
compressed = input.replace('vtp','_'+config['name']+'.tko')
restored = input.replace('vtp','_'+config['name']+'_restored.vtp')
stats = runner.Runner.tko(DATA_FOLDER, input,
compressed=os.path.join(OUT_FOLDER,compressed),
restored=os.path.join(OUT_FOLDER,restored), config=config, force=True)
i_poly = TKO.Util.loadvtp(os.path.join(DATA_FOLDER, input))
i_scalars = i_poly['scalars']
i_nscalars = i_poly['scalar_names']
i_properties = i_poly['properties']
i_nproperties = i_poly['property_names']
r_poly = TKO.Util.loadvtp(os.path.join(OUT_FOLDER, restored))
r_scalars = r_poly['scalars']
r_properties = r_poly['properties']
for i,j in enumerate(i_nscalars):
if j.startswith('RTOP'):
print(i,j)
stats = TKO.Util.error(i_scalars[i], r_scalars[i])
print('min', stats[0][0])
print('max', stats[0][1])
print('mean', stats[0][2])
print('std', stats[0][3])
# -
originalRTOP1 = np.sort(i_scalars[2])
restoredRTOP1 = np.sort(r_scalars[2])
ax = plt.gca()
ax.set_ylim(0,100)
plt.plot(originalRTOP1, color='blue')
plt.plot(restoredRTOP1, color='red')
bitrates = [11,12,13,14,15,16,17,18,19,20]
for b in bitrates:
config = {
'name':str(b)+'bit',
'RTOP1': {
'position':False,
'sequential':True,
'quantization_bits': b,
'compression_level':1,
'quantization_range':-1,
'quantization_origin':None
}
}
compressed = input.replace('vtp','_'+config['name']+'.tko')
restored = input.replace('vtp','_'+config['name']+'_restored.vtp')
stats = runner.Runner.tko(DATA_FOLDER, input,
compressed=os.path.join(OUT_FOLDER,compressed),
restored=os.path.join(OUT_FOLDER,restored), config=config, verbose=False,force=True)
i_poly = TKO.Util.loadvtp(os.path.join(DATA_FOLDER, input))
i_scalars = i_poly['scalars']
i_nscalars = i_poly['scalar_names']
i_properties = i_poly['properties']
i_nproperties = i_poly['property_names']
r_poly = TKO.Util.loadvtp(os.path.join(OUT_FOLDER, restored))
r_scalars = r_poly['scalars']
r_properties = r_poly['properties']
for i,j in enumerate(i_nscalars):
if j.startswith('RTOP1'):
print(i,j,'bitrate', b)
stats = TKO.Util.error(i_scalars[i], r_scalars[i])
print('min', stats[0][0])
print('max', stats[0][1])
print('mean', stats[0][2])
print('std', stats[0][3])
plt.figure()
originalRTOP1 = np.sort(i_scalars[2])
restoredRTOP1 = np.sort(r_scalars[2])
ax = plt.gca()
ax.set_ylim(0,100)
plt.title('Subject 111, RTOP1, Encoded with Bitrate '+str(b))
plt.plot(originalRTOP1, color='blue')
plt.plot(restoredRTOP1, color='red')
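The trend in the sweep above can be reproduced without trako: a uniform b-bit quantizer over a range R has a maximum round-off error of about R / 2**(b+1), so each extra bit roughly halves the error. A NumPy-only sketch (not the trako encoder):

```python
import numpy as np

def quantize(values, bits):
    # Snap values to 2**bits uniform levels over their range, then map back.
    lo, hi = values.min(), values.max()
    levels = 2 ** bits - 1
    codes = np.round((values - lo) / (hi - lo) * levels)
    return lo + codes / levels * (hi - lo)

x = np.linspace(0.0, 1.0, 1001)
err_11 = np.abs(quantize(x, 11) - x).max()
err_12 = np.abs(quantize(x, 12) - x).max()
# err_12 is about half of err_11
```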
| IPY/FansLatest.ipynb |
-- -*- coding: utf-8 -*-
-- ---
-- jupyter:
-- jupytext:
-- text_representation:
-- extension: .hs
-- format_name: light
-- format_version: '1.5'
-- jupytext_version: 1.14.4
-- kernelspec:
-- display_name: Haskell
-- language: haskell
-- name: haskell
-- ---
-- # Labs 6
-- ## 1) Composing and "applying" functions: functions of the form `a→b` vs. `a→m b` (extended/monadic, Kleisli arrows)
-- **1. Modify the definition**
--
-- **f >.>> g = \x -> g (extractMaybe (f x))**
--
-- ### so that it uses `>^$>` instead of `extractMaybe`, i.e.
--
-- ### f >.>> g = \x -> ___ >^$> ___
(>^$>) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >^$> _ = Nothing
(Just x) >^$> f = f x
infixl 1 >^$>
extractMaybe :: Maybe a -> a
extractMaybe Nothing = error "Nothing inside!"
extractMaybe (Just x) = x
(>.>>) :: (a -> Maybe b) -> (b -> Maybe c) -> (a -> Maybe c)
-- original version: f >.>> g = \x -> g (extractMaybe (f x))
f >.>> g = \x -> f x >^$> g
-- **2. Modify the definition again**
--
-- **f >.>> g = ...**
--
-- **but this time use `fmap` instead of `extractMaybe`, i.e.**
--
-- **f >.>> g = \x -> ___ fmap ___ ___**
--
-- **hint: consider using the helper function**
--
-- **joinMaybe :: Maybe (Maybe a) -> (Maybe a)**
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just Nothing) = Nothing
joinMaybe (Just (Just x)) = Just x
joinMaybe Nothing = Nothing
-- ## 2) Example monads: `Maybe`
-- **3. (For the `Maybe` monad) define (`>=>`) using `>>=`; can this definition be generalized so that it holds for any monad?**
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
f >=> g = \x -> f x >>= g
-- **4. Write the join function for the `Maybe` monad**
maybeJoin :: Maybe (Maybe a) -> Maybe a
maybeJoin (Just (Just x)) = Just x
maybeJoin (Just Nothing) = Nothing
maybeJoin Nothing = Nothing
maybeJoin (Just (Just 12))
maybeJoin (Just Nothing)
-- ## 3) Example monads: `Either e`
-- **5. Write the `join` function for the (`Either e`) monad**
eitherJoin :: Either e (Either e a) -> Either e a
eitherJoin (Right (Right x)) = Right x
eitherJoin (Right (Left er)) = Left er
eitherJoin (Left er) = Left er
eitherJoin (Right (Right 10))
eitherJoin (Left "Error")
-- ## 4) Example monads: `[]`
-- **6. Write the `join` function for the `[]` monad**
joinList :: [[a]] -> [a]
joinList [] = []
joinList (xs:xss) = xs ++ joinList xss
joinList [[1,2,3,4,5]]
joinList [[1..8], [1..3]]
joinList []
| Lab_6/Lab_6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os, sys
sys.path.insert(0, os.path.abspath(os.path.join("..", "..")))
# # Reload the stscan predictions
#
# Shows how to make use of the data produced by the `scripted` stscan script.
# %matplotlib inline
import matplotlib.pyplot as plt
import open_cp.scripted
import open_cp.scripted.analysis as analysis
loaded = open_cp.scripted.Loader("stscan_preds.pic.xz")
loaded.timed_points.time_range
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
analysis.plot_data_scatter(loaded, axes[0])
analysis.plot_data_grid(loaded, axes[1])
next(iter(loaded))
times = [x[1] for x in loaded]
preds = [x[2] for x in loaded]
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
for ax, i in zip(axes, [0, 60]):
analysis.plot_prediction(loaded, preds[i], ax)
ax.set_title(times[i])
# ## Fit binomial model instead
#
# Use a [beta prior](https://en.wikipedia.org/wiki/Conjugate_prior)
betas = analysis.hit_counts_to_beta("stscan_counts.csv")
fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax)
fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax, range(1,21))
# ## What does this difference actually mean??
#
# Suppose we pick 5% coverage. There is a big gap between the curves there.
import collections, statistics, datetime
tps = loaded.timed_points.bin_timestamps(datetime.datetime(2016,1,1), datetime.timedelta(days=1))
c = collections.Counter(tps.timestamps)
statistics.mean(c.values())
# So we have about 5 crime events a day, on average.
# +
import scipy.special
import numpy as np
def BetaBinom(alpha,beta,n,k):
"""http://www.channelgrubb.com/blog/2015/2/27/beta-binomial-in-python"""
part_1 = scipy.special.comb(n,k)
part_2 = scipy.special.betaln(k+alpha,n-k+beta)
part_3 = scipy.special.betaln(alpha,beta)
    result = np.log(part_1) + part_2 - part_3
return np.exp(result)
fig, axes = plt.subplots(ncols=len(betas), figsize=(16,5))
n = 5
for ax, key in zip(axes, betas):
beta = betas[key][5]
p = [BetaBinom(*beta.args,n,k) for k in range(0,n+1)]
ax.bar(np.arange(n+1), p)
ax.set(xlabel="Number of crimes captured", ylabel="Probability")
ax.set_title("{}; {} total events.".format(key, n))
# -
# These plots show the probability of capturing $x$ events out of the 5 total events. This puts the difference in perspective: it's pretty small!
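# As a sanity check on the `BetaBinom` helper above, here is a small self-contained sketch (using illustrative `alpha=2, beta=3` values, not the fitted ones) verifying that the probabilities over $k = 0, \dots, n$ sum to one, and that the expected number of captured events matches the closed form $n\alpha/(\alpha+\beta)$:

```python
import numpy as np
import scipy.special

def beta_binom_pmf(alpha, beta, n, k):
    # log P(k) = log C(n,k) + ln B(k+alpha, n-k+beta) - ln B(alpha, beta)
    log_p = (np.log(scipy.special.comb(n, k))
             + scipy.special.betaln(k + alpha, n - k + beta)
             - scipy.special.betaln(alpha, beta))
    return np.exp(log_p)

alpha, beta, n = 2.0, 3.0, 5  # illustrative, not the fitted values
p = np.array([beta_binom_pmf(alpha, beta, n, k) for k in range(n + 1)])
print(p.sum())                       # 1.0: a valid probability distribution
print((np.arange(n + 1) * p).sum())  # 2.0 = n * alpha / (alpha + beta)
```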
| examples/Scripts/Reload stscan predictions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import the libraries
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
import itertools
import statsmodels.api as sm
import matplotlib
from textwrap import wrap
from matplotlib import ticker
from datetime import datetime
warnings.filterwarnings("ignore")
plt.style.use('fivethirtyeight')
# %matplotlib inline
# -
# ## Importing the data
#
#
#print(os.listdir('../data'))
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
#df = df.set_index('Date', append=False)
#df['Date'] = df.apply(lambda x: datetime.strptime(x['Date'], '%d-%m-%Y').date(), axis=1) #convert the date
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df['Date'] = pd.to_datetime(df['Date'])
df['date_delta'] = (df['Date'] - df['Date'].min()) / np.timedelta64(1,'D')
df.head()
#print(os.listdir('../data'))
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
#df = df.set_index('Date', append=False)
#df['Date'] = df.apply(lambda x: datetime.strptime(x['Date'], '%d-%m-%Y').date(), axis=1) #convert the date
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df.head()
df.describe()
df.describe(include='O')
df.columns
df.shape
# +
#print('Time period start: {}\nTime period end: {}'.format(df.year.min(),df.year.max()))
# -
# ## Visualizing the time series data
#
# We are going to use matplotlib to visualise the dataset.
# +
# Time series data source: fpp package in R.
import matplotlib.pyplot as plt
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
# Draw Plot
def plot_df(df, x, y, title="", xlabel='Date', ylabel='Cases', dpi=100,angle=45):
plt.figure(figsize=(8,4), dpi=dpi)
plt.plot(x, y, color='tab:red')
plt.gca().set(title=title, xlabel=xlabel, ylabel=ylabel)
plt.xticks(rotation=angle)
plt.show()
plot_df(df, x=df.Date, y=df.Cases, title='Daily infections')
# -
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df['Date'].head()
df = df.set_index('Date')
df.index
from pandas import Series
from matplotlib import pyplot
pyplot.figure(figsize=(6,8), dpi= 100)
pyplot.subplot(211)
df.Cases.hist()
pyplot.subplot(212)
df.Cases.plot(kind='kde')
pyplot.show()
# +
from pylab import rcParams
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df = df.set_index('Date')
rcParams['figure.figsize'] = 8,6
decomposition = sm.tsa.seasonal_decompose(df, model='multiplicative', freq=1)
fig = decomposition.plot()
plt.show()
# +
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
x = df['Date'].values
y1 = df['Cases'].values
# Plot
fig, ax = plt.subplots(1, 1, figsize=(6,3), dpi= 120)
plt.fill_between(x, y1=y1, y2=-y1, alpha=0.5, linewidth=2, color='seagreen')
plt.ylim(-50, 50)
plt.title('Daily Infections (Two-Side View)', fontsize=16)
plt.hlines(y=0, xmin=np.min(df.Date), xmax=np.max(df.Date), linewidth=.5)
plt.xticks(rotation=45)
plt.show()
# -
# ## Boxplot of Month-wise, Week-wise, and Day-wise Distributions
#
# You can group the data at different intervals and see how the values are distributed within a given month, week, or day, and how they compare over time.
#
# The boxplots below make the month-wise, week-wise, and day-wise distributions evident.
#
# So far, we have looked for similarities to identify the pattern. Now, how do we find deviations from the usual pattern?
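# One simple, generic way to flag deviations is a rolling z-score: compare each value to the mean and standard deviation of a trailing window and flag points more than a few standard deviations away. A minimal sketch on a synthetic series (not this dataset; the window size and threshold are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
s = pd.Series(rng.normal(10, 1, 200))
s.iloc[150] = 25.0  # inject an obvious anomaly

# z-score of each point relative to a trailing 30-observation window
roll = s.rolling(window=30, min_periods=30)
z = (s - roll.mean()) / roll.std()
anomalies = s[z.abs() > 3]
print(anomalies.index.tolist())  # includes the injected index 150
```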
# +
# Importing the data
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df.reset_index(inplace=True)
# Prepare data
#df['year'] = [d.year for d in df.Date]
df['month'] = [d.strftime('%b') for d in df.Date]
df['day']=df['Date'].dt.day
df['week']=df['Date'].dt.week
months = df['month'].unique()
# Plotting
fig, axes = plt.subplots(3,1, figsize=(8,16), dpi= 80)
sns.boxplot(x='month', y='Cases', data=df, ax=axes[0])
sns.boxplot(x='week', y='Cases', data=df,ax=axes[1])
sns.boxplot(x='day', y='Cases', data=df,ax=axes[2])
axes[0].set_title('Month-wise Box Plot', fontsize=18)
axes[1].set_title('Week-wise Box Plot', fontsize=18)
axes[2].set_title('Day-wise Box Plot', fontsize=18)
plt.show()
# -
# ## Autocorrelation and partial autocorrelation
#
# Autocorrelation measures the relationship between a variable's current value and its past values.
#
# Autocorrelation is simply the correlation of a series with its own lags. If a series is significantly autocorrelated, the previous values of the series (lags) may be helpful in predicting the current value.
#
# Partial autocorrelation conveys similar information, but it isolates the pure correlation of a series with a given lag, excluding the correlation contributions from the intermediate lags.
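# As a minimal illustration of the definition (on a synthetic AR(1) series, not this dataset), the lag-k autocorrelation is just the correlation between the series and a copy of itself shifted by k:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AR(1) series: x(t) = 0.8 * x(t-1) + noise
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

def autocorr(x, lag):
    # Correlation of the series with itself shifted by `lag`
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print(autocorr(x, 1))   # close to the AR coefficient, 0.8
print(autocorr(x, 10))  # decays towards 0 at larger lags
```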
# +
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
pyplot.figure(figsize=(6,8), dpi= 100)
pyplot.subplot(211)
plot_acf(df.Cases, ax=pyplot.gca(), lags = len(df.Cases)-1)
pyplot.subplot(212)
plot_pacf(df.Cases, ax=pyplot.gca(), lags = len(df.Cases)-1)
pyplot.show()
# -
# ## Lag Plots
#
# A Lag plot is a scatter plot of a time series against a lag of itself. It is normally used to check for autocorrelation. If there is any pattern existing in the series like the one you see below, the series is autocorrelated. If there is no such pattern, the series is likely to be random white noise.
# +
from pandas.plotting import lag_plot
plt.rcParams.update({'ytick.left' : False, 'axes.titlepad':10})
# Import
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
# Plot
fig, axes = plt.subplots(1, 4, figsize=(10,3), sharex=True, sharey=True, dpi=100)
for i, ax in enumerate(axes.flatten()[:4]):
lag_plot(df.Cases, lag=i+1, ax=ax, c='firebrick')
ax.set_title('Lag ' + str(i+1))
fig.suptitle('Lag Plots of Daily Cases', y=1.15)
# -
# ## Estimating the forecastability
#
# The more regular and repeatable patterns a time series has, the easier it is to forecast. Since we have a small dataset, we apply sample entropy to examine this. Keep in mind that the higher the sample entropy, the more difficult the series is to forecast.
#
#
#
#
# +
# https://en.wikipedia.org/wiki/Sample_entropy
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
def SampEn(U, m, r):
"""Compute Sample entropy"""
def _maxdist(x_i, x_j):
return max([abs(ua - va) for ua, va in zip(x_i, x_j)])
def _phi(m):
x = [[U[j] for j in range(i, i + m - 1 + 1)] for i in range(N - m + 1)]
C = [len([1 for j in range(len(x)) if i != j and _maxdist(x[i], x[j]) <= r]) for i in range(len(x))]
return sum(C)
N = len(U)
return -np.log(_phi(m+1) / _phi(m))
print(SampEn(df.Cases, m=2, r=0.2*np.std(df.Cases)))
# -
# ### Plotting Rolling Statistics
#
# We observe that the rolling mean and standard deviation are not constant with respect to time (there is an increasing trend).
#
# The time series is hence not stationary.
#
#
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
#Determing rolling statistics
    rolmean = pd.Series(timeseries).rolling(window=12).mean()
    rolstd = pd.Series(timeseries).rolling(window=12).std()
#Plot rolling statistics:
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
#Perform Dickey-Fuller test:
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
# +
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
test_stationarity(df['Cases'])
# -
# The standard deviation and the mean are clearly increasing with time; therefore, this is not a stationary series.
# +
from pylab import rcParams
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df = df.set_index('Date')
ts_log = np.log(df)
plt.plot(ts_log)
# -
# # Remove Trend - Smoothing
n = int(len(df.Cases)/2)
moving_avg = ts_log.rolling(n).mean()
plt.plot(ts_log)
plt.plot(moving_avg, color='red')
ts_log_moving_avg_diff = ts_log.Cases - moving_avg.Cases
ts_log_moving_avg_diff.head(n)
ts_log_moving_avg_diff.dropna(inplace=True)
test_stationarity(ts_log_moving_avg_diff)
expwighted_avg = ts_log.ewm(n).mean()
plt.plot(ts_log)
plt.plot(expwighted_avg, color='red')
ts_log_ewma_diff = ts_log.Cases - expwighted_avg.Cases
test_stationarity(ts_log_ewma_diff)
ts_log_diff = ts_log.Cases - ts_log.Cases.shift()
plt.plot(ts_log_diff)
ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff)
# ## Autoregressive Integrated Moving Average (ARIMA)
#
# In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are labeled p, d, and q.
#
# Number of AR (Auto-Regressive) terms (p): p is the parameter associated with the auto-regressive aspect of the model, which incorporates past values, i.e. lags of the dependent variable. For instance, if p is 5, the predictors for x(t) will be x(t-1)…x(t-5).
#
# Number of Differences (d): d is the parameter associated with the integrated part of the model, which controls the amount of differencing applied to the time series.
#
# Number of MA (Moving Average) terms (q): q is the size of the moving average window of the model, i.e. the number of lagged forecast errors in the prediction equation. For instance, if q is 5, the predictors for x(t) will be e(t-1)…e(t-5), where e(i) is the difference between the moving average at the i-th instant and the actual value.
#
#
#
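# The mechanics of d and p can be sketched in a few lines of numpy (the series here is illustrative): differencing once (d = 1) removes a linear trend, and the AR part then predicts each differenced value from its p previous lags:

```python
import numpy as np

x = np.array([3., 5., 9., 11., 15., 16.])  # illustrative series

# d = 1: first differencing, x(t) - x(t-1)
dx = np.diff(x)
print(dx)  # [2. 4. 2. 4. 1.]

# p = 2: build the lagged predictor matrix for the differenced series
p = 2
lagged = np.column_stack([dx[p - 1 - i:len(dx) - 1 - i] for i in range(p)])
target = dx[p:]
print(lagged)  # each row holds the (lag-1, lag-2) predictors for the matching target
print(target)
```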
# +
# ARMA example
from statsmodels.tsa.arima_model import ARMA
# fit model
model = ARMA(ts_log_diff, order=(2, 1))
model_fit = model.fit(disp=False)
# -
model_fit.summary()
plt.plot(ts_log_diff)
plt.plot(model_fit.fittedvalues, color='red')
plt.title('RSS: %.4f'% np.nansum((model_fit.fittedvalues-ts_log_diff)**2))
ts = df.Cases - df.Cases.shift()
ts.dropna(inplace=True)
pyplot.figure()
pyplot.subplot(211)
plot_acf(ts, ax=pyplot.gca(),lags=n)
pyplot.subplot(212)
plot_pacf(ts, ax=pyplot.gca(),lags=n)
pyplot.show()
# +
#divide into train and validation set
train = df[:int(0.8*(len(df)))]
valid = df[int(0.8*(len(df))):]
#plotting the data
train['Cases'].plot()
valid['Cases'].plot()
# -
#building the model
from pmdarima.arima import auto_arima
model = auto_arima(train, trace=True, error_action='ignore', suppress_warnings=True)
model.fit(train)
# +
forecast = model.predict(n_periods=len(valid))
forecast = pd.DataFrame(forecast,index = valid.index,columns=['Prediction'])
#plot the predictions for validation set
plt.plot(df.Cases, label='Train')
#plt.plot(valid, label='Valid')
plt.plot(forecast, label='Prediction')
plt.show()
# -
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error, median_absolute_error, mean_squared_log_error

def mean_absolute_percentage_error(y, pred):
    # MAPE is not available in older scikit-learn versions, so define it here
    y, pred = np.array(y), np.array(pred)
    return np.mean(np.abs((y - pred) / y)) * 100
def evaluate_forecast(y,pred):
results = pd.DataFrame({'r2_score':r2_score(y, pred),
}, index=[0])
results['mean_absolute_error'] = mean_absolute_error(y, pred)
results['median_absolute_error'] = median_absolute_error(y, pred)
results['mse'] = mean_squared_error(y, pred)
results['msle'] = mean_squared_log_error(y, pred)
results['mape'] = mean_absolute_percentage_error(y, pred)
results['rmse'] = np.sqrt(results['mse'])
return results
evaluate_forecast(valid, forecast)
train.head()
train_prophet = pd.DataFrame()
train_prophet['ds'] = train.index
train_prophet['y'] = train.Cases.values
train_prophet.head()
# +
from fbprophet import Prophet
# instantiate Prophet with yearly seasonality and a multiplicative seasonality mode
model = Prophet( yearly_seasonality=True, seasonality_mode = 'multiplicative')
model.fit(train_prophet) #fit the model with your dataframe
# -
# predict 36 periods into the future; 'MS' (month start) is the frequency
future = model.make_future_dataframe(periods = 36, freq = 'MS')
future.tail()
forecast.columns
# +
# now lets make the forecasts
forecast = model.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
# +
fig = model.plot(forecast)
#plot the predictions for validation set
plt.plot(valid, label='Valid', color = 'red', linewidth = 2)
plt.show()
# -
model.plot_components(forecast);
y_prophet = pd.DataFrame()
y_prophet['ds'] = df.index
y_prophet['y'] = df.Cases.values
y_prophet = y_prophet.set_index('ds')
forecast_prophet = forecast.set_index('ds')
start_index =5
end_index = 15
evaluate_forecast(y_prophet.y[start_index:end_index], forecast_prophet.yhat[start_index:end_index])
# +
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(df['Cases'], order=(2, 1, 2))
results_ARIMA = model.fit(disp=-1)
#plt.plot(ts_log_diff)
plt.plot(results_ARIMA.fittedvalues, color='red')
#plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))
# -
| notebooks/Time series analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AWS Step Functions Data Science SDK - Hello World
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Create steps for your workflow](#Create-steps-for-your-workflow)
# 1. [Define the workflow instance](#Define-the-workflow-instance)
# 1. [Review the Amazon States Language code for your workflow](#Review-the-Amazon-States-Language-code-for-your-workflow)
# 1. [Create the workflow on AWS Step Functions](#Create-the-workflow-on-AWS-Step-Functions)
# 1. [Execute the workflow](#Execute-the-workflow)
# 1. [Review the execution progress](#Review-the-execution-progress)
# 1. [Review the execution history](#Review-the-execution-history)
#
# ## Introduction
#
# This notebook describes using the AWS Step Functions Data Science SDK to create and manage workflows. The Step Functions SDK is an open source library that allows data scientists to easily create and execute machine learning workflows using AWS Step Functions and Amazon SageMaker. For more information, see the following.
# * [AWS Step Functions](https://aws.amazon.com/step-functions/)
# * [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)
# * [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)
#
# In this notebook we will use the SDK to create steps, link them together to create a workflow, and execute the workflow in AWS Step Functions.
import sys
# !{sys.executable} -m pip install --upgrade stepfunctions
# ## Setup
#
# ### Add a policy to your SageMaker role in IAM
#
# **If you are running this notebook on an Amazon SageMaker notebook instance**, the IAM role assumed by your notebook instance needs permission to create and run workflows in AWS Step Functions. To provide this permission to the role, do the following.
#
# 1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/).
# 2. Select **Notebook instances** and choose the name of your notebook instance
# 3. Under **Permissions and encryption** select the role ARN to view the role on the IAM console
# 4. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`.
# 5. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**
#
# If you are running this notebook in a local environment, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
#
# Next, create an execution role in IAM for Step Functions.
#
# ### Create an execution role for Step Functions
#
# You need an execution role so that you can create and execute workflows in Step Functions.
#
# 1. Go to the [IAM console](https://console.aws.amazon.com/iam/)
# 2. Select **Roles** and then **Create role**.
# 3. Under **Choose the service that will use this role** select **Step Functions**
# 4. Choose **Next** until you can enter a **Role name**
# 5. Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**
#
#
# Attach a policy to the role you created. The following steps attach a policy that provides full access to Step Functions; however, as a good practice, you should only provide access to the resources you need.
#
# 1. Under the **Permissions** tab, click **Add inline policy**
# 2. Enter the following in the **JSON** tab
#
# ```json
# {
# "Version": "2012-10-17",
# "Statement": [
# {
# "Effect": "Allow",
# "Action": [
# "sagemaker:CreateTransformJob",
# "sagemaker:DescribeTransformJob",
# "sagemaker:StopTransformJob",
# "sagemaker:CreateTrainingJob",
# "sagemaker:DescribeTrainingJob",
# "sagemaker:StopTrainingJob",
# "sagemaker:CreateHyperParameterTuningJob",
# "sagemaker:DescribeHyperParameterTuningJob",
# "sagemaker:StopHyperParameterTuningJob",
# "sagemaker:CreateModel",
# "sagemaker:CreateEndpointConfig",
# "sagemaker:CreateEndpoint",
# "sagemaker:DeleteEndpointConfig",
# "sagemaker:DeleteEndpoint",
# "sagemaker:UpdateEndpoint",
# "sagemaker:ListTags",
# "lambda:InvokeFunction",
# "sqs:SendMessage",
# "sns:Publish",
# "ecs:RunTask",
# "ecs:StopTask",
# "ecs:DescribeTasks",
# "dynamodb:GetItem",
# "dynamodb:PutItem",
# "dynamodb:UpdateItem",
# "dynamodb:DeleteItem",
# "batch:SubmitJob",
# "batch:DescribeJobs",
# "batch:TerminateJob",
# "glue:StartJobRun",
# "glue:GetJobRun",
# "glue:GetJobRuns",
# "glue:BatchStopJobRun"
# ],
# "Resource": "*"
# },
# {
# "Effect": "Allow",
# "Action": [
# "iam:PassRole"
# ],
# "Resource": "*",
# "Condition": {
# "StringEquals": {
# "iam:PassedToService": "sagemaker.amazonaws.com"
# }
# }
# },
# {
# "Effect": "Allow",
# "Action": [
# "events:PutTargets",
# "events:PutRule",
# "events:DescribeRule"
# ],
# "Resource": [
# "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
# "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
# "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
# "arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
# "arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
# ]
# }
# ]
# }
# ```
#
# 3. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`
# 4. Choose **Create policy**. You will be redirected to the details page for the role.
# 5. Copy the **Role ARN** at the top of the **Summary**
#
# ### Import the required modules from the SDK
# +
import stepfunctions
import logging
from stepfunctions.steps import *
from stepfunctions.workflow import Workflow
stepfunctions.set_stream_logger(level=logging.INFO)
workflow_execution_role = "<execution-role-arn>" # paste the StepFunctionsWorkflowExecutionRole ARN from above
# -
# ## Create basic workflow
#
# In the following cell, you will define the step that you will use in our first workflow. Then you will create, visualize and execute the workflow.
#
# Steps relate to states in AWS Step Functions. For more information, see [States](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html) in the *AWS Step Functions Developer Guide*. For more information on the AWS Step Functions Data Science SDK APIs, see: https://aws-step-functions-data-science-sdk.readthedocs.io.
#
# ### Pass state
#
# A `Pass` state in Step Functions simply passes its input to its output, without performing work. See [Pass](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Pass) in the AWS Step Functions Data Science SDK documentation.
#
start_pass_state = Pass(
state_id="MyPassState"
)
# ### Chain together steps for the basic path
#
# The following cell links together the steps you've created into a sequential group called `basic_path`. We will chain a single step to create our basic path. See [Chain](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Chain) in the AWS Step Functions Data Science SDK documentation.
#
# After chaining together the steps for the basic path, in this case only one step, we will visualize the basic path.
# First we chain the start pass state
basic_path=Chain([start_pass_state])
# ## Define the workflow instance
#
# The following cell defines the [workflow](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow) with the path we just defined.
#
# After defining the workflow, we will render the graph to see what our workflow looks like.
# +
# Next, we define the workflow
basic_workflow = Workflow(
name="MyWorkflow_Simple",
definition=basic_path,
role=workflow_execution_role
)
#Render the workflow
basic_workflow.render_graph()
# -
# ## Review the Amazon States Language code for your workflow
#
# The following renders the JSON of the [Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html) definition of the workflow you created.
print(basic_workflow.definition.to_json(pretty=True))
# ## Create the workflow on AWS Step Functions
#
# Create the workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create).
basic_workflow.create()
# ## Execute the workflow
#
# Run the workflow with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute). Since the workflow only has a pass state, it will succeed immediately.
basic_workflow_execution = basic_workflow.execute()
# ## Review the execution progress
#
# Render workflow progress with the [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress).
#
# This generates a snapshot of the current state of your workflow as it executes. This is a static image. Run the cell again to check progress.
basic_workflow_execution.render_progress()
# ## Review the execution history
#
# Use [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution.
basic_workflow_execution.list_events(html=True)
# ## Create additional steps for your workflow
#
# In the following cells, you will define additional steps that you will use in our workflows. Steps relate to states in AWS Step Functions. For more information, see [States](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html) in the *AWS Step Functions Developer Guide*. For more information on the AWS Step Functions Data Science SDK APIs, see: https://aws-step-functions-data-science-sdk.readthedocs.io.
#
# ### Choice state
#
# A `Choice` state in Step Functions adds branching logic to your workflow. See [Choice](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Choice) in the AWS Step Functions Data Science SDK documentation.
choice_state = Choice(
state_id="Is this Hello World example?"
)
# First create the steps for the "happy path".
# ### Wait state
#
# A `Wait` state in Step Functions waits a specific amount of time. See [Wait](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Wait) in the AWS Step Functions Data Science SDK documentation.
wait_state = Wait(
state_id="Wait for 3 seconds",
seconds=3
)
# ### Parallel state
#
# A `Parallel` state in Step Functions is used to create parallel branches of execution in your workflow. This creates the `Parallel` step and adds two branches: `Hello` and `World`. See [Parallel](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Parallel) in the AWS Step Functions Data Science SDK documentation.
parallel_state = Parallel("MyParallelState")
parallel_state.add_branch(
Pass(state_id="Hello")
)
parallel_state.add_branch(
Pass(state_id="World")
)
# ### Lambda Task state
#
# A `Task` State in Step Functions represents a single unit of work performed by a workflow. Tasks can call Lambda functions and orchestrate other AWS services. See [AWS Service Integrations](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-service-integrations.html) in the *AWS Step Functions Developer Guide*.
#
# #### Create a Lambda function
#
# The Lambda task state in this workflow uses a simple Lambda function **(Python 3.x)** that returns a base64 encoded string. Create the following function in the [Lambda console](https://console.aws.amazon.com/lambda/).
#
# ```python
# import json
# import base64
#
# def lambda_handler(event, context):
# return {
# 'statusCode': 200,
# 'input': event['input'],
# 'output': base64.b64encode(event['input'].encode()).decode('UTF-8')
# }
# ```
# #### Define the Lambda Task state
#
# The following creates a [LambdaStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/compute.html#stepfunctions.steps.compute.LambdaStep) called `lambda_state`, and then configures the options to [Retry](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html#error-handling-retrying-after-an-error) if the Lambda function fails.
# +
lambda_state = LambdaStep(
state_id="Convert HelloWorld to Base64",
parameters={
"FunctionName": "<lambda-function-name>", #replace with the name of the function you created
"Payload": {
"input": "HelloWorld"
}
}
)
lambda_state.add_retry(Retry(
error_equals=["States.TaskFailed"],
interval_seconds=15,
max_attempts=2,
backoff_rate=4.0
))
lambda_state.add_catch(Catch(
error_equals=["States.TaskFailed"],
next_step=Fail("LambdaTaskFailed")
))
# -
# ### Succeed state
#
# A `Succeed` state in Step Functions stops an execution successfully. See [Succeed](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Succeed) in the AWS Step Functions Data Science SDK documentation.
succeed_state = Succeed("HelloWorldSuccessful")
# ### Chain together steps for the happy path
#
# The following cell links together the steps you've created above into a sequential group called `happy_path`. The new path sequentially includes the Wait state, the Parallel state, the Lambda state, and the Succeed state that you created earlier.
#
# After chaining together the steps for the happy path, we will define the workflow and visualize the happy path.
happy_path = Chain([wait_state, parallel_state, lambda_state, succeed_state])
# +
# Next, we define the workflow
happy_workflow = Workflow(
name="MyWorkflow_Happy",
definition=happy_path,
role=workflow_execution_role
)
happy_workflow.render_graph()
# -
# For the sad path, we simply end the workflow using a `Fail` state. See [Fail](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Fail) in the AWS Step Functions Data Science SDK documentation.
sad_state = Fail("HelloWorldFailed")
# ### Choice state
#
# Now, attach branches to the Choice state you created earlier. See *Choice Rules* in the [AWS Step Functions Data Science SDK documentation](https://aws-step-functions-data-science-sdk.readthedocs.io).
choice_state.add_choice(
rule=ChoiceRule.BooleanEquals(variable=start_pass_state.output()["IsHelloWorldExample"], value=True),
next_step=happy_path
)
choice_state.add_choice(
ChoiceRule.BooleanEquals(variable=start_pass_state.output()["IsHelloWorldExample"], value=False),
next_step=sad_state
)
# ## Chain together two steps
#
# In the next cell, you will chain the start_pass_state with the choice_state and define the workflow.
# +
# First we chain the start pass state and the choice state
branching_workflow_definition=Chain([start_pass_state, choice_state])
# Next, we define the workflow
branching_workflow = Workflow(
name="MyWorkflow_v2",
definition=branching_workflow_definition,
role=workflow_execution_role
)
# -
# Review the Amazon States Language code for your workflow
print(branching_workflow.definition.to_json(pretty=True))
# ## Review a visualization for your workflow
#
# The following cell generates a graphical representation of your workflow.
branching_workflow.render_graph(portrait=False)
# ## Create the workflow and execute
#
# In the next cells, we will create the branching happy workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create) and execute it with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute).
#
# Since IsHelloWorldExample is set to True, your execution should follow the happy path.
branching_workflow.create()
branching_workflow_execution = branching_workflow.execute(inputs={
"IsHelloWorldExample": True
})
# ## Review the progress
#
# Review the workflow progress with the [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress).
#
# Review the execution history by calling [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution.
branching_workflow_execution.render_progress()
branching_workflow_execution.list_events(html=True)
| step-functions-data-science-sdk/hello_world_workflow/hello_world_workflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="hello-world"
# # Hello World!
#
# This is a simple markdown page with GitHub code snippets.
# All this will be translated into a Jupyter notebook using the desired language.
# + [markdown] id="example"
# ## Example
#
# Here is a code sample:
# + id="example-code"
# Hello world in Python.
print('Hello from Python!')
# + [markdown] id="example-2"
# </p><table>
# <td>
# <a target="_blank" class="button" href="https://colab.research.google.com/github//davidcavazos/md2ipynb/blob/master/examples/notebooks/hello-world-py.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" width="20px" height="20px"/>
# Run in Colab
# </a>
# </td>
# <td style="padding-left:1em">
# <a target="_blank" class="button" href="https://github.com//davidcavazos/md2ipynb/blob/master/examples/code/hello-world.py">
# <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="20px" height="20px"/>
# View on GitHub
# </a>
# </td>
# </table>
# <br/>
#
# You are all done!
# + [markdown] id="html-section"
# ## HTML Section
#
# <p>HTML is also supported since Markdown is a superset of HTML</p>
# + [markdown] id="what-s-next"
# ## What's next
#
# Check the [README.md](https://github.com/davidcavazos/md2ipynb/blob/master/README.md) for more instructions.
| examples/notebooks/hello-minimal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook can be run on mybinder: [](https://mybinder.org/v2/git/https%3A%2F%2Fgricad-gitlab.univ-grenoble-alpes.fr%2Fchatelaf%2Fml-sicom3a/master?urlpath=lab/tree/notebooks/7_Clustering/N3_Kernel_Kmeans_example/)
# # Kernel Kmeans Example, on a 2 classes problem
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
import scipy.stats as stats
# %matplotlib inline
# -
# ## Create the data set
# +
D1 = np.random.randn(80,) * 0.1 + 1
P1 = np.random.rand(80,) * 2 * np.pi
D2 = np.random.randn(40,) * 0.2
P2 = np.random.rand(40,) * 2 * np.pi
C1 = np.zeros((80, 2))
C1[:, 0] = D1 * np.cos(P1)
C1[:, 1] = D1 * np.sin(P1)
C2 = np.zeros((40, 2))
C2[:, 0] = D2 * np.cos(P2)
C2[:, 1] = D2 * np.sin(P2)
plt.subplot(121)
fig = plt.scatter(C1[:, 0], C1[:, 1], marker="+", color="blue")
fig = plt.scatter(C2[:, 0], C2[:, 1], marker="o", color="red")
plt.axis("equal")
plt.title("theoretical")
X = np.append(C1, C2, axis=0)
plt.subplot(122)
plt.scatter(X[:, 0], X[:, 1])
plt.axis("equal")
plt.title("observed");
# -
# ### Exercise 6
# - Briefly explain why the usual Kmeans algorithm will fail to detect the classes above
# - Is the Kernel approach the only possibility for this kind of clustering problem?
# ### Exercise 7
#
# - Propose a change of representation space to allow successful Kmeans clustering in a 1D space. Implement it (use Kmeans_basic.ipynb example)
# - Explain the role of the parameter 'gamma', then change it in the Kernel Kmeans code below and comment on your findings
# - Compare the initialization of this algorithm with the type of initialization used in the previous studies of Kmeans.
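One possible change of representation for Exercise 7 (our suggestion, not the official solution) is to map each 2-D point to its distance from the origin: the two rings then become two well-separated clusters on the real line, where plain Kmeans succeeds. The sketch below uses a minimal hand-rolled 1-D Lloyd's algorithm on synthetic ring data similar to the data set above.

```python
import numpy as np

# Synthetic two-ring data, mimicking the data set created above.
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 120)
radii = np.concatenate([1 + 0.05 * rng.standard_normal(80),      # outer ring
                        0.1 * np.abs(rng.standard_normal(40))])  # inner blob
X = np.c_[radii * np.cos(phi), radii * np.sin(phi)]

# Change of representation: 2-D point -> distance from the origin (1-D).
r = np.linalg.norm(X, axis=1)

def kmeans_1d(x, k=2, iters=50):
    """Minimal 1-D Lloyd's algorithm: alternate assignment and mean update."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

labels, centers = kmeans_1d(r)
```

In this radial representation the clusters are linearly separable, so no kernel is needed.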
# +
# Kernel computation
N = X.shape[0]
Ker = np.zeros((N, N))
gamma = 5
for i in range(0, N):
for j in range(0, N):
d = np.sum((X[i, :] - X[j, :]) ** 2)
Ker[i, j] = np.exp(-gamma * d)
# Init
import numpy.matlib
converged = 0
# Kernel K-means is sensitive to initial conditions (as is Kmeans). Try altering
# this initialisation to see the effect.
K = 2
Z = np.matlib.repmat(np.array([1, 0]), N, 1)
perm = np.random.permutation(N)[0 : np.intc(N / 2)]
Z[perm, :] = [0, 1]
di = np.zeros((N, K))
count = 0
while converged == 0:
count += 1
Nk = np.sum(Z, axis=0)
converged = 1
for k in range(0, K):
Vk = Z[:, k].reshape(N, 1)
di[:, k] = (
np.diag(Ker)
- (2 / Nk[k]) * np.sum(np.matlib.repmat(Vk.transpose(), N, 1) * Ker, axis=1)
+ (float(Nk[k]) ** (-2))
* np.sum(np.sum((Vk @ Vk.transpose()) * Ker, axis=0), axis=0)
)
oldZ = np.copy(Z)
Z = np.zeros((N, K))
for i in range(0, N):
if di[i, 0] < di[i, 1]:
Z[i, :] = [1, 0]
if Z[i, 0] != oldZ[i, 0]:
converged = 0
else:
Z[i, :] = [0, 1]
if Z[i, 1] != oldZ[i, 1]:
converged = 0
# visu
IndC0 = np.where(Z[:, 0] == 1)[0]
IndC1 = np.where(Z[:, 1] == 1)[0]
plt.scatter(X[IndC0, 0], X[IndC0, 1], color="green", marker="o")
plt.scatter(X[IndC1, 0], X[IndC1, 1], color="cyan", marker="o")
plt.axis("equal")
print("converged in {} iterations".format(count))
# -
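The double loop that builds `Ker` above can also be written with NumPy broadcasting; the sketch below computes all squared pairwise distances at once and produces the same Gaussian (RBF) Gram matrix.

```python
import numpy as np

# Vectorized alternative to the double loop above: all squared pairwise
# distances via broadcasting, then exponentiation.
def rbf_gram(X, gamma=5.0):
    sq_norms = np.sum(X ** 2, axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    np.maximum(d2, 0.0, out=d2)  # clip tiny negatives from round-off
    return np.exp(-gamma * d2)
```

For N = 120 points this replaces 14,400 Python-level iterations with a few array operations.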
| Notebooks/7_Clustering/N3_Kernel_Kmeans_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] colab_type="text" id="I1JiGtmRbLVp"
# ##### Copyright 2018 The TF-Agents Authors.
# + [markdown] colab_type="text" id="b5tItHFpLyXG"
# # Introduction
#
# Reinforcement learning (RL) is a general framework where agents learn to perform actions in an environment so as to maximize a reward. The two main components are the environment, which represents the problem to be solved, and the agent, which represents the learning algorithm.
#
# The agent and environment continuously interact with each other. At each time step, the agent takes an action on the environment based on its *policy* $\pi(a_t|s_t)$, where $s_t$ is the current observation from the environment, and receives a reward $r_{t+1}$ and the next observation $s_{t+1}$ from the environment. The goal is to improve the policy so as to maximize the sum of rewards (return).
#
# Note: It is important to distinguish between the `state` of the environment and the `observation`, which is the part of the environment `state` that the agent can see, e.g. in a poker game, the environment state consists of the cards belonging to all the players and the community cards, but the agent can observe only its own cards and a few community cards. In most literature, these terms are used interchangeably and observation is also denoted as $s$.
#
# 
#
# This is a very general framework and can model a variety of sequential decision making problems such as games, robotics etc.
#
# + [markdown] colab_type="text" id="YQWpFOZyLyjG"
# # The Cartpole Environment
#
# The Cartpole environment is one of the best-known classic reinforcement learning problems (the *"Hello, World!"* of RL). A pole is attached to a cart, which can move along a frictionless track. The pole starts upright and the goal is to prevent it from falling over by controlling the cart.
#
# * The observation from the environment $s_t$ is a 4D vector representing the position and velocity of the cart, and the angle and angular velocity of the pole.
# * The agent can control the system by taking one of 2 actions $a_t$: push the cart right (+1) or left (-1).
# * A reward $r_{t+1} = 1$ is provided for every timestep that the pole remains upright. The episode ends when one of the following is true:
# * the pole tips over some angle limit
# * the cart moves outside of the world edges
# * 200 time steps pass.
#
# The goal of the agent is to learn a policy $\pi(a_t|s_t)$ so as to maximize the sum of rewards in an episode $\sum_{t=0}^{T} \gamma^t r_t$. Here $\gamma$ is a discount factor in $[0, 1]$ that discounts future rewards relative to immediate rewards. This parameter helps us focus the policy, making it care more about obtaining rewards quickly.
#
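The discounted return above can be computed by accumulating rewards backwards; a minimal sketch for the Cartpole case of a constant reward of 1 per step:

```python
# Discounted return G = sum_{t=0}^{T-1} gamma^t * r_t, accumulated backwards.
def discounted_return(rewards, gamma):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Cartpole-style episode: reward 1 at every one of 200 steps.
ret = discounted_return([1.0] * 200, gamma=0.99)
# For constant rewards this is the geometric sum (1 - gamma**200) / (1 - gamma).
```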
# + [markdown] colab_type="text" id="M2hGvsUWLyul"
# # The DQN Agent
#
# The [DQN (Deep Q-Network) algorithm](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) was developed by DeepMind in 2015. It was able to solve a wide range of Atari games (some to superhuman level) by combining reinforcement learning and deep neural networks at scale. The algorithm was developed by enhancing a classic RL algorithm called Q-Learning with deep neural networks and a technique called *experience replay*.
#
# ## Q-Learning
#
# Q-Learning is based on the notion of a Q-function. The Q-function (a.k.a the state-action value function) of a policy $\pi$, $Q^{\pi}(s, a)$, measures the expected return or discounted sum of rewards obtained from state $s$ by taking action $a$ first and following policy $\pi$ thereafter. We define the optimal Q-function $Q^*(s, a)$ as the maximum return that can be obtained starting from observation $s$, taking action $a$ and following the optimal policy thereafter. The optimal Q-function obeys the following *Bellman* optimality equation:
#
# $\begin{equation}
# Q^*(s, a) = \mathbb{E}\left[ r + \gamma \max_{a'} Q^*(s', a')\right]
# \end{equation}$
#
# This means that the maximum return from state $s$ and action $a$ is the sum of the immediate reward $r$ and the return (discounted by $\gamma$) obtained by following the optimal policy thereafter until the end of the episode (i.e., the maximum reward from the next state $s'$). The expectation is computed both over the distribution of immediate rewards $r$ and possible next states $s'$.
#
# The basic idea behind Q-Learning is to use the Bellman optimality equation as an iterative update $Q_{i+1}(s, a) \leftarrow \mathbb{E}\left[ r + \gamma \max_{a'} Q_{i}(s', a')\right]$, and it can be shown that this converges to the optimal $Q$-function, i.e. $Q_i \rightarrow Q^*$ as $i \rightarrow \infty$ (see the [DQN paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf)).
#
#
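The iterative Bellman update above can be sketched as tabular Q-learning on a toy MDP (our own two-state chain, not Cartpole): state 0 with action 1 yields reward 1 and terminates, action 0 stays in state 0 with no reward.

```python
import numpy as np

# Tabular Q-learning sketch on a toy 2-state chain (illustrative MDP):
# in state 0, action 1 -> terminal state 1 with reward 1; action 0 stays put.
def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((2, 2))  # Q[state, action]
    for _ in range(episodes):
        s = 0
        for _ in range(10):  # cap episode length
            # epsilon-greedy behaviour policy
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next, r, done = (1, 1.0, True) if (s == 0 and a == 1) else (0, 0.0, False)
            # Bellman iterative update toward r + gamma * max_a' Q(s', a')
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            if done:
                break
            s = s_next
    return Q

Q = q_learning()  # Q[0, 1] should approach 1.0 (immediate reward, terminal)
```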
# ## Deep Q-Learning
#
# For most problems, it is impractical to represent the $Q$-function as a table containing values for each combination of $s$ and $a$. Instead, we train a function approximator, such as a neural network with parameters $\theta$, to estimate the Q-values, i.e. $Q(s, a; \theta) \approx Q^*(s, a)$. This can be done by minimizing the following loss at each step $i$:
#
# $\begin{equation}L_i(\theta_i) = \mathbb{E}_{s, a, r, s'\sim \rho(.)} \left[ (y_i - Q(s, a; \theta_i))^2 \right]\end{equation}$ where $y_i = r + \gamma \max_{a'} Q(s', a'; \theta_{i-1})$
#
# Here, $y_i$ is called the TD (temporal difference) target, and $y_i - Q$ is called the TD error. $\rho$ represents the behaviour distribution, the distribution over transitions $\{s, a, r, s'\}$ collected from the environment.
#
# Note that the parameters from the previous iteration $\theta_{i-1}$ are fixed and not updated. In practice we use a snapshot of the network parameters from a few iterations ago instead of the last iteration. This copy is called the *target network*.
#
# Q-Learning is an *off-policy* algorithm that learns about the greedy policy $a = \max_{a} Q(s, a; \theta)$ while using a different behaviour policy for acting in the environment/collecting data. This behaviour policy is usually an $\epsilon$-greedy policy that selects the greedy action with probability $1-\epsilon$ and a random action with probability $\epsilon$ to ensure good coverage of the state-action space.
#
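The $\epsilon$-greedy behaviour policy described above fits in a few lines; a minimal sketch (helper names are ours):

```python
import random

# Minimal epsilon-greedy action selection over the Q-values of one state:
# explore uniformly with probability epsilon, otherwise exploit the argmax.
def epsilon_greedy(q_values, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                          # explore
    return max(range(len(q_values)), key=q_values.__getitem__)       # exploit
```

With `epsilon=0` the policy is purely greedy; with `epsilon=1` it is uniform random.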
# ## Experience Replay
#
# To avoid computing the full expectation in the DQN loss, we can minimize it using stochastic gradient descent. If the loss is computed using just the last transition $\{s, a, r, s'\}$, this reduces to standard Q-Learning.
#
# The Atari DQN work introduced a technique called Experience Replay to make the network updates more stable. At each time step of data collection, the transitions are added to a circular buffer called the *replay buffer*. Then during training, instead of using just the latest transition to compute the loss and its gradient, we compute them using a mini-batch of transitions sampled from the replay buffer. This has two advantages: better data efficiency by reusing each transition in many updates, and better stability using uncorrelated transitions in a batch.
#
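A minimal replay-buffer sketch (our illustration, not TF-Agents' implementation): a circular buffer that evicts the oldest transitions at capacity and samples uniform mini-batches.

```python
import random
from collections import deque

# Minimal circular replay buffer: old transitions are evicted once capacity
# is reached; training samples uniform mini-batches of stored transitions.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):          # 150 adds; only the most recent 100 survive
    buf.add(t, 0, 1.0, t + 1)
batch = buf.sample(32)
```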
# + [markdown] colab_type="text" id="iuYYBJUWtvnP"
# # DQN on Cartpole in TF-Agents
#
# TF-Agents provides all the components necessary to train a DQN agent, such as the agent itself, the environment, policies, networks, replay buffers, data collection loops, and metrics. These components are implemented as Python functions or TensorFlow graph ops, and we also have wrappers for converting between them. Additionally, TF-Agents supports TensorFlow 2.0 mode, which enables us to use TF in imperative mode.
#
# Next, take a look at the [tutorial for training a DQN agent on the Cartpole environment using TF-Agents](https://github.com/tensorflow/agents/blob/master/tf_agents/colabs/1_dqn_tutorial.ipynb).
#
#
#
| tf_agents/colabs/0_intro_rl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow_p36]
# language: python
# name: conda-env-tensorflow_p36-py
# ---
# Importing Packages
from utils import train_val_generator
from keras.models import load_model
from keras.preprocessing import image
import matplotlib.pyplot as plt
import shutil
import numpy as np
# +
# Train and Test Paths
train_path = './train'
test_path ='./test'
train_path_sandals = './train_sandals'
test_path_sandals ='./test_sandals'
train_path_boots = './train_boots'
test_path_boots ='./test_boots'
train_path_slippers = './train_slippers'
test_path_slippers ='./test_slippers'
train_path_shoes = './train_shoes'
test_path_shoes ='./test_shoes'
# Train and Test Generators
train_gen, val_gen = train_val_generator(32,train_path,test_path)
train_gen_sandals, val_gen_sandals = train_val_generator(32,train_path_sandals,test_path_sandals)
train_gen_boots, val_gen_boots = train_val_generator(32,train_path_boots,test_path_boots)
train_gen_shoes, val_gen_shoes = train_val_generator(32,train_path_shoes,test_path_shoes)
train_gen_slippers, val_gen_slippers = train_val_generator(32,train_path_slippers,test_path_slippers)
# -
labels_main = (train_gen.class_indices)
labels_main = dict((v,k) for k,v in labels_main.items())
print(labels_main)
labels_shoes = (train_gen_shoes.class_indices)
labels_shoes = dict((v,k) for k,v in labels_shoes.items())
print(labels_shoes)
labels_sandals = (train_gen_sandals.class_indices)
labels_sandals = dict((v,k) for k,v in labels_sandals.items())
print(labels_sandals)
labels_slippers = (train_gen_slippers.class_indices)
labels_slippers = dict((v,k) for k,v in labels_slippers.items())
print(labels_slippers)
labels_boots = (train_gen_boots.class_indices)
labels_boots = dict((v,k) for k,v in labels_boots.items())
print(labels_boots)
# +
def max_pred(arr,model):
model = load_model(model)
preds = model.predict(arr).flatten().tolist()
return preds.index(max(preds))
def predict(pred_dir):
img = image.load_img(pred_dir, target_size=(224, 224))
img_tensor = image.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor /= 255.
k = max_pred(img_tensor,'vgg_19.h5')
predictions = labels_main[k]
if predictions == "Boots":
attribute = labels_boots[max_pred(img_tensor,'vanilla_boots.h5')]
elif predictions == "Sandals":
attribute = labels_sandals[max_pred(img_tensor,'vanilla_sandals.h5')]
elif predictions == "Shoes":
attribute = labels_shoes[max_pred(img_tensor,'vanilla_shoes.h5')]
elif predictions == "Slippers":
attribute = labels_slippers[max_pred(img_tensor,'vanilla_slippers.h5')]
plt.imshow(plt.imread(pred_dir))
return predictions, attribute
# -
predict("./data/Shoes/Heels/<NAME>/8031389.210435.jpg")
| Shoe-Classification/prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="sS5sQetee7vc"
# # Model Pipeline
#
# By: <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
#
# **Can we improve on the baseline scores using different encoding, imputing, and scaling schemes?**
# - Averaged Logistic Regression accuracy Score: 0.5
# - Averaged Linear Regression accuracy score: 0.2045
# - Averaged K-Nearest Neighbour accuracy score: 0.6198
# - Averaged Naive Bayes accuracy score: 0.649
#
# **`p1_tag` ~ `rank` + `total_funding_usd` + `employee_count` (ordinal) + `country` (nominal) + `category_groups` (nominal)**
# + [markdown] id="Lzz5DdXWTUOh"
# ### STEPS FOR CONNECTING TO COLAB
#
# https://www.marktechpost.com/2019/06/07/how-to-connect-google-colab-with-google-drive/
#
# * Upload the .csv files to your google drive
# * Go to the file in google drive, right click on file name, then click on 'Get Link' and it shows the unique id of the file. Copy it and save it in the below code:
# downloaded = drive.CreateFile({'id':"1uWwO-geA8IRNaerjQCk92******"})
# * Replace the id with id of file you want to access
# downloaded.GetContentFile('baseline.csv')
#
#
# ### Enabling GPU settings in COLAB
#
# https://www.tutorialspoint.com/google_colab/google_colab_using_free_gpu.htm
# + colab={"base_uri": "https://localhost:8080/"} id="UKGov2n9gm4i" outputId="cd1932d4-0a3a-4aef-c44f-c26bde01d4d8"
## GCP drive to colab connectivity Code
from google.colab import drive
drive.mount('/content/drive')
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
downloaded = drive.CreateFile({'id':"1uWwO-geA8IRNaerjQCk92KCWDlZM_6Zx"}) # replace the id with id of file you want to access
downloaded.GetContentFile('baseline.csv')
downloaded = drive.CreateFile({'id':"13zLq9t_noAl7RRsLuWmbI_fRe3rE0dpg"}) # replace the id with id of file you want to access
downloaded.GetContentFile('pagerank_df_deg3.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="6OY3uRhuhBs6" outputId="9738bc24-1302-4fa5-b342-ecbae4256890"
#pip install prince
# + id="Zal_raJUhSkB"
#pip install category_encoders
# + id="b1PxJFhiRa5c"
#pip install libsvm
# + id="Q-SWnTdIe7vc"
'''Data analysis'''
import numpy as np
import pandas as pd
import csv
import re
import warnings
import json
import os
import time
import math
import itertools
import statistics
from datetime import datetime
warnings.filterwarnings('ignore')
'''Plotting'''
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
'''Stat'''
import statsmodels.api as sm
from scipy.stats import chi2_contingency
'''ML'''
import prince
import category_encoders as ce
from sklearn import metrics, svm, preprocessing, utils
from sklearn.metrics import mean_squared_error, r2_score, f1_score
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split, GridSearchCV,RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100*(start_mem-end_mem)/start_mem))
return df
# + [markdown] id="B8ym8TNWe7vd"
# ## Reading in data
#
# * Read data
# * Create age feature
# * Impute the na and infinite values
# * One-hot encode countrycode
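The imputation step in the bullets above can be sketched in isolation: replace missing and infinite values in a numeric column with the median of the finite values, mirroring the `SimpleImputer(strategy='median')` call used below (toy data, not the project dataset).

```python
import numpy as np

# Median imputation sketch: replace NaN and +/-inf with the median of the
# finite values, mirroring SimpleImputer(strategy='median') on one column.
def impute_median(col):
    col = np.asarray(col, dtype=float)
    finite = col[np.isfinite(col)]
    fill = np.median(finite)
    out = col.copy()
    out[~np.isfinite(out)] = fill
    return out

impute_median([1.0, np.nan, 3.0, np.inf])  # -> [1., 2., 3., 2.]
```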
# + colab={"base_uri": "https://localhost:8080/"} id="sn6J21X5e7vd" outputId="6d8311e8-7794-427f-c63d-db1912f7dcb0"
# read the baseline dataset
# If you are not using colab replace below with appropriate local file path
# Make sure you add 'founded_on' column in 2_Baseline_Model.ipynb file and
# generate the baseline.csv then read it into Gdrive/colab or your local
# machine
df = pd.read_csv('files/output/baseline.csv',sep=';')
df = df[df.columns.to_list()[61:]]
print("Original DF shape",df.shape)
# create column map
column_map = {'employee_count':'employee_count_ord', 'employee_size':'employee_count', 'category_groups':'category_groups_list', 'country':'country_code'}
df = df.rename(column_map, axis=1)
print('\nStarting Dataframe Columns:\n\n{}\n'.format(df.columns.to_list()))
# read the pagerank dataset
# Merge with page rank data set
# If you are not using colab replace below with appropriate local file path
df_pr = pd.read_csv('files/output/pagerank_df.csv',sep=',')
df_pr['uuid'] = df_pr['__id']
df= pd.merge(df_pr.copy(),df.copy(), how = 'right',on='uuid')
print("Original DF_PR shape after pagerank",df.shape)
# read total_degree
df_td = pd.read_csv('files/output/total_degree.csv',sep=',')
df_td['uuid'] = df_td['__id']
df= pd.merge(df_td.copy(),df.copy(), how = 'right',on='uuid')
print("Original DF_PR shape after total degree",df.shape)
# read in_degree
df_id = pd.read_csv('files/output/in_degree.csv',sep=',')
df_id['uuid'] = df_id['__id']
df= pd.merge(df_id.copy(),df.copy(), how = 'right',on='uuid')
print("Original DF_PR shape after in_degree",df.shape)
# read out_degree
df_od = pd.read_csv('files/output/out_degree.csv',sep=',')
df_od['uuid'] = df_od['__id']
df= pd.merge(df_od.copy(),df.copy(), how = 'right',on='uuid')
print("Original DF_PR shape after out_degree",df.shape)
# read triangle_count
df_tc = pd.read_csv('files/output/triangle_count.csv',sep=',')
df_tc['uuid'] = df_tc['__id']
df= pd.merge(df_tc.copy(),df.copy(), how = 'right',on='uuid')
print("Original DF_PR shape after triangle_count",df.shape)
# read kcore
df_kc = pd.read_csv('files/output/kcore_df.csv',sep=',')
df_kc['uuid'] = df_kc['__id']
df= pd.merge(df_kc.copy(),df.copy(), how = 'right',on='uuid')
print("Original DF_PR shape after kcore",df.shape)
# Have industry mapper for 'ind_1'...'ind_46' columns
industries = ['Software', 'Information Technology', 'Internet Services', 'Data and Analytics',
'Sales and Marketing', 'Media and Entertainment', 'Commerce and Shopping',
'Financial Services', 'Apps', 'Mobile', 'Science and Engineering', 'Hardware',
'Health Care', 'Education', 'Artificial Intelligence', 'Professional Services',
'Design', 'Community and Lifestyle', 'Real Estate', 'Advertising',
'Transportation', 'Consumer Electronics', 'Lending and Investments',
'Sports', 'Travel and Tourism', 'Food and Beverage',
'Content and Publishing', 'Consumer Goods', 'Privacy and Security',
'Video', 'Payments', 'Sustainability', 'Events', 'Manufacturing',
'Clothing and Apparel', 'Administrative Services', 'Music and Audio',
'Messaging and Telecommunications', 'Energy', 'Platforms', 'Gaming',
'Government and Military', 'Biotechnology', 'Navigation and Mapping',
'Agriculture and Farming', 'Natural Resources']
industry_map = {industry:'ind_'+str(idx+1) for idx,industry in enumerate(industries)}
# Age has infinite values, imputing to zero for now
df['founded_on'] = df['founded_on'].fillna(0)
df = df.fillna(0)
################### IMPUTING MISSING NUMERIC VALUES ###################
# Impute numeric variables (total funding amount, rank)
# First impute with a simple method (median), and then follow with regression imputation iteratively
imputer = SimpleImputer(missing_values=np.nan, strategy='median')
variables = ['total_funding_usd','rank']
for i in variables:
df['imp_' + i] = imputer.fit_transform(df[i].values.reshape(-1,1))
# Remove ind columns for now
df_subset = df[['uuid', 'p1_tag', 'rank', 'total_funding_usd', 'employee_count_ord', 'imp_total_funding_usd', 'imp_rank']]
# Linear regression imputation
lin_reg_data = pd.DataFrame(columns = ['lin_' + i for i in variables])
for i in variables:
lin_reg_data['lin_' + i] = df_subset['imp_' + i]
parameters = list(set(df_subset.columns) - set(variables) - {'uuid'} - {'imp_' + i})
    # Run linear regression to impute values
    model = LinearRegression()
    model.fit(X = df_subset[parameters], y = df_subset['imp_' + i])
# Save imputed values
lin_reg_data.loc[df_subset[i].isna(), 'lin_' + i] = model.predict(df_subset[parameters])[df_subset[i].isna()]
# Add linear regression-imputed total_funding_usd and rank to original baseline dataset
df = pd.merge(df,lin_reg_data,how="inner",left_index=True,right_index=True)
df.drop(['imp_total_funding_usd','total_funding_usd','rank','imp_rank'], inplace=True, axis=1)
df = df.rename(columns={'lin_total_funding_usd':'total_funding_usd','lin_rank':'rank'})
################### IMPUTING MISSING INDUSTRY INDICATORS ###################
# Use Logistic Regresion to impute industry binary variables
# First impute all variables with a simple method (set to 0), and then follow with regression imputation iteratively
ind = ['ind_1', 'ind_2', 'ind_3', 'ind_4', 'ind_5', 'ind_6', 'ind_7', 'ind_8', 'ind_9', 'ind_10', 'ind_11', 'ind_12', 'ind_13',
'ind_14', 'ind_15', 'ind_16', 'ind_17', 'ind_18', 'ind_19', 'ind_20', 'ind_21', 'ind_22', 'ind_23', 'ind_24', 'ind_25',
'ind_26', 'ind_27', 'ind_28', 'ind_29', 'ind_30', 'ind_31', 'ind_32', 'ind_33', 'ind_34', 'ind_35', 'ind_36', 'ind_37',
'ind_38', 'ind_39', 'ind_40', 'ind_41', 'ind_42', 'ind_43', 'ind_44', 'ind_45', 'ind_46']
for i in ind:
df['imp_' + i] = df[i].fillna(0)
# Subset to relevant variables for regression
df_subset_2 = df.drop(['employee_count','category_groups_list','country_code'], axis = 1)
num_columns = ['p1_tag','rank','employee_count_ord','total_funding_usd']
# Logistic regression imputation
log_reg_data = pd.DataFrame(columns = ['log_' + i for i in ind])
for i in ind:
log_reg_data['log_' + i] = df_subset_2['imp_' + i]
parameters = list(set(df_subset_2.columns) - set(ind) - {'uuid'} - {'imp_' + i})
    # Run logistic regression to impute values
    model = LogisticRegression()
    model.fit(X = df_subset_2[parameters], y = df_subset_2['imp_' + i])
# Save imputed values
log_reg_data.loc[df_subset_2[i].isna(), 'log_' + i] = model.predict(df_subset_2[parameters])[df_subset_2[i].isna()]
#Add logistic regression-imputed variables to original baseline dataset
df = pd.merge(df, log_reg_data, how="inner",left_index=True,right_index=True)
#Drop original industry columns and columns with basic imputation
imp = ['imp_ind_1', 'imp_ind_2', 'imp_ind_3', 'imp_ind_4', 'imp_ind_5', 'imp_ind_6', 'imp_ind_7', 'imp_ind_8', 'imp_ind_9',
'imp_ind_10', 'imp_ind_11', 'imp_ind_12', 'imp_ind_13', 'imp_ind_14', 'imp_ind_15', 'imp_ind_16', 'imp_ind_17', 'imp_ind_18',
'imp_ind_19', 'imp_ind_20', 'imp_ind_21', 'imp_ind_22', 'imp_ind_23', 'imp_ind_24', 'imp_ind_25', 'imp_ind_26', 'imp_ind_27',
'imp_ind_28', 'imp_ind_29', 'imp_ind_30', 'imp_ind_31', 'imp_ind_32', 'imp_ind_33', 'imp_ind_34', 'imp_ind_35', 'imp_ind_36',
'imp_ind_37', 'imp_ind_38', 'imp_ind_39', 'imp_ind_40', 'imp_ind_41', 'imp_ind_42', 'imp_ind_43', 'imp_ind_44', 'imp_ind_45',
'imp_ind_46']
df.drop(imp, inplace=True, axis=1)
df.drop(ind, inplace=True, axis = 1)
def log_rename(col_name):
if re.match(r"^log_", col_name):
return (col_name[4:])
else:
return col_name
df = df.rename(columns=log_rename)
##############################################################################
# create age feature
print("DF shape before adding age",df.shape)
df['founded_on2'] = pd.to_datetime(df['founded_on'].fillna(0))
today = datetime.today()
diff_y = today.year - df['founded_on2'].dt.year
founded_md = df['founded_on2'].apply(lambda x: (x.month,x.day) )
no_years = founded_md > (today.month,today.day)
df['age'] = diff_y - no_years
print("DF shape after adding age",df.shape)
# Encode country_code using one-hotencoding
df = pd.concat([df,pd.get_dummies(df['country_code'], prefix='country')],axis=1)
df.head(1)
df_simple = df.drop(['employee_count','category_groups_list','uuid','__id_x','__id_y',
'founded_on','founded_on2','country_code'], axis=1)
df_simple = reduce_mem_usage(df_simple)
print('\nEnding Dataframe Columns:\n\n{}'.format(df_simple.columns.to_list()))
print('\nDataframe shape:', df_simple.shape)
del industries, industry_map
# + colab={"base_uri": "https://localhost:8080/"} id="UaDSZ2aje7ve" outputId="8df246b7-5589-41ff-bb88-2b6e309e0887"
## Select equal sample of non-Pledge 1% organizations
df_p1 = df_simple[df_simple['p1_tag']==1]
print(df_p1.shape)
df_notp1 = df_simple[df_simple['p1_tag']==0].sample(n=df_p1.shape[0], replace=True)
df_model = pd.concat([df_p1, df_notp1]).reset_index(drop=True)
df_model = reduce_mem_usage(df_model)
# Create variable for each feature type: categorical and numerical
numeric_features = df_model.select_dtypes(include=['int8', 'int16', 'int32', 'int64', 'float16', 'float32','float64']).drop(['p1_tag'], axis=1).columns
categorical_features = df_model.select_dtypes(include=['object']).columns
print('Numeric features:', numeric_features.to_list())
print('Categorical features:', categorical_features.to_list())
X = df_model.drop('p1_tag', axis=1)
y = df_model['p1_tag']
y = preprocessing.LabelEncoder().fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print('Training data shape:', X_train.shape)
print('Train label shape:', y_train.shape)
print('Test data shape:', X_test.shape)
print('Test label shape:', y_test.shape)
# reset indexes for train and test
X_train= X_train.reset_index(drop=True)
X_test= X_test.reset_index(drop=True)
# + [markdown] id="vel0APzMW2dP"
# ## PCA
#
# * PCA on Country_Code
# * PCA on Industry_Code
# * Merged PCA attributes to original dataset
# * Drop remaining columns
# * Create PCA graphs
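Before the sklearn calls below, it may help to see what `explained_variance_ratio_` actually computes; a sketch using the SVD directly on toy data (sklearn's `PCA` does the same after centering):

```python
import numpy as np

# What PCA's explained_variance_ratio_ computes, via the SVD on toy data:
# squared singular values of the centered matrix, normalized to sum to 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # one ratio per principal component
cumulative = np.cumsum(explained)     # fraction captured by the first k PCs
```

The cumulative curve is what guides the choice of the number of components kept.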
# + [markdown] id="SMgVBynZvSyf"
# #### Country Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 647} id="Hfb6xuWChK3Z" outputId="96f95c5d-8cc0-4803-86d3-2ee6a4fc88ae"
# Perform PCA of country dataset
Country_train_df = X_train.filter(regex='^country',axis=1).fillna(0)
Country_test_df = X_test.filter(regex='^country',axis=1).fillna(0)
# create PCA features for train and test set
pca_Country = PCA()
principalComponents_Country_train = pca_Country.fit_transform(Country_train_df)
# for each item in k, display the explained fraction of variation for first
# k principal components
ratios = pca_Country.explained_variance_ratio_
k = [1,2,3,4,5,10,20,30,40,50]
fraction_list = []
for item in k:
    fraction_list.append(round(ratios[:item].sum(), 2))
# plot the graph
plt.figure(figsize=(10,10))
plt.xticks(fontsize=12)
plt.yticks(fontsize=14)
plt.ylabel('Principal Component Fraction of total variation',fontsize=14)
plt.xlabel('Principal component size',fontsize=14)
plt.title('Company-fraction of total variance vs. number of principal components',
fontsize=16)
plt.plot(k,fraction_list)
# + [markdown] id="qAIgl2Kbu4EK"
# From the graph above, we choose 10 components, since this captures most of the variance without inflating the feature count
# + id="0Eg7brJ3plYY"
# create PCA features for train and test set
n_cty_components=10
pca_Country = PCA(n_components=n_cty_components)
principalComponents_Country_train = pca_Country.fit_transform(Country_train_df)
principalComponents_Country_test = pca_Country.transform(Country_test_df)
# create dataframes from numpy
df_cty_train = pd.DataFrame(principalComponents_Country_train,columns=['Country_'+ str(x) for x in range(n_cty_components)])
df_cty_test = pd.DataFrame(principalComponents_Country_test,columns=['Country_'+ str(x) for x in range(n_cty_components)])
# drop country prefix columns
X_train = X_train.drop(list(X_train.filter(regex='^country_',axis=1).columns), axis=1)
X_test = X_test.drop(list(X_test.filter(regex='^country_',axis=1).columns), axis=1)
# concat with train dataset
X_train = pd.concat([X_train, df_cty_train],axis = 1)
X_test = pd.concat([X_test, df_cty_test],axis = 1)
# + [markdown] id="0k2ghorWvX_i"
# #### Industry Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 647} id="hNCYcDu9gj_z" outputId="38dcead6-df46-444e-d6a4-f4a112b87ef7"
# Perform PCA of industry dataset
industry_train_df = X_train.filter(regex='^ind_',axis=1).fillna(0)
industry_test_df = X_test.filter(regex='^ind_',axis=1).fillna(0)
# create PCA features for train and test set
pca_Industry = PCA()
principalComponents_Industry_train = pca_Industry.fit_transform(industry_train_df)
# for each item in k, display the explained fraction of variation for first
# k principal components
ratios = pca_Industry.explained_variance_ratio_
k = [1,2,3,4,5,10,20,30,40]
fraction_list = []
for item in k:
fraction_list.append(round(ratios[item-1],2))
# plot the graph
plt.figure(figsize=(10,10))
plt.xticks(fontsize=12)
plt.yticks(fontsize=14)
plt.ylabel('Principal Component Fraction of total variation',fontsize=14)
plt.xlabel('Principal component size',fontsize=14)
plt.title('Industry-fraction of total variance vs. number of principal components',
fontsize=16)
plt.plot(k,fraction_list)
# + [markdown] id="J8Sh2xR6vb48"
# From the graph above, we choose 10 components, since that gives the best trade-off between explained variance and feature-set size
# + id="u1szRoh6qwt7"
# create PCA features for train and test set
n_ind_components=10
pca_Industry = PCA(n_components=n_ind_components)
principalComponents_Industry_train = pca_Industry.fit_transform(industry_train_df)
principalComponents_Industry_test = pca_Industry.transform(industry_test_df)
# create dataframes from numpy
df_ind_train = pd.DataFrame(principalComponents_Industry_train,columns=['Ind_'+ str(x) for x in range(n_ind_components)])
df_ind_test = pd.DataFrame(principalComponents_Industry_test,columns=['Ind_'+ str(x) for x in range(n_ind_components)])
# drop industry prefix columns
X_train = X_train.drop(list(X_train.filter(regex='^ind_',axis=1).columns), axis=1)
X_test = X_test.drop(list(X_test.filter(regex='^ind_',axis=1).columns), axis=1)
# concat with train dataset
X_train = pd.concat([X_train, df_ind_train],axis = 1)
X_test = pd.concat([X_test, df_ind_test],axis = 1)
# + [markdown] id="HdaPElEIvohP"
# #### Visualizing PCA for Industry and Country datasets
# + colab={"base_uri": "https://localhost:8080/", "height": 985} id="UYBtgwajQZ8L" outputId="6174a13c-5e59-4e88-e92e-72c577ebe7fa"
# create graphs for PCA analysis for country and industry features
Country_df = X.filter(regex='^country',axis=1).fillna(0)
pca_new_Country = PCA(n_components=10)
Country_df_PCA = pca_new_Country.fit_transform(Country_df)
Industry_df = X.filter(regex='^ind_',axis=1).fillna(0)
pca_new_Industry_df = PCA(n_components=30)
Industry_df_PCA = pca_new_Industry_df.fit_transform(Industry_df)
# The PCA model
fig, axes = plt.subplots(1,2,figsize=(15,15))
colors = ['r','g']
fig.suptitle('PCA Analysis for Country and Industry', fontsize=30)
targets = [1,0]
for target, color in zip(targets,colors):
indexes = np.where(y == target)
axes[0].scatter(Country_df_PCA[indexes][:,0], Country_df_PCA[indexes][:,1],color=color)
axes[0].set_xlabel('PC1')
axes[0].set_ylabel('PC2')
axes[0].set_title('PCA-Country')
axes[1].scatter(Industry_df_PCA[indexes][:,0], Industry_df_PCA[indexes][:,1], color=color)
axes[1].set_xlabel('PC1')
axes[1].set_ylabel('PC2')
axes[1].set_title('PCA-Industry')
plt.axis('tight')
out_labels = ['p1','non-p1']
plt.legend(out_labels,prop={'size':10},loc='upper right',title='Legend of plot')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="f-GjoA3uCx9J" outputId="84902812-2e97-4d1a-8e8a-8323a97b23bf"
print("Final train dataset shape",X_train.shape)
print("\nFinal test dataset shape",X_test.shape)
print('\nTrain Dataframe Columns:\n\n{}'.format(X_train.columns.to_list()))
print('\nTest Dataframe Columns:\n\n{}'.format(X_test.columns.to_list()))
# + [markdown] id="4pIvEj_te7ve"
# ## Run through pipeline
# * Import GPU libraries for SVM and Xgboost
# * Create parameters for experimentation and tuning inputs
# * Use RandomizedSearchCV for pipeline
# * Store result into a .json file
#
# From: <a href='https://towardsdatascience.com/an-easier-way-to-encode-categorical-features-d840ff6b3900'>An Easier Way to Encode Categorical Features</a>
# + [markdown] id="Kli50BKCr92l"
# Feature scaling:
# https://towardsdatascience.com/all-about-feature-scaling-bcc0ad75cb35
# + colab={"base_uri": "https://localhost:8080/"} id="BmnuF44ge7ve" outputId="ac7feb4d-171c-455d-faf1-df090485dad0"
from sklearn import metrics, svm
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier,export_graphviz
import xgboost as xgb
from sklearn.preprocessing import MinMaxScaler,StandardScaler,\
MaxAbsScaler,RobustScaler,QuantileTransformer,PowerTransformer
from libsvm.svmutil import *
from sklearn.decomposition import PCA
results = {}
classifier_list = []
LRR = LogisticRegression(max_iter=10000, tol=0.1)
KNN = KNeighborsClassifier(n_neighbors=5)
BNB = BernoulliNB()
GNB = GaussianNB()
SVM = svm.SVC()
DCT = DecisionTreeClassifier()
XGB = xgb.XGBRegressor() #tree_method='gpu_hist', gpu_id=0
RMF = RandomForestClassifier()
#classifier
classifier_list.append(('LRR', LRR, {'classifier__C': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000]}))
classifier_list.append(('KNN', KNN, {}))
classifier_list.append(('BNB', BNB, {'classifier__alpha': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0]}))
classifier_list.append(('GNB', GNB, {'classifier__var_smoothing': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0]}))
classifier_list.append(('DCT', DCT, {'classifier__max_depth':np.arange(1, 21),
'classifier__min_samples_leaf':[1, 5, 10, 20, 50, 100]}))
classifier_list.append(('XGB', XGB, {}))
classifier_list.append(('RMF', RMF, {}))
classifier_list.append(('SVM', SVM, {}))
encoder_list = [ce.one_hot.OneHotEncoder]
scaler_list = [StandardScaler()]
for label, classifier, params in classifier_list:
results[label] = {}
for encoder in encoder_list:
for feature_scaler in scaler_list:
results[label][f'{encoder.__name__} with {feature_scaler}'] = {}
print('{} with {} and {}'.format(label,encoder.__name__,feature_scaler))
numeric_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('woe', encoder())])
preprocessor = ColumnTransformer(transformers=[('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)])
pipe = Pipeline(steps=[#('preprocessor', preprocessor),
('scaler', feature_scaler),
('classifier', classifier)])
if params != {}:
search = RandomizedSearchCV(pipe, params, n_jobs=-1)
search.fit(X_train, y_train)
print('Best parameter (CV score={:.3f}): {}'.format(search.best_score_, search.best_params_))
model = search.fit(X_train, y_train)
y_pred = model.predict(X_test)
if label == 'XGB':
y_pred = [round(value) for value in y_pred]
score = f1_score(y_test, y_pred,average='weighted')
print('Best score: {:.4f}\n'.format(score))
results[label][f'{encoder.__name__} with {feature_scaler}']['score'] = score
try:
results[label][f'{encoder.__name__} with {feature_scaler}']['best_params'] = search.best_params_
except:
print('Something went wrong w/ GridSearch or pipeline fitting.')
else:
try:
model = pipe.fit(X_train, y_train)
y_pred = model.predict(X_test)
if label == 'XGB':
y_pred = [round(value) for value in y_pred]
score = f1_score(y_test, y_pred,average='weighted')
print('Score: {:.4f}\n'.format(score))
results[label][f'{encoder.__name__} with {feature_scaler}']['score'] = score
except:
print('Something went wrong with pipeline fitting')
# custom JSON encoder to handle numpy int/float and array types when writing the output json
class NpEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
else:
return super(NpEncoder, self).default(obj)
# File is saved under Files directory. /content would be the baseline folder
# You can click on folder icon on left side of the directory structure to
# see the created file
json.dumps(results, cls=NpEncoder)
with open('files/output/results_baseline.json', 'w') as fp:
json.dump(results, fp, sort_keys=True, indent=4, cls=NpEncoder)
with open('files/output/results_baseline.json', 'r') as fp:
results = json.load(fp)
print(results)
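The `NpEncoder` defined above converts numpy scalars and arrays into plain Python types that the standard `json` module can serialize. A self-contained round-trip check (the class is repeated here so the snippet runs on its own):

```python
# Numpy-aware JSON encoder: np.int64/np.float64/ndarray -> int/float/list.
import json
import numpy as np

class NpEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        return super(NpEncoder, self).default(obj)

payload = {'count': np.int64(3), 'score': np.float64(0.5), 'arr': np.arange(3)}
encoded = json.dumps(payload, cls=NpEncoder, sort_keys=True)
```

Without `cls=NpEncoder`, the same `json.dumps` call raises a `TypeError` on the numpy values.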
# + [markdown] id="e_Z3Tv1HMGix"
#
# -
| 5_Model_Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In our research project, we begin with Exploratory Data Analysis (EDA) to visualize the data and explore its attributes.
# For this notebook you need to install librosa, a Python library for audio and music analysis.
#
# +
import os
from os.path import isdir, join
from pathlib import Path
import pandas as pd
# Math
import numpy as np
from scipy.fftpack import fft
from scipy import signal
from scipy.io import wavfile
import librosa
from sklearn.decomposition import PCA
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
import IPython.display as ipd
import librosa.display
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
import pandas as pd
# %matplotlib inline
# -
# In the previous version I hadn't found the librosa API, so I used wavfile to read the .wav files.
# Here we read the .wav file with both librosa and wavfile,
# and later compare the spectrogram and the mel-spectrogram.
train_audio_path = '/Users/lijianxi/Desktop/Apr217390/part1/train/'
filename = 'stop/0ab3b47d_nohash_0.wav'
y, sr = librosa.load(str(train_audio_path) + filename)
sample_rate, samples = wavfile.read(str(train_audio_path) + filename)
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
librosa.display.specshow(mfccs, x_axis='time')
librosa.display.waveplot(y)
def log_specgram(audio, sample_rate, window_size=20,
step_size=10, eps=1e-10):
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
freqs, times, spec = signal.spectrogram(audio,
fs=sample_rate,
window='hann',
nperseg=nperseg,
noverlap=noverlap,
detrend=False)
return freqs, times, np.log(spec.T.astype(np.float32) + eps)
# +
freqs, times, spectrogram = log_specgram(samples, sample_rate)
fig = plt.figure(figsize=(14, 8))
ax1 = fig.add_subplot(311)
ax1.set_title('Raw wave of ' + filename)
ax1.set_ylabel('Amplitude')
ax1.plot(np.linspace(0, len(samples) / sample_rate, num=len(samples)), samples)
ax2 = fig.add_subplot(312)
ax2.imshow(spectrogram.T, aspect='auto', origin='lower',
extent=[times.min(), times.max(), freqs.min(), freqs.max()])
ax2.set_yticks(freqs[::16])
ax2.set_xticks(times[::16])
ax2.set_title('Spectrogram of ' + filename)
ax2.set_ylabel('Freqs in Hz')
ax2.set_xlabel('Seconds')
# -
D = np.abs(librosa.stft(y))**2
S = librosa.feature.melspectrogram(S=D)
# Passing through arguments to the Mel filters
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,fmax=8000)
# Besides the spectrogram, we also visualize the mel-spectrogram
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 4))
librosa.display.specshow(librosa.power_to_db(S,ref=np.max),y_axis='mel', fmax=8000,x_axis='time')
plt.colorbar(format='%+2.0f dB')
plt.title('Mel spectrogram of Yes')
plt.tight_layout()
# According to Wikipedia, the frequency content of human speech lies mostly below 4 kHz;
# so by the Nyquist-Shannon sampling theorem we can set the sampling rate to 8 kHz
# and still keep the detail of human speech.
# https://en.wikipedia.org/wiki/Voice_frequency
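The downsampling step described above can be sketched in isolation: a 1-second signal sampled at 16 kHz is resampled to 8 kHz, halving the number of samples while keeping content below the new 4 kHz Nyquist limit (the rates and the 440 Hz tone here are illustrative):

```python
# Resample a 1-second 440 Hz tone from 16 kHz down to 8 kHz.
import numpy as np
from scipy import signal

old_rate, new_rate = 16000, 8000
t = np.arange(old_rate) / old_rate        # 1 second of sample times
samples = np.sin(2 * np.pi * 440 * t)     # 440 Hz tone, well below 4 kHz
n_out = int(new_rate / old_rate * samples.shape[0])
resampled = signal.resample(samples, n_out)
```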
# +
filename = 'yes/0a7c2a8d_nohash_0.wav'
new_sample_rate = 8000
sample_rate, samples = wavfile.read(str(train_audio_path) + filename)
resampled = signal.resample(samples, int(new_sample_rate/sample_rate * samples.shape[0]))
# -
ipd.Audio(samples, rate=sample_rate)
ipd.Audio(resampled, rate=new_sample_rate)
def custom_fft(y, fs):
T = 1.0 / fs
N = y.shape[0]
yf = fft(y)
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
    vals = 2.0/N * np.abs(yf[0:N//2])  # the FFT is symmetric, so we take just the first half
    # the FFT is also complex-valued, so we take the magnitude (abs)
return xf, vals
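A quick sanity check of the `custom_fft` logic: the spectrum of a pure tone should peak at the tone's frequency. The function is repeated here (using `numpy.fft` instead of `scipy.fftpack`, an assumption for self-containedness) so the snippet runs on its own:

```python
# Verify that custom_fft's peak lands on a known test-tone frequency.
import numpy as np

def custom_fft(y, fs):
    T = 1.0 / fs
    N = y.shape[0]
    yf = np.fft.fft(y)
    xf = np.linspace(0.0, 1.0 / (2.0 * T), N // 2)
    vals = 2.0 / N * np.abs(yf[0:N // 2])  # keep first half, take magnitude
    return xf, vals

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone
xf, vals = custom_fft(tone, fs)
peak_freq = xf[np.argmax(vals)]
```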
xf, vals = custom_fft(samples, sample_rate)
plt.figure(figsize=(12, 4))
plt.title('FFT of recording sampled with ' + str(sample_rate) + ' Hz')
plt.plot(xf, vals)
plt.xlabel('Frequency')
plt.grid()
plt.show()
xf, vals = custom_fft(resampled, new_sample_rate)
plt.figure(figsize=(12, 4))
plt.title('FFT of recording sampled with ' + str(new_sample_rate) + ' Hz')
plt.plot(xf, vals)
plt.xlabel('Frequency')
plt.grid()
plt.show()
# From the two FFTs above, we can see that the human-speech part of the spectrum has been kept.
| Final_Project/LibrosaEDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Dependencies
# +
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# %gui qt
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import os
from pybacmman.selections import saveAndOpenSelection
# -
# # Import Measurements from BACMMAN
# - The next command will read measurements exported from bacmman
dsName = "test_run" # change to the actual name of the experiment
objectClassIdx = 1
folder_path = "~/switchdrive/Biozentrum/Data/Bacmman_i2i/" # change this path so that it points to the folder containing experiment folders
data = pd.read_csv(os.path.join(folder_path, dsName, f"{dsName}_{objectClassIdx}.csv"), sep=';') # 1 is for the object class #1 = bacteria
data
ax = data.GrowthRateArea.hist(bins=100, range=(-0.01, 0.05))
ax.set_xlabel("Growth Rate")
ax.set_title("Bacteria MutH Growth Rate")
# # Import Selections from BACMMAN
# - The next command will read selections exported from bacmman
# - See [here](https://github.com/jeanollion/bacmman/wiki/Selections#create-selections-from-python) on how to make selections
# - Task: make a selection only containing channels with cells
# - Hint: you can see if channels contain cells by selecting `Bacteria` in `Displayed Objects` and then clicking one channel in the right panel
# - To export selections from BACMMAN, select them in the _data browsing_ tab and run the menu command _Run > Extract Selected Selections_
# +
selection = pd.read_csv(os.path.join(folder_path, dsName, f"{dsName}_Selections.csv"), sep=';')
# + tags=[]
selection
# -
# You can use this selection to subset a dataframe: use both _Position_ and _Indices_ columns to identify an object.
# +
from pybacmman.pandas import subsetByDataframe
subset = subsetByDataframe(data, selection, on=["Position", "Indices"])
print(f"number of rows in data: {data.shape[0]}, selection: {selection.shape[0]} subset: {subset.shape[0]}")
subset.head()
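`subsetByDataframe` comes from pybacmman, but the idea can be sketched with plain pandas: keep only the rows of `data` whose `(Position, Indices)` pair also appears in `selection`, via an inner merge. The toy values below are made up for illustration:

```python
# Subset one dataframe by the (Position, Indices) pairs of another.
import pandas as pd

data_demo = pd.DataFrame({
    'Position': ['p0', 'p0', 'p1'],
    'Indices': ['0-0-1', '0-1-1', '0-0-1'],
    'Length': [4.2, 5.1, 3.9],
})
selection_demo = pd.DataFrame({
    'Position': ['p0', 'p1'],
    'Indices': ['0-0-1', '0-0-1'],
})

# inner merge keeps only rows present in both frames on the key columns
subset_demo = data_demo.merge(selection_demo, on=['Position', 'Indices'], how='inner')
```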
# + [markdown] tags=[]
# # Export Selections To BACMMAN
# -
# - The next command will create a subset of data containing only long cells, and save it as a selection directly into BACMMAN.
# - BACMMAN should be open before executing this command.
#
data_subset = data[data.Length>8]
print(f"number of objects: {data_subset.shape[0]}")
saveAndOpenSelection(data_subset, dsName, 1, "long cells", address='127.0.0.1', port=25333, python_proxy_port=25334)
# - If an error occurs, it can be a problem of communication between Java and Python. BACMMAN includes a Python gateway that listens for queries from Python. In the menu Misc > Python Gateway you can set the address, port and python port; they must match those passed as arguments to the _saveAndOpenSelection_ function.
# # Manually edit segmentation and tracking
#
# If needed you can manually edit segmentation and tracking using the Bacmman GUI, see [here](https://github.com/jeanollion/bacmman/wiki/Data-Curation) for instructions. You can also look [at this screencast](https://www.github.com/jeanollion/bacmman/wiki/resources/screencast/manual_correction_dataset2.webm)
# # Using Pandas
#
# Let's look in some more detail into the data structure. We can select the first frame and look at the table
#
# Note: you can remove `.head()` to look at the full table, or you can use `.tail()` to look at the end part
data[data['Frame']==0].head(n=10)
# Now let's look at the last frame, and try to figure out how lineage info is stored and how cells are assigned IDs
data[data['Frame']==49].head(n=10)
# There is quite some info here, but it is a bit obscure:
# - `Position` is the name of the position (image)
# - `PositionIdx` is an integer keeping track of which position you are in
# - `Indices` corresponds to frame_nr - channel_nr - cell-nr
# - `Frame` is frame nr
# - `Idx` is cell nr (1 = mother cells)
# - `Bacteria` lineage keeps track of cell lineage (after each division a letter is added)
#
# Annoyingly there is no field for channel, so let's add it.
#
# > **Exercise**
# >
# > Think about how you could do this
# >
# > Hint: you can use python package [`re`](https://docs.python.org/3/library/re.html#) to extract it from the `Indices` field
# +
#enter code here
import re
ChIdx = np.nan  # TODO: replace with a list/array holding the channel number for each row in data
data['ChannelIdx'] = ChIdx
data.head()
# -
# ### Solution (expand cell to see)
# ```python
# import re
# data['ChannelIdx'] = [int(re.split("-", ind)[1]) for ind in data['Indices']]
# data.head()
# ```
# Now let's look at the mother cell and first offspring in the first channel. Try to understand how lineages are connected.
#
# As you might notice lineages in different channels have the same BacteriaLineage code. Often it is very useful to have a unique lineage id, a number that is constant throughout a cell's life and that only occurs once within the data table. Can you come up with a good idea of how to implement this?
data.loc[(data['ChannelIdx']==0) & (data['Idx']<2) & (data['Frame']<13)]
# To uniquely id a cell linage we need three pieces of info
# - `Position-idx`
# - `Channel-idx`
# - `Bacteria-Lineage`
#
# > **Exercise**
# > Add a unique lineage id to the dataframe
data['LinIdx'] = np.nan  # TODO: combine PositionIdx, ChannelIdx and BacteriaLineage into a unique id
data.head()
# ### Solution (Expand to see)
# ```python
# data['ChannelIdx'] = [int(re.split("-", ind)[1]) for ind in data['Indices']]
# data['LinIdx'] = data['PositionIdx'].map(str) + '-' + data['ChannelIdx'].map(str) + '-' + data['BacteriaLineage'].map(str)
# data.head()
# data.loc[data['LinIdx']=='0-0-A']
# ```
# Slight detour: [pandas dataframes](https://pandas.pydata.org) combined with the [seaborn plotting library](https://seaborn.pydata.org) allow for some powerful data analysis.
#
# Here are two small examples.
#
# First we plot the average life time of cells.
#
# Then we plot the length at birth vs the length at division.
life_time_cell = data.groupby('LinIdx').size()
ax = sns.histplot(life_time_cell)
ax.set_xlabel('life time of cell (in frames)')
# +
df_first_frame = data.groupby('LinIdx').first()
df_last_frame = data.groupby('LinIdx').last()
# df_last_frame includes cells that are lost, we should exclude them
df_last_frame = df_last_frame.loc[(~np.isnan(df_last_frame['NextDivisionFrame']))]
fig, axs = plt.subplots(1,3, figsize=(12,4))
sns.histplot(ax=axs[0], data=df_first_frame, x='Length')
sns.histplot(ax=axs[1], data=df_last_frame, x='Length')
sns.scatterplot(ax=axs[2], x=df_first_frame['Length'], y=df_last_frame['Length'])
titles = ['length at birth', 'length at division', 'length at division vs length at birth']
for idx, title in enumerate(titles): axs[idx].set_title(title)
| Solutions/0_making_selections_sol.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import numpy.random as rng
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import pymc3 as pm
import scipy.stats as stats
from sklearn.preprocessing import StandardScaler
plt.style.use('bmh')
# %matplotlib inline
import theano
theano.config.warn.round=False
# observed data
np.random.seed(123)
n = 11
_a = 6
_b = 2
x = np.linspace(0, 1, n)
y = _a*x + _b + np.random.randn(n)
niter = 10000
with pm.Model() as linreg:
a = pm.Normal('a', mu=0, sd=100)
b = pm.Normal('b', mu=0, sd=100)
sigma = pm.HalfNormal('sigma', sd=1)
y_est = a*x + b
likelihood = pm.Normal('y', mu=y_est, sd=sigma, observed=y)
trace = pm.sample(niter, random_seed=123)
t = trace[niter//2:]
pm.traceplot(trace, varnames=['a', 'b'])
pass
# +
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.scatter(x, y, s=40, label='data')
for a_, b_ in zip(t['a'][-100:], t['b'][-100:]):
ax.plot(x, a_*x + b_, c='black', alpha=0.1)
ax.plot(x, _a*x + _b, label='true regression line', lw=4., c='red')
ax.legend(loc='best')
plt.savefig("bayes-lin-reg.png")
# -
| pymc3/linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Dask: Hello World!
# #### By <NAME>
# -------
#
# While the world's data doubles each year, CPU computing has hit a brick wall with the end of Moore's law. For the same reasons, scientific computing and deep learning have turned to NVIDIA GPU acceleration, and data analytics and machine learning are following suit, since GPU acceleration is ideal for these workloads.
#
# NVIDIA created RAPIDS – an open-source data analytics and machine learning acceleration platform that leverages GPUs to accelerate computations. RAPIDS is based on Python, has pandas-like and Scikit-Learn-like interfaces, is built on Apache Arrow in-memory data format, and can scale from 1 to multi-GPU to multi-nodes. RAPIDS integrates easily into the world’s most popular data science Python-based workflows. RAPIDS accelerates data science end-to-end – from data prep, to machine learning, to deep learning. And through Arrow, Spark users can easily move data into the RAPIDS platform for acceleration.
#
# In this notebook, we will show how to quickly setup Dask and run a "Hello World" example.
#
# **Table of Contents**
#
# * Setup
# * Load Libraries
# * Setup Dask
# * Hello World!
# * Sleeping in Parallel
# * Conclusion
# ## Setup
#
# This notebook was tested using the `rapidsai/rapidsai-dev-nightly:0.10-cuda10.0-devel-ubuntu18.04-py3.7` container from [DockerHub](https://hub.docker.com/r/rapidsai/rapidsai-nightly) and run on the NVIDIA GV100 GPU. Please be aware that your system may be different and you may need to modify the code or install packages to run the below examples.
#
# If you think you have found a bug or an error, please file an issue here: https://github.com/rapidsai/notebooks-contrib/issues
#
# Before we begin, let's check out our hardware setup by running the `nvidia-smi` command.
# !nvidia-smi
# Next, let's see what CUDA version we have:
# !nvcc --version
# ## Load Libraries
#
# Next, let's load some libraries.
import os
import subprocess
import time

try:
    import dask; print('Dask Version:', dask.__version__)
except ModuleNotFoundError:
    os.system('conda install -y dask')
    import dask

from dask.delayed import delayed
from dask.distributed import Client
# ## Setup Dask
#
# Dask is a library that allows for parallelized computing. Written in Python, it allows one to schedule tasks dynamically as well as to handle large data structures - similar to those found in NumPy and Pandas. In the subsequent tutorials, we'll show how to use Dask with Pandas and cuDF and how we can use both to accelerate common ETL tasks as well as build ML models like XGBoost.
#
# To learn more about Dask, check out the documentation here: http://docs.dask.org/en/latest/
#
# Dask operates using a concept of a "Client" and "Workers". The client tells the workers what tasks to perform and when to perform them. Typically, we set the number of workers to be equal to the number of computing resources we have available to us. For example, we might set `n_workers = 8` if we have 8 CPU cores on our machine that can each operate in parallel. This allows us to take advantage of all of our computing resources and enjoy the most benefits from parallelization.
#
# Dask is a first class citizen in the world of General Purpose GPU computing and the RAPIDS ecosystem makes it very easy to use Dask with cuDF and XGBoost. As we see below, we can initiate a Cluster and Client using only a few lines of code.
# +
from dask_cuda import LocalCUDACluster
cluster = LocalCUDACluster()
client = Client(cluster)
# -
# Now, let's show our current Dask status. We should see the IP Address for our Scheduler as well as the number of workers in our Cluster.
# show current Dask status
client
# You can also see the status and more information at the Dashboard, found at `http://<scheduler_uri>/status`. You can ignore this for now, we'll dive into this in subsequent tutorials.
# ## Hello World
#
# Our Dask Client and Dask Workers have been setup. It's time to execute our first program in parallel. We'll define a function that takes some value `x` and adds 5 to it.
def add_x_to_5(x):
return x + 5
# Next, we'll iterate through our `n_workers` and create an execution graph, where each worker is responsible for taking its ID and passing it to the function `add_x_to_5`. For example, Dask Worker 2 will result in the value 7.
#
# An important thing to note is that the Dask Workers aren't actually executing these tasks yet - we're just defining the execution graph for our Dask Client to execute later. The `delayed` function wrapper ensures that this computation is in fact "delayed" and not executed on the spot.
n_workers = 4
results_delayed = [delayed(add_x_to_5)(i) for i in range(n_workers)]
results_delayed
# We'll use the Dask Client to compute the results.
results = client.compute(results_delayed, optimize_graph=False, fifo_timeout="0ms")
time.sleep(1) # this will give Dask time to execute each worker
results
# Note that the results are not the "actual results" of adding 5 to each of `[0, 1, 2, 3]` - we need to collect and print the results. We can do so by calling the `result()` method for each of our results.
print([result.result() for result in results])
# ## Sleeping in Parallel
#
# To see that Dask is truly executing in parallel, we'll define a function that sleeps for 1 second and returns the string "Success!". In serial, this function will take 4 seconds to execute.
def sleep_1():
time.sleep(1)
return 'Success!'
# +
# %%time
for _ in range(n_workers):
sleep_1()
# -
# Using Dask, we see that this whole process takes a little over a second - each worker is executing in parallel!
# +
# %%time
# define delayed execution graph
results_delayed = [delayed(sleep_1)() for _ in range(n_workers)]
# use client to perform computations using execution graph
results = client.compute(results_delayed, optimize_graph=False, fifo_timeout="0ms")
# collect and print results
print([result.result() for result in results])
# -
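As a point of comparison (this is a stdlib analogy, not part of Dask or RAPIDS), the same fan-out/collect pattern can be sketched with `concurrent.futures`: submit the sleeps to a thread pool, then gather the results. Total wall time is roughly one sleep rather than one per task:

```python
# Stdlib analogy to the Dask fan-out above: parallel sleeps via a thread pool.
import time
from concurrent.futures import ThreadPoolExecutor

def sleep_briefly():
    time.sleep(0.25)
    return 'Success!'

n_workers = 4
start = time.time()
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    futures = [pool.submit(sleep_briefly) for _ in range(n_workers)]
    results = [f.result() for f in futures]  # analogous to collecting Dask futures
elapsed = time.time() - start
```

Dask's `client.compute` plays the role of `pool.submit` here, with the added ability to schedule whole task graphs across processes and machines.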
# ## Conclusion
#
# To learn more about RAPIDS, be sure to check out:
#
# * [Open Source Website](http://rapids.ai)
# * [GitHub](https://github.com/rapidsai/)
# * [Press Release](https://nvidianews.nvidia.com/news/nvidia-introduces-rapids-open-source-gpu-acceleration-platform-for-large-scale-data-analytics-and-machine-learning)
# * [NVIDIA Blog](https://blogs.nvidia.com/blog/2018/10/10/rapids-data-science-open-source-community/)
# * [Developer Blog](https://devblogs.nvidia.com/gpu-accelerated-analytics-rapids/)
# * [NVIDIA Data Science Webpage](https://www.nvidia.com/en-us/deep-learning-ai/solutions/data-science/)
#
| getting_started_notebooks/basics/Dask_Hello_World.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import findspark
findspark.init()
from pyspark import SparkContext, SparkConf,SQLContext,HiveContext
from pyspark.sql import SparkSession,DataFrame,Column,Row,GroupedData,DataFrameNaFunctions
from pyspark.sql.functions import lit,concat,col,concat_ws
from pyspark.sql import functions as sf
#from pyspark.sql import functions as sf
#from local_lib.PSPARK_DF_245 import selectExpr
import argparse
from IPython.display import clear_output
import sched, time
import os
import pandas as pd
import numpy as np
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
pd.options.display.float_format = '{:.3f}'.format
# +
try:
sc.stop()
except:
pass
conf = SparkConf().setAppName("PY_DFF").setMaster("local[*]").set("spark.executor.memory", "4g")
sc = SparkContext(conf = conf)
spark = SparkSession.builder \
.master("local[*]") \
.appName("SPARK_APPP") \
.config("spark.executor.memory", "4g") \
.getOrCreate()
hc = HiveContext(sc)
sqlContext = SQLContext(sc)
# +
import pyspark
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars ./ojdbc7.jar pyspark-shell'
spark.sparkContext.addPyFile("./ojdbc7.jar")
# -
## initialize the variables
ip = 'aiorc.cqgrwtaxaib2.us-east-1.rds.amazonaws.com' ## host of the oracle RDS instance
port = 1521 ## port of the oracle RDS instance
SID = 'ORCL' ## Database of the oracle RDS instance
username = 'B4B4KE4'
password = '<PASSWORD>(('
Data_table="IRIS"
JdbcOracleUrl = 'jdbc:oracle:thin:{USER_NAME}/{PASSWORD}@//{URL}:{PORT}/{SID}'.format(USER_NAME=username,PASSWORD=password,URL=ip,PORT=port,SID=SID)
TABLEDF = spark.read.format("jdbc").option("url",JdbcOracleUrl)\
.option("dbtable",Data_table).option("user",username)\
.option("password",password)\
.option("driver","oracle.jdbc.driver.OracleDriver")\
.load()
TABLEDF.show(10)
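The Oracle thin-driver URL built above follows the pattern `jdbc:oracle:thin:<user>/<password>@//<host>:<port>/<SID>`. A quick self-contained check of that formatting, with placeholder credentials (the real ones are redacted in this notebook):

```python
# Build and verify an Oracle thin-driver JDBC URL from its parts.
def oracle_jdbc_url(user, password, host, port, sid):
    return 'jdbc:oracle:thin:{USER_NAME}/{PASSWORD}@//{URL}:{PORT}/{SID}'.format(
        USER_NAME=user, PASSWORD=password, URL=host, PORT=port, SID=sid)

url = oracle_jdbc_url('demo_user', 'demo_pw', 'db.example.com', 1521, 'ORCL')
```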
# +
Postgres_HOST = 'aipost1.cqgrwtaxaib2.us-east-1.rds.amazonaws.com'
Postgres_Port = '5432'
Postgres_DB = 'postgres'
Postgres_TableName = 'IRIS'
Postgres_User_Name = 'B4B4KE4'
Postgres_User_Password = '<PASSWORD>(('
Postgres_Driver_Name = 'org.postgresql.Driver'
JdbcPostgresUrl = 'jdbc:postgresql://{HOST}:{PORT}/{DB}'.format(HOST=Postgres_HOST,PORT=Postgres_Port,DB=Postgres_DB)
TABLEDF.write.jdbc(
url=JdbcPostgresUrl,
table=Postgres_TableName,
mode="overwrite",
properties={
"user":Postgres_User_Name,
"password":<PASSWORD>,
"driver":Postgres_Driver_Name,
"client_encoding":"utf8"
})
# -
Postgres_TABLEDF = spark.read.format("jdbc").option("url",JdbcPostgresUrl)\
.option("dbtable",Postgres_TableName).option("user",Postgres_User_Name)\
.option("password",<PASSWORD>)\
.option("driver",Postgres_Driver_Name)\
.load()
Postgres_TABLEDF.show(10)
| oracle_to_postgres_base_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Similarity engine trainer
# Train a gensim word2vec model with a corpus.
# +
import gensim
import codecs
class Corpus_Iterator(object):
def __init__(self,filename):
self.filename = filename
def __iter__(self):
for line in codecs.open(self.filename,'r',encoding='utf8'):
yield line.split()
corpus_file = 'corpus.txt'
# -
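The `Corpus_Iterator` above streams the corpus line by line, yielding each line as a list of whitespace-separated tokens, which is exactly the sentence format gensim's `Word2Vec` expects. A self-contained check of that behavior against a small temporary file (the class is repeated here, with the file handle closed explicitly):

```python
# Verify the line-streaming tokenizer pattern used by Corpus_Iterator.
import codecs
import os
import tempfile

class Corpus_Iterator(object):
    def __init__(self, filename):
        self.filename = filename
    def __iter__(self):
        with codecs.open(self.filename, 'r', encoding='utf8') as f:
            for line in f:
                yield line.split()

fd, tmp_path = tempfile.mkstemp(suffix='.txt')
with os.fdopen(fd, 'w') as f:
    f.write('hello world\nsecond line here\n')
sentences = list(Corpus_Iterator(tmp_path))
os.remove(tmp_path)
```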
corp = Corpus_Iterator('corpus.txt')
model = gensim.models.Word2Vec(corp,min_count=1,size=100)
model.save('SimEngine')
# +
import gensim
import pymongo
from pymongo import MongoClient
import time
mongo_url = 'mongodb://localhost:27017/'
db = 'CamSim'
coll = 'CamAuthors'
client = MongoClient(mongo_url)
ca = client[db][coll]
for count_size in [0,1,2,3,4,5,6,7,8,9,10]:
print("*"*50 + ' '+ str(count_size))
corp = Corpus_Iterator('corpus.txt')
model = gensim.models.Word2Vec(corp,min_count=count_size,size=100)
model.save('SimEngine')
ind = 0
cur1=ca.find()
cur2=ca.find()
ind1=0
ind2=0
for rec1 in cur1[0:1]:
corp1 = rec1['corpus'].split()
big_negs = 0
negs = 0
singletons =0
multiplons = 0
for rec2 in cur2[:1000]:
corp2 = rec2['corpus'].split()
for i in corp1:
for j in corp2:
try:
s = model.similarity(i,j)
multiplons+=1
if (s<0.): negs+=1
if (s<-0.25): big_negs+=1
except KeyError:  # word missing from the vocabulary (dropped by min_count)
singletons+=1
ind2+=1
#s = model.n_similarity(corp1,corp2)
print(str(negs)+' negatives '+str(ind1)+' '+str(ind2))
print(str(singletons)+' singletons ')
print(str(multiplons)+' multiplons')
print(str(big_negs)+' large negatives, percentage: '+(str(100*float(big_negs)/negs) if negs else 'n/a'))
print(str(100*float(negs)/(negs+singletons+multiplons))+' percent negative')
ind1+=1
# -
def pcorp(li):
    """Keep only the words that are present in the model's vocabulary."""
    li2 = []
    for w in li:
        try:
            v = model[w]
            li2.append(w)
        except KeyError:
            pass
    return li2
# +
import gensim
import pymongo
from pymongo import MongoClient
import time
mongo_url = 'mongodb://localhost:27017/'
db = 'CamSim'
coll = 'CamAuthors'
client = MongoClient(mongo_url)
ca = client[db][coll]
for count_size in [0,1,2,3,4,5,6,7,8,9,10,15,20,30,40,50,60,70,80,100]:
print("*"*50 + ' '+ str(count_size))
corp = Corpus_Iterator('corpus.txt')
model = gensim.models.Word2Vec(corp,min_count=count_size,size=100)
model.save('SimEngine')
ind = 0
cur1=ca.find()
cur2=ca.find()
ind1=0
ind2=0
for rec1 in cur1[0:1]:
corp1 = rec1['corpus'].split()
n_big_negs = 0
n_negs = 0
p_big_negs = 0
p_negs = 0
fails = 0
pcorp1 = pcorp(corp1)
for rec2 in cur2[:1000]:
corp2 = rec2['corpus'].split()
pcorp2 = pcorp(corp2)
cum = 0
try:
for i in pcorp1:
for j in pcorp2:
cum+=model.similarity(i,j)
pair_ave = cum/(len(pcorp1)+len(pcorp2))
if (pair_ave<0.): p_negs+=1
if (pair_ave<-0.25): p_big_negs+=1
nsim_score = model.n_similarity(pcorp1,pcorp2)
if (nsim_score<0.): n_negs+=1
if (nsim_score<-0.25): n_big_negs+=1
except (KeyError, ZeroDivisionError):  # OOV word or empty pruned corpus
fails+=1
ind2+=1
print('n_negatives ' + str(n_negs))
print('n_big negatives '+ str(n_big_negs))
print('p_negatives ' + str(p_negs))
print('p_big negatives '+ str(p_big_negs))
print('fails '+ str(fails))
# -
| SimTrainer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import myutil as mu
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader, Dataset
import matplotlib.pyplot as plt
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import random
from sklearn.datasets import load_digits
# -
# ---
# - Classifying MNIST with a CNN
# - In this chapter we will classify MNIST with a CNN.
#
# ---
# Create an arbitrary tensor. Its size is 1 × 1 × 28 × 28 (batch × channel × height × width).
#
# +
inputs = torch.Tensor(1, 1, 28, 28)  # uninitialized tensor of the given shape
mu.log("inputs", inputs)
# -
# ---
# - Declaring the convolution and pooling layers
# - Let's implement the first convolutional layer.
# - It takes 1 input channel and produces 32 output channels, with kernel size 3 and padding 1.
#
# +
conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3, 3), padding=(1, 1))
mu.log("conv1", conv1)
# -
# ---
# - Now let's implement the second convolutional layer.
# - It takes 32 input channels and produces 64 output channels, with kernel size 3 and padding 1.
#
# +
conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), padding=(1, 1))
mu.log("conv2", conv2)
# -
# ---
# - Now let's implement max pooling.
# - Passing a single integer sets both the kernel size and the stride to that value.
#
# +
pool = nn.MaxPool2d(kernel_size=(2, 2))
mu.log("pool", pool)
# -
# ---
# - Connecting the pieces into a model
# - So far we have only declared the layers without connecting them.
# - Let's connect them to complete the model.
# - First, pass the input through the first convolutional layer and inspect the resulting tensor size.
#
# +
out = conv1(inputs)
mu.log("out = conv1(inputs)", out)
# -
# ---
# - The result is a 32-channel tensor of width 28 and height 28.
# - We get 32 channels because conv1 was given out_channels=32.
# - The width and height stay 28 because a 3 × 3 kernel with padding 1 preserves the spatial size.
# - Now pass this through max pooling and inspect the tensor size afterwards.
#
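A quick way to check the conv/pool shape arithmetic used in this notebook (the `conv2d_out` helper below is illustrative, not part of `myutil`):

```python
def conv2d_out(size, kernel, padding=0, stride=1):
    """Spatial output size of a conv/pool layer: floor((W - K + 2P) / S) + 1."""
    return (size - kernel + 2 * padding) // stride + 1

# a 3x3 convolution with padding 1 preserves 28x28
assert conv2d_out(28, kernel=3, padding=1) == 28
# 2x2 max pooling with stride 2 halves the spatial size
assert conv2d_out(28, kernel=2, stride=2) == 14
```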
# +
out = pool(out)
mu.log("out = pool(out)", out)
# -
# ---
# - The result is a 32-channel tensor of width 14 and height 14.
# - Now pass this through the second convolutional layer and inspect the resulting size.
#
# +
out = conv2(out)
mu.log("out = conv2(out)", out)
# -
# ---
# - The result is a 64-channel tensor of width 14 and height 14.
# - We get 64 channels because conv2 was given out_channels=64.
# - The width and height stay 14 because a 3 × 3 kernel with padding 1 preserves the spatial size.
# - Now pass this through max pooling and inspect the tensor size afterwards.
#
# +
out = pool(out)
mu.log("out = pool(out)", out)
# -
# ---
# - Next we will flatten this tensor.
# - Before flattening, let's look at .size(n), which returns the size of the tensor's n-th dimension.
# - The current size of out is 1 × 64 × 7 × 7.
# - Let's print each dimension of out.
#
# +
mu.log("out.size(0)", out.size(0))
mu.log("out.size(1)", out.size(1))
mu.log("out.size(2)", out.size(2))
mu.log("out.size(3)", out.size(3))
# -
# ---
# Now let's flatten the tensor using .view().
#
# +
out = out.view(out.size(0), -1)
mu.log("out = out.view(out.size(0), -1)", out)
mu.log("out.size(1)", out.size(1))
# -
# ---
# - All dimensions except the batch dimension have been merged into one.
# - Now let's pass this through a fully-connected layer.
# - The output layer has 10 neurons, producing a 10-dimensional tensor.
#
# +
fc = nn.Linear(out.size(1), 10)
mu.log("fc", fc)
out = fc(out)
mu.log("out = fc(out)", out)
# -
# ---
# Classifying MNIST with a CNN
#
# +
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# fix the random seed
torch.manual_seed(777)
# also fix the random seed when a GPU is available
if device == 'cuda':
torch.cuda.manual_seed_all(777)
# -
# ---
# Set the training hyperparameters.
#
# +
learning_rate = 0.001
training_epochs = 15
batch_size = 100
# -
# ---
# Define the datasets so they can be handled with a DataLoader.
#
# +
mnist_train = dsets.MNIST(root='MNIST_data/',  # download path
                          train=True,  # True -> download the training set
                          transform=transforms.ToTensor(),  # convert to tensors
                          download=True)
mu.log("mnist_train", mnist_train)
mnist_test = dsets.MNIST(root='MNIST_data/',  # download path
                         train=False,  # False -> download the test set
                         transform=transforms.ToTensor(),  # convert to tensors
                         download=True)
mu.log("mnist_test", mnist_test)
data_loader = torch.utils.data.DataLoader(dataset=mnist_train,
batch_size=batch_size,
shuffle=True,
drop_last=True)
mu.log("len(data_loader)", len(data_loader))
mu.log("data_loader.sampler.num_samples", data_loader.sampler.num_samples)
mu.log("data_loader.batch_size", data_loader.batch_size)
# -
# ---
# Now we design the model as a class.
#
# +
class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # first layer
        # ImgIn shape=(?, 28, 28, 1)
        #    Conv -> (?, 28, 28, 32)
        #    Pool -> (?, 14, 14, 32)
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # second layer
        # ImgIn shape=(?, 14, 14, 32)
        #    Conv -> (?, 14, 14, 64)
        #    Pool -> (?, 7, 7, 64)
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # fully-connected layer: 7x7x64 inputs -> 10 outputs
        self.fc = nn.Linear(in_features=7 * 7 * 64, out_features=10, bias=True)
        # weight initialization for the fully-connected layer only
        nn.init.xavier_normal_(self.fc.weight)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        # flatten for the fully-connected layer
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
# -
# ---
# Instantiate the model.
#
# +
model = CNN().to(device)
mu.log("model", model)
# -
# ---
# Define the cost function and the optimizer.
#
# +
criterion = nn.CrossEntropyLoss().to(device)
mu.log("criterion", criterion)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
mu.log("optimizer", optimizer)
# -
# ---
# Let's print the total number of batches.
#
# +
total_batch = len(data_loader)
mu.log("total_batch", total_batch)
# -
# ---
# - The total number of batches is 600.
# - Since the batch size is 100, this means there are 60,000 training samples in total.
# - Now let's train the model.
# - (This takes quite a while.)
#
# +
mu.plt_init()
for epoch in range(training_epochs + 1):
avg_cost = 0
hypothesis = None
Y = None
for X, Y in data_loader:
X = X.to(device)
Y = Y.to(device)
hypothesis = model(X)
cost = criterion(hypothesis, Y)
optimizer.zero_grad()
cost.backward()
optimizer.step()
avg_cost += cost / total_batch
accuracy = mu.get_cross_entropy_accuracy(hypothesis, Y)
mu.log_epoch(epoch, training_epochs, avg_cost, accuracy)
mu.plt_show()
mu.log("model", model)
# -
# ---
# - Now let's run the test.
# - We get about 98% accuracy. In the next chapter we will stack more layers.
#
# +
# no gradients needed since we are not training: torch.no_grad()
with torch.no_grad():
    # note: test_data/test_labels are deprecated in newer torchvision (use .data/.targets)
    X_test = mnist_test.test_data.view(len(mnist_test), 1, 28, 28).float().to(device)
    Y_test = mnist_test.test_labels.to(device)
    prediction = model(X_test)
    accuracy = mu.get_cross_entropy_accuracy(prediction, Y_test)  # was Y, a stale training batch
    mu.log("accuracy", accuracy)
| 0702_cnn_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## How to use pyctcdecode when working with a NeMo model
# install NeMo
# !pip install "nemo-toolkit[asr]==1.3.0"
# get a single audio file
# !wget https://dldata-public.s3.us-east-2.amazonaws.com/1919-142785-0028.wav
# load pretrained NeMo model
import nemo.collections.asr as nemo_asr
# we could choose for example a BPE encoded conformer-ctc model
# asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small')
# let's take a standard quartznet model though to start
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En')
# transcribe audio to logits
logits = asr_model.transcribe(["1919-142785-0028.wav"], logprobs=True)[0]
# look at the alphabet of our model defining the labels for the logit matrix we just calculated
asr_model.decoder.vocabulary
# +
from pyctcdecode import build_ctcdecoder
# build the decoder and decode the logits
decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
decoder.decode(logits)
# -
# ## Librispeech experiments
# The real power of a decoder, however, comes from the ability to incorporate a language model during decoding
# +
# NOTE: some of this code is borrowed from the official NeMo tutorial
# https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Offline_ASR.ipynb
# +
# download pretrained kenlm model for librispeech
# NOTE: since our NeMo vocabulary is all lowercase, we need to convert all librispeech data as well
import gzip
import os, shutil, wget
lm_gzip_path = '3-gram.pruned.1e-7.arpa.gz'
if not os.path.exists(lm_gzip_path):
print('Downloading pruned 3-gram model.')
lm_url = 'http://www.openslr.org/resources/11/3-gram.pruned.1e-7.arpa.gz'
lm_gzip_path = wget.download(lm_url)
print('Downloaded the 3-gram language model.')
else:
print('Pruned .arpa.gz already exists.')
uppercase_lm_path = '3-gram.pruned.1e-7.arpa'
if not os.path.exists(uppercase_lm_path):
with gzip.open(lm_gzip_path, 'rb') as f_zipped:
with open(uppercase_lm_path, 'wb') as f_unzipped:
shutil.copyfileobj(f_zipped, f_unzipped)
print('Unzipped the 3-gram language model.')
else:
print('Unzipped .arpa already exists.')
lm_path = 'lowercase_3-gram.pruned.1e-7.arpa'
if not os.path.exists(lm_path):
with open(uppercase_lm_path, 'r') as f_upper:
with open(lm_path, 'w') as f_lower:
for line in f_upper:
f_lower.write(line.lower())
print('Converted language model file to lowercase.')
# -
# download unigram vocab
# !wget http://www.openslr.org/resources/11/librispeech-vocab.txt
# +
# load unigram list
with open("librispeech-vocab.txt") as f:
unigram_list = [t.lower() for t in f.read().strip().split("\n")]
# load kenlm Model
import kenlm
kenlm_model = kenlm.Model('lowercase_3-gram.pruned.1e-7.arpa')
# -
decoder = build_ctcdecoder(
asr_model.decoder.vocabulary,
kenlm_model,
unigram_list,
)
decoder.decode(logits)
# ## Experiments on librispeech dev-other
# +
# download librispeech dev-other corpus, using one of the great existing scripts, for example:
# https://github.com/NVIDIA/NeMo/blob/main/scripts/dataset_processing/get_librispeech_data.py
# -
# load manifest that holds meta information on all files in dev_other
import pandas as pd
dev_other_df = pd.read_json("/my_dir/dev_other.json", lines=True)
# decode all logits (this may take a while)
logits_list = [
a.cpu().detach().numpy()
for a in asr_model.transcribe(dev_other_df["audio_filepath"].tolist(), logprobs=True)
]
decoder = build_ctcdecoder(
asr_model.decoder.vocabulary,
kenlm_model,
unigram_list,
)
import multiprocessing
with multiprocessing.get_context("fork").Pool() as pool:
pred_list = decoder.decode_batch(pool, logits_list)
from nemo.collections.asr.metrics.wer import word_error_rate
word_error_rate(dev_other_df["text"].tolist(), pred_list)
# let's compare this to greedy decoding
def _greedy_decode(logits):
    """Decode argmax of logits and squash in CTC fashion."""
    # `labels` was undefined here; use the model vocabulary
    label_dict = {n: c for n, c in enumerate(asr_model.decoder.vocabulary)}
    prev_c = None
    out = []
    for n in logits.argmax(axis=1):
        c = label_dict.get(n, "")  # if not in labels, then assume it's the ctc blank char
        if c != prev_c:
            out.append(c)
        prev_c = c
    return "".join(out)
word_error_rate(dev_other_df["text"].tolist(), [_greedy_decode(l) for l in logits_list])
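As a sanity check of the CTC collapse logic, here is a self-contained toy example with a hypothetical two-letter alphabet whose last index acts as the CTC blank (a standalone sketch, independent of the NeMo model):

```python
import numpy as np

def greedy_ctc(logits, labels):
    """Argmax per frame, merge repeats, treat out-of-vocabulary indices as the CTC blank."""
    label_dict = dict(enumerate(labels))
    out, prev = [], None
    for n in logits.argmax(axis=1):
        c = label_dict.get(n, "")  # index 2 is not in labels -> blank
        if c != prev:
            out.append(c)
        prev = c
    return "".join(out)

labels = ["a", "b"]  # hypothetical alphabet; index 2 = blank
frames = np.array([
    [0.90, 0.05, 0.05],  # a
    [0.90, 0.05, 0.05],  # a (repeat, merged)
    [0.05, 0.05, 0.90],  # blank (separates repeated letters)
    [0.90, 0.05, 0.05],  # a (new emission after blank)
    [0.05, 0.90, 0.05],  # b
])
print(greedy_ctc(frames, labels))  # -> "aab"
```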
# we did better by using a language model, but we can improve this further by tuning the decoder parameters
# ## gridsearch optimal parameters
data_grid = []
for a in [0.6, 0.7, 0.8]:
for b in [2.0, 3.0, 4.0]:
decoder.reset_params(alpha=a, beta=b)
with multiprocessing.get_context("fork").Pool(15) as pool:
# use a lower beam_width here for fast testing
lm_preds = decoder.decode_batch(pool, logits_list, beam_width=50)
wer_val = word_error_rate(dev_other_df["text"].tolist(), lm_preds)
data_grid.append((a, b, wer_val))
pd.DataFrame(data_grid, columns=["alpha", "beta", "wer"]).sort_values(by="wer").head()
# +
# advanced parameters to tune:
# beam_width: how many beams to keep after each step
# beam_prune_logp: beams that are much worse than the best beam will be pruned
# token_min_logp: tokens below this logp are skipped unless they are the argmax of the frame
# unk_score_offset: score decrease for a token if it is out of vocabulary
# lm_score_boundary: whether to have kenlm respect sentence boundaries when scoring
# -
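To build intuition for what `token_min_logp` does per frame, here is a simplified pure-Python sketch; the helper `prune_frame` is hypothetical and is not pyctcdecode's actual implementation:

```python
import math

def prune_frame(frame_logps, token_min_logp=-5.0):
    """Keep token indices whose log-prob beats the threshold; always keep the argmax.

    Rough sketch of the per-frame pruning that token_min_logp controls.
    """
    best = max(range(len(frame_logps)), key=frame_logps.__getitem__)
    return [i for i, lp in enumerate(frame_logps)
            if lp >= token_min_logp or i == best]

frame = [math.log(0.90), math.log(0.09), math.log(0.01)]
print(prune_frame(frame, token_min_logp=math.log(0.05)))  # -> [0, 1]
```

Fewer surviving tokens per frame means fewer beam extensions, which is why raising this threshold speeds up decoding at some accuracy cost.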
| tutorials/01_pipeline_nemo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="tu5GtCghdaqn" slideshow={"slide_type": "slide"}
# # Chapter 5. Support Vector Machines
# + [markdown] slideshow={"slide_type": "slide"}
# #### Acknowledgments
#
# Sincere thanks to Aurélien Géron for releasing his material and to Hanbit Academy for providing the lecture slides.
# + [markdown] slideshow={"slide_type": "slide"}
# * Supports linear classification/regression
# * Can also handle nonlinear classification/regression
# * Can be used for outlier detection
# * Handles complex classification problems
# * Well suited to small or medium-sized datasets
# + [markdown] colab_type="text" id="tu5GtCghdaqn" slideshow={"slide_type": "slide"}
# ## Outline
#
# * Linear SVM classification
#     * Large margin classification
#     * Hard/soft margin classification
# + [markdown] colab_type="text" id="tu5GtCghdaqn" slideshow={"slide_type": "fragment"}
# * Nonlinear SVM classification
#     * Linear SVM + polynomial features
#     * SVC + kernel trick
# + [markdown] colab_type="text" id="tu5GtCghdaqn" slideshow={"slide_type": "fragment"}
# * SVM regression
# + [markdown] colab_type="text" id="tu5GtCghdaqn" slideshow={"slide_type": "slide"}
# * SVM theory
#     * decision function, prediction, objective function
#     * quadratic programming (QP)
#     * the dual problem
#     * kernel SVM
#     * online SVM
# + [markdown] colab_type="text" id="JGc-Nyz1gRJg" slideshow={"slide_type": "slide"}
# ## 5.1 Linear SVM Classification
# + [markdown] colab_type="text" id="tu5GtCghdaqn" slideshow={"slide_type": "slide"}
# * Large margin classification
# * Hard/soft margin classification
# + [markdown] colab_type="text" id="N3xGgvNJeI0x" slideshow={"slide_type": "slide"}
# ### Large Margin Classification
# + [markdown] colab_type="text" id="N3xGgvNJeI0x" slideshow={"slide_type": "fragment"}
# * Large margin: the widest possible street separating the classes
#
# * Classify by finding the widest street between the classes, i.e. the largest margin
# + [markdown] colab_type="text" id="UGMrjXXhxHRe" slideshow={"slide_type": "slide"}
# * Example:
# + [markdown] slideshow={"slide_type": ""}
# <img src="images/ch05/homl05-01.png" width="600"/>
# + [markdown] slideshow={"slide_type": "fragment"}
# | | Left plot | Right plot |
# | ---: | -------------: | -------------: |
# | **Classifier:** | linear classifier | margin classifier |
# | **Solid line:** | separates the training data well | decision boundary (center of the widest street) |
# | **Generalization:** | generalizes poorly | generalizes well |
# + [markdown] colab_type="text" id="N3xGgvNJeI0x" slideshow={"slide_type": "slide"}
# ### Support Vectors
# + [markdown] colab_type="text" id="JGc-Nyz1gRJg" slideshow={"slide_type": "fragment"}
# * The samples located on the edges of the street (circled in the figure below)
# * Training maximizes the gap between the support vectors, i.e. the width of the street
# * Scaling the features yields a much better decision boundary
#     * because it puts all features on comparable scales
# + [markdown] slideshow={"slide_type": ""}
# <img src="images/ch05/homl05-02.png" width="600"/>
# + [markdown] colab_type="text" id="N3xGgvNJeI0x" slideshow={"slide_type": "slide"}
# ### Support Vector Machine (SVM)
# + [markdown] colab_type="text" id="JGc-Nyz1gRJg" slideshow={"slide_type": "fragment"}
# * A classifier whose decision boundary stays as far away as possible from both classes
# * Finding the decision boundary: the width of the street (margin) between the classes must be maximal under certain constraints
# + [markdown] colab_type="text" id="kHZMq0ZchbG2" slideshow={"slide_type": "slide"}
# ### Hard Margin Classification
#
# * Margin classification requiring every training sample to be off the street and on the correct side
# + [markdown] colab_type="text" id="kHZMq0ZchbG2" slideshow={"slide_type": "fragment"}
# * Only possible when the training set is linearly separable
# + [markdown] colab_type="text" id="ipXN9Jndh8la" slideshow={"slide_type": "slide"}
# #### Hard margin classification and outliers
# + [markdown] slideshow={"slide_type": ""}
# <img src="images/ch05/homl05-03.png" width="600"/>
# + [markdown] slideshow={"slide_type": ""}
# | | Left plot | Right plot |
# | ---: | -------------: | -------------: |
# | **Outlier:** | mixed in with the other class | very close to the other class |
# | **Hard margin classification:** | impossible | possible, but generalizes poorly |
# + [markdown] colab_type="text" id="kHZMq0ZchbG2" slideshow={"slide_type": "slide"}
# ### Soft Margin Classification
#
# * Margin classification that keeps the street as wide as possible while controlling the number of margin violations
# + [markdown] colab_type="text" id="kHZMq0ZchbG2" slideshow={"slide_type": "fragment"}
# * __Margin violation:__ a sample that lies on the street, or beyond the decision boundary on the wrong side
# + [markdown] colab_type="text" id="3J1AIfHsi6x0" slideshow={"slide_type": "slide"}
# #### Example: binary classification of iris flowers
#
# * Uses scikit-learn's SVM classifier `LinearSVC`
#
# * Detects whether a flower is of the Iris-Virginica variety
# + [markdown] slideshow={"slide_type": "fragment"}
# ```python
# svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42)
#
# svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42)
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# <img src="images/ch05/homl05-04.png" width="600"/>
# + [markdown] slideshow={"slide_type": ""}
# | | Left plot | Right plot |
# | ---: | -------------: | -------------: |
# | **C:** | small | large |
# | **Street width:** | wide | narrow |
# | **Margin violations:** | many | few |
# | **Classification:** | less precise | more precise |
# + [markdown] colab_type="text" id="VGdLiYbTk1Xs" slideshow={"slide_type": "slide"}
# ### scikit-learn models supporting linear SVMs
# + [markdown] colab_type="text" id="VGdLiYbTk1Xs" slideshow={"slide_type": "fragment"}
# #### LinearSVC
#
# * Described above.
# + [markdown] colab_type="text" id="VGdLiYbTk1Xs" slideshow={"slide_type": "fragment"}
# #### SVC + linear kernel
#
# * Example: `SVC(kernel="linear", C=1)`
# + [markdown] colab_type="text" id="VGdLiYbTk1Xs" slideshow={"slide_type": "fragment"}
# #### SGDClassifier + hinge loss + regularization
#
# * Example: `SGDClassifier(loss="hinge", alpha=1/(m*C))`
#     * the regularization strength is inversely proportional to the number of training samples `m`: more samples, weaker regularization
# + [markdown] colab_type="text" id="pKX4P_7Wll07" slideshow={"slide_type": "slide"}
# ## 5.2 Nonlinear Classification
# + [markdown] colab_type="text" id="pKX4P_7Wll07" slideshow={"slide_type": "slide"}
# * Approach 1: apply a linear SVM
#     * using polynomial features: add polynomial features, then apply a linear SVM
#     * using similarity features: add similarity features (or use them exclusively), then apply a linear SVM
# + [markdown] colab_type="text" id="pKX4P_7Wll07" slideshow={"slide_type": "fragment"}
# * Approach 2: `SVC` + kernel trick
#     * does not actually add any new features, but produces the same result as if it had
#     * example 1: the polynomial kernel
#     * example 2: the Gaussian RBF (radial basis function) kernel
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# ### Approach 1: using a linear SVM
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### Polynomial features + linear SVM: example 1
#
# * Add a new feature $x_1^2$ to a model with a single feature $x_1$, then apply a linear SVM classifier
# -
# <div align="center"><img src="images/ch05/homl05-05.png" width="600"/></div>
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### Aside: polynomial features + linear regression
#
# * Already covered in chapter 4.
#     * add a new feature $x_1^2$ to a model with a single feature $x_1$, then apply linear regression
#     * quadratic model: $\hat y = \theta_0 + \theta_1\, x_1 + \theta_2\, x_1^{2}$
# -
# <div align="center"><img src="images/ch04/homl04-07.png" width="500"/></div>
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### Polynomial features + linear SVM: example 2
#
# * The moons dataset: two interleaving half circles, one per class
# -
# <div align="center"><img src="images/ch05/homl05-06.png" width="500"/></div>
# + [markdown] slideshow={"slide_type": "slide"}
# ```python
# # add polynomial features up to degree 3
# polynomial_svm_clf = Pipeline([
# ("poly_features", PolynomialFeatures(degree=3)),
# ("scaler", StandardScaler()),
# ("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
# ])
# ```
# -
# <div align="center"><img src="images/ch05/homl05-07.png" width="500"/></div>
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### Similarity features + linear SVM
#
# * Similarity function: measures how much each sample resembles a particular landmark
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": ""}
# #### Similarity function example: Gaussian radial basis function (RBF)
#
# * $\ell$: the landmark
# * $\gamma$: controls how quickly the value decays to 0 as the distance to the landmark grows
#     * larger $\gamma$ favors samples closer to the landmark
#     * higher risk of overfitting
# * 0: very far away from the landmark
# * 1: exactly at the landmark's position
#
# $$
# \phi(\mathbf x, \ell) = \exp(-\gamma\, \lVert \mathbf x - \ell \lVert^2)
# $$
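The formula above is easy to check numerically. The helper below is illustrative only, and γ = 0.3 is an assumption matching the book's figure, where x = -1 has similarities of roughly 0.74 and 0.30 to the landmarks -2 and 1:

```python
import math

def gaussian_rbf(x, landmark, gamma):
    """phi(x, l) = exp(-gamma * ||x - l||^2), for scalar inputs."""
    return math.exp(-gamma * (x - landmark) ** 2)

assert gaussian_rbf(-2.0, landmark=-2.0, gamma=0.3) == 1.0   # exactly at the landmark
print(round(gaussian_rbf(-1.0, -2.0, 0.3), 2))  # -> 0.74
print(round(gaussian_rbf(-1.0, 1.0, 0.3), 2))   # -> 0.3
```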
# + [markdown] colab_type="text" id="ApSHKXvRoimC" slideshow={"slide_type": "slide"}
# #### Pros and cons of similarity features
#
# * Use each sample as a landmark and add the corresponding similarity features
#     * ($m$ samples with $n$ features) $\quad\Longrightarrow\quad$ ($m$ samples with $m$ features)
# * Pro: as the dimension grows, the data is more likely to become linearly separable.
#
# * Con: for a very large training set, an equally large number of features is created.
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### Similarity features + linear SVM: example
#
# * Landmarks: -2 and 1
# * $x_2$ and $x_3$: similarities to -2 and 1, computed with the Gaussian RBF function
# * The image below: $\mathbf x = -1$
# -
# <div align="center"><img src="images/ch05/homl05-08.png" width="600"/></div>
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# ### Approach 2: SVC + kernel trick
# + [markdown] colab_type="text" id="1jwsMMfMnAjW"
# #### The kernel trick
#
# * A technique that produces mathematically the same result as adding features, without actually adding any
#
# * Polynomial kernel: behaves as if polynomial features had been added
# * Gaussian RBF kernel: behaves as if similarity features had been added
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### SVC + polynomial kernel: example
#
# * Uses the kernel trick to emulate training with added polynomial features
# -
# ```python
# # specifying the two polynomial kernels
#
# from sklearn.svm import SVC
#
# poly_kernel_svm_clf = Pipeline([
# ("scaler", StandardScaler()),
# ("svm_clf", SVC(kernel="poly", degree=d, coef0=c0, C=C))
# ])
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# <div align="center"><img src="images/ch05/homl05-09.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": ""}
#
# | | Left plot | Right plot |
# | ---: | -------------: | -------------: |
# | **kernel="poly":** | polynomial kernel | polynomial kernel |
# | **degree=d:** | d=3: degree-3 polynomial kernel | d=10: degree-10 polynomial kernel |
# | **coef0=r:** | r=1: mild emphasis on high-degree terms | r=100: strong emphasis on high-degree terms |
# | **C=C:** | C=5: slightly larger margin | C=5: slightly larger margin |
# + [markdown] colab_type="text" id="HLqZKMBNoDLz" slideshow={"slide_type": "slide"}
# #### Choosing good hyperparameters
#
# * If the model overfits, reduce the degree
# * Find good hyperparameters with grid search or similar techniques
#     * start with a coarse grid, then refine the search
# * It helps to understand what each hyperparameter does
# + [markdown] colab_type="text" id="Noa1Kbiil4Y_" slideshow={"slide_type": "slide"}
# #### SVC + Gaussian RBF: example
#
# * Uses the kernel trick to emulate training with added similarity features
# -
# ```python
# rbf_kernel_svm_clf = Pipeline([
# ("scaler", StandardScaler()),
# ("svm_clf", SVC(kernel="rbf", gamma=ga, C=C))
# ])
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# <div align="center"><img src="images/ch05/homl05-10.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": ""}
#
# | | Top-left plot | Top-right plot |
# | ---: | -------------: | -------------: |
# | **kernel="rbf":** | Gaussian RBF kernel | Gaussian RBF kernel |
# | **gamma=ga:** | ga=0.1: mild focus on landmarks | ga=0.1: mild focus on landmarks |
# | **C=C:** | C=0.001: strong regularization | C=1000: weak regularization |
# | | hence a wider margin street | hence a narrower margin street |
# + [markdown] slideshow={"slide_type": "slide"}
# <div align="center"><img src="images/ch05/homl05-10.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": ""}
# | | Bottom-left plot | Bottom-right plot |
# | ---: | -------------: | -------------: |
# | **kernel="rbf":** | Gaussian RBF kernel | Gaussian RBF kernel |
# | **gamma=ga:** | ga=5.0: strong focus on landmarks | ga=5.0: strong focus on landmarks |
# | **C=C:** | C=0.001: strong regularization | C=1000: weak regularization |
# | | less sensitive decision boundary | more sensitive decision boundary |
# + [markdown] colab_type="text" id="IekQoOEprhSY" slideshow={"slide_type": "slide"}
# #### Recommended kernels
# * The default `kernel` of `SVC` is `"rbf"` => it works well in most cases
# + [markdown] colab_type="text" id="IekQoOEprhSY" slideshow={"slide_type": "fragment"}
# * If a linear model is expected, you can use the `"linear"` kernel
#     * `LinearSVC` is faster when the training set is large or has very many features
# + [markdown] colab_type="text" id="IekQoOEprhSY" slideshow={"slide_type": "fragment"}
# * If time and computing power allow, use cross validation and grid search to find a suitable kernel
# + [markdown] colab_type="text" id="IekQoOEprhSY" slideshow={"slide_type": "fragment"}
# * If a kernel specialized for your training set is known, use it
# + [markdown] colab_type="text" id="CiVqbDyGsPPZ" slideshow={"slide_type": "slide"}
# ### Computational Complexity
# + [markdown] colab_type="text" id="CiVqbDyGsPPZ"
# Classifier | Time complexity (m = samples, n = features) | Out-of-core learning | Scaling required | Kernel trick | Multiclass
# ----|-----|-----|-----|-----|-----
# LinearSVC | $O(m \times n)$ | no | yes | no | OvR by default
# SGDClassifier | $O(m \times n)$ | yes | yes | no | yes
# SVC | $O(m^2 \times n) \sim O(m^3 \times n)$ | no | yes | yes | OvR by default
# + [markdown] colab_type="text" id="P1lYLawkuMlw" slideshow={"slide_type": "slide"}
# ## 5.3 SVM Regression
# + [markdown] colab_type="text" id="P1lYLawkuMlw" slideshow={"slide_type": "slide"}
# * Goal of SVM classification: keep the street between the two classes as wide as possible while controlling margin violations
# + [markdown] colab_type="text" id="P1lYLawkuMlw" slideshow={"slide_type": "fragment"}
# * Goal of SVM regression: keep the street as wide as possible while controlling margin violations, so that as many samples as possible lie on the street
# + [markdown] colab_type="text" id="P1lYLawkuMlw" slideshow={"slide_type": "fragment"}
# * Margin violation for a regression model: a sample that lies off the street
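A sample off the street is exactly where the ε-insensitive loss used by SVM regression becomes positive; a minimal sketch (the function name is illustrative):

```python
def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.5):
    """Zero inside the epsilon-wide street, linear outside of it."""
    return max(0.0, abs(y_true - y_pred) - epsilon)

assert epsilon_insensitive_loss(1.0, 1.3, epsilon=0.5) == 0.0  # on the street: no loss
assert epsilon_insensitive_loss(1.0, 2.0, epsilon=0.5) == 0.5  # off the street: margin violation
```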
# + [markdown] colab_type="text" id="iU8IiHfouli_" slideshow={"slide_type": "slide"}
# ### Linear SVM Regression
#
# * A linear regression model implemented with an SVM
# + [markdown] colab_type="text" id="iU8IiHfouli_" slideshow={"slide_type": "slide"}
# #### Example: using LinearSVR
# + [markdown] colab={} colab_type="code" id="5tMs4GI8rbZs"
# ```python
# # specifying the LinearSVR class
# from sklearn.svm import LinearSVR
#
# svm_reg = LinearSVR(epsilon=e)
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# <div align="center"><img src="images/ch05/homl05-11.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": ""}
#
# | | Left plot | Right plot |
# | ---: | -------------: | -------------: |
# | **epsilon=e:** | e=1.5: larger margin | e=0.5: smaller margin |
# + [markdown] colab_type="text" id="nVTdpLgOvuGf" slideshow={"slide_type": "slide"}
# ### Nonlinear SVM Regression
#
# * A nonlinear regression model implemented via the kernel trick
# -
# #### Example: SVR + polynomial kernel
# + [markdown] colab={} colab_type="code" id="5tMs4GI8rbZs"
# ```python
# # SVR + polynomial kernel
# from sklearn.svm import SVR
#
# svm_poly_reg = SVR(kernel="poly", degree=d, C=C, epsilon=e, gamma="scale")
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# <div align="center"><img src="images/ch05/homl05-12.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": ""}
#
# | | Left plot | Right plot |
# | ---: | -------------: | -------------: |
# | **degree=d:** | d=2: degree-2 polynomial kernel | d=2: degree-2 polynomial kernel |
# | **epsilon=e:** | e=0.1: small margin | e=0.1: small margin |
# | **C=C:** | C=100: almost no regularization | C=0.01: strong regularization |
# | | more sensitive to samples | less sensitive to samples |
# | | wider street | narrower street |
# + [markdown] colab_type="text" id="1FPyRJPJws_I" slideshow={"slide_type": "slide"}
# ### Time complexity of the regression models
# + [markdown] colab_type="text" id="1FPyRJPJws_I"
# * `LinearSVR`: the regression counterpart of `LinearSVC`
#     * time complexity grows linearly with the size of the training set
# + [markdown] colab_type="text" id="1FPyRJPJws_I"
# * `SVR`: the regression counterpart of `SVC`
#     * becomes very slow as the training set grows
# + [markdown] slideshow={"slide_type": "slide"}
# ## 5.4 SVM Theory
# + [markdown] slideshow={"slide_type": "slide"}
# * How (linear) SVMs work
#     * decision function and predictions
#     * objective function
#     * quadratic programming (QP)
#     * the dual problem
# + [markdown] slideshow={"slide_type": "slide"}
# * How kernel SVMs work
#     * the kernel trick can be applied when solving the dual problem
# + [markdown] slideshow={"slide_type": "slide"}
# * Online SVMs
#     * online linear SVM
#     * online kernel SVM
# + [markdown] slideshow={"slide_type": "slide"}
# ### How (linear) SVMs work: decision function and predictions
# + [markdown] slideshow={"slide_type": "slide"}
# #### Decision function of a linear SVM classifier
# + [markdown] slideshow={"slide_type": ""}
# \begin{align*}
# h(\mathbf x) &= \mathbf w^T \mathbf x + b \\
# &= w_1 x_1 + \cdots + w_n x_n + b
# \end{align*}
# + [markdown] slideshow={"slide_type": "fragment"}
# #### Predictions of a linear SVM classifier
# + [markdown] slideshow={"slide_type": ""}
# $$
# \hat y = \begin{cases}
# 0 & \text{if } h(\mathbf x) < 0\\
# 1 & \text{if } h(\mathbf x) \ge 0
# \end{cases}
# $$
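This decision rule is straightforward to sketch in plain Python; the weights below are hypothetical, chosen only for illustration:

```python
def decision_function(w, b, x):
    """h(x) = w^T x + b"""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(w, b, x):
    """Predict class 1 iff h(x) >= 0, else class 0."""
    return 1 if decision_function(w, b, x) >= 0 else 0

w, b = [0.5, -1.0], 0.25  # hypothetical 2-feature classifier
assert predict(w, b, [2.0, 0.5]) == 1   # h = 1.0 - 0.5 + 0.25 = 0.75 >= 0
assert predict(w, b, [0.0, 1.0]) == 0   # h = -1.0 + 0.25 = -0.75 < 0
```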
# + [markdown] slideshow={"slide_type": "slide"}
# #### Decision boundary
#
# * The set of points where the decision function equals 0
#
# $$\{\mathbf x \mid h(\mathbf x)=0 \}$$
# + [markdown] slideshow={"slide_type": "slide"}
# * Decision boundary example
#     * iris classification: decide whether a flower is Iris-Virginica (green triangles) from petal length and width
# + [markdown] slideshow={"slide_type": "fragment"}
# <div align="center"><img src="images/ch05/homl05-13.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Note the two dashed lines
#     * the samples where $h(\mathbf x)$ equals 1 or -1
#     * closely related to the margin.
# + [markdown] slideshow={"slide_type": "slide"}
# ### How (linear) SVMs work: objective function
# + [markdown] slideshow={"slide_type": "slide"}
# #### Slope of the decision function and margin width
#
# * The smaller the slope of the decision function, the wider the margin (see the figure below)
# * The slope is proportional to $\| \mathbf w \|$.
# + [markdown] slideshow={"slide_type": "fragment"}
# <div align="center"><img src="images/ch05/homl05-14.png" width="600"/></div>
# + [markdown] slideshow={"slide_type": "fragment"}
# * To enlarge the margin, $\| \mathbf w \|$ must be minimized.
#     * hard margin: the decision function must be greater than 1 for every positive sample, and smaller than -1 for every negative one
#     * soft margin: the decision function must be above (or below) a specified value for every sample.
# + [markdown] slideshow={"slide_type": "slide"}
# #### Objective function of the hard margin linear SVM classifier
#
# * Objective function:
#
# $$\frac 1 2 \mathbf w^T \mathbf w$$
#
# * Find the $\mathbf w$ and $b$ minimizing the objective function subject to:
#
# $$t^{(i)} (\mathbf w^T \mathbf x^{(i)} + b) \ge 1$$
#
# * where:
#     * $x^{(i)}$: the $i$-th sample
#     * $t^{(i)}$: 1 for positive samples, -1 for negative samples
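The constraint $t^{(i)} h(\mathbf x^{(i)}) \ge 1$ is exactly the condition under which the hinge loss, used by `LinearSVC(loss="hinge")`, vanishes; a minimal sketch:

```python
def hinge_loss(t, h):
    """max(0, 1 - t*h): zero iff the hard-margin constraint t*h >= 1 holds."""
    return max(0.0, 1.0 - t * h)

assert hinge_loss(t=1, h=1.5) == 0.0    # positive sample safely outside the street
assert hinge_loss(t=-1, h=-2.0) == 0.0  # negative sample safely outside the street
assert hinge_loss(t=1, h=0.2) == 0.8    # inside the margin -> penalized
```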
# + [markdown] slideshow={"slide_type": "slide"}
# #### Objective function of the soft margin linear SVM classifier
#
# * Objective function:
#
# $$\frac 1 2 \mathbf w^T \mathbf w + C \sum_{i=0}^{m-1} \zeta^{(i)}$$
#
# * Find the $\mathbf w$ and $b$ minimizing the objective function subject to:
#
# $$t^{(i)} (\mathbf w^T \mathbf x^{(i)} + b) \ge 1 - \zeta^{(i)}$$
#
# + [markdown] slideshow={"slide_type": "slide"}
# * where:
#     * $x^{(i)}$: the $i$-th sample
#     * $t^{(i)}$: 1 for positive samples, -1 for negative samples
#     * $\zeta^{(i)}\ge 0$: __slack variable__, specifying how much the $i$-th sample may violate the margin.
#
# * $C$: hyperparameter controlling the trade-off between two goals
#     * goal 1: keep the slack variables small
#     * goal 2: keep $\frac 1 2 \mathbf w^T \mathbf w$ as small as possible to enlarge the margin
# + [markdown] slideshow={"slide_type": "slide"}
# ### How (linear) SVMs work: quadratic programming (QP)
# + [markdown] slideshow={"slide_type": "slide"}
# * The hard (soft) margin problem: a convex quadratic optimization problem with linear constraints
#     * known as a quadratic programming (QP) problem.
#     * solution methods are beyond the scope of the book.
# + [markdown] slideshow={"slide_type": "slide"}
# ### How (linear) SVMs work: the dual problem
# + [markdown] slideshow={"slide_type": "slide"}
# * Dual problem: a different problem whose solution coincides with that of the original problem
# * The QP problem for the hard (soft) margin can be solved via an easier dual problem
# + [markdown] slideshow={"slide_type": "slide"}
# #### Dual problem of the linear SVM objective
#
# * Find the $\alpha$ minimizing the expression below, subject to $\alpha^{(i)} > 0$:
#
# $$
# \frac 1 2 \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} \alpha^{(i)} \alpha^{(j)} t^{(i)} t^{(j)} \mathbf x^{{(i)}^T} \mathbf x^{(j)}
# - \sum_{i=0}^{m-1} \alpha^{(i)}
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# * Use the dual solution $\hat \alpha$ to compute the linear SVM model parameters $\hat{\mathbf w}$ and $\hat b$
#     * $n_s$: number of support vectors, i.e. samples with ${\hat \alpha}^{(i)} > 0$
#
# \begin{align*}
# \hat{\mathbf w} &= \sum_{i=0}^{m-1} {\hat \alpha}^{(i)} t^{(i)} \mathbf x^{(i)} \\
# \hat b &= \frac{1}{n_s} \sum_{i=0, \; {\hat \alpha}^{(i)} > 0}^{m-1} \big( t^{(i)} - {\hat{\mathbf w}^T} \mathbf x^{(i)} \big)
# \end{align*}
# + [markdown] slideshow={"slide_type": "slide"}
# ### How kernel SVMs work
# + [markdown] slideshow={"slide_type": "slide"}
# #### The dual problem and kernel SVMs
#
# * Kernel SVMs work by solving the dual problem rather than the original one.
# + [markdown] slideshow={"slide_type": "slide"}
# * In particular, pay attention to the term $\mathbf x^{{(i)}^T} \mathbf x^{(j)} $ used in the dual objective function below.
# + [markdown] slideshow={"slide_type": ""}
# <div align="center"><img src="images/ch05/homl05-15.png" width="300"/></div>
# + [markdown] slideshow={"slide_type": "fragment"}
# #### Example: the idea behind the 2nd-degree polynomial kernel
#
# * In principle, the dual optimization problem should be solved after applying the 2nd-degree polynomial mapping below.
#
# $$
# \phi(\mathbf x) = (x_1^2, \sqrt{2} x_1 x_2, x_2^2)^T
# $$
#
# * That is, the optimization problem for the expression below would have to be solved.
#
# <div align="center"><img src="images/ch05/homl05-16.png" width="350"/></div>
# + [markdown] slideshow={"slide_type": "slide"}
# * However, the following identity holds:
#
# $$
# \phi(\mathbf a)^T \phi(\mathbf b) = ({\mathbf a}^T \mathbf b)^2
# $$
#
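This identity is easy to check numerically; a minimal sketch with arbitrary example vectors:

```python
import numpy as np

def phi(x):
    # The 2nd-degree polynomial mapping from the text
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])

lhs = phi(a) @ phi(b)    # inner product in feature space
rhs = (a @ b) ** 2       # kernel computed directly in input space
print(lhs, rhs)          # both equal 1.0 for these vectors
```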
# + [markdown] slideshow={"slide_type": "fragment"}
# * Therefore, without applying the mapping $\phi$ at all, it suffices to solve the optimization problem for the function below.
#
# <div align="center"><img src="images/ch05/homl05-17.png" width="310"/></div>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Using the dual solution $\hat \alpha$ obtained with the kernel trick, the prediction $h(\phi(\mathbf x))$ can also be
# computed without evaluating $\phi(\mathbf x)$.
# + [markdown] slideshow={"slide_type": "slide"}
# #### Example: commonly supported kernels
#
# * Linear:
#
# $$K(\mathbf a, \mathbf b) = \mathbf a^T \mathbf b$$
#
# * Polynomial:
#
# $$K(\mathbf a, \mathbf b) = \big( \gamma \mathbf a^T \mathbf b + r \big)^d$$
#
# * Gaussian RBF:
#
# $$K(\mathbf a, \mathbf b) = \exp \big( \!-\! \gamma \| \mathbf a - \mathbf b \|^2 \big )$$
#
# * Sigmoid:
#
# $$K(\mathbf a, \mathbf b) = \tanh\! \big( \gamma \mathbf a^T \mathbf b + r \big)$$
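For reference, the four kernels above can be sketched in NumPy (the parameter defaults here are illustrative, not scikit-learn's):

```python
import numpy as np

def linear_kernel(a, b):
    return a @ b

def poly_kernel(a, b, gamma=1.0, r=0.0, d=3):
    return (gamma * (a @ b) + r) ** d

def rbf_kernel(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def sigmoid_kernel(a, b, gamma=1.0, r=0.0):
    return np.tanh(gamma * (a @ b) + r)

a, b = np.array([1.0, 2.0]), np.array([2.0, 1.0])
print(linear_kernel(a, b))   # 4.0
print(poly_kernel(a, b))     # 64.0
print(rbf_kernel(a, b))      # exp(-2)
```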
# + [markdown] slideshow={"slide_type": "slide"}
# ### Online SVMs
# + [markdown] slideshow={"slide_type": "slide"}
# * Online learning: training incrementally on new samples
# + [markdown] slideshow={"slide_type": "slide"}
# #### Linear online SVMs
#
# * Uses gradient descent to minimize a specific cost function
# * Example: scikit-learn's SGDClassifier
#   * Setting the `loss` hyperparameter to `hinge` specifies a linear SVM model
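A minimal sketch of this on a synthetic dataset (the data and parameters are illustrative); `partial_fit` is what enables the online, incremental updates on new batches:

```python
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_blobs

# Illustrative, well-separated two-class data.
X, y = make_blobs(n_samples=100, centers=2, random_state=42)

clf = SGDClassifier(loss="hinge", random_state=42)  # hinge loss -> linear SVM
clf.fit(X, y)

# Online learning: incrementally update on a new batch of samples.
clf.partial_fit(X[:10], y[:10])
print(clf.score(X, y))
```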
# + [markdown] slideshow={"slide_type": "slide"}
# #### Nonlinear online SVMs
#
# * Online kernel SVMs can be implemented.
# * However, neural-network algorithms are recommended instead.
| slides/handson-ml2-05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:gt]
# language: python
# name: conda-env-gt-py
# ---
# # Nash equilibria - Exercises
#
# 1. Give the definition of a best response.
# 2. For the following games identify the best responses:
#
# 1. $
# A =
# \begin{pmatrix}
# 2 & 1\\
# 1 & 1\end{pmatrix}
# \qquad
# B =
# \begin{pmatrix}
# 1 & 1\\
# 1 & 3\end{pmatrix}
# $
#
# 2. $
# A =
# \begin{pmatrix}
# 2 & 1 & 3 & 17\\
# 27 & 3 & 1 & 1\\
# 4 & 6 & 7 & 18
# \end{pmatrix}
# \qquad
# B =
# \begin{pmatrix}
# 11 & 9 & 10 & 22\\
# 0 & 1 & 1 & 0\\
# 2 & 10 & 12 & 0
# \end{pmatrix}
# $
#
# 3. $
# A =
# \begin{pmatrix}
# 3 & 3 & 2 \\
# 2 & 1 & 3
# \end{pmatrix}
# \qquad
# B =
# \begin{pmatrix}
# 2 & 1 & 3 \\
# 2 & 3 & 2
# \end{pmatrix}
# $
#
# 4. $
# A =
# \begin{pmatrix}
# 3 & -1\\
# 2 & 7\end{pmatrix}
# \qquad
# B =
# \begin{pmatrix}
# -3 & 1\\
# 1 & -6\end{pmatrix}
# $
#
# 3. Represent the following game in normal form:
#
# > Assume two neighbouring countries have very destructive armies at their disposal. If both countries attack each other, each country's civilian population will suffer 10 thousand casualties. If one country attacks whilst the other remains peaceful, the peaceful country will suffer 15 thousand casualties but will retaliate, causing the attacking country 13 thousand casualties. If both countries remain peaceful then there are no casualties.
#
# 1. Clearly state the players and strategy sets.
# 2. Plot the utilities to both countries assuming that they play a mixed strategy while the other country remains peaceful.
# 3. Plot the utilities to both countries assuming that they play a mixed strategy while the other country attacks.
# 4. Obtain the best responses of each player.
#
# 4. State and prove the best response condition.
# 5. Define Nash equilibria.
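As a starting point for exercise 2, best responses to pure strategies can be read off with NumPy's `argmax` (sketched here for the first game; note that `argmax` reports only one best response when there are ties):

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])   # row player's payoffs (game 2.1)
B = np.array([[1, 1], [1, 3]])   # column player's payoffs

# Row player's best response to each column: argmax down each column of A.
row_best = A.argmax(axis=0)
# Column player's best response to each row: argmax along each row of B.
col_best = B.argmax(axis=1)
print(row_best, col_best)   # [0 0] [0 1]
```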
| nbs/exercises/04-Nash-equilibria.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from matplotlib import pyplot as plt
from glob import glob
import pickle
from operator import itemgetter
# +
# list all saved files in directory
directory_template = '/home/kateb/Desktop/Computational-Biophysics-2019/monte carlo/data/e/'
list_of_pkl = glob(directory_template + '*.pkl')
list_of_Cv_T = []
for each_file in list_of_pkl:
print(each_file)
# open each file and convert to numpy array
with open(each_file,'rb') as f:
list_of_e = np.array(pickle.load(f))
# Find T based on the name of the file
T = float(each_file.split('/')[-1][:-4])
list_of_e = list_of_e[400:]
Cv = 1/(T**2) * np.var(list_of_e)
list_of_Cv_T.append([Cv,T])
# -
list_of_Cv_T = sorted(list_of_Cv_T, key=itemgetter(1))
list_of_Cv_T
list_of_Cv_T = np.array(list_of_Cv_T)
plt.plot(list_of_Cv_T[:,1],list_of_Cv_T[:,0], label='Cv')
plt.legend(loc='upper left')
plt.xlabel('Temperature')
plt.ylabel('Cv')
plt.show()
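The heat capacity computed above uses the fluctuation formula $C_v = \mathrm{Var}(E)/T^2$ (in units with $k_B = 1$). A quick synthetic sanity check: for Gaussian energy samples with standard deviation $\sigma$, we expect $C_v \approx \sigma^2/T^2$ (the distribution parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2.0
# Synthetic energy samples with sigma = 3, so Cv should be ~ 3**2 / 2**2 = 2.25.
energies = rng.normal(loc=-50.0, scale=3.0, size=100_000)

Cv = np.var(energies) / T**2
print(Cv)
```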
| monte carlo/Cv Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Natural Language Processing (NLP)
#
# (Courtesy of Prof. Franks and Kharitonova)
# Examples below are adapted from ["How To Work with Language Data in Python 3 using the Natural Language Toolkit (NLTK)"](https://www.digitalocean.com/community/tutorials/how-to-work-with-language-data-in-python-3-using-the-natural-language-toolkit-nltk)
# +
#pip install nltk
# +
#from gensim import corpora, models
# -
import nltk
print(nltk.__version__)
# You should have version 3.2.1 installed
# since we'll use NLTK's Twitter package that requires this version
# +
#nltk.download()
# -
# ## Load the data
#
#
nltk.download('twitter_samples')
from nltk.corpus import twitter_samples
twitter_samples # has a specific type: TwitterCorpusReader
# NLTK's twitter corpus currently contains a sample of 20,000 tweets retrieved from the Twitter Streaming API. Full tweets are stored as line-separated JSON. We can see how many JSON files exist in the corpus using the `twitter_samples.fileids()` method:
twitter_samples.fileids()
# Using those file IDs we can then return the tweet strings.
# Let's look at just the first few:
twitter_samples.strings('tweets.20150430-223406.json')[0:3]
# We now know our corpus was downloaded successfully, so we can start processing the tweets.
#
# Let's count how many adjectives and nouns appear in the positive subset of the `twitter_samples` corpus:
#
# * A **noun**, in its most basic definition, is usually defined as a person, place, or thing (e.g., a movie, a book, and a burger). Counting nouns can help determine how many different _topics_ are being discussed.
# * An **adjective** is a word that modifies a noun (or pronoun), for example: a _horrible_ movie, a _funny_ book, or a _delicious_ burger. Counting adjectives can determine what type of language is being used, i.e. opinions tend to include more adjectives than facts.
#
# We could later count positive adjectives (great, awesome, happy, etc.) versus negative adjectives (boring, lame, sad, etc.), which could be used to analyze the sentiment of tweets or reviews about a product or movie, for example. This script provides data that can in turn inform decisions related to that product or movie.
#
#
# Let's create a `tweets` variable and assign to it the list of tweet strings from the `positive_tweets.json` file.
tweets = twitter_samples.strings('positive_tweets.json')
tweets[0:3]
# ## Tokenizing Sentences
#
# **Tokenization** is the act of breaking up a sequence of strings into pieces such as words, keywords, phrases, symbols and other elements, which are called _tokens_. Let's create a new variable called `tweets_tokens`, to which we will assign the tokenized list of tweets:
# Load tokenized tweets
tweets_tokens = twitter_samples.tokenized('positive_tweets.json')
# This new variable, `tweets_tokens`, is a list where each element in the list is a list of tokens.
#
# We can compare the list of tokens to the original tweet that the tokens came from:
print(tweets[0:1])
tweets_tokens[0:1]
# Now that we have the tokens of each tweet we can tag the tokens with the appropriate parts of speech (POS) tags.
#
#
# In order to access NLTK's POS tagger, we'll need to import it. (Typically, all import statements must go at the beginning of the script/notebook.)
from nltk.tag import pos_tag_sents
nltk.download('averaged_perceptron_tagger')
# Now, we can tag each of our tokens. NLTK allows us to do it all at once using: `pos_tag_sents()`. We are going to create a new variable `tweets_tagged`, which we will use to store our tagged lists.
tweets_tagged = pos_tag_sents(tweets_tokens)
# To get an idea of what tagged tokens look like, here is what the first element in our tweets_tagged list looks like:
tweets_tagged[0:1]
# ## Tagging parts of speech (POS)
#
# We can see that our tweet is represented as a list and for each token we have information about its POS tag. Each token/tag pair is saved as a [tuple](https://www.digitalocean.com/community/tutorials/understanding-tuples-in-python-3). The default tagger of `nltk.pos_tag()` uses the Penn Treebank Tag Set. You can check an [alphabetical list of part-of-speech tags used in the Penn Treebank Project](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html).
#
# In NLTK, the abbreviation for **adjective** is `JJ`.
#
# The NLTK tagger marks **singular nouns** (`NN`) with different tags than **plural nouns** (`NNS`). To simplify, we will only count singular nouns by keeping track of the `NN` tag.
#
# In the next step we will count how many times `JJ` and `NN` appear throughout our corpus.
# We will keep track of how many times JJ and NN appear using an accumulator (count) variable, which we will continuously add to every time we find a tag.
#
# After we create the variables, we'll create two for loops. The first loop will iterate through each tweet in the list. The second loop will iterate through each token/tag pair in each tweet. For each pair, we will look up the tag using the appropriate tuple index.
#
# We will then check to see if the tag matches either the string 'JJ' or 'NN' by using conditional statements. If the tag is a match we will add (+= 1) to the appropriate accumulator.
# +
# Set accumulators
JJ_count = 0
NN_count = 0
# Loop through list of tweets
for tweet in tweets_tagged:
for pair in tweet:
tag = pair[1]
if tag == 'JJ':
JJ_count += 1
elif tag == 'NN':
NN_count += 1
# -
# After the two loops are complete, we should have the total count for adjectives and nouns in our corpus.
print('Total number of adjectives = ', JJ_count)
print('Total number of nouns = ', NN_count)
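As an aside, the same tallies can be obtained in one pass with `collections.Counter`; a self-contained sketch on a tiny hand-made stand-in for `tweets_tagged`:

```python
from collections import Counter

# A tiny hand-made stand-in for tweets_tagged: lists of (token, tag) pairs.
sample_tagged = [
    [("great", "JJ"), ("movie", "NN")],
    [("the", "DT"), ("burger", "NN"), ("was", "VBD"), ("delicious", "JJ")],
]

# Count every tag at once instead of checking each one in an if/elif chain.
tag_counts = Counter(tag for tweet in sample_tagged for _, tag in tweet)
print(tag_counts["JJ"], tag_counts["NN"])   # 2 2
```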
| labs/lab06/lab06-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (PyTorch 1.6 Python 3.6 CPU Optimized)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/pytorch-1.6-cpu-py36-ubuntu16.04-v1
# ---
# # Distributed Model Parallel Training on Amazon SageMaker
# In this notebook we will use a Vision Transformer for image classification on the `horse or human` data from https://laurencemoroney.com/datasets.html. We will download both the training and validation datasets provided on the site.
#
# Note:
# - Kernel: `PyTorch 1.6 Python 3.6 CPU Optimized`
# - Instance Type: `ml.m5.xlarge`
# update sagemaker
# !pip install -U sagemaker
# make sure version is greater or equal to '2.84.0'
import sagemaker
sagemaker.__version__
# ## Download data
# !curl -o train.zip https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip
# !curl -o validation.zip https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip
# +
## Unzip file
import zipfile
with zipfile.ZipFile("train.zip","r") as train_zip_ref:
train_zip_ref.extractall("data/train")
with zipfile.ZipFile("validation.zip","r") as val_zip_ref:
val_zip_ref.extractall("data/validation")
# -
# ## Convert images to High Resolution
# We will start by converting our images to high resolution using the Hugging Face model [EdsrModel](https://huggingface.co/eugenesiow/edsr-base) from the `super-image` library.
# Please note that this step is optional; the reason for doing it is to mimic real-world image datasets for High Performance Computing, where image sizes might be in megabytes.
# !pip install datasets super-image
# !python3 -m pip install --upgrade sagemaker
# We are using the `horse-or-human` dataset, whose images are about `178KB` each; we will convert these images to higher resolution, and the `resulting size will be close to 2MB` per image.
# + active=""
# @InProceedings{Lim_2017_CVPR_Workshops,
# author = {<NAME> and <NAME> and <NAME> and <NAME> and <NAME>},
# title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
# booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
# month = {July},
# year = {2017}
# }
# +
from super_image import EdsrModel, ImageLoader
from PIL import Image
import requests
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=4)
# -
import os
from os import listdir
folder_dir = "data/validation/"
for folder in os.listdir(folder_dir):
folder_path = f'{folder_dir}{folder}'
for image_file in os.listdir(folder_path):
path = f'{folder_path}/{image_file}'
image = Image.open(path)
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, path)
# +
# quick check on image size for the last image converted by the model.
import os
file_size = os.path.getsize(path)
print("File Size is :", file_size/1000000, "MB")
# -
# ## Optional: Duplicate files to increase number of images for testing purpose in the later sections
# +
# import os
# import shutil
# # repeat the same code for other folders as well.
# source_folder = r"./data/validation/horses/"
# destination_folder = r"./data/validation/horses/"
# i=0
# # fetch all files
# for file_name in os.listdir(source_folder):
# # construct full file path
# i=i+1
# source = source_folder + file_name
# destination = destination_folder + str(i) + '_' + file_name
# # copy only files
# if os.path.isfile(source):
# shutil.copy(source, destination)
# print('copied', destination)
# -
# ## Upload data to s3
# +
# %%time
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
import boto3
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'horse-or-human'
role = get_execution_role()
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
# -
from sagemaker.s3 import S3Uploader
# s3_input_data = f's3://{bucket}/{prefix}/data'
s3_train_data = S3Uploader.upload('data/train',f's3://{bucket}/{prefix}/data/train')
s3_val_data = S3Uploader.upload('data/validation',f's3://{bucket}/{prefix}/data/validation')
print('s3 train data path: ', s3_train_data)
print('s3 validation data path: ', s3_val_data)
s3_train_data = 's3://sagemaker-us-west-2-706553727873/horse-or-human/data/train'
s3_val_data = 's3://sagemaker-us-west-2-706553727873/horse-or-human/data/validation'
# +
smp_options = {
"enabled":True,
"parameters": { # Required
# "microbatches": 4,
"partitions": 1, # Required
"placement_strategy": "spread",
"pipeline": "interleaved",
"optimize": "speed",
"ddp": True,
}
}
mpi_options = {
"enabled" : True, # Required
"processes_per_host" : 8, # Required
# "custom_mpi_options" : "--mca btl_vader_single_copy_mechanism none"
}
# -
# ## Define PyTorch Estimator
metric_definitions=[
{'Name': 'train:error', 'Regex': 'train_loss: ([0-9\.]+)'},
{'Name': 'validation:error', 'Regex': 'val_loss : ([0-9\.]+)'}
]
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point='train_smp.py',
source_dir='src',
role=role,
instance_count=1,
instance_type='ml.p3.16xlarge',
framework_version="1.10.0",
py_version="py38",
sagemaker_session=sagemaker_session,
hyperparameters={
'epochs':5,
'batch_size':8,
'lr':3e-5,
'gamma': 0.7
},
distribution={
"smdistributed": {"modelparallel": smp_options},
"mpi": mpi_options
},
debugger_hook_config=False,
# disable_profiler=True,
metric_definitions = metric_definitions,
base_job_name="smp-training",
max_run=2400,
)
# +
# %%time
from sagemaker.inputs import TrainingInput
train = TrainingInput(s3_train_data, content_type='image/png',input_mode='File')
val = TrainingInput(s3_val_data, content_type='image/png',input_mode='File')
estimator.fit({'train':train, 'val': val})
# -
estimator.model_data
| chapter_6/2_distributed_model_parallel_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import numpy as np
import os
def write_pbs_head(fpbs, job_name):
fpbs.write('#! /bin/bash\n')
fpbs.write('#PBS -M <EMAIL>\n')
fpbs.write('#PBS -l nodes=1:ppn=24\n')
fpbs.write('#PBS -l walltime=72:00:00\n')
fpbs.write('#PBS -q common\n')
fpbs.write('#PBS -N %s\n' % job_name)
fpbs.write('\n')
fpbs.write('cd $PBS_O_WORKDIR\n')
fpbs.write('\n')
# +
# case 1
rs_list = [0.2, 0.5, 0.8]
ds_list = [0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.07, 0.05]
es_list = [-0.5, -1, -2, -3]
job_dir = 'sphere_converge'
if not os.path.exists(job_dir):
os.makedirs(job_dir)
for es in es_list:
t_name = os.path.join(job_dir, 'run_es%3.1f.sh' % np.abs(es))
with open(t_name, 'w') as frun:
# create .pbs file
for ds in ds_list:
for rs in rs_list:
job_name = 'rs%3.1f_ds%4.2f_es%3.1f' % (rs, ds, np.abs(es))
frun.write('mpirun -n 24 python ')
frun.write(' ../sphereInPipe.py ')
frun.write(' -ksp_max_it 1000 ')
frun.write(' -forcepipe R09l20 ')
frun.write(' -rs %f ' % rs)
frun.write(' -es %f ' % es)
frun.write(' -ds %f ' % ds)
frun.write(' -f %s ' % job_name)
frun.write(' > %s.txt \n' % job_name)
frun.write('echo %s finished \n\n ' % job_name)
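The `%`-format patterns above determine the job and script names; a quick standalone check of the naming scheme (the parameter values are just examples):

```python
import os
import numpy as np

rs, ds, es = 0.2, 0.05, -0.5
job_name = 'rs%3.1f_ds%4.2f_es%3.1f' % (rs, ds, np.abs(es))
script_name = os.path.join('sphere_converge', 'run_es%3.1f.sh' % np.abs(es))
print(job_name)      # rs0.2_ds0.05_es0.5
print(script_name)
```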
# +
# case 2
rs_list = [0.1, 0.3, 0.4, 0.6, 0.7]
ds_list = [0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.07, 0.05]
es_list = [-0.5, -1, -2, -3]
job_dir = 'sphere_converge'
if not os.path.exists(job_dir):
os.makedirs(job_dir)
for es in es_list:
t_name = os.path.join(job_dir, 'run2_es%3.1f.sh' % np.abs(es))
with open(t_name, 'w') as frun:
# create .pbs file
for ds in ds_list:
for rs in rs_list:
job_name = 'rs%3.1f_ds%4.2f_es%3.1f' % (rs, ds, np.abs(es))
frun.write('mpirun -n 24 python ')
frun.write(' ../sphereInPipe.py ')
frun.write(' -ksp_max_it 1000 ')
frun.write(' -forcepipe R09l20 ')
frun.write(' -rs %f ' % rs)
frun.write(' -es %f ' % es)
frun.write(' -ds %f ' % ds)
frun.write(' -f %s ' % job_name)
frun.write(' > %s.txt \n' % job_name)
frun.write('echo %s finished \n\n ' % job_name)
| sphereInPipe/generate_bash.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple pendulum using Lagrange's equation
#
# Defines a LagrangianPendulum class that is used to generate basic pendulum plots from solving Lagrange's equations.
#
# * Last revised 17-Mar-2019 by <NAME> (<EMAIL>).
# ## Euler-Lagrange equation
#
# For a simple pendulum, the Lagrangian with generalized coordinate $\phi$ is
#
# $\begin{align}
# \mathcal{L} = \frac12 m L^2 \dot\phi^2 - mgL(1 - \cos\phi)
# \end{align}$
#
# The Euler-Lagrange equation is
#
# $\begin{align}
# \frac{d}{dt}\frac{\partial\mathcal{L}}{\partial \dot\phi} = \frac{\partial\mathcal L}{\partial\phi}
# \quad\Longrightarrow\quad
# m L^2 \ddot \phi = -mgL\sin\phi
# \ \mbox{or}\ \ \ddot\phi + \omega_0^2\sin\phi = 0
# \;.
# \end{align}$
#
# ## Hamilton's equations
#
# The generalized momentum corresponding to $\phi$ is
#
# $\begin{align}
# \frac{\partial\mathcal{L}}{\partial \dot\phi} = m L^2 \dot\phi \equiv p_\phi
# \;.
# \end{align}$
#
# We can invert this equation to find $\dot\phi = p_\phi / m L^2$.
# Constructing the Hamiltonian by Legendre transformation we find
#
# $\begin{align}
# \mathcal{H} &= \dot\phi p_\phi - \mathcal{L} \\
# &= \frac{p_\phi^2}{m L^2} - \frac12 m L^2 \dot\phi^2 + mgL(1 - \cos\phi) \\
# &= \frac{p_\phi^2}{2 m L^2} + mgL(1 - \cos\phi)
# \;.
# \end{align}$
#
# Thus $\mathcal{H}$ is simply $T + V$. Hamilton's equations are
#
# $\begin{align}
# \dot\phi &= \frac{\partial\mathcal{H}}{\partial p_\phi} = \frac{p_\phi}{m L^2} \\
# \dot p_\phi &= -\frac{\partial\mathcal{H}}{\partial \phi} = -mgL \sin\phi
# \;.
# \end{align}$
# %matplotlib inline
# +
import numpy as np
from scipy.integrate import odeint, solve_ivp
import matplotlib.pyplot as plt
# +
# The dpi (dots-per-inch) setting will affect the resolution and how large
# the plots appear on screen and printed. So you may want/need to adjust
# the figsize when creating the figure.
plt.rcParams['figure.dpi'] = 100. # this is the default for notebook
# Change the common font size (smaller when higher dpi)
font_size = 10
plt.rcParams.update({'font.size': font_size})
# -
# ## Pendulum class and utility functions
class LagrangianPendulum():
"""
Pendulum class implements the parameters and Lagrange's equations for
a simple pendulum (no driving or damping).
Parameters
----------
L : float
length of the simple pendulum
g : float
gravitational acceleration at the earth's surface
omega_0 : float
natural frequency of the pendulum (\sqrt{g/l} where l is the
pendulum length)
mass : float
mass of pendulum
Methods
-------
dy_dt(t, y)
Returns the right side of the differential equation in vector y,
given time t and the corresponding value of y.
"""
def __init__(self, L=1., mass=1., g=1.
):
self.L = L
self.g = g
self.omega_0 = np.sqrt(g/L)
self.mass = mass
def dy_dt(self, t, y):
"""
This function returns the right-hand side of the diffeq:
[dphi/dt d^2phi/dt^2]
Parameters
----------
t : float
time
y : float
A 2-component vector with y[0] = phi(t) and y[1] = dphi/dt
Returns
-------
"""
return [y[1], -self.omega_0**2 * np.sin(y[0]) ]
def solve_ode(self, t_pts, phi_0, phi_dot_0,
abserr=1.0e-9, relerr=1.0e-9):
"""
Solve the ODE given initial conditions.
Specify smaller abserr and relerr to get more precision.
"""
y = [phi_0, phi_dot_0]
solution = solve_ivp(self.dy_dt, (t_pts[0], t_pts[-1]),
y, t_eval=t_pts,
atol=abserr, rtol=relerr)
phi, phi_dot = solution.y
return phi, phi_dot
class HamiltonianPendulum():
"""
Hamiltonian Pendulum class implements the parameters and Hamilton's equations for
a simple pendulum (no driving or damping).
Parameters
----------
L : float
length of the simple pendulum
g : float
gravitational acceleration at the earth's surface
omega_0 : float
natural frequency of the pendulum (\sqrt{g/l} where l is the
pendulum length)
mass : float
mass of pendulum
Methods
-------
dy_dt(t, y)
Returns the right side of the differential equation in vector y,
given time t and the corresponding value of y.
"""
def __init__(self, L=1., mass=1., g=1.
):
self.L = L
self.g = g
self.omega_0 = np.sqrt(g/L)
self.mass = mass
def dy_dt(self, t, y):
"""
This function returns the right-hand side of Hamilton's equations:
[dphi/dt, dp_phi/dt]
Parameters
----------
t : float
time
y : float
A 2-component vector with y[0] = phi(t) and y[1] = p_phi(t)
Returns
-------
"""
return [y[1] / (self.mass * self.L**2), -self.mass * self.g * self.L * np.sin(y[0])]
def solve_ode(self, t_pts, phi_0, p_phi_0,
abserr=1.0e-9, relerr=1.0e-9):
"""
Solve the ODE given initial conditions.
Specify smaller abserr and relerr to get more precision.
"""
y = [phi_0, p_phi_0]
solution = solve_ivp(self.dy_dt, (t_pts[0], t_pts[-1]),
y, t_eval=t_pts,
atol=abserr, rtol=relerr)
phi, p_phi = solution.y
return phi, p_phi
def plot_y_vs_x(x, y, axis_labels=None, label=None, title=None,
color=None, linestyle=None, semilogy=False, loglog=False,
ax=None):
"""
Generic plotting function: return a figure axis with a plot of y vs. x,
with line color and style, title, axis labels, and line label
"""
if ax is None: # if the axis object doesn't exist, make one
ax = plt.gca()
if (semilogy):
line, = ax.semilogy(x, y, label=label,
color=color, linestyle=linestyle)
elif (loglog):
line, = ax.loglog(x, y, label=label,
color=color, linestyle=linestyle)
else:
line, = ax.plot(x, y, label=label,
color=color, linestyle=linestyle)
if label is not None: # if a label if passed, show the legend
ax.legend()
if title is not None: # set a title if one if passed
ax.set_title(title)
if axis_labels is not None: # set x-axis and y-axis labels if passed
ax.set_xlabel(axis_labels[0])
ax.set_ylabel(axis_labels[1])
return ax, line
def start_stop_indices(t_pts, plot_start, plot_stop):
start_index = (np.fabs(t_pts-plot_start)).argmin() # index in t_pts array
stop_index = (np.fabs(t_pts-plot_stop)).argmin() # index in t_pts array
return start_index, stop_index
# ## Make simple pendulum plots
# +
# Labels for individual plot axes
phi_vs_time_labels = (r'$t$', r'$\phi(t)$')
phi_dot_vs_time_labels = (r'$t$', r'$d\phi/dt(t)$')
state_space_labels = (r'$\phi$', r'$d\phi/dt$')
# Common plotting time (generate the full time then use slices)
t_start = 0.
t_end = 50.
delta_t = 0.001
t_pts = np.arange(t_start, t_end+delta_t, delta_t)
L = 1.
g = 1.
mass = 1.
# Instantiate a pendulum
p1 = HamiltonianPendulum(L=L, g=g, mass=mass)
# +
# both plots: same initial conditions
phi_0 = (3./4.)*np.pi
phi_dot_0 = 0.
p_phi_0 = mass * L**2 * phi_dot_0  # generalized momentum p_phi = m L^2 dphi/dt
phi, p_phi = p1.solve_ode(t_pts, phi_0, p_phi_0)
phi_dot = p_phi / (mass * L**2)
# start the plot!
fig = plt.figure(figsize=(15,5))
overall_title = 'Simple pendulum from Lagrangian: ' + \
rf' $\omega_0 = {p1.omega_0:.2f},$' + \
rf' $\phi_0 = {phi_0:.2f},$' + \
rf' $\dot\phi_0 = {phi_dot_0:.2f}$' + \
'\n' # \n means a new line (adds some space here)
fig.suptitle(overall_title, va='baseline')
# first plot: phi plot
ax_a = fig.add_subplot(1,3,1)
start, stop = start_stop_indices(t_pts, t_start, t_end)
plot_y_vs_x(t_pts[start : stop], phi[start : stop],
axis_labels=phi_vs_time_labels,
color='blue',
label=None,
title=r'$\phi(t)$',
ax=ax_a)
# second plot: phi_dot plot
ax_b = fig.add_subplot(1,3,2)
start, stop = start_stop_indices(t_pts, t_start, t_end)
plot_y_vs_x(t_pts[start : stop], phi_dot[start : stop],
axis_labels=phi_dot_vs_time_labels,
color='blue',
label=None,
title=r'$\dot\phi(t)$',
ax=ax_b)
# third plot: state space plot
ax_c = fig.add_subplot(1,3,3)
start, stop = start_stop_indices(t_pts, t_start, t_end)
plot_y_vs_x(phi[start : stop], phi_dot[start : stop],
axis_labels=state_space_labels,
color='blue',
label=None,
title='State space',
ax=ax_c)
fig.tight_layout()
fig.savefig('simple_pendulum_Lagrange.png', bbox_inches='tight')
# -
# Now trying the power spectrum, plotting only positive frequencies and cutting off the lower peaks:
# +
start, stop = start_stop_indices(t_pts, t_start, t_end)
signal = phi[start:stop]
power_spectrum = np.abs(np.fft.fft(signal))**2
freqs = np.fft.fftfreq(signal.size, delta_t)
idx = np.argsort(freqs)
fig_ps = plt.figure(figsize=(5,5))
ax_ps = fig_ps.add_subplot(1,1,1)
ax_ps.semilogy(freqs[idx], power_spectrum[idx], color='blue')
ax_ps.set_xlim(0, 1.)
ax_ps.set_ylim(1.e5, 1.e11)
ax_ps.set_xlabel('frequency')
ax_ps.set_title('Power Spectrum')
fig_ps.tight_layout()
# -
| 2020_week_10/.ipynb_checkpoints/CDL_Copy_Lagrangian_pendulum-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/mengwangk/dl-projects/blob/master/04_02_auto_ml_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="4hyoPGdjpqa_"
# # Automated ML
# + colab={} colab_type="code" id="SLxr2k_ue8yq"
COLAB = True
# + colab={"base_uri": "https://localhost:8080/", "height": 306} colab_type="code" id="oy5ww2zRfFGG" outputId="1fe440e4-c49d-4c96-d882-e3c5a007b990"
if COLAB:
# !sudo apt-get install git-lfs && git lfs install
# !rm -rf dl-projects
# !git clone https://github.com/mengwangk/dl-projects
# #!cd dl-projects && ls -l --block-size=M
# + colab={} colab_type="code" id="G2xin10SfozR"
if COLAB:
# !cp dl-projects/utils* .
# !cp dl-projects/preprocess* .
# + colab={} colab_type="code" id="fC2-l3JBpqbE"
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# + colab={} colab_type="code" id="TP7V_IzepqbK"
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats as ss
import math
import matplotlib
from scipy import stats
from collections import Counter
from pathlib import Path
plt.style.use('fivethirtyeight')
sns.set(style="ticks")
# Automated feature engineering
import featuretools as ft
# Machine learning
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, precision_recall_curve, roc_curve, mean_squared_error, accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from IPython.display import display
from utils import *
from preprocess import *
# The Answer to the Ultimate Question of Life, the Universe, and Everything.
np.random.seed(42)
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="3bFT5CoxpqbP" outputId="d4526412-4822-4772-ccf1-f8361da6f411"
# %aimport
# + [markdown] colab_type="text" id="3E16jPVPpqbV"
# ## Preparation
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="U421BuhtfYS7" outputId="8c1dd574-a794-40c9-aac9-6fe0c61b779a"
if COLAB:
from google.colab import drive
drive.mount('/content/gdrive')
GDRIVE_DATASET_FOLDER = Path('gdrive/My Drive/datasets/')
# + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="9IgnETKkpqbX" outputId="8f4235b3-c55d-432d-8ac7-1999f0bc5f8a"
if COLAB:
DATASET_PATH = GDRIVE_DATASET_FOLDER
ORIGIN_DATASET_PATH = Path('dl-projects/datasets')
else:
DATASET_PATH = Path("datasets")
ORIGIN_DATASET_PATH = Path('datasets')
DATASET = DATASET_PATH/"feature_matrix.csv"
ORIGIN_DATASET = ORIGIN_DATASET_PATH/'4D.zip'
if COLAB:
# !ls -l gdrive/"My Drive"/datasets/ --block-size=M
# !ls -l dl-projects/datasets --block-size=M
# + colab={} colab_type="code" id="urQTD6DQNutw"
data = pd.read_csv(DATASET, header=0, sep=',', quotechar='"', parse_dates=['time'])
origin_data = format_tabular(ORIGIN_DATASET)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="4BjrERxV8WuT" outputId="606dab19-c17b-4ad4-a633-f908f985fb1a"
data.info()
# + [markdown] colab_type="text" id="vOYlp-8Br61r"
# ## Exploratory Data Analysis
# + colab={} colab_type="code" id="JnQXyVqng5Cm"
feature_matrix = data
# + colab={"base_uri": "https://localhost:8080/", "height": 629} colab_type="code" id="fa1Oc3LiiCIY" outputId="f26a315d-fb50-4a9c-e5a0-3743d6d34026"
feature_matrix.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="NwxxOED04A8X" outputId="fb537565-a1a3-4511-d4f7-17a7148dc090"
feature_matrix.head(4).T
# + colab={"base_uri": "https://localhost:8080/", "height": 359} colab_type="code" id="YvRCAb4e5AYH" outputId="06be3ab4-b606-470f-b091-c5bafa605126"
origin_data[origin_data['LuckyNo']==0].head(10)
# + colab={"base_uri": "https://localhost:8080/", "height": 334} colab_type="code" id="DNNrR3LvKOk1" outputId="4c3c4307-407a-4b44-eeac-b7956e4de137"
feature_matrix.groupby('time')['COUNT(Results)'].mean().plot()
plt.title('Average Monthly Count of Results')
plt.ylabel('Strike Per Number')
# + [markdown] colab_type="text" id="5G5SHX0qFVRa"
# ## Feature Selection
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="m-rXOEmJFVCl" outputId="38010b39-41d9-4160-b014-43c767fd9938"
from utils import feature_selection
# %load_ext autoreload
# %autoreload 2
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="C_1ACaOMFUp_" outputId="be3eab7b-631d-4551-a410-6c88ee1e179c"
feature_matrix_selection = feature_selection(feature_matrix.drop(columns = ['time', 'NumberId']))
# + colab={} colab_type="code" id="5WC-SEf3F0m4"
feature_matrix_selection['time'] = feature_matrix['time']
feature_matrix_selection['NumberId'] = feature_matrix['NumberId']
feature_matrix_selection['Label'] = feature_matrix['Label']
# + colab={"base_uri": "https://localhost:8080/", "height": 544} colab_type="code" id="Jnj8dp5bGRdk" outputId="eb7c7608-97a0-43c6-e933-ab519a0d2632"
feature_matrix_selection.columns
# + colab={} colab_type="code" id="-GTIZdItLnLa"
# + [markdown] colab_type="text" id="vt0maK--K2cQ"
# ## Correlations
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="_3TLGrLhK-va" outputId="b2749ae1-f71a-410b-f953-5bd3a60a32b8"
feature_matrix_selection.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="bKud_Z0yK-77" outputId="723044dd-41cf-4bc3-a48c-133cbe107fc5"
corrs = feature_matrix_selection.corr().sort_values('TotalStrike')
corrs['TotalStrike'].head()
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="fU43s9BtK_IZ" outputId="747ee597-f4e2-4458-ad9d-0d1bc1d48985"
corrs['Label'].dropna().tail(8)
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="l6nY6TTrBb3l" outputId="236e376a-12be-4519-992c-5476631cd4e8"
corrs['TotalStrike'].dropna().tail(8)
# + [markdown] colab_type="text" id="9kCNyWm1BgxF"
# ## Visualization
# + colab={} colab_type="code" id="GibUAbe5Byp0"
#pip install autoviz
# + colab={} colab_type="code" id="5FL2-Eb1Bm9p"
#from autoviz.AutoViz_Class import AutoViz_Class
# + colab={} colab_type="code" id="DC3Oqql7nD8-"
# + [markdown] colab_type="text" id="hTJQWmXAsCVo"
# ### CatBoost
# + colab={"base_uri": "https://localhost:8080/", "height": 887} colab_type="code" id="2paDS9f-iebd" outputId="e5c98da6-166a-4a2f-9fe7-9fcd931c39df"
# !pip install catboost
# !pip install ipywidgets
# + colab={} colab_type="code" id="lrgkK2hyARmn"
import catboost as cgb
# + colab={} colab_type="code" id="JuiMZl9rsMk1"
model = cgb.CatBoostClassifier(eval_metric="AUC", depth=10, iterations=500, l2_leaf_reg=9, learning_rate=0.15)
# + colab={} colab_type="code" id="DtkZVlf1sOq2"
def predict_dt(dt, feature_matrix, return_probs = False):
feature_matrix['date'] = feature_matrix['time']
# Subset labels
test_labels = feature_matrix.loc[feature_matrix['date'] == dt, 'Label']
train_labels = feature_matrix.loc[feature_matrix['date'] < dt, 'Label']
print(f"Size of test labels {len(test_labels)}")
print(f"Size of train labels {len(train_labels)}")
# Features
X_train = feature_matrix[feature_matrix['date'] < dt].drop(columns = ['NumberId', 'time',
'date', 'Label', 'TotalStrike', 'month', 'year', 'index'], errors='ignore')
X_test = feature_matrix[feature_matrix['date'] == dt].drop(columns = ['NumberId', 'time',
'date', 'Label', 'TotalStrike', 'month', 'year', 'index'], errors='ignore')
print(f"Size of X train {len(X_train)}")
print(f"Size of X test {len(X_test)}")
feature_names = list(X_train.columns)
# Impute and scale features
pipeline = Pipeline([('imputer', SimpleImputer(strategy = 'median')),
('scaler', MinMaxScaler())])
# Fit and transform training data
X_train = pipeline.fit_transform(X_train)
X_test = pipeline.transform(X_test)
# Labels
y_train = np.array(train_labels).reshape((-1, ))
y_test = np.array(test_labels).reshape((-1, ))
print('Training on {} observations.'.format(len(X_train)))
print('Testing on {} observations.\n'.format(len(X_test)))
# Train
model.fit(X_train, y_train)
# Make predictions
predictions = model.predict(X_test)
probs = model.predict_proba(X_test)[:, 1]
# Calculate metrics
p = precision_score(y_test, predictions)
r = recall_score(y_test, predictions)
f = f1_score(y_test, predictions)
auc = roc_auc_score(y_test, probs)
a = accuracy_score(y_test, predictions)
cm = confusion_matrix(y_test, predictions)
print(f'Precision: {round(p, 5)}')
print(f'Recall: {round(r, 5)}')
print(f'F1 Score: {round(f, 5)}')
print(f'ROC AUC: {round(auc, 5)}')
print(f'Accuracy: {round(a, 5)}')
print(probs)
print()
print(cm)
m = np.where(predictions==1)
print(len(m[0]), m)
# Feature importances
fi = pd.DataFrame({'feature': feature_names, 'importance': model.feature_importances_})
if return_probs:
return fi, probs
return fi
# + colab={} colab_type="code" id="2SVt7dq8QNVR"
# + colab={} colab_type="code" id="voYEeTn_QNp9"
# + colab={"base_uri": "https://localhost:8080/", "height": 629} colab_type="code" id="SwajXEsyuJOw" outputId="4dda44bf-d85f-4ba7-8d5c-dbe1f33fd5a1"
# All the months
len(feature_matrix_selection['time'].unique()), feature_matrix_selection['time'].unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="55CRISQM9VoV" outputId="82156585-9010-4d9f-cf54-fca939910718"
# %time june_2019 = predict_dt(pd.datetime(2019,6,1), feature_matrix_selection)
# + colab={"base_uri": "https://localhost:8080/", "height": 553} colab_type="code" id="VG_tWy2m9sjg" outputId="3d29e5d6-624e-43ad-9922-63eadf0456a4"
from utils import plot_feature_importances
norm_june_2019_fi = plot_feature_importances(june_2019)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="uB1HJTBvK9o4" outputId="1e5637a8-17a5-4647-b003-788cefb26ce9"
# %time july_2019 = predict_dt(pd.datetime(2019,7,1), feature_matrix_selection)
# + colab={"base_uri": "https://localhost:8080/", "height": 553} colab_type="code" id="FEYXPrTVK92i" outputId="df96dcca-0f6e-4774-9342-e9aa429514a8"
norm_july_2019_fi = plot_feature_importances(july_2019)
# + [markdown] colab_type="text" id="fG8qe5e3K-ZB"
# ### Tuning - GridSearch
# + colab={} colab_type="code" id="Tm2CtATFLFBD"
# + [markdown] colab_type="text" id="RHO8sHSWEXp6"
# ## Comparison to Baseline
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="XsPab_k6F7jq" outputId="4c04abf9-542a-4ef0-fd09-ae07162d45be"
a = np.array([0,0,0,1,0,1, 1])
print(len(a))
m = np.where(a==1)
print(len(m[0]), a[m[0]])
# + colab={} colab_type="code" id="09DxeG1aUVos"
| 04_02_auto_ml_5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv('stock_market.csv',low_memory=False)
print(df['closeDay0'].mean())
# What is the average closing price of a stock on its first day in the stock market?
| proyecto/firtsq.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import MultirefPredict
import qcelemental
mol = qcelemental.models.Molecule.from_data("""
O 0.000000000000 0.000000000000 -0.068516245955
H 0.000000000000 -0.790689888800 0.543701278274
H 0.000000000000 0.790689888800 0.543701278274
""")
b1 = MultirefPredict.diagnostic_factory("B1",molecule=mol).computeDiagnostic()
print(b1)
| Examples/Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # MrJob
#
# MrJob is a MapReduce framework which is able to run mappers and reducers:
#
# - Locally, using multiprocessing
# - On a Hadoop cluster
# - On Amazon EMR
#
# It's a powerful abstraction over the underlying cluster: it takes care of saving the data on the file system used by the cluster and of starting the processing nodes. Your Python code doesn't change whether you run it locally, on Hadoop, or on EMR.
#
# MrJob can be installed using
#
# $ pip install mrjob
#
# in your python environment.
#
# ## Protocols
#
# As the Hadoop Streaming I/O protocol is line based, you usually need to take care of properly splitting your data into lines and of properly escaping any newline or tab character inside the data.
#
# This is something MrJob does for you through **Protocols**: whenever you declare a MrJob process you can tell it to encode/decode data using the ``RawValueProtocol``, which uses the standard Hadoop line format, or the ``JSONProtocol``, which encodes values as JSON to represent something more complex than plain text.
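#
# As a rough, plain-Python sketch of the idea (these are hypothetical stand-ins, not the real mrjob protocol classes), a JSON-style protocol encodes a key/value pair as one tab-separated line and decodes it back:

```python
import json

# Hypothetical stand-ins imitating what a line-based protocol does:
# encode a (key, value) pair to one tab-separated line, and back.

def raw_value_write(key, value):
    # Raw-style protocol: plain text, key and value separated by a tab
    return "%s\t%s" % (key, value)

def json_protocol_write(key, value):
    # JSON-style protocol: both sides are JSON, so values can be lists or dicts
    return "%s\t%s" % (json.dumps(key), json.dumps(value))

def json_protocol_read(line):
    raw_key, raw_value = line.split("\t", 1)
    return json.loads(raw_key), json.loads(raw_value)

line = json_protocol_write("stats", {"words": 144, "phrases": 21})
key, value = json_protocol_read(line)
print(key, value["words"])  # stats 144
```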
#
# ## MrJob Steps
#
#
# MrJob provides far more steps than the classical Map/Reduce steps.
#
# The most common additional step is the **combiner**, which runs after the mapper and before the reducer. It can combine multiple values emitted by the same mapper into a single output. It usually looks like a **reducer**, but it runs on the output of a single mapper instead of receiving data from every mapper.
#
# The full list of all the available steps is:
#
# ### Main Steps ###
# - mapper()
# - combiner()
# - reducer()
#
# ### Initialization ###
# - mapper_init()
# - combiner_init()
# - reducer_init()
#
# ### Finalization ###
# - mapper_final()
# - combiner_final()
# - reducer_final()
#
# ### Filtering ###
# - mapper_pre_filter()
# - combiner_pre_filter()
# - reducer_pre_filter()
#
#
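# Before moving on, the role of a combiner can be simulated in plain Python (no Hadoop involved; the helper names here are illustrative): each combiner pre-aggregates the output of a single mapper, so the reducer receives far fewer pairs.

```python
from collections import Counter

def mapper(line):
    # Emit (word, 1) for every word, exactly like the word count example
    for word in line.split():
        yield word.lower(), 1

def combiner(pairs):
    # Runs on a single mapper's output only, pre-summing duplicate keys
    combined = Counter()
    for word, count in pairs:
        combined[word] += count
    return list(combined.items())

def reducer(all_pairs):
    # Receives the combined pairs from every mapper
    totals = Counter()
    for word, count in all_pairs:
        totals[word] += count
    return dict(totals)

lines = ["the cat and the dog", "the dog barks"]
combined_per_mapper = [combiner(mapper(line)) for line in lines]
result = reducer(pair for pairs in combined_per_mapper for pair in pairs)
print(result["the"])  # 3
```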
# # Writing MrJob Processes
#
#
# MrJob processes are created by declaring a class that inherits from ``mrjob.job.MRJob`` and declares methods named after the MapReduce steps. Each method is required to *emit* key, value pairs through the ``yield`` expression.
#
# This means that each MapReduce step doesn't have to accumulate the values it has to emit: it is a generator that sends each value to the next step as soon as it is emitted and then goes on emitting new ones.
#
# Due to the way *Hadoop Streaming* works, each MrJob process must be a standalone Python module which can be started from the command line. This is achieved in Python using the
#
# ```
# if __name__ == '__main__':
# do_something()
# ```
#
# which makes the module executable (``__name__`` is ``__main__`` only when the .py file is started from the command line). Whenever the script is started as a standalone executable, rather than imported by other Python software, ``do_something()`` runs.
#
# For MrJob, in place of ``do_something()`` we call ``MrJobProcessName.run()``, which actually starts the MrJob process.
#
# The reason for this is that your **.py** file has to be copied to the Hadoop cluster to be executed, where it must run as standalone software started by Hadoop Streaming.
# +
from mrjob.job import MRJob
class MRWordFreqCount(MRJob):
def mapper(self, _, line):
for word in line.split():
yield word.lower(), 1
def reducer(self, word, counts):
yield word, sum(counts)
if __name__ == '__main__':
try:
MRWordFreqCount.run()
except TypeError:
print 'MrJob cannot work inside iPython Notebook as it is not saved as a standalone .py file'
# -
# # Starting MrJob
#
# Supposing you saved the previous script as **wordcount.py**, you can start it using:
#
# ```
# $ python 00_wordcount.py lorem.txt
# ```
#
# Where lorem.txt is the input of the data (in this case plain text):
#
# ```
# Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque molestie lacus a iaculis tempus. Nam lorem nulla, viverra non pulvinar ut, fermentum et tortor. Cras vitae libero sed purus venenatis posuere. Proin commodo risus augue, vitae suscipit lectus accumsan sit amet. Praesent eu erat sem. Pellentesque interdum porta libero, et ultrices nunc eleifend sit amet. In in mauris nec elit ullamcorper ultrices at ac ante. Suspendisse potenti. Aenean eu nisl in ante adipiscing imperdiet. Ut pulvinar lectus quis feugiat adipiscing.
# Nunc vulputate mauris congue diam ultrices aliquet. Nulla pharetra laoreet est quis vestibulum. Quisque feugiat pharetra sagittis. Phasellus nulla massa, sodales a suscipit blandit, facilisis eu augue. Cras mi massa, ullamcorper nec tristique at, convallis quis eros. Mauris non fermentum lacus, vitae tristique tellus. In volutpat metus augue, nec laoreet ante hendrerit vitae. Vivamus id lacus nec orci tristique vulputate.
# ```
#
# When you run the mrjob process you will get something like:
#
# ```
# creating tmp directory /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075
# writing to /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075/step-0-mapper_part-00000
# Counters from step 1:
# (no counters found)
# writing to /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075/step-0-mapper-sorted
# > sort /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075/step-0-mapper_part-00000
# writing to /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075/step-0-reducer_part-00000
# Counters from step 1:
# (no counters found)
# Moving /<KEY>T/wordcount.amol.20140611.163251.274075/step-0-reducer_part-00000 -> /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075/output/part-00000
# ```
#
# And final output will be
#
# ```
# Streaming final output from /var/folders/js/ykgc_8hj10n1fmh3pzdkw2w40000gn/T/wordcount.amol.20140611.163251.274075/output
# "a" 2
# "ac" 1
# "accumsan" 1
# "adipiscing" 3
# "aenean" 1
# "aliquet." 1
# ```
#
# Note that MrJob is taking care of copying the inputs and outputs of the various steps to temporary directories (in this case on my own computer, as I'm not starting it on a Hadoop cluster), and then returns the output.
#
# Note that the output is encoded according to the Hadoop Streaming protocol, with the word and the count separated by a tab, as those are the key and value emitted by our reducer.
#
# ## Hint
#
# To automatically save your result to a file you can simply use bash output redirection:
#
# ```
# $ python 00_wordcount.py lorem.txt > result
# ```
# # What can be emitted?
#
# We have just counted words, which is the same example used to explain MapReduce. What if we want to do something else, where the data we emit doesn't come from the input itself?
#
# MapReduce doesn't make any assumptions about the data you emit: neither the key nor the value has to be correlated to the input in any way, and each mapper can emit any number of pairs.
#
# So if we want to get some statistics on the text, like words, characters and phrases, we can easily achieve it:
# +
from mrjob.job import MRJob
class MRTextInfo(MRJob):
def mapper(self, _, line):
for phrase in line.split('.'):
yield 'phrases', 1
for word in phrase.split():
yield 'words', 1
yield 'characters', len(word)
def reducer(self, key, counts):
yield key, sum(counts)
if __name__ == '__main__':
try:
MRTextInfo.run()
except TypeError:
print 'MrJob cannot work inside iPython Notebook as it is not saved as a standalone .py file'
# -
# Final output will be something like:
#
# Creating temp directory /var/folders/_x/g5brlyv963vclshf_kffdm440000gn/T/01_text_info.alexcomu.20160610.150420.364592
#
# Running step 1 of 1...
#
# Streaming final output from /var/folders/_x/g5brlyv963vclshf_kffdm440000gn/T/01_text_info.alexcomu.20160610.150420.364592/output...
#
# "characters" 1258
# "phrases" 21
# "words" 144
#
#
#
# ## MultiStep Jobs
#
# There are cases when it is convenient to run multiple MapReduce steps on a single input to actually provide the expected output.
#
# If you want multiple steps in place of the plain mapper and reducer steps, you can specify them in MrJob using the ``steps()`` method, which is called by MrJob to learn the actual MapReduce steps to run in place of the standard ones.
#
# We are going to create a multistep job that gets the most frequent word in the text and only returns that one.
# The first pass through MapReduce will be our previous word frequency counter; then we need to filter the most frequent word out of all the counted ones.
# +
from mrjob.job import MRJob
from mrjob.step import MRStep
class MRMostFreqWord(MRJob):
def steps(self):
return [
MRStep(mapper=self.mapper_wordcount,
reducer=self.reducer_wordcount),
MRStep(mapper=self.mapper_freq,
reducer=self.reducer_freq),
MRStep(mapper=self.mapper_most,
reducer=self.reducer_most)
]
def mapper_wordcount(self, _, line):
for word in line.split():
if len(word)>2:
yield word.lower(), 1 # return only word with at least 3 letters
def reducer_wordcount(self, word, counts):
yield word, sum(counts) # Sum occurrences of each word to get frequency
def mapper_freq(self, word, total):
if total > 1: # Only get words that appear more than once
yield total, word # Group them by frequency
def reducer_freq(self, total, words):
yield total, words.next() # .next() gets the first element, so we emit only one word for each frequency
def mapper_most(self, freq, word):
yield 'most_used', [freq, word] # Group all the words together in a list of (frequency, word) tuples
def reducer_most(self, _, freqs):
yield 'most_used', max(freqs) # Get only the most frequent word
if __name__ == '__main__':
try:
MRMostFreqWord.run()
except TypeError:
print 'MrJob cannot work inside iPython Notebook as it is not saved as a standalone .py file'
# -
# Output of our script when launched against the lorem.txt file will be:
#
# ```
# "most_used" [4, "nec"]
# ```
#
# Which is actually a conjunction: our rule of filtering out words shorter than 3 characters didn't remove all the conjunctions, so we can run the script again with a better filter (using, for example, regular expressions).
#
# +
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
## MultiStep Example
WORD_REGEXP = re.compile(r"[\w']+")
class MRMostFreqWord(MRJob):
def steps(self):
return [
MRStep(mapper=self.mapper_wordcount,
reducer=self.reducer_wordcount),
MRStep(mapper=self.mapper_freq,
reducer=self.reducer_freq),
MRStep(mapper=self.mapper_most,
reducer=self.reducer_most)
]
def mapper_wordcount(self, _, line):
words = WORD_REGEXP.findall(line)
for w in words:
if len(w)>3:
yield w.lower(), 1
#for word in line.split():
# if len(word)>4:
# yield word.lower(), 1 # return only word with at least 3 letters
def reducer_wordcount(self, word, counts):
yield word, sum(counts) # Sum occurrences of each word to get frequency
def mapper_freq(self, word, total):
if total > 1: # Only get words that appear more than once
yield total, word # Group them by frequency
def reducer_freq(self, total, words):
yield total, words.next() # .next() gets the first element, so we emit only one word for each frequency
def mapper_most(self, freq, word):
yield 'most_used', [freq, word] # Group all the words together in a list of (frequency, word) tuples
def reducer_most(self, _, freqs):
yield 'most_used', max(freqs) # Get only the most frequent word
if __name__ == '__main__':
try:
MRMostFreqWord.run()
except TypeError:
print 'MrJob cannot work inside iPython Notebook as it is not saved as a standalone .py file'
# -
# In this case we will have a better result:
#
# ```
# "most_used" [4, "vitae"]
# ```
#
# # Let's Play!
#
# ### Exercise 1
#
# From the lorem.txt file, calculate how many words start with each letter of the alphabet.
#
# ### Exercise 2
#
# From the lorem.txt file, calculate how many words start with each letter of the alphabet, and print out the max and the min.
#
# ### Exercise 3
#
# Write a MapReduce job that reports the most frequent word grouped by word length.
#
# ES:
#
# ```
# 5 Chars -> hello -> 8 occurrences
# 3 Chars -> cat -> 4 occurrences
# 2 Chars -> at -> 7 occurrences
# ```
# # Programmatically Running Jobs
#
# While starting jobs from the command line is a good way to test them, it is often the case that you have to visualize the resulting data. So we need to be able to start the MapReduce job from our software, get the output and send it to the HTML layer for visualization.
#
# This can be achieved using MrJob Runners, which permit starting the execution programmatically and reading the output.
# Keep in mind that, as the MrJob process must live in a separate **.py** file, the runner must be kept separate from it.
# So the runner will live in our application, while the actual MrJob class will be in a separate module that can be sent to Hadoop for execution.
#
# Our runner for the WordFreqCount job will look like (check the folder 06_runner):
#
#
# +
from wordcount import MRWordFreqCount
mr_job = MRWordFreqCount()
mr_job.stdin = open('lorem.txt').readlines()
with mr_job.make_runner() as runner:
runner.run()
for line in runner.stream_output():
key, value = mr_job.parse_output_line(line)
print 'Word:', key, 'Count:', value
# -
# The ``stdin`` parameter of the job is the actual input it will receive.
# As Hadoop inputs are line separated we need to pass a list of strings, one for each line.
# Since our data is already text on multiple lines, we can just read the lines of the text file.
#
# Then the runner will provide back the output through the ``stream_output`` function, which is a generator that returns a single HadoopStreaming output line. This has to be parsed according to the communication protocol, so we need to call ``parse_output_line`` to get back the emitted **key** and **value**.
#
# Then we have the emitted values and we can use them as needed. In this case we just print them.
# Note that differently from when you run MrJob manually, in this case there is no output apart from our own prints.
#
# So if you start the runner without printing anything it will just do nothing.
# # Exercise - MovieData DB
#
# * Download MovieData DB (100K) from http://grouplens.org/datasets/movielens/
#
# I already downloaded the dataset inside the folder ``/PATH/TO/MRJOB/ROOT/examples/_dataset``. Unzip the folder and let's start!
#
# We have (from ``u.info`` file):
#
# 943 users
# 1682 films
# 100000 ratings
#
# We'll use the file u.data which contains (split by TAB):
#
# user id | film id | rating | timestamp
# 299 144 4 877881320
#
# ### Exercise 4 - Rating Counter
#
# Count the occurrences of each rating value in the movie DB.
#
# ### Exercise 5 - Most Rated Movie
#
# Count the number of ratings for each movie in the movie DB and find the most rated one.
#
# ### Exercise 6 - Quick Lookup
#
# Add to exercise 2 the information about the movie (use the file ``u.item``).
#
#
# # Exercise - Fake Friends DB
#
# Inside the folder ``/PATH/TO/MRJOB/ROOT/examples/_dataset`` you will find a CSV file called ``fakefriends.csv``. Inside this file there is a fake list of users with their number of friends. This is the format:
#
# ID, Name, Age, Number of Friends
# 0,Will,33,385
# ### Exercise 7 - User with max friends
#
# Find the user with the MAX number of friends, and do the same for the MIN.
#
# ### Exercise 8 - Friends Average per Age
#
# For each age, calculate the average number of friends.
| DataScience/MrJob.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DoE - Blocking
# The purpose of this notebook is to show you an example of how to apply the DoE technique called **blocking**, building on top of the ANOVA exercise done previously.
#
# We suppose that the experiment to determine which set of parameters is better is replicated for both a summer and winter day.
# To remove the potential dependence on the time at which the test is executed, blocking is applied to the 'season' parameter.
import random
import pandas as pd
import HEMS_sim
# +
# Dictionary with basic configuration of the simulation
basic_conf = {
'ID':'00',
'batt_storage_capacity':20,
'batt_charge_capacity':5,
'pv1_scaling':1,
'controller_change_rate':0.5,
'climate_conditions':'sunny',
'season':'autumn',
'random_weather':False,
'stochastic':True,
'noise_scale':1}
# +
# Scenario name which determines the name of the files we will be saving with the results
scenario_name = 'with_noise_3'
random.seed(23)
# Selecting the variations we are looking at
ccr_variations = [0.3, 0.7] # Controller charge rate
batt_storage_variations = [15, 25] # Battery size in kWh
seasons = ['summer', 'winter']
# Selected treatments are all combinations of ccr and battery variations (Full factorial Design)
from itertools import product
ffdes = [list(treatment) for treatment in product(ccr_variations, batt_storage_variations)]
# Each season is a block, with each block repeating the treatments in a random order
blocks = {s: random.sample(ffdes, k=len(ffdes)) for s in seasons}
# Dictionary with the variations we want to introduce
variations = {
'run_{:02d}'.format(i+1): {
'ID':'{:02d}'.format(i+1),
'season': block,
'controller_change_rate': treatment[0],
'batt_storage_capacity': treatment[1],
} for i, (block, treatment) in enumerate(
[(block, treatment) for block, treatments in blocks.items() for treatment in treatments])
}
# Merging of the basic configuration and the variations
recipes = {key: {**(basic_conf.copy()),**data} for key,data in variations.items()}
# -
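# A quick, self-contained check of the blocking idea used above, on throwaway data mirroring the design: every block (season) must contain each treatment exactly once; only the run order inside a block is randomized.

```python
import random
from itertools import product

random.seed(23)

# Full factorial treatments: ccr x battery size, as in the cell above
treatments = [list(t) for t in product([0.3, 0.7], [15, 25])]
blocks = {s: random.sample(treatments, k=len(treatments))
          for s in ['summer', 'winter']}

for season, runs in blocks.items():
    # Same set of treatments in every block, order possibly shuffled
    assert sorted(map(tuple, runs)) == sorted(map(tuple, treatments))
print(len(blocks['summer']), len(blocks['winter']))  # 4 4
```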
for run_id, recipe in recipes.items():
print(f"Executing run {run_id}")
HEMS_sim.run_simulation(scenario_name,recipe)
run_store = pd.HDFStore('temp_files/runs_summary_{}.h5'.format(scenario_name))
summaries = pd.concat([run_store[k] for k in run_store.keys()], axis=0).set_index('ID')
run_store.close()
summaries
# +
from scipy import stats
A_hi = 0.7
A_lo = 0.3
A_c = 'controller_change_rate'
B_hi = 25
B_lo = 15
B_c = 'battery storage capacity [kWh]'
block_c = 'season'
block_hi = 'summer'
block_lo = 'winter'
outcome_column = 'Self consumption index'
# Effect of A (controller change rate) alone
g1 = summaries[summaries[A_c]==A_hi].index
g2 = summaries[summaries[A_c]==A_lo].index
Fa, pa = stats.f_oneway(summaries.loc[g1, outcome_column], summaries.loc[g2, outcome_column])
# Effect of B (battery storage size) alone
g1 = summaries[summaries[B_c]==B_hi].index
g2 = summaries[summaries[B_c]==B_lo].index
Fb, pb = stats.f_oneway(summaries.loc[g1, outcome_column], summaries.loc[g2, outcome_column])
# Interaction effect of A and B (pandas Series need element-wise & and |, not `and`/`or`)
g1 = summaries[((summaries[A_c]==A_hi) & (summaries[B_c]==B_hi)) | ((summaries[A_c]==A_lo) & (summaries[B_c]==B_lo))].index
g2 = summaries[((summaries[A_c]==A_hi) & (summaries[B_c]==B_lo)) | ((summaries[A_c]==A_lo) & (summaries[B_c]==B_hi))].index
Fab, pab = stats.f_oneway(summaries.loc[g1, outcome_column], summaries.loc[g2, outcome_column])
# Effect of the blocks (season): summer runs vs winter runs
g1 = summaries[summaries[block_c]==block_hi].index
g2 = summaries[summaries[block_c]==block_lo].index
Fbl, pbl = stats.f_oneway(summaries.loc[g1, outcome_column], summaries.loc[g2, outcome_column])
print(Fa, Fb, Fab, Fbl)
print(pa, pb, pab, pbl)
# A = ccr, B = battery size; the eight runs cover every combination of
# block (summer/winter), A (low/high) and B (low/high)
# -
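# For reference, this is roughly what `stats.f_oneway` computes for two groups — a dependency-free sketch useful for sanity-checking an F value by hand. The numbers below are made up for illustration, not simulation output.

```python
def f_oneway_two_groups(a, b):
    # One-way ANOVA F statistic for exactly two groups:
    # F = (between-group mean square) / (within-group mean square)
    mean = lambda xs: sum(xs) / len(xs)
    ga, gb, grand = mean(a), mean(b), mean(a + b)
    # Between-group sum of squares (1 degree of freedom for 2 groups)
    ss_between = len(a) * (ga - grand) ** 2 + len(b) * (gb - grand) ** 2
    # Within-group sum of squares (n - 2 degrees of freedom)
    ss_within = (sum((x - ga) ** 2 for x in a)
                 + sum((x - gb) ** 2 for x in b))
    return ss_between / (ss_within / (len(a) + len(b) - 2))

summer = [0.61, 0.64, 0.58, 0.66]  # made-up self-consumption values
winter = [0.31, 0.35, 0.28, 0.33]
print(f_oneway_two_groups(summer, winter))
```

A large F (with a small p) means the between-group spread dominates the within-group noise, which is what the block comparison above is testing for the season effect.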
| DoE-exercises/3_Blocking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # String Formatting
#
# String formatting lets you inject items into a string rather than trying to chain items together using commas or string concatenation. As a quick comparison, consider:
#
# player = 'Thomas'
# points = 33
#
# 'Last night, '+player+' scored '+str(points)+' points.' # concatenation
#
# f'Last night, {player} scored {points} points.' # string formatting
#
#
# There are three ways to perform string formatting.
# * The oldest method involves placeholders using the modulo `%` character.
# * An improved technique uses the `.format()` string method.
# * The newest method, introduced with Python 3.6, uses formatted string literals, called *f-strings*.
#
# Since you will likely encounter all three versions in someone else's code, we describe each of them here.
# ## Formatting with placeholders
# You can use <code>%s</code> to inject strings into your print statements. The modulo `%` is referred to as a "string formatting operator".
print("I'm going to inject %s here." %'something')
# You can pass multiple items by placing them inside a tuple after the `%` operator.
print("I'm going to inject %s text here, and %s text here." %('some','more'))
# You can also pass variable names:
x, y = 'some', 'more'
print("I'm going to inject %s text here, and %s text here."%(x,y))
# ### Format conversion methods.
# It should be noted that two methods <code>%s</code> and <code>%r</code> convert any python object to a string using two separate methods: `str()` and `repr()`. We will learn more about these functions later on in the course, but you should note that `%r` and `repr()` deliver the *string representation* of the object, including quotation marks and any escape characters.
print('He said his name was %s.' %'Fred')
print('He said his name was %r.' %'Fred')
# As another example, `\t` inserts a tab into a string.
print('I once caught a fish %s.' %'this \tbig')
print('I once caught a fish %r.' %'this \tbig')
# The `%s` operator converts whatever it sees into a string, including integers and floats. The `%d` operator converts numbers to integers first, without rounding. Note the difference below:
print('I wrote %s programs today.' %3.75)
print('I wrote %d programs today.' %3.75)
# ### Padding and Precision of Floating Point Numbers
# Floating point numbers use the format <code>%5.2f</code>. Here, <code>5</code> would be the minimum number of characters the string should contain; these may be padded with whitespace if the entire number does not have this many digits. Next to this, <code>.2f</code> stands for how many numbers to show past the decimal point. Let's see some examples:
print('Floating point numbers: %5.2f' %(13.144))
print('Floating point numbers: %1.0f' %(13.144))
print('Floating point numbers: %1.5f' %(13.144))
print('Floating point numbers: %10.2f' %(13.144))
print('Floating point numbers: %25.2f' %(13.144))
# For more information on string formatting with placeholders visit https://docs.python.org/3/library/stdtypes.html#old-string-formatting
# ### Multiple Formatting
# Nothing prohibits using more than one conversion tool in the same print statement:
print('First: %s, Second: %5.2f, Third: %r' %('hi!',3.1415,'bye!'))
# ## Formatting with the `.format()` method
# A better way to format objects into your strings for print statements is with the string `.format()` method. The syntax is:
#
# 'String here {} then also {}'.format('something1','something2')
#
# For example:
print('This is a string with an {}'.format('insert'))
# ### The .format() method has several advantages over the %s placeholder method:
# #### 1. Inserted objects can be called by index position:
print('The {2} {1} {0}'.format('fox','brown','quick'))
# #### 2. Inserted objects can be assigned keywords:
print('First Object: {a}, Second Object: {b}, Third Object: {c}'.format(a=1,b='Two',c=12.3))
# #### 3. Inserted objects can be reused, avoiding duplication:
print('A %s saved is a %s earned.' %('penny','penny'))
# vs.
print('A {p} saved is a {p} earned.'.format(p='penny'))
# ### Alignment, padding and precision with `.format()`
# Within the curly braces you can assign field lengths, left/right alignments, rounding parameters and more
print('{0:8} | {1:9}'.format('Fruit', 'Quantity'))
print('{0:8} | {1:9}'.format('Apples', 3.))
print('{0:8} | {1:9}'.format('Oranges', 10))
# By default, `.format()` aligns text to the left, numbers to the right. You can pass an optional `<`,`^`, or `>` to set a left, center or right alignment:
print('{0:<8} | {1:^8} | {2:>8}'.format('Left','Center','Right'))
print('{0:<8} | {1:^8} | {2:>8}'.format(11,22,33))
# You can precede the alignment operator with a padding character:
print('{0:=<8} | {1:-^8} | {2:.>8}'.format('Left','Center','Right'))
print('{0:=<8} | {1:-^8} | {2:.>8}'.format(11,22,33))
# Field widths and float precision are handled in a way similar to placeholders. The following two print statements are equivalent:
print('This is my ten-character, two-decimal number:%10.2f' %13.579)
print('This is my ten-character, two-decimal number:{0:10.2f}'.format(13.579))
# Note that there are 5 spaces following the colon, and 5 characters taken up by 13.58, for a total of ten characters.
#
# For more information on the string `.format()` method visit https://docs.python.org/3/library/string.html#formatstrings
# ## Formatted String Literals (f-strings)
# Introduced in Python 3.6, f-strings offer several benefits over the older `.format()` string method described above. For one, you can bring outside variables immediately into the string rather than pass them as arguments through `.format(var)`.
# +
name = 'Fred'
print(f"He said his name is {name}.")
# -
# Pass `!r` to get the string representation:
print(f"He said his name is {name!r}")
# #### Float formatting follows `"result: {value:{width}.{precision}}"`
# Where with the `.format()` method you might see `{value:10.4f}`, with f-strings this can become `{value:{10}.{6}}`
#
num = 23.45678
print("My 10 character, four decimal number is:{0:10.4f}".format(num))
print(f"My 10 character, four decimal number is:{num:{10}.{6}}")
# Note that with f-strings, *precision* refers to the total number of digits, not just those following the decimal. This fits more closely with scientific notation and statistical analysis. Unfortunately, f-strings do not pad to the right of the decimal, even if precision allows it:
num = 23.45
print("My 10 character, four decimal number is:{0:10.4f}".format(num))
print(f"My 10 character, four decimal number is:{num:{10}.{6}}")
# If this becomes important, you can always use `.format()` method syntax inside an f-string:
num = 23.45
print("My 10 character, four decimal number is:{0:10.4f}".format(num))
print(f"My 10 character, four decimal number is:{num:10.4f}")
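# As a quick check of the nested-brace syntax, the width and precision can themselves come from variables (the variable names here are just illustrative):

```python
value = 23.45678
width, precision = 10, 4
# With a trailing "f", precision counts decimal places, just as in .format()
print(f"{value:{width}.{precision}f}")  # prints '   23.4568' (10 characters wide)
```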
# For more info on formatted string literals visit https://docs.python.org/3/reference/lexical_analysis.html#f-strings
# Those are the basics of string formatting!
| Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/.ipynb_checkpoints/03-Print Formatting with Strings-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt


# Plots the unit circle, its image under `matriz`, and the matrix's column vectors.
# `graficarVectores` is assumed to be available from the same funciones_auxiliares package.
def graficarMatriz(matriz, vectorCol=['red', 'blue']):
    # Unit circle
    x = np.linspace(-1, 1, 100000)
    y = np.sqrt(1 - x**2)
    # Unit circle transformed by the matrix (upper and lower halves)
    x1 = matriz[0,0]*x + matriz[0,1]*y
    y1 = matriz[1,0]*x + matriz[1,1]*y
    x1_neg = matriz[0,0]*x - matriz[0,1]*y
    y1_neg = matriz[1,0]*x - matriz[1,1]*y
    # Column vectors of the matrix
    u1 = [matriz[0,0], matriz[1,0]]
    v1 = [matriz[0,1], matriz[1,1]]
    graficarVectores([u1, v1], cols=[vectorCol[0], vectorCol[1]])
    plt.plot(x1, y1, 'green', alpha=0.7)
    plt.plot(x1_neg, y1_neg, 'green', alpha=0.7)
| ML Micro Projects/Algebra Lineal aplicado en ML/funciones_auxiliares/.ipynb_checkpoints/graficarMatriz-checkpoint.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width = 400, align = "center"></a>
#
#
# <h1 align=center><font size = 5>RANDOM FORESTS IN R</font></h1>
# In this notebook, we will be going over what Random Forests are, what they are used for, and how to use them in an R environment.
#
# ---
#
# ## Why do we need Random Forests?
# You might be familiar with the concept of Decision Trees -- a predictive model which can be used to classify data in a wide array of applications. Decision Trees are created through observation of data points: a model is built by observing the features present in each point labeled with a certain class, and then associating probabilities with these features.
#
# <center>
# <a title="By <NAME> (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File%3ACART_tree_titanic_survivors.png"><img width="320" alt="CART tree titanic survivors" src="https://upload.wikimedia.org/wikipedia/commons/f/f3/CART_tree_titanic_survivors.png"/></a>
# <font size="2">*Example of a Decision Tree for the Titanic dataset.*</font>
# </center>
#
# Decision Trees are very interesting because one can follow the structure created to understand how the class was inferred. However, this kind of model is not without its own problems. One of the main problems is what is called **overfitting**. Overfitting happens when the process of creating the tree makes it so that the tree is extremely ramified and complex -- this means that the model will not generalize correctly.
#
# This can mean that the data points are too varied, or maybe that there are too many features to be analyzed at once. However, if we cut down the number of data points or features, this might make our model worse. So, we would need another kind of solution for this problem.
#
# ## What are Random Forests
#
# Random Forests are one of the proposed solutions. As one might infer from its name, Random Forests are composed of multiple Decision Trees. This makes them part of a family of models -- that are composed of other models working in tandem -- called **ensemble learning models**. The main concept behind Random Forests is that, if you partition the data that would be used to create a single decision tree into different parts, create one tree for each of these partitions, and then use a method to "average" the results of all of these different trees, you should end up with a better model. In the case of trees used for classification, this "average" is the **mode** of the set of trees in the forest. For regression, the "average" is the **mean** of the set of trees in the forest.
#
# The main mechanism behind Random Forests is **bagging**, which is shorthand for **bootstrap aggregating**. Bagging is the concept of randomly sampling some data from a dataset, but **with replacement**. What this means in practice is that there is some amount of data that will be repeated in each partition, and some amount of data that will not be represented in the samples -- about 63% of the unique examples are kept -- this makes it so that the model generated for that bag is able to generalize better to some degree. Each partition of data of our training data for the Random Forest applies this concept.
#
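# The "about 63%" figure is easy to check with a quick simulation (written in Python here purely for illustration; the fraction of unique examples in a bootstrap sample converges to $1 - 1/e \approx 0.632$ as the dataset grows):

```python
import random

random.seed(0)  # illustrative seed, for reproducibility only
n = 10_000
# One "bag": n draws with replacement from a dataset of n points
bag = [random.randrange(n) for _ in range(n)]
unique_fraction = len(set(bag)) / n
print(unique_fraction)  # close to 1 - 1/e, i.e. about 0.632
```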
# <center>
# <img src="https://ibm.box.com/shared/static/5m7lep2u6fzt6ors1b0kpgv0jtzh3z7z.png" width="480">
# <font size="2">*Bagging example. Notice how some data points are repeated -- this is intentional!*</font>
# </center>
#
# You might be asking yourself what happens to the data that is not present in the "bags". This data, aptly called *Out-Of-Bag Data*, serves as a kind of **testing data for the generated model** -- which serves as validation that our model works!
#
# Additionally, Random Forests apply **feature bagging** as well, which prevents the overfitting caused by a large number of features relative to a small amount of data. For example, if a few features are very strong predictors, they will be present in a large number of "bags", and these bags will become correlated. Sampling the features keeps the Random Forest from focusing only on what strongly predicts the data it was fed, making the model generalize better. Traditionally, a dataset with a number $f$ of features will have $\left\lceil{\sqrt{f}}\ \right\rceil$ features in each partition.
#
# <center>
# <img src="https://ibm.box.com/shared/static/a4b0d3eg7vtuh8wipj9eo4bat9szow67.png" width="720">
# <font size="2">*Example of a Random Forest. Don't forget that the bags can have repeated data points!*</font>
# </center>
#
# ---
#
# ## Using Random Forests in R
# Now that you know what Random Forests are, we can move on to use them in R. Conveniently enough, CRAN (R's library repository) has a library ready for us -- aptly named `randomForest`. However, we first need to install it. You can do that by running the code cell below.
# Install the package "randomForest".
install.packages("randomForest")
# Once you install it, you won't need to install it again. You just need to load it up whenever you are going to utilize it. You can do that by using the `library` command.
# Load the "randomForest" library into the R context.
library(randomForest)
# We can now go ahead and create the model. For this example, we will be using the built-in `iris` dataset. Feel free to try using other datasets!
#
# To create the model, we will use the `randomForest` function. It has a wide array of parameters for customization, but the simplest approach is just to provide it with a `formula` and the dataset to infer the probabilities from. This can be seen in the following code:
# +
# Create the Random Forest model.
# The randomForest function accepts a "formula" structure as its main parameter. In this case, "Species" will be the variable
# to be predicted, while the others will be the predictors.
myLittleForest <- randomForest(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = iris)
# Print the summary of our model.
print(myLittleForest)
# -
# As you can see, calling the `print` function on the model we created prints a summary of our model. This summary is quite informative -- it tells us how many trees were created, how many variables were tried at each split, the estimate of the error rate for the Out-of-Bag data (*remember, it works as validation for our model!*), and its confusion matrix.
#
# Another statistic that can be quite informative is the importance of each predictor for the prediction of our data points. This can be done by using the `importance` function, which can be seen in the following code:
print(importance(myLittleForest, type=2))
# In this case, it seems that the petal length of the flowers is the main difference between species (*the larger the mean decrease in Gini, the more important that variable is for separating the classes*).
#
# ---
#
# This is the end of the Random Forests in R notebook. Hopefully, you now understand how Random Forests are structured, how they work and how to utilize them in R. Thank you for reading this notebook, and good luck on your studies.
#
#
# ## Want to learn more?
#
# IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: [SPSS Modeler for Mac users](https://cocl.us/ML0151EN_SPSSMod_mac) and [SPSS Modeler for Windows users](https://cocl.us/ML0151EN_SPSSMod_win)
#
# Also, you can use Data Science Experience to run these notebooks faster with bigger datasets. Data Science Experience is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, DSX enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of DSX users today with a free account at [Data Science Experience](https://cocl.us/ML0151EN_DSX)
# ### Thanks for completing this lesson!
#
# Notebook created by: <a href="https://br.linkedin.com/in/walter-gomes-de-amorim-junior-624726121"><NAME></a>
# <hr>
# Copyright © 2016 [Cognitive Class](https://cognitiveClass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
| notebooks/machine-learning/ML0151EN-Review-RandomForests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('base')
# language: python
# name: python3
# ---
import pandas as pd
import sqlite3 as db
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import r2_score,explained_variance_score
from sklearn.cluster import KMeans
from sklearn import linear_model
#from statsmodels.tsa.seasonal import seasonal_decompose
github_userName = 'Tanag3r'
ebird_token = '<PASSWORD>'
db_name = 'trailheadDirectBirds_sous.db'
##connect to database
def connectDB():
    try:
        cnx = db.connect(db_name)
    except Exception as cnxError:
        raise UserWarning(f'Unable to connect to database due to: {cnxError}')
    return cnx
def stopQualityMask(speciesCode: str,closestStop: str):
cnx = connectDB()
try:
gap = dt.date.today().year-2018
query = f'SELECT COUNT(DISTINCT(year)) as "frq" FROM coefficients_bySpecies WHERE speciesCode = "{speciesCode}" AND closestStop = "{closestStop}"'
        coefficients = pd.read_sql(query, con=cnx)
        coefficients['frq'] = coefficients.apply(lambda g: (g.frq / gap), axis=1)
    except Exception as maskExc:
        raise maskExc
    return coefficients['frq']
def log_stopMetrics(speciesCode,key,testBlob,latestUpdate):
cnx = connectDB()
cur = cnx.cursor()
try:
sqliteInsert = """INSERT INTO stopMetricsBlob (speciesCode,key,testBlob,latestUpdate) VALUES (?,?,?,?);"""
logTuple = (speciesCode,key,testBlob,latestUpdate)
cur.execute(sqliteInsert,logTuple)
cnx.commit()
cur.close()
except db.Error as sqlError:
raise sqlError
finally:
if cnx:
cnx.close()
#baseline request from the application layer
#outputs a list of birds at the stop and a classification based solely off the number of observations
def birdList_request(StopName: str,cnx):
try:
query = f'SELECT speciesCode,count(subId) as "checklists",(SELECT count(subId) FROM historicObservations hxobx WHERE hxobx.speciesCode=hsob.speciesCode) as "sightings" FROM historicObservations hsob LEFT JOIN closestStop on hsob.locId=closestStop.locId WHERE StopName = "{StopName}" GROUP BY speciesCode;'
sightings = pd.read_sql(sql=query,con=cnx)
#rareness at the stop
        sightings['stopGroup'] = 0  # initialize; assigning the built-in `int` was a bug
bucket = sightings['checklists'].quantile([0,0.15,0.5,0.85,1])
sightings.loc[sightings['checklists'] <= bucket[0.15],'stopGroup'] = 1 #mythic
sightings.loc[(sightings['checklists'] > bucket[0.15]) & (sightings['checklists'] <= bucket[0.5]),'stopGroup'] = 2 #rare
sightings.loc[(sightings['checklists'] > bucket[0.5]) & (sightings['checklists'] < bucket[0.85]),'stopGroup'] = 3 #uncommon
sightings.loc[(sightings['checklists'] >= bucket[0.85]) & (sightings['checklists'] <=bucket[1]),'stopGroup'] = 4 #common
#overall rareness
        sightings['overall'] = 0  # initialize; assigning the built-in `int` was a bug
bucket = sightings['sightings'].quantile([0,0.15,0.5,0.85,1])
sightings.loc[sightings['sightings'] <= bucket[0.15],'overall'] = 1
sightings.loc[(sightings['sightings'] > bucket[0.15]) & (sightings['sightings'] <= bucket[0.5]),'overall'] = 2
sightings.loc[(sightings['sightings'] > bucket[0.5]) & (sightings['sightings'] < bucket[0.85]),'overall'] = 3
sightings.loc[(sightings['sightings'] >= bucket[0.85]) & (sightings['sightings'] <=bucket[1]),'overall'] = 4
sightings['StopName'] = StopName
#raise an exception if the stopName given is not valid and return a list of valid stop names
except Exception as ex:
raise ex
return sightings
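# The quantile-based bucketing used in `birdList_request` can be sketched on toy data (the column names mirror the real query, but the numbers are made up):

```python
import pandas as pd

counts = pd.DataFrame({'checklists': [1, 2, 4, 8, 20, 50, 120, 300]})
bucket = counts['checklists'].quantile([0, 0.15, 0.5, 0.85, 1])
counts['stopGroup'] = 0
counts.loc[counts['checklists'] <= bucket[0.15], 'stopGroup'] = 1                                           # mythic
counts.loc[(counts['checklists'] > bucket[0.15]) & (counts['checklists'] <= bucket[0.5]), 'stopGroup'] = 2  # rare
counts.loc[(counts['checklists'] > bucket[0.5]) & (counts['checklists'] < bucket[0.85]), 'stopGroup'] = 3   # uncommon
counts.loc[counts['checklists'] >= bucket[0.85], 'stopGroup'] = 4                                           # common
print(list(counts['stopGroup']))  # → [1, 1, 2, 2, 3, 3, 4, 4]
```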
# For birds with robust data, do they behave the same at all stops? If not, what stops does each species appear to prefer? Is that preference explained by habitat?
#TODO #92 change over flat annual average to weekly average in wklyAbd_selectSpecies() and other calcs
def wklyAbd_selectSpecies(cnx,speciesList: list):
cnx=cnx
try:
querySpecies = []
for i in speciesList:
i = str(i)
querySpecies.append(i)
querySpecies = str(querySpecies).strip('[]')
#query = f'SELECT speciesCode,FX.locId,StopName,obsDt,howMany FROM historicObservations AS FX LEFT JOIN closestStop on FX.locId = closestStop.locId WHERE (SELECT count(distinct(subId)) FROM historicObservations AS QA WHERE QA.comName = FX.comName) > 2 AND FX.locId in ({query_locIds})'
#call baseline
queryBaseline = f'SELECT speciesCode,obsDt,howMany FROM historicObservations WHERE speciesCode in ({querySpecies})'
baseline = pd.read_sql(sql=queryBaseline,con=cnx,parse_dates=['obsDt'])
baseline = baseline.assign(obsDt_week=baseline.obsDt.dt.isocalendar().week)
baseline['howMany'].fillna(1,inplace=True)
baselineAvg = baseline.groupby(['speciesCode'])['howMany'].mean()
query = f'SELECT speciesCode,FX.locId,StopName,obsDt,howMany FROM historicObservations AS FX LEFT JOIN closestStop on FX.locId = closestStop.locId WHERE (SELECT count(distinct(subId)) FROM historicObservations AS QA WHERE QA.comName = FX.comName) > 2 AND FX.speciesCode in ({querySpecies});'
obsData = pd.read_sql(query,con=cnx,parse_dates=['obsDt'])
obsData = obsData.assign(obsDt_week=obsData.obsDt.dt.isocalendar().week)
obsData['howMany'].fillna(1,inplace=True)
obsData = obsData.groupby(['speciesCode','StopName','obsDt_week'])['howMany'].mean().reset_index()
#compute relative abundance
        obsData['relativeAbundance'] = obsData.apply(lambda x: (x.howMany / baselineAvg[x.speciesCode]), axis=1)  # baseline relative abundance, same process as Fink et al.
#obsData['relativeAbundance'] = obsData.apply(lambda f: stopQualityMask(f.speciesCode,f.StopName)*f.relativeAbundance,axis=1) #apply frequency mask
#avgAbd = obsData.groupby(['speciesCode'])['relativeAbundance'].mean()
#obsData['relativeAbundance'] = obsData.apply(lambda n: ((n.relativeAbundance)/(avgAbd[n.speciesCode])),axis=1) #normalizing around average relative abundance
obsData.sort_values(by=['StopName','obsDt_week'],ascending=True,inplace=True)
except db.DatabaseError as dbExc:
        raise RuntimeError(f'there was an issue with the database request: {dbExc}')
except Exception as ex:
raise ex
finally: cnx.close()
return obsData[['obsDt_week','speciesCode','StopName','relativeAbundance']]
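# The relative-abundance step above divides each weekly mean count by the species' overall mean count; a minimal toy version (the numbers below are invented):

```python
import pandas as pd

obs = pd.DataFrame({
    'speciesCode': ['westan'] * 4,
    'obsDt_week':  [1, 1, 2, 2],
    'howMany':     [2.0, 4.0, 6.0, 8.0],
})
baselineAvg = obs.groupby('speciesCode')['howMany'].mean()  # overall mean per species: 5.0
weekly = obs.groupby(['speciesCode', 'obsDt_week'])['howMany'].mean().reset_index()
weekly['relativeAbundance'] = weekly.apply(lambda x: x.howMany / baselineAvg[x.speciesCode], axis=1)
print(weekly['relativeAbundance'].tolist())  # → [0.6, 1.4]
```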
# Interpolation testing
westan = wklyAbd_selectSpecies(cnx=connectDB(),speciesList=['westan'])
#takes a dataframe and returns a dictionary containing the best method and order value
def deriveInterpolationMethod(obsData):
try:
testResults = []
methodDict = [{'method':'linear','order':3,'limit_direction':'both','limit':5},{'method':'slinear','order':3,'limit_direction':'both','limit':5},{'method':'quadratic','order':0,'limit_direction':'both','limit':5},{'method':'cubic','order':3,'limit_direction':'both','limit':5},{'method':'spline','order':3,'limit_direction':'both','limit':5},{'method':'spline','order':5,'limit_direction':'both','limit':5},{'method':'polynomial','order':3,'limit_direction':'both','limit':5},{'method':'polynomial','order':5,'limit_direction':'both','limit':5},{'method':'barycentric','order':5,'limit_direction':'both','limit':5}]
small_methodDict = [{'method':'linear','order':0,'limit_direction':'both','limit':3},{'method':'slinear','order':3,'limit_direction':'both','limit':3},{'method':'polynomial','order':3,'limit_direction':'both','limit':3},{'method':'quadratic','order':0,'limit_direction':'both','limit':3}]
stop_obsData = obsData
allweek = pd.DataFrame({'obsDt_week':range(1,53)})
stop_obsData = pd.merge(left=allweek,left_on='obsDt_week',right=stop_obsData,right_on='obsDt_week',how='left')
#mask
stop_obsData['mask'] = stop_obsData['relativeAbundance'].interpolate(method='ffill',limit=4,limit_direction='forward')
stop_obsData['mask'] = stop_obsData['mask'].interpolate(method='bfill',limit=2,limit_direction='backward')
stop_obsData.loc[stop_obsData['mask'].isna() == True,'relativeAbundance'] = 0
stop_obsData = stop_obsData[stop_obsData['relativeAbundance'].notna()].drop(columns=['mask'])
for v in list([0.2,0.25,0.3,0.35]):
stop_obsData['sample'] = stop_obsData['relativeAbundance'].sample(frac=v,random_state=1)
if stop_obsData['speciesCode'].count() < 4:
chosen = small_methodDict
else: chosen = methodDict
blob = {}
for test in chosen:
testName = str(test['method'])
stop_obsData[testName] = stop_obsData['sample']
stop_obsData[testName] = stop_obsData['sample'].interpolate(method=test['method'],order=test['order'],limit=test['limit'],limit_direction=test['limit_direction']).fillna(stop_obsData['relativeAbundance'])
r_sqr = r2_score(y_true=stop_obsData['relativeAbundance'],y_pred=stop_obsData[testName])
if r_sqr < 0:
continue
blob = {'method':test['method'],'order':test['order'],'limit':test['limit'],'sampleSize':v,'r_sqr':r_sqr}
testResults.append(blob)
if not testResults:
testResults.append({'method':'nearest','order':0,'limit':2})
resultFrame = pd.DataFrame.from_dict(testResults)
else:
resultFrame = pd.DataFrame.from_dict(testResults)
resultFrame = resultFrame.groupby(by=['method','order','limit'])['r_sqr'].median().reset_index()
resultFrame.sort_values(by=['r_sqr'],ascending=False,inplace=True,ignore_index=True)
#resultFrame.drop_duplicates(subset=['StopName'],keep='first',inplace=True)
#stop_obsData.rename(columns={'sample':test['method']},inplace=True)
##TODO #93 for each stop, return the best interpolation method as determined by the r2 score -- done
except Exception as problem:
raise problem
return resultFrame.loc[0].to_dict()
#return resultFrame
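# The core of the method search above -- hide some values, interpolate, score against the truth -- in miniature (restricted to fill strategies that need no extra dependencies):

```python
import numpy as np
import pandas as pd

truth = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
sample = truth.copy()
sample.iloc[[1, 3, 5]] = np.nan  # pretend these weeks were unobserved

for label, filled in [('linear', sample.interpolate(method='linear')),
                      ('ffill', sample.ffill())]:
    sse = float(((filled - truth) ** 2).sum())
    # linear recovers this trend exactly (SSE 0); forward-fill does not (SSE 3)
    print(label, sse)
```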
# +
##TODO #90 rewrite stopCovariance to run for a single species at a time --DONE
def stopCovariance(obsData: pd.DataFrame,species: str,StopKey: str,cnx):
try:
obsData.sort_values(by=['StopName','obsDt_week'],ascending=True,inplace=True)
container = []
resultsContainer = []
stopKeys = obsData.drop_duplicates(subset=['StopName'])
for StopName in stopKeys.itertuples():
stop_obsData = obsData[obsData['speciesCode']==species]
stop_obsData = stop_obsData[stop_obsData['StopName']==StopName.StopName]
stop_obsData.set_index('obsDt_week',inplace=True)
stop_obsData.sort_index(axis='index',ascending=True,inplace=True)
method = deriveInterpolationMethod(stop_obsData)
allweek = pd.DataFrame({'obsDt_week':range(1,53)})
stop_obsData.drop(columns=['StopName'],inplace=True)
stop_obsData = pd.merge(left=stop_obsData,right=allweek,left_on='obsDt_week',right_on='obsDt_week',how='outer')
stop_obsData.set_index('obsDt_week',inplace=True)
stop_obsData.sort_index(axis='index',ascending=True,inplace=True)
stop_obsData['mask'] = stop_obsData['relativeAbundance'].interpolate(method='ffill',limit=5,limit_direction='forward') #mask, values do not matter
stop_obsData['mask'] = stop_obsData['mask'].interpolate(method='bfill',limit=2,limit_direction='backward') #mask, values do not matter
stop_obsData.loc[stop_obsData['mask'].isna() == True,'relativeAbundance'] = 0
stop_obsData['fx_relativeAbundance'] = stop_obsData['relativeAbundance'].interpolate(method=method['method'],order=method['order'],limit=method['limit'],limit_direction='both')
stop_obsData.drop(columns=['relativeAbundance','mask','speciesCode'],inplace=True)
stop_obsData.rename(columns={'fx_relativeAbundance':StopName.StopName},inplace=True)
container.append(stop_obsData)
weeklySpeciesAbd = pd.concat(container,ignore_index=False,axis=1)
weeklySpeciesAbd = weeklySpeciesAbd.fillna(value=0,axis=0)
weeklySpeciesAbd = weeklySpeciesAbd.apply(lambda n:(np.log1p(n)),axis=1)
y_true = weeklySpeciesAbd[StopKey]
#return related stops
fit = [(stop,r2_score(y_true,weeklySpeciesAbd[stop])) for stop in list(weeklySpeciesAbd)]
rSqr = filter(lambda r: (r[1] > 0.25) and (r[1]<1),fit)
var = [(stop,explained_variance_score(y_true,weeklySpeciesAbd[stop])) for stop in list(weeklySpeciesAbd)]
expVar = filter(lambda e: (e[1]>0.25) and (e[1]<1),var)
blob = {'rSquared':list(rSqr),'explVar':list(expVar)}
log_stopMetrics(speciesCode=species,key=StopKey,testBlob=str(blob),latestUpdate=str(dt.datetime.today()))
except Exception as ex:
raise ex
return {'speciesCode':species,'stopKey':StopKey,'rSquared':list(rSqr),'explVar':list(expVar)}
#return method
# -
for x in list(westan['StopName'].drop_duplicates()):
stopCovariance(obsData=westan,species='westan',StopKey=x,cnx=connectDB())
# Habitats
def kmeans_habitat(cnx,distinctHabitats: int):
try:
data = pd.read_sql(sql='SELECT * FROM FAO_by_locId;',con=cnx)
data = data.drop(columns=['locName']).set_index('locId')
data.fillna(0,inplace=True)
#normalize
maxValue = data.apply(max,axis=1)
data = data.apply(lambda x: (x/maxValue[x.index]),axis=0) #min-max normalizing to smooth in proportionality
#compute kmeans for each locId
habitat_kmeans = KMeans(n_clusters=distinctHabitats,init='k-means++')
habitat_kmeans = habitat_kmeans.fit(data.values)
clusterLabels = habitat_kmeans.labels_
#define habitats
habitatFrame = pd.DataFrame(data=clusterLabels,columns=['clusterLabel'],index=data.index).sort_values(by='clusterLabel').reset_index()
habitatFrame = pd.merge(left=habitatFrame,left_on='locId',right=data,right_on='locId',how='left')
except Exception as kmeansExc:
raise kmeansExc
return habitatFrame
def wklyAbd_selectSpecies_locIds(cnx,speciesList: list,locIdList: list):
cnx=cnx
try:
querySpecies = []
for i in speciesList:
i = str(i)
querySpecies.append(i)
querySpecies = str(querySpecies).strip('[]')
queryLocs = []
for i in locIdList:
i = str(i)
queryLocs.append(i)
queryLocs = str(queryLocs).strip('[]')
#baseline
baseQuery = f'SELECT speciesCode,obsDt,howMany FROM historicObservations WHERE speciesCode in ({querySpecies});'
baseline = pd.read_sql(baseQuery,con=cnx,parse_dates=['obsDt'])
baseline = baseline.assign(obsDt_week=baseline.obsDt.dt.isocalendar().week)
baselineAvg = baseline.groupby(['speciesCode'])['howMany'].mean()
#query = f'SELECT speciesCode,FX.locId,StopName,obsDt,howMany FROM historicObservations AS FX LEFT JOIN closestStop on FX.locId = closestStop.locId WHERE (SELECT count(distinct(subId)) FROM historicObservations AS QA WHERE QA.comName = FX.comName) > 2 AND FX.speciesCode in ({querySpecies}) AND FX.locId IN ({queryLocs});'
query = f'SELECT speciesCode,FX.locId,obsDt,howMany FROM historicObservations AS FX WHERE (SELECT count(distinct(subId)) FROM historicObservations AS QA WHERE QA.comName = FX.comName) > 2 AND FX.speciesCode in ({querySpecies}) AND FX.locId IN ({queryLocs});'
obsData = pd.read_sql(query,con=cnx,parse_dates=['obsDt'])
obsData = obsData.assign(obsDt_week=obsData.obsDt.dt.isocalendar().week)
obsData['howMany'].fillna(1,inplace=True)
obsData = obsData.groupby(['speciesCode','obsDt_week'])['howMany'].mean().reset_index()
if obsData.empty == True:
obsData = pd.DataFrame.from_dict({'speciesCode':speciesList,'obsDt_week':1,'relativeAbundance':0.00})
#maxCount = obsData.groupby(['speciesCode'])['howMany'].max()
else:
#derive relative abundance
            obsData['relativeAbundance'] = obsData.apply(lambda x: (x.howMany / baselineAvg[x.speciesCode]), axis=1)  # baseline relative abundance, same process as Fink et al.
#obsData['relativeAbundance'] = obsData.apply(lambda f: stopQualityMask(f.speciesCode,f.StopName)*f.relativeAbundance,axis=1) #apply frequency mask
#avgAnnualAbd = obsData.groupby(['speciesCode'])['relativeAbundance'].mean()
#obsData['relativeAbundance'] = obsData.apply(lambda n: ((n.relativeAbundance)/(avgAnnualAbd[n.speciesCode])),axis=1) #normalizing around average relative abundance
#obsData.sort_values(by=['obsDt_week'],ascending=True,inplace=True)
except db.DatabaseError as dbExc:
        raise RuntimeError(f'there was an issue with the database request: {dbExc}')
except Exception as ex:
raise ex
finally: cnx.close()
return obsData
habs = kmeans_habitat(cnx=connectDB(),distinctHabitats=10)
habs.value_counts(subset=['clusterLabel'])
labelList = habs[habs['clusterLabel']==0]
labelZero = list(labelList['locId'])
labelZero
result = wklyAbd_selectSpecies_locIds(cnx=connectDB(),speciesList=['westan'],locIdList=labelZero)
result
np.sqrt(((5.837-2.901)**2/3))
gockin = amerob[amerob['speciesCode']=='gockin']
# Use LASSO to cross validate KFold splits of training data.
#
# https://machinelearningmastery.com/lasso-regression-with-python/
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.linear_model import Lasso
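# A minimal sketch of that approach on synthetic data (the alpha, fold counts and scoring choice here are placeholders, not tuned values):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.RandomState(1)
X = rng.rand(60, 4)
y = X @ np.array([1.5, 0.0, -2.0, 0.5]) + 0.05 * rng.randn(60)

model = Lasso(alpha=0.01)
cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=1)
# One score per fold per repeat: 5 folds x 3 repeats = 15 scores
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=cv)
print(scores.shape, scores.mean())
```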
| testKitchen/neuralSetup_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp core
# -
# # core
# > API details.
#hide
from nbdev.showdoc import *
# +
#export
import tensorflow as tf
import numpy as np
from glob import glob
import os
import pathlib
from typing import Union
# -
# # Load dataset
# +
#export
def remove_dsstore(path):
    """
    Deletes .DS_Store files from path and sub-folders of path.
    """
    path = pathlib.Path(path)
    for e in path.glob('*.DS_Store'):
        os.remove(e)
    for e in path.glob('*/*.DS_Store'):
        os.remove(e)


def get_basename(path: tf.string):
    assert isinstance(path, tf.Tensor)
    return tf.strings.split(path, os.path.sep)[-1]
# -
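# A quick self-contained check of the `.DS_Store` cleanup (the function body is repeated here so the cell runs standalone):

```python
import os
import pathlib
import tempfile

def remove_dsstore(path):
    # Same logic as the exported function above
    path = pathlib.Path(path)
    for e in path.glob('*.DS_Store'):
        os.remove(e)
    for e in path.glob('*/*.DS_Store'):
        os.remove(e)

with tempfile.TemporaryDirectory() as tmp:
    sub = pathlib.Path(tmp) / 'images'
    sub.mkdir()
    (pathlib.Path(tmp) / '.DS_Store').touch()
    (sub / '.DS_Store').touch()
    (sub / 'photo.jpg').touch()
    remove_dsstore(tmp)
    print(sorted(p.name for p in pathlib.Path(tmp).rglob('*')))  # → ['images', 'photo.jpg']
```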
#export
IMAGENET_LABELS = ["tench",
"goldfish",
"great white shark",
"tiger shark",
"hammerhead shark",
"electric ray",
"stingray",
"cock",
"hen",
"ostrich",
"brambling",
"goldfinch",
"house finch",
"junco",
"indigo bunting",
"American robin",
"bulbul",
"jay",
"magpie",
"chickadee",
"American dipper",
"kite",
"bald eagle",
"vulture",
"great grey owl",
"fire salamander",
"smooth newt",
"newt",
"spotted salamander",
"axolotl",
"American bullfrog",
"tree frog",
"tailed frog",
"loggerhead sea turtle",
"leatherback sea turtle",
"mud turtle",
"terrapin",
"box turtle",
"banded gecko",
"green iguana",
"Carolina anole",
"desert grassland whiptail lizard",
"agama",
"frilled-necked lizard",
"alligator lizard",
"Gila monster",
"European green lizard",
"chameleon",
"Komodo dragon",
"Nile crocodile",
"American alligator",
"triceratops",
"worm snake",
"ring-necked snake",
"eastern hog-nosed snake",
"smooth green snake",
"kingsnake",
"garter snake",
"water snake",
"vine snake",
"night snake",
"boa constrictor",
"African rock python",
"Indian cobra",
"green mamba",
"sea snake",
"Saharan horned viper",
"eastern diamondback rattlesnake",
"sidewinder",
"trilobite",
"harvestman",
"scorpion",
"yellow garden spider",
"barn spider",
"European garden spider",
"southern black widow",
"tarantula",
"wolf spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse",
"prairie grouse",
"peacock",
"quail",
"partridge",
"grey parrot",
"macaw",
"sulphur-crested cockatoo",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"duck",
"red-breasted merganser",
"goose",
"black swan",
"tusker",
"echidna",
"platypus",
"wallaby",
"koala",
"wombat",
"jellyfish",
"sea anemone",
"brain coral",
"flatworm",
"nematode",
"conch",
"snail",
"slug",
"sea slug",
"chiton",
"chambered nautilus",
"Dungeness crab",
"rock crab",
"fiddler crab",
"red king crab",
"American lobster",
"spiny lobster",
"crayfish",
"hermit crab",
"isopod",
"white stork",
"black stork",
"spoonbill",
"flamingo",
"little blue heron",
"great egret",
"bittern",
"crane (bird)",
"limpkin",
"common gallinule",
"American coot",
"bustard",
"ruddy turnstone",
"dunlin",
"common redshank",
"dowitcher",
"oystercatcher",
"pelican",
"king penguin",
"albatross",
"grey whale",
"killer whale",
"dugong",
"sea lion",
"Chihuahua",
"Japanese Chin",
"Maltese",
"Pekingese",
"Shih Tzu",
"King Charles Spaniel",
"Papillon",
"toy terrier",
"Rhodesian Ridgeback",
"Afghan Hound",
"Basset Hound",
"Beagle",
"Bloodhound",
"Bluetick Coonhound",
"Black and Tan Coonhound",
"Treeing Walker Coonhound",
"English foxhound",
"Redbone Coonhound",
"borzoi",
"Irish Wolfhound",
"Italian Greyhound",
"Whippet",
"Ibizan Hound",
"Norwegian Elkhound",
"Otterhound",
"Saluki",
"Scottish Deerhound",
"Weimaraner",
"Staffordshire Bull Terrier",
"American Staffordshire Terrier",
"Bedlington Terrier",
"Border Terrier",
"Kerry Blue Terrier",
"Irish Terrier",
"Norfolk Terrier",
"Norwich Terrier",
"Yorkshire Terrier",
"Wire Fox Terrier",
"Lakeland Terrier",
"Sealyham Terrier",
"Airedale Terrier",
"Cairn Terrier",
"Australian Terrier",
"D<NAME> Terrier",
"Boston Terrier",
"Miniature Schnauzer",
"Giant Schnauzer",
"Standard Schnauzer",
"Scottish Terrier",
"Tibetan Terrier",
"Australian Silky Terrier",
"Soft-coated Wheaten Terrier",
"West Highland White Terrier",
"Lhasa Apso",
"Flat-Coated Retriever",
"Curly-coated Retriever",
"Golden Retriever",
"Labrador Retriever",
"Chesapeake Bay Retriever",
"German Shorthaired Pointer",
"Vizsla",
"English Setter",
"Irish Setter",
"Gordon Setter",
"Brittany",
"Clumber Spaniel",
"English Springer Spaniel",
"Welsh Springer Spaniel",
"Cocker Spaniels",
"Sussex Spaniel",
"Irish Water Spaniel",
"Kuvasz",
"Schipperke",
"Groenendael",
"Malinois",
"Briard",
"Australian Kelpie",
"Komondor",
"Old English Sheepdog",
"Shetland Sheepdog",
"collie",
"Border Collie",
"Bouvier des Flandres",
"Rottweiler",
"German Shepherd Dog",
"Dobermann",
"Miniature Pinscher",
"Greater Swiss Mountain Dog",
"Bernese Mountain Dog",
"Appenzeller Sennenhund",
"Entlebucher Sennenhund",
"Boxer",
"Bullmastiff",
"Tibetan Mastiff",
"French Bulldog",
"Great Dane",
"St. Bernard",
"husky",
"Alaskan Malamute",
"Siberian Husky",
"Dalmatian",
"Affenpinscher",
"Basenji",
"pug",
"Leonberger",
"Newfoundland",
"Pyrenean Mountain Dog",
"Samoyed",
"Pomeranian",
"Chow Chow",
"Keeshond",
"<NAME>",
"Pembroke Welsh Corgi",
"Cardigan Wel<NAME>orgi",
"Toy Poodle",
"Miniature Poodle",
"Standard Poodle",
"Mexican hairless dog",
"grey wolf",
"Alaskan tundra wolf",
"red wolf",
"coyote",
"dingo",
"dhole",
"African wild dog",
"hyena",
"red fox",
"kit fox",
"Arctic fox",
"grey fox",
"tabby cat",
"tiger cat",
"Persian cat",
"Siamese cat",
"Egyptian Mau",
"cougar",
"lynx",
"leopard",
"snow leopard",
"jaguar",
"lion",
"tiger",
"cheetah",
"brown bear",
"American black bear",
"polar bear",
"sloth bear",
"mongoose",
"meerkat",
"tiger beetle",
"ladybug",
"ground beetle",
"longhorn beetle",
"leaf beetle",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant",
"grasshopper",
"cricket",
"stick insect",
"cockroach",
"mantis",
"cicada",
"leafhopper",
"lacewing",
"dragonfly",
"damselfly",
"red admiral",
"ringlet",
"monarch butterfly",
"small white",
"sulphur butterfly",
"gossamer-winged butterfly",
"starfish",
"sea urchin",
"sea cucumber",
"cottontail rabbit",
"hare",
"Angora rabbit",
"hamster",
"porcupine",
"fox squirrel",
"marmot",
"beaver",
"guinea pig",
"common sorrel",
"zebra",
"pig",
"wild boar",
"warthog",
"hippopotamus",
"ox",
"water buffalo",
"bison",
"ram",
"bighorn sheep",
"Alpine ibex",
"hartebeest",
"impala",
"gazelle",
"dromedary",
"llama",
"weasel",
"mink",
"European polecat",
"black-footed ferret",
"otter",
"skunk",
"badger",
"armadillo",
"three-toed sloth",
"orangutan",
"gorilla",
"chimpanzee",
"gibbon",
"siamang",
"guenon",
"patas monkey",
"baboon",
"macaque",
"langur",
"black-and-white colobus",
"proboscis monkey",
"marmoset",
"white-headed capuchin",
"howler monkey",
"titi",
"Geoffroy's spider monkey",
"common squirrel monkey",
"ring-tailed lemur",
"indri",
"Asian elephant",
"African bush elephant",
"red panda",
"giant panda",
"snoek",
"eel",
"coho salmon",
"rock beauty",
"clownfish",
"sturgeon",
"garfish",
"lionfish",
"pufferfish",
"abacus",
"abaya",
"academic gown",
"accordion",
"acoustic guitar",
"aircraft carrier",
"airliner",
"airship",
"altar",
"ambulance",
"amphibious vehicle",
"analog clock",
"apiary",
"apron",
"waste container",
"assault rifle",
"backpack",
"bakery",
"balance beam",
"balloon",
"ballpoint pen",
"Band-Aid",
"banjo",
"baluster",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel",
"wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"swimming cap",
"bath towel",
"bathtub",
"station wagon",
"lighthouse",
"beaker",
"military cap",
"beer bottle",
"beer glass",
"bell-cot",
"bib",
"tandem bicycle",
"bikini",
"ring binder",
"binoculars",
"birdhouse",
"boathouse",
"bobsleigh",
"bolo tie",
"poke bonnet",
"bookcase",
"bookstore",
"bottle cap",
"bow",
"bow tie",
"brass",
"bra",
"breakwater",
"breastplate",
"broom",
"bucket",
"buckle",
"bulletproof vest",
"high-speed train",
"butcher shop",
"taxicab",
"cauldron",
"candle",
"cannon",
"canoe",
"can opener",
"cardigan",
"car mirror",
"carousel",
"tool kit",
"carton",
"car wheel",
"automated teller machine",
"cassette",
"cassette player",
"castle",
"catamaran",
"CD player",
"cello",
"mobile phone",
"chain",
"chain-link fence",
"chain mail",
"chainsaw",
"chest",
"chiffonier",
"chime",
"china cabinet",
"Christmas stocking",
"church",
"movie theater",
"cleaver",
"cliff dwelling",
"cloak",
"clogs",
"cocktail shaker",
"coffee mug",
"coffeemaker",
"coil",
"combination lock",
"computer keyboard",
"confectionery store",
"container ship",
"convertible",
"corkscrew",
"cornet",
"cowboy boot",
"cowboy hat",
"cradle",
"crane (machine)",
"crash helmet",
"crate",
"infant bed",
"Crock Pot",
"croquet ball",
"crutch",
"cuirass",
"dam",
"desk",
"desktop computer",
"rotary dial telephone",
"diaper",
"digital clock",
"digital watch",
"dining table",
"dishcloth",
"dishwasher",
"disc brake",
"dock",
"dog sled",
"dome",
"doormat",
"drilling rig",
"drum",
"drumstick",
"dumbbell",
"Dutch oven",
"electric fan",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso machine",
"face powder",
"feather boa",
"filing cabinet",
"fireboat",
"fire engine",
"fire screen sheet",
"flagpole",
"flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster bed",
"freight car",
"French horn",
"frying pan",
"fur coat",
"garbage truck",
"gas mask",
"gas pump",
"goblet",
"go-kart",
"golf ball",
"golf cart",
"gondola",
"gong",
"gown",
"grand piano",
"greenhouse",
"grille",
"grocery store",
"guillotine",
"barrette",
"hair spray",
"half-track",
"hammer",
"hamper",
"hair dryer",
"hand-held computer",
"handkerchief",
"hard disk drive",
"harmonica",
"harp",
"harvester",
"hatchet",
"holster",
"home theater",
"honeycomb",
"hook",
"hoop skirt",
"horizontal bar",
"horse-drawn vehicle",
"hourglass",
"iPod",
"clothes iron",
"jack-o'-lantern",
"jeans",
"jeep",
"T-shirt",
"jigsaw puzzle",
"pulled rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat",
"ladle",
"lampshade",
"laptop computer",
"lawn mower",
"lens cap",
"paper knife",
"library",
"lifeboat",
"lighter",
"limousine",
"ocean liner",
"lipstick",
"slip-on shoe",
"lotion",
"speaker",
"loupe",
"sawmill",
"magnetic compass",
"mail bag",
"mailbox",
"tights",
"tank suit",
"manhole cover",
"maraca",
"marimba",
"mask",
"match",
"maypole",
"maze",
"measuring cup",
"medicine chest",
"megalith",
"microphone",
"microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home",
"Model T",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"square academic cap",
"mosque",
"mosquito net",
"scooter",
"mountain bike",
"tent",
"computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook computer",
"obelisk",
"oboe",
"ocarina",
"odometer",
"oil filter",
"organ",
"oscilloscope",
"overskirt",
"bullock cart",
"oxygen mask",
"packet",
"paddle",
"paddle wheel",
"padlock",
"paintbrush",
"pajamas",
"palace",
"pan flute",
"paper towel",
"parachute",
"parallel bars",
"park bench",
"parking meter",
"passenger car",
"patio",
"payphone",
"pedestal",
"pencil case",
"pencil sharpener",
"perfume",
"Petri dish",
"photocopier",
"plectrum",
"Pickelhaube",
"picket fence",
"pickup truck",
"pier",
"piggy bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate ship",
"pitcher",
"hand plane",
"planetarium",
"plastic bag",
"plate rack",
"plow",
"plunger",
"Polaroid camera",
"pole",
"police van",
"poncho",
"billiard table",
"soda bottle",
"pot",
"potter's wheel",
"power drill",
"prayer rug",
"printer",
"prison",
"projectile",
"projector",
"hockey puck",
"punching bag",
"purse",
"quill",
"quilt",
"race car",
"racket",
"radiator",
"radio",
"radio telescope",
"rain barrel",
"recreational vehicle",
"reel",
"reflex camera",
"refrigerator",
"remote control",
"restaurant",
"revolver",
"rifle",
"rocking chair",
"rotisserie",
"eraser",
"rugby ball",
"ruler",
"running shoe",
"safe",
"safety pin",
"salt shaker",
"sandal",
"sarong",
"saxophone",
"scabbard",
"weighing scale",
"school bus",
"schooner",
"scoreboard",
"CRT screen",
"screw",
"screwdriver",
"seat belt",
"sewing machine",
"shield",
"shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule",
"sliding door",
"slot machine",
"snorkel",
"snowmobile",
"snowplow",
"soap dispenser",
"soccer ball",
"sock",
"solar thermal collector",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"motorboat",
"spider web",
"spindle",
"sports car",
"spotlight",
"stage",
"steam locomotive",
"through arch bridge",
"steel drum",
"stethoscope",
"scarf",
"stone wall",
"stopwatch",
"stove",
"strainer",
"tram",
"stretcher",
"couch",
"stupa",
"submarine",
"suit",
"sundial",
"sunglass",
"sunglasses",
"sunscreen",
"suspension bridge",
"mop",
"sweatshirt",
"swimsuit",
"swing",
"switch",
"syringe",
"table lamp",
"tank",
"tape player",
"teapot",
"teddy bear",
"television",
"tennis ball",
"thatched roof",
"front curtain",
"thimble",
"threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop",
"toilet seat",
"torch",
"totem pole",
"tow truck",
"toy store",
"tractor",
"semi-trailer truck",
"tray",
"trench coat",
"tricycle",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus",
"trombone",
"tub",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle",
"upright piano",
"vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin",
"volleyball",
"waffle iron",
"wall clock",
"wallet",
"wardrobe",
"military aircraft",
"sink",
"washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"Windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool",
"split-rail fence",
"shipwreck",
"yawl",
"yurt",
"website",
"comic book",
"crossword",
"traffic sign",
"traffic light",
"dust jacket",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot",
"trifle",
"ice cream",
"ice pop",
"baguette",
"bagel",
"pretzel",
"cheeseburger",
"hot dog",
"mashed potato",
"cabbage",
"broccoli",
"cauliflower",
"zucchini",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber",
"artichoke",
"bell pepper",
"cardoon",
"mushroom",
"<NAME>",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple",
"banana",
"jackfruit",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate syrup",
"dough",
"meatloaf",
"pizza",
"pot pie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff",
"coral reef",
"geyser",
"lakeshore",
"promontory",
"shoal",
"seashore",
"valley",
"volcano",
"baseball player",
"bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper",
"corn",
"acorn",
"rose hip",
"horse chestnut seed",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn mushroom",
"earth star",
"hen-of-the-woods",
"bolete",
"ear",
"toilet paper"]
from nbdev.export import notebook2script;notebook2script('00_core.ipynb')
| examples/nbs/00_core.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../images/aeropython_logo.png" alt="AeroPython" style="width: 300px;"/>
# # Lotka-Volterra equations: a predator-prey model
# ## Introduction
# It is intuitive to think that the populations of a predator and its prey are related in such a way that when one grows, so does the other. As a running example, consider an isolated ecosystem of lions and zebras living in harmony, that is, with the lions eating the zebras. Imagine that for some reason, for instance a larger food supply, the zebra population grows; the lions will then have more to eat and their population will grow too. But what happens next? If the lion population becomes too large for the number of zebras in our savanna, the lions could wipe them all out, causing their own extinction by starvation. Even if the feast is not big enough to finish off every zebra, but leaves the population severely depleted, the lions will probably go hungry for a while and some will die, until the zebras have had time to reproduce and become lion fodder again. How many zebras die in the binge? How long do the lions starve? How many of them die?
# ## Equations
# The Lotka-Volterra equations are a biomathematical model that tries to answer these questions by predicting the dynamics of the prey and predator populations under a set of hypotheses:
#
# * The ecosystem is isolated: there is no migration, no other species are present, there are no plagues...
# * In the absence of predators, the prey population grows exponentially: the reproduction rate is proportional to the number of individuals. Prey die only when hunted by the predator.
# * In the absence of prey, the predator population decays exponentially.
# * The predator population makes the prey population decrease proportionally to the number of prey and predators (which is to say, proportionally to the number of possible predator-prey encounters).
# * The prey population also affects the predator population proportionally to the number of encounters, but with a different proportionality constant (which depends on how much catching a prey satisfies the predators' hunger).
#
# This is a system of two first-order differential equations, coupled, autonomous and nonlinear:
# $$ \frac{dx}{dt} = \alpha x - \beta x y $$
# $$ \frac{dy}{dt} = -\gamma y + \delta y x $$
# where x is the number of prey (zebras in our case) and y is the number of predators (lions). The parameters are positive constants representing:
#
# * $\alpha$: growth rate of the prey.
# * $\beta$: hunting success of the predator.
# * $\gamma$: decay rate of the predators.
# * $\delta$: hunting success, and how much catching a prey feeds the predator.
# ## Solving the system
# We will solve this system in Python using the `odeint` function from `scipy.integrate`. You can see how it works in the article [Felix Baumgartner's jump in Python](http://pybonacci.org/2012/10/15/el-salto-de-felix-baumgartner-en-python/). For this we will use: Python 3.4, numpy 1.9.0, matplotlib 1.4.0, scipy 0.14.0.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# %matplotlib inline
# We define the function that represents the system of equations in canonical form, to pass it to `odeint`:
def df_dt(x, t, a, b, c, d):
dx = a * x[0] - b * x[0] * x[1]
dy = - c * x[1] + d * x[0] * x[1]
return np.array([dx, dy])
# We define the parameters of our problem, the initial conditions, the final integration time, and the number of steps.
# +
# Parameters
a = 0.1
b = 0.02
c = 0.3
d = 0.01
# Initial conditions
x0 = 40
y0 = 9
conds_iniciales = np.array([x0, y0])
# Integration settings
tf = 200
N = 800
t = np.linspace(0, tf, N)
# -
# We solve the equation:
solucion = odeint(df_dt, conds_iniciales, t, args=(a, b, c, d))
# and plot the results as a function of time:
plt.style.use('ggplot')
plt.figure("Evolución temporal", figsize=(8,5))
plt.title("Evolución temporal")
plt.plot(t, solucion[:, 0], label='presa')
plt.plot(t, solucion[:, 1], label='depredador')
plt.xlabel('tiempo')
plt.ylabel('población')
plt.legend()
# plt.savefig('evolucion_temporal.png')
# Another interesting way to visualize these data is to plot the number of prey against the number of predators instead of against time; in other words, we can visualize the phase map:
plt.figure("Presas vs depredadores", figsize=(8,5))
plt.plot(solucion[:, 0], solucion[:, 1])
plt.xlabel('presas')
plt.ylabel('depredadores')
# plt.savefig('presas_vs_depredadores.png')
# We see that the solution is periodic: as we said at the beginning, an increase in the zebra population is followed by an increase in the number of lions. A large number of predators depletes the prey population, and the poor lions then have to go hungry for a while. We can also plot the direction field of our equations using the `quiver` function. The arrow lengths have been normalized so that they are all equal, and a `colormap` is used to represent the modulus.
# +
x_max = np.max(solucion[:,0]) * 1.05
y_max = np.max(solucion[:,1]) * 1.05
x = np.linspace(0, x_max, 25)
y = np.linspace(0, y_max, 25)
xx, yy = np.meshgrid(x, y)
uu, vv = df_dt((xx, yy), 0, a, b, c, d)
norm = np.sqrt(uu**2 + vv**2)
uu = uu / norm
vv = vv / norm
plt.figure("Campo de direcciones", figsize=(8,5))
plt.quiver(xx, yy, uu, vv, norm, cmap=plt.cm.gray)
plt.plot(solucion[:, 0], solucion[:, 1])
plt.xlim(0, x_max)
plt.ylim(0, y_max)
plt.xlabel('presas')
plt.ylabel('depredadores')
# plt.savefig('campo_direcciones.png')
# +
n_max = np.max(solucion) * 1.10
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,5)
ax[0].quiver(xx, yy, uu, vv, norm, cmap=plt.cm.gray)
ax[0].plot(solucion[:, 0], solucion[:, 1], lw=2, alpha=0.8)
ax[0].set_xlim(0, x_max)
ax[0].set_ylim(0, y_max)
ax[0].set_xlabel('presas')
ax[0].set_ylabel('depredadores')
ax[1].plot(t, solucion[:, 0], label='presa')
ax[1].plot(t, solucion[:, 1], label='depredador')
ax[1].legend()
ax[1].set_xlabel('tiempo')
ax[1].set_ylabel('población')
# plt.savefig('campo_direcciones_ev_temporal.png')
# -
# If we look at the blue line, the x coordinate of each point gives the number of prey and the y coordinate the number of predators. The time evolution we plotted before is obtained by traversing this curve counterclockwise. We can also see how the direction field indicates the tendency of the system in each situation. For example, an arrow pointing up and to the right means that, with that number of zebras and lions in our savanna, both populations will tend to grow.
#
# At this point we may wonder what would have happened if the initial number of zebras and lions had been different. Since we already know how to integrate differential equations, it would be enough to change our `x0` and `y0` and repeat the process (we could even build an interactive widget). However, it can be shown that along the lines of the phase map, such as the one we plotted above, the following quantity is conserved:
#
# $$ C = \alpha \ln{y} - \beta y + \gamma \ln{x} -\delta x $$
#
# Therefore, by plotting a `contour` of this quantity we can obtain the solution for different initial values of the problem.
def C(x, y, a, b, c, d):
return a * np.log(y) - b * y + c * np.log(x) - d * x
# +
x = np.linspace(0, x_max, 100)
y = np.linspace(0, y_max, 100)
xx, yy = np.meshgrid(x, y)
constant = C(xx, yy, a, b, c, d)
plt.figure('distintas_soluciones', figsize=(8,5))
plt.contour(xx, yy, constant, 50, cmap=plt.cm.Blues)
plt.xlabel('presas')
plt.ylabel('depredadores')
# plt.savefig('distintas_soluciones.png')
# -
# We see that these curves become smaller and smaller until, in our case, they would collapse into a point around $(30,5)$. This is an equilibrium point, or critical point: if the system reached it, it would not evolve, and the number of zebras and lions would remain constant in time. The other critical point of our system is $(0,0)$. Analyzing them mathematically, one obtains:
#
# * The critical point at $(0,0)$ is a saddle point. Since it is an unstable equilibrium, the extinction of either species in this model can only be achieved by imposing a zero initial condition.
# * The critical point at $(\gamma/\delta, \alpha/\beta)$ is a center (in this case both eigenvalues of the linearized system matrix are purely imaginary, so a priori its stability is unknown).
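As a quick sanity check (self-contained, using the same parameter values as above), we can evaluate the right-hand sides of the system at the critical point $(\gamma/\delta, \alpha/\beta)$ and confirm that both derivatives vanish there:

```python
# Parameters as in the notebook
a, b, c, d = 0.1, 0.02, 0.3, 0.01

# Candidate equilibrium point (gamma/delta, alpha/beta)
x_eq, y_eq = c / d, a / b

# Right-hand sides of the Lotka-Volterra system at that point
dx = a * x_eq - b * x_eq * y_eq
dy = -c * y_eq + d * x_eq * y_eq

print(x_eq, y_eq)  # approximately (30, 5)
print(dx, dy)      # both numerically zero: the populations stay constant
```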
# +
#n_max = np.max(solucion) * 1.10
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,5)
ax[0].plot(solucion[:, 0], solucion[:, 1], lw=2, alpha=0.8)
ax[0].scatter(c/d, a/b)
levels = (0.5, 0.6, 0.7, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.775, 0.78, 0.781)
ax[0].contour(xx, yy, constant, levels, colors='blue', alpha=0.3)
ax[0].set_xlim(0, x_max)
ax[0].set_ylim(0, y_max)
ax[0].set_xlabel('presas')
ax[0].set_ylabel('depredadores')
ax[1].plot(t, solucion[:, 0], label='presa')
ax[1].plot(t, solucion[:, 1], label='depredador')
ax[1].legend()
ax[1].set_xlabel('tiempo')
ax[1].set_ylabel('población')
# plt.savefig('distintas_soluciones_ev_temporal.png')
# -
# ## Improving the model
# As we can see, this model has some shortcomings that stem from its simplicity and from the hypotheses under which it was formulated. A reasonable modification is to change the growth model of the prey in the absence of predators, assuming that instead of growing exponentially they follow a [logistic function](http://es.wikipedia.org/wiki/Funci%C3%B3n_log%C3%ADstica). This curve grows roughly exponentially at first, then slows down and asymptotically settles at a limiting value:
def logistic_curve(t, a=1, m=0, n=1, tau=1):
e = np.exp(-t / tau)
return a * (1 + m * e) / (1 + n * e)
x_ = np.linspace(0,10)
plt.figure('función logística', figsize=(8,5))
plt.plot(x_, logistic_curve(x_, 1, m=10, n=100, tau=1))
# plt.savefig('funcion_logistica.png')
# This growth model better represents the limits that the environment puts on the number of prey (lack of food, territory...). Bringing this growth model into the original equations gives a new system with one extra parameter:
#
# $$ \frac{dx}{dt} = (\alpha x - r x^2) - \beta x y $$
# $$ \frac{dy}{dt} = -\gamma y + \delta y x $$
def df_dt_logistic(x, t, a, b, c, d, r):
dx = a * x[0] - r * x[0]**2 - b * x[0] * x[1]
dy = - c * x[1] + d * x[0] * x[1]
return np.array([dx, dy])
# +
# Parameters
a = 0.1
b = 0.02
c = 0.3
d = 0.01
r = 0.001
# Initial conditions
x0 = 40
y0 = 9
conds_iniciales = np.array([x0, y0])
# Integration settings
tf = 200
N = 800
t = np.linspace(0, tf, N)
# -
solucion_logistic = odeint(df_dt_logistic, conds_iniciales, t, args=(a, b, c, d, r))
# +
n_max = np.max(solucion) * 1.10
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,5)
x_max = np.max(solucion_logistic[:,0]) * 1.05
y_max = np.max(solucion_logistic[:,1]) * 1.05
x = np.linspace(0, x_max, 25)
y = np.linspace(0, y_max, 25)
xx, yy = np.meshgrid(x, y)
uu, vv = df_dt_logistic((xx, yy), 0, a, b, c, d, r)
norm = np.sqrt(uu**2 + vv**2)
uu = uu / norm
vv = vv / norm
ax[0].quiver(xx, yy, uu, vv, norm, cmap=plt.cm.gray)
ax[0].plot(solucion_logistic[:, 0], solucion_logistic[:, 1], lw=2, alpha=0.8)
ax[0].set_xlim(0, x_max)
ax[0].set_ylim(0, y_max)
ax[0].set_xlabel('presas')
ax[0].set_ylabel('depredadores')
ax[1].plot(t, solucion_logistic[:, 0], label='presa')
ax[1].plot(t, solucion_logistic[:, 1], label='depredador')
ax[1].legend()
ax[1].set_xlabel('tiempo')
ax[1].set_ylabel('población')
# plt.savefig('campo_direcciones_ev_temporal_caso2.png')
# -
# In this case we can see that the behavior stops being periodic. The critical point that used to be a center becomes an attractor, and the solution tends to settle at a fixed number of prey and predators.
# ## References
# If you are curious about how to keep refining this model, or how to include other species, you may want to take a look at:
#
# * [Competitive Lotka–Volterra equations](http://en.wikipedia.org/wiki/Competitive_Lotka%E2%80%93Volterra_equations) o [The Predator-Prey Equations](http://www.math.psu.edu/tseng/class/Math251/Notes-Predator-Prey.pdf)
#
# * [ETSIINF-UPM presentation](http://www.dma.fi.upm.es/docencia/mastercaci/2012-2013/sistemascomplejos/projects/lotka-volterra.pdf)
#
# * [Lecture notes on differential equations](http://matap.dmae.upm.es/WebpersonalBartolo/EDOoficial.html), <NAME> (ETSIA-UPM).
#
# * If you want to see how to carry out the integration with different methods, you can visit [Predator Prey Model - Bank Assignment of Numerical Mooc](http://nbviewer.ipython.org/github/numerical-mooc/assignment-bank/blob/master/Lessons.and.Assignments/Predator.Prey.Model/Predator.Prey.Model.ipynb).
# ## Widgets
from ipywidgets import interact  # formerly IPython.html.widgets, deprecated since IPython 4
def solucion_temporal_interact(a, b, c, d, x0, y0, tf):
conds_iniciales = np.array([x0, y0])
    # Integration settings
N = 800
t = np.linspace(0, tf, N)
solucion = odeint(df_dt, conds_iniciales, t, args=(a, b, c, d))
plt.figure("Evolución temporal", figsize=(8,5))
plt.title("Evolución temporal")
plt.plot(t, solucion[:, 0], label='presa')
plt.plot(t, solucion[:, 1], label='depredador')
plt.xlabel('tiempo')
plt.ylabel('población')
plt.legend()
interact(solucion_temporal_interact,
a=(0.01,0.5), b=(0.01,0.5),
c=(0.01,0.5), d=(0.01,0.5),
x0=(1,80), y0=(1,50),
tf=(50,300));
def mapa_fases_interact(a, b, c, d, x0, y0, tf):
conds_iniciales = np.array([x0, y0])
    # Integration settings
N = 800
t = np.linspace(0, tf, N)
solucion = odeint(df_dt, conds_iniciales, t, args=(a, b, c, d))
x_max = np.max(solucion[:,0]) * 1.05
y_max = np.max(solucion[:,1]) * 1.05
x = np.linspace(0, x_max, 25)
y = np.linspace(0, y_max, 25)
xx, yy = np.meshgrid(x, y)
uu, vv = df_dt((xx, yy), 0, a, b, c, d)
norm = np.sqrt(uu**2 + vv**2)
uu = uu / norm
vv = vv / norm
plt.figure("Campo de direcciones", figsize=(8,5))
plt.quiver(xx, yy, uu, vv, norm, cmap=plt.cm.gray)
plt.plot(solucion[:, 0], solucion[:, 1])
plt.xlim(0, x_max)
plt.ylim(0, y_max)
plt.xlabel('presas')
plt.ylabel('depredadores')
# # plt.savefig('campo_direcciones.png')
interact(mapa_fases_interact,
a=(0.01,0.5), b=(0.01,0.5),
c=(0.01,0.5), d=(0.01,0.5),
x0=(1,80), y0=(1,50),
tf=(50,300));
# ---
#
# #### <h4 align="right">¡Síguenos en Twitter!
# <br/>
# ###### <a href="https://twitter.com/AeroPython" class="twitter-follow-button" data-show-count="false">Follow @AeroPython</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
# <br/>
# ###### This notebook was created by: <NAME>
# <br/>
# ##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Licencia Creative Commons" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> por <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName"><NAME> y <NAME></span> se distribuye bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Licencia Creative Commons Atribución 4.0 Internacional</a>.
# ---
# _The following cells contain notebook configuration_
#
# _To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
#
# File > Trusted Notebook
# This cell sets the notebook style
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
| notebooks_completos/091-Ejemplos-Lotka-Volterra.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit ('venv')
# metadata:
# interpreter:
# hash: dbca80a896c2703cb73d8aa5d94020e1f4bfb9b41a68d9aabda164472ff3ec82
# name: python3
# ---
# +
# Import modules
from src.model import ExampleModel
from src.configs.config import example_config
from src.utils.helpers import MNISTdataset as Dataset
# -
# Create model object
my_model = ExampleModel(example_config)
# +
# Extract dataset
my_dataset = Dataset()
train_x = my_dataset.TRAIN_DS[0]
train_y = my_dataset.TRAIN_DS[1]
val_x = my_dataset.VAL_DS[0]
val_y = my_dataset.VAL_DS[1]
# +
# Generate, compile and train model
my_model.generate_model()
my_model.compile()
my_model.train_model(train_x, train_y, val_x, val_y)
| src/my_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Database Admin 101
#
# ## Introduction
#
# Now that you've seen how to access and retrieve information from a SQL database, let's investigate how you could create or alter an existing database. Although there is still much to learn, this will lead you into the realm of database administration.
#
# ## Objectives
#
# You will be able to:
# - Create a SQL database
# - Create a SQL table
# - Create rows in a SQL table
# - Alter entries in a SQL table
# - Delete entries in a SQL table
# - Determine when it is necessary to commit changes to a database
# - Commit changes via sqlite3
# ## Previewing Files in the Current Working Directory
#
# Remember that you can use the bash `ls` command to preview files and folders in the current working directory. Run the cell below to do just that!
# ls
# ## Creating a Database
#
# You've seen how to connect to a database, but did you know creating one is just as easy? All you have to do is create a connection to a non-existent database, and voilà! The database is created simply by establishing the connection.
import sqlite3
conn = sqlite3.connect('pets_database.db')
cur = conn.cursor()
# ## Re-preview Files
#
# If you use the `ls` command once again, you should now see the pets_database.db file there.
# ls
# ## Creating Tables
#
# Now that you have a database, let's create our cats table along with columns for id, name, age, and breed. Remember that we use our cursor to execute these SQL statements, and that the statements must be wrapped in quotes ('''SQL statement goes here''' or """SQL statement goes here"""). Indenting portions of your queries can also make them much easier to read and debug.
#
# ```python
# cur.execute("""CREATE TABLE cats (
# id INTEGER PRIMARY KEY,
# name TEXT,
# age INTEGER,
# breed TEXT )
# """)
# ```
#Creating the cats table
cur.execute("""CREATE TABLE cats (
id INTEGER PRIMARY KEY,
name TEXT,
age INTEGER,
breed TEXT )
""")
# ## Populating Tables
#
# In order to populate a table, you can use the `INSERT INTO` command, followed by the name of the table you want to add data to. Then, in parentheses, you list the column names you want to fill with data. This is followed by the `VALUES` keyword, accompanied by a parenthesized list of the values corresponding to each column name.
#
# **Important**: Note that you don't have to specify the "id" column name or value. Primary Key columns are auto-incrementing. Therefore, since the cats table has an "id" column whose type is `INTEGER PRIMARY KEY`, you don't have to specify the id column values when you insert data. As long as you have defined an id column with a data type of `INTEGER PRIMARY KEY`, a newly inserted row's id column will be automatically given the correct value.
#
# Okay, let's start storing some cats.
#
# ### Code Along I: INSERT INTO
#
# To insert a record with values, type the following:
#
# ```python
# cur.execute('''INSERT INTO cats (name, age, breed)
# VALUES ('Maru', 3, 'Scottish Fold');
# ''')
# ```
# insert Maru into the pet_database.db here
cur.execute('''INSERT INTO cats (name, age, breed)
VALUES ('Maru', 3, 'Scottish Fold');
''')
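# When the values come from Python variables, it's safer to bind them with `?` placeholders than to build the SQL string yourself, and `executemany()` inserts several rows in one call. A sketch against a throwaway in-memory database (so it doesn't touch pets_database.db):

```python
import sqlite3

# Throwaway in-memory database with the same cats schema
demo = sqlite3.connect(':memory:')
demo_cur = demo.cursor()
demo_cur.execute("""CREATE TABLE cats (
                        id INTEGER PRIMARY KEY,
                        name TEXT,
                        age INTEGER,
                        breed TEXT )
                 """)

# Each tuple is bound to the ? placeholders, one row per tuple
more_cats = [('Maru', 3, 'Scottish Fold'), ('Hannah', 1, 'Tabby')]
demo_cur.executemany('INSERT INTO cats (name, age, breed) VALUES (?, ?, ?);', more_cats)
print(demo_cur.execute('SELECT * FROM cats;').fetchall())
# [(1, 'Maru', 3, 'Scottish Fold'), (2, 'Hannah', 1, 'Tabby')]
```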
# ## Altering a Table
#
# You can also alter an existing table. For example, to add a column:
#
# ```python
# cur.execute('''ALTER TABLE cats ADD COLUMN notes TEXT;''')
# ```
#
# The general pattern is `ALTER TABLE table_name ADD COLUMN column_name column_type;`
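# You can confirm the new column landed with the `PRAGMA table_info` statement. A sketch on a separate in-memory table:

```python
import sqlite3

demo = sqlite3.connect(':memory:')
demo_cur = demo.cursor()
demo_cur.execute('CREATE TABLE cats (id INTEGER PRIMARY KEY, name TEXT);')
demo_cur.execute('ALTER TABLE cats ADD COLUMN notes TEXT;')

# PRAGMA table_info yields one row per column: (cid, name, type, notnull, default, pk)
columns = [row[1] for row in demo_cur.execute('PRAGMA table_info(cats);')]
print(columns)  # ['id', 'name', 'notes']
```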
# ## Updating Data
#
# You use the `UPDATE` keyword to change preexisting rows within a table.
#
# The `UPDATE` statement uses a `WHERE` clause to grab the row you want to update. It identifies the table name you are looking in and resets the data in a particular column to a new value.
#
# A boilerplate `UPDATE` statement looks like this:
#
# ```python
# cur.execute('''UPDATE [table name]
# SET [column name] = [new value]
# WHERE [column name] = [value];
# ''')
# ```
#
# ### Code Along II: UPDATE
#
# Let's update one of our cats. Turns out Maru's friend Hannah is actually Maru's friend Hana. Let's update that row to change the name to the correct spelling:
#
# ```python
# cur.execute('''UPDATE cats SET name = "Hana" WHERE name = "Hannah";''')
# ```
# update hannah here
cur.execute('''UPDATE cats SET name = "Hana" WHERE name = "Hannah";''')
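# As with `INSERT`, values in an `UPDATE` are best passed as `?` parameters, and afterwards `cursor.rowcount` reports how many rows were changed. A sketch on an in-memory table:

```python
import sqlite3

demo = sqlite3.connect(':memory:')
demo_cur = demo.cursor()
demo_cur.execute('CREATE TABLE cats (id INTEGER PRIMARY KEY, name TEXT);')
demo_cur.execute("INSERT INTO cats (name) VALUES ('Hannah');")

# Bind the new and old names as parameters rather than quoting them inline
demo_cur.execute('UPDATE cats SET name = ? WHERE name = ?;', ('Hana', 'Hannah'))
print(demo_cur.rowcount)  # 1 row was updated
print(demo_cur.execute('SELECT name FROM cats;').fetchall())  # [('Hana',)]
```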
# ## Deleting Data
#
# You use the `DELETE` keyword to delete table rows.
#
# Similar to the `UPDATE` keyword, the `DELETE` keyword uses a `WHERE` clause to select rows.
#
# A boilerplate `DELETE` statement looks like this:
#
# ```python
# cur.execute('''DELETE FROM [table name] WHERE [column name] = [value];''')
# ```
#
# ### Code Along III: DELETE
#
# Let's go ahead and delete Lil' Bub from our cats table (sorry Lil' Bub):
#
# ```python
# cur.execute('''DELETE FROM cats WHERE id = 2;''')
# ```
#
# DELETE record with id=2 here
cur.execute('''DELETE FROM cats WHERE id = 2;''')
# Notice that this time we selected the row to delete using the `PRIMARY KEY` column. Remember that every row's `PRIMARY KEY` value is unique. Lil' Bub was the second row inserted into the table and thus had an id of 2.
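# The same parameterized style works for `DELETE`, and `cursor.rowcount` tells you whether anything actually matched the `WHERE` clause. A sketch on an in-memory table:

```python
import sqlite3

demo = sqlite3.connect(':memory:')
demo_cur = demo.cursor()
demo_cur.execute('CREATE TABLE cats (id INTEGER PRIMARY KEY, name TEXT);')
demo_cur.executemany('INSERT INTO cats (name) VALUES (?);', [('Maru',), ('Lil Bub',)])

# Delete the second row by its primary key, bound as a parameter
demo_cur.execute('DELETE FROM cats WHERE id = ?;', (2,))
print(demo_cur.rowcount)  # 1 row was deleted
print(demo_cur.execute('SELECT * FROM cats;').fetchall())  # [(1, 'Maru')]
```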
# ## Saving Changes
#
# While everything may look well and good, if you were to connect to the database from another Jupyter notebook (or elsewhere) the database would appear blank! That is, while the changes are reflected in your current session's connection to the database, you have yet to commit those changes to the database file on disk so that other users and connections can also view the updates.
#
# Before you commit the changes, let's demonstrate this concept.
#
# First, preview the results of the table:
#
# ```python
# cur.execute("""SELECT * FROM cats;""").fetchall()
# ```
#Preview the table via the current cursor/connection
cur.execute("""SELECT * FROM cats;""").fetchall()
# Now, to demonstrate that these changes aren't yet visible to other connections to the database, create a second connection/cursor and run the same preview:
#
# ```python
# conn2 = sqlite3.connect('pets_database.db')
# cur2 = conn2.cursor()
# cur2.execute("""SELECT * FROM cats;""").fetchall()
# ```
#Preview the table via a second cursor/connection
#Don't overwrite the previous connection: you'll lose all of your work!
conn2 = sqlite3.connect('pets_database.db')
cur2 = conn2.cursor()
cur2.execute("""SELECT * FROM cats;""").fetchall()
# As you can see, the second connection doesn't currently display any data in the cats table! To make the changes universally accessible, commit them.
#
# In this case:
#
# ```python
# conn.commit()
# ```
# Commit your changes to the database
conn.commit()
# Now, if you reload your second connection, you should see the updates reflected in the data!
#
# ```python
# conn2 = sqlite3.connect('pets_database.db')
# cur2 = conn2.cursor()
# cur2.execute("""SELECT * FROM cats;""").fetchall()
# ```
#Preview the table via a reloaded second cursor/connection
conn2 = sqlite3.connect('pets_database.db')
cur2 = conn2.cursor()
cur2.execute("""SELECT * FROM cats;""").fetchall()
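# As a convenience, a `sqlite3` connection can also be used as a context manager: the `with` block commits automatically on success and rolls back if an exception is raised, so you can't forget the `commit()` call. A sketch on an in-memory database (which only ever has one connection, so the cross-connection demo above still needs the file-based database):

```python
import sqlite3

demo = sqlite3.connect(':memory:')
demo.execute('CREATE TABLE cats (id INTEGER PRIMARY KEY, name TEXT);')

# Commits automatically when the block exits without an exception
with demo:
    demo.execute("INSERT INTO cats (name) VALUES ('Maru');")

print(demo.execute('SELECT name FROM cats;').fetchall())  # [('Maru',)]
```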
# ## Summary
#
# Congrats! In this lesson, you learned how to create a database and a table, and how to insert, update, and delete records using SQL!
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Markdown cell Title
#
# With **some** text
print("Test")
# +
import numpy as np
import pandas as pd

# Six consecutive daily dates starting 2013-01-01
dates = pd.date_range('20130101', periods=6)

# A 6x4 DataFrame of standard-normal random values, indexed by those dates
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df
# -
dates
print(df)
| Examples/ExcelTest2.ipynb |