# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Swan data structure
#
# A SwanGraph consists of several different parts that can be used individually. This page serves as an overview of several of the important features of a SwanGraph.
#
#
# ## Table of contents
#
# * [Genomic location information](data_structure.md#loc_df)
# * [Intron / exon information](data_structure.md#edge_df)
# * [Transcript information](data_structure.md#t_df)
# * [AnnData](data_structure.md#anndata)
# * [Current plotted graph information](data_structure.md#pg)
#
# We'll be using the same SwanGraph from the rest of the tutorial to examine how data is stored in the SwanGraph. Load it using the following code:
# +
import swan_vis as swan
# code to download this data is in the Getting started tutorial
sg = swan.read('../tutorials/data/swan.p')
# -
# ## <a name="loc_df"></a>Genomic location information
# Swan stores information on individual genomic locations that eventually are plotted as nodes in the SwanGraph in the `SwanGraph.loc_df` pandas DataFrame. The information in the DataFrame and the column names are described below:
# * chromosomal coordinates (chrom, coord)
# * whether or not the genomic location is present in the provided reference annotation (annotation)
# * what role the location plays in the transcript(s) that it is part of (internal, TSS, TES)
# * internal identifier in the SwanGraph (vertex_id)
sg.loc_df.head()
# ## <a name="edge_df"></a>Intron / exon information
# Swan stores information about the exons and introns that are eventually plotted as edges in the SwanGraph in the `SwanGraph.edge_df` pandas DataFrame. The information in the DataFrame and the column names are described below:
# * internal vertex ids from `SwanGraph.loc_df` that bound each edge (v1, v2)
# * strand that this edge is from (strand)
# * whether this edge is an intron or an exon (edge_type)
# * whether or not the edge is present in the provided reference annotation (annotation)
# * internal identifier in the SwanGraph (edge_id)
sg.edge_df.head()
# ## <a name="t_df"></a>Transcript information
# Swan stores information about the transcripts from the annotation and added transcriptome in the `SwanGraph.t_df` pandas DataFrame. The information in the DataFrame and the column names are described below:
# * transcript ID from the GTF (tid)
# * transcript name from the GTF, if provided (tname)
# * gene ID from the GTF (gid)
# * gene name from the GTF, if provided (gname)
# * path of edges (edge_ids from `SwanGraph.edge_df`) that make up the transcript (path)
# * path of genomic locations (vertex_ids from `SwanGraph.loc_df`) that make up the transcript (loc_path)
# * whether or not the transcript is present in the provided reference annotation (annotation)
# * novelty category of the transcript, if provided (novelty)
sg.t_df.head()
# ## <a name="anndata"></a>AnnData
# Swan stores abundance information for transcripts, TSSs, TESs, and edges using the [AnnData](https://anndata.readthedocs.io/en/latest/) data format. This allows for tracking of abundance information using multiple metrics, storage of complex metadata, and direct compatibility with plotting and analysis using [Scanpy](https://scanpy.readthedocs.io/en/stable/index.html). Since there's a lot of information online about these data formats, I'll just go over the specifics that Swan uses.
# ### General AnnData format
# The basic AnnData format is comprised of:
# * `AnnData.obs` - pandas DataFrame - information and metadata about the samples / cells / datasets
# * `AnnData.var` - pandas DataFrame - information about the variables being measured (ie genes, transcripts etc.)
# * `AnnData.X` - numpy array - information about expression of each variable in each sample
#
# In Swan, the expression data is stored in three different formats that can be accessed through different layers:
# * `AnnData.layers['counts']` - raw counts of each variable in each sample
# * `AnnData.layers['tpm']` - transcripts per million calculated per sample
# * `AnnData.layers['pi']` - percent isoform use per gene (only calculated for transcripts, TSS, TES)
# ### Transcript AnnData
# You can access transcript expression information using `SwanGraph.adata`.
#
# The variable information stored is just the transcript ID but can be merged with `SwanGraph.t_df` for more information.
sg.adata.var.head()
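# As a toy sketch (the frames below are made-up stand-ins, not the real SwanGraph tables), merging the transcript-ID-only `var` table with `t_df` recovers gene-level annotations:

```python
import pandas as pd

# hypothetical stand-ins for sg.adata.var and sg.t_df, indexed by transcript ID
var = pd.DataFrame({'tid': ['ENST01', 'ENST02']}).set_index('tid')
t_df = pd.DataFrame({'tid': ['ENST01', 'ENST02'],
                     'gid': ['ENSG01', 'ENSG01'],
                     'gname': ['ADRM1', 'ADRM1']}).set_index('tid')

# left-merge on the shared transcript ID index to pull in gene info
merged = var.merge(t_df, how='left', left_index=True, right_index=True)
print(merged)
```

The same pattern applies with the real tables, merging on `tid`.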
# The observation table holds the metadata that has been added to the SwanGraph, along with the initial dataset names taken from the column names of the added abundance table.
sg.adata.obs.head()
# The expression information is stored in `SwanGraph.adata.layers['counts']`, `SwanGraph.adata.layers['tpm']`, and `SwanGraph.adata.layers['pi']` for raw counts, TPM, and percent isoform (pi) respectively.
print(sg.adata.layers['counts'][:5, :5])
print(sg.adata.layers['tpm'][:5, :5])
print(sg.adata.layers['pi'][:5, :5])
# ### Edge AnnData
# You can access edge expression information using `SwanGraph.edge_adata`.
#
# The variable information stored is just the edge ID but can be merged with `SwanGraph.edge_df` for more information.
sg.edge_adata.var.head()
# The observation table holds the metadata that has been added to the SwanGraph, along with the initial dataset names taken from the column names of the added abundance table. It should be identical to `SwanGraph.adata.obs`.
sg.edge_adata.obs.head()
# And similarly, counts and TPM of each edge are stored in `SwanGraph.edge_adata.layers['counts']` and `SwanGraph.edge_adata.layers['tpm']`. This data is very sparse though so it shows up as all zeroes here!
print(sg.edge_adata.layers['counts'][:5, :5])
print(sg.edge_adata.layers['tpm'][:5, :5])
# ### TSS / TES AnnData
# You can access TSS and TES expression information using `SwanGraph.tss_adata` and `SwanGraph.tes_adata` respectively.
#
# Unlike the other AnnDatas for edge and transcript expression, the `AnnData.var` table holds more information:
# * automatically-generated TSS or TES id, which is made up of the gene ID the TSS or TES belongs to and its number (tss_id or tes_id)
# * gene ID that the TSS / TES belongs to (gid)
# * gene name that the TSS / TES belongs to, if provided (gname)
# * vertex ID from `SwanGraph.loc_df` that the TSS / TES came from (vertex_id)
# * automatically-generated TSS or TES id, which is made up of the gene name (if provided) that the TSS or TES belongs to and its number (tss_name or tes_name)
print(sg.tss_adata.var.head())
print(sg.tes_adata.var.head())
# Again the metadata in `SwanGraph.tss_adata` and `SwanGraph.tes_adata` should be identical to the metadata in the other AnnDatas.
print(sg.tss_adata.obs.head())
print(sg.tes_adata.obs.head())
# And finally, expression data for each TSS / TES are stored in the following layers:
# `SwanGraph.tss_adata.layers['counts']`, `SwanGraph.tss_adata.layers['tpm']`, `SwanGraph.tss_adata.layers['pi']`, `SwanGraph.tes_adata.layers['counts']`, `SwanGraph.tes_adata.layers['tpm']`, `SwanGraph.tes_adata.layers['pi']`
r = 5
start_c = 20
end_c = 25
print(sg.tss_adata.layers['counts'][:r, start_c:end_c])
print(sg.tss_adata.layers['tpm'][:r, start_c:end_c])
print(sg.tss_adata.layers['pi'][:r, start_c:end_c])
print()
print(sg.tes_adata.layers['counts'][:r, start_c:end_c])
print(sg.tes_adata.layers['tpm'][:r, start_c:end_c])
print(sg.tes_adata.layers['pi'][:r, start_c:end_c])
# ## <a name="pg"></a>Current plotted graph information
# To reduce run time for generating gene reports, Swan stores the subgraph that is used to generate plots for any specific gene in `SwanGraph.pg`. This object is very similar to the parent `SwanGraph` object. It has a `loc_df`, `edge_df`, and `t_df` that just consist of the nodes, edges, and transcripts that make up a specific gene. This data structure can be helpful for understanding what is going on in generated plots as the node labels are not consistent with the display labels in Swan plots.
# For instance, let's again plot ADRM1.
sg.plot_graph('ADRM1')
# In `SwanGraph.pg.loc_df`, you can see what genomic location each node plotted in the gene's graph corresponds to:
sg.pg.loc_df.head()
# In `SwanGraph.pg.edge_df`, you can see information about each edge, indexed by the subgraph vertex IDs from `SwanGraph.pg.loc_df`:
sg.pg.edge_df.head()
# And finally, `SwanGraph.pg.t_df` holds the information about each transcript in the gene:
sg.pg.t_df.head()
| faqs/Data structure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import glob, os
# replace spaces in filenames with dashes
for filename in glob.glob('*.*'):
    new_name = filename.replace(' ', '-')
    os.rename(filename, new_name)
# list the renamed files
for filename in glob.glob('*.*'):
    print(filename)
# -
| 02-Camera-Calibration/.ipynb_checkpoints/Replace-filenames-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", category=FutureWarning)
import mxnet as mx
mx.__version__
a = mx.nd.ones((2, 3), mx.gpu())
b = a * 2 + 1
b.asnumpy()
| mxnet_intialize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
#importing the dataframe
news_df = pd.read_csv("../data/News.csv")
news_df.head()
news_df.shape
# ### About the data
#
# The given dataset contains a large number of news article headlines mapped together with their
# sentiment scores and their respective social feedback on multiple platforms. The collected data accounts
# for about 93,239 news items on four different topics: Economy, Microsoft, Obama and Palestine (UCI
# Machine Learning Repository, n.d.)
#
# The attributes present in the dataset are:
# - **IDLink (numeric):** Unique identifier of news items
# - **Title (string):** Title of the news item according to the official media sources
# - **Headline (string):** Headline of the news item according to the official media sources
# - **Source (string):** Original news outlet that published the news item
# - **Topic (string):** Query topic used to obtain the items in the official media sources
# - **PublishDate (timestamp):** Date and time of the news items' publication
# - **SentimentTitle (numeric):** Sentiment score of the text in the news items' title
# - **SentimentHeadline (numeric):** Sentiment score of the text in the news items' headline
# - **Facebook (numeric):** Final value of the news items' popularity according to the social media
# source Facebook
# - **GooglePlus (numeric):** Final value of the news items' popularity according to the social media
# source Google+
# - **LinkedIn (numeric):** Final value of the news items' popularity according to the social media
# source LinkedIn
#
# For this project only the Headline and SentimentHeadline attributes will be used, and news related to Microsoft will be removed as it is more tech-centric and largely irrelevant in the context of Nepal.
# Remove data with neutral sentiment
news_df = news_df[news_df['SentimentHeadline'] != 0]
# Data with positive sentiment
news_df[news_df['SentimentHeadline'] > 0].shape
# Data with negative sentiment
news_df[news_df['SentimentHeadline'] < 0].shape
# It seems like there is almost three times more negative news (while considering neutral news as negative) than positive news.
# ### Data Preprocessing
#Dropping news related to microsoft
news_df = news_df[news_df['Topic'] != "microsoft"]
#Removing the irrelevant columns
news_df = news_df[['Headline', 'SentimentHeadline']]
news_df.info()
# In general, sentiment scores above 0.05 are considered positive.
# Since we are only interested in filtering good or positive news,
# we will label any score above 0 as positive and the rest as negative.
def is_positive(sentiment_score):
    if sentiment_score > 0:
        return 1
    else:
        return 0
news_df['Is_SentimentHeadline_Positive'] = news_df['SentimentHeadline'].apply(is_positive)
# Removing SentimentHeadline column
news_df = news_df[['Headline','Is_SentimentHeadline_Positive']]
news_df.head()
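# On a small made-up frame with the same columns (hypothetical rows, not drawn from the real data), the class balance after labeling can be checked with `value_counts`:

```python
import pandas as pd

# hypothetical miniature of news_df after labeling
toy = pd.DataFrame({
    'Headline': ['markets rally', 'storm damages crops', 'new park opens'],
    'Is_SentimentHeadline_Positive': [1, 0, 1],
})
# counts per class: how many positive (1) vs negative (0) headlines
print(toy['Is_SentimentHeadline_Positive'].value_counts())
```

Running the same call on `news_df` shows the actual class balance of the labeled dataset.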
# ### Text Preprocessing
# +
# Removing Punctuations and converting all word to lowercase
import string
import nltk
def remove_proper_noun(text):
    text = nltk.tag.pos_tag(text.split())
    edited_text = [word for word, tag in text if tag != 'NNP' and tag != 'NNPS']
    return ' '.join(edited_text)
def remove_punctuation(text):
    text = remove_proper_noun(text)
    no_punctuation_text = ''.join([i for i in str(text) if i not in string.punctuation])
    return no_punctuation_text.lower()
# -
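# The punctuation-stripping step can be sanity-checked in isolation (this sketch skips the proper-noun removal above, which needs NLTK's tagger data to be downloaded):

```python
import string

def strip_punct_lower(text):
    # drop punctuation characters and lowercase the rest
    return ''.join(ch for ch in text if ch not in string.punctuation).lower()

print(strip_punct_lower("Hello, World!"))  # hello world
```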
news_df['Headline'] = news_df['Headline'].apply(remove_punctuation)
news_df.head()
import spacy
nlp = spacy.load("en_core_web_sm")
# +
import re
def remove_nonwords(str_):
    return re.sub(r"[^A-Za-z ]\w+[^A-Za-z]*", ' ', str_)
# Lemmatization and removing stop words and non-words
def text_preprocessing(text):
    text = remove_nonwords(text)
    tokenized_text = [token.lemma_ for token in nlp(text)]
    no_stopwords_list = [i.lower() for i in tokenized_text if i not in nlp.Defaults.stop_words]
    lemma_text = ' '.join(no_stopwords_list)
    return lemma_text
# -
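# A quick sanity check of the non-word regex on its own (multi-character tokens that start with a non-letter, such as numbers, are collapsed to a space):

```python
import re

def remove_nonwords(str_):
    # same pattern as above: a non-letter followed by word characters
    return re.sub(r"[^A-Za-z ]\w+[^A-Za-z]*", ' ', str_)

cleaned = remove_nonwords('abc 123 def')
print(cleaned)  # the digits are gone, the words survive
```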
# Preprocessing the Headline text
news_df['Headline'] = news_df['Headline'].apply(text_preprocessing)
news_df.head()
# Removing all Null
news_df = news_df[news_df['Headline'].notnull()]
# Dropping all Nan
news_df = news_df.dropna()
# dropping ALL duplicate values
news_df.drop_duplicates(subset="Headline", keep=False, inplace=True)
news_df.to_csv("../Data/Clean_data.csv", index=False)
| Notebook/Good News Classifying Model - Cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hello world
# ## Optimization algorithms: demo
# <em><b>Note</b>: This is just a demo of optimization techniques - not related to any real-life problem.</em>
#
# Imagine the world of strings where the environment demands that the strings are very fit. An unfit string cannot survive.
#
# How do we define the fitness of a string? The environment demands that the string should be as similar as possible to the target string $s$=''Hello, world''. The closer the string is to ''Hello, world'', the better its chances of survival.
#
# <b> The problem:</b> given a set of random strings of length len($s$) over alphabet $\sigma=\{32 \ldots 122\}$, produce the string which best matches the environment, i.e. with a minimum fitness score.
#
# Let <em>Weighted Hamming Distance</em> between two strings $x$ and $y$ (both of length $n$) be defined as:<br>
# $WH(x,y) = \sum_{i=0}^{n-1} abs(x[i] - y[i])$
#
# $WH(x,y)$ estimates how far is $x$ from $y$. The lower $WH(x,y)$, the closer string $x$ approaches the target string $y$.<br>
# In the space of $\sigma^n$ of all possible strings, we are looking for a global minimum - the string $m$ such that $WH(m,s)$ is minimized. <br>
#
# If we compute $WH(x,s)$ for an arbitrary string $x$, we say that we evaluate the <em>fitness</em> of $x$ with respect to the target string $s$. This is our <em>fitness function</em>.
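# A tiny worked example makes the definition concrete: the strings below differ only at position 1, so the distance is just that one ordinal gap.

```python
# WH('Hallo', 'Hello') = |ord('a') - ord('e')| = |97 - 101| = 4; all other positions match
WH = lambda x, y: sum(abs(ord(a) - ord(b)) for a, b in zip(x, y))
print(WH('Hallo', 'Hello'))  # 4
```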
# ## 1. Preparation
# flag which can be turned on/off to print the steps of each optimization
print_steps = True
# ### 1.1. String fitness function
# +
s = 'Hello, world'
n = len(s)
# fitness function - weighted hamming distance
string_fitness = lambda x: sum([abs(ord(x[i])- ord(s[i])) for i in range(n)])
# -
# ### 1.2. Generating initial random strings
# +
import random
alphabet = range(32,123)
def get_random_string(n, sigma):
    t = [chr(random.choice(sigma)) for i in range(n)]
    return ''.join(t)
# build initial population of many random strings of target length n
def get_rand_population(population_size, n, sigma):
    population = []
    for i in range(0, population_size):
        individual_str = get_random_string(n, sigma)
        population.append(individual_str)
    return population
# test
print(get_random_string(len(s), alphabet))
print(get_rand_population(3, len(s), alphabet ))
# -
# ## 2. Random optimize
#
# The simplest possible idea is to try a vast number of random solutions and select the one with the best fitness score. Let's see how close we can get to the target $s$=''Hello, world'' with this approach.
#
# After all, isn't evolution just a lottery? ''Physics makes rules, evolution rolls the dice'' (<NAME>, "The Equations of Life").
# +
def random_optimize(population, fitness_function):
    best_score = None
    best_solution = None
    for i in range(len(population)):
        # Calculate fitness
        score = fitness_function(population[i])
        # Compare it to the best one so far
        if best_score == None or score < best_score:
            best_score = score
            best_solution = population[i]
            if print_steps:
                print(best_solution, " -> score:", best_score)
    return (best_score, best_solution, len(population))
# Now the test
string_population = get_rand_population(204800, len(s), alphabet)
best_score, best_sol, iterations = random_optimize (string_population, string_fitness)
# -
print()
print("*************Rand optimize**************")
print("trials:{}, best solution:'{}', best score:{}".format(iterations,best_sol,best_score))
# Randomly trying different solutions is very inefficient because the probability of randomly generating the perfect Hello, world is $(\frac{1}{91})^{12}$. No matter how many random guesses you try, the fitness of the resulting string stays very low (the distance from the target remains high).
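# The size of the search space makes this concrete: the alphabet $\sigma=\{32 \ldots 122\}$ has 91 characters, so there are $91^{12}$ (roughly $3 \times 10^{23}$) candidate strings of length 12.

```python
sigma_size = 123 - 32           # alphabet {32..122} has 91 characters
n = len('Hello, world')         # target length is 12
search_space = sigma_size ** n  # number of candidate strings
print(f"{search_space:.2e}")    # on the order of 10**23
```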
# ## 3. Hill climbing
#
# We did not advance far with the random optimization. This happens because each time we come up with some solution and evaluate its fitness - we discard it and try another one - not related to the previous.
#
# The idea behind the <em>hill climbing</em> optimization is to start at a random point in the space (choose one random string), and try to move from this point in a direction of better fitness. We will generate a set of neighbors for a randomly selected string, and see if any of these neighbors improves the overall fitness score of the string.
#
# We continue moving into the direction of this better solution, until there is no more improvement.
#
# <img src="images/hillclimbing.jpg" style="height:200px;">
#
#
# The algorithm may produce an optimal solution, but it may also get stuck at the local minimum.
# <img src="images/localmin.jpg" style="height:200px;">
# +
def hillclimb_optimize(start_point, fitness_function):
    sol = start_point
    score = fitness_function(sol)
    iterations = 0
    # Main loop
    while 1:
        iterations += 1
        current_best_score = score
        # Create neighboring solutions
        neighbors = []
        for j in range(len(s)):
            # Change character at position j one up or one down
            if ord(sol[j]) > 32:
                neighbors.append(sol[0:j] + chr(ord(sol[j]) - 1) + sol[j+1:])
            if ord(sol[j]) < 122:
                neighbors.append(sol[0:j] + chr(ord(sol[j]) + 1) + sol[j+1:])
        for n_str in neighbors:
            n_score = fitness_function(n_str)
            if n_score < score:
                score = n_score
                sol = n_str
                if print_steps:
                    print(sol, " -> score:", score)
        # If there's no improvement, then we've reached the bottom of the hill
        if score == current_best_score:
            break
        if score == 0:
            break
    return (score, sol, iterations)
# Now the test
rand_str = get_random_string(len(s), alphabet)
best_score, best_sol, iterations = hillclimb_optimize(rand_str, string_fitness)
# -
print()
print("*************Hill climbing****************")
print("steps:{}, best solution:'{}', best score:{}".format(iterations,best_sol,best_score))
# ## 4. Simulated annealing
# The idea of <em>simulated annealing</em> is borrowed from physics. In metallurgical annealing we heat a metal (alloy) to a very high temperature, the crystal bonds break, and the atoms diffuse more freely. If we cool it slowly, the atoms tend to form more regular crystals, producing an alloy with low thermodynamic energy.
#
# <img style="height:250px;float:right;padding:4px;" src="images/sim_annealing.jpg" >
#
# In the <em>simulated annealing</em> algorithm we set the initial temperature very high, and then we generate a single random neighbor of the current solution. The fitness of this neighbor can be better or worse than that of the current solution. When the temperature is high, the probability of selecting a worse solution is higher. This allows the algorithm to better explore the search space and get out of local minima. The temperature gradually decreases, so at the end we no longer accept worse solutions.
#
# The criterion of accepting ''bad'' solutions:
#
# $p=e^{\frac{-\Delta F}{T}}>R(0,1)$
#
# where $T$ is the current temperature, $R(0,1)$ is a random number between $0$ and $1$, and $\Delta F$ is the difference between the fitness score of new solution and the old solution.
#
# Since the temperature $T$ (the willingness to accept a worse solution) starts very high,
# the exponent will be close to 0, and $p$ will almost be 1. As the temperature decreases, the difference between the new fitness score and the old one becomes more important - a bigger difference leads to a lower probability, so the
# algorithm will not accept solutions which do not improve fitness - converging to a local minimum after exploring a large global search space.
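# Plugging numbers into the acceptance criterion shows the effect of cooling: the same fitness penalty $\Delta F = 10$ is almost always accepted at $T = 10000$ but essentially never at $T = 1$.

```python
import math

def acceptance(delta_f, T):
    # p = e^(-ΔF / T), the probability of accepting a worse solution
    return math.exp(-delta_f / T)

print(acceptance(10, 10000.0))  # close to 1: worse solution usually accepted
print(acceptance(10, 1.0))      # tiny: worse solution almost never accepted
```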
# +
import math
def annealing_optimize(start_sol, fitness_function, T=10000.0, cool=0.95, step=1):
    sol = start_sol
    iterations = 0
    # Gradual cooling
    while T > 0.01:
        score = fitness_function(sol)
        # Choose one of the positions randomly
        i = random.randint(0, len(sol) - 1)
        # Choose a random direction to change the character at position i
        dir = random.randint(-step, step)
        change = ord(sol[i]) + dir
        # out of domain
        if change > 122 or change < 32:
            continue
        iterations += 1
        # Create a new solution with one of the characters changed
        new_sol = sol[:i] + chr(change) + sol[i+1:]
        # Calculate the new cost
        new_score = fitness_function(new_sol)
        # Does it make the probability cutoff?
        p = pow(math.e, -(new_score - score) / T)
        if new_score < score or p > random.random():
            sol = new_sol
            score = new_score
            if print_steps:
                print(sol, "-> score:", score)
        if score == 0:
            break
        # Decrease the temperature
        T = T * cool
    return (score, sol, iterations)
# now test
rand_str = get_random_string(len(s), alphabet)
best_score, best_sol, iterations = annealing_optimize(rand_str, string_fitness,
T=204800.0, cool=0.999, step=1)
# -
print()
print("*************Simulated annealing***************")
print("steps:{}, best solution:'{}', best score:{}".format(iterations,best_sol,best_score))
# ## 5. Genetic algorithm
#
# This optimization technique is inspired by the theory of evolution. The algorithm starts with a population of random individuals, and selects the ones with the best fitness score (the elite). It continues to the next generation with this group. In order to enrich the genetic pool in the current generation, the algorithm adds random mutations and crossover to the elite group. After a predefined number of generations, the algorithm returns the top-fit individual.
# ### 5.1. Mutation and crossover
# +
# Mutation operation
def string_mutation(individual):
    i = random.randint(0, len(individual) - 1)
    # mutation changes the character at a random position to any valid character
    rchar = chr(random.randint(32, 122))
    individual = individual[0:i] + rchar + individual[(i + 1):]
    return individual
# Mate operation (crossover)
def string_crossover(s1, s2):
    # find the point of crossover
    i = random.randint(0, len(s1) - 1)
    return s1[:i] + s2[i:]
# -
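# Two quick properties worth checking (a self-contained sketch that re-declares the two operators with the same bodies as above): mutation preserves string length, and crossover output is a prefix of one parent glued to a suffix of the other.

```python
import random

def string_mutation(individual):
    # replace one random position with a random valid character
    i = random.randint(0, len(individual) - 1)
    rchar = chr(random.randint(32, 122))
    return individual[:i] + rchar + individual[i + 1:]

def string_crossover(s1, s2):
    # splice the two parents at a random point
    i = random.randint(0, len(s1) - 1)
    return s1[:i] + s2[i:]

parent1, parent2 = 'aaaaaaaaaaaa', 'bbbbbbbbbbbb'
child = string_crossover(parent1, parent2)
assert len(string_mutation(parent1)) == len(parent1)  # length preserved
assert set(child) <= {'a', 'b'}                       # only parent characters
print(child)
```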
# ### 5.2. Algorithm
#
# Initial population - a list of random strings
# +
def genetic_optimize(population, fitness_function,
                     mutation_function, mate_function,
                     mutation_probability, elite_ratio,
                     maxiterations):
    # How many winners to consider from each generation?
    original_population_size = len(population)
    top_elite = int(elite_ratio * original_population_size)
    # Main loop
    iterations = 0
    for i in range(maxiterations):
        iterations += 1
        individual_scores = [(fitness_function(v), v) for v in population]
        individual_scores.sort()
        ranked_individuals = [v for (s, v) in individual_scores]
        # Start with the pure winners
        population = ranked_individuals[0:top_elite]
        # Add mutated and bred forms of the winners
        while len(population) < original_population_size:
            if random.random() < mutation_probability:
                # Mutation of a random elite individual
                # (randint is inclusive, so stay within the elite)
                c = random.randint(0, top_elite - 1)
                population.append(mutation_function(ranked_individuals[c]))
            else:
                # Crossover of two random elite individuals
                c1 = random.randint(0, top_elite - 1)
                c2 = random.randint(0, top_elite - 1)
                population.append(mate_function(ranked_individuals[c1], ranked_individuals[c2]))
        # Print current best score
        if print_steps:
            print(individual_scores[0][1], " -> score:", individual_scores[0][0])
        if individual_scores[0][0] == 0:
            break
    # return the best solution
    return (individual_scores[0][0], individual_scores[0][1], iterations)
string_population = get_rand_population(2048, len(s), alphabet)
best_score, best_sol, iterations = genetic_optimize(string_population, string_fitness,
                                                    string_mutation, string_crossover,
                                                    mutation_probability=0.25, elite_ratio=0.1,
                                                    maxiterations=100)
print()
print("*****************GENETIC ALGORITHM ***************")
print("generations:{}, best solution:'{}', best score:{}".format(iterations,best_sol,best_score))
# -
# ### This is the end of the ''Hello, world'' demo.
# Copyright © 2022 <NAME>
| .ipynb_checkpoints/hello_world_demo-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Haxby data set:
# Haxby is a high-quality block-design fMRI dataset from a study on face & object representation in the human ventral temporal cortex (This cortex is involved in the high-level visual processing of complex stimuli). It consists of 6 subjects with 12 runs per subject. In this experiment during each run, the subjects passively viewed greyscale images of 8 object categories, grouped in 24s blocks separated by rest periods. Each image was shown for 500ms and was followed by a 1500ms inter-stimulus interval.
#
# ## Project Goal
# For this project I am applying machine learning and deep learning methods to learn about brain decoding, predicting which object category the subject saw by analyzing the fMRI activity recorded within masks of the ventral stream.
# +
import nibabel as nib
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import time
import plotly.express as px
from nilearn.plotting import plot_anat, show, plot_stat_map, plot_matrix
from nilearn import datasets, plotting, image
from nilearn.image import mean_img, get_data
from nilearn.input_data import NiftiMasker
from sklearn.model_selection import train_test_split, LeaveOneGroupOut, cross_val_score, GridSearchCV
from sklearn.linear_model import LogisticRegression, RidgeClassifier, RidgeClassifierCV
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.feature_selection import SelectPercentile, f_classif, SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.dummy import DummyClassifier
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn import tree
# +
# #%matplotlib inline
# #%load_ext memory_profiler
# -
# ## Dataset
# +
# If we don't define which subject by default 2nd subject will be fetched.
haxby_ds = datasets.fetch_haxby(subjects=[4], fetch_stimuli=True)
len(haxby_ds.func)
# -
# Read the data documentation
print(haxby_ds['description'].decode('utf-8'))
# Look inside the data
haxby_ds.keys()
haxby_ds.session_target
# +
mask_file = haxby_ds.mask
labels = haxby_ds.session_target[0]
mask_vt_file = haxby_ds.mask_vt[0]
mask_face_file = haxby_ds.mask_face[0]
# 'func' is a list of filenames: one for each subject
func_file = haxby_ds.func[0]
# Load the behavioral data that I will predict
beh_label = pd.read_csv(haxby_ds.session_target[0], sep=" ")
# Extract tags indicating to which acquisition run a tag belongs
session = beh_label['chunks']
# Preparing the data (Load target information as string and give a numerical identifier to each)
y = beh_label['labels']
# Identify the resting state
nonrest_task_mask = (y != 'rest')
# Remove the resting state and find names of remaining active labels
categories = y[nonrest_task_mask].unique()
#session = session[nonrest_task_mask]
# Get the labels of the numerical conditions represented by the vector y
unique_conditions, order = np.unique(categories, return_index=True)
# Sort the conditions by the order of appearance
unique_conditions = unique_conditions[np.argsort(order)]
# Extract tags indicating to which acquisition run a tag belongs
session_labels = beh_label['chunks'][nonrest_task_mask]
# -
# Print basic information on the dataset
print('Functional nifti images are located at: %s' % haxby_ds.func[0])
print('Mask nifti image (3D) is located at: %s' % haxby_ds.mask)
print('First subject functional nifti images (4D) are at: %s' %func_file) # 4D data
# +
# Checkout the confounds of the data
session_target = pd.read_csv(haxby_ds['session_target'][0], sep='\t')
session_target.head()
# -
# ## Preparing the fMRI data (smooth and apply the mask)
# +
# Standardizing and smoothing the data
nifti_masker = NiftiMasker(mask_img=mask_file, standardize=True, sessions=session, smoothing_fwhm=4,
memory="nilearn_cache", memory_level=1)
X = nifti_masker.fit_transform(func_file)
# -
# ## Plot Haxby masks
masker = NiftiMasker(mask_img=mask_vt_file, standardize=True)
fmri_masked = masker.fit_transform(func_file)
# The variable “fmri_masked” is a numpy array
print(fmri_masked)
print(fmri_masked.shape)
# ## Converting the Mask to a Matrix
# +
# load bold image into memory as a nibabel image
func = nib.load(func_file)
# load mask image into memory as a nibabel image
mask = nib.load(mask_file)
# get the physical data of the mask (3D matrix of voxels)
mask_data = mask.get_data()
print(func.shape)
print(mask.shape)
print(len(mask_data[mask_data==1]))
# +
# Create the masker object
masker = NiftiMasker(mask_img=mask_file, standardize=True)
# Create a numpy matrix from the BOLD data, using the mask for the transformation
func_masked = masker.fit_transform(func_file)
# View the dimensions of the matrix. The shape represents the number of time-stamps by the number of voxels in the mask.
print(func_masked.shape)
# -
# Viewing the numerical values of the matrix
print(func_masked)
# Load the labels from a csv into an array using pandas
stimuli = pd.read_csv(labels, sep=' ')
# +
# View the dimensions of the matrix
print(stimuli.shape)
# Viewing the values of the matrix
print(stimuli)
# -
targets = stimuli['labels']
print(targets)
targets_mask = targets.isin(['face', 'cat'])
print(targets_mask)
func_masked = func_masked[targets_mask]
func_masked.shape
# +
targets_masked = targets[targets_mask]
print(targets_masked.shape)
print(targets_masked)
# -
# # ML Models
# ## Decoding with ANOVA + SVM: face vs house in the Haxby dataset
# +
# Restrict the analysis to faces and houses
condition_mask = beh_label['labels'].isin(['face', 'house'])
conditions_f_h = y[condition_mask]
# Confirm that I now have 2 conditions
print(conditions_f_h.unique())
session_f_h = beh_label[condition_mask].to_records(index=False)
print(session_f_h.dtype.names)
# -
# Apply our condition_mask to fMRI data
X_f_h = X[condition_mask]
# +
# Build the decoder
# Define the prediction function to be used. Here I am using Support Vector Classification with a linear kernel
svc = SVC(kernel='linear')
# Define the dimension reduction to be used (keep 5% of voxels)
feature_selection = SelectPercentile(f_classif, percentile=5)
# With the SVC classifier and the feature selection (SelectPercentile) defined, plug them together in a *pipeline*:
anova_svc = Pipeline([('anova', feature_selection), ('svc', svc)])
# -
# Fit the decoder and predict
anova_svc.fit(X_f_h, conditions_f_h)
y_pred = anova_svc.predict(X_f_h)
# +
# Obtain prediction scores via cross validation
# Define the cross-validation scheme: LeaveOneGroupOut cross-validation
cv = LeaveOneGroupOut()
# Compute the prediction accuracy for the different folds (i.e. session)
cv_scores = cross_val_score(anova_svc, X_f_h, conditions_f_h, cv=cv, groups=session_f_h)
# Return the corresponding mean prediction accuracy
classification_accuracy = cv_scores.mean()
# Print the results
print("Classification accuracy: %.4f / Chance level: %f" % (classification_accuracy, 1. / len(conditions_f_h.unique())))
# -
# Visualizing the results:
# +
# Look at the SVC’s discriminating weights
coef = svc.coef_
# reverse feature selection
coef = feature_selection.inverse_transform(coef)
# reverse masking
weight_img = masker.inverse_transform(coef)
# Use the mean image as a background to avoid relying on anatomical data
mean_img = image.mean_img(func_file)
# Create the figure
plot_stat_map(weight_img, mean_img, title='SVM weights')
# Save the results as a Nifti file
#weight_img.to_filename('haxby_face_vs_house.nii')
# -
# ## ROI-based decoding analysis
# In this section, I am looking at decoding accuracy for different objects in three different masks: the full ventral stream (mask_vt), the house-selective areas (mask_house), and the face-selective areas (mask_face), which were defined via a standard General Linear Model (GLM) based analysis.
# extract tags indicating to which acquisition run a tag belongs
session_labels = beh_label["chunks"][nonrest_task_mask]
# +
# The classifier: a support vector classifier
classifier = SVC(C=1., kernel="linear")
# A classifier to set the chance level
dummy_classifier = DummyClassifier()
# Make a data splitting object for cross validation
cv = LeaveOneGroupOut()
mask_names = ['mask_vt', 'mask_face', 'mask_house']
mask_scores = {}
mask_chance_scores = {}
for mask_name in mask_names:
    print("Working on mask %s" % mask_name)
    # Standardizing
    mask_filename = haxby_ds[mask_name][0]
    masker = NiftiMasker(mask_img=mask_filename, standardize=True)
    masked_timecourses = masker.fit_transform(func_file)[nonrest_task_mask]
    mask_scores[mask_name] = {}
    mask_chance_scores[mask_name] = {}
    for category in categories:
        print("Processing %s %s" % (mask_name, category))
        classification_target = (y[nonrest_task_mask] == category)
        mask_scores[mask_name][category] = cross_val_score(
            classifier,
            masked_timecourses,
            classification_target,
            cv=cv,
            groups=session_labels,
            scoring="roc_auc",
        )
        mask_chance_scores[mask_name][category] = cross_val_score(
            dummy_classifier,
            masked_timecourses,
            classification_target,
            cv=cv,
            groups=session_labels,
            scoring="roc_auc",
        )
        print("Scores: %1.2f +- %1.2f" % (
            mask_scores[mask_name][category].mean(),
            mask_scores[mask_name][category].std()))
# -
# ## Different multi-class strategies
# I compare one vs all and one vs one multi-class strategies: the overall cross-validated accuracy and the confusion matrix.
# +
# Build the decoders, using scikit-learn
svc_ovo = OneVsOneClassifier(Pipeline([
    ('anova', SelectKBest(f_classif, k=500)),
    ('svc', SVC(kernel='linear'))
]))
svc_ova = OneVsRestClassifier(Pipeline([
    ('anova', SelectKBest(f_classif, k=500)),
    ('svc', SVC(kernel='linear'))
]))
# +
# Remove the "rest" condition
y = y[nonrest_task_mask]
X = X[nonrest_task_mask]
cv_scores_ovo = cross_val_score(svc_ovo, X, y, cv=5, verbose=1)
cv_scores_ova = cross_val_score(svc_ova, X, y, cv=5, verbose=1)
print('OvO:', cv_scores_ovo.mean())
print('OvA:', cv_scores_ova.mean())
# +
# Plot barplots of the prediction scores
plt.figure(figsize=(4, 3))
plt.boxplot([cv_scores_ova, cv_scores_ovo])
plt.xticks([1, 2], ['One vs All', 'One vs One'])
plt.title('Prediction: accuracy score')
# -
# ## Plot a confusion matrix:
# To make sure that the system is not confusing two classes.
# +
# I fit on the first 5 sessions and plot a confusion matrix on the remaining sessions
# With the argument order used below, matrix rows represent the predicted classes and columns the actual classes.
svc_ovo.fit(X[session_labels < 5], y[session_labels < 5])
y_pred_ovo = svc_ovo.predict(X[session_labels >= 5])
plot_matrix(confusion_matrix(y_pred_ovo, y[session_labels >= 5]),
            labels=unique_conditions, cmap='Blues')
plt.title('Confusion matrix: One vs One')
plt.xticks(rotation=45)
plt.yticks(rotation=0)
svc_ova.fit(X[session_labels < 5], y[session_labels < 5])
y_pred_ova = svc_ova.predict(X[session_labels >= 5])
plot_matrix(confusion_matrix(y_pred_ova, y[session_labels >= 5]),
            labels=unique_conditions, cmap='Blues')
plt.title('Confusion matrix: One vs All')
plt.xticks(rotation=45)
plt.yticks(rotation=0)
# +
# Fit on the first 10 sessions and plot a confusion matrix on the last 2 sessions
svc_ovo.fit(X[session_labels < 10], y[session_labels < 10])
y_pred_ovo = svc_ovo.predict(X[session_labels >= 10])
plot_matrix(confusion_matrix(y_pred_ovo, y[session_labels >= 10]),
            labels=unique_conditions, cmap='Blues')
plt.title('Confusion matrix: One vs One')
plt.xticks(rotation=45)
plt.yticks(rotation=0)
svc_ova.fit(X[session_labels < 10], y[session_labels < 10])
y_pred_ova = svc_ova.predict(X[session_labels >= 10])
plot_matrix(confusion_matrix(y_pred_ova, y[session_labels >= 10]),
            labels=unique_conditions, cmap='Blues')
plt.title('Confusion matrix: One vs All')
plt.xticks(rotation=45)
plt.yticks(rotation=0)
# -
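# Note that scikit-learn's `confusion_matrix(y_true, y_pred)` puts true labels on the rows; because the calls above pass the predictions as the first argument, the resulting matrices are transposed relative to that default. A minimal self-contained check of the default orientation:

```python
from sklearn.metrics import confusion_matrix

y_true = ['face', 'face', 'house']
y_pred = ['face', 'house', 'house']

cm = confusion_matrix(y_true, y_pred, labels=['face', 'house'])
# Default orientation: rows = true class, columns = predicted class
print(cm.tolist())  # → [[1, 1], [0, 1]]
```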
# ### Classification
# Standardizing
masker = NiftiMasker(mask_img=mask_vt_file, standardize=True)
masked_timecourses = masker.fit_transform(func_file)[nonrest_task_mask]
# +
# Support vector classifier
svm = SVC(C=1., kernel="linear")
# The logistic regression
#logistic = LogisticRegression(C=1., penalty="l1", solver='liblinear')
logistic_50 = LogisticRegression(C=50., penalty="l1", solver='liblinear')
#logistic_l2 = LogisticRegression(C=1., penalty="l2", solver='liblinear')
# Cross-validated versions of these classifiers
svm_cv = GridSearchCV(SVC(C=1., kernel="linear"),
                      param_grid={'C': [.1, 1., 10., 100.]},
                      scoring='f1', n_jobs=1, cv=3)
logistic_cv = GridSearchCV(
    LogisticRegression(C=1., penalty="l1", solver='liblinear'),
    param_grid={'C': [.1, 1., 10., 100.]},
    scoring='f1', cv=3,
)
# The ridge classifier has a specific 'CV' object that can set its parameters faster than using a GridSearchCV
ridge = RidgeClassifier()
ridge_cv = RidgeClassifierCV()
# A dictionary, to hold all our classifiers
classifiers = {'SVC': svm,
               'SVC cv': svm_cv,
               'log l1 50': logistic_50,
               'log l1 cv': logistic_cv,
               'ridge': ridge,
               'ridge cv': ridge_cv
               }
# -
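# The efficiency claim above can be sketched in isolation: `RidgeClassifierCV` scores every candidate `alpha` with an efficient built-in leave-one-out formula instead of refitting per fold. A minimal example on synthetic data (the names `X_demo`/`y_demo` are illustrative, not the Haxby timecourses):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifierCV

# Synthetic stand-in data (not the Haxby timecourses)
X_demo, y_demo = make_classification(n_samples=60, n_features=10, random_state=0)

# Every candidate alpha is scored in a single pass -- no per-fold refitting
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0]).fit(X_demo, y_demo)
print(clf.alpha_)  # the regularization strength selected internally
```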
# Prediction scores:
# +
# Make a data splitting object for cross validation
cv = LeaveOneGroupOut()
classifiers_scores = {}
for classifier_name, classifier in sorted(classifiers.items()):
    classifiers_scores[classifier_name] = {}
    print(70 * '_')
    for category in categories:
        classification_target = y[nonrest_task_mask].isin([category])
        t0 = time.time()
        classifiers_scores[classifier_name][category] = cross_val_score(
            classifier,
            masked_timecourses,
            classification_target,
            cv=cv,
            groups=session_labels,
            scoring="f1",
        )
        print(
            "%10s: %14s -- scores: %1.2f +- %1.2f, time %.2fs" %
            (
                classifier_name,
                category,
                classifiers_scores[classifier_name][category].mean(),
                classifiers_scores[classifier_name][category].std(),
                time.time() - t0,
            ),
        )
# +
# Make a rudimentary diagram
plt.figure()
tick_position = np.arange(len(categories))
plt.xticks(tick_position, categories, rotation=45)
for color, classifier_name in zip(
        ['#48110C', '#808080', '#DB4C2C', '#E38C2D', '#EBC137'],
        sorted(classifiers)):
    score_means = [classifiers_scores[classifier_name][category].mean()
                   for category in categories]
    plt.bar(tick_position, score_means, label=classifier_name, width=.11, color=color)
    tick_position = tick_position + .09
plt.ylabel('Classification accuracy (f1 score)')
plt.xlabel('Visual stimuli category')
plt.ylim(ymin=0)
plt.legend(bbox_to_anchor=(1, 1))
plt.title('Category-specific classification accuracy for different classifiers')
plt.tight_layout()
# +
# Plot the face vs house map for the different classifiers
mean_epi_img = image.mean_img(func_file)
# Restrict the decoding to face vs house
condition_mask = y.isin(['face', 'house'])
masked_timecourses = masked_timecourses[
    condition_mask[nonrest_task_mask]]
y_f = (y[condition_mask] == 'face')
# Transform the stimuli to binary integer values
y_f = y_f.astype(int)
for classifier_name, classifier in sorted(classifiers.items()):
    classifier.fit(masked_timecourses, y_f)
    if hasattr(classifier, 'coef_'):
        weights = classifier.coef_[0]
    elif hasattr(classifier, 'best_estimator_'):
        weights = classifier.best_estimator_.coef_[0]
    else:
        continue
    weight_img = masker.inverse_transform(weights)
    weight_map = get_data(weight_img)
    threshold = np.max(np.abs(weight_map)) * 1e-3
    plot_stat_map(weight_img, bg_img=mean_epi_img, display_mode='z', cut_coords=[-15],
                  threshold=threshold, title='%s: face vs house' % classifier_name)
| BHS_Haxby_BrainDecoding.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Spam Filtering Using Gaussian Naive Bayes
# ---
# Use the `spambase.data` file and the Gaussian Naive Bayes algorithm to build a simple spam detection algorithm. You can get accuracies around 80% with this simple classifier that is quickly trained.
# + deletable=true editable=true
# %matplotlib inline
import pandas as pd
import sklearn, sklearn.model_selection, sklearn.linear_model, sklearn.metrics
import numpy as np
import matplotlib.pyplot as plt
import itertools
# + [markdown] deletable=true editable=true
# ## Read the data
#
# ---
#
# We need to read in the data from the spambase.data which is in CSV format. Pandas is perfect for this. If you look at the spambase.data file, you'll notice that there are no column headers and that the identification for spam vs. non-spam is found in column 57. So make sure you set the `header` argument to `None` and the `index_col` argument to `57`.
# + deletable=true editable=true
data = pd.read_csv('~/data/spam.data/spambase.data', header=None, index_col=57)
# + [markdown] deletable=true editable=true
# ## Split the data
#
# ---
#
# Now we want to split the data into training and testing sets. Scikit-learn has a great function for this: `sklearn.model_selection.train_test_split()`. The first parameter is the data to split, `data` in my case. We need to make sure to tell it to split the training and testing set in half by passing `0.5` to the `test_size` argument. Also, set the `random_state` argument to `np.random.RandomState()` so that the data is shuffled. (This is important since `spambase.data` is sorted by class).
# + deletable=true editable=true
X_train, X_test = sklearn.model_selection.train_test_split(data, test_size=0.5, random_state = np.random.RandomState())
# + [markdown] deletable=true editable=true
# ## Get the prior probabilities
#
# ---
#
# Note that the `spambase.data` file has approximately 40% of the emails categorized as spam, so our split should reflect that. Since the `train_test_split()` function returned our split training and testing sets as pandas dataframes and since we set the index column to be the class of the instance, when we say `X_train.loc[1]` we are selecting all of the rows in the training set that have class 1. That is, select all the spam instances. Similarly, we can select all the non-spam (or ham) instances by doing `X_train.loc[0]`.
# + deletable=true editable=true
prob_spam_train = len(X_train.loc[1].index)/len(X_train.index)
prob_spam_train
# + deletable=true editable=true
prob_ham_train = len(X_train.loc[0].index) / len(X_train.index)
prob_ham_train
# + [markdown] deletable=true editable=true
# ## Get the means and the standard deviations over the columns
#
# ---
#
# In order to calculate the probability (which we simulate with the probability density function), we need to have the mean and standard deviation of each column for a given class (spam or ham). This is made quite easy by using the pandas [`describe()` function](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html). The result of `describe()` is a dataframe and to get the column means or standard deviations, we merely need to index the row labeled `mean` or `std` respectively.
# + deletable=true editable=true
train_spam_col_means = X_train.loc[1].describe().loc['mean']
# + deletable=true editable=true
train_spam_col_stds = X_train.loc[1].describe().loc['std']
# + deletable=true editable=true
train_ham_col_means = X_train.loc[0].describe().loc['mean']
# + deletable=true editable=true
train_ham_col_stds = X_train.loc[0].describe().loc['std']
# + [markdown] deletable=true editable=true
# ## Define a function to calculate the sum of the log of the probabilities
#
# ---
#
# This function returns the result of the following equation:
#
# $$\log{P(\textit{class})} + \sum_i \log{P(x_i \mid \textit{class})}$$
#
# Where
#
# $$P(x_i \mid c_j) = N(x_i ; \mu_{i,c_j} , \sigma_{i, c_j})$$
#
# and
#
# $$N(x; \mu , \sigma) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
#
# Keep in mind, though, that we will be taking the natural log of this equation. So given the log rules we will have
#
# \begin{align}
# \log{N(x; \mu , \sigma)} &= \log{\Bigg(\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Bigg)} \\
# &= \log{\Bigg(\frac{1}{\sqrt{2\pi}\sigma}\Bigg)} + \log{\Bigg(e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Bigg)} \\
# &= \log{\Bigg(\frac{1}{\sqrt{2\pi}}\Bigg)} + \log{\Bigg(\frac{1}{\sigma}\Bigg)} - \frac{(x-\mu)^2}{2\sigma^2} \\
# &= -\log{\bigg(\sqrt{2\pi}\bigg)} - \log{(\sigma)} - \frac{(x-\mu)^2}{2\sigma^2}
# \end{align}
#
# Also, since $-\log{\big(\sqrt{2\pi}\big)}$ is a constant, including it won't affect our results. So we only need to calculate the following $$-\sum_i \Bigg(\log{(\sigma_{i,c_j})} + \frac{(x_i-\mu_{i,c_j})^2}{2\sigma_{i, c_j}^2}\Bigg)$$
#
# Again, pandas makes this easy. It'll be the case that `x`, `mus`, and `sigmas` are all pandas Series with 57 entries, so we can add, subtract, multiply, exponentiate, log, or divide on an item-by-item basis. So `x - mus` is also a 57-entry Series, for example. Additionally, the pandas `sum()` function ignores `NaN` values by default, so there isn't anything we need to do to avoid them (e.g. when a column's std is 0).
# + deletable=true editable=true
def sum_of_log_probs(P_class, x, mus, sigmas):
    return np.log(P_class) - (np.log(sigmas) + ((x - mus)**2 / (2 * sigmas**2))).sum()
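# As a sanity check, the function above should agree with the full Gaussian log-pdf up to the dropped constant $n\log{\sqrt{2\pi}}$. A self-contained sketch on small made-up numbers (the arrays here are illustrative, not spambase statistics):

```python
import numpy as np
from scipy.stats import norm

def sum_of_log_probs(P_class, x, mus, sigmas):
    return np.log(P_class) - (np.log(sigmas) + ((x - mus)**2 / (2 * sigmas**2))).sum()

x = np.array([1.0, 2.0, 3.0])
mus = np.array([0.5, 2.5, 2.0])
sigmas = np.array([1.0, 0.5, 2.0])
P_class = 0.4

approx = sum_of_log_probs(P_class, x, mus, sigmas)
# The exact (unnormalized) log-posterior uses the full Gaussian log-pdf
exact = np.log(P_class) + norm.logpdf(x, mus, sigmas).sum()

# The two differ only by the dropped constant n * log(sqrt(2*pi))
const = len(x) * 0.5 * np.log(2 * np.pi)
print(np.isclose(approx, exact + const))  # → True
```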
# + [markdown] deletable=true editable=true
# ## Given an instance $\mathbf{\overrightarrow{x}}$, predict its class
#
# ---
#
# Determine a class of an instance. Calculate the following
#
# $$class_{NB}\big(\overrightarrow{\mathbf{x}}\big) = \underset{\textit{class} \in \{0, 1\}}{\mathrm{argmax}} \Big(\log{P(\textit{class})} + \sum_i \log{P(x_i \text{ | } \textit{class})} \Big)$$
#
# where $P(x_i \mid c_j)$ is as given previously.
# + deletable=true editable=true
def predict(x):
    p_spam = sum_of_log_probs(prob_spam_train, x, train_spam_col_means, train_spam_col_stds)
    p_ham = sum_of_log_probs(prob_ham_train, x, train_ham_col_means, train_ham_col_stds)
    return 1 if p_spam > p_ham else 0
# + [markdown] deletable=true editable=true
# ## Classify all instances in the test set
#
# ---
#
# Walk over the test set predicting each of the instances. Here's another time when pandas makes this nice: `pandas.DataFrame.apply()` with `axis=1` calls `predict` on each row, and `pandas.DataFrame.iterrows()` offers an equivalent explicit loop, returning `(index, row dataframe)` tuples. Either way we have the actual value, which is stored in the index column since that's what was set by the `index_col` argument of the `pandas.read_csv()` function.
#
# *Note: `numpy.log()` will return a runtime warning if the value it gets is equal to 0.0. It is fine for us to ignore this.*
# + deletable=true editable=true
NB_y_pred = X_test.apply(predict, axis=1)
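# The `apply` call above is a concise alternative to the explicit `iterrows()` loop described earlier. A tiny self-contained sketch on a synthetic frame (not the spambase data), where the index plays the role of the class label:

```python
import pandas as pd

# Two synthetic rows; the index stands in for the spam/ham class label
df = pd.DataFrame([[0.1, 0.2], [0.3, 0.4]], index=[1, 0])

labels = []
for label, row in df.iterrows():  # yields (index, row-as-Series) tuples
    labels.append(label)

print(labels)  # → [1, 0]
```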
# + deletable=true editable=true
NB_confusion_matrix = sklearn.metrics.confusion_matrix(y_true=X_test.index.values, y_pred=NB_y_pred)
NB_accuracy = sklearn.metrics.accuracy_score(y_true=X_test.index.values, y_pred=NB_y_pred)
NB_precision = sklearn.metrics.precision_score(y_true=X_test.index.values, y_pred=NB_y_pred)
NB_recall = sklearn.metrics.recall_score(y_true=X_test.index.values, y_pred=NB_y_pred)
print(' accuracy: {:.5f}\n'
      'precision: {:.5f}\n'
      '   recall: {:.5f}'.format(NB_accuracy, NB_precision, NB_recall))
# + deletable=true editable=true
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.figure(figsize=(7,7))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title, fontsize=24)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('Actual label', fontsize=14)
    plt.xlabel('Predicted label', fontsize=14)

plt.style.use('seaborn-dark')
# + [markdown] deletable=true editable=true
# ## Plot the confusion matrix
# + deletable=true editable=true
plot_confusion_matrix(NB_confusion_matrix, classes=('ham', 'spam'), title='Confusion matrix')
# + [markdown] deletable=true editable=true
# ## Part 2: Using a library to run a logistic regressor
#
# ---
#
# This is simply a matter of running sklearn's LogisticRegression() and the various sklearn metrics functions.
# + deletable=true editable=true
logistic_regressor = sklearn.linear_model.LogisticRegression().fit(X_train, X_train.index.values)
LR_y_pred = logistic_regressor.predict(X_test)
# + deletable=true editable=true
LR_accuracy = sklearn.metrics.accuracy_score(y_true=X_test.index.values, y_pred=LR_y_pred)
LR_precision = sklearn.metrics.precision_score(y_true=X_test.index.values, y_pred=LR_y_pred)
LR_recall = sklearn.metrics.recall_score(y_true=X_test.index.values, y_pred=LR_y_pred)
print(' accuracy: {:.4f}\n'
      'precision: {:.4f}\n'
      '   recall: {:.4f}'.format(LR_accuracy, LR_precision, LR_recall))
# + deletable=true editable=true
LR_confusion_matrix = sklearn.metrics.confusion_matrix(y_true=X_test.index.values, y_pred=LR_y_pred)
# + deletable=true editable=true
plot_confusion_matrix(LR_confusion_matrix, classes=('ham', 'spam'), title='Confusion Matrix')
# + deletable=true editable=true
| Gaussian Naive Bayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 4. Plotting (and fitting)
# We have already learned quite a lot of Python! We know the types of data, how to iterate through indexable objects, a bit of pandas, and how to use functions, scripts and flow control. At this point, many people already say that they can program. But we want to learn how to make programming useful for your research, so we need to keep pushing now :)
#
# In this lesson, we will learn about simple data plotting and also how to make a simple linear fit to our data. We will be using the historical and robust package `matplotlib` for this, but keep in mind that other packages such as `seaborn` and `plotly` offer more visually-appealing plots.
# ## Basic plotting
import matplotlib.pyplot as plt
# Let's begin with a scatter plot.
#
# When you want to make a scatter plot, you must pass the data in two lists: one for the x values and one for the y values. Such as this
plt.scatter([1,2,3,4,5,6], [2,4,6,8,10,12])
plt.show()
# Of course, you can also save the lists in a variable and pass the variables (they don't have to be called x and y by the way).
x = [1,2,3,4,5,6]
y = [2,4,6,8,10,12]
print(x)
plt.scatter(x,y)
plt.show()
# You can also plot a line that connects all the dots, but keep in mind that this is not a regression line.
plt.plot(x,y)
plt.show()
# Let me show you how this is not a regression line:
plt.plot([1,2,3,4],[2,1,5,3])
plt.show()
# ## Enrich your plots with labels and titles
# A plot is nothing without a description of the information it contains. In the same plot, we can add a title, axis labels, several plots, text, modify the style of the background... I don't even know all the possibilities, but the formatting options in `matplotlib` are rich.
#
# The one thing to keep in mind is that everything that needs to go into the same plot must be written before `plt.show()`, which displays the figure. After showing the image, the plot should be reset, but this can also be forced with `plt.close()` if it doesn't happen. This is very important if you're **saving the figure** instead of showing it (more on this in the homework).
plt.scatter(x,y, color='orange', s = 100, marker='v') # Scatter plot of our points
plt.plot(x,y, '-.', color = 'orange', linewidth = 2) # Line-connected plot of our points
plt.scatter([0,1,2,3,4],[0,1,2,3,4], color='blue', s = 100, marker='o') # Scatter plot of our points
plt.plot([0,1,2,3,4],[0,1,2,3,4], '--', color = 'blue', linewidth = 2) # Line-connected plot of our points
plt.title('My first plot') # Title
plt.xlabel('Independent variable') # x-axis label
plt.ylabel('Dependent variable') # y-axis label
plt.show() # show the plot in screen
# You can also do cool things like changing the size and color for each individual dot, passing it on lists:
dot_color = ['red', 'darkorange', 'yellow', 'green', 'blue', 'darkviolet']
dot_size = [100, 60, 500, 150, 100, 300]
plt.scatter(x,y, color=dot_color, s = dot_size) # Scatter plot of our points
plt.show()
# ## Numpy and scipy: the fundamentals of fast calculations on python
# Although Python has native math operations, they are pretty slow compared with how fast such calculations can be. Python offers packages like **numpy** and **scipy** that provide fast pre-implemented operations. Numpy works with **arrays** instead of lists. Arrays seem to behave very similarly to lists, as they are also indexed and can be iterated, but they provide very easy and fast element-wise operations on their values.
import numpy as np
x = np.array([1,2,3,4,5,6])
y = np.array([2,4,6,8,10,12])
print(x)
print(y)
print(x[-1])
print(type(x))
# - This works:
print(x*y)
print(x+y)
# - This does not work:
print([1,2,3,4]*[2,1,2,4])  # raises TypeError: can't multiply sequence by non-int
# - This doesn't work the way we wanted:
print([1,2,3,4]+[2,1,2,4])  # concatenates the lists instead of adding element-wise
# ### Plotting with numpy
# We can plot numpy arrays as if they were lists:
x = np.array([1,2,3,4,5,6])
y = np.array([2,4,6,8,10,12])
plt.plot(x,y)
plt.show()
# But let's do something more interesting than just plotting. Let's change the values of y and fit a linear regression.
# This is how the plot looks with the new y values
y = np.array([1,5,4,7,10,8])
plt.scatter(x,y)
plt.show()
# And now we're going to apply a linear regression to our data. We will do this by using the function `linregress`, contained in `scipy.stats`. Notice that we have imported `scipy.stats` as `stats`. We can give the names that we desire to the imported packages.
#
# This linear regression returns 5 values, and I know that not because I remember, but because I googled the documentation page, which you also should do: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html
import scipy.stats as stats
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
# Here we are obtaining the y values of our fit for each point in our x values. It's the famous ax + b formula that we learned in high school, but programmed this time:
new_fit = x*slope + intercept
print(new_fit)
# So let's plot it all together! This figure will have the following components:
# - Scatter plot of our data points
# - Linear regression of these points
# - R and R2 values displayed
# - Slope and intercept values displayed
# - Title and labels displayed
plt.scatter(x,y)
plt.plot(x, new_fit)
plt.text(1, 8,'R value = {0}'.format(r_value))
plt.text(1, 7,'R2 value = {0}'.format(str(r_value*r_value)))
plt.text(2, 2, 'Intercept = {0}'.format(intercept))
plt.text(2, 1, 'Slope = {0}'.format(slope))
plt.title('Linear fit')
plt.xlabel('Independent variable')
plt.ylabel('Dependent variable')
plt.show()
# ## Pandas and numpy
# Pandas is really built on top of numpy. When you select a pandas column or row, you obtain a pandas Series. These Series are actually built with numpy arrays as their base. This is handy because it allows you to perform many of the operations that numpy allows. For instance:
import pandas as pd
df = pd.DataFrame({'first_column':[1,2,3,4,5,6], 'second_column':[5,2,3,1,5,7], 'third_column':[3,3,3,3,3,3], 'names':['spam', 'spam', 'eggs', 'eggs', 'ham', 'ham']})
df
df['first_column']
print(type(df['first_column'])) # A series
print(type(np.array(df['first_column']))) # In case you need to convert it to a numpy array
df['first_column']*df['second_column']
df['first times second'] = df['first_column']*df['second_column']
df
# And as a big hint for the homework and a reminder on how to subset from pandas, let's subset our dataframe into 3 dataframes, one for each name:
df['names'].unique()
df['names'] != 'eggs'
df[df['names']!='eggs']
for name in df['names'].unique():
    print(name)
    temp_df = df[df['names'] == name]
    print(temp_df)  # OR DO ANYTHING ELSE WITH THIS DATAFRAME
# ## HOMEWORK
# For homework, we are going to use the iris dataset again. You will calculate the petal and sepal ratios using the fancy pandas way explained above, and save it to the dataframe. Then you will generate **and save in disk** 3 plots, one per flower variety. These plots will have the ratios and the linear fit of the data points.
#
# I want you to write a **script** that is divided in (at least) 2 functions:
# - The function `linear_fit` will receive 2 pandas series or 2 numpy arrays and will perform a linear regression on their data. Then, it will return the slope and intercept of this fit.
# - The function `plot_data` will have as input a dataframe with the raw data that needs to be plotted. This function will call the function `linear_fit` and will receive the slope and intercept that `linear_fit` calculates. Finally, it will display a scatter plot of the raw data and a plot of the regression line. The x and y labels must be informative of whether it's the sepal or petal ratio. The title will be the flower variety used for each plot. This function will return nothing, but it will **save** the plots in a .png file with the name of the flower variety.
#
# You can choose whether you want to subset the data before or in `plot_data`. In other words, you can feed `plot_data` with the whole dataframe or with a subset of the dataframe that contains only a variety, but you'll have to do that 3 times in the second case.
#
# I recommend you to perform the ratio calculations before feeding it to `plot_data`, and feel free to organize the code for this in another function if you believe this will look cleaner.
#
# **GOOD LUCK!**
#
# And remember: Google is your friend.
| 4_plotting/4_plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Writing to a file
#
# ### Open file options for writing
#
# - "a" - Append - will append to the end of the file. Creates a file if the specified file does not exist
#
# - "w" - Write - will overwrite any existing content. Creates a file if the specified file does not exist
#
# - "x" - Create - will create a file, returns an error if the file exists
#
# For more OS-level file operations, visit here - https://docs.python.org/3/library/os.html
#open for overwriting
fh = open("test.txt", "w")
fh.write("I am going to eat fat. I am going to be thin.")
fh.write("That's my dream.")
fh.close()
#open for appending
fh = open("test.txt", "a")
fh.write("I am going to eat fat. I am going to be thin.")
fh.write("That's my dream.")
fh.close()
#write multiple lines
fh = open("test.txt", "a")
lines_of_text = ["I am going to eat fat.", "I am going to be thin.", "That's my dream", "...and so on and so forth"]
# note: writelines() does not add newline separators between the items
fh.writelines(lines_of_text)
fh.close()
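# The "x" (exclusive create) mode from the list above has no example yet; a small sketch (the filename `test_x.txt` is just for the demo):

```python
import os

# "x" opens for exclusive creation: it raises FileExistsError if the file exists
try:
    with open("test_x.txt", "x") as fh:
        fh.write("created with exclusive-create mode\n")
except FileExistsError:
    print("test_x.txt already exists")

os.remove("test_x.txt")  # clean up the demo file
```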
# ## Using the With Statement
with open("test.txt", "a") as fh:
    fh.write("I am going to eat fat. I am going to be thin.")
    fh.write("That's my dream.")
    lines_of_text = ["I am going to eat fat.", "I am going to be thin.", "That's my dream", "...and so on and so forth"]
    fh.writelines(lines_of_text)
# ## Reading file
#
# ### Open file options
#
# - "r" - Read - Default value. Opens a file for reading, error if the file does not exist
#
# - "a" - Append - Opens a file for appending, creates the file if it does not exist
#
# - "w" - Write - Opens a file for writing, creates the file if it does not exist
#
# - "x" - Create - Creates the specified file, returns an error if the file exists
#
# - "t" - Text - Default value. Text mode
#
# - "b" - Binary - Binary mode (e.g. images)
#read the file in read and text mode
fh = open("test.txt", "rt")
#read the entire file as one string
txt = fh.read()
print(type(txt))
print (txt)
#read the file in read and text mode
fh = open("test.txt", "rt")
#read the entire file as one list
txt = fh.readlines()
print(type(txt))
print (txt)
# ## Using the With Statement
# +
import re
with open("test.txt") as f:
    counter = 0
    for line in f:
        print(line)
        counter = counter + 1
        print("Line count:{0}".format(str(counter)))
        #split the words
        words = line.split()
        print(words)
        print("Word count:{0}".format(len(words)))
        #split using multiple delimiters
        words = re.split(r". |; |, |\*|\n", line)
        print(words)
        print("Word count:{0}".format(len(words)))
# -
# ### Read only a few characters from a file
with open("test.txt") as f:
    print(f.read(21))
# ### Read only one line
with open("test.txt") as f:
    print(f.readline())
# ## Delete files
import os
if os.path.exists("test.txt"):
    os.remove("test.txt")
else:
    print("The file does not exist")
# ## Create folders
# +
def createfolder(directory):
    try:
        #if not os.path.exists(directory):
        #    os.makedirs(directory)
        os.makedirs(directory, exist_ok=True)
    except OSError:
        print('Error: Creating directory. ' + directory)

createfolder('testfolder')
# -
# ## Delete folders
# +
import os
os.rmdir("testfolder")
# -
# ## Using os.stat
#
# https://docs.python.org/2/library/os.html#os.stat
# +
import os
s = os.stat("test.txt")
print(s)
print("Size:{0} bytes".format(s.st_size))
# -
| File Operations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#list = {'num':[1,2,3,4,5], 'micscore':[0.7271, 0.7454, 0.7476, 0.7504, 0.7536], 'desc':['baseline model XGBoost','adding dummy features','dropping correlated and unnecessary features', 'add basic hyperparameter','hyperparameter tuning']}
num = [1,2,3,4,5]
scoremic= [0.7271, 0.7454, 0.7476, 0.7504, 0.7536]
scoremac = [0.6659,0.6932, 0.6974,0.7010, 0.7081]
desc= ['baseline model XGBoost','feature engineering','dropping unnecessary features', 'add basic hyperparameter','hyperparameter tuning']
#df = pd.DataFrame(data=list)
#df
fig, ax = plt.subplots()
plt.scatter(num, scoremic)
plt.scatter(num,scoremac, color='red')
plt.plot(num,scoremic, 'b-', label ='F1-Micro')
plt.plot(num,scoremac, 'r-',label='F1-Macro')
x = 0.008
for i, txt in enumerate(desc):
x = x -0.001
    plt.annotate(txt, (num[i] + 0.05, scoremic[i] - x))  # fix: "score" was undefined; annotate along the micro curve
#plt.title('Overview F1-Score (micro) improvement')
plt.xlabel('Model number')
plt.ylabel('Score')
plt.grid()
plt.legend()
plt.savefig('modelimprovement.png', bbox_inches="tight")
plt.show()
| notebooks/modelimprovement_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] Collapsed="false"
# <img src='./img/fire_workshop_banner.png' alt='Logo EU Copernicus EUMETSAT' align='centre' width='90%'></img>
# + [markdown] Collapsed="false"
# <br>
# -
# ## Workshop and Data Discovery
# + [markdown] Collapsed="false"
# # Existing and new generation earth observation based products for wildfire monitoring and forecast
# -
# Jointly organized by [EUMETSAT](https://www.eumetsat.int/website/home/index.html), [CAMS-ECMWF](https://atmosphere.copernicus.eu/), [AC SAF](https://acsaf.org/), [LSA SAF](https://landsaf.ipma.pt/en/), with support from [Copernicus](https://www.copernicus.eu/en)
# <br>
# + [markdown] Collapsed="false"
# The **user workshop on wildfire monitoring and forecast** is an online event aiming to inform about existing and upcoming datasets for the detection of `fire`, `related emissions` and `impacts`.
#
# The course is a combination of expert webinars and two 'data discovery' sessions introducing you to different satellite- and model-based data for wildfire monitoring. The 'data discovery' sessions have the following outline:
#
# * 25 May | 14:00-15:30 (CEST): **Data discovery - Satellite-based data**
# * 26 May | 14:00-15:30 (CEST): **Data discovery - Portugal fires 2020 workflow**
#
#
# -
# <br>
# + [markdown] Collapsed="false"
# ## Data on wildfire monitoring
# + [markdown] Collapsed="false"
# This course features the following data:
#
# * **Satellite-based data products**
# * AC-SAF Metop-A/B GOME-2 Level-2 data: [Absorbing Aerosol Index (AAI)](./11_AC-SAF_Metop-AB_GOME-2_AAI.ipynb) and [Absorbing Aerosol Height (AAH)](./12_AC-SAF_Metop-AB_GOME-2_AAH.ipynb)
# * LSA-SAF data: [EDLST, FRP_PIXEL, NDVI](./41_LSA-SAF.ipynb)
# * Metop-A/B IASI Level-2 data: [Carbon Monoxide](./13_Metop-AB_IASI_CO.ipynb)
# * Copernicus Sentinel-5P TROPOMI Level 2 data: [Ultraviolet Aerosol Index](./32_Sentinel-5P_TROPOMI_UVAI.ipynb) and [Carbon Monoxide](./31_Sentinel-5P_TROPOMI_CO.ipynb)
# * Copernicus Sentinel-3 OLCI Level-1B data: [Red-Green-Blue (RGB) radiances](./21_Sentinel-3_OLCI_RGB.ipynb)
# * Copernicus Sentinel-3 SLSTR Level 2 data: [Fire Radiative Power](./22_Sentinel-3_SLSTR_FRP.ipynb)
#
#
# * **Model-based data products**
# * Copernicus Atmosphere Monitoring Service (CAMS) | Global Fire Assimilation System (GFAS): [Fire Radiative Power](./51_CAMS_GFAS_FRPFIRE.ipynb)
# * Copernicus Emergency Management Service (CEMS) | Global ECMWF Fire Forecasts (GEFF): [Fire Weather Index](./61_CEMS_GEFF_FWI_data_overview.ipynb)
#
#
# -
# <br>
# + [markdown] Collapsed="false"
# ## Course material
# + [markdown] Collapsed="false"
# The course outline follows the sequence of data types introduced during the webinars in the morning:
#
# * **0 - Introduction to the practical course content**
# * [Introduction to Python and Project Jupyter](./01_introduction_to_python_and_jupyter.ipynb)
# * [Overview of data and data access systems](./02_atmospheric_composition_overview.ipynb)
#
#
# * **1 - AC SAF and Metop-A/B/C GOME-2 and IASI data**
# * [1.1 AC SAF Metop-ABC GOME-2 AAI](./11_AC-SAF_Metop-AB_GOME-2_AAI.ipynb)
# * [1.2 AC SAF Metop-B GOME-2 AAH](./12_AC-SAF_Metop-AB_GOME-2_AAH.ipynb)
# * [1.3 Metop-AB IASI CO](./13_Metop-AB_IASI_CO.ipynb)
#
#
# * **2 - Copernicus Sentinel-3 data**
# * [2.1 Sentinel-3 OLCI Level-1 RGB](./21_Sentinel-3_OLCI_RGB.ipynb)
# * [2.2 Sentinel-3 SLSTR Fire Radiative Power](./22_Sentinel-3_SLSTR_FRP.ipynb)
#
#
# * **3 - Copernicus Sentinel-5P data**
# * [3.1 Sentinel-5P TROPOMI CO](./31_Sentinel-5P_TROPOMI_CO.ipynb)
# * [3.2 Sentinel-5P TROPOMI UVAI](./32_Sentinel-5P_TROPOMI_UVAI.ipynb)
#
#
# * **4 - LSA SAF data products for fire monitoring**
# * [4.1 LSA SAF data products for fire monitoring](./41_LSA-SAF.ipynb)
#
#
# * **5 - Copernicus Atmosphere Monitoring Service (CAMS) data**
# * [5.1 CAMS Global Fire Assimilation System (GFAS) FRPFIRE](./51_CAMS_GFAS_FRPFIRE.ipynb)
#
#
# * **6 - Global ECMWF Fire Forecasting (GEFF) data**
# * [6.1 GEFF data overview](./61_CEMS_GEFF_FWI_data_overview.ipynb)
# * [6.2 GEFF harmonized danger classes](./62_CEMS_GEFF_FWI_harmonized_danger_classes.ipynb)
# * [6.3 GEFF custom danger classes](./63_CEMS_GEFF_FWI_custom_danger_classes.ipynb)
#
#
# * **7 - Practical Workflow - Portugal fires 2020**
# * [7.1 Case study - Portugal Fires - Summer 2020](./71_workflow_portugal_fires_2020.ipynb)
#
# <br>
#
# **NOTE:** Throughout the course, general functions to `load`, `re-shape`, `process` and `visualize` the datasets are defined. These functions are re-used when applicable. The [functions notebook](./functions.ipynb) gives you an overview of all the functions defined and used for the course.
#
# -
# <br>
# + [markdown] Collapsed="false"
# ## Learning outcomes
# + [markdown] Collapsed="false"
# The course is designed for `medium-level users`, who have basic Python knowledge and understanding of Fire monitoring data.
#
# After the course, you should have:
# * an overview of the **different datasets for fire monitoring**,
# * knowledge of the most useful **Python packages** for handling, processing and visualising large volumes of Earth Observation data,
# * an idea of how the **data can help to detect and monitor fire events**.
# + [markdown] Collapsed="false"
# <hr>
# + [markdown] Collapsed="false"
# ## Access to the `JupyterHub`
# + [markdown] Collapsed="false"
# The course material is made available on a JupyterHub instance, a pre-defined environment that gives learners direct access to the data and Python packages required for following the course.
#
# The `JupyterHub` can be accessed as follows:
# + [markdown] Collapsed="false"
# * Web address: [https://training.ltpy.adamplatform.eu](https://training.ltpy.adamplatform.eu)
# * Create an account: [https://login.ltpy.adamplatform.eu/](https://login.ltpy.adamplatform.eu/)
# * Log into the `JupyterHub` with the account you created.
# + [markdown] Collapsed="false"
# <hr>
# + [markdown] Collapsed="false"
# <img src='./img/copernicus_logo.png' alt='Logo EU Copernicus' align='right' width='20%'><br><br><br><br>
#
# <p style="text-align:right;">This project is licensed under the <a href="./LICENSE">MIT License</a> and is developed under a Copernicus contract.
| 90_workshops/202105_fire_workshop/00_index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: lexgen
# language: python
# name: lexgen
# ---
# # Transition Matrix Generator
# ## Imports and Settings
from nate_givens_toolkit import cloud_io as cloud
from nate_givens_toolkit import local_io as local
import pandas as pd
from datetime import datetime
# ## Global Variables
CLEAN_CORPORA_DIR = 'clean_corpora/'
DATA_DIR = 'data_files/'
TRANS_MATS_DIR = 'transition_matrices/'
BUCKET = 'lexgen'
# ## Logic
# ### Read in Data Tables
# #### Clean Corpora
clean_corpora = cloud.read_csv_from_s3('clean_corpora_inventory.dat', DATA_DIR, BUCKET, sep='|')
clean_corpora.head()
# #### Transition Matrices
trans_mats = cloud.read_csv_from_s3('trans_mats_inventory.dat', DATA_DIR, BUCKET, sep='|')
trans_mats.head()
# ### Select Clean Corpus
clean_corpus_filename = 'af_full_2018_A.txt'
# ### Create Output Filenames
tm_names = [f'{clean_corpus_filename.replace(local.get_file_extension(clean_corpus_filename), "")}-tm{x}.dat' for x in range(1,5)]
# ### Process Clean Corpus
clean_corpus = cloud.read_csv_from_s3(clean_corpus_filename, CLEAN_CORPORA_DIR, BUCKET, sep='|')
clean_corpus.head()
# create a list of 4 dictionaries, one for each transition matrix
tms = [{} for i in range(4)]
# the transition matrix dictionaries will be nested dictionaries with structure as follows:
# Outer Dictionary key: the prefix (1 - 4 characters)
# Outer Dictionary value: the Inner Dictionary
# Inner Dictionary key: the suffix (1 character)
# Inner Dictionary value: the frequency of transitioning to that suffix given the prefix (Outer Dictionary key)
# create a list of 4 dictionaries to track total frequency for each from_substr
# this will be used to normalize probability conditioned on from_substr
from_substr_freqs = [{} for i in range(4)]
# run through the clean corpus and create transition matrix dictionaries and cumulative frequency dictionaries
for row in clean_corpus.itertuples(index=False):
word = f' {row[0]} '
frequency = float(row[1])
for i in range(1, len(word)):
to_char = word[i]
# for each character in the word, we're going to iterate through our substring lengths = (1, 2, 3, 4)
# of course, the indices are actually t = (0, 1, 2, 3)
for t in range(4):
# populate from_substr with the 1, 2, 3 or 4-char substring preceding to_char
# if there are not enough characters, set from_substr to None
from_substr = word[i-t-1:i] if i > t else None
if from_substr in tms[t].keys():
if to_char in tms[t][from_substr].keys():
tms[t][from_substr][to_char] = tms[t][from_substr][to_char] + frequency
else:
tms[t][from_substr][to_char] = frequency
elif from_substr is not None:
tms[t][from_substr] = {to_char: frequency}
else:
pass
if from_substr in from_substr_freqs[t].keys():
from_substr_freqs[t][from_substr] = from_substr_freqs[t][from_substr] + frequency
elif from_substr is not None:
from_substr_freqs[t][from_substr] = frequency
else:
pass
# normalize the frequencies
for t in range(4):
for key in from_substr_freqs[t].keys():
total_freq = from_substr_freqs[t][key]
for sub_key in tms[t][key].keys():
tms[t][key][sub_key] = tms[t][key][sub_key] / total_freq
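# As a sanity check on the counting and normalization steps above, here is a minimal standalone sketch for a single padded word using only 1-character prefixes; after normalization, every row of conditional probabilities sums to 1:

```python
# toy corpus: a single padded word " ab " with frequency 2.0
word, frequency = " ab ", 2.0

tm = {}      # prefix -> {next_char: freq}, 1-character prefixes only
totals = {}  # prefix -> total outgoing freq, used for normalization
for i in range(1, len(word)):
    frm, to = word[i - 1], word[i]
    tm.setdefault(frm, {})
    tm[frm][to] = tm[frm].get(to, 0.0) + frequency
    totals[frm] = totals.get(frm, 0.0) + frequency

# normalize each row into conditional probabilities, as in the loop above
for frm, row in tm.items():
    for to in row:
        row[to] /= totals[frm]

print(tm)  # {' ': {'a': 1.0}, 'a': {'b': 1.0}, 'b': {' ': 1.0}}
```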
# convert transition matrix dictionaries to Pandas dataframes
tm_dfs = []
for t in range(4):
substr_col = []
to_char_col = []
frequency_col = []
for outer_key in tms[t].keys():
for inner_key, value in tms[t][outer_key].items():
substr_col.append(outer_key)
to_char_col.append(inner_key)
frequency_col.append(value)
data = list(zip(substr_col, to_char_col, frequency_col))
df = pd.DataFrame(data, columns=['from_str', 'to_char', 'rel_frequency'])
tm_dfs.append(df)
# ### Save Transition Matrices to S3
# write transition matrices to S3
for tm_df, tm_name in zip(tm_dfs, tm_names):
cloud.write_csv_to_s3(tm_name, TRANS_MATS_DIR, BUCKET, tm_df, sep='|', index=False)
# ### Update Transition Matrices Inventory
load_dtime = str(datetime.utcnow())
t = 1
for tm_name in tm_names:
new_row = {
'filename' : tm_name
,'clean_corpus_filename' : clean_corpus_filename
,'prefix_len' : t
,'last_load_dtime' : load_dtime
}
t += 1
    trans_mats = pd.concat([trans_mats, pd.DataFrame([new_row])], ignore_index=True)  # DataFrame.append was removed in pandas 2.x
cloud.write_csv_to_s3('trans_mats_inventory.dat', DATA_DIR, BUCKET, trans_mats, sep='|', index=False)
| Transition_Matrix_Gen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### 0] Import libs
import os
import scipy
import numpy as np
import matplotlib.pyplot as plt
os.environ['PATH'] = r"openslide-win64-20171122\bin" + ";" + os.environ['PATH']
from openslide import OpenSlide
from openslide.deepzoom import DeepZoomGenerator
import xml.etree.ElementTree as ET
parser = ET.XMLParser(encoding="utf-8")
import cv2 as cv
import scipy.ndimage
import xmltodict, json
import pandas as pd
import time
images_folder = 'IMAGES_2'
annotations_folder = 'ANNOTATIONS_2'
# ### 1] Definition of a function to get the tiles from the generator
def compute_max_addresses(DZG,tile_size,level,overlap):
"""
input:
- Tile generator DZG
- The size of the tile
- the level of observation
- the value of overlap
output:
        - the maximum address values for a tile in the slide
"""
lvl_dim = DZG.level_dimensions
#size of the whole slide image with level k
new_w, new_h = lvl_dim[level]
address_max_w, address_max_h = (np.array([new_w, new_h])/tile_size).astype('int') - overlap
#max value of addresses
return(address_max_w,address_max_h)
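# The address arithmetic above can be checked in isolation. With assumed example dimensions of 10240 x 7680 pixels, 256-pixel tiles and an overlap of 1:

```python
import numpy as np

# assumed example: a pyramid level of 10240 x 7680 pixels,
# 256-pixel tiles, and an overlap of 1 subtracted as in compute_max_addresses
new_w, new_h = 10240, 7680
tile_size, overlap = 256, 1

address_max_w, address_max_h = (np.array([new_w, new_h]) / tile_size).astype('int') - overlap
print(address_max_w, address_max_h)  # 39 29
```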
def get_tile_1(DZG, level, address_w,address_h):
"""
input:
- Tile generator DZG
- level of observation
        - address (width) of the tile
        - address (height) of the tile
output:
- the image tile
"""
###Choose level
lvl_count = DZG.level_count
    print('the number of levels is: {}'.format(lvl_count))
    if level >= lvl_count:
        print('the requested level is too high')
else:
lvl_dim = DZG.level_dimensions
print('the size of the whole slide image is: {}'.format(lvl_dim[level]))
tile = DZG.get_tile(level,address = np.array([address_w,address_h]))
img = tile
return img
def annotation_to_dataframe(annotation_number,filename):
"""
input:
- the number of the annotation (written in the xml)
- the filename (ex: tumor_110)
output:
        dataframe with 3 columns:
1_ the order of the vertex
2_ the value of the X coordinate of the vertex
3_ the value of the Y coordinate of the vertex
The values of X and Y are the values in the WSI
"""
with open(os.path.join(annotations_folder,filename)+'.tif.xml') as xml_file:
data_dict = xmltodict.parse(xml_file.read())
nodes = data_dict['ASAP_Annotations']['Annotations']['Annotation'][annotation_number]['Coordinates']['Coordinate']
length = len(nodes)
coord = np.zeros((length,3))
for i in range(length):
iter_ = nodes[i]
coord[i] = np.array([iter_['@Order'], iter_['@X'], iter_['@Y']])
df = pd.DataFrame(data=coord, columns=['Order', "X",'Y'])
return df
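# The notebook relies on the third-party `xmltodict` package; the same coordinates can also be pulled with the standard-library `xml.etree.ElementTree` imported above. A sketch on a hypothetical, inlined ASAP-style annotation whose element names mirror the keys used in `annotation_to_dataframe`:

```python
import numpy as np
import pandas as pd
import xml.etree.ElementTree as ET

# hypothetical minimal ASAP-style annotation, inlined for illustration
xml_str = """
<ASAP_Annotations>
  <Annotations>
    <Annotation>
      <Coordinates>
        <Coordinate Order="0" X="10.5" Y="20.0"/>
        <Coordinate Order="1" X="11.5" Y="21.0"/>
      </Coordinates>
    </Annotation>
  </Annotations>
</ASAP_Annotations>
"""

root = ET.fromstring(xml_str)
nodes = root.find('Annotations/Annotation/Coordinates').findall('Coordinate')
coord = np.array([[n.get('Order'), n.get('X'), n.get('Y')] for n in nodes], dtype=float)
df = pd.DataFrame(coord, columns=['Order', 'X', 'Y'])
print(df)
```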
| Generate_tiles.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training Loop
# +
from keras import optimizers
import tensorflow as tf
from keras.models import Sequential
from sklearn.metrics import precision_score
from keras.layers import Dense
import numpy
from keras.layers import Dropout
config = tf.ConfigProto( device_count = {'GPU': 1 , 'CPU': 56} )
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix
def train_fit_test(model, train_x, train_y, test_x, test_y, classWeight):
opt = optimizers.SGD(lr=0.01)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.fit(train_x, train_y, epochs=200, verbose=2, batch_size=16, class_weight=classWeight)
y_pred = model.predict(test_x)
y_pred_formated = numpy.argmax(y_pred, axis=1)
test_y_formated = numpy.argmax(test_y, axis=1)
precision = precision_score(test_y_formated, y_pred_formated, average="macro")
conf = confusion_matrix(test_y_formated, y_pred_formated)
print(str(conf))
return precision, conf
def create_model(num_layers, num_neurons):
model = Sequential()
model.add(Dense(64, input_dim=54630))
for i in range(num_layers):
model.add(Dense(units=num_neurons, activation='relu'))
model.add(Dropout(0.2))
    model.add(Dense(units=6, activation='softmax'))  # softmax matches the categorical_crossentropy loss used above
return model
def hamming_score(y_true, y_pred, normalize=True, sample_weight=None):
import numpy as np
acc_list = []
for i in range(y_true.shape[0]):
set_true = set( np.where(y_true[i])[0] )
set_pred = set( np.where(y_pred[i])[0] )
tmp_a = None
if len(set_true) == 0 and len(set_pred) == 0:
tmp_a = 1
else:
tmp_a = len(set_true.intersection(set_pred))/\
float( len(set_true.union(set_pred)) )
#print('tmp_a: {0}'.format(tmp_a))
acc_list.append(tmp_a)
return np.mean(acc_list)
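# A quick usage check of `hamming_score` on a small hand-computed multilabel example (values assumed for illustration):

```python
import numpy as np

def hamming_score(y_true, y_pred):
    # per-sample Jaccard similarity between true and predicted label sets,
    # averaged over samples (1.0 when both sets are empty), as defined above
    acc_list = []
    for i in range(y_true.shape[0]):
        set_true = set(np.where(y_true[i])[0])
        set_pred = set(np.where(y_pred[i])[0])
        if not set_true and not set_pred:
            acc_list.append(1.0)
        else:
            acc_list.append(len(set_true & set_pred) / float(len(set_true | set_pred)))
    return np.mean(acc_list)

y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0]])
print(hamming_score(y_true, y_pred))  # (0.5 + 1.0) / 2 = 0.75
```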
import pickle
with open('train_x.pickle', 'rb') as handle:
train_x = pickle.load(handle)
with open('train_y.pickle', 'rb') as handle2:
train_y = pickle.load(handle2)
with open('test_x.pickle', 'rb') as handle3:
test_x = pickle.load(handle3)
with open('test_y.pickle', 'rb') as handle4:
test_y = pickle.load(handle4)
best_mean = -999999
best_model = []
layers = [50, 100, 1000, 1500, 2000]
labels = [numpy.where(r==1)[0][0] for r in train_y]
from sklearn.utils.class_weight import compute_class_weight
classWeight = compute_class_weight(class_weight='balanced', classes=numpy.array([0, 1, 2, 3, 4, 5]), y=labels)
classWeight = dict(enumerate(classWeight))
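# For reference, sklearn's 'balanced' mode weights each class as n_samples / (n_classes * count_c), so rarer classes get larger weights. A standard-library sketch on assumed toy labels:

```python
from collections import Counter

labels = [0, 0, 0, 0, 1, 1, 2, 2]  # assumed toy labels
counts = Counter(labels)
n_samples, n_classes = len(labels), len(counts)

# weight_c = n_samples / (n_classes * count_c), sklearn's 'balanced' formula
class_weight = {c: n_samples / (n_classes * counts[c]) for c in sorted(counts)}
print(class_weight)  # {0: 0.666..., 1: 1.333..., 2: 1.333...}
```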
for x in range(5): # Number of Retrain
for i in range(3, 7): # Maximum number of hidden layers
for j in layers: # Maximum number of neurons per layer
model = create_model(i, j)
model.summary()
print("Training...")
precision, confusion = train_fit_test(model, train_x, train_y, test_x, test_y, classWeight)
print("\n")
print(type(model))
print("||||| Accuracy : "+str(precision)+" |||||")
print("\n")
with open("infos.txt",'a+') as fh:
model.summary(print_fn=lambda x: fh.write(x + '\n'))
fh.write(str(confusion)+"\n")
fh.write("\n Precision : "+str(precision)+"\n")
fh.close()
try:
v = open("best_mod.txt","r")
best_mean = float(v.read())
print("Best actual accuracy : "+str(best_mean))
v.close()
except:
v = open("best_mod.txt","w")
v.write(str(precision))
v.close()
if (precision > best_mean):
print("\n\n")
print(" BEST MODEL ! ")
print("Saving... ")
print("\n\n")
v = open("best_mod.txt","w")
v.write(str(precision))
v.close()
model.save('model.h5')
best_mean = precision
best_model = model
# -
| 2 - Apprentissage Profond.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import os
import pickle
import platform
from sklearn.preprocessing import StandardScaler
from mabwiser.mab import MAB, LearningPolicy
from mabwiser.linear import _RidgeRegression, _Linear
class LinTSExample(_RidgeRegression):
def predict(self, x):
if self.scaler is not None:
x = self._scale_predict_context(x)
covar = np.dot(self.alpha**2, self.A_inv)
beta_sampled = rng.multivariate_normal(self.beta, covar)
return np.dot(x, beta_sampled)
class LinearExample(_Linear):
factory = {"ts": LinTSExample}
def __init__(self, rng, arms, n_jobs=1, backend=None, l2_lambda=1, alpha=1, regression='ts', arm_to_scaler = None):
super().__init__(rng, arms, n_jobs, backend, l2_lambda, alpha, regression)
self.l2_lambda = l2_lambda
self.alpha = alpha
self.regression = regression
# Create ridge regression model for each arm
self.num_features = None
if arm_to_scaler is None:
arm_to_scaler = dict((arm, None) for arm in arms)
self.arm_to_model = dict((arm, LinearExample.factory.get(regression)(rng, l2_lambda,
alpha, arm_to_scaler[arm])) for arm in arms)
# -
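# The LinTS draw in `LinTSExample.predict` above boils down to sampling coefficients from a multivariate normal and scoring the context with a dot product. A standalone numpy sketch with assumed toy values for `beta`, `A_inv` and `alpha` (no expected value is shown: the sampled number depends on the BLAS/LAPACK backend, which is exactly what this notebook investigates):

```python
import numpy as np

rng = np.random.RandomState(11)

# assumed toy posterior: coefficient mean beta and inverse design matrix A_inv
beta = np.array([0.5, -0.2])
A_inv = np.eye(2) * 0.1
alpha = 1.0

# LinTS draw: sample coefficients from N(beta, alpha^2 * A_inv),
# then score a context x with the dot product, mirroring LinTSExample.predict
covar = np.dot(alpha ** 2, A_inv)
beta_sampled = rng.multivariate_normal(beta, covar)
x = np.array([1.0, 1.0])
print(np.dot(x, beta_sampled))
```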
# # Mac OS Darwin MKL
platform.platform()
print(np.__version__)
users = pd.read_csv('movielens_users.csv')
responses = pd.read_csv('movielens_responses.csv')
# +
train = users[users['set']=='train']
test = users[users['set']=='test']
train = train.merge(responses, how='left', on='user id')
context_features = [c for c in users.columns if c not in ['user id', 'set']]
decisions = MAB._convert_array(train['item id'])
rewards = MAB._convert_array(train['rated'])
contexts = MAB._convert_matrix(train[context_features]).astype('float')
test_contexts = MAB._convert_matrix(test[context_features]).astype('float')
scaler = pickle.load(open('movielens_scaler.pkl', 'rb'))
contexts = scaler.transform(contexts)
test_contexts = scaler.transform(test_contexts)
# +
rng = np.random.RandomState(seed=11)
arms = list(responses['item id'].unique())
mab = LinearExample(rng=rng, arms=arms, l2_lambda=10, alpha=1, regression='ts', n_jobs=1, backend=None)
mab.arm_to_model[1]
# +
mab.fit(decisions, rewards, contexts)
expectations = mab.predict_expectations(test_contexts)
expectations[0][1]
# -
pickle.dump(mab, open(os.path.join('output', 'dar_ml_mab2.pkl'), 'wb'))
pickle.dump(expectations, open(os.path.join('output', 'dar_ml_expectations2.pkl'), 'wb'))
# # Cholesky
arms = list(responses['item id'].unique())
mab = MAB(arms=arms, learning_policy=LearningPolicy.LinTS(l2_lambda=10, alpha=1), n_jobs=1, backend=None, seed=11)
mab._imp.arm_to_model[1]
# +
mab.fit(decisions, rewards, contexts)
expectations = mab.predict_expectations(test_contexts)
expectations[0][1]
# -
pickle.dump(mab, open(os.path.join('output', 'dar_ml_ch_mab2.pkl'), 'wb'))
pickle.dump(expectations, open(os.path.join('output', 'dar_ml_ch_expectations2.pkl'), 'wb'))
mab._imp.arm_to_model[1].beta
| examples/lints_reproducibility/table_2_3/MacOSDarwin_MKL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Deep-Learning-Based Handwritten Digit Sequence Recognition (Inference)
# ## Import Packages
import logging
import random
import sys
from io import BytesIO
import gzip
import struct
import mxnet as mx
import numpy as np
from captcha.image import ImageCaptcha
from collections import namedtuple
import matplotlib.pyplot as plt
import cv2
head = '%(asctime)-15s %(message)s'
logging.basicConfig(level=logging.DEBUG, format=head)
# ## Prepare Data
# ### Read the Handwritten Digit Dataset
def read_data(label_url, image_url):
with gzip.open(label_url) as flbl:
magic, num = struct.unpack(">II", flbl.read(8))
label = np.fromstring(flbl.read(), dtype=np.int8)
with gzip.open(image_url, 'rb') as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
image = np.fromstring(fimg.read(), dtype=np.uint8).reshape(
len(label), rows, cols)
return (label, image)
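# The `">II"` and `">IIII"` format strings read big-endian 4-byte unsigned integers from the IDX file headers. A small round-trip sketch (shown in Python 3 syntax):

```python
import struct

# pack an IDX image-file header: magic 2051, 10000 images, 28 x 28 pixels
header = struct.pack(">IIII", 2051, 10000, 28, 28)
print(len(header))  # 16, the same number of bytes fimg.read(16) consumes

# ">" = big-endian, each "I" = a 4-byte unsigned integer
magic, num, rows, cols = struct.unpack(">IIII", header)
print(magic, num, rows, cols)  # 2051 10000 28 28
```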
# ### Function to Compose Images and Labels
def Get_image_lable(img,lable):
x = [random.randint(0,9) for x in range(3)]
black = np.zeros((28,28),dtype='uint8')
for i in range(3):
if x[i] == 0:
img[:,i*28:(i+1)*28] = black
lable[i] = 10
return img,lable
# ### Get an Image to Recognize
def get_image():
(lable, image) = read_data(
't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz')
num = [random.randint(0, 5000 - 1)
for i in range(3)]
img, _ = Get_image_lable(np.hstack(
(image[x] for x in num)), np.array([lable[x] for x in num]))
imgw = 255 - img
cv2.imwrite("img.jpg", imgw)
img = np.multiply(img, 1 / 255.0)
img = img.reshape(1, 1, 28, 84)
return img
# ## Build the Recognition Network
# Because the training network requires labels while recognition does not, the recognition network is redefined here: the label-related layers are removed, and a Group layer is appended at the end so that the output of every convolutional layer is returned along with the prediction.
def get_predictnet():
    # data layer
    data = mx.symbol.Variable('data')
    # convolution layer 1
    conv1 = mx.symbol.Convolution(data=data, kernel=(5, 5), num_filter=32)
    # pooling layer 1
    pool1 = mx.symbol.Pooling(
        data=conv1, pool_type="max", kernel=(2, 2), stride=(1, 1))
    # activation layer 1
    relu1 = mx.symbol.Activation(data=pool1, act_type="relu")
    # convolution layer 2
    conv2 = mx.symbol.Convolution(data=relu1, kernel=(5, 5), num_filter=32)
    # pooling layer 2
    pool2 = mx.symbol.Pooling(
        data=conv2, pool_type="avg", kernel=(2, 2), stride=(1, 1))
    # activation layer 2
    relu2 = mx.symbol.Activation(data=pool2, act_type="relu")
    # convolution layer 3
    conv3 = mx.symbol.Convolution(data=relu2, kernel=(3, 3), num_filter=32)
    # pooling layer 3
    pool3 = mx.symbol.Pooling(
        data=conv3, pool_type="avg", kernel=(2, 2), stride=(1, 1))
    # activation layer 3
    relu3 = mx.symbol.Activation(data=pool3, act_type="relu")
    # convolution layer 4
    conv4 = mx.symbol.Convolution(data=relu3, kernel=(3, 3), num_filter=32)
    # pooling layer 4
    pool4 = mx.symbol.Pooling(
        data=conv4, pool_type="avg", kernel=(2, 2), stride=(1, 1))
    # activation layer 4
    relu4 = mx.symbol.Activation(data=pool4, act_type="relu")
    # flatten layer
    flatten = mx.symbol.Flatten(data=relu4)
    # fully connected layer 1
    fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=256)
    # fully connected layer for the first digit
    fc21 = mx.symbol.FullyConnected(data=fc1, num_hidden=11)
    # fully connected layer for the second digit
    fc22 = mx.symbol.FullyConnected(data=fc1, num_hidden=11)
    # fully connected layer for the third digit
    fc23 = mx.symbol.FullyConnected(data=fc1, num_hidden=11)
    # concat layer joining the per-digit outputs
    fc2 = mx.symbol.Concat(*[fc21, fc22, fc23], dim=0)
    # output layer
    SoftmaxOut = mx.symbol.SoftmaxOutput(data=fc2, name="softmax")
    out = mx.symbol.Group([SoftmaxOut, conv1, conv2, conv3, conv4])
    return out
# ## Load Network Parameters and Bind the Computation Module
# +
_, arg_params, aux_params = mx.model.load_checkpoint("cnn-ocr-mnist", 2)
net = get_predictnet()
predictmod = mx.mod.Module(symbol=net, context=mx.cpu())
predictmod.bind(data_shapes=[('data', (1, 1, 28, 84))])
predictmod.set_params(arg_params, aux_params)
Batch = namedtuple('Batch', ['data'])
# -
# ## Process the Recognition Results
def predict(out):
prob = out[0].asnumpy()
for n in range(4):
cnnout = out[n + 1].asnumpy()
width = int(np.shape(cnnout[0])[1])
height = int(np.shape(cnnout[0])[2])
cimg = np.zeros((width * 8 + 80, height * 4 + 40), dtype=float)
cimg = cimg + 255
k = 0
for i in range(4):
for j in range(8):
cg = cnnout[0][k]
cg = cg.reshape(width, height)
cg = np.multiply(cg, 255)
k = k + 1
gm = np.zeros((width + 10, height + 10), dtype=float)
gm = gm + 255
gm[0:width, 0:height] = cg
cimg[j * (width + 10):(j + 1) * (width + 10), i *
(height + 10):(i + 1) * (height + 10)] = gm
cv2.imwrite("c" + str(n) + ".jpg", cimg)
line = ''
for i in range(prob.shape[0]):
line += str(np.argmax(prob[i]) if int(np.argmax(prob[i]))!=10 else ' ')
return line
# ## Recognize and Display the Results
# +
img = get_image()
predictmod.forward(Batch([mx.nd.array(img)]),is_train=False)
out = predictmod.get_outputs()
line = predict(out)
plt.imshow(cv2.imread('img.jpg'), cmap='Greys_r')
plt.axis('off')
plt.show()
print 'Prediction result: \''+line+'\''
for i in range(4):
plt.imshow(cv2.imread('c'+str(i)+'.jpg'), cmap='Greys_r')
plt.axis('off')
plt.show()
| mnist_predict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Crop Group Classification
# ===================
#
# In this exercise, the goal is to apply machine learning to real collected data instead of the libraries' example datasets.
#
# **Problem**: Can the crop group be identified from the metadata of cultivation parcels?
#
# ## Loading the data
# Load the datasets "../../ml-datasets/peltolohkot.csv" and "../../ml-datasets/kasvikoodit.csv" as Pandas DataFrame objects.
#
# Load only a 10% [sample](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html) of the whole data.
# +
import pandas as pd
from IPython.display import display, HTML
parcels = pd.read_csv("../../ml-datasets/peltolohkot.csv", index_col=False).sample(frac=0.1)
codes = pd.read_csv("../../ml-datasets/kasvikoodit.csv", index_col="code")
print("Lohkodata – koko: ", len(parcels))
display(parcels.head(3))
print("Kasvikoodidata – koko:", len(codes))
display(HTML(codes.head(5).to_html()))
# -
# ## Data preprocessing
#
# First, build a dictionary from the crop code data with the crop code as the key and the group as the value ([hint](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html)).
codes_dict = codes.to_dict()['group']
print(codes_dict)
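# For reference, `Series.map` with a dictionary replaces each value by its lookup; codes absent from the dictionary become NaN. A tiny sketch with assumed toy codes:

```python
import pandas as pd

# assumed toy crop codes and a code -> group dictionary
s = pd.Series([1110, 1120, 9999])
codes_dict = {1110: 6, 1120: 6}

# map() looks each value up in the dict; missing keys become NaN
mapped = s.map(codes_dict)
print(mapped.tolist())  # [6.0, 6.0, nan]
```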
# Next, create a new column `group` in the parcel data: start from the crop code (`KVI_KASVIKOODI`) and map it to a group ([hint](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html)).
# Then [select](https://stackoverflow.com/a/46165056) only the rows where the group is defined (!=-1).
# +
parcels['group'] = parcels['KVI_KASVIKOODI'].fillna(-1.0).astype(int)
parcels.group = parcels.group.map(codes_dict)
parcels.group = parcels.group.fillna(-1)
parcels = parcels[parcels.group != -1]
parcels.head(3)
# -
# Looking at the output above, you will notice that `KASVULOHKOTUNNUS` is not a numeric value.
# Before classification it should be converted to numeric. One way to do this is to map each unique value to an integer, as below.
#
parcel_tunnus_dict = {i: x for x, i in enumerate(sorted(parcels['KASVULOHKOTUNNUS'].unique()))}
print(parcel_tunnus_dict)
# At the same time, [drop](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html)
# the original crop code `KVI_KASVIKOODI` and the variety code `KLE_LAJIKEKOODI`.
parcels_modified = parcels.copy().drop(['KVI_KASVIKOODI', 'KLE_LAJIKEKOODI'], axis=1)
parcels_modified['KASVULOHKOTUNNUS'] = parcels_modified['KASVULOHKOTUNNUS'].map(parcel_tunnus_dict)
parcels_modified.head(3)
# #### Feature selection
#
# Now form the observation set `X` and the class set `y`.
# Pick the columns you like best from the parcel data `parcels_modified` for the observations. You can select only some of the columns in the style below:
# ```python
# # Existing columns
# print(list(df.columns))
# X = df[['only', 'desired', 'columns']]
# ```
#
# Choose the crop group column `group` as the classes `y`.
print(list(parcels_modified.columns))
# Remove the comment mark '#' in front of the variables you want to use
X = parcels_modified[[
#'X',
#'Y',
#'VUOSI',
#'KASVULOHKOTUNNUS',
#'LUOMUVKD_KOODI',
#'SIELAKD_KOODI',
#'PINTAALA',
#'ONKOEKOLOGINENALA',
#'LISATIETO',
#'JATTOPVM',
#'PERPVM',
#'PAIPVM',
]]
y = parcels_modified.group
# #### Splitting
# Split your X and the classes y into training and test sets X_train, X_test, y_train and y_test so that the test set holds 80% and the training set 20% of the observations.
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.8)
# -
# ##### Visualizing the data
# Let's visualize the training set a little to get a feel for the problem.
# Since you may have selected more than 2 variables, PCA is used
# to support the visualization.
#
# If the cell returns an error, you might consider whether it was caused by the variables you chose...
# +
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
pca = PCA(n_components=2)
X_train_scaled = StandardScaler().fit_transform(X_train)
X_train_pca = pca.fit_transform(X_train_scaled)
print("Muuttujien määrä oli ennen {}, nyt se on {}".format(len(X.columns), len(X_train_pca[0])))
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(X_train_pca[:, 0], X_train_pca[:, 1], c=y_train, cmap=plt.cm.Set1,
edgecolor='k')
ax.set_xlabel('1st principal component')
ax.set_ylabel('2nd principal component')
plt.show()
# -
# ## Classification
# Next, build a pipeline ([`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline))
# chaining the StandardScaler and PCA preprocessing steps; use the value 0.7 for PCA's `n_components` parameter.
# Add the classifier of your choice as the last pipeline component under the name "clf". If you use a Keras classifier,
# remember to create a build function for it with default parameters.
# +
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from tensorflow import keras
def build_fn(first_dense_units=100, dropout_rate=0.2):
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=(len(X_train.columns), )))
model.add(keras.layers.Dense(first_dense_units, activation="relu"))
model.add(keras.layers.Dropout(dropout_rate))
model.add(keras.layers.Dense(len(y_train.unique()), activation="softmax"))
print(model.summary())
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
clf_keras = keras.wrappers.scikit_learn.KerasClassifier(build_fn)
pipeKeras = Pipeline([('scaler', StandardScaler()), ('clf',clf_keras)]) # keras
pipe = Pipeline([('scaler', StandardScaler()), ('pca',PCA(n_components=0.7)), ('clf',SVC())])
# -
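# Note that a float `n_components` between 0 and 1 makes PCA keep the smallest number of components whose cumulative explained variance reaches that fraction. A small sketch with synthetic, partly redundant data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 5))
X[:, 3] = X[:, 0] * 2          # make one column fully redundant
X[:, 4] = X[:, 1] - X[:, 2]    # and another

# a float in (0, 1) keeps the smallest number of components whose
# cumulative explained variance reaches that fraction
pca = PCA(n_components=0.7).fit(X)
print(pca.n_components_)                           # at most 3 here
print(pca.explained_variance_ratio_.sum() >= 0.7)  # True
```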
# Then let's create a training function that performs the hyperparameter optimization.
# +
from sklearn.model_selection import GridSearchCV
def train(pipe, parameters):
clf = GridSearchCV(pipe, parameters, cv=2, n_jobs=-1)
clf.fit(X_train, y_train)
print("Parhaat parametrit: ", clf.best_params_)
print("Paras opetus OA: {:.4f}".format(clf.best_score_))
print("OA: {:.4f}".format(clf.score(X_test, y_test)))
return clf
# + [markdown] pycharm={"name": "#%% md\n"}
# Suorita sitten hyperparametrioptimointi. Anna toiseksi parametriksi sanakirja hyperparametreista.
#
# Putkitusta käytettäessä laita parametrisanakirjaan luokittelijakomponentin nimi ja kaksi alaviivaa ennen parametrin nimeä. Esimerkiksi SVM-algoritmin `C`-parametrin tapauksessa:
# ```python
# params_without_pipeline = {'C':[1,10]}
# params_with_pipeline = {'clf__C':[1,10]}
#
# ```
#
# If you use a Keras model, set at least the
# epochs and batch_size parameters, for example: `{'clf__epochs':[10], 'clf__batch_size':[10]}`.
#
# -
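# The double-underscore routing can be verified on a small built-in dataset (iris here is illustrative only, not the crop data):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X_toy, y_toy = load_iris(return_X_y=True)
toy_pipe = Pipeline([('scaler', StandardScaler()), ('clf', SVC())])

# 'clf__C' is routed to the C parameter of the pipeline step named 'clf'
search = GridSearchCV(toy_pipe, {'clf__C': [1, 10]}, cv=2)
search.fit(X_toy, y_toy)
print(search.best_params_)  # a dict keyed by 'clf__C'
```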
clf = train(pipe, {'clf__C':[1], 'clf__gamma':[1]})
# clf = train(pipeKeras, {'clf__epochs':[10], 'clf__batch_size':[10]}) # keras
y_pred = clf.predict(X_test)
# ### Evaluation
# Overall Accuracy alone would suggest the model is excellent! Still, check the model's quality with a classification report and a confusion matrix.
# +
# Evaluate
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from utils import plot_confusion_matrix
def evaluate(y_pred):
names = ['1 - Harkapapu','2 - Herne','3 - Juurikkaat',
'4 - Kesanto','5 - Kevatrapsi','6 - Kevatviljat',
'7 - Nurmi','8 - Peruna','9 - Rypsi','10 - Syysvilja']
cm = confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
plot_confusion_matrix(cm, list(range(1,11)), names, normalize=True)
# -
evaluate(y_pred)
# ## Fixing the problem
# Inspecting the confusion matrix above, you may notice that almost all test-set observations are classified into either class 6 or class 7. Next, examine how many observations each class has in the training set.
from collections import Counter, OrderedDict
distribution = dict(OrderedDict(sorted(Counter(y_train).items(), key=lambda t: t[0])))
distribution
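# The effect that random undersampling (used in the next section) has on such a distribution can be previewed with plain numpy on hypothetical imbalanced labels: every class is reduced to the size of the smallest one.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
# hypothetical imbalanced labels, not the real crop data
labels = np.array([6]*500 + [7]*300 + [1]*40 + [2]*20)

# keep a random sample of min-class-count observations from each class
min_count = min(Counter(labels).values())
kept = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), size=min_count, replace=False)
    for c in np.unique(labels)
])
print(Counter(labels[kept]))  # every class now has 20 observations
```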
# The class imbalance appears to be the problem. To fix it, try the [RandomUnderSampler](https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.under_sampling.RandomUnderSampler.html#imblearn.under_sampling.RandomUnderSampler) class, using the [Pipeline_imb](https://imbalanced-learn.readthedocs.io/en/stable/api.html#module-imblearn.pipeline) class for pipelining. Build the pipeline the same way as before, but add the RandomUnderSampler before the classifier. You can now also set the PCA `n_components` parameter higher, since performance is better thanks to the resampling.
# +
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline as Pipeline_imb
pipe_imb = Pipeline_imb([('scaler', StandardScaler()),
('pca',PCA(n_components=0.9)),
('rus', RandomUnderSampler('not minority')),
('clf',SVC())])
pipe_imb_keras = Pipeline_imb([('scaler', StandardScaler()),
('rus', RandomUnderSampler('not minority')),
('clf',clf_keras)])
# -
# ##### Evaluation after the fix
# Then run the hyperparameter optimization and evaluation again.
clf = train(pipe_imb, {'clf__C':[1, 10], 'clf__gamma':[0.1, 0.01, 1]})
# clf = train(pipe_imb_keras, {'clf__epochs':[10], 'clf__batch_size':[10]}) # keras
y_pred = clf.predict(X_test)
evaluate(y_pred)
# Finally, try selecting a different number of features in the [feature selection](#Muuttujien-valinta) section. Run that cell, followed by the [split](#Jako) and [evaluation after the fix](#Korjauksen-jälkeinen-evaluointi) cells.
| Solutions/Harjoitus_pellot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import math
import numpy as np
import pandas as pd
from analysis import isovisc, utilities, common
reader = isovisc.data.reader
from everest.h5anchor import Fetch, Scope
from everest.window import Canvas, plot, raster, DataChannel as Channel, get_cmap
# %matplotlib inline
# -
inputs, initials, finals, averages = isovisc.data.get_summary_frames()
rasters = isovisc.data.get_rasters()
from analysis.analysis import linear_regression
class TeXStr(str):
    # str is immutable, so the value must be adjusted in __new__ (an __init__ override cannot change it)
    def __new__(cls, strn):
        # ensure the string is wrapped in TeX math delimiters
        if not strn.startswith('$'):
            strn = '$' + strn
        if not strn.endswith('$'):
            strn = strn + '$'
        return super().__new__(cls, strn)
# predictor(fs)  # 'predictor' is not defined in this notebook
plot.line(
(x := np.linspace(0.1, 0.9, 100)),
x**2 / (x**2 + 1)
)
ax.ax.dataLim
dir(ax.ax)
ax.props.edges.x.ticks.minor.values
# +
def tcell(f):
return f**2 / (f**2 + 1)
canvas = Canvas(size = (6, 6))
ax = canvas.make_ax()
ax.scatter(
(fs := inputs.loc[sel]['f']),
Channel(averages.loc[sel]['temp_av'], lims = (0, 1)),
c = inputs.loc[sel]['alpha']
)
ax.line(
(fs := np.linspace(0.5, 1., 100)),
[tcell(f) for f in fs],
)
canvas.show()
# -
sorted(fs)
| analysis/isovisc/isovisc_002.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
import matplotlib.mlab
import scipy.io.wavfile
import scipy
import os
import time
from scipy import signal
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import sparse
sns.set()
# +
def log_specgram(audio, sample_rate, window_size=20,
step_size=10, eps=1e-10):
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
freqs, times, spec = signal.spectrogram(audio,
fs=sample_rate,
window='hann',
nperseg=nperseg,
noverlap=noverlap,
detrend=False)
return freqs, times, np.log(spec.T.astype(np.float32) + eps)
def pad_audio(samples, L=16000):
if len(samples) >= L: return samples
else: return np.pad(samples, pad_width=(L - len(samples), 0), mode='constant', constant_values=(0, 0))
def chop_audio(samples, L=16000, num=20):
for i in range(num):
beg = np.random.randint(0, len(samples) - L)
yield samples[beg: beg + L]
# -
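# A quick sanity check of `pad_audio` above: shorter clips are left-padded with zeros up to 16 000 samples (numpy only, no audio files needed; the function is restated so the cell is self-contained):

```python
import numpy as np

def pad_audio(samples, L=16000):
    # left-pad with zeros up to L samples, as in the cell above
    if len(samples) >= L:
        return samples
    return np.pad(samples, pad_width=(L - len(samples), 0),
                  mode='constant', constant_values=(0, 0))

short = np.ones(1000, dtype=np.int16)
padded = pad_audio(short)
print(len(padded), padded[0], padded[-1])  # 16000 0 1
```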
excluded = ('.md', '.txt', 'ipynb', 'py', 'LICENSE', '_background_noise_')
folders = [i for i in os.listdir(os.getcwd()) if not any(s in i for s in excluded)]
new_sample_rate = 8000
Y = []
X = []
for i in folders:
print(i)
for k in os.listdir(os.getcwd()+'/'+i):
sample_rate, samples = scipy.io.wavfile.read(os.path.join(os.getcwd(), i, k))
samples = pad_audio(samples)
if len(samples) > 16000:
n_samples = chop_audio(samples)
else: n_samples = [samples]
for samples in n_samples:
resampled = signal.resample(samples, int(new_sample_rate / sample_rate * samples.shape[0]))
_, _, specgram = log_specgram(resampled, sample_rate=new_sample_rate)
Y.append(i)
from skimage.transform import resize  # scipy.misc.imresize was removed in SciPy 1.3
X.append(resize(specgram, (45, 40)).flatten())  # note: resize returns floats in [0, 1] rather than uint8
X = np.array(X)
print(X.shape)
len(Y)
import lightgbm as lgb
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.preprocessing import LabelEncoder
labels = np.unique(Y)
target = LabelEncoder().fit_transform(Y)
train_X, test_X, train_Y, test_Y = train_test_split(X, target, test_size = 0.2)
params_lgd = {
'boosting_type': 'dart',
'objective': 'multiclass',
'colsample_bytree': 0.4,
'subsample': 0.8,
'learning_rate': 0.1,
'silent': False,
'n_estimators': 10000,
'reg_lambda': 0.0005,
'device':'gpu'
}
clf = lgb.LGBMClassifier(**params_lgd)
lasttime = time.time()
clf.fit(train_X,train_Y, eval_set=[(test_X,test_Y)],
eval_metric='logloss', early_stopping_rounds=20, verbose=True)
print('time taken to fit lgb:', time.time()-lasttime, 'seconds ')
# +
predicted = clf.predict(test_X)
print('accuracy validation set: ', np.mean(predicted == test_Y))
# print scores
print(metrics.classification_report(test_Y, predicted, target_names = labels))
# -
| lgb/log-lgb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This example assumes that PyShEx has been installed in jupyter environment
from pyshex import ShExEvaluator
from rdflib import Namespace
# +
BASE = Namespace("https://www.w3.org/2017/10/bibframe-shex/")
shex = """
BASE <https://www.w3.org/2017/10/bibframe-shex/>
PREFIX bf: <http://bibframe.org/vocab/>
PREFIX madsrdf: <http://www.loc.gov/mads/rdf/v1#>
PREFIX locid: <http://id.loc.gov/vocabulary/identifiers/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
<Work> EXTRA a {
a [bf:Work] ;
bf:class @<Classification> ;
bf:creator @<Person> ;
bf:derivedFrom IRI ;
bf:hasRelationship @<Relationship> ;
bf:language [<http://id.loc.gov/vocabulary/languages/>~] ;
bf:subject @<Topic>* ;
^bf:instanceOf @<Instance> ;
}
<Classification> [<http://id.loc.gov/authorities/classification/>~] AND {
a [bf:LCC] ;
bf:label LITERAL
}
<Instance> {
a [bf:Instance] ;
bf:contributor @<Person> ;
bf:derivedFrom IRI ;
bf:instanceOf @<Work> ;
}
<Person> {
a [bf:Person] ;
bf:label LITERAL ;
madsrdf:elementList @<ElementList>
}
<ElementList> CLOSED {
rdf:first @<MadsElement> ;
rdf:rest [rdf:nil] OR @<ElementList>
}
<MadsElement> {
a [ madsrdf:NameElement
madsrdf:DateNameElement
madsrdf:TopicElement
] ;
madsrdf:elementValue LITERAL
}
<Relationship> {
a [bf:Work] ;
bf:title LITERAL ;
bf:contributor {
a [bf:name] ;
bf:label LITERAL ;
madsrdf:elementList @<ElementList>
}
}
<MadsTopic> {
a [madsrdf:Topic] ;
a [madsrdf:Authority] ;
madsrdf:authoritativeLabel [@en @fr @de] ;
madsrdf:elementList @<ElementList>
}
<Topic> {
a [bf:Topic]? ;
a [madsrdf:ComplexSubject] ;
bf:label LITERAL ;
madsrdf:authoritativeLabel [@en @fr @de] ;
madsrdf:componentList @<TopicList>
}
<TopicList> CLOSED {
rdf:first @<MadsTopic> ;
rdf:rest [rdf:nil] OR @<TopicList>
}
"""
# -
rdf = """
@base <https://www.w3.org/2017/10/bibframe-shex/> .
PREFIX bf: <http://bibframe.org/vocab/>
PREFIX madsrdf: <http://www.loc.gov/mads/rdf/v1#>
PREFIX locid: <http://id.loc.gov/vocabulary/identifiers/>
<samples9298996> a bf:Text, bf:Work ;
bf:class <http://id.loc.gov/authorities/classification/PZ3> ;
bf:creator [ a bf:Person ;
bf:label "<NAME>, 1812-1870." ;
madsrdf:elementList (
[ a madsrdf:NameElement ; madsrdf:elementValue "<NAME>," ]
[ a madsrdf:DateNameElement ; madsrdf:elementValue "1812-1870." ] ) ] ;
bf:derivedFrom <http://id.loc.gov/resources/bibs/9298996> ;
bf:hasRelationship [ a bf:Work ;
bf:title "<NAME>." ;
bf:contributor [ a bf:name ;
bf:label "<NAME>." ;
madsrdf:elementList (
[ a madsrdf:NameElement ; madsrdf:elementValue "<NAME>." ] ) ] ] ;
bf:language <http://id.loc.gov/vocabulary/languages/eng> ;
bf:subject
[ a bf:Topic, madsrdf:ComplexSubject ;
bf:label "Criminals--Fiction" ;
madsrdf:authoritativeLabel "Criminals--Fiction"@en ;
madsrdf:componentList (
[ a madsrdf:Authority, madsrdf:Topic ;
madsrdf:authoritativeLabel "Criminals"@en ;
madsrdf:elementList (
[ a madsrdf:TopicElement ; madsrdf:elementValue "Criminals"@en ] ) ]
) ] ;
.
<http://id.loc.gov/authorities/classification/PZ3> a bf:LCC ;
bf:label "PZ3.D55O165PR4567" .
[] a bf:Instance ;
bf:contributor [
a bf:Person ;
bf:label "<NAME>, 1890- [from old catalog]" ;
madsrdf:elementList (
[ a madsrdf:NameElement ; madsrdf:elementValue "<NAME>," ]
[ a madsrdf:DateNameElement ; madsrdf:elementValue "1890- [ from old catalog]" ]
) ] ;
bf:derivedFrom <http://id.loc.gov/resources/bibs/9298996> ;
bf:instanceOf <samples9298996> ;
.
"""
results = ShExEvaluator().evaluate(rdf, shex, focus=BASE.samples9298996,start=BASE.Work)
for r in results:
if r.result:
print("PASS")
else:
print(f"FAIL: {r.reason}")
| notebooks/book_small_text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Mito Analysis
# language: python
# name: mito-analysis
# ---
# +
from moviepy.editor import *
import imageio
from utoolbox.data.datastore import ImageFolderDatastore
from utils import find_dataset_dir
# -
# raw
path = find_dataset_dir('_movie/raw')
ds_raw = ImageFolderDatastore(path, read_func=imageio.volread)
clip_raw = ImageSequenceClip(list(ds_raw._uri.values()), fps=25)
# n2v_deconv
path = find_dataset_dir('_movie/predict_decon')
ds_n2v = ImageFolderDatastore(path, read_func=imageio.volread)
clip_n2v = ImageSequenceClip(list(ds_n2v._uri.values()), fps=25)
# surface
path = find_dataset_dir('_movie/seg')
ds_seg = ImageFolderDatastore(path, read_func=imageio.volread)
clip_seg = ImageSequenceClip(list(ds_seg._uri.values()), fps=25)
# graph
path = find_dataset_dir('_movie/graph')
ds_graph = ImageFolderDatastore(path, read_func=imageio.volread)
clip_graph = ImageSequenceClip(list(ds_graph._uri.values()), fps=25)
# +
text_raw = TextClip('Raw', color='white', fontsize=24)
final_raw = CompositeVideoClip([clip_raw, text_raw.set_pos(("center","top"))])
text_n2v = TextClip('N2V + RL', color='white', fontsize=24)
final_n2v = CompositeVideoClip([clip_n2v, text_n2v.set_pos(("center","top"))])
text_seg = TextClip('Segmentation', color='white', fontsize=24)
final_seg = CompositeVideoClip([clip_seg, text_seg.set_pos(("center","top"))])
text_graph = TextClip('Graph', color='white', fontsize=24)
final_graph = CompositeVideoClip([clip_graph, text_graph.set_pos(("center","top"))])
final = clips_array([[final_raw, final_n2v],[final_seg, final_graph]])
final.duration = 45
final.write_videofile('test.mp4', fps=25)
# -
| notebooks/create_movie.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/priyanshgupta1998/All_codes/blob/master/SVM0.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="WcX8BskOIQ4U" colab_type="code" colab={}
import numpy as np  # needed for the np.shape calls below
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn import svm
# + id="IRBQsylGIQ7I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="45e4a4f8-80f4-4755-b9ad-0400df5e0df3"
digits = datasets.load_digits()
print(digits.data) #two dimensional array ----full complete dataset
# + id="VE2aa6pdLPrc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="7057589c-dc51-4379-9a97-528d75bdebab"
digits.data[:] # full dataset (all rows) with two dimensional array
# + id="dHDRaDlMLmyQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="9a4759c7-51ad-47d3-e517-c1d78e787b96"
digits.data[:2] # two rows are printed
# + id="7uK7kbTWLmu3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="3f88beb5-aca2-4c62-b1d9-fa3639fb5d4e"
digits.data[:1797] # complete dataset with 1797 rows
# + id="joe2lUy_MVEm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="3ad246f9-4014-41b5-d932-9148a0a7ca63"
digits.data[:1796] # dataset up to the second-last row
# + id="HIahKJSBMVCF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="46add0d4-6051-4b9b-b3ac-8a5e80eaa1d2"
digits.data[:-1] # dataset up to the second-last row
# + id="QuhHKj1KIQ_H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="779a3e29-2d8b-4872-905c-1b840878dddf"
len(digits.data)
# + id="ZBmP1TMnIRED" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e4945696-ab22-43ef-b63a-116810388fba"
len(digits.data[0]) # no . of elements in first row
# + id="_ba2KtfmIRGo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="48674ba0-6bed-4023-a45a-573e5337aea2"
len(digits.data[-1]) # no. of elements in last row of the dataset
# + id="5wmHRESIJniE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 368} outputId="1b4e29d1-df93-4bae-d2d7-28f4c55bfb2a"
plt.imshow(digits.data)
# + id="unVw1BpTJnfK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9cfcc5c4-88f8-41a1-f557-f17d98f82217"
np.shape(digits.data) # 1797 rows, and each row has 64 elements
# + id="kRxwl4H4IRLR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 850} outputId="0b18831b-c0dd-4c63-8e2d-2a9e722c584f"
digits.images #complete dataset with 3-dimensional array
# + id="lGBurR5TIRPO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2f860053-4410-4d26-aa9f-5fb5b12f16d5"
np.shape(digits.images) # 1797 Blocks and each block has 8 rows and 8 columns
# + id="oSPMYvnpIRS_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="34125c90-8efe-4b1a-d55b-611da4cbaac7"
len(digits.images)
# + id="-yykfX62Mx9N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d212cfcb-ee52-47b0-fac2-40a4d68c2a27"
digits.target
# + id="-0P80Y2PMx6V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1fc43cdf-be22-4317-ae6f-5a06732cb502"
len(digits.target)
# + id="CvXArVjFM4OQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="70be8404-71af-4319-d7ce-e7a2cdc71116"
np.shape(digits.target) # a one-dimensional array with 1797 elements
# + id="KhkupFWaM4Ly" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9fb48622-876c-4e69-d2fb-96a25f34ce25"
digits.target[:8] # the first 8 target values
# + id="Edv0wVfaM4HS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="751a3c62-5f0b-4aa9-9919-f62913617279"
digits.target[:1796] # target values up to the second-last element
# + id="0eB1HAOcNZDm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="54cf48cf-3123-4333-b76a-ddd5b9dd606a"
digits.target[:-1] # target values up to the second-last element
# + id="zyUq5vkbKxb3" colab_type="code" colab={}
clf = svm.SVC(gamma=0.001 , C=100)
# + id="qHZmruoQKxe4" colab_type="code" colab={}
x,y = digits.data[:-1] , digits.target[:-1] #x is two dimensional and y is one dimensional
# + id="2R0zZJuBNyiS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="eb7c42e7-5a36-4d44-e667-cfd5326f1319"
print(x,y)
# + id="XxFjb1idOYHO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="6fdd2c3b-3c59-49d7-80c9-e0900c95ab52"
print(x.shape)
print(y.shape)
# + id="UODLemjsOjWV" colab_type="code" colab={}
#y.shape = [1796,1]
# + id="mhBPyjyDOymu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4f5e4762-f29e-4774-bb5c-bae326ba14d4"
print(y)
# + id="BwMEokJNQZPJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="79aa8068-bfad-440f-c259-1b0109837e00"
#y.shape = [-1, 1]
#y
# + id="A3Y8nAqrSAyb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="33a9e133-678c-4fcd-bf7a-7a347448a272"
y.shape
# + id="6UnUt8p5Rs24" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="c899ccc1-0443-4bda-cb75-4150ca7a1cd2"
print(x,y)
# + id="ppEPjDQDKxiv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="1d109101-6091-4a8e-d079-1347900f8622"
clf.fit(x,y)
# + id="H2FPw6geKxlb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7b2a7bbf-98cf-4a91-e618-48b40c0d96ac"
print(clf.predict(digits.data[[-1]])) # predict the target value of the last row of the digits.data
# + id="ST0voiWFKxqc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="29100eaf-75ff-42d1-c6b0-a2641faf6e06"
plt.imshow(digits.images[-1]) # image of last blocks of 8x8 of digits.images
# + id="RSc1hiD7Kxvt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="a037cfb5-8fec-43f0-f73b-a2dc9f3a027b"
plt.imshow(digits.images[-1] , cmap=plt.cm.gray_r , interpolation='nearest') # image shows 8
# + id="G45YQrijKx00" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="b2b3bbe4-0f82-4fcf-e3eb-8a52078102c1"
plt.imshow(digits.images[-2] , cmap=plt.cm.gray_r , interpolation='nearest') # image shows 9
# + id="KVe66T1WKx4F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="321975bb-8298-4695-8752-82de4aea5ef7"
plt.imshow(digits.images[0] , cmap=plt.cm.gray_r , interpolation='nearest') # image shows 0
# + id="ke4seGS1Kx7A" colab_type="code" colab={}
pre = svm.SVC(gamma=0.001 , C=100)
# + id="pt0N5L-aKyAR" colab_type="code" colab={}
l,p = digits.data[:-10] , digits.target[:-10]
# + id="w9_92XqgKxzF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="7a4ffb5c-1201-49d8-8fc3-bc40aece191d"
print(l.shape)
print(p.shape)
# + id="G0G_-9u6KxtS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="b3799b57-5db9-4775-d5e3-7f683bc0ffaf"
pre.fit(l,p)
# + id="eCQz3ZB9Kxoh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="24aff6af-7391-4f33-8d8f-6ffefa62e1c8"
print(pre.predict(digits.data[[-1]])) # predict the target value of the last row of the digits.data
# + id="cuPNJcVJXHwo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="10837894-4d7d-4bce-ca68-5c6d74455fc6"
print(clf.predict(digits.data[[-2]])) # predict the target value of the last row of the digits.data
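# + [markdown]
# As a hedged sanity check of the workflow above (same `gamma` and `C` values), a model trained on all but the last ten digits should classify most of those ten held-out images correctly:
# -

```python
from sklearn import datasets, svm

digits = datasets.load_digits()
clf = svm.SVC(gamma=0.001, C=100)
clf.fit(digits.data[:-10], digits.target[:-10])  # hold out the last 10 images

held_out_pred = clf.predict(digits.data[-10:])
accuracy = (held_out_pred == digits.target[-10:]).mean()
print("hold-out accuracy:", accuracy)
```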
# + id="_BvIkHwPXH94" colab_type="code" colab={}
# + id="0xDO00k8XIEf" colab_type="code" colab={}
# + id="L-ApFUzQXH8C" colab_type="code" colab={}
# + id="tCpo_dw_XH6U" colab_type="code" colab={}
# + id="VAvHaeWzXH1A" colab_type="code" colab={}
| SVM0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
# Recursive formula
def s(prevs, weight):
return weight * prevs - weight * np.square(prevs)
# Generate sequence
def sequence(seed, weight, count):
seq = []
nexts = seed
for i in range(count):
seq.append(nexts)
nexts = s(nexts, weight)
return np.array(seq)
# +
import matplotlib.pyplot as plt
# Plot sequence
def plotSequence(seq):
lines = plt.plot(seq, marker='.')
plt.setp(lines, color='r', linewidth=1.0)
plt.show()
# -
plotSequence(sequence(0.5, 1, 100))
plotSequence(sequence(0.5, 3, 100))
plotSequence(sequence(0.0001, 4, 100))
plotSequence(sequence(0.0001, 3, 100))
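# The recursion `s` above is the logistic map $x_{n+1} = w \, x_n (1 - x_n)$. For weights $1 < w < 3$ the sequence converges to the fixed point $1 - 1/w$, which is the flat tail visible in the plots; a numeric check (weight 2.5 is chosen for fast convergence):

```python
# restate the recursive formula so the cell is self-contained
def s(prevs, weight):
    return weight * prevs - weight * prevs**2

x, w = 0.5, 2.5
for _ in range(200):
    x = s(x, w)

fixed_point = 1 - 1/w  # = 0.6 for w = 2.5
print(x, fixed_point)
```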
| recursive-plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.0
# language: julia
# name: julia-1.3
# ---
# Install these requirements if not already installed
using FileIO, Images, JLD, Statistics
# # Loading images
#
# This section shows how to load a large batch of images in two ways - sequentially and parallely in threads.
#
# You only need to do this once, the images are saved in a JLD file at the end. (Skip to [the next section](#Load-and-process-inputs) if done already)
TRAIN_IMAGES_DIR = "train/"
TEST_IMAGES_DIR = "test/"
function images_in_dir(DIR::String)
images = String[]
for file in readdir(DIR)
if endswith(file, ".jpg")
push!(images, joinpath(DIR, file))
end
end
return images
end
function load_image(path::String)
img = load(path)
arr = channelview(img)
return (permuteddimsview(arr, [2, 3, 1]))
end
train_images = images_in_dir(TRAIN_IMAGES_DIR)
test_images = images_in_dir(TEST_IMAGES_DIR)
function load_all_images(paths_vector)
images_vector = Array{Float32}(undef, 32, 32, 3, 0)
i = 0
for image_file in paths_vector
images_vector = cat(images_vector, load_image(image_file), dims=4)
i += 1
if i%100==0
println(string("Processing image no. ", i, " in thread ", Threads.threadid()))
end
end
return images_vector
end
function load_images_parallel(paths_vector, num_bins)
len = length(paths_vector) # Number of images
bin_size = floor(UInt, len/num_bins) # Size of each bin
all_bins = Vector{String}[] # To store image filenames in each bin
# Create bins of images filenames
for bin=1:num_bins-1
paths_bin = paths_vector[(bin-1)*bin_size+1 : bin*bin_size]
push!(all_bins, paths_bin)
end
# Add last bin with all extra images
push!(all_bins, paths_vector[(num_bins-1)*bin_size+1 : len])
bins = Vector{Array{Float32, 4}}(undef, num_bins) # To store bins of actual image arrays
# Load each bin - multithreaded
Threads.@threads for i = 1:num_bins
bins[i] = load_all_images(all_bins[i])
end
# single array to store all images
images_array = Array{Float32}(undef, 32, 32, 3, 0)
# Concatenate all bins into single array
for bin in bins
images_array = cat(images_array, bin, dims=4)
end
return images_array
end
Threads.nthreads()
test_images = sort(test_images)
train_images = sort(train_images)
test_X_orig = load_images_parallel(test_images, 10)
# There can be a concurrency-violation error here (this happens only in Jupyter for some reason);
# running the cell again seems to fix it.
train_X_orig = load_images_parallel(train_images, 100)
save("dataset.jld", "test_X_orig", test_X_orig, "train_X_orig", train_X_orig)
# ## Load and process inputs
using CSV
using DataFrames
using JLD
using Images
using Statistics
input_file = "train.csv"
df = CSV.read(input_file)
function extract_Y_from_csv(input_file)
df = CSV.read(input_file)
df = sort(df, [:id])
images = df[!, :id]
Y = df[!, :has_cactus]
return Y, images
end
train_Y, images = extract_Y_from_csv("train.csv")
# +
# There is no test.csv for this dataset
# test_Y, images = extract_Y_from_csv("test.csv")
# -
train_data = load("dataset.jld")
function reshape_image_array(image_array::Array{Float32,4})
return reshape(image_array, :, size(image_array)[4])
end
function reshape_image_array_reverse(image_array::Array{Float32,2})
return reshape(image_array, 32, 32, 3, :)
end
test_X = reshape_image_array(train_data["test_X_orig"])
train_X = reshape_image_array(train_data["train_X_orig"])
train_Y = reshape(train_Y, (1, size(train_Y)[1]))
# +
# function equalize_inputs(train_X::Array{Float32,2}, train_Y::Array{Int,2}, num_classes)
# train_X_len = size(train_X)[2]
# train_Y_len = size(train_Y)[2]
# @assert train_X_len == train_Y_len
# len = train_X_len
# class_counter = zeros(Int, num_classes)
# for i in 1:len
# class_counter[train_Y[1, i]+1] += 1
# end
# println("Total entries for each class: ", class_counter)
# max_per_class = minimum(class_counter)
# new_train_X = Array{Float32, 2}(undef, size(train_X)[1], min(train_X_len, max_per_class*num_classes))
# new_train_Y = Array{Float32, 2}(undef, size(train_Y)[1], min(train_Y_len, max_per_class*num_classes))
# counters = zeros(Int, num_classes)
# iter = 1
# new_counter = 1
# while (iter < len) && any(counters .< max_per_class) # Exits when all of the counters are more than the max limit
# # println(counters, new_counter)
# if counters[train_Y[1, iter]+1] < max_per_class
# counters[train_Y[1, iter]+1] += 1
# new_train_X[:, new_counter] = train_X[:, iter]
# new_train_Y[:, new_counter] = train_Y[:, iter]
# new_counter += 1
# end
# iter += 1
# end
# println("Counters: ", counters)
# println("Total entries added: ", new_counter)
# println("Total entries checked: ", iter)
# return new_train_X, new_train_Y
# end
# +
# new_train_X, new_train_Y = equalize_inputs(train_X, train_Y, 2)
# -
# Skip to [this section](#Training-on-GPU) for training on GPU
include("../../NeuralNetwork.jl")
parameters, activations = neural_network_dense(train_X, train_Y, [3072, 10, 1], 1000, 0.01)
predicts, accuracy = predict(train_X, train_Y, parameters, activations)
# +
# predicts, accuracy = predict(test_X, test_Y, parameters, activations)
# -
# ## Example to display images
image_to_select = rand(1:size(train_X)[2])
a = reshape_image_array_reverse(train_X[:, [image_to_select]])
a2 = a[:,:,:,1]
a3 = permuteddimsview(a2, [3, 1, 2])
a3 = Array{Float32}(a3)
println("Ground truth ", train_Y[1, image_to_select])
println("Prediction ", Int(predicts[1, image_to_select]))
println("Filename ", df[image_to_select, 1])
colorview(RGB, a3)
# # Training on GPU
# Make sure to import these before including the GPU file
using CuArrays, CUDAnative, CUDAdrv
include("../../NeuralNetworkGPU.jl")
# Copy inputs to GPU
train_X_cu = CuArray{Float32}(train_X)
train_Y_cu = CuArray{Float32}(train_Y)
test_X_cu = CuArray{Float32}(test_X)
# Train (same as CPU, the differences are internal)
layer_dims = [3072, 10, 1]
parameters, activations = neural_network_dense(train_X_cu, train_Y_cu, layer_dims, 1000, 0.01)
predicts, accuracy = predict(train_X_cu, train_Y_cu, parameters, activations)
parameters, activations = neural_network_dense(train_X_cu, train_Y_cu, layer_dims, 2000, 0.01,
resume=true,
parameters=parameters)
predicts, accuracy = predict(train_X_cu, train_Y_cu, parameters, activations)
| Examples/Cactus/Cactus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Camera Pose Estimation with OpenCV
#
# ### Goal
#
# The goal of this notebook is to estimate the camera pose w.r.t. a reference object of known geometry such as a chessboard.
#
# * Input: intrinsic parameter $K$, an image of a reference object (e.g. chessboard) and its 3D geometry
# * Output: camera pose $R, t$ w.r.t. the reference object
#
# Given a set of 2D-3D correspondences, the code below solves a Perspective-n-Point (PnP) problem and obtains $R, t$ satisfying $\tilde{x} \sim K (R|t)\tilde{X}$.
#
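# The relation $\tilde{x} \sim K (R|t)\tilde{X}$ can be checked numerically with a toy camera; all values below are made up for illustration and are not calibration results:

```python
import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)                      # toy pose: no rotation
t = np.array([[0.], [0.], [5.]])   # camera shifted 5 units along the optical axis

X = np.array([[1.], [2.], [10.]])  # a 3D point in world coordinates
x_h = K @ (R @ X + t)              # homogeneous image point ~ K(R|t)X
x = (x_h / x_h[2])[:2].ravel()     # divide by the third coordinate to get pixels
print(x)
```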
# ## Libraries
# +
# %matplotlib notebook
import sys, os, cv2
import numpy as np
from glob import glob
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from pycalib.plot import plotCamera
# -
# ## Calibration parameters
# +
# Chessboard configuration
rows = 7 # Number of corners (not cells) per row
cols = 10 # Number of corners (not cells) per column
size = 160 # Physical size of a cell (the distance between neighboring corners). Any positive number works.
# Input images capturing the chessboard above
input_file = '../data/00000000.png'
# plotCamera() config
plot_mode = 0 # 0: fixed camera / moving chessboard, 1: fixed chessboard, moving camera
plot_range = 4000 # target volume [-plot_range:plot_range]
camera_size = 100 # size of the camera in plot
# -
# ## 3D positions of the chess corners in WCS
X_W = np.empty([rows * cols, 3], dtype=np.float32)
for i_row in range(0, rows):
for i_col in range(0, cols):
X_W[i_row*cols+i_col] = np.array([size*i_col, size*i_row, 0], dtype=np.float32)
print(X_W)
# ## Intrinsic Parameter
#
# Use `incalib.ipynb` to get the intrinsic parameter.
# +
K = np.array([[1.32931637e+03, 0.00000000e+00, 9.57857318e+02],
[0.00000000e+00, 1.32931637e+03, 5.47353253e+02],
[0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
d = np.array([[ 0.0052289, -0.01161532, 0.0029297, 0.00017352, 0.0003208 ]])
print("Intrinsic parameter K = ", K)
print("Distortion parameters d = (k1, k2, p1, p2, k3) = ", d)
# -
# ## 2D chess corner detection and PnP
# +
img = cv2.imread(input_file, cv2.IMREAD_GRAYSCALE) # Image
found, x_I = cv2.findChessboardCorners(img, (cols, rows)) # Find chess corners
if found:
term = (cv2.TERM_CRITERIA_EPS+cv2.TERM_CRITERIA_COUNT,30,0.1)
x_I_sub = cv2.cornerSubPix(img, x_I, (5,5), (-1,-1), term) # subpixel refinement
ret, rvec, tvec = cv2.solvePnP(X_W, x_I_sub, K, d) # Solve PnP using the refined corners
# Plot
fig_ex = plt.figure()
ax_ex = Axes3D(fig_ex)
ax_ex.set_xlim(-plot_range, plot_range)
ax_ex.set_ylim(-plot_range, plot_range)
ax_ex.set_zlim(-plot_range, plot_range)
R_w2c = cv2.Rodrigues(rvec)[0] # PnP returns R, t satisfying Xc = R X_w + t
R_c2w = np.linalg.inv(R_w2c) # Camera pose in WCS
t_c2w = -R_c2w.dot(tvec).reshape((1,3)) # Camera position in WCS
plotCamera(ax_ex, R_c2w, t_c2w, "b", camera_size)
ax_ex.plot(X_W[:,0], X_W[:,1], X_W[:,2], ".")
# -
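# The inversion performed above ($R_{c2w} = R_{w2c}^{-1}$, $t_{c2w} = -R_{c2w} t$) can be sanity-checked with numpy alone: mapping the recovered camera position back through the world-to-camera transform must yield the camera-frame origin. The rotation below is an arbitrary illustrative example, not a real pose.

```python
import numpy as np

# an arbitrary world-to-camera pose (30-degree rotation about z)
th = np.deg2rad(30)
R_w2c = np.array([[np.cos(th), -np.sin(th), 0],
                  [np.sin(th),  np.cos(th), 0],
                  [0,           0,          1]])
tvec = np.array([[1.0], [2.0], [3.0]])

R_c2w = R_w2c.T          # for a rotation matrix, the inverse equals the transpose
t_c2w = -R_c2w @ tvec    # camera position in world coordinates

# the camera centre, transformed back to camera coordinates, is the origin
origin = R_w2c @ t_c2w + tvec
print(origin.ravel())
```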
# ## Exercises
#
# 1. Try **ChArUco** patterns instead of chessboards.
# * [Detection of ChArUco Corners](https://docs.opencv.org/master/df/d4a/tutorial_charuco_detection.html)
# * Try with an image having an occluding object on the board.
#
| ipynb/excalib_chess.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### build-sql-database
# Builds a SQL database from the CPDOC collection **<NAME>, Ministério das Relações Exteriores (AAS-MRE)**.
#
# The metadata included are:
# * doc_id
# * text body
# * main language
# * readability
# * URL
# +
import os
import re
import csv
import sqlite3
from langdetect import detect
from IPython.display import clear_output
# -
# # set operating system variables
# Checks which operating system is being used and creates user-specific variables. Renato = Linux; Marcelo = nt (Windows)
#
# Also sets working folders
inputs_raw = os.path.join("..", "data", "inputs")
inputs = os.path.join("..", "data")
outputs = os.path.join("..","data")
if os.name == 'nt':
encoding_type = 'utf-8'
else:
encoding_type = 'ISO-8859-1'
path = r"D:/pseudo-dropbox/backups-fgv/textfiles/textfiles-corrected-regrouped/"
# ### create special sorting function
# * Creates a function that lists files in a different order
# * It was important to pay attention to files with different numbering, such as:
# * AAS_mre_onu_1975.01.23_doc_I-A.txt
# * AAS_mre_onu_1975.01.23_doc_I-6A8.txt
def to_zero(x):
if x == '': x = '0'
return x
def special_sort(l):
convert = lambda text: int(text) if text.isdigit() else str(text)
alphanum_key = lambda key: [ convert(to_zero(c)) for c in filter(None, re.split('(\d)A|A\d|([A-Z]*)-A?|.txt', key))]
return sorted(l, key = alphanum_key)
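# The `special_sort` above is a collection-specific variant of natural sorting: filename keys are split into digit and letter chunks so numeric parts compare as integers rather than strings. A generic sketch of the same idea, on made-up filenames:

```python
import re

def natural_key(s):
    # split into digit runs and non-digit runs; digit runs compare numerically
    return [int(chunk) if chunk.isdigit() else chunk
            for chunk in re.split(r'(\d+)', s)]

names = ['doc_10.txt', 'doc_2.txt', 'doc_1.txt']
print(sorted(names, key=natural_key))  # ['doc_1.txt', 'doc_2.txt', 'doc_10.txt']
```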
# # create list of files with special sort
files = [f for f in sorted(os.listdir(path))]
fullpath_list = []
fullpath = ''
for file in files:
fullpath = path+file
fullpath_list.append(fullpath)
fullpath_list = special_sort(fullpath_list)
fullpath_list[0:10]
# # url data
# ### creates list of dossiers and urls
url_list = []
dossie_url_list = []
url_inputs = os.path.join(inputs_raw,"URLS_AAS.csv")
with open(url_inputs, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=';')
for row in reader:
dossie = row[1]
if not dossie.startswith('AAS mre'): continue
dossie = re.sub(' ','_',dossie)
dossie = re.sub('AAS_mre_(.*)',r'\1',dossie)
dossie = re.sub('\/',r'-',dossie)
dossie_url_list.append(dossie)
url_list.append(row[2])
dossie_url_list[:5], url_list[:5]
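# The three `re.sub` calls above normalize a CSV dossier label into the id used by the file names: spaces become underscores, the "AAS_mre_" prefix is dropped, and slashes become dashes. Isolated as a function, with a made-up label:

```python
import re

def normalize_dossie(label):
    label = re.sub(' ', '_', label)               # 'AAS mre onu ...' -> 'AAS_mre_onu_...'
    label = re.sub('AAS_mre_(.*)', r'\1', label)  # drop the collection prefix
    label = re.sub(r'\/', '-', label)             # '1975/01' -> '1975-01'
    return label

print(normalize_dossie('AAS mre onu 1975/01'))  # onu_1975-01
```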
# # builds sql database
# Classifies corpus by: id, readability and main_language.
#
# Stores the metadata in SQL. Other metadata will be placed in the dossie table: dossie_id, dossie subject, date (inaccurate)
#
# Documents with extremely low readability (less than 30% of readable phrases) are removed from the database and listed in a file for later analysis, so that we can think of solutions in the future. Those documents are mainly manuscripts, pictures, and texts with stains, scratches or drafts.
#
# The lang_class column should be changed in the future, maybe into 2 columns: main_language, second_language.
#
# For texts with 10 **phrases** or fewer, readability is set to -1 because there would be too few observations. A new column with info about **word** length should be added; instead of the -1, it would hold the estimated length and the length info.
#
# +
doc_class = []
lang_class = 'none'
not_readable = []
percentil = int(len(fullpath_list)/100)
sql_db = os.path.join(inputs, 'cpdoc_as.sqlite')
conn = sqlite3.connect(sql_db)
cur = conn.cursor()
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
inserts data into sql database
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
user_input = input("Data will be erased and replaced. Continue? Type 'yes' or 'no' on your keyboard: ")
if user_input.lower() == 'yes':
cur.execute("DROP TABLE IF EXISTS docs")
cur.execute('''CREATE TABLE IF NOT EXISTS docs
(id VARCHAR(31) PRIMARY KEY, main_language VARCHAR(10), readability DECIMAL(3,2), url LONGTEXT, body LONGTEXT
DEFAULT NULL)''')
''' iterates through texts '''
for count_doc,txt in enumerate(fullpath_list):
''' measures completion percentage '''
if count_doc % percentil == 0:
clear_output()
print(int(count_doc/percentil),'% done')
# if count_doc != 0: break
''' captures info about date, year, month and ids '''
txt_date = re.sub('.*(19\d\d\.\d\d\.\d\d).*', r'\1', txt)
txt_year = re.sub('.*(19\d\d).*', r'\1', txt)
txt_month = re.sub('.*19\d\d\.(\d\d).*', r'\1', txt)
txt_id = re.sub('.*AAS_mre_(.*).txt', r'\1', txt)
dossie = re.sub('(.*)_doc_.*', r'\1', txt_id)
url_index = dossie_url_list.index(dossie)
url = url_list[url_index]
''' makes analysis in each document '''
with open(txt, 'r', encoding=encoding_type) as f:
txt_body = f.read()
''' identifies main language and readability of each document '''
text_split = re.split('\.|\?|\:|\,', txt_body)
pt_count = en_count = es_count = fr_count = de_count = lang_count = total_count = 0
for phrase in text_split:
try:
if len(re.findall("[^\W\d]", phrase)) <= 10: continue
language = detect(phrase)
total_count += 1
except:
continue
if language == 'pt':
pt_count += 1
if language == 'en':
en_count += 1
if language == 'es':
es_count += 1
if language == 'fr':
fr_count += 1
if language == 'de':
de_count += 1
lang_count = pt_count + en_count + es_count + fr_count + de_count
if total_count == 0: readability_ratio = 0
else: readability_ratio = float(lang_count/total_count)
if readability_ratio < 0.3:
not_readable.append(txt)
continue
elif total_count > 10: readability = readability_ratio
else: readability = -1
''' note: with the criteria, documents might have readability but no lang_class '''
if de_count/total_count > 0.3 and de_count >= 3:
lang_class = 'de'
if fr_count/total_count > 0.3 and fr_count >= 3:
lang_class = 'fr'
if es_count/total_count > 0.3 and es_count >= 3:
lang_class = 'es'
if en_count/total_count > 0.3 and en_count >= 3:
lang_class = 'en'
if pt_count/total_count > 0.3 and pt_count >= 3:
lang_class = 'pt'
''' inserts data into sql '''
query = "INSERT INTO docs VALUES (?,?,?,?,?)"
cur.execute(query, (txt_id, lang_class, readability, url, txt_body))
else:
    print('Table was not created/replaced')
conn.commit()
conn.close()
# -
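# The readability rule in the cell above boils down to: count phrases detected as one of the five expected languages, divide by all detectable phrases, discard documents below 0.3, and mark very short texts with -1. A self-contained sketch of that threshold logic (langdetect is left out, so the phrase counts here are made up):

```python
def classify_readability(lang_counts, total_count, threshold=0.3, min_phrases=10):
    """lang_counts: phrases per expected language; total_count: all detectable phrases.
    Returns the readability ratio, -1 for too-short texts, or None for unreadable docs."""
    if total_count == 0:
        return None
    ratio = sum(lang_counts.values()) / total_count
    if ratio < threshold:
        return None  # the notebook appends these files to not_readable and skips them
    return ratio if total_count > min_phrases else -1

print(classify_readability({'pt': 40, 'en': 5}, 50))  # 0.9
print(classify_readability({'pt': 2}, 20))            # None (0.1 < 0.3)
print(classify_readability({'pt': 4}, 5))             # -1 (too few phrases)
```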
# ### Saves list of files with too low readability
# +
not_readable_files = os.path.join(outputs, "not_readable_files.txt")
with open(not_readable_files, 'w', encoding='utf-8') as f:
    for file in not_readable:
        f.write(file + '\r\n')
# -
| notebooks/03_build_sql_database_docs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="7KaEFFwmaz9f"
# # 5. I/O (Input / Output)
# + [markdown] id="WyMfCdGSaz9n"
# ## What is I/O?
#
#
# - From a program's point of view, all incoming data is called input and all outgoing data is called output.
# > All data moving in or out from the main memory's perspective is called I/O (communication with the CPU excluded).
#
#
# - Input typed by the user on the keyboard is called stdin, and output shown back to the user on the monitor is called stdout.
#
# > These are originally Unix (terminal) terms, but they appear so often in programming that we introduce them here.
#
#
# - Since a program lives in main memory, loading a file from storage is also input, and saving the program's results to storage is also output. These operations are collectively called file I/O.
# > I/O between storage and a program is called file I/O.
#
#
# - Let's take a quick look at how to use stdin/stdout and file I/O in Python.
# + [markdown] id="Ob-m8XQbaz9o"
# ## 5.1 STDIN / STDOUT (Standard IN, Standard OUT)
# + [markdown] id="I9B1Br_0az9o"
# - In Python, input() reads stdin from the user.
#
#
# - In Python, print() writes stdout to the user.
# + id="NcHNtr58az9p"
# Assign the value typed on the keyboard to a and print it.
a = input()
a, type(a)
# + [markdown] id="-9bkJP0haz9q"
# - In Python, stdin always comes in as a string. Convert it to the desired data type via type casting before use.
# + id="Kre90bwsaz9r"
# Assuming the input value is a number.
a = int(input("Enter a number: "))  # type casting, (explicit) type conversion
a, type(a)
# + id="8iinAvVAaz9r"
# If we assume a number but a string comes in, an error occurs. This is a case where the type cast fails.
a + 5
# + [markdown] id="6eZWUq_Daz9r"
# - Since the input is a string, there are fancy ways to process it.
# + [markdown] id="XDI8URO8az9s"
# #### Q. If several numbers come in on stdin and we know the input format, how can we process them effectively?
# + id="sg0e_5H5az9s"
# Assuming there are two numbers here
s = input()
a = int(s.split(',')[0])
b = int(s.split(',')[1])
print(a, b, type(a), type(b))
# + id="SPAtlM66az9s"
# An expression like this is called a list comprehension.
L = [x for x in range(1, 10) if x % 2 != 0]
L
# -
L = [int(x) for x in input("Enter several numbers separated by commas: ").split(',')]
L
# + id="SfAWJB7taz9t"
# The code above is equivalent to the code below. You can see the code above is much simpler; get used to using list comprehensions.
L = []
for x in input("Enter several numbers separated by commas: ").split(','):
x = int(x)
L.append(x)
L
# + [markdown] id="DPYlwZhEaz9t"
# ## 5.2 File I/O
# + [markdown] id="eVNcSf2iaz9t"
# - In Python, you can easily open a file with open().
#
#
# - After opening a file with open(), you must close it with close(). (If you don't, Jupyter keeps holding on to the file, which wastes system resources; details omitted.)
#
#
# - open() provides many options, but by default it assumes a txt file.
#
#
#
# - Opening other file types requires other libraries.
#
#     e.g. pandas, csv, or openpyxl can be used to open csv and excel files.
#
#     e.g. PIL or opencv can be used to open png and jpg files.
#
#     e.g. pickle can be used to open pk and pkl files.
# + [markdown] id="BGNX1KElaz9t"
# > Ways to read a text file include read(), readline(), readlines(), and a for loop. Let's look at the differences between them through code.
# + id="VKYHye1baz9u"
# Open test.txt in read mode and read it with f.read().
with open("/Users/ppangppang/Desktop/test.txt",'r') as f:
data = f.read()
data
# + id="4DSliMQlaz9u"
# Open test.txt in read mode and read it with f.readline().
with open("/Users/ppangppang/Desktop/test.txt",'r') as f:
data = f.readline()
data
# + id="SLD_zgm4az9u"
# Open test.txt in read mode and read it with f.readlines().
with open("/Users/ppangppang/Desktop/test.txt",'r') as f:
data = f.readlines()
data
# + id="WEP8wGLoaz9u"
# Open test.txt in read mode with a for loop and print it.
with open("/Users/ppangppang/Desktop/test.txt",'r') as f:
for line in f:
print(line)
# + [markdown] id="RJ7dLzyeaz9v"
# #### Q. We want to open test.txt, remove all single-character lines, and save it again. How should we do it?
# + id="fP0xhI0Caz9v"
output = []
# Open test.txt in read mode; it is closed automatically when the block is done.
with open("/Users/ppangppang/Desktop/test.txt",'r') as f:
for line in f:
line = line.strip()
if len(line) > 1:
output.append(line)
output
# Only text longer than one character is stored in the output list.
# Open in write mode to save the contents of the output list.
with open("/Users/ppangppang/Desktop/test.txt",'w') as f:
for line in output:
print(line, file=f)
# + id="WT40gJ8Naz9v"
# Load the file again to check the data was saved properly.
with open("/Users/ppangppang/Desktop/test.txt",'r') as f:
for line in f:
print(line)
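# Note that each `line` read from a file keeps its trailing newline, which is why the `print(line)` loop above shows blank lines between rows; printing with `end=''` or stripping fixes that. A small sketch using an in-memory file instead of test.txt:

```python
import io

f = io.StringIO("first\nsecond\n")
lines = [line.rstrip('\n') for line in f]  # drop the trailing newline of each line
print(lines)  # ['first', 'second']
```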
# + [markdown] id="Z1O6wq4Baz9v"
# ### (OPTIONAL) Saving a Python object itself with the pickle library
# + id="Y8SXE8dDaz9v"
output
# + id="aRRPi3VZaz9w"
import pickle
# + id="DXswSzeNaz9w"
with open("/Users/ppangppang/Desktop/test.pk",'wb') as f:
pickle.dump(output, f)
with open("/Users/ppangppang/Desktop/test.pk",'rb') as f:
output = pickle.load(f)
# + id="rB84ZyQraz9w"
output
# -
| python/210907-Python-IO_stdin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from datetime import datetime as dt
# "tobs" is "temperature observations"
df = pd.read_csv('hawaii_measurements.csv')
df.head()
df.tail()
df.date.dtype
# Convert the date column format from string to datetime
df.date = pd.to_datetime(df.date, infer_datetime_format=True)
# Set the date column as the DataFrame index
df = df.set_index(df['date'])
# Drop the date column
df = df.drop(columns='date')
df.head()
# ### Compare June and December data across all years
from scipy import stats
# Filter data for desired months
jun_data = df[df.index.month == 6]
dec_data = df[df.index.month == 12]
jun_data.mean()
dec_data.mean()
# Create collections of temperature data
jun_temp = jun_data.tobs
dec_temp = dec_data.tobs
# Run independent two-sample t-test (ttest_ind assumes unpaired samples)
stats.ttest_ind(jun_temp, dec_temp)
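# `stats.ttest_ind` computes the difference of means over the pooled standard error. A pure-Python sketch of that formula (this is not SciPy's implementation, and the temperature samples below are made up):

```python
import math
from statistics import mean, variance

def t_statistic(a, b):
    # two-sample t with pooled variance (what ttest_ind computes with equal_var=True)
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

june = [74, 76, 75, 77, 78]       # hypothetical observations
december = [70, 71, 69, 72, 70]
print(round(t_statistic(june, december), 2))  # 6.42
```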
# ### Analysis
#
# It's safe to say that, across all stations, the mean temperatures for June and December in the years 2010-2017 differ by about 3.9 degrees. According to the t-test conducted, the extremely low p-value means the difference is statistically significant.
#
#
| stats_bonus_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="FrBZe2Qalk2q"
# # Get depth images using the MiDaS repo
# <NAME>
# <EMAIL>
# + id="GhlU02ImpYQ6" outputId="6d0f886e-a08b-4827-a91f-be528b44619a" colab={"base_uri": "https://localhost:8080/", "height": 141}
# !git clone https://github.com/intel-isl/MiDaS.git
# + id="3ATv8BnOnOUA" outputId="898a9410-06db-4eb2-82a8-35880686e64a" colab={"base_uri": "https://localhost:8080/", "height": 35}
from google.colab import drive
drive.mount('/content/gdrive',force_remount=True)
# + id="u8u7Fu91sbzs" outputId="400a0802-f748-4003-abda-97c56af7b3ed" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd /content/
# + id="IxkKWCPpspc8"
# !cp '/content/gdrive/My Drive/EVA5/S13/Copy of YoloV3_Dataset.zip' '.'
# + id="xaf3EB5Dsb3D" outputId="c6973f0b-87e1-4169-b0c2-1c9b902862a3" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !unzip 'Copy of YoloV3_Dataset.zip'
# + id="22373sDntcO3" outputId="f4bf606c-2240-46fa-f792-8a70680e62ff" colab={"base_uri": "https://localhost:8080/", "height": 52}
# ls
# + id="Bx1PKTG9tJFB"
# !cp -a '/content/YoloV3_Dataset/Images/.' '/content/MiDaS/input/'
# + id="zvo-rTQpn2CP"
# #!wget https://github.com/intel-isl/MiDaS/releases/download/v2/model-f46da743.pt
# + id="arSr8wHLndEo" outputId="925659b1-7a36-4356-b5fb-aa0b761c98e6" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd MiDaS
# + id="22PJQZE8t-Y_"
# !ls input/
# + id="WJ6APKXEqpve"
# !cp '/content/gdrive/My Drive/EVA5/15A/MiDas/model-f46da743.pt' '.'
# + id="MCx7wjxArMLT" outputId="bcf42be1-beec-432d-db50-ebef16de11c2" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !python run.py
# + [markdown] id="ZZc4dnoRl4mU"
# ## Avoid pfm files and store only jpg images
# + id="Rhho9nQCuzXP"
# !cp -a '/content/MiDaS/output/*.jpg' '/content/gdrive/My Drive/EVA5/15A/midas_out/'
# + id="Ea6XEU8Quzal"
| MiDas_depth_images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PlWordNet Demo
# ## Utility code
# +
from time import time
# just a simple class for measuring execution time
class timer:
def __enter__(s):
s.t = time()
def __exit__(s,x,y,z):
print(f'took {time() - s.t:.2f} seconds')
# -
# ## Loading `PlWordNet`
import plwordnet
# You can load .xml, .pkl files. They can also be compressed gzip or lzma (.gz or .xz).
#
# Alternatively, you can pass an opened file object (rb) to `load`.
with timer():
wn = plwordnet.load('../local/plwordnet_4_2.xml.xz')
# Calling `str` on `Wordnet` shows basic statistics.
wn
# Pickle the `Wordnet` object for faster loading. Note that the pickled object may not be compatible between different `plwordnet` library versions.
wn.dump('../local/plwordnet_4_2.pkl')
with timer():
wn_from_pickle = plwordnet.load('../local/plwordnet_4_2.pkl')
# ## Examples
# Show some relations and their ids
for x in list(wn.relation_types.values())[:5]:
print('---')
print(x.id, x.name, x.type, sep=', ')
print(x.description)
# Show the first 10 synset relations with predicate 11 (hiperonimia, i.e. hypernymy). Use the short relation name for printing.
for s, p, o in wn.synset_relations_where(predicate=11)[:10]:
print(p.format(s, o, short=True))
# Show all relations where a lexical unit with lemma 'miód' is a subject or an object:
for lu in wn.lemmas('miód'):
for s, p, o in wn.lexical_relations_where(subject=lu) + wn.lexical_relations_where(object=lu):
print(p.format(s, o))
# Show all subjects of relations with predicate 13 (konwersja), where a lexical unit with lemma 'prababcia' is an object. Also show the part of speech and synset of the found subjects.
for lu in wn.lemmas('prababcia'):
for s, p, o in wn.lexical_relations_where(predicate=13, object=lu):
print('---')
print(s)
print('part of speech =', s.pos)
print('synset =', s.synset)
| docs/examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import Contact & Deals from Excel (at the same time)
# <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3f/HubSpot_Logo.svg/220px-HubSpot_Logo.svg.png" alt="drawing" width="200" align='left'/>
# ### Step 1. Install pip package
# +
# #!pip install requests
# -
# ### Step 2. Import hubspot connector
import hubspot_connector
# ### Step 3. Set variables
# - token - token for hubspot api
# - excel_file - path to excel file
# - sheet_name - name of sheet
#
#
# Note: the excel file should be located on the same drive as your notebook and be structured with the following columns: <br>
# DEAL PIPELINE DEAL_STAGE CLOSED_DATE COMPANY FIRSTNAME LASTNAME JOB_TITLE EMAIL
token = "------"
excel_file = "Deal+contact_creation.xlsx"
sheet_name = "Feuil1"
# ### Step 4. Run the script for bulk import
hubspot_connector.connect(token, excel_file, sheet_name)
# ### Step 5. Run the script for bulk delete
hubspot_connector.delete(token, excel_file, sheet_name)
| CRM/Hubpsot/_Workflows/Import Contact & Deals from Excel/Import Contact & Deals from Excel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={}
# # LaTeX Exercise 1
# + [markdown] nbgrader={}
# The images of the equations on this page were taken from the Wikipedia pages referenced for each equation.
# + [markdown] nbgrader={}
# ## Imports
# + nbgrader={}
from IPython.display import Image
# + [markdown] nbgrader={}
# ## Typesetting equations
# + [markdown] nbgrader={}
# In the following cell, use Markdown and LaTeX to typeset the equation for the probability density of the normal distribution $f(x, \mu, \sigma)$, which can be found [here](http://en.wikipedia.org/wiki/Normal_distribution). Following the main equation, write a sentence that defines all of the variable in the equation.
# + [markdown] nbgrader={}
# \begin{equation*}
# f(x,\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}
# \end{equation*}
# -
# In this equation, $\mu$ is the mean, $\sigma$ is the standard deviation, and $x$ is the position.
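# As a quick numeric check of the density above: at $x = \mu$ the exponential term equals 1, so $f(\mu,\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}$. A small Python sketch:

```python
import math

def normal_pdf(x, mu, sigma):
    # probability density of the normal distribution
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(normal_pdf(0, 0, 1))  # ≈ 0.3989, i.e. 1/sqrt(2*pi)
```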
# + [markdown] nbgrader={}
# In the following cell, use Markdown and LaTeX to typeset the equation for the time-dependent Schrodinger equation for non-relativistic particles shown [here](http://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Time-dependent_equation) (use the version that includes the Laplacian and potential energy). Following the main equation, write a sentence that defines all of the variable in the equation.
# + [markdown] nbgrader={}
# \begin{equation*}
# i\hbar\frac{\partial}{\partial t}\Psi(r,t)=[-\frac{\hbar^{2}}{2\mu}\nabla^{2}+V(r,t)]\Psi(r,t)
# \end{equation*}
# + [markdown] deletable=false nbgrader={"checksum": "4d858b55aeb9117b8cfa6f706ab5b617", "grade": true, "grade_id": "latexex01b", "points": 4, "solution": true}
# $i$ is the imaginary unit $(-1)^{1/2}$, $\hbar$ is the reduced Planck constant, $\frac{\partial}{\partial t}$ is the partial derivative with respect to time, $\mu$ is the reduced mass, $\nabla^{2}$ is the Laplacian operator, $V$ is the potential energy, and $\Psi$ is the wave function
# + [markdown] nbgrader={}
# In the following cell, use Markdown and LaTeX to typeset the equation for the Laplacian squared ($\Delta=\nabla^2$) acting on a scalar field $f(r,\theta,\phi)$ in spherical polar coordinates found [here](http://en.wikipedia.org/wiki/Laplace_operator#Two_dimensions). Following the main equation, write a sentence that defines all of the variable in the equation.
# + [markdown] nbgrader={}
# \begin{equation*}
# \Delta f=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial f}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}
# \frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial f}{\partial \theta}\right)+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial ^{2} f}{\partial \varphi^{2}}
# \end{equation*}
# + [markdown] deletable=false nbgrader={"checksum": "625624933082a6695c8fd5512a808b77", "grade": true, "grade_id": "latexex01c", "points": 4, "solution": true}
# $r$ is the radial position, $\theta$ is the polar angle, and $\varphi$ is the azimuthal angle
# + deletable=false nbgrader={"checksum": "19b733e9b9c40a9d640d0ff730227a31", "grade": true, "grade_id": "latexex01a", "points": 2, "solution": true}
| assignments/assignment06/LaTeXEx01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 🐍Python Tricks - Black Magics
# ## Table of Content:
# * [EAFP](#anchor1)
# * [self assist](#anchor2)
# * [eval](#anchor3)
# * [str.find](#anchor4)
# * [str.replace](#anchor5)
# * [^ (xor)](#anchor6)
# * [Sentinel](#anchor7)
# * [+= ,](#anchor8)
# * [List Comprehension with Break](#anchor9)
# * [Integrated by Fractions](#anchor10)
# * [Complex Number in Matrix](#anchor11)
# ***
# ## EAFP <a name="anchor1"></a>
# use [**EAFP**](https://docs.python.org/2/glossary.html#term-eafp)(easier to ask for forgiveness than permission) against [**LBYL**](https://docs.python.org/2/glossary.html#term-lbyl) (look before you leap), actually this is the Pythonic design philosophy.
# ```python
# """LBYL"""
# if "key" in dict_:
# value += dict_["key"]
#
# """EAFP"""
# # downplays the importance of the key
# try:
# value += dict_["key"]
# except KeyError:
# pass
# ```
# Let's write a Fibonacci example in a Pythonic style. We will provide the result as a generator and iterate over it in EAFP style.
# +
"""pythonic fibonacci"""
#generator function
def fibonacci(n):
a, b, counter = 0, 1, 0
while counter <= n:
yield a
a, b = b, a + b
counter += 1
f = fibonacci(5) #f is iterator object
while True:
try:
print (next(f), end=" ")
except StopIteration:
break
# -
# #### Inspirations:
# ```python
# """used in detect cycle in linked list"""
# # L141: determine there is a cycle in a linked list
# def has_cycle_in_linked_list(head: ListNode) -> bool:
# try:
# slow = head
# fast = head.next
# while slow is not fast:
# slow = slow.next
# fast = fast.next.next
# return True
# except:
# return False
#
# """used in binary search and insert"""
# # L334: given an unsorted array return whether an increasing subsequence (incontinuous) of length k exists or not in the array.
# def increasing_triplet(nums: List[int], k: int) -> bool:
# try:
# inc = [float('inf')] * (k - 1)
# for x in nums:
# inc[bisect.bisect_left(inc, x)] = x
# return k == 0
# except:
# return True
#
# """used in eval"""
# # L301: Remove the minimum number of invalid parentheses in order to make the input string valid. Return all possible results.
# def remove_invalid_parentheses(s: str) -> List[str]:
# def isvalid(s):
# try:
# eval('0,' + ''.join(filter('()'.count, s)).replace(')', '),'))
# return True
# except:
# pass
# level = {s}
# while True:
# valid = list(filter(isvalid, level))
# if valid:
# return valid
# level = {s[:i] + s[i+1:] for s in level for i in range(len(s))}
# ```
# ## self assist <a name="anchor2"></a>
# a trick of dynamic property, but a bit intrusive
# #### Inspiration:
# ```python
# """self as dummy node in linked list"""
# def traverse_linked_list(self, head: TreeNode) -> TreeNode:
# pre, pre.next = self, head
# while pre.next:
# self.process_logic(pre.next)
# return self.next # return head node
# ```
# ## eval <a name="anchor3"></a>
# eval() executes arbitrary strings as Python code. But note that eval may become a potential security risk.
# #### Inspiration:
# ```python
# """build a tree constructor and use eval to execute"""
# # L536: construct binary tree from string
# def str2tree(s: str) -> TreeNode:
# def t(val, left=None, right=None):
# node, node.left, node.right = TreeNode(val), left, right
# return node
# return eval('t(' + s.replace('(', ',t(') + ')') if s else None
# ```
# ## str.find <a name="anchor4"></a>
# This is clever use of str.find. Although this usage is not very generalizable, the idea can be used for reference.
# #### Inspiration:
# ```python
# """use find to construct a mechanism: if val is # return 1, if not return -1"""
# # L331: verify preorder serialization of a binary tree, if it is a null node, we record using a sentinel value #
# def is_valid_serialization(preorder: str) -> bool:
# need = 1
# for val in preorder.split(','):
# if not need:
# return False
# need -= ' #'.find(val)
# return not need
# ```
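# The commented snippet above is runnable as-is; `' #'.find(val)` returns 1 when val is the sentinel '#' (consuming one open slot) and -1 for any number (which nets one extra slot). A quick demo (type hints dropped):

```python
def is_valid_serialization(preorder):
    need = 1  # slots still to fill
    for val in preorder.split(','):
        if not need:
            return False
        need -= ' #'.find(val)  # '#': need -= 1; number: need += 1
    return not need

print(is_valid_serialization('9,3,4,#,#,1,#,#,2,#,6,#,#'))  # True
print(is_valid_serialization('1,#'))                        # False
```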
# ## str.replace <a name="anchor5"></a>
# Use str.replace to implement a string version's union-find (disjoint-set).
# #### Inspiration:
# ```python
# """convert int to unicode char, and use str.replace to merge the connected nodes by reduce"""
# # L323: given n nodes from 0 to n - 1 and a list of undirected edges, find the number of connected components in an undirected graph.
# def count_connected_components(n: int, edges: List[List[int]]) -> int:
# return len(set(reduce(lambda s, edge: s.replace(s[edge[1]], s[edge[0]]), edges, ''.join(map(chr, range(n))))))
# ```
# ## ^ (xor) <a name="anchor6"></a>
# ^ is usually used to cancel out values that appear an even number of times and keep the one that appears an odd number of times, or to keep the differing bits and drop the identical ones.
# #### Inspirations:
# ```python
# """Use ^ to remove even exactly same numbers and save the odd, or save the distinct bits and remove the same."""
# """bit manipulate: a^b^b = a"""
# # L268: Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the one that is missing from the array.
# def missing_number(nums: List[int]) -> int:
# res = 0
# for i, e in enumerate(nums):
# res = res ^ i ^ e
# return res ^ len(nums)
#
# """simply find the first index whose "partner index" (the index xor 1) holds a different value."""
# # L540: find the single element, in a sorted array where every element appears twice except for one.
# def single_non_duplicate(nums: List[int]) -> int:
# lo, hi = 0, len(nums) - 1
# while lo < hi:
# mid = (lo + hi) // 2
# if nums[mid] == nums[mid ^ 1]:
# lo = mid + 1
# else:
# hi = mid
# return nums[lo]
#
# """parity in triple comparisons"""
# # L81: search a target in a rotated sorted array
# def search_in_rotated_sorted_arr(nums: List[int], target: int) -> int:
# """I have three checks (nums[0] <= target), (target <= nums[i]) and (nums[i] < nums[0]), and I want to know
# whether exactly two of them are true. They can't all be true or all be false (check it), so I just need to
# distinguish between "two true" and "one true". Parity is enough for that, so instead of adding them I xor them"""
# self.__getitem__ = lambda i: (nums[0] <= target) ^ (nums[0] > nums[i]) ^ (target > nums[i])
# i = bisect.bisect_left(self, True, 0, len(nums))
# return i if target in nums[i:i+1] else -1
# ```
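# The first inspiration above runs as-is: every present number cancels against its own index via a^b^b = a, leaving only the missing value. A quick demo:

```python
def missing_number(nums):
    # XOR all indices and values together, plus len(nums);
    # each present number cancels with its matching index
    res = 0
    for i, e in enumerate(nums):
        res = res ^ i ^ e
    return res ^ len(nums)

print(missing_number([3, 0, 1]))                    # 2
print(missing_number([9, 6, 4, 2, 3, 5, 7, 0, 1]))  # 8
```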
# ## Sentinel <a name="anchor7"></a>
# Sentinel can make the program simpler: a mechanism to distinguish useful data from placeholders which indicate data is absent.
#
# For example, we can put the key we search after the end of the array, this ensures that we will eventually find the element. When we find it, we only have to check whether we found a real element or just the sentinel.
#
# Here we compare two similar built-in functions: str.index without sentinel and str.find with sentinel.
# #### Sentinel in Python:
# ```python
# """find vs index"""
# """index without sentinel"""
# try:
# i = a.index(b)
# except:
# return
#
# """index with sentinel"""
# i = a.find(b)
# if i == -1:
# return
#
# """sentinel in dict.get"""
# sentinel = object()
# value = my_dict.get(key, sentinel)
# if value is not sentinel:
# # Do something with value
#
# """sentinel in iter"""
# blocks = ''.join(iter(partial(f.read, 32), ''))
# ```
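# The two-argument form of `iter` at the end deserves a runnable demo: `iter(callable, sentinel)` calls the callable repeatedly until it returns the sentinel. Here with an in-memory file standing in for a real one:

```python
from functools import partial
from io import StringIO

f = StringIO("abcdefghij")
# read 4 characters at a time until read() returns the sentinel ''
chunks = list(iter(partial(f.read, 4), ''))
print(chunks)  # ['abcd', 'efgh', 'ij']
```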
# #### Inspirations:
# ```python
# """add a sentinel n at the end (which is the appropriate last insertion index then)"""
# # L47: given a collection of numbers that might contain duplicates, return all possible unique permutations.
# def permute_unique(nums: List[int]) -> List[List[int]]:
# perms = [[]]
# for n in nums:
# perms = [p[:i] + [n] + p[i:]
# for p in perms
# for i in range((p + [n]).index(n) + 1)]
# return perms
#
# """sentinel in matrix"""
# def traverse_neighbors(matrix: List[List[int]]):
# m, n = len(matrix), len(matrix[0])
# """augment matrix to void length check by sentinel"""
# matrix += [0] * n,
# for row in matrix:
# row.append(0)
#
# for i in range(m):
# for j in range(n):
# # construct neighbor iterator
# for I, J in (i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1):
# """no need to check boundary"""
# process_neighbor_logic(matrix[I][J])
#
# """functional sentinel"""
# def get_element(matrix: List[List[int]], i: int, j: int) -> int:
# return matrix[i][j] if 0 <= i < m and 0 <= j < n else -1
# ```
# ## += , <a name="anchor8"></a>
# Less to type and clearer, and even faster according to tests; but it may confuse readers who are not familiar with the idiom.
""", means convert to tuple, += element, equals to append(element)"""
arr = [1, 2, 3]
arr += 4,
arr
# ## List Comprehension with Break <a name="anchor9"></a>
# We know we cannot use branching statements (conditional execution) like if/else blocks or break in a list comprehension, so how can we simulate break?
#
# Notice this is just a study and exploration of list comprehensions. Since the trick is hackish, with poor readability and performance, it should not be used in production code. But the techniques and ideas can be used for reference and may provide some inspiration.
#
# Let's take this Stack Overflow question as an example:
# >How can I stop the iteration of list comprehension when a particular element is found.
# ```python
# # pseudo code
# new_list=[a for a in origin_list break if a==break_elment]
# ```
# Here is the hackish solution:
# +
# https://stackoverflow.com/a/56054962/11263560
origin_list = [1, 2, 3, 3, 4, 3, 5]
break_elment = 3
new_list = [a for end in [[]] for a in origin_list
if not end and not (a == break_elment and end.append(42))]
new_list
# -
# There are many techniques used in this trick:
#
# 1. The key point is building an `end` condition in the list comprehension: skip the remaining elements once `end` is non-empty (not an actual break, but logically equivalent).
# 2. How to initialize a variable (`end`) in a list comprehension? Here is a trick to wrap it in a for list: `for end in [[]]`.
# 3. How to implement branch logic in a list comprehension? Use lazy evaluation technique in `and`/`or` to divide branch logics.
#
# #### Inspiration:
# ```python
# # https://stackoverflow.com/a/55671533/11263560
# # How can I break a list comprehension based on a condition, for instance when the number 412 is found?
# # requirement in pseudo code
# even = [n for n in numbers if 0 == n % 2 and break if n == 412]
#
# """use end condition"""
# even = [n for end in [[]] for n in numbers
# if (False if end or n != 412 else end.append(42))
# or not end and not n % 2]
#
# # https://stackoverflow.com/q/55646039/11263560
# """use push & pop to record last pair"""
# res = [last.pop() and last.append(b) or b for last in [[desired_list[0]]] for a, b in
# zip([desired_list[0]] + desired_list, desired_list) if abs(a[1] - b[1]) <= 5 and a == last[0]]
#
# """use end condition"""
# res = [b for end in [[]] for a, b in zip([desired_list[0]] + desired_list, desired_list)
# if (False if end or abs(a[1] - b[1]) <= 5 else end.append(42)) or not end and abs(a[1] - b[1]) <= 5]
# ```
# ## Integrated by Fractions <a name="anchor10"></a>
# Integrate several dimensions (row, column, diagonal, anti-diagonal) into one dictionary by offsetting each index with a distinct fraction.
# #### Inspiration:
# ```python
# # L562: given a 01 matrix, find the longest line of consecutive one. the line could be horizontal, vertical, diagonal or anti-diagonal.
# def longest_line(matrix: List[List[int]]) -> int:
# max_len = 0
# cur_len = defaultdict(int)
# for i, row in enumerate(matrix):
# for j, v in enumerate(row):
# """merge row, col, analog, anti-analog into one dict by fractions"""
# for key in i, j + .1, i + j + .2, i - j + .3: # analog: i+j, anti-analog: i-j
# cur_len[key] = (cur_len[key] + 1) * v # accumulate util v turn to zero
# max_len = max(max_len, cur_len[key])
# return max_len
# ```
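# The sketch above runs as written once its imports are added; here is a self-contained version with a small check (the test matrix is illustrative):

```python
from collections import defaultdict
from typing import List

def longest_line(matrix: List[List[int]]) -> int:
    max_len = 0
    cur_len = defaultdict(int)
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            # one fractional offset per direction keeps all four counters in one dict:
            # row: i, column: j+.1, anti-diagonal (i+j constant): +.2, diagonal (i-j constant): +.3
            for key in i, j + .1, i + j + .2, i - j + .3:
                cur_len[key] = (cur_len[key] + 1) * v  # run length resets whenever v is 0
                max_len = max(max_len, cur_len[key])
    return max_len

print(longest_line([[0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [0, 0, 0, 1]]))  # 3 (the diagonal of ones)
```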
# ## Complex Number in Matrix <a name="anchor11"></a>
# Use complex numbers as a one-dimensional representation of 2D coordinates, and visit the 4 directions by multiplying by powers of the imaginary unit.
# #### Inspirations:
# ```python
# """simplify two-dimension index into one-dimension by complex number"""
# # traverse neighbors in matrix
# def traverse_neighbor_by_complex(matrix: List[List[int]]) -> None:
# matrix = {i + 1j * j: v for i, row in enumerate(matrix) for j, v in enumerate(row)}
# for z in matrix:
# """visit 4-directional neighbor by complex calculation"""
# for k in range(4):
# process_neighbor_logic(matrix.get(z + 1j ** k))
#
# # L657: given a sequence of robot 4-directional moves "LRUD", judge if this robot ends up at (0, 0) after it completes its moves.
# def judge_circle(moves: str) -> bool:
# """D: 1j**-1=-1j, R: 1j**0=1+0j, U: 1j**1=1j, L: 1j**2=-1+0j, result in D+U=0 and L+R=0"""
# return not sum(1j**'RUL'.find(m) for m in moves)
#
# """use complex number to represent island(simplify 2d -> 1d and turn into a sparse representation)"""
# # L695: an island is a group of 1's connected 4-directionally, find the maximum area of an island in the given 2D array.
# def max_area_of_island(grid: List[List[int]]) -> int:
# grid = {i + j * 1j: val for i, row in enumerate(grid) for j, val in enumerate(row)}
# """calculate the area of paricular island by visit neigher complex calculat"""
# def area(z):
# return grid.pop(z, 0) and 1 + sum(area(z + 1j ** k) for k in range(4))
# return max(map(area, set(grid)))
# ```
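# Both sketches run once imports are added. Here is a self-contained check of `judge_circle` and `max_area_of_island` (CPython evaluates small integer powers of `1j` by exact repeated multiplication, so the sums here compare cleanly against zero):

```python
from typing import List

def judge_circle(moves: str) -> bool:
    # D: 1j**-1 = -1j, R: 1j**0 = 1, U: 1j**1 = 1j, L: 1j**2 = -1
    return not sum(1j ** 'RUL'.find(m) for m in moves)

def max_area_of_island(grid: List[List[int]]) -> int:
    grid = {i + j * 1j: v for i, row in enumerate(grid) for j, v in enumerate(row)}
    def area(z):
        # pop each visited cell so it is counted once, then recurse into the 4 neighbors
        return grid.pop(z, 0) and 1 + sum(area(z + 1j ** k) for k in range(4))
    return max(map(area, set(grid)))

print(judge_circle("UDLR"))  # True
print(max_area_of_island([[1, 1, 0],
                          [0, 1, 0],
                          [0, 0, 1]]))  # 3
```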
# ***
| black_magics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import h5py
import json
import sys
sys.path.append('F:/Linux')
# sys.path.append("C:/Users/qq651/OneDrive/Codes/A2project/")
import illustris_python as il
import matplotlib.pyplot as plt
# from plotTools.plot import *
def Flatness(MassTensor):
    if isinstance(MassTensor, int):
        return 0
    # c/a = (M1)**0.5 / (M3)**0.5, with eigenvalues sorted ascending
    return np.sqrt(MassTensor[0]) / np.sqrt(MassTensor[2])
def BtoA(MassTensor):
    if isinstance(MassTensor, int):
        return 0
    # ratio of the two smallest eigenvalues: (M1)**0.5 / (M2)**0.5
    return np.sqrt(MassTensor[0]) / np.sqrt(MassTensor[1])
def LoadMergHist(simu, subhaloID):
'''
return subhalo's main progenitor and merger history with snapshot
'''
if simu == 'TNG':
ldir = 'f:/Linux/localRUN/tng_DiskMerTree/%d.json' % subhaloID
else:
ldir = 'f:/Linux/localRUN/il1_DiskMerTree/%d.json' % subhaloID
with open(ldir) as f:
data = json.load(f)
Main = np.array(data['Main'])
return dict(zip(Main[:, 0], Main[:, 1])), np.array(data['Mergers'])
def MassTensorEigenVals(coor, mas, half_r):
'''
Return eigenvalues of the mass tensor, sorted by M1 < M2 < M3
'''
r = coor - coor[0]
r[r > 37500] -= 75000
r[r < -37500] += 75000
dis = np.linalg.norm(r, axis=1)
inside = dis < (half_r * 2)
r = r[inside]
mas = mas[inside]
M_x = ((mas * (r[:, 0]/0.6774)**2).sum())**0.5 / mas.sum()**0.5
M_y = ((mas * (r[:, 1]/0.6774)**2).sum())**0.5 / mas.sum()**0.5
M_z = ((mas * (r[:, 2]/0.6774)**2).sum())**0.5 / mas.sum()**0.5
M = np.array([M_x, M_y, M_z])
M.sort()
return M
def ErrorBarMedian(data):
#return 25%, 50%, 75%
if len(data) == 0:
return 0, 0, 0
elif len(data) < 3:
return 0, np.median(data), 0
else:
data.sort()
return data[int(len(data) / 4)], np.median(data), data[int(len(data) * 0.75)]
# -
# Note: this notebook uses this local Y_rawdata instead of plotTools.plot.Y_rawdata
def Y_rawdata(data, snapnum):
plotdata = [[], [], []]
for i in range(snapnum):
d0, d1, d2 = ErrorBarMedian(data[:, i])
plotdata[0].append(d0)
plotdata[1].append(d1)
plotdata[2].append(d2)
plotdata = np.array(plotdata)
Err = np.vstack((plotdata[1,:] - plotdata[0,:], plotdata[2,:] - plotdata[1,:]))
return plotdata[1, :], Err
# +
# path_99 = 'f:/Linux/data/TNG/cutoff/disk_99'
il1_barID = np.load('f:/Linux/localRUN/barredID_il1.npy', allow_pickle=1)
il1_diskID = np.load('f:/Linux/localRUN/diskID_il1.npy', allow_pickle=1)
il1_MTE = np.load('f:/Linux/localRUN/MTE_il1.npy', allow_pickle=1).item()
tng_barID = np.load('f:/Linux/localRUN/barredID_4WP_TNG.npy', allow_pickle=1)
tng_diskID = np.load('f:/Linux/localRUN/diskID_4WP.npy', allow_pickle=1)
tng_MTE = np.load('f:/Linux/localRUN/MTE_TNGdisk_4WP.npy', allow_pickle=1).item()
tng_A2list = np.load('f:/Linux/localRUN/TNG_A2withRedshift.npy', allow_pickle=1).item()
'''
Illustris-1 Snapshot-Redshift:
snap_127 z=0.1
snap_120 z=0.2
snap_113 z=0.3
snap_108 z=0.4
snap_103 z=0.5
snap_95 z=0.7
snap_85 z=1.0
snap_75 z=1.5
snap_68 z=2.0
SnapList = [135, 127, 120, 113, 108, 103, 95, 85, 75, 68]
RedShift = [0, 0.1 , 0.2, 0.3, 0.4, 0.5, 0.7, 1.0, 1.5, 2.0]
TNG Snapshot-Redshift:
snap_91 z=0.1
snap_84 z=0.2
snap_78 z=0.3
snap_72 z=0.4
snap_67 z=0.5
snap_59 z=0.7
snap_50 z=1.0
snap_40 z=1.5
snap_33 z=2.0
'''
il1_snapshot = [135, 127, 120, 113, 108, 103, 95, 85, 75, 68]
tng_snapshot = [99, 91, 84, 78, 72, 67, 59, 50, 40, 33]
RedShift = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1.0, 1.5, 2.0]
RedShift = np.array(RedShift)
dolist = [0, 2, 5, 6, 7, 8, 9]
rs = RedShift[dolist]
# +
tng_nobar = []
for i in tng_diskID:
if i not in tng_barID:
tng_nobar.append(i)
il1_nobar = []
for i in il1_diskID:
if i not in il1_barID:
il1_nobar.append(i)
# -
count=0
for haloID in il1_diskID:
mte = il1_MTE[haloID]
for i in range(10):
if type(mte[i]) is int:
count+=1
print(count)
# +
tng_ca_bar = []
tng_ba_bar = []
tng_ca_nobar = []
tng_ba_nobar = []
for haloID in tng_diskID:
mte = tng_MTE[haloID]
ca = []
ba = []
isdata = True
for i in dolist:
if type(mte[i]) is int:
isdata = False
break
ca.append(Flatness(mte[i]))
ba.append(BtoA(mte[i]))
if isdata:
if haloID in tng_barID:
tng_ca_bar.append(ca)
tng_ba_bar.append(ba)
else:
tng_ca_nobar.append(ca)
tng_ba_nobar.append(ba)
tng_ca_bar = np.array(tng_ca_bar)
tng_ba_bar = np.array(tng_ba_bar)
tng_ca_nobar = np.array(tng_ca_nobar)
tng_ba_nobar = np.array(tng_ba_nobar)
# +
il1_ca_bar = []
il1_ba_bar = []
il1_ca_nobar = []
il1_ba_nobar = []
count = 0
b = 0
nb= 0
for haloID in il1_diskID:
mte = il1_MTE[haloID]
ca = []
ba = []
isdata = True
for i in dolist:
if type(mte[i]) is int:
isdata = False
break
ca.append(Flatness(mte[i]))
ba.append(BtoA(mte[i]))
if isdata:
count+=1
if haloID in il1_barID:
b+=1
il1_ca_bar.append(ca)
il1_ba_bar.append(ba)
else:
nb+=1
il1_ca_nobar.append(ca)
il1_ba_nobar.append(ba)
il1_ba_bar = np.array(il1_ba_bar)
il1_ca_bar = np.array(il1_ca_bar)
il1_ba_nobar = np.array(il1_ba_nobar)
il1_ca_nobar = np.array(il1_ca_nobar)
print(b,nb)
# +
tng_ca_Y_bar, tng_b_err = Y_rawdata(tng_ca_bar, len(rs))
tng_ca_Y_nobar, tng_nb_err = Y_rawdata(tng_ca_nobar, len(rs))
il1_ca_Y_bar, il1_b_err = Y_rawdata(il1_ca_bar, len(rs))
il1_ca_Y_nobar, il1_nb_err = Y_rawdata(il1_ca_nobar, len(rs))
plt.figure(figsize=(8,8))
plt.errorbar(rs-0.012, tng_ca_Y_bar, yerr=tng_b_err, elinewidth=2, capthick=2, capsize=3, color='r', fmt='o', ls='-', label='TNG barred')
plt.errorbar(rs-0.012, tng_ca_Y_nobar, yerr=tng_nb_err, elinewidth=2, capthick=2, capsize=3, color='orange', fmt='^', ls='-.', label='TNG no bar')
plt.errorbar(rs+0.012, il1_ca_Y_bar, yerr=il1_b_err, elinewidth=2, capthick=2, capsize=3, color='blue', fmt='o', ls='-', label='Illustris-1 barred')
plt.errorbar(rs+0.012, il1_ca_Y_nobar, yerr=il1_nb_err, elinewidth=2, capthick=2, capsize=3, color='c', fmt='^', ls='-.', label='Illustris-1 no bar')
plt.ylim(0.85, 1)
plt.xticks(rs)
plt.xlabel('Z', fontsize=20)
plt.ylabel('c/a', fontsize=22)
plt.tick_params(labelsize=14)
plt.legend()
plt.savefig('f:/Linux/local_result/AxisRatio/CA_err.pdf')
# +
tng_ba_Y_bar, tng_b_err = Y_rawdata(tng_ba_bar, len(rs))
tng_ba_Y_nobar, tng_nb_err = Y_rawdata(tng_ba_nobar, len(rs))
il1_ba_Y_bar, il1_b_err = Y_rawdata(il1_ba_bar, len(rs))
il1_ba_Y_nobar, il1_nb_err = Y_rawdata(il1_ba_nobar, len(rs))
plt.figure(figsize=(8,8))
plt.errorbar(rs-0.012, tng_ba_Y_bar, yerr=tng_b_err, elinewidth=2, capthick=2, capsize=3, color='r', fmt='o', ls='-', label='TNG barred')
plt.errorbar(rs-0.012, tng_ba_Y_nobar, yerr=tng_nb_err, elinewidth=2, capthick=2, capsize=3, color='orange', fmt='^', ls='-.', label='TNG no bar')
plt.errorbar(rs+0.012, il1_ba_Y_bar, yerr=il1_b_err, elinewidth=2, capthick=2, capsize=3, color='blue', fmt='o', ls='-', label='Illustris-1 barred')
plt.errorbar(rs+0.012, il1_ba_Y_nobar, yerr=il1_nb_err, elinewidth=2, capthick=2, capsize=3, color='c', fmt='^', ls='-.', label='Illustris-1 no bar')
plt.ylim(0.85, 1)
plt.xticks(RedShift[dolist])
plt.xlabel('Z', fontsize=20)
plt.ylabel('b/a', fontsize=22)
plt.tick_params(labelsize=14)
plt.legend()
plt.savefig('f:/Linux/local_result/AxisRatio/BA_err.pdf')
# -
| JpytrNb/pdfPlot/.ipynb_checkpoints/AxisRatio-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
data = pd.read_csv("Bankruptcy_data.csv")
data.describe()
data.shape
from sklearn import preprocessing
from sklearn.impute import SimpleImputer  # Imputer was removed from sklearn.preprocessing in scikit-learn 0.22
impute = SimpleImputer()
df = pd.DataFrame(impute.fit_transform(data))
df.isnull().sum()
X = df.iloc[:,:-1]
X.describe()
Y = df.iloc[:,-1:]
Y.describe()
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_model = log_reg.fit(X,np.ravel(Y))
log_model.coef_
log_model.intercept_
y_pred = log_model.predict(X)
y_pred
log_model.score(X,Y)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y, y_pred)  # avoid shadowing the imported function
cm
from sklearn.metrics import classification_report
print(classification_report(Y, y_pred))  # print renders the newlines in the report
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
logit_roc_auc = roc_auc_score(Y,log_model.predict(X))
fpr, tpr, thresholds = roc_curve(Y, log_model.predict_proba(X)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="upper left")
plt.savefig('Log_ROC')
plt.show()
| logistic-regression-lab/Logistic Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multiclass Text Classification
import numpy as np
from sklearn.datasets import fetch_20newsgroups
import pandas as pd
import re
dataset = fetch_20newsgroups()
features = dataset.data
target = dataset.target
dataset = pd.DataFrame({'features':features})
# +
# Lowering
# -
dataset['features'] = dataset['features'].apply(lambda x: x.lower())
# +
# removing characters which are not alphanumeric
# -
dataset['features'] = dataset['features'].apply(lambda x: re.sub(r'[^\w\s]+', ' ',x))
# +
# Removing stopwords
# -
from nltk.corpus import stopwords
stp = stopwords.words('english')
dataset['features'] = dataset['features'].apply(lambda x:' '.join([word for word in x.split() if word not in stp]))
# +
# Stemming
# -
from nltk.stem import SnowballStemmer
stemmer = SnowballStemmer('english')
dataset['features'] = dataset['features'].apply(lambda x: ' '.join([stemmer.stem(word) for word in x.split()]))
# +
# Lemmatization
# -
from textblob import Word
dataset['features'] = dataset['features'].apply(lambda x: ' '.join([Word(word).lemmatize() for word in x.split()]))
# # Text to features
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
features = tfidf.fit_transform(dataset['features'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features,target,test_size=0.1,shuffle=True)
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier()
sgd.fit(X_train,y_train)
y_pred = sgd.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred)
from sklearn.linear_model import PassiveAggressiveClassifier
pac = PassiveAggressiveClassifier()
# +
pac.fit(X_train,y_train)
y_pred = pac.predict(X_test)
accuracy_score(y_test,y_pred)
# -
# # Carrying out sentiment analysis
path = r"C:\Users\surie\Books to notebooks\Apress NLP Recipes\Reviews.csv\Reviews.csv"
dataset = pd.read_csv(path)
dataset = dataset[['Text', 'Score']]
dataset = dataset[:10000]
# +
# Remove punctuation and lowercase
# -
dataset['Text'] = dataset['Text'].apply(lambda x: x.lower())
dataset['Text'] = dataset['Text'].apply(lambda x: re.sub(r'[^\w\s]', ' ', x))
# +
# remove stopwords
# -
import nltk
stp = nltk.corpus.stopwords.words('english')
dataset['Text'] = dataset['Text'].apply(lambda x: ' '.join([word for word in x.split() if word not in stp]))
# +
# Stemming
# -
from nltk.stem import PorterStemmer
stmr = PorterStemmer()
dataset['Text'] = dataset['Text'].apply(lambda x: ' '.join([stmr.stem(word) for word in x.split()]))
# +
# See Score Distribution
# -
dataset.hist(bins=10)
# +
# Need to sample dataset
# -
dataset['Score'].value_counts()
num_samples = 6183
score1 = dataset[dataset['Score'] == 1].sample(num_samples,replace=True)
score2 = dataset[dataset['Score'] == 2].sample(num_samples,replace=True)
score3 = dataset[dataset['Score'] == 3].sample(num_samples,replace=True)
score4 = dataset[dataset['Score'] == 4].sample(num_samples,replace=True)
score5 = dataset[dataset['Score'] == 5].sample(num_samples,replace=True)
dataset = pd.concat([score1,score2,score3,score4,score5],axis=0)
from sklearn.utils import shuffle
dataset = shuffle(dataset)
# +
# They are using the VADER sentiment lexicon below
# #!pip install vaderSentiment
# -
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
dataset['Text'][:1].tolist()
analyzer.polarity_scores(dataset['Text'][:1].tolist())
# # Summarizing Text Data Using TextRank
from gensim.summarization.summarizer import summarize
from gensim.summarization import keywords
text = '''Natural language processing - Wikipedia.
Natural language processing (NLP) is an area of computer
science and artificial intelligence concerned with the
interactions between computers and human (natural) languages,
in particular how to program computers to process and analyze
large amounts of natural language data.
Challenges in natural language processing frequently involve
speech recognition, natural language understanding, and natural
language generation.
The history of natural language processing generally started
in the 1950s, although work can be found from earlier periods.
In 1950, <NAME> published an article titled "Intelligence"
which proposed what is now called the Turing test as a
criterion of intelligence.
The Georgetown experiment in 1954 involved fully automatic
translation of more than sixty Russian sentences into English.
The authors claimed that within three or five years, machine
translation would be a solved problem.[2] However, real
progress was '''
summarize(str(text),ratio=0.2)
| Natural Language Processing Recipes/Chapter 5 Implementing Industrial Applications.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <b>Ευριπίδης Παντελαίος - 1115201600124 </b>
# +
import pandas as pd
import numpy as np
import scipy
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer,CountVectorizer
from sklearn import svm, datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.metrics import accuracy_score, f1_score
from nltk.stem import WordNetLemmatizer
# +
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
pd.options.display.max_colwidth = None
# -
# <br><b>Some useful functions </b><br>
# <b> 1) Cleaning</b><br>
# <b> 2) Lemmatization</b><br>
# <b> 3) Remove stop words </b><br>
# <b> 4) Part-of-Speech Tag</b><br>
#clean data and remove symbols, urls, unnecessary words
def cleanData(comments):
StoredComments = []
for line in comments:
line = line.lower()
#replace some words, symbols and letters that appear frequently and are useless
line = line.replace('-', '')
line = line.replace('_', '')
line = line.replace('0', '')
line = line.replace("\n", '')
line = line.replace("\\", '')
line = line.replace('XD', '')
line = line.replace('..', '')
        line = line.replace('  ', ' ')  # collapse double spaces
line = line.replace('https', '')
line = line.replace('http', '')
removeList = ['@', r'\x', '\\', 'corrup', '^', '#', '$', '%', '&']
#for line in comments:
words = ' '.join([word for word in line.split() if not any([phrase in word for phrase in removeList]) ])
StoredComments.append(words)
return StoredComments
#lemmatize the comments
def lemmatizer (comments):
lemma = WordNetLemmatizer()
StoredComments = []
for line in comments:
line = ' '.join([lemma.lemmatize(w) for w in nltk.word_tokenize(line)])
StoredComments.append(line)
return StoredComments
#remove stop words
def removeStopWords (comments):
StoredComments=[]
for line in comments:
line = ' '.join([w for w in nltk.word_tokenize(line) if w not in stop_words])
StoredComments.append(line)
return StoredComments
#calculate Pos tags and the frequency of them
def posTag(comments):
adjectiveFrequency=[]
adverbFrequency=[]
nounFrequency=[]
verbFrequency=[]
for comment in comments:
adjectiveCounter=0
adverbCounter=0
nounCounter=0
verbCounter=0
#Pos tagging the words
words = nltk.word_tokenize(comment)
words = nltk.pos_tag(words)
cnt = len(words)
for word in words:
            if(word[1][:2] == 'NN'):    # Penn Treebank noun tags start with NN
                nounCounter = nounCounter+1
            elif(word[1][:2] == 'VB'):  # verb tags start with VB
                verbCounter = verbCounter+1
            elif(word[1][:2] == 'RB'):  # adverb tags start with RB
                adverbCounter = adverbCounter+1
            elif(word[1][:2] == 'JJ'):  # adjective tags start with JJ
                adjectiveCounter = adjectiveCounter+1
#not divide with zero
if(cnt!=0): #calculate the frequency of each tag
nounFrequency.append(nounCounter/cnt)
verbFrequency.append(verbCounter/cnt)
adverbFrequency.append(adverbCounter/cnt)
adjectiveFrequency.append(adjectiveCounter/cnt)
else:
nounFrequency.append(0)
verbFrequency.append(0)
adverbFrequency.append(0)
adjectiveFrequency.append(0)
return nounFrequency, verbFrequency, adverbFrequency, adjectiveFrequency
# <br><br><b> Read csv files for train and test set and cleaning the data</b>
# +
trainSet = pd.read_csv("data/train.csv")
testSet = pd.read_csv("data/impermium_verification_labels.csv")  # I don't use the file 'impermium_verification_set.csv' at all,
                                                                 # because the other file, 'impermium_verification_labels.csv',
                                                                 # fully covers the requirements of the exercise.
#Cleaning the data and test set
trainSet['Comment'] = cleanData(trainSet['Comment'])
testSet['Comment'] = cleanData(testSet['Comment'])
# -
# <br><b>Train the train data with Bag of Words </b>
# +
countVectorizer = CountVectorizer()
BagOfWordsTrain = countVectorizer.fit_transform(trainSet['Comment'].values)
BagOfWordsTrainArray = BagOfWordsTrain.toarray()
# -
# <br><b>Train the test data with Bag of Words </b>
BagOfWordsTest = countVectorizer.transform(testSet['Comment'].values)
BagOfWordsTestArray = BagOfWordsTest.toarray()
# <br><br><b> Gaussian Naive Bayes classifier </b>
# +
classifierNB = GaussianNB()
classifierNB.fit(BagOfWordsTrainArray, trainSet['Insult'])
BoWprediction = classifierNB.predict(BagOfWordsTestArray)
y_test = testSet['Insult']
# -
# <br><br><b> Gaussian Naive Bayes Scores</b>
print ('Accuracy Score:', accuracy_score(y_test, BoWprediction))
print('F1 Score:', f1_score(y_test, BoWprediction))
# <br><br><b> Now I am doing 4 optimizations for Naive Bayes (Lemmatization, Remove stop words, Bigrams, Laplace Smoothing)</b>
# <b> 1) Lemmatization</b>
trainSet['commentLemmatization'] = lemmatizer(trainSet['Comment'])
testSet['commentLemmatization'] = lemmatizer(testSet['Comment'])
# +
lemmatizationTrain = countVectorizer.fit_transform(trainSet['commentLemmatization'])
lemmatizationTrainArray = lemmatizationTrain.toarray()
lemmatizationTest = countVectorizer.transform(testSet['commentLemmatization'])
lemmatizationTestArray = lemmatizationTest.toarray()
classifierNB.fit(lemmatizationTrainArray, trainSet['Insult'])
lemmatizationPredict = classifierNB.predict(lemmatizationTestArray)
print('Accuracy Score:', accuracy_score(y_test, lemmatizationPredict))
print('F1 Score:', f1_score(y_test, lemmatizationPredict))
# -
# <br><b>2) Remove stop words </b>
trainSet['commentStopWords'] = removeStopWords(trainSet['Comment'])
testSet['commentStopWords'] = removeStopWords(testSet['Comment'])
# +
stopWordsTrain = countVectorizer.fit_transform(trainSet['commentStopWords'])
stopWordsTrainArray = stopWordsTrain.toarray()
stopWordsTest = countVectorizer.transform(testSet['commentStopWords'])
stopWordsTestArray = stopWordsTest.toarray()
classifierNB.fit(stopWordsTrainArray,trainSet['Insult'])
stopWordPredict = classifierNB.predict(stopWordsTestArray)
print ('Accuracy Score:', accuracy_score(y_test, stopWordPredict))
print('F1 Score:', f1_score(y_test, stopWordPredict))
# -
# <br><b> 3) Bigrams</b>
# +
bigramVectorizer = CountVectorizer(ngram_range=(2,2))
bigramTrain = bigramVectorizer.fit_transform(trainSet['Comment'])
bigramTrainArray = bigramTrain.toarray()
bigramTest= bigramVectorizer.transform(testSet['Comment'])
bigramTestArray = bigramTest.toarray()
classifierNB.fit(bigramTrainArray,trainSet['Insult'])
bigramPredict = classifierNB.predict(bigramTestArray)
print ('Accuracy Score:', accuracy_score(y_test, bigramPredict))
print('F1 Score:', f1_score(y_test, bigramPredict))
# -
# <br><b> 4) Laplace Smoothing</b>
# +
classifierMultinomialNB = MultinomialNB(alpha=1.0)
classifierMultinomialNB.fit(BagOfWordsTrainArray,trainSet['Insult'])
laplacePredict = classifierMultinomialNB.predict(BagOfWordsTestArray)
print ('Accuracy Score:', accuracy_score(y_test, laplacePredict))
print('F1 Score:', f1_score(y_test, laplacePredict))
# -
# <br><br> <b>Tf-idf Vectorizer </b> <br>
# +
TfIdf = TfidfVectorizer()
TfIdfTrain = TfIdf.fit_transform(trainSet['Comment'])
TfIdfTest = TfIdf.transform(testSet['Comment'])
# -
# <br><br> <b>Part-of-Speech features for Train set </b><br>
#
NounTrain, VerbTrain, AdverbTrain, AdjectiveTrain = posTag(trainSet['Comment'])  # posTag returns noun, verb, adverb, adjective frequencies, in that order
# <br><b>Append tf-idf and Part-of-Speech features for train set</b><br>
# +
posTrainVectorizer = scipy.sparse.hstack((TfIdfTrain, scipy.sparse.csr_matrix(NounTrain).T))
posTrainVectorizer = scipy.sparse.hstack((posTrainVectorizer, scipy.sparse.csr_matrix(AdjectiveTrain).T))
posTrainVectorizer = scipy.sparse.hstack((posTrainVectorizer, scipy.sparse.csr_matrix(AdverbTrain).T))
posTrainVectorizer = scipy.sparse.hstack((posTrainVectorizer, scipy.sparse.csr_matrix(VerbTrain).T))
# -
# <br><br><b>Part-of-Speech features for Test set </b>
NounTest, VerbTest, AdverbTest, AdjectiveTest = posTag(testSet['Comment'])
# <br><b>Append tf-idf and Part-of-Speech features for test set</b>
# +
posTestVectorizer = scipy.sparse.hstack((TfIdfTest, scipy.sparse.csr_matrix(NounTest).T))
posTestVectorizer = scipy.sparse.hstack((posTestVectorizer, scipy.sparse.csr_matrix(AdjectiveTest).T))
posTestVectorizer = scipy.sparse.hstack((posTestVectorizer, scipy.sparse.csr_matrix(AdverbTest).T))
posTestVectorizer = scipy.sparse.hstack((posTestVectorizer, scipy.sparse.csr_matrix(VerbTest).T))
# -
#
# <br><b> Test score for Tf-idf PoS model</b>
# +
classifierMultinomialNB.fit(posTrainVectorizer, trainSet['Insult'])
posVectorizerPredict = classifierMultinomialNB.predict(posTestVectorizer)
print('Accuracy Score:', accuracy_score(y_test, posVectorizerPredict))
print('F1 Score:', f1_score(y_test, posVectorizerPredict))
# -
# <br><br><b>SVM </b>
svc = svm.SVC(kernel='linear', C=1.0, gamma=0.9)
# +
svc.fit(posTrainVectorizer,trainSet['Insult'])
posVectorizerSVM = svc.predict(posTestVectorizer)
print ('Accuracy Score:', accuracy_score(y_test, posVectorizerSVM))
print ('Test F1:', f1_score(y_test, posVectorizerSVM))
# -
# <br><br><b> Random Decision Forest</b>
# +
randomDecisionForest = RandomForestClassifier(n_estimators = 150)
randomDecisionForest.fit(posTrainVectorizer, trainSet['Insult'])
posVectorizerRandomForest = randomDecisionForest.predict(posTestVectorizer)
print ('Accuracy Score:', accuracy_score(y_test, posVectorizerRandomForest))
print ('Test F1:', f1_score(y_test, posVectorizerRandomForest))
# -
# <br><br><b> Beat the benchmark with proper data processing with lemmatization, remove stop words and using Tf-idf and SVM</b>
# +
#I couldn't improve the scores much,
#as there are many slang words and expressions that are impossible to classify
#as offensive or not, even with modern improved algorithms.
#If the dataset were labeled more accurately, I could produce better results.
TfIdf = TfidfVectorizer(ngram_range=(1, 2))
trainSet['commentLemmatization'] = removeStopWords(trainSet['commentLemmatization'])
testSet['commentLemmatization'] = removeStopWords(testSet['commentLemmatization'])
TfIdfTrain = TfIdf.fit_transform(trainSet['commentLemmatization'])
TfIdfTest = TfIdf.transform(testSet['commentLemmatization'])
svc.fit(TfIdfTrain,trainSet['Insult'])
TfIdfPredict = svc.predict(TfIdfTest)
print ('Accuracy Score:', accuracy_score(y_test, TfIdfPredict))
print ('F1 Score:', f1_score(y_test, TfIdfPredict))
| classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_pytorch131)
# language: python
# name: conda_pytorch131
# ---
#hide
from your_lib.core import *
# # fastai implementations of Image classification
#
# > How to use fastai for super resolution.
from fastai.vision import *
from fastai.metrics import error_rate
from fastai.callbacks import *
URLs
| nbs/index.ipynb |
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .scm
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: MIT Scheme
;; language: scheme
;; name: mit-scheme
;; ---
;; + deletable=true editable=true
;; recursive process
;;
;; we need from the start to the end of the process to build a big sum
;; which we reduce before returning the result, so a call like (factorial 4)
;; will recurse until it produces the following calculation:
;;
;; (* 4 (* 3 (* 2 1)))
(define (factorial n)
(if (= n 1)
1
(* n (factorial (- n 1)))))
(factorial 6)
;; + deletable=true editable=true
;; iterative process
;;
;; note that even though the procedure itself is recursive, we call
;; the process iterative - the state of the calculation is captured
;; in the arguments to the "fact-iter" function, and can be cached
;; or resumed to produce the desired result.
(define (factorial n)
(fact-iter 1 1 n))
(define (fact-iter product counter max-count)
(if (> counter max-count)
product
(fact-iter (* product counter)
(+ counter 1)
max-count)))
(factorial 6)
;; + deletable=true editable=true
;; Ackermann's function from SICP exercise 1.10, used by the calls below
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))
(A 1 10)
;; + deletable=true editable=true
(A 2 4)
;; + deletable=true editable=true
(A 3 3)
;; + deletable=true editable=true
(define (h n) (A 2 n))
(h 0)
;; +
;; fibonacci sequence
;; bad implementation, for obvious reasons
(define (fib-bad n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib-bad (- n 1))
                 (fib-bad (- n 2))))))
;; better implementation
(define (fib-iter a b count)
(if (= 0 count)
a
(fib-iter b (+ a b) (- count 1))))
(define (fib-better n)
(fib-iter 0 1 n))
| Chapter 1 - Building Abstractions With Procedures/1.2 - Procedures and the Processes They Generate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text classification
#
# You have a lot of options when trying to classify text. In this guide I'll demonstrate a number of techniques for classifying text across four datasets.
#
# 1. whether IMDB reviews are positive or negative
# 2. whether a baby name is male or female
# 3. what category a newsgroup post belongs to
# 4. assign one or more labels to Reuters newswire
#
# We'll try a few techniques on these.
#
# * scikit-learn bag-of-words models
# * binary counting (is there word there or not?)
# * interlude: visualizing model parameters
# * tf-idf, lemmatization, and hashing: better than binary
# * word2vec embeddings
# * embeddings trained in situ
# * using large pre-trained embedding models
# * neural networks
# * fasttext-style models
# * short example of recurrent networks
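# Before reaching for the library versions below, it helps to see what the simplest option — binary counting — actually computes. A dependency-free sketch (scikit-learn's `CountVectorizer(binary=True)` is the library equivalent; the helper name is made up):

```python
def binary_vectorize(docs):
    # build a fixed vocabulary from every token seen in the corpus
    vocab = sorted({w for doc in docs for w in doc.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    # one row per document: 1 if the word appears at all, else 0
    rows = []
    for doc in docs:
        row = [0] * len(vocab)
        for w in doc.lower().split():
            row[index[w]] = 1
        rows.append(row)
    return vocab, rows

vocab, X = binary_vectorize(["good good movie", "bad movie"])
print(vocab)  # ['bad', 'good', 'movie']
print(X)      # [[0, 1, 1], [1, 0, 1]]
```

# Note that repetition is ignored: "good good movie" produces the same row as "good movie", which is exactly the information tf-idf adds back later.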
# +
# load various models from scikit-learn's library
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
# scikit-learn preprocessing
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
# also get some metrics to try
from sklearn.metrics import accuracy_score
# get data from scikit-learn
from sklearn.datasets import fetch_20newsgroups
# vectorizing text with scikit-learn
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer, TfidfVectorizer
# a library for visualizing text classification results
import eli5
# regular expressions in python
import re
# the numpy library for dealing with arrays
import numpy as np
# gensim is a word embedding library
from gensim.models.word2vec import Word2Vec, LineSentence
from gensim.models import KeyedVectors
# keras is an easy-to-use neural network library
from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers import Embedding, Dense, LSTM, GlobalAveragePooling1D
from keras.optimizers import Adam
from keras.datasets import imdb
from keras.preprocessing import sequence
# the natural language toolkit provides data and some nlp processing
import nltk
from nltk.corpus import reuters
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer
# for timing python code
import timeit
# quickly running a count on a python list
from collections import Counter
# -
# ## Getting data
#
# ### IMDB reviews sentiment analysis
#
# This is a neural network ready dataset from [Keras](https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification). The words in the dataset have already been converted into integer IDs, so you can't easily have a look at what's in there.
#
# This is a sentiment analysis or polarity dataset, which means that the target labels are positive or negative. It's a relatively simpler task for a ML model to solve.
#
# I'll be using dictionaries to store my data. After I've grabbed the data from keras, I join the integer IDs with spaces to make text for scikit-learn: scikit-learn's vectorizers expect strings.
# +
imdb_data = {"name" : "imdb", "ovr" : False}
(a, b), (c, d) = imdb.load_data(num_words=50000)
imdb_data["X_train_ids"], imdb_data["y_train"], imdb_data["X_test_ids"], imdb_data["y_test"] = a, b, c, d
# scikit-learn's vectorizers expect strings, so join the integer IDs with spaces
imdb_data["X_train"] = [" ".join([str(x) for x in line]) for line in imdb_data["X_train_ids"]]
imdb_data["X_test"] = [" ".join([str(x) for x in line]) for line in imdb_data["X_test_ids"]]
# -
# Here's a look at what we're dealing with.
# +
imdb_data["train_size"], imdb_data["test_size"] = len(imdb_data["X_train"]), len(imdb_data["X_test"])
imdb_data["avg_length"] = sum([len(i) for i in imdb_data["X_train_ids"]])/len(imdb_data["X_train_ids"])
imdb_data["vocab_size"] = len(set([i for j in imdb_data["X_train_ids"] for i in j]))
print(f"Observations in training data: {imdb_data['train_size']}; test data: {imdb_data['test_size']}")
print(f"Min number of words per line in training set: {min([len(i) for i in imdb_data['X_train_ids']])}")
print(f"Max number of words per line in training set: {max([len(i) for i in imdb_data['X_train_ids']])}")
print(f"Average number of words per line in training set: {imdb_data['avg_length']}")
print(f"Total vocabulary size: {imdb_data['vocab_size']}")
# -
# ### A lot of baby names
#
# The US government has made available [baby names](https://catalog.data.gov/dataset/baby-names-from-social-security-card-applications-national-level-data) from social security card applications. These records go back to 1880 and also indicate the sex of the baby. I'll be trying to predict which names are male and which are female.
#
# Once you've extracted the files to a folder, the following Python code will join them all into a csv file.
#
# ```
# import os
# import re
#
# with open("babies.csv", "w") as w:
# for f in [f for f in os.listdir(os.getcwd()) if "txt" in str(f)]:
# with open(f) as f:
# year = re.search(r'[\d]{4}', f.name)[0]
# for line in f:
# w.write(year+","+line)
# ```
# What I want to do is sort the names by year, remove all duplicates, and then split older and newer names into the training and test sets, respectively. This way, the test set will tell me if I'm correctly inferring newer names from older names.
#
# Below you can see a sample of a few names.
# +
with open("babies.csv") as f:
baby_list = f.readlines()
# Sort by first 4 characters, the year
baby_list.sort(key=lambda x: x[:4])
print(baby_list[:5])
# -
# What I'm going to do is keep every combination of name and sex in a `set()`. Python sets do not keep duplicates and are very fast at performing `if x in y` operations, making them perfect for this work. With the names properly organized, it's easy to keep 20,000 for the test set.
# +
baby_set = set()
unique_baby_list = []
for baby in baby_list:
if " ".join(baby.split(",")[1:3]) in baby_set:
pass
else:
baby_set.add(" ".join(baby.split(",")[1:3]))
unique_baby_list.append(baby)
# Pick test set
baby_train = unique_baby_list[:-20000]
baby_test = unique_baby_list[-20000:]
baby_data = {"name" : "baby", "ovr" : False}
baby_data["X_train"] = [baby.split(",")[1] for baby in baby_train]
baby_data["X_test"] = [baby.split(",")[1] for baby in baby_test]
baby_data["y_train"] = [baby.split(",")[2] == "M" for baby in baby_train]
baby_data["y_test"] = [baby.split(",")[2] == "M" for baby in baby_test]
# -
# Since the data is split by character, we have a small vocabulary size. Even with the removal of duplicates, we still have 100,000+ records.
# +
baby_data["train_size"], baby_data["test_size"] = len(baby_data["X_train"]), len(baby_data["X_test"])
baby_data["avg_length"] = sum([len(i) for i in baby_data["X_train"]])/len(baby_data["X_train"])
baby_data["vocab_size"] = len(set([i for j in baby_data["X_train"] for i in j]))
print(f"Observations in training data: {baby_data['train_size']}; test data: {len(baby_data['X_test'])}")
print(f"Min number of characters per name in training set: {min([len(i) for i in baby_data['X_train']])}")
print(f"Max number of characters per name in training set: {max([len(i) for i in baby_data['X_train']])}")
print(f"Average number of characters per name in training set: {baby_data['avg_length']}")
print(f"Total character vocabulary size: {baby_data['vocab_size']}")
# -
# ### Newsgroup posts
#
# These are categorized newsgroup posts you can get [from scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) ([user guide](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)). These are rather long and varied texts drawn from around 18,000 posts, each belonging to one of 20 topics. You can read a bit more about the dataset [here](http://qwone.com/~jason/20Newsgroups/).
ng_train_raw = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
ng_test_raw = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))
# Since this is a scikit-learn datasource, there are extras you can play with. For example, the target labels can be accessed this way:
print(ng_train_raw.target_names)
# An example post looks like this. As you can see, these are multiple sentences.
print(ng_train_raw.data[0])
ng_data = {"name" : "newsgroup20", "ovr" : False}
ng_data.update({"X_train" : ng_train_raw.data, "y_train" : ng_train_raw.target})
ng_data.update({"X_test" : ng_test_raw.data, "y_test" : ng_test_raw.target})
# On average these posts are shorter than the imdb reviews; however, there are some monster posts lurking in there.
#
# The total vocabulary size of the newsgroup set is **much higher** than the others.
# +
ng_data["train_size"], ng_data["test_size"] = len(ng_data["X_train"]), len(ng_data["X_test"])
ng_data["avg_length"] = sum([len(i.split(' ')) for i in ng_data["X_train"]])/len(ng_data["X_train"])
ng_data["vocab_size"] = len(set([i.lower() for j in ng_data["X_train"] for i in j.split(" ")]))
print(f"Observations in training data: {len(ng_data['X_train'])}; test data: {len(ng_data['X_test'])}")
print(f"Min number of words per line in training set: {min([len(i.split(' ')) for i in ng_data['X_train']])}")
print(f"Max number of words per line in training set: {max([len(i.split(' ')) for i in ng_data['X_train']])}")
print(f"Average number of words per line in training set: {ng_data['avg_length']}")
print(f"Total vocabulary size: {ng_data['vocab_size']}")
# -
# ### Reuters newswire dataset
#
# The Reuters dataset is a collection of short categorized news stories. I followed [Martin Thoma's blog post](https://martin-thoma.com/nlp-reuters/) to get started.
#
# We're using the nltk version of the dataset, though I'm not sure exactly which variant it is. Our dataset has 14,333 records, while the more popular [reuters-21578](https://archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection) has 21,578. Since that dataset was collected from 1987 newswire texts, I assume the one we're using is similar.
#
# To get a copy of the Reuters data, you have to use `nltk.download("reuters")`.
# +
def load_reuters():
reuters_data = {"name" : "reuters", "ovr" : True}
# The test and train sets are listed as IDs in the .fileids() member
train_ids = list(filter(lambda x: x[:5] == "train", reuters.fileids()))
test_ids = list(filter(lambda x: x[:4] == "test", reuters.fileids()))
reuters_data["X_train"] = list(map(lambda x: reuters.raw(x), train_ids))
reuters_data["X_test"] = list(map(lambda x: reuters.raw(x), test_ids))
# The MultiLabelBinarizer will get you the 1s and 0s your model wants
mlb = MultiLabelBinarizer(sparse_output=True)
reuters_data["y_train"] = mlb.fit_transform(list(map(lambda x: reuters.categories(x), train_ids)))
reuters_data["y_test"] = mlb.transform(list(map(lambda x: reuters.categories(x), test_ids)))
return reuters_data
reuters_data = load_reuters()
# -
# The main challenges with the Reuters dataset are its large number of classes and their multi-label nature. Models have to cope with these news items belonging to more than one category.
print(f"Example observation targets: {reuters.categories('test/14832')}")
print(f"Number of classes: {len(reuters.categories())}")
print(reuters.categories())
# Most of the observations only have one label.
print(f"Min number of target labels: {min([len(reuters.categories(i)) for i in reuters.fileids()])}")
print(f"Max number of target labels: {max([len(reuters.categories(i)) for i in reuters.fileids()])}")
print(f"Average number of target labels per observation: {sum([len(reuters.categories(i)) for i in reuters.fileids()])/len(reuters.fileids())}")
# Although the stats below aren't as high as imdb and newsgroup20, the models will take longer because of the multi-label setup. I use an `ovr` flag to tell scikit-learn to treat this dataset as a one-vs-rest problem.
# +
reuters_data["train_size"], reuters_data["test_size"] = len(reuters_data["X_train"]), len(reuters_data["X_test"])
reuters_data["avg_length"] = sum([len(i.split(' ')) for i in reuters_data["X_train"]])/len(reuters_data["X_train"])
reuters_data["vocab_size"] = len(set([i.lower() for j in reuters_data["X_train"] for i in j.split(" ")]))
print(f"Observations in training data: {reuters_data['train_size']}; test data: {reuters_data['test_size']}")
print(f"Min number of words per line in training set: {min([len(i.split(' ')) for i in reuters_data['X_train']])}")
print(f"Max number of words per line in training set: {max([len(i.split(' ')) for i in reuters_data['X_train']])}")
print(f"Average number of words per line in training set: {reuters_data['avg_length']}")
print(f"Total vocabulary size: {reuters_data['vocab_size']}")
# -
# ## Convenience functions
#
# Whenever we want to train on our datasets, we'll have to pre-process them and then train a bunch of different models on them. To do these things I've written some simple functions.
#
# The `vectorize()` function expects one of scikit-learn's [vectorizers](http://scikit-learn.org/stable/modules/feature_extraction.html) as its first argument, and then vectorizes the two datasets it's given.
def vectorize(vectorizer, x_train, x_test=None):
train_vec = vectorizer.fit_transform(x_train)
    if x_test is not None:
test_vec = vectorizer.transform(x_test)
else:
test_vec = None
return train_vec, test_vec
# The `models_eval()` function iterates over models and datasets, training and evaluating each one. I had originally included more classification metrics, but I found that evaluating test sets so often can take up a lot of time. I'll stick with test accuracy as my main score.
#
# When you wrap a model in the [`OneVsRestClassifier()`](http://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html) function, it'll be re-run for each label separately. This makes training take a lot more time.
def models_eval(models, datasets, train_key="X_train_vec", test_key="X_test_vec"):
for dataset in datasets:
print(f"{dataset['name']:20} train/test {dataset['train_size']}/{dataset['test_size']} total vocab {dataset['vocab_size']}")
print(f"{20*' '}{57*'-'}")
results = []
for name, model in models.items():
if dataset["ovr"]: model = OneVsRestClassifier(model)
timer = timeit.default_timer()
model.fit(dataset[train_key], dataset["y_train"])
train_elapsed = timeit.default_timer() - timer
timer = timeit.default_timer()
train_acc = accuracy_score(y_true = dataset["y_train"], y_pred = model.predict(X=dataset[train_key]))
test_acc = accuracy_score(y_true = dataset["y_test"], y_pred = model.predict(X=dataset[test_key]))
eval_elapsed = timeit.default_timer() - timer
results.append({
"name" : name,
"model" : model,
"train_acc" : train_acc,
"test_acc" : test_acc,
"train_elapsed" : train_elapsed,
"eval_elapsed" : eval_elapsed
})
results.sort(key=lambda x: -x["test_acc"])
for result in results:
print("{:>19} | TRAIN {:5.1f}s | EVAL {:5.1f}s | TRAIN/TEST acc {:4.2f}/{:4.2f} |".format(
result["name"],
result["train_elapsed"],
result["eval_elapsed"],
result["train_acc"],
result["test_acc"]
))
print(20*" "+57*"-")
# ## Text classification with basic vectorization
#
# We will start off our adventure with the easy-to-use [vectorizers](http://scikit-learn.org/stable/modules/feature_extraction.html) in scikit-learn. Without much effort these will give good results, which shows how useful a well-organized library like scikit-learn is.
#
# My choice of models comes down to whatever will run reasonably fast. I learned about LogisticRegression's `C=` parameter from [Martin Thoma's blog post](https://martin-thoma.com/nlp-reuters/). It's a parameter that's easy to miss in the [scikit-learn documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html), but it sometimes gives really nice results.
#
# As I'm starting with word-based vectorization, I won't use the `baby` dataset just yet. I'll need character-level vectorization for that.
# +
list_of_models = {"Logistic" : LogisticRegression(solver="lbfgs", n_jobs = -1),
"Logistic C=1000" : LogisticRegression(solver="lbfgs", n_jobs = -1, C=1000),
"RandomForest 10" : RandomForestClassifier(n_jobs = -1),
"RandomForest 100" : RandomForestClassifier(n_jobs = -1, n_estimators=100),
"RndForest 100 MD25" : RandomForestClassifier(n_jobs = -1, n_estimators=100, max_depth=25),
"DecisionTree" : DecisionTreeClassifier(),
"DecisionTree MD25" : DecisionTreeClassifier(max_depth=25),
"MultinomialNB":MultinomialNB()
}
list_of_datasets = [imdb_data, ng_data, reuters_data]
# -
# ### The simplest bag of words
#
# We'll start with the simplest approach. If a word is present in a document, it gets a 1; otherwise it gets a 0.
#
# Whenever we test for words and flag them to the models, we're using a technique called "bag of words". Even if we're identifying short sequences of words, like the presence of "not good" or "red meat", it's still bag of words, sometimes called bag of n-grams. The common alternative is modelling the sequence of words directly, as a kind of time series.
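# Under the hood the idea is tiny. Here's a from-scratch sketch of binary bag of words (what `CountVectorizer(binary=True)` computes, minus the sparse matrices and extra options):

```python
# A minimal from-scratch sketch of binary bag of words.
docs = ["the movie was not good", "the movie was good"]

# build a sorted vocabulary from all tokens in the corpus
vocab = sorted({word for doc in docs for word in doc.split()})

# one 0/1 vector per document: 1 if the word is present, 0 otherwise
def binary_bow(doc, vocab):
    tokens = set(doc.split())
    return [1 if word in tokens else 0 for word in vocab]

vectors = [binary_bow(doc, vocab) for doc in docs]
print(vocab)       # ['good', 'movie', 'not', 'the', 'was']
print(vectors[0])  # [1, 1, 1, 1, 1]
print(vectors[1])  # [1, 1, 0, 1, 1]
```

# Notice the two documents differ only in the "not" slot, which is exactly why sentiment models benefit from bigrams like "not good".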
# +
for dataset in list_of_datasets:
dataset["X_train_vec"], dataset["X_test_vec"] = vectorize(CountVectorizer(max_features=50000, binary=True), dataset["X_train"], dataset["X_test"])
models_eval(list_of_models, list_of_datasets)
# -
# ### Interlude: the evils of overfitting
#
# If you're like most people, you enjoy nice colorful representations of model output. The [`eli5`](http://eli5.readthedocs.io/en/latest/tutorials/sklearn-text.html) library has a few things to help you out there.
# +
vec = CountVectorizer(max_features=50000, binary=True).fit(ng_data["X_train"])
model = LogisticRegression(solver="lbfgs", n_jobs = -1, C=1000)
fit = model.fit(ng_data["X_train_vec"], ng_data["y_train"])
# -
# The first entry in the newsgroup20 is about cars, and you can see how the logistic model weighs different words. Words like car, bumper, and engine are all positive.
eli5.show_prediction(fit, ng_train_raw.data[0], vec=vec, target_names=ng_train_raw.target_names, targets=["rec.autos"])
# You can view a more compressed version of the information with the `show_weights()` function. For some reason the word `rectum` is the fourth most important for hockey posts.
eli5.show_weights(fit, vec=vec, top=10, target_names=ng_train_raw.target_names,
targets=["rec.autos", "rec.sport.baseball", "rec.sport.hockey", "sci.med", "sci.space"])
# While this particular example is a bit crude, it is a case of a model overfitting. The word only appears once in the data as a joke, yet it will influence all future predictions.
# +
wikipedia = "The rectum is the final straight portion of the large intestine."
print(ng_train_raw.target_names[model.predict(vec.transform([wikipedia]))[0]])
# -
# ### tf-idf with unigrams
#
# Compared to binary counting, here is a better approach: the [term frequency-inverse document frequency (tf-idf) vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html). This vectorizer counts word occurrences in a sentence (or "document") but then discounts those counts by how frequently each word appears across the corpus. For example, if *aardvark* appears once in a sentence and only rarely in the entire corpus, it keeps a high score; if *and* appears once in a sentence but 10,000 times in the corpus, its score is driven toward zero. tf-idf normalizes your data in a way that gives an edge to rarer words and a penalty to more common ones.
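# As a sanity check, the core formula is easy to compute by hand. This is the textbook tf * log(N/df) variant (note: scikit-learn's `TfidfVectorizer` uses a smoothed idf and l2-normalizes each row, so its exact numbers differ):

```python
import math

# Textbook tf-idf on a toy corpus (illustration only).
docs = [
    "the aardvark and the ant",
    "the ant and the bee",
    "the bee and the fly",
]

def tfidf(word, doc, docs):
    tf = doc.split().count(word)               # raw count in this document
    df = sum(word in d.split() for d in docs)  # how many documents contain it
    return tf * math.log(len(docs) / df)       # rare words keep a high score

print(round(tfidf("aardvark", docs[0], docs), 3))  # 1.099: rare, boosted
print(round(tfidf("the", docs[0], docs), 3))       # 0.0: appears everywhere
```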
#
# According to [this Wikipedia article](https://en.wikipedia.org/wiki/Tf%E2%80%93idf), 83% of text-based recommender systems in digital libraries use tf-idf.
#
# Anyways, I'll keep setting the maximum vocabulary size to 50,000 to keep these examples roughly comparable. Increasing the maximum vocabulary usually increases model accuracy. When scikit-learn is given a max vocabulary size, it only keeps the most frequent words.
#
# We should get **89%** for imdb, **68%** for newsgroup20, and **80%** for reuters. These are the amounts we'll try to beat afterwards.
# +
for dataset in list_of_datasets:
dataset["X_train_vec"], dataset["X_test_vec"] = vectorize(TfidfVectorizer(max_features=50000, min_df=5), dataset["X_train"], dataset["X_test"])
models_eval(list_of_models, list_of_datasets)
# -
# ### tf-idf with bigrams
#
# We can help the models by informing them of some short word sequences, say sequences of two: these are called bigrams. For example, the imdb models get a bit of extra help from knowing about the presence of "not good" instead of only "not" and "good" separately; this helps in the sentiment analysis task.
#
# We get a slight increase to **90%** for imdb, but decreases to **67%** for newsgroup20 and **79%** for reuters. Some rarer words are important to the models, and they're being pushed out by more common bigrams. Increasing `max_features` to 75,000 doesn't make much difference.
# +
for dataset in list_of_datasets:
dataset["X_train_vec"], dataset["X_test_vec"] = vectorize(TfidfVectorizer(max_features=50000, ngram_range = [1, 2]), dataset["X_train"], dataset["X_test"])
models_eval(list_of_models, list_of_datasets)
# -
# ### Pre-processing
#
# Before we move on to other things, we can try pre-processing our text data further. I got the lemmatization code [here](http://scikit-learn.org/stable/modules/feature_extraction.html#customizing-the-vectorizer-classes).
class LemmaTokenizer(object):
def __init__(self):
self.wnl = WordNetLemmatizer()
def __call__(self, doc):
return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]
# Below you can see that this beats the best we had so far. (It won't work with the imdb set, whose words are already integer IDs.) With 75,000 features and [1,2] n-grams we get **67%** for newsgroup20 and **80%** for reuters. With [1,1] only, we get **81%** for reuters, our best so far; newsgroup20 gets **66%**.
list_of_datasets = [ng_data, reuters_data]
# +
for dataset in list_of_datasets:
dataset["X_train_vec"], dataset["X_test_vec"] = vectorize(TfidfVectorizer(max_features=50000, ngram_range = [1, 1],
tokenizer=LemmaTokenizer(), stop_words="english"),
dataset["X_train"], dataset["X_test"])
models_eval(list_of_models, list_of_datasets)
# -
# ### hashing trick with character ngrams
#
# So far we've been setting a limit of 50,000 to our vocabulary. What if we wanted to fit more words into that 50,000, perhaps by some form of compression?
#
# The hashing trick hashes words into ID numbers. The same word is always hashed to the same ID, but two different words may "collide" and get the same ID. These collisions are bad for classification, but hashing can still turn out to be a net good.
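# A bare-bones sketch of the trick (`HashingVectorizer` works on the same principle, with a much faster hash and sparse output): each token is hashed into one of a fixed number of buckets, so the feature count never grows with the vocabulary.

```python
import hashlib

def hash_bucket(token, n_features=8):
    # md5 gives a hash that is stable across runs (python's hash() is salted)
    digest = hashlib.md5(token.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_features

def hash_vectorize(doc, n_features=8):
    vec = [0] * n_features
    for token in doc.split():
        vec[hash_bucket(token, n_features)] += 1
    return vec

vec = hash_vectorize("the cat sat on the mat")
print(len(vec))  # always 8 features, no matter how big the vocabulary
print(sum(vec))  # all 6 tokens landed somewhere in those 8 buckets
```

# With only 8 buckets collisions are guaranteed for any real vocabulary; bump `n_features` up to something like 50,000 and they become rare enough to tolerate.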
# +
list_of_models = {"Logistic" : LogisticRegression(solver="lbfgs", n_jobs = -1),
"Logistic C=1000" : LogisticRegression(solver="lbfgs", n_jobs = -1, C=1000),
"RandomForest 10" : RandomForestClassifier(n_jobs = -1)
}
list_of_datasets = [imdb_data, baby_data, ng_data, reuters_data]
# -
# The reason I'm using hashing here is to try character n-grams on the baby name dataset. By taking n-grams from 2 to 5, I am going to have a *massive* vocabulary. With hashing it becomes workable.
#
# With the baby dataset, its 1-gram vocabulary of 52 becomes 123,084 with `[2,5]` n-grams. For this we get an accuracy score of **81%**.
# +
for dataset in list_of_datasets:
dataset["X_train_vec"], dataset["X_test_vec"] = vectorize(HashingVectorizer(n_features = 50000, analyzer="char_wb", ngram_range=[2,5]),
dataset["X_train"], dataset["X_test"])
models_eval(list_of_models, list_of_datasets)
# -
# ## Training word embeddings
#
# In 2015-2017 the internet was infatuated with word embeddings. These little embeddings were able to get the "meaning" of words, and they could find [related vegetables and pokemon](https://fasttext.cc/docs/en/unsupervised-tutorial.html#nearest-neighbor-queries) for you.
#
# Word embeddings are vectors assigned to words, meaning that each word is "embedded" in an n-dimensional space. These embedding spaces can have many dimensions, like 100 to 300. Embedding models organize this space by placing similar words close together: the model decides which words are similar by which other words they co-occur with. For example, if bacon and ham are often mentioned alongside pork, breakfast, and greasy, they are related. This is how word embedding models know how to find related words for you.
#
# Ultimately, these word embeddings give a slight edge to classification models. First, they're in n-dimensional space, where n is 100-300 rather than the 50,000 to 100,000 of a full vocabulary. This saves resources. Second, models know from the start to treat ham and bacon similarly.
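# The "closeness" that embedding models report is just cosine similarity between vectors. Here's a toy example with made-up 3-dimensional vectors (the numbers are invented purely for illustration; real embeddings use 100-300 dimensions):

```python
import math

# hypothetical embeddings -- the values here are made up for illustration
vectors = {
    "ham":   [0.9, 0.8, 0.1],
    "bacon": [0.8, 0.9, 0.2],
    "car":   [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(round(cosine(vectors["ham"], vectors["bacon"]), 2))  # 0.99: related
print(round(cosine(vectors["ham"], vectors["car"]), 2))    # 0.3: unrelated
```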
#
# Word embeddings will work alright with our models, but they won't beat our tf-idf champion.
#
# ### Embedding our datasets
#
# We can train the vectors easily with the gensim library. There's a little pre-processing involved.
# +
# Lowercase text and collapse runs of non-word characters into single spaces
def preprocessor(x):
return re.sub(r"[ ]+", " ", re.sub(r"[^\w]+", " ", x)).lower()
# Either split words or characters into list items (tokenize)
def w2v_prepare(dataset, by_words=True):
if by_words:
return [preprocessor(line).split() for line in dataset]
else:
return [list(line) for line in dataset]
# With the tokenized text, run word2vec on it. Afterwards, delete the model and keep the vectors.
def w2v_fit(text, size=100, alpha=0.025, window=5, min_count=5, workers=4, iter=5):
    # iter controls the number of training epochs over the corpus
    w2v_model = Word2Vec(text, size=size, alpha=alpha, window=window, min_count=min_count, workers=workers, iter=iter)
word_vectors = w2v_model.wv
del w2v_model
print(f"word2vec model has {len(word_vectors.vocab)} words")
return word_vectors
# After the vectors are ready, we embed our datasets by averaging each document's word vectors
def w2v_transform(text, word_vectors):
vocab = set(word_vectors.vocab)
size = word_vectors.vector_size
vectorized = []
for line in text:
line = list(filter(lambda x: x in vocab, line))
if line:
line = np.mean(list(map(lambda x: word_vectors[x], line)), axis=0)
vectorized.append(line)
else:
vectorized.append(np.zeros(size))
return np.array(vectorized)
# -
# Our corpora won't be great for word2vec since they're fairly small. This will hurt the quality of the vectors.
ng_wv = w2v_fit(w2v_prepare(ng_data["X_train"]), min_count=1, iter=50, alpha=0.05)
baby_wv = w2v_fit(w2v_prepare(baby_data["X_train"], by_words=False), size=20)
reuters_wv = w2v_fit(w2v_prepare(reuters_data["X_train"]), min_count=1, iter=50, alpha=0.05)
# You can query word vectors to get related words. Below I look up "man" in the newsgroup20 embeddings. The matches are reasonable.
ng_wv.most_similar(positive=["man"])
# The data has to be embedded to be used.
# +
ng_data["X_train_wv"] = w2v_transform(w2v_prepare(ng_data["X_train"]), ng_wv)
ng_data["X_test_wv"] = w2v_transform(w2v_prepare(ng_data["X_test"]), ng_wv)
baby_data["X_train_wv"] = w2v_transform(w2v_prepare(baby_data["X_train"], by_words=False), baby_wv)
baby_data["X_test_wv"] = w2v_transform(w2v_prepare(baby_data["X_test"], by_words=False), baby_wv)
reuters_data["X_train_wv"] = w2v_transform(w2v_prepare(reuters_data["X_train"]), reuters_wv)
reuters_data["X_test_wv"] = w2v_transform(w2v_prepare(reuters_data["X_test"]), reuters_wv)
# -
# Since we now have "denser" data (100 dimensions instead of 25,000), I'll try out KNN since it won't take forever anymore (though it will still take a long time). KNN works surprisingly well on the babies and reuters datasets, especially with more neighbors (67% on babies and 74% on reuters).
# +
list_of_models = {"Logistic" : LogisticRegression(solver="lbfgs", n_jobs = -1),
"Logistic C=1000" : LogisticRegression(solver="lbfgs", n_jobs = -1, C=1000),
"RandomForest 10" : RandomForestClassifier(n_jobs = -1),
"RandomForest 100" : RandomForestClassifier(n_jobs = -1, n_estimators=100),
"RandomForest 100/10" : RandomForestClassifier(n_jobs = -1, n_estimators=100, max_depth=10),
"KNN 1" : KNeighborsClassifier(n_neighbors = 1, n_jobs=-1)
}
list_of_datasets = [ng_data, baby_data, reuters_data]
# -
models_eval(list_of_models, list_of_datasets, train_key="X_train_wv", test_key="X_test_wv")
# ### Using large pre-trained embeddings
#
# Above we trained our word embedding model on our datasets, but is that a good idea? Are toy datasets for text classification sufficient to model the English language? Remember that word2vec operates on co-occurences and that words can appear in many different contexts. It's tough to infer the meaning of a word from only a few examples.
#
# Therefore a common strategy with word embeddings is to simply use pre-trained embeddings from a massive dataset. Thanks to the size of the source corpus, the word relationships are bound to be more finely detailed.
#
# You can get a 3.6GB word vector file from [this blogger](http://mccormickml.com/2016/04/12/googles-pretrained-word2vec-model-in-python/) or this [archived Google Code post](https://code.google.com/archive/p/word2vec/).
googlenews = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
print(f"word2vec model has {len(googlenews.vocab)} words")
# Prepare our data.
# +
ng_data["X_train_wv"] = w2v_transform(w2v_prepare(ng_data["X_train"]), googlenews)
ng_data["X_test_wv"] = w2v_transform(w2v_prepare(ng_data["X_test"]), googlenews)
reuters_data["X_train_wv"] = w2v_transform(w2v_prepare(reuters_data["X_train"]), googlenews)
reuters_data["X_test_wv"] = w2v_transform(w2v_prepare(reuters_data["X_test"]), googlenews)
# -
# Again, I'll try the same models as above. I am going to ditch the imdb and babies datasets because they lack actual words. (The imdb data is already coded to integer IDs, and the baby names are a character-level task.)
# +
list_of_models = {"Logistic" : LogisticRegression(solver="lbfgs", n_jobs = -1),
"Logistic C=1000" : LogisticRegression(solver="lbfgs", n_jobs = -1, C=1000),
"RandomForest 10" : RandomForestClassifier(n_jobs = -1),
"RandomForest 100" : RandomForestClassifier(n_jobs = -1, n_estimators=100),
"RandomForest 100/10" : RandomForestClassifier(n_jobs = -1, n_estimators=100, max_depth=10),
"KNN 1" : KNeighborsClassifier(n_neighbors = 1, n_jobs=-1)
}
list_of_datasets = [ng_data, reuters_data]
# -
# The results below are not the best, but they do beat our previous word embedding results. KNN again performs well on the reuters data (76% with 5 neighbors, not shown).
models_eval(list_of_models, [ng_data, reuters_data], train_key="X_train_wv", test_key="X_test_wv")
# ## Neural networks
#
# We aren't going to be using deep learning here; instead we're just going to use neural networks to train embeddings on-the-fly. Rather than training twice, first on embeddings and then on classification, the word embeddings will be trained alongside classification.
#
# The method we use below is effectively [fasttext](https://fasttext.cc/), a fast, barebones classifier that tends to meet or beat our best results.
#
# keras expects sequences of integers as inputs. These inputs are fed to embedding layers that do similar work to word2vec above: words get embeddings that are useful for classification.
# +
# given a list of tokens, transform them into ngrams
def get_ngrams(x, n=1):
if n==1:
return x
elif len(x) >= n:
return ["_".join(x[i:i+n]) for i in range(len(x)-n+1)]
else:
return []
# with an ngram interval, such as [2,5], break a list of tokens into the appropriate ngrams
def ngram_iter(x, interval):
y = []
for n in range(interval[0], interval[1]+1):
y += get_ngrams(x, n)
return y
# prepare data for keras's embeddings layers
# this transforms lists of tokens into lists of integer IDs (vocabulary built from the training set; unknown tokens map to <NULL>)
def keras_data(train_set, test_set, by_words=True, max_unigrams=50000, ngram_range=[1,1]):
train_set = w2v_prepare(train_set, by_words)
test_set = w2v_prepare(test_set, by_words)
id2word = [i for line in train_set for i in ngram_iter(line, ngram_range)]
if max_unigrams > 0:
id2word = Counter(id2word)
id2word = list(id2word.items())
id2word.sort(key=lambda x: -x[1])
id2word = [x[0] for x in id2word[:max_unigrams-2]]
id2word = ["<PAD>", "<NULL>"] + list(set(id2word))
else:
id2word = ["<PAD>", "<NULL>"] + list(set(id2word))
word2id = dict()
vocab_size = len(id2word)
print(f"Size of vocabulary: {vocab_size}")
for i in range(vocab_size):
word2id[id2word[i]] = i
train_set = [[word2id.get(token, 1) for token in ngram_iter(line, ngram_range)] for line in train_set]
test_set = [[word2id.get(token, 1) for token in ngram_iter(line, ngram_range)] for line in test_set]
return train_set, test_set
# -
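# To make the n-gram expansion above concrete, here's what `ngram_iter` produces on a short token list (the two helpers are repeated so this snippet stands alone):

```python
# identical logic to the helpers above, repeated for a self-contained demo
def get_ngrams(x, n=1):
    if n == 1:
        return x
    elif len(x) >= n:
        return ["_".join(x[i:i + n]) for i in range(len(x) - n + 1)]
    return []

def ngram_iter(x, interval):
    y = []
    for n in range(interval[0], interval[1] + 1):
        y += get_ngrams(x, n)
    return y

tokens = ["red", "meat", "is", "tasty"]
print(ngram_iter(tokens, [1, 2]))
# ['red', 'meat', 'is', 'tasty', 'red_meat', 'meat_is', 'is_tasty']
```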
# Through trial and error I've ended up with the settings below.
ng_data["X_train_ids"], ng_data["X_test_ids"] = keras_data(ng_data["X_train"], ng_data["X_test"], ngram_range=[1,1])
baby_data["X_train_ids"], baby_data["X_test_ids"] = keras_data(baby_data["X_train"], baby_data["X_test"], by_words=False, ngram_range=[2,5])
reuters_data["X_train_ids"], reuters_data["X_test_ids"] = keras_data(reuters_data["X_train"], reuters_data["X_test"], ngram_range=[1,2])
# ### IMDB
#
# The imdb dataset is used in keras's example implementation of [fasttext](https://github.com/keras-team/keras/blob/master/examples/imdb_fasttext.py). I've tweaked it a bit to give the best results.
#
# As you can see, the vector size of the embeddings is only *one* and these feed directly to a *single* sigmoid neuron. Nevertheless the model works quite well. *Always try simpler networks!*
# +
x_train = sequence.pad_sequences(imdb_data["X_train_ids"], maxlen=400)
x_test = sequence.pad_sequences(imdb_data["X_test_ids"], maxlen=400)
model = Sequential()
model.add(Embedding(50000, 1, input_length=400))
model.add(GlobalAveragePooling1D())
model.add(Dense(1, activation='sigmoid'))
optimizer = Adam(lr=0.1)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
early_stop = EarlyStopping(min_delta=0.01, patience=2)
model.fit(x_train, imdb_data["y_train"],
batch_size=256,
epochs=20,
callbacks=[early_stop],
validation_data=(x_test, imdb_data["y_test"]))
# -
# ### Newsgroup
#
# The newsgroup20 dataset is a bit more complex, so it requires a few more parameters.
# +
x_train = sequence.pad_sequences(ng_data["X_train_ids"], maxlen=500)
x_test = sequence.pad_sequences(ng_data["X_test_ids"], maxlen=500)
model = Sequential()
model.add(Embedding(50000, 32, input_length=500))
model.add(GlobalAveragePooling1D())
model.add(Dense(len(ng_train_raw.target_names), activation='softmax'))
optimizer = Adam(lr=0.01)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
early_stop = EarlyStopping(min_delta=0.01, patience=5)
model.fit(x_train, ng_data["y_train"],
batch_size=64,
epochs=50,
callbacks=[early_stop],
validation_data=(x_test, ng_data["y_test"]))
# -
# ### Reuters
#
# If you run the neural network on Reuters with the softmax function, you'll beat the rest with a score of **82%**.
#
# However, this approach makes no sense because we have a multi-label problem (one record can have more than one label), not a multi-class problem (one record has exactly one label). The softmax function outputs probabilities that sum to 1; for accuracy, the highest probability is the chosen label. That 82% just means the neural network beats the other models with one hand tied behind its back (only able to predict one class). Why? Most Reuters observations have only one label, with an average of 1.25 labels per observation.
#
# Instead, the proper thing to do is use a sigmoid activation function at the output so that each label is evaluated separately. Each sigmoid outputs a value between 0 and 1 that is thresholded independently; one sigmoid doesn't affect the others, unlike the softmax.
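# A quick numeric illustration of the difference: push the same three logits through a softmax and through independent sigmoids.

```python
import math

logits = [2.0, 1.0, 0.1]

# softmax: the outputs compete and must sum to 1
exps = [math.exp(z) for z in logits]
softmax = [e / sum(exps) for e in exps]

# sigmoids: each logit is squashed on its own, independently of the rest
sigmoids = [1 / (1 + math.exp(-z)) for z in logits]

print([round(p, 3) for p in softmax])   # [0.659, 0.242, 0.099], sums to 1
print([round(p, 3) for p in sigmoids])  # [0.881, 0.731, 0.525], sums to > 1
```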
#
# Even though this approach is the correct one, it makes accuracy kind of meaningless. With sigmoids, accuracy shoots to 99% because the neural network gets credit every time it predicts a 0 correctly; since the target labels are pretty sparse, that is easy to do. In fact, a model that only ever predicts 0s does pretty well. We need a better metric.
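# As a sanity check on that claim, here is a toy example (assumed shapes, not the real Reuters data): an all-zeros predictor scores near-perfect accuracy on sparse multi-hot labels.

```python
import numpy as np

# 4 samples, 90 possible labels, roughly one positive label per row
y_true = np.zeros((4, 90))
y_true[np.arange(4), [3, 7, 7, 42]] = 1
y_pred = np.zeros_like(y_true)        # a "model" that only ever predicts 0
accuracy = (y_true == y_pred).mean()  # ~0.989 without predicting a single label
print(accuracy)
```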
#
# I'm not extremely familiar with precision, recall, and F1, but I think they're the right metrics here, since they measure around real and predicted positives. [Here is the formula for the F1-score.](https://en.wikipedia.org/wiki/F1_score) Precision goes down whenever a positive label is assigned incorrectly, and recall goes down whenever a true positive label is missed. A model that only predicts 0s has zero recall; a model that only predicts 1s has zero precision. That sounds like what we want.
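# A tiny worked example of those definitions (toy numbers, not model output): with one true positive, one false positive, and one false negative, precision and recall are both 0.5, so F1 is 0.5 as well.

```python
y_true = [1, 1, 0, 0]
y_pred = [1, 0, 1, 0]  # one TP, one FN, one FP

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
precision = tp / sum(y_pred)  # TP / predicted positives = 1/2
recall = tp / sum(y_true)     # TP / actual positives = 1/2
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 0.5 0.5 0.5
```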
#
# Keras removed its built-in precision, recall, and F1 metrics, so you need to define them yourself. Or, you could just grab [some code on Stack Overflow](https://stackoverflow.com/a/45305384).
# +
from keras import backend as K
def f1(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
# -
# With the f1-score metric, the model has a test score of about 81%.
# +
x_train = sequence.pad_sequences(reuters_data["X_train_ids"], maxlen=500)
x_test = sequence.pad_sequences(reuters_data["X_test_ids"], maxlen=500)
model = Sequential()
model.add(Embedding(50000, 128, input_length=500))
model.add(GlobalAveragePooling1D())
model.add(Dense(90, activation='sigmoid'))
optimizer = Adam(lr=0.1)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=[f1])
early_stop = EarlyStopping(min_delta=0.1, patience=5)
model.fit(x_train, reuters_data["y_train"],
batch_size=64,
epochs=10,
callbacks=[early_stop],
validation_data=(x_test, reuters_data["y_test"]))
# -
# Since I'm paranoid, I want to make sure that the model can indeed predict more than one label. I had to dig around a little to find an observation that looked nice.
print("Real")
print(reuters_data["y_train"][6].todense())
print("Predicted")
print(np.round(model.predict(x_train)[6], decimals=3))
# ### Babies
#
# The results on the baby names are reasonable when using 2- to 5-character n-grams.
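# For reference, this is what 2- to 5-character n-grams of a name look like (a hypothetical helper for illustration; `keras_data` above presumably builds something similar through its tokenizer):

```python
def char_ngrams(s, n_min=2, n_max=5):
    """All character n-grams of s with lengths n_min through n_max."""
    return [s[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(s) - n + 1)]

print(char_ngrams("mary"))
# ['ma', 'ar', 'ry', 'mar', 'ary', 'mary']
```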
# +
x_train = sequence.pad_sequences(baby_data["X_train_ids"], maxlen=100)
x_test = sequence.pad_sequences(baby_data["X_test_ids"], maxlen=100)
model = Sequential()
model.add(Embedding(50000, 1, input_length=100))
model.add(GlobalAveragePooling1D())
model.add(Dense(1, activation='sigmoid'))
optimizer = Adam(lr=0.01)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
early_stop = EarlyStopping(min_delta=0.01, patience=5)
model.fit(x_train, baby_data["y_train"],
batch_size=256,
epochs=20,
callbacks=[early_stop],
validation_data=(x_test, baby_data["y_test"]))
# -
# ## Recurrent neural networks
#
# Since the baby names are short sequences, they'll work fine with recurrent neural networks. This is based on the [example from keras](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py).
# +
baby_data["X_train_ids"], baby_data["X_test_ids"] = keras_data(baby_data["X_train"], baby_data["X_test"], by_words=False, ngram_range=[1,1])
x_train = sequence.pad_sequences(baby_data["X_train_ids"], maxlen=20)
x_test = sequence.pad_sequences(baby_data["X_test_ids"], maxlen=20)
model = Sequential()
model.add(Embedding(55, 64, input_length=20))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
optimizer = Adam(lr=0.01)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
early_stop = EarlyStopping(min_delta=0.01, patience=2)
model.fit(x_train, baby_data["y_train"],
batch_size=64,
epochs=10,
callbacks=[early_stop],
validation_data=(x_test, baby_data["y_test"]))
# -
# We can run predictions on names to see how well the model does. An output near 1 predicts a male name; near 0, a female name.
# +
test_baby = ["john", "mary", "gimli", "frodo", "pikachu"]
_, test_baby_ids = keras_data(baby_data["X_train"], test_baby, by_words=False, ngram_range=[1,1])
test_baby_ids = sequence.pad_sequences(test_baby_ids, maxlen=20)
model.predict(test_baby_ids)
# -
| Statistical Learning/5_Text_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""Compute the magnetic field B in an iron cylinder, the copper wires, and the surrounding vacuum."""
# https://fenicsproject.org/pub/tutorial/html/._ftut1015.html
from fenics import *
from mshr import *
from math import sin, cos, pi
a = 1.0 # inner radius of iron cylinder
b = 1.2 # outer radius of iron cylinder
c_1 = 0.8 # radius for inner circle of copper wires
c_2 = 1.4 # radius for outer circle of copper wires
r = 0.1 # radius of copper wires
R = 2.5 # radius of domain
n = 5 # number of windings
# Define geometry for background
domain = Circle(Point(0, 0), R)
# Define geometry for iron cylinder
cylinder = Circle(Point(0, 0), b) - Circle(Point(0, 0), a)
# Define geometry for wires (N = North (up), S = South (down))
angles_N = [i*2*pi/n for i in range(n)]
angles_S = [(i + 0.5)*2*pi/n for i in range(n)]
wires_N = [Circle(Point(c_1*cos(v), c_1*sin(v)), r) for v in angles_N]
wires_S = [Circle(Point(c_2*cos(v), c_2*sin(v)), r) for v in angles_S]
# Set subdomain for iron cylinder
domain.set_subdomain(1, cylinder)
# Set subdomains for wires
for (i, wire) in enumerate(wires_N):
domain.set_subdomain(2 + i, wire)
for (i, wire) in enumerate(wires_S):
domain.set_subdomain(2 + n + i, wire)
# Create mesh
mesh = generate_mesh(domain, 64)
# Define function space
V = FunctionSpace(mesh, 'P', 1)
# Define boundary condition
bc = DirichletBC(V, Constant(0), 'on_boundary')
# Define subdomain markers and integration measure
markers = MeshFunction('size_t', mesh, 2, mesh.domains())
dx = Measure('dx', domain=mesh, subdomain_data=markers)
# Define current densities
J_N = Constant(1.0)
J_S = Constant(-1.0)
# Define magnetic permeability
class Permeability(UserExpression):
def __init__(self, markers, **kwargs):
self.markers = markers
super().__init__(**kwargs)
def eval_cell(self, values, x, cell):
if self.markers[cell.index] == 0:
values[0] = 4*pi*1e-7 # vacuum
elif self.markers[cell.index] == 1:
values[0] = 1e-5 # iron (should really be 6.3e-3)
else:
values[0] = 1.26e-6 # copper
mu = Permeability(markers, degree=1)
# Define variational problem
A_z = TrialFunction(V)
v = TestFunction(V)
a = (1 / mu)*dot(grad(A_z), grad(v))*dx
L_N = sum(J_N*v*dx(i) for i in range(2, 2 + n))
L_S = sum(J_S*v*dx(i) for i in range(2 + n, 2 + 2*n))
L = L_N + L_S
# Solve variational problem
A_z = Function(V)
solve(a == L, A_z, bc)
# Plot solution
from vedo.dolfin import plot
plot(A_z,
#isolines={'n':10, 'lw':1.5, 'c':'black'} #not yet working
)
# -
| examples/notebooks/dolfin/magnetostatics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:jcopml]
# language: python
# name: conda-env-jcopml-py
# ---
# +
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from jcopml.pipeline import num_pipe, cat_pipe
from jcopml.utils import save_model, load_model
from jcopml.plot import plot_missing_value
from jcopml.feature_importance import mean_score_decrease
from jcopml.plot import plot_association_matrix, plot_correlation_ratio, plot_correlation_matrix
# -
df = pd.read_csv("Admission_Predict_Ver1.1.csv", index_col="Serial No.")
df.columns = df.columns.str.lower().str.strip().str.replace(' ','_')
df.head()
# `GRE Score` = GRE test score <br>
# `TOEFL Score` = TOEFL score <br>
# `University Rating` = university ranking <br>
# `SOP` = quality of the statement of purpose <br>
# `LOR` = quality of the letter of recommendation <br>
# `CGPA` = cumulative grade point average (GPA) <br>
# `Research` = has research experience <br>
# `Chance of Admit` = probability of admission <br>
from sklearn.model_selection import GridSearchCV
from jcopml.tuning import grid_search_params as gsp
from sklearn.model_selection import RandomizedSearchCV
from jcopml.tuning import random_search_params as rsp
from jcopml.tuning.skopt import BayesSearchCV
from jcopml.tuning import bayes_search_params as bsp
from skopt import BayesSearchCV
from jcopml.plot import plot_residual
# ## EDA
df.head()
num_feature = ['gre_score','toefl_score','sop','lor','cgpa']
cat_feature = ['university_rating','research']
print("numerical feature : ", num_feature)
print("categorical feature : ", cat_feature)
import matplotlib.pyplot as plt
import seaborn as sns
# +
fig, ax = plt.subplots(nrows=5)
fig.set_size_inches(12,10)
index=0
for num_ftr in num_feature:
sns.distplot(df[num_ftr], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
# +
fig, ax = plt.subplots(nrows=5)
fig.set_size_inches(12,10)
index=0
for num_ftr in num_feature:
sns.histplot(data=df, x=num_ftr, bins=11, ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
# +
fig, ax = plt.subplots(nrows=5)
fig.set_size_inches(12,10)
index=0
for num_ftr in num_feature:
sns.scatterplot(x=df[num_ftr], y=df["chance_of_admit"], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
# -
# Three features visibly show a strong positive correlation with the target.
# +
fig, ax = plt.subplots(nrows=2)
fig.set_size_inches(12,5)
index=0
for num_ftr in ["sop","lor"]:
sns.scatterplot(y=df[num_ftr], x=df["chance_of_admit"], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
# -
# Swapping the axes shows that chance_of_admit also correlates positively with sop and lor.
# +
fig, ax = plt.subplots(nrows=2)
fig.set_size_inches(12,10)
index=0
for cat_ftr in cat_feature:
sns.countplot(x=cat_ftr, data=df, ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
# +
fig, ax = plt.subplots(nrows=2)
fig.set_size_inches(12,5)
index=0
for cat_ftr in cat_feature:
sns.scatterplot(y=df[cat_ftr], x=df["chance_of_admit"], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
# -
# ## Korelasi
plot_correlation_matrix(df, "chance_of_admit", num_feature)
plot_correlation_ratio(df, catvar=cat_feature, numvar=["chance_of_admit"])
# Judging by the correlations, three of the `numerical` features are strongly correlated with the target,
# while the `categorical` features are weaker; `research` in particular is only weakly correlated.
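# For the categorical features, `plot_correlation_ratio` reports the correlation ratio (eta) between a categorical and a numeric variable. A minimal sketch of the underlying statistic (a hypothetical helper, not jcopml's implementation):

```python
import numpy as np

def correlation_ratio(categories, values):
    """Eta: sqrt of between-group sum of squares over total sum of squares."""
    values = np.asarray(values, dtype=float)
    cats = np.asarray(categories)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(
        (cats == c).sum() * (values[cats == c].mean() - grand_mean) ** 2
        for c in np.unique(cats)
    )
    return np.sqrt(ss_between / ss_total)

print(correlation_ratio(["a", "a", "b", "b"], [1, 1, 3, 3]))  # 1.0: category fully determines the value
```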
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
# +
X = df[feature]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# -
from sklearn.ensemble import RandomForestRegressor
# +
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), ["gre_score","toefl_score","cgpa"]),
('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
# +
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['cgpa']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), one)
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
# +
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['cgpa','university_rating']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), one),
('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
# +
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['toefl_score','cgpa','university_rating']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), one),
('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
# +
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['gre_score','toefl_score','cgpa','university_rating']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), one),
('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
# -
# The best test score is obtained when the selected features are `cgpa` and `university_rating`.
| Graduate Admission/Model-Width-EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
from scipy.interpolate import interpn
from helpFunctions import surfacePlot
import numpy as np
from multiprocessing import Pool
from functools import partial
import warnings
import math
warnings.filterwarnings("ignore")
np.set_printoptions(precision=2)  # printoptions() alone is a context manager and has no effect
# ### The value of renting
# Assuming we obtain the value: $\tilde{V}_{t+1}(x_{t+1})$ where:
# $x_{t+1} = [w_{t+1}, n_{t+1}, M_{t+1}, g_{t+1} = 0, e_{t+1}, s_{t+1}, (H)]$ from interpolation. We know $H$ and $M_t$ from the action taken and we could calculate mortgage payment $m$ and $rh$ (now treated as constant) is observed from the market.
# * To start with we have state variable: $x_t = [w_t, n_t, e_t, s_t]$
# * Housing choice is limited: $H_{\text{choice}} = \{750, 1000, 1500, 2000\}$
# * Mortgage choice is also limited to discrete values: $M_{t} = [0.2H, 0.4H, 0.6H, 0.8H]$
# * Action: continue to rent, $a = (c, b, k, h)$, or switch to owning a house, $a = (c, b, k, M, H)$
# * House purchases can only happen between the ages of 10 and 25.
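# The mortgage payment $m$ implied by a balance $M$ over $N$ periods at rate $rh$ follows the standard annuity formula used below, $m = M / D_N$ with $D_N = ((1+rh)^N - 1)/(rh(1+rh)^N)$. A small sketch (the balance and horizon are illustrative):

```python
rh = 0.036  # mortgage rate, as in the calibration below

def payment(M, N, r=rh):
    """Level payment that amortizes a balance M over N periods at rate r."""
    D = ((1 + r) ** N - 1) / (r * (1 + r) ** N)  # annuity discount factor
    return M / D

payment(100, 1)   # a single period: principal plus one period of interest, 100 * 1.036
payment(100, 30)  # a 30-period mortgage on a balance of 100 (thousand dollars)
```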
# +
# time line
T_min = 0
T_max = 70
T_R = 45
# discounting factor
beta = 1/(1+0.02)
# utility function parameter
gamma = 2
# relative importance of housing consumption and non durable consumption
alpha = 0.8
# parameter used to calculate the housing consumption
kappa = 0.3
# depreciation parameter
delta = 0.025
# housing parameter
chi = 0.3
# uB associated parameter
B = 2
# constant cost
c_h = 0.5
# All the money amount are denoted in thousand dollars
earningShock = [0.8,1.2]
# Define transition matrix of economical states
# GOOD -> GOOD 0.8, BAD -> BAD 0.6
Ps = np.array([[0.6, 0.4],[0.2, 0.8]])
# current risk free interest rate
r_b = np.array([0.03 ,0.05])
# stock return depends on current and future econ states
# r_k = np.array([[-0.2, 0.15],[-0.15, 0.2]])
r_k = np.array([[-0.15, 0.20],[-0.15, 0.20]])
# expected return on stock market
# r_bar = 0.0667
r_bar = 0.02
# probability of survival
Pa = np.load("prob.npy")
# deterministic income
detEarning = np.load("detEarning.npy")
# probability of employment transition Pe[s, s_next, e, e_next]
Pe = np.array([[[[0.3, 0.7], [0.1, 0.9]], [[0.25, 0.75], [0.05, 0.95]]],
[[[0.25, 0.75], [0.05, 0.95]], [[0.2, 0.8], [0.01, 0.99]]]])
# tax rate before and after retirement
tau_L = 0.2
tau_R = 0.1
# some variables associate with 401k amount
Nt = [np.sum(Pa[t:]) for t in range(T_max-T_min)]
Dt = [np.ceil(((1+r_bar)**N - 1)/(r_bar*(1+r_bar)**N)) for N in Nt]
# income fraction goes into 401k
yi = 0.005
# mortgate rate
rh = 0.036
# this discount is used to calculate mortgage payment m
D = [((1+rh)**N - 1)/(rh*(1+rh)**N) for N in range(T_max-T_min)]
# housing price constant
pt = 250/1000
# renting price constant
pr = 30/1000
# +
#Define the utility function
def u(c):
# shift utility function to the left, so it only takes positive value
return (np.float_power(c, 1-gamma) - 1)/(1 - gamma)
#Define the bequeath function, which is a function of wealth
def uB(tb):
return B*u(tb)
#Calculate TB_rent
def calTB_rent(x):
# change input x as numpy array
# w, n, e, s = x
TB = x[:,0] + x[:,1]
return TB
#Calculate TB_own
def calTB_own(x):
# change input x as numpy array
# transiton from (w, n, e, s) -> (w, n, M, 0, e, s, H)
TB = x[:,0] + x[:,1] + x[:,6]*pt - x[:,2]
return TB
def u_rent(a):
'''
Input:
action a: c, b, k, h = a
Output:
reward value: the length of return should be equal to the length of a
'''
c = a[:,0]
h = a[:,3]
C = np.float_power(c, alpha) * np.float_power(h, 1-alpha)
return u(C)
def u_own(a):
'''
Input:
action a: c, b, k, M, H = a
Output:
reward value: the length of return should be equal to the length of a
'''
c = a[:,0]
H = a[:,4]
C = np.float_power(c, alpha) * np.float_power((1+kappa)*H, 1-alpha)
return u(C)
#Define the earning function, which applies for both employment and unemployment, good econ state and bad econ state
def y(t, x):
w, n, e, s = x
if t <= T_R:
welfare = 5
return detEarning[t] * earningShock[int(s)] * e + (1-e) * welfare
else:
return detEarning[t]
#Earning after tax and fixed by transaction in and out from 401k account
def yAT(t,x):
yt = y(t, x)
w, n, e, s = x
if t <= T_R and e == 1:
# 5% of the income will be put into the 401k
return (1-tau_L)*(yt * (1-yi))
if t <= T_R and e == 0:
return yt
else:
# t > T_R, n/discounting amount will be withdraw from the 401k
return (1-tau_R)*yt + n/Dt[t]
#Define the evolution of the amount in 401k account
def gn(t, n, x, s_next):
w, n, e, s = x
if t <= T_R and e == 1:
# if the person is employed, then 5 percent of his income goes into 401k
n_cur = n + y(t, x) * yi
elif t <= T_R and e == 0:
# if the person is unemployed, then n does not change
n_cur = n
else:
# t > T_R, n/discounting amount will be withdraw from the 401k
n_cur = n - n/Dt[t]
return (1+r_k[int(s), s_next])*n_cur
# +
def transition_to_rent(x,a,t):
'''
imput a is np array constains all possible actions
from x = [w, n, e, s] to x = [w, n, e, s]
'''
w, n, e, s = x
# variables used to collect possible states and probabilities
x_next = []
prob_next = []
for aa in a:
c, b, k, h = aa
for s_next in [0,1]:
w_next = b*(1+r_b[int(s)]) + k*(1+r_k[int(s), s_next])
n_next = gn(t, n, x, s_next)
if t >= T_R:
e_next = 0
x_next.append([w_next, n_next, e_next, s_next])
prob_next.append(Ps[int(s),s_next])
else:
for e_next in [0,1]:
x_next.append([w_next, n_next, e_next, s_next])
prob_next.append(Ps[int(s),s_next] * Pe[int(s),s_next,int(e),e_next])
return np.array(x_next), np.array(prob_next)
def transition_to_own(x,a,t):
'''
Input: a is an np array containing all possible actions
from x = [w, n, e, s] to x = [w, n, M, g=0, e, s, H]
'''
w, n, e, s = x
# variables used to collect possible states and probabilities
x_next = []
prob_next = []
for aa in a:
c, b, k, M, H = aa
M_next = M*(1+rh)
for s_next in [0,1]:
w_next = b*(1+r_b[int(s)]) + k*(1+r_k[int(s), s_next])
n_next = gn(t, n, x, s_next)
if t >= T_R:
e_next = 0
x_next.append([w_next, n_next, M_next, 0, e_next, s_next, H])
prob_next.append(Ps[int(s),s_next])
else:
for e_next in [0,1]:
x_next.append([w_next, n_next, M_next, 0, e_next, s_next, H])
prob_next.append(Ps[int(s),s_next] * Pe[int(s),s_next,int(e),e_next])
return np.array(x_next), np.array(prob_next)
# +
# used to calculate dot product
def dotProduct(p_next, uBTB, t):
if t >= T_R:
return (p_next*uBTB).reshape((len(p_next)//2,2)).sum(axis = 1)
else:
return (p_next*uBTB).reshape((len(p_next)//4,4)).sum(axis = 1)
# The value function is a function of state and time; per the model restriction, the transfer from renting to owning can only happen
# between the ages of 10 and 25
def V(x, t, NN):
w, n, e, s = x
yat = yAT(t,x)
# first define the objective function solver and then the objective function
def obj_solver_rent(obj_rent):
# a = [c, b, k, h]
# Constrain: yat + w = c + b + k + pr*h
# h_portion takes [0:0.05:0.95]
# c_portion takes remaining [0:0.05:0.95]
# b_portion takes remaining [0:0.05:0.95]
# k is the remainder
actions = []
for hp in np.linspace(0,0.99,20):
budget1 = yat + w
h = budget1 * hp/pr
budget2 = budget1 * (1-hp)
for cp in np.linspace(0,1,11):
c = budget2*cp
budget3 = budget2 * (1-cp)
for bp in np.linspace(0,1,11):
b = budget3* bp
k = budget3 * (1-bp)
# q = 1 not renting in this case
actions.append([c,b,k,h])
actions = np.array(actions)
values = obj_rent(actions)
fun = np.max(values)
ma = actions[np.argmax(values)]
return fun, ma
def obj_solver_own(obj_own):
# a = [c, b, k, M, H]
# possible values: H in {750, 1000, 1500, 2000}; M in [0.2H, 0.4H, 0.6H, 0.8H]*pt (the grid actually searched below is [0.1, 0.2, 0.3]*H*pt)
# (M, t, rh) --> m
# Constrain: yat + w = c + b + k + (H*pt - M) + ch
H_options = [750, 1000, 1500, 2000]
M_options = [0.1, 0.2, 0.3]
actions = []
for H in H_options:
for mp in M_options:
M = mp*H*pt
m = M/D[T_max - t - 1]
# 5 is the welfare income which is also the minimum income
if (H*pt - M) + c_h <= yat + w and m < pr*H + 5:
budget1 = yat + w - (H*pt - M) - c_h
# c_portion takes remaining [0:0.05:0.95]
# b_portion takes remaining [0:0.05:0.95]
# k is the remainder
for cp in np.linspace(0,1,11):
c = budget1*cp
budget2 = budget1 * (1-cp)
for bp in np.linspace(0,1,11):
b = budget2* bp
k = budget2 * (1-bp)
actions.append([c,b,k,M,H])
if len(actions) == 0:
return -np.inf, [0,0,0,0,0]
else:
actions = np.array(actions)
values = obj_own(actions)
fun = np.max(values)
ma = actions[np.argmax(values)]
return fun, ma
if t == T_max-1:
# The objective function of renting
def obj_rent(actions):
# a = [c, b, k, h]
x_next, p_next = transition_to_rent(x, actions, t)
uBTB = uB(calTB_rent(x_next))
return u_rent(actions) + beta * dotProduct(uBTB, p_next, t)
fun, action = obj_solver_rent(obj_rent)
return np.array([fun, action])
elif t < 10 or t > 25:
# The objective function of renting
def obj_rent(actions):
# a = [c, b, k, h]
x_next, p_next = transition_to_rent(x, actions, t)
V_tilda = NN.predict(x_next) # V_rent_{t+1} used to approximate, shape of x is [w,n,e,s]
uBTB = uB(calTB_rent(x_next))
return u_rent(actions) + beta * (Pa[t] * dotProduct(V_tilda, p_next, t) + (1 - Pa[t]) * dotProduct(uBTB, p_next, t))
fun, action = obj_solver_rent(obj_rent)
return np.array([fun, action])
else:
# The objective function of renting
def obj_rent(actions):
# a = [c, b, k, h]
x_next, p_next = transition_to_rent(x, actions, t)
V_tilda = NN.predict(x_next) # V_rent_{t+1} used to approximate, shape of x is [w,n,e,s]
uBTB = uB(calTB_rent(x_next))
return u_rent(actions) + beta * (Pa[t] * dotProduct(V_tilda, p_next, t) + (1 - Pa[t]) * dotProduct(uBTB, p_next, t))
# The objective function of owning
def obj_own(actions):
# a = [c, b, k, M, H]
x_next, p_next = transition_to_own(x, actions, t)
V_tilda = NN.predict(x_next) # V_own_{t+1} used to approximate, shape of x is [w,n,0,e,s,H,M]
uBTB = uB(calTB_own(x_next))
return u_own(actions) + beta * (Pa[t] * dotProduct(V_tilda, p_next, t) + (1 - Pa[t]) * dotProduct(uBTB, p_next, t))
fun1, action1 = obj_solver_rent(obj_rent)
fun2, action2 = obj_solver_own(obj_own)
if fun1 > fun2:
return np.array([fun1, action1])
else:
return np.array([fun2, action2])
# +
# wealth discretization
ws = np.array([10,25,50,75,100,125,150,175,200,250,500,750,1000,1500,3000])
w_grid_size = len(ws)
# 401k amount discretization
ns = np.array([1, 5, 10, 15, 25, 40, 65, 100, 150, 300, 400,1000])
n_grid_size = len(ns)
xgrid = np.array([[w, n, e, s]
for w in ws
for n in ns
for e in [0,1]
for s in [0,1]
]).reshape((w_grid_size, n_grid_size,2,2,4))
Vgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
cgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
bgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
kgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
hgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
# Policy function of buying a house
Mgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
Hgrid = np.zeros((w_grid_size, n_grid_size,2,2, T_max-T_min))
# -
V1000 = np.load("Vgrid1000.npy")
V1500 = np.load("Vgrid1500.npy")
V2000 = np.load("Vgrid2000.npy")
V750 = np.load("Vgrid750.npy")
Vown = [V750, V1000, V1500, V2000]
Hs = [750, 1000, 1500, 2000]
class iApproxy(object):
def __init__(self, pointsRent, Vrent, Vown, t):
self.Vrent = Vrent
self.Vown = Vown
self.Prent = pointsRent
self.t = t
def predict(self, xx):
if xx.shape[1] == 4:
# x = [w, n, e, s]
pvalues = np.zeros(xx.shape[0])
index00 = (xx[:,2] == 0) & (xx[:,3] == 0)
index01 = (xx[:,2] == 0) & (xx[:,3] == 1)
index10 = (xx[:,2] == 1) & (xx[:,3] == 0)
index11 = (xx[:,2] == 1) & (xx[:,3] == 1)
pvalues[index00]=interpn(self.Prent, self.Vrent[:,:,0,0], xx[index00][:,:2], bounds_error = False, fill_value = None)
pvalues[index01]=interpn(self.Prent, self.Vrent[:,:,0,1], xx[index01][:,:2], bounds_error = False, fill_value = None)
pvalues[index10]=interpn(self.Prent, self.Vrent[:,:,1,0], xx[index10][:,:2], bounds_error = False, fill_value = None)
pvalues[index11]=interpn(self.Prent, self.Vrent[:,:,1,1], xx[index11][:,:2], bounds_error = False, fill_value = None)
return pvalues
else:
# x = w, n, M, g=0, e, s, H
pvalues = np.zeros(xx.shape[0])
for i in range(len(Hs)):
H = Hs[i]
# Mortgage amount, * 0.25 is the housing price per unit
Ms = np.array([0.01*H,0.05*H,0.1*H,0.2*H,0.3*H,0.4*H,0.5*H,0.6*H,0.7*H,0.8*H]) * pt
points = (ws,ns,Ms)
index00 = (xx[:,4] == 0) & (xx[:,5] == 0) & (xx[:,6] == H)
index01 = (xx[:,4] == 0) & (xx[:,5] == 1) & (xx[:,6] == H)
index10 = (xx[:,4] == 1) & (xx[:,5] == 0) & (xx[:,6] == H)
index11 = (xx[:,4] == 1) & (xx[:,5] == 1) & (xx[:,6] == H)
pvalues[index00]=interpn(points, self.Vown[i][:,:,:,0,0,0,self.t], xx[index00][:,:3], method = "nearest",bounds_error = False, fill_value = None)
pvalues[index01]=interpn(points, self.Vown[i][:,:,:,0,0,1,self.t], xx[index01][:,:3], method = "nearest",bounds_error = False, fill_value = None)
pvalues[index10]=interpn(points, self.Vown[i][:,:,:,0,1,0,self.t], xx[index10][:,:3], method = "nearest",bounds_error = False, fill_value = None)
pvalues[index11]=interpn(points, self.Vown[i][:,:,:,0,1,1,self.t], xx[index11][:,:3], method = "nearest",bounds_error = False, fill_value = None)
return pvalues
# +
# %%time
# value iteration part
xs = xgrid.reshape((w_grid_size*n_grid_size*2*2,4))
pool = Pool()
pointsRent = (ws, ns)
for t in range(T_max-1,T_min, -1):
print(t)
if t == T_max - 1:
f = partial(V, t = t, NN = None)
results = np.array(pool.map(f, xs))
else:
approx = iApproxy(pointsRent,Vgrid[:,:,:,:,t+1], Vown, t+1)
f = partial(V, t = t, NN = approx)
results = np.array(pool.map(f, xs))
# results need some cleanup here because the action vectors have different lengths.
# a = [c,b,k,h] or a = [c,b,k,M,H]
Vgrid[:,:,:,:,t] = results[:,0].reshape((w_grid_size,n_grid_size,2,2))
cgrid[:,:,:,:,t] = np.array([r[0] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,2,2))
bgrid[:,:,:,:,t] = np.array([r[1] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,2,2))
kgrid[:,:,:,:,t] = np.array([r[2] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,2,2))
# if a = [c, b, k, h]
hgrid[:,:,:,:,t] = np.array([r[3] if len(r) == 4 else r[4] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,2,2))
# if a = [c, b, k, M, H]
Mgrid[:,:,:,:,t] = np.array([r[3] if len(r) == 5 else 0 for r in results[:,1]]).reshape((w_grid_size,n_grid_size,2,2))
Hgrid[:,:,:,:,t] = np.array([r[4] if len(r) == 5 else 0 for r in results[:,1]]).reshape((w_grid_size,n_grid_size,2,2))
pool.close()
np.save("Vgrid_renting",Vgrid)
np.save("cgrid_renting",cgrid)
np.save("bgrid_renting",bgrid)
np.save("kgrid_renting",kgrid)
np.save("hgrid_renting",hgrid)
np.save("Mgrid_renting",Mgrid)
np.save("Hgrid_renting",Hgrid)
# -
for tt in range(10,25):
print(Hgrid[:,1,1,1,tt])
for tt in range(10,25):
print(Hgrid[:,1,0,1,tt])
for tt in range(10,25):
print(Hgrid[:,1,1,0,tt])
for tt in range(10,25):
print(Mgrid[:,1,0,0,tt])
for tt in range(10,25):
print(Mgrid[:,1,1,1,tt])
750*pt
plt.plot(V2000[:,0,0,0,1,1,10], 'g')
plt.plot(V1500[:,0,0,0,1,1,10], 'y')
plt.plot(V1000[:,0,0,0,1,1,10], 'b')
plt.plot(V750[:,0,0,0,1,1,10], 'r')
plt.plot(V2000[:,5,0,0,1,1,10], 'g')
plt.plot(V1500[:,5,0,0,1,1,10], 'y')
plt.plot(V1000[:,5,0,0,1,1,10], 'b')
plt.plot(V750[:,5,0,0,1,1,10], 'r')
plt.plot(V2000[:,1,0,0,1,1,10],'r')
plt.plot(V2000[:,1,4,0,1,1,10],'g')
plt.plot(V2000[:,1,8,0,1,1,10],'b')
plt.plot(V750[:,1,0,0,1,1,10], 'r')
plt.plot(V750[:,1,4,0,1,1,10], 'g')
plt.plot(V750[:,1,8,0,1,1,10], 'b')
| 20200903/.ipynb_checkpoints/renting-Copy1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %run indexget
import tushare as ts
import os
import pandas as pd
import numpy as np
from datetime import datetime as dt
from datetime import timedelta
import urllib
import csv
import datetime
import json
import sqlite3
import time
import schedule
os.chdir('/home/dwhang/project/dwhang/Industry_index/')
def index_download(code=''):
hostname = 'http://www.csindex.com.cn/uploads/file/autofile/cons/%s'
dis = '/home/dwhang/project/dwhang/Industry_index/content'
tt = code + 'cons.xls'
os.chdir(dis)
url = hostname % tt
try:
req = urllib.request.Request(url)
webpage = urllib.request.urlopen(req)
except Exception as e:
print(code +' does not exist')
pass
else:
filename=tt.replace('cons','')
f=open(filename,'wb')
ans = webpage.read()
f.write(ans)
f.close()
temp=pd.read_excel(filename,dtype={'成分券代码Constituent Code':str})
temp=temp.iloc[:,[0,2,4,5]]
temp.columns=['日期','指数名称','成分券代码','成分券名称']
temp=temp.set_index('日期')
temp.to_excel(filename,encoding='utf-8')
os.remove(filename)
def content_update():
index_info=pd.read_csv('index.csv',dtype={'指数代码':str}).指数代码
for code in index_info:
index_download(code=code)
| Industry_index/indexget.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Example of how to plot a corner reflector raster
# ----------------------------------------------------
#
# This example shows how to plot a corner reflector
# raster scan; it also analyzes the data and returns
# the corner reflector location information.
#
# +
import radtraq
import act
import matplotlib.pyplot as plt
# Read in sample data using ACT
f = radtraq.tests.sample_files.EXAMPLE_RASTER
obj = act.io.armfiles.read_netcdf(f)
# Process and plot raster file
data = radtraq.plotting.corner_reflector.plot_cr_raster(obj, target_range=478.,
el_limits=[-0.5, 2.5], noplot=False)
print(data)
plt.show()
obj.close()
| docs/source/source/auto_examples/plot_corner_reflector_raster.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# First, all required packages and modules are imported.
# Besides modules from the standard library (`random`, `sys`, `time`, and `uuid`) and the data structures `defaultdict` and `OrderedDict` from the `collections` module, the *python-chess* package (`chess`) is needed, which has already been installed.
# In addition, some display functions from IPython are required; these are provided by the installation and use of Jupyter notebooks.
# + pycharm={"is_executing": false}
import random
import sys
import time
import uuid
import chess
import chess.engine
import chess.polyglot
import chess.svg
from collections import defaultdict
from collections import OrderedDict
from IPython.display import display, HTML, clear_output
# + [markdown] pycharm={"name": "#%% md\n"}
# First, global constants are defined.
#
# - `QUIESCENCE_SEARCH_DEPTH` is the maximum search depth of the quiescence search.
# - `TABLE_SIZE` is the maximum size of the transposition tables.
# - `TIMEOUT_SECONDS` sets the amount of time the chess engine has to compute its next move.
# + pycharm={"is_executing": false, "name": "#%%\n"}
QUIESCENCE_SEARCH_DEPTH: int = 20
TABLE_SIZE: int = int(1.84e19)
TIMEOUT_SECONDS: int = 30
# + [markdown] pycharm={"name": "#%% md\n"}
# In the next step, some global variables are defined.
# This step is necessary to keep the function signatures as close as possible to the pseudocode from the theory part and the literature.
#
# - The variable `best_move` holds the best move for the current iteration of the `iterative_deepening` function.
# - By contrast, the variable `global_best_move` stores the best move across all iterations of iterative deepening, and thus the move the chess engine will actually play at the end of its turn.
# - `current_depth` stores the current depth for iterative deepening.
# - `endgame` is a flag that is set to True as soon as the game reaches the endgame.
# - `is_timeout` is true exactly when the chess engine's time has run out.
# - `move_scores` is a dictionary that stores the score for every move already visited during iterative deepening.
# Specifically, it is a nested dictionary:
# the top-level keys are positions in Forsyth–Edwards Notation (FEN),
# and each value is in turn a dictionary mapping moves to their scores.
# - `piece_zobrist_values` is a list of lists; each inner list represents one square and contains a unique Zobrist hash for every individual piece.
# - `repetition_table` stores positions in order to check whether a position has already occurred twice.
# - The variable `start_time` holds the time at which the chess engine's move started.
# - `transposition_table` is an LRU cache with the maximum size `TABLE_SIZE`, implementing the transposition tables.
# - `zobrist_turn` is a hash that is XORed into the Zobrist hash whenever the side to move changes.
# + pycharm={"is_executing": false, "name": "#%%\n"}
best_move = None
current_depth = 0
endgame = False
global_best_move = None
is_timeout = False
move_scores = defaultdict(dict)
piece_zobrist_values = []
repetition_table = {}
start_time = 0.0
transposition_table = None
zobrist_turn = 0
# + [markdown] pycharm={"name": "#%% md\n"}
# Now that all required global definitions are in place, the evaluation heuristic is implemented.
# Since the evaluation function consists of two parts, these are implemented separately and combined at the end.
# First, the piece values are implemented.
# For this, a dictionary `piece_values` is created that assigns each piece its corresponding value.
# + pycharm={"is_executing": false, "name": "#%%\n"}
piece_values = {
chess.BISHOP: 330,
chess.KING: 20_000,
chess.KNIGHT: 320,
chess.PAWN: 100,
chess.QUEEN: 900,
chess.ROOK: 500,
}
# -
# The function `get_piece_value` returns the piece value for a given piece on the chessboard.
# If the piece is black, the negated value is returned, since Black is the minimizing player.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def get_piece_value(piece: chess.Piece) -> int:
factor = -1 if piece.color == chess.BLACK else 1
return factor * piece_values.get(piece.piece_type)
# + [markdown] pycharm={"name": "#%% md\n"}
# Next, the piece-specific piece-square tables are implemented.
# For this, a dictionary `piece_squared_tables` is created that stores the corresponding table for each piece.
# The tables are stored as tuples of tuples, since modifying the values during the game is not allowed.
# The tables are taken from the theory part and are therefore to be read from White's point of view.
# Square A1 is thus at the bottom left, corresponding to index `[7][0]`.
# + pycharm={"is_executing": false, "name": "#%%\n"}
piece_squared_tables = {
chess.BISHOP: (
(-20, -10, -10, -10, -10, -10, -10, -20),
(-10, 0, 0, 0, 0, 0, 0, -10),
(-10, 0, 5, 10, 10, 5, 0, -10),
(-10, 5, 5, 10, 10, 5, 5, -10),
(-10, 0, 10, 10, 10, 10, 0, -10),
(-10, 10, 10, 10, 10, 10, 10, -10),
(-10, 5, 0, 0, 0, 0, 5, -10),
(-20, -10, -10, -10, -10, -10, -10, -20),
),
chess.KING: (
(-30, -40, -40, -50, -50, -40, -40, -30),
(-30, -40, -40, -50, -50, -40, -40, -30),
(-30, -40, -40, -50, -50, -40, -40, -30),
(-30, -40, -40, -50, -50, -40, -40, -30),
(-20, -30, -30, -40, -40, -30, -30, -20),
(-10, -20, -20, -20, -20, -20, -20, -10),
( 20, 20, 0, 0, 0, 0, 20, 20),
( 20, 30, 10, 0, 0, 10, 30, 20),
),
chess.KNIGHT: (
(-50, -40, -30, -30, -30, -30, -40, -50),
(-40, -20, 0, 0, 0, 0, -20, -40),
(-30, 0, 10, 15, 15, 10, 0, -30),
(-30, 5, 15, 20, 20, 15, 5, -30),
(-30, 0, 15, 20, 20, 15, 0, -30),
(-30, 5, 10, 15, 15, 10, 5, -30),
(-40, -20, 0, 5, 5, 0, -20, -40),
(-50, -40, -30, -30, -30, -30, -40, -50),
),
chess.PAWN: (
( 0, 0, 0, 0, 0, 0, 0, 0),
( 50, 50, 50, 50, 50, 50, 50, 50),
( 10, 10, 20, 30, 30, 20, 10, 10),
( 5, 5, 10, 25, 25, 10, 5, 5),
( 0, 0, 0, 20, 20, 0, 0, 0),
( 5, -5, -10, 0, 0, -10, -5, 5),
( 5, 10, 10, -20, -20, 10, 10, 5),
( 0, 0, 0, 0, 0, 0, 0, 0),
),
chess.QUEEN: (
(-20, -10, -10, -5, -5, -10, -10, -20),
(-10, 0, 0, 0, 0, 0, 0, -10),
(-10, 0, 5, 5, 5, 5, 0, -10),
( -5, 0, 5, 5, 5, 5, 0, -5),
( 0, 0, 5, 5, 5, 5, 0, -5),
(-10, 5, 5, 5, 5, 5, 0, -10),
(-10, 0, 5, 0, 0, 0, 0, -10),
(-20, -10, -10, -5, -5, -10, -10, -20),
),
chess.ROOK: (
( 0, 0, 0, 0, 0, 0, 0, 0),
( 5, 10, 10, 10, 10, 10, 10, 5),
( -5, 0, 0, 0, 0, 0, 0, -5),
( -5, 0, 0, 0, 0, 0, 0, -5),
( -5, 0, 0, 0, 0, 0, 0, -5),
( -5, 0, 0, 0, 0, 0, 0, -5),
( -5, 0, 0, 0, 0, 0, 0, -5),
( 0, 0, 0, 5, 5, 0, 0, 0),
),
}
# -
# In the endgame, a different table is used for the king. It is defined below.
# + pycharm={"is_executing": false, "name": "#%%\n"}
kings_end_game_squared_table = (
(-50, -40, -30, -20, -20, -30, -40, -50),
(-30, -20, -10, 0, 0, -10, -20, -30),
(-30, -10, 20, 30, 30, 20, -10, -30),
(-30, -10, 30, 40, 40, 30, -10, -30),
(-30, -10, 30, 40, 40, 30, -10, -30),
(-30, -10, 20, 30, 30, 20, -10, -30),
(-30, -30, 0, 0, 0, 0, -30, -30),
(-50, -30, -30, -30, -30, -30, -30, -50),
)
# -
# The python-chess package assigns an integer value to every square.
# Square A1 gets the value 0, square B1 the value 1, and so on, up to the last square H8, which gets the value 63.
# To access the positional values efficiently based on a square's integer value, the rows of the piece-square tables must be reversed so that rank 1 gets index 0 and rank 8 gets index 7.
# To this end, the code below iterates over every key-value pair of the `piece_squared_tables` dictionary, converts each table into a list, reverses that list, and converts it back into a tuple.
# Analogously, the single table stored in the variable `kings_end_game_squared_table` is converted into a list, and its elements are written back into the variable in reversed order as a tuple.
piece_squared_tables = {key: tuple(reversed(list(value)))
for key, value in piece_squared_tables.items()}
kings_end_game_squared_table = tuple(reversed(list(kings_end_game_squared_table)))
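# The two transformations can be illustrated on a small standalone example (a hypothetical 2x2 "table", not one of the real tables above):

```python
# Hypothetical 2x2 table; row 0 is the top row, as in the definitions above.
table = ((1, 2),
         (3, 4))

# Reversing the rows puts rank 1 at index 0 (used for both players' tables).
white_view = tuple(reversed(list(table)))

# Rotating by 180 degrees reverses rows AND columns (used for Black's tables).
black_view = tuple(row[::-1] for row in table[::-1])

print(white_view)  # ((3, 4), (1, 2))
print(black_view)  # ((4, 3), (2, 1))
```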
# The piece-square tables defined so far are written from the perspective of White, the maximizing player.
# Since the chess engine should be able to play both colors, and thus also against itself, the piece-square tables for the minimizing player still need to be generated.
# As for the maximizing player, the rows of the tables are reversed.
# But because the table as a whole has to be rotated by 180°, all columns (i.e. the elements within each row) are reversed as well.
# The resulting dictionary and the resulting piece-square table for the king in the endgame are stored under the corresponding variable names with the prefix `reversed_`.
reversed_piece_squared_tables = {key: tuple([
piece[::-1]
for piece in value][::-1])
for key, value in piece_squared_tables.items()}
reversed_kings_end_game_squared_table = tuple([
piece[::-1]
for piece in kings_end_game_squared_table][::-1])
# The function `get_piece_squared_tables_value` returns the value of the piece-specific piece-square tables for a given piece `piece` on the board.
# The row and column index into the table are computed from the integer value of the square the piece stands on (`square`):
# the row index is the result of integer division by 8, and the column index is the remainder of that division.
# In addition, a parameter `end_game` is needed, which is true exactly when the game is in the endgame and a different piece-square table is used for the king.
# As before, the value found is negated if the piece's color is black, i.e. it belongs to the minimizing player.
def get_piece_squared_tables_value(piece: chess.Piece, square: int, end_game: bool = False) -> int:
factor = -1 if piece.color == chess.BLACK else 1
row = square // 8
column = square % 8
if end_game and piece.piece_type == chess.KING:
if piece.color == chess.WHITE:
return kings_end_game_squared_table[row][column]
else:
return reversed_kings_end_game_squared_table[row][column]
if piece.color == chess.WHITE:
piece_squared_table = piece_squared_tables.get(piece.piece_type)
else:
piece_squared_table = reversed_piece_squared_tables.get(piece.piece_type)
return factor * piece_squared_table[row][column]
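# The index arithmetic above can be checked with a small standalone sketch (the helper `square_to_row_col` is introduced here only for illustration):

```python
# python-chess numbers squares 0 (A1) through 63 (H8), rank by rank.
def square_to_row_col(square: int):
    # Row: integer division by 8; column: remainder of the division by 8.
    return square // 8, square % 8

print(square_to_row_col(0))   # A1 -> (0, 0)
print(square_to_row_col(63))  # H8 -> (7, 7)
print(square_to_row_col(27))  # D4 -> (3, 3)
```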
# The function `simple_eval_heuristic` implements the simple evaluation heuristic.
# It receives the current position `board` and whether the game is in the endgame (`end_game`) as input parameters.
# The function inspects every square of the chessboard.
# If a piece stands on the square, its piece value and positional value are determined and added to the final result, which is stored in the variable `piece_value`.
# The result is then returned.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def simple_eval_heuristic(board: chess.Board, end_game: bool = False) -> int:
piece_value = 0
for square in range(64):
piece = board.piece_at(square)
if not piece:
continue
piece_value += get_piece_value(piece)
piece_value += get_piece_squared_tables_value(piece, square, end_game)
return piece_value
# -
# The two dictionaries `zobrist_values_white` and `zobrist_values_black` are needed for the implementation of Zobrist hashing.
zobrist_values_white = {
chess.PAWN: 1,
chess.KNIGHT: 2,
chess.BISHOP: 3,
chess.ROOK: 4,
chess.QUEEN: 5,
chess.KING: 6,
}
zobrist_values_black = {
chess.PAWN: 7,
chess.KNIGHT: 8,
chess.BISHOP: 9,
chess.ROOK: 10,
chess.QUEEN: 11,
chess.KING: 12,
}
# The function `init_zobrist_list()` initializes the Zobrist keys for every piece on each of the 64 squares.
# `uuid` is used in combination with the mask `& (1 << 64) - 1` to generate unique 64-bit hashes.
# The generated hashes are stored in the two-dimensional list `piece_zobrist_values`, following the order given by `zobrist_values_white` and `zobrist_values_black`.
# The function is then called to initialize the list.
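# The 64-bit masking used here can be sketched on its own:

```python
import uuid

# uuid4() yields a 128-bit random integer; masking with (1 << 64) - 1
# keeps only the low 64 bits, producing a 64-bit Zobrist-style key.
key = uuid.uuid4().int & (1 << 64) - 1
print(0 <= key < (1 << 64))  # True
```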
# +
def init_zobrist_list():
    global piece_zobrist_values
    global zobrist_turn
    zobrist_turn = uuid.uuid4().int & (1 << 64) - 1
for i in range(0, 64):
NO_PIECE = uuid.uuid4().int & (1 << 64) - 1
WHITE_PAWN = uuid.uuid4().int & (1 << 64) - 1
WHITE_KNIGHT = uuid.uuid4().int & (1 << 64) - 1
WHITE_BISHOP = uuid.uuid4().int & (1 << 64) - 1
WHITE_ROOK = uuid.uuid4().int & (1 << 64) - 1
WHITE_QUEEN = uuid.uuid4().int & (1 << 64) - 1
WHITE_KING = uuid.uuid4().int & (1 << 64) - 1
BLACK_PAWN = uuid.uuid4().int & (1 << 64) - 1
BLACK_KNIGHT = uuid.uuid4().int & (1 << 64) - 1
BLACK_BISHOP = uuid.uuid4().int & (1 << 64) - 1
BLACK_ROOK = uuid.uuid4().int & (1 << 64) - 1
BLACK_QUEEN = uuid.uuid4().int & (1 << 64) - 1
BLACK_KING = uuid.uuid4().int & (1 << 64) - 1
piece_zobrist_values.append(
[
NO_PIECE,
WHITE_PAWN,
WHITE_KNIGHT,
WHITE_BISHOP,
WHITE_ROOK,
WHITE_QUEEN,
WHITE_KING,
BLACK_PAWN,
BLACK_KNIGHT,
BLACK_BISHOP,
BLACK_ROOK,
BLACK_QUEEN,
BLACK_KING,
]
)
init_zobrist_list()
# -
# `zobrist_hash()` is used to convert a position into a unique Zobrist key.
# First, the position to be hashed is passed as the parameter `board`.
# Then an empty hash is created (`zobrist_hash = 0`), which is XORed with the piece- and square-specific Zobrist keys from the list `piece_zobrist_values`, according to the pieces and squares of the position.
# The dictionaries `zobrist_values_white` and `zobrist_values_black` are used to look up the correct indices for the piece types.
def zobrist_hash(board: chess.Board) -> int:
global zobrist_turn
zobrist_hash = 0
if board.turn == chess.WHITE:
zobrist_hash = zobrist_hash ^ zobrist_turn
for square in range(64):
piece = board.piece_at(square)
if not piece:
index = 0
elif piece.color == chess.WHITE:
index = zobrist_values_white.get(piece.piece_type)
elif piece.color == chess.BLACK:
index = zobrist_values_black.get(piece.piece_type)
zobrist_hash = zobrist_hash ^ piece_zobrist_values[square][index]
return zobrist_hash
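# The XOR property that makes this scheme work (and that the incremental update in the next function relies on) can be demonstrated without any chess objects:

```python
import random

# XOR is self-inverse: XORing the same key in twice cancels it out,
# so a piece can be "placed" and "removed" from a hash incrementally.
position_hash = random.getrandbits(64)
piece_key = random.getrandbits(64)

with_piece = position_hash ^ piece_key
without_piece = with_piece ^ piece_key

print(without_piece == position_hash)  # True
```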
# `zobrist_move()` updates an existing Zobrist key (`zobrist_hash`) representing the position (`board`) after a move (`move`) has been played.
#
# - First, it is determined which player is to move, because this information is needed to choose between the dictionaries `zobrist_values_white` and `zobrist_values_black`.
# - Next, the two squares the moving piece moves from and to are determined.
# - Then the piece types on the corresponding squares are determined.
# If no piece stands on the square the moving piece moves to, `NO_PIECE` is used, which is always located at position `[X][0]` of the array `piece_zobrist_values` (X in range(0, 64)).
# - To apply the move, the following sequence of XOR operations is performed: first, the piece type on the source square is XORed out of the hash. Then `NO_PIECE` is XORed in on that square, because the square is empty once the moving piece has left it. Next, the piece type on the target square is XORed out. Finally, the moving piece type is XORed in on the target square.
# - At the end, the flag `zobrist_turn` is XORed in, because the side to move changes after every move.
#
def zobrist_move(board, move, zobrist_hash):
white_to_move = board.turn
from_square = move.from_square
to_square = move.to_square
moving_piece = board.piece_at(from_square)
captured_piece = board.piece_at(to_square)
if white_to_move:
zobrist_hash = zobrist_hash ^ piece_zobrist_values[from_square][zobrist_values_white.get(moving_piece.piece_type)]
zobrist_hash = zobrist_hash ^ piece_zobrist_values[from_square][0]
if not captured_piece:
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][0]
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][zobrist_values_white.get(moving_piece.piece_type)]
else:
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][zobrist_values_black.get(captured_piece.piece_type)]
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][zobrist_values_white.get(moving_piece.piece_type)]
else:
zobrist_hash = zobrist_hash ^ piece_zobrist_values[from_square][zobrist_values_black.get(moving_piece.piece_type)]
zobrist_hash = zobrist_hash ^ piece_zobrist_values[from_square][0]
if not captured_piece:
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][0]
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][zobrist_values_black.get(moving_piece.piece_type)]
else:
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][zobrist_values_white.get(captured_piece.piece_type)]
zobrist_hash = zobrist_hash ^ piece_zobrist_values[to_square][zobrist_values_black.get(moving_piece.piece_type)]
    zobrist_hash = zobrist_hash ^ zobrist_turn
return zobrist_hash
# For the later implementation of the transposition tables, an LRU cache is implemented, represented by the class `LRUCache`.
# A detailed description of the implementation can be found in chapter 2.3.3.
# +
class LRUCache:
def __init__(self, size):
self.od = OrderedDict()
self.size = size
def get(self, key, default=None):
try:
self.od.move_to_end(key)
except KeyError:
return default
return self.od[key]
def __contains__(self, item):
return item in self.od
def __len__(self):
return len(self.od)
def __getitem__(self, key):
self.od.move_to_end(key)
return self.od[key]
def __setitem__(self, key, value):
try:
del self.od[key]
except KeyError:
if len(self.od) == self.size:
self.od.popitem(last=False)
self.od[key] = value
transposition_table = LRUCache(TABLE_SIZE)
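# The `OrderedDict` operations the cache is built on behave as follows (a standalone sketch, independent of the class above):

```python
from collections import OrderedDict

od = OrderedDict([("a", 1), ("b", 2), ("c", 3)])

# move_to_end marks an entry as most recently used.
od.move_to_end("a")      # order is now b, c, a

# popitem(last=False) evicts the least recently used entry.
od.popitem(last=False)   # removes "b"

print(list(od))  # ['c', 'a']
```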
# + [markdown] pycharm={"name": "#%% md\n"}
# Now that the evaluation heuristic and Zobrist hashing are implemented, alpha-beta pruning can be combined with iterative deepening.
# For this purpose, a function `iterative_deepening` is implemented that coordinates this process.
# It takes the following three parameters:
#
# - The parameter `board` contains the current position.
# - The initial search depth is passed via the parameter `depth`.
# - `color` represents the player currently to move.
#
# First, it is checked whether the game is currently in the endgame; if so, the global flag `endgame` is set to `True`.
# All other required information is stored globally.
# At the start of iterative deepening, it is checked whether the player has only one legal move.
# If so, that move is returned immediately and no further computation is necessary.
# If more than one legal move is possible, the current system time is stored globally and the variable `d` is initialized with 0.
# Here, `d` represents the extra depth that is added to the initial depth `depth`.
# In an endless loop, the search depth is successively increased and `minimize` or `maximize` is called.
# `alpha` and `beta` are initialized with a very low and a very high value, respectively.
# On every loop iteration, the global best move (`global_best_move`) is assigned the move from the variable `best_move`.
# If the return value equals `sys.maxsize` or `-sys.maxsize`, one of the players can force a checkmate, so the search is stopped at this point.
# Furthermore, it is checked whether the chess engine's time has run out.
# If so, the global best move is returned.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def iterative_deepening(board: chess.Board, depth: int, color: int):
global best_move
global current_depth
global global_best_move
global is_timeout
global start_time
global endgame
if not endgame and check_endgame(board):
endgame = True
zobrist = zobrist_hash(board)
increment_repetiton_table(zobrist)
    legal_moves = list(board.legal_moves)
    if len(legal_moves) == 1:
        return legal_moves[0]
is_timeout = False
start_time = time.time()
d = 0
current_score = 0
while True:
if d > 1:
global_best_move = best_move
print(f"Completed search with depth {current_depth}. Best move so far: {global_best_move} (Score: {current_score})")
if current_score == sys.maxsize or current_score == -sys.maxsize:
return global_best_move
current_depth = depth + d
if color == chess.BLACK:
current_score = minimize(board, current_depth, -sys.maxsize, sys.maxsize, color, zobrist)
else:
current_score = maximize(board, current_depth, -sys.maxsize, sys.maxsize, color, zobrist)
d += 1
if is_timeout:
return global_best_move
# -
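# Stripped of chess, hashing, and move ordering, the control flow of `iterative_deepening` looks roughly like this (a schematic sketch; `toy_search` is a hypothetical stand-in for `maximize`/`minimize`):

```python
import time

def toy_search(depth: int) -> int:
    # Hypothetical stand-in: deeper searches return "better" results.
    return depth * 10

def iterative_deepening_sketch(initial_depth: int, budget_seconds: float) -> int:
    start = time.time()
    best = None
    d = 0
    while True:
        # The result of each completed depth becomes the new best answer.
        best = toy_search(initial_depth + d)
        d += 1
        # Stop on timeout (or, for this sketch, after a few iterations).
        if time.time() - start > budget_seconds or d > 3:
            return best

print(iterative_deepening_sketch(1, 5.0))  # 40
```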
# The function `check_triple_repetition` checks whether the current encounter with a position is the third one. In that case `True` is returned, otherwise `False`. The function receives the Zobrist hash of the current position as its argument.
# The function is necessary because the function `board.can_claim_threefold_repetition()` defined by *python-chess* always unrolls the entire move stack to detect repetitions and is therefore very inefficient.
# +
def check_triple_repetition(zobrist_hash):
    # A position becomes a draw candidate on its third occurrence.
    return repetition_table.get(zobrist_hash, 0) > 2
# -
# `increment_repetition_table` is used to increment the entry with the key `zobrist_hash` in the dictionary by one.
def increment_repetiton_table(zobrist_hash: int):
if zobrist_hash in repetition_table:
times_encountered = repetition_table[zobrist_hash]
times_encountered = times_encountered + 1
repetition_table[zobrist_hash] = times_encountered
else:
repetition_table[zobrist_hash] = 1
# `decrement_repetition_table` is used to decrement the entry with the key `zobrist_hash` in the dictionary by one. If the value of `times_encountered` is one, the entry is deleted from the dictionary.
def decrement_repetition_table(zobrist_hash: int):
times_encountered = repetition_table[zobrist_hash]
if times_encountered == 1:
del repetition_table[zobrist_hash]
else:
times_encountered = times_encountered - 1
repetition_table[zobrist_hash] = times_encountered
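# The bookkeeping of the two functions above can be traced with a plain dictionary and a hypothetical hash value:

```python
repetitions = {}
key = 0xDEADBEEF  # hypothetical Zobrist hash

def bump(table: dict, k: int) -> None:
    # Same effect as incrementing the repetition table.
    table[k] = table.get(k, 0) + 1

bump(repetitions, key)
bump(repetitions, key)
print(repetitions[key] > 2)  # False: the position was seen only twice

bump(repetitions, key)
print(repetitions[key] > 2)  # True: third encounter, a draw can be claimed
```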
# The function `check_endgame` checks whether a position is in the endgame.
# It receives a position (`board`) as a parameter, iterates over the squares of the board, and counts all pieces except pawns and kings.
# As soon as the piece counter (`counter`) is greater than 4, it is certain that the game is not in the endgame, and `False` is returned.
# If the counter is still at most 4 after the loop, the game is in the endgame and `True` is returned.
def check_endgame(board: chess.Board) -> bool:
counter = 0
for square in range(64):
piece = board.piece_at(square)
if not piece:
continue
if piece.piece_type is not chess.PAWN and piece.piece_type is not chess.KING:
counter += 1
if counter > 4:
return False
return True
# + [markdown] pycharm={"name": "#%% md\n"}
# The implementation of the function `maximize` closely follows the pseudocode from the theory part.
# The function takes six parameters.
#
# - `board` contains the current position.
# - `depth` is the depth `maximize` is called with.
# - `alpha` represents the alpha variable of alpha-beta pruning.
# - `beta` represents the beta variable of alpha-beta pruning.
# - `color` represents the player to move and is passed through to the recursive calls.
# - `zobrist` contains the Zobrist hash of the current position.
#
# At the beginning, it is checked whether the chess engine's time has already run out.
# If so, the variable `is_timeout` is set to `True` and `alpha` is returned.
# Next, it is checked whether the position is checkmate (`board.is_checkmate()`) or stalemate (`board.is_stalemate()`), or whether a draw can be forced via the threefold repetition of a position. If one of these three cases applies, the corresponding score (`-sys.maxsize` or `0`) is returned immediately.
# After that, it is checked whether the transposition table `transposition_table` contains an entry for the Zobrist hash of the current position and the current depth `depth`. If so, the current alpha and beta values are compared with the alpha and beta values from the transposition table.
# If the current alpha-beta window lies within the alpha-beta window from the transposition table, the `score` stored in the table is simply returned. Otherwise, the minimum of the two alpha values and the maximum of the two beta values are determined and adopted as the current alpha and beta values.
# Next, it is checked whether the current search depth is less than 1.
# If so, the current position is evaluated with the quiescence search and the resulting value is returned.
# Because no move has been made yet, the quiescence search for `maximize` is called (`quiescence_search_maximize`).
# Note that the search depth is increased by one here.
# If the current search depth is not less than one, the variable `score` is assigned the value of `alpha`.
# Then all legal moves are sorted by their scores, so that more promising moves are evaluated first.
# If no score is available for a legal move, it is given a neutral score of 0.
# Subsequently, the code iterates over all legal moves; for each one, the Zobrist hash of the resulting position is computed with `zobrist_move` and passed as a parameter to the following calls of the `minimize` function.
# The variables `score` and `alpha` are updated based on the value of `move_score`, as in the pseudocode from the theory part.
# Additionally, the current move is assigned to `best_move` when the following two conditions hold:
#
# 1. The current score is greater than alpha, so the variable `alpha` is updated and a possible new best move has been found.
# 1. The current search depth equals the search depth with which the function `iterative_deepening` started the current iteration.
#
# Before the score of the current position is finally returned, the position and score are stored in the transposition table together with the current alpha and beta values.
# During the loop, the `move_score` of each move is stored in the dictionary `move_scores`, keyed by the position that serves as the starting position for the move.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def maximize(board: chess.Board, depth: int, alpha: int, beta: int, color: int, zobrist: int) -> int:
global is_timeout
global start_time
global best_move
global global_best_move
if time.time() - start_time > TIMEOUT_SECONDS:
is_timeout = True
return alpha
if board.is_checkmate():
return -sys.maxsize
if board.is_stalemate():
return 0
if check_triple_repetition(zobrist):
return 0
if (zobrist, depth) in transposition_table:
score, a, b = transposition_table[(zobrist, depth)]
if a <= alpha and beta <= b:
return score
else:
alpha = min(alpha, a)
beta = max(beta, b)
if depth < 1:
return quiescence_search_maximize(board, alpha, beta, 1)
score = alpha
board_scores = move_scores.get(board.fen(), dict())
moves = sorted(
board.legal_moves, key=lambda move: -board_scores.get(move, 0),
)
for move in moves:
new_zobrist = zobrist_move(board, move, zobrist)
increment_repetiton_table(new_zobrist)
        board.push(move)
        move_score = minimize(board, depth - 1, score, beta, color, new_zobrist)
        decrement_repetition_table(new_zobrist)
        board.pop()
        # Store the score under the starting position, which the move
        # ordering above looks up via board.fen().
        move_scores[board.fen()][move] = move_score
if move_score > score:
score = move_score
if depth == current_depth:
best_move = move
if score >= beta:
break
transposition_table[(zobrist, depth)] = score, alpha, beta
return score
# + [markdown] pycharm={"name": "#%% md\n"}
# The function `minimize` is largely identical to the function `maximize`, just as in the pseudocode from the theory part.
# The differences from the `maximize` implementation are:
#
# - The list of legal moves is sorted in ascending rather than descending order.
# The chess engine therefore starts with the move that has the lowest score.
# Moreover, the worst possible score for this player is now `sys.maxsize`.
# - The variable `beta` is updated instead of the variable `alpha`.
# It is updated when the current `score` is lower and therefore better.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def minimize(board: chess.Board, depth: int, alpha: int, beta: int, color: int, zobrist) -> int:
global best_move
global global_best_move
global is_timeout
global start_time
if time.time() - start_time > TIMEOUT_SECONDS:
is_timeout = True
return beta
if board.is_checkmate():
return sys.maxsize
if board.is_stalemate():
return 0
if check_triple_repetition(zobrist):
return 0
if (zobrist, depth) in transposition_table:
score, a, b = transposition_table[(zobrist, depth)]
if a <= alpha and beta <= b:
return score
else:
alpha = min(alpha, a)
beta = max(beta , b)
if depth < 1:
return quiescence_search_minimize(board, alpha, beta, 1)
score = beta
board_scores = move_scores.get(board.fen(), dict())
moves = sorted(
board.legal_moves, key=lambda move: board_scores.get(move, 0),
)
for move in moves:
new_zobrist = zobrist_move(board, move, zobrist)
increment_repetiton_table(new_zobrist)
        board.push(move)
        move_score = maximize(board, depth - 1, alpha, score, color, new_zobrist)
        decrement_repetition_table(new_zobrist)
        board.pop()
        # Store the score under the starting position, which the move
        # ordering above looks up via board.fen().
        move_scores[board.fen()][move] = move_score
if move_score < score:
score = move_score
            if depth == current_depth:
best_move = move
if score <= alpha:
break
transposition_table[(zobrist, depth)] = score, alpha, beta
return score
# -
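# The transposition-table probe shared by `maximize` and `minimize` can be sketched in isolation: the stored score is returned only when the stored window (a, b) covers the current (alpha, beta); otherwise the current window is widened to cover the stored bounds. The helper below is a hypothetical standalone version, not the notebook's code:

```python
def probe(table, key, alpha, beta):
    # Return (score, alpha, beta); score is None when the stored
    # window does not cover the current one, in which case the
    # window is widened to cover the stored bounds.
    if key not in table:
        return None, alpha, beta
    score, a, b = table[key]
    if a <= alpha and beta <= b:
        return score, alpha, beta
    return None, min(alpha, a), max(beta, b)

table = {("pos", 3): (17, -100, 100)}
print(probe(table, ("pos", 3), -50, 50))   # (17, -50, 50)
print(probe(table, ("pos", 3), -200, 50))  # (None, -200, 100)
```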
# The function `quiescence_search_maximize` implements the quiescence search from the maximizing player's point of view. It is therefore largely identical to the function `maximize` and also has the same parameter list. Once the maximum depth for the quiescence search (`QUIESCENCE_SEARCH_DEPTH`) is reached, the current position is evaluated with the evaluation heuristic and the result is returned.
# Only those moves that are considered favorable (`favorable_moves`) are taken into account.
def quiescence_search_maximize(board: chess.Board, alpha, beta, currentDepth: int):
global best_move
global global_best_move
global endgame
if currentDepth == QUIESCENCE_SEARCH_DEPTH:
return simple_eval_heuristic(board, endgame)
favorable_moves = []
moves = board.legal_moves
for move in moves:
if is_favorable_move(board, move):
favorable_moves.append(move)
    if not favorable_moves:
        return simple_eval_heuristic(board, endgame)
score = alpha
for move in favorable_moves:
board.push(move)
move_score = quiescence_search_minimize(board, score, beta, currentDepth + 1)
board.pop()
if move_score > score:
score = move_score
if score >= beta:
break
return score
# The function for the minimizing quiescence search, `quiescence_search_minimize`, is implemented analogously.
def quiescence_search_minimize(board: chess.Board, alpha, beta, currentDepth: int):
global best_move
global global_best_move
global endgame
if currentDepth == QUIESCENCE_SEARCH_DEPTH:
return simple_eval_heuristic(board, endgame)
moves = board.legal_moves
favorable_moves = []
for move in moves:
if is_favorable_move(board, move):
favorable_moves.append(move)
    if not favorable_moves:
        return simple_eval_heuristic(board, endgame)
score = beta
for move in favorable_moves:
board.push(move)
move_score = quiescence_search_maximize(board, alpha, score, currentDepth + 1)
board.pop()
if move_score < score:
score = move_score
if score <= alpha:
break
return score
# The function `is_favorable_move` checks whether a move leads to a favorable position, as described in the chapter on quiescence search.
# The function receives a position (`board`) and a move (`move`) as arguments.
# A move is considered favorable if it is a capture and either the capturing piece is worth less than the piece being captured, or the capturing side controls the target square with more attackers.
# Promotion moves are also always treated as favorable.
# En passant moves are filtered out beforehand, because in that case the captured pawn currently stands on a different square than the target square of the capture.
def is_favorable_move(board: chess.Board, move: chess.Move) -> bool:
if move.promotion is not None:
return True
if board.is_capture(move) and not board.is_en_passant(move):
if piece_values.get(board.piece_type_at(move.from_square)) < piece_values.get(
board.piece_type_at(move.to_square)
) or len(board.attackers(board.turn, move.to_square)) > len(
board.attackers(not board.turn, move.to_square)
):
return True
return False
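# Stripped of the python-chess API, the capture rule in `is_favorable_move` reduces to a piece-value comparison plus an attacker count. A library-free sketch with hypothetical values (not the notebook's `piece_values` table):

```python
def is_favorable_capture(attacker_value, victim_value,
                         own_attackers, enemy_attackers):
    # Favorable if the captured piece outvalues the capturing piece,
    # or if the capturing side has more attackers on the target square.
    return victim_value > attacker_value or own_attackers > enemy_attackers

print(is_favorable_capture(1, 9, 1, 2))  # True: pawn takes queen
print(is_favorable_capture(9, 1, 1, 2))  # False: queen takes a better-defended pawn
```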
# + [markdown] pycharm={"name": "#%% md\n"}
# The function `get_opening_data_base_moves` is a helper function that returns a move from the opening database for the current position (`board`), provided a move is found for the given position.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def get_opening_data_base_moves(board: chess.Board):
move = None
opening_moves = []
with chess.polyglot.open_reader("Performance.bin") as reader:
for entry in reader.find_all(board):
opening_moves.append(entry)
if opening_moves:
random_entry = random.choice(opening_moves)
move = random_entry.move
print(move)
return move
# + [markdown] pycharm={"name": "#%% md\n"}
# The display requires the player names.
# For this, a helper function `who` is implemented that takes a player as parameter and returns the corresponding name as a string.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def who(player):
return "White" if player == chess.WHITE else "Black"
# + [markdown] pycharm={"name": "#%% md\n"}
# Now that the chess engine is implemented, the interface for the human player follows.
# First, a helper function `get_move` is needed that converts the user input into a move, where possible.
# It also checks whether the game should be terminated early.
# This is the case when the user enters `q` (for *quit*) as the move.
# The function takes as parameter the prompt text shown to the user for the move input.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def get_move(prompt):
uci = input(prompt)
if uci and uci[0] == "q":
raise KeyboardInterrupt()
try:
chess.Move.from_uci(uci)
    except ValueError:
uci = None
return uci
# + [markdown] pycharm={"name": "#%% md\n"}
# The function `human_player` represents the human player and coordinates the player's moves.
# First, the current position is shown using the IPython function `display`.
# Then all legal moves for the current position are shown to the human player, who is prompted to make a move.
# This prompt is repeated until the user has entered a valid move or the abort criterion `q`.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def human_player(board):
display(board)
uci = get_move(f"{who(board.turn)}'s move [q to quit]>")
legal_uci_moves = [move.uci() for move in board.legal_moves]
while uci not in legal_uci_moves:
print(f"Legal moves: {(', '.join(sorted(legal_uci_moves)))}")
uci = get_move(f"{who(board.turn)}'s move [q to quit]>")
return uci
# + [markdown] pycharm={"is_executing": false, "name": "#%% md\n"}
# Now that all functions required for the chess engine are implemented, the function `ai_player` can be implemented.
# It represents the chess engine towards the game.
# The function has two parameters:
#
# - `board` contains the current position.
# - `color` represents the player to move.
#
# First, the available opening database is checked for a move matching the given position.
# If one exists, that move is returned.
# Otherwise, the global `move_scores` dictionary is reset, the function `iterative_deepening` is called, and the move it selects is returned.
# Note that when a move from the opening database is used, the global dictionaries for the move scores and the neighboring positions are reset, since otherwise there is no sufficient control over the search depths.
# This could be improved with a more elaborate implementation of the move ordering, but is not pursued further here.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def ai_player(board: chess.Board, color: int):
    global move_scores
    move = get_opening_data_base_moves(board)
    if move:
        move_scores = defaultdict(dict)
        return move
else:
return iterative_deepening(board, 0, color)
# + [markdown] pycharm={"name": "#%% md\n"}
# `play_game` is the function that coordinates the course of the game.
# The function can optionally be given a parameter `pause`.
# It determines how long the position and the move that led to it are displayed before the other player is to move.
# The value passed is interpreted as a number of seconds.
# After the chessboard is initialized, the two players take turns until the game is considered over or has been aborted.
# Finally, different results are displayed depending on the outcome of the game.
# + pycharm={"is_executing": false, "name": "#%%\n"}
def play_game(pause=1):
board = chess.Board()
try:
while not board.is_game_over(claim_draw=True):
if board.turn == chess.WHITE:
move = board.parse_uci(human_player(board))
else:
move = ai_player(board, chess.BLACK)
name = who(board.turn)
board.push(move)
            html = "<br/>%s" % board._repr_svg_()
clear_output(wait=True)
display(HTML(html))
time.sleep(pause)
except KeyboardInterrupt:
msg = "Game interrupted"
return None, msg, board
result = None
if board.is_checkmate():
msg = "checkmate: " + who(not board.turn) + " wins!"
result = not board.turn
elif board.is_stalemate():
msg = "draw: stalemate"
elif board.is_fivefold_repetition():
msg = "draw: fivefold repetition"
elif board.is_insufficient_material():
msg = "draw: insufficient material"
elif board.can_claim_draw():
msg = "draw: claim"
print(msg)
return result, msg, board
# + pycharm={"is_executing": false, "name": "#%%\n"}
play_game()
# notebooks/chess_ai.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sympy import *
EI, a, q = var("EI, a, q")
pprint("\nFEM-Solution:")
# 1: Stiffness Matrices:
# Element 1
l = 2*a
l2 = l*l
l3 = l*l*l
K = EI/l3 * Matrix(
[
[ 4*l2 , -6*l , 2*l2 , 6*l , 0 , 0 ],
[ -6*l , 12 , -6*l , -12 , 0 , 0 ],
[ 2*l2 , -6*l , 4*l2 , 6*l , 0 , 0 ],
[ 6*l , -12 , 6*l , 12 , 0 , 0 ],
[ 0 , 0 , 0 , 0 , 0 , 0 ],
[ 0 , 0 , 0 , 0 , 0 , 0 ],
]
)
# Element 2
l = a
l2 = l*l
l3 = l*l*l
K += EI/l3 * Matrix(
[
[ 0 , 0 , 0 , 0 , 0 , 0 ],
[ 0 , 0 , 0 , 0 , 0 , 0 ],
[ 0 , 0 , 4*l2 , -6*l , 2*l2 , 6*l ],
[ 0 , 0 , -6*l , 12 , -6*l , -12 ],
[ 0 , 0 , 2*l2 , -6*l , 4*l2 , 6*l ],
[ 0 , 0 , 6*l , -12 , 6*l , 12 ],
]
)
# 2: BCs:
p0,w0,p1,w1,p2,w2 = var("ψ₀,w₀,ψ₁,w₁,ψ₂,w₂")
M0,F0,M1,F1,M2,F2 = var("M₀,F₀,M₁,F₁,M₂,F₂")
Mq1, Fq1 = -q/12*a*a, q/2*a
Mq2, Fq2 = -Mq1, Fq1
# 0 1 2
# qqqqqqqqqqqqqqqq
# |-------------------------A---------------
u = Matrix([ 0,0, p1,0, p2,w2 ] )
f = Matrix([ M0,F0, Mq1,Fq1+F1, Mq2,Fq2 ] )
unks = [ M0,F0, p1,F1, p2,w2 ]
# 3: Solution:
eq = Eq(K*u, f)
sol = solve(eq, unks)
pprint(sol)
pprint("\nMinimum-Total-Potential-Energy-Principle-Solution:")
# 1: Ansatz:
a0, a1, a2, a3 = var("a0, a1, a2, a3")
b0, b1, b2 = var("b0, b1, b2")
order = 2
if (order == 2):
# w2 has order 2
b3, b4 = 0, 0
else:
# w2 has order 4
b3, b4 = var("b3, b4")
x1, x2 = var("x1, x2")
w1 = a0 + a1*x1 + a2*x1**2 + a3*x1**3
w1p = diff(w1, x1)
w1pp = diff(w1p, x1)
w2 = b0 + b1*x2 + b2*x2**2 + b3*x2**3 + b4*x2**4
w2p = diff(w2, x2)
w2pp = diff(w2p, x2)
pprint("\nw1 and w2:")
pprint(w1)
pprint(w2)
# 2: Using BCs:
pprint("\nElimination of a0, a1, a2, a3, b0 using BCs:")
# w1(0)=0
e1 = Eq(w1.subs(x1, 0), 0)
# w1'(0)=0
e2 = Eq(w1p.subs(x1, 0), 0)
# w1(2a)=0
e3 = Eq(w1.subs(x1, 2*a), 0)
# w2(0)=0
e4 = Eq(w2.subs(x2, 0), 0)
# w1p(2a)=w2p(0)
e5 = Eq(w1p.subs(x1, 2*a), w2p.subs(x2, 0))
eqns, unks = [e1, e2, e3, e4, e5], [a0, a1, a2, a3, b0]
sol = solve(eqns, unks)
pprint(sol)
sub_list=[
(a0, sol[a0]),
(a1, sol[a1]),
(a2, sol[a2]),
(a3, sol[a3]),
(b0, sol[b0]),
]
pprint("\nw1 and w2:")
w1 = w1.subs(sub_list)
w2 = w2.subs(sub_list)
pprint(w1)
pprint(w2)
pprint("\nw1'' and w2'':")
w1pp = w1pp.subs(sub_list)
w2pp = w2pp.subs(sub_list)
pprint(w1pp)
pprint(w2pp)
# 3: Using Principle:
pprint("\nU1, U2, Uq:")
i1 = w1pp*w1pp
I1 = integrate(i1, x1)
I1 = I1.subs(x1,2*a) - I1.subs(x1,0)
U1 = EI*I1/2
pprint(U1)
i2 = w2pp*w2pp
I2 = integrate(i2, x2)
I2 = I2.subs(x2,a) - I2.subs(x2,0)
U2 = EI*I2/2
pprint(U2)
i2 = q*w2
I2 = integrate(i2, x2)
I2 = I2.subs(x2,a) - I2.subs(x2,0)
Uq = I2
pprint(Uq)
pprint("\nParameters for U1 + U2 - Uq = Min:")
U = U1 + U2 - Uq
e1 = Eq(diff(U, b1), 0)
e2 = Eq(diff(U, b2), 0)
if (order == 2):
eqns = [e1, e2]
unks = [b1, b2]
sol = solve(eqns, unks)
sub_list=[
(b1, sol[b1]),
(b2, sol[b2]),
]
w2 = w2.subs(sub_list)
else:
    e3 = Eq(diff(U, b3), 0)
    e4 = Eq(diff(U, b4), 0)
eqns = [e1, e2, e3, e4]
unks = [b1, b2, b3, b4]
sol = solve(eqns, unks)
sub_list=[
(b1, sol[b1]),
(b2, sol[b2]),
(b3, sol[b3]),
(b4, sol[b4]),
]
w2 = w2.subs(sub_list)
pprint(sol)
pprint("\nw2:")
pprint(w2)
pprint("\nw2(a):")
w2 = w2.subs(x2, a)
pprint(w2)
# FEM-Solution:
# ⎧ 2 4 3 3 ⎫
# ⎪ 3⋅a⋅q -11⋅a⋅q -a ⋅q 3⋅a ⋅q -a ⋅q -5⋅a ⋅q ⎪
# ⎨F₀: ─────, F₁: ────────, M₀: ──────, w₂: ──────, ψ₁: ──────, ψ₂: ────────⎬
# ⎪ 8 8 4 8⋅EI 4⋅EI 12⋅EI ⎪
# ⎩ ⎭
#
# 2. Minimum-Total-Potential-Energy-Principle-Solution:
#
# w1 and w2:
# 2 3
# a₀ + a₁⋅x₁ + a₂⋅x₁ + a₃⋅x₁
# 2
# b₀ + b₁⋅x₂ + b₂⋅x₂
#
# Elimination of a0, a1, a2, a3, b0 using BCs:
# ⎧ -b₁ b₁ ⎫
# ⎪a₀: 0, a₁: 0, a₂: ────, a₃: ────, b₀: 0⎪
# ⎨ 2⋅a 2 ⎬
# ⎪ 4⋅a ⎪
# ⎩ ⎭
#
# w1 and w2:
# 2 3
# b₁⋅x₁ b₁⋅x₁
# - ────── + ──────
# 2⋅a 2
# 4⋅a
# 2
# b₁⋅x₂ + b₂⋅x₂
#
# w1'' and w2'':
# b₁ 3⋅b₁⋅x₁
# - ── + ───────
# a 2
# 2⋅a
# 2⋅b₂
#
# U1, U2, Uq:
# 2
# EI⋅b₁
# ──────
# a
# 2
# 2⋅EI⋅a⋅b₂
# 3 2
# a ⋅b₂⋅q a ⋅b₁⋅q
# ─────── + ───────
# 3 2
#
# Parameters for U1 + U2 - Uq = Min:
# ⎧ 3 2 ⎫
# ⎪ a ⋅q a ⋅q⎪
# ⎨b₁: ────, b₂: ─────⎬
# ⎪ 4⋅EI 12⋅EI⎪
# ⎩ ⎭
#
# w2:
# 3 2 2
# a ⋅q⋅x₂ a ⋅q⋅x₂
# ─────── + ────────
# 4⋅EI 12⋅EI
#
# w2(a):
# 4
# a ⋅q
# ────
# 3⋅EI
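# As a quick numerical cross-check of the final result w2(a) = q*a**4/(3*EI), the Ritz coefficients b1 and b2 found above reproduce the same value for arbitrary illustrative numbers:

```python
# Spot-check: w2(a) built from the Ritz coefficients equals q*a**4/(3*EI).
a, q, EI = 2.0, 5.0, 7.0          # arbitrary illustrative values
b1 = a**3 * q / (4 * EI)          # coefficient found above
b2 = a**2 * q / (12 * EI)         # coefficient found above
w2_ritz = b1 * a + b2 * a**2      # w2(x2) = b1*x2 + b2*x2**2 at x2 = a
w2_closed = q * a**4 / (3 * EI)
print(abs(w2_ritz - w2_closed) < 1e-12)  # True
```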
# ipynb/TM_2/4_BB/2_BL/2.4.2.K-FEM.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating line graphs for my thesis
# Thesis is available on [my github](https://github.com/JonasBingel/ThesisHSMZ-RLTicTacToe)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path
# # Parameters
# Paths
CSV_PATH = "Data"
OUTPUT_DIRECTORY = "Output"
# # Import CSV
# +
# Compare rate
data_compare_rate = pd.read_csv(Path(CSV_PATH, "compare_rate.csv"), sep=";", header=0, low_memory=False)
# Compare Alpha
data_alpha_QL = pd.read_csv(Path(CSV_PATH, "compare_alpha_QLearning.csv"), sep=";", header=0, low_memory=False)
data_alpha_QL_ALT = pd.read_csv(Path(CSV_PATH, "compare_alpha_QLearning_Alternate.csv"), sep=";", header=0, low_memory=False)
data_alpha_SARSA = pd.read_csv(Path(CSV_PATH, "compare_alpha_SARSA.csv"), sep=";", header=0, low_memory=False)
data_alpha_SARSA_ALT = pd.read_csv(Path(CSV_PATH, "compare_alpha_SARSA_Alternate.csv"), sep=";", header=0, low_memory=False)
#Compare Experience
data_experience_QL = pd.read_csv(Path(CSV_PATH, "compare_experience_QLearning.csv"), sep=";", header=0, low_memory=False)
data_experience_SARSA = pd.read_csv(Path(CSV_PATH, "compare_experience_SARSA.csv"), sep=";", header=0, low_memory=False)
# Compare Algorithm
data_algorithm = pd.read_csv(Path(CSV_PATH, "compare_algorithm_with_afterstates.csv"), sep=";", header=0, low_memory=False)
# -
# # Set Parameters for the Plot
PLOT_WIDTH = 12
PLOT_HEIGHT = 4
XLABEL = "Anzahl Trainingsepisoden"
YLABEL = "Rate optimaler Aktionen"
XTICKS = [20000 * n for n in range(0, 8)]
YTICKS = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
YTICK_LABELS = [str(n*10) + "%" for n in range(0, 11)]
XLIM = [0, 150000]
YLIM = [0.3, 1]
# +
def generate_compare_rate_plot(data, isX):
plottedSymbol = "X" if isX else "O"
if (isX):
subset_of_data = data[data.symbol != "SYMBOL_O"]
else:
subset_of_data = data[data.symbol != "SYMBOL_X"]
ax = subset_of_data.plot(x="episode", y=["with_exploration", "without_exploration"], kind="line", figsize=(PLOT_WIDTH,PLOT_HEIGHT))
ax.set(ylabel=YLABEL, xlabel=XLABEL)
ax.set(xlim=XLIM, ylim=YLIM)
ax.set(xticks=XTICKS, yticks=YTICKS)
ax.set(yticklabels=YTICK_LABELS)
ax.legend(["Rate inkl. explorativer Aktionen", "Rate exkl. explorativer Aktionen"], loc="lower right")
filename = "compare_rate_{0}.pdf".format(plottedSymbol)
ax.figure.savefig(Path(OUTPUT_DIRECTORY, filename), format="pdf", bbox_inches="tight", transparent=True, dpi=300)
def generate_compare_alpha_plot(data, isX, isQL, isAlternate):
plottedSymbol = "X" if isX else "O"
algorithm = "QLearning" if isQL else "SARSA"
selfplay = "_ALTERNATE" if isAlternate else ""
algorithmLegend = "Q-Learning" if isQL else "SARSA"
if (isX):
subset_of_data = data[data.symbol != "SYMBOL_O"]
else:
subset_of_data = data[data.symbol != "SYMBOL_X"]
ax = subset_of_data.plot(x="episode", y=["rate_alpha01", "rate_alpha02", "rate_alphaDecay"], kind="line", figsize=(PLOT_WIDTH,PLOT_HEIGHT))
ax.set(ylabel=YLABEL + ' Symbol {0}'.format(plottedSymbol), xlabel=XLABEL)
ax.set(xlim=XLIM, ylim=YLIM)
ax.set(xticks=XTICKS, yticks=YTICKS)
ax.set(yticklabels=YTICK_LABELS)
ax.legend([r'{0}: $\alpha$ = 0.1'.format(algorithmLegend), r'{0}: $\alpha$ = 0.2'.format(algorithmLegend), r'{0}: $\alpha$ = 1 $\rightarrow$ 0.1'.format(algorithmLegend)], loc="lower right")
filename = "compare_alpha_{0}{1}_{2}.pdf".format(algorithm, selfplay, plottedSymbol)
ax.figure.savefig(Path(OUTPUT_DIRECTORY, filename), format="pdf", bbox_inches="tight", transparent=True, dpi=300)
def generate_compare_experience_plot(data, isX, isQL):
plottedSymbol = "X" if isX else "O"
algorithm = "QLearning" if isQL else "SARSA"
algorithmLegend = "Q-Learning" if isQL else "SARSA"
if (isX):
subset_of_data = data[data.symbol != "SYMBOL_O"]
else:
subset_of_data = data[data.symbol != "SYMBOL_X"]
ax = subset_of_data.plot(x="episode", y=["rate_QTable", "rate_WTable"], kind="line", figsize=(PLOT_WIDTH,PLOT_HEIGHT))
ax.set(ylabel=YLABEL + ' Symbol {0}'.format(plottedSymbol), xlabel=XLABEL)
ax.set(xlim=XLIM, ylim=YLIM)
ax.set(xticks=XTICKS, yticks=YTICKS)
ax.set(yticklabels=YTICK_LABELS)
ax.legend([r'{0}: Q-Table, $\alpha$ = 0.1'.format(algorithmLegend), r'{0}: W-Table, $\alpha$ = 0.1'.format(algorithmLegend)], loc="lower right")
filename = "compare_experience_{0}_{1}.pdf".format(algorithm, plottedSymbol)
ax.figure.savefig(Path(OUTPUT_DIRECTORY, filename), format="pdf", bbox_inches="tight", transparent=True, dpi=300)
def generate_compare_algorithm_plot(data, isX):
plottedSymbol = "X" if isX else "O"
if (isX):
subset_of_data = data[data.symbol != "SYMBOL_O"]
else:
subset_of_data = data[data.symbol != "SYMBOL_X"]
ax = subset_of_data.plot(x="episode", y=["rate_QLearning", "rate_SARSA"], kind="line", figsize=(PLOT_WIDTH,PLOT_HEIGHT))
ax.set(ylabel=YLABEL + ' Symbol {0}'.format(plottedSymbol), xlabel=XLABEL)
ax.set(xlim=XLIM, ylim=YLIM)
ax.set(xticks=XTICKS, yticks=YTICKS)
ax.set(yticklabels=YTICK_LABELS)
ax.legend([r'Q-Learning: W-Table, $\alpha$ = 0.1', r'SARSA: W-Table, $\alpha$ = 0.1'], loc="lower right")
filename = "compare_algorithm_{0}.pdf".format(plottedSymbol)
ax.figure.savefig(Path(OUTPUT_DIRECTORY, filename), format="pdf", bbox_inches="tight", transparent=True, dpi=300)
# -
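# Each plot function above selects one player's rows by excluding the other symbol. The same filtering idea with plain dicts (illustrative rows, not the thesis CSV data):

```python
rows = [
    {"symbol": "SYMBOL_X", "episode": 0, "rate": 0.40},
    {"symbol": "SYMBOL_O", "episode": 0, "rate": 0.50},
    {"symbol": "SYMBOL_X", "episode": 20000, "rate": 0.70},
]
# Keep the X player's rows by dropping everything labelled SYMBOL_O.
x_rows = [r for r in rows if r["symbol"] != "SYMBOL_O"]
print(len(x_rows))  # 2
```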
# # Call Functions to generate plots
# +
# Compare rate with and without explorative actions
generate_compare_rate_plot(data_compare_rate, True)
generate_compare_rate_plot(data_compare_rate, False)
# Compare Alpha plots Q-Learning normal self-play
generate_compare_alpha_plot(data_alpha_QL, True, True, False)
generate_compare_alpha_plot(data_alpha_QL, False, True, False)
# Compare Alpha plots Q-Learning alternating self-play
generate_compare_alpha_plot(data_alpha_QL_ALT, True, True, True)
generate_compare_alpha_plot(data_alpha_QL_ALT, False, True, True)
# Compare Alpha plots SARSA normal self-play
generate_compare_alpha_plot(data_alpha_SARSA, True, False, False)
generate_compare_alpha_plot(data_alpha_SARSA, False, False, False)
# Compare Alpha plots SARSA alternating self-play
generate_compare_alpha_plot(data_alpha_SARSA_ALT, True, False, True)
generate_compare_alpha_plot(data_alpha_SARSA_ALT, False, False, True)
# Compare Experience Plots for Q-Learning
generate_compare_experience_plot(data_experience_QL, True, True)
generate_compare_experience_plot(data_experience_QL, False, True)
# Compare Experience Plots for SARSA
generate_compare_experience_plot(data_experience_SARSA, True, False)
generate_compare_experience_plot(data_experience_SARSA, False, False)
# Compare Algorithm Plots
generate_compare_algorithm_plot(data_algorithm, True)
generate_compare_algorithm_plot(data_algorithm, False)
# -
# # Zip generated plots
# !tar chvfz generated_plots.tar.gz ./Output
# Bingel_Jonas_ThesisHSMZ-RLTicTacToe-Jupyter.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of age-grade distortion
#
# https://dados.rs.gov.br/dataset/fee-taxa-de-distorcao-idade-serie-total-102524
# # Installing the folium library to generate maps
# # !sudo pip install folium
# import the libraries
import pandas as pd
# %matplotlib inline
import folium
df = pd.read_csv(r'fee-2013-mun-taxa-de-distorcao-idade-serie-total-102524.csv', encoding='latin1', skiprows=1)
# Rename a column
df.rename(columns={'/Educação/Ens...de Série/Total 2013 (-)': 'Taxa Distorcao'}, inplace=True)
df.head()
# Convert the distortion rate from text to float
df['Taxa Distorcao'] = df['Taxa Distorcao'].str.replace(',', '.').astype(float)
df.head()
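# The comma-to-dot replacement above handles Brazilian decimal notation. The same conversion in plain Python, without pandas:

```python
def parse_br_decimal(text):
    # Convert a Brazilian-formatted decimal string such as "12,5" to a float.
    return float(text.replace(",", "."))

print(parse_br_decimal("12,5"))  # 12.5
```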
df.dtypes
df.describe()
# The 10 municipalities with the lowest distortion rate
df.nsmallest(10, 'Taxa Distorcao')
# The 10 municipalities with the highest distortion rate
df.nlargest(10, 'Taxa Distorcao')
df['Taxa Distorcao'].plot.hist(bins=100)
# Number of municipalities with a distortion rate of at most 10
df[df['Taxa Distorcao'] <= 10].count()
# Number of municipalities with a distortion rate of at least 45
df[df['Taxa Distorcao'] >= 45].count()
# ## How Folium works
brasil = folium.Map(location = [-25.4413569,-49.2740054], width=750, height=50)
brasil = folium.Map(location = [-25.4413569,-49.2740054], title='Taxa de distoção')
brasil = folium.Map(location = [-25.4413569,-49.2740054], zoom_start = 4)
brasil
rs = folium.Map(location=[-31,-54], zoom_start=6, width=750, height=450)
rs
# +
# Create a marker for the best municipalities, where the distortion rate is at most 10
for _, registro in df[df['Taxa Distorcao'] <= 10].iterrows():
folium.Marker(
location = [registro['latitude'], registro['longitude']],
popup = registro['Município'],
icon = folium.map.Icon(color='green')
).add_to(rs)
rs
# +
# Create a marker for the worst municipalities, where the distortion rate is at least 45
for _, registro in df[df['Taxa Distorcao'] >= 45].iterrows():
folium.Marker(
location = [registro['latitude'], registro['longitude']],
popup = registro['Município'],
icon = folium.map.Icon(color='red')
).add_to(rs)
rs
# -
# Distortion rate of Porto Alegre
df[df['Município'] == 'Porto Alegre']['Taxa Distorcao']
# AnaliseDistorcaoEducacao.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import librosa as lb
import librosa.display
import scipy
import json
import numpy as np
import sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import os
import keras
from keras.utils import np_utils
from keras import layers
from keras import models
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D , Flatten, Dropout
from keras.preprocessing.image import ImageDataGenerator
from model_builder import build_example
from plotter import plot_history
import matplotlib.pyplot as plt
# +
# CONSTANTS
DATA_DIR = "openmic-2018/"
CATEGORY_COUNT = 8
LEARNING_RATE = 0.00001
THRESHOLD = 0.5
# +
# LOAD DATA
OPENMIC = np.load(os.path.join(DATA_DIR, 'openmic-mel.npz'), allow_pickle=True)
print('OpenMIC keys: ' + str(list(OPENMIC.keys())))
X, Y_true, Y_mask, sample_key = OPENMIC['MEL'], OPENMIC['Y_true'], OPENMIC['Y_mask'], OPENMIC['sample_key']
print('X has shape: ' + str(X.shape))
print('Y_true has shape: ' + str(Y_true.shape))
print('Y_mask has shape: ' + str(Y_mask.shape))
print('sample_key has shape: ' + str(sample_key.shape))
# +
# LOAD LABELS
with open(os.path.join(DATA_DIR, 'class-map.json'), 'r') as f:
INSTRUMENTS = json.load(f)
print('OpenMIC instruments: ' + str(INSTRUMENTS))
# +
# SPLIT DATA (TRAIN - TEST - VAL)
# CHANGE X TO MEL
split_train, split_test, X_train, X_test, Y_true_train, Y_true_test, Y_mask_train, Y_mask_test = train_test_split(sample_key, X, Y_true, Y_mask)
split_val, split_test, X_val, X_test, Y_true_val, Y_true_test, Y_mask_val, Y_mask_test = train_test_split(split_test, X_test, Y_true_test, Y_mask_test, test_size=0.5)
train_set = np.asarray(set(split_train))
test_set = np.asarray(set(split_test))
print('# Train: {}, # Val: {}, # Test: {}'.format(len(split_train), len(split_test), len(split_val)))
# +
# DUPLICATE OF THE MODEL PREPROCESS
print(X_train.shape)
print(X_test.shape)
for instrument in INSTRUMENTS:
# Map the instrument name to its column number
inst_num = INSTRUMENTS[instrument]
print(instrument)
# TRAIN
train_inst = Y_mask_train[:, inst_num]
X_train_inst = X_train[train_inst]
X_train_inst = X_train_inst.astype('float16')
shape = X_train_inst.shape
X_train_inst = X_train_inst.reshape(shape[0],1, shape[1], shape[2])
Y_true_train_inst = Y_true_train[train_inst, inst_num] >= THRESHOLD
i = 0
for val in Y_true_train_inst:
i += val
print('TRAIN: ' + str(i) + ' true of ' + str(len(Y_true_train_inst)) + ' (' + str(round(i / len(Y_true_train_inst ) * 100,2)) + ' %)' )
# TEST
test_inst = Y_mask_test[:, inst_num]
X_test_inst = X_test[test_inst]
X_test_inst = X_test_inst.astype('float16')
shape = X_test_inst.shape
X_test_inst = X_test_inst.reshape(shape[0],1, shape[1], shape[2])
Y_true_test_inst = Y_true_test[test_inst, inst_num] >= THRESHOLD
i = 0
for val in Y_true_test_inst:
i += val
print('TEST: ' + str(i) + ' true of ' + str(len(Y_true_test_inst)) + ' (' + str(round(i / len(Y_true_test_inst ) * 100,2)) + ' %)' )
# VALIDATION
val_inst = Y_mask_val[:, inst_num]
X_val_inst = X_val[val_inst]
X_val_inst = X_val_inst.astype('float16')
shape = X_val_inst.shape
X_val_inst = X_val_inst.reshape(shape[0],1, shape[1], shape[2])
Y_true_val_inst = Y_true_val[val_inst, inst_num] >= THRESHOLD
i = 0
for val in Y_true_val_inst:
i += val
print('VALIDATION: ' + str(i) + ' true of ' + str(len(Y_true_val_inst)) + ' (' + str(round(i / len(Y_true_val_inst ) * 100,2)) + ' %)' )
# -
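# The label binarisation used above keeps an example as positive only when its annotation likelihood reaches `THRESHOLD`. In miniature:

```python
THRESHOLD = 0.5  # same cutoff as in the notebook
likelihoods = [0.1, 0.5, 0.9]
labels = [p >= THRESHOLD for p in likelihoods]
print(labels)  # [False, True, True]
```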
len(Y_true_val_inst)
# +
# This dictionary will include the classifiers for each model
models = dict()
# We'll iterate over all instrument classes, and fit a model for each one
# After training, we'll print a classification report for each instrument
for instrument in INSTRUMENTS:
# Map the instrument name to its column number
inst_num = INSTRUMENTS[instrument]
# Step 1: sub-sample the data
# First, we need to select down to the data for which we have annotations
# This is what the mask arrays are for
train_inst = Y_mask_train[:, inst_num]
test_inst = Y_mask_test[:, inst_num]
# Here, we're using the Y_mask_train array to slice out only the training examples
# for which we have annotations for the given class
X_train_inst = X_train[train_inst]
    # Step 2: simplify the data by averaging over time
# Let's arrange the data for a sklearn Random Forest model
# Instead of having time-varying features, we'll summarize each track by its mean feature vector over time
X_train_inst_sklearn = np.mean(X_train_inst, axis=1)
# Again, we slice the labels to the annotated examples
    # We threshold the label likelihoods at 0.5 to get binary labels
Y_true_train_inst = Y_true_train[train_inst, inst_num] >= 0.5
# Repeat the above slicing and dicing but for the test set
X_test_inst = X_test[test_inst]
X_test_inst_sklearn = np.mean(X_test_inst, axis=1)
Y_true_test_inst = Y_true_test[test_inst, inst_num] >= 0.5
# Step 3.
# Initialize a new classifier
clf = RandomForestClassifier(max_depth=8, n_estimators=100, random_state=0)
# Step 4.
clf.fit(X_train_inst_sklearn, Y_true_train_inst)
# Step 5.
# Finally, we'll evaluate the model on both train and test
Y_pred_train = clf.predict(X_train_inst_sklearn)
Y_pred_test = clf.predict(X_test_inst_sklearn)
print('-' * 52)
print(instrument)
print('\tTRAIN')
print(classification_report(Y_true_train_inst, Y_pred_train))
print(Y_true_train_inst[3])
print(Y_pred_train[3])
print('\tTEST')
print(classification_report(Y_true_test_inst, Y_pred_test))
print(Y_true_test_inst.shape)
print(Y_pred_test.shape)
# Store the classifier in our dictionary
models[instrument] = clf
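# The mean-over-time pooling used above collapses each track's (frames x features) matrix into a single mean feature vector, which is what the random forest consumes. The same reduction with plain lists:

```python
# Mean-over-time pooling: a (frames x features) matrix becomes
# one feature vector holding the per-feature mean across frames.
track = [[1.0, 2.0],
         [3.0, 4.0],
         [5.0, 6.0]]          # 3 frames, 2 features
pooled = [sum(col) / len(col) for col in zip(*track)]
print(pooled)  # [3.0, 4.0]
```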
# +
import matplotlib.pyplot as plt
from pylab import plot, show, figure, imshow, xlim, ylim, title
def plot_history():
plt.figure(figsize=(9,4))
plt.subplot(1,2,1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train accuracy', 'Validation accuracy'], loc='upper left')
plt.subplot(1,2,2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train loss', 'Validation loss'], loc='upper left')
plt.show()
# +
""""
# Step 3: simplify the data by averaging over time
# Instead of having time-varying features, we'll summarize each track by its mean feature vector over time
X_train_inst_sklearn = np.mean(X_train_inst, axis=1)
X_test_inst_sklearn = np.mean(X_test_inst, axis=1)
X_train_inst_sklearn = X_train_inst_sklearn.astype('float32')
X_train_inst_sklearn = lb.util.normalize(X_train_inst_sklearn)
"""
np.savez('models.npz', models=models)
# openmic-mel-ML.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
import skimage
import skimage.external.tifffile
import os
# +
#zoom call timestamp 19:00
width = 23
height = 69
every = 5
# import mask
input_image_path = os.path.abspath(os.path.join('/Users/johannesschoeneberg/Desktop/SchoenebergLab_Cal/collaboration_daniel_Serwas/PositiveControl/FilamentProjections/TomoJune_Fil06_Projection_crop.tif'))
output_folder_path = os.path.abspath(os.path.join('/Users/johannesschoeneberg/Desktop/SchoenebergLab_Cal/collaboration_daniel_Serwas/PositiveControl/FilamentProjections/output/'))
image = skimage.external.tifffile.imread(input_image_path)
print(image.shape)
plt.imshow(image,cmap='gray')
print(np.min(image))
globalMin = np.min(image)
print(np.max(image))
globalMax = np.max(image)
# +
import time
start_time = time.time()
totalHeight = image.shape[0]
print(totalHeight)
nSubpictures = (np.floor(((totalHeight-height)/every))).astype(int)
subpictures = []
for i in range(0,nSubpictures):
print(i)
subpicture = image[i*every:height+i*every,:]
skimage.external.tifffile.imsave(output_folder_path+"/output_"+str(i).zfill(5)+".tiff", subpicture, imagej=True );
# plt.imshow(subpicture,cmap='gray')
# plt.show()
# subpictures.append(subpicture)
print("--- %s seconds ---" % (time.time() - start_time))
# -
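# The strip count computed above follows the usual sliding-window formula floor((total - height) / step). A quick check with illustrative sizes:

```python
import math

def n_strips(total_height, strip_height, step):
    # How many strips of strip_height fit when sliding down by step pixels.
    return math.floor((total_height - strip_height) / step)

print(n_strips(100, 69, 5))  # 6
```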
print(len(subpictures))
# +
w=10
h=90
fig=plt.figure(figsize=(w, h))
columns = 5
rows = 9
for i in range(1, columns*rows +1):
img = np.random.randint(10, size=(h,w))
fig.add_subplot(rows, columns, i)
plt.imshow(subpictures[i],vmin=globalMin, vmax=globalMax,cmap='gray')
plt.show()
# -
# dev/extract_fivePixel_subImages.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Z_Eplcj-guRg"
# # Loading modules
#
# If you are working in Google Colab, don't forget to select GPU as the runtime:
#
# 1. Runtime
# 2. Change runtime type
# 3. Hardware accelerator
# 4. GPU
# 5. Save
#
# Done! You're amazing. Let's see what we have at our disposal in the cell below.
# + id="XUKX_oJttj30"
# !nvidia-smi
# + [markdown] id="StkCgHznKVoe"
# ```
# # +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 455.45.01 Driver Version: 418.67 CUDA Version: 10.1 |
# |-------------------------------+----------------------+----------------------+
# | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
# | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
# | | | MIG M. |
# |===============================+======================+======================|
# | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
# | N/A 63C P8 11W / 70W | 0MiB / 15079MiB | 0% Default |
# | | | ERR! |
# # +-------------------------------+----------------------+----------------------+
#
# # +-----------------------------------------------------------------------------+
# | Processes: |
# | GPU GI CI PID Type Process name GPU Memory |
# | ID ID Usage |
# |=============================================================================|
# | No running processes found |
# # +-----------------------------------------------------------------------------+
# ```
# + [markdown] id="54jusKWuuds-"
# Let's install [HuggingFace Transformers](https://huggingface.co/transformers/)
#
#
# + id="gFF1bJB3RWhB"
# !pip install transformers
| Modules.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 8.1
# language: ''
# name: sagemath
# ---
# %display latex
var('a,b,c,x,y,z,k,n,m,phi,theta')
y = function('y')(x)
eq = diff(y, x, x) + x*diff(y, x) + y == 0
eq
desolve(eq, y, [0,0,1])
sum(k, k, 1, n).factor(), sum(1/k^2, k, 1, oo)
n, k, y = var('n,k,y')
sum(binomial(n,k)*x^k*y^(n-k), k, 0, n, hold=True) == sum(binomial(n,k)*x^k*y^(n-k), k, 0, n)
forget(), sum(a*x^k, k, 0, n), assume(abs(x)<1), sum(a*x^k, k, 0, oo)
var('k,n,p,j')
def S(n,p):
if p==0:
return n+1
else:
return 1/(p+1)*((n+1)^(p+1) - sum([binomial(p+1, j)*S(n,j) for j in range(p)]))
[S(n,i).factor() for i in range(6)]
# +
f(x) = (x^(1/3) - 2)/( (x+19)^(1/3) - 3)
g(x) = (cos(pi/4 - x) - tan(x))/(1-sin(pi/4 + x))
print(lim(f(x), x=8), lim(g(x), x=pi/4), lim(g(x), x=pi/4, dir='minus'), lim(g(x), x=pi/4, dir='plus'))
# -
u(k) = k^100/100^k; forget('n'); reset('n')
for k in range(1,11): print(u(float(k)))
plot(u(x), x, 1, 40).show()
v(x) = diff(u(x), x)
v(x)
sol = solve(v(x)==0, x)
floor(sol[0].rhs())
f = function('f')(x)
diff(f, x, 5), integrate(f(x), x, 3, 6), integrate(f(x), x), integral_numerical(y^3, 3, 6), sum(f(x), x, y, j)
f = function('f')(x)#, y)
taylor(f(x), x, 0, 5)
sin(x).series(x==0, 10)
k = var('k')
print(u(k))
limit(u(k), k=oo)
((1 + arctan(x))^(1/x)).series(x==0, 3)
ln(2*sin(x)).series(x==pi/6, 3).truncate()
f(x) = (x^3 + x)^(1/3) - (x^3 - x)^(1/3)
f(x).taylor(x, oo, 20)
# Computing a symbolic limit
var('a,x,h')
f = function('f')(x)
f(x) = taylor(f(x), x, 3, 3)
g(h) = 1/h^3*(f(a+3*h) - 3*f(a+2*h) + 3*f(a+h) - f(a))
limit(g(h), h=0)
# Machin's formula
var('a')
eq = a == 4*arctan(1/5) - arctan(1/239)
b = 4*arctan(1/5)
c = pi/4 + arctan(1/239)
tan(b).simplify_trig(), tan(c).simplify_trig()
f = arctan(x).series(x, 10)
(16*f.subs(x==1/5) - 4*f.subs(x==1/239)).n()
f(x) = arctan(x).taylor(x,0, 21)
a = (12*f(1/38) + 20*f(1/57) + 7*f(1/239) + 24*f(1/268)).simplify()
show((4*a).n(digits = 60))
pi.n(digits = 60)
var('k')
sum(1/k^2, k, 1, oo),sum(1/k^4, k, 1, oo),sum(1/k^5, k, 1, oo)
# +
from sage.symbolic.expression_conversions import HoldRemover
s = 2*sqrt(2)/9801*sum(factorial(4*k)*(1103 + 26390*k)/(factorial(k)^4*(396)^(4*k)) , k, 0, oo, hold=True)
show(s == HoldRemover(s)())
show(1/HoldRemover(s)().n(digits=10000))
pi.n(digits=10000)
# +
# Convergence of a series
var('m')
u(m) = sin(pi*sqrt(4*m^2 + 1))
u(m) = sin(pi*sqrt(4*m^2 + 1) - 2*pi*m)
u(m).taylor(m, oo, 3)
try:
sum(1/m, m, 0, oo).simplify()
except ValueError:
print('sum(1/m, m, 0, oo) is a Divergent sum')
# -
var('m', domain = 'integer')
var('x', domain='real')
assume(m>0)
#assume(x>m*pi)
#assume(x<m*pi + pi/2)
(tan(x).taylor(x, m*pi, 6)-x==0).solve(x)
reset('x')
f = function('f')(x)
g = function('g')(x)
diff(f(g(x)), x), diff(f(x)*g(x)), diff(ln(f(x)), x)
def f(x,y):
#if (x,y)==(0,0):
# return 0
#else:
return x*y*(x^2-y^2)/(x^2+y^2)
N, D = f(x,y).diff(x).diff(y).numerator_denominator()
g(k) = (N/D).subs(y = k*x).simplify_rational()
g(0), g(1), g(-1)
sin(x).integral(x, 0, pi/2).show()
integrate(1/(1+x^2), x).show()
integrate(1/(1+x^2), x, -oo, oo).show()
integrate(exp(-x^2), x, 0, oo).show()
try: integrate(exp(-x), x, -oo, oo).show()
except: print('divergent integral')
reset('x'); reset('u')
var('x')
var('u')
forget(); assume(x>0); f(x) = integrate(x*cos(u)/(u^2+x^2), u, 0, oo); f.show()
forget(); assume(x<0); f(x) = integrate(x*cos(u)/(u^2+x^2), u, 0, oo); f.show()
forget()
# The BBP Formula
var('n,N', domain='integer')
assume(N>0)
S(N) = sum( ( 4/(8*n+1) - 2/(8*n+4) - 1/(8*n+5) - 1/(8*n+6) )*(1/16)^n , n, 0, N)
S.show()
var('t')
f(t) = 4*sqrt(2) - 8*t^3 - 4*sqrt(2)*t^4 - 8*t^5
f.show()
I(N) = integrate(f(t)*sum(t^(8*n), n, 0, N), t, 0, 1/sqrt(2))
I.show()
J = integrate(f(t)/(1-t^8), t, 0, 1/sqrt(2))
J.show()
J.simplify_full().show()
S(0).simplify_full().show()
| Chapter2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.6 64-bit (''ekw-lectures'': conda)'
# name: python3
# ---
# +
from timeit import default_timer as timer
from functools import partial
import yaml
import sys
from estimagic import maximize
from scipy.optimize import root_scalar
from scipy.stats import chi2
import numdifftools as nd
import pandas as pd
import respy as rp
import numpy as np
sys.path.insert(0, "python")
from auxiliary import plot_bootstrap_distribution # noqa: E402
from auxiliary import plot_computational_budget # noqa: E402
from auxiliary import plot_smoothing_parameter # noqa: E402
from auxiliary import plot_score_distribution # noqa: E402
from auxiliary import plot_score_function # noqa: E402
from auxiliary import plot_likelihood # noqa: E402
# -
# # Maximum likelihood estimation
# ## Introduction
# EKW models are calibrated to data on observed individual decisions and experiences under the hypothesis that the individual's behavior is generated from the solution to the model. The goal is to back out information on reward functions, preference parameters, and transition probabilities. This requires the full parameterization $\theta$ of the model.
#
# Economists have access to information for $i = 1, ..., N$ individuals in each time period $t$. For every observation $(i, t)$ in the data, we observe action $a_{it}$, reward $r_{it}$, and a subset $x_{it}$ of the state $s_{it}$. Therefore, from an economist's point of view, we need to distinguish between two types of state variables $s_{it} = (x_{it}, \epsilon_{it})$. At time $t$, the economist and individual both observe $x_{it}$ while $\epsilon_{it}$ is only observed by the individual. In summary, the data $\mathcal{D}$ has the following structure:
#
# \begin{align*}
# \mathcal{D} = \{a_{it}, x_{it}, r_{it}: i = 1, ..., N; t = 1, ..., T_i\},
# \end{align*}
# where $T_i$ is the number of observations for which we observe individual $i$.
#
# Likelihood-based calibration seeks to find the parameterization $\hat{\theta}$ that maximizes the likelihood function $\mathcal{L}(\theta\mid\mathcal{D})$, i.e. the probability of observing the given data as a function of $\theta$. As we only observe a subset $x_t$ of the state, we can determine the probability $p_{it}(a_{it}, r_{it} \mid x_{it}, \theta)$ of individual $i$ at time $t$ in $x_{it}$ choosing $a_{it}$ and receiving $r_{it}$ given parametric assumptions about the distribution of $\epsilon_{it}$. The objective function takes the following form:
#
# \begin{align*}
# \hat{\theta} \equiv \text{argmax}_{\theta \in \Theta} \underbrace{\prod^N_{i= 1} \prod^{T_i}_{t= 1}\, p_{it}(a_{it}, r_{it} \mid x_{it}, \theta)}_{\mathcal{L}(\theta\mid\mathcal{D})}.
# \end{align*}
#
# We will explore the following issues:
#
# * likelihood function
#
# * score function and statistic
#
# * asymptotic distribution
# * linearity
#
# * confidence intervals
#
# * Wald
# * likelihood - based
# * Bootstrap
#
#
# * numerical approximations
#
# * smoothing of choice probabilities
# * grid search
#
#
# Most of the material is from the following two references:
#
# * Pawitan, Y. (2001). [In all likelihood: Statistical modelling and inference using likelihood](https://www.amazon.de/dp/0199671222/ref=sr_1_1?keywords=in+all+likelihood&qid=1573806115&sr=8-1). Clarendon Press, Oxford.
#
# * Casella, G., & Berger, R. L. (2002). [Statistical inference](https://www.amazon.de/dp/0534243126/ref=sr_1_1?keywords=casella+berger&qid=1573806129&sr=8-1). Duxbury, Belmont, CA.
#
# Let's get started!
# +
options_base = yaml.safe_load(open("../../configurations/robinson/robinson.yaml", "r"))
params_base = pd.read_csv(open("../../configurations/robinson/robinson.csv", "r"))
params_base.set_index(["category", "name"], inplace=True)
simulate = rp.get_simulate_func(params_base, options_base)
df = simulate(params_base)
# -
# Let us briefly inspect the parameterization.
params_base
# Several options need to be specified as well.
options_base
# We can now look at the simulated dataset.
df.head()
# ## Likelihood function
# We can now start exploring the likelihood function that provides an order of preference on $\theta$. The likelihood function is a measure of information about the potentially unknown parameters of the model. The information will usually be incomplete, and the likelihood function also expresses the degree of incompleteness.
#
# We will usually work with the sum of the individual log-likelihoods throughout, as the likelihood itself cannot be represented without raising problems of numerical overflow. Note that the criterion function of the ``respy`` package returns the average log-likelihood across the sample. Thus, we need to be careful to scale it up when computing some of the test statistics later in the notebook.
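# As an illustration of why the log transform is needed (a toy example with made-up per-observation probabilities, not respy output): the product of many probabilities underflows in double precision, while the sum of logs stays well-scaled.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-observation choice probabilities for 10,000 observations
probs = rng.uniform(0.1, 0.9, size=10_000)

naive_likelihood = np.prod(probs)       # underflows to exactly 0.0 in float64
log_likelihood = np.sum(np.log(probs))  # remains a well-scaled finite number

print(naive_likelihood, log_likelihood)
```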
#
# We will first trace out the likelihood over reasonable parameter values.
params_base["lower"] = [0.948, 0.0695, -0.11, 1.04, 0.0030, 0.005, -0.10]
params_base["upper"] = [0.952, 0.0705, -0.09, 1.05, 0.1000, 0.015, +0.10]
# We plot the normalized likelihood, i.e. set the maximum of the likelihood function to one by dividing it by its maximum.
# +
crit_func = rp.get_log_like_func(params_base, options_base, df)
rslts = dict()
for index in params_base.index:
upper, lower = params_base.loc[index][["upper", "lower"]]
grid = np.linspace(lower, upper, 20)
fvals = list()
for value in grid:
params = params_base.copy()
params.loc[index, "value"] = value
fval = options_base["simulation_agents"] * crit_func(params)
fvals.append(fval)
rslts[index] = fvals
# -
# Let's visualize the results.
plot_likelihood(rslts, params_base)
# ### Maximum likelihood estimate
# So far, we looked at the likelihood function in its entirety. Going forward, we will take a narrower view and just focus on the maximum likelihood estimate. We restrict our attention to the discount factor $\delta$ and treat it as the only unknown parameter. We will use [estimagic](https://estimagic.readthedocs.io/) for all our estimations.
crit_func = rp.get_log_like_func(params_base, options_base, df)
# However, we will make our life even easier and fix all parameters but the discount factor $\delta$.
constr_base = [
{"loc": "shocks_sdcorr", "type": "fixed"},
{"loc": "wage_fishing", "type": "fixed"},
{"loc": "nonpec_fishing", "type": "fixed"},
{"loc": "nonpec_hammock", "type": "fixed"},
]
# We will start the estimation with a perturbation of the true value.
params_start = params_base.copy()
params_start.loc[("delta", "delta"), "value"] = 0.91
# Now we are ready to deal with the selection and specification of the optimization algorithm.
# +
algo_options = {"stopping_max_criterion_evaluations": 100}
algo_name = "nag_pybobyqa"
results = maximize(
criterion=crit_func,
params=params_base,
algorithm=algo_name,
algo_options=algo_options,
constraints=constr_base,
)
# -
# Let's look at the results.
params_rslt = results["solution_params"]
params_rslt
fval = results["solution_criterion"] * options_base["simulation_agents"]
print(f"criterion function at optimum {fval:5.3f}")
# We need to set up a proper interface to use some other Python functionality going forward.
# +
def wrapper_crit_func(crit_func, options_base, params_base, value):
params = params_base.copy()
params.loc["delta", "value"] = value
return options_base["simulation_agents"] * crit_func(params)
p_wrapper_crit_func = partial(wrapper_crit_func, crit_func, options_base, params_base)
# -
# We need to use the MLE repeatedly going forward.
delta_hat = params_rslt.loc[("delta", "delta"), "value"]
# At the maximum, the second derivative of the log-likelihood is negative and we define the observed Fisher information as follows
#
# \begin{align*}
# I(\hat{\theta}) \equiv -\frac{\partial^2 \log L(\hat{\theta})}{\partial \theta^2}
# \end{align*}
#
# A larger curvature is associated with a strong peak, thus indicating less uncertainty about $\theta$.
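# As a self-contained toy illustration (a normal-mean model with known variance, not the respy likelihood), the observed Fisher information can be obtained from a finite-difference second derivative of the log-likelihood at the MLE, and a Wald interval follows directly:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.5, scale=1.0, size=500)

def log_like(mu):
    # log-likelihood of an N(mu, 1) sample, up to an additive constant
    return -0.5 * np.sum((data - mu) ** 2)

mu_hat = data.mean()  # closed-form MLE for this toy model

# observed Fisher information via a central finite-difference second
# derivative; for this model it equals the sample size n = 500
eps = 1e-3
second_deriv = (log_like(mu_hat + eps) - 2 * log_like(mu_hat)
                + log_like(mu_hat - eps)) / eps**2
fisher = -second_deriv

# 95% Wald interval: mu_hat +/- 1.96 / sqrt(I(mu_hat))
wald_ci = (mu_hat - 1.96 / np.sqrt(fisher), mu_hat + 1.96 / np.sqrt(fisher))
print(fisher, wald_ci)
```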
delta_fisher = -nd.Derivative(p_wrapper_crit_func, n=2)([delta_hat])
delta_fisher
# ### Score statistic and Score function
# The Score function is the first-derivative of the log-likelihood.
#
# \begin{align*}
# S(\theta) \equiv \frac{\partial \log L(\theta)}{\partial \theta}
# \end{align*}
#
# #### Distribution
# The asymptotic normality of the score statistic is of key importance in deriving the asymptotic normality of the maximum likelihood estimator. Here we simulate $1,000$ samples of $10,000$ individuals and compute the score function at the true values. I had to increase the number of simulated individuals as convergence to the asymptotic distribution just took way too long.
plot_score_distribution()
# #### Linearity
# We seek linearity of the score function around the true value so that the log-likelihood is reasonably well approximated by a second order Taylor-polynomial.
#
# \begin{align*}
# \log L(\theta) \approx \log L(\hat{\theta}) + S(\hat{\theta})(\theta - \hat{\theta}) - \tfrac{1}{2} I(\hat{\theta}))(\theta - \hat{\theta})^2
# \end{align*}
#
# Since $S(\hat{\theta}) = 0$, we get:
#
# \begin{align*}
# \log\left(\frac{L(\theta)}{L(\hat{\theta})}\right) \approx - \tfrac{1}{2} I(\hat{\theta})(\theta - \hat{\theta})^2
# \end{align*}
#
# Taking the derivative to work with the score function, the following relationship is approximately true if the usual regularity conditions hold:
#
# \begin{align*}
# - I^{-1/2}(\hat{\theta}) S(\theta) \approx I^{1/2}(\hat{\theta}) (\theta - \hat{\theta})
# \end{align*}
#
#
# +
num_points, index = 10, ("delta", "delta")
upper, lower = params_base.loc[index, ["upper", "lower"]]
grid = np.linspace(lower, upper, num_points)
fds = np.tile(np.nan, num_points)
for i, point in enumerate(grid):
fds[i] = nd.Derivative(p_wrapper_crit_func, n=1)([point])
norm_fds = fds * -(1 / np.sqrt(delta_fisher))
norm_grid = (grid - delta_hat) * (np.sqrt(delta_fisher))
# -
# In the best case we see a standard normal distribution of $I^{1/2} (\hat{\theta}) (\theta - \hat{\theta})$, and so it is common practice to evaluate the linearity between $-2$ and $2$.
plot_score_function(norm_grid, norm_fds)
# Alternative shapes are possible.
#
# <img src="material/fig-quadratic-approximation.png" width="700" >
# ### Confidence intervals
#
# How do we communicate the statistical evidence using the likelihood? Several notions exist that make different demands on the score function. While the Wald intervals rely on asymptotic normality and linearity, likelihood-based intervals only require asymptotic normality. In well-behaved problems, both measures of uncertainty agree.
#
#
#
# #### Wald intervals
rslt = list()
rslt.append(delta_hat - 1.96 * 1 / np.sqrt(delta_fisher))
rslt.append(delta_hat + 1.96 * 1 / np.sqrt(delta_fisher))
"{:5.3f} / {:5.3f}".format(*rslt)
# #### Likelihood-based intervals
def root_wrapper(delta, options_base, alpha, index):
crit_val = -0.5 * chi2.ppf(1 - alpha, 1)
params_eval = params_base.copy()
params_eval.loc[("delta", "delta"), "value"] = delta
likl_ratio = options_base["simulation_agents"] * (
crit_func(params_eval) - crit_func(params_base)
)
return likl_ratio - crit_val
# +
brackets = [[0.75, 0.95], [0.95, 1.10]]
rslt = list()
for bracket in brackets:
root = root_scalar(
root_wrapper,
method="bisect",
bracket=bracket,
args=(options_base, 0.05, index),
).root
rslt.append(root)
print("{:5.3f} / {:5.3f}".format(*rslt))
# -
# ## Bootstrap
# We can now run a simple bootstrap to see how the asymptotic standard errors line up.
#
# Here are some useful resources on the topic:
#
# * Davison, A. C., & Hinkley, D. V. (1997). [Bootstrap methods and their application](https://www.amazon.de/dp/B00D2WQ02U/ref=sr_1_1?keywords=bootstrap+methods+and+their+application&qid=1574070350&s=digital-text&sr=1-1). Cambridge University Press, Cambridge.
#
# * Hesterberg, T. (2015). [What teachers should know about the bootstrap: Resampling in the undergraduate statistics curriculum](https://amstat.tandfonline.com/doi/full/10.1080/00031305.2015.1089789#.XdZhBldKjIV), *The American Statistician, 69*(4), 371-386.
#
# * Horowitz, J. L. (2001). [Chapter 52. The bootstrap](https://www.scholars.northwestern.edu/en/publications/chapter-52-the-bootstrap). In Heckman, J. J., & Leamer, E. E., editors, *Handbook of Econometrics, 5*, 3159-3228. Elsevier Science B.V.
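# For reference, a generic percentile bootstrap looks as follows. This is a minimal sketch on made-up data, not the precomputed respy bootstrap loaded below:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    stats = np.array([
        stat(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(0)
draws = rng.normal(loc=0.95, scale=0.1, size=200)  # stand-in "estimates"
lower, upper = bootstrap_ci(draws)
print(f"{lower:5.3f} / {upper:5.3f}")
```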
plot_bootstrap_distribution()
# We can now construct the bootstrap confidence interval.
# +
fname = "material/bootstrap.delta_perturb_true.pkl"
boot_params = pd.read_pickle(fname)
rslt = list()
for quantile in [0.025, 0.975]:
rslt.append(boot_params.loc[("delta", "delta"), :].quantile(quantile))
print("{:5.3f} / {:5.3f}".format(*rslt))
# -
# ### Numerical aspects
# The shape and properties of the likelihood function are determined by several numerical tuning parameters, such as the quality of the numerical integration and the smoothing of the choice probabilities. We would simply choose all components to be the "best", but that comes at the cost of increasing the time to solution.
# +
grid = np.linspace(100, 1000, 100, dtype=int)
rslts = list()
for num_draws in grid:
options = options_base.copy()
options["estimation_draws"] = num_draws
options["solution_draws"] = num_draws
start = timer()
rp.get_solve_func(params_base, options)
finish = timer()
rslts.append(finish - start)
# -
# We are ready to see how time to solution increases as we improve the quality of the numerical integration by increasing the number of Monte Carlo draws.
plot_computational_budget(grid, rslts)
# We need to learn where to invest a limited computational budget. We focus on the following going forward:
#
# * smoothing parameter for logit accept-reject simulator
#
# * grid search across core parameters
# #### Smoothing parameter
# We now show the shape of the likelihood function for alternative choices of the smoothing parameter $\tau$. There exists no closed-form solution for the choice probabilities, so these are simulated. Application of a basic accept-reject (AR) simulator poses two challenges. First, there is the occurrence of zero-probability simulations for low-probability events, which causes problems for the evaluation of the log-likelihood. Second, the choice probabilities are not smooth in the parameters and are instead a step function. This is why McFadden (1989) introduces a class of smoothed AR simulators. The logit-smoothed AR simulator is the most popular one and is also implemented in `respy`. The implementation requires specifying the smoothing parameter $\tau$. As $\tau \rightarrow 0$, the logit smoother approaches the original indicator function.
#
# * McFadden, D. (1989). [A method of simulated moments for estimation of discrete response models without numerical integration](https://www.jstor.org/stable/1913621?seq=1#metadata_info_tab_contents). *Econometrica, 57*(5), 995-1026.
#
# * Train, K. E. (2009). [Discrete choice methods with simulation](https://eml.berkeley.edu/books/train1201.pdf). Cambridge University Press, Cambridge.
#
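# The role of $\tau$ can be seen in a stripped-down sketch: plain softmax smoothing of simulated utilities (the actual respy implementation may differ in detail):

```python
import numpy as np

def smoothed_choice_prob(utilities, tau):
    """Logit-smoothed choice probabilities over simulated utilities.

    As tau -> 0 this approaches the hard argmax indicator used by the
    plain accept-reject simulator."""
    z = (utilities - utilities.max()) / tau  # subtract the max for stability
    w = np.exp(z)
    return w / w.sum()

u = np.array([1.0, 1.5, 0.2])  # simulated utilities of three alternatives
for tau in [1.0, 0.1, 0.01]:
    print(tau, smoothed_choice_prob(u, tau).round(4))
```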
# +
rslts = dict()
for tau in [0.01, 0.001, 0.0001]:
index = ("delta", "delta")
options = options_base.copy()
options["estimation_tau"] = tau
crit_func = rp.get_log_like_func(params_base, options, df)
grid = np.linspace(0.948, 0.952, 20)
fvals = list()
for value in grid:
params = params_base.copy()
params.loc[index, "value"] = value
fvals.append(crit_func(params))
rslts[tau] = fvals - np.max(fvals)
# -
# Now we are ready to inspect the shape of the likelihood function.
plot_smoothing_parameter(rslts, params_base, grid)
# #### Grid search
# We can look at the interplay of several major numerical tuning parameters. We combine choices for `simulation_agents`, `solution_draws`, `estimation_draws`, and `tau` to see how the maximum of the likelihood function changes.
df = pd.read_pickle("material/tuning.delta.pkl")
df.loc[((10000), slice(None)), :]
| lectures/maximum-likelihood/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: rl
# kernelspec:
# display_name: Reinforcement learning
# language: python
# name: rl
# ---
# # Implementation of the value iteration algorithm
#
# This algorithm finds the optimal value function and policy given a
# known model of the environment.
#
# 
#
# From: Sutton and Barto, 2018. Ch. 4.
# +
# first, import necessary modules
import sys
import gym
import random
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# add your own path to the RL repo here
sys.path.append('/Users/wingillis/dev/reinforcement-learning')
from collections import defaultdict
from lib.envs.gridworld import GridworldEnv
from lib.plotting import plot_gridworld_value_function
# -
sns.set_style('white')
# initialize the environment
shape = (5, 5) # size of the gridworld
env = GridworldEnv(shape, n_goals=2)
env.seed(23)
random.seed(23)
def value_iteration(env, gamma=1.0, theta=1e-5):
'''
Arguments:
env: open.ai environment
gamma: discount factor
theta: convergence threshold
'''
def one_step_lookahead(state, V):
'''calculate value of taking each action from this state'''
pass
# initialize value function
# loop until the value function has converged
while True:
delta = 0 # amt of the largest state-value update
# go through each state, use a one-step lookahead to
# find the best action, and update each state's value
if delta < theta: # stop updating once some level of precision has been reached
break
# define the policy as the best choice from all potential actions
return policy, V
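# For comparison with the exercise skeleton above, here is one possible complete implementation. It is written against a hand-coded tabular MDP in the common Gym-style ``P[s][a] = [(prob, next_state, reward, done), ...]`` format, rather than against this repo's ``GridworldEnv``, whose exact interface is not shown here.

```python
import numpy as np

def value_iteration_tabular(P, nS, nA, gamma=1.0, theta=1e-5):
    """Value iteration on a tabular MDP.

    P[s][a] is a list of (prob, next_state, reward, done) tuples."""
    def one_step_lookahead(s, V):
        # value of taking each action from state s
        q = np.zeros(nA)
        for a in range(nA):
            for prob, ns, r, _ in P[s][a]:
                q[a] += prob * (r + gamma * V[ns])
        return q

    V = np.zeros(nS)
    while True:
        delta = 0.0  # amt of the largest state-value update this sweep
        for s in range(nS):
            best = one_step_lookahead(s, V).max()
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:  # stop once the desired precision is reached
            break

    # deterministic greedy policy w.r.t. the converged value function
    policy = np.zeros((nS, nA))
    for s in range(nS):
        policy[s, np.argmax(one_step_lookahead(s, V))] = 1.0
    return policy, V
```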
policy, V = value_iteration(env, gamma=0.75)
# ## This is what your policy and value function should look like
#
# with `gamma = 0.75`
policy
V.reshape(shape)
fig = plot_gridworld_value_function(V.reshape(shape))
fig.tight_layout()
print("grid policy (0=up, 1=right, 2=down, 3=left):")
print(np.argmax(policy, axis=1).reshape(shape))
| reinforcement learning/01-DP-value-iteration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# ### Create an array from an iterable
# Such as
# - ```list```
# - ```tuple```
# - ```range``` iterator
#
# Notice that not all iterables can be used to create a numpy array, such as ```set``` and ```dict```
arr = np.array([1,2,3,4,5])
print(arr)
arr = np.array((1,2,3,4,5))
print(arr)
arr = np.array(range(10))
print(arr)
# ### Create an array with specified data type
arr = np.array([[1,2,3], [4,5,6]], dtype='i2')
print(arr)
print('Data Type: ' + str(arr.dtype))
# ### Create an array within specified range
# ```np.arange()``` method can be used to replace ```np.array(range())``` method
# np.arange(start, stop, step)
arr = np.arange(0, 20, 2)
print(arr)
# ### Create an array of evenly spaced numbers within specified range
# ```np.linspace(start, stop, num_of_elements, endpoint=True, retstep=False)``` has 5 parameters:
# - ```start```: start number (inclusive)
# - ```stop```: end number (inclusive unless ```endpoint``` set to ```False```)
# - ```num_of_elements```: number of elements contained in the array
# - ```endpoint```: boolean value representing whether the ```stop``` number is inclusive or not
# - ```retstep```: boolean value representing whether to return the step size
arr, step_size = np.linspace(0, 5, 8, endpoint=False, retstep=True)
print(arr)
print('The step size is ' + str(step_size))
# ### Create an array of random values of given shape
# ```np.random.rand()``` method returns values in the range [0,1)
arr = np.random.rand(3, 3)
print(arr)
# ### Create an array of zeros of given shape
# - ```np.zeros()```: create array of all zeros in given shape
# - ```np.zeros_like()```: create array of all zeros with the same shape and data type as the given input array
zeros = np.zeros((2,3))
print(zeros)
arr = np.array([[1,2], [3,4],[5,6]], dtype=np.complex64)
zeros = np.zeros_like(arr)
print(zeros)
print('Data Type: ' + str(zeros.dtype))
# ### Create an array of ones of given shape
# - ```np.ones()```: create array of all ones in given shape
# - ```np.ones_like()```: create array of all ones with the same shape and data type as the given input array
ones = np.ones((3,2))
print(ones)
arr = [[1,2,3], [4,5,6]]
ones = np.ones_like(arr)
print(ones)
print('Data Type: ' + str(ones.dtype))
# ### Create an empty array of given shape
# - ```np.empty()```: create array of empty values in given shape
# - ```np.empty_like()```: create array of empty values with the same shape and data type as the given input array
#
# Notice that the initial values are not necessarily set to zeroes.
#
# They are just some garbage values in random memory addresses.
empty = np.empty((5,5))
print(empty)
arr = np.array([[1,2,3], [4,5,6]], dtype=np.int64)
empty = np.empty_like(arr)
print(empty)
print('Data Type: ' + str(empty.dtype))
# ### Create an array of constant values of given shape
# - ```np.full()```: create array of constant values in given shape
# - ```np.full_like()```: create array of constant values with the same shape and data type as the given input array
full = np.full((4,4), 5)
print(full)
arr = np.array([[1,2], [3,4]], dtype=np.float64)
full = np.full_like(arr, 5)
print(full)
print('Data Type: ' + str(full.dtype))
# ### Create an array in a repetitive manner
# - ```np.repeat(iterable, reps, axis=None)```: repeat each element n times
# - ```iterable```: input array
# - ```reps```: number of repetitions
# - ```axis```: which axis to repeat along, default is ```None``` which will flatten the input array and then repeat
# - ```np.tile(iterable, reps)```: repeat the whole array n times
# - ```iterable```: input array
# - ```reps```: number of repetitions, it can be a tuple to represent repetitions along x-axis and y-axis
# No axis specified, then flatten the input array first and repeat
arr = [[0, 1, 2], [3, 4, 5]]
print(np.repeat(arr, 3))
# An example of repeating along x-axis
arr = [[0, 1, 2], [3, 4, 5]]
print(np.repeat(arr, 3, axis=0))
# An example of repeating along y-axis
arr = [[0, 1, 2], [3, 4, 5]]
print(np.repeat(arr, 3, axis=1))
# Repeat the whole array by a specified number of times
arr = [0, 1, 2]
print(np.tile(arr, 3))
# Repeat along specified axes
print(np.tile(arr, (2,2)))
# ### Create an identity matrix of given size
# - ```np.eye(size, k=0)```: create an identity matrix of given size
# - ```size```: the size of the identity matrix
# - ```k```: the diagonal offset
# - ```np.identity()```: same as ```np.eye()``` but without the ```k``` diagonal-offset parameter
identity_matrix = np.eye(5)
print(identity_matrix)
# An example of diagonal offset
identity_matrix = np.eye(5, k=-1)
print(identity_matrix)
identity_matrix = np.identity(5)
print(identity_matrix)
# ### Create an array with given values on the diagonal
arr = np.random.rand(5,5)
print(arr)
# Extract values on the diagonal
print('Values on the diagonal: ' + str(np.diag(arr)))
# Not necessarily to be a square matrix
arr = np.random.rand(10,3)
print(arr)
# Extract values on the diagonal
print('Values on the diagonal: ' + str(np.diag(arr)))
# Create a matrix given values on the diagonal
# All non-diagonal values set to zeros
arr = np.diag([1,2,3,4,5])
print(arr)
| 1. Create an Array.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:icp]
# language: python
# name: conda-env-icp-py
# ---
# # Computer Vision Nanodegree
#
# ## Project: Image Captioning
#
# ---
#
# In this notebook, you will train your CNN-RNN model.
#
# You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
#
# This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
# - the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
# - the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
#
# This notebook **will be graded**.
#
# Feel free to use the links below to navigate the notebook:
# - [Step 1](#step1): Training Setup
# - [Step 2](#step2): Train your Model
# - [Step 3](#step3): (Optional) Validate your Model
# <a id='step1'></a>
# ## Step 1: Training Setup
#
# In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
#
# You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
#
# ### Task #1
#
# Begin by setting the following variables:
# - `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
# - `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
# - `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
# - `embed_size` - the dimensionality of the image and word embeddings.
# - `hidden_size` - the number of features in the hidden state of the RNN decoder.
# - `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
# - `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
# - `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
# - `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
#
# If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
#
# ### Question 1
#
# **Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
#
# **Answer:** In my implementation, the ENCODER (CNN) is a pre-trained ResNet152 model. Its last linear layer is replaced so that the output matches the `embed_size` dimension, and a batch normalization layer is added to avoid overfitting.
# The DECODER (RNN) is composed of an LSTM (to add "memory" to the model) and a linear layer (to produce predictions of the correct output size, `vocab_size`). The same `embed_size` used for the CNN output is used as the LSTM input size.
# An embedding layer is used during training to "translate" caption word indices into `embed_size`-dimensional vectors.
#
# During the training forward pass, the output of the LSTM is **NOT** fed back as the input for the next step. Instead, the ground-truth caption tokens are used as inputs (teacher forcing), with the image features submitted before the captions.
# In the sample (prediction) method, the output of the LSTM **IS** fed back as the input for the next step, in order to predict captions.
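#
# This teacher-forcing vs. feedback difference can be sketched as follows (a minimal, hypothetical decoder for illustration, not the exact `model.py` implementation):
#
```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Minimal LSTM decoder: teacher forcing in forward(), greedy decoding in sample()."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Training: image features first, then the ground-truth caption tokens
        # (all but the last) -- the LSTM output is never fed back in.
        inputs = torch.cat([features.unsqueeze(1), self.embed(captions[:, :-1])], dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.fc(hiddens)  # (batch, seq_len, vocab_size)

    def sample(self, features, max_len=20):
        # Inference (batch size 1): each predicted word IS fed back as the next input.
        inputs, states, words = features.unsqueeze(1), None, []
        for _ in range(max_len):
            hiddens, states = self.lstm(inputs, states)
            word = self.fc(hiddens.squeeze(1)).argmax(dim=1)
            words.append(word.item())
            inputs = self.embed(word).unsqueeze(1)
        return words
```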
#
# **Hyperparameters:** to select the best hyperparameters I used two different metrics: **Training LOSS** and **Validation BLEU Score**.
#
# 
# 
#
# I trained and evaluated up to 10 different models (many GPU hours), drawing on successful online implementations such as [this one](https://www.analyticsvidhya.com/blog/2018/04/solving-an-image-captioning-task-using-deep-learning/).
# These 4 performed best (relevant changes from the reference model M1 are in bold):
#
# 
#
# **The M3 model (2 LSTM layers) performs well, and its BLEU score increases smoothly every epoch. For this reason I used it to predict captions.**
#
#
# ### (Optional) Task #2
#
# Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
# - the images in the dataset have varying heights and widths, and
# - if using a pre-trained model, you must perform the corresponding appropriate normalization.
#
# ### Question 2
#
# **Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
#
# **Answer:** The original normalization is appropriate for my CNN (ResNet) architecture, so I kept it. The only change I made was adding a random rotation transform, limited to 20 degrees.
#
# ### Task #3
#
# Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
# ```
# params = list(decoder.parameters()) + list(encoder.embed.parameters())
# ```
#
# ### Question 3
#
# **Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
#
# **Answer:** The ResNet backbone is already pre-trained, so it does not need further training. The encoder layers I added (the final linear layer and the batch normalization layer) do need to be trained, and the decoder must be trained entirely from scratch.
# This means, in code:
# ```
# params = list(decoder.parameters()) + list(encoder.embed.parameters()) + list(encoder.bn.parameters())
# ```
#
#
# ### Task #4
#
# Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
#
# ### Question 4
#
# **Question:** How did you select the optimizer used to train your model?
#
# **Answer:** I used the Adam optimizer because it usually performs better than plain SGD; SGD with momentum would be another good choice. [This article](https://www.analyticsvidhya.com/blog/2018/04/solving-an-image-captioning-task-using-deep-learning/) and [this one](https://shaoanlu.wordpress.com/2017/05/29/sgd-all-which-one-is-the-best-optimizer-dogs-vs-cats-toy-experiment/) were my references.
#
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, EncoderCNN152, DecoderRNN
import math
# +
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 128 # batch size
vocab_threshold = 5 # minimum word count threshold
vocab_from_file = False # if True, load existing vocab file
embed_size = 256 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 5 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
LSTM_layers = 2 #number of LSTM layers
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.RandomRotation(20), # randomly rotate the image by angle
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN152(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size, num_layers=LSTM_layers)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params, lr=0.001)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
# -
# <a id='step2'></a>
# ## Step 2: Train your Model
#
# Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
#
# It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
#
# You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
#
# ```python
# # Load pre-trained weights before resuming training.
# encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
# decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
# ```
#
# While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
#
# ### A Note on Tuning Hyperparameters
#
# To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
#
# However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
#
# For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
#
# That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
# +
import torch.utils.data as data
import numpy as np
import os
#import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
#response = requests.request("GET",
# "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
# headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
#requests.request("POST",
# "https://nebula.udacity.com/api/v1/remote/keep-alive",
# headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
# -
# <a id='step3'></a>
# ## Step 3: (Optional) Validate your Model
#
# To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
#
# If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
# - the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
# - the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
#
# The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as TEOR and Cider) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset.
# ## Validation
#
# The following code is used to evaluate models after each training epoch.
# My approach is:
# - calculate average Loss as done during training
# - calculate average BLEU-1, BLEU-2, BLEU-3, BLEU-4 scores
#
# ### About BLEU scores
#
# I followed the reference found [here](https://machinelearningmastery.com/calculate-bleu-score-for-text-python/) and used the Python NLTK package.
# BLEU-4 is probably the best score for evaluating the accuracy of image captions: for this reason I chose the model (and its hyperparameters) with the best-looking BLEU-4 score (see the plots at the beginning of this notebook).
#
# Score calculation is implemented in the `get_avg_bleu_score` method. The default score is BLEU-4, but the optional `weights` parameter can be used to calculate other BLEU scores.
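#
# The effect of the `weights` parameter can be sketched with NLTK directly (toy sentences, not project data):
#
```python
from nltk.translate.bleu_score import sentence_bleu

reference = "a dog runs in the park".split()
hypothesis = "a dog runs in a park".split()

# BLEU-1 only measures unigram precision, so a single wrong article costs little.
bleu1 = sentence_bleu([reference], hypothesis, weights=(1, 0, 0, 0))
# BLEU-4 (the default weights) also requires matching 2-, 3- and 4-grams,
# so it penalizes the mistake much more heavily.
bleu4 = sentence_bleu([reference], hypothesis, weights=(0.25, 0.25, 0.25, 0.25))
print(round(bleu1, 3), round(bleu4, 3))  # bleu1 is higher than bleu4
```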
#
# ### Image Transform
#
# I use the same normalization as the training transform, but remove the random rotation and flip so that images are submitted in their original orientation only.
#
# ### Data loader
#
# A new data loader is implemented in the **data_loader_val.py** file.
# It is very similar to the original data loader defined for testing, but points to the validation images at `'cocoapi/images/val2014/'`.
#
# ### Captions
#
# The `clean_sentence` method cleans captions by removing words after the `<end>` tag as well as other special tags.
# +
## Create image tranform and loader to be used in validation
from data_loader_val import get_loader_val
import os
from nltk.translate.bleu_score import sentence_bleu
import torch.utils.data as data
batch_size = 128
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#image transform below.
transform_val = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
data_loader_val = get_loader_val(transform=transform_val,
batch_size=batch_size)
# +
# Remove unrequired words in caption (special tags and after <end> tag).
# <unk> tag is intentionally preserved to display missing words in captions.
def clean_sentence(output, idx2word):
sentence = ''
for x in output:
word = idx2word[x]
if word == '<end>':
break
elif word == '<start>':
pass
elif word == '.':
sentence += word
else:
sentence += ' ' + word
return sentence.strip()
# Calculate and return average BLEU score of a batch of captions. Default is BLEU-4 score.
def get_avg_bleu_score(outputs, references, idx2word, weights=(0.25, 0.25, 0.25, 0.25)):
score = 0
for i in range(len(outputs)):
output = clean_sentence(outputs[i], idx2word)
reference = clean_sentence(references[i], idx2word)
score += sentence_bleu([reference.split()], output.split(), weights=weights)  # NLTK expects token lists, not raw strings
score /= len(outputs)
return score
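# As a quick sanity check, `clean_sentence` behaves like this on a toy vocabulary (standalone copy of the logic above, for illustration only):
```python
# Standalone copy of the clean_sentence logic defined in the notebook.
def clean_sentence(output, idx2word):
    sentence = ''
    for x in output:
        word = idx2word[x]
        if word == '<end>':
            break
        elif word == '<start>':
            pass
        elif word == '.':
            sentence += word
        else:
            sentence += ' ' + word
    return sentence.strip()

idx2word = {0: '<start>', 1: 'a', 2: 'dog', 3: 'runs', 4: '.', 5: '<end>', 6: 'ignored'}
# Tokens after <end> are dropped, <start> is skipped, '.' attaches to the last word.
print(clean_sentence([0, 1, 2, 3, 4, 5, 6], idx2word))  # -> "a dog runs."
```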
# +
# Redefine model and load trained weights
encoder_file = 'encoder-5.pkl'
decoder_file = 'decoder-5.pkl'
# Best performing hyperparameters
embed_size = 256
hidden_size = 512
LSTM_layers = 2
# The size of the vocabulary.
vocab_size = len(data_loader_val.dataset.vocab)
# Initialize the encoder and decoder, and set each to inference mode.
encoder = EncoderCNN152(embed_size)
encoder.eval()
decoder = DecoderRNN(embed_size, hidden_size, vocab_size, num_layers=LSTM_layers)
decoder.eval()
# Load the trained weights.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
# Move models to GPU if CUDA is available.
encoder.to(device)
decoder.to(device)
# Define the loss function (same used in training, to compare results).
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# +
total_step_val = math.ceil(len(data_loader_val.dataset.caption_lengths) / data_loader_val.batch_sampler.batch_size)
bleu1_score, bleu2_score, bleu3_score, bleu4_score = 0, 0, 0, 0
for i_step in range(1, total_step_val+1):
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader_val.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader_val.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader_val))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
# Get predictions and output from decoder (to calculate LOSS)
outputs = decoder(features, captions)
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
predictions = decoder.sample(features, max_len = captions.shape[1])
# Get BLEU scores
bleu1_score += get_avg_bleu_score(predictions, captions.tolist(),
data_loader_val.dataset.vocab.idx2word,
weights=(1, 0, 0, 0))
bleu2_score += get_avg_bleu_score(predictions, captions.tolist(),
data_loader_val.dataset.vocab.idx2word,
weights=(0.5, 0.5, 0, 0))
bleu3_score += get_avg_bleu_score(predictions, captions.tolist(),
data_loader_val.dataset.vocab.idx2word,
weights=(0.33, 0.33, 0.33, 0))
bleu4_score += get_avg_bleu_score(predictions, captions.tolist(),
data_loader_val.dataset.vocab.idx2word)
# Get validation statistics.
stats = 'Validation Step [%d/%d], Loss: %.4f, BLEU(1/2/3/4): %.4f, %.4f, %.4f, %.4f' % (i_step, total_step_val, loss.item(), bleu1_score/i_step, bleu2_score/i_step, bleu3_score/i_step, bleu4_score/i_step)
# Print validation statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python3
# language: python
# metadata:
# cinder_runtime: false
# fbpkg_supported: true
# is_prebuilt: true
# kernel_name: bento_kernel_ae
# nightly_builds: false
# name: python3
# ---
# + [markdown] code_folding=[] hidden_ranges=[] originalKey="c31f62e6-7593-4975-ac72-c8d1a59fe3b7" showInput=false
# ## Constraint Active Search for Multiobjective Experimental Design
#
# In this tutorial we show how to implement the Expected Coverage Improvement (ECI) [1] acquisition function in BoTorch. For a number of outcome constraints, ECI tries to efficiently discover the feasible region and simultaneously sample diverse feasible configurations. Given a user-specified punchout radius $r$, we center a sphere with that radius around each evaluated configuration. The total coverage is now given by the volume of the union of these spheres intersected with the feasible region; see the paper and, in particular, Figure 2 for a full description of how ECI works.
#
# By design, ECI prefers candidates that are in unexplored regions since the candidate's corresponding sphere won't intersect with the spheres around the previously evaluated configurations. On the other hand, ECI also prefers configurations that are likely to satisfy the constraints and to give an improvement in the total coverage. This results in an exploitation-exploration trade-off similar to other acquisition functions.
#
# ECI may be estimated using the following equation:
# $$
# \text{ECI}(x) = \sum_{x' \in \mathbb{N}(x) \setminus \mathbb{N}_{r}(X)} p(Z(x') = 1 \;|\; \mathcal{D}_t).
# $$
#
# where $\mathbb{N}(x) \setminus \mathbb{N}_{r}(X)$ is a set of points generated via Monte Carlo that lie inside a sphere of radius $r$ around $x$, but sufficiently far from the set of known evaluations $X$ (where sufficiently far is defined by the punchout radius $r$). The function $p(Z(x') = 1 \;|\; \mathcal{D}_t)$ is the probability that the GP at $x'$ satisfies a user-specified threshold value, or threshold values in the case of multiple objective functions.
#
# [1]: [Malkomes et al., Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, Proceedings of the 38th International Conference on Machine Learning, 2021](http://proceedings.mlr.press/v139/malkomes21a/malkomes21a.pdf).
# + code_folding=[] executionStartTime=1638489228284 executionStopTime=1638489229640 hidden_ranges=[] originalKey="<KEY>" requestMsgId="9896cb02-0d0e-498f-bdf7-86ea14baaf40"
import os
import matplotlib.pyplot as plt
import torch
from botorch.acquisition.monte_carlo import MCAcquisitionFunction
from botorch.acquisition.objective import IdentityMCObjective
from botorch.fit import fit_gpytorch_model
from botorch.models import ModelListGP, SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.optim import optimize_acqf
from botorch.utils.sampling import sample_hypersphere
from botorch.utils.transforms import t_batch_mode_transform
from gpytorch.constraints import Interval
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.mlls import ExactMarginalLogLikelihood
from torch.quasirandom import SobolEngine
# %matplotlib inline
SMOKE_TEST = os.environ.get("SMOKE_TEST")
# + code_folding=[] executionStartTime=1638489229684 executionStopTime=1638489230490 hidden_ranges=[] originalKey="<KEY>" requestMsgId="b4b78cb1-b0d4-4203-a97b-7a293ea418d4"
tkwargs = {
"device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
"dtype": torch.double,
}
# + [markdown] code_folding=[] hidden_ranges=[] originalKey="<KEY>" showInput=false
# To start, we need to be able to sample points in $\mathbb{N}(x) \setminus \mathbb{N}_{r}(X)$. We can generate a pool of points and use standard rejection sampling to do so, but this leads to an acquisition function that isn't immediately differentiable; rejection sampling essentially assigns a binary weight of either 0 or 1 to each point in the sample pool, which is not a differentiable function.
#
#
# In order to make the acquisition function differentiable, we rely on a differentiable approximation of this binary weight function. For example, `smooth_box_mask` is a continuous differentiable approximation of $a < x < b$ (see the plot below for a visualization). A larger value of `eps` will make the sigmoid less steep and result in a smoother (and easier to optimize) but less accurate acquisition function.
# + executionStartTime=1638489230493 executionStopTime=1638489230509 originalKey="63c0a300-c3a1-49bb-a6ba-41cf9dfa9632" requestMsgId="63c0a300-c3a1-49bb-a6ba-41cf9dfa9632"
def smooth_mask(x, a, eps=2e-3):
"""Returns 0ish for x < a and 1ish for x > a"""
return torch.nn.Sigmoid()((x - a) / eps)
def smooth_box_mask(x, a, b, eps=2e-3):
"""Returns 1ish for a < x < b and 0ish otherwise"""
return smooth_mask(x, a, eps) - smooth_mask(x, b, eps)
# + code_folding=[] executionStartTime=1638489230587 executionStopTime=1638489233802 hidden_ranges=[] originalKey="<KEY>" requestMsgId="7b49f71b-f131-4600-96fd-5aa581212202"
x = torch.linspace(-2, 2, 500, **tkwargs)
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].plot(x.cpu(), smooth_mask(x, -1).cpu(), "b")
ax[1].plot(x.cpu(), smooth_box_mask(x, -1, 1).cpu(), "b")
plt.show()
# + [markdown] code_folding=[] hidden_ranges=[] originalKey="7ff5ed82-355b-45b8-91f9-823c41c46efc" showInput=false
# ## Implementation of ECI
#
# Once we have defined our smooth mask functions, we can compute a differentiable approximation of ECI in a straightforward manner using Monte Carlo (MC). We use the popular variance reduction technique of Common random numbers (CRN).
#
# We first use a low discrepancy sequence to generate a set of base samples. We integrate (sum) over these base samples to approximate the ECI acquisition function. Fixing these base samples makes the method deterministic and by using the smooth masks defined earlier, we can filter out infeasible points while still having a differentiable acquisition function.
#
# This implementation assumes that the GP models for the different outputs are independent and that each constraint only affects one output (simple box constraints like f(x) <= 0.5).
# + code_folding=[] executionStartTime=1638489233910 executionStopTime=1638489233950 hidden_ranges=[] originalKey="<KEY>" requestMsgId="5dd0a6af-0bde-4e57-8bdd-d53baea75075"
class ExpectedCoverageImprovement(MCAcquisitionFunction):
def __init__(
self,
model,
constraints,
punchout_radius,
bounds,
num_samples=512,
**kwargs,
):
"""Expected Coverage Improvement (q=1 required, analytic)
Right now, we assume that all the models in the ModelListGP have
the same training inputs.
Args:
model: A ModelListGP object containing models matching the corresponding constraints.
All models are assumed to have the same training data.
constraints: List containing 2-tuples with (direction, value), e.g.,
[('gt', 3), ('lt', 4)]. It is necessary that
len(constraints) == model.num_outputs.
punchout_radius: Positive value defining the desired minimum distance between points
bounds: torch.tensor whose first row is the lower bounds and second row is the upper bounds
num_samples: Number of samples for MC integration
"""
super().__init__(model=model, objective=IdentityMCObjective(), **kwargs)
assert len(constraints) == model.num_outputs
assert all(direction in ("gt", "lt") for direction, _ in constraints)
assert punchout_radius > 0
self.constraints = constraints
self.punchout_radius = punchout_radius
self.bounds = bounds
self.base_points = self.train_inputs
self.ball_of_points = self._generate_ball_of_points(
num_samples=num_samples,
radius=punchout_radius,
device=bounds.device,
dtype=bounds.dtype,
)
self._thresholds = torch.tensor(
[threshold for _, threshold in self.constraints]
).to(bounds)
assert (
all(ub > lb for lb, ub in self.bounds.T) and len(self.bounds.T) == self.dim
)
@property
def num_outputs(self):
return self.model.num_outputs
@property
def dim(self):
return self.train_inputs.shape[-1]
@property
def train_inputs(self):
return self.model.models[0].train_inputs[0]
def _generate_ball_of_points(
self, num_samples, radius, device=None, dtype=torch.double
):
"""Creates a ball of points to be used for MC."""
tkwargs = {"device": device, "dtype": dtype}
z = sample_hypersphere(d=self.dim, n=num_samples, qmc=True, **tkwargs)
r = torch.rand(num_samples, 1, **tkwargs) ** (1 / self.dim)
return radius * r * z
def _get_base_point_mask(self, X):
distance_matrix = self.model.models[0].covar_module.base_kernel.covar_dist(
X, self.base_points
)
return smooth_mask(distance_matrix, self.punchout_radius)
def _estimate_probabilities_of_satisfaction_at_points(self, points):
"""Estimate the probability of satisfying the given constraints."""
posterior = self.model.posterior(X=points)
mus, sigma2s = posterior.mean, posterior.variance
dist = torch.distributions.normal.Normal(mus, sigma2s.sqrt())
norm_cdf = dist.cdf(self._thresholds)
probs = torch.ones(points.shape[:-1]).to(points)
for i, (direction, _) in enumerate(self.constraints):
probs = probs * (
norm_cdf[..., i] if direction == "lt" else 1 - norm_cdf[..., i]
)
return probs
@t_batch_mode_transform(expected_q=1)
def forward(self, X):
"""Evaluate Expected Improvement on the candidate set X."""
ball_around_X = self.ball_of_points + X
domain_mask = smooth_box_mask(
ball_around_X, self.bounds[0, :], self.bounds[1, :]
).prod(dim=-1)
num_points_in_integral = domain_mask.sum(dim=-1)
base_point_mask = self._get_base_point_mask(ball_around_X).prod(dim=-1)
prob = self._estimate_probabilities_of_satisfaction_at_points(ball_around_X)
masked_prob = prob * domain_mask * base_point_mask
y = masked_prob.sum(dim=-1) / num_points_in_integral
return y
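# One non-obvious detail in `_generate_ball_of_points` above is the `r ** (1 / dim)` radial scaling: raising a uniform variate to the power 1/d before multiplying the unit-sphere directions yields points distributed uniformly over the *volume* of the ball rather than clustered near the center. A standalone sketch (plain torch, function name is illustrative):
```python
import torch

def uniform_ball(num_samples, radius, dim):
    # Uniform directions on the unit sphere.
    z = torch.randn(num_samples, dim)
    z = z / z.norm(dim=-1, keepdim=True)
    # Inverse radial CDF: U ** (1/d) makes the density uniform over the volume.
    r = torch.rand(num_samples, 1) ** (1 / dim)
    return radius * r * z

pts = uniform_ball(2000, 0.5, 3)
assert pts.norm(dim=-1).max() <= 0.5 + 1e-6  # all points lie inside the ball
```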
# + code_folding=[] executionStartTime=1638489234035 executionStopTime=1638489234089 hidden_ranges=[] originalKey="b56e4297-9927-4a5e-aa8f-f5e93181e44d" requestMsgId="b56e4297-9927-4a5e-aa8f-f5e93181e44d"
def get_and_fit_gp(X, Y):
"""Simple method for creating a GP with one output dimension.
X is assumed to be in [0, 1]^d.
"""
assert Y.ndim == 2 and Y.shape[-1] == 1
likelihood = GaussianLikelihood(noise_constraint=Interval(1e-6, 1e-3)) # Noise-free
octf = Standardize(m=Y.shape[-1])
gp = SingleTaskGP(X, Y, likelihood=likelihood, outcome_transform=octf)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)
return gp
# + [markdown] code_folding=[] hidden_ranges=[] originalKey="<KEY>" showInput=false
# ### Simple 1D function
#
# To sanity check things, we consider the ECI acquisition function on a one-dimensional toy problem.
# + executionStartTime=1638489234145 executionStopTime=1638489234200 originalKey="<KEY>" requestMsgId="cc435c3c-c65f-4446-a33e-fd5cda030962"
def yf(x):
return (1 - torch.exp(-4 * (x[:, 0] - 0.4) ** 2)).unsqueeze(-1)
x = torch.tensor([0, 0.15, 0.25, 0.4, 0.8, 1.0], **tkwargs).unsqueeze(-1)
y = yf(x)
xx = torch.linspace(0, 1, 200, **tkwargs).unsqueeze(-1)
yy = yf(xx)
# + [markdown] originalKey="<KEY>" showInput=false
# ### Create an ECI acquisition function
# Our implementation assumes that the GP is passed in as a `ModelListGP` and that the GPs match the corresponding constraints. As an example, assume we have two outputs, represented by `gp1` and `gp2` and two constraints corresponding to output 1 and a third constraint corresponding to output 2. In that case we will create a model list GP as `ModelListGP(gp1, gp1, gp2)` so they match the constraints.
# + code_folding=[] executionStartTime=1638489234253 executionStopTime=1638489235584 hidden_ranges=[] originalKey="<KEY>" requestMsgId="9efe991c-8256-4c7c-b61f-8abb5d258d40"
gp = get_and_fit_gp(x, y)
model_list_gp = ModelListGP(gp, gp)
constraints = [("lt", 0.3), ("gt", 0.05)]
punchout_radius = 0.03
bounds = torch.tensor([(0, 1)], **tkwargs).T
eci = ExpectedCoverageImprovement(
model=model_list_gp,
constraints=constraints,
punchout_radius=punchout_radius,
bounds=bounds,
num_samples=512 if not SMOKE_TEST else 4,
)
# + [markdown] originalKey="<KEY>" showInput=false
# ### Optimize the acquisition function
# + code_folding=[] executionStartTime=1638489235787 executionStopTime=1638489236864 hidden_ranges=[] originalKey="1ae10691-8d4e-40e7-8c32-f15a35ddf590" requestMsgId="1ae10691-8d4e-40e7-8c32-f15a35ddf590" showInput=true
best_candidate, best_eci_value = optimize_acqf(
acq_function=eci,
bounds=torch.tensor([[0.0], [1.0]], **tkwargs),
q=1,
num_restarts=10,
raw_samples=20, # use a small number here to make sure the optimization works
)
print(f"Best candidate: {best_candidate.cpu().item():.3f}")
# + [markdown] originalKey="15a4d7cf-be03-4e52-9792-e3a680f37bb7" showInput=false
# ### Plot the GP and the ECI acquisition function
# The left plot shows the GP posterior with a 95% confidence interval. The two horizontal lines indicate the feasible region defined by $0.05 \leq f(x) \leq 0.3$. These inequality constraints implicitly define a feasible region, outside which ECI has value zero.
#
# We can see in the right plot that ECI indeed has a nonzero value inside the feasible region and a zero value outside. We also optimize the acquisition function and mark its argmax with a black star; the argmax is around $x=0.62$. This is reasonable because ECI seeks to select diverse points within the feasible region, and $x=0.62$ is far away from the other evaluations and thus has the highest diversity.
# + code_folding=[] executionStartTime=1638489236964 executionStopTime=1638489237535 hidden_ranges=[] originalKey="5f5b4b6a-4d53-4528-8420-53e4f9358f5c" requestMsgId="5f5b4b6a-4d53-4528-8420-53e4f9358f5c"
with torch.no_grad():
posterior = gp.posterior(X=xx.unsqueeze(1))
ymean, yvar = posterior.mean.squeeze(-1), posterior.variance.squeeze(-1)
eci_vals = eci(xx.unsqueeze(1))
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
ax = axes[0]
ax.plot(xx[:, 0].cpu(), ymean[:, 0].cpu(), "b")
ax.fill_between(
xx[:, 0].cpu(),
ymean[:, 0].cpu() - 1.96 * yvar[:, 0].sqrt().cpu(),
ymean[:, 0].cpu() + 1.96 * yvar[:, 0].sqrt().cpu(),
alpha=0.1,
color="b"
)
ax.plot(x[:, 0].cpu(), y[:, 0].cpu(), "or")
ax.axhline(0.05, 0, 1)
ax.axhline(0.3, 0, 1)
ax = axes[1]
ax.plot(xx[:, 0].cpu(), eci_vals.detach().cpu())
ax.plot(x[:, 0].cpu(), torch.zeros(len(x), **tkwargs).cpu(), "or")
ax.plot(best_candidate.cpu(), best_eci_value.cpu(), "*k", ms=10)
ax.set_title("ECI", fontsize=14)
plt.show()
# + [markdown] code_folding=[] hidden_ranges=[] originalKey="33ea647e-bdaf-4264-ab65-3e6df4ba8c6e" showInput=false
# ## Full 2D CAS-loop
# We now create a simple function with two outputs that we will consider under the two constraints $f_1(x) \leq 0.75$ and $f_2(x) \geq 0.55$. In this particular example, $f_1(x)$ and $f_2(x)$ are the same function for simplicity.
#
# The CAS loop follows the prototypical BO loop:
# 1. Given a surrogate model, maximize ECI to select the next evaluation x.
# 2. Observe f(x).
# 3. Update the surrogate model.
# + code_folding=[] executionStartTime=1638489237543 executionStopTime=1638489237685 hidden_ranges=[] originalKey="691460ed-a2c8-45b5-8dc9-c6d8c87ee9d7" requestMsgId="691460ed-a2c8-45b5-8dc9-c6d8c87ee9d7"
def yf2d(x):
v = torch.exp(-2 * (x[:, 0] - 0.3) ** 2 - 4 * (x[:, 1] - 0.6) ** 2)
return torch.stack((v, v), dim=-1)
bounds = torch.tensor([[0, 0], [1, 1]], **tkwargs)
lb, ub = bounds
dim = len(lb)
constraints = [("lt", 0.75), ("gt", 0.55)]
punchout_radius = 0.1
# + [markdown] originalKey="<KEY>" showInput=false
# ### CAS loop using 5 initial Sobol points and 15 ECI iterations
# + code_folding=[] executionStartTime=1638489237803 executionStopTime=1638489266352 hidden_ranges=[] originalKey="6d77353b-8dda-4835-9c6a-b0a53fddc67c" requestMsgId="6d77353b-8dda-4835-9c6a-b0a53fddc67c"
num_init_points = 5
num_total_points = 20 if not SMOKE_TEST else 6
X = lb + (ub - lb) * SobolEngine(dim, scramble=True).draw(num_init_points).to(**tkwargs)
Y = yf2d(X)
while len(X) < num_total_points:
# We don't have to normalize X since the domain is [0, 1]^2. Make sure to
# appropriately adjust the punchout radius if the domain is normalized.
gp_models = [get_and_fit_gp(X, Y[:, i : i + 1]) for i in range(Y.shape[-1])]
model_list_gp = ModelListGP(gp_models[0], gp_models[1])
eci = ExpectedCoverageImprovement(
model=model_list_gp,
constraints=constraints,
punchout_radius=punchout_radius,
bounds=bounds,
num_samples=512 if not SMOKE_TEST else 4,
)
x_next, _ = optimize_acqf(
acq_function=eci,
bounds=bounds,
q=1,
num_restarts=10 if not SMOKE_TEST else 2,
raw_samples=512 if not SMOKE_TEST else 4,
)
y_next = yf2d(x_next)
X = torch.cat((X, x_next))
Y = torch.cat((Y, y_next))
# + [markdown] code_folding=[] hidden_ranges=[] originalKey="255bba4f-4d9a-46cc-aa66-16b90287824a" showInput=false
# ### Plot the selected points
# We plot the feasible region and the points selected by ECI below. The feasible region is outlined with a black ring, and points selected by ECI are marked in green (feasible) and red (infeasible). Observe that, by design, ECI selects a diverse, i.e., well-spaced, set of points inside the feasible region.
# + code_folding=[] customInput executionStartTime=1638489266464 executionStopTime=1638489266516 hidden_ranges=[] originalKey="<KEY>" requestMsgId="6b62af84-01c0-4971-9122-bd5f01b9f31b" showInput=true
N1, N2 = 50, 50
Xplt, Yplt = torch.meshgrid(
torch.linspace(0, 1, N1, **tkwargs), torch.linspace(0, 1, N2, **tkwargs)
)
xplt = torch.stack(
(
torch.reshape(Xplt, (Xplt.shape[0] * Xplt.shape[1],)),
torch.reshape(Yplt, (Yplt.shape[0] * Yplt.shape[1],)),
),
dim=1,
)
yplt = yf2d(xplt)
Zplt = torch.reshape(yplt[:, 0], (N1, N2)) # Since f1(x) = f2(x)
# + code_folding=[] executionStartTime=1638489266564 executionStopTime=1638489267143 hidden_ranges=[] originalKey="a44c258c-0373-4c68-9887-9ae7a57bcccc" requestMsgId="a44c258c-0373-4c68-9887-9ae7a57bcccc"
def identify_samples_which_satisfy_constraints(X, constraints):
"""
Takes in values (a1, ..., ak, o) and returns (a1, ..., ak, o)
True/False values, where o is the number of outputs.
"""
successful = torch.ones(X.shape).to(X)
for model_index in range(X.shape[-1]):
these_X = X[..., model_index]
direction, value = constraints[model_index]
successful[..., model_index] = (
these_X < value if direction == "lt" else these_X > value
)
return successful
fig, ax = plt.subplots(figsize=(8, 6))
h1 = ax.contourf(Xplt.cpu(), Yplt.cpu(), Zplt.cpu(), 20, cmap="Blues", alpha=0.6)
fig.colorbar(h1)
ax.contour(Xplt.cpu(), Yplt.cpu(), Zplt.cpu(), [0.55, 0.75], colors="k")
feasible_inds = (
identify_samples_which_satisfy_constraints(Y, constraints)
.prod(dim=-1)
.to(torch.bool)
)
ax.plot(X[feasible_inds, 0].cpu(), X[feasible_inds, 1].cpu(), "sg", label="Feasible")
ax.plot(
X[~feasible_inds, 0].cpu(), X[~feasible_inds, 1].cpu(), "sr", label="Infeasible"
)
ax.legend(loc=[0.7, 0.05])
ax.set_title("$f_1(x)$") # Recall that f1(x) = f2(x)
ax.set_xlabel("$x_1$")
ax.set_ylabel("$x_2$")
ax.set_aspect("equal", "box")
ax.set_xlim([-0.05, 1.05])
ax.set_ylim([-0.05, 1.05])
plt.show()
# + executionStartTime=1638489267152 executionStopTime=1638489267253 originalKey="0ff4a95d-b556-4a21-b794-184ba4181a49" requestMsgId="0ff4a95d-b556-4a21-b794-184ba4181a49"
# Source: tutorials/constraint_active_search.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Kozeny-Carman equation
#
# \begin{equation}
# K = \dfrac{d_p^2}{180}\dfrac{\theta^3}{(1-\theta)^2} \dfrac{\rho g }{\mu}
# \end{equation}
# %reset -f
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import root
# Globals
rho = 1000.  # kg/m3
g = 9.81     # m/s2
mu = 0.001   # N s/m2
dp = 4.4E-4  # m
def KozenyCarman(theta):
return dp**2 * theta**3 * rho * g / (180 * (1-theta)**2 * mu)
def findTheta(K_expected=1.0E-8):
def minimizer(theta):
K_init = KozenyCarman(theta)
return (K_init - K_expected)**2
solution = root(minimizer,0.1)
print(solution.message + f" >> Porosity = {solution.x}")
return solution.x
# -
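# + [markdown]
# Because $K(\theta)$ is strictly increasing on $(0,1)$, the porosity can
# alternatively be recovered by bracketing a sign change with
# `scipy.optimize.brentq` instead of squaring the residual. A sketch only
# (the bracket endpoints are an assumption, not part of the notebook above):

```python
from scipy.optimize import brentq

rho, g, mu, dp = 1000., 9.81, 0.001, 4.4E-4  # same globals as above

def kozeny_carman(theta):
    # Kozeny-Carman hydraulic conductivity [m/s]
    return dp**2 * theta**3 * rho * g / (180 * (1 - theta)**2 * mu)

def invert_k(K_expected):
    # K is monotone in theta on (0, 1), so the residual changes sign
    # exactly once between the bracket endpoints
    return brentq(lambda th: kozeny_carman(th) - K_expected, 1e-6, 1 - 1e-6)

theta = invert_k(1.0E-8)
```

A bracketing method avoids the flat minimum that the squared residual hands to a gradient-based root finder.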
porosity = np.linspace(0.001,0.5,100)
hydrCond = KozenyCarman(porosity)
# +
fig,ax = plt.subplots(figsize=(8,5),facecolor="white");
ax.plot(porosity,hydrCond,lw=3,c="blue",label='Kozeny-Carman')
ax.plot(porosity,840*(porosity**3.1),lw=3,c="red",label="Chen2010")
ax.set_yscale('log')
ax.set_xlabel("Porosity $\\theta$ ")
ax.set_ylabel("Hydraulic conductivity \n$K$ [m/s]")
ax.axhline(y=1.0E-8,lw=1,ls='dotted')
ax.legend()
plt.show()
# -
theta2 = findTheta(1.0E-7)
print("{:.4E} m/s".format(KozenyCarman(0.35)))
from jupypft import attachmentRateCFT
katt,_ = attachmentRateCFT.attachmentRate(dp=1.0E-7,dc=4.4E-4,
q=0.35E-3,
theta=0.35,
visco=0.001,
rho_f=1000.,
rho_p=1050.0,
A=1.0E-20,
T=298.0,
alpha=0.0043273861959162,
debug=True)
"{:.6E}".format(0.0043273861959162)
1.0E-4/katt
# Source: notebooks/_old/.ipynb_checkpoints/KozenyCarman-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### This project loads the S&P CSV data from the data.world website, transforms it to the required data types, and loads it into a database table
import pandas as pd
from sqlalchemy import create_engine
csv_file = "../Resources/sp.csv"
sp_data_df = pd.read_csv(csv_file)
sp_data_df.head()
new_sp_data_df = sp_data_df[['Date', 'Price', 'Open', 'High', 'Low', 'Change %']].copy()
new_sp_data_df.head()
new_sp_data_df = new_sp_data_df.rename(columns={"Date": "trans_date",
"Price": "price",
"Open": "openvalue",
"High": "dayhighvalue",
"Low": "daylowvalue",
"Change %": "percent_change"})
new_sp_data_df.drop_duplicates("trans_date", inplace=True)
# +
# Remove thousands separators and convert the price columns to float
for col in ['price', 'openvalue', 'dayhighvalue', 'daylowvalue']:
    new_sp_data_df[col] = (
        new_sp_data_df[col]
        .astype('string')
        .str.replace(',', '')
        .astype('float')
    )

# percent_change is left as a string (e.g. "0.25%"); strip the '%'
# and convert to float here if a numeric column is needed later.
# -
new_sp_data_df.head()
rds_connection_string = "postgres:admin@localhost:5432/finance_db"
# format: "<username>:<password>@localhost:5432/<database>"
engine = create_engine(f'postgresql://{rds_connection_string}')
engine.table_names()
new_sp_data_df.to_sql(name='sandp', con=engine, if_exists='append', index=False)
pd.read_sql_query('select * from sandp', con=engine).head()
# Source: etl_project/data_etl.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="YL2_mjdRyaSg"
# # Neuromatch Academy: Week 0, Day 1, Tutorial 1
# # Python for NMA - LIF Neuron - Part I
#
# __Content creators:__ <NAME> and the [CCNSS](https://www.ccnss.org/) team
#
# __Content reviewers:__ <NAME>, <NAME>, <NAME>
# + [markdown] colab_type="text" id="JTnwS5ZOyaSm"
# ---
# ## Tutorial objectives
# NMA students, you are going to use Python skills to advance your understanding of neuroscience. Think of Python and neuroscience as two legs that support and strengthen each other: one has "Python" written on it, the other has "Neuro", and step by step they go.
#
#
#
# In this notebook, we'll practice basic operations with Python variables, control flow, plotting, and a sneak peek at `np.array`, the workhorse of scientific computation in Python.
#
#
#
# Each new concept in Python will unlock a different aspect of our implementation of a **Leaky Integrate-and-Fire (LIF)** neuron. And as if it couldn't get any better, we'll visualize the evolution of its membrane potential in time, and extract its statistical properties!
#
#
#
# Well then, let's start our walk today!
# + [markdown] colab_type="text" id="JgF7_zvb8d0C"
# ---
# ## Imports and helper functions
# Please execute the cell(s) below to initialize the notebook environment.
# + cellView="both" colab={} colab_type="code" id="YtbZcDqIiOUZ"
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo
# + cellView="form" colab={} colab_type="code" id="jv3SY0TOCFOT"
# @title Figure settings
import ipywidgets as widgets
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + [markdown] colab_type="text" id="OvsPehLtdt7_"
# ---
# ## Neuron model
# A *membrane equation* and a *reset condition* define our *leaky-integrate-and-fire (LIF)* neuron:
#
#
# \begin{align*}
# \\
# &\tau_m\,\frac{d}{dt}\,V(t) = E_{L} - V(t) + R\,I(t) &\text{if }\quad V(t) \leq V_{th}\\
# \\
# &V(t) = V_{reset} &\text{otherwise}\\
# \\
# \end{align*}
#
# where $V(t)$ is the membrane potential, $\tau_m$ is the membrane time constant, $E_{L}$ is the leak potential, $R$ is the membrane resistance, $I(t)$ is the synaptic input current, $V_{th}$ is the firing threshold, and $V_{reset}$ is the reset voltage. We can also write $V_m$ for the membrane potential, which is convenient for plot labels.
#
# The membrane equation is an *ordinary differential equation (ODE)* that describes the time evolution of membrane potential $V(t)$ in response to synaptic input and the leaking of charge across the cell membrane.
#
# **Note that, in this tutorial, the neuron model will not implement a spiking mechanism.**
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="oRiNs9d8gx9d" outputId="2e484b75-da0b-4d19-8b11-19020f6cdfe1"
# @title Video: Synaptic input
video = YouTubeVideo(id='UP8rD2AwceM', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="h6-SFBNR5An6"
# ### Exercise 1
# We start by defining and initializing the main simulation variables.
#
# **Suggestions**
# * Modify the code below to print the simulation parameters
# + colab={} colab_type="code" id="f-lhVr8Vint0"
# Exercise 1
# insert your code here
# t_max = 150e-3   # second
# dt = 1e-3        # second
# tau = 20e-3      # second
# el = -60e-3      # volt
# vr = -70e-3      # volt
# vth = -50e-3     # volt
# r = 100e6        # ohm
# i_mean = 25e-11  # ampere
# print(t_max, dt, tau, el, vr, vth, r, i_mean)
# + [markdown] colab_type="text" id="HCLfBTp0cbnE"
# **SAMPLE OUTPUT**
#
# ```
# 0.15 0.001 0.02 -0.06 -0.07 -0.05 100000000.0 2.5e-10
# ```
# + [markdown] cellView="code" colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="text" id="ze86GbV86_iT" outputId="0d254e88-8516-4cab-d0ff-af3928f038bb"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_ea2e072e.py)
#
#
# + [markdown] colab_type="text" id="nIs4B2HoSGPb"
# ### Exercise 2
# 
#
# We start with a sinusoidal model to simulate the synaptic input $I(t)$ given by:
# \begin{align*}
# \\
# I(t)=I_{mean}\left(1+\sin\left(\frac{2 \pi}{0.01}\,t\right)\right)\\
# \\
# \end{align*}
#
# Compute the values of synaptic input $I(t)$ between $t=0$ and $t=0.009$ with step $\Delta t=0.001$.
#
# **Suggestions**
# * Loop variable `step` for 10 steps (`step` takes values from `0` to `9`)
# * At each time step
# * Compute the value of `t` with variables `step` and `dt`
# * Compute the value of `i`
# * Print `i`
# * Use `np.pi` and `np.sin` for evaluating $\pi$ and $\sin(\cdot)$, respectively
# + colab={} colab_type="code" id="-k85BZSvSaLG"
# Exercise 2
# initialize t
t = 0
# loop for 10 steps, variable 'step' takes values from 0 to 9
for step in range(10):
t = step * dt
# --> insert your code here
# + [markdown] colab_type="text" id="2_zaHIK2Shzi"
# **SAMPLE OUTPUT**
#
# ```
# 2.5e-10
# 3.969463130731183e-10
# 4.877641290737885e-10
# 4.877641290737885e-10
# 3.9694631307311837e-10
# 2.5000000000000007e-10
# 1.0305368692688176e-10
# 1.2235870926211617e-11
# 1.223587092621159e-11
# 1.0305368692688186e-10
# ```
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 189} colab_type="text" id="K-u3pNQ-SaIW" outputId="011ff751-2886-4119-d6a1-975300317c84"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_1a190769.py)
#
#
# + [markdown] colab_type="text" id="aBBxsitq-Kf9"
# ### Exercise 3
# Print formatting is handy for displaying simulation parameters in a clean and organized form. Python 3.6 introduced a new string-formatting syntax, [f-strings](https://www.python.org/dev/peps/pep-0498). Since we are dealing with `float` variables, we use `f'{x:.3f}'` to format `x` to three decimal places, and `f'{x:.4e}'` for four decimal places in exponential notation.
# ```
# x = 3.14159265e-1
# print(f'{x:.3f}')
# --> 0.314
#
# print(f'{x:.4e}')
# --> 3.1416e-01
# ```
#
# Repeat the loop from the previous exercise and print the `t` values with three decimal places, and the synaptic input $I(t)$ with four decimal places in exponential notation.
#
# For additional formatting options with f-strings see [here](http://zetcode.com/python/fstring/).
#
# **Suggestions**
# * Print `t` and `i` with help of *f-strings* formatting
# + colab={} colab_type="code" id="jNOUMK61-9Ml"
# Exercise 3
# initialize step_end
step_end = 10
# loop for step_end steps
for step in range(step_end):
t = step * dt
# --> insert your code here
# + [markdown] colab_type="text" id="x35-gC_1CKEZ"
# **SAMPLE OUTPUT**
#
# ```
# 0.000 2.5000e-10
# 0.001 3.9695e-10
# 0.002 4.8776e-10
# 0.003 4.8776e-10
# 0.004 3.9695e-10
# 0.005 2.5000e-10
# 0.006 1.0305e-10
# 0.007 1.2236e-11
# 0.008 1.2236e-11
# 0.009 1.0305e-10
# ```
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 189} colab_type="text" id="qcUqaqkyCKQk" outputId="3ea5d2fc-8163-4f8d-b348-0128f82cec69"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_d4f77dbf.py)
#
#
# + [markdown] colab_type="text" id="o1zuDTI5-Via"
# ## ODE integration without spikes
# In the next exercises, we now simulate the evolution of the membrane equation in discrete time steps, with a sufficiently small $\Delta t$.
#
# We start by writing the time derivative $d/dt\,V(t)$ in the membrane equation without taking the limit $\Delta t \to 0$:
#
# \begin{align*}
# \\
# \tau_m\,\frac{V\left(t+\Delta t\right)-V\left(t\right)}{\Delta t} &= E_{L} - V(t) + R\,I(t) \qquad\qquad (1)\\
# \\
# \end{align*}
#
# The value of membrane potential $V\left(t+\Delta t\right)$ can be expressed in terms of its previous value $V(t)$ by simple algebraic manipulation. For *small enough* values of $\Delta t$, this provides a good approximation of the continuous-time integration.
#
# This operation is an integration since we obtain a sequence $\{V(t), V(t+\Delta t), V(t+2\Delta t),...\}$ starting from the ODE. Notice how the ODE describes the evolution of $\frac{d}{dt}\,V(t)$, the derivative of $V(t)$, but not directly the evolution of $V(t)$. For the evolution of $V(t)$ we need to integrate the ODE, and in this tutorial, we will do a discrete-time integration using the Euler method. See [Numerical methods for ordinary differential equations](https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations) for additional details.
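# + [markdown]
# Rearranging Eq. (1) for $V(t+\Delta t)$ gives the explicit Euler update
# $V(t+\Delta t) = V(t) + \frac{\Delta t}{\tau_m}\left(E_{L} - V(t) + R\,I(t)\right)$.
# As a sanity check of this step (a sketch only, using the parameter values
# from Exercise 1; it is not part of the exercise solutions):

```python
# One explicit Euler step of the LIF membrane equation (no spiking).
# Parameter values repeat Exercise 1 so the snippet is self-contained.
tau = 20e-3      # second
el = -60e-3      # volt
r = 100e6        # ohm
dt = 1e-3        # second
i_mean = 25e-11  # ampere

def euler_step(v, i):
    """Return V(t + dt) given V(t) and I(t)."""
    return v + (dt / tau) * (el - v + r * i)

# Starting from V(0) = E_L with I(0) = i_mean reproduces the first
# update of the Exercise 4 sample output (-5.8750e-02 at t = 0.001).
print(f"{euler_step(el, i_mean):.4e}")
```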
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="sXvgwHLeDuEB" outputId="684b6f7b-cbde-4b15-ec05-389a701adc5c"
# @title Video: Discrete time integration
video = YouTubeVideo(id='kyCbeR28AYQ', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="ndQ1eUJ_iN6j"
# ### Exercise 4
# Compute the values of $V(t)$ between $t=0$ and $t=0.01$ with step $\Delta t=0.001$ and $V(0)=E_L$.
#
# We will write a `for` loop from scratch in this exercise. The following three formulations are all equivalent and loop for three steps:
# ```
# for step in [0, 1, 2]:
# print(step)
#
# for step in range(3):
# print(step)
#
# start = 0
# end = 3
# stepsize = 1
#
# for step in range(start, end, stepsize):
# print(step)
# ```
#
#
# **Suggestions**
# * Reorganize the Eq. (1) to isolate $V\left(t+\Delta t\right)$ on the left side, and express it as function of $V(t)$ and the other terms
# * Initialize the membrane potential variable `v` to leak potential `el`
# * Loop variable `step` for `10` steps
# * At each time step
# * Compute the current value of `t`, `i`
# * Print the current value of `t` and `v`
# * Update the value of `v`
# + colab={} colab_type="code" id="1iJN-oBWkwzr"
# Exercise 4
# initialize step_end and v
step_end = 10
v = el
# loop for step_end steps
# --> insert your code here
# + [markdown] colab_type="text" id="MdQmOnQilUEH"
# **SAMPLE OUTPUT**
#
# ```
# 0.000 -6.0000e-02
# 0.001 -5.8750e-02
# 0.002 -5.6828e-02
# 0.003 -5.4548e-02
# 0.004 -5.2381e-02
# 0.005 -5.0778e-02
# 0.006 -4.9989e-02
# 0.007 -4.9974e-02
# 0.008 -5.0414e-02
# 0.009 -5.0832e-02
# ```
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 189} colab_type="text" id="VCB4Y4Hci0Ve" outputId="d04edc9c-b2fa-4a16-8cfc-3299bcb2069b"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_3e7c92b6.py)
#
#
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="SG07joUDDws5" outputId="ccd8e9cb-5aa2-4db1-a4ac-64a4633b34dd"
# @title Video: Plotting
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='BOh8CsuTFkY', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="XXLyRysvfbqx"
# ### Exercise 5
# 
#
# Plot the values of $I(t)$ between $t=0$ and $t=0.024$.
#
# **Suggestions**
# * Increase `step_end`
# * initialize the figure with `plt.figure`, set title, x and y labels with `plt.title`, `plt.xlabel` and `plt.ylabel`, respectively
# * Replace printing command `print` with plotting command `plt.plot` with argument `'ko'` (short version for `color='k'` and `marker='o'`) for black small dots
# * Use `plt.show()` at the end to display the plot
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="_8yw9ndqj_Oq" outputId="ff14d541-a0ab-42a2-a721-4e01a26d2056"
# Exercise 5
# initialize step_end
step_end = 25
# initialize the figure
plt.figure()
# loop for step_end steps
for step in range(step_end):
t = step * dt
# --> insert your code here
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="R_mYrgiKfTuz" outputId="f3b1bb32-b5a5-4cc1-c296-fdf9c1bedfa4"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_70724599.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=559 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_70724599_0.png>
#
#
# + [markdown] colab_type="text" id="IEhl80GmiKo4"
# ### Exercise 6
# Plot the values of $V(t)$ between $t=0$ and $t=t_{max}$.
#
# **Suggestions**
# * Compute the required number of steps with `int(t_max/dt)`
# * Use plotting command for black small(er) dots with argument `'k.'`
# + colab={"base_uri": "https://localhost:8080/", "height": 431} colab_type="code" id="fJpQ7jiflaTs" outputId="96475437-2ced-4dcc-ca21-2f403b3d3b2a"
# Exercise 6
# initialize step_end and v
# step_end = ... # insert your code here
v = el
# initialize the figure
plt.figure()
plt.title('$V_m$ with sinusoidal I(t)')
plt.xlabel('time (s)')
plt.ylabel('$V_m$ (V)');
# loop for step_end steps
for step in range(step_end):
t = step * dt
# --> insert your code here
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="qTdAwOFxlZ6I" outputId="6c60c627-dd08-4b65-b63c-8f1567ce174a"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_6dcf3e19.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_6dcf3e19_0.png>
#
#
# + [markdown] colab_type="text" id="UcR6S79IiH30"
# ---
# ## Random synaptic input
# From the perspective of neurons, synaptic input is random (or stochastic). We'll improve the synaptic input model by introducing random input current with statistical properties similar to the previous exercise:
#
# \begin{align*}
# \\
# I(t)=I_{mean}\left(1+0.1\sqrt{\frac{t_{max}}{\Delta t}}\,\xi(t)\right)\qquad\text{with }\xi(t)\sim U(-1,1)\\
# \\
# \end{align*}
#
# where $U(-1,1)$ is the [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)) with support $x\in[-1,1]$.
#
# Random synaptic input $I(t)$ results in random time course for $V(t)$.
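# + [markdown]
# As a sketch of the sampling step only (with hypothetical variable names;
# it is not part of the exercise solution below): `np.random.random()` draws
# from $U(0,1)$, so $\xi(t)\sim U(-1,1)$ is obtained by rescaling, and the
# random input follows directly from the formula above:

```python
import numpy as np

np.random.seed(0)           # fix the RNG for reproducibility
t_max, dt = 150e-3, 1e-3    # Exercise 1 values
i_mean = 25e-11             # ampere

xi = 2 * np.random.random(10) - 1   # U(0,1) samples rescaled to U(-1,1)
i_rand = i_mean * (1 + 0.1 * np.sqrt(t_max / dt) * xi)

print(i_rand[:3])
```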
# + [markdown] colab_type="text" id="bmSVMO99iEiT"
# ### Exercise 7
# Plot the values of $V(t)$ between $t=0$ and $t=t_{max}-\Delta t$ with random input $I(t)$.
#
# Initialize the (pseudo) random number generator (RNG) to a fixed value to obtain the same random input each time.
#
# The function `np.random.seed()` initializes the RNG, and `np.random.random()` generates samples from the uniform distribution between `0` and `1`.
#
# **Suggestions**
# * Use `np.random.seed()` to initialize the RNG to `0`
# * Use `np.random.random()` to generate random input in range `[0,1]` at each timestep
# * Multiply random input by an appropriate factor to expand the range to `[-1,1]`
# * Verify that $V(t)$ has a random time course by changing the initial RNG value
# * Alternatively, comment out the RNG initialization by typing `CTRL` + `/` on the relevant line
# + colab={"base_uri": "https://localhost:8080/", "height": 431} colab_type="code" id="5kJInEMDlnhq" outputId="68fde508-3206-4a00-dd09-81c618c651da"
# Exercise 7
# set random number generator
np.random.seed(2020)
# initialize step_end and v
step_end = int(t_max / dt)
v = el
# initialize the figure
plt.figure()
plt.title('$V_m$ with random I(t)')
plt.xlabel('time (s)')
plt.ylabel(r'$V_m$ (V)')
# loop for step_end steps
for step in range(step_end):
t = step * dt
# --> insert your code here
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="QW_Doa8JmDcX" outputId="6dce6df7-0826-4e3e-ee4b-cde64425d6b3"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_695309c2.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_695309c2_0.png>
#
#
# + [markdown] colab_type="text" id="2ImK4tfgiBsZ"
# ## Ensemble statistics
# Multiple runs of the previous exercise may give the impression of periodic regularity in the evolution of $V(t)$. We'll collect the sample mean over $N=50$ realizations of $V(t)$ with random input to test such a hypothesis. The sample mean, sample variance and sample autocovariance at times $\left\{t, s\right\}\in[0,t_{max}]$, and for $N$ realizations $V_n(t)$ are given by:
#
# \begin{align*}
# \\
# \left\langle V(t)\right\rangle &= \frac{1}{N}\sum_{n=1}^N V_n(t) & & \text{sample mean}\\
# \left\langle (V(t)-\left\langle V(t)\right\rangle)^2\right\rangle &= \frac{1}{N-1} \sum_{n=1}^N \left(V_n(t)-\left\langle V(t)\right\rangle\right)^2 & & \text{sample variance} \\
# \left\langle \left(V(t)-\left\langle V(t)\right\rangle\right)\left(V(s)-\left\langle V(s)\right\rangle\right)\right\rangle
# &= \frac{1}{N-1} \sum_{n=1}^N \left(V_n(t)-\left\langle V(t)\right\rangle\right)\left(V_n(s)-\left\langle V(s)\right\rangle\right) & & \text{sample autocovariance}\\
# \\
# \end{align*}
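# + [markdown]
# The three estimators above map directly onto `np.array` operations. A minimal
# sketch (assuming, unlike the exercises below, that the $N$ realizations are
# stacked in a single array of shape `(N, timesteps)`; the random data is a
# stand-in, not a simulated membrane potential):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 100
v = rng.normal(size=(N, T))     # stand-in for N realizations of V(t)

v_mean = v.mean(axis=0)                               # sample mean <V(t)>
v_var = ((v - v_mean) ** 2).sum(axis=0) / (N - 1)     # sample variance
# sample autocovariance between two time indices t_idx and s_idx
t_idx, s_idx = 10, 20
autocov = ((v[:, t_idx] - v_mean[t_idx])
           * (v[:, s_idx] - v_mean[s_idx])).sum() / (N - 1)
```

Note the $N-1$ denominator in the variance and autocovariance, matching the unbiased estimators in the formulas above.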
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="Iug51zteEGLS" outputId="70555956-f9bc-4841-d939-2ef0a346c4f6"
# @title Video: Ensemble statistics
video = YouTubeVideo(id='4nIAS2oPEFI', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="QStAPuBLh-3O"
# ### Exercise 8
# Plot multiple realizations ($N=50$) of $V(t)$ by storing in a list the voltage of each neuron at time $t$.
#
# Keep in mind that the plotting command `plt.plot(x, y)` requires `x` to have the same number of elements as `y`.
#
# Mathematical symbols such as $\alpha$ and $\beta$ are specified as `$\alpha$` and `$\beta$` in [TeX markup](https://en.wikipedia.org/wiki/TeX). See additional details in [Writing mathematical expressions](https://matplotlib.org/3.2.2/tutorials/text/mathtext.html) in Matplotlib.
#
# **Suggestions**
# * Initialize a list `v_n` with `50` values of membrane leak potential `el`
# * At each time step:
# * Plot `v_n` with argument `'k.'` and parameter `alpha=0.05` to adjust the transparency (by default, `alpha=1`)
# * In the plot command, replace the scalar `t` from the previous exercises with a list of length `n` whose entries are all `t`
# * Loop over `50` realizations of random input
# * Update `v_n` with the values of $V(t)$
#
# * Why is there a black dot at $t=0$?
# + colab={"base_uri": "https://localhost:8080/", "height": 431} colab_type="code" id="OWjTmtb1lou0" outputId="56e0a211-0b33-4684-e88e-69871bb01b07"
# Exercise 8
# set random number generator
np.random.seed(2020)
# initialize step_end, n and v_n
step_end = int(t_max / dt)
n = 50
# v_n = ... # --> insert your code here
# initialize the figure
plt.figure()
plt.title('Multiple realizations of $V_m$')
plt.xlabel('time (s)')
plt.ylabel('$V_m$ (V)')
# loop for step_end steps
for step in range(step_end):
t = step * dt
# --> insert your code here
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="text" id="wpkJVw3LmGpO" outputId="5b7117b7-c83f-4d75-8f02-622efab0a397"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_133978a8.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=558 height=413 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_133978a8_0.png>
#
#
# + [markdown] colab_type="text" id="TijR9T7Eh77y"
# ### Exercise 9
# Add the sample mean $\left\langle V(t)\right\rangle=\frac{1}{N}\sum_{n=1}^N V_n(t)$ to the plot.
#
# **Suggestions**
# * At each timestep:
# * Compute and store in `v_mean` the sample mean $\left\langle V(t)\right\rangle$ by summing the values of list `v_n` with `sum` and dividing by `n`
# * Plot $\left\langle V(t)\right\rangle$ with `alpha=0.8` and argument `'C0.'` for blue (you can read more about [specifying colors](https://matplotlib.org/tutorials/colors/colors.html#sphx-glr-tutorials-colors-colors-py))
# * Loop over `50` realizations of random input
# * Update `v_n` with the values of $V(t)$
# + colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="code" id="ry5TQmMvlpak" outputId="252fc7be-92a3-4c06-cf5c-10f746ff8c7a"
# Exercise 9
# set random number generator
np.random.seed(2020)
# initialize step_end, n and v_n
step_end = int(t_max / dt)
n = 50
v_n = [el] * n
# initialize the figure
plt.figure()
plt.title('Multiple realizations of $V_m$')
plt.xlabel('time (s)')
plt.ylabel('$V_m$ (V)')
# loop for step_end steps
for step in range(step_end):
t = step * dt
# --> insert your code here
plt.show()
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="text" id="-0Li3pAJmNoN" outputId="ca31bc3c-a6b0-49eb-90e1-48b007440e41"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_0ea7fc4b.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=558 height=413 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_0ea7fc4b_0.png>
#
#
# + [markdown] colab_type="text" id="AF0f7-9Jh1h-"
# ### Exercise 10
# Add the sample standard deviation $\sigma(t)\equiv\sqrt{\text{Var}\left(t\right)}$ to the plot, with sample variance $\text{Var}(t) = \frac{1}{N-1} \sum_{n=1}^N \left(V_n(t)-\left\langle V(t)\right\rangle\right)^2$.
#
# Use a list comprehension to collect the sample variance `v_var`. Here's an example to initialize a list with squares of `0` to `9`:
# ```
# squares = [x**2 for x in range(10)]
# print(squares)
# --> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
# ```
#
# Why are we plotting $\sigma(t)$ rather than $\text{Var}(t)$? What are the units of each, and what are the units of $\left\langle V(t)\right\rangle$?
#
# **Suggestions**
# * At each timestep:
# * Compute and store in `v_mean` the sample mean $\left\langle V(t)\right\rangle$
# * Initialize a list `v_var_n` with the contribution of each $V_n(t)$ to $\text{Var}\left(t\right)$ with a list comprehension over values of `v_n`
# * Compute sample variance `v_var` by summing the values of `v_var_n` with `sum` and dividing by `n-1`
# * (alternative: loop over the values of `v_n` and add to `v_var` each contribution $V_n(t)$ and divide by `n-1` outside the loop)
# * Compute the standard deviation `v_std` with the function `np.sqrt`
# * Plot $\left\langle V(t)\right\rangle\pm\sigma(t)$ with `alpha=0.8` and argument `'C7.'`
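Putting the suggestions above together, the per-time-step computation might be sketched as follows (a hint rather than the full exercise solution; the values of `v_n` below are made up to stand in for one time step):

```python
import numpy as np

# hypothetical membrane potentials (V) of n = 5 neurons at one time step
n = 5
v_n = [-0.060, -0.061, -0.059, -0.062, -0.058]

v_mean = sum(v_n) / n
v_var_n = [(v - v_mean)**2 for v in v_n]   # each neuron's contribution to Var(t)
v_var = sum(v_var_n) / (n - 1)             # sample variance (note the n - 1)
v_std = np.sqrt(v_var)                     # sample standard deviation
print(v_mean, v_var, v_std)
```

In the exercise, these lines go inside the loop over `step`, with the full `n = 50` list.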
# + colab={"base_uri": "https://localhost:8080/", "height": 431} colab_type="code" id="H1qIVAmOlqJM" outputId="ea16d721-3478-4789-ef87-a16c38fd2358"
# Exercise 10
# set random number generator
np.random.seed(2020)
# initialize n, v_n and step_end
step_end = int(t_max / dt)
n = 50
v_n = [el] * n
# initialize the figure
plt.figure()
plt.title('Multiple realizations of $V_m$')
plt.xlabel('time (s)')
plt.ylabel('$V_m$ (V)')
# loop for step_end steps
for step in range(step_end):
t = step * dt
v_mean = sum(v_n) / n
# --> insert your code here
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="text" id="C01wdZaLmQt0" outputId="5987a3df-e252-4b88-c0a9-fdfbb2f13df2"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_9b06c9a8.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=558 height=413 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_9b06c9a8_0.png>
#
#
# + [markdown] colab_type="text" id="R3ct7XazB8u8"
# ---
# ## Using NumPy
# The next set of exercises introduces `np.array`, the workhorse of the scientific computing package [NumPy](https://numpy.org). NumPy arrays are the default container for numerical data storage and computation, and they let us separate the computing steps from the plotting.
#
# 
#
# In the previous exercises we updated the plots inside the main loop and stored intermediate results in lists just for plotting. This kept the earlier exercises as simple as possible, but there are very few scenarios where that technique is actually necessary, and you should avoid it from now on. NumPy arrays let us do all the computation inside the main loop and plot afterward, which simplifies the code considerably.
#
# Lists remain more natural for storing data for purposes other than numerical computation — for example, they are handy for holding indexes and text.
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="yGrP7jyHg5Kf" outputId="bb67d4c0-76a8-4f00-c6a0-ec59ae3676d7"
# @title Video: Using NumPy
video = YouTubeVideo(id='ewyHKKa2_OU', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="JKPt3oIQvGJZ"
# ### Exercise 11
# Rewrite the single-neuron simulation with random input from _Exercise 7_ using numpy arrays. The time range, voltage values, and synaptic currents are initialized or pre-computed as numpy arrays before the numerical integration.
#
# **Suggestions**
# * Use `np.linspace` to initialize a numpy array `t_range` with `num=step_end=100` values from `0` to `t_max`
# * Use `np.ones` to initialize a numpy array `v` with `step_end` leak potential values `el`
# * Pre-compute `step_end` synaptic current values in numpy array `syn` with `np.random.random(step_end)` for `step_end` random numbers
# * Iterate for numerical integration of `v`
# * Since `v[0]=el`, we should iterate for `step_end - 1` steps, for example by skipping `step=0`. Why?
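The numerical-integration part of this exercise might be sketched as follows (the parameter values below are stand-ins assumed for illustration, not taken from this notebook; see the solution link for the official version):

```python
import numpy as np

# illustrative parameter values standing in for the ones defined earlier
t_max, dt, tau, el, r, i_mean = 150e-3, 1e-3, 20e-3, -60e-3, 100e6, 25e-11

np.random.seed(2020)
step_end = int(t_max / dt)
v = el * np.ones(step_end)                 # v[0] is already el
syn = i_mean * (1 + 0.1 * (t_max / dt)**0.5 * (2 * np.random.random(step_end) - 1))

# since v[0] = el, integrate from step 1 onward
for step in range(1, step_end):
    v[step] = v[step - 1] + (dt / tau) * (el - v[step - 1] + r * syn[step])
```

Skipping `step = 0` avoids overwriting the initial condition `v[0] = el`.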
# + colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="code" id="Z7o_4DbOvIwU" outputId="58d1fcd5-f3d9-4bb5-93ab-b1949de595fd"
# Exercise 11
# set random number generator
np.random.seed(2020)
# initialize step_end, t_range, v and syn
step_end = int(t_max/dt)
t_range = np.linspace(0, t_max, num=step_end)
v = el * np.ones(step_end)
# --> insert your code here
plt.figure()
plt.title('$V_m$ with random I(t)')
plt.xlabel('time (s)')
plt.ylabel(r'$V_m$ (V)')
plt.plot(t_range, v, 'k.')
plt.show()
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="-CP_GxZyYVMy" outputId="378e5647-0a05-4306-a351-6cc74a019015"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_218a234b.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_218a234b_0.png>
#
#
# + [markdown] colab_type="text" id="5ybKrWDcRMj2"
# ### Exercise 12
# Let's practice using `enumerate` to iterate over the indexes and values of the synaptic current array `syn`.
#
# **Suggestions**
# * Iterate indexes and values of `syn` with `enumerate` in the `for` loop
# * Plot `v` with argument `'k'` for displaying a line instead of dots
# + colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="code" id="_22bxiRfRM5l" outputId="e968bf76-fef9-404f-adaa-5110456d3834"
# Exercise 12
# set random number generator
np.random.seed(2020)
# initialize step_end, t_range, v and syn
step_end = int(t_max / dt)
t_range = np.linspace(0, t_max, num=step_end)
v = el * np.ones(step_end)
syn = i_mean * (1 + 0.1 * (t_max / dt)**(0.5) * (2 * np.random.random(step_end) - 1))
# --> insert your code here
plt.figure()
plt.title('$V_m$ with random I(t)')
plt.xlabel('time (s)')
plt.ylabel(r'$V_m$ (V)')
plt.plot(t_range, v, 'k')
plt.show()
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="BVg8VFNGRNoT" outputId="97a77dc1-9742-4be1-ee41-0dd84ff0d785"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_b24ee7b3.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_b24ee7b3_0.png>
#
#
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="JHUxizXnEZKL" outputId="145203f9-b0f7-485d-b7ad-fa8e72354847"
# @title Video: Aggregation
video = YouTubeVideo(id='1ME-0rJXLFg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="6mzFF-sCmj3E"
# ### Exercise 13
# Plot multiple realizations ($N=50$) of $V(t)$ by storing the voltage of each neuron at time $t$ in a numpy array.
#
# **Suggestions**
# * Initialize a numpy array `v_n` of shape `(n, step_end)` with membrane leak potential values `el`
# * Pre-compute synaptic current values in numpy array `syn` of shape `(n, step_end)`
# * Iterate `step_end` steps with a `for` loop for numerical integration
# * Plot results with a single plot command, by providing `v_n.T` to the plot function. `v_n.T` is the transposed version of `v_n` (with rows and columns swapped).
# + colab={} colab_type="code" id="k6r0cuUOmkCJ"
# Exercise 13
# set random number generator
np.random.seed(2020)
# initialize step_end, n, t_range, v and syn
step_end = int(t_max / dt)
n = 50
t_range = np.linspace(0, t_max, num=step_end)
v_n = el * np.ones([n, step_end])
# --> insert your code here
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="Zm5KgjsMZyAu" outputId="ed59371b-bb32-43e6-bf9a-567f064d15c7"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_597a43c1.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_597a43c1_0.png>
#
#
# + [markdown] colab_type="text" id="WnUbz3G6mlU0"
# ### Exercise 14
# Add sample mean $\left\langle V(t)\right\rangle$ and standard deviation $\sigma(t)\equiv\sqrt{\text{Var}\left(t\right)}$ to the plot.
#
# `np.mean(v_n, axis=0)` computes the mean over rows (axis `0`), i.e. the mean across the `n` neurons at each time step
#
# `np.mean(v_n, axis=1)` computes the mean over columns (axis `1`), i.e. the mean over time for each neuron
#
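A tiny concrete example of the `axis` convention (the array values here are made up for illustration):

```python
import numpy as np

a = np.array([[1., 2., 3.],
              [4., 5., 6.]])      # shape (n, step_end) = (2, 3)

print(np.mean(a, axis=0))         # collapses the 2 rows: one value per column
print(np.mean(a, axis=1))         # collapses the 3 columns: one value per row
```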
# **Suggestions**
# * Use `np.mean` and `np.std` with `axis=0` to average over neurons
# * BONUS: Use `label` argument in `plt.plot` to specify labels in each trace. Label only the last voltage trace to avoid labeling all `N` of them.
# + colab={"base_uri": "https://localhost:8080/", "height": 430} colab_type="code" id="p0IDofmGmlfW" outputId="e0ded633-22b6-488f-f6fd-d0a07ff76a66"
# Exercise 14
# set random number generator
np.random.seed(2020)
# initialize step_end, n, t_range, v and syn
step_end = int(t_max / dt)
n = 50
t_range = np.linspace(0, t_max, num=step_end)
v_n = el * np.ones([n, step_end])
syn = i_mean * (1 + 0.1 * (t_max / dt)**(0.5) * (2 * np.random.random([n, step_end]) - 1))
# loop for step_end - 1 steps
for step in range(1, step_end):
v_n[:,step] = v_n[:,step - 1] + (dt / tau) * (el - v_n[:, step - 1] + r * syn[:, step])
# --> insert your code here
plt.figure()
plt.title('Multiple realizations of $V_m$')
plt.xlabel('time (s)')
plt.ylabel('$V_m$ (V)')
plt.plot(t_range, v_n[:-1].T, 'k', alpha=0.3)
plt.plot(t_range, v_n[-1], 'k', alpha=0.3, label='V(t)')
plt.show()
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="text" id="QDAK6Kj-aRID" outputId="45760ef7-1c86-4b5e-88b5-0a9c64704ab5"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_6a252098.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D1_PythonWorkshop1/static/W0D1_Tutorial1_Solution_6a252098_0.png>
#
#
| tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
# +
import sympy as sp # Symbolic Python
import numpy as np # Arrays, matrices and corresponding mathematical operations
from IPython.display import Latex, display, Markdown, clear_output # For displaying Markdown and LaTeX code
from ipywidgets import widgets # Interactivity module
from IPython.display import Javascript
# Function for the conversion of array/matrix to LaTeX/Markdown format.
def vmatrix(a):
if len(a.shape) > 2:
        raise ValueError('vmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{vmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{vmatrix}']
return '\n'.join(rv)
# -
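For reference, a quick check of what `vmatrix` produces (the helper is reproduced here so the snippet is self-contained; the output is the LaTeX source string):

```python
import numpy as np

def vmatrix(a):
    # same conversion helper as above, repeated for a self-contained demo
    if len(a.shape) > 2:
        raise ValueError('vmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{vmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{vmatrix}']
    return '\n'.join(rv)

print(vmatrix(np.array([[1, 2], [3, 4]])))
```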
# ## Routh and Hurwitz stability criterion
#
# In control system theory, the Routh–Hurwitz stability criterion is a mathematical test used to determine the number of poles of the closed-loop transfer function that have positive real parts. The number of sign changes in the first column of the Routh array equals the number of poles in the right half of the complex plane. The necessary and sufficient condition for the stability of a linear time-invariant control system is that all closed-loop poles have negative real parts, which means there must be no sign change in the first column. A related stability criterion, based on determinants formed from the system's characteristic polynomial, is called the Hurwitz criterion.
#
# The starting point for determining system stability is the characteristic polynomial, defined as:
#
# \begin{equation}
# a_ns^n+a_{n-1}s^{n-1}+...+a_1s+a_0
# \end{equation}
#
# In the case of Routh's criterion we then form the so-called Routh array:
#
#
# \begin{array}{l|ccccc}
# & 1 & 2 & 3 & 4 & 5 \\
# \hline
# s^n & a_n & a_{n-2} & a_{n-4} & a_{n-6} & \dots \\
# s^{n-1} & a_{n-1} & a_{n-3} & a_{n-5} & a_{n-7} &\dots \\
# s^{n-2} & b_1 & b_2 & b_3 & b_4 & \dots \\
# s^{n-3} & c_1 & c_2 & c_3 & c_4 & \dots \\
# s^{n-4} & d_1 & d_2 & d_3 & d_4 & \dots \\
# \vdots & \vdots & \vdots & \vdots & \vdots & \ddots\\
# \end{array}
#
#
# The coefficients in the first two rows ($a_i$) are obtained from the characteristic polynomial. All the others are determined using the following formulae:
#
# \begin{array}{cccc}
# \, \! \! \! \! b_1 \! = \! \frac{a_{n-1}a_{n-2}-a_n a_{n-3}}{a_{n-1}} & \! \! \! \! \, \! \! b_2 \! = \! \frac{a_{n-1}a_{n-4}-a_n a_{n-5}}{a_{n-1}} & \, \! \! b_3 \! = \! \frac{a_{n-1}a_{n-6}-a_n a_{n-7}}{a_{n-1}} & \, \! \! \! \! \dots \\
# c_1=\frac{b_1a_{n-3}-a_{n-1} b_2}{b_1} & c_2=\frac{b_1a_{n-5}-a_{n-1}b_3}{b_1} & c_3=\frac{b_1a_{n-7}-a_{n-1}b_4}{b_1} & \, \! \! \! \! \dots \\
# d_1=\frac{c_1 b_2-b_1 c_2}{c_1} & d_2=\frac{c_1 b_3-b_1 c_3}{c_1} & d_3=\frac{c_1 b_4-b_1 c_4}{c_1} & \, \! \! \! \! \dots \\
# \vdots & \vdots & \vdots & \, \! \! \! \! \ddots \\
# \end{array}
#
# If all $n+1$ coefficients in the first column have the same sign (either all positive or all negative), the system is stable. The number of sign changes in the first column gives the number of roots of the characteristic polynomial that lie in the right half of the complex plane.
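For purely numeric coefficients, the recursion above can be sketched directly in Python (a minimal illustration that assumes no zero ever appears in the first column; the widget code below handles zeros and symbolic coefficients separately):

```python
import numpy as np

def routh_table(coeffs):
    """Build the Routh array for coefficients [a_n, ..., a_0],
    highest power first. Assumes no zero in the first column."""
    n = len(coeffs)
    cols = (n + 1) // 2
    table = np.zeros((n, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # s^(n-1) row
    for i in range(2, n):
        for j in range(cols - 1):
            table[i, j] = (table[i-1, 0] * table[i-2, j+1]
                           - table[i-2, 0] * table[i-1, j+1]) / table[i-1, 0]
    return table

# s^3 + 2 s^2 + 3 s + 4  ->  first column [1, 2, 1, 4], no sign change: stable
t = routh_table([1, 2, 3, 4])
sign_changes = int(np.sum(np.diff(np.sign(t[:, 0])) != 0))
print(t[:, 0], sign_changes)
```

Each sign change in the first column corresponds to one root in the right half-plane.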
#
# In the case of the Hurwitz criterion, a determinant $\Delta_n$ of dimensions $n\times n$ is formed from the coefficients of the characteristic polynomial.
#
# \begin{equation}
# \Delta_n=
# \begin{array}{|cccccccc|}
# a_{n-1} & a_{n-3} & a_{n-5} & \dots & \left[ \begin{array}{cc} a_0 & \mbox{if
# }n \mbox{ is odd} \\ a_1 & \mbox{if }n \mbox{ is even} \end{array}
# \right] & 0 & \dots & 0 \\[3mm]
# a_{n} & a_{n-2} & a_{n-4} & \dots & \left[ \begin{array}{cc} a_1 & \mbox{if }n \mbox{ is odd} \\ a_0 & \mbox{if }n \mbox{ is even} \end{array} \right] & 0 & \dots & 0 \\
# 0 & a_{n-1} & a_{n-3} & a_{n-5} & \dots & \dots & \dots & 0 \\
# 0 & a_{n} & a_{n-2} & a_{n-4} & \dots & \dots & \dots & 0 \\
# 0 & 0 & a_{n-1} & a_{n-3} & \dots & \dots & \dots & 0 \\
# 0 & 0 & a_{n} & a_{n-2} & \dots & \dots & \dots & 0 \\
# \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
# 0 & \dots & \dots & \dots & \dots & \dots & \dots & a_0 \\
# \end{array}
# \end{equation}
#
#
# Based on the determinant $\Delta_n$ we form the subdeterminants on the main diagonal. The subdeterminant $\Delta_1$ is equal to
#
# \begin{equation}
# \Delta_1=a_{n-1},
# \end{equation}
#
# subdeterminant $\Delta_2$ to
#
# \begin{equation}
# \Delta_2=
# \begin{array}{|cc|}
# a_{n-1} & a_{n-3} \\
# a_{n} & a_{n-2} \\
# \end{array},
# \end{equation}
#
# and subdeterminant $\Delta_3$ to
#
# \begin{equation}
# \Delta_3=
# \begin{array}{|ccc|}
# a_{n-1} & a_{n-3} & a_{n-5} \\
# a_{n} & a_{n-2} & a_{n-4} \\
# 0 & a_{n-1} & a_{n-3} \\
# \end{array}.
# \end{equation}
#
# We continue in this manner until we get to the subdeterminant
# $\Delta_{n-1}$. The system is stable if all subdeterminants on the main diagonal (from $\Delta_1$ to $\Delta_{n-1}$) as well as the determinant $\Delta_n$ are strictly larger than zero.
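Likewise, for numeric coefficients the Hurwitz matrix and its leading principal minors can be sketched with NumPy (a minimal illustration; the widget code below also covers the symbolic case with SymPy):

```python
import numpy as np

def hurwitz_minors(coeffs):
    """Leading principal minors Delta_1..Delta_n of the Hurwitz matrix
    for coefficients [a_n, ..., a_0], highest power first."""
    n = len(coeffs) - 1                    # polynomial degree
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * j - i + 1              # index into the a_{n-1}, a_{n-3}, ... pattern
            if 0 <= k <= n:
                H[i, j] = coeffs[k]
    return [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]

# s^3 + 2 s^2 + 3 s + 4: Delta_1 = 2, Delta_2 = 2*3 - 1*4 = 2, Delta_3 = 4 * Delta_2 = 8
print(hurwitz_minors([1, 2, 3, 4]))        # all positive -> stable
```

All minors strictly positive confirms stability, matching the Routh test for the same polynomial.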
#
# ---
#
# ### How to use this notebook?
#
# Please define the characteristic polynomial of interest by inserting its order and the corresponding coefficients, and then choose the desired stability criterion (Routh or Hurwitz).
# +
polynomialOrder = input("Insert the order of the characteristic polynomial (press Enter to confirm):")
try:
val = int(polynomialOrder)
except ValueError:
    display(Markdown('The order of the polynomial has to be an integer. Please re-enter the value.'))
display(Markdown('Please insert the coefficients of the characteristic polynomial (use $K$ for undefined coefficient) and click "Confirm".'))
text=[None]*(int(polynomialOrder)+1)
for i in range(int(polynomialOrder)+1):
text[i]=widgets.Text(description=('$s^%i$'%(-(i-int(polynomialOrder)))))
display(text[i])
btn1=widgets.Button(description="Confirm")
btnReset=widgets.Button(description="Reset")
display(widgets.HBox((btn1, btnReset)))
btn2=widgets.Button(description="Confirm")
w=widgets.Select(
options=['Routh', 'Hurwitz'],
rows=3,
description='Select:',
disabled=False
)
coef=[None]*(int(polynomialOrder)+1)
def on_button_clickedReset(ev):
display(Javascript("Jupyter.notebook.execute_cells_below()"))
def on_button_clicked1(btn1):
clear_output()
for i in range(int(polynomialOrder)+1):
if text[i].value=='' or text[i].value=='Please insert a coefficient':
text[i].value='Please insert a coefficient'
else:
try:
coef[i]=float(text[i].value)
except ValueError:
if text[i].value!='' or text[i].value!='Please insert a coefficient':
coef[i]=sp.var(text[i].value)
coef.reverse()
enacba="$"
for i in range (int(polynomialOrder),-1,-1):
if i==int(polynomialOrder):
enacba=enacba+str(coef[i])+"s^"+str(i)
elif i==1:
enacba=enacba+"+"+str(coef[i])+"s"
elif i==0:
enacba=enacba+"+"+str(coef[i])+"$"
else:
enacba=enacba+"+"+str(coef[i])+"s^"+str(i)
coef.reverse()
display(Markdown('The characteristic polynomial of interest is:'), Markdown(enacba))
display(Markdown('Would you like to use Routh or Hurwitz criterion to check the stability?'))
display(w)
display(widgets.HBox((btn2, btnReset)))
display(out)
def on_button_clicked2(btn2):
if w.value=='Routh':
s=np.zeros((len(coef), len(coef)//2+(len(coef)%2)),dtype=object)
xx=np.zeros((len(coef), len(coef)//2+(len(coef)%2)),dtype=object)
check_index=0
if len(s[0]) == len(coef[::2]):
s[0] = coef[::2]
elif len(s[0])-1 == len(coef[::2]):
s[0,:-1] = coef[::2]
        # even positions
if len(s[1]) == len(coef[1::2]):
s[1] = coef[1::2]
elif len(s[1])-1 == len(coef[1::2]):
s[1,:-1] = coef[1::2]
for i in range(len(s[2:,:])):
i+=2
for j in range(len(s[0,0:-1])):
s[i,j] = (s[i-1,0]*s[i-2,j+1]-s[i-2,0]*s[i-1,j+1]) / s[i-1,0]
if s[i,0] == 0:
epsilon=sp.Symbol('\u03B5')
s[i,0] = epsilon
check_index=1
if check_index==1:
for i in range(len(s)):
for j in range(len(s[0])):
xx[i,j] = sp.limit(s[i,j],epsilon,0)
positive_check=xx[:,0]>0
negative_check=xx[:,0]<0
if all(positive_check)==True:
with out:
clear_output()
display(Markdown('One of the elements in the first column of the Routh table is equal to 0. We replace it with $\epsilon$ and observe the values of the elements when $\epsilon$ goes to zero.'))
display(Markdown('Routh table $%s$\n' % vmatrix(s)))
display(Markdown('System is stable, because all the elements in the first column of the Routh table are positive.'))
display(Markdown('Routh table $%s$\n' % vmatrix(xx)))
elif all(negative_check)==True:
with out:
clear_output()
display(Markdown('One of the elements in the first column of the Routh table is equal to 0. We replace it with $\epsilon$ and observe the values of the elements when $\epsilon$ goes to zero.'))
display(Markdown('Routh table $%s$\n' % vmatrix(s)))
display(Markdown('System is stable, because all the elements in the first column of the Routh table are negative.'))
display(Markdown('Routh table $%s$\n' % vmatrix(xx)))
else:
with out:
clear_output()
display(Markdown('One of the elements in the first column of the Routh table is equal to 0. We replace it with $\epsilon$ and observe the values of the elements when value of $\epsilon$ goes to zero.'))
display(Markdown('Routh table $%s$\n' % vmatrix(s)))
display(Markdown('System is unstable, because the elements in the first column of the Routh table do not have the same sign.'))
display(Markdown('Routh table $%s$\n' % vmatrix(xx)))
elif check_index==0:
if all(isinstance(x, (int,float)) for x in coef):
positive_check=s[:,0]>0
negative_check=s[:,0]<0
if all(positive_check)==True:
with out:
clear_output()
display(Markdown('System is stable, because all the elements in the first column of the Routh table are positive.'))
display(Markdown('Routh table $%s$' % vmatrix(s)))
elif all(negative_check)==True:
with out:
clear_output()
display(Markdown('System is stable, because all the elements in the first column of the Routh table are negative.'))
display(Markdown('Routh table $%s$' % vmatrix(s)))
else:
with out:
clear_output()
display(Markdown('System is unstable, because the elements in the first column of the Routh table do not have the same sign.'))
display(Markdown('Routh table $%s$' % vmatrix(s)))
else:
testSign=[]
for i in range(len(s)):
if isinstance(s[i,0],(int,float)):
testSign.append(s[i,0]>0)
solution=[]
if all(elem == True for elem in testSign):
for x in s[:,0]:
if not isinstance(x,(sp.numbers.Integer,sp.numbers.Float,int,float)):
solution.append(sp.solve(x>0,K)) # Define the solution for each value of the determinant
with out:
clear_output()
display(Markdown('Routh table $%s$' % vmatrix(s)))
display(Markdown('All the known coefficients in the first column of Routh table are positive, therefore the system is stable for:'))
print(solution)
                elif all(elem == False for elem in testSign):
for x in s[:,0]:
if not isinstance(x,(sp.numbers.Integer,sp.numbers.Float,int,float)):
solution.append(sp.solve(x<0,K)) # Define the solution for each value of the determinant
with out:
clear_output()
display(Markdown('Routh table $%s$' % vmatrix(s)))
display(Markdown('All the known coefficients in the first column of Routh table are negative, therefore the system is stable for:'))
print(solution)
else:
with out:
display(Markdown('Routh table $%s$' % vmatrix(s)))
                        display(Markdown('System is unstable, because the coefficients in the first column do not all have the same sign.'))
elif w.value=='Hurwitz':
# Check if all the coefficients are numbers or not and preallocate basic determinant.
if all(isinstance(x, (int,float)) for x in coef):
determinant=np.zeros([len(coef)-1,len(coef)-1])
else:
determinant=np.zeros([len(coef)-1,len(coef)-1],dtype=object)
# Define the first two rows of the basic determinant.
for i in range(len(coef)-1):
try:
determinant[0,i]=coef[2*i+1]
except:
determinant[0,i]=0
for i in range(len(coef)-1):
try:
determinant[1,i]=coef[2*i]
except:
determinant[1,i]=0
# Define the remaining rows of the basic determinant by shifting the first two rows.
for i in range(2,len(coef)-1):
determinant[i,:]=np.roll(determinant[i-2,:],1)
determinant[2:,0]=0
# Define all the subdeterminants.
subdet=[];
for i in range(len(determinant)-1):
subdet.append(determinant[0:i+1,0:i+1])
# Append the basic determinant to the subdeterminants' array.
subdet.append(determinant)
# Check if all coefficients are numbers.
if all(isinstance(x, (int,float)) for x in coef):
det_value=[] # Preallocate array containing values of all determinants.
for i in range(len(subdet)):
det_value.append(np.linalg.det(subdet[i])); # Calculate determinant and append the values to det_value.
if all(i > 0 for i in det_value)==True: # Check if all values in det_value are positive or not.
with out:
clear_output()
display(Markdown('System is stable, because all determinants are positive.'))
for i in range(len(subdet)):
display(Markdown('$\Delta_{%i}=$'%(i+1) + '$%s$' %vmatrix(subdet[i]) + '$=%s$' %det_value[i]))
else:
with out:
clear_output()
display(Markdown('System is unstable, because not all determinants are positive.'))
for i in range(len(subdet)):
display(Markdown('$\Delta_{%i}=$'%(i+1) + '$%s$' %vmatrix(subdet[i]) + '$=%s$' %det_value[i]))
else:
subdetSym=[] # Preallocate subdetSym.
det_value=[] # Preallocate det_value.
solution=[] # Preallocate solution.
for i in subdet:
subdetSym.append(sp.Matrix(i)) # Transform matrix subdet to symbolic.
for i in range(len(subdetSym)):
det_value.append(subdetSym[i].det()) # Calculate the value of the determinant.
testSign=[]
for i in range(len(det_value)):
if isinstance(det_value[i],(int,float,sp.numbers.Integer,sp.numbers.Float)):
testSign.append(det_value[i]>0)
if all(elem == True for elem in testSign):
solution=[]
for x in det_value:
if not isinstance(x,(sp.numbers.Integer,sp.numbers.Float,int,float)):
solution.append(sp.solve(x>0,K)) # Define the solution for each value of the determinant
with out:
clear_output()
for i in range(len(subdet)):
display(Markdown('$\Delta_{%i}=$'%(i+1) + '$%s$' %vmatrix(subdet[i]) + '$=%s$' %det_value[i]))
display(Markdown('System is stable, for:'))
print(solution)
else:
with out:
clear_output()
display(Markdown('System is unstable, because not all known determinants are positive.'))
for i in range(len(subdet)):
display(Markdown('$\Delta_{%i}=$'%(i+1) + '$%s$' %vmatrix(subdet[i]) + '$=%s$' %det_value[i]))
global out
out=widgets.Output()
btn3=widgets.Button(description="Reset all")
w=widgets.Select(
options=['Routh', 'Hurwitz'],
rows=3,
description='Select:',
disabled=False
)
btn1.on_click(on_button_clicked1)
btn2.on_click(on_button_clicked2)
btnReset.on_click(on_button_clickedReset)
# -
| ICCT_en/examples/02/TD-14-Routh-Hurwitz-stability-criterion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Endogeneity
#
# Any explanatory (exogenous/independent/X) variable that is correlated with the error term is an endogenous variable. In other words: almost always.
#
#
# According to Wooldridge (Econometric Analysis of Cross-Section and Panel Data, 2nd ed., 2011, pp. 54-55):
#
# **Omitted variables**: if a variable is omitted from the model, it ends up absorbed into the error term. If this omitted variable is correlated with one of the explanatory variables already in the model (which is quite common), there will necessarily be correlation between that explanatory variable and the error.
#
#
# **Measurement error**: if a variable is measured with error and we do not have the error-free variable that should be in the model, we may get correlation between the variable we do have (with error) and the error term (because the error term contains part of what was not measured correctly). This depends on the relationship between the variable we have (with error) and the one we would like to have (without error).
#
#
# **Simultaneity**: when one of the explanatory variables is determined by the explained variable within the model. That is, x influences y, but y also influences x simultaneously. In this case, x and the error are generally correlated.
#
# An additional reason is **misspecification of the functional form** (the correct form comes from theory).
#
# With endogeneity we no longer have E[e|x1, x2, ...] = 0.
#
#
# And what is the problem with endogeneity? Besides **bias**, the big problem we may face is **inconsistency** (on top of **inefficiency**): our estimates will not converge to the population parameter. Why run a regression at all, then? The value of a sample estimate is that it allows us to say something about the population.
#
# # Instrumental Variables
# Wooldridge, 2011 – Chapter 15
#
# Instrumental variables (IV) will help us in the
# search for consistent estimators when endogenous
# regressors are present in the regression model.
#
#
#
# Consider the model
#
# $wage_i = \beta_0 + \beta_1 educ_i + e_i \qquad (1)$
#
# with
#
# $\text{Cov}(educ, e) \neq 0 \qquad (2)$
#
# Question: what could be leading us to violate this assumption?
# Answer: educ is likely correlated with ability (which
# certainly affects wages, sits inside the error term e and,
# on top of that, is not directly observable).
#
# Suppose we have observed an explanatory variable z
# that satisfies two assumptions:
#
# **(a) z is uncorrelated with e, that is, Cov(z, e) = 0**
#
# z is exogenous in the regression.
#
#
# **(b) z is correlated with educ, that is, Cov(z, educ) != 0**
#
# How can we verify that (a) and (b) hold?
#
#
# # Two-Stage Estimator
# http://hedibert.org/wp-content/uploads/2014/05/Econometria201401-Aula15a-ARLMXII-Endogeneidade.pdf
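To make conditions (a) and (b) concrete, the sketch below simulates the wage-education story: unobserved ability makes OLS inconsistent, while the simple IV estimator $\hat{\beta}_1 = \mathrm{Cov}(z, wage)/\mathrm{Cov}(z, educ)$ recovers the true coefficient. All variable names and numbers are invented for the simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# ability is unobserved; it raises both educ and wage -> educ is endogenous
ability = rng.normal(size=n)
z = rng.normal(size=n)                          # instrument: moves educ, unrelated to e
educ = z + ability + rng.normal(size=n)
e = 2.0 * ability + rng.normal(size=n)          # error term absorbs ability
wage = 0.5 * educ + e                           # true beta_1 = 0.5

# OLS slope: inconsistent because Cov(educ, e) != 0
beta_ols = np.cov(educ, wage)[0, 1] / np.var(educ, ddof=1)

# IV slope: beta_iv = Cov(z, wage) / Cov(z, educ)
beta_iv = np.cov(z, wage)[0, 1] / np.cov(z, educ)[0, 1]

print(beta_ols, beta_iv)                        # OLS drifts well above 0.5; IV stays near it
```

Here the population OLS slope is $0.5 + \mathrm{Cov}(educ, e)/\mathrm{Var}(educ) = 0.5 + 2/3$, so the bias does not vanish as $n$ grows.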
| 02-stat-multivariada/03 Modelagem - Capturando não lineariedades/endogeneidade.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# # [Module 1] Train a Keras Sequential Model (TensorFlow 2.0)
#
# ### [Note] This Jupyter notebook runs the hands-on exercises on TensorFlow 2.0. Since January 2020, Amazon SageMaker has supported TensorFlow 2.0 as a built-in deep learning container.
#
# This notebook walks step by step through training a Keras Sequential model on SageMaker. The model used here is a simple deep CNN (Convolutional Neural Network), identical to the one introduced in [the Keras examples](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py).
# - For reference, this model reaches a validation accuracy of about 75% after 25 epochs of training and about 79% after 50 epochs.
# - To save time, this workshop trains for only 5 epochs. (The Horovod-based distributed training, however, runs for 10 epochs.)
# ## The dataset
#
# The [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is one of the best-known datasets in machine learning.
# It consists of 60,000 32x32-pixel images across 10 classes (6,000 images per class).
# The figure below shows 10 randomly sampled images from each class.
#
# 
#
# In this lab you will train a deep CNN to perform an image classification task. In the following notebooks
# you will compare the results of File Mode, Pipe Mode, and Horovod-based distributed training.
# ## Getting the data
# Use the AWS CLI (Command Line Interface) command below to copy the TFRecord dataset stored in S3 (Amazon Simple Storage Service) to your local notebook instance.
# The S3 path is `s3://floor28/data/cifar10`.
#
# ### What is a TFRecord?
# - The binary format officially recommended by Google when modeling with a TensorFlow backend.
# - Contains input data serialized as TensorFlow protocol buffer files.
# - Useful for streaming large datasets quickly with multithreading. (All the data is stored in a single block in memory, so loading it is much faster than when the input files are stored individually.)
# - A collection of Example objects (an array of Examples).
# - The figure below shows an example TFRecord with $n$ samples of $m$-dimensional features.
#
# 
# !pip install tensorflow==2.0.0
import tensorflow as tf
import numpy as np
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# !aws s3 cp --recursive s3://floor28/data/cifar10 ./data
# ## Run the training locally
# The training script takes the arguments it needs for model training. They are:
#
# 1. `model_dir` - path where logs and checkpoints are saved
# 2. `train, validation, eval` - paths where the TFRecord datasets are stored
# 3. `epochs` - number of epochs
#
# With the command below, train for just 1 epoch in the local notebook-instance environment, **<font color='red'>without any SageMaker API calls</font>**. For reference, this takes about 2 minutes 20 seconds to 2 minutes 40 seconds on a MacBook Pro (15-inch, 2018) with a 2.6GHz Core i7 and 16GB of RAM.
# %%time
# !mkdir -p logs
# !python training_script/cifar10_keras_tf2.py --model_dir ./logs \
# --train data/train \
# --validation data/validation \
# --eval data/eval \
# --epochs 1
# !rm -rf logs
# **<font color='blue'>This script is running in a notebook on SageMaker, but it runs just the same on your local computer, as long as Python and Jupyter Notebook are properly installed.</font>**
# ## Use TensorFlow Script Mode
#
# For TensorFlow version 1.11 and later, the Amazon SageMaker Python SDK supports **script mode**. Script mode has the following advantages over the legacy mode:
#
# * A script-mode training script looks much more like the training scripts you would normally write for TensorFlow, so existing TensorFlow training scripts can run with minimal changes. Modifying a TensorFlow training script is therefore easier than in legacy mode.
# - Legacy mode requires functions based on the TensorFlow Estimator API.
# - At least one of the following functions must be included:
# - `model_fn`: defines the model to train.
# - `keras_model_fn`: defines the tf.keras model to train.
# - `estimator_fn`: defines the tf.estimator.Estimator to train.
# - `train_input_fn`: loads and preprocesses the training data.
# - `eval_input_fn`: loads and preprocesses the validation data.
# - (Optional) `serving_input_fn`: defines the features to pass to the model during prediction. It is not used during training, but it is required when deploying the model to a SageMaker endpoint.
# - Legacy mode cannot define an `if __name__ == "__main__":` block, which makes debugging harder.
#
# * Script mode supports Python 2.7 and Python 3.6.
#
# * Script mode also **supports Horovod-based distributed training**.
#
# For details on how to write a training script for TensorFlow script mode and how to use the script-mode estimator and model, see
# https://sagemaker.readthedocs.io/en/stable/using_tf.html.
# ### Preparing your script for training in SageMaker
#
# A SageMaker script-mode training script is very similar to a training script you could run outside SageMaker.
# SageMaker runs the training script with a single argument, model_dir, the S3 path used for logs and model artifacts.
#
# On a SageMaker training instance, the training container downloads the data stored in S3 and uses it for training. The S3 data paths are mapped to data paths inside the container through container environment variables.
#
# You can access useful properties of the training environment through a variety of environment variables.
# For this script, three data channels named `Train`, `Validation`, and `Eval` are sent to the script.
#
# **Make a copy of the script `training_script/cifar10_keras_tf2.py` and save it as `training_script/cifar10_keras_sm_tf2.py`.**
#
# Once you have created the copy, try the following steps yourself, one at a time.
#
# ----
# ### TODO 1.
# In `cifar10_keras_sm_tf2.py`, modify the `train`, `validation`, and `eval` arguments so that their default values are taken from the SageMaker environment variables SM_CHANNEL_TRAIN, SM_CHANNEL_VALIDATION, and SM_CHANNEL_EVAL.
#
# Modify the arguments below inside the `if __name__ == '__main__':` block of `cifar10_keras_sm_tf2.py`.
#
# ```python
# parser.add_argument(
# '--train',
# type=str,
# required=False,
# default=os.environ.get('SM_CHANNEL_TRAIN'), # <-- modified
# help='The directory where the CIFAR-10 input data is stored.')
# parser.add_argument(
# '--validation',
# type=str,
# required=False,
# default=os.environ.get('SM_CHANNEL_VALIDATION'), # <-- modified
# help='The directory where the CIFAR-10 input data is stored.')
# parser.add_argument(
# '--eval',
# type=str,
# required=False,
# default=os.environ.get('SM_CHANNEL_EVAL'), # <-- modified
# help='The directory where the CIFAR-10 input data is stored.')
# ```
#
#
# The S3 paths, the environment variables, and the corresponding container paths are shown in the table below.
#
# | S3 경로 | 환경 변수 | 컨테이너 경로 |
# | :---- | :---- | :----|
# | s3://bucket_name/prefix/train | `SM_CHANNEL_TRAIN` | `/opt/ml/input/data/train` |
# | s3://bucket_name/prefix/validation | `SM_CHANNEL_VALIDATION` | `/opt/ml/input/data/validation` |
# | s3://bucket_name/prefix/eval | `SM_CHANNEL_EVAL` | `/opt/ml/input/data/eval` |
# | s3://bucket_name/prefix/model.tar.gz | `SM_MODEL_DIR` | `/opt/ml/model` |
# | s3://bucket_name/prefix/output.tar.gz | `SM_OUTPUT_DATA_DIR` | `/opt/ml/output/data` |
#
# For example, `/opt/ml/input/data/train` is the directory inside the container to which the training data is downloaded.
#
# For details, see the SageMaker Python SDK documentation below.<br>
# (https://sagemaker.readthedocs.io/en/stable/using_tf.html#preparing-a-script-mode-training-script)
#
#
# SageMaker does not pass the train, validation, and eval paths directly as arguments; instead, the script marks those arguments as not required and reads them from environment variables.
#
# SageMaker sends a number of useful environment variables to your training script, for example:
# * `SM_MODEL_DIR`: a string describing the local path where the training job can save model artifacts. After training, the model artifacts in this path are uploaded to S3 for model hosting. Note that this differs from the model_dir argument passed to the training script, which is an S3 location. SM_MODEL_DIR is always set to `/opt/ml/model`.
# * `SM_NUM_GPUS`: an integer giving the number of GPUs available on the host.
# * `SM_OUTPUT_DATA_DIR`: a string describing the path to the directory where output artifacts should be saved. Output artifacts may include checkpoints, graphs, and other files to save, but do not include model artifacts. These output artifacts are compressed and uploaded to S3 under the same prefix as the model artifacts.
#
# This sample code saves the model checkpoints locally to reduce network latency. They can be uploaded to S3 once training has finished.
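The environment-variable fallback pattern used in the TODO steps can be tried outside SageMaker by setting the variables yourself. A minimal, self-contained sketch follows; the paths simply mirror the container paths in the table above, and on a real training instance SageMaker sets these variables for you:

```python
import argparse
import os

# Emulate what the SageMaker training container would set for us.
os.environ['SM_CHANNEL_TRAIN'] = '/opt/ml/input/data/train'
os.environ['SM_MODEL_DIR'] = '/opt/ml/model'

parser = argparse.ArgumentParser()
parser.add_argument('--train', type=str, required=False,
                    default=os.environ.get('SM_CHANNEL_TRAIN'))
parser.add_argument('--model_output_dir', type=str,
                    default=os.environ.get('SM_MODEL_DIR'))

args = parser.parse_args([])  # no CLI arguments -> defaults come from the environment
print(args.train)             # /opt/ml/input/data/train
print(args.model_output_dir)  # /opt/ml/model
```

Because the defaults are read from the environment, the same script works unchanged both on SageMaker (where the variables are injected) and locally (where you export them yourself, as in the local test cell further below).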
#
# ----
# ### TODO 2.
#
# Add the argument below inside the `if __name__ == '__main__':` block of `cifar10_keras_sm_tf2.py`.
#
# ```python
# parser.add_argument(
# '--model_output_dir',
# type=str,
# default=os.environ.get('SM_MODEL_DIR'))
# ```
#
# ----
# ### TODO 3.
# Change the save path of the `ModelCheckpoint` callback to the new path, as shown below.
#
# From:
# ```python
# callbacks.append(ModelCheckpoint(args.model_dir + '/checkpoint-{epoch}.h5'))
# ```
# To:
# ```python
# callbacks.append(ModelCheckpoint(args.model_output_dir + '/checkpoint-{epoch}.h5'))
# ```
#
# ----
# ### TODO 4.
# Change the argument of the `save_model` call as shown below.
#
# From:
# ```python
# return save_model(model, args.model_dir)
# ```
# To:
# ```python
# return save_model(model, args.model_output_dir)
# ```
#
# <font color='blue'>**If you have trouble with this notebook exercise, refer to the solution file `training_script/cifar10_keras_sm_tf2_solution.py`.**</font>
# ### Test your script locally (just like on your laptop)
#
# To test, run the new script with the same command as above and check that it runs as expected. <br>
# When the SageMaker TensorFlow API is called, the environment variables are passed automatically, but when testing in a local Jupyter notebook you have to set them manually. (See the example code below.)
#
# ```python
# # %env SM_MODEL_DIR=./logs
# ```
# +
# %%time
# !mkdir -p logs
# Number of GPUs on this machine
# %env SM_NUM_GPUS=1
# Where to save the model
# %env SM_MODEL_DIR=./logs
# Where the training data is
# %env SM_CHANNEL_TRAIN=data/train
# Where the validation data is
# %env SM_CHANNEL_VALIDATION=data/validation
# Where the evaluation data is
# %env SM_CHANNEL_EVAL=data/eval
# !python training_script/cifar10_keras_sm_tf2.py --model_dir ./logs --epochs 1
# !rm -rf logs
# -
# ### Use SageMaker local for local testing
#
# Before starting training in earnest, debug first using local mode. Local mode pulls the container onto the local instance and starts training right away, with no training-instance provisioning step, so you can validate your code much faster.
#
# Local mode in the Amazon SageMaker Python SDK can emulate CPU (single and multi-instance) and GPU (single-instance) SageMaker training jobs by changing a single argument on the TensorFlow or MXNet estimator. It uses Docker Compose and NVIDIA Docker for this.
# When `estimator.fit()` is called to start a training job, the Amazon SageMaker TensorFlow container is downloaded from Amazon ECR to the local notebook instance.
#
# With local-mode training you can easily monitor metrics such as GPU utilization to check that your code makes proper use of the hardware it is running on.
# +
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
# -
# Create a TensorFlow Estimator instance from the SageMaker Python SDK using the `sagemaker.tensorflow` class.
# You can change the hyperparameters and various other settings through its arguments.
#
# See the [documentation](https://sagemaker.readthedocs.io/en/stable/using_tf.html#training-with-tensorflow-estimator) for details.
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs' : 1},
train_instance_count=1,
train_instance_type='local')
# Specify the three channels used for training and the paths to the data. **Because this runs in local mode, you can use notebook-instance paths instead of S3 paths.**
# %%time
estimator.fit({'train': 'file://data/train',
'validation': 'file://data/validation',
'eval': 'file://data/eval'})
# The first time the Estimator runs, it has to download the container image from its Amazon ECR repository, but training starts immediately. In other words, there is no need to wait for a separate training cluster to be provisioned. In addition, on subsequent runs, which you may need while iterating and testing, any changes to your MXNet or TensorFlow script start executing right away.
# ### Using SageMaker for faster training time
#
# This time, instead of using local mode, create a GPU training instance for SageMaker training to shorten the training time.<br>
# The differences from local mode are that (1) `train_instance_type` must be set to the specific instance type you want instead of local mode's 'local', and (2) the training data must be uploaded to Amazon S3 and the training paths set to S3 paths.
#
# The SageMaker SDK provides a simple function for uploading to S3 (`Session.upload_data()`). Its return value is the S3 path where the data is stored.
# If you need more detailed control, use boto3 instead of the SageMaker SDK.
#
# *[Note]: Amazon EFS and Amazon FSx for Lustre are also supported for high-performance workloads. For more information, see the AWS blog below.<br>
# https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/*
dataset_location = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10')
display(dataset_location)
# Once the data has been uploaded to S3, create a new Estimator. <br>
# Copy the code below as-is, then change `train_instance_type='local'` to `train_instance_type='ml.p2.xlarge'`
# and `hyperparameters={'epochs': 1}` to `hyperparameters={'epochs': 5}`.
#
# ```python
# from sagemaker.tensorflow import TensorFlow
# estimator = TensorFlow(base_job_name='cifar10',
# entry_point='cifar10_keras_sm_tf2.py',
# source_dir='training_script',
# role=role,
# framework_version='2.0.0',
# py_version='py3',
# script_mode=True,
# hyperparameters={'epochs': 1},
# train_instance_count=1,
# train_instance_type='local')
# ```
#
# *[Note]
# Since August 2019, SageMaker can also use EC2 Spot Instances as training instances, which can reduce costs significantly. For more information, see the AWS blog below.<br>
# https://aws.amazon.com/ko/blogs/korea/managed-spot-training-save-up-to-90-on-your-amazon-sagemaker-training-jobs/*
#
# To train with Managed Spot Instances, add the following lines after the Estimator's train_instance_type:
# ```python
# train_max_run = 3600,
# train_use_spot_instances = True,
# train_max_wait = 3600,
# ```
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs': 5},
train_instance_count=1,
train_instance_type='ml.p2.xlarge')
# Run the training. This time, point each channel (`train, validation, eval`) at the data location in S3.<br>
# After training completes, also check the Billable seconds. Billable seconds is the time you are actually billed for when the training runs.
# ```
# Billable seconds: <time>
# ```
#
# For reference, training 5 epochs on an `ml.p2.xlarge` instance takes 6-7 minutes in total, of which the training itself takes 3-4 minutes.
# %%time
estimator.fit({'train':'{}/train'.format(dataset_location),
'validation':'{}/validation'.format(dataset_location),
'eval':'{}/eval'.format(dataset_location)})
# ## Start a new SageMaker experiment
#
# Amazon SageMaker Experiments lets data scientists organize, track, compare, and evaluate machine learning experiments.
# Machine learning is an iterative process. Data scientists need to experiment with combinations of data, algorithms, and parameters while observing incremental changes in model accuracy. This iterative process produces a large number of model trainings and model versions, which makes it hard to track the best-performing models and their input configurations. It also makes it harder to compare current experiments with past ones in search of further incremental improvements.
#
# **Amazon SageMaker Experiments automatically tracks the inputs, parameters, configurations, and results of each iteration as a trial.<br>
# Data scientists can assign, group, and organize these trials into experiments.**
# SageMaker Experiments is integrated with Amazon SageMaker Studio, which provides a visual interface for browsing current and past experiments. SageMaker Studio can also compare trials on key evaluation metrics and identify the best-performing models.
#
# First, install sagemaker-experiments.
# !pip install sagemaker-experiments
# Now create an experiment.
# +
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
import time
# Create an experiment
cifar10_experiment = Experiment.create(
experiment_name="TensorFlow-cifar10-experiment",
description="Classification of cifar10 images")
# -
# Next, create a trial. This trial will run with 5 epochs on a GPU instance.
# Create a trial
trial_name = f"cifar10-training-job-{int(time.time())}"
trial = Trial.create(
trial_name=trial_name,
experiment_name=cifar10_experiment.experiment_name
)
# Create a new estimator.
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
hyperparameters={'epochs' : 5},
train_instance_count=1,
train_instance_type='ml.p2.xlarge')
# Next, use the S3 data location for each input data channel:
# ```python
# dataset_location + '/train'
# dataset_location + '/validation'
# dataset_location + '/eval'
# ```
# Add the experiment config defined above as a parameter to the fit function. The trial is then associated with the training job.
# <br>A TrialComponent represents one component of a trial; here it refers to the "Training" component.
# ```python
# experiment_config={
# "ExperimentName": cifar10_experiment.experiment_name,
# "TrialName": trial.trial_name,
# "TrialComponentDisplayName": "Training"}
# ```
estimator.fit({'train' : dataset_location + '/train',
'validation' : dataset_location + '/validation',
'eval' : dataset_location + '/eval'
},
experiment_config={
"ExperimentName": cifar10_experiment.experiment_name,
"TrialName": trial.trial_name,
"TrialComponentDisplayName": "Training"
}
)
# ## Analyze the experiments
# Here we create a filter that finds only the trial components whose DisplayName equals "Training". <br>
# It matches the `"TrialComponentDisplayName": "Training"` set above.
search_expression = {
"Filters":[
{
"Name": "DisplayName",
"Operator": "Equals",
"Value": "Training",
}
],
}
# Pass the experiment name and the filter created above as parameters to the ExperimentAnalytics function.
# +
import pandas as pd
pd.options.display.max_columns = 500
from sagemaker.analytics import ExperimentAnalytics
trial_component_analytics = ExperimentAnalytics(
sagemaker_session=sagemaker_session,
experiment_name=cifar10_experiment.experiment_name,
search_expression=search_expression
)
table = trial_component_analytics.dataframe(force_refresh=True)
display(table)
# -
# ### Clean up the Experiment
# Experiment names are unique per account and region, so it is good practice to delete an experiment when you are no longer using it.<br>
# We pass the cifar10_experiment object created above to the cleanup function below to delete it.
# This deletes the associated trial components and trials, and finally deletes the experiment itself.
# +
import boto3
sess = boto3.Session()
sm = sess.client('sagemaker')
from smexperiments.trial_component import TrialComponent
def cleanup(experiment):
for trial_summary in experiment.list_trials():
trial = Trial.load(sagemaker_boto_client=sm, trial_name=trial_summary.trial_name)
for trial_component_summary in trial.list_trial_components():
tc = TrialComponent.load(
sagemaker_boto_client=sm,
trial_component_name=trial_component_summary.trial_component_name)
trial.remove_trial_component(tc)
try:
# comment out to keep trial components
tc.delete()
except:
# tc is associated with another trial
continue
# to prevent throttling
time.sleep(.5)
trial.delete()
experiment.delete()
    print("The experiment is deleted")
cleanup(cifar10_experiment)
# -
# **Well done.**
#
# You have successfully trained 5 epochs on SageMaker using a GPU instance.<br>
# Before continuing to the next notebook, take a look at the Training jobs section of the SageMaker console, find the job you ran, and review its configuration.
#
# For more on script-mode training, see the AWS blog below:<br>
# [Using TensorFlow eager execution with Amazon SageMaker script mode](https://aws.amazon.com/ko/blogs/machine-learning/using-tensorflow-eager-execution-with-amazon-sagemaker-script-mode/)
| 0_Running_TensorFlow_In_SageMaker_tf2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # YOUR PROJECT TITLE
# > **Note the following:**
# > 1. This is *not* meant to be an example of an actual **model analysis project**, just an example of how to structure such a project.
# > 1. Remember the general advice on structuring and commenting your code from [lecture 5](https://numeconcopenhagen.netlify.com/lectures/Workflow_and_debugging).
# > 1. Remember this [guide](https://www.markdownguide.org/basic-syntax/) on markdown and (a bit of) latex.
# > 1. Turn on automatic numbering by clicking on the small icon on top of the table of contents in the left sidebar.
# > 1. The `modelproject.py` file includes a function which could be used multiple times in this notebook.
# Imports and set magics:
# +
import numpy as np
from scipy import optimize
import sympy as sm
# autoreload modules when code is run
# %load_ext autoreload
# %autoreload 2
# local modules
import modelproject
# -
# # Overlapping Generations (OLG) Model
# Consider an economy where individuals live for two periods, and the population grows at a constant rate $n>0$. Identical competitive firms maximize their profits employing a Cobb-Douglas technology that combines labor, $L_{t}$, and capital, $K_{t}$, so that $Y_{t}=A K_{t}^{\alpha} L_{t}^{1-\alpha}$, with $\alpha \in(0,1)$. Assume full capital depreciation (i.e., $\delta=1$ ). Under these assumptions, profit maximization leads to:
# $$
# \begin{aligned}
# r_{t} &=\alpha A k_{t}^{\alpha-1}, \\
# w_{t} &=(1-\alpha) A k_{t}^{\alpha},
# \end{aligned}
# $$
# where $r_{t}$ is the (net) rental rate of capital, $w_{t}$ is the wage rate, and $k_{t}$ denotes capital in per-worker units.
# Utility for young individuals born in period $t$ is
# $$
# U_{t}=\ln c_{1 t}+\frac{1}{1+\rho} \ln c_{2 t+1},
# $$
# with $\rho > -1$. Here $c_{1 t}$ denotes consumption when young and $c_{2 t+1}$ consumption when old. Young agents spend their entire time endowment, which is normalized to one, working. Suppose the government runs an unfunded (pay-as-you-go) social security system, according to which the young pay a contribution $d_{t}$ that
# amounts to a fraction $\tau \in(0,1)$ of their wages. The contributions are paid out in the same period to the current old. The old do not work, and sustain their consumption through their savings and the social security benefits. Thus, the budget constraints in each period of life read as:
# $$
# \begin{aligned}
# c_{1 t}+s_{t} &=(1-\tau) w_{t} \\
# c_{2 t+1} &=s_{t} R_{t+1}+(1+n) d_{t+1}
# \end{aligned}
# $$
# where $R_{t+1}=r_{t+1}$ under $\delta=1$.
# **(a)** Set up and solve the individual's problem of optimal intertemporal allocation of resources. Derive the Euler equation. Show that individual saving behavior is characterized by
# $$
# s_{t}=\frac{1}{2+\rho}(1-\tau) w_{t}-\tau \frac{1+\rho}{2+\rho} \frac{1+n}{1+r_{t+1}} w_{t+1}
# $$
# **(b)** Show that the capital accumulation equation that gives $k_{t+1}$ as a function of $k_{t}$ is given by
# $$
# k_{t+1}=\frac{1}{1+\frac{1+\rho}{2+\rho} \frac{(1-\alpha)}{\alpha} \tau}\left[\frac{(1-\alpha)(1-\tau)}{(1+n)(2+\rho)} A k_{t}^{\alpha}\right]
# $$
# Show also that, in the steady state, the amount of capital-per-worker is
# $$
# \bar{k}=\left[\frac{1}{1+\frac{1+\rho}{2+\rho} \frac{(1-\alpha)}{\alpha} \tau} \frac{(1-\alpha)(1-\tau) A}{(1+n)(2+\rho)}\right]^{\frac{1}{1-\alpha}} .
# $$
# **(c)** Suppose that, at time $T$, before saving decisions are made, the government decides to switch to a fully funded social security system according to which the young pay a contribution $d_{T}$ that amounts to a fraction $\tau \in (0,1)$ of their wages. These contributions are then paid out in the next period, together with the accrued interest. The budget constraints in each period of life now read as:
# $$
# \begin{aligned}
# c_{1 t}+s_{t} &=(1-\tau) w_{t}, \\
# c_{2 t+1} &=\left(s_{t}+\tau w_{t}\right) R_{t+1}, \quad \text { for } t \geq T
# \end{aligned}
# $$
# Show that the new steady-state capital-per-worker, which is denoted by $\bar{k}^{\prime}$, is such that
# $$
# \bar{k}^{\prime}=\left[\frac{(1-\alpha) A}{(1+n)(2+\rho)}\right]^{\frac{1}{1-\alpha}} .
# $$
# **(d)** In the absence of any compensation from the government, the old generation at time $T$ is worse off after the social security system is changed. Explain why. How could the government intervene to compensate them, without imposing any burden on the current generation of workers?
# # OLG model different pension schemes
# ## Model description
# ### Unfunded social security system
# **Time:** Discrete and indexed by $t\in\{0,1,\dots\}$.
# **Demographics:** Population grows at a constant rate $n>0$ and a life consists of
# two periods: *young* and *old*.
# **Households:** Utility for young individuals born in period $t$ is
# $$
# \begin{aligned}
# & U_{t}=\ln c_{1 t}+\frac{1}{1+\rho} \ln c_{2 t+1} \\
# & \text{s.t.}\\
# & S_{t}=s_{t}(1-\tau_{w})w_{t}\\
# & C_{1t}=(1-s_{t})(1-\tau_{w})w_{t}\\
# & C_{2 t+1}=s_{t} r_{t+1}+(1+n) d_{t+1}
# \end{aligned}
# $$
# with $\rho > -1$. Here $c_{1 t}$ denotes consumption when young and $c_{2 t+1}$ consumption when old, when the agents no longer work. Young agents spend their entire time endowment, which is normalized to one, working. Suppose the government runs an unfunded (pay-as-you-go) social security system, according to which the young pay a contribution $d_{t}$ that amounts to a fraction $\tau \in(0,1)$ of their wages.
# **Firms:** Firms rent capital $K_{t-1}$ at the rental rate $r_{t}^{K}$,
# and hires labor $E_{t}$ at the wage rate $w_{t}$. Firms have access
# to the production function
#
# $$
# \begin{aligned}
# Y_{t}=F(K_{t-1},E_{t})=A K_{t-1}^{\alpha} E_{t}^{1-\alpha},\quad \alpha\in(0,1)
# \end{aligned}
# $$
#
# Profits are
#
# $$
# \begin{aligned}
# \Pi_{t}=Y_{t}-w_{t}E_{t}-r_{t}^{K}K_{t-1}
# \end{aligned}
# $$
# **Government:** The Government is not directly included in the model, but runs the social security system by paying the contribution $d_t=\tau w_t$ of the young generation's wage to the old generation.
# **Capital:** Depreciates with a rate of $\delta \in [0,1]$.
# ### Fully funded social security system
# The young pay a contribution $d_{T}$ that amounts to a fraction $\tau \in (0,1)$ of their wages. These contributions are then paid out in the next period, together with the accrued interest. The budget constraints are now:
# $$
# \begin{aligned}
# c_{1 t}+s_{t} &=(1-\tau) w_{t}, \\
# c_{2 t+1} &=\left(s_{t}+\tau w_{t}\right) r_{t+1}, \quad \text { for } t \geq T
# \end{aligned}
# $$
# ## Analytical solution
# If your model allows for an analytical solution, you should provide it here.
#
# You may use Sympy for this. Then you can characterize the solution as a function of a parameter of the model.
#
# To characterize the solution, first derive a steady state equation as a function of a parameter using Sympy.solve and then turn it into a python function by Sympy.lambdify. See the lecture notes for details.
# ## Numerical solution
# You can always solve a model numerically.
#
# Define first the set of parameters you need.
#
# Then choose one of the optimization algorithms that we have gone through in the lectures based on what you think is most fitting for your model.
#
# Are there any problems with convergence? Does the model converge for all starting values? Do plenty of testing to figure these things out.
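As a minimal starting point for the numerical part, the PAYG transition equation from part (b) can be iterated to a fixed point and checked against the closed-form steady state. This is a sketch; the parameter values below are illustrative assumptions, not calibrated values:

```python
# Iterate k_{t+1} = f(k_t) from the PAYG transition equation and compare
# with the closed-form steady state k-bar from part (b).
A, alpha, rho, n, tau = 1.0, 1.0 / 3.0, 0.05, 0.02, 0.10

# Common prefactor 1 / (1 + ((1+rho)/(2+rho)) * ((1-alpha)/alpha) * tau)
scale = 1.0 / (1.0 + (1.0 + rho) / (2.0 + rho) * (1.0 - alpha) / alpha * tau)

def f(k):
    """Capital accumulation: k_{t+1} as a function of k_t."""
    return scale * (1.0 - alpha) * (1.0 - tau) * A * k ** alpha / ((1.0 + n) * (2.0 + rho))

# Closed-form steady state from the text
k_bar = (scale * (1.0 - alpha) * (1.0 - tau) * A
         / ((1.0 + n) * (2.0 + rho))) ** (1.0 / (1.0 - alpha))

k = 0.1  # arbitrary positive starting value
for _ in range(200):
    k = f(k)

print(abs(k - k_bar) < 1e-10)  # True: the iteration converges to the analytical steady state
```

Because $f(k) = c\,k^{\alpha}$ with $\alpha \in (0,1)$ is a contraction in logs, the iteration converges to the unique positive steady state from any positive starting value, which is a useful sanity check before moving to more general solvers.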
# # Further analysis
# Make detailed visualizations of how your model changes with parameter values.
#
# Try to make an extension of the model.
# # Conclusion
# Add concise conclusion.
| modelproject/modelproject.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/felipe-parodi/DL4DataScience/blob/main/Week10_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Nn57geGyp2hO"
# # CIS-522 Week 10 Part 2
# # Introduction to Transformers, BERT, and Language Models
#
# **Instructor:** <NAME>
#
# **Content Creators:** Sanjeevini Ganni, <NAME>, <NAME>
#
#
# + [markdown] id="ymkujLBlqM6F"
# ##Tutorial Objectives
#
#
# (1) Recognize NLP tasks: IR/search, Question Answering/text completion, MT \\
# (2) Understand distributional similarity on words and context including Context-oblivious embeddings (word2vec, glove, fastText) and multilingual embeddings \\
# (3) Attention \\
# (4) Context-sensitive embeddings: BERT and transformers: masking and self-attention \\
# (5) The many flavors of BERT: RoBERTa and DistilBERT \\
# (6) Fine-tuning language embeddings \\
# (7) Explaining NLP models \\
# (8) Big language models: GPT-3 and friends \\
# (9) Societal: Bias in language embeddings \\
#
# + [markdown] id="VC2Bqp7M12U1"
# ---
# ## Preface
# We recommend saving this notebook in your Google Drive (`File -> Save a copy in Drive`).
# + id="LsvDRvFG15Rw" cellView="form"
#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)
my_pennkey = 'fparodi' #@param {type:"string"}
my_pod = 'superfluous-lyrebird' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion','astute-jellyfish', 'quantum-herring']
# start timing
import time
try:t0;
except NameError: t0 = time.time()
# + [markdown] id="2_oC9dNZqQu-"
# ##Setup
# + id="0RcjygJ2CRSG" colab={"base_uri": "https://localhost:8080/"} outputId="4df0413c-9381-4ca6-8f95-19f04405c87b"
#@title Install
# !pip install torchtext==0.4.0
# !pip install transformers
# # !git clone https://github.com/facebookresearch/fastText.git
# # %cd fastText
# # !pip install .
# # %cd ..
# + id="52U7jyR4pngL" colab={"base_uri": "https://localhost:8080/"} outputId="31170c45-ab2a-4071-bc86-ef92bf21f849"
#@title Imports and Seed
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# %matplotlib inline
import re
from IPython.display import Image
import os
from tqdm import tqdm_notebook as tqdm
import sys
import random
import torch
import torch.nn as nn
from torch.nn import functional as F
import torch.optim as optim
from torch.autograd import Variable
from torchtext import data, datasets
from torchtext.vocab import Vectors, FastText
# import fasttext
import requests
import zipfile
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import urllib
import csv
from scipy.special import softmax
seed = 522
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# + id="l82sJ8BNmty-"
# @title Figure Settings
import ipywidgets as widgets
# %matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
# %config InlineBackend.figure_format = 'retina'
SMALL_SIZE = 12
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/"
"course-content/master/nma.mplstyle")
# + id="ik8be94eRLPt"
#@title Helper functions
def cosine_similarity(vec_a, vec_b):
"""Compute cosine similarity between vec_a and vec_b"""
return np.dot(vec_a, vec_b) / \
(np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
def tokenize(sentences):
#Tokenize the sentence
#from nltk.tokenize library use word_tokenize
token = word_tokenize(sentences)
return token
def plot_train_val(x, train, val, train_label, val_label, title):
plt.plot(x, train, label=train_label)
plt.plot(x, val, label=val_label)
plt.legend(loc='lower right')
plt.xlabel('epoch')
plt.title(title)
plt.show()
# + [markdown] id="Bi_7aepGAZkf"
# ###Data
# + id="xCZ56QJ96aMW"
#@title Load Data
def load_dataset(emb_vectors, sentence_length = 50):
TEXT = data.Field(sequential=True, tokenize=tokenize, lower=True, include_lengths=True, batch_first=True, fix_length=sentence_length)
LABEL = data.LabelField(dtype=torch.float)
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(train_data, vectors=emb_vectors)
LABEL.build_vocab(train_data)
train_data, valid_data = train_data.split(split_ratio=0.7, random_state = random.seed(seed))
train_iter, valid_iter, test_iter = data.BucketIterator.splits((train_data, valid_data, test_data), batch_size=32, sort_key=lambda x: len(x.text), repeat=False, shuffle=True)
vocab_size = len(TEXT.vocab)
return TEXT, vocab_size, train_iter, valid_iter, test_iter
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
# + [markdown] id="IWxI-yHszqqs"
# ---
# ##Section 1: Transformers
# + id="8J7i4TZQAIix" colab={"base_uri": "https://localhost:8080/", "height": 519} cellView="form" outputId="62383297-40a5-4977-da99-7b17973025ef"
#@title Video : Self-attention
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="g860drKesIw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + [markdown] id="HJhcL3PivRyM"
# Transformers! Like CNNs and LSTMs, this base model architecture has been the foundation of many very successful models such as BERT and friends. It has seen tremendous use in the NLP space, and there have been efforts to extend it to the audio and visual domains as well. The original paper, [Attention Is All You Need (Vaswani et al. 2017)](https://arxiv.org/abs/1706.03762), is very readable and you are encouraged to take a look.
#
# The Transformer model is fundamentally an encoder-decoder model that operates on sequences of tokens. Both the encoder and decoder components are composed of stacks of submodules that use **only** attention mechanisms and linear weights to learn (there are no CNNs or RNNs). The architecture schematic looks like the following:
#
# 
#
# In the rest of this section we will be going over the various building blocks that go into Transformers. The goal here is not to train anything (that is left for a homework assignment). Rather, the emphasis is on understanding what all the pieces do and how they fit together.
#
# *Note:* Many of the images in this section are taken from Dive Into Deep Learning's chapter on [Attention Mechanisms](https://d2l.ai/chapter_attention-mechanisms/index.html). You are encouraged to check that out for additional details and implementations.
#
#
# + [markdown] id="YjdBlVNuT96x"
# ### Self-Attention
#
# Transformers make use of something called self-attention as a critical component to the entire operation. What does that mean in the context of attention mechanisms? If you recall, attention mechanisms in machine learning have three components:
#
# - the values V (the things you perceive i.e. model inputs)
# - the query Q (the thing you want to attend to)
# - the keys K (a mapping between queries and values)
#
# Generally, the number and dimensionality of the queries, keys, and values can all be different. In self-attention, however, the queries, keys, and values are all drawn from the same set of inputs. In other words, we don't need to specify anything about how queries and keys are formed; they come straight from the data, just like the values!
#
# Take a minute to look at this article from the last pod session; it has a detailed graphical explanation of how attention scores are calculated:
# https://towardsdatascience.com/illustrated-self-attention-2d627e33b20a
#
# Ok, so we know that our queries, keys, and values come from our input sequence, but which attention mechanism should we use?
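Before looking at a specific mechanism, here is a minimal sketch of self-attention on a random input matrix (a standalone illustration, not part of the exercises below): queries, keys, and values are all the same matrix `X`.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(3, 4)  # 3 tokens with embedding dim 4; Q, K, and V all come from X

# compare every token (as a query) against every token (as a key), then scale
scores = X @ X.T / (4 ** 0.5)
weights = F.softmax(scores, dim=-1)  # each row is a distribution over the 3 tokens
output = weights @ X                 # each token's output is a weighted mix of all values

print(output.shape)  # torch.Size([3, 4])
```

Each row of `weights` sums to 1, so every output token is a convex combination of the input tokens.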
# + [markdown] id="djMSZnkXkeDU"
# ### Masked Scaled Dot Product Attention
# + id="hvk5zc2akcM0" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="92e4c826-953d-4d83-cba5-2fab6d35c31b"
#@title Video : Masking
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="VtaGIp_9j1w", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + [markdown] id="TLCjYZvPR4-A"
# Masking is used to hide values from being attended to, including \
# (a) words that are hidden for self-supervised learning, \
# (b) padding tokens, and \
# (c) in the seq2seq tasks of the original Transformer paper, everything after the current position in the target sequence during training (to enforce autoregressive behavior).
#
# BERT, on the other hand, masks out individual tokens within the input sequence and everything but those tokens on the output side. This provides a more bidirectional-style embedding learning strategy.
#
# This is accomplished by setting every element we want to mask to $-\infty$ before applying softmax, which has the effect of giving that element a probability of 0.
#
# We've provided a masked softmax function below, which assumes a binary matrix of size (batch_size, n_tokens) where a value of 0 indicates that token should be masked.
#
# + id="_kodsdE5dedu"
def masked_softmax(x, mask):
""" Applies softmax on a masked version of the input.
Args:
        x (n_batch, n_tokens, n_tokens): the scaled dot product of Q and K
        mask (n_batch, n_tokens): binary mask; scores where mask == 0 are set to -inf
Returns:
(n_batch, n_tokens, n_tokens): the result of applying softmax along the last
dimension of the masked input.
"""
return F.softmax(x.masked_fill_(mask.unsqueeze(1) == 0, float('-inf')), dim=-1)
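To see what this does numerically, here is a standalone check that repeats the helper's logic: masking the third of three tokens zeroes out its attention weight and renormalizes the rest.

```python
import torch
import torch.nn.functional as F

x = torch.ones(1, 3, 3)           # uniform scores for 1 batch of 3 tokens
mask = torch.tensor([[1, 1, 0]])  # hide the third token

# same operation as masked_softmax above: -inf scores become 0 after softmax
scores = F.softmax(x.masked_fill_(mask.unsqueeze(1) == 0, float('-inf')), dim=-1)
print(scores)  # every row is [0.5, 0.5, 0.0]
```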
# + [markdown] id="nKRJULI3CM_b"
# #### Exercise 1: Masked Scaled Dot Product Attention Module
# In this exercise you will implement the forward method of a PyTorch module for computing the masked scaled dot product attention function. This is represented by the following equation:
#
# $$
# \alpha(Q, K, V, M) = \mathrm{masked\ softmax} \left( \frac{QK^T}{\sqrt{d_k}}, M \right) V
# $$
#
# where $Q$ is the query tensor, $K$ is the key tensor, $V$ is the value tensor, $M$ is the masking tensor, and $d_k$ is the dimensionality of our embeddings.
#
# PyTorch provides us with the very useful [`torch.bmm`](https://pytorch.org/docs/stable/generated/torch.bmm.html) function to compute matrix multiplication over batches while preserving the batch dimension. You will also want to make use of the [`torch.transpose`](https://pytorch.org/docs/stable/generated/torch.transpose.html#torch.transpose) function to get the transpose of the keys tensor while also preserving the batch dimension.
#
# To calculate the masked scaled dot product attention, we will first get the dot product from the query and key. We divide the dot product by the square root of the embedding length to scale it. Then we apply the masked softmax function to get the scores. By multiplying the scores and the values we get the masked scaled dot product attention.
#
# *NOTE:* During training, dropout is normally applied to the attention weights (the output of the softmax). However, in the interests of clarity, we are omitting it here.
# + id="KguOtPV5ClSA" colab={"base_uri": "https://localhost:8080/"} outputId="13a8c470-9794-49aa-df70-566af6c174cf"
class ScaledDotProductAttention(nn.Module):
def __init__(self, embed_dim):
super().__init__()
self.embed_dim = embed_dim
def forward(self, queries, keys, values, mask):
"""
Args:
queries (n_batch, n_tokens, embed_dim): queries (Q) tensor
keys (n_batch, n_tokens, embed_dim): keys (K) tensor
values (n_batch, n_tokens, embed_dim): values (V) tensor
            mask (n_batch, n_tokens): binary mask (M) tensor
Returns:
(n_batch, n_tokens, embed_dim): scaled dot product attention tensor
"""
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("ScaledDotProductAttention")
####################################################################
# Compute the dot product of the queries and keys, then divide by the square
# root of the embedding size
        scaled_dot_product = torch.bmm(queries, torch.transpose(keys, 1, 2)) / np.sqrt(self.embed_dim)
# Next perform the masked softmax function on this product and supplied mask
masked_softmax_scores = masked_softmax(scaled_dot_product, mask)
# Finally compute and return the dot product of the masked output and the values
attention = torch.bmm(masked_softmax_scores, values)
return attention
# Uncomment below to test your module
torch.manual_seed(522)
batch_size, n_tokens, embed_dim = 1, 3, 4
tokens = torch.normal(0, 1, (batch_size, n_tokens, embed_dim))
attention = ScaledDotProductAttention(embed_dim)
mask = torch.ones((batch_size, n_tokens))
print(attention(tokens, tokens, tokens, mask))
mask[0, 2:] = 0
print(attention(tokens, tokens, tokens, mask))
# + [markdown] id="gCxrHyzvhHCv"
# If done correctly, you should see something like the following (numbers may vary but shape should be the same):
# ```
# tensor([[[-0.2151, -0.0403, 0.9237, -1.6000],
# [ 0.4216, 1.3972, 1.3613, -0.0161],
# [ 0.0862, 0.7809, 1.1435, -0.8234]]])
# tensor([[[-0.2688, -0.3399, 0.8707, -1.7680],
# [ 0.5448, 1.5134, 1.4313, 0.2591],
# [ 0.1679, 0.6550, 1.1717, -0.6798]]])
# ```
# [*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W10_NLP/solutions/W10_Tutorial2_Solution_Ex01.py)
# + [markdown] id="d2fVCkImZNxq"
# Self-attention is great, but it has two shortcomings:
#
# 1. It doesn't let us specify or control what gets attended to, so the model tends to converge on a single, averaged attention strategy (e.g., only short-range or only long-range attention).
#
# 2. Unlike RNNs and ConvNets, it has no implicit ordering or notion of the relative positions of the input tokens.
#
# We know things about natural language, such as that word order matters and there are many different grammatical and syntactic features that imbue useful meaning. How do we overcome this?
#
# First, let's address the attention strategy problem. One answer to only having a single attention strategy is to have many!
# + [markdown] id="Onssr49pLFiK"
# ### Multi-Head Attention
#
# In practice, given the same set of queries, keys, and values we may want our model to combine knowledge from different behaviors of the same attention mechanism, such as capturing dependencies of various ranges (e.g., shorter-range vs. longer-range) within a sequence. Thus, it may be beneficial to allow our attention mechanism to jointly use different representation subspaces of queries, keys, and values.
#
# A multi-head attention mechanism is employed by Transformers to concurrently learn multiple different attention strategies, or "heads." This is accomplished by passing each of the queries, keys, and values through its own fully-connected linear layer. Attention is then computed on all of the resulting splits, which are joined together at the end and passed through a final linear layer to produce the output.
#
# 
#
# Now, to avoid poor scaling as we add attention heads, we can take advantage of the fact that we only need to compute matrix multiplications. By making the dimensionality of each query, key, and value head equal to the original embedding dimension divided evenly by the number of heads, we can pack all of the heads into one tensor and compute the attention scores of every head in a single call.
#
# The methods to shuffle the data around for the input values (queries, keys, values) and then to unshuffle it for the output are provided below.
# + id="hWdhsAo_3qa6"
def mha_transform_input(x, n_heads, head_dim):
""" Restructure the input tensors to compute the heads in parallel
Requires that head_dim = embed_dim / n_heads
Args:
x (n_batch, n_tokens, embed_dim): input tensor, one of queries, keys, or values
n_heads (int): the number of attention heads
head_dim (int): the dimensionality of each head
Returns:
(n_batch*n_heads, n_tokens, head_dim): 3D Tensor containing all the input heads
"""
n_batch, n_tokens, _ = x.shape
x = x.reshape((n_batch, n_tokens, n_heads, head_dim))
x = x.permute(0, 2, 1, 3)
return x.reshape((n_batch * n_heads, n_tokens, head_dim))
def mha_transform_output(x, n_heads, head_dim):
""" Restructures the output back to the original format
Args:
        x (n_batch*n_heads, n_tokens, head_dim): multi-head representation tensor
n_heads (int): the number of attention heads
head_dim (int): the dimensionality of each head
Returns:
(n_batch, n_tokens, embed_dim): 3D Tensor containing all the input heads
"""
n_concat, n_tokens, _ = x.shape
n_batch = n_concat // n_heads
x = x.reshape((n_batch, n_heads, n_tokens, head_dim))
x = x.permute(0, 2, 1, 3)
return x.reshape((n_batch, n_tokens, n_heads * head_dim))
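As a quick sanity check (the helpers are repeated here so the snippet runs standalone), the two transforms are exact inverses of each other: packing the heads and unpacking them recovers the original tensor.

```python
import torch

# (repeated from above so this check is self-contained)
def mha_transform_input(x, n_heads, head_dim):
    n_batch, n_tokens, _ = x.shape
    x = x.reshape((n_batch, n_tokens, n_heads, head_dim)).permute(0, 2, 1, 3)
    return x.reshape((n_batch * n_heads, n_tokens, head_dim))

def mha_transform_output(x, n_heads, head_dim):
    n_batch, n_tokens = x.shape[0] // n_heads, x.shape[1]
    x = x.reshape((n_batch, n_heads, n_tokens, head_dim)).permute(0, 2, 1, 3)
    return x.reshape((n_batch, n_tokens, n_heads * head_dim))

x = torch.randn(2, 3, 8)  # batch of 2, 3 tokens, embed_dim 8
heads = mha_transform_input(x, n_heads=4, head_dim=2)
print(heads.shape)        # torch.Size([8, 3, 2])
restored = mha_transform_output(heads, n_heads=4, head_dim=2)
assert torch.equal(restored, x)  # round trip recovers the original tensor
```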
# + [markdown] id="-_rWeVEtKgdk"
# #### Exercise 2: Multi-Head Attention Module
# In this exercise you will implement the forward method of a PyTorch module for handling the multi-head attention mechanism. Each of the Q, K, and V inputs needs to be run through its corresponding linear layer and then transformed using `mha_transform_input`. You then pass these to our scaled dot product attention module, transform that output back using `mha_transform_output`, and then run that through the corresponding output linear layer.
#
# *NOTE:* In the original Transformers paper, the linear layers were just weight matrices with no bias term, which is reproduced here by using `Linear` layers with `bias=False`.
# + id="_Ohoi8GWGN8l" colab={"base_uri": "https://localhost:8080/"} outputId="4848dba1-4a76-468e-8625-a0246631089a"
class MultiHeadAttention(nn.Module):
def __init__(self, n_heads, embed_dim):
super().__init__()
self.n_heads = n_heads
self.head_dim = embed_dim // n_heads
self.attention = ScaledDotProductAttention(embed_dim)
self.query_fc = nn.Linear(embed_dim, embed_dim, bias=False)
self.key_fc = nn.Linear(embed_dim, embed_dim, bias=False)
self.value_fc = nn.Linear(embed_dim, embed_dim, bias=False)
self.out_fc = nn.Linear(embed_dim, embed_dim, bias=False)
def forward(self, queries, keys, values, mask):
"""
Args:
queries (n_batch, n_tokens, embed_dim): queries (Q) tensor
keys (n_batch, n_tokens, embed_dim): keys (K) tensor
values (n_batch, n_tokens, embed_dim): values (V) tensor
mask (n_batch, n_tokens): binary mask tensor
Returns:
(n_batch, n_tokens, embed_dim): multi-head attention tensor
"""
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("MultiHeadAttention")
####################################################################
# Pass the queries through their linear layer, then apply the mha_transform_input function
q_heads = mha_transform_input(self.query_fc(queries),self.n_heads,self.head_dim)
# Pass the keys through their linear layer, then apply the mha_transform_input function
k_heads = mha_transform_input(self.key_fc(keys),self.n_heads,self.head_dim)
# Pass the values through their linear layer, then apply the mha_transform_input function
v_heads = mha_transform_input(self.value_fc(values),self.n_heads,self.head_dim)
        # Compute the scaled dot product attention on the transformed q, k, and v
        # attention heads. The mask is repeated once per head so it lines up with
        # the (n_batch * n_heads) leading dimension of the head tensors.
        attention_heads = self.attention(q_heads, k_heads, v_heads,
                                         mask.repeat_interleave(self.n_heads, dim=0))
# Apply the mha_transform_output function to the attention heads, then pass
# this through the output linear layer
attention = self.out_fc(mha_transform_output(attention_heads,self.n_heads,self.head_dim))
return attention
# Uncomment below to test your module
torch.manual_seed(522)
n_heads, batch_size, n_tokens, embed_dim = 2, 1, 3, 4
tokens = torch.normal(0, 1, (batch_size, n_tokens, embed_dim))
mask = torch.ones((batch_size, n_tokens))
attention = MultiHeadAttention(n_heads, embed_dim)
attention(tokens, tokens, tokens, mask)
# + [markdown] id="gshY1bdKx-IM"
# If done correctly, you should see something like the following (numbers may vary but shape should be the same):
# ```
# tensor([[[ 0.1191, 0.3588, 0.2972, -0.2594],
# [ 0.1204, 0.3102, 0.2904, -0.2539],
# [ 0.1216, 0.3362, 0.2921, -0.2603]]], grad_fn=<UnsafeViewBackward>)
# ```
# [*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W10_NLP/solutions/W10_Tutorial2_Solution_Ex02.py)
# + [markdown] id="JcVRwWnKf3ZN"
# So we have a solution for enabling multiple different attention strategies to be applied. But how about knowing where each token is in the sequence? Unlike RNNs that recurrently process tokens of a sequence one by one, self-attention ditches sequential operations in favor of parallel computation. Well, what if we just explicitly added some piece of information to the input representation that encodes each token's position?
#
# Positional encodings can be either learned or fixed. While you can likely imagine ways to do this, Transformers use fixed positional encoding based on sine and cosine functions:
#
# $$
# p_{i,2j} = \sin \left( \frac{i}{10000^{2j/d}} \right) \\
# p_{i,2j+1} = \cos \left( \frac{i}{10000^{2j/d}} \right)
# $$
#
# where $i$ and $j$ are iterated over the rows (tokens) and columns (embedding dimensions), respectively. This likely seems strange at first, but it has the neat effect of
# 1. providing unique values across the matrix elements, and
# 2. producing float values that can simply be added to the input embedded tokens.
#
# We can see an example of what this looks like when plotted for a few columns below:
#
# 
# + [markdown] id="BpM1VQmqLXQT"
# ### Exercise 3: Positional Encoding Module
# In this exercise you will create the forward method for a PyTorch module that will add positional embeddings to an input batch of tokens. The position embedding values are already computed and cached for you.
#
# *NOTE:* Dropout is normally applied to the output of this module during training, but we have omitted it for clarity.
# + id="i8cAATbDAiA4" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="76b25be7-6445-429d-9138-09d937cf20a0"
#@title Video : Positional Encoding
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="FoRWkEAJDtg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + id="elotwfaIIM1o" colab={"base_uri": "https://localhost:8080/", "height": 612} outputId="72e7f72a-bfbd-403e-e257-e517da9299d4"
class PositionalEncoder(nn.Module):
def __init__(self, embed_dim, max_len=1000):
super().__init__()
self.position_embedding = torch.zeros((1, max_len, embed_dim))
i = torch.arange(max_len, dtype=torch.float32).reshape(-1, 1)
j2 = torch.arange(0, embed_dim, step=2, dtype=torch.float32)
x = i / torch.pow(10000, j2 / embed_dim)
self.position_embedding[..., 0::2] = torch.sin(x)
self.position_embedding[..., 1::2] = torch.cos(x)
def forward(self, x):
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("PositionalEncoder")
####################################################################
# Add the cached positional encodings to the input
x_plus_p = x + self.position_embedding[:, :x.shape[1]]
return x_plus_p
# Uncomment below to test your module
n_tokens, embed_dim = 10, 4
pos_enc = PositionalEncoder(embed_dim)
p = pos_enc(torch.zeros((1, n_tokens, embed_dim)))
plt.imshow(p.squeeze())
p
# + [markdown] id="T34SUKdayCvt"
# You should see a plot visualizing the different values, as well as the actual positional output values:
# ```
# tensor([[[ 0.0000, 1.0000, 0.0000, 1.0000],
# [ 0.8415, 0.5403, 0.0100, 0.9999],
# [ 0.9093, -0.4161, 0.0200, 0.9998],
# [ 0.1411, -0.9900, 0.0300, 0.9996],
# [-0.7568, -0.6536, 0.0400, 0.9992],
# [-0.9589, 0.2837, 0.0500, 0.9988],
# [-0.2794, 0.9602, 0.0600, 0.9982],
# [ 0.6570, 0.7539, 0.0699, 0.9976],
# [ 0.9894, -0.1455, 0.0799, 0.9968],
# [ 0.4121, -0.9111, 0.0899, 0.9960]]])
# ```
# [*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W10_NLP/solutions/W10_Tutorial2_Solution_Ex03.py)
# + [markdown] id="4N7_j4uihthC"
# ### The Encoder
#
# We now have almost everything we need to assemble the full Transformer network. There are just two more modules we need to quickly discuss, and then we will get to putting them all together.
#
# Transformer architecture for reference:
# <div>
# <img src="https://d2l.ai/_images/transformer.svg" width="275"/>
# </div>
#
# First, there is the residual layer norm that appears after every other component. In all cases, this takes the output of the previous component, sums it with the input to that component (the residual connection), and then normalizes the result across the layer.
#
# Second is the positionwise feed forward network that appears after the attention components. It is a two layer fully connected module with a ReLU activation in between.
#
# These are provided below. Note that dropout would normally be applied in various places in these modules during training, but we have omitted it for clarity.
# + id="YAF3g4h51_dA"
class ResidualNorm(nn.Module):
def __init__(self, embed_dim):
super().__init__()
self.norm = nn.LayerNorm(embed_dim)
def forward(self, x, residual):
return self.norm(x + residual)
class Feedforward(nn.Module):
def __init__(self, embed_dim, hidden_dim):
super().__init__()
self.fc1 = nn.Linear(embed_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, embed_dim)
def forward(self, x):
return self.fc2(F.relu(self.fc1(x)))
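A quick standalone check of `ResidualNorm`'s behavior (using `nn.LayerNorm` directly, as the module does): after the residual sum, each token's features are normalized to zero mean and roughly unit variance across the embedding dimension.

```python
import torch
import torch.nn as nn

torch.manual_seed(522)
norm = nn.LayerNorm(4)
x = torch.randn(1, 3, 4)         # a sublayer's output
residual = torch.randn(1, 3, 4)  # that sublayer's input (the residual connection)
out = norm(x + residual)

# normalization happens per token, across the 4 embedding features
print(out.mean(dim=-1))                 # ~0 for every token
print(out.var(dim=-1, unbiased=False))  # ~1 for every token
```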
# + [markdown] id="0DSh0XVD1_Nl"
# Now that we have all the modules we need, we can begin assembling the bigger modules. First we will look at the Encoder Block. The actual encoder will be made up of some number of these stacked together.
# + [markdown] id="kkOaVXrnU-43"
# #### Exercise 4: Encoder Block Module
# In this exercise you will create the forward method of the PyTorch module representing the Encoder Block of the Transformer. The Encoder Block has the following architecture:
# 1. a multi-head attention module using self-attention
# 2. 1st residual layer norm
# 3. feed-forward model
# 4. 2nd residual layer norm
#
# + id="NwIxWxc3S7CT" colab={"base_uri": "https://localhost:8080/"} outputId="ef7f523c-124c-45cb-c04b-78d8a9f5bad6"
class EncoderBlock(nn.Module):
def __init__(self, n_heads, embed_dim, hidden_dim):
super().__init__()
self.attention = MultiHeadAttention(n_heads, embed_dim)
self.norm1 = ResidualNorm(embed_dim)
self.feedforward = Feedforward(embed_dim, hidden_dim)
self.norm2 = ResidualNorm(embed_dim)
def forward(self, src_tokens, src_mask):
"""
Args:
src_tokens (n_batch, n_tokens, embed_dim): the source sequence
src_mask (n_batch, n_tokens): binary mask over the source
Returns:
(n_batch, n_tokens, embed_dim): the encoder state
"""
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("EncoderBlock")
####################################################################
# First compute self-attention on the source tokens by passing them in
# as the queries, keys, and values to the attention module.
self_attention = self.attention(src_tokens,src_tokens,src_tokens, src_mask)
# Next compute the norm of the self-attention result with a residual
# connection from the source tokens
normed_attention = self.norm1(self_attention, src_tokens)
# Pass the normed attention result through the feedforward component
ff_out = self.feedforward(normed_attention)
# Finally compute the norm of the feedforward output with a residual
# connection from the normed attention output
out = self.norm2(ff_out, normed_attention)
return out
# Uncomment below to test your module
torch.manual_seed(522)
n_heads, batch_size, n_tokens, embed_dim, hidden_dim = 2, 1, 3, 4, 8
tokens = torch.normal(0, 1, (batch_size, n_tokens, embed_dim))
mask = torch.ones((batch_size, n_tokens))
encoder = EncoderBlock(n_heads, embed_dim, hidden_dim)
encoder(tokens, mask)
# + [markdown] id="Z-q1otdryMfO"
# If done correctly, you should see something like the following (numbers may vary but shape should be the same):
# ```
# tensor([[[ 0.0051, 0.0022, 1.4105, -1.4179],
# [-0.7053, 0.9854, 0.9762, -1.2564],
# [-0.4003, 0.7551, 1.0888, -1.4436]]],
# grad_fn=<NativeLayerNormBackward>)
# ```
#
# [*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W10_NLP/solutions/W10_Tutorial2_Solution_Ex04.py)
# + [markdown] id="7RaAaNO3DgMa"
# Now that we have our Encoder Block, we can chain these together in a stack to get the full Encoder module. We will include the embedding layer and positional encoding step of the source tokens here as well. The input to this module then will be a tensor of a batch of token IDs and corresponding mask.
#
# For instance, if our entire corpus were the English sentence `Cat sat on the mat` and we tokenized by word, our vocab size would be 5, as there are 5 unique words. Converting this sentence to IDs would give `[[0,1,2,3,4]]`.
#
# The code for the Encoder module is provided below.
# + id="HRgib6BjDuIo" colab={"base_uri": "https://localhost:8080/"} outputId="bf60286c-fc4c-4ce5-a8cc-4f91c98544ed"
class Encoder(nn.Module):
def __init__(self, vocab_size, embed_dim, hidden_dim, n_heads, n_blocks):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_dim)
self.positional_encoding = PositionalEncoder(embed_dim)
self.encoder_blocks = nn.ModuleList([
EncoderBlock(n_heads, embed_dim, hidden_dim)
for _ in range(n_blocks)])
def forward(self, src_tokens, src_mask):
x = self.embedding(src_tokens)
x = self.positional_encoding(x)
for block in self.encoder_blocks:
x = block(x, src_mask)
return x
vocab_size = 5
n_blocks, n_heads, batch_size, embed_dim, hidden_dim = 10, 2, 1, 4, 8
enc = Encoder(vocab_size, embed_dim, hidden_dim, n_heads, n_blocks)
src_tokens = torch.IntTensor([[0,1,2,3,4]])
src_mask = torch.IntTensor([[1,1,1,1,1]])
enc(src_tokens, src_mask)
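The embedding step inside the Encoder can also be sketched on its own (a standalone illustration, not part of the exercises): `nn.Embedding` simply maps each integer ID to a learned dense vector.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 5, 4
embedding = nn.Embedding(vocab_size, embed_dim)

src_tokens = torch.LongTensor([[0, 1, 2, 3, 4]])  # "Cat sat on the mat" as IDs
embedded = embedding(src_tokens)
print(embedded.shape)  # torch.Size([1, 5, 4]): batch of 1, 5 tokens, 4-dim vectors

# the same ID always yields the same vector; position information is added
# separately, by the positional encoder
assert torch.equal(embedding(torch.LongTensor([3]))[0], embedded[0, 3])
```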
# + [markdown] id="ZgRqX5_c8nbm"
# ### The Decoder
#
# Like the encoder, the decoder is made up of a stack of repeating Decoder Blocks. Decoder Blocks are similar to Encoder Blocks, but add a second multi-head attention component that does not use self-attention: it takes its queries from the output of the decoder's self-attention component, and its keys and values from the encoder's output.
# + [markdown] id="7GqPoroY9WEv"
# #### Exercise 5: Decoder Block Module
# In this exercise you will create the forward method of the PyTorch module representing the Decoder Block of the Transformer. The Decoder Block has the following architecture:
# 1. a multi-head attention using self-attention
# 2. 1st residual layer norm
# 3. a 2nd multi-head attention that incorporates the encoder output
# 4. 2nd residual layer norm
# 5. feed-forward model
# 6. 3rd residual layer norm
# + id="J2C7z-h4Awqt" colab={"base_uri": "https://localhost:8080/"} outputId="0f66a0d9-0974-48b2-b74c-bfdf9f252568"
class DecoderBlock(nn.Module):
def __init__(self, n_heads, embed_dim, hidden_dim):
super().__init__()
self.self_attention = MultiHeadAttention(n_heads, embed_dim)
self.norm1 = ResidualNorm(embed_dim)
self.encoder_attention = MultiHeadAttention(n_heads, embed_dim)
self.norm2 = ResidualNorm(embed_dim)
self.feedforward = Feedforward(embed_dim, hidden_dim)
self.norm3 = ResidualNorm(embed_dim)
def forward(self, tgt_tokens, tgt_mask, encoder_state, src_mask):
"""
Args:
tgt_tokens (n_batch, n_tokens, embed_dim): the target sequence
tgt_mask (n_batch, n_tokens): binary mask over the target tokens
encoder_state (n_batch, n_tokens, embed_dim): the output of the encoder pass
src_mask (n_batch, n_tokens): binary mask over the source tokens
Returns:
(n_batch, n_tokens, embed_dim): the decoder state
"""
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
# raise NotImplementedError("DecoderBlock")
####################################################################
# First compute self-attention on the target tokens by passing them in
# as the queries, keys, and values to the attention module along with the
# target mask.
self_attention = self.self_attention(tgt_tokens, tgt_tokens, tgt_tokens, tgt_mask)
# Next compute the norm of the self-attention result with a residual
# connection from the target tokens
normed_self_attention = self.norm1(self_attention, tgt_tokens)
# Compute the encoder attention by using the normed self-attention output as
# the queries and the encoder state as the keys and values along with the
# source mask.
encoder_attention = self.encoder_attention(normed_self_attention, encoder_state, encoder_state, src_mask)
# Next compute the norm of the encoder attention result with a residual
# connection from the normed self-attention
normed_encoder_attention = self.norm2(encoder_attention, normed_self_attention)
# Pass the normed encoder attention result through the feedforward component
ff_out = self.feedforward(normed_encoder_attention)
# Finally compute the norm of the feedforward output with a residual
# connection from the normed attention output
out = self.norm3(ff_out, normed_encoder_attention)
return out
# Uncomment below to test your module
torch.manual_seed(522)
n_heads, batch_size, n_tokens, embed_dim, hidden_dim = 2, 1, 3, 4, 8
tokens = torch.normal(0, 1, (batch_size, n_tokens, embed_dim))
src_mask = torch.ones((batch_size, n_tokens))
tgt_mask = torch.ones((batch_size, n_tokens))
encoder = EncoderBlock(n_heads, embed_dim, hidden_dim)
decoder = DecoderBlock(n_heads, embed_dim, hidden_dim)
encoder_state = encoder(tokens, src_mask)
decoder(tokens, tgt_mask, encoder_state, src_mask)
# + [markdown] id="wCyIl3NnyQ-V"
# If done correctly, you should see something like the following (numbers may vary but shape should be the same):
# ```
# tensor([[[ 1.0841, 0.3174, 0.2326, -1.6340],
# [ 0.4667, 1.1922, -0.1277, -1.5312],
# [ 0.6861, 0.9347, 0.0088, -1.6296]]],
# grad_fn=<NativeLayerNormBackward>)
# ```
# [*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W10_NLP/solutions/W10_Tutorial2_Solution_Ex05.py)
# + [markdown] id="TmLB6F9QU17z"
# The Decoder module ends up just the same as the Encoder module with one key difference: the forward method needs to also accept the output of the encoder as well as the source token mask.
#
# For instance, let's assume we are doing a translation task and want to translate the English `Cat sat on the mat` into the French `Chat assis sur le tapis`. Our target vocab size is also 5, and the sentence would similarly be converted into IDs as `[[0,1,2,3,4]]`.
#
# The code for the Decoder module is presented below.
# + id="Ddhz5rGeUtvT" colab={"base_uri": "https://localhost:8080/"}
class Decoder(nn.Module):
def __init__(self, vocab_size, embed_dim, hidden_dim, n_heads, n_blocks):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_dim)
self.positional_encoding = PositionalEncoder(embed_dim)
self.decoder_blocks = nn.ModuleList([
DecoderBlock(n_heads, embed_dim, hidden_dim)
for _ in range(n_blocks)])
def forward(self, tgt_tokens, tgt_mask, encoder_state, src_mask):
x = self.embedding(tgt_tokens)
x = self.positional_encoding(x)
for block in self.decoder_blocks:
x = block(x, tgt_mask, encoder_state, src_mask)
return x
vocab_size = 5
n_blocks, n_heads, batch_size, embed_dim, hidden_dim = 10, 2, 1, 4, 8
tgt_tokens = torch.IntTensor([[0,1,2,3,4]])
tgt_mask = src_mask = torch.IntTensor([[1,1,1,1,1]])
enc_state = torch.randn((1,5,4))
dec = Decoder(vocab_size, embed_dim, hidden_dim, n_heads, n_blocks)
dec(tgt_tokens, tgt_mask, enc_state, src_mask)
# + [markdown] id="Vl3MHxUOCrt0"
# ### The Full Transformer Model
#
# We can now put the Encoder and Decoder together to produce the full Transformer model.
#
#
# + [markdown] id="vZjirF47WdHS"
# #### Exercise 6: Transformer Module
#
# In the last exercise for this section you will implement the forward method of the full Transformer module. First you will apply the source tokens and mask to the Encoder to get its output, then use that along with the target tokens and mask to produce the Decoder output. Finally we run the Decoder output through a linear layer to transform the embeddings back into vocab ID scores in order to determine the actual next word prediction.
# + id="D7A75izaW34X" colab={"base_uri": "https://localhost:8080/"} outputId="a1a21e98-e404-4cf4-ec7f-032867102d33"
class Transformer(nn.Module):
def __init__(self, src_vocab_size, tgt_vocab_size, embed_dim, hidden_dim, n_heads, n_blocks):
super().__init__()
self.encoder = Encoder(src_vocab_size, embed_dim, hidden_dim, n_heads, n_blocks)
self.decoder = Decoder(tgt_vocab_size, embed_dim, hidden_dim, n_heads, n_blocks)
self.out = nn.Linear(embed_dim, tgt_vocab_size)
def forward(self, src_tokens, src_mask, tgt_tokens, tgt_mask):
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
        # raise NotImplementedError("Transformer")
####################################################################
# Compute the encoder output state from the source tokens and mask
encoder_state = self.encoder(src_tokens, src_mask)
# Compute the decoder output state from the target tokens and mask as well
# as the encoder state and source mask
decoder_state = self.decoder(tgt_tokens, tgt_mask, encoder_state, src_mask)
# Compute the vocab scores by passing the decoder state through the output
# linear layer
out = self.out(decoder_state)
return out
# Uncomment below to test your module
torch.manual_seed(522)
src_vocab_size = tgt_vocab_size = 5
n_blocks, n_heads, batch_size, embed_dim, hidden_dim = 10, 2, 1, 4, 8
src_tokens = tgt_tokens = torch.IntTensor([[0,1,2,3,4]])
src_mask = tgt_mask = torch.IntTensor([[1,1,1,1,1]])
transformer = Transformer(src_vocab_size, tgt_vocab_size, embed_dim, hidden_dim, n_heads, n_blocks)
transformer(src_tokens, src_mask, tgt_tokens, tgt_mask)
# + [markdown] id="24ZGB98DbdPW"
# If done correctly, you should see something like the following (numbers may vary but shape should be the same):
# ```
# tensor([[[-0.1359, 0.5821, -0.5340, -0.7582, 0.0687],
# [ 0.0085, 0.1495, -0.1809, -0.9419, -0.4447],
# [-0.5151, 0.5056, 0.8117, 0.0047, -0.6726],
# [-0.1393, -0.2927, 0.9524, -0.5759, -1.3004],
# [-0.4090, 1.4626, 0.2387, -0.1716, -0.2155]]],
# grad_fn=<AddBackward0>)
# ```
#
# [*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W10_NLP/solutions/W10_Tutorial2_Solution_Ex06.py)
# + cellView="form" id="SblV7fWsC4jx" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="0ad11f46-a150-4a96-d71b-256c4387dfc3"
#@title Video : Transformer Architecture
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="_sKZpAptIZk", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + [markdown] id="Kz0tL-xaBLkv"
# ### Summary
# We've covered all the building blocks that make up the Transformer network architecture from the attention mechanism up to the fully combined encoder-decoder setup. The module versions presented here were often simplified in some ways and made more verbose in others to emphasize what each component is doing.
# + [markdown] id="guMptHSvoaIz"
# *Estimated time: 95 minutes since start*
# + [markdown] id="G31Vd2hVzuQt"
# ---
# ##Section 2: BERT and friends
# + id="_4b-wW01AUQ8" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="62f42df0-f177-4e86-ffb7-9459ceeb1645"
#@title Video : Bert and Friends
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="gEkmPb0140w", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + [markdown] id="IwJPWkxHj8a8"
# ---
# ## Section 3: BERT
#
# BERT, or Bidirectional Encoder Representations from Transformers, is a Transformer-based machine learning technique for NLP pre-training developed by Google. The original English BERT has two models:
#
# 1. BERT$_{BASE}$: $12$ encoders with $12$ bidirectional self-attention heads
# 2. BERT$_{LARGE}$: $24$ encoders with $24$ bidirectional self-attention heads
#
# Both models are pre-trained with unlabeled data extracted from BooksCorpus ($800$M words) and Wikipedia ($2500$M words). Importantly, unlike context-free models like GloVe or word2vec, BERT takes context into account for each occurrence of a given word. For instance, whereas "running" has the same word2vec vector in both "He is running a company" and "He is running a marathon", BERT provides a contextualized embedding that differs according to the sentence.
#
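# + [markdown]
# The contrast between context-free and contextual embeddings can be sketched with a
# toy example (NumPy only; the vectors and the uniform "attention-style" averaging are
# made up for illustration, not BERT's actual mechanism): a static lookup gives
# "running" the same vector in both sentences, while mixing in even a crude sentence
# average already yields different vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy static embedding table: each word has exactly one vector.
vocab = ["he", "is", "running", "a", "company", "marathon"]
table = {w: rng.normal(size=4) for w in vocab}

sent1 = ["he", "is", "running", "a", "company"]
sent2 = ["he", "is", "running", "a", "marathon"]

def static_vec(sentence, word):
    # Context-free lookup: ignores the sentence entirely.
    return table[word]

def contextual_vec(sentence, word):
    # Crude stand-in for self-attention: a uniform average over the
    # sentence, mixed with the word's own vector.
    ctx = np.mean([table[w] for w in sentence], axis=0)
    return 0.5 * table[word] + 0.5 * ctx

# Static: identical in both sentences.
s1 = static_vec(sent1, "running")
s2 = static_vec(sent2, "running")
print(np.allclose(s1, s2))  # True

# Contextual: differs because "company" vs "marathon" changes the mix.
c1 = contextual_vec(sent1, "running")
c2 = contextual_vec(sent2, "running")
print(np.allclose(c1, c2))  # False
```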
# + id="lkGd01-Wsthf" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="8aaf5410-8c37-46be-820f-ba5d3abe60cd"
#@title Video : Using BERT
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="sFQGoswoaeI", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + [markdown] id="kRmL_Thxoi46"
# *Estimated time: 105 minutes since start*
# + [markdown] id="jlRZjEpPtMAm"
# ---
# ## Section 4: RoBERTa
# + [markdown] id="CJ2X4VP0ssNE"
# As the name suggests, RoBERTa builds on BERT, modifying key hyperparameters. It removes the next-sentence pretraining objective and trains with much larger mini-batches and learning rates.
#
# Spend some time playing with RoBERTa natural language inference at https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+I+lost+an+animal.+. Additionally, spend some time looking at the other model examples at https://github.com/huggingface/transformers.
#
#
# RoBERTa, introduced [here](https://arxiv.org/abs/1907.11692), stands for "A Robustly Optimized BERT Pretraining Approach". RoBERTa builds on BERT's language masking strategy, where the system learns to predict intentionally hidden sections of text within otherwise unannotated language samples. It modifies key hyperparameters in BERT, removing BERT's next-sentence pretraining objective. This, alongside the larger mini-batches and learning rates, allows RoBERTa to improve on the masked language modeling objective compared with BERT and leads to better downstream task performance. That these training changes alone cause a substantial increase in performance suggests that BERT was relatively undertrained.
#
#
# ## TweetEVAL Sentiment Analysis
#
# As a worked example of sentiment analysis, we use a network that has already been fine-tuned for the task: a RoBERTa model trained on ~58 million tweets and fine-tuned for sentiment analysis with the [TweetEval](https://arxiv.org/pdf/2010.12421.pdf) benchmark. We use Hugging Face to implement this.
#
#
# First, we preprocess the text and download our fine-tuned model. Fine-tuning means that this model started from the pretraining described in the RoBERTa paper and was then trained further on a downstream sentiment analysis task.
#
# + id="X7VdwufAsC8g" colab={"base_uri": "https://localhost:8080/", "height": 268, "referenced_widgets": ["d1491920c4a64a39939100779f4e0146", "48e8f6463667407593b4a6631e4b79bd", "97aa12f5d1bf4bffb0cf7c6d2641a967", "732c446928444d5ba3ebd95fdc9c2c61", "1774369d9dc842468619f3d9c2087540", "83ae9ac4dd4e44bea8907019d46f46f0", "c1d3ac2fdd8e4ef6b2dd0eafe7af6bef", "c1a2573ddcb24644ae0f4ebec39120f7", "<KEY>", "<KEY>", "<KEY>", "47ace26c135e4a588d3dcd889e8386bc", "<KEY>", "00be9ddc2334419ea7c02e6d823689c7", "<KEY>", "dc8201ebeeba4f2a83264249b4e82f1e", "<KEY>", "fa85b73a27494b7a97c3f8afa2651613", "<KEY>", "<KEY>", "<KEY>", "06af4187ebc344469793a1a9e9c809e5", "<KEY>", "<KEY>", "185d6ce4cc7b446b8e855198d9aa3cc3", "<KEY>", "<KEY>", "2423cff09b95498aa111307e0a45dbee", "8da094ecf4514a2990b7a56101aca457", "5edd60354b084a1e9c6c9384a6b9c411", "<KEY>", "1342776f95fb412b9a1b71775824766b", "<KEY>", "<KEY>", "f9c9f1d8e1704e658fa4114bb59ce0c8", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "5735caf8d6cc49c2af5ae8ce065c6c60"]} outputId="2e4ccd47-823a-4c00-d581-cdcb66cce5a5"
import csv
import urllib.request
import numpy as np
from scipy.special import softmax
from transformers import AutoTokenizer, AutoModelForSequenceClassification
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
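# + [markdown]
# As a quick sanity check of the `preprocess` normalization above (the function is
# restated here so the snippet runs standalone): user mentions collapse to `@user`
# and links collapse to `http`.

```python
def preprocess(text):
    # Same normalization as above: mask user handles and URLs.
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

print(preprocess("@john loved it, see https://example.com"))
# @user loved it, see http
```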
# + [markdown] id="9BO-LGbisIFt"
# Now that we have our pytorch model ready, let's play with some text. As seen below, we take some input text (feel free to change this to whatever text you want to analyze). The text is preprocessed using our function above, then tokenized using the Hugging Face AutoTokenizer. Then, we sort the sentiments by their relative probabilities, which is what we see at the end.
# + id="UNHjqYEMsJlp" colab={"base_uri": "https://localhost:8080/"} outputId="3fe7431d-e1d4-4216-ee29-8d5bd6ee2b49"
text = "I'm sad :("
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
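# + [markdown]
# The ranking step above is just a softmax followed by a descending argsort. Here it
# is isolated on made-up logits so the mechanics are clear (the label names and
# scores are illustrative, not the model's actual output):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x))
    return e / e.sum()

labels = ["negative", "neutral", "positive"]
logits = np.array([2.0, 0.5, -1.0])  # made-up raw scores

scores = softmax(logits)
ranking = np.argsort(scores)[::-1]  # indices from highest to lowest score

for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}) {labels[idx]} {scores[idx]:.4f}")
```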
# + [markdown] id="bP1rPKAOooDd"
# *Estimated time: 125 minutes since start*
# + [markdown] id="WEgqsbk5zy0J"
# ---
# ## Section 5: BERT variations (DistilBERT)
#
# https://arxiv.org/abs/1910.01108
#
#
# DistilBERT, as the name suggests, is a "distilled" version of BERT: smaller, faster, cheaper, and lighter. Very large models are often infeasible to use, since they require a lot of compute time and resources; in particular, we often need to run models on smaller devices, without many large GPUs available for training. DistilBERT is a pre-trained general-purpose language representation model, which we can then fine-tune to achieve good performance on a number of tasks.
#
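# + [markdown]
# The "distillation" in DistilBERT refers to knowledge distillation: the student is
# trained to match the teacher's softened output distribution. A minimal NumPy
# sketch of the soft-target loss follows (the logits and temperature are
# illustrative; this shows only the soft cross-entropy term of the recipe):

```python
import numpy as np

def softmax(x, T=1.0):
    # Temperature-scaled softmax: higher T flattens the distribution.
    e = np.exp((x - np.max(x)) / T)
    return e / e.sum()

teacher_logits = np.array([4.0, 1.0, 0.5])   # made-up teacher outputs
student_logits = np.array([3.0, 1.5, 0.2])   # made-up student outputs

T = 2.0
p_teacher = softmax(teacher_logits, T)  # soft targets
p_student = softmax(student_logits, T)

# Soft-target loss: cross-entropy of the student against the teacher's
# softened distribution (scaled by T^2, as in the original recipe).
loss = -T**2 * np.sum(p_teacher * np.log(p_student))
print(round(float(loss), 4))
```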
# + [markdown] id="6vJP1ZEaDXbO"
# Let's use DistilBERT to write a small question answering system. Question answering systems automatically respond to a given query. The input will be framed with context and the question. For example:
#
#
#
# ---
#
#
# Context :
# The US has passed the peak on new coronavirus cases, President <NAME> said and predicted that some states would reopen this month. The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world. \\
# Question:
# What was President <NAME>'s prediction?
#
#
# ---
# Answer:
# some states would reopen this month.
#
#
# ---
#
#
#
# + id="vA4C_ZpPD7PD" colab={"base_uri": "https://localhost:8080/", "height": 395, "referenced_widgets": ["62a4175c043e417598d6b5e8e18b42ab", "<KEY>", "e085a48fe0864e3d9cb4a40da0a36df4", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "3c7dae36b1e94de196165a36fbda8ec5", "<KEY>", "249696df57ed4cda903f3b0404080115", "<KEY>", "f1f346bedacd47b8a8fff93bc0d20ecc", "3a54e0c320e84e87b28c67d890d9606e", "651ad0961d93479fa5dcd2fe2d5a1995", "04e3f499326a4f6482387e6603dfaec7", "9971c3a92d684af187a4c80ee0ccb6a7", "<KEY>", "<KEY>", "67818d297ef7484aa1f7a73f37c4e5c8", "81928681157249078559b4325318a483", "fa40d83ba13b45a18f9009508024feab", "99f4cea3656c4264a3af4272ba96c3cc", "3a0586a629c74e66ad0f11b093d8ae48", "<KEY>", "<KEY>", "afc59a7a9cc34ea099e6f268cd40cebb", "51af566aa16f409fa0398c2afa726d26", "ab90593ee87e468dbc9498e1af8c2c11", "<KEY>", "<KEY>", "fadbb1754bdd490696666fe83d656c19", "98ded11df004423aaef05c3a7658bd9e", "<KEY>", "<KEY>", "34e1e2aefed8410a9cb88196ab599e95", "eacb7ae7cf874776ad049ff0a19a5440", "<KEY>", "<KEY>", "08c7b8c2da2f47f489edd322f38e4e79"]} outputId="d63cc55b-9b0f-486f-8be3-550e46ddff3d"
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased',return_token_type_ids = True)
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad', return_dict=False)
context = "The US has passed the peak on new coronavirus cases, " \
"President <NAME> said and predicted that some states would reopen this month. " \
"The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world."
question = "What was President <NAME>'s prediction?"
encoding = tokenizer.encode_plus(question, context)
input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]
start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))
ans_tokens = input_ids[torch.argmax(start_scores) : torch.argmax(end_scores)+1]
answer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens , skip_special_tokens=True)
print ("\nQuestion: ",question)
print ("\nAnswer Tokens: ")
print (answer_tokens)
answer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens)
print ("\nAnswer : ",answer_tokens_to_string)
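# + [markdown]
# The span selection above boils down to: take the token position with the highest
# start score, the position with the highest end score, and slice the tokens in
# between (inclusive). With toy tokens and scores (a real model outputs one score
# per wordpiece):

```python
import numpy as np

tokens = ["[CLS]", "what", "was", "the", "prediction", "[SEP]",
          "some", "states", "would", "reopen", "[SEP]"]
start_scores = np.array([0.1, 0, 0, 0, 0, 0, 5.0, 1.0, 0, 0, 0])
end_scores   = np.array([0.1, 0, 0, 0, 0, 0, 0, 0, 1.0, 5.0, 0])

start = int(np.argmax(start_scores))  # index of best start token
end = int(np.argmax(end_scores))      # index of best end token
answer = tokens[start : end + 1]      # inclusive span
print(answer)  # ['some', 'states', 'would', 'reopen']
```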
# + [markdown] id="kkLAUmkbEyeK"
# Cool! Go ahead and try your own questions and see how DistilBERT answers them! Let's try multiple questions at once (in a batch).
# + id="Rxjk7g7iE8lT" colab={"base_uri": "https://localhost:8080/"} outputId="ad80229c-a874-477b-f854-c4f94e23c3df"
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased',return_token_type_ids = True)
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad', return_dict = False)
context = "The US has passed the peak on new coronavirus cases, " \
"President <NAME> said and predicted that some states would reopen this month." \
"The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, " \
"the highest for any country in the world."
print ("\n\nContext : ",context)
questions = ["What was President <NAME>'s prediction?",
"How many deaths have been reported from the virus?",
"How many cases have been reported in the United States?"]
question_context_for_batch = []
for question in questions :
question_context_for_batch.append((question, context))
encoding = tokenizer.batch_encode_plus(question_context_for_batch, padding=True, return_tensors="pt")
input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]
start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
for index,(start_score,end_score,input_id) in enumerate(zip(start_scores,end_scores,input_ids)):
max_startscore = torch.argmax(start_score)
max_endscore = torch.argmax(end_score)
ans_tokens = input_ids[index][max_startscore: max_endscore + 1]
answer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens, skip_special_tokens=True)
answer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens)
print ("\nQuestion: ",questions[index])
print ("Answer: ", answer_tokens_to_string)
# + [markdown] id="qoyEwsPPozXz"
# *Estimated time: 130 minutes since start*
# + [markdown] id="yUmSToDB3bFH"
# ---
# ## Section 6: Explaining language models
#
# + id="A2EiG7Rs3kvb" cellView="form" colab={"base_uri": "https://localhost:8080/"} outputId="c1d3eb57-f3bc-4c58-8c38-951a5fa37c1e"
#@title Video : Explaining language models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="G38ZZNnXaQs", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
# + [markdown] id="_aKo3ReY_9sT"
# #### Questions
# + id="5mkr7X0a__ko" cellView="form"
#@markdown Why would you expect part of speech tagging to be done closer to the input, and co-reference to be done more deeply in the network?
#report to Airtable
NLP_network_structure = '' #@param {type:"string"}
# + id="PzJOIertAVq9" cellView="form"
#@markdown Why are byte pair encodings problematic for using "feature importance" to understand what words "cause" a model to make a given prediction?
BPE_interpretation = 'BPEs would place unusual importance on rare words' #@param {type:"string"}
# + id="C9cTkXEsAuV4" cellView="form"
#@markdown Attention turns out not to be a great way of finding the most important words used by a model. Why not? (Hint: where might attention focus on the sentence: "The movie was long and boring."?)
interpreting_attention = "the mechanism would place more attention on \"long,\" which doesn't tell us much about the context or movie" #@param {type:"string"}
# + [markdown] id="SdrEjN3n31Wx"
# There are lots of tools out there to help visualize what's going on in NLP systems. If you want (this is not an assignment), play around with the demos at https://pair-code.github.io/lit/demos/.
#
# + [markdown] id="I9lghzopo3LF"
# *Estimated time: 140 minutes since start*
# + [markdown] id="52dVCGNMz9w-"
# ---
# ## Section 7: Bias in Embeddings
#
#
# + id="6AU4_g3Ddoli" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="7a08dbba-c252-4145-ba5a-61139baa94b0"
#@title Video : Bias in Embeddings
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="meUnCri_52c", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
import time
try: t0;
except NameError: t0=time.time()
video
# + [markdown] id="NfmYlSc_0BQz"
# You just saw how training on large amounts of historical text can introduce undesirable associations and outcomes. In this section we are going to explore this idea further as it pertains to coreference resolution.
#
# Coreference resolution is the NLP task of finding all the terms that refer to an entity in a passage of text e.g. what noun phrase does a pronoun refer to. This can be quite difficult, even for humans, if the passage is ambiguous enough.
#
# For example, in the sentence:
#
# `The customer asked to speak with the manager because he wanted to fix the billing error quickly.`
#
# what does `he` refer to? We can reasonably assume given the context that `he` refers to the customer. Furthermore, it shouldn't matter which pronoun (he/she/they) was in that spot, it should still refer back to the customer.
#
# However this is not the case with some models! For example, here is the output of Huggingface's [Neural Coreference model](https://github.com/huggingface/neuralcoref) when we use `she` as the pronoun:
#
# 
#
# You can see that `she` is scored against all detected noun phrases and gets the highest score with `the customer`. So far so good. Now let's try it with `he` instead:
#
# 
#
# The model has instead associated `he` with `the manager`, and quite strongly at that, even though that doesn't make sense contextually. As this is a neural-based model trained on historical data, one possibility is that there were many instances in the training data where `he` and `manager` were associated, enough for the model to be "distracted" by that signal over the rest of the sentence.
#
#
#
# As was mentioned in the video, many people are actively working toward both identifying and mitigating these undesirable behaviors and also educating researchers, practitioners, and the general public about these issues. For instance, the sample sentence used above was taken from the [Winogender Schemas](https://github.com/rudinger/winogender-schemas), a set of sample sentences to check the variance in outcomes when only a single pronoun is changed.
# + [markdown] id="Ftnc8GGUtL7O"
# #### Exercise 6: Explore Bias in Coreference Resolution Models
#
# Two different coreference resolution models that have nice online demos are from [Huggingface](https://huggingface.co/coref/) and [AllenNLP](https://demo.allennlp.org/coreference-resolution). In this exercise, you will explore a variety of sentences with these two tools and see how they compare. Try the following sentences in both and see how they handle the change in pronoun:
#
# `The doctor berated the nurse. He had come in late for the meeting.`
#
# `The doctor berated the nurse. She had come in late for the meeting.`
#
# + [markdown] id="ClZxGxhJBZa0"
# #### Questions
# + id="aUAhscXBBdMX" cellView="form"
#@markdown Did Huggingface get it right?
huggingface_bias = 'Did not get it right for the first sentence; did for the 2nd' #@param {type:"string"}
# + id="DAsS-ChpBuQv" cellView="form"
#@markdown Did Allen Institute get it right?
#report to Airtable
allenInst_bias = 'Did not get it right for the first sentence; did for the 2nd' #@param {type:"string"}
# + id="VKW9WhInB5GM" cellView="form"
#@markdown How might you "fine tune" Bert to reduce such errors?
#report to Airtable
fine_tune_away_bias = 'you can train on more gender-skewed data or set up a system to compare performance across paired sentences like the 2 above' #@param {type:"string"}
# + [markdown] id="gbgTqgB5rH_W"
# ---
# # Wrap up
#
# + id="98YXpSntrY6k" colab={"base_uri": "https://localhost:8080/", "height": 421} cellView="form" outputId="4ba37493-52df-4a55-d7d0-11e438c52289"
#@markdown #Run Cell to Show Airtable Form
#@markdown ##**Confirm your answers and then click "Submit"**
import time
import numpy as np
import urllib.parse
from IPython.display import IFrame
def prefill_form(src, fields: dict):
'''
src: the original src url to embed the form
fields: a dictionary of field:value pairs,
e.g. {"pennkey": my_pennkey, "location": my_location}
'''
prefill_fields = {}
for key in fields:
new_key = 'prefill_' + key
prefill_fields[new_key] = fields[key]
prefills = urllib.parse.urlencode(prefill_fields)
src = src + prefills
return src
#autofill time if it is not present
try: t0;
except NameError: t0 = time.time()
try: t1;
except NameError: t1 = time.time()
try: t2;
except NameError: t2 = time.time()
try: t3;
except NameError: t3 = time.time()
try: t4;
except NameError: t4 = time.time()
try: t5;
except NameError: t5 = time.time()
try: t6;
except NameError: t6 = time.time()
try: t7;
except NameError: t7 = time.time()
# autofill fields if they are not present
# a missing pennkey and pod will result in an Airtable warning
# which is easily fixed user-side.
try: my_pennkey;
except NameError: my_pennkey = ""
try: my_pod;
except NameError: my_pod = "Select"
try: NLP_network_structure;
except NameError: NLP_network_structure = ""
try: BPE_interpretation;
except NameError: BPE_interpretation = ""
try: interpreting_attention;
except NameError: interpreting_attention = ""
try: huggingface_bias;
except NameError: huggingface_bias = ""
try: allenInst_bias;
except NameError: allenInst_bias = ""
try: fine_tune_away_bias;
except NameError: fine_tune_away_bias = ""
times = np.array([t1,t2,t3,t4,t5,t6,t7])-t0
fields = {
"my_pennkey": my_pennkey,
"my_pod": my_pod,
"NLP_network_structure": NLP_network_structure,
"BPE_interpretation": BPE_interpretation,
"interpreting_attention": interpreting_attention,
"huggingface_bias": huggingface_bias,
"allenInst_bias": allenInst_bias,
"fine_tune_away_bias": fine_tune_away_bias,
"cumulative_times": times
}
src = "https://airtable.com/embed/shrfeQ4zBWMSZSheB?"
display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))
# + [markdown] id="oDE0MJbb5dLH"
# ## Feedback
# How could this session have been better? How happy are you in your group? How do you feel right now?
#
# Feel free to use the embeded form below or use this link:
# <a target="_blank" rel="noopener noreferrer" href="https://airtable.com/shrNSJ5ECXhNhsYss">https://airtable.com/shrNSJ5ECXhNhsYss</a>
# + id="IPPjyA-H5kLE" colab={"base_uri": "https://localhost:8080/"} outputId="3e171a75-3d82-4f01-d3ff-4cce52d5d9e2"
display(IFrame(src="https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red", width = 800, height = 400))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
"""
Created on 21-April-2020
@author <NAME>
"""
import argparse
import os
import sys
from multiprocessing import cpu_count
from pathlib import Path
import torch
from torch.utils.data import DataLoader
import fileutils as fs
# + [markdown] pycharm={"name": "#%% md\n"}
# ### This Jupyter Notebook Specific Setup
#
# The following configuration is meant only for running this Jupyter notebook. One may use _run_classification.py_ to run the same pipeline from the command line instead.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
running_as_notebook = False
root_dir = './'
try:
cfg = get_ipython().config
running_as_notebook = True
except NameError:
pass
if running_as_notebook:
    from collections import namedtuple
    cur_dir = os.getcwd()
    root_dir = '/'.join(cur_dir.split('/')[:-2])
args = {
'batch_size': 128,
'num_epochs': 15,
'train': True,
'pos_dataset': f'{root_dir}/results/positive_examples.pkl',
'neg_dataset': f'{root_dir}/results/negative_examples.pkl',
'test': False,
'test_dataset': f'{root_dir}/results/test_examples.pkl',
'saved_model': f'{root_dir}/results/saved_models/VarValueClassifierRNN_all_types_17-11-2020--19:06:51_0.89.pt',
'name': 'nalin',
'ablation': [] # Possible values --> 'value_as_one_hot', 'var', 'type', 'len', 'shape'
}
results_dir = f'{root_dir}/results'
token_embedding_path = f'{root_dir}/benchmark/python_embeddings.bin'
positive_examples_dir = f'{root_dir}/results/dynamic_analysis_outputs'
list_of_types_in_dataset_out_file = f'{root_dir}/results/list_of_types_in_dataset.json'
Args = namedtuple('Args', args)
args = Args(**args)
else:
from command_line_args import get_parsed_args
args = get_parsed_args(argparse=argparse)
positive_examples_dir = 'results/dynamic_analysis_outputs'
token_embedding_path = 'benchmark/python_embeddings.bin'
list_of_types_in_dataset_out_file = 'results/list_of_types_in_dataset.json'
results_dir = 'results'
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Dataset utilities
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
from dataset_utils.data_transformers.AblationTransformer import AblationTransformer
from dataset_utils.data_transformers.ResizeData import ResizeData
from dataset_utils.data_transformers.ValueToCharSequence import ValueToCharSequence
from dataset_utils.data_transformers.fastTextEmbeddingOfVarName import fastTextEmbeddingOfVarName
from dataset_utils.data_transformers.RepresentLen import RepresentLen
from dataset_utils.data_transformers.RepresentShape import RepresentShape
from dataset_utils.data_transformers.OneHotEncodingOfTypes import OneHotEncodingOfType
from dataset_utils.pre_process_dataset import process, write_types_and_frequencies
from read_dataset import get_training_val_dataset, get_test_dataset
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Models
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
from models.VarValueClassifierRNN import VarValueClassifierRNN
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Configurations
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
train, test = args.train, args.test
if not train and not test:
print('Either "training" or "testing" is required')
sys.exit(1)
batch_size = args.batch_size
num_epochs = args.num_epochs
max_num_of_chars_in_value = 100 # Number of characters in the value part of the assignment
print(f"-- Resizing the values to {max_num_of_chars_in_value} characters during training")
# You may specify your name
if args.name:
model_name_suffix = args.name
else:
model_name_suffix = 'Nalin'
model_name = f'RNNClassifier_{model_name_suffix}'
pos_dataset_file_path = args.pos_dataset
neg_dataset_file_path = args.neg_dataset
test_dataset_file_path = args.test_dataset
"""
There are three heuristics for generating negative examples:
1. use_dimension: refers to computing various properties on the positive examples and then using them to
generate the negative examples. (Code adapted from the initial code by MP)
2. random: only useful for cases when the data contains a single type (e.g. string). The approach simply randomizes the
values. The idea is to check whether certain identifiers such as URL are only assigned values having certain properties
3. weighted_random: This is the default strategy. Refer to the code where it is implemented for further details.
"""
heuristics_for_generating_negative_examples = ['random','weighted_random'][1]
# Types and the corresponding frequency in the dataset
"""
Pre-process dataset. This is a one-time task ==>
- Remove empty/malformed extracted data
- Create negative examples
- Create labels for the extracted data (label -> probability of buggy)
"""
if not test:
process(positive_examples_dir=positive_examples_dir,
positive_example_out_file_path=pos_dataset_file_path,
negative_example_out_file_path=neg_dataset_file_path,
test_example_out_file_path=test_dataset_file_path,
heuristics_for_generating_negative_examples=heuristics_for_generating_negative_examples)
write_types_and_frequencies(positive_example_out_file_path=pos_dataset_file_path,
list_of_types_in_dataset_out_file=list_of_types_in_dataset_out_file)
# Embeddings have been learned from ALL python files in the benchmark (~1M files). We could
# successfully extract assignments from some of these python files.
if not os.path.exists(token_embedding_path):
print(f'Could not read from {token_embedding_path}. \nNeed an embedding path to continue')
sys.exit(1)
test_examples_dir = 'results/test_examples'
saved_model_path = None
if args.test and args.saved_model:
saved_model_path = args.saved_model
elif args.test and not args.saved_model:
print("A saved model path is needed")
sys.exit(1)
embedding_dim = 0
features_to_ablate = args.ablation
# Workaround for debugging on a laptop. Change with the cpu_count of your machine if required for debugging data loading
# else leave it alone
if cpu_count() > 20:
num_workers_for_data_loading = cpu_count()
else:
num_workers_for_data_loading = 0
config = {"num_workers": num_workers_for_data_loading, "pin_memory": True}
device = torch.device(
'cuda:0' if torch.cuda.is_available() else 'cpu')
# Initialize model and model specific dataset data_transformers
print(f"\n{'-' * 20} Using model '{model_name}' {'-' * 20}")
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Data Transformations
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
resize_data = ResizeData(len_of_value=max_num_of_chars_in_value)
value_to_one_hot = ValueToCharSequence(
len_of_value=max_num_of_chars_in_value)
one_hot_encoding_of_type = OneHotEncodingOfType(max_types_to_select=10,
types_in_dataset_file_path=list_of_types_in_dataset_out_file) # We select only top 10 types
size_of_type_encoding = len(one_hot_encoding_of_type.one_hot_init)
var_name_fastText_embd = fastTextEmbeddingOfVarName(embedding_path=token_embedding_path)
embedding_dim = var_name_fastText_embd.embedding_dim
len_repr = RepresentLen()
shape_repr = RepresentShape()
data_transformations = [resize_data, # must be always the first transformation
var_name_fastText_embd,
value_to_one_hot,
one_hot_encoding_of_type,
len_repr,
shape_repr
]
model = VarValueClassifierRNN(embedding_dim=embedding_dim,
num_of_characters_in_alphabet=value_to_one_hot.nbs_chars,
model_name=model_name,
size_of_value=resize_data.len_of_value)
assert model is not None, "Initialize a model to run training/testing"
model.to(device)
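# + [markdown]
# The `ResizeData` + `ValueToCharSequence` pair above fixes every value to
# `max_num_of_chars_in_value` characters and then one-hot encodes each character.
# A hypothetical standalone sketch of that idea follows; the real transformers live
# in `dataset_utils`, and the alphabet, lowercasing, and right-padding choices here
# are assumptions for illustration only.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789._-+ "
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}

def value_to_one_hot(value, size=10):
    # Truncate or right-pad the value to a fixed length, then one-hot
    # encode each character; characters outside the alphabet map to all-zero rows.
    value = value[:size].ljust(size)
    mat = np.zeros((size, len(ALPHABET)), dtype=np.float32)
    for row, ch in enumerate(value.lower()):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:
            mat[row, idx] = 1.0
    return mat

m = value_to_one_hot("3.14", size=10)
print(m.shape)       # (10, 41)
print(int(m.sum()))  # 10: every padded position hit a known character
```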
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Ablation
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
if len(features_to_ablate):
ablation_transformer = AblationTransformer(features_to_ablate=features_to_ablate)
print(f"## Not using features --> {features_to_ablate} ##")
data_transformations.append(ablation_transformer)
# -
# ### Training
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
if train:
print(f"{'-' * 15} Reading dataset for training {'-' * 15}")
# Read the dataset
training_dataset, validation_dataset = get_training_val_dataset(
positive_examples_dataset_file_path=pos_dataset_file_path,
negative_examples_dataset_file_path=neg_dataset_file_path,
all_transformations=data_transformations,
nb_examples=-1)
train_data = DataLoader(
dataset=training_dataset, batch_size=batch_size, shuffle=True, drop_last=True, **config)
validation_data = DataLoader(
dataset=validation_dataset, batch_size=batch_size, shuffle=True, drop_last=True, **config)
model.run_epochs(training_data=train_data,
validation_data=validation_data, num_epochs=num_epochs, results_dir=results_dir)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Testing
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
if test:
print(f"{'-' * 15} Reading dataset for testing {'-' * 15}")
test_dataset = get_test_dataset(
test_examples_dir=test_examples_dir,
results_dir=results_dir,
all_transformations=data_transformations,
dataset_out_file=test_dataset_file_path)
batched_test_dataset = DataLoader(
dataset=test_dataset, batch_size=batch_size, shuffle=False, drop_last=False, **config)
model.load_model(path_to_saved_model=saved_model_path)
predictions = model.run_testing(data=batched_test_dataset)
test_data_with_predictions = test_dataset.data
test_data_with_predictions['predicted_p_buggy'] = predictions
    fs.create_dir_list_if_not_present([os.path.join(results_dir, 'prediction_results')])
predicted_outfile_path = os.path.join(results_dir,
f'prediction_results/{Path(test_dataset_file_path).stem}_predictions.pkl')
print(f"Writing to '{predicted_outfile_path}'")
test_data_with_predictions.sort_values('predicted_p_buggy', ascending=False, inplace=True)
test_data_with_predictions.reset_index(drop=True, inplace=True)
# print(
# f"\n Prediction results is follows: \n\n{test_data['predicted_p_buggy'].value_counts()}")
# test_data_with_predictions.to_csv(predicted_outfile_path)
test_data_with_predictions.to_pickle(path=predicted_outfile_path, compression='gzip')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# +
# df = pd.read_csv('creditcard.csv')
# +
# df.head()
# +
# df.shape
# +
# df['Class'].value_counts()
# +
#Converting to Pickle file
# +
# df.to_pickle('creditcard.pkl')
# -
df = pd.read_pickle('creditcard.pkl')
# +
# df = df.sample(n= 1000,random_state=42)
# ones = df[df['Class'] == 1].sample(10, random_state = 42)
# zeros = df[df['Class'] == 0].sample(1000, random_state = 42)
# df = pd.concat([zeros,ones])
# -
df.head()
df.describe()
df.Time.plot(kind='box')
df.Amount.plot(kind='box')
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
df.Time = scalar.fit_transform(np.array(df.Time).reshape(-1,1))
df.Amount = scalar.fit_transform(np.array(df.Amount).reshape(-1,1))
df.head()
df.describe()
df.V1.plot(kind='box')
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split as tts
from sklearn.metrics import confusion_matrix, classification_report, r2_score, roc_auc_score, accuracy_score, auc
X = df.drop(['Class'], axis=1)
y = df.Class
# Split Dataset into Training set and Validation set
X_train_test, X_val, y_train_test, y_val = tts(X, y, test_size = 0.3, random_state = 42)
# Apply GridSearch CV
log_Reg = LogisticRegression(solver='liblinear') # liblinear supports both the 'l1' and 'l2' penalties searched below
from sklearn.model_selection import GridSearchCV
param = {
'C': np.arange(0.001,10,0.1),
'penalty': ['l1','l2']
}
model = GridSearchCV(cv=10, estimator=log_Reg, param_grid=param)
import warnings
warnings.filterwarnings('ignore')
model.fit(X_train_test, y_train_test) # fit only on the training split so that X_val remains unseen
model.best_params_
best_model = model.best_estimator_
y_pred = best_model.predict(X_val)
print(classification_report(y_val,y_pred))
# +
#here the model has essentially learned only class 0 because of the heavy class imbalance; we could resample class 1 or reweight the classes and re-tune the model
# -
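# One hedged way to address the imbalance noted above, without fabricating rows, is to reweight the classes. The sketch below uses a synthetic imbalanced dataset (not the credit-card data) just to illustrate scikit-learn's `class_weight='balanced'` option; the sample sizes and seed are illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# synthetic 98%/2% imbalance, standing in for the credit-card class skew
X, y = make_classification(n_samples=2000, weights=[0.98, 0.02], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(class_weight='balanced', max_iter=1000).fit(X_tr, y_tr)

# reweighting typically trades some precision for better minority-class recall
print(recall_score(y_te, plain.predict(X_te)))
print(recall_score(y_te, weighted.predict(X_te)))
```

# With balanced weights, the decision boundary shifts so that the classifier predicts the minority class more often, rather than collapsing onto class 0.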
df.shape
i = '[{ "name":"John", "age":30, "car":"av" }]'
type(i)
| C15_Feature Engineering/Feature Engineering and Logistic Regression on CreditCard Dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''PIC16B'': conda)'
# language: python
# name: python3811jvsc74a57bd000ce6b323a8a0b9f3b69029fa338424bb9ad4dbfdf789891ecc2bf5c65882714
# ---
# # Agent-Based Modeling
#
# In this set of lectures, we'll study how to design agent-based models in Python.
#
# > An *agent-based model* (ABM) is a simulation model in which many individual entities (*agents*) interact with each other according to fixed rules.
#
# ABMs are often used for modeling a wide range of social and biological systems. In fact, you've already seen an example of an ABM: the SIR model of disease spread that we studied in the previous lecture is one. There, we relied on tools from NetworkX and various other familiar programming paradigms. We'll now explore the topic of agent-based modeling from a somewhat more systematic and flexible perspective.
#
# There exist a large number of dedicated software packages for agent-based modeling. In this course, we'll use a relatively recent package, called [Mesa](https://mesa.readthedocs.io/en/master/index.html), for agent-based modeling in Python. To install the software, run the following code in your terminal:
#
# ```
# conda activate PIC16B
# conda install -c conda-forge mesa
# ```
#
# # The Schelling Model of Racial Segregation
#
# In this set of lecture notes, we will implement the Schelling model of racial residential segregation. The Schelling model is a parable of how only *mild* individual biases can lead to highly segregated outcomes.
#
#
# In the Schelling Model, individuals of two types begin arranged randomly on a grid, which is often taken to represent a city. Not all grid squares are occupied. Here's an example starting configuration:
#
# <figure class="image" style="width:50%">
# <img src="https://raw.githubusercontent.com/PhilChodrow/PIC16B/master/_images/schelling-screencap.png" alt="">
# <figcaption><i>An example starting configuration in the Schelling model. Image credit: <a href = "https://ncase.me/polygons/"> Vi Hart and Nicky Case</a>.</i></figcaption>
# </figure>
#
# Here's how the model works:
#
# 1. At each timestep, agents look at their surroundings. An agent is **unhappy** if fewer than 1/3 of their neighbors have the same type, and is **happy** otherwise.
# 2. All **unhappy** agents pick a random empty spot and move there. All **happy** agents stay where they are.
#
# We run the model until all agents are happy. The fundamental result of the model is that, even though agents have only mild biases -- they simply prefer not to be outnumbered -- acting on their preferences can still lead to highly segregated outcomes, like this:
#
# <figure class="image" style="width:50%">
# <img src="https://raw.githubusercontent.com/PhilChodrow/PIC16B/master/_images/schelling-final.png" alt="">
# <figcaption><i>An example final configuration in the Schelling model. Image credit: <a href = "https://ncase.me/polygons/"> Vi Hart and Nicky Case</a>.</i></figcaption>
# </figure>
#
# For an excellent interactive demonstration of the Schelling Model, check out [this blog post](https://ncase.me/polygons/) by Vi Hart and Nicky Case.
#
# ### A Note on History
#
# The Schelling model does not include any concepts of historical oppression, wealth, or power, all of which contribute to racial segregation. The message of the Schelling model is that these factors are not **needed** for segregation -- mildly racist individual preferences would be enough. It is important, however, not to confuse this mathematical parable with the actual historical circumstances of racial segregation in the US or elsewhere. In most societies, including the US, racial segregation arises because of systematic oppression enforced by policy, violence, and erasure.
#
# ### Sources
#
# These lecture notes are closely based on the [Schelling model example](https://github.com/projectmesa/mesa/tree/main/examples/schelling) in the [official Mesa repository](https://github.com/projectmesa/mesa). They also draw on the [Introductory Tutorial](https://mesa.readthedocs.io/en/master/tutorials/intro_tutorial.html) from the official Mesa documentation.
#
# # Implementing the Schelling Model
# Let's start by implementing a bare-bones model. While there is some flexibility in how one does this, there are a few common features of most Mesa models:
#
# 1. There must be an *agent* class, which should inherit from `mesa.Agent`. This class specifies the properties and behaviors of an individual agent in the simulation.
# - This class must call `mesa.Agent.__init__()` as part of its `__init__()` method.
# - This class must have a `step()` method which describes the primary individual behavior.
# 2. There must be a *model* class, which should inherit from `mesa.Model`.
# - The `__init__()` method of this class is responsible for creating agents with their properties, as well as the space (often a grid) on which the simulation unfolds.
# - This class must also have a `step()` method which provides a complete description of what happens in a single model time step. Often, this involves using a `Schedule` to call the `step()` method of each of the agents in some specified sequence.
#
# Let's write a very simple model that demonstrates some of these requirements. Our model won't really do very much yet, but it will demonstrate the key techniques of defining the agent and model, adding agents to the model, and calling the `step()` methods.
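# Mesa's class names vary slightly across versions, so as a dependency-free sketch, the two-class architecture just described (an agent class with a `step()` method, and a model class that owns a random-activation schedule) might look like the following; `ToyAgent`, `ToyModel`, and `n_agents` are illustrative names, not the mesa API itself:

```python
import random

class ToyAgent:
    """Stand-in for a mesa.Agent subclass: holds an id and a back-reference to the model."""
    def __init__(self, unique_id, model):
        self.unique_id = unique_id
        self.model = model

    def step(self):
        # the primary individual behavior
        print(f"Hi, I am agent {self.unique_id}.")

class ToyModel:
    """Stand-in for a mesa.Model: creates the agents and sweeps over them each step."""
    def __init__(self, n_agents):
        self.agents = [ToyAgent(i, self) for i in range(n_agents)]

    def step(self):
        # RandomActivation-style schedule: activate every agent once, in random order
        for agent in random.sample(self.agents, len(self.agents)):
            agent.step()

TM = ToyModel(5)
TM.step()
```

# In the real mesa version, the classes inherit from `mesa.Agent` and `mesa.Model`, call the parent `__init__()` methods, and use a `RandomActivation` schedule object in place of the hand-rolled loop.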
# Let's demonstrate the behavior of our toy model:
# Observe that, each time we call `TM.step()`, the model sweeps through the various agents and calls their individual `step()` methods. This is because we created a `RandomActivation` schedule, and added each of the agents to this schedule.
#
# With our architecture in place, our next step is to learn how to implement more interesting behaviors.
#
# ## Spatial Grids
#
# The Schelling model usually evolves on a grid. At the moment, we don't have a grid incorporated. Fortunately, this is easy to bring in. We simply need to add a `SingleGrid` object with specified width and height. The `torus` argument of the grid determines whether the edges "wrap around." If it is selected, then walking off the left side of the grid will put you back on the right side. This is often visualized as allowing the grid to lie on the surface of a torus, or donut:
#
# <figure class="image" style="width:30%">
# <img src="https://i.stack.imgur.com/ZZrv4.png" alt="">
# <figcaption><i>A toroidal grid.</i></figcaption>
# </figure>
#
# The modifications we need to make to our previous code are relatively simple:
#
# 1. We need to give each `ToyAgent` a `pos`ition.
# 2. We need to give the model a `grid` instance variable.
# 3. We need to modify our initialization of agents so that we call `self.grid.position_agent(agent, pos)` in order to place each agent on the grid.
# Now we can again instantiate our model. This time, we need to pass `width`, `height`, and `density`. Here, we're creating a 10x10 grid in which roughly 10% of cells have agents in them.
# It's also possible to directly extract the grid and visualize it using familiar tools. In a later lecture, however, we'll see some much better ways to visualize the grid.
import numpy as np
from matplotlib import pyplot as plt
plt.imshow(np.array(TM.grid.grid) == None)  # True (bright) where the cell is empty
# # A Basic Schelling Model
#
# We're now ready to construct a simple version of the Schelling model. Here are the remaining ingredients we need to bring in:
#
# 1. Agents need to have *types* associated with them.
# 2. The agent's `step()` method should check whether the agent is "happy" (i.e. not surrounded by too many neighbors of a different `type`), and move it to an empty grid cell if not. The `SingleGrid` class we've used to create the grid provides several useful methods for handling this logic.
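# The rules above can be sketched without mesa at all. The following is a hedged, dependency-free sketch of the Schelling step logic, assuming a constructor signature of `(width, height, density, homophily)` to match the calls below; the class name `SchellingSketch` and the internal grid representation are illustrative, not the mesa implementation:

```python
import random

class SchellingSketch:
    """Dependency-free sketch: agents of two types on a toroidal grid."""
    def __init__(self, width, height, density, homophily, seed=0):
        self.rng = random.Random(seed)
        self.width, self.height = width, height
        self.homophily = homophily  # min fraction of same-type neighbors to be happy
        # grid[i][j] is None (empty), "triangle", or "square"
        self.grid = [[self.rng.choice(["triangle", "square"])
                      if self.rng.random() < density else None
                      for _ in range(width)] for _ in range(height)]

    def neighbors(self, i, j):
        # 8 surrounding cells, wrapping around the edges (like torus=True)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if (di, dj) != (0, 0):
                    yield self.grid[(i + di) % self.height][(j + dj) % self.width]

    def step(self):
        # simple sweep; note a moved agent may be re-checked later in the same step
        empties = [(i, j) for i in range(self.height)
                   for j in range(self.width) if self.grid[i][j] is None]
        for i in range(self.height):
            for j in range(self.width):
                agent = self.grid[i][j]
                if agent is None:
                    continue
                nbrs = [n for n in self.neighbors(i, j) if n is not None]
                same = sum(n == agent for n in nbrs)
                if nbrs and same / len(nbrs) < self.homophily and empties:
                    # unhappy: move to a random empty cell
                    k = self.rng.randrange(len(empties))
                    di, dj = empties.pop(k)
                    self.grid[di][dj] = agent
                    self.grid[i][j] = None
                    empties.append((i, j))

SM = SchellingSketch(20, 20, 0.8, 0.5)
before = sum(cell is not None for row in SM.grid for cell in row)
SM.step()
after = sum(cell is not None for row in SM.grid for cell in row)
print(before == after)  # True: moving never creates or destroys agents
```

# The mesa version delegates the grid bookkeeping (empty-cell tracking, neighbor lookup, moving agents) to `SingleGrid`, which is exactly why that class is convenient here.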
SM = SchellingModel(20, 20, 0.9, 0.5)
SM.step()
# Here's a function to plot the model state. Dark purple squares are empty; green squares are agents of type `triangle`, and yellow squares are agents of type `square`.
def viz_state(SM, ax):
G = np.array(SM.grid.grid)
to_viz = np.zeros(G.shape)
for i in range(G.shape[0]):
for j in range(G.shape[1]):
if G[i,j] is not None:
if G[i,j].type == "triangle":
to_viz[i,j] = 1.0
elif G[i,j].type == "square":
to_viz[i,j] = -1.0
ax.imshow(to_viz, cmap = "Spectral", vmin = -1.5, vmax = 1.5)
# Now we're ready to visualize the evolution of our model.
# +
fig, axarr = plt.subplots(2, 5, figsize = (10, 4))
SM = SchellingModel(20, 20, 0.8, 0.7)
t = 0
for ax in axarr.flatten():
ax.axis("off")
ax.set(title = f"timestep {t}")
viz_state(SM, ax)
t += 1
SM.step()
# -
# We observe the characteristic separation of an initially spatially mixed population into large regions of homogeneous types.
#
# In coming lectures, we'll learn how to visualize these processes more gracefully; how to collect data from simulations; and how to implement more complex models.
| lectures/abm/abm-1-live.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %pylab notebook
import matplotlib.pyplot as plt
import numpy as np
np.log(611)  # np.math was deprecated and later removed from NumPy; use np.log (or math.log)
x = np.array(np.arange(11)+1)
# +
y = [np.log(xi) for xi in x]
#y2 = [np.log(xi) for xi in x]
# -
fig, ax = plt.subplots(1)
plt.plot(x, y)
| GEOG827/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Charts and Histograms Visualization Tools
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
import seaborn as sns
sns.set(style="ticks", color_codes=True)
# %config InlineBackend.figure_format = 'svg'
iris = sns.load_dataset("iris")
from IPython.display import display_html  # needed by display2frames below

def display2frames(df1, df2):
    display_html(f"<table><tr><td><pre>{df1}</pre></td>" +
                 f"<td><pre>{df2}</pre></td></tr></table>", raw=True)
iris.head()
# -
(iris.shape, iris.dropna().shape)
plt.close("all")
cols = iris.columns.drop("species")
for feat in cols:
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 1.2))
h1 = iris[feat].hist(ax=axes[0], color = 'g', bins=30)
h2 = sns.boxplot(iris[feat])
plt.show()
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(15, 2))
h3 = sns.countplot(iris[feat], hue=iris["species"])
bp = iris.boxplot(figsize=(8, 4))
# plt.xticks(rotation = 90)
| iris/IrisFeatures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Running gsum on a notebook to generate figures for Melendez et al. (2019)
#
# All of the figures in [Quantifying Correlated Truncation Errors in Effective Field Theory](https://arxiv.org/abs/1904.10581) are generated by the notebook [correlated_EFT_publication.ipynb](https://github.com/buqeye/gsum/blob/master/docs/notebooks/correlated_EFT_publication.ipynb). You can modify the inputs and thereby extend or validate the analysis.
# Steps to get the notebook working:
# 1. Update the notebook environment once again: `conda env update -f path/to/environment.yml` (this adds cython and cythongsl).
# 1. Now within the environment install gsum. Do `pip install git+https://github.com/jordan-melendez/gsum`. Note that you should NOT do `pip install gsum` as this will install an old version of gsum. Using pip3 rather than pip also seems to create issues.
# 1. This creates a problem with a file in the scipy distribution. We need to go to `/anaconda3/envs/talent-env/lib/python3.7/site-packages/statsmodels/stats` (or wherever your anaconda3 resides). Then edit the file `moment_helpers.py`. In line 17 change `scipy.misc` to `scipy.special`.
# 1. Then do a `pip install tables` to make sure the tables package works.
# 1. In the `topics/model-checking` folder, create a sub-folder called 'figures'. This will hold files for the figures being generated.
# 1. Get the `correlated_EFT_publication_with_commentary.ipynb` notebook from the repository.
# 1. If it doesn't work at this point, please ask!
# ## Things to try
#
# 1. Execute the full notebook and yell if something goes wrong.
# 1. Change the random seed and see how Figures 1a, 1b, and 3 change.
# 1. Change the expansion parameter and see how Figures 1a, 1b, and 3 change.
| topics/model-checking/running_gsum_notebook_for_figures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# pandas provides fast, flexible and expressive data structures to make working with relational data both easy and intuitive
# it aims to be the fundamental high-level building block for doing real-world data analysis in Python
# -
import pandas as pd
# +
iris = pd.read_csv("iris.data")
# inside the parentheses, specify the filepath of the data; it can be a URL or a local path on your system
# there are other functions too for various formats, like read_excel(), read_json() and read_html()
# -
print(iris)
iris
#Ignoring header -> If you don't want first row to be treated as a header, you can set header = None
iris = pd.read_csv('iris.data', header=None)
iris
#loading the data with column names pre decided
iris = pd.read_csv('iris.data',header = None, names=['sl', 'sw', 'pl', 'pw', 'species'])
iris.head()
type(iris)
# +
# df = iris
# note: any change to iris will be automatically made to df
# +
df = iris.copy()
# this will make copy of iris to df
# so any change to df will not be made in iris and vice-versa
# -
df.head() # returns first 5 rows
df.head(3) # returns first 3 rows
df.tail() # returns last 5 rows
df.tail(3) # returns last 3 rows
df.shape
df.columns
# +
# change column names
df.columns = ['sl','sw','pl','pw','flower_type']
# -
df.columns
df.dtypes
df.describe() # for every column of int or float type it will give stats
# +
# note: in the above,
# count shows the no. of valid values of a particular column
# so if count shows 149 for every column, it means that no column has NaN values
# std is standard deviation
# 25% means that if we arrange the data in increasing order, then the value at 25th percentile is......
# 50% means that if we arrange the data in increasing order, then the value at 50th percentile is......
# -
#If you want to include all columns for result, then use -
df.describe(include='all')
# +
# to access a particular column of the dataframe, there are 2 ways:
# 1st way
df.sl
# +
# 2nd way
df['sl']
# +
df[0:1]
# df[0:1, 2:4] will give an error
# +
# so to access a particular sub-section of the dataframe, use iloc[]
df.iloc[0:3, 2:4]
# -
df.isnull() # tells whether each entry is null, but it's hard to read on its own
df.isnull().sum() # gives the number of null values per column, which is much more useful
#data information
df.info()
# +
# manipulating the rows of dataframe
df.head()
# +
# here 0,1,2,3 are labels, which are different from positions
# +
# to delete a row use drop()
df.drop(0) # it takes a label as input, not a position
df.head()
# +
# it drops the row with label 0, but not from df itself; instead it returns a modified copy
# so no change is visible in df; to keep the result, assign it:
a = df.drop(0)
a.head()
# +
# as we can see that 0th label got deleted so now starting of a from 1 label
# +
# what happens if we run that command again? ---> ERROR, because label 0 no longer exists in a
# a.drop(0)
# a.head()
# +
# if we do not want to make a copy of the dataframe while dropping a row, use the parameter: inplace = True
df.drop(0, inplace = True)
df.head()
# now the row is deleted from df itself (in place)
# -
df.drop(3, inplace=True)
df.head()
# +
#To delete more than one row in one go, we can pass a list of labels
df.drop([4,5,8], inplace=True)
df.head(10)
# -
# similar to the columns attribute, we have index to see the row labels
df.index
# +
# as we can see that there is no label of 0 and 3
# -
df.index[0] # returns the label at position 0
df.head()
# +
# deleting row by position
# to delete the row at position 0:
df.drop(df.index[0] , inplace=True)
df.head()
# -
df.index[[0,1]] # gives the labels at positions 0 and 1
# +
# to delete the rows at positions 0 and 1:
df.drop(df.index[[0,1]] , inplace=True)
df.head()
# +
# note: labels are stick to rows
# -
df.sl > 5 # condition on the column sl; on its own this just returns a boolean Series
df[df.sl>5] # using the boolean Series as a mask selects the matching rows
# note: in a similar way we can pass conditions
#selecting data based on some condition applied on feature values in columns
#say, we want to select only those rows, where sl > 6 and pl > 5
iris[(iris.sl > 6) & (iris.pl > 5)] # use the element-wise & operator; Python's 'and' raises an error on Series
df[df.flower_type=='Iris-setosa']
df[df.flower_type=='Iris-setosa'].describe()
# +
# iloc vs loc
# iloc works on positiion
# loc works on labels
print(df.head())
# -
print(df.iloc[0])
print(df.loc[7])
print(df.loc[0]) # error dega
# +
# add a row
df.loc[4] = [2.0,2.0,2.0,2.0,'Iris-ve'] # creating new label, added at the end
df.loc[5] = [1.0,1.0,1.0,1.0,'Iris-setosa'] # assigning values to label 5
df.head()
# -
df.tail()
df.index
# +
# as we can see, the index is not properly arranged, so to reset the index we have reset_index()
# but there are a couple of issues with this function:
# it automatically adds another column named 'index' that holds the old labels, which we don't need, so we drop it
# it does not modify the original df; instead it creates a modified copy
df.reset_index()
# -
df.reset_index(inplace=True, drop=True)
df
# +
# manipulating the columns of the dataframe
# there are 2 ways to delete a column:
df.drop("sl",axis=1, inplace=True) #note: to delete column we need to pass the axis =1 so that it looks column wise
df.describe()
# -
del(df["sw"])
df.describe()
df.head()
df = iris.copy()
df.columns = ['sl','sw','pl','pw','flower_type']
df.head()
# +
# to add new column in df
df["diff_pl_pw"] = df["pl"]-df["pw"]
df
# -
#check for unique values of each column
for i in df.columns:
print(df[i].unique(),"\t",df[i].nunique())
#Check how these unique categories are distributed among the columns
for i in df.columns:
print(df[i].value_counts())
print()
#the groupby function in the pandas library groups values based on categorical variables
iris.groupby('species').mean()['pl']
iris.groupby('species').mean()['pw']
iris.groupby('species').mean()['sw']
# +
#handling string data
#most of the ML algorithms work very well with the numeric data
#so, if we have any string data in the dataframe, we can think of some way to convert that to numeric data
#for example, here, in the column 'species', we have 3 different types of string values
# let's try to assign 0, 1 and 2 to these categories
#let's first write a function, which will do this for us
def getNumber(s):
if s == 'Iris-setosa':
return 0
elif s == 'Iris-versicolor':
return 1
else:
return 2
iris['category'] = iris.species.apply(getNumber)
del iris['species']
iris.head()
# -
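# An alternative sketch: the same string-to-number conversion can be done without writing a custom function, using `Series.map` with a dict. Shown here on a small stand-in Series rather than the full dataset:

```python
import pandas as pd

species = pd.Series(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica', 'Iris-setosa'])
codes = species.map({'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2})
print(codes.tolist())  # [0, 1, 2, 0]
```

# `map` with a dict is concise when the categories are known up front; `apply` with a function is more flexible when the mapping needs logic.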
iris.groupby('category').count()['sl']
| Basics of ML and DL/ML/Pandas/Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Function
# ### Projectile
# \begin{equation}
# R=\frac{u^2\sin 2\theta}{g}
# \end{equation}
# \begin{equation}
# TF=\frac{2u\sin \theta}{g}
# \end{equation}
# \begin{equation}
# H=\frac{u^2\sin^2\theta}{2g}
# \end{equation}
import numpy as np # from numpy import* not required to write np
import pandas as pd # from pandas import*
import matplotlib.pyplot as plt # from matplotlib import*
# %matplotlib inline
def projectile(u,theta):
g=9.8 # acceleration due to gravity
R = u**2*np.sin(2*np.pi*theta/180)/g # range
H= u**2*np.sin(np.pi*theta/180)**2/(2*g) # max. height
TF = 2*u*np.sin(np.pi*theta/180)/g # time of flight
return [R,H,TF]
p=projectile(100,60)
p
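# A quick sanity check on the formulas above (assuming g = 9.8 as in `projectile()`): at θ = 45° the range is maximal and reduces to u²/g, since sin(2·45°) = 1.

```python
import math

u, g = 100, 9.8
R_45 = u**2 * math.sin(2 * math.pi * 45 / 180) / g
print(round(R_45, 1))  # 1020.4
```

# This also explains why the Range curve in the plot below peaks at 45 degrees.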
Angle=[] #list for angle
R=[] #list for range
H=[] # list for height
TF=[] #list for time of flight
for angle in range(1,91):
a=projectile(100,angle)
Angle.append(angle)
R.append(a[0]) # added element in list R
H.append(a[1]) # added element in list H
TF.append(a[2]) # added element in list TF
plt.plot(Angle,R,'g-.',label='Range')
plt.plot(Angle,H,'b^',label='Max.height')
plt.xlabel('Angle(degree)')
plt.ylabel('Distance(m)')
plt.title('Projectile')
plt.legend()
plt.show()
plt.plot(Angle,TF,'k*')
plt.xlabel('Angle(degree)')
plt.ylabel('Time of flight(sec)')
plt.title('Projectile')
plt.savefig('projective.eps')
plt.show()
data={} # dictionary
data.update({"Angle":Angle,"Range":R ,"Time of flight":TF,"Max.Height": H})
DF = pd.DataFrame(data)
DF
DF.to_csv("projectile.csv") # save the data in CSV format (can be opened in Excel)
| func_pms1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14
# -
# Data: https://www.irena.org/Statistics/View-Data-by-Topic/Capacity-and-Generation/Statistics-Time-Series
# +
df = pd.DataFrame()
df["Years"] = [2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021]
df["Installed Capacity (GW)"] = [72, 102, 137, 176, 223, 295, 390, 483, 585, 710, 843]
df.set_index("Years", inplace = True)
df
# +
fig, ax = plt.subplots(figsize = (12, 8))
df.plot(kind = "bar", ax = ax)
plt.ylabel("GW")
#Remove legend along with the frame
plt.legend([], frameon = False)
plt.title("Global Installed Capacity of Solar PV")
# Hide the right and top spines
ax.spines.right.set_visible(False)
ax.spines.top.set_visible(False)
#Set xticks rotation to 0
plt.xticks(rotation = 0)
#Add grid line
plt.grid(axis = "y")
#Adding value labels on top of bars
for i in range(len(df)):
    installed = df.iloc[i, 0]
plt.text(x = i - 0.2,
y = installed + 5,
s = str(installed))
#Add source
plt.text(8, -100, "Source: IRENA, 2022")
plt.savefig("../output/global solar pv trend.jpeg",
dpi = 300)
plt.show()
# +
from matplotlib.offsetbox import (OffsetImage, AnnotationBbox)
# -
# How to read image in Python?
# https://www.geeksforgeeks.org/reading-images-in-python/
# +
import matplotlib.image as image
#Read the image from file into an array of binary format
file = "../data/solar_pv.png"
logo = image.imread(file)
print (logo.shape); print (logo)
# -
# Returned array has shape:
#
# - (M, N) for grayscale images.
#
# - (M, N, 3) for RGB images.
#
# - (M, N, 4) for RGBA images. (Red, Green, Blue, Alpha images)
# NumPy arrays are high-performance data structures, better suited for mathematical operations than Python's native list data type. A three-dimensional (3D) array is composed of 3 nested levels of arrays, one for each dimension.
#
# Three-dimensional array of 330 rows and 482 columns, which implies one nested array of channel values per pixel.
# +
print (logo.shape)
plt.imshow(logo)
plt.show()
# -
# +
import matplotlib.image as image
fig, ax = plt.subplots(figsize = (12, 8))
df.plot(kind = "bar", ax = ax)
from matplotlib.offsetbox import (OffsetImage, AnnotationBbox)
#The OffsetBox is a simple container artist.
#The child artists are meant to be drawn at a relative position to its parent.
imagebox = OffsetImage(logo, zoom = 0.15)
#Annotation box for solar pv logo
#Container for an `OffsetBox` (here imagebox) referring to a specific position *xy*.
ab = AnnotationBbox(imagebox, (5, 700), frameon = False)
ax.add_artist(ab)
plt.ylabel("GW")
#Remove legend along with the frame
plt.legend([], frameon = False)
plt.title("Global Installed Capacity of Solar PV")
# Hide the right and top spines
ax.spines.right.set_visible(False)
ax.spines.top.set_visible(False)
#Set xticks rotation to 0
plt.xticks(rotation = 0)
#Add grid line
plt.grid(axis = "y")
#Adding value labels on top of bars
for i in range(len(df)):
    installed = df.iloc[i, 0]
plt.text(x = i - 0.2,
y = installed + 5,
s = str(installed))
#Add source
plt.text(8, -100, "Source: IRENA, 2022")
plt.savefig("../output/global solar pv trend with pv logo.jpeg",
dpi = 300)
plt.show()
# -
# Plotting a circle on a figure with unequal axis:
# https://werthmuller.org/blog/2014/circle/
# +
from matplotlib.offsetbox import (OffsetImage, AnnotationBbox)
import matplotlib.image as image
from matplotlib import patches
fig, ax = plt.subplots(figsize = (12, 8))
df.plot(kind = "bar", ax = ax)
#The OffsetBox is a simple container artist.
#The child artists are meant to be drawn at a relative position to its parent.
imagebox = OffsetImage(logo, zoom = 0.15)
#Annotation box for solar pv logo
#Container for an `OffsetBox` (here imagebox) referring to a specific position *xy*.
ab = AnnotationBbox(imagebox, (5, 700), frameon = False)
ax.add_artist(ab)
plt.ylabel("GW")
#Remove legend along with the frame
plt.legend([], frameon = False)
plt.title("Global Installed Capacity of Solar PV")
# Hide the right and top spines
ax.spines.right.set_visible(False)
ax.spines.top.set_visible(False)
#Set xticks rotation to 0
plt.xticks(rotation = 0)
#Add grid line
plt.grid(axis = "y")
#Adding value labels on top of bars
for i in range(len(df)):
    installed = df.iloc[i, 0]
plt.text(x = i - 0.2,
y = installed + 5,
s = str(installed))
#Add source
plt.text(8, -100, "Source: IRENA, 2022")
plt.scatter(5, 700,
s = 20000,
marker = "o",
color = "red",
facecolors = "none"
)
plt.savefig("../output/global solar pv trend with pv logo with circle.jpeg",
dpi = 300)
plt.show()
# -
| notebooks/Inserting image to a plot in Matplotlib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Building Bayesian Networks
#
# by <NAME>
# <br>
# March 7th, 2022
#
# In this notebook, you can follow an example of creating a Bayesian Network, by using the <b>pgmpy</b> Python library. The example tackled by this notebook is the <i>Bayesian Network for Computer Failure</i>, which we covered in detail in our lectures.
# Bayesian Network is composed of two components:
# <ol>
# <li>a directed acyclic graph DAG: 𝓖(𝑉,𝐸) - nodes 𝑉, and edges 𝐸</li>
# <ul>
# <li>a set of random variables represented by nodes</li>
# <li>directed edges – connect two random variables with causal probabilistic dependency</li>
# <li>cycles not allowed in the graph</li>
# </ul>
# <li>a set of conditional probability distributions</li>
# <ul>
# <li>conditional probability distribution is defined for each node in the graph</li>
# <li>the conditional probability distribution of a node (random variable) is defined for every possible outcome of the preceding causal node(s)</li>
# </ul>
# ## Step 1: Create the DAG of the Bayesian Network
# ### Step 1.1. Describe the problem
# <ul>
# <li>a computer does not start, which is considered as a <b>computer failure</b> (observation/evidence)</li>
# <li>possible <i>independent</i> causes of failure:</li>
# <ol>
# <li><b>electricity failure</b></li>
# <li><b>computer malfunction</b></li>
# </ol><br>
# <li><b>electricity failure</b> and <b>computer malfunction</b> are ancestors/parents of <b>computer failure</b></li>
# <li>inference objectives:</li>
# <ol>
# <li>calculate the posterior conditional probability distribution of each of the possible unobserved causes given the observed evidence of a <b>computer failure</b></li>
# <li>calculate the prior conditional probability distribution of having a <b>computer failure</b> without any evidence</li>
# <li>calculate the prior conditional probability distribution of having a <b>computer failure</b> considering a recent <b>electricity failure</b> as evidence</li>
# </ol>
# </ul>
# ### Step 1.2. Draw and show the DAG
# Let's show the DAG model from the lectures.
# +
from IPython.display import Image
Image("images/computer_failure.png")
# -
# ### Step 1.3. Create the DAG of the Bayesian Network by using the pgmpy library
# Let's create the Computer Failure Bayesian Network following the DAG shown above. We use the pgmpy library for this task.
# <br>
# Here, we consider three nodes, each representing a binary random variable:
# <ol>
# <li><b>electricity_failure</b>: domain = {yes, no}</li>
# <li><b>computer_malfunction</b>: domain = {yes, no}</li>
# <li><b>computer_failure</b>: domain = {yes, no}</li>
# </ol>
#
# Additional constraints: <b>electricity_failure</b> ⊥ <b>computer_malfunction</b> (independent)
# <br><br>
# To build the Bayesian Network DAG we use the BayesianNetwork class provided by the pgmpy library.
# +
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
import networkx as nx
import pylab as plt
#**** define the Bayesian DAG structure
#
# we define the Bayesian structure by connecting nodes (random variables)
# the direction is determined from the order of the nodes
#
# here, electricity_failure, computer_malfunction, and computer_failure are random variables
model = BayesianNetwork([('electricity_failure', 'computer_failure'), ('computer_malfunction', 'computer_failure')])
# -
# ### Step 1.4. Show the DAG of the Bayesian Network
# We draw the model created in the previous step.
nx.draw(model, with_labels=True)
plt.show()
plt.close()
# ## Step 2: Create the Conditional Probability Distributions of the Bayesian Network
#
# In this step, we build the distribution model for each random variable. We use our beliefs and logic to come up with these models. Let's say this is our prior knowledge.
#
# ### Step 2.1: Build the probability distribution tables with TabularCPD
# In this exercise we are going to use the models provided in the lectures:
#
# <ul>
# <li>P (electricity_failure=yes) = 0.1; P (electricity_failure=no) = 0.9</li>
# <li>P (computer_malfunction=yes) = 0.2; P (computer_malfunction=no) = 0.8</li>
# <li>P (computer_failure=yes | electricity_failure=no ∩ computer_malfunction=no) = 0</li>
# <li>P (computer_failure=yes | electricity_failure=no ∩ computer_malfunction=yes) = 0.5</li>
# <li>P (computer_failure=yes | electricity_failure=yes ∩ computer_malfunction=no) = 1</li>
# <li>P (computer_failure=yes | electricity_failure=yes ∩ computer_malfunction=yes) = 1</li>
# </ul>
#
# We use the TabularCPD class provided by the pgmpy library to build the distribution model of each random variable.
# +
#**** define the Conditional Probability Distributions (CPDs)
# define CPD for electricity_failure
#
# the TabularCPD class associates the distribution model to be constructed with a random variable,
# e.g., electricity_failure
cpd_electricity_failure = TabularCPD (
# associate the distribution table with a random variable - must be included in the Bayesian model
variable="electricity_failure",
# define the cardinality of the variable domain, i.e., binary = 2 -> domain = {yes, no}
variable_card=2,
# define the distribution table
# values are ordered following the domain {yes, no}
# e.g., P (electricity_failure=yes) = 0.1; P (electricity_failure=no) = 0.9
values=[[0.1], [0.9]]
)
# define CPD for computer_malfunction
cpd_computer_malfunction = TabularCPD (
variable="computer_malfunction",
variable_card=2,
values=[[0.2], [0.8]]
)
# define CPD for computer_failure
cpd_computer_failure = TabularCPD (
variable="computer_failure",
variable_card=2,
# define the distribution table
#
# this distribution table includes values for all the combinations of the values
# held by the variables that condition this variable
#
# electricity_failure = A, computer_malfunction = B, computer_failure = C
# C=yes = {[A=yes,B=yes],[A=yes,B=no],[A=no,B=yes],[A=no,B=no]} = {1, 1, 0.5, 0}
# C=no  = {[A=yes,B=yes],[A=yes,B=no],[A=no,B=yes],[A=no,B=no]} = {0, 0, 0.5, 1}
# C=yes + C=no = 1 for every column
#
values=[[1, 1, 0.5, 0],[0, 0, 0.5, 1]],
# define the evidence variables, i.e., the parents (causes) of computer_failure
evidence=["electricity_failure", "computer_malfunction"],
evidence_card=[2,2]
)
# -
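# As a quick sanity check before handing these tables to pgmpy, we can verify in plain Python (a sketch of my own, not part of the pgmpy workflow) that every column of the conditional table, i.e., every combination of parent values, forms a valid probability distribution:

```python
# Plain-Python sanity check: each conditioning case (each column of the CPT)
# must sum to 1. The values below are copied from cpd_computer_failure.
cpt = [[1, 1, 0.5, 0],   # P(computer_failure = yes | A, B)
       [0, 0, 0.5, 1]]   # P(computer_failure = no  | A, B)
column_sums = [sum(col) for col in zip(*cpt)]
assert all(abs(s - 1) < 1e-12 for s in column_sums)
```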
# ### Step 2.2: Associate the distribution models with the network structure
# we associate the distribution models with the network structure, i.e., with the random variables
model.add_cpds(cpd_electricity_failure, cpd_computer_malfunction, cpd_computer_failure)
# ### Step 2.3: Check the model
# We use the built-in mechanism (the check_model() method) of the BayesianNetwork class to check the distribution models. The following checks are performed:
# <ol>
# <li>consistency between the cardinality of each variable's domain and the number of values in its table</li>
# <li>that the probabilities of each distribution sum to 1, e.g., P(no) + P(yes) = 1</li>
# </ol>
model.check_model()
# ## Step 3: Inference with the Bayesian Network
# In this step, we use the inference mechanisms of the Bayesian Network to:
# <ol>
# <li>calculate the posterior probability distribution of each possible unobserved cause, given the observed evidence of a <b>computer failure</b></li>
# <li>calculate the prior probability distribution of having a <b>computer failure</b> without any evidence</li>
# <li>calculate the probability distribution of having a <b>computer failure</b> given evidence about its causes</li>
# </ol>
# To run inference tasks on this network, we use the VariableElimination class provided by the pgmpy.inference library.
# +
from pgmpy.inference import VariableElimination
infer = VariableElimination(model)
# -
# ### Step 3.1: Infer Posterior Probability of having electricity failure
#
# Recall that we use the Bayes' Theorem to calculate posterior probability.
#
# <pre>
#
# P(Evidence|Cause) * P(Cause)
# P(Cause|Evidence) = ---------------------------------------
# P(Evidence)
#
# </pre>
# <br>
# Please, refer to lectures on how we infer posterior probability.
#**** infer posterior probability
# evidence={'computer_failure': 0} means computer_failure = yes
#
posterior_p = infer.query(['electricity_failure'], evidence={'computer_failure': 0})
# Show the posterior probability results.
print(posterior_p)
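# We can reproduce this number by hand with Bayes' theorem, using only the CPD values defined in Step 2 (a checking sketch, independent of pgmpy):

```python
# P(fail=yes | elec=yes): marginalize computer_malfunction out of the CPT row
p_fail_given_elec_yes = 1.0 * 0.2 + 1.0 * 0.8                          # = 1.0
# P(fail=yes): law of total probability over both causes
p_fail = 0.1 * p_fail_given_elec_yes + 0.9 * (0.5 * 0.2 + 0.0 * 0.8)   # = 0.19
# Bayes' theorem: P(elec=yes | fail=yes)
posterior = p_fail_given_elec_yes * 0.1 / p_fail
print(round(posterior, 4))  # 0.5263
```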
# ### Step 3.2: Infer Prior Probability of having computer failure with no evidence
# Recall that we use the following formula to infer prior probability.
# <pre>
# P(C=c1) = Σ_(a∈A) Σ_(b∈B) P(A=a ∩ B=b ∩ C=c1)
# </pre>
# Please, refer to lectures on how we infer prior probability.
prior_p = infer.query(['computer_failure'], evidence={})
# Show the prior probability results.
print(prior_p)
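# The same prior can be checked by hand (my own verification sketch) by summing the joint probability over every combination of the two independent causes:

```python
# P(C=yes) = sum over a, b of P(A=a) * P(B=b) * P(C=yes | A=a, B=b)
p_elec = {'yes': 0.1, 'no': 0.9}
p_mal = {'yes': 0.2, 'no': 0.8}
p_fail_yes = {('yes', 'yes'): 1.0, ('yes', 'no'): 1.0,
              ('no', 'yes'): 0.5, ('no', 'no'): 0.0}
prior = sum(p_elec[a] * p_mal[b] * p_fail_yes[(a, b)]
            for a in p_elec for b in p_mal)
print(round(prior, 2))  # 0.19
```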
# ### Step 3.3: Infer Probability of having computer failure given evidence about its causes
# Here the evidence is electricity_failure = yes (state 0) and computer_malfunction = no (state 1).
prior_p = infer.query(['computer_failure'], evidence={'electricity_failure': 0, 'computer_malfunction': 1})
# Show the probability results.
print(prior_p)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Download the titanic data set from Kaggle site
# ### install python-dotenv
# +
# #!pip3 install python-dotenv
# -
from dotenv import load_dotenv, find_dotenv
# +
#### find the .env file from the directories
dotenv_path = find_dotenv()
##### load the env file
load_dotenv(dotenv_path)
# +
### extract env variable using os
import os
username = os.environ.get("KAGGLE_USERNAME")
username
# -
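# A small defensive sketch (my addition, not part of the original flow): fail loudly if a credential is missing from the environment instead of silently posting an empty value to Kaggle. `KAGGLE_USERNAME` and `KAGGLE_PASSWORD` are the variable names this notebook assumes in its .env file.

```python
import os

def get_required_env(name):
    """Return the environment variable `name`, raising if it is unset."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing environment variable: {name}")
    return value

# e.g. username = get_required_env("KAGGLE_USERNAME")
```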
# ### Read data from the Kaggle site
import requests
from requests import session
# +
### POST payload for login
payload ={
"action":"login",
"username":'vipinparambil',
"password": '<PASSWORD>'
}
### url for test.csv
url = 'https://www.kaggle.com/c/titanic/download/test.csv'
### create a session
with session() as c:
    c.post("https://www.kaggle.com/account/login", data=payload)
    response = c.get('https://www.kaggle.com/c/titanic/download/train.csv')
    print(response.content)
# -
# +
### Read Data and write file
import requests
from requests import session
payload ={
"action":"login",
"username": os.environ.get("KAGGLE_USERNAME"),
"password": os.environ.get("KAGGLE_PASSWORD")
}
def extract_data(url, file_path):
    # set up a session, log in, then stream the response to disk
    with session() as c:
        c.post("https://www.kaggle.com/account/login", data=payload)
        with open(file_path, 'w') as f:
            result = c.get(url, stream=True)  ## stream the response
            for data in result.iter_content(1024):
                f.write(data.decode('utf-8'))
# -
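# One fragile spot in `extract_data` above: `open()` fails if the `data/raw` directory does not exist yet. A hedged helper (an assumption of mine, not in the original notebook) to create the parent directory first:

```python
import os

def ensure_parent_dir(file_path):
    """Create the parent directory of file_path if it does not exist yet."""
    parent = os.path.dirname(file_path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    return parent

# e.g. call ensure_parent_dir(train_csv_path) before extract_data(...)
```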
## URLS
train_url = "https://www.kaggle.com/c/titanic/download/train.csv"
test_url = "https://www.kaggle.com/c/titanic/download/test.csv"
# +
### Raw folder path in project data folder data/raw
##file paths
raw_data_path = os.path.join(os.path.pardir, "data", "raw")
train_csv_path = os.path.join(raw_data_path, "train.csv")
test_csv_path = os.path.join(raw_data_path, "test.csv")
### download and write the files
extract_data(train_url, train_csv_path)
extract_data(test_url, test_csv_path)
# -
###list files
# !ls -l ../data/raw
# # Build the functionality as a Script
# +
### creating the script file
get_raw_data_script = os.path.join(os.path.pardir, "src", "data", "get_titanic_raw_data_script.py")
# +
# %%writefile $get_raw_data_script
## write the script to the file
##Import
import os
from dotenv import load_dotenv, find_dotenv
import requests
from requests import session
import logging
### Payload
payload ={
"action":"login",
"username": os.environ.get("KAGGLE_USERNAME"),
"password": os.environ.get("KAGGLE_PASSWORD")
}
def extract_data(url, file_path):
    """
    Method to extract data from kaggle site
    """
    with session() as c:
        c.post("https://www.kaggle.com/account/login", data=payload)
        with open(file_path, 'w') as f:
            response = c.get(url, stream=True)
            for data in response.iter_content(1024):
                f.write(data.decode("utf-8"))
### Main method
def main(project_dir):
    """
    Main method
    """
    logger = logging.getLogger(__name__)
    logger.info("Downloading the raw data ..............")
    ## URLs
    train_url = "https://www.kaggle.com/c/titanic/download/train.csv"
    test_url = "https://www.kaggle.com/c/titanic/download/test.csv"
    ## CSV file paths
    raw_data_path = os.path.join(project_dir, "data", "raw")
    train_csv = os.path.join(raw_data_path, "train.csv")
    test_csv = os.path.join(raw_data_path, "test.csv")
    ## extract data
    extract_data(train_url, train_csv)
    extract_data(test_url, test_csv)
    logger.info("Downloaded raw data files")
## Call the main
if __name__ == '__main__':
    ## get the root directory
    project_dir = os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)
    ## set up logger
    log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    logging.basicConfig(level=logging.INFO, format=log_fmt)
    ### load the env variables
    dot_env_path = find_dotenv()
    load_dotenv(dot_env_path)
    ## call main
    main(project_dir)
# -
# !python $get_raw_data_script
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="Av4gV39bk5Ak" outputId="62daab35-edf0-49ec-c15e-5816856ae135"
# !pip install pandas
# + colab={"base_uri": "https://localhost:8080/"} id="b-NTFIEqlMI9" outputId="a2f951d0-6790-4ddc-dfcb-69e10227d03f"
import pandas as pd
retail_data = pd.read_csv('/content/OnlineRetail.csv', encoding= 'unicode_escape')
retail_data['InvoiceDate'] = pd.to_datetime(retail_data['InvoiceDate'], errors = 'coerce')
uk_data = retail_data.query("Country=='United Kingdom'").reset_index(drop=True)
uk_data.shape
# + id="pdIGvbrbllUj"
from datetime import datetime, timedelta,date
t1 = pd.Timestamp("2011-06-01 00:00:00.054000")
t2 = pd.Timestamp("2011-03-01 00:00:00.054000")
t3 = pd.Timestamp("2011-12-01 00:00:00.054000")
uk_data_3m = uk_data[(uk_data.InvoiceDate < t1) & (uk_data.InvoiceDate >= t2)].reset_index(drop=True)
uk_data_6m = uk_data[(uk_data.InvoiceDate >= t1) & (uk_data.InvoiceDate < t3)].reset_index(drop=True)
# + id="xXEsZYa0nynl"
uk_data_3m['revenue'] = uk_data_3m['UnitPrice'] * uk_data_3m['Quantity']
max_date = uk_data_3m['InvoiceDate'].max() + timedelta(days=1)
rfm_data = uk_data_3m.groupby(['CustomerID']).agg({
'InvoiceDate': lambda x: (max_date - x.max()).days,
'InvoiceNo': 'count',
'revenue': 'sum'})
# + id="CkKxNZD1oeQs"
rfm_data.rename(columns={'InvoiceDate': 'Recency',
'InvoiceNo': 'Frequency',
'revenue': 'MonetaryValue'}, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 455} id="qa49ju_XoiRx" outputId="6f9b539b-8924-48a4-89a7-e16583c4bf03"
rfm_data
# + colab={"base_uri": "https://localhost:8080/", "height": 451} id="Wuwx9xpyon4a" outputId="d8435b18-2b47-437c-8847-c00837890269"
r_grp = pd.qcut(rfm_data['Recency'], q=4, labels=range(3,-1,-1))
f_grp = pd.qcut(rfm_data['Frequency'], q=4, labels=range(0,4))
m_grp = pd.qcut(rfm_data['MonetaryValue'], q=4, labels=range(0,4))
rfm_data = rfm_data.assign(R=r_grp.values).assign(F=f_grp.values).assign(M=m_grp.values)
rfm_data['R'] = rfm_data['R'].astype(int)
rfm_data['F'] = rfm_data['F'].astype(int)
rfm_data['M'] = rfm_data['M'].astype(int)
rfm_data['RFMScore'] = rfm_data['R'] + rfm_data['F'] + rfm_data['M']
rfm_data.groupby('RFMScore')[['Recency', 'Frequency', 'MonetaryValue']].mean()
# + id="LooJboaYo8bX"
rfm_data['Segment'] = 'Low-Value'
rfm_data.loc[rfm_data['RFMScore']>4,'Segment'] = 'Mid-Value'
rfm_data.loc[rfm_data['RFMScore']>6,'Segment'] = 'High-Value'
rfm_data = rfm_data.reset_index()
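# To make the qcut scoring above concrete, here is a toy run on synthetic values (chosen only for illustration): `pd.qcut` bins a metric into quartiles and hands back the 0-3 labels that get summed into the RFM score.

```python
import pandas as pd

# eight evenly spread values -> two per quartile
toy = pd.Series([1, 2, 3, 4, 5, 6, 7, 8])
labels = pd.qcut(toy, q=4, labels=range(4)).astype(int)
print(list(labels))  # [0, 0, 1, 1, 2, 2, 3, 3]
```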
# + id="D89IoLfEqKQE"
uk_data_6m['revenue'] = uk_data_6m['UnitPrice'] * uk_data_6m['Quantity']
revenue_6m = uk_data_6m.groupby(['CustomerID']).agg({
'revenue': 'sum'})
revenue_6m.rename(columns={'revenue': 'Revenue_6m'}, inplace=True)
# + id="By-Zel4Wsz2P"
revenue_6m = revenue_6m.reset_index()
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="0O_rPf4stQyz" outputId="ce18c370-3113-439b-f95e-6f343597ab8e"
merged_data = pd.merge(rfm_data, revenue_6m, how="left")
merged_data = merged_data.fillna(0)
# + id="mXMjfKoSuFBF"
merged_data = merged_data[merged_data['Revenue_6m']<merged_data['Revenue_6m'].quantile(0.99)]
from sklearn.cluster import KMeans
#creating 3 clusters
kmeans = KMeans(n_clusters=3)
kmeans.fit(merged_data[['Revenue_6m']])
merged_data['LTVCluster'] = kmeans.predict(merged_data[['Revenue_6m']])
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="qh3TRN_cuhva" outputId="4b0f0b30-67ef-4e70-b7a3-3a7f6b496bb8"
merged_data.groupby('LTVCluster')['Revenue_6m'].describe()
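# One caveat worth noting: KMeans assigns cluster labels arbitrarily, so cluster 2 is not guaranteed to be the highest-revenue group. A sketch on synthetic data (my addition, illustrative values only) of relabeling the clusters so the labels are ordered by mean revenue:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# three well-separated synthetic revenue groups
rng = np.random.default_rng(0)
demo = pd.DataFrame({'Revenue_6m': np.concatenate([
    rng.normal(100, 10, 50), rng.normal(1000, 50, 50), rng.normal(5000, 100, 50)])})
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(demo[['Revenue_6m']])
demo['LTVCluster'] = km.labels_
# remap labels so cluster 0 = lowest mean revenue, cluster 2 = highest
order = demo.groupby('LTVCluster')['Revenue_6m'].mean().sort_values().index
demo['LTVCluster'] = demo['LTVCluster'].map({old: new for new, old in enumerate(order)})
```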
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="FY4IMpMm9RUL" outputId="8d639ed3-1bf2-40d2-8a05-e245833eb3e5"
feature_data = pd.get_dummies(merged_data)
feature_data.reset_index()
feature_data.head(5)
# + id="EOVKk2AlvoMA"
from sklearn.metrics import classification_report,confusion_matrix
import xgboost as xgb
from sklearn.model_selection import KFold, cross_val_score, train_test_split
# feature_data = pd.get_dummies(merged_data)
X = feature_data.drop(['LTVCluster', 'Revenue_6m'], axis=1)
y = feature_data['LTVCluster']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
# + colab={"base_uri": "https://localhost:8080/"} id="QfvFmCJVxXf_" outputId="147eae36-bbef-4fb5-d078-2cdf4a13f030"
xgb_classifier = xgb.XGBClassifier(max_depth=5, objective='multi:softprob')
xgb_model = xgb_classifier.fit(X_train, y_train)
acc = xgb_model.score(X_test,y_test)
print(acc)
# + colab={"base_uri": "https://localhost:8080/"} id="EIKDimgJznX4" outputId="9afad229-7f01-423f-f5bd-1aa84524ca40"
y_pred = xgb_model.predict(X_test)
print(classification_report(y_test, y_pred))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.5 with Spark 2.1
# language: python
# name: python3-spark21
# ---
# ## Visualizing-Food-Insecurity-with-Pixie-Dust-and-Watson-Analytics
# _IBM Journey showing how to visualize US Food Insecurity with Pixie Dust and Watson Analytics._
#
# Often in data science we do a great deal of work to glean insights that have an impact on society or a subset of it and yet, often, we end up not communicating our findings or communicating them ineffectively to non data science audiences. That's where visualizations become the most powerful. By visualizing our insights and predictions, we, as data scientists and data lovers, can make a real impact and educate those around us that might not have had the same opportunity to work on a project of the same subject. By visualizing our findings and those insights that have the most power to do social good, we can bring awareness and maybe even change. This journey walks you through how to do just that, with IBM's Watson Studio, Pandas, Pixie Dust and Watson Analytics.
#
# For this particular journey, we focus on food insecurity throughout the US. Low access, diet-related diseases, race, poverty, geography and other factors are considered by using open government data. For some context, this is an increasingly relevant problem for the United States: two out of three adult Americans are considered obese, one third of American minors are considered obese, nearly ten percent of Americans have diabetes and nearly fifty percent of the African American population have heart disease. Even more, cardiovascular disease is the leading global cause of death, accounting for 17.3 million deaths per year, and rising. Native American populations more often than not do not have grocery stores on their reservations... and all of these trends are on the rise. The problem lies not only in low access to fresh produce, but in food culture, low education on healthy eating, and racial and income inequality.
#
# The government data that I use in this journey has been conveniently combined into a dataset for our use, which you can find in this repo under combined_data.csv. You can find the original, government data from the US Bureau of Labor Statistics https://www.bls.gov/cex/ and The United States Department of Agriculture https://www.ers.usda.gov/data-products/food-environment-atlas/data-access-and-documentation-downloads/.
# ### What is Watson Studio, Pixie Dust and Watson Analytics and why should I care enough about them to use them for my visualizations?
#
# IBM's Watson Studio, is an online browser platform where you can use notebooks or R Studio for your data science projects. Watson Studio is unique in that it automatically starts up a Spark instance for you, allowing you to work in the cloud without any extra work. Watson Studio also has open data available to you, which you can connect to your notebook. There are also other projects available, in the form of notebooks, which you can follow along with and apply to your own use case. Watson Studio also lets you save your work, share it and collaborate with others, much like I'm doing now!
#
# Pixie Dust is a visualization library you can use on Watson Studio. It is already installed into Watson Studio and once it's imported, it only requires one line of code (two words) to use. With that same line of code, you can pick and choose different values to showcase and visualize in whichever way you want from matplotlib, seaborn and bokeh. If you have geographic data, you can also connect to google maps and Mapbox, depending on your preference. Check out a tutorial on Pixie Dust here: https://ibm-watson-data-lab.github.io/pixiedust/displayapi.html#introduction
#
# IBM's Watson Analytics is another browser platform which allows you to input your data, conduct analysis on it and then visualize your findings. If you're new to data science, Watson recommends connections and visualizations with the data it has been given. These visualizations range from bar and scatter plots to predictive spirals, decision trees, heatmaps, trend lines and more. The Watson platform then allows you to share your findings and visualizations with others, completing your pipeline. Check out my visualizations with the link further down in the notebook, or in the images in this repo.
# ### Let's start with Watson Studio.
#
# Here's a tutorial on getting started with Watson Studio: https://datascience.ibm.com/docs/content/analyze-data/creating-notebooks.html.
#
# To summarize the introduction, you must first make an account and log in. Then, you can create a project (I titled mine: "Diet-Related Disease"). From there, you'll be able to add data and start a notebook. To begin, I used the combined_data.csv as my data asset. You'll want to upload it as a data asset and once that is complete, go into your notebook in the edit mode (click on the pencil icon next to your notebook on the dashboard). To load your data in your notebook, you'll click on the "1001" data icon in the top right. The combined_data.csv should show up. Click on it and select "Insert Pandas Data Frame". Once you do that, a whole bunch of code will show up in your first cell. Once you see that, run the cell and follow along with my tutorial!
#
# _Quick Note: In Github you can view all of the visualizations by selecting the circle with the dash in the middle at the top right of the notebook!_
# +
#import data and libraries (do this by using the 1001 button above to the right)
from io import StringIO
import requests
import json
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# @hidden_cell
# This function accesses a file in your Object Storage. The definition contains your credentials.
# You might want to remove those credentials before you share your notebook.
def get_object_storage_file_with_credentials_97ff18b62fe74e0496a8cb3fe9eaa15b(container, filename):
    """This function returns a StringIO object containing
    the file content from Bluemix Object Storage."""
    url1 = ''.join(['https://identity.open.softlayer.com', '/v3/auth/tokens'])
    data = {'auth': {'identity': {'methods': ['password'],
                                  'password': {'user': {'name': '<PASSWORD>', 'domain': {'id': '1e9b529a1b7a4b6185a4f2ea0acbe4d2'},
                                               'password': '<PASSWORD>.'}}}}}
    headers1 = {'Content-Type': 'application/json'}
    resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
    resp1_body = resp1.json()
    for e1 in resp1_body['token']['catalog']:
        if e1['type'] == 'object-store':
            for e2 in e1['endpoints']:
                if e2['interface'] == 'public' and e2['region'] == 'dallas':
                    url2 = ''.join([e2['url'], '/', container, '/', filename])
                    s_subject_token = resp1.headers['x-subject-token']
                    headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
                    resp2 = requests.get(url=url2, headers=headers2)
                    return StringIO(resp2.text)
df_data_1 = pd.read_csv(get_object_storage_file_with_credentials_97ff18b62fe74e0496a8cb3fe9eaa15b('DietRelatedDisease', 'combined_data.csv'))
df_data_1.head()
# -
# ### Cleaning data and Exploring
#
# This notebook starts out as a typical data science pipeline: exploring what our data looks like and cleaning the data. Though this is often considered the boring part of the job, it is extremely important. Without clean data, our insights and visualizations could be inaccurate or unclear.
#
# To initially explore, I used matplotlib to see a correlation matrix of our original data. I also looked at some basic statistics to get a feel for what kind of data we are looking at. I also went ahead and plotted using pandas and seaborn to make bar plots, scatterplots and regression plots.
# +
#check out our data!
# -
df_data_1.columns
df_data_1.describe()
#to see columns distinctly and evaluate their state
df_data_1['PCT_LACCESS_POP10'].unique()
df_data_1['PCT_REDUCED_LUNCH10'].unique()
df_data_1['PCT_DIABETES_ADULTS10'].unique()
df_data_1['FOODINSEC_10_12'].unique()
#looking at correlation in a table format
df_data_1.corr()
#checking out a correlation matrix with matplotlib
plt.matshow(df_data_1.corr())
#we notice that there is a great deal of variables which makes it hard to read!
#other stats
df_data_1.max()
df_data_1.min()
df_data_1.std()
# Plot counts of a specified column using Pandas
df_data_1.FOODINSEC_10_12.value_counts().plot(kind='barh')
# Bar plot example
sns.factorplot("PCT_SNAP09", "PCT_OBESE_ADULTS10", data=df_data_1,size=3,aspect=2)
# Regression plot
sns.regplot("FOODINSEC_10_12", "PCT_OBESE_ADULTS10", data=df_data_1, robust=True, ci=95, color="seagreen")
sns.despine();
# After looking at the data I realize that I'm only interested in seeing the connection between certain values and because the dataset is so large it's bringing in irrelevant information and creating noise. To change this, I created a smaller data frame, making sure to remove NaN and 0 values (0s in this dataset generally mean that a number was not recorded).
#create a dataframe of values that are most interesting to food insecurity
df_focusedvalues = df_data_1[["State", "County","PCT_REDUCED_LUNCH10", "PCT_DIABETES_ADULTS10", "PCT_OBESE_ADULTS10", "FOODINSEC_10_12", "PCT_OBESE_CHILD11", "PCT_LACCESS_POP10", "PCT_LACCESS_CHILD10", "PCT_LACCESS_SENIORS10", "SNAP_PART_RATE10", "PCT_LOCLFARM07", "FMRKT13", "PCT_FMRKT_SNAP13", "PCT_FMRKT_WIC13", "FMRKT_FRVEG13", "PCT_FRMKT_FRVEG13", "PCT_FRMKT_ANMLPROD13", "FOODHUB12", "FARM_TO_SCHOOL", "SODATAX_STORES11", "State_y", "GROC12", "SNAPS12", "WICS12", "PCT_NHWHITE10", "PCT_NHBLACK10", "PCT_HISP10", "PCT_NHASIAN10", "PCT_65OLDER10", "PCT_18YOUNGER10", "POVRATE10", "CHILDPOVRATE10"]]
#remove NaNs and 0s
df_focusedvalues = df_focusedvalues[(df_focusedvalues != 0).all(1)]
df_focusedvalues = df_focusedvalues.dropna(how='any')
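# A toy illustration of what the two filters above do (illustrative values only, not the real dataset): `(df != 0).all(1)` keeps only the rows whose columns are all non-zero, and `dropna` then removes rows with missing values.

```python
import pandas as pd

toy = pd.DataFrame({'a': [1, 0, 2, 3], 'b': [4, 5, None, 6]})
toy = toy[(toy != 0).all(1)]   # drops the row where a == 0
toy = toy.dropna(how='any')    # drops the row where b is NaN
print(list(toy['a']))  # [1, 3]
```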
# Before visualizing, a quick heatmap is created so that we can see what correlations we may want to visualize. I visualized a few of these relationships using seaborn, but I ultimately want to try out other visualizations. The quickest way to explore these is through Pixie Dust.
#look at heatmap of correlations with the dataframe to see what we should visualize
corr = df_focusedvalues.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values)
# We can immediately see that a fair amount of strong correlations and relationships exist. Some of these include 18 and younger and Hispanic, an inverse relationship between Asian and obese, a correlation between sodatax and Hispanic, African American and obesity as well as food insecurity, sodatax and obese minors, farmers markets and aid such as WIC and SNAP, obese minors and reduced lunches and a few more.
#
# Let's try and plot some of these relationships with seaborn.
#Percent of the population that is white vs SNAP aid participation (positive relationship)
sns.regplot("PCT_NHWHITE10", "SNAP_PART_RATE10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Percent of the population that is Hispanic vs SNAP aid participation (negative relationship)
sns.regplot("SNAP_PART_RATE10", "PCT_HISP10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Eligibility and use of reduced lunches in schools vs percent of the population that is Hispanic (positive relationship)
sns.regplot("PCT_REDUCED_LUNCH10", "PCT_HISP10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Percent of the population that is black vs percent of the population with diabetes (positive relationship)
sns.regplot("PCT_NHBLACK10", "PCT_DIABETES_ADULTS10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Percent of population with diabetes vs percent of population with obesity (positive relationship)
sns.regplot("PCT_DIABETES_ADULTS10", "PCT_OBESE_ADULTS10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
# With these simple regression plots we were able to glean from our data information such as in 2010, non-hispanic whites were highly correlated with the use of the SNAP program, or food stamps. We see that the hispanic population is not highly correlated in this time frame. This could be for a variety of reasons including eligibility, reporting, varying policies and use of the program. In our next graphs we see that in 2010, the percentage of the population who were black were highly correlated with diabetes. Next, we see that diabetes and obesity are highly correlated. These graphs do not represent any statistical significance, but they can help us understand and familiarize ourselves with the data.
# ### Now, let's visualize with Pixie Dust.
#
# Now that we've gained some initial insights, let's try out a different tool: Pixie Dust!
#
# As you can see in the notebook below, to activate Pixie Dust, we just import it and then write:
#
# ```display(your_dataframe_name)```
#
# After doing this your dataframe will show up in a column-row table format. To visualize your data, you can click the chart icon at the top left (looks like an arrow going up). From there you can choose from a variety of visuals. Once you select the type of chart you want, you can then select the variables you want to showcase. It's worth playing around with this to see how you can create the most effective visualizations for your audience. The notebook below showcases a couple options such as scatterplots, bar charts, line charts, and histograms.
# !pip install --user --upgrade pixiedust
import pixiedust
# + pixiedust={"displayParams": {"handlerId": "pieChart", "keyFields": "State", "title": "Food Hub by State", "valueFields": "FOODHUB12"}}
#looking at the dataframe table. Pixie Dust does this automatically, but to find it again you can click the table icon.
#Just to give some examples of what you can do with the data, I've created a pie chart of percent of food hubs in the country by state.
display(df_focusedvalues)
# + pixiedust={"displayParams": {"aggregation": "SUM", "handlerId": "scatterPlot", "histoChartType": "subplots", "keyFields": "FOODINSEC_10_12", "kind": "reg", "lineChartType": "subplots", "mpld3": "false", "rendererId": "seaborn", "rowCount": "500", "title": "Food Insecurity vs Percent of the population that is black", "valueFields": "PCT_LACCESS_CHILD10"}}
#using seaborn in Pixie Dust to look at Food Insecurity and the Percent of the population that is black in a scatter plot
display(df_focusedvalues)
# + pixiedust={"displayParams": {"aggregation": "SUM", "handlerId": "mapView", "keyFields": "State", "legend": "false", "mapboxtoken": "<KEY>", "mpld3": "false", "orientation": "vertical", "rendererId": "google", "rowCount": "500", "title": "Food Insecurity by State", "valueFields": "PCT_OBESE_ADULTS10"}}
#using matplotlib in Pixie Dust to view Food Insecurity by state in a bar chart
display(df_focusedvalues)
# + pixiedust={"displayParams": {"aggregation": "SUM", "handlerId": "lineChart", "keyFields": "PCT_OBESE_ADULTS10", "rendererId": "bokeh", "rowCount": "500", "title": "Percent of Population that is Black vs Percent of Population that is Obese", "valueFields": "PCT_NHBLACK10"}}
#using bokeh in Pixie Dust to view the percent of the population that is black vs the percent of the population that is obese in a line chart
display(df_focusedvalues)
# + pixiedust={"displayParams": {"aggregation": "SUM", "handlerId": "scatterPlot", "keyFields": "PCT_DIABETES_ADULTS10", "kind": "kde", "rendererId": "seaborn", "rowCount": "500", "title": "Obesity vs Diabetes", "valueFields": "PCT_OBESE_ADULTS10"}}
#using seaborn in Pixie Dust to view obesity vs diabetes in a scatterplot
display(df_focusedvalues)
# + pixiedust={"displayParams": {"aggregation": "SUM", "chartsize": "80", "clusterby": "CHILDPOVRATE10", "handlerId": "scatterPlot", "keyFields": "PCT_OBESE_CHILD11", "lineChartType": "subplots", "rendererId": "bokeh", "rowCount": "1000", "title": "Childhood Obesity vs Reduced Lunches", "valueFields": "PCT_REDUCED_LUNCH10"}}
#using matplotlib in Pixie Dust to view childhood obesity vs reduced school lunches in a scatterplot
display(df_focusedvalues)
# -
# ### Let's download our dataframe and work with it on Watson Analytics.
#
# Unfortunately, in Watson Studio we cannot download our dataframe as a csv in one line of code, but we can download it to Watson Studio so that it can be downloaded and used elsewhere as well as for other projects. I demonstrate how to do this in the notebook.
#
# Once you follow along, you can take the new .csv (found under "Data Services" --> "Object Storage" from the top button) and upload it to Watson Analytics. Again, if you do not have an account, you'll want to set one up. Once you are logged in and ready to go, you can upload the data (saved in this repo as df_focusedvalues.csv) to your Watson platform.
# + pixiedust={"displayParams": {"handlerId": "downloadFile"}}
#download dataframe as csv file to use in IBM Watson Analytics
# +
#Below is a tutorial on how to do this, but I found it a bit confusing and incompatible with Python 3. I suggest doing what I did below!
#https://medium.com/ibm-data-science-experience/working-with-object-storage-in-data-science-experience-python-edition-c96bc6c6101
# +
#First get your credentials by going to the "1001" button again and under your csv file selecting "Insert Credentials".
#The cell below will be hidden because it has my personal credentials so go ahead and insert your own.
# +
# @hidden_cell
credentials_98 = {
'auth_url':'https://identity.open.softlayer.com',
'project':'object_storage_97ff18b6_2fe7_4e04_96a8_cb3fe9eaa15b',
'project_id':'f9a95a3fa0b149f8abffc04d3f66e673',
'region':'dallas',
'user_id':'ed198de532fe42e0a4a454c68cb016ab',
'domain_id':'1e9b529a1b7a4b6185a4f2ea0acbe4d2',
'domain_name':'1461811',
'username':'member_790b848fe02fefa02fe23f8e64a27e57ec558643',
'password':"""<PASSWORD>.""",
'container':'DietRelatedDisease',
'tenantId':'undefined',
'filename':'combined_data.csv'
}
# -
df_focusedvalues.to_csv('df_focusedvalues.csv',index=False)
# +
import requests
import json
import pandas as pd
def put_file(credentials, local_file_name):
    """Upload the named local file to Bluemix Object Storage V3 using the given credentials."""
    with open(local_file_name, 'r') as f:
        my_data = f.read()
url1 = ''.join(['https://identity.open.softlayer.com', '/v3/auth/tokens'])
data = {'auth': {'identity': {'methods': ['password'], 'password': {'user': {'name': credentials['username'],'domain': {'id': credentials['domain_id']}, 'password': credentials['password']}}}}}
headers1 = {'Content-Type': 'application/json'}
resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
resp1_body = resp1.json()
for e1 in resp1_body['token']['catalog']:
if(e1['type']=='object-store'):
for e2 in e1['endpoints']:
if(e2['interface']=='public'and e2['region']== credentials['region']):
url2 = ''.join([e2['url'],'/', credentials['container'], '/', local_file_name])
s_subject_token = resp1.headers['x-subject-token']
headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
resp2 = requests.put(url=url2, headers=headers2, data = my_data )
print(resp2)
# -
put_file(credentials_98,'df_focusedvalues.csv')
# Once this is complete, go get your csv file from Data Services, Object Storage! (Find this above! ^)
# ### Using Watson to visualize our insights.
#
# Once you've set up your account, you can see that the Watson platform has three sections: data, discover and display. You uploaded your data to the "data" section, but now you'll want to go to the "discover" section. Under "discover" you can select your dataframe dataset for use. Once you've selected it, the Watson platform will suggest different insights to visualize. You can move forward with its selections or your own, or both. You can take a look at mine here (you'll need an account to view): https://ibm.co/2xAlAkq or see the screen shots attached to this repo. You can also go into the "display" section and create a shareable layout like mine (again you'll need an account): https://ibm.co/2A38Kg6.
#
# These visualizations let the user see the impact of food insecurity by state, distributed geographically alongside aid such as reduced school lunches; a map of diabetes by state; a predictive model for food insecurity and diabetes (showcasing the factors that, in combination, suggest a likelihood of food insecurity); drivers of adult diabetes; drivers of food insecurity; the relationship between the frequency of farmers market locations, food insecurity and adult obesity; and the relationship between farmers markets, the percent of the population that is Asian, food insecurity and poverty rates.
#
# By reviewing our visualizations both in Watson Studio and Watson Analytics, we learn that obesity and diabetes almost go hand in hand, along with food insecurity. We can also learn that this seems to be an inequality issue, both in income and race, with Black and Hispanic populations being more heavily impacted by food insecurity and diet-related diseases than White and Asian populations. We can also see that school-aged children who qualify for reduced lunch are more likely to be obese, whereas those in districts with a farm-to-school program are less likely to be obese.
#
# Like many data science investigations, this analysis could have a big impact on policy and people's approach to food insecurity in the U.S. What's best is that we can create many projects much like this in a quick time period and share them with others by using Pandas, Pixie Dust as well as Watson's predictive and recommended visualizations.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Python Operators
# Operators are used to perform operations on variables and values.
print(10 + 5)
# ### Python divides the operators in the following groups:
#
# - Arithmetic operators: \
# Arithmetic operators are used with numeric values to perform common mathematical operations
# + - / * // %
#
#
# - Assignment operators: \
# Assignment operators are used to assign values to variables \
# = += -= *=
#
#
# - Comparison operators \
# Comparison operators are used to compare two values \
# == != > <
#
# - Logical operators \
# Logical operators are used to combine conditional statements \
# and or not
#
#
# - Identity operators \
# Identity operators are used to compare the objects, not if they are equal, but if they are actually the same object, with the same memory location \
# is is not
#
#
# - Membership operators \
# Membership operators are used to test if a sequence is presented in an object \
# in not in
#
#
# - Bitwise operators \
# Bitwise operators are used to work on the individual bits of (binary) numbers \
# & | ^
#
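# To make the groups above concrete, the following cell exercises one operator from each group:

```python
# Arithmetic: + - / * // %
print(7 + 3, 7 - 3, 7 / 2, 7 // 2, 7 % 2)   # 10 4 3.5 3 1

# Assignment: x += 5 is shorthand for x = x + 5
x = 10
x += 5
print(x)   # 15

# Comparison
print(5 == 5, 5 != 3, 5 > 3, 5 < 3)   # True True True False

# Logical
print(5 > 3 and 5 < 10, not 5 > 3)    # True False

# Identity: b points to the same list object as a
a = [1, 2]
b = a
print(a is b, a is not [1, 2])        # True True

# Membership
print("an" in "banana", 4 not in [1, 2, 3])   # True True

# Bitwise: 6 is 110, 3 is 011 in binary
print(6 & 3, 6 | 3, 6 ^ 3)   # 2 7 5
```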
#
# ## Python Lists
#
# List
# Lists are used to store multiple items in a single variable.
#
# Lists are one of 4 built-in data types in Python used to store collections of data, the other 3 are Tuple, Set, and Dictionary, all with different qualities and usage.
#
# Lists are created using square brackets:
#
# +
#Create a List:
thislist = ["apple", "banana", "cherry"]
print(thislist)
# -
# ## List Items
# List items are ordered, changeable, and allow duplicate values.
#
# List items are indexed, the first item has index [0], the second item has index [1] etc.
print(len(thislist))
print(thislist[1])
thislist = ["apple", "banana", "cherry", "orange", "kiwi", "melon", "mango"]
print(thislist[2:5])
thislist = ["apple", "banana", "cherry"]
if "apple" in thislist:
print("Yes, 'apple' is in the fruits list")
thislist[1] = "blackcurrant"
print(thislist)
thislist = ["apple", "banana", "cherry", "orange", "kiwi", "mango"]
thislist[1:3] = ["blackcurrant", "watermelon"]
print(thislist)
# ### Insert Items
# To insert a new list item, without replacing any of the existing values, we can use the insert() method.
#
# The insert() method inserts an item at the specified index:
thislist = ["apple", "banana", "cherry"]
thislist.insert(2, "watermelon")
print(thislist)
# ### Append Items
# To add an item to the end of the list, use the append() method:
thislist = ["apple", "banana", "cherry"]
thislist.append("orange")
print(thislist)
#
# ### Extend List
# To append elements from another list to the current list, use the extend() method.
#
# Example
# Add the elements of tropical to thislist:
thislist = ["apple", "banana", "cherry"]
tropical = ["mango", "pineapple", "papaya"]
thislist.extend(tropical)
print(thislist)
#
# ### Remove Specified Item
# The remove() method removes the specified item.
#
#
thislist = ["apple", "banana", "cherry"]
thislist.remove("banana")
print(thislist)
#
# ## Remove Specified Index
# The pop() method removes the specified index.
thislist = ["apple", "banana", "cherry"]
thislist.pop(1)
print(thislist)
#
# ### Loop Through a List
# You can loop through the list items by using a for loop:
thislist = ["apple", "banana", "cherry"]
for x in thislist:
print(x)
#
# ### Looping Using List Comprehension
# List Comprehension offers the shortest syntax for looping through lists:
#
# Example
# A short hand for loop that will print all items in a list:
thislist = ["apple", "banana", "cherry"]
[print(x) for x in thislist]
#
# ## List Comprehension
# List comprehension offers a shorter syntax when you want to create a new list based on the values of an existing list.
#
# Example:
#
# Based on a list of fruits, you want a new list, containing only the fruits with the letter "a" in the name.
#
# Without list comprehension you will have to write a for statement with a conditional test inside:
# +
fruits = ["apple", "banana", "cherry", "kiwi", "mango"]
newlist = []
for x in fruits:
if "a" in x:
newlist.append(x)
print(newlist)
# -
# With list comprehension you can do all that with only one line of code:
# +
fruits = ["apple", "banana", "cherry", "kiwi", "mango"]
newlist = [x for x in fruits if "a" in x]
print(newlist)
# -
#
# ## Copy a List
# You cannot copy a list simply by typing list2 = list1, because: list2 will only be a reference to list1, and changes made in list1 will automatically also be made in list2.
#
# There are ways to make a copy, one way is to use the built-in List method copy().
# +
thislist = ["apple", "banana", "cherry"]
mylist = thislist.copy()
print(mylist)
thislist = ["apple", "banana", "cherry"]
mylist = list(thislist)
print(mylist)
# -
# ## Join Two Lists
# There are several ways to join, or concatenate, two or more lists in Python.
#
# One of the easiest ways is by using the + operator.
# +
list1 = ["a", "b", "c"]
list2 = [1, 2, 3]
list3 = list1 + list2
print(list3)
# +
list1 = ["a", "b" , "c"]
list2 = [1, 2, 3]
for x in list2:
list1.append(x)
print(list1)
# +
list1 = ["a", "b" , "c"]
list2 = [1, 2, 3]
list1.extend(list2)
print(list1)
# -
# ## List Methods
#
# Python has a set of built-in methods that you can use on lists.
#
# Method Description
# - append() Adds an element at the end of the list
# - clear() Removes all the elements from the list
# - copy() Returns a copy of the list
# - count() Returns the number of elements with the specified value
# - extend() Add the elements of a list (or any iterable), to the end of the current list
# - index() Returns the index of the first element with the specified value
# - insert() Adds an element at the specified position
# - pop() Removes the element at the specified position
# - remove() Removes the item with the specified value
# - reverse() Reverses the order of the list
# - sort() Sorts the list
#
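# The methods from the table that were not shown earlier can be tried out in one cell:

```python
nums = [3, 1, 4, 1, 5]

print(nums.count(1))   # 2 -> how many times 1 appears
print(nums.index(4))   # 2 -> position of the first 4

nums.sort()            # sorts the list in place
print(nums)            # [1, 1, 3, 4, 5]

nums.reverse()         # reverses the list in place
print(nums)            # [5, 4, 3, 1, 1]

nums.clear()           # removes every element
print(nums)            # []
```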
# ## Tuple
# Tuples are used to store multiple items in a single variable.
#
# A tuple is a collection which is ordered and unchangeable.
#
# Tuples are written with round brackets.
#
thistuple = ("apple", "banana", "cherry")
print(thistuple)
thistuple = ("apple", "banana", "cherry")
for x in thistuple:
print(x)
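# Because tuples are unchangeable, assigning to an item raises a TypeError; the usual workaround is to convert to a list, change it, and convert back:

```python
thistuple = ("apple", "banana", "cherry")

# Direct assignment fails because tuples are immutable.
try:
    thistuple[1] = "blackcurrant"
except TypeError as e:
    print("Cannot modify a tuple:", e)

# Workaround: list -> change -> tuple
temp = list(thistuple)
temp[1] = "blackcurrant"
thistuple = tuple(temp)
print(thistuple)   # ('apple', 'blackcurrant', 'cherry')
```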
# ### Set
# Sets are used to store multiple items in a single variable.
#
# A set is a collection which is both unordered and unindexed.
#
# Sets are written with curly brackets.
thisset = {"apple", "banana", "cherry"}
print(thisset)
#
# ## Access Items
# You cannot access items in a set by referring to an index or a key.
#
# But you can loop through the set items using a for loop, or ask if a specified value is present in a set, by using the in keyword.
# +
thisset = {"apple", "banana", "cherry"}
for x in thisset:
print(x)
# -
#
# ### Add Items
# Once a set is created, you cannot change its items, but you can add new items.
#
# To add one item to a set use the add() method.
# +
thisset = {"apple", "banana", "cherry"}
thisset.add("orange")
print(thisset)
# -
#
# ## Add Sets
# To add items from another set into the current set, use the update() method.
# +
thisset = {"apple", "banana", "cherry"}
tropical = {"pineapple", "mango", "papaya"}
thisset.update(tropical)
print(thisset)
# -
#
# ## Remove Item
# To remove an item in a set, use the remove() or the discard() method. remove() raises an error if the item is not present; discard() does not.
# +
thisset = {"apple", "banana", "cherry"}
thisset.remove("banana")
print(thisset)
# +
thisset = {"apple", "banana", "cherry"}
thisset.discard("banana")
print(thisset)
# +
thisset = {"apple", "banana", "cherry"}
x = thisset.pop()
print(x)
print(thisset)
# -
#
# ## Loop Items
# You can loop through the set items by using a for loop:
#
#
# +
thisset = {"apple", "banana", "cherry"}
for x in thisset:
print(x)
# -
#
# ## Join Two Sets
# There are several ways to join two or more sets in Python.
#
# You can use the union() method that returns a new set containing all items from both sets, or the update() method that inserts all the items from one set into another:
# +
set1 = {"a", "b" , "c"}
set2 = {1, 2, 3}
set3 = set1.union(set2)
print(set3)
# +
set1 = {"a", "b" , "c"}
set2 = {1, 2, 3}
set1.update(set2)
print(set1)
# -
#
# ## Set Methods
# Python has a set of built-in methods that you can use on sets.
#
# Method Description
# - add() Adds an element to the set
# - clear() Removes all the elements from the set
# - copy() Returns a copy of the set
# - difference() Returns a set containing the difference between two or more sets
# - difference_update() Removes the items in this set that are also included in another, specified set
# - discard() Remove the specified item
# - intersection() Returns a set, that is the intersection of two other sets
# - intersection_update() Removes the items in this set that are not present in other, specified set
# - isdisjoint() Returns whether two sets have no elements in common
# - issubset() Returns whether another set contains this set or not
# - issuperset() Returns whether this set contains another set or not
# - pop() Removes an element from the set
# - remove() Removes the specified element
# - symmetric_difference() Returns a set with the symmetric differences of two sets
# - symmetric_difference_update() inserts the symmetric differences from this set and another
# - union() Return a set containing the union of sets
# - update() Update the set with the union of this set and others
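# The comparison-style methods from the table can be tried out like this:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

print(a.difference(b))            # elements in a but not in b: {1, 2}
print(a.intersection(b))          # elements in both: {3, 4}
print(a.symmetric_difference(b))  # elements in exactly one: {1, 2, 5, 6}

print({1, 2}.issubset(a))         # True
print(a.issuperset({1, 2}))       # True
print(a.isdisjoint({7, 8}))       # True -> no elements in common
```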
#
# ## Dictionary
# Dictionaries are used to store data values in key:value pairs.
#
# A dictionary is a collection which is changeable and does not allow duplicate keys. As of Python 3.7, dictionaries also preserve insertion order.
#
# Dictionaries are written with curly brackets, and have keys and values:
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
print(thisdict)
#
# ## Dictionary Items
# Dictionary items are changeable and do not allow duplicate keys; as of Python 3.7 they keep their insertion order.
#
# Dictionary items are presented in key:value pairs, and can be referred to by using the key name.
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
print(thisdict["brand"])
print(len(thisdict))
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
print(type(thisdict))
x = thisdict["model"]
x
#
# ### Get Keys
# The keys() method will return a list of all the keys in the dictionary.
x = thisdict.keys()
x
# +
car = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
x = car.keys()
print(x) #before the change
car["color"] = "white"
print(x) #after the change
# -
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict["year"] = 2018
# +
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict.update({"year": 2020})
thisdict
# -
# ## Removing Items
# There are several methods to remove items from a dictionary:
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict.pop("model")
print(thisdict)
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict.popitem()
print(thisdict)
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
del thisdict["model"]
print(thisdict)
#
# ## Loop Through a Dictionary
# You can loop through a dictionary by using a for loop.
#
# When looping through a dictionary, the loop variable takes the keys of the dictionary, but there are methods to return the values as well.
#
#
# +
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
# Print all key names in the dictionary, one by one:
for x in thisdict:
print(x)
# +
# Print all values in the dictionary, one by one:
for x in thisdict:
print(thisdict[x])
# +
#You can also use the values() method to return values of a dictionary:
for x in thisdict.values():
print(x)
# +
#You can use the keys() method to return the keys of a dictionary:
for x in thisdict.keys():
print(x)
# +
#Loop through both keys and values, by using the items() method:
for x, y in thisdict.items():
print(x, y)
# -
#
# ### Copy a Dictionary
# You cannot copy a dictionary simply by typing dict2 = dict1, because: dict2 will only be a reference to dict1, and changes made in dict1 will automatically also be made in dict2.
#
# There are ways to make a copy, one way is to use the built-in Dictionary method copy().
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
mydict = thisdict.copy()
print(mydict)
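# Just as with lists, another way to make a copy is the built-in dict() constructor; changes to the copy do not affect the original:

```python
thisdict = {
  "brand": "Ford",
  "model": "Mustang",
  "year": 1964
}
mydict = dict(thisdict)
mydict["year"] = 2020      # change the copy...
print(thisdict["year"])    # 1964 -> ...the original is untouched
print(mydict)
```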
#
# ### Nested Dictionaries
# A dictionary can contain dictionaries; this is called a nested dictionary.
#
#
# +
#Create a dictionary that contain three dictionaries:
myfamily = {
"child1" : {
"name" : "Emil",
"year" : 2004
},
"child2" : {
"name" : "Tobias",
"year" : 2007
},
"child3" : {
"name" : "Linus",
"year" : 2011
}
}
# -
# ### Dictionary Methods
# Python has a set of built-in methods that you can use on dictionaries.
#
# Method Description
# - clear() Removes all the elements from the dictionary
# - copy() Returns a copy of the dictionary
# - fromkeys() Returns a dictionary with the specified keys and value
# - get() Returns the value of the specified key
# - items() Returns a list containing a tuple for each key value pair
# - keys() Returns a list containing the dictionary's keys
# - pop() Removes the element with the specified key
# - popitem() Removes the last inserted key-value pair
# - setdefault() Returns the value of the specified key. If the key does not exist: insert the key, with the specified value
# - update() Updates the dictionary with the specified key-value pairs
# - values() Returns a list of all the values in the dictionary
#
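# A few methods from the table that were not shown earlier, get(), setdefault() and fromkeys(), work like this:

```python
car = {"brand": "Ford", "model": "Mustang", "year": 1964}

# get() avoids a KeyError for missing keys and can return a default.
print(car.get("model"))             # Mustang
print(car.get("color", "unknown"))  # unknown

# setdefault() returns the value if the key exists,
# otherwise it inserts the key with the given value first.
print(car.setdefault("color", "white"))   # white
print(car["color"])                       # white

# fromkeys() builds a new dictionary from a sequence of keys.
blank = dict.fromkeys(["a", "b"], 0)
print(blank)   # {'a': 0, 'b': 0}
```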
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="eB5A7QmWKqYX"
# #Semester Project: Milestone 2
#
# ##CC5206, Spring Semester 2020
#
# Students: <NAME>, <NAME>, <NAME>, <NAME>
#
# Instructors: <NAME>, <NAME>
#
# Due date: November 4, 2020
# + [markdown] id="SmtjoLaz8i3g"
# ## Introduction
#
# The United States is among the countries with the highest yearly rate of deaths at the hands of the police. The other countries at the top of the list all have either internal conflicts (Venezuela and Syria) or severe policies on the use of lethal force against criminals (Brazil and the Philippines). It is therefore interesting to analyze the profile of the victims in the United States, and to see whether characteristics visible at a glance influence the outcome of the encounter, such as whether the victim was armed, whether they tried to flee, or their race or gender, along with any correlation among the attributes.
#
# For Milestone 2 the dataset was replaced with a more complete one covering the same kind of data. The new dataset contains over 3000 more instances than the original, plus new attributes such as the geography of the place where the event occurred, the weapon the victim was carrying when confronting the police, and whether the officer in question was later charged with a crime. The new dataset contains a total of 8629 rows and 29 columns. Two additional datasets were used to enrich the analysis: the unemployment rate and the violent crime rate by state.
#
# The data was extracted from the following sources.
# * https://mappingpoliceviolence.org/ (project dataset)
# * https://ucr.fbi.gov/crime-in-the-u.s (crime index by area)
# * https://datosmacro.expansion.com/paro/ (unemployment rate by area)
#
# + id="r1CPaHrzypZ7" colab={"base_uri": "https://localhost:8080/"} outputId="3be6386a-2329-498d-ec38-47f7595b6c10"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_excel("https://mappingpoliceviolence.org/s/MPVDatasetDownload.xlsx")
unemploiment = pd.read_csv("https://raw.githubusercontent.com/TinSlim/Perfil-An-lisis-de-v-ctimas-fatales-por-acci-n-policial-en-los-Estados-Unidos/main/desempleo_new.csv",
delimiter=";")
crime_index = pd.read_csv("https://raw.githubusercontent.com/TinSlim/Perfil-An-lisis-de-v-ctimas-fatales-por-acci-n-policial-en-los-Estados-Unidos/main/Violent_Crime_rate.csv",
delimiter=";")
# !pip3 install plotly --upgrade
import plotly
# + [markdown] id="xSDFQjg3_n-Z"
# ## Data Cleaning
#
# First it is important to look at how much data the dataset contains and also check how many null (NaN) values it has.
# + id="yiSzTNvm0f3V" colab={"base_uri": "https://localhost:8080/", "height": 382} outputId="70c07d3d-662f-41a9-8ad6-d741cc160899"
print("Number of rows: ", len(data))
print("Total null values: ", data.isnull().sum().sum(), "\n")
data.head(1)
# + [markdown] id="NuQYk00NAmhX"
# To begin the data cleaning, we drop the attributes that are not relevant to the analysis and interpretation to be carried out, along with those whose number of null values is too high. The irrelevant attributes are "Street Address of Incident", "Victim's name", "URL of image of victim", "WaPo ID", "Fatal Encounters ID", "MPV ID", "Agency responsible for death", "Link to news article or photo of official document", "Zipcode", "ORI Agency Identifier" and "A brief description of the circumstances surrounding the death". The attributes "Off-Duty Killing" and "Body Camera", while interesting to analyze, contain a large number of null values and are therefore not considered. Finally, the attribute "Unarmed/Did Not Have an Actual Weapon" specifies whether the person was armed, but the existence of another attribute that specifies the weapon carried, or the absence of one, makes it redundant. The same applies to "Official disposition of death" and "Criminal charges", since the second is a more compact way of describing the information presented by the first.
#
# To reduce noise in the dataset, ambiguous or badly entered values were removed, such as "Unknown" values or ages that included letters. Very low-frequency values that could be considered noise or unrepresentative were also removed; for example, uncommon causes of death with fewer than 10 instances in the whole dataset.
# + id="29LoditgAHOd" colab={"base_uri": "https://localhost:8080/"} outputId="eb4b0f1a-97bb-4ca5-b326-01173a4f685e"
tamanho_inicial = len(data)
data_cleaned = data.drop(["Street Address of Incident","Victim's name","URL of image of victim",
"WaPo ID (If included in WaPo database)","Fatal Encounters ID","MPV ID",
"Agency responsible for death","Link to news article or photo of official document",
"Zipcode","ORI Agency Identifier (if available)","A brief description of the circumstances surrounding the death",
"Off-Duty Killing?", "Body Camera (Source: WaPo)","Unarmed/Did Not Have an Actual Weapon", "Official disposition of death (justified or other)"
],axis=1)
data_cleaned = data_cleaned.dropna()
data_cleaned = data_cleaned.rename(columns = {"Geography (via Trulia methodology based on zipcode population density: http://jedkolko.com/wp-content/uploads/2015/05/full-ZCTA-urban-suburban-rural-classification.xlsx )":"geography",
"Victim's age":"age",
"Victim's gender":"gender",
"Date of Incident (month/day/year)":"Date",
"Official disposition of death (justified or other)": "Disposition of death",
"Victim's race":"race",
"Cause of death":"manner_of_death",
"State":"state",
"Fleeing (Source: WaPo)":"flee",
"Alleged Threat Level (Source: WaPo)":"threat_level",
"Symptoms of mental illness?":"mental_ilness",
"Alleged Weapon (Source: WaPo and Review of Cases Not Included in WaPo Database)":"weapon",
"Disposition of death":"disposition",
"Criminal Charges?":"criminal_charges",
"County":"county",
"City":"city"})
data_cleaned = data_cleaned[data_cleaned['age'] != 'Unknown']
data_cleaned = data_cleaned[data_cleaned['age'] != '40s']
data_cleaned = data_cleaned[data_cleaned['gender'] != 'Unknown']
data_cleaned = data_cleaned[data_cleaned['gender'] != 'Transgender']
data_cleaned = data_cleaned[data_cleaned['gender'] != 'Male ']
data_cleaned["flee"] = data_cleaned["flee"].str.lower()
data_cleaned = data_cleaned[data_cleaned['race'] != 'Unknown race']
data_cleaned = data_cleaned[data_cleaned['race'] != 'Unknown Race']
data_cleaned = data_cleaned[(data_cleaned['manner_of_death'] != 'Gunshot, Bean Bag Gun') & (data_cleaned['manner_of_death'] != 'Gunshot, Beanbag Gun') &
(data_cleaned['manner_of_death'] != 'Gunshot, Pepper Spray') & (data_cleaned['manner_of_death'] != 'Gunshot, Police Dog') &
(data_cleaned['manner_of_death'] != 'Gunshot, Unspecified Less Lethal Weapon') & (data_cleaned['manner_of_death'] != 'Other') &
(data_cleaned['manner_of_death'] != 'Physical restraint') & (data_cleaned['manner_of_death'] != 'Physical Restraint') &
(data_cleaned['manner_of_death'] != 'Bomb') & (data_cleaned['manner_of_death'] != 'Asphyxiated') &
(data_cleaned['manner_of_death'] != 'Beaten') & (data_cleaned['manner_of_death'] != 'Pepper Spray') &
(data_cleaned['manner_of_death'] != 'Gunshot, Vehicle')]
data_cleaned = data_cleaned.replace(to_replace="Gunshot, Taser, Baton", value="Gunshot, Taser")
data_cleaned = data_cleaned.replace(to_replace="Taser", value="Tasered")
cambiar=[]
for p in data_cleaned["criminal_charges"]:
if "charged" in p.lower() and p.lower() not in cambiar:
cambiar.append(p)
for p in cambiar:
data_cleaned = data_cleaned.replace(to_replace=p, value="Charged")
tamanho_final = len(data_cleaned)
print("Size before processing: ", tamanho_inicial, "\nSize after processing: ", tamanho_final, "\n")
print("Difference: ", tamanho_inicial - tamanho_final)
print("Percentage removed: ", int(100 - (tamanho_final * 100) / tamanho_inicial), '%')
# + [markdown] id="vJPMFLkp08Tk"
# Although 41% of the total data was discarded, the remaining volume of data is still reasonably large for analysis.
# + id="gHwZe2omDfWA" colab={"base_uri": "https://localhost:8080/", "height": 210} outputId="adaec4c8-873a-485a-e9db-559d2ff73c73"
data_cleaned.head(4)
# + [markdown] id="I1R_LmFtv0bB"
# After cleaning we obtain a dataset with 5091 rows and 14 attributes. The data from the crime-rate and unemployment datasets was then added, which required some work to include it as new attributes. In both cases the rates were ordered by date and state, with unemployment monthly and crime annual. To concatenate these tables, the death dates were first reduced to month and year only, and a new "key" column mixing the date and state of the incident was created. This column was also added to copies of the "unemploiment" and "crime_index" datasets, which were then merged with the working dataset. The "key" column is no longer needed afterwards, so it is dropped, and some attributes were renamed for consistency with the rest.
#
# It is important to mention that state-level violent crime rates in the United States for 2020 have not yet been published on the official FBI page, the source used for previous years. For this analysis, the crime rates from the end of 2019 were used for 2020.
# + colab={"base_uri": "https://localhost:8080/", "height": 261} id="SKvVOF6wvu5T" outputId="b7477d44-603d-4773-8190-2b08da3c0ea5"
data_without_day = data_cleaned.copy()
data_without_day['Date'] = data_without_day['Date'].apply(lambda x: x.strftime('%Y-%m'))
data_without_day['key'] = data_without_day[['Date', 'state']].apply(lambda x: ''.join(x), axis=1)
unemploiment_ch = unemploiment.rename(columns = {"date_my":"Date"})
unemploiment_ch['key'] = unemploiment_ch[['Date', 'state']].apply(lambda x: ''.join(x), axis=1)
unemploiment_ch = unemploiment_ch.drop(["Date","state"],axis = 1)
crime_index_ch = crime_index
crime_index_ch['key'] = crime_index[['Date', 'State']].apply(lambda x: ''.join(x), axis=1)
crime_index_ch = crime_index_ch.drop(["Date","State"],axis = 1)
concatenate = pd.merge(data_without_day,unemploiment_ch, on = 'key')
concatenate = pd.merge(concatenate,crime_index_ch, on = 'key')
concatenate = concatenate.drop(["key"],axis = 1)
concatenate = concatenate.rename(columns = {"coef":"unemployment_rate",
"Violent Crime Rate":"crime_rate",
"Date":"date"})
data_cleaned = concatenate
concatenate.head(5)
# + [markdown] id="VaPPR0Bqx-mF"
# Here is a preliminary view of the data to be worked with: 5091 rows with 16 attributes:
#
# * age (int)
#
# * gender (categorical)
#
# * race (categorical)
#
# * date (year-month-day)
#
# * city (categorical)
#
# * state (categorical)
#
# * county (categorical)
#
# * manner_of_death (categorical)
#
# * criminal_charges (categorical)
#
# * mental_ilness (categorical)
#
# * weapon (categorical)
#
# * threat_level (categorical)
#
# * flee (categorical)
#
# * geography (categorical)
#
# * unemployment_rate (float)
#
# * crime_rate (float)
# + [markdown] id="JlY8zEyvCa87"
# # Data Analysis
# + [markdown] id="rBsdqmxP1-cn"
# The following section presents various plots generated from the data after cleaning. Based on these, an initial exploration of the distribution of the data across its attributes is carried out.
# + id="H6mK5q29yvQG" colab={"base_uri": "https://localhost:8080/", "height": 354} outputId="d1f1461b-21b3-41b3-e855-f42269d90333"
datos = data_cleaned
males = datos[datos['gender'] == 'Male']
females = datos[datos['gender'] == 'Female']
age = datos['age']
plt.hist(males['age'], label="Males")
plt.hist(females['age'], label="Females")
plt.legend()
plt.title("Age")
plt.show()
print("Mean: ", round(age.mean(), 2))
print("Std dev: ", round(age.std(), 2))
print("Median: ", age.median())
print("Mode: ", age.mode()[0])
# + [markdown] id="JUauy8X42LiK"
# The age plot shows a large difference between the number of women and men in the dataset. It also shows a sharp increase in the number of people involved in police shootings starting at roughly age 23-24 for men, the point where the data is most concentrated, after which the counts decline with age.
# For women the counts remain fairly stable across the 20-40 age range and then decline with age.
# + id="6Z-UmDrq8Q8k" colab={"base_uri": "https://localhost:8080/", "height": 280} outputId="c8ed9ebf-3e5a-4426-94c7-1d316ac1c75c"
gender = datos.groupby('gender').size()
gender = gender.to_dict()
labels = gender.keys()
total = 0
for p in gender.values():
total+=p
porcentajes = []
for p in gender.values():
porcentajes.append(p/total)
explode = (0, 0.1)
fig1, ax1 = plt.subplots()
fig1.suptitle('Genders')
ax1.pie(porcentajes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=False, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
# + [markdown] id="sAIkDziM5u2J"
# The chart shows even more clearly the large difference between male and female victims. This could be due to an overrepresentation of men in the dataset, but given the quantity and origin of the data it is reasonable to conclude that this is not the case.
# + id="bsgPEsrh8N6P" colab={"base_uri": "https://localhost:8080/", "height": 400} outputId="bf650833-f423-497b-b283-2c7974f1578a"
race = datos.groupby('race').size()
race = race.to_dict()
race["A"] = race.pop("Asian")
race["B"] = race.pop("Black")
race["W"] = race.pop("White")
race["N"] = race.pop("Native American")
race["P"] = race.pop("Pacific Islander")
race["H"] = race.pop("Hispanic")
labels = race.keys()
total = 0
for p in race.values():
total+=p
porcentajes = []
for p in race.values():
porcentajes.append(p/total)
explode = (0, 0.1,0,0,0,0)
fig1, ax1 = plt.subplots(figsize=(8,6))
fig1.suptitle('Races')
ax1.pie(porcentajes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=False, startangle=90)
ax1.axis('equal')
plt.show()
# + [markdown] id="2kVuYmMm8Jbd"
# The labels correspond to the races of the people shot by the police:
#
# W: White
#
# B: Black
#
# A: Asian
#
# N: Native American
#
# H: Hispanic
#
# P: Pacific Islander
#
# Se observa que en la mayoría de los casos las personas fallecidas son de raza W, seguido de B y H, mientras tanto las otras razas tienen representación considerablemente menor con respecto a las otras en los datos
# + colab={"base_uri": "https://localhost:8080/"} id="94qZuzwL2cVb" outputId="1c6a7e84-c4e0-445a-9bd3-16fdd361b3fa"
datos.groupby('manner_of_death').size()
# + id="luk4PE8H8MCQ" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="85162b95-750f-4fe3-a65a-7ecb14271813"
manner_of_death = datos.groupby('manner_of_death').size()
manner_of_death = manner_of_death.to_dict()
fig, axs = plt.subplots(1, 1, figsize=(5, 5), sharey=True)
axs.bar(manner_of_death.keys(),manner_of_death.values())
fig.suptitle('Manner of death')
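# A small sketch (hypothetical rows) of the grouping above: groupby().size()
# already returns a Series, and Series.plot.bar() can draw the same chart
# without converting to a dict first.

```python
import pandas as pd

# hypothetical miniature of the manner_of_death column
datos = pd.DataFrame({"manner_of_death": ["shot", "shot", "shot and Tasered"]})
counts = datos.groupby("manner_of_death").size()
# counts.plot.bar() would draw the bar chart directly
print(counts["shot"])  # 2
```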
# + [markdown] id="-roeD3PTAV2R"
# The victims died mostly from gunshots alone, but a non-negligible number died from a combination of gunshot and Taser; deaths from a Taser alone or a vehicle alone are minimal compared to the others.
# + id="oUiFtyM28JoD" colab={"base_uri": "https://localhost:8080/", "height": 251} outputId="edac825d-faa5-4513-d6eb-53bcbe23ed11"
state = datos.groupby('state').size()
state = state.to_dict()
fig, axs = plt.subplots(1, 1, figsize=(16, 3), sharey=True)
axs.bar(state.keys(),state.values())
fig.suptitle('State where the shot happened')
# + [markdown] id="5yQFfdhk74Ij"
# The labels are the two-letter postal abbreviations of each state. This chart shows where the police shootings occurred,
# with CA (California) and TX (Texas) being the states with the most deaths from police shootings.
# + id="rOtgwP8I8HMg" colab={"base_uri": "https://localhost:8080/", "height": 251} outputId="6af7d638-8595-48d6-f07e-f3cafb1203a0"
flee = datos.groupby('flee').size()
flee = flee.to_dict()
fig, axs = plt.subplots(1, 1, figsize=(10, 3), sharey=True)
axs.bar(flee.keys(),flee.values())
fig.suptitle('Fleeing attempt')
# + [markdown] id="JJ_ph55Q_Rw_"
# This chart shows whether the person attempted to flee from the police and, if so, by what means. Most people did not attempt to flee at the moment of the encounter with the police, and the rest fled mostly by car or on foot.
# + id="ugHr2NVU_e2P" colab={"base_uri": "https://localhost:8080/", "height": 251} outputId="35b4fc67-6a75-4152-ed16-e6192668a424"
threat_level= datos.groupby('threat_level').size()
threat_level = threat_level.to_dict()
fig, axs = plt.subplots(1, 1, figsize=(4, 3), sharey=True)
axs.bar(threat_level.keys(),threat_level.values())
fig.suptitle('Threat level')
# + [markdown] id="A76mksTeAGMe"
# The chart shows the threat level posed by the victim. In most cases it was classified as "attack", followed by "other", and lastly a small undetermined fraction.
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="JItDofjmC1aX" outputId="6b65eab0-0c29-4ec6-baab-a235cd182fb1"
criminal_charges = datos.groupby('criminal_charges').size()
criminal_charges = criminal_charges.to_dict()
fig, axs = plt.subplots(1, 1, figsize=(5, 5), sharey=True)
axs.bar(criminal_charges.keys(),criminal_charges.values())
fig.suptitle('criminal_charges')
# + [markdown] id="KnV_t83oJlDl"
# Most police shooting cases end with no known charges against the officer.
# + id="qJb_O-Ijw799" colab={"base_uri": "https://localhost:8080/"} outputId="bacb3714-8c66-4ef6-f65c-edaed0d6a525"
from urllib.request import urlopen
import json
# %pip install -U plotly
import plotly.express as px
import requests
import urllib
# load the counties geojson
with urlopen("https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json") as response:
counties_geojson = json.load(response)
# load the states geojson
with urlopen("https://raw.githubusercontent.com/PublicaMundi/MappingAPI/master/data/geojson/us-states.json") as response:
state_geojson = json.load(response)
# load the CSV with the FIPS codes
state_fips = pd.read_csv("https://raw.githubusercontent.com/kjhealy/fips-codes/master/state_fips_master.csv")
state_fips = state_fips.drop(["state_name","long_name","sumlev",
"region", "division", "state", "region_name", "division_name"],axis=1)
state_fips = state_fips.rename(columns = {"state_abbr":"state"})
for p in range(10):
state_fips = state_fips.replace(to_replace=p, value="0"+str(p))
datos = pd.merge(datos, state_fips, on='state')
data_state_fips_count = datos.groupby('fips', as_index=False).count() # every column except fips now holds the same count
data_state_fips_count['Cantidad de Casos']=data_state_fips_count['date'] # so we copy one into a column with a representative name
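# A minimal sketch (hypothetical codes) of the zero-padding step above:
# str.zfill(2) left-pads with zeros in one vectorized call, replacing the
# replace() loop over range(10).

```python
import pandas as pd

# hypothetical integer FIPS codes
fips = pd.Series([1, 6, 48])
padded = fips.astype(str).str.zfill(2)
print(padded.tolist())  # ['01', '06', '48']
```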
# + id="09Dxwqvt4uBz" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="5a1c3415-1ff1-4fa2-b8b0-3f95bddb09c3"
# And we plot
# -- FIPS:State
fig = px.choropleth(data_state_fips_count, geojson=state_geojson, locations='fips', color='Cantidad de Casos',
color_continuous_scale="Viridis",
range_color=(0, 716),
scope="usa"
)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
# + [markdown] id="OfOccZY45pU7"
# This map shows the density of victims per state in the United States using a color scale.
# + [markdown] id="nBS7fJvXHTMm"
# # Questions and Problems:
#
# Based on the original motivation and the dataset exploration presented above, we formulate the following questions, to be answered through data mining:
#
# * Is it possible to predict the race of a person killed by police action from the other attributes? (decision tree / KNN / SVM)
#
# * Is it possible to predict a person's cause of death after oversampling the data? (decision tree / KNN / SVM)
#
# * Is there a group with certain characteristics that is shot more frequently by the police? (cluster density)
#
# * Is it possible to predict whether a person tries to flee from the police and, if so, by what means, based on the other attributes? (decision tree / KNN / SVM)
#
#
#
# + [markdown] id="WziviYoBc7hF"
# # Pre-processing
#
# Before answering the questions, the data is pre-processed to replace categorical values with numeric ones as follows:
#
# ## gender
# Male: 0
#
# Female: 1
#
# ## race
# White: 0
#
# Black: 1
#
# Asian: 2
#
# Native American: 3
#
# Hispanic: 4
#
# Pacific Islander: 5
#
# ## city
# City names are sorted alphabetically and assigned values from 0 upwards, 0 being the first city in alphabetical order.
#
# ## state
# State names are sorted alphabetically and assigned values from 0 upwards, 0 being the first state in alphabetical order.
#
# ## County
# County names are sorted alphabetically and assigned values from 0 upwards, 0 being the first county in alphabetical order.
#
# ## manner_of_death
# Gunshot: 0
#
# Gunshot, taser: 1
#
# tasered: 2
#
# vehicle: 3
#
# ## criminal_charges
#
# no known charges: 0
#
# charged: 1
#
# ## threat_level
# attack: 0
#
# other: 1
#
# undetermined: 2
#
# ## flee
#
# car: 0
#
# foot: 1
#
# not fleeing: 2
#
# other: 3
#
# ## Geography
#
# Suburban: 0
#
# Urban: 1
#
# # Methodology
#
#
# - To answer the first question, classification methods such as decision trees or a Support Vector Machine will be used to predict the race of a person who dies by police action from their other attributes. The data will be split into training and validation sets, and, using the race variable as the label, these classification algorithms will be applied to predict it.
#
# - To answer question 2, again using classification methods such as decision trees and Support Vector Machines, we will try to predict the cause of death of a person who dies by police action. Since most causes of death are by gunshot, the data will first be split into training and validation sets, and the other causes of death will then be oversampled in the training data. Once this is done, using the cause of death as the label, these classification algorithms will be applied to predict it.
#
# - To answer question 3 we will use unsupervised learning, forming hierarchical and partitional clusters with methods such as K-means, agglomerative hierarchical clustering, and DBSCAN. We will also build clusters that leave out certain attributes to see whether a better grouping can be produced. To decide which method yields better clusters we will use the incidence matrix, SSE, cohesion, and separation metrics; then, with the resulting clusters, we will compare their densities. A significant difference in cluster density would imply that some group is shot more frequently by the police.
#
# - To answer question 4, again using classification methods such as decision trees and Support Vector Machines, we will try to predict whether a person attempts to flee from the police and, if so, by what means. Since most people do not flee, the data will first be split into training and validation sets, and the other escape methods will then be oversampled in the training data. Once this is done, using the escape method as the label, these classification algorithms will be applied to predict it.
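# A minimal sketch of the oversampling step described above, with hypothetical
# imbalanced labels (9 majority-class rows vs 3 minority-class rows); it uses
# sklearn.utils.resample to duplicate minority rows in the training split.

```python
import numpy as np
from sklearn.utils import resample

# hypothetical imbalanced training data
X_train = np.arange(12).reshape(-1, 1)
y_train = np.array([0] * 9 + [1] * 3)

# resample the minority class with replacement up to the majority count
X_min = X_train[y_train == 1]
X_extra = resample(X_min, replace=True, n_samples=6, random_state=0)
X_bal = np.vstack([X_train, X_extra])
y_bal = np.concatenate([y_train, np.ones(6, dtype=int)])
print(np.bincount(y_bal))  # [9 9]
```

# Oversampling only the training split (never the validation split) keeps the
# evaluation honest, which is why the split happens first in the plan above.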
# + [markdown] id="84ejX3Yhqj8I"
# # Contributions of Each Member:
#
# * <NAME>:
# * Milestone 1: Obtaining state and county data for the heat map; generating the heat maps. PowerPoint.
# * Milestone 2: Discussions on data cleaning. Copy of the data without the day field. Heat map update. Questions and Problems. Methodology for answering the research questions.
#
# * <NAME>:
# * Milestone 1: Introduction, description of charts, and questions. PowerPoint.
# * Milestone 2: Discussions on data cleaning. Update of the descriptive charts. Search for crime-rate data and cleaning of that data. Questions and Problems.
#
# * <NAME>:
# * Milestone 1: Charts and report descriptions. PowerPoint.
# * Milestone 2: Discussions on data cleaning. Update of the report descriptions. Update of the descriptive charts. Questions and Problems. Methodology for answering the questions.
#
# * <NAME>:
# * Milestone 1: Web page and data cleaning, charts, heat maps. PowerPoint.
# * Milestone 2: Discussions on data cleaning. Actual cleaning of the new data. Copy of the data without the day field. Heat map update. Search for, and adaptation of, unemployment data by year, month, and state. Questions and Problems.
#
| Hito2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pathlib import Path
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.head()
# +
#test_df.head()
# +
# Convert categorical data to numeric
X_train = train_df.drop('target', axis=1)
X_train = pd.get_dummies(X_train)
# separate target feature for TRAINING data
y_train = train_df['target']
# -
# verify changes
X_train.head()
# +
# Convert categorical data to numeric and separate
X_test = test_df.drop('target', axis=1)
X_test = pd.get_dummies(X_test)
# target feature for TESTING data
y_test = test_df['target']
# -
#verify changes
X_test.head()
# ## Matching Features
#
# After my initial run, I found that the data had a mismatch in the columns/features. I added a column to the test data to correct this error.
# +
train_cols = X_train.columns
test_cols = X_test.columns
common_cols = train_cols.intersection(test_cols)
train_not_test = train_cols.difference(test_cols)
print(train_not_test)
# -
#identify replacement value
X_train['debt_settlement_flag_Y'].mode()
#add column to the test data
X_test['debt_settlement_flag_Y']=0
X_test.head()
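# A minimal sketch (hypothetical frames) of the same column alignment:
# DataFrame.reindex can align the test columns to the training columns in one
# call, filling any missing dummy column with 0.

```python
import pandas as pd

# hypothetical train/test frames where one dummy column is missing in test
X_tr = pd.DataFrame({"a": [1, 2], "debt_settlement_flag_Y": [0, 1]})
X_te = pd.DataFrame({"a": [3, 4]})

X_te = X_te.reindex(columns=X_tr.columns, fill_value=0)
print(list(X_te.columns))  # ['a', 'debt_settlement_flag_Y']
```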
# ## Logistic Regression
# Train the Logistic Regression model on the unscaled data
from sklearn.linear_model import LogisticRegression
lr_classifier = LogisticRegression(solver= 'lbfgs', max_iter=200)
lr_classifier.fit(X_train, y_train)
#print the model score
print(f"Training Data Score: {lr_classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {lr_classifier.score(X_test, y_test)}")
# ## Random Forest
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
rf_classifier = RandomForestClassifier(random_state=42, n_estimators=50)
rf_classifier.fit(X_train, y_train)
#print the model score
print(f"Training Data Score: {rf_classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {rf_classifier.score(X_test, y_test)}")
# Based on the testing results, it appears that the Random Forest Classifier was the more successful test.
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
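# The scaler above is fit on the training split only and then applied to both
# splits, so test-set statistics never leak into the transform. A tiny sketch
# with hypothetical values:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[0.0], [2.0]])  # mean 1, std 1
X_te = np.array([[1.0]])
scaler = StandardScaler().fit(X_tr)  # statistics come from X_tr only
print(scaler.transform(X_te))  # [[0.]]
```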
# +
# Train the Logistic Regression model on the scaled data and print the model score
lrs_classifier = LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {lrs_classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {lrs_classifier.score(X_test_scaled, y_test)}')
# +
# Train a Random Forest Classifier model on the scaled data and print the model score
rfs_classifier = RandomForestClassifier(random_state=42, n_estimators=50).fit(X_train_scaled, y_train)
print(f'Training Score: {rfs_classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {rfs_classifier.score(X_test_scaled, y_test)}')
# -
| Credit Risk Evaluator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
from sqlalchemy import desc
from sqlalchemy import asc
from sqlalchemy.orm import sessionmaker
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
HI_prcp_Measurement = Base.classes.measurement
HI_Station = Base.classes.station
HI_Station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Exploratory Precipitation Analysis
HI_prcp_Measurement
# Find the most recent date in the data set. Order database by
# Date column using .order_by and specifying the 'date' column from the
# Measurement table. Using desc() to specify the order,
# the .first() method can be used to return the most recent date
prcp_measDB = session.query(HI_prcp_Measurement).order_by(desc(HI_prcp_Measurement.date)).first()
prcp_measDict = prcp_measDB.__dict__
prcp_measDict
# pull the value of the "date" column out of the row dict
Last_Date = prcp_measDict['date']
Last_Date
# +
# Calculate the date one year from the last date in data set.
# -
# using .replace() the year can be adjusted to the previous year
# and set to variable "One_Year_before"
One_Year_before = Last_Date.replace("2017", "2016")
One_Year_before
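# The string replace("2017", "2016") above assumes the year appears exactly
# once in the date string and ignores leap days; stdlib date arithmetic is a
# safer alternative (the date below is a hypothetical example).

```python
import datetime as dt

last = dt.date.fromisoformat("2017-08-23")
one_year_before = last - dt.timedelta(days=365)
print(one_year_before.isoformat())  # 2016-08-23
```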
# +
# Design a query to retrieve the last 12 months of precipitation data
# +
# Perform a query to retrieve the data and precipitation scores
# +
# Query date and precipitation columns from data
# filter data based on dates and show all data that fits criteria
# then Sort by date using order_by()
last_12_prcp = session.query(HI_prcp_Measurement.date, HI_prcp_Measurement.prcp).\
filter(HI_prcp_Measurement.date >= One_Year_before).\
filter(HI_prcp_Measurement.date <= Last_Date).\
order_by(asc(HI_prcp_Measurement.date)).all()
# Using list comprehension to separate tuples
# last_12_prcp = [a for *a, in last_12_prcp]
# Using list comprehension to remove nested lists
# last_12_prcp = [x for l in last_12_prcp for x in l]
last_12_prcp
last_12_prcp_dict = dict(last_12_prcp)
last_12_prcp_dict
# +
# Query date column from data
# filter data based on dates and show all data that fits criteria
# then Sort by date using order_by()
last_12_prcp_date = session.query(HI_prcp_Measurement.date).\
filter(HI_prcp_Measurement.date >= One_Year_before).\
filter(HI_prcp_Measurement.date <= Last_Date).\
order_by(asc(HI_prcp_Measurement.date)).all()
# Using list comprehension to separate tuples
last_12_prcp_date = [a for *a, in last_12_prcp_date]
# Using list comprehension to remove nested lists
last_12_prcp_date = [x for l in last_12_prcp_date for x in l]
# convert to date time format
last_12_prcp_date = pd.to_datetime(last_12_prcp_date)
last_12_prcp_date
# +
# Query precipitation column from data
# filter data based on dates and show all data that fits criteria
# then Sort by date using order_by()
last_12_prcp_measurement = session.query(HI_prcp_Measurement.prcp).\
filter(HI_prcp_Measurement.date >= One_Year_before).\
filter(HI_prcp_Measurement.date <= Last_Date).\
order_by(asc(HI_prcp_Measurement.date)).all()
# Using list comprehension to separate tuples
last_12_prcp_measurement = [a for *a, in last_12_prcp_measurement]
# Using list comprehension to remove nested lists
last_12_prcp_measurement = [x for l in last_12_prcp_measurement for x in l]
last_12_prcp_measurement
# +
# Save the query results as a Pandas DataFrame and set the index to the date column
# -
last_12_prcp_df = pd.DataFrame({"date":last_12_prcp_date, "Precipitation":last_12_prcp_measurement})
last_12_prcp_df.set_index("date", inplace=True)
last_12_prcp_df = last_12_prcp_df.sort_index(ascending=True)
last_12_prcp_df
# +
# plot the results.
# Starting from the most recent data point in the database.
# Use Pandas Plotting with Matplotlib to plot the data
# +
fig, ax = plt.subplots(figsize=(10, 10))
plt.style.use('fivethirtyeight')
# Add x-axis and y-axis
ax.bar(last_12_prcp_df.index.values,
last_12_prcp_df['Precipitation'])
# Set title and labels for axes
ax.set(xlabel="Date",
ylabel="Precipitation (inches)",
title="One Year precipitation")
# Rotate tick marks on x-axis
plt.setp(ax.get_xticklabels(), rotation=45)
plt.show()
# +
# Use Pandas to calculate the summary statistics for the precipitation data
# -
last_12_prcp_df.describe()
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
# -
# Use Pandas to calculate the summary statistics for the precipitation data
# # Exploratory Station Analysis
# Design a query to calculate the total number of stations in the dataset
Station_Count = session.query(func.count(HI_Station.station)).all()
Station_Count = [a for *a, in Station_Count]
Station_Count = [x for l in Station_Count for x in l]
Station_Count = Station_Count[0]
Station_Count
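# The two list comprehensions above unwrap a one-row, one-column query result;
# .scalar() returns that single value directly. A self-contained sketch with a
# hypothetical in-memory table:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE station (id TEXT)"))
    conn.execute(text("INSERT INTO station VALUES ('S1'), ('S2')"))
    # scalar() unwraps the single COUNT(*) value
    count = conn.execute(text("SELECT COUNT(*) FROM station")).scalar()
print(count)  # 2
```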
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
session.query(HI_prcp_Measurement.station,func.count(HI_prcp_Measurement.station)).\
group_by(HI_prcp_Measurement.station).\
order_by(func.count(HI_prcp_Measurement.station).desc()).all()
# +
Most_Active = session.query(HI_prcp_Measurement.station,func.count(HI_prcp_Measurement.station)).\
group_by(HI_prcp_Measurement.station).\
order_by(func.count(HI_prcp_Measurement.station).desc()).first()
Most_Active
# -
Most_Active_Station = Most_Active[0]
Most_Active_Station
tobs_meas_MAStation = session.query(HI_prcp_Measurement).\
filter(HI_prcp_Measurement.station == Most_Active_Station).\
order_by(desc(HI_prcp_Measurement.date)).first()
tobs_meas_MAStation_Dict = tobs_meas_MAStation.__dict__
tobs_meas_MAStation_Dict
# pull the value of the "date" column out of the row dict
Last_Date_MAStation = tobs_meas_MAStation_Dict['date']
Last_Date_MAStation
# using .replace() the year can be adjusted to the previous year
# and set to variable "One_Year_before"
One_Year_before_MAStation = Last_Date_MAStation.replace("2017", "2016")
One_Year_before_MAStation
# +
# Query date and precipitation columns from data
# filter data based on dates and show all data that fits criteria
# then Sort by date using order_by()
last_12_tobs_MAStation = session.query(HI_prcp_Measurement.date, HI_prcp_Measurement.tobs).\
filter(HI_prcp_Measurement.station == Most_Active_Station).\
filter(HI_prcp_Measurement.date >= One_Year_before_MAStation).\
filter(HI_prcp_Measurement.date <= Last_Date_MAStation).\
order_by(asc(HI_prcp_Measurement.date)).all()
# Using list comprehension to separate tuples
last_12_tobs_MAStation = [a for *a, in last_12_tobs_MAStation]
# flattening is not applied here: each item is a (date, tobs) pair
last_12_tobs_MAStation
# +
# Query date column from data
# filter data based on dates and show all data that fits criteria
# then Sort by date using order_by()
last_12_prcp_date_MAStation = session.query(HI_prcp_Measurement.date).\
filter(HI_prcp_Measurement.station == Most_Active_Station).\
filter(HI_prcp_Measurement.date >= One_Year_before_MAStation).\
filter(HI_prcp_Measurement.date <= Last_Date_MAStation).\
order_by(asc(HI_prcp_Measurement.date)).all()
# Using list comprehension to separate tuples
last_12_prcp_date_MAStation = [a for *a, in last_12_prcp_date_MAStation]
# Using list comprehension to remove nested lists
last_12_prcp_date_MAStation = [x for l in last_12_prcp_date_MAStation for x in l]
# convert to date time format
last_12_prcp_date_MAStation = pd.to_datetime(last_12_prcp_date_MAStation)
last_12_prcp_date_MAStation
# +
# Query precipitation column from data
# filter data based on dates and show all data that fits criteria
# then Sort by date using order_by()
last_12_tobs_measurement_MAStation = session.query(HI_prcp_Measurement.tobs).\
filter(HI_prcp_Measurement.station == Most_Active_Station).\
filter(HI_prcp_Measurement.date >= One_Year_before_MAStation).\
filter(HI_prcp_Measurement.date <= Last_Date_MAStation).\
order_by(asc(HI_prcp_Measurement.date)).all()
# Using list comprehension to separate tuples
last_12_tobs_measurement_MAStation = [a for *a, in last_12_tobs_measurement_MAStation]
# Using list comprehension to remove nested lists
last_12_tobs_measurement_MAStation = [x for l in last_12_tobs_measurement_MAStation for x in l]
last_12_tobs_measurement_MAStation
# +
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
# -
last_12_tobs_df_MAStation = pd.DataFrame({"date":last_12_prcp_date_MAStation, "Temp. Observations":last_12_tobs_measurement_MAStation})
last_12_tobs_df_MAStation.set_index("date", inplace=True)
last_12_tobs_df_MAStation = last_12_tobs_df_MAStation.sort_index(ascending=True)
last_12_tobs_df_MAStation
last_12_tobs_df_MAStation.plot.hist(bins=12)
# # Close session
# Close Session
session.close()
| climate_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pyml
# language: python
# name: pyml
# ---
import pandas as pd
df = pd.read_csv('wdbc.data', header=None)
X = df.loc[:, 2:].values
y = df.loc[:, 1].values
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, stratify=y, random_state=1)
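# LabelEncoder assigns integers in sorted order of the class labels; for the
# WDBC diagnosis column ('B'/'M') that means B -> 0 and M -> 1. A quick check
# with a hypothetical sample:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y = le.fit_transform(["M", "B", "M", "B"])
print(list(le.classes_))  # ['B', 'M']
print(list(y))            # [1, 0, 1, 0]
```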
| ch06/wdbc_data_split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: instagram-growth-strategy
# language: python
# name: instagram-growth-strategy
# ---
# ### Links
# * https://community.plotly.com/t/announcing-plotly-py-4-8-plotly-express-support-for-wide-and-mixed-form-data-plus-a-pandas-backend/40048
# * https://plotly.com/python/styling-plotly-express/
# * https://towardsdatascience.com/the-best-format-to-save-pandas-data-414dca023e0d
# +
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
import sys
import pandas as pd
import numpy as np
from darts import TimeSeries
from darts.models import AutoARIMA
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objects as go
import plotly.subplots as sp
import plotly.io as pio
pio.templates.default = "simple_white"
px.defaults.template = "ggplot2"
px.defaults.color_continuous_scale = px.colors.sequential.Blackbody
px.defaults.width = 700
px.defaults.height = 400
import warnings
warnings.filterwarnings("ignore")
import logging
logging.disable(logging.CRITICAL)
# -
df = pd.read_csv("../../data/later/profile_growth.csv")
df['Date'] = pd.to_datetime(df['Date'])
# Create a TimeSeries, specifying the time and value columns
series = TimeSeries.from_dataframe(df, 'Date', 'Followers')
# +
# Create figures in Express
figure1 = px.line(df, x="Date", y=["Followers"], title='Followers')
figure2 = px.line(df,
x="Date",
y=["Impressions", "Reach"],
title='Impressions & Reach')
# For as many traces that exist per Express figure, get the traces from each plot and store them in an array.
# This is essentially breaking down the Express fig into it's traces
figure1_traces = []
figure2_traces = []
for trace in range(len(figure1["data"])):
figure1_traces.append(figure1["data"][trace])
for trace in range(len(figure2["data"])):
figure2_traces.append(figure2["data"][trace])
# -
model = AutoARIMA()
model.fit(series)
forecast = model.predict(3)
series_df = series.pd_dataframe().reset_index()#.set_index('component')
forecast_df = forecast.pd_dataframe().reset_index()
# Create traces
forecast_fig = go.Figure()
forecast_fig.add_trace(go.Scatter( x=series_df['Date'], y=series_df['Followers'],
mode='lines+markers',
name='ToDate'))
forecast_fig.add_trace(go.Scatter(x=forecast_df['Date'], y=forecast_df['Followers'],
mode='lines+markers',
name='Forecast'))
# +
# Create a 1x2 subplot
this_figure = sp.make_subplots(rows=2,
cols=1,
subplot_titles=("Followers", "Impressions & Reach"),
row_heights=[0.3, 0.7])
# Get the Express fig broken down as traces and add the traces to the proper plot within in the subplot
for traces in figure1_traces:
this_figure.append_trace(traces, row=1, col=1)
for traces in figure2_traces:
this_figure.append_trace(traces, row=2, col=1)
# Update xaxis properties
#this_figure.update_xaxes(title_text="Date", row=1, col=1)
# Update yaxis properties
this_figure.update_yaxes(title_text="Count", visible=False, fixedrange=True, row=1, col=1)
this_figure.update_yaxes(title_text="Count", row=2, col=1)
# hide and lock down axes
this_figure.update_layout(showlegend=False,
height=700,
width=700,
title_text="Profile Growth & Discovery")
# Load Data
df = px.data.tips()
# Build App
app = JupyterDash(__name__)
app.layout = html.Div(
[html.H1("JupyterDash Demo"),
html.Div([dcc.Graph(figure=this_figure)]),
html.Div([dcc.Graph(figure=forecast_fig)])
])
# Run app and display result inline in the notebook
app.run_server(mode='external')
# -
| notebooks/forecast_followers/06_Dash_Subplots_AutoArima.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from CustomRegression import PcmLinearRegression
df = pd.read_csv("./height_weight.csv")
df.head()
df["Height"] = round(df["Height"], 2)
df["Weight"] = round(df["Weight"], 2)
df.head()
df.drop("Gender", axis=1, inplace=True)
df.head()
heights = np.array(df["Height"])
weights = np.array(df["Weight"])
print(heights)
print(weights)
plt.scatter(heights, weights)
lr_model = PcmLinearRegression(epoch=5000, dp=0.1, degree=1)
lr_model.fit(heights, weights)
lr_model.evaluation_graph(heights, weights)
pred = lr_model.predict([55, 60, 65, 70])
print(pred)
lr_model.info()
# +
np.random.seed(48)
X = 2 * np.random.rand(100, 1)
y = 4 + 6 * X+np.random.randn(100, 1)
X = np.ravel(X, order="C")
y = np.ravel(y, order="C")
plt.scatter(X, y)
# +
plt.figure(0)
plt.scatter(X, y, color="blue")
plt.figure(1)
plt.scatter(X, y, color="blue")
data = list(zip(X, y))
data = pd.DataFrame(data, columns=["X", "y"])
def duplicate(x):
duplicate_data = data[(data["X"] > (x - 0.1)) & (data["X"] < (x + 0.1))]["y"]
return sum(duplicate_data) / len(duplicate_data)
data["y"] = data["X"].apply(lambda x: duplicate(x))
data = data.drop_duplicates(["y"], keep="first")
data = data.reset_index(drop=True)
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], color="red")
# -
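# The duplicate() helper above averages y over a +/-0.1 window around each X;
# rounding X to one decimal and grouping gives a similar smoothing in one step
# (the points below are hypothetical).

```python
import pandas as pd

data = pd.DataFrame({"X": [0.11, 0.12, 0.52], "y": [1.0, 3.0, 5.0]})
# group nearby X values by their rounded bin and average y within each bin
smoothed = data.groupby(data["X"].round(1))["y"].mean()
print(smoothed.tolist())  # [2.0, 5.0]
```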
lr_model = PcmLinearRegression(epoch=5000, degree=1)
lr_model.fit(X, y)
lr_model.evaluation_graph(X, y)
lr_model.info()
# ### Performance comparison with sklearn's LinearRegression
# ##### PcmLinearRegression
# - RMSE: 1.0179020196481647
pred = lr_model.predict(X)
pcm_rmse = 0
for p, r in zip(pred, y):
pcm_rmse += (p-r) ** 2
pcm_rmse = np.sqrt(pcm_rmse / len(X))
pcm_rmse
plt.scatter(X, y, color="blue")
plt.plot(X, pred, color="red")
# ##### Sklearn LinearRegression
# - RMSE: 1.0178952191951303
# +
from sklearn.linear_model import LinearRegression
lr_model = LinearRegression()
lr_model.fit(X.reshape(-1, 1), y.reshape(-1, 1))
pred = lr_model.predict(X.reshape(-1, 1)).reshape(len(y), )
sk_rmse = 0
for p, r in zip(pred, y):
sk_rmse += (p-r) ** 2
sk_rmse = np.sqrt(sk_rmse / len(X))
sk_rmse
# -
plt.scatter(X, y, color='blue')
plt.plot(X, pred, color="red")
plt.show()
print(pcm_rmse - sk_rmse) # 0.000006800453034250253
# +
pcmDegree2 = PcmLinearRegression(epoch=10000, dp=0.1, degree=2)
np.random.seed(1)
X = np.round(6 * np.random.rand(100, 1) - 3, 3)
y = 0.5 * X**2 + X + 2 + np.random.randn(100, 1)
X = X.reshape(X.shape[0], )
y = y.reshape(y.shape[0], )
plt.scatter(X, y)
# -
pcmDegree2.fit(X, y)
pcmDegree2.evaluation_graph(X, y)
pcmDegree2.info()
plt.scatter(X, y, color="blue")
plt.plot(np.linspace(-3, 3, 100), pcmDegree2.predict(np.linspace(-3, 3, 100)), color="red", linewidth="3.0")
# ### sklearn Polynomial
# +
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X.reshape(-1, 1))
# -
lr_model = LinearRegression()
lr_model.fit(X_poly, y.reshape(-1, 1))
# +
X_new = np.linspace(-3, 3, 100).reshape(100, 1)
pred = lr_model.predict(poly_features.transform(X_new))
plt.scatter(X, y, color="blue")
plt.plot(np.linspace(-3, 3, 100).reshape(100, 1), pred, color="red", linewidth="3.0")
# -
plt.scatter(X, y)
plt.plot(np.linspace(-3, 3, 100).reshape(100, 1), pred, color="yellow", linewidth=3.0)
plt.plot(list(np.linspace(-3, 3, 100)), pcmDegree2.predict(list(np.linspace(-3, 3, 100))), color="red", linewidth=3.0)
# #### PcmLinearRegression
# - RMSE: 0.8944538011505542
pcm_rmse = 0
pred = pcmDegree2.predict(X)
for p, r in zip(pred, y):
    pcm_rmse += (p-r) ** 2
pcm_rmse = np.sqrt(pcm_rmse / len(pred))
pcm_rmse
# #### Sklearn LinearRegression
# - RMSE: 0.8889630668760352
sk_rmse = 0
pred = lr_model.predict(poly_features.transform(X.reshape(-1, 1)))
for p, r in zip(pred, y):
    sk_rmse += (p[0]-r) ** 2
sk_rmse = np.sqrt(sk_rmse / len(pred))
sk_rmse
pcm_rmse - sk_rmse
| CustomML/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# name: python3
# ---
import pandas as pd
df = pd.read_excel("订单数据.xlsx", header=None)
mapping=dict(zip(df[0],df[1]))
mapping
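# The `dict(zip(...))` idiom above turns two DataFrame columns into a lookup table; a self-contained illustration with made-up data (the real values live in 订单数据.xlsx):

```python
import pandas as pd

# two-column frame standing in for the order data read above (header=None gives integer column labels)
demo = pd.DataFrame({0: ["apple", "pear"], 1: [3.5, 2.0]})
demo_mapping = dict(zip(demo[0], demo[1]))
demo_mapping  # {'apple': 3.5, 'pear': 2.0}
```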
from docx import Document
doc = Document("订单信息.docx")
table = doc.tables[0]
type(table.rows)
list_data = []
for i in range(len(table.rows)):
    row = table.rows[i]
    data = []
    for cell in row.cells:
        data.append(cell.text)
    list_data.append(data)
list_data
from openpyxl import load_workbook
wb=load_workbook(filename="订单信息.xlsx")
ws=wb.active
excel_data=[]
for row in range(1, 20):
    row_data = []
    for col in range(1, 20):
        row_data.append(ws.cell(row=row, column=col).value)
    excel_data.append(row_data)
df_excel = pd.DataFrame(excel_data).dropna(axis="index", how="all")
df_excel = df_excel.dropna(axis="columns", how="all")
df_excel
ws.cell(row=2, column=5).value is None
for row in range(1, 20):
    for col in range(1, 20):
        val = ws.cell(row=row, column=col).value
        cell_right = ws.cell(row=row, column=col+1)
        crval = cell_right.value
        if crval is None:
            # fill the blank cell from the mapping when we already know the value
            if val in mapping:
                cell_right.value = mapping[val]
        else:
            # otherwise learn the pair so later blanks can be filled
            mapping[val] = crval
mapping
wb.save("订单信息-out.xlsx")
df2=pd.DataFrame(list_data)
df2
len(df2.columns)
len(df2.index)
mapping
for i in range(len(table.rows)):
    for j in range(len(table.rows[i].cells)):
        text = table.cell(i, j).text
        if text in mapping:
            print(text)
            try:
                table.cell(i, j+1).text = str(mapping[text])
            except IndexError:
                # no cell to the right of the last column
                pass
doc.save("订单信息-out.docx")
| Untitled-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HW3 - R-Squared for Linear Regression
#
# ## Review
#
# - Recall the running distance and drinking water dataset example
# - Whenever we refer to `x` or `y` in this assignment, we are referring to the below datasets
# +
import numpy as np
import matplotlib.pyplot as plt
# Running dataset - Distance in Mile
x = np.array([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
7.042,10.791,5.313,7.997,5.654,9.27,3.1])
# Water dataset - Drinks in Litre
y = np.array([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
2.827,3.465,1.65,2.904,2.42,2.94,1.3])
plt.scatter(x, y)
plt.xlabel('Running Distance (Mile)')
plt.ylabel('Water Drinks (Litre)')
# -
# ## Problem to Solve: Obtain the R-squared for this best line
#
# ### Goal of Linear Regression
#
# - We are interested in obtaining the best line described by `y_pred[i] = w_1 x[i] + w_0` that maps running distance to drinking water
#
# - Assume we know that we have obtained the best line when:
#
# - `w_1 = 0.25163494`
#
# - `w_0 = 0.79880123`
w_1 = 0.25163494
w_0 = 0.79880123
y_pred = [w_1*i+w_0 for i in x]
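# Since `x` is a NumPy array, the same predictions can also be computed without a comprehension; a small self-contained sketch (weights restated, a few sample distances in place of the full dataset):

```python
import numpy as np

x_demo = np.array([3.3, 4.4, 5.5])
w_1, w_0 = 0.25163494, 0.79880123
y_vec = w_1 * x_demo + w_0               # elementwise broadcast, no Python loop
y_loop = [w_1 * i + w_0 for i in x_demo]  # equivalent comprehension
np.allclose(y_vec, y_loop)
```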
plt.scatter(x, y)
plt.plot(x, y_pred, 'ro-')
# ### Part 1
#
# - First read this [wiki page](https://en.wikipedia.org/wiki/Coefficient_of_determination#Definitions) about R-squared. The relevant section is also shown below in the screenshot.
# - Note that in this article, $f_i = y_{pred}[i]$:
#
# <img src="r_squared.png" width="800" height="800">
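#
# In the notation of that section (recall $f_i = y_{pred}[i]$):
#
# $$
# R^2 = 1 - \frac{SS_{res}}{SS_{tot}}, \qquad
# SS_{res} = \sum_i (y_i - f_i)^2, \qquad
# SS_{tot} = \sum_i (y_i - \bar{y})^2
# $$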
#
# ### Part 2
#
# - Write a Python function that computes R-squared for our distance and drinking water datasets (shown at the top of this page) when `w_1 = 0.25163494` and `w_0 = 0.79880123`
# +
# Hint: Your function takes four parameters:
# - x (dataset: array of floats)
# - y (dataset: array of floats)
# - w_0 (weight: float)
# - w_1 (weight: float)
# and will return the R-squared value
def r_sq(y, x, w1, w0):
    y_bar = np.mean(y)
    y_pred = w1 * x + w0
    SS_res = np.sum((y - y_pred) ** 2)
    SS_tot = np.sum((y - y_bar) ** 2)
    return 1 - SS_res / SS_tot
print(r_sq(y, x, 0.25163494, 0.79880123))
# +
# Verify that your function works correctly
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
print("r-squared:", r_value**2)
# -
# ## Requirements
#
# To pass this assignment, you must meet the following requirements:
#
# 1. For the given `x` and `y` datasets and the `w_0` and `w_1` values mentioned above, you must find the R-squared value
# 1. Your answer matches the R-squared value from using the `scipy.stats` library with 0.0001 precision
#
# ## Turning In Your HW
#
# Once you have finished your assignment, provide a link to your repo on GitHub and place the link in the appropriate HW3 column in the progress tracker. See the syllabus for more details on submission links
| Assignments/.ipynb_checkpoints/HW3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cdei
# language: python
# name: cdei
# ---
# # Optimal clustering by Zemel et al. - Adult data
#
# This notebook contains an implementation of the pre-processing fairness intervention introduced in [Learning Fair Representations](http://proceedings.mlr.press/v28/zemel13.html) by Zemel et al. (2013) as part of the IBM AIF360 fairness toolbox (github.com/IBM/AIF360).
#
# The intervention achieves demographic parity by the use of a clustering method which transforms the original data set by expressing points as linear combinations of learnt cluster centres. The transformed data set is as close as possible to the original while containing as little information as possible about the sensitive attributes.
#
# Besides a fair data representation, the output of their method also includes fair label predictions. Predicted labels for the transformed data set can be defined so that similar points are mapped to similar label predictions; in that sense, individual fairness is achieved, too.
#
# Here, we consider fairness defined with respect to sex. There is another notebook considering fairness with respect to race using Zemel et al.'s intervention method.
# +
from pathlib import Path
import joblib
import numpy as np
import pandas as pd
from aif360.algorithms.preprocessing.lfr import LFR # noqa
from aif360.datasets import StandardDataset
from fairlearn.metrics import (
demographic_parity_difference,
demographic_parity_ratio,
)
from helpers.metrics import accuracy
from helpers.plot import group_bar_plots
# + tags=["export"]
from helpers import export_plot
# -
# ## Load data
#
# We have committed preprocessed data to the repository for reproducibility and we load it here. Check out the preprocessing notebook for details on how this data was obtained.
artifacts_dir = Path("../../../artifacts")
# + tags=["export"]
# override data_dir in source notebook
# this is stripped out for the hosted notebooks
artifacts_dir = Path("../../../../artifacts")
# +
data_dir = artifacts_dir / "data" / "adult"
train = pd.read_csv(data_dir / "processed" / "train-one-hot.csv")
val = pd.read_csv(data_dir / "processed" / "val-one-hot.csv")
test = pd.read_csv(data_dir / "processed" / "test-one-hot.csv")
# -
# In order to process data for our fairness intervention we need to define special dataset objects which are part of every intervention pipeline within the IBM AIF360 toolbox. These objects contain the original data as well as some useful further information, e.g., which feature is the protected attribute as well as which column corresponds to the label.
train_sds = StandardDataset(
train,
label_name="salary",
favorable_classes=[1],
protected_attribute_names=["sex"],
privileged_classes=[[1]],
)
test_sds = StandardDataset(
test,
label_name="salary",
favorable_classes=[1],
protected_attribute_names=["sex"],
privileged_classes=[[1]],
)
val_sds = StandardDataset(
val,
label_name="salary",
favorable_classes=[1],
protected_attribute_names=["sex"],
privileged_classes=[[1]],
)
index = train_sds.feature_names.index("sex")
privileged_groups = [{"sex": 1.0}]
unprivileged_groups = [{"sex": 0.0}]
# ## Demographic parity
#
# Given the original unfair data set we apply Zemel et al.'s intervention to obtain a fair data set including fair labels. More precisely, we load an already learnt mitigation or learn a new mitigation procedure based on the true and predicted labels of the training data. We then apply the learnt procedure to transform the testing data and analyse fairness and accuracy in the transformed testing data.
#
# The degree of fairness and accuracy can be controlled by the choice of the parameters $A_x, A_y, A_z$ and $k$ when setting up the mitigation procedure. Here, $A_x$ controls the loss associated with the distance between the original and transformed data sets, $A_y$ the accuracy loss, and $A_z$ the fairness loss. The larger one of these parameters is relative to the others, the higher the priority of minimising the loss associated with it. Hence, leaving $A_x$ and $A_y$ fixed, we can increase the degree of fairness achieved by increasing the parameter $A_z$.
#
# As differences in fairness between independently learnt mitigations with the same parameter choice can sometimes be significant, we load a pre-trained intervention which achieves reasonable results. The user is still encouraged to train interventions themselves (see the commented-out code below) and compare the achieved fairness, potentially over a number of independent runs.
#
# ## Train unfair model
#
# For maximum reproducibility we load the baseline model from disk, but the code used to train can be found in the baseline model notebook.
# +
bl_model = joblib.load(artifacts_dir / "models" / "finance" / "baseline.pkl")
bl_test_probs = bl_model.predict_proba(test_sds.features)[:, 1]
bl_test_pred = bl_test_probs > 0.5
# -
# ## Load or learn intervention
#
# So that you can reproduce our results we include a pretrained model, but the code for training your own model and experimenting with hyperparameters can be found below.
#
# a) Location of the intervention previously learned on the training data.
TR = joblib.load(artifacts_dir / "models" / "finance" / "zemel-sex.pkl")
# b) Learn the intervention on the training data.
# +
# TR = LFR(
# unprivileged_groups=unprivileged_groups,
# privileged_groups=privileged_groups,
# k=5,
# Ax=0.01,
# Ay=1.0,
# Az=25.0,
# )
# TR = TR.fit(train_sds)
# -
# Apply intervention to test set.
transf_test_sds = TR.transform(test_sds)
test_fair_labels = transf_test_sds.labels.flatten()
# Analyse fairness and accuracy on test data.
# +
bl_acc = bl_model.score(test.drop(columns="salary"), test.salary)
bl_dpd = demographic_parity_difference(
test.salary, bl_test_pred, sensitive_features=test.sex,
)
bl_dpr = demographic_parity_ratio(
test.salary, bl_test_pred, sensitive_features=test.sex,
)
acc = accuracy(test.salary, test_fair_labels)
dpd = demographic_parity_difference(
test.salary, test_fair_labels, sensitive_features=test.sex,
)
dpr = demographic_parity_ratio(
test.salary, test_fair_labels, sensitive_features=test.sex,
)
print(f"Baseline accuracy: {bl_acc:.3f}")
print(f"Accuracy: {acc:.3f}\n")
print(f"Baseline demographic parity difference: {bl_dpd:.3f}")
print(f"Demographic parity difference: {dpd:.3f}\n")
print(f"Baseline demographic parity ratio: {bl_dpr:.3f}")
print(f"Demographic parity ratio: {dpr:.3f}")
# -
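# As a sanity check on the numbers above, the demographic parity difference is just the gap in positive-prediction rates between the two groups; a minimal sketch independent of fairlearn (the function name `dp_diff` is ours):

```python
import numpy as np

def dp_diff(pred, sensitive):
    # absolute gap in positive-prediction rates between the two groups
    pred = np.asarray(pred, dtype=float)
    sensitive = np.asarray(sensitive)
    g0, g1 = np.unique(sensitive)
    return float(abs(pred[sensitive == g0].mean() - pred[sensitive == g1].mean()))

dp_diff([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1])  # |2/3 - 1/3| = 1/3
```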
dp_bar = group_bar_plots(
np.concatenate([bl_test_pred, test_fair_labels]),
np.tile(test.sex.map({0: "Female", 1: "Male"}), 2),
groups=np.concatenate(
[np.zeros_like(bl_test_pred), np.ones_like(test_fair_labels)]
),
group_names=["Baseline", "Zemel"],
title="Proportion of predicted high earners by sex",
    xlabel="Proportion of predicted high earners",
ylabel="Method",
)
dp_bar
# + tags=["export"]
export_plot(dp_bar, "zemel-sex-dp.json")
| src/notebooks/finance/interventions/zemel_sex.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Error Correcting Codes Encoding Study
#
# The goal of this study is to understand alternatives to the popular one-hot encoding. There are many sides to each story (no, not only two); among those sides are:
#
# - I never liked one-hot encoding (and it's been more than a decade since I first used it, so the distaste might never go away);
# - I don't like the assumption that neural networks should always be end-to-end learning (no, they should not; they should be more complex architectures, many already in the research literature)
# - There are priors
# - Each type of input should have (and, in nature, HAS) its own priors, which are adapted to *facilitate* learning. No, we should not do everything inside a NN; we should give as input something with priors that facilitate learning (and might or might not later save processing power during operation)
#
#
# On the priors, many have already shown good results; the most remarkable priors are: Convolutional Neural Networks, MAC Networks, LSTMs. Others are more subtle, like (remember citation here ...) adding a coordinate system to the input image as an (or many) extra channel(s). There are many more that I think are worth exploring and adding to the literature, even if they don't give good results.
# Among those priors there are many that we not only know, but for which we also have perfectly adapted specialized hardware:
# * time and space -> these we can encode and add as extra channels
# * Different transforms (Fourier, Laplace, Wavelets, ...)
# * spikes (borders in images)
# * ....
#
# There is another idea behind the ECCs: we can usually 'feel' that something is not right or missing. What about giving NNs an extra element that would allow for this 'feeling'?
#
# The idea behind this is that I don't agree with one-hot encoding the way it is used today, not because it does not work, but because it imposes a few limits that I don't want to deal with at first:
#
# * We know the actual number of values to encode (with words this is not necessarily true)
# * We have sample data to train the encoding
#
# This limits us in several ways; for example, when training on a domain, the encoder will depend on that domain only. If there are under-represented values (such as words that don't appear, appear only later, or come from a changed domain), this limits the encoding possibilities. A better idea would be to be able to encode everything, even if the internal representations have not yet learned to use those symbols.
#
# I want to be able to separate the *possibility* of representing a value from the learning of that concept.
#
# The first and biggest limitation of one-hot encoding is that it does not allow representing values outside the accepted set.
#
# As some other parts of this study have already focused on integer value representations and arbitrary function representation (although with limited success on the Fourier-inspired encodings), this study is more focused on being able to correctly represent all the values of utf-8, basically doing a first binary representation that will be given as input to an OVERFITTED encoder.
#
# The reasoning behind this is:
#
#
# * The origin domain is all text
# * UTF-8 can represent all text in all languages including some extra elements
# * let's use UTF-8 as the origin domain
# * Create an encoder that can deal with ANY and ALL input in the origin domain
# * the encoded values can later be used
#
# As text should always be correctly reconstructed in the presence of noise, I now want to imagine a neural network as a REALLY NOISY channel. Using (forward) ECCs is one way of thinking about this medium.
# The test I intend to do is:
#
# * Create an autoencoder that OVERFITS to all the data
#
#
# One idea that I have been turning over in my head for the past 3-4 years is that we are thinking about overfitting the wrong way, and that we can actually use it well, but we have to learn how.
#
# I think this is the first time I have actually found a way of doing it usefully.
#
# The idea is to overfit so it generates a smaller encoding vector than the input. Autoencoders are a good fit for this test.
#
# The other idea is that if the autoencoder can NOT do this, then the encoding ideas that I will try are BAAAD and I should feel BAAAD. In that case ... just go back to the drawing board and think of other things.
#
# On the other side, if this works, it means that FINALLY I can move on to the next stage: building the predictors, first basic ones (LSTMs, HMMs, temporal convolutions), then with meta-learning, and later with my still-too-fresh idea on neural databases.
#
# One interesting thing I want to find out about Error Correcting Codes (ECCs) is whether they are actually useful in the output decoding, as they should be adding *explicit* redundancy to the input and also to the output.
#
# The other thing about ECCs is that we can pile them up: for example, one (or many) codes to represent a symbol (for example the value *'€'*), and then convolutional or turbo codes for the *temporal* encoding/decoding part. This means that we not only add priors to the instantaneous input, but also to the temporal dimension, which is something I really want to explore (and it should facilitate fixing and correcting "channel errors").
#
# I don't deal here with *erasure* error types, but that is a possibility later.
#
import numpy as np
import commpy
import bitarray as ba
import struct
import sys
import pickle
# import binascii
from bitstring import BitArray, BitStream
sys.byteorder
c = '€'.encode()
c
a = 'a'
'a'.encode()[0]
len(bytearray('a'.encode()))
zero = BitArray(b'\x00\x00\x00\x00')
b = BitArray(c)
b
b.tobytes()
int.from_bytes(c, byteorder='big')
32 - b.len
int.from_bytes(c, byteorder='big') >> 1
for i in range((32 - b.len)//8):
    b.prepend(b'\x00')
b.len
b
32 - b.len
a = 256
a.bit_length()
'€'.encode()[1]
# +
# A bit of a wacky hack and surely not the most efficient one, but it just works for what I want
def prepend_zeros(s, n):
    return '0' * (n - len(s)) + s

def get_strbintable(n):
    bl = n.bit_length() - 1  # n is a power of two here, so values 0..n-1 fit in bit_length(n)-1 bits
    lines = [' '.join(i for i in prepend_zeros("{0:b}".format(l), bl)) for l in range(n)]
    return lines

def get_npbintable(n):
    bins = np.fromstring('\n'.join(get_strbintable(n)), dtype='int32', sep=' ')
    bins = bins.reshape([n, -1])
    return bins
# -
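# The string round-trip in `get_npbintable` can also be done with pure bit arithmetic; an equivalent sketch (ours, assuming `n` is a power of two, as in the calls below):

```python
import numpy as np

def npbintable(n):
    # rows 0..n-1 as (bit_length(n) - 1)-bit vectors, most significant bit first
    bl = n.bit_length() - 1
    shifts = np.arange(bl - 1, -1, -1)
    return (np.arange(n)[:, None] >> shifts) & 1

npbintable(4)  # [[0, 0], [0, 1], [1, 0], [1, 1]]
```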
# The entire utf-8 universe is NOT the entire $2^{32}$ domain; there are limitations explained in [the utf-8 description](https://en.wikipedia.org/wiki/UTF-8)
#
# | Number of bytes | Bits for code point | First code point | Last code point | Byte 1 | Byte 2 | Byte 3 | Byte 4 |
# |----------------|--------------------|-----------------|----------------|----------|----------|----------|----------|
# | 1 | 7 | U+0000 | U+007F | 0xxxxxxx | | | |
# | 2 | 11 | U+0080 | U+07FF | 110xxxxx | 10xxxxxx | | |
# | 3 | 16 | U+0800 | U+FFFF | 1110xxxx | 10xxxxxx | 10xxxxxx | |
# | 4 | 21 | U+10000 | U+10FFFF | 11110xxx | 10xxxxxx | 10xxxxxx | 10xxxxxx |
#
# I'll then compute the different table parts and append them when needed.
#
# The thing is that the number of elements in the table should be at most $2^{21}$, so I need to create a sort of index that can handle the 4 cases.
# It seems I'll have to create 4 different conversion tables.
#
#
#
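# The byte-layout table above determines the sequence length from the lead byte alone; a small standard-UTF-8 helper for illustration (the function name `utf8_len` is ours, not part of this notebook's encoder):

```python
def utf8_len(lead: int) -> int:
    # number of bytes in the UTF-8 sequence that starts with this byte
    if lead >> 7 == 0b0:        # 0xxxxxxx
        return 1
    if lead >> 5 == 0b110:      # 110xxxxx
        return 2
    if lead >> 4 == 0b1110:     # 1110xxxx
        return 3
    if lead >> 3 == 0b11110:    # 11110xxx
        return 4
    raise ValueError("continuation byte or invalid lead byte")

[utf8_len(s.encode()[0]) for s in ("a", "á", "€", "𐍈")]  # [1, 2, 3, 4]
```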
# +
# this part makes sure to encode as bin
eye4 = np.eye(4)
eye64 = np.eye(64)
eye256 = np.eye(256)
# code for 7 bits, Byte 1 of utf-8
code_b7 = np.append(np.zeros([2**7, 1]), get_npbintable(2**7), axis=1)
# code for 6 bits, Byte 2 to 4 of utf-8 -> this is going to be used later for all the other values
code_b6 = np.append(np.append(np.ones([2**6, 1]), np.zeros([2**6, 1]), axis=1),
get_npbintable(2**6), axis=1)
# code for 5 bits, Byte 1 of a 2-byte utf-8 sequence (110xxxxx)
code_b5 = np.append(np.append(np.ones([2**5, 2]), np.zeros([2**5, 1]), axis=1),
get_npbintable(2**5), axis=1)
# code for 4 bits, Byte 1 of a 3-byte utf-8 sequence (1110xxxx)
code_b4 = np.append(np.append(np.ones([2**4, 3]), np.zeros([2**4, 1]), axis=1),
get_npbintable(2**4), axis=1)
# code for 3 bits, Byte 1 of a 4-byte utf-8 sequence (11110xxx)
code_b3 = np.append(np.append(np.ones([2**3, 4]), np.zeros([2**3, 1]), axis=1),
get_npbintable(2**3), axis=1)
def encode_utf8(l):
    el = l.encode()
    code = np.zeros(36)  # 32 is the size of the input code + 4 of the extra redundancy
    nbytes = len(el)
    assert 0 < nbytes <= 4
    bin4 = eye4[nbytes-1]  # this adds redundant knowledge about the part
    # this is ugly but explicit; for the moment it is good enough and I can see what is going on
    code[:4] = bin4
    if nbytes == 1:
        code[4:12] = code_b7[el[0] & 0b01111111]
    elif nbytes == 2:
        code[4:12] = code_b5[el[0] & 0b00011111]
        code[12:20] = code_b6[el[1] & 0b00111111]
    elif nbytes == 3:
        code[4:12] = code_b4[el[0] & 0b00001111]
        code[12:20] = code_b6[el[1] & 0b00111111]
        code[20:28] = code_b6[el[2] & 0b00111111]
    elif nbytes == 4:
        code[4:12] = code_b3[el[0] & 0b00000111]
        code[12:20] = code_b6[el[1] & 0b00111111]
        code[20:28] = code_b6[el[2] & 0b00111111]
        code[28:36] = code_b6[el[3] & 0b00111111]
    else:
        raise Exception("Bad input, input has to have 1 to 4 input bytes")
    return code

# TODO find a more efficient way of doing this as vector or matrix operations instead
def encode_utf8_multihot(c):
    e_c = list(c.encode())
    nbytes = len(e_c)
    assert 0 < nbytes <= 4
    bin4 = eye4[nbytes-1]  # this adds redundant knowledge about the part
    # I will treat the first byte as always 8 bits; this makes it easier to decode later and adds additional information.
    # This has an extra benefit: when a code is present, only certain regions will become 1, giving an extra hint to the network.
    # per-byte one-hot sizes: 2**8 for the first byte, 2**6 for each continuation byte
    code = np.zeros(4 + (2**8) + 3*(2**6))
    masks = [0xff, 0x3f, 0x3f, 0x3f]
    indices = [256+4, 64+256+4, 2*64 + 256+4, 3*64 + 256+4]
    eyes = [eye256, eye64, eye64, eye64]
    code[:4] = bin4
    prev_i = 4
    for i, n, e, m in zip(indices[:nbytes], e_c, eyes[:nbytes], masks[:nbytes]):
        code[prev_i:i] = e[n & m]  # masking
        prev_i = i
    return code

def encode_utf8_ecc(l):
    # TODO ...
    el = l.encode()
    code = np.zeros(36)  # 32 is the size of the input code + 4 of the extra redundancy
    nbytes = len(el)
    assert 0 < nbytes <= 4
    bin4 = eye4[nbytes-1]  # this adds redundant knowledge about the part
    raise NotImplementedError("not implemented yet")
# -
list(zip([1,2,3,4], (1,2,3,4), (1,2,3,4)))
el = '€'.encode()
'{0:b} {1:b} {2:b}'.format(el[0], el[1], el[2])
encode_utf8('€')
el = 'á'.encode()
'{0:b} {0:b}'.format(el[0], el[1])
encode_utf8('á')
encode_utf8_multihot('€').shape
l1 = [97,0,0,0]
l2 = [98,0,0,0]
l3 = [99,0,0,0]
l4 = [254,200,210,210]
str(bytes([0x20]),'utf-8')
d = {bytes([1,2,3,4]): 'lala '}
d
2**21 # this should be enough to make the entire utf-8 encoding ... and much more
# +
# %%time
tt10 = get_npbintable(2**21)
# -
tt10[101]
# +
# Dense binary input codes
# code for 7 bits, Byte 1 of utf-8
code_b7 = get_npbintable(2**7)
t_zeros = np.zeros([2**7, 1])
code_b7 = np.append(t_zeros, code_b7, axis=1)
# code for 6 bits, Byte 2 to 4 of utf-8 -> this is going to be used later for all the other values
code_b6 = get_npbintable(2**6)
t_b6 = np.append(np.ones([2**6, 1]), np.zeros([2**6, 1]), axis=1)
code_b6 = np.append(t_b6, code_b6, axis=1)
# code for 5 bits, Byte 1 of a 2-byte utf-8 sequence
code_b5 = get_npbintable(2**5)
t_b5 = np.append(np.ones([2**5, 2]), np.zeros([2**5, 1]), axis=1)
code_b5 = np.append(t_b5, code_b5, axis=1)
# code for 4 bits, Byte 1 of a 3-byte utf-8 sequence
code_b4 = get_npbintable(2**4)
t_b4 = np.append(np.ones([2**4, 3]), np.zeros([2**4, 1]), axis=1)
code_b4 = np.append(t_b4, code_b4, axis=1)
# code for 3 bits, Byte 1 of a 4-byte utf-8 sequence
code_b3 = get_npbintable(2**3)
t_b3 = np.append(np.ones([2**3, 4]), np.zeros([2**3, 1]), axis=1)
code_b3 = np.append(t_b3, code_b3, axis=1)
# 4 bits
b4 = get_npbintable(2**4)
eye4 = np.eye(4)
# -
eye4
np.eye(16)
# In fact ... it seems that I can just split the utf-8 value into chunks and do one-hot per part:
# - there are only 4 segment ranges, which can be coded as one-hot; Hamming or another ECC can also be added there
# - the largest value is for 7 bits -> 128 values
# - the others contain 6 bits -> 64 values
#
# The prefix of each byte can be taken away and replaced by the initial one-hot.
#
# So a complete code would be
# $ 4 + 128 + 64 + 64 + 64 = 324 $
#
# plus the ECC parity bits,
#
# instead of having dimension 1,112,064 to encode any utf-8 value.
#
# The encoder is much simpler than I thought for this case; later I can add ECC for each part, and knowing that there is only one active bit in each row makes the task easier.
#
# This embedding can still be reduced but should be sparse enough already to make a good input
4 + 128 + 64 + 64 + 64
c
np.fromstring('0 0 1 0 1 0 0 1', dtype=bool, sep=' ')
np.fromstring('00101001', dtype=bool) # there seems to be an issue here on numpy ...
np.fromstring('0 0 1 0 1 0 0 1', dtype=int, sep=' ')
bins = np.fromstring('\n'.join(get_strbintable(16)), dtype='<H', sep=' ')
bins.reshape([16,-1])
np.array(get_strbintable(16))
#
# I tried to do some things with the first part of the code, turning bytes into a numpy array, but it seems the most efficient way would be a table: a numpy 2D array indexed by the int value of the input, where the row at that position is the binary code. This can already include a first pass that makes a one-hot of every N bits (maybe every 4, so there are not so many initial values); this matrix could also have the ECC pre-computed ...
#
# For the ECC, I still haven't decided whether to apply it per chunk of input bits or over all the values. I guess all the values should do, but it may be easier to compute by reshaping the input arrays to the code in use (for example, Golay [24,12,8] works on every 12 input bits).
#
# The idea is not to completely get rid of one-hot encoding, but to limit it to parts of the input vector code, restricting the size of the domain.
# number of parameters for a one-hot by chunks encoding:
chunk_sizes = [4, 5, 6, 8, 12]
n_params = []
for c in chunk_sizes:
    n_params.append((c, (32 // c) * 2**c))
n_params
# Maybe for my tests up to chunks of size 6 should be acceptable (I still need to add the next ECC)
#
# The next code can be:
# - Repetition (x3)
# - Hamming
# - Golay
# - Reed Solomon
# - Latin Square
# - AN Correcting
#
# Here some thoughts about the codes:
#
# Repetition: this has the disadvantage of giving redundancy that is quite obvious, besides its low power for reconstructing catastrophic errors. Just repeating does not necessarily give a neural network another perspective on the input. Might be worth trying, but for the moment I'm not interested in it.
#
# Hamming: it can correct one error (Hamming (7,4)), adding 3 parity bits per 4 data bits. With an extra parity bit it can also detect up to 2 errors.
#
# Golay: might serve well enough for my first tests, as it adds not too much overhead (it duplicates the number of elements) for an interesting error correction (up to 3 bits in each 12, so one quarter).
#
#
# There is one difference in focus between this analysis and telecommunications (or any other domain with a noisy channel): here I'm interested not in the code rate (amount of information sent vs. amount of actual bits sent) but in giving as input to the NN some form of not-necessarily-evident redundancy that it could use, and in having more ways to correct the output if a single mistake is made during the output extrapolation; I want to check this part.
#
# Thinking a bit more about auto-encoders, it might not be the best idea to start there as it might not give any useful information ... I still have to try some things; I might give it a try if it is quick enough to build once I have the input code.
#
#
# For efficiency what I will do is build the encoding table from the beginning; for the decoding, I still need to think it through.
#
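# To make the Hamming option above concrete, here is a minimal Hamming (7,4) encoder built from a systematic generator matrix (a sketch of ours; commpy also ships channel-coding utilities, but this keeps the mechanics visible):

```python
import numpy as np

# systematic generator matrix: 4 data bits -> the same 4 bits plus 3 parity bits
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def hamming74(bits):
    # encode 4 bits into a 7-bit codeword (mod-2 arithmetic)
    return np.asarray(bits) @ G % 2

hamming74([1, 0, 1, 1])  # data bits kept, parity bits appended
```

Every nonzero codeword of this code has weight at least 3, which is what lets it correct any single bit flip.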
# +
import torch
emd = torch.nn.Embedding(2**10, 300)
# -
model_parameters = filter(lambda p: p.requires_grad, emd.parameters())
params = sum([np.prod(p.size()) for p in model_parameters])
params
# from https://discuss.pytorch.org/t/how-do-i-check-the-number-of-parameters-of-a-model/4325/7
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
count_parameters(emd)
# The embedding layer is a fully connected layer ... this means a LOT of parameters
#
# To be able to do an effective one-hot of all utf-8 would be:
for i in [50, 100, 200, 300]:
    print(i, 1112064 * i)
# Which means I don't want to train that layer ... it would not even fit in my GPU
#
# There is another thing: the embedding layer learns from the sample input, which means it will ignore all values that don't appear or are underrepresented (a known issue). My goal is to deal with this via meta-learning techniques, while always being able to keep adding new inputs.
#
# So I want a few encoders to try:
#
# - chunked one-hot + hamming error correction of the first element
# - binary like encoding only (will be done per byte to avoid making a table that is too big)
# - binary like encoding with ECCs
# - binary like encoding but added one-hot by each 4 bits (total 16 * 8 -> 128 bits)
# - binary like encoding but added one-hot by each (4|12) bits plus ECC (total (16 * 8) + overhead), Hamming=224, Golay=256
#
#
#
128 + 32*3
# Well, for the moment I think that what I need can be done with multi-hot encoding, which is easier to decode with multiple log-softmax heads.
#
# For the most complex case of ECC there is much more work to do for the decoding, and even though I have an idea for the encoding part, I don't yet know how to do the decoding in a NN.
#
#
# +
# decoding utf8 encoded
nums = [0x01, 0x02, 0x03, 0x04]
masks = [0xf0, 0xe0, 0xd0, 0xc0]
# -
np.array(nums) | np.array(masks)
bytes([127])
from utf8_encoder import *
import pickle
tables = create_tables(segments=2)
# +
# tables = create_tables() # 4 segments by default
# -
len(tables)
# +
def save_obj(obj, name):
    with open(name + '.pkl', 'wb') as f:
        pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)

def load_obj(name):
    with open(name + '.pkl', 'rb') as f:
        return pickle.load(f)
# -
np.save("utf8_code_matrix_2seg", tables[0])
save_obj(tables[1], "txt2code_2seg")
save_obj(tables[2], "code2txt_2seg")
save_obj(tables[3], "txt2num_2seg")
save_obj(tables[4], "num2txt_2seg")
t2c = tables[1]
c2t = tables[2]
n2t = tables[4]
t2n = tables[3]
len(t2n.keys()), len(n2t.keys()), len(tables[1].keys()), len(tables[2].keys()),
# Although Wikipedia says:
#
# UTF-8 is a variable width character encoding capable of encoding all 1,112,064[1] valid code points in Unicode using one to four 8-bit bytes.
#
# we have managed to encode only 1107904 codes, so somehow we are missing 4160 codes that Python can't encode from bytes:
1112064 - 1107904
128 + (2**5 * 2**6)+ (2**4 * (2**6)**2) + (2**3 * (2**6)**3)
2**21 + 2**16 + 2**11 + 2**7
print("indices: ", 128, (128 + 2**5 * 2**6), (128 + 2**4 * (2**6)**2), (128 + 2**3 * (2**6)**3) )
# +
# from: https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
from itertools import islice
def take(n, iterable):
    "Return first n items of the iterable as a list"
    return list(islice(iterable, n))
# -
take(100, n2t.items())
t2n['\x09']
len(take(10, t2c.items())[0][1])
import torch
from torch import sparse
codes = torch.from_numpy(tables[0])
# +
# from https://discuss.pytorch.org/t/how-to-convert-a-dense-matrix-to-a-sparse-one/7809
def to_sparse(x):
    """Convert dense tensor x to sparse format."""
    x_typename = torch.typename(x).split('.')[-1]
    sparse_tensortype = getattr(torch.sparse, x_typename)
    indices = torch.nonzero(x)
    if len(indices.shape) == 0:  # all elements are zero
        return sparse_tensortype(*x.shape)
    indices = indices.t()
    values = x[tuple(indices[i] for i in range(indices.shape[0]))]
    return sparse_tensortype(indices, values, x.size())
# -
scodes = to_sparse(codes)
scodes.is_sparse
type(scodes)
# +
# pytorch sparse can't be saved yet ... not implemented for the moment (I should do it myself and send the patch)
# torch.save(scodes, "utf8-codes.pt")
# save_obj(scodes, "utf8-codes.torch")
# -
import scipy as sp
import scipy.sparse
spcodes = sp.sparse.coo_matrix(tables[0])
save_obj(spcodes, "utf8-codes-scipy-sparse_3seg")
# So for the moment we can encode all UTF-8 characters, but the complete table is still quite big. I'll try to cut the memory usage, because 6.8GB for the "dense" matrix representation is too much; in sparse mode the matrix is only 83MB for the entire set. Nevertheless there are many characters that I will not be using for the first tests, so having it use only a part will (should) be enough.
#
# So I'll see how big the encoder is using only the first 3 segments instead of all 4 (this should be enough for most applications), with which we can encode:
#
# number of codes = 59328
#
# number of code_exceptions = 4224
#
# The entire code table is now 206MB on disk in non-sparse mode and 3.6MB in sparse mode.
#
# Also, reducing the number of bytes in the code (using only 3 bytes max instead of 4, dropping the last byte that we are not using for this application anyway) brings the complete "dense" code down to 177MB on disk, and 3.6MB in sparse mode.
#
# I would not recommend doing this in general, as it restricts the input network to known elements only (and we want to handle all possible codes), but for my tests it reduces memory usage, the number of parameters, and the processing time.
#
# So I can start playing with it without worrying about memory ;)
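# A rough sketch of why COO sparse storage wins here (a toy stand-in with one nonzero per row; the matrix shape uses the sizes from the notes above, and the real table layout differs):

```python
import numpy as np
import scipy.sparse as sp

# Toy one-hot-like code matrix: 59,328 codes x (3 segments * 256) columns.
n_codes, width = 59328, 3 * 256
dense = np.zeros((n_codes, width), dtype=np.float32)
dense[np.arange(n_codes), np.arange(n_codes) % width] = 1.0

coo = sp.coo_matrix(dense)
dense_mb = dense.nbytes / 2**20
# COO stores a (row, col, value) triple per nonzero.
coo_mb = (coo.row.nbytes + coo.col.nbytes + coo.data.nbytes) / 2**20
print(f"dense: {dense_mb:.0f} MiB, sparse COO: {coo_mb:.2f} MiB")
```

With one nonzero per row, the dense float32 matrix is in the hundreds of MiB while the COO triples fit in under a MiB, matching the on-disk ratios observed above.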
#
#
type(c2t)
| predictors/sequence/text/ecc_study_simple.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from matopt_review.BOtest import BOtest
import numpy as np
# +
#Run an optimization for goal1 using PA
ASF = None
setting = "Recsat"
goal1 = [-140.13, -83.69, -4.06] # Maximization problems are converted to minimization by multiplying by -1.
n_init = 10
bo = BOtest(funcname="OilSorbent", ASF=ASF)
bo.run_BO(setting,
max_iter=10,# Number of Bayesian optimization steps.
sampling=1,
goals=goal1,
N_init=n_init,
exact_feval=True,
)
Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0)*-1
X_results = bo.bowrap.bo_org.X
# +
#Run an optimization for goal1 using achievement_LCB
ASF = "achievement"
setting = "UCB"
goal1 = [-140.13, -83.69, -4.06] # Maximization problems are converted to minimization by multiplying by -1.
n_init = 10
bo = BOtest(funcname="OilSorbent", ASF=ASF)
bo.run_BO(setting,
max_iter=10,# Number of Bayesian optimization steps.
sampling=1,
goals=goal1,
N_init=n_init,
exact_feval=True,
)
Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0)*-1
X_results = bo.bowrap.bo_org.X
| Samples/codes/Example_of_running_virtual_inverse_material_design.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Conv2D, Flatten, Dropout, MaxPooling2D,BatchNormalization
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import wandb
from tensorflow.keras import layers
from tensorflow.keras import backend
CLASS_NAMES = np.array([])
for i in range(10):
    CLASS_NAMES = np.append(CLASS_NAMES, chr(ord("0")+i))
for i in range(26):
    CLASS_NAMES = np.append(CLASS_NAMES, chr(ord("A")+i))
#functions
def parse_label(strIN):
    return (strIN==CLASS_NAMES).astype(float)

def readimg_to_tensor(fn):
    a = tf.io.read_file(fn)
    img = tf.io.decode_jpeg(a)
    # note: convert_image_dtype already rescales uint8 input to [0, 1]; the extra /255.
    # is harmless here since per_image_standardization normalizes the result anyway
    img = tf.image.convert_image_dtype(img, dtype=tf.float16)/255.
    img = tf.image.per_image_standardization(img)
    return img
#training data 01
def prepData01():
    train_dir_data01 = os.path.abspath(os.getcwd())+"/train/data01_train/"
    data = pd.read_csv("./train/data01_train.csv")
    for i in range(len(data)):
        data.iloc[i,1] = list(data.iloc[i,1])
    arr = np.zeros([6,50000],str)
    for j in range(6):
        for i in range(len(data)):
            arr[j][i] = data.iloc[i,1][j]
    data = data.drop(columns=["code"]).join(pd.DataFrame(arr.transpose(),columns=["code0","code1",'code2','code3','code4','code5']))
    for i in range(6):
        data.iloc[:,i+1] = data.iloc[:,i+1].map(parse_label)
    TL = tf.convert_to_tensor(list(data.iloc[:,1].values))
    TL2 = tf.convert_to_tensor(list(data.iloc[:,2].values))
    TL3 = tf.convert_to_tensor(list(data.iloc[:,3].values))
    TL4 = tf.convert_to_tensor(list(data.iloc[:,4].values))
    TL5 = tf.convert_to_tensor(list(data.iloc[:,5].values))
    TL6 = tf.convert_to_tensor(list(data.iloc[:,6].values))
    train_dir = tf.data.Dataset.list_files(train_dir_data01+'*.jpg',shuffle=False)
    train_data = train_dir.map(lambda x: readimg_to_tensor(x))
    train_label = tf.data.Dataset.from_tensor_slices((TL,TL2,TL3,TL4,TL5,TL6))
    train_data01 = tf.data.Dataset.zip((train_data,train_label))
    validation_split = 0.1
    num_elements = 50000
    split = int(num_elements*validation_split)
    train_data_gen = iter(train_data01.shuffle(50000,seed = 102).batch(50000))
    img, lb = next(train_data_gen)
    img_train = img[:-split]
    lb_train = (lb[0][:-split],lb[1][:-split],lb[2][:-split],lb[3][:-split],lb[4][:-split],lb[5][:-split])
    img_test = img[-split:]
    lb_test = (lb[0][-split:],lb[1][-split:],lb[2][-split:],lb[3][-split:],lb[4][-split:],lb[5][-split:])
    return img_train, lb_train, img_test, lb_test
#choose data
img_train, lb_train, img_test, lb_test= prepData01()
filepath ="h5s/" #"/content/drive/Shared drives/TingWeis_Drive/Colab Notebooks/Saved Model/"#
config_default = {
"weight_regu":1e-6,
"LR":1e-3,
"dropout":0.3,
"name":"Mymodel",
"batch":256
}
#Slyne config
def Slyne_model(config=config_default):
initializer = tf.keras.initializers.he_normal( seed = 3)
alpha = config["weight_regu"] # weight decay coefficient
regularizer = tf.keras.regularizers.l2(alpha)
dropout_rate =config["dropout"]
inn = Input((60, 200, 3))
out = inn
out = Conv2D(filters=64, kernel_size=(3, 3), padding='valid', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = Conv2D(filters=64, kernel_size=(3, 3), padding='same', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Dropout(dropout_rate)(out)
out = Conv2D(filters=128, kernel_size=(3, 3), padding='valid', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = Conv2D(filters=128, kernel_size=(3, 3), padding='same', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Dropout(dropout_rate)(out)
out = Conv2D(filters=256, kernel_size=(3, 3), padding='valid', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = Conv2D(filters=256, kernel_size=(3, 3), padding='same', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Dropout(dropout_rate)(out)
out = Conv2D(filters=512, kernel_size=(3, 3), padding='valid', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = Conv2D(filters=512, kernel_size=(3, 3), padding='same', kernel_initializer = initializer,kernel_regularizer = regularizer)(out)
out = BatchNormalization()(out)
out = layers.Activation('relu')(out)
out = layers.GlobalAveragePooling2D()(out)
out = Dropout(dropout_rate)(out)
out = layers.RepeatVector(6)(out)
sep = layers.GRU(128,return_sequences=True)(out)
sep0 = layers.Lambda(lambda x: x[:, 0, :])(sep)
sep1 = layers.Lambda(lambda x: x[:, 1, :])(sep)
sep2 = layers.Lambda(lambda x: x[:, 2, :])(sep)
sep3 = layers.Lambda(lambda x: x[:, 3, :])(sep)
sep4 = layers.Lambda(lambda x: x[:, 4, :])(sep)
sep5 = layers.Lambda(lambda x: x[:, 5, :])(sep)
dig1 = layers.Dense(36, name='digit1', activation='softmax')(sep0)
dig2 = layers.Dense(36, name='digit2', activation='softmax')(sep1)
dig3 = layers.Dense(36, name='digit3', activation='softmax')(sep2)
dig4 = layers.Dense(36, name='digit4', activation='softmax')(sep3)
dig5 = layers.Dense(36, name='digit5', activation='softmax')(sep4)
dig6 = layers.Dense(36, name='digit6', activation='softmax')(sep5)
model = tf.keras.models.Model(inputs=inn, outputs=[dig1, dig2,dig3,dig4,dig5,dig6],name="test")
model.compile(
loss=[
tf.keras.losses.CategoricalCrossentropy(),
tf.keras.losses.CategoricalCrossentropy(),
tf.keras.losses.CategoricalCrossentropy(),
tf.keras.losses.CategoricalCrossentropy(),
tf.keras.losses.CategoricalCrossentropy(),
tf.keras.losses.CategoricalCrossentropy(),
],
optimizer=tf.keras.optimizers.Adam(learning_rate=config["LR"], beta_1=0.9),  # use the passed-in config, not the global myconfig
metrics=['accuracy'])
return model
# +
def trainModel(img_train, lb_train, callB, epochs=100, config=config_default):
    # note: uses the globally defined `model`
    history = model.fit(
        img_train,
        lb_train,
        batch_size=config["batch"],
        shuffle=True,
        epochs=epochs,
        verbose=1,
        callbacks=callB
    )
    return history
def switch_to_SGD():
    model.compile(
        loss=[tf.keras.losses.CategoricalCrossentropy() for _ in range(6)],  # one loss per digit head
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.0001, momentum=0.9),
        metrics=['accuracy'])
class myCallback(tf.keras.callbacks.Callback):
    def __init__(self, config=config_default, **kwargs):
        super().__init__(**kwargs)
        self.config = config
        self.best = 0
        self.epochs = 0
        self.wait = 0
        self.reduce_once = False
        self.patient = 8  # wait 8 epochs before early stopping
    def on_epoch_end(self, epoch, logs=None):
        self.epochs += 1
        b = np.prod([logs['digit1_accuracy'],
                     logs['digit2_accuracy'],
                     logs['digit3_accuracy'],
                     logs['digit4_accuracy'],
                     logs['digit5_accuracy'],
                     logs['digit6_accuracy'],
                     ])
        result = self.model.predict(img_test)
        per_digit_acc, val_acc_all = allright(result, lb_test)
        digit_acc = np.prod(per_digit_acc)
        wandb.log({"loss":logs["loss"],"accuracy": b, "epoch": self.epochs, "val_accuracy_all":val_acc_all, "val_accuracy":digit_acc})
        print("\nepoch: {}, loss: {:.4f}, accuracy: {:.5f}, val_accuracy_all: {:.5f}, val_accuracy: {:.5f}, val_per_digit_accuracy: {}".format(
            self.epochs, logs["loss"], b, val_acc_all, digit_acc, per_digit_acc))
        if val_acc_all > self.best:
            print(" ***** accuracy improved from {:.5f} to {:.5f}!! *****".format(self.best, val_acc_all))
            print("Model saved to "+filepath+self.config["name"]+".hdf5\n")
            self.best = val_acc_all
            self.model.save(filepath+self.config["name"]+".hdf5")
            self.wait = 0
            wandb.run.summary["best_val_accuracy_all"] = val_acc_all
            wandb.run.summary["best_epoch"] = self.epochs
        elif self.epochs > 30:
            if self.wait >= self.patient:  # switch to SGD, or reduce lr if already on SGD
                if hasattr(self.model.optimizer, 'momentum') and not self.reduce_once:  # halve lr on SGD if not done yet
                    self.model.optimizer.lr = self.model.optimizer.lr/2
                    print("\n\n Change learning rate to {}, continuing ... \n".format(self.model.optimizer.lr))
                    self.wait = 0
                    self.reduce_once = True
                else:
                    self.model.stop_training = True
                    self.wait = 0
                    print("Model is not learning, training stopped at epoch {}".format(epoch))
            print("Model accuracy did not improve from {:.4f}\n".format(self.best))
            self.wait += 1
class Switch_SGD_callback(tf.keras.callbacks.Callback):
    def __init__(self, on_train_end, config=config_default, **kwargs):
        super().__init__(**kwargs)
        self.do_on_train_end = on_train_end
        self.config = config
    def on_train_end(self, logs=None):
        self.do_on_train_end(self.config)

def do_after_train(config=config_default):
    '''do this after a session is over'''
    print("\n ******** Switching to SGD!! **********")
    switch_to_SGD()
    trainModel(img_train, lb_train, [myCB], config=config)  # use the passed-in config, not the global
def allright(inputdata, label):
    '''calculate the per-digit accuracy and the all-digits-correct accuracy'''
    a = np.argmax(inputdata, axis=2)
    b = np.argmax(label[0], axis=1)  # use the passed-in label, not the global lb_test
    b = np.expand_dims(b, 0)
    for i in range(5):
        b = np.concatenate((b, np.expand_dims(np.argmax(label[i+1], axis=1), 0)), axis=0)
    acc_digit = np.mean(a == b, axis=1)
    acc_all = np.mean(np.all(a.transpose() == b.transpose(), axis=1))
    return acc_digit, acc_all
# -
myconfig = {
"stack" : "conv 3x3 64-512 GRU 128 ",
"weight_regu" : 1e-5,
"dropout" : 0.2,
"name" : "slyneGA_GRU2",
"output_WR" : False,
"total_params" : 5066520,
"opt_schedule" : "1e-4 adam + 1e-4 SGD",
"batch" : 128,
"GlobalAvg":True,
"Augmentation":False,
"LR":1e-4
}
wandb.init(project="my-project-railway",reinit=True,name=myconfig["name"],config=myconfig)
#ini call back
myCB = myCallback(config = myconfig)
model=Slyne_model(config=myconfig)
trainModel(img_train, lb_train,[myCB,Switch_SGD_callback(do_after_train,config = myconfig)],config = myconfig)
| Test01_best.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: FinSpace PySpark (finspace-sparkmagic-5567a/latest)
# language: python
# name: pysparkkernel__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:656007506553:image/finspace-sparkmagic-5567a
# ---
# # Polygon.io
# Notebook to use polygon.io with Spark and FinSpace
#
# ## Reference
# [Polygon.io](https://polygon.io)
# [Python Client](https://github.com/polygon-io/client-python)
# +
# %local
from aws.finspace.cluster import FinSpaceClusterManager

# if this was already run, no need to run again
if 'finspace_clusters' not in globals():
    finspace_clusters = FinSpaceClusterManager()
    finspace_clusters.auto_connect()
else:
    print(f'connected to cluster: {finspace_clusters.get_connected_cluster_id()}')
# -
try:
    sc.install_pypi_package('polygon-api-client')
except Exception as e:
    print('Packages already installed')
# # Variables and Libraries
#
# You will need a Polygon `client_key`; fill in its value below.
#
# **IMPORTANT** Use a group ID from your FinSpace to grant permissions to for the dataset.
# +
import time
import pandas as pd
import urllib.parse as urlparse
from polygon import RESTClient
from urllib.parse import parse_qs
# User Group to grant access to the dataset
group_id = ''
dataset_id = None
client_key = ''
client = RESTClient(client_key)
# -
# # Get Tickers
#
# Using the Polygon APIs, create a table of all Tickers.
# +
# function to extract the pagination cursor
def get_cursor(url):
    parsed = urlparse.urlparse(url)
    cursor = parse_qs(parsed.query)['cursor']
    return cursor
resp = client.reference_tickers_v3(limit=1000)
all_tickers = []
if resp.status == 'OK':
    all_tickers.extend(resp.results)
    while hasattr(resp, 'next_url'):
        cursor = get_cursor(resp.next_url)
        resp = client.reference_tickers_v3(limit=1000, cursor=cursor)
        all_tickers.extend(resp.results)
# create pandas dataframe from the responses
tickers_df = pd.DataFrame.from_records(all_tickers)
tickers_df
# -
# # Convert to Spark DataFrame
# +
from pyspark.sql.types import *
# Auxiliary functions
def equivalent_type(f):
    if f == 'datetime64[ns]': return TimestampType()
    elif f == 'int64': return LongType()
    elif f == 'int32': return IntegerType()
    elif f == 'float64': return FloatType()  # DoubleType() would preserve full float64 precision
    elif f == 'bool': return BooleanType()
    else: return StringType()
def define_structure(string, format_type):
    try: typo = equivalent_type(format_type)
    except: typo = StringType()
    return StructField(string, typo)

def get_schema(pandas_df):
    columns = list(pandas_df.columns)
    types = list(pandas_df.dtypes)
    struct_list = []
    for column, typo in zip(columns, types):
        struct_list.append(define_structure(column, typo))
    return StructType(struct_list)

# Given a pandas dataframe, return a Spark dataframe.
def pandas_to_spark(pandas_df):
    p_schema = get_schema(pandas_df)
    return sqlContext.createDataFrame(pandas_df, p_schema)
# +
tickersDF = pandas_to_spark(tickers_df)
tickersDF.printSchema()
# +
from pyspark.sql.functions import *
# convert the datatime column of string to proper timestamp type
tickersDF = ( tickersDF
.withColumnRenamed('last_updated_utc', 'input_timestamp_str')
.withColumn("last_updated_utc",to_timestamp("input_timestamp_str"))
.drop('input_timestamp_str')
)
# -
# sample the table
tickersDF.show(5)
# # Python Helper Classes
# + jupyter={"source_hidden": true}
# # %load ../Utilities/finspace.py
import datetime
import time
import boto3
import os
import pandas as pd
import urllib
from urllib.parse import urlparse
from botocore.config import Config
from boto3.session import Session
# Base FinSpace class
class FinSpace:
def __init__(
self,
config=Config(retries={'max_attempts': 3, 'mode': 'standard'}),
boto_session: Session = None,
dev_overrides: dict = None,
service_name = 'finspace-data'):
"""
To configure this class object, simply instantiate with no-arg if hitting prod endpoint, or else override it:
e.g.
`hab = FinSpaceAnalyticsManager(region_name = 'us-east-1',
dev_overrides = {'hfs_endpoint': 'https://39g32x40jk.execute-api.us-east-1.amazonaws.com/alpha'})`
"""
self.hfs_endpoint = None
self.region_name = None
if dev_overrides is not None:
if 'hfs_endpoint' in dev_overrides:
self.hfs_endpoint = dev_overrides['hfs_endpoint']
if 'region_name' in dev_overrides:
self.region_name = dev_overrides['region_name']
else:
if boto_session is not None:
self.region_name = boto_session.region_name
else:
self.region_name = self.get_region_name()
self.config = config
self._boto3_session = boto3.session.Session(region_name=self.region_name) if boto_session is None else boto_session
print(f"service_name: {service_name}")
print(f"endpoint: {self.hfs_endpoint}")
print(f"region_name: {self.region_name}")
self.client = self._boto3_session.client(service_name, endpoint_url=self.hfs_endpoint, config=self.config)
@staticmethod
def get_region_name():
req = urllib.request.Request("http://169.254.169.254/latest/meta-data/placement/region")
with urllib.request.urlopen(req) as response:
return response.read().decode("utf-8")
# --------------------------------------
# Utility Functions
# --------------------------------------
@staticmethod
def get_list(all_list: dir, name: str):
"""
Search for name found in the all_list dir and return that list of things.
Removes repetitive code found in functions that call boto apis then search for the expected returned items
:param all_list: list of things to search
:type: dir:
:param name: name to search for in all_lists
:type: str
:return: list of items found in name
"""
r = []
# if the given name is found, add its entries to the list
if name in all_list:
for s in all_list[name]:
r.append(s)
# return the list
return r
# --------------------------------------
# Classification Functions
# --------------------------------------
def list_classifications(self):
"""
Return list of all classifications
:return: all classifications
"""
all_list = self.client.list_classifications(sort='NAME')
return self.get_list(all_list, 'classifications')
def classification_names(self):
"""
Get the classifications names
:return list of classifications names only
"""
classification_names = []
all_classifications = self.list_classifications()
for c in all_classifications:
classification_names.append(c['name'])
return classification_names
def classification(self, name: str):
"""
Exact name search for a classification of the given name
:param name: name of the classification to find
:type: str
:return
"""
all_classifications = self.list_classifications()
existing_classification = next((c for c in all_classifications if c['name'].lower() == name.lower()), None)
if existing_classification:
return existing_classification
def describe_classification(self, classification_id: str):
"""
Calls the describe classification API function and only returns the taxonomy portion of the response.
:param classification_id: the GUID of the classification to get description of
:type: str
"""
resp = None
taxonomy_details_resp = self.client.describe_taxonomy(taxonomyId=classification_id)
if 'taxonomy' in taxonomy_details_resp:
resp = taxonomy_details_resp['taxonomy']
return (resp)
def create_classification(self, classification_definition):
resp = self.client.create_taxonomy(taxonomyDefinition=classification_definition)
taxonomy_id = resp["taxonomyId"]
return (taxonomy_id)
def delete_classification(self, classification_id):
resp = self.client.delete_taxonomy(taxonomyId=classification_id)
if resp['ResponseMetadata']['HTTPStatusCode'] != 200:
return resp
return True
# --------------------------------------
# Attribute Set Functions
# --------------------------------------
def list_attribute_sets(self):
"""
Get list of all dataset_types in the system
:return: list of dataset types
"""
resp = self.client.list_dataset_types()
results = resp['datasetTypeSummaries']
while "nextToken" in resp:
resp = self.client.list_dataset_types(nextToken=resp['nextToken'])
results.extend(resp['datasetTypeSummaries'])
return (results)
def attribute_set_names(self):
"""
Get the list of all dataset type names
:return list of all dataset type names
"""
dataset_type_names = []
all_dataset_types = self.list_dataset_types()
for c in all_dataset_types:
dataset_type_names.append(c['name'])
return dataset_type_names
def attribute_set(self, name: str):
"""
Exact name search for a dataset type of the given name
:param name: name of the dataset type to find
:type: str
:return
"""
all_dataset_types = self.list_dataset_types()
existing_dataset_type = next((c for c in all_dataset_types if c['name'].lower() == name.lower()), None)
if existing_dataset_type:
return existing_dataset_type
def describe_attribute_set(self, attribute_set_id: str):
"""
Calls the describe dataset type API function and only returns the dataset type portion of the response.
:param attribute_set_id: the GUID of the dataset type to get description of
:type: str
"""
resp = None
dataset_type_details_resp = self.client.describe_dataset_type(datasetTypeId=attribute_set_id)
if 'datasetType' in dataset_type_details_resp:
resp = dataset_type_details_resp['datasetType']
return (resp)
def create_attribute_set(self, attribute_set_def):
resp = self.client.create_dataset_type(datasetTypeDefinition=attribute_set_def)
att_id = resp["datasetTypeId"]
return (att_id)
def delete_attribute_set(self, attribute_set_id: str):
resp = self.client.delete_attribute_set(attributeSetId=attribute_set_id)
if resp['ResponseMetadata']['HTTPStatusCode'] != 200:
return resp
return True
def associate_attribute_set(self, att_name: str, att_values: list, dataset_id: str):
# get the attribute set by name, will need its id
att_set = self.attribute_set(att_name)
# get the dataset's information, will need the arn
dataset = self.describe_dataset_details(dataset_id=dataset_id)
# disassociate any existing relationship
try:
self.client.dissociate_dataset_from_dataset_type(datasetArn=dataset['arn'],
datasetTypeId=att_set['id'])
except:
print("Nothing to disassociate")
self.client.associate_dataset_with_dataset_type(datasetArn=dataset['arn'], datasetTypeId=att_set['id'])
ret = self.client.update_dataset_type_context(datasetArn=dataset['arn'], datasetTypeId=att_set['id'],
values=att_values)
return ret
# --------------------------------------
# Permission Group Functions
# --------------------------------------
def list_permission_groups(self, max_results: int):
all_perms = self.client.list_permission_groups(MaxResults=max_results)
return (self.get_list(all_perms, 'permissionGroups'))
def permission_group(self, name):
all_groups = self.list_permission_groups(max_results = 100)
existing_group = next((c for c in all_groups if c['name'].lower() == name.lower()), None)
if existing_group:
return existing_group
def describe_permission_group(self, permission_group_id: str):
resp = None
perm_resp = self.client.describe_permission_group(permissionGroupId=permission_group_id)
if 'permissionGroup' in perm_resp:
resp = perm_resp['permissionGroup']
return (resp)
# --------------------------------------
# Dataset Functions
# --------------------------------------
def describe_dataset_details(self, dataset_id: str):
"""
Calls the describe dataset details API function and only returns the dataset details portion of the response.
:param dataset_id: the GUID of the dataset to get description of
:type: str
"""
resp = None
dataset_details_resp = self.client.describe_dataset_details(datasetId=dataset_id)
if 'dataset' in dataset_details_resp:
resp = dataset_details_resp["dataset"]
return (resp)
def create_dataset(self, name: str, description: str, permission_group_id: str, dataset_permissions: [], kind: str,
owner_info, schema):
"""
Create a dataset
Warning, dataset names are not unique, be sure to check for the same name dataset before creating a new one
:param name: Name of the dataset
:type: str
:param description: Description of the dataset
:type: str
:param permission_group_id: permission group for the dataset
:type: str
:param dataset_permissions: permissions for the group on the dataset
:param kind: Kind of dataset, choices: TABULAR
:type: str
:param owner_info: owner information for the dataset
:param schema: Schema of the dataset
:return: the dataset_id of the created dataset
"""
if dataset_permissions:
request_dataset_permissions = [{"permission": permissionName} for permissionName in dataset_permissions]
else:
request_dataset_permissions = []
response = self.client.create_dataset(name=name,
permissionGroupId = permission_group_id,
datasetPermissions = request_dataset_permissions,
kind=kind,
description = description.replace('\n', ' '),
ownerInfo = owner_info,
schema = schema)
return response["datasetId"]
def ingest_from_s3(self,
s3_location: str,
dataset_id: str,
change_type: str,
wait_for_completion: bool = True,
format_type: str = "CSV",
format_params: dict = {'separator': ',', 'withHeader': 'true'}):
"""
Creates a changeset and ingests the data given in the S3 location into the changeset
:param s3_location: the source location of the data for the changeset, will be copied into the changeset
:stype: str
:param dataset_id: the identifier of the containing dataset for the changeset to be created for this data
:type: str
:param change_type: What is the kind of changetype? "APPEND", "REPLACE" are the choices
:type: str
:param wait_for_completion: Boolean, should the function wait for the operation to complete?
:type: str
:param format_type: format type, CSV, PARQUET, XML, JSON
:type: str
:param format_params: dictionary of format parameters
:type: dict
:return: the id of the changeset created
"""
create_changeset_response = self.client.create_changeset(
datasetId=dataset_id,
changeType=change_type,
sourceType='S3',
sourceParams={'s3SourcePath': s3_location},
formatType=format_type.upper(),
formatParams=format_params
)
changeset_id = create_changeset_response['changeset']['id']
if wait_for_completion:
self.wait_for_ingestion(dataset_id, changeset_id)
return changeset_id
def describe_changeset(self, dataset_id: str, changeset_id: str):
"""
Function to get a description of the given changeset for the given dataset
:param dataset_id: identifier of the dataset
:type: str
:param changeset_id: the identifier of the changeset
:type: str
:return: all information about the changeset, if found
"""
describe_changeset_resp = self.client.describe_changeset(datasetId=dataset_id, id=changeset_id)
return describe_changeset_resp['changeset']
def create_as_of_view(self, dataset_id: str, as_of_date: datetime, destination_type: str,
partition_columns: list = [], sort_columns: list = [], destination_properties: dict = {},
wait_for_completion: bool = True):
"""
Creates an 'as of' static view up to and including the requested 'as of' date provided.
:param dataset_id: identifier of the dataset
:type: str
:param as_of_date: as of date, will include changesets up to this date/time in the view
:type: datetime
:param destination_type: destination type
:type: str
:param partition_columns: columns to partition the data by for the created view
:type: list
:param sort_columns: column to sort the view by
:type: list
:param destination_properties: destination properties
:type: dict
:param wait_for_completion: should the function wait for the system to create the view?
:type: bool
:return str: GUID of the created view if successful
"""
create_materialized_view_resp = self.client.create_materialized_snapshot(
datasetId=dataset_id,
asOfTimestamp=as_of_date,
destinationType=destination_type,
partitionColumns=partition_columns,
sortColumns=sort_columns,
autoUpdate=False,
destinationProperties=destination_properties
)
view_id = create_materialized_view_resp['id']
if wait_for_completion:
self.wait_for_view(dataset_id=dataset_id, view_id=view_id)
return view_id
def create_auto_update_view(self, dataset_id: str, destination_type: str,
partition_columns=[], sort_columns=[], destination_properties={},
wait_for_completion=True):
"""
Creates an auto-updating view of the given dataset
:param dataset_id: identifier of the dataset
:type: str
:param destination_type: destination type
:type: str
:param partition_columns: columns to partition the data by for the created view
:type: list
:param sort_columns: column to sort the view by
:type: list
:param destination_properties: destination properties
:type: str
:param wait_for_completion: should the function wait for the system to create the view?
:type: bool
:return str: GUID of the created view if successful
"""
create_materialized_view_resp = self.client.create_materialized_snapshot(
datasetId=dataset_id,
destinationType=destination_type,
partitionColumns=partition_columns,
sortColumns=sort_columns,
autoUpdate=True,
destinationProperties=destination_properties
)
view_id = create_materialized_view_resp['id']
if wait_for_completion:
self.wait_for_view(dataset_id=dataset_id, view_id=view_id)
return view_id
def wait_for_ingestion(self, dataset_id: str, changeset_id: str, sleep_sec=10):
"""
function that will continuously poll the changeset creation to ensure it completes or fails before returning.
:param dataset_id: GUID of the dataset
:type: str
:param changeset_id: GUID of the changeset
:type: str
:param sleep_sec: seconds to wait between checks
:type: int
"""
while True:
status = self.describe_changeset(dataset_id=dataset_id, changeset_id=changeset_id)['status']
if status == 'SUCCESS':
print(f"Changeset complete")
break
elif status == 'PENDING' or status == 'RUNNING':
print(f"Changeset status is {status}, waiting {sleep_sec} sec ...")
time.sleep(sleep_sec)
continue
else:
raise Exception(f"Bad changeset status: {status}, failing now.")
def wait_for_view(self, dataset_id: str, view_id: str, sleep_sec=10):
"""
function that will continuously poll the view creation to ensure it completes or fails before returning.
:param dataset_id: GUID of the dataset
:type: str
:param view_id: GUID of the view
:type: str
:param sleep_sec: seconds to wait between checks
:type: int
"""
while True:
list_views_resp = self.client.list_materialization_snapshots(datasetId=dataset_id, maxResults=100)
matched_views = list(filter(lambda d: d['id'] == view_id, list_views_resp['materializationSnapshots']))
if len(matched_views) != 1:
size = len(matched_views)
raise Exception(f"Unexpected error: found {size} views that match the view Id: {view_id}")
status = matched_views[0]['status']
if status == 'SUCCESS':
print(f"View complete")
break
elif status == 'PENDING' or status == 'RUNNING':
print(f"View status is still {status}, waiting {sleep_sec} sec ...")
time.sleep(sleep_sec)
continue
else:
raise Exception(f"Bad view status: {status}, failing now.")
def list_changesets(self, dataset_id: str):
resp = self.client.list_changesets(datasetId=dataset_id, sortKey='CREATE_TIMESTAMP')
results = resp['changesets']
while "nextToken" in resp:
resp = self.client.list_changesets(datasetId=dataset_id, sortKey='CREATE_TIMESTAMP',
nextToken=resp['nextToken'])
results.extend(resp['changesets'])
return (results)
def list_views(self, dataset_id: str, max_results=50):
resp = self.client.list_materialization_snapshots(datasetId=dataset_id, maxResults=max_results)
results = resp['materializationSnapshots']
while "nextToken" in resp:
resp = self.client.list_materialization_snapshots(datasetId=dataset_id, maxResults=max_results,
nextToken=resp['nextToken'])
results.extend(resp['materializationSnapshots'])
return (results)
def list_datasets(self, max_results: int):
all_datasets = self.client.list_datasets(maxResults=max_results)
return (self.get_list(all_datasets, 'datasets'))
def list_dataset_types(self):
resp = self.client.list_dataset_types(sort='NAME')
results = resp['datasetTypeSummaries']
while "nextToken" in resp:
resp = self.client.list_dataset_types(sort='NAME', nextToken=resp['nextToken'])
results.extend(resp['datasetTypeSummaries'])
return (results)
@staticmethod
def get_execution_role():
"""
Convenience function from SageMaker to get the execution role of the user of the sagemaker studio notebook
:return: the ARN of the execution role in the sagemaker studio notebook
"""
import sagemaker as sm
e_role = sm.get_execution_role()
return (f"{e_role}")
def get_user_ingestion_info(self):
return (self.client.get_user_ingestion_info())
def upload_pandas(self, data_frame: pd.DataFrame):
import awswrangler as wr
resp = self.client.get_working_location(locationType='INGESTION')
upload_location = resp['s3Uri']
wr.s3.to_parquet(data_frame, f"{upload_location}data.parquet", index=False, boto3_session=self._boto3_session)
return upload_location
def ingest_pandas(self, data_frame: pd.DataFrame, dataset_id: str, change_type: str, wait_for_completion=True):
print("Uploading the pandas dataframe ...")
upload_location = self.upload_pandas(data_frame)
print("Data upload finished. Ingesting data ...")
return self.ingest_from_s3(upload_location, dataset_id, change_type, wait_for_completion, format_type='PARQUET')
def read_view_as_pandas(self, dataset_id: str, view_id: str):
"""
Returns a pandas dataframe of the view of the given dataset. Views in FinSpace can be quite large; be careful!
:param dataset_id:
:param view_id:
:return: Pandas dataframe with all data of the view
"""
import awswrangler as wr # use awswrangler to read the table
# @todo: switch to DescribeMaterialization when available in HFS
views = self.list_views(dataset_id=dataset_id, max_results=50)
filtered = [v for v in views if v['id'] == view_id]
if len(filtered) == 0:
raise Exception('No such view found')
if len(filtered) > 1:
raise Exception('Internal Server error')
view = filtered[0]
# 0. Ensure view is ready to be read
if (view['status'] != 'SUCCESS'):
status = view['status']
print(f'view run status is not ready: {status}. Returning empty.')
return
glue_db_name = view['destinationTypeProperties']['databaseName']
glue_table_name = view['destinationTypeProperties']['tableName']
# first determine whether the table has partitions; partitioned tables are read differently
p = wr.catalog.get_partitions(table=glue_table_name, database=glue_db_name, boto3_session=self._boto3_session)
def no_filter(partitions):
if len(partitions.keys()) > 0:
return True
return False
df = None
if len(p) == 0:
df = wr.s3.read_parquet_table(table=glue_table_name, database=glue_db_name,
boto3_session=self._boto3_session)
else:
spath = wr.catalog.get_table_location(table=glue_table_name, database=glue_db_name,
boto3_session=self._boto3_session)
cpath = wr.s3.list_directories(f"{spath}/*", boto3_session=self._boto3_session)
read_path = f"{spath}/"
# just one? Read it
if len(cpath) == 1:
read_path = cpath[0]
df = wr.s3.read_parquet(read_path, dataset=True, partition_filter=no_filter,
boto3_session=self._boto3_session)
# Query Glue table directly with wrangler
return df
@staticmethod
def get_schema_from_pandas(df: pd.DataFrame):
"""
Returns the FinSpace schema columns from the given pandas dataframe.
:param df: pandas dataframe to interrogate for the schema
:return: FinSpace column schema list
"""
# for translation to FinSpace's schema
# 'STRING'|'CHAR'|'INTEGER'|'TINYINT'|'SMALLINT'|'BIGINT'|'FLOAT'|'DOUBLE'|'DATE'|'DATETIME'|'BOOLEAN'|'BINARY'
DoubleType = "DOUBLE"
FloatType = "FLOAT"
DateType = "DATE"
StringType = "STRING"
IntegerType = "INTEGER"
LongType = "BIGINT"
BooleanType = "BOOLEAN"
TimestampType = "DATETIME"
hab_columns = []
for name in dict(df.dtypes):
p_type = df.dtypes[name]
switcher = {
"float64": DoubleType,
"int64": IntegerType,
"datetime64[ns, UTC]": TimestampType,
"datetime64[ns]": DateType
}
habType = switcher.get(str(p_type), StringType)
hab_columns.append({
"dataType": habType,
"name": name,
"description": ""
})
return (hab_columns)
@staticmethod
def get_date_cols(df: pd.DataFrame):
"""
Returns the date columns found in the pandas dataframe.
Pandas does the hard work to figure out which of the columns can be considered to be date columns.
:param df: pandas dataframe to interrogate for the schema
:return: list of column names that can be parsed as dates by pandas
"""
date_cols = []
for name in dict(df.dtypes):
p_type = df.dtypes[name]
if str(p_type).startswith("date"):
date_cols.append(name)
return (date_cols)
def get_best_schema_from_csv(self, path, is_s3=True, read_rows=500, sep=','):
"""
Uses multiple reads of the file with pandas to determine schema of the referenced files.
Files are expected to be csv.
:param path: path to the files to read
:type: str
:param is_s3: True if the path is s3; False if filesystem
:type: bool
:param read_rows: number of rows to sample for determining schema
:param sep: field delimiter of the csv files
:type: str
:return dict: schema for FinSpace
"""
#
# best efforts to determine the schema, sight unseen
import awswrangler as wr
# 1: get the base schema
df1 = None
if is_s3:
df1 = wr.s3.read_csv(path, nrows=read_rows, sep=sep)
else:
df1 = pd.read_csv(path, nrows=read_rows, sep=sep)
num_cols = len(df1.columns)
# with number of columns, try to infer dates
df2 = None
if is_s3:
df2 = wr.s3.read_csv(path, parse_dates=list(range(0, num_cols)), infer_datetime_format=True,
nrows=read_rows, sep=sep)
else:
df2 = pd.read_csv(path, parse_dates=list(range(0, num_cols)), infer_datetime_format=True, nrows=read_rows,
sep=sep)
date_cols = self.get_date_cols(df2)
# with dates known, parse the file fully
df = None
if is_s3:
df = wr.s3.read_csv(path, parse_dates=date_cols, infer_datetime_format=True, nrows=read_rows, sep=sep)
else:
df = pd.read_csv(path, parse_dates=date_cols, infer_datetime_format=True, nrows=read_rows, sep=sep)
schema_cols = self.get_schema_from_pandas(df)
return (schema_cols)
def s3_upload_file(self, source_file: str, s3_destination: str):
"""
Uploads a local file (full path) to the s3 destination given (expected form: s3://<bucket>/<prefix>/).
The filename will have spaces replaced with _.
:param source_file: path of file to upload
:param s3_destination: full path to where to save the file
:type: str
"""
hab_s3_client = self._boto3_session.client(service_name='s3')
o = urlparse(s3_destination)
bucket = o.netloc
prefix = o.path.lstrip('/')
fname = os.path.basename(source_file)
hab_s3_client.upload_file(source_file, bucket, f"{prefix}{fname.replace(' ', '_')}")
def list_objects(self, s3_location: str):
"""
lists the objects found at the s3_location. Strips out the boto API response header,
just returns the contents of the location. Internally uses the list_objects_v2.
:param s3_location: path, starting with s3:// to get the list of objects from
:type: str
"""
o = urlparse(s3_location)
bucket = o.netloc
prefix = o.path.lstrip('/')
results = []
hab_s3_client = self._boto3_session.client(service_name='s3')
paginator = hab_s3_client.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket=bucket, Prefix=prefix)
for page in pages:
if 'Contents' in page:
results.extend(page['Contents'])
return (results)
def list_clusters(self, status: str = None):
"""
Lists current clusters and their statuses
:param status: status to filter for
:return dict: list of clusters
"""
resp = self.client.list_clusters()
clusters = []
if 'clusters' not in resp:
return (clusters)
for c in resp['clusters']:
if status is None:
clusters.append(c)
else:
if c['clusterStatus']['state'] in status:
clusters.append(c)
return (clusters)
def get_cluster(self, cluster_id):
"""
Returns the cluster with the given cluster id, or None if not found
:param cluster_id: cluster id
"""
clusters = self.list_clusters()
for c in clusters:
if c['clusterId'] == cluster_id:
return (c)
return (None)
def update_cluster(self, cluster_id: str, template: str):
"""
Resize the given cluster to desired template
:param cluster_id: cluster id
:param template: target template to resize to
"""
cluster = self.get_cluster(cluster_id=cluster_id)
if cluster['currentTemplate'] == template:
print(f"Already using template: {template}")
return (cluster)
self.client.update_cluster(clusterId=cluster_id, template=template)
return (self.get_cluster(cluster_id=cluster_id))
def wait_for_status(self, clusterId: str, status: str, sleep_sec=10, max_wait_sec=900):
"""
Function polls service until cluster is in desired status.
:param clusterId: the cluster's ID
:param status: desired status for cluster to reach
:param sleep_sec: seconds to wait between checks
:param max_wait_sec: maximum seconds to wait before giving up
"""
total_wait = 0
while total_wait < max_wait_sec:
resp = self.client.list_clusters()
this_cluster = None
# is this the cluster?
for c in resp['clusters']:
if clusterId == c['clusterId']:
this_cluster = c
if this_cluster is None:
print(f"clusterId:{clusterId} not found")
return (None)
this_status = this_cluster['clusterStatus']['state']
if this_status.upper() != status.upper():
print(f"Cluster status is {this_status}, waiting {sleep_sec} sec ...")
time.sleep(sleep_sec)
total_wait = total_wait + sleep_sec
continue
else:
return (this_cluster)
def get_working_location(self, locationType='SAGEMAKER'):
resp = None
location = self.client.get_working_location(locationType=locationType)
if 's3Uri' in location:
resp = location['s3Uri']
return (resp)
# + jupyter={"source_hidden": true}
# # %load ../Utilities/finspace_spark.py
import datetime
import time
import boto3
from botocore.config import Config
# FinSpace class with Spark bindings
class SparkFinSpace(FinSpace):
import pyspark
def __init__(
self,
spark: pyspark.sql.session.SparkSession = None,
config = Config(retries = {'max_attempts': 0, 'mode': 'standard'}),
dev_overrides: dict = None
):
FinSpace.__init__(self, config=config, dev_overrides=dev_overrides)
self.spark = spark # used on Spark cluster for reading views, creating changesets from DataFrames
def upload_dataframe(self, data_frame: pyspark.sql.dataframe.DataFrame):
resp = self.client.get_user_ingestion_info()
upload_location = resp['ingestionPath']
# data_frame.write.option('header', 'true').csv(upload_location)
data_frame.write.parquet(upload_location)
return upload_location
def ingest_dataframe(self, data_frame: pyspark.sql.dataframe.DataFrame, dataset_id: str, change_type: str, wait_for_completion=True):
print("Uploading data...")
upload_location = self.upload_dataframe(data_frame)
print("Data upload finished. Ingesting data...")
return self.ingest_from_s3(upload_location, dataset_id, change_type, wait_for_completion, format_type='parquet', format_params={})
def read_view_as_spark(
self,
dataset_id: str,
view_id: str
):
# TODO: switch to DescribeMaterialization when available in HFS
views = self.list_views(dataset_id=dataset_id, max_results=50)
filtered = [v for v in views if v['id'] == view_id]
if len(filtered) == 0:
raise Exception('No such view found')
if len(filtered) > 1:
raise Exception('Internal Server error')
view = filtered[0]
# 0. Ensure view is ready to be read
if (view['status'] != 'SUCCESS'):
status = view['status']
print(f'view run status is not ready: {status}. Returning empty.')
return
glue_db_name = view['destinationTypeProperties']['databaseName']
glue_table_name = view['destinationTypeProperties']['tableName']
# Query Glue table directly with catalog function of spark
return self.spark.table(f"`{glue_db_name}`.`{glue_table_name}`")
def get_schema_from_spark(self, data_frame: pyspark.sql.dataframe.DataFrame):
from pyspark.sql.types import StructType
# for translation to FinSpace's schema
# 'STRING'|'CHAR'|'INTEGER'|'TINYINT'|'SMALLINT'|'BIGINT'|'FLOAT'|'DOUBLE'|'DATE'|'DATETIME'|'BOOLEAN'|'BINARY'
DoubleType = "DOUBLE"
FloatType = "FLOAT"
DateType = "DATE"
StringType = "STRING"
IntegerType = "INTEGER"
LongType = "BIGINT"
BooleanType = "BOOLEAN"
TimestampType = "DATETIME"
hab_columns = []
items = [i for i in data_frame.schema]
switcher = {
"BinaryType" : StringType,
"BooleanType" : BooleanType,
"ByteType" : IntegerType,
"DateType" : DateType,
"DoubleType" : FloatType,
"IntegerType" : IntegerType,
"LongType" : IntegerType,
"NullType" : StringType,
"ShortType" : IntegerType,
"StringType" : StringType,
"TimestampType" : TimestampType,
}
for i in items:
# print( f"name: {i.name} type: {i.dataType}" )
habType = switcher.get( str(i.dataType), StringType)
hab_columns.append({
"dataType" : habType,
"name" : i.name,
"description" : ""
})
return( hab_columns )
# -
# initialize the FinSpace helper object
finspace = SparkFinSpace(spark=spark)
# # Create FinSpace Dataset
#
# Using the FinSpace APIs, we will define the dataset, add a changeset, create an auto-updating view, and associate and populate attributes for the dataset.
#
# ## Definitions
#
# Here are the various data elements we need for creating the dataset.
# +
# Name for the dataset
name = "Ticker Universe"
# description for the dataset
description = "All ticker symbols which are supported by Polygon.io"
# this is the attribute set to use, will search for it in system, this name assumes the Capital Markets Sample Data Bundle was installed
att_name = "Sample Data Attribute Set"
# Attributes to associate, based on the definition of the attribute set
att_values = [
{ 'field' : 'AssetClass', 'type' : 'TAXONOMY', 'values' : [ 'Equity', 'CommonStocks', 'Currencies', 'FXSpot', 'Crypto'] },
{ 'field' : 'DataType', 'type' : 'TAXONOMY', 'values' : [ 'Referencedata' ] },
{ 'field' : 'Source', 'type' : 'TAXONOMY', 'values' : [ ] },
{ 'field' : 'EventType', 'type' : 'TAXONOMY', 'values' : [ ] },
{ 'field' : 'SampleData', 'type' : 'TAXONOMY', 'values' : [ ] }
]
# Permissions to grant the above group for the created dataset
basicPermissions = [
"ViewDatasetDetails",
"ReadDatasetData",
"AddDatasetData",
"CreateSnapshot",
"EditDatasetMetadata",
"ManageDatasetPermissions",
"DeleteDataset"
]
# All datasets have ownership
basicOwnerInfo = {
"phoneNumber" : "12125551000",
"email" : "<EMAIL>",
"name" : "<NAME>"
}
# schema of the dataset
schema = {
'primaryKeyColumns': [],
'columns' : [
{'dataType': 'STRING', 'name': 'ticker', 'description': 'The exchange symbol that this item is traded under'},
{'dataType': 'STRING', 'name': 'name', 'description': 'The name of the asset. For stocks equities this will be the companies registered name. For crypto/fx this will be the name of the currency or coin pair'},
{'dataType': 'STRING', 'name': 'market', 'description': 'The market type of the asset'},
{'dataType': 'STRING', 'name': 'locale', 'description': 'The locale of the asset'},
{'dataType': 'STRING', 'name': 'primary_exchange', 'description': 'The ISO code of the primary listing exchange for this asset'},
{'dataType': 'STRING', 'name': 'type', 'description': 'The type of the asset'},
{'dataType': 'BOOLEAN', 'name': 'active', 'description': 'Whether or not the asset is actively traded. False means the asset has been delisted'},
{'dataType': 'STRING', 'name': 'currency_name', 'description': 'The name of the currency that this asset is traded with'},
{'dataType': 'STRING', 'name': 'cik', 'description': 'The CIK number for this ticker'},
{'dataType': 'STRING', 'name': 'composite_figi', 'description': 'The composite OpenFIGI number for this ticker'},
{'dataType': 'STRING', 'name': 'share_class_figi', 'description': 'The share Class OpenFIGI number for this ticker'},
{'dataType': 'STRING', 'name': 'currency_symbol', 'description': ''},
{'dataType': 'STRING', 'name': 'base_currency_symbol', 'description': ''},
{'dataType': 'STRING', 'name': 'base_currency_name', 'description': ''},
{'dataType': 'DATETIME', 'name': 'last_updated_utc', 'description': 'The last time this asset record was updated'}
]
}
# +
# call FinSpace to create the dataset if no ID was assigned
# if an ID was assigned, will not create a dataset but will simply add data to it
if dataset_id is None:
dataset_id = finspace.create_dataset(
name = name,
description = description,
permission_group_id = group_id,
dataset_permissions = basicPermissions,
kind = "TABULAR",
owner_info = basicOwnerInfo,
schema = schema
)
time.sleep(5)
print(f'Dataset ID: {dataset_id}')
# +
# ingest the data
change_type = 'REPLACE' # this changeset replaces the previously ingested data
print(f"Creating Changeset: {change_type}")
changeset_id = finspace.ingest_dataframe(data_frame=tickersDF, dataset_id = dataset_id, change_type=change_type, wait_for_completion=True)
isFirst = False
print(f"changeset_id = {changeset_id}")
# +
# Create an auto-updating View if one does not exist
existing_snapshots = finspace.list_views(dataset_id = dataset_id, max_results=100)
autoupdate_snapshot_id = None
# does one exist?
for ss in existing_snapshots:
if ss['autoUpdate'] == True:
autoupdate_snapshot_id = ss['id']
# if no auto-updating view, create it
if autoupdate_snapshot_id is None:
autoupdate_snapshot_id = finspace.create_auto_update_view(
dataset_id = dataset_id,
destination_type = "GLUE_TABLE",
partition_columns = [],
sort_columns = [],
wait_for_completion = False)
print( f"Created autoupdate_snapshot_id = {autoupdate_snapshot_id}" )
else:
print( f"Exists: autoupdate_snapshot_id = {autoupdate_snapshot_id}" )
print( f"dataset_id = {dataset_id}" )
# +
# Associate the attribute set and fill its values
# if values were previously populated for this attribute set they will be overwritten
if (att_name is not None and att_values is not None):
print(f"Associating values to attribute set: {att_name}")
finspace.associate_attribute_set(att_name=att_name, att_values=att_values, dataset_id=dataset_id)
# -
import datetime
print( f"Last Run: {datetime.datetime.now()}" )
| notebooks/third_party_apis/polygon_import.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Converting Tab Delimited ASCII file to a Vector Layer
#
# We have an ASCII Tab-Delimited text in the following format
# ```
# (cross section index) (no. of points along transect)
# x-coordinates of transect points
# y-coordinates of transect points
# pre-flood elevation z94 of transect points
# post-flood elevation z96 of transect points
# reconstructed bedrock elevation at transect points
# ```
# CRS: MTM (Modified Transverse Mercator projection) zone 7 coordinates (NAD83)
#
# We can create a CSV with the point coordinates stored in x and y columns. QGIS can read this format easily and display the data.
input = 'crossSections.txt'
output = 'crossSections.csv'
data = []
with open(input, 'r') as f:
# skip first line
f.readline()
for line in f:
# Each transect record starts with a header line: transect index and number of vertices
fid, numvertices = line.split()
x_coordinates = f.readline().split()
y_coordinates = f.readline().split()
z94_elevation = f.readline().split()
z96_elevation = f.readline().split()
bedrock_elevation = f.readline().split()
for x, y, z94, z96, bedrock in zip(x_coordinates, y_coordinates, z94_elevation, z96_elevation, bedrock_elevation):
data.append({'x': x, 'y': y, 'transact_id': int(fid), 'z94': float(z94), 'z96': float(z96), 'bedrock': float(bedrock)})
# +
import csv
with open(output, 'w') as csvfile:
fieldnames = ['transact_id', 'z94', 'z96', 'bedrock', 'x', 'y']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for row in data:
writer.writerow(row)
# -
# The resulting CSV can be imported using the *Add Delimited Text* tab in the QGIS Data Source Manager
#
# 
# The point layer loads in QGIS with the correct CRS specified.
#
# 
| misc/ascii_to_csv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv_multimodal
# language: python
# name: venv_multimodal
# ---
# +
#import argparse
import datetime
import sys
import json
from collections import defaultdict
from pathlib import Path
from tempfile import mkdtemp
import numpy as np
import torch
import torch.distributions as dist
from torch import optim
import math
import models
#import objectives
import objectives_dev as objectives
from utils import Logger, Timer, save_model, save_vars, unpack_data
from utils import log_mean_exp, is_multidata, kl_divergence
# +
#args
experiment = 'atac'
model = 'atac_dev' # use the VAE as a trial
obj = 'dreg'
K = 20
looser = False
llik_scaling = 0
batch_size = 256
epochs = 10
latent_dim = 20
num_hidden_layers = 1
learn_prior = False
logp = False
print_freq = 0
no_analytics = True
seed = 1
class params():
def __init__(self,
experiment,
model,
obj,
K,
looser,
llik_scaling,
batch_size,
epochs,
latent_dim,
num_hidden_layers,
learn_prior,
logp,
print_freq,
no_analytics,
seed):
self.experiment = experiment
self.model = model
self.obj = obj
self.K = K
self.looser = looser
self.llik_scaling = llik_scaling
self.batch_size = batch_size
self.epochs = epochs
self.latent_dim = latent_dim
self.num_hidden_layers = num_hidden_layers
self.learn_prior = learn_prior
self.logp = logp
self.print_freq = print_freq
self.no_analytics = no_analytics
self.seed = seed
args = params(experiment,
model,
obj,
K,
looser,
llik_scaling,
batch_size,
epochs,
latent_dim,
num_hidden_layers,
learn_prior,
logp,
print_freq,
no_analytics,
seed)
# random seed
# https://pytorch.org/docs/stable/notes/randomness.html
torch.backends.cudnn.benchmark = True
torch.manual_seed(args.seed)
np.random.seed(args.seed)
device = torch.device("cpu")
# load model
modelC = getattr(models, 'VAE_{}'.format(args.model))
model = modelC(args).to(device)
# -
print(model.vaes[0])
print('')
print(model.vaes[1])
# +
# preparation for training
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()),
lr=1e-3, amsgrad=True)
train_loader, test_loader = model.getDataLoaders(args.batch_size, device=device)
objective = getattr(objectives,
('m_' if hasattr(model, 'vaes') else '')
+ args.obj
+ ('_looser' if (args.looser and args.obj != 'elbo') else ''))
t_objective = getattr(objectives, ('m_' if hasattr(model, 'vaes') else '') + 'iwae')
# +
def train(epoch, agg):
model.train()
b_loss = 0
for i, dataT in enumerate(train_loader):
data = unpack_data(dataT, device=device) # this conversion is unnecessary for the RNA_ATAC MMVAE
optimizer.zero_grad()
loss = -objective(model, data, K=args.K)
loss.backward()
optimizer.step()
b_loss += loss.item()
if args.print_freq > 0 and i % args.print_freq == 0:
print("iteration {:04d}: loss: {:6.3f}".format(i, loss.item() / args.batch_size))
agg['train_loss'].append(b_loss / len(train_loader.dataset))
print('====> Epoch: {:03d} Train loss: {:.4f}'.format(epoch, agg['train_loss'][-1]))
def test(epoch, agg):
model.eval()
b_loss = 0
with torch.no_grad():
for i, dataT in enumerate(test_loader):
data = unpack_data(dataT, device=device) # this conversion is unnecessary for the RNA_ATAC MMVAE
#loss = -t_objective(model, data, K=args.K)
loss = -objective(model, data, K=args.K) # the purpose of t_objective is unclear, so use the same objective as in training
b_loss += loss.item()
# if i == 0:
# model.reconstruct(data, runPath, epoch)
# if not args.no_analytics:
# model.analyse(data, runPath, epoch)
agg['test_loss'].append(b_loss / len(test_loader.dataset))
print('====> Test loss: {:.4f}'.format(agg['test_loss'][-1]))
def estimate_log_marginal(K):
"""Compute an IWAE estimate of the log-marginal likelihood of test data."""
model.eval()
marginal_loglik = 0
with torch.no_grad():
for dataT in test_loader:
data = unpack_data(dataT, device=device)
marginal_loglik += -t_objective(model, data, K).item()
marginal_loglik /= len(test_loader.dataset)
print('Marginal Log Likelihood (IWAE, K = {}): {:.4f}'.format(K, marginal_loglik))
# +
model.train()
b_loss = 0
# run one epoch only
for i, dataT in enumerate(train_loader):
data = unpack_data(dataT, device=device)
# optimizer.zero_grad()
# loss = -objective(model, data, K=args.K)
# loss.backward()
# optimizer.step()
# b_loss += loss.item()
# +
data = unpack_data(dataT,device='cpu')
print(dataT)
print('')
print(data)
type(dataT)
print(is_multidata(dataT))
torch.is_tensor(dataT[0])
#data = [d.to(device) for d in list(zip(*dataT))[0]]
#len(data[0])
#len(data[1])
# -
def m_elbo(model, x, K=1):
"""Computes importance-sampled m_elbo (in notes3) for multi-modal vae """
qz_xs, px_zs, zss = model(x)
lpx_zs, klds = [], []
for r, qz_x in enumerate(qz_xs):
kld = kl_divergence(qz_x, model.pz(*model.pz_params))
klds.append(kld.sum(-1))
for d in range(len(px_zs)):
lpx_z = px_zs[d][d].log_prob(x[d]).view(*px_zs[d][d].batch_shape[:2], -1)
lpx_z = (lpx_z * model.vaes[d].llik_scaling).sum(-1)
if d == r:
lwt = torch.tensor(0.0)
else:
zs = zss[d].detach()
lwt = (qz_x.log_prob(zs) - qz_xs[d].log_prob(zs).detach()).sum(-1)
lpx_zs.append(lwt.exp() * lpx_z)
obj = (1 / len(model.vaes)) * (torch.stack(lpx_zs).sum(0) - torch.stack(klds).sum(0))
return obj.mean(0).sum()
obj = m_elbo(model,dataT,K=1)
print(obj)
for i, dataT in enumerate(train_loader):
data = unpack_data(dataT, device=device)
print(data.shape)
print(loss)
print(optimizer)
print(b_loss)
model.eval()
b_loss = 0
with torch.no_grad():
for i, dataT in enumerate(test_loader):
data = unpack_data(dataT, device=device)
#loss = -t_objective(model, data, K=args.K)
loss = -objective(model, data, K=args.K) # the purpose of t_objective is unclear, so use the same objective as in training
b_loss += loss.item()
# set up run path
runId = datetime.datetime.now().isoformat()
experiment_dir = Path('../experiments/' + args.experiment)
experiment_dir.mkdir(parents=True, exist_ok=True)
runPath = mkdtemp(prefix=runId, dir=str(experiment_dir))
#sys.stdout = Logger('{}/run.log'.format(runPath))
#print('Expt:', runPath)
#print('RunID:', runId)
with Timer('MM-VAE') as t:
agg = defaultdict(list)
for epoch in range(1, args.epochs + 1):
train(epoch, agg)
test(epoch, agg)
# save_model(model, runPath + '/model.rar')
# save_vars(agg, runPath + '/losses.rar')
# model.generate(runPath, epoch)
# if args.logp: # compute as tight a marginal likelihood as possible
# estimate_log_marginal(5000)
save_model(model, runPath + '/model.rar')
#class VAE のforward
model._qz_x_params = model.enc(dataT) # pass the data through the encoder to obtain the distribution parameters
qz_x = model.qz_x(*model._qz_x_params) # qz_x is a Laplace distribution
print(qz_x)
zs = qz_x.rsample(torch.Size([10])) # sample z from qz_x for each sample
print(zs.shape)
recon = model.dec(zs)
print(type(recon))
#print(recon.shape)
print(len(recon))
print(recon[0].shape)
print(recon[1].shape)
print(recon)
px_z = model.px_z(*model.dec(zs)) #likelihood
print(px_z)
# +
# objectives of choice
import torch
from numpy import prod
from utils import log_mean_exp, is_multidata, kl_divergence
# +
#print(model(x)) # forward returns qz_x, px_z, zs
# -
x = dataT
qz_x, px_z, _ = model(x)
lpx_z = px_z.log_prob(x).view(*px_z.batch_shape[:2], -1) * model.llik_scaling
#lpx_z = px_z.log_prob(x).view(*px_z.batch_shape[:1], -1) * model.llik_scaling
kld = kl_divergence(qz_x, model.pz(*model.pz_params))
print(dataT.shape)
print(px_z.log_prob(x).shape)
print(*px_z.batch_shape[:2])
print(px_z.log_prob(x).view(*px_z.batch_shape[:2], -1).shape)
print(px_z.batch_shape)
print(lpx_z.shape)
print(kld.shape)
print(kld.sum())
#Class MMVAE forward
def forward(self, x, K=1):
qz_xs, zss = [], []
# initialise cross-modal matrix
px_zs = [[None for _ in range(len(self.vaes))] for _ in range(len(self.vaes))]
for m, vae in enumerate(self.vaes):
qz_x, px_z, zs = vae(x[m], K=K)
qz_xs.append(qz_x)
zss.append(zs)
px_zs[m][m] = px_z # fill-in diagonal
for e, zs in enumerate(zss):
for d, vae in enumerate(self.vaes):
if e != d: # fill-in off-diagonal
px_zs[e][d] = vae.px_z(*vae.dec(zs))
return qz_xs, px_zs, zss
qz_xs, zss = [], []
# initialise cross-modal matrix
px_zs = [[None for _ in range(len(model.vaes))] for _ in range(len(model.vaes))]
print(px_zs)
x = dataT
for m, vae in enumerate(model.vaes):
qz_x, px_z, zs = vae(x[m], K=K)
qz_xs.append(qz_x)
zss.append(zs)
px_zs[m][m] = px_z # fill-in diagonal
print(qz_x)
print(qz_xs)
print('')
print(px_z)
print(px_zs) #Cross-modal matrix
print('')
print(zs.shape)
print(len(zss))
for e, zs in enumerate(zss):
for d, vae in enumerate(model.vaes):
if e != d: # fill-in off-diagonal
px_zs[e][d] = vae.px_z(*vae.dec(zs))
print(px_zs)
| src/.ipynb_checkpoints/explore_main-checkpoint.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.4
# language: julia
# name: julia-1.0
# ---
# # Project: automatic error propagation
# ### <NAME>, <NAME>, <NAME>
# ### 18 December 2019
# # What is error propagation?
#
# During my first year of college, I really liked the physics lectures because everything was exact. In my first year of physics, I really disliked the physics labs, because everything was messy. First-year physics labs are generally disliked partly because of the uninspiring topics (measuring resistance in a wire! determining a heat transfer coefficient! measuring chirality of sugar!) and partly because this was the only lab in which 'exact' measurements had to be performed. These labs introduced the complex and unexplained rules for measurement error propagation. Without any prior statistics or probability courses, it was lost on me where these strange rules originated, in sharp contrast to nearly everything else in physics.
#
# A year later, after plowing through a basic probability course, error propagation is less mysterious. Almost all these rules to account for the error can be derived from a simple principle:
#
# > Given two **independent** random variables $X$ and $Y$, the variance of a linear combination is $\text{Var}[aX+bY]=a^2\text{Var}[X] + b^2\text{Var}[Y]$.
#
# Measurement errors are usually given by a standard deviation, the square root of the variance. Given this principle, error propagation is merely bookkeeping of the standard error on the measurement for various computations. This [table on error propagation](https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Example_formulae) might be useful.
#
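# The bookkeeping can be sketched in a few lines of Python (the project itself targets Julia; `combine_linear` is a hypothetical helper, not part of the assignment API):

```python
import math

def combine_linear(a, x, sigma_x, b, y, sigma_y):
    """Propagate standard errors through a*X + b*Y for independent X and Y:
    Var[aX + bY] = a^2 Var[X] + b^2 Var[Y]."""
    value = a * x + b * y
    sigma = math.sqrt(a**2 * sigma_x**2 + b**2 * sigma_y**2)
    return value, sigma

# adding two independent measurements: sigma = sqrt(0.3^2 + 0.4^2) = 0.5
v, s = combine_linear(1, 10.0, 0.3, 1, 20.0, 0.4)
```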
# For nonlinear functions, we can compute an approximate uncertainty propagation using a first-order Taylor approximation. We have, for any function $f(x)$:
#
# $$
# f(x\pm \sigma) \approx f(x) \pm |f'(x)|\sigma\,.
# $$
#
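# A quick Python sketch of this first-order rule, using a central finite difference where the project will instead use automatic differentiation:

```python
import math

def propagate(f, x, sigma, h=1e-6):
    """First-order propagation: f(x +/- sigma) ~ f(x) +/- |f'(x)| * sigma,
    with f'(x) estimated by a central finite difference."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return f(x), abs(deriv) * sigma

# error on sin(1.0 +/- 0.05) is approximately |cos(1.0)| * 0.05
val, err = propagate(math.sin, 1.0, 0.05)
```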
# For example, for squaring a function, we have
#
# $$
# (x\pm\sigma)^2 = x^2 \pm 2|x|\sigma\,.
# $$
#
# Note that this is consistent with the above rules for multiplication. Let us implement the general formula for raising a measurement to the power $p$.
#
# We could implement this for all the standard mathematical functions one by one. However, Julia provides us with two efficient tools to do it in one swoop: automatic differentiation and metaprogramming. We just loop over a list of functions of interest and automatically generate the correct approximate rule for each.
#
#
# Instead of processing the measurements and the standard errors separately, suppose we could make a new type of number that contains both the observed value and its uncertainty. And suppose we could simply plug these numbers into our formulas, with the error automatically accounted for using the standard error propagation rules. In Julia, it is dead simple to construct such new numbers and overload existing functions so that they compute in the correct way.
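# As a rough analogue of this idea (a Python sketch, not the Julia solution asked for below; the name `PyMeasurement` is made up to avoid clashing with the assignment's type), such a number only needs a value, an error, and overloaded operators:

```python
import math
from dataclasses import dataclass

@dataclass
class PyMeasurement:  # hypothetical Python stand-in for the Julia `Measurement` type
    x: float
    sigma: float

    def __add__(self, other):
        # independent errors add in quadrature
        return PyMeasurement(self.x + other.x, math.hypot(self.sigma, other.sigma))

    def __rmul__(self, a):
        # scalar multiplication scales the error by |a|
        return PyMeasurement(a * self.x, abs(a) * self.sigma)

m = 2.0 * PyMeasurement(4.0, 1.2)  # value 8.0, error 2.4
```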
#
# # Goal of this project
#
# We will implement a binary operator `±`, which can be used to add the standard error to a measurement.
# The result is a new type `Measurement` containing both the value and the standard error. Standard functions will be overloaded to process this structure correctly.
#
# # Assignments
#
# 1. Make a `Measurement` structure with two fields: the measured value `x` and its standard deviation `σ` (`\sigma<TAB>`). Make sure that both `x` and `σ` are of the same type and a subtype of `Real`.
# 2. Make a constructor for `Measurement`, such that an error is thrown when a negative standard error is given.
# 3. Make two functions `val` and `err`, which respectively return the value and the standard error of a measurement.
# 4. Make a binary operator `±` (`\pm<TAB>`), such that `x ± σ` returns a new instance of the type `Measurement`. Try it on a value 4.0 with a standard error of 1.2.
# 5. Overload scalar multiplication, addition and subtraction of measurements, and addition of a constant, so that they correctly process measurements. Try some examples.
# 6. Overload the power function `^` such that one can raise `Measurement`s to a power. Note the special case `(x ± σ)^0` = `one(x) ± zero(σ)`.
# 7. Run the example to generate error-propagation rules for a whole range of functions using metaprogramming.
# 8. Solve the small exercise using data.
#
# **Assignment 1 and 2**
struct Measurement
...
function Measurement(...)
...
end
end
# **Assignment 3**
val(m::Measurement) = ...
err(m::Measurement) = ...
# **Assignment 4**
#
# Don't forget to add type annotations!
±(x, σ) = ...
# **Assignment 5**
# scalar multiplication
Base.:*(a::Real, m::Measurement) = ...
Base.:/(m::Measurement, a::Real) = ...
# adding and subtracting measurements
Base.:+(m1::Measurement, m2::Measurement) = ...
Base.:-(m1::Measurement, m2::Measurement) = ...
Base.:-(m::Measurement) = ...
# adding a constant
Base.:+(m::Measurement, a::Real) = ...
Base.:+(a::Real, m::Measurement) = ...
# multiplying two measurements
Base.:*(m1::Measurement, m2::Measurement) = ...
# **Assignment 6**
Base.:^(m::Measurement, p::Integer) = ...
# **Assignment 7**
# +
using ForwardDiff
for f in [:sin, :cos, :tan, :exp, :log, :log2, :log10, :sqrt, :inv]
eval(quote
        # $f is spliced into the quoted expression, generating one method per function
Base.$f(m::Measurement) = $f(m.x) ± abs(ForwardDiff.derivative($f, m.x) * m.σ)
end)
end
# -
# **Assignment 8**
#
# Let's apply this in a somewhat realistic setting. Many methods in analytical chemistry are based on the law of [Beer-Lambert](https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law). This law relates the absorption of a ray of light passing through a cuvette to the concentration of the solution. For a given reference intensity $I_0$ at a concentration of 0 and a lower intensity $I$ when it passes through a solution of concentration $c$, we have
#
# $$
# \log \left(\frac{I_0}{I}\right) = \varepsilon c l\,,
# $$
#
# with $\varepsilon$ the molar extinction coefficient and $l$ the thickness of the cuvette.
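# For instance, the error on the left-hand side $\log(I_0/I)$ alone propagates as follows (a Python sketch that assumes a natural logarithm and independent errors on $I_0$ and $I$; absorbance is often defined with a base-10 logarithm instead):

```python
import math

def absorbance_with_error(I0, s_I0, I, s_I):
    """A = ln(I0/I); first-order: σ_A = sqrt((σ_I0/I0)^2 + (σ_I/I)^2)."""
    return math.log(I0 / I), math.sqrt((s_I0 / I0) ** 2 + (s_I / I) ** 2)
```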
#
# Suppose we want to determine the extinction coefficient for some substance, using a cuvette of thickness $0.02\pm 0.001$ and a reference solution of a concentration of $0.73\pm 0.02$ M.
#
# We perform some intensity measurements, with associated measurement errors.
#
# | $I_0$ | $\sigma_{I_0}$ | $I$ | $\sigma_{I}$ |
# |------|---------------|------|---------------|
# | 0.8 | 0.14 | 0.2 | 0.12 |
# | 1.1 | 0.11 | 0.3 | 0.07 |
# | 1.2 | 0.08 | 0.4 | 0.101 |
#
#
# **Estimate the molar extinction coefficient.**
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0
# ---
# # Amazon SageMaker Workshop
# ### _**Data Preparation**_
#
# ---
# In this part of the workshop we will prepare the data to later train our churn model.
#
# ---
#
# ## Contents
#
# 1. [Background](#Background) - Getting the raw data prepared in the previous lab.
# 2. [Prepare](#Prepare) - Prepare the data with [Amazon SageMaker Data Wrangler](https://aws.amazon.com/sagemaker/data-wrangler/)
# * [Creating features](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html)
# * [Creating analysis](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-analyses.html)
# * [Analyzing the data and features](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html)
# 3. [Submitting the data to Feature Store](#FeatureStore) - Store the features created in [Amazon SageMaker Feature Store](https://aws.amazon.com/sagemaker/feature-store/)
#
# ---
#
# ## Background
#
# In the previous [Introduction lab](../0-Introduction/introduction.ipynb) we created an S3 bucket and uploaded the raw data to it.
#
# Let's get started!
#
# Get variables from previous configuration notebook:
# %store -r bucket
# %store -r region
# %store -r prefix
# %store -r s3uri_raw
# %store -r docker_image_name
# %store -r framework_version
bucket, prefix, s3uri_raw, region, docker_image_name, framework_version
# Let's import the libraries for this lab:
# + isConfigCell=true tags=["parameters"]
import sagemaker
sess = sagemaker.Session()
#bucket = sess.default_bucket()
#prefix = "sagemaker/DEMO-xgboost-churn"
# Define IAM role
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
role
# +
import io
import os
import sys
import time
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display as dis
from time import strftime, gmtime
from sagemaker.inputs import TrainingInput
from sagemaker.serializers import CSVSerializer
from IPython import display
# -
# # Importing Data on DataWrangler
#
# To start, we will create a new flow and import the raw data to perform analysis and transformations on it. On the left menu, click on "Components and Registries", select "Data Wrangler" on the dropdown, and click "New Flow". This process is shown on the image below.
# <img src="media/010-new_flow.png" width="30%" />
# As soon as we click "New Flow", Data Wrangler will be in a loading state. After a couple of minutes you should be able to import the raw data. While we wait, we can rename our flow by right-clicking on the flow tab and choosing "Rename Data Wrangler Flow...".
# <img src="media/020-load_new_flow.png" width="100%" />
#
# Let's call the file `churn.flow`
# After Data Wrangler finishes loading, we can proceed to import our data. We'll be importing it from Amazon S3. The following images guide us through the process.
# <img src="media/030-importing_from_s3.png" width="100%" />
# Using the search box or the table in the bottom, locate the S3 Bucket where our dataset is stored.
#
# To remember the bucket and prefix, run the cell below:
s3uri_raw
# <img src="media/040-choosing_bucket.png" width="100%" />
# Choose the "churn.csv" file, disable the "Enable sampling" checkbox, and click "Import". Feel free to check the data preview in the "Preview" section on the bottom!
# <img src="media/050-choose_csv_preview.png" width="100%" />
# # Data Analysis on Data Wrangler
#
# Next, we are going to analyze our data by understanding feature distributions and how each feature impacts our target column. Data Wrangler allows us to perform such analyses inside the user interface; let's start creating them.
# To start our analysis, we'll create a summary of our imported data. The summary can be retrieved by adding a new analysis in Data Wrangler. The following images show the steps to create our table summary.
# <img src="media/060-add_first_analysis.png" width="100%" />
# <img src="media/070-table_summary.png" width="100%" />
# As soon as you click Preview, or when you open your saved analysis, you'll see the table summary shown in the image below.
# <img src="media/080-table_summary_preview.png" width="100%" />
# By modern standards, it’s a relatively small dataset, with only 3,333 records, where each record uses 21 attributes to describe the profile of a customer of an unknown US mobile operator. The attributes are:
#
# - `State`: the US state in which the customer resides, indicated by a two-letter abbreviation; for example, OH or NJ
# - `Account Length`: the number of days that this account has been active
# - `Area Code`: the three-digit area code of the corresponding customer’s phone number
# - `Phone`: the remaining seven-digit phone number
# - `Int’l Plan`: whether the customer has an international calling plan: yes/no
# - `VMail Plan`: whether the customer has a voice mail feature: yes/no
# - `VMail Message`: presumably the average number of voice mail messages per month
# - `Day Mins`: the total number of calling minutes used during the day
# - `Day Calls`: the total number of calls placed during the day
# - `Day Charge`: the billed cost of daytime calls
# - `Eve Mins`, `Eve Calls`, `Eve Charge`: the minutes used, number of calls, and billed cost for calls placed during the evening
# - `Night Mins`, `Night Calls`, `Night Charge`: the same, for calls placed during the night
# - `Intl Mins`, `Intl Calls`, `Intl Charge`: the same, for international calls
# - `CustServ Calls`: the number of calls placed to Customer Service
# - `Churn?`: whether the customer left the service: true/false
#
# The last attribute, `Churn?`, is known as the target attribute: the attribute that we want the ML model to predict. Because the target attribute is binary, our model will be performing binary prediction, also known as binary classification.
# Continuing our data analysis, we can leverage the different analysis types in Data Wrangler. Next, we can use a histogram to understand a feature's distribution and how it relates to our target value. Going back to the "Analyze" tab, we can add a new analysis, choose Histogram as the "Analysis type", and select a feature to explore its distribution. In the following images we've chosen `Account Length` as the feature and colored it by our target variable `Churn?`.
# <img src="media/090-analyze_tab_create_new_analysis.png" width="100%" />
# <img src="media/100-account_length_histogram.png" width="100%" />
# As we can see, we are able to check the distribution of our feature and how its distribution relates to our target value. Feel free to create new histograms for any other feature!
# For our last analysis, we'll leverage the "Quick Model" analysis provided by Data Wrangler. This analysis trains a Random Forest algorithm on its own and calculates a feature importance score for each feature on our dataset. You can learn more about the "Quick Model" analysis on this [page](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-analyses.html#data-wrangler-quick-model) of the Amazon SageMaker Data Wrangler documentation.
#
# <img src="media/110-quick_model_analysis.png" width="100%" />
# The higher the score, the more important the feature is. Therefore, feature `Day Mins` is the most important feature on our dataset according to the "Quick Model" analysis.
# # Data Transforms on Data Wrangler
#
# Go back to the DAG view by clicking on `Back to data flow` tab (on the top left).
#
# Follow the instructions on the image below:
# 1 - Click on the plus (+) button
# 2 - Click Edit data types
# <img src="media/120-edit_data_types.png" width="100%" />
# 3 - Find the column you want to change
# 4 - Select the desired type from the dropdown (change **Area Code** to String)
# 5 - Click preview
# 6 - Click Add
# <img src="media/130-area_code_to_object.png" width="100%" />
# Once you finish, click "Back to data flow" on the top right corner
#
# ### Now lets drop the Phone column by adding a Transform
# 1 - Click the plus (+)
# 2 - Add Transform
# <img src="media/140-add_transform.png" width="100%" />
# 3 - Click on manage columns
# 4 - Select the `Phone` column from the dropdown (as shown in step 2)
# 5 - Click on preview
# 6 - Click add
# <img src="media/150-steps_drop_phone_col.png" width="100%" />
# ### Now lets Drop a few more columns
# We'll be dropping the first (Day Charge) as an example; just repeat the steps in the images below for the following columns:
# * "Day Charge"
# * "Eve Charge"
# * "Night Charge"
# * "Intl Charge"
# <img src="media/160-adding_new_transform.png" width="100%" />
# <img src="media/170-drop_column_pt2.png" width="100%" />
# ### Now lets do OneHot Encoding using a custom transform
# You can copy the code for the custom transform here:
# ```python
# import pandas as pd
#
# model_data = pd.get_dummies(df)
#
# df = pd.concat(
# [model_data["Churn?_True."],
# model_data.drop( ["Churn?_False.", "Churn?_True."], axis=1)],
# axis=1
# ).rename(
# columns = {
# "Churn?_True.": "Churn"
# }
# )
# ```
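# To see what this transform does, here is a toy run on made-up column values (not the workshop data):

```python
import pandas as pd

df = pd.DataFrame({"Churn?": ["True.", "False."], "State": ["OH", "NJ"]})
model_data = pd.get_dummies(df)  # one-hot encodes every categorical column

# move the positive-class dummy to the front and rename it to "Churn"
df = pd.concat(
    [model_data["Churn?_True."],
     model_data.drop(["Churn?_False.", "Churn?_True."], axis=1)],
    axis=1
).rename(columns={"Churn?_True.": "Churn"})

list(df.columns)  # the target column "Churn" now comes first
```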
# <img src="media/180-custom_transform.png" width="100%" />
# # Exporting Transformed data on Data Wrangler
#
# After performing the transformations needed on our dataset, we'll export the transformed data to our S3 bucket. We are able to do so inside Data Wrangler UI by following the steps highlighted on the images below.
# <img src="media/190-select_transforms_to_export.png" width="100%" />
# <img src="media/200-exporting_to_s3.png" width="100%" />
# After selecting the `Save to S3` option, a new notebook will be displayed similar to the one presented on the image below.
# <img src="media/210-export_to_s3_notebook.png" width="100%" />
# We can proceed to `Run all cells`, as described in the notebook. The processing job will start and take a few minutes to finish. Upon completion, we'll see output similar to the cell in the following image.
# <img src="media/219-dw-notebook-run-all.png" width="100%" />
#
# At the end of that notebook, check that the Processing Job is running (with the Data Wrangler Docker image):
#
# <img src="media/220-processing_job_finished.png" width="100%" />
# On the left menu, click on "Components and Registries", select "Experiments and trial" on the dropdown.
#
# Select and double-click "Unassigned trial components":
#
# <img src="media/225-open-experiments-processing.png" width="50%"/>
#
# Select your processing job trial and open (right click and select "Open in trial details"):
#
# <img src="media/225-find-processing-info.png" width="60%"/>
#
# Go to the `Artifacts` tab, and **copy** the S3 URI of the output:
#
# <img src="media/225-find-processing-output.png" width="100%" />
# Paste your S3 URI below:
s3uri_processed = "s3://sagemaker-us-east-1-686948287393/export-flow-31-14-44-25-b1725f5a/output"  # replace with the S3 URI you copied
s3uri_processed_file = sagemaker.s3.S3Downloader.list(s3uri_processed)[0]
s3uri_processed_file
# If you want to check it in the S3 console, run the cell below and click the link:
# +
from IPython.core.display import display, HTML
from sagemaker.s3 import parse_s3_url
out_bucket, out_prefix = parse_s3_url(s3uri_processed_file)
out_path = os.path.dirname(out_prefix)
out_file = os.path.basename(out_prefix)
s3_url_placeholder = "https://s3.console.aws.amazon.com/s3/buckets/{}?&prefix={}/"
display(HTML(f"<a href={s3_url_placeholder.format(out_bucket, out_path)}>Go to S3 console and check output of Data Wrangler</a>"))
# -
# In the S3 console you should see:
# <img src="media/230-download_transformed_data_s3.png" width="100%" />
#
# (If you want to download the data to your computer, follow the steps in the image above)
# Let's download the data to Studio:
sess.download_data(".",
out_bucket,
key_prefix=out_prefix)
# Click the refresh button on Studio. You should see something like:
#
# <img src="media/230-download_transformed_data_s3_local.png" width="50%" />
#
# (The CSV file is downloaded)
out_file
model_data = pd.read_csv(out_file)
model_data.head()
# Above we should see the transformed data with `Churn` as the first column, followed by the one-hot-encoded columns and so on.
#
# Finally, let's break the data into **train, validation and test sets:**
train_data, validation_data, test_data = np.split(
model_data.sample(frac=1, random_state=1729),
[int(0.7 * len(model_data)), int(0.9 * len(model_data))],
)
train_data.shape, validation_data.shape, test_data.shape
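# The `np.split` call above cuts the shuffled frame at the 70% and 90% marks, which yields a 70/20/10 split; a minimal sketch on a toy array:

```python
import numpy as np

data = np.arange(10)
# split indices at 70% and 90% of the length → pieces of sizes 7, 2 and 1
train, val, test = np.split(data, [int(0.7 * len(data)), int(0.9 * len(data))])
```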
train_data.head(2)
model_data.shape
# Create CSV files for the 3 datasets:
# +
train_file_name = "train.csv"
validation_file_name = "validation.csv"
test_file_name = "test.csv"
train_data.to_csv(train_file_name, header=False, index=False)
validation_data.to_csv(validation_file_name, header=False, index=False)
test_data.to_csv(test_file_name, header=False, index=False)
# -
# Lastly, we'll upload these files to S3.
# +
# Return the URLs of the uploaded files, so they can be reviewed or used elsewhere
train_dir = f"{prefix}/data/train"
val_dir = f"{prefix}/data/validation"
test_dir = f"{prefix}/data/test"
s3uri_train = sagemaker.s3.S3Uploader.upload(train_file_name, f's3://{bucket}/{train_dir}')
s3uri_validation = sagemaker.s3.S3Uploader.upload(validation_file_name, f's3://{bucket}/{val_dir}')
s3uri_test = sagemaker.s3.S3Uploader.upload(test_file_name, f's3://{bucket}/{test_dir}')
s3uri_train, s3uri_validation, s3uri_test
# -
# Save the S3 URIs for the 3 datasets for later:
# %store s3uri_train
# %store s3uri_validation
# %store s3uri_test
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''pymwm'': conda)'
# name: python3
# ---
# # Tutorial
# Let's start your survey of the dielectric properties of various materials. The first thing you must do is create a RiiDataFrame object. The first trial will take a few minutes, because experimental data will be pulled down from Polyanskiy's [refractiveindex.info database](https://github.com/polyanskiy/refractiveindex.info-database) and equi-spaced grid data will be obtained by interpolating the experimental data.
import riip
ri = riip.RiiDataFrame()
# You can use some helper methods for your survey.
#
# ## __search__
# ```
# search(name: str) -> DataFrame
# ```
# This method searches for data that contain the given __name__ of a material and returns a catalog of matches.
ri.search("NaCl")
ri.search("sodium").head(5) # upper or lower case is not significant
# ## __select__
# ```
# select(condition: str) -> DataFrame
# ```
# This method makes a query with the given __condition__ and returns a catalog. For example, if you want to find a material whose refractive index n is in the range 2.5 < n < 3 somewhere in the wavelength range 0.4μm < wl < 0.8μm:
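# Under the hood this behaves like a pandas `DataFrame.query` over the catalog (a generic pandas sketch on made-up numbers, not riip's actual implementation):

```python
import pandas as pd

catalog = pd.DataFrame({"n": [1.5, 2.7, 3.2], "wl": [0.5, 0.6, 1.2]})
# pandas query strings support chained comparisons and `and`
hits = catalog.query("2.5 < n < 3 and 0.4 < wl < 0.8")  # keeps only the n=2.7 row
```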
ri.select("2.5 < n < 3 and 0.4 < wl < 0.8").head(5)
# ## __show__
# ```
# show(ids: int | Sequence[int]) -> DataFrame
# ```
# This method shows the catalog entries for the given __ids__.
ri.show([23, 118])
# ## __read__
# ```
# read(id, as_dict=False)
# ```
# This method returns the contents of the page associated with the id.
print(ri.read(23))
# ## __references__
# ```
# references(id: int)
# ```
# This method returns the REFERENCES section of the page associated with the id.
ri.references(23)
# ## __plot__
# ```
# plot(id: int, comp: str = "n", fmt1: str = "-", fmt2: str = "--", **kwargs)
# ```
# * id: ID number
# * comp: 'n', 'k' or 'eps'
# * fmt1 (Union[str, None]): Plot format for n and Re(eps), such as "-", "--", ":", etc.
# * fmt2 (Union[str, None]): Plot format for k and Im(eps).
#
# This plot uses only 200 data points. If you want a finer plot, use the __plot__ method of __RiiMaterial__ explained below.
ri.plot(23, "n")
ri.plot(23, "k")
ri.plot(23, "eps")
# ## __material__
# ```
# material(params: dict) -> Material
# ```
# This method returns a __Material__-class instance for the given parameter dict __params__.
#
# __params__ can include the following parameters:
# * 'id': ID number. (int)
# * 'book': book value in catalog of RiiDataFrame. (str)
# * 'page': page value in catalog of RiiDataFrame. (str)
# * 'RI': Constant refractive index. (complex)
# * 'e': Constant permittivity. (complex)
# * 'bound_check': True if bound check should be done. Defaults to True. (bool)
# * 'im_factor': A magnification factor multiplied to the imaginary part of permittivity. Defaults to 1.0. (float)
#
Al = ri.material({'id': 23})
type(Al)
# Using the created Material object, you can get the refractive index n, extinction coefficient k, and dielectric function eps, and plot them.
# ### __Material.n__
# ```
# n(wl: ArrayLike) -> ArrayLike
# ```
Al.n(1.0) # refractive index at wavelength = 1.0μm
# ### __Material.k__
# ```
# k(wl: ArrayLike) -> ArrayLike
# ```
Al.k(1.0) # extinction coefficient at wavelength = 1.0μm
# ### __Material.eps__
# ```
# eps(wl: ArrayLike) -> ArrayLike
# ```
Al.eps(1.0) # permittivity at wavelength = 1.0μm
# Wavelengths __wl__ can be a single complex value or an array of complex values.
import numpy as np
wls = np.linspace(0.5, 1.6)
Al.eps(wls)
#
# ### __Material.plot__
# ```
# plot(wls: np.ndarray, comp: str = "n", fmt1: str = "-", fmt2: str = "--", **kwargs)
# ```
# * wls: Wavelength [μm].
# * comp: 'n', 'k' or 'eps'
# * fmt1 (Union[str, None]): Plot format for n and Re(eps), such as "-", "--", ":", etc.
# * fmt2 (Union[str, None]): Plot format for k and Im(eps).
import matplotlib.pyplot as plt
wls = np.linspace(0.5, 1.0)
Al.plot(wls, "n")
plt.show()
Al.plot(wls, "k")
plt.show()
Al.plot(wls, "eps")
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: U4-S1-NLP (Python3)
# language: python
# name: u4-s1-nlp
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Objective 01 - describe the components of an autoencoder
#
# PS - An autoencoder learns a compressed representation of its input; the decoder outputs the reconstructed input.
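# The code below compresses a 784-dimensional MNIST image into a 32-dimensional code, a compression factor of 24.5; a quick sanity check of that arithmetic:

```python
input_dim, encoding_dim = 784, 32  # flattened 28x28 image, bottleneck size
compression_factor = input_dim / encoding_dim  # 784 / 32 = 24.5
```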
# + [markdown] pycharm={"name": "#%% md\n"}
# # Objective 02 - train an autoencoder
# + pycharm={"name": "#%%\n"}
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
(X_train, _), (X_test, _) = mnist.load_data()
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
X_train = X_train.reshape(len(X_train), np.prod(X_train.shape[1:])) # flatten each 28x28 image into a 784-vector: (60000, 784)
X_test = X_test.reshape(len(X_test), np.prod(X_test.shape[1:]))
print(X_train.shape)
print(X_test.shape)
# + pycharm={"name": "#%%\n"}
input_img = Input(shape=(784, ))
# + pycharm={"name": "#%%\n"}
# Create simple autoencoder
# The size of our encoded representations
# 32 floats -> compression factor of 24.5, assuming the input is 784 floats
encoding_dim = 32
# "encoded" is the encoded representation of the input
encoded = Dense(units=encoding_dim, activation='relu')(input_img) # Keras functional API: the layer is called on the input tensor
# "decoded" is the lossy reconstruction of the input
decoded = Dense(units=784, activation='sigmoid')(encoded)
# This model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# This model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# + pycharm={"name": "#%%\n"}
autoencoder.summary()
# + pycharm={"name": "#%%\n"}
encoder.summary()
# + pycharm={"name": "#%%\n"}
# Compile the model
autoencoder.compile(optimizer='nadam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Fit the model
autoencoder.fit(X_train, X_train,
epochs=10,
batch_size=256,
shuffle=True,
validation_data=(X_test, X_test))
# + pycharm={"name": "#%%\n"}
# make the prediction with the test data
encoded_imgs = encoder.predict(X_test)
predicted = autoencoder.predict(X_test)
# + pycharm={"name": "#%%\n"}
# Plotting code from:
# https://medium.com/datadriveninvestor/deep-autoencoder-using-keras-b77cd3e8be95
# Plot the results
plt.figure(figsize=(49, 4))
for i in range(10):
# display original images
ax = plt.subplot(3, 20, i+1)
plt.imshow(X_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display encoded images
ax = plt.subplot(3, 20, i + 1 + 20)
plt.imshow(encoded_imgs[i].reshape(8,4))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstructed images
ax = plt.subplot(3, 20, 2*20 +i+ 1)
plt.imshow(predicted[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:onnx13v1]
# language: python
# name: conda-env-onnx13v1-py
# ---
# <font color=gray>ADS Sample Notebook.
#
# Copyright (c) 2020 Oracle, Inc. All rights reserved.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
# </font>
#
# ***
# # <font color=red>ONNX Integration with ADS</font>
# <p style="margin-left:10%; margin-right:10%;">by the <font color=teal>Oracle Cloud Infrastructure Data Science Team</font></p>
#
# ***
#
# ## Overview:
#
# This notebook showcases the integration between Open Neural Network Exchange (<a href=https://onnx.ai/>ONNX</a>), `ADS`, and `sklearn`. ONNX is an open standard for machine learning interoperability that enables easy deployment of models. ONNX is an extensible computational graph model with built-in operators and machine-independent data types. The operators are portable across hardware and frameworks. The computational flow is an acyclic graph that contains information about the flow of the data and also metadata. Each node in the data flow graph contains an operator that can accept multiple inputs and produce multiple outputs.
#
# **Important:**
#
# Placeholder text for required values is surrounded by angle brackets that must be replaced with the indicated content. For example, when adding a database name, `database_name = "<database_name>"` would become `database_name = "production"`.
#
# ---
#
# ## Prerequisites:
# - Experience with a specific topic: Intermediate
# - Professional experience: None
#
# ---
#
# ## Objectives:
#
# - <a href="#sklearn-ads">Build a Model</a>
# - <a href="#onnx-serial">Model Serialization with Onnx</a>
# - <a href="#model-artifacts">Model Artifacts</a>
# - <a href="#model-workflow">Model Workflow</a>
# - <a href="#model-prediction">Model Prediction</a>
# - <a href="#model-prediction-adsmodel">Prediction using `ADSModel`</a>
# - <a href="#model-prediction-onnx">Prediction using OnnxRuntime</a>
# - <a href="#model-prediction-missing">Prediction with Missing Values</a>
# - <a href="#ref">References</a>
#
# ***
#
# <font color=gray>Datasets are provided as a convenience. Datasets are considered Third Party Content and are not considered Materials under Your agreement with Oracle applicable to the services.
# </font>
#
# ***
# ## Optional Installation of Pydot
# Prior to executing this notebook, you may optionally install a library called `pydot`. This library is needed to visualize a graph representation of the ONNX model. This installation is optional. Set the flag `use_pydot` to `True` in the cell below to trigger the installation of `pydot` and enable the code cells that create `pydot` visualizations.
# +
import subprocess
use_pydot = True
if use_pydot:
process = subprocess.Popen(['pip','install','pydot'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
print(stdout)
print(stderr)
if not process.returncode:
from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer
else:
use_pydot = False
print("pydot installation failed; all pydot graphs are disabled in this notebook.")
# +
import logging
import matplotlib.pyplot as plt
import onnx
import onnxruntime
import os
import random
import shutil
import tempfile
import warnings
from ads import set_documentation_mode
from ads.common.model import ADSModel
from ads.dataset.dataset_browser import DatasetBrowser
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
set_documentation_mode(False)
warnings.filterwarnings('ignore')
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.ERROR)
# -
# <a id='sklearn-ads'></a>
# # Build a Model
#
# In the next cell, the `iris` dataset is loaded, and then split into a training and a test set. A pipeline is created to scale the data and perform a logistic regression. This `sklearn` pipeline is then converted into an `ADSModel`.
ds = DatasetBrowser.sklearn().open("iris")
train, test = ds.train_test_split()
pipe = Pipeline([('scaler', StandardScaler()),
('classifier', LogisticRegression())])
pipe.fit(train.X, train.y)
adsmodel = ADSModel.from_estimator(pipe)
# <a id="onnx-serial"> </a>
# # Model Serialization with ONNX
#
# This example uses the `ADSModel` class. The class supports a number of popular model libraries including Automl, SKlearn, XGBoost, LightGBM, and Pytorch. With `ADSModel` objects, the `prepare()` method is used to create the model artifacts. If you want to use an unsupported model type then the model must be manually serialized into ONNX and put in the folder that was created by a call to the `prepare_generic_model()` method.
#
# `ADSModel.prepare()` does the following:
# - Serializes the model into ONNX format into a file named `model.onnx`.
# - Creates a file to save metadata about the data samples.
# - Calls `prepare_generic_model`.
#
# Thus, a call to `ADSModel.prepare()` is similar to calling `ADSModel.prepare_generic_model()` except that `ADSModel.prepare()` also serializes the model.
#
# The next cell creates a temporary directory, serializes the model into an ONNX format, stores sample data, and then loads the ONNX model into memory.
model_path = tempfile.mkdtemp()
model_artifact = adsmodel.prepare(model_path, X_sample=test.X[:5],
y_sample=test.y[:5], force_overwrite=True, data_science_env=True)
onnx_model = onnx.load_model(os.path.join(model_path, "model.onnx"))
# <a id="model-artifacts"></a>
# ## Model Artifacts
#
# The prediction pipeline is written to the `score.py` file in the `model_path`. This allows for the prediction script, used by the `ADSModel` class, to be customized. This file is validated to confirm that it imports all required libraries so that the model works correctly when it is deployed. It can also be customized to meet your application's specific requirements. More details about using the `score.py` file are found in the `model_catalog.ipynb` notebook.
#
# The next cell outputs the contents of the `score.py` file.
with open(os.path.join(model_path, "score.py"), "r") as f:
print(f.read())
# <a id="model-workflow"></a>
# ## Model Workflow
#
# ONNX is an extensible computational graph model with built-in operators and machine-independent data types. The computational flow is an acyclic graph that contains information about the flow of the data and also metadata. Each node in the data flow graph contains an operator that can accept multiple inputs and produce multiple outputs. The next cell generates a plot of the ONNX model's acyclic graph.
if use_pydot:
graph_path = tempfile.mkdtemp()
graph_dot = os.path.join(graph_path, 'model.dot')
graph_png = os.path.join(graph_path, 'model.dot.png')
graph = GetPydotGraph(onnx_model.graph, name=onnx_model.graph.name,
rankdir="TB",
node_producer=GetOpNodeProducer("docstring", color="yellow",
fillcolor="yellow", style="filled"))
graph.write_dot(graph_dot)
os.system(f"dot -O -Gdpi=300 -Tpng {graph_dot}")
image = plt.imread(graph_png)
shutil.rmtree(graph_path)
fig, ax = plt.subplots(figsize=(40, 20))
ax.imshow(image)
ax.axis('off')
plt.show()
else:
print("Skipping ONNX graph")
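# The acyclic structure is what makes the graph executable: when the nodes are visited in topological order, every input has already been computed before it is consumed. The toy interpreter below (not the ONNX runtime; the two-operator graph and its values are made up for illustration) shows the idea.

```python
# Each node: (operator type, input names, output name).
# The list is already in topological order.
graph = [
    ("Add", ["x", "y"], "s"),   # s = x + y
    ("Mul", ["s", "x"], "p"),   # p = s * x
]
ops = {"Add": lambda a, b: a + b, "Mul": lambda a, b: a * b}

values = {"x": 2, "y": 3}       # graph inputs
for op_type, inputs, output in graph:
    values[output] = ops[op_type](*(values[i] for i in inputs))

print(values["p"])  # (2 + 3) * 2 = 10
```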
# <a id="model-prediction"></a>
# # Model Prediction
#
# Since an `ADSModel` object was created, predictions can be made through that mechanism. However, ONNX can also make predictions directly, and it can deal with missing data in the predictors.
#
# <a id="model-prediction-adsmodel"></a>
# ## Prediction using ADSModel
#
# The `ADSModel` has the method `predict()` that accepts predictors, in the form of a `DataFrame` object, and returns predicted values. The next cell demonstrates how to make predictions using the test data.
adsmodel.predict(test.X)
# <a id="model-prediction-onnx"></a>
# ## Prediction using OnnxRuntime
#
# An `InferenceSession` object is needed to create a session connection to the ONNX model. This session is then used to pass the model parameters to the `run()` method. While `ADSModel.predict()` accepts these parameters as a `DataFrame`, ONNX accepts them as a dictionary. The parameters are stored under a key labeled `input`, and the values are a list of lists.
#
# The next cell creates the `InferenceSession` object, requests a set of predictions, and prints the predicted values.
session = onnxruntime.InferenceSession(os.path.join(model_path, "model.onnx"))
pred_class, pred_probability = session.run(None,
{'input': [[value for value in row] for index, row in test.X.iterrows()]})
pred_class
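# The feed dictionary built above can be illustrated on a toy `DataFrame` (the column names below are made up; the key `input` is simply the input name this particular ONNX model declares). `df.values.tolist()` produces the same list-of-lists structure as the comprehension.

```python
import pandas as pd

# A small stand-in for test.X with two rows and two feature columns.
df = pd.DataFrame({"sepal_length": [5.1, 6.3], "sepal_width": [3.5, 2.9]})

# Same conversion as in the cell above: one inner list per row.
feed = {"input": [[value for value in row] for index, row in df.iterrows()]}
print(feed["input"])  # [[5.1, 3.5], [6.3, 2.9]]
```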
# The `run()` method returns two outputs. The first is the class predictions, as shown in the preceding cell; this is the class with the highest probability. The second is a list of the probabilities for each class in a prediction. This information can be used to assess the confidence that the model has in a prediction. For example, the first predicted class was `setosa`. Examining the probabilities shows that the evidence for this prediction is strong because the probabilities for the other classes are extremely low.
pred_probability[0]
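# Recovering the predicted class from the probabilities is just an arg-max over the class labels. The probability values below are made up for illustration; the class names match the iris example.

```python
# Hypothetical probability output for a single prediction.
probs = {"setosa": 0.97, "versicolor": 0.02, "virginica": 0.01}

# The predicted class is the label with the highest probability.
predicted = max(probs, key=probs.get)
print(predicted)  # setosa
```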
# <a id="model-prediction-missing"></a>
# ### Prediction with Missing Values
#
# ONNX can often handle missing data even when the underlying structural model cannot. In this example, a logistic regression is used and generally this class of model can't handle missing data. However, the ONNX inference engine can generally deal with this by imputing the data.
#
# In the next cell, the test data has a small proportion of values masked (removed from the dataset). The ONNX `run()` method is called to make predictions.
random.seed(42)
pred_class, pred_probability = session.run(None,
{'input': [[None if random.random() < 0.1 else value for value in row]
for index, row in test.X.iterrows()]})
pred_class
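# The masking pattern used above can be checked on a single toy row: each value is independently replaced with `None` with probability 0.1, so the shape of the input is preserved while some values are dropped.

```python
import random

random.seed(0)
row = [5.1, 3.5, 1.4, 0.2]  # one illustrative row of predictors
masked = [None if random.random() < 0.1 else value for value in row]

# The shape is unchanged; surviving values are untouched.
print(len(masked) == len(row))  # True
```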
# <a id="ref"></a>
# # References
#
# * <a href="https://docs.cloud.oracle.com/en-us/iaas/tools/ads-sdk/latest/index.html">Oracle ADS Library documentation</a>
# * [ONNX](https://onnx.ai/about.html)
# * [Using Notebook Sessions to Build and Train Models](https://docs.cloud.oracle.com/en-us/iaas/data-science/using/use-notebook-sessions.htm)
# * [Managing Models](https://docs.cloud.oracle.com/en-us/iaas/data-science/using/manage-models.htm)