# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: fe_test
# language: python
# name: fe_test
# ---
# ## What is a Variable?
#
# A variable is any characteristic, number, or quantity that can be measured or counted. They are called 'variables' because the value they take may vary, and it usually does. The following are examples of variables:
#
# - Age (21, 35, 62, ...)
# - Gender (male, female)
# - Income (GBP 20000, GBP 35000, GBP 45000, ...)
# - House price (GBP 350000, GBP 570000, ...)
# - Country of birth (China, Russia, Costa Rica, ...)
# - Eye colour (brown, green, blue, ...)
# - Vehicle make (Ford, Volkswagen, ...)
#
# Most variables in a data set can be classified into one of two major types:
#
# - **Numerical variables**
# - **Categorical variables**
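# As a minimal sketch (with made-up values, not from any dataset used later), pandas reflects this distinction in column dtypes: numerical variables get numeric dtypes, while categorical variables are typically stored as strings (object) or the dedicated category dtype.

```python
import pandas as pd

# toy data frame mixing the two major variable types
df = pd.DataFrame({
    'age': [21, 35, 62],                       # numerical
    'eye_colour': ['brown', 'green', 'blue'],  # categorical (nominal)
})
print(df.dtypes)
```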
#
# ===================================================================================
#
#
# ## Categorical Variables
#
# The values of a categorical variable are selected from a group of **categories**, also called **labels**. Examples are gender (male or female) and marital status (never married, married, divorced or widowed). Other examples of categorical variables include:
#
# - Intended use of loan (debt-consolidation, car purchase, wedding expenses, ...)
# - Mobile network provider (Vodafone, Orange, ...)
# - Postcode
#
# Categorical variables can be further categorised into:
#
# - **Ordinal Variables**
# - **Nominal variables**
#
# ### Ordinal Variable
#
# Ordinal variables are categorical variables in which the categories can be meaningfully ordered. For example:
#
# - Student's grade in an exam (A, B, C or Fail).
# - Days of the week, where Monday = 1 and Sunday = 7.
# - Educational level, with the categories Elementary school, High school, College graduate and PhD ranked from 1 to 4.
#
# ### Nominal Variable
#
# For nominal variables, there isn't an intrinsic order in the labels. For example, country of birth, with values Argentina, England, Germany, etc., is nominal. Other examples of nominal variables include:
#
# - Car colour (blue, grey, silver, ...)
# - Vehicle make (Citroen, Peugeot, ...)
# - City (Manchester, London, Chester, ...)
#
# There is nothing that indicates an intrinsic order of the labels, and in principle, they are all equal.
#
# **To be considered:**
#
# Sometimes categorical variables are coded as numbers when the data are recorded (e.g. gender may be coded as 0 for males and 1 for females). The variable is still categorical, despite the use of numbers.
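# As a quick illustration (a hypothetical sketch, not taken from the loan dataset used below), a 0/1-coded gender column can be mapped back to its labels and stored with pandas' category dtype, so the labels rather than the numbers carry the meaning:

```python
import pandas as pd

# gender coded as 0/1 at recording time -- still a categorical variable
coded = pd.Series([0, 1, 1, 0], name='gender')
gender = coded.map({0: 'male', 1: 'female'}).astype('category')
print(gender.dtype)
print(gender.cat.categories)
```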
#
# In a similar way, individuals in a survey may be coded with a number that uniquely identifies them (for example, to avoid storing personal information for confidentiality). This number is really a label, and the variable is then categorical. The number has no meaning other than making it possible to uniquely identify the observation (in this case, the interviewed subject).
#
# Ideally, when we work with a dataset in a business scenario, the data will come with a dictionary that indicates if the numbers in the variables are to be considered as categories or if they are numerical. And if the numbers are categories, the dictionary would explain what each value in the variable represents.
#
# =============================================================================
#
# ## In this demo: Peer to peer lending (Finance)
#
# In this demo, we will use a toy data set which simulates data from a peer-to-peer finance company to inspect categorical variables.
#
# - You should have downloaded the **Datasets** together with the Jupyter notebooks in **Section 1**.
# +
import pandas as pd
import matplotlib.pyplot as plt
# +
# let's load the dataset
# Variable definitions:
#-------------------------
# loan_purpose: intended use of the loan
# market: the risk market assigned to the borrower (based on their financial situation)
# householder: whether the borrower owns or rents their property
data = pd.read_csv('../loan.csv')
data.head()
# +
# let's inspect the variable householder,
# which indicates whether the borrowers own their home
# or if they are renting, among other things.
data['householder'].unique()
# +
# let's make a bar plot, with the number of loans
# for each category of home ownership
# the code below counts the number of observations (borrowers)
# within each category and then makes a bar plot
fig = data['householder'].value_counts().plot.bar()
fig.set_title('Householder')
fig.set_ylabel('Number of customers')
# -
# The majority of the borrowers either own their house on a mortgage or rent their property. A few borrowers own their home completely.
data['householder'].value_counts()
# +
# the "loan_purpose" variable is another categorical variable
# that indicates how the borrowers intend to use the
# money they are borrowing, for example to improve their
# house, or to cancel previous debt.
data['loan_purpose'].unique()
# -
# Debt consolidation means that the borrower would like a loan to cancel previous debts, Car purchase means that the borrower is borrowing the money to buy a car, and so on. It gives an idea of the intended use of the loan.
# +
# let's make a bar plot with the number of borrowers
# within each category
# the code below counts the number of observations (borrowers)
# within each category and then makes a plot
fig = data['loan_purpose'].value_counts().plot.bar()
fig.set_title('Loan Purpose')
fig.set_ylabel('Number of customers')
# -
# The majority of the borrowers intend to use the loan for 'debt consolidation'. This is quite common: the borrowers consolidate all the debt they hold across different financial products into a single debt, the new loan they take from the peer-to-peer company. This loan usually provides an advantage to the borrower, either in the form of lower interest rates than a credit card, for example, or a longer repayment period.
# +
# let's look at one additional categorical variable,
# "market", which represents the risk market or risk band
# assigned to the borrower
data['market'].unique()
# +
# let's make a bar plot with the number of borrowers
# within each category
fig = data['market'].value_counts().plot.bar()
fig.set_title('Risk Market')
fig.set_ylabel('Number of customers')
# -
# Most customers are assigned to markets B and C. Market A contains the lowest risk customers, and market E the highest risk customers. The higher the risk, the more likely the customer is to default; thus, finance companies charge higher interest rates on those loans.
# +
# finally, let's look at a variable that is numerical,
# but its numbers have no real meaning
# their values are more "labels" than real numbers
data['customer_id'].head()
# -
# Each id represents one customer. This number is assigned to identify the customer if needed, while
# maintaining confidentiality and ensuring data protection.
# +
# The variable has as many different id values as customers,
# in this case 10000.
len(data['customer_id'].unique())
# -
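# A quick generic check (shown here on a small stand-in series rather than the loan data) that a column behaves as a pure identifier is to compare the number of unique values to the number of rows:

```python
import pandas as pd

# stand-in for an id column like data['customer_id']
ids = pd.Series([101, 102, 103, 104])

# a pure identifier has exactly one row per value
is_identifier = ids.nunique() == len(ids)
print(is_identifier)
```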
# **That is all for this demonstration. I hope you enjoyed the notebook, and see you in the next one.**
| Section-02-Types-of-Variables/02.2-Categorical-Variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ''
# name: sagemath
# ---
# + language="html"
# <link href="http://mathbook.pugetsound.edu/beta/mathbook-content.css" rel="stylesheet" type="text/css" />
# <link href="https://aimath.org/mathbook/mathbook-add-on.css" rel="stylesheet" type="text/css" />
# <style>.subtitle {font-size:medium; display:block}</style>
# <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic" rel="stylesheet" type="text/css" />
# <link href="https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext" rel="stylesheet" type="text/css" /><!-- Hide this cell. -->
# <script>
# var cell = $(".container .cell").eq(0), ia = cell.find(".input_area")
# if (cell.find(".toggle-button").length == 0) {
# ia.after(
# $('<button class="toggle-button">Toggle hidden code</button>').click(
# function (){ ia.toggle() }
# )
# )
# ia.hide()
# }
# </script>
#
# -
# **Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard.
# $\newcommand{\identity}{\mathrm{id}}
# \newcommand{\notdivide}{\nmid}
# \newcommand{\notsubset}{\not\subset}
# \newcommand{\lcm}{\operatorname{lcm}}
# \newcommand{\gf}{\operatorname{GF}}
# \newcommand{\inn}{\operatorname{Inn}}
# \newcommand{\aut}{\operatorname{Aut}}
# \newcommand{\Hom}{\operatorname{Hom}}
# \newcommand{\cis}{\operatorname{cis}}
# \newcommand{\chr}{\operatorname{char}}
# \newcommand{\Null}{\operatorname{Null}}
# \newcommand{\lt}{<}
# \newcommand{\gt}{>}
# \newcommand{\amp}{&}
# $
# <div class="mathbook-content"><h2 class="heading hide-type" alt="Section 2.1 Mathematical Induction"><span class="type">Section</span><span class="codenumber">2.1</span><span class="title">Mathematical Induction</span></h2><a href="section-math-induction.ipynb" class="permalink">¶</a></div>
# <div class="mathbook-content"><p id="p-230">Suppose we wish to show that</p><div class="displaymath">
# \begin{equation*}
# 1 + 2 + \cdots + n = \frac{n(n + 1)}{2}
# \end{equation*}
# </div><p>for any natural number $n\text{.}$ This formula is easily verified for small numbers such as $n = 1\text{,}$ 2, 3, or 4, but it is impossible to verify for all natural numbers on a case-by-case basis. To prove the formula true in general, a more generic method is required.</p></div>
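# The case-by-case verification mentioned above is easy to automate for small $n$ (a sanity check, of course, not a proof):

```python
# brute-force check of 1 + 2 + ... + n == n(n + 1)/2 for n = 1..100
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula verified for n = 1..100")
```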
# <div class="mathbook-content"><p id="p-231">Suppose we have verified the equation for the first $n$ cases. We will attempt to show that we can generate the formula for the $(n + 1)$th case from this knowledge. The formula is true for $n = 1$ since</p><div class="displaymath">
# \begin{equation*}
# 1 = \frac{1(1 + 1)}{2}.
# \end{equation*}
# </div><p>If we have verified the first $n$ cases, then</p><div class="displaymath">
# \begin{align*}
# 1 + 2 + \cdots + n + (n + 1) & = \frac{n(n + 1)}{2} + n + 1\\
# & = \frac{n^2 + 3n + 2}{2}\\
# & = \frac{(n + 1)[(n + 1) + 1]}{2}.
# \end{align*}
# </div><p>This is exactly the formula for the $(n + 1)$th case.</p></div>
# <div class="mathbook-content"><p id="p-232">This method of proof is known as <dfn class="terminology">mathematical induction</dfn>. Instead of attempting to verify a statement about some subset $S$ of the positive integers ${\mathbb N}$ on a case-by-case basis, an impossible task if $S$ is an infinite set, we give a specific proof for the smallest integer being considered, followed by a generic argument showing that if the statement holds for a given case, then it must also hold for the next case in the sequence. We summarize mathematical induction in the following axiom.</p></div>
# <div class="mathbook-content"><article class="theorem-like" id="principle-integers-first-pmi"><h6 class="heading"><span class="type">Principle</span><span class="codenumber">2.1</span><span class="title">First Principle of Mathematical Induction</span></h6><p id="p-233">Let $S(n)$ be a statement about integers for $n \in {\mathbb N}$ and suppose $S(n_0)$ is true for some integer $n_0\text{.}$ If for all integers $k$ with $k \geq n_0\text{,}$ $S(k)$ implies that $S(k+1)$ is true, then $S(n)$ is true for all integers $n$ greater than or equal to $n_0\text{.}$</p></article></div>
# <div class="mathbook-content"><article class="example-like" id="example-integers-induction-greater-than"><h6 class="heading"><span class="type">Example</span><span class="codenumber">2.2</span></h6><p id="p-234">For all integers $n \geq 3\text{,}$ $2^n \gt n + 4\text{.}$ Since</p><div class="displaymath">
# \begin{equation*}
# 8 = 2^3 \gt 3 + 4 = 7,
# \end{equation*}
# </div><p>the statement is true for $n_0 = 3\text{.}$ Assume that $2^k \gt k + 4$ for $k \geq 3\text{.}$ Then $2^{k + 1} = 2 \cdot 2^{k} \gt 2(k + 4)\text{.}$ But</p><div class="displaymath">
# \begin{equation*}
# 2(k + 4) = 2k + 8 \gt k + 5 = (k + 1) + 4
# \end{equation*}
# </div><p>since $k$ is positive. Hence, by induction, the statement holds for all integers $n \geq 3\text{.}$</p></article></div>
# <div class="mathbook-content"><article class="example-like" id="example-integers-induction-divisible"><h6 class="heading"><span class="type">Example</span><span class="codenumber">2.3</span></h6><p id="p-235">Every integer $10^{n + 1} + 3 \cdot 10^n + 5$ is divisible by 9 for $n \in {\mathbb N}\text{.}$ For $n = 1\text{,}$</p><div class="displaymath">
# \begin{equation*}
# 10^{1 + 1} + 3 \cdot 10 + 5 = 135 = 9 \cdot 15
# \end{equation*}
# </div><p>is divisible by 9. Suppose that $10^{k + 1} + 3 \cdot 10^k + 5$ is divisible by 9 for $k \geq 1\text{.}$ Then</p><div class="displaymath">
# \begin{align*}
# 10^{(k + 1) + 1} + 3 \cdot 10^{k + 1} + 5& = 10^{k + 2} + 3 \cdot 10^{k + 1} + 50 - 45\\
# & = 10 (10^{k + 1} + 3 \cdot 10^{k} + 5) - 45
# \end{align*}
# </div><p>is divisible by 9.</p></article></div>
# <div class="mathbook-content"><article class="example-like" id="example-integers-binomial-theorem"><h6 class="heading"><span class="type">Example</span><span class="codenumber">2.4</span></h6><p id="p-236">We will prove the binomial theorem using mathematical induction; that is,</p><div class="displaymath">
# \begin{equation*}
# (a + b)^n = \sum_{k = 0}^{n} \binom{n}{k} a^k b^{n - k},
# \end{equation*}
# </div><p>where $a$ and $b$ are real numbers, $n \in \mathbb{N}\text{,}$ and</p><div class="displaymath">
# \begin{equation*}
# \binom{n}{k} = \frac{n!}{k! (n - k)!}
# \end{equation*}
# </div><p>is the binomial coefficient. We first show that</p><div class="displaymath">
# \begin{equation*}
# \binom{n + 1}{k} = \binom{n}{k} + \binom{n}{k - 1}.
# \end{equation*}
# </div><p>This result follows from</p><div class="displaymath">
# \begin{align*}
# \binom{n}{k} + \binom{n}{k - 1} & = \frac{n!}{k!(n - k)!} +\frac{n!}{(k-1)!(n - k + 1)!}\\
# & = \frac{(n + 1)!}{k!(n + 1 - k)!}\\
# & =\binom{n + 1}{k}.
# \end{align*}
# </div><p>If $n = 1\text{,}$ the binomial theorem is easy to verify. Now assume that the result is true for $n$ greater than or equal to 1. Then</p><div class="displaymath">
# \begin{align*}
# (a + b)^{n + 1} & = (a + b)(a + b)^n\\
# & = (a + b) \left( \sum_{k = 0}^{n} \binom{n}{k} a^k b^{n - k}\right)\\
# & = \sum_{k = 0}^{n} \binom{n}{k} a^{k + 1} b^{n - k} + \sum_{k = 0}^{n} \binom{n}{k} a^k b^{n + 1 - k}\\
# & = a^{n + 1} + \sum_{k = 1}^{n} \binom{n}{k - 1} a^{k} b^{n + 1 - k} + \sum_{k = 1}^{n} \binom{n}{k} a^k b^{n + 1 - k} + b^{n + 1}\\
# & = a^{n + 1} + \sum_{k = 1}^{n} \left[ \binom{n}{k - 1} + \binom{n}{k} \right]a^k b^{n + 1 - k} + b^{n + 1}\\
# & = \sum_{k = 0}^{n + 1} \binom{n + 1}{k} a^k b^{n + 1- k}.
# \end{align*}
# </div></article></div>
# <div class="mathbook-content"><p id="p-237">We have an equivalent statement of the Principle of Mathematical Induction that is often very useful.</p></div>
# <div class="mathbook-content"><article class="theorem-like" id="principle-integers-second-pmi"><h6 class="heading"><span class="type">Principle</span><span class="codenumber">2.5</span><span class="title">Second Principle of Mathematical Induction</span></h6><p id="p-238">Let $S(n)$ be a statement about integers for $n \in {\mathbb N}$ and suppose $S(n_0)$ is true for some integer $n_0\text{.}$ If $S(n_0), S(n_0 + 1), \ldots, S(k)$ imply that $S(k + 1)$ for $k \geq n_0\text{,}$ then the statement $S(n)$ is true for all integers $n \geq n_0\text{.}$</p></article></div>
# <div class="mathbook-content"><p id="p-239">A nonempty subset $S$ of ${\mathbb Z}$ is <dfn class="terminology">well-ordered</dfn> if $S$ contains a least element. Notice that the set ${\mathbb Z}$ is not well-ordered since it does not contain a smallest element. However, the natural numbers are well-ordered.</p></div>
# <div class="mathbook-content"><article class="theorem-like" id="principle-3"><h6 class="heading"><span class="type">Principle</span><span class="codenumber">2.6</span><span class="title">Principle of Well-Ordering</span></h6><p id="p-240">Every nonempty subset of the natural numbers is well-ordered.</p></article></div>
# <div class="mathbook-content"><p id="p-241">The Principle of Well-Ordering is equivalent to the Principle of Mathematical Induction.</p></div>
# <div class="mathbook-content"><article class="theorem-like" id="lemma-integers-smallest-number"><h6 class="heading"><span class="type">Lemma</span><span class="codenumber">2.7</span></h6><p id="p-242">The Principle of Mathematical Induction implies that $1$ is the least positive natural number.</p></article><article class="proof" id="proof-6"><h6 class="heading"><span class="type">Proof</span></h6><p id="p-243">Let $S = \{ n \in {\mathbb N} : n \geq 1 \}\text{.}$ Then $1 \in S\text{.}$ Assume that $n \in S\text{.}$ Since $0 \lt 1\text{,}$ it must be the case that $n = n + 0 \lt n + 1\text{.}$ Therefore, $1 \leq n \lt n + 1\text{.}$ Consequently, if $n \in S\text{,}$ then $n + 1$ must also be in $S\text{,}$ and by the Principle of Mathematical Induction, and $S = \mathbb N\text{.}$</p></article></div>
# <div class="mathbook-content"><article class="theorem-like" id="theorem-integers-pmi-implies-pwo"><h6 class="heading"><span class="type">Theorem</span><span class="codenumber">2.8</span></h6><p id="p-244">The Principle of Mathematical Induction implies the Principle of Well-Ordering. That is, every nonempty subset of $\mathbb N$ contains a least element.</p></article><article class="proof" id="proof-7"><h6 class="heading"><span class="type">Proof</span></h6><p id="p-245">We must show that if $S$ is a nonempty subset of the natural numbers, then $S$ contains a least element. If $S$ contains 1, then the theorem is true by Lemma <a href="section-math-induction.ipynb#lemma-integers-smallest-number" class="xref" alt="Lemma 2.7 " title="Lemma 2.7 ">2.7</a>. Assume that if $S$ contains an integer $k$ such that $1 \leq k \leq n\text{,}$ then $S$ contains a least element. We will show that if a set $S$ contains an integer less than or equal to $n + 1\text{,}$ then $S$ has a least element. If $S$ does not contain an integer less than $n+1\text{,}$ then $n+1$ is the smallest integer in $S\text{.}$ Otherwise, since $S$ is nonempty, $S$ must contain an integer less than or equal to $n\text{.}$ In this case, by induction, $S$ contains a least element.</p></article></div>
# <div class="mathbook-content"><p id="p-246">Induction can also be very useful in formulating definitions. For instance, there are two ways to define $n!\text{,}$ the factorial of a positive integer $n\text{.}$ </p><ul class="disc"><li id="li-70"><p id="p-247">The <em class="emphasis">explicit</em> definition: $n! = 1 \cdot 2 \cdot 3 \cdots (n - 1) \cdot n\text{.}$</p></li><li id="li-71"><p id="p-248">The <em class="emphasis">inductive</em> or <em class="emphasis">recursive</em> definition: $1! = 1$ and $n! = n(n - 1)!$ for $n \gt 1\text{.}$</p></li></ul><p>Every good mathematician or computer scientist knows that looking at problems recursively, as opposed to explicitly, often results in better understanding of complex issues.</p></div>
| aata/section-math-induction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# <style>div.container { width: 100% }</style>
# <img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="assets/PyViz_logo_wm_line.png" />
# <div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 06. Network Graphs</h2></div>
# Visualizing and working with network graphs is a common problem in many different disciplines. HoloViews provides the ability to represent and visualize graphs very simply and easily with facilities for interactively exploring the nodes and edges of the graph, especially using the Bokeh plotting interface. It can also make use of Datashader for plotting large graphs, and NetworkX for some convenient graph functions:
#
# <div style="margin: 10px">
# <a href="http://holoviews.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/holoviews.png"/></a>
# <a href="http://bokeh.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/bokeh.png"/></a>
# <a href="http://numpy.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/numpy.png"/></a>
# <a href="http://pandas.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:140px" src="./assets/pandas.png"/></a>
# <a href="http://networkx.github.io"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:140px" src="./assets/networkx.png"/></a>
# </div>
# +
import numpy as np
import pandas as pd
import holoviews as hv
import networkx as nx
hv.extension('bokeh')
# %opts Graph [width=400 height=400]
# -
# The HoloViews ``Graph`` ``Element`` differs from other elements in HoloViews in that it consists of multiple sub-elements. The ``Graph`` element itself holds the data that indicates whether each node is connected to each other node. By default the element will automatically compute concrete ``x`` and ``y`` positions for the nodes and represent them using a ``Nodes`` element, which is stored on the Graph. The abstract edges and concrete node positions are sufficient to render the ``Graph`` by drawing straight-line edges between the nodes. In order to supply explicit edge paths we can also declare ``EdgePaths``, providing explicit coordinates for each edge to follow.
#
# To summarize, a ``Graph`` consists of three different components:
#
# * The ``Graph`` itself holds the abstract edges stored as a table of node index pairs.
# * The ``Nodes`` hold the concrete ``x`` and ``y`` positions of each node along with a node ``index``. The ``Nodes`` may also define any number of value dimensions, which can be revealed when hovering over the nodes or to color the nodes by.
# * The ``EdgePaths`` can optionally be supplied to declare explicit node paths.
#
# #### A simple Graph
#
# Let's start by declaring a very simple graph connecting one node to all others. If we simply supply the abstract connectivity of the ``Graph``, it will automatically compute a layout for the nodes using the ``layout_nodes`` operation, which defaults to a circular layout:
# +
# Declare abstract edges
N = 8
node_indices = np.arange(N)
source = np.zeros(N)
target = node_indices
padding = dict(x=(-1.2, 1.2), y=(-1.2, 1.2))
simple_graph = hv.Graph(((source, target),)).redim.range(**padding)
simple_graph
# -
# #### Accessing the nodes and edges
#
# We can easily access the ``Nodes`` and ``EdgePaths`` on the ``Graph`` element using the corresponding properties:
simple_graph.nodes + simple_graph.edgepaths
# #### Supplying explicit paths
#
# Next we will extend this example by supplying explicit edges:
# +
def bezier(start, end, control, steps=np.linspace(0, 1, 100)):
return (1-steps)**2*start + 2*(1-steps)*steps*control+steps**2*end
x, y = simple_graph.nodes.array([0, 1]).T
paths = []
for node_index in node_indices:
ex, ey = x[node_index], y[node_index]
paths.append(np.column_stack([bezier(x[0], ex, 0), bezier(y[0], ey, 0)]))
bezier_graph = hv.Graph(((source, target), (x, y, node_indices), paths)).redim.range(**padding)
bezier_graph
# -
# ## Interactive features
# #### Hover and selection policies
#
# Thanks to Bokeh we can reveal more about the graph by hovering over the nodes and edges. The ``Graph`` element provides an ``inspection_policy`` and a ``selection_policy``, which define whether hovering and selection highlight edges associated with the selected node or nodes associated with the selected edge. These policies can be toggled by setting the policy to ``'nodes'`` (the default) or ``'edges'``.
bezier_graph.options(inspection_policy='edges')
# In addition to changing the policy, we can also change the colors used when hovering and selecting nodes:
# %%opts Graph [tools=['hover', 'box_select']] (edge_hover_line_color='green' node_hover_fill_color='red')
bezier_graph.options(inspection_policy='nodes')
# #### Additional information
#
# We can also associate additional information with the nodes and edges of a graph. By constructing the ``Nodes`` explicitly we can declare additional value dimensions, which are revealed when hovering and/or can be mapped to the color by specifying the ``color_index``. Similarly, we can associate additional information with each *edge* by supplying a value dimension to the ``Graph`` itself.
# +
# %%opts Graph [color_index='Type'] (cmap='Set1')
node_labels = ['Output']+['Input']*(N-1)
edge_labels = list('ABCDEFGH')
nodes = hv.Nodes((x, y, node_indices, node_labels), vdims='Type')
graph = hv.Graph(((source, target, edge_labels), nodes, paths), vdims='Label').redim.range(**padding)
graph + graph.options(inspection_policy='edges')
# -
# If you want to supply additional node information without specifying explicit node positions, you may pass in a ``Dataset`` object consisting only of various value dimensions.
# %%opts Graph [color_index='Label'] (cmap='Set1')
node_info = hv.Dataset(node_labels, vdims='Label')
hv.Graph(((source, target), node_info)).redim.range(**padding)
# ## Working with NetworkX
# NetworkX is a very useful library when working with network graphs, and the Graph Element provides ways of importing a NetworkX Graph directly. Here we will load the Karate Club graph and use the ``circular_layout`` function provided by NetworkX to lay it out:
# %%opts Graph [tools=['hover'] color_index='club'] (cmap='Set1')
G = nx.karate_club_graph()
hv.Graph.from_networkx(G, nx.layout.circular_layout).redim.range(**padding)
# #### Animating graphs
# Like all other elements ``Graph`` can be updated in a ``HoloMap`` or ``DynamicMap``. Here we animate how the Fruchterman-Reingold force-directed algorithm lays out the nodes in real time.
# +
# %%opts Graph [tools=['hover'] color_index='club'] (cmap='Set1')
G = nx.karate_club_graph()
def get_graph(iteration):
np.random.seed(10)
return hv.Graph.from_networkx(G, nx.spring_layout, iterations=iteration)
hv.HoloMap({i: get_graph(i) for i in range(5, 30, 5)},
kdims='Iterations').redim.range(x=(-1.2, 1.2), y=(-1.2, 1.2))
# -
# ## Real world graphs
# As a final example, let's look at a slightly larger graph. We will load a dataset of a Facebook network consisting of a number of friendship groups identified by their ``'circle'``. We will load the edge and node data using pandas and then color each node by its friendship group using many of the things we learned above.
# %opts Nodes Graph [width=800 height=800 xaxis=None yaxis=None]
# %%opts Graph [color_index='circle']
# %%opts Graph (node_size=10 edge_line_width=1)
colors = ['#000000']+hv.Cycle('Category20').values
edges_df = pd.read_csv('../data/fb_edges.csv')
fb_nodes = hv.Nodes(pd.read_csv('../data/fb_nodes.csv')).sort()
fb_graph = hv.Graph((edges_df, fb_nodes), label='Facebook Circles')
fb_graph = fb_graph.redim.range(x=(-0.05, 1.05), y=(-0.05, 1.05)).options(cmap=colors)
fb_graph
# ## Bundling graphs
# Later, in [Working with Large Datasets](10_Working_with_Large_Datasets.ipynb) we will see how the [Datashader](http://datashader.org/) library allows us to render very large datasets efficiently. In this section, we use the algorithms for bundling the edges of large graphs that are available in datashader via HoloViews.
from holoviews.operation.datashader import datashade, bundle_graph
bundled = bundle_graph(fb_graph)
bundled
# ## Datashading graphs
# For graphs with a large number of edges we can datashade the paths and display the nodes separately. This loses some of the interactive features but will let you visualize quite large graphs. If the number of edges is much greater than the number of nodes, using datashader to render the edges still lets you interact with each node for hovering, even though the connections are now drawn as an image:
# %%opts Nodes [color_index='circle'] (size=10 cmap=colors) Overlay [show_legend=False]
datashade(bundled, normalization='linear', width=800, height=800) * bundled.nodes
# ### Applying selections
# Alternatively we can select the nodes and edges by an attribute that resides on either. In this case we will select the nodes and edges for a particular circle and then overlay just the selected part of the graph on the datashaded plot. Note that selections on the ``Graph`` itself will select all nodes that connect to one of the selected nodes. In this way a smaller subgraph can be highlighted and the larger graph can be datashaded to reduce the file size.
# %%opts Graph (node_fill_color='white')
datashade(bundle_graph(fb_graph), normalization='linear', width=800, height=800) *\
bundled.select(circle='circle15')
# To select just the nodes that are in 'circle15' set the ``selection_mode='nodes'`` overriding the default of 'edges':
bundled.select(circle='circle15', selection_mode='nodes')
# # Onwards
#
# Having seen how to visualize and interactively explore graphical data, we now go on to demonstrate how to visualize and explore a specific domain: [Geographic Data](./07_Geographic_Data.ipynb). While domain specific, geographic data is both very common and typically awkward to handle.
| notebooks/06_Network_Graphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project 0
# In MATLAB, open and show an image.
#
# Write a MATLAB function (function histImg()) that computes the histogram and cumulative histogram without using histogram(), etc. Show them in a figure.
#
# Write a MATLAB function (function histEqual()) that performs histogram equalization without using histeq(). Show the resulting image in a figure.
#
# Write three MATLAB functions for linearly transforming (function linTransformImg()), smoothing (function smoothImg()), and sharpening (function sharpenImage()) the image. Show the resulting images in a figure.
# Import packages.
import cv2
import numpy as np
# Some test
print(cv2.__version__)
print(np.__version__)
# ## Import & show an image
# *Open & show the image 1.jpg.*
#
# Refer to: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html
# Load the image in grayscale (the second argument, 0, selects grayscale).
img = cv2.imread('1.jpg',0)
# Show the image.
cv2.imshow('1', img)
# Wait for user to close the window.
cv2.waitKey(0)
cv2.destroyAllWindows()
# ## Create a histogram by hand
# *Do not use a stock function, just create your histogram by hand.*
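# One possible hand-rolled sketch (the function name mirrors the histImg() requirement above; the details are our own, not a reference solution): count how often each of the 256 grey levels occurs, then accumulate the counts for the cumulative histogram.

```python
import numpy as np

def hist_img(img):
    """Histogram and cumulative histogram of an 8-bit image, computed by hand."""
    hist = np.zeros(256, dtype=np.int64)
    for value in img.ravel():          # visit every pixel
        hist[value] += 1               # tally its grey level
    cum_hist = np.cumsum(hist)         # running total of the counts
    return hist, cum_hist

# tiny example image instead of 1.jpg
img = np.array([[0, 1], [1, 255]], dtype=np.uint8)
hist, cum_hist = hist_img(img)
print(hist[1], cum_hist[-1])
```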
| Project0/.ipynb_checkpoints/Project0-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
# - Author: <NAME>
# - GitHub Repository: https://github.com/rasbt/deeplearning-models
# %load_ext watermark
# %watermark -a '<NAME>' -v -p torch
# - Runs on CPU or GPU (if available)
# # Gradient Clipping
# Certain types of deep neural networks, especially simple ones without any other type of regularization and with a relatively large number of layers, can suffer from exploding gradient problems. The exploding gradient problem is a scenario where large loss gradients accumulate during backpropagation, eventually resulting in very large weight updates during training. As a consequence, the updates become very unstable and fluctuate a lot, which often causes severe problems during training. This is also a particular problem for unbounded activation functions such as ReLU.
#
# One common, classic technique for avoiding exploding gradients is gradient clipping. Here, we simply set gradient values above or below a certain threshold to a user-specified min or max value. In PyTorch, there are several ways to perform gradient clipping.
#
# **1 - Basic Clipping**
#
# The simplest approach to gradient clipping in PyTorch is the [`torch.nn.utils.clip_grad_value_`](https://pytorch.org/docs/stable/nn.html?highlight=clip#torch.nn.utils.clip_grad_value_) function. For example, if we have instantiated a PyTorch model from a model class based on `torch.nn.Module` (as usual), we can add the following line of code in order to clip the gradients to the [-1, 1] range:
#
# ```python
# torch.nn.utils.clip_grad_value_(parameters=model.parameters(),
# clip_value=1.)
#
# ```
#
# However, notice that with this approach we can only specify a single clip value, which is used for both the upper and lower bound, so gradients are clipped to the range [-`clip_value`, `clip_value`].
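# As a quick sanity check, the clipping effect can be observed on a tiny stand-alone parameter (the toy tensor below is an illustrative assumption, not part of the MNIST model used later):

```python
import torch

# a tiny parameter with a hand-set gradient, to observe the clipping effect
p = torch.nn.Parameter(torch.zeros(3))
p.grad = torch.tensor([-5.0, 0.5, 3.0])

torch.nn.utils.clip_grad_value_([p], clip_value=1.)
print(p.grad)  # every entry now lies in [-1, 1]
```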
#
#
# **2 - Custom Lower and Upper Bounds**
#
# If we want to clip the gradients to an asymmetric interval around zero, say [-0.1, 1.0], we can take a different approach and define a backward hook:
#
# ```python
# for param in model.parameters():
# param.register_hook(lambda gradient: torch.clamp(gradient, -0.1, 1.0))
# ```
#
# This backward hook only needs to be registered once after instantiating the model. Then, each time `backward` is called, it clips the gradients before `optimizer.step()` applies the update.
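# A minimal sketch of the hook in action (the toy tensor below is an illustrative assumption, not part of the MNIST model):

```python
import torch

w = torch.nn.Parameter(torch.ones(3))
# register the hook once; it then runs on every backward pass
w.register_hook(lambda grad: torch.clamp(grad, -0.1, 1.0))

loss = (torch.tensor([-5.0, 0.05, 3.0]) * w).sum()
loss.backward()
print(w.grad)  # each entry clamped to [-0.1, 1.0]
```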
#
# **3 - Norm-clipping**
#
# Lastly, there's a third clipping option, [`torch.nn.utils.clip_grad_norm_`](https://pytorch.org/docs/stable/nn.html?highlight=clip#torch.nn.utils.clip_grad_norm_), which clips the gradients using a vector norm as follows:
#
#
# > `torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2)`
#
# >Clips gradient norm of an iterable of parameters. The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
#
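# A minimal sketch of norm-based clipping on a toy gradient (illustrative only; the vector [3, 4] is chosen so the L2 norm is easy to verify by hand):

```python
import torch

p = torch.nn.Parameter(torch.zeros(2))
p.grad = torch.tensor([3.0, 4.0])   # L2 norm is 5

total_norm = torch.nn.utils.clip_grad_norm_([p], max_norm=1.0)
print(total_norm)  # the norm before clipping
print(p.grad)      # rescaled so the total norm is 1: roughly [0.6, 0.8]
```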
# ## Imports
# +
import time
import numpy as np
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
# -
# ## Settings and Dataset
# +
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 1
learning_rate = 0.01
num_epochs = 10
batch_size = 64
# Architecture
num_features = 784
num_hidden_1 = 256
num_hidden_2 = 128
num_hidden_3 = 64
num_hidden_4 = 32
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
# -
def compute_accuracy(net, data_loader):
net.eval()
correct_pred, num_examples = 0, 0
with torch.no_grad():
for features, targets in data_loader:
features = features.view(-1, 28*28).to(device)
targets = targets.to(device)
logits, probas = net(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
# +
##########################
### MODEL
##########################
class MultilayerPerceptron(torch.nn.Module):
def __init__(self, num_features, num_classes):
super(MultilayerPerceptron, self).__init__()
### 1st hidden layer
self.linear_1 = torch.nn.Linear(num_features, num_hidden_1)
### 2nd hidden layer
self.linear_2 = torch.nn.Linear(num_hidden_1, num_hidden_2)
### 3rd hidden layer
self.linear_3 = torch.nn.Linear(num_hidden_2, num_hidden_3)
### 4th hidden layer
self.linear_4 = torch.nn.Linear(num_hidden_3, num_hidden_4)
### Output layer
self.linear_out = torch.nn.Linear(num_hidden_4, num_classes)
def forward(self, x):
out = self.linear_1(x)
out = F.relu(out)
out = self.linear_2(out)
out = F.relu(out)
out = self.linear_3(out)
out = F.relu(out)
out = self.linear_4(out)
out = F.relu(out)
logits = self.linear_out(out)
probas = F.log_softmax(logits, dim=1)
return logits, probas
# -
# ## 1 - Basic Clipping
# +
torch.manual_seed(random_seed)
model = MultilayerPerceptron(num_features=num_features,
num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###################################################################
start_time = time.time()
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.view(-1, 28*28).to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
#########################################################
#########################################################
### GRADIENT CLIPPING
torch.nn.utils.clip_grad_value_(model.parameters(), 1.)
#########################################################
#########################################################
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
with torch.set_grad_enabled(False):
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
# -
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
# ## 2 - Custom Lower and Upper Bounds
# +
torch.manual_seed(random_seed)
model = MultilayerPerceptron(num_features=num_features,
num_classes=num_classes)
#########################################################
#########################################################
### GRADIENT CLIPPING
for p in model.parameters():
p.register_hook(lambda grad: torch.clamp(grad, -0.1, 1.0))
#########################################################
#########################################################
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###################################################################
start_time = time.time()
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.view(-1, 28*28).to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
with torch.set_grad_enabled(False):
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
# -
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
# ## 3 - Norm-clipping
# +
torch.manual_seed(random_seed)
model = MultilayerPerceptron(num_features=num_features,
num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###################################################################
start_time = time.time()
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.view(-1, 28*28).to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
#########################################################
#########################################################
### GRADIENT CLIPPING
torch.nn.utils.clip_grad_norm_(model.parameters(), 1., norm_type=2)
#########################################################
#########################################################
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
with torch.set_grad_enabled(False):
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
# -
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
# %watermark -iv
| pytorch_ipynb/tricks/gradclipping_mlp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook reproduces the timepoint-by-timepoint recall temporal correlation matrices
# ## Imports
# +
import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist
from sherlock_helpers.constants import DATA_DIR, EDGECOLOR, FIG_DIR
from sherlock_helpers.functions import draw_bounds, show_source
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
# %matplotlib inline
# -
# ## Inspect `draw_bounds` function
show_source(draw_bounds)
# ## Paths & plotting params
sns.set_context('paper')
mpl.rcParams['pdf.fonttype'] = 42
cmap = plt.cm.bone_r
# ## Load the data
video_model, recall_models = np.load(DATA_DIR.joinpath('models_t100_v50_r10.npy'),
allow_pickle=True)
boundary_models = np.load(DATA_DIR.joinpath('recall_eventseg_models'),
allow_pickle=True)
# ## Compute correlation matrices
corrmats = [np.corrcoef(r) for r in recall_models]
# ## Plot figure
# +
fig, axarr = plt.subplots(5, 4)
axarr = axarr.ravel()
fig.set_size_inches(8, 10)
for i, ax in enumerate(axarr):
try:
c = corrmats[i]
b = boundary_models[i]
except IndexError:
ax.axis('off')
continue
if len(c) > 250:
tick_freq = 100
elif len(c) > 125:
tick_freq = 50
else:
tick_freq = 25
sns.heatmap(c,
cmap=cmap,
xticklabels=tick_freq,
yticklabels=tick_freq,
vmin=0,
vmax=1,
cbar=False,
ax=ax)
ax.set_ylabel('Recall time (window)')
ax.set_xlabel('Recall time (window)')
ax.set_title(f'P{i + 1}')
for spine in ax.spines.values():
spine.set_visible(True)
ax.collections[0].remove()
ax.imshow(c, aspect='auto', cmap=cmap)
draw_bounds(ax, b)
axarr[17].axis('off')
axarr[18].axis('off')
axarr[19].axis('off')
plt.tight_layout()
# plt.savefig(FIG_DIR.joinpath('corrmats.pdf'))
plt.show()
| code/notebooks/supp/corrmats.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Functions
#
# - Functions let you define reusable code, and help organize and simplify a program
# - In practice, a function usually implements one small piece of functionality
# - A class implements a larger piece of functionality
# - Likewise, a function's length should not exceed one screen
def shenxian(name1, name2):
    print('The immortals are {} and {}'.format(name1, name2))
shenxian('haha', 'hehe')
def qiuou(shu, is_print=True):
    if shu % 2 == 0:
        if is_print:
            print('even')
shu = int(input())  # int() is safer than eval() for reading a number
qiuou(shu, is_print=False)
# +
def ou():
    print('Even: ', end='')
    for i in range(11):
        if i % 2 == 0:
            print(i, end=' ')
    ji()
def ji():
    print()
    print('Odd: ', end='')
    for i in range(11):
        if i % 2 != 0:
            print(i, end=' ')
# -
ou()
# +
import hashlib
data = 'This is a md5 test!'
hash_md5 = hashlib.md5(data.encode())
hash_md5.hexdigest()
def password():
pass
def md5():
pass
def result():
pass
# +
import hashlib
# ------- Password stored in the database ---------
data = 'hu<PASSWORD>'
hash_md5 = hashlib.md5(data.encode())
init_password = <PASSWORD>()
# ------- Plaintext password from the AJAX login -------
def password(passw):
    md5(passw)  # compute the MD5 digest
# ---------- MD5 digest ----------
def md5(passw):
hash_md5 = hashlib.md5(passw.encode())
    input_password = hash_md5.hexdigest()  # MD5 digest of the entered password
if init_password == input_password:
#result('OK')
return
else:
        result('Failed')
def result(res):
print(res)
# -
# ## Defining a function
#
# def function_name(list of parameters):
#
#     do something
# 
# - random, range, print, etc., which we used earlier, are themselves functions or classes
# ## Calling a function
# - functionName()
# - the parentheses "()" perform the call
# 
# ## Functions with and without return values
# - return hands a value back to the caller
# - return can also return multiple values
# - Usually, when several functions cooperate to implement one feature, they will have return values
# 
#
# - Of course, a function can also explicitly return None
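# A minimal sketch of returning multiple values (the helper name `min_max` is an illustrative assumption):

```python
def min_max(numbers):
    # Python packs the two values into a tuple, and the caller can unpack them
    return min(numbers), max(numbers)

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)  # 1 5
```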
# ## EP:
# 
def main():
    print(min(min(5, 6), min(51, 6)))  # the inner min was missing, making the comparison int-vs-tuple fail
# ## Parameter types and keyword arguments
# - Regular parameters
# - Multiple parameters
# - Default-value parameters
# - Variable-length parameters
# ## Regular parameters
# ## Multiple parameters
# ## Default-value parameters
# ## Keyword-only parameters
def test(name1, *, name2):
    pass  # parameters after * must be passed by keyword
# ## Variable-length parameters
# - \*args
# > - Variable length: collects however many positional arguments are passed, or none at all
# - They are collected into a tuple
# - The name args can be changed; it is simply the convention
# - \**kwargs
# > - Collected into a dict
# - The arguments must be key=value pairs
# +
def test(*args):
print(args)
test(1,2)
# -
def test(*args, name3): pass           # fine as long as name3 is passed by keyword
def test(*args, name3='hahaha'): pass  # keyword-only with a default value
def test(name3, *args): pass
def test(*args): pass                  # positional arguments collected as a tuple
def test(**kwargs): pass               # keyword arguments collected as a dict
# +
def test(**arges):
print(arges)
test(a=1, b=2)
# -
# ## Variable scope
# - Local variables (local)
# - Global variables (global)
# - The globals() function returns a dict of all global variables, including imported names
# - The locals() function returns a dict of all local variables at the current position
# ## Note:
# - global must be declared before assigning to a global variable inside a function
# - Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
# - 
# # Homework
# - 1
# 
def getPentagonalNumber(n):
    for i in range(1, n+1):
        a = i*(3*i-1)//2  # pentagonal numbers are integers, so use floor division
        if i % 10 == 0:
            print(a)      # start a new row after every ten numbers
        else:
            print(a, end=' ')
getPentagonalNumber(100)
# - 2
# 
def sumDigits(n):
a = n // 100
b = (n % 100) //10
c = (n % 100) %10
print (a+b+c)
sumDigits(234)
# - 3
# 
def display(num1,num2,num3):
a = max(num1,num2,num3)
b = min(num1,num2,num3)
c = num1+num2+num3-a-b
print(b,c,a)
display(3,2,4)
# - 4
# 
# - 5
# 
def printchars(ch1,ch2,num):
a = ord(ch1)
b = ord(ch2)+1
j=0
for i in range(a,b):
j +=1
if j % num ==0:
print(chr(i),end=' ')
print()
else:
print(chr(i),end=' ')
printchars('1','Z',10)
# - 6
# 
# +
def numberof(*year):
    for i in year:
        if (i % 4 == 0 and i % 100 != 0) or i % 400 == 0:
            print(i, 'is a leap year with 366 days!')
        else:
            print(i, 'is a common year with 365 days!')
numberof(2010,2011,2012,2013,2014,2015,2016,2017,2018,2019,2020)
# -
# - 7
# 
import math
def distance(x1,y1,x2,y2):
b = (x1 - x2 )**2 +(y1 - y2 )**2
c = math.sqrt(b)
print(c)
distance(0,6,0,4)
# - 8
# 
# - 9
# 
# 
# - 10
# 
# - 11
# ### Research online how to send email with Python code
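# As a starting point for this homework, a sketch using only the standard library. All addresses and the SMTP host below are placeholders that must be replaced; the send itself is commented out because it needs a reachable server:

```python
from email.message import EmailMessage

# build the message first; the addresses are placeholders, not real accounts
msg = EmailMessage()
msg['Subject'] = 'Hello from Python'
msg['From'] = 'sender@example.com'
msg['To'] = 'receiver@example.com'
msg.set_content('This mail was composed with the email module from the standard library.')
print(msg['Subject'])

# actually sending it requires a reachable SMTP server, e.g.:
# import smtplib
# with smtplib.SMTP('smtp.example.com', 587) as server:
#     server.starttls()
#     server.login('user', 'password')
#     server.send_message(msg)
```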
| 7.20.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# KMeans
# +
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize
from sklearn.metrics import pairwise_distances
import nltk
import string
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('fivethirtyeight')
# +
df = pd.read_csv('BlueMosque.csv',nrows = 35000,delimiter=';', skiprows=0, low_memory=False)
data = df['Comment']
tf_idf_vectorizor = TfidfVectorizer(stop_words = 'english', max_features = 20000)
tf_idf = tf_idf_vectorizor.fit_transform(data)
tf_idf_norm = normalize(tf_idf)
tf_idf_array = tf_idf_norm.toarray()
# -
class Kmeans:
def __init__(self, k, seed = None, max_iter = 200):
self.k = k
self.seed = seed
if self.seed is not None:
np.random.seed(self.seed)
self.max_iter = max_iter
def initialise_centroids(self, data):
initial_centroids = np.random.permutation(data.shape[0])[:self.k]
self.centroids = data[initial_centroids]
return self.centroids
def assign_clusters(self, data):
if data.ndim == 1:
data = data.reshape(-1, 1)
dist_to_centroid = pairwise_distances(data, self.centroids, metric = 'euclidean')
self.cluster_labels = np.argmin(dist_to_centroid, axis = 1)
return self.cluster_labels
def update_centroids(self, data):
self.centroids = np.array([data[self.cluster_labels == i].mean(axis = 0) for i in range(self.k)])
return self.centroids
def predict(self, data):
return self.assign_clusters(data)
def fit_kmeans(self, data):
self.centroids = self.initialise_centroids(data)
for iter in range(self.max_iter):
self.cluster_labels = self.assign_clusters(data)
self.centroids = self.update_centroids(data)
if iter % 100 == 0:
print("Running Model Iteration %d " %iter)
print("Model finished running")
return self
# +
sklearn_pca = PCA(n_components = 2)
Y_sklearn = sklearn_pca.fit_transform(tf_idf_array)
number_clusters = range(1, 7)
kmeans = [KMeans(n_clusters=i, max_iter = 600) for i in number_clusters]
kmeans
score = [kmeans[i].fit(Y_sklearn).score(Y_sklearn) for i in range(len(kmeans))]
score
plt.plot(number_clusters, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Method')
plt.show()
# +
test_e = Kmeans(3, 1, 600)
fitted = test_e.fit_kmeans(Y_sklearn)
predicted_values = test_e.predict(Y_sklearn)
plt.scatter(Y_sklearn[:, 0], Y_sklearn[:, 1], c=predicted_values, s=50, cmap='viridis')
centers = fitted.centroids
plt.scatter(centers[:, 0], centers[:, 1],c='black', s=300, alpha=0.6);
# -
from sklearn.cluster import KMeans
sklearn_pca = PCA(n_components = 2)
Y_sklearn = sklearn_pca.fit_transform(tf_idf_array)
kmeans = KMeans(n_clusters=3, max_iter=600, algorithm = 'auto')
fitted = kmeans.fit(Y_sklearn)
prediction = kmeans.predict(Y_sklearn)
def get_top_features_cluster(tf_idf_array, prediction, n_feats):
labels = np.unique(prediction)
dfs = []
for label in labels:
id_temp = np.where(prediction==label)
x_means = np.mean(tf_idf_array[id_temp], axis = 0)
sorted_means = np.argsort(x_means)[::-1][:n_feats]
        features = tf_idf_vectorizor.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2
best_features = [(features[i], x_means[i]) for i in sorted_means]
df = pd.DataFrame(best_features, columns = ['features', 'score'])
dfs.append(df)
return dfs
dfs = get_top_features_cluster(tf_idf_array, prediction, 10)
dfs
plt.figure(figsize=(15, 5))
x = ["place", "visit", "mosque", "beautiful", "inside", "istanbul", "amazing", "time", "architecture", "building"]
y = [0.041787,0.036934,0.033478,0.032606,0.025985,0.025091,0.024232,0.023783,0.022003,0.019611]
x = x[::-1]
y = y[::-1]
plt.barh(x,y)
plt.title('Blue Mosque - Score of Words')
plt.show()
| kmeans.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluating Machine Learning Algorithms - Extended Examples
#
# ## Preparations
#
# * Download [Anaconda with Python 3.6](https://www.anaconda.com/download) to install a nearly complete Python environment for data science projects
# * Install [Keras: The Python Deep Learning Library](https://keras.io/) and other missing packages with the following command: ```conda install keras```
# * Start your local Jupyter instance with ```jupyter notebook```
#
# If you cannot see line numbers, press ```Shift+L``` to switch them on or check the ```View``` menu.
# +
# The %... is an iPython thing, and is not part of the Python language.
# In this case we're just telling the plotting library to draw things on
# the notebook, instead of on a separate window.
# %matplotlib inline
# the import statements load different Python packages that we need for the tutorial
# See all the "as ..." constructs? They're just aliasing the package names.
# That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot().
# packages for scientific computing and visualization
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import time
# configuration of the notebook
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("notebook")
# machine learning library imports
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
# -
# ## Setting Up the Experiment
#
# In this example, we will rely on the [MNIST data set](http://yann.lecun.com/exdb/mnist/), a data set for the recognition of hand-written digits. MNIST was published by [NIST](https://www.nist.gov/), the institute that also runs evaluation campaigns such as the discussed [TREC campaign](https://trec.nist.gov/).
#
# The following script will display some sample digits to give an example of the contents of the data set.
#
# +
# load (download if needed) the MNIST dataset of handwritten numbers
# we will get a training and test set consisting of bitmaps
# in the X_* arrays and the associated labels in the y_* arrays
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# plot 4 images as gray scale images using subplots without axis labels
plt.subplot(221)
plt.axis('off')
# -1 inverts the image because of aesthetical reasons
plt.imshow(X_train[0]*-1, cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.axis('off')
plt.imshow(X_train[1]*-1, cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.axis('off')
plt.imshow(X_train[2]*-1, cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.axis('off')
plt.imshow(X_train[3]*-1, cmap=plt.get_cmap('gray'))
# show the plot
#plt.savefig("test.pdf",format="pdf")
plt.show()
# -
# Next, we define our machine learning model with different layers. Roughly speaking, the function baseline_model() defines what the neural network looks like. For more details, see the [documentation](https://keras.io/getting-started/sequential-model-guide/).
# +
# define baseline model
def baseline_model():
# create model
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
# Compile model, use logarithmic loss for evaluation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# flatten 28*28 images from the MNIST data set to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# build the model
model = baseline_model()
# fit the model, i.e., start the actual learning
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
# print the error rate of the algorithm
print("Baseline Error: %.2f%%" % (100-scores[1]*100))
# -
# ## Overfitting
#
# In the next cell, we will train with very little data at first, increasing the sample size step by step up to the full amount of training data used before, to illustrate the overfitting phenomenon.
#
# __ATTENTION!__ This will take some time.
# +
# define baseline model
def baseline_model():
# create model
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# the steps indicate the size of the training sample
steps=[18,100,1000,5000,10000,20000,30000,40000,50000]
# this dict (basically a hashmap) holds the error rate for each iteration
errorPerStep=dict()
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
for step in steps:
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# limit the training data size to the current step, the : means "from 0 to step"
X_train=X_train[0:step]
y_train=y_train[0:step]
# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# build the model
model = baseline_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Baseline Error: %.2f%%" % (100-scores[1]*100))
errorPerStep[step]=(100-scores[1]*100)
# -
# Next, we will illustrate our results.
# +
print(errorPerStep)
x=[]
y=[]
for e in errorPerStep:
x.append(e)
y.append(errorPerStep[e])
plt.xlabel("Training Samples")
plt.ylabel("Baseline Error (%)")
plt.plot(x,y,'o-')
plt.savefig("test.pdf",format="pdf")
# -
# The graph clearly shows that the baseline error decreases as the amount of training data increases. In other words, the overfitting effect shrinks with the amount of data the learning algorithm has seen.
#
# To end the example, we will check how well the model can predict new input.
# +
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# choose a random sample as our test image
test_im = X_train[25]
# display the image
plt.imshow(test_im.reshape(28,28)*-1, cmap=plt.get_cmap('gray'), interpolation='none')
plt.axis('off')
num_pixels = X_train.shape[1] * X_train.shape[2]
# as we are dealing with only one image, we have to reshape the array to 1 x 784
test_im = test_im.reshape(1, num_pixels).astype('float32')
# let the model predict the image
r = model.predict(test_im)
# the predicted class is the index of the largest output probability;
# np.where(r[0]==1) would only work for a perfectly confident prediction
itemindex = np.argmax(r[0])
print("The model predicts: %i for the following image:" % itemindex)
# -
# ## Accuracy and Error Rate
#
# The next cell illustrates how accuracy changes with respect to different distributions between two classes if the model always predicts that an element belongs to class A.
# $$
# Accuracy=\frac{tp+tn}{tp+tn+fp+fn}\equiv\frac{|\mbox{correct predictions}|}{|\mbox{all predictions}|}
# $$
# +
# arrays for plotting
x=[] # samples in A
y=[] # samples in B
accuracies=[] # calculated accuracies for each distribution
# distributions between class A and B, first entry means 90% in A, 10% in B
distributions=[[90,10],[55,45],[70,30],[50,50],[20,80]]
for distribution in distributions:
x.append(distribution[0])
y.append(distribution[1])
samplesA=np.ones((1,distribution[0])) # membership of class A is encoded as 1
samplesB=np.zeros((1,distribution[1])) # membership of class B is encoded as 0
# combine both arrays
reality=np.concatenate((samplesA,samplesB),axis=None)
# as said above, our model always associates the elements with class A (encoded by 1)
prediction=np.ones((1,100))
    tpCount=0
    tnCount=0 # the always-A predictor never predicts B, so there are no true negatives
    # count the true positives
    for (i,val) in enumerate(prediction[0]):
        if reality[i]==val:
            tpCount+=1
    # calculate the accuracy and add it to the accuracies array for later visualization
    acc=float(tpCount+tnCount)/100.0
accuracies.append(acc*1000) # the multiplication by 1000 is done for visualization purposes only
print("Accuracy: %.2f"%(acc))
# plot the results as a bubble chart
plt.xlim(0,100)
plt.ylim(0,100)
plt.xlabel("Samples in A")
plt.ylabel("Samples in B")
plt.title("Accuracy of a Always-A Predictor")
plt.scatter(x, y, s=accuracies, alpha=0.5)  # the values were pre-scaled above; list*100000 would replicate the list, not scale it
#plt.savefig("test.png",format="png")
plt.show()
# -
# ## Logarithmic Loss
# The
# $Logarithmic~Loss=\frac{-1}{N}\sum_{i=1}^N\sum_{j=1}^M y_{ij}\log(p_{ij}) \rightarrow [0,\infty)$ penalizes wrong predictions. For the sake of simplicity, we simply use the function provided by [sklearn](http://scikit-learn.org/stable/), a machine-learning toolkit for Python.
#
# The [manual](http://scikit-learn.org/stable/modules/model_evaluation.html#log-loss) will give you more details.
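# To connect the formula to the code below, here is a direct, simplified implementation of the sum (an illustrative sketch that skips sklearn's label handling and probability clipping):

```python
import numpy as np

def manual_log_loss(y_true, y_pred):
    """-1/N * sum_i sum_j y_ij * log(p_ij) with one-hot encoded true labels."""
    p = np.asarray(y_pred, dtype=float)
    n, m = p.shape
    y_onehot = np.zeros((n, m))
    y_onehot[np.arange(n), y_true] = 1.0
    return float(-(y_onehot * np.log(p)).sum() / n)

# two samples, two classes; only the predicted probability of the true class matters
print(manual_log_loss([0, 1], [[.9, .1], [.2, .8]]))
```

# For already-normalized probability rows this agrees with sklearn's log_loss.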
# +
from sklearn.metrics import log_loss
# the correct cluster for each sample, i.e., sample 1 is in class 0
y_true = [0, 0, 1, 1,2]
# the predictions: 1st sample is 90% predicted to be in class 0
y_pred = [[.9, .1,.0], [.8, .2,.0], [.3, .7,.0], [.01, .99,.0],[.0,.0,1.0]]
print(log_loss(y_true, y_pred))
# perfect prediction
y_perfect = [[1.0, .0,.0], [1.0, .0,.0], [.0, 1.0,.0], [0, 1.0,.0],[.0,.0,1.0]]
print(log_loss(y_true, y_perfect))
x=[]
y=[]
# the for loop modifies the first prediction of an element belonging to class 0 from 0 to 1
# in other words, from a wrong to a correct prediction
for i in range(1,11):
    r2=[row[:] for row in y_perfect] # copy, so the perfect prediction is not mutated in place
    r2[0][0]=float(i/10)
x.append(r2[0][0])
y.append(log_loss(y_true,r2))
# plot the result
plt.xlabel("Predicted Probability")
plt.ylabel("Logarithmic Loss")
plt.title("Does an object of class X belong do class X?")
plt.plot(x,y,'o-')
#plt.savefig("test.pdf",format="pdf")
# -
# ## Cross-Validation
#
# Using an exhaustive [sample use case](https://github.com/elektrobohemian/dst4l-copenhagen/blob/master/NaiveBayes.ipynb) that uses a naive Bayes classifier to determine whether a Rotten Tomatoes critic is positive or negative, you will see how cross-validation works in practice.
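# Before looking at the full example, the core of k-fold cross-validation can be sketched in a few lines of NumPy (the helper `kfold_indices` is an illustrative assumption, unlike sklearn's ready-made `KFold`):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle the sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

folds = kfold_indices(10, 5)
for i, test_idx in enumerate(folds):
    # every fold serves as the test set exactly once
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    print(f"fold {i}: train={len(train_idx)} test={len(test_idx)}")
```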
| EvaluationMachineLearning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF
from tqdm import tqdm
params = {'iterations':100,
'pre_iterations':100,
'seed': 43,
'lamb': 0.01,
'layers': [128,64,32],
}
def para(**kwargs):
return kwargs
# +
class DANMF(object):
def __init__(self,
edge_df,
iterations = 1000,
pre_iterations = 1000,
seed = 43,
lamb = 0.05,
layers =[32,8]):
self.edge_df = edge_df
self.iterations = iterations
self.pre_iterations = pre_iterations
self.seed = seed
self.lamb =lamb
self.layers = layers
self.make_graph()
self.A = nx.adjacency_matrix(self.graph)
self.L = nx.laplacian_matrix(self.graph)
self.D = self.L + self.A
self.p = len(self.layers)
def make_graph(self):
self.graph = nx.from_edgelist([(cust,opp) for cust, opp in zip(self.edge_df['cust_id'],
self.edge_df['opp_id'])])
def setup_z(self, i):
if i == 0:
self.Z = self.A
else:
self.Z = self.V_s[i-1]
def sklearn_pretrain(self, i):
"""
Pretraining a single layer of the model with sklearn.
:param i: Layer index.
"""
nmf_model = NMF(n_components = self.layers[i],
init = "random",
random_state = self.seed,
max_iter = self.pre_iterations)
U = nmf_model.fit_transform(self.Z)
V = nmf_model.components_
return U, V
def pre_training(self):
"""
Pre-training each NMF layer.
"""
print("\nLayer pre-training started. \n")
self.U_s = []
self.V_s = []
for i in tqdm(range(self.p), desc = "Layers trained: ", leave=True):
self.setup_z(i)
U, V = self.sklearn_pretrain(i)
self.U_s.append(U)
self.V_s.append(V)
def setup_Q(self):
"""
Setting up Q matrices.
"""
self.Q_s = [None for _ in range(self.p + 1)]
self.Q_s[self.p] = np.eye(self.layers[self.p - 1])
for i in range(self.p - 1, -1, -1):
self.Q_s[i] = np.dot(self.U_s[i], self.Q_s[i + 1])
def update_U(self, i):
"""
Updating left hand factors.
:param i: Layer index.
"""
if i == 0:
R = self.U_s[0].dot(self.Q_s[1].dot(self.VpVpT).dot(self.Q_s[1].T))
R = R+self.A_sq.dot(self.U_s[0].dot(self.Q_s[1].dot(self.Q_s[1].T)))
Ru = 2*self.A.dot(self.V_s[self.p-1].T.dot(self.Q_s[1].T))
self.U_s[0] = (self.U_s[0]*Ru)/np.maximum(R, 10**-10)
else:
R = self.P.T.dot(self.P).dot(self.U_s[i]).dot(self.Q_s[i+1]).dot(self.VpVpT).dot(self.Q_s[i+1].T)
R = R+self.A_sq.dot(self.P).T.dot(self.P).dot(self.U_s[i]).dot(self.Q_s[i+1]).dot(self.Q_s[i+1].T)
Ru = 2*self.A.dot(self.P).T.dot(self.V_s[self.p-1].T).dot(self.Q_s[i+1].T)
self.U_s[i] = (self.U_s[i]*Ru)/np.maximum(R, 10**-10)
def update_P(self, i):
"""
Setting up P matrices.
:param i: Layer index.
"""
if i == 0:
self.P = self.U_s[0]
else:
self.P = self.P.dot(self.U_s[i])
def update_V(self, i):
"""
Updating right hand factors.
:param i: Layer index.
"""
if i < self.p-1:
Vu = 2*self.A.dot(self.P).T
Vd = self.P.T.dot(self.P).dot(self.V_s[i])+self.V_s[i]
self.V_s[i] = self.V_s[i] * Vu/np.maximum(Vd, 10**-10)
else:
Vu = 2*self.A.dot(self.P).T+(self.lamb * self.A.dot(self.V_s[i].T)).T
Vd = self.P.T.dot(self.P).dot(self.V_s[i])
Vd = Vd + self.V_s[i]+(self.lamb * self.D.dot(self.V_s[i].T)).T
self.V_s[i] = self.V_s[i] * Vu/np.maximum(Vd, 10**-10)
def calculate_cost(self, i):
"""
Calculate loss.
:param i: Global iteration.
"""
reconstruction_loss_1 = np.linalg.norm(self.A - self.P.dot(self.V_s[-1]), ord="fro")**2
reconstruction_loss_2 = np.linalg.norm(self.V_s[-1]-self.A.dot(self.P).T, ord="fro")**2
regularization_loss = np.trace(self.V_s[-1].dot(self.L.dot(self.V_s[-1].T)))
self.loss.append([i+1, reconstruction_loss_1, reconstruction_loss_2, regularization_loss])
def save_embedding(self):
"""
Save embedding matrix.
"""
embedding = [np.array(range(self.P.shape[0])).reshape(-1, 1), self.P, self.V_s[-1].T]
embedding = np.concatenate(embedding, axis=1)
columns = ["id"] + ["x_" + str(x) for x in range(self.layers[-1]*2)]
self.embedding = pd.DataFrame(embedding, columns=columns)
return embedding
def save_membership(self):
"""
Save cluster membership.
"""
index = np.argmax(self.P, axis=1)
self.membership = {int(i): int(index[i]) for i in range(len(index))}
return self.membership
def training(self):
"""
Training process after pre-training.
"""
print("\n\nTraining started. \n")
self.loss = []
self.A_sq = self.A.dot(self.A.T)
for iteration in tqdm(range(self.iterations), desc="Training pass: ", leave=True):
self.setup_Q()
self.VpVpT = self.V_s[self.p-1].dot(self.V_s[self.p-1].T)
for i in range(self.p):
self.update_U(i)
self.update_P(i)
self.update_V(i)
self.calculate_cost(iteration)
#print('current loss is:', self.loss[-1])
self.membership = self.save_membership()
self.embedding = self.save_embedding()
# -
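# Before pointing the model at a real file, it can help to see what `make_graph` builds. This is a quick sketch on a made-up edge list (the ids below are purely illustrative), using the same `cust_id`/`opp_id` columns DANMF expects:

```python
import pandas as pd
import networkx as nx

# A toy edge list shaped like the real input: one row per (cust_id, opp_id) edge.
toy_edges = pd.DataFrame({'cust_id': [0, 1, 2, 3], 'opp_id': [1, 2, 3, 0]})
g = nx.from_edgelist(zip(toy_edges['cust_id'], toy_edges['opp_id']))
A = nx.adjacency_matrix(g)  # this adjacency matrix feeds the first NMF layer
print(g.number_of_nodes(), g.number_of_edges(), A.shape)  # 4 4 (4, 4)
```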
edge_df = pd.read_csv('/Users/shuaihengxiao/Desktop/DANMF/DANMF-master/input/chameleon_edges.csv',header=0,
names=['cust_id','opp_id'],)
edge_df
import DANMF
model = DANMF.DANMF(edge_df = edge_df,
iterations = 1000,
pre_iterations = 1000,
seed = 43,
lamb = 0.05,
layers =[64,32,5],)
model.pre_training()
model.training()
import matplotlib.cm as cm
import matplotlib.pyplot as plt
plt.figure(figsize = (20,20))
# draw the graph
pos = nx.spring_layout(model.graph)
# color the nodes according to their partition
cmap = cm.get_cmap('viridis', max(model.membership.values()) + 1)
nx.draw_networkx_nodes(model.graph, pos, model.membership.keys(), node_size=40,
cmap=cmap, node_color=list(model.membership.values()))
nx.draw_networkx_edges(model.graph, pos, alpha=0.5)
plt.show()
| community_detection/DANMF/test_0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import Essential packages
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import torch
import xgboost as xgb
import pyreadr
sns.set_style("darkgrid")
print("Numpy Version : ",np.__version__)
print("Pandas Version : ",pd.__version__)
print("Pyplot Version : ",matplotlib.__version__)
print("Seaborn Version : ",sns.__version__)
print("Scikit Learn Version : ",sklearn.__version__)
print("PyTorch Version : ",torch.__version__)
print("XGBoost Version : ", xgb.__version__)
# -
result = pyreadr.read_r('Data/Sake.RData')
result['Sake']
| Getting Started/Tutorial Level 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''tensorflow_gpu'': conda)'
# name: python3
# ---
# This notebook contains plots of various statistical and systems metrics gathered from the last run
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from visualization_utils import plot_accuracy_vs_round_number_methods, load_data
# +
SHOW_WEIGHTED = True # show weighted accuracy instead of unweighted accuracy
PLOT_CLIENTS = False
stat_file_sgd = 'metrics_stat_sgd.csv'
stat_file_rsa = 'metrics_stat_rsa.csv'
stat_file_sadmm = 'metrics_stat_sadmm.csv'
stat_file_geomed = 'metrics_stat_geomed.csv'
stat_file_med = 'metrics_stat_med.csv'
stat_metrics_sgd, _ = load_data(stat_file_sgd, None)
stat_metrics_rsa, _ = load_data(stat_file_rsa, None)
stat_metrics_sadmm, _ = load_data(stat_file_sadmm, None)
stat_metrics_geomed, _ = load_data(stat_file_geomed, None)
stat_metrics_med, _ = load_data(stat_file_med, None)
stat_metrics_list = [stat_metrics_sgd, stat_metrics_rsa, stat_metrics_sadmm, stat_metrics_geomed, stat_metrics_med]
# -
# Plots accuracy vs. round number.
if stat_metrics_list is not None:
plot_accuracy_vs_round_number_methods(stat_metrics_list, plot_stds=False)
| stochastic_admm_model/metrics/plt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Initializing Neural Networks
#
# In this notebook we discuss how to initialize neural networks. We'll need numpy:
import numpy as np
# ## Trouble with Symmetry
#
# Recall that the first layer has `n` inputs, the last layer has `k` outputs, and we have a (possibly empty) list of natural numbers giving the input sizes of all the intermediate layers. That, along with the chosen activation functions, determines the layout of the network. Therefore, we might expect our initialization functions to look like this:
def initial_network(n, k, intermediate_sizes):
dimensions = [n] + intermediate_sizes + [k] # input/output sizes
weights = [0] * (len(dimensions)-1) # the neurons themselves
biases = [0] * (len(dimensions)-1)
for i in range(0, len(weights)):
weights[i] = np.zeros((dimensions[i], dimensions[i+1]))
biases[i] = np.zeros((dimensions[i+1], 1))
return weights, biases
# Indeed this generates matrices of the appropriate size, but there is a problem. It turns out that if two neurons are literally equal, they will never be able to separate. They will face the same selection pressures and train the same way, so will stay equal forever.
#
# We need to solve this problem, and any solution to it is called *symmetry breaking* in the literature.
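# We can watch the symmetry problem happen in a minimal sketch: a hypothetical 2-input, 3-hidden-unit, 1-output network with a sigmoid hidden layer, trained by plain gradient descent on squared error. The hidden neurons (rows of `W1`) start identical, and no amount of training separates them:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 5))   # 5 training samples, 2 features each
y = rng.standard_normal((1, 5))

W1 = np.zeros((3, 2))             # all three hidden neurons start identical
W2 = np.zeros((1, 3))
for _ in range(10):
    H = sigmoid(W1 @ X)                    # hidden activations
    d_out = W2 @ H - y                     # gradient of 0.5 * squared error
    d_H = (W2.T @ d_out) * H * (1 - H)     # backprop through the sigmoid
    W2 -= 0.1 * (d_out @ H.T)
    W1 -= 0.1 * (d_H @ X.T)

# The neurons faced identical selection pressure, so they never separated.
print(np.allclose(W1[0], W1[1]) and np.allclose(W1[1], W1[2]))  # True
```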
# ## Basic Randomization
#
# A common way to do this, and it isn't too bad, is to just make everything random. For example, we might literally just make everything come from uniform random noise, between -1 and 1:
def initialize_network_uniform(n, k, intermediate_sizes):
dimensions = [n] + intermediate_sizes + [k] # input/output sizes
weights = [0] * (len(dimensions)-1) # the neurons themselves
biases = [0] * (len(dimensions)-1)
for i in range(0, len(weights)):
weights[i] = np.random.random((dimensions[i], dimensions[i+1]))*2 - 1
biases[i] = np.random.random((1, dimensions[i+1]))*2 - 1
return weights, biases
# Recall that `np.random.random()` is a random float between 0 and 1, so `np.random.random()*2-1` is a random float between -1 and 1.
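# A quick numerical sanity check of that rescaling:

```python
import numpy as np

# 100k rescaled samples should land in [-1, 1) with a mean near zero.
samples = np.random.random(100_000) * 2 - 1
print(samples.min() >= -1, samples.max() < 1, abs(samples.mean()) < 0.05)  # True True True
```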
# ## More Advanced Randomization
#
# Some amount of research has gone into this, though, and it seems wasteful not to take advantage of it. According to <NAME>, one should still use uniform random numbers for each matrix, but with the following conditions:
# 1. Bias units and output units should be initialized to zero; training will sort them out.
# 2. Sigmoid units should use uniform random weights from $-r$ to $r$, where $r=4\sqrt{6/(\textrm{fan-in}+\textrm{fan-out})}$.
# 3. Hyperbolic tangent units should use uniform random weights from $-r$ to $r$, where $r=\sqrt{6/(\textrm{fan-in}+\textrm{fan-out})}$.
# 4. ReLU units should use Gaussian-distributed random weights with standard deviation $r=\sqrt{2/(\textrm{fan-in})}$.
#
# Here fan-in is the number of inputs to the matrix, and fan-out is the number of outputs of the matrix. Basically this is just fine-tuning the idea that a unit should have lower weights when more neurons feed into it, and lower weights when more units read from it. This is all to prevent saturation.
#
# If you're interested, there's a more in-depth explanation at <a href="http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization">andy's blog</a>. The original paper (by <NAME> and <NAME>; the algorithm is named for the first author) is <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf">here</a>, and they work out the details for sigmoid and hyperbolic tangent units. The formula for rectifier units is worked out <a href="https://arxiv.org/pdf/1502.01852v1.pdf">here</a>.
#
# It seems to me from reading that what's important is that the mean be zero, and the variance be appropriately defined from the fan-in and fan-out. The choice of uniform or Gaussian random numbers seems to be the preference of the authors, but I'm not going to override them. They did a lot of experimental work to get these figures!
#
# With all that in mind, we give the following initialization process:
# +
def initialize_xavier_sigmoid(n, k, intermediate_sizes):
dimensions = [n] + intermediate_sizes + [k] # input/output sizes
weights = [0] * (len(dimensions)-1) # the neurons themselves
biases = [0] * (len(dimensions)-1)
for i in range(0, len(weights)-1):
r = 4 * ((6/(dimensions[i]+dimensions[i+1]))**0.5)
weights[i] = np.random.random((dimensions[i], dimensions[i+1]))*(2*r) - r
biases[i] = np.zeros((1, dimensions[i+1]))
# set the last ones to zero
weights[-1] = np.zeros((dimensions[-2], dimensions[-1]))
biases[-1] = np.zeros((1, dimensions[-1]))
return weights, biases
def initialize_xavier_tanh(n, k, intermediate_sizes):
dimensions = [n] + intermediate_sizes + [k] # input/output sizes
weights = [0] * (len(dimensions)-1) # the neurons themselves
biases = [0] * (len(dimensions)-1)
for i in range(0, len(weights)-1):
r = ((6/(dimensions[i]+dimensions[i+1]))**0.5)
weights[i] = np.random.random((dimensions[i], dimensions[i+1]))*(2*r) - r
biases[i] = np.zeros((1, dimensions[i+1]))
# set the last ones to zero
weights[-1] = np.zeros((dimensions[-2], dimensions[-1]))
biases[-1] = np.zeros((1, dimensions[-1]))
return weights, biases
def initialize_xavier_relu(n, k, intermediate_sizes):
dimensions = [n] + intermediate_sizes + [k] # input/output sizes
weights = [0] * (len(dimensions)-1) # the neurons themselves
biases = [0] * (len(dimensions)-1)
for i in range(0, len(weights)-1):
r = (2/(dimensions[i]))**0.5
weights[i] = np.random.standard_normal((dimensions[i], dimensions[i+1]))*r
biases[i] = np.zeros((1, dimensions[i+1]))
# set the last ones to zero
weights[-1] = np.zeros((dimensions[-2], dimensions[-1]))
biases[-1] = np.zeros((1, dimensions[-1]))
return weights, biases
# -
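# We can check empirically that the tanh rule hits the intended variance: a uniform distribution on $[-r, r]$ has variance $r^2/3$, so with $r=\sqrt{6/(\textrm{fan-in}+\textrm{fan-out})}$ the weight variance should come out to $2/(\textrm{fan-in}+\textrm{fan-out})$. The fan sizes below are made up for the check:

```python
import numpy as np

fan_in, fan_out = 300, 100
r = (6 / (fan_in + fan_out)) ** 0.5
W = np.random.random((fan_in, fan_out)) * (2 * r) - r  # uniform on [-r, r)

# Empirical variance should match r^2/3 = 2/(fan_in + fan_out) closely.
print(abs(W.var() - 2 / (fan_in + fan_out)) < 1e-3)  # True
```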
# Once we get our learning algorithm up and running, we'll experiment with the effect of these initialization methods.
| 03 - Initializing Neural Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ISubpr_SSsiM"
# ##### Copyright 2020 The TensorFlow Authors.
#
# + cellView="form" id="3jTMb1dySr3V"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="6DWfyNThSziV"
# # Better performance with tf.function
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/guide/function"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="J122XQYG7W6w"
# In TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier
# and faster), but this can come at the expense of performance and deployability.
#
# You can use `tf.function` to make graphs out of your programs. It is a transformation tool that creates Python-independent dataflow graphs out of your Python code. This will help you create performant and portable models, and it is required to use `SavedModel`.
#
# This guide will help you conceptualize how `tf.function` works under the hood so you can use it effectively.
#
# The main takeaways and recommendations are:
#
# - Debug in eager mode, then decorate with `@tf.function`.
# - Don't rely on Python side effects like object mutation or list appends.
# - `tf.function` works best with TensorFlow ops; NumPy and Python calls are converted to constants.
#
# + [markdown] id="SjvqpgepHJPd"
# ## Setup
# + id="otIdN1TS8N7S"
import tensorflow as tf
# + [markdown] id="I0xDjO4SHLUD"
# Define a helper function to demonstrate the kinds of errors you might encounter:
# + id="D25apou9IOXa"
import traceback
import contextlib
# Some helper code to demonstrate the kinds of errors you might encounter.
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}:'.format(error_class))
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
# + [markdown] id="WPSfepzTHThq"
# ## Basics
# + [markdown] id="CNwYTIJ8r56W"
# ### Usage
#
# A `Function` you define is just like a core TensorFlow operation: You can execute it eagerly; you can compute gradients; and so on.
# + id="SbtT1-Wm70F2"
@tf.function
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
# + id="uP-zUelB8DbX"
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
# + [markdown] id="ocWZvqrmHnmX"
# You can use `Function`s inside other `Function`s.
# + id="l5qRjdbBVdU6"
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
# + [markdown] id="piBhz7gYsHqU"
# `Function`s can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much speedup.
#
# + id="zuXt4wRysI03"
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(image):
return conv_layer(image)
image = tf.zeros([1, 200, 200, 100])
# warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")
# + [markdown] id="uZ4Do2AV80cO"
# ### Tracing
#
# Python's dynamic typing means that you can call functions with a variety of argument types, and Python can do something different in each scenario.
#
# Yet, to create a TensorFlow Graph, static `dtypes` and shape dimensions are required. `tf.function` bridges this gap by wrapping a Python function to create a `Function` object. Based on the given inputs, the `Function` selects the appropriate graph for the given inputs, retracing the Python function as necessary. Once you understand why and when tracing happens, it's much easier to use `tf.function` effectively!
#
# You can call a `Function` with arguments of different types to see this polymorphic behavior in action.
# + id="kojmJrgq8U9v"
@tf.function
def double(a):
print("Tracing with", a)
return a + a
print(double(tf.constant(1)))
print()
print(double(tf.constant(1.1)))
print()
print(double(tf.constant("a")))
print()
# + [markdown] id="QPfouGUQrcNb"
# Note that if you repeatedly call a `Function` with the same argument type, TensorFlow will reuse a previously traced graph, as the generated graph would be identical.
# + id="hFccbWFRrsBp"
# This doesn't print 'Tracing with ...'
print(double(tf.constant("b")))
# + [markdown] id="fgIO_XEzcB9o"
# You can use `pretty_printed_concrete_signatures()` to see all of the available traces:
# + id="IiQc4IKAb-NX"
print(double.pretty_printed_concrete_signatures())
# + [markdown] id="rKQ92VEWI7n8"
# So far, you've seen that `tf.function` creates a cached, dynamic dispatch layer over TensorFlow's graph tracing logic. To be more specific about the terminology:
#
# - A `tf.Graph` is the raw, language-agnostic, portable representation of your computation.
# - A `ConcreteFunction` is an eagerly-executing wrapper around a `tf.Graph`.
# - A `Function` manages a cache of `ConcreteFunction`s and picks the right one for your inputs.
# - `tf.function` wraps a Python function, returning a `Function` object.
#
# + [markdown] id="96IxS2WR37fF"
# ### Obtaining concrete functions
#
# Every time a function is traced, a new concrete function is created. You can directly obtain a concrete function, by using `get_concrete_function`.
#
# + id="mHg2CGtPQ3Hz"
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.constant("a"))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
# + id="6IVZ-NVf9vsx"
# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
print(double_strings_from_inputspec(tf.constant("c")))
# + [markdown] id="iR4fVmG34xvF"
# Printing a `ConcreteFunction` displays a summary of its input arguments (with types) and its output type.
# + id="o3-JbkIk41r8"
print(double_strings)
# + [markdown] id="QtqfvljZeuOV"
# You can also directly retrieve a concrete function's signature.
# + id="nzbrqFABe0zG"
print(double_strings.structured_input_signature)
print(double_strings.structured_outputs)
# + [markdown] id="lar5A_5m5IG1"
# Using a concrete trace with incompatible types will throw an error.
# + id="G5eeTK-T5KYj"
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
# + [markdown] id="st2L9VNQVtSG"
# You may notice that Python arguments are given special treatment in a concrete function's input signature. Prior to TensorFlow 2.3, Python arguments were simply removed from the concrete function's signature. Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing.
# + id="U_QyPSGoaC35"
@tf.function
def pow(a, b):
return a ** b
square = pow.get_concrete_function(a=tf.TensorSpec(None, tf.float32), b=2)
print(square)
# + id="E76vIDhQbXIb"
assert square(tf.constant(10.0)) == 100
with assert_raises(TypeError):
square(tf.constant(10.0), b=3)
# + [markdown] id="41gJh_JGIfuA"
# ### Obtaining graphs
#
# Each concrete function is a callable wrapper around a `tf.Graph`. Although retrieving the actual `tf.Graph` object is not something you'll normally need to do, you can obtain it easily from any concrete function.
# + id="5UENeGHfaX8g"
graph = double_strings.graph
for node in graph.as_graph_def().node:
print(f'{node.input} -> {node.name}')
# + [markdown] id="aIKkgr6qdtp4"
# ### Debugging
#
# In general, debugging code is easier in eager mode than inside `tf.function`. You should ensure that your code executes error-free in eager mode before decorating with `tf.function`. To assist in the debugging process, you can call `tf.config.run_functions_eagerly(True)` to globally disable and reenable `tf.function`.
#
# When tracking down issues that only appear within `tf.function`, here are some tips:
# - Plain old Python `print` calls only execute during tracing, helping you track down when your function gets (re)traced.
# - `tf.print` calls will execute every time, and can help you track down intermediate values during execution.
# - `tf.debugging.enable_check_numerics` is an easy way to track down where NaNs and Inf are created.
# - `pdb` can help you understand what's going on during tracing. (Caveat: PDB will drop you into AutoGraph-transformed source code.)
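# A sketch of that global toggle on a hypothetical function: while the flag is set, the function body runs as plain Python, so `print` and `pdb` behave normally.

```python
import tensorflow as tf

@tf.function
def twice(x):
    return x * 2

tf.config.run_functions_eagerly(True)
result = twice(tf.constant(3))    # executed eagerly: easy to step through
tf.config.run_functions_eagerly(False)  # restore graph execution
print(int(result))                # 6
```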
# + [markdown] id="129-iRsPS-gY"
# ## Tracing semantics
# + [markdown] id="h62XoXho6EWN"
# ### Cache key rules
#
# A `Function` determines whether to reuse a traced concrete function by computing a cache key from an input's args and kwargs.
#
# - The key generated for a `tf.Tensor` argument is its shape and dtype.
# - Starting in TensorFlow 2.3, the key generated for a `tf.Variable` argument is its `id()`.
# - The key generated for a Python primitive is its value. The key generated for nested `dict`s, `list`s, `tuple`s, `namedtuple`s, and [`attr`](https://www.attrs.org/en/stable/)s is the flattened tuple. (As a result of this flattening, calling a concrete function with a different nesting structure than the one used during tracing will result in a TypeError).
# - For all other Python types, the keys are based on the object `id()` so that methods are traced independently for each instance of a class.
#
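# The rules above can be observed directly. In this sketch (a hypothetical function), a Python-side list gains one entry per trace, because the Python body only runs while tracing:

```python
import tensorflow as tf

trace_count = []

@tf.function
def observe(x):
    trace_count.append(x.dtype)   # Python side effect: runs only during tracing
    return x + x

observe(tf.constant([1.0, 2.0]))  # shape (2,), float32 -> new trace
observe(tf.constant([3.0, 4.0]))  # same shape and dtype -> trace reused
observe(tf.constant([1, 2]))      # int32 -> new cache key -> second trace
print(len(trace_count))           # 2
```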
# + [markdown] id="PEDwbumO32Wh"
# ### Controlling retracing
#
# Retracing helps ensure that TensorFlow generates correct graphs for each set of inputs. However, tracing is an expensive operation! If your `Function` retraces a new graph for every call, you'll find that your code executes more slowly than if you didn't use `tf.function`.
#
# To control the tracing behavior, you can use the following techniques:
# + [markdown] id="EUtycWJa34TT"
# - Specify `input_signature` in `tf.function` to limit tracing.
# + id="_BDMIRmu1RGB"
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def next_collatz(x):
print("Tracing with", x)
return tf.where(x % 2 == 0, x // 2, 3 * x + 1)
print(next_collatz(tf.constant([1, 2])))
# We specified a 1-D tensor in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([[1, 2], [3, 4]]))
# We specified an int32 dtype in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([1.0, 2.0]))
# + [markdown] id="ocxX-HVk7P2o"
# - Specify a \[None\] dimension in `tf.TensorSpec` to allow for flexibility in trace reuse.
#
# Since TensorFlow matches tensors based on their shape, using a `None` dimension as a wildcard will allow `Function`s to reuse traces for variably-sized input. Variably-sized input can occur if you have sequences of different length, or images of different sizes for each batch (See [Transformer](../tutorials/text/transformer.ipynb) and [Deep Dream](../tutorials/generative/deepdream.ipynb) tutorials for example).
# + id="4Viun7dh7PmF"
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def g(x):
print('Tracing with', x)
return x
# No retrace!
print(g(tf.constant([1, 2, 3])))
print(g(tf.constant([1, 2, 3, 4, 5])))
# + [markdown] id="AY5oiQN0XIyA"
# - Cast Python arguments to Tensors to reduce retracing.
#
# Often, Python arguments are used to control hyperparameters and graph constructions - for example, `num_layers=10` or `training=True` or `nonlinearity='relu'`. So if the Python argument changes, it makes sense that you'd have to retrace the graph.
#
# However, it's possible that a Python argument is not being used to control graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graph is actually identical, so retracing is unnecessary.
# + id="uydzR5JYUU8H"
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = ", num_steps)
tf.print("Executing with num_steps = ", num_steps)
for _ in tf.range(num_steps):
train_one_step()
print("Retracing occurs for different Python arguments.")
train(num_steps=10)
train(num_steps=20)
print()
print("Traces are reused for Tensor arguments.")
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
# + [markdown] id="4pJqkDR_Q2wz"
# If you need to force retracing, create a new `Function`. Separate `Function` objects are guaranteed not to share traces.
# + id="uHp4ousu4DdN"
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
# + [markdown] id="EJqHGFSVLIKl"
# ### Python side effects
#
# Python side effects like printing, appending to lists, and mutating globals only happen the first time you call a `Function` with a set of inputs. Afterwards, the traced `tf.Graph` is reexecuted, without executing the Python code.
#
# The general rule of thumb is to only use Python side effects to debug your traces. Otherwise, TensorFlow ops like `tf.Variable.assign`, `tf.print`, and `tf.summary` are the best way to ensure your code will be traced and executed by the TensorFlow runtime with each call.
# + id="w2sACuZ9TTRk"
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
# + [markdown] id="msTmv-oyUNaf"
# Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, many unexpected things can happen inside a `Function`.
#
# To give one example, advancing iterator state is a Python side effect and therefore only happens during tracing.
# + id="FNPD4unZUedH"
external_var = tf.Variable(0)
@tf.function
def buggy_consume_next(iterator):
external_var.assign_add(next(iterator))
tf.print("Value of external_var:", external_var)
iterator = iter([0, 1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
# + [markdown] id="wcS3TAgCjTWR"
# Some iteration constructs are supported through AutoGraph. See the section on [AutoGraph Transformations](#autograph_transformations) for an overview.
# + [markdown] id="e1I0dPiqTV8H"
# If you would like to execute Python code during each invocation of a `Function`, `tf.py_function` is an exit hatch. The drawback of `tf.py_function` is that it's not portable or particularly performant, nor does it work well in distributed (multi-GPU, TPU) setups. Also, since `tf.py_function` has to be wired into the graph, it casts all inputs/outputs to tensors.
#
# APIs like `tf.gather`, `tf.stack`, and `tf.TensorArray` can help you implement common looping patterns in native TensorFlow.
# + id="7aJD--9qTWmg"
external_list = []
def side_effect(x):
print('Python side effect')
external_list.append(x)
@tf.function
def f(x):
tf.py_function(side_effect, inp=[x], Tout=[])
f(1)
f(1)
f(1)
# The list append happens all three times!
assert len(external_list) == 3
# The list contains tf.constant(1), not 1, because py_function casts everything to tensors.
assert external_list[0].numpy() == 1
# + [markdown] id="lPr_6mK_AQWL"
# ### Variables
#
# You may encounter an error when creating a new `tf.Variable` in a function. This error guards against behavior divergence on repeated calls: In eager mode, a function creates a new variable with each call, but in a `Function`, a new variable may not be created due to trace reuse.
# + id="Tx0Vvnb_9OB-"
@tf.function
def f(x):
v = tf.Variable(1.0)
v.assign_add(x)
return v
with assert_raises(ValueError):
f(1.0)
# + [markdown] id="KYm6-5GCILXQ"
# You can create variables inside a `Function` as long as those variables are only created the first time the function is executed.
# + id="HQrG5_kOiKl_"
class Count(tf.Module):
def __init__(self):
self.count = None
@tf.function
def __call__(self):
if self.count is None:
self.count = tf.Variable(0)
return self.count.assign_add(1)
c = Count()
print(c())
print(c())
# + [markdown] id="ZoPg5w1Pjqna"
# Another error you may encounter is a garbage-collected variable. Unlike normal Python functions, concrete functions only retain [WeakRefs](https://docs.python.org/3/library/weakref.html) to the variables they close over, so you must retain a reference to any variables.
# + id="uMiRPfETjpt-"
external_var = tf.Variable(3)
@tf.function
def f(x):
return x * external_var
traced_f = f.get_concrete_function(4)
print("Calling concrete function...")
print(traced_f(4))
del external_var
print()
print("Calling concrete function after garbage collecting its closed Variable...")
with assert_raises(tf.errors.FailedPreconditionError):
traced_f(4)
# + [markdown] id="5f05Vr_YBUCz"
# ## AutoGraph Transformations
#
# AutoGraph is a library that is on by default in `tf.function`, and transforms a subset of Python eager code into graph-compatible TensorFlow ops. This includes control flow like `if`, `for`, `while`.
#
# TensorFlow ops like `tf.cond` and `tf.while_loop` continue to work, but control flow is often easier to write and understand when written in Python.
# + id="yCQTtTPTW3WF"
# Simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
# + [markdown] id="KxwJ8znPI0Cg"
# If you're curious you can inspect the code autograph generates.
# + id="jlQD1ffRXJhl"
print(tf.autograph.to_code(f.python_function))
# + [markdown] id="xgKmkrNTZSyz"
# ### Conditionals
#
# AutoGraph will convert some `if <condition>` statements into the equivalent `tf.cond` calls. This substitution is made if `<condition>` is a Tensor. Otherwise, the `if` statement is executed as a Python conditional.
#
# A Python conditional executes during tracing, so exactly one branch of the conditional will be added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.
#
# `tf.cond` traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; see [AutoGraph tracing effects](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#effects-of-the-tracing-process) for more.
# + id="BOQl8PMq2Sf3"
@tf.function
def fizzbuzz(n):
for i in tf.range(1, n + 1):
print('Tracing for loop')
if i % 15 == 0:
print('Tracing fizzbuzz branch')
tf.print('fizzbuzz')
elif i % 3 == 0:
print('Tracing fizz branch')
tf.print('fizz')
elif i % 5 == 0:
print('Tracing buzz branch')
tf.print('buzz')
else:
print('Tracing default branch')
tf.print(i)
fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
# + [markdown] id="4rBO5AQ15HVC"
# See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#if-statements) for additional restrictions on AutoGraph-converted if statements.
# + [markdown] id="yho4J0a0ZkQS"
# ### Loops
#
# AutoGraph will convert some `for` and `while` statements into the equivalent TensorFlow looping ops, like `tf.while_loop`. If not converted, the `for` or `while` loop is executed as a Python loop.
#
# This substitution is made in the following situations:
#
# - `for x in y`: if `y` is a Tensor, convert to `tf.while_loop`. In the special case where `y` is a `tf.data.Dataset`, a combination of `tf.data.Dataset` ops are generated.
# - `while <condition>`: if `<condition>` is a Tensor, convert to `tf.while_loop`.
#
# A Python loop executes during tracing, adding additional ops to the `tf.Graph` for every iteration of the loop.
#
# A TensorFlow loop traces the body of the loop, and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated `tf.Graph`.
#
# See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#while-statements) for additional restrictions on AutoGraph-converted `for` and `while` statements.
# + [markdown] id="sp4rbIdfbM6s"
# #### Looping over Python data
#
# A common pitfall is to loop over Python/Numpy data within a `tf.function`. This loop will execute during the tracing process, adding a copy of your model to the `tf.Graph` for each iteration of the loop.
#
# If you want to wrap the entire training loop in `tf.function`, the safest way to do this is to wrap your data as a `tf.data.Dataset` so that AutoGraph will dynamically unroll the training loop.
# + id="WGZ19LspbZ27"
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # Some dummy computation.
return loss
small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
# + [markdown] id="JeD2U-yrbfVb"
# When wrapping Python/Numpy data in a Dataset, be mindful of `tf.data.Dataset.from_generator` versus `tf.data.Dataset.from_tensors`. The former will keep the data in Python and fetch it via `tf.py_function`, which can have performance implications, whereas the latter will bundle a copy of the data as one large `tf.constant()` node in the graph, which can have memory implications.
#
# Reading data from files via TFRecordDataset/CsvDataset/etc. is the most effective way to consume data, as then TensorFlow itself can manage the asynchronous loading and prefetching of data, without having to involve Python. To learn more, see the [tf.data guide](../../guide/data).
# + [markdown] id="hyksHW9TCukR"
# #### Accumulating values in a loop
#
# A common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use `tf.TensorArray` to accumulate results from a dynamically unrolled loop.
# + id="HJ3Vb3dXfefN"
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
# + [markdown] id="IKyrEY5GVX3M"
# ## Further reading
#
# To learn about how to export and load a `Function`, see the [SavedModel guide](../../guide/saved_model). To learn more about graph optimizations that are performed after tracing, see the [Grappler guide](../../guide/graph_optimization). To learn how to optimize your data pipeline and profile your model, see the [Profiler guide](../../guide/profiler.md).
| site/en/guide/function.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Read About _
# [Role of _](https://www.datacamp.com/community/tutorials/role-underscore-python#UII)
# # Tuple and Dictionary
d = {'type': 'electronics', 'equipment': 'keyboard', 'keyboard_type': 'mechanical'}
d
d.items()
x = [('type', 'electronics'), ('equipment', 'keyboard'), ('keyboard_type', 'mechanical')]
x
y = dict(x)
y
for item in zip('abc', range(3)):
print(item)
x = dict(zip('abc', range(3)))
x
x.update(zip('def', range(3, 6)))
x
x.update(zip('def', range(3)))
x
x.update([('a', 5), ('d', 7), ('e', 13)])  # equivalent to x.update(zip('ade', [5, 7, 13]))
x
zip('ade', [5,7,13])
for item in zip('ade', [5,7,13]):
print(item)
for item in zip('ade', (5,7,13)):
print(item)
for char, number, second_char in zip('ade', (5,7,13), 'abc'):
print(char + ' --> ' + str(number) + ' --> ' + second_char)
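# One detail worth knowing: `zip` truncates to the shortest input, silently dropping the extras (a quick sketch, not from the cells above):

```python
# zip stops at the shortest iterable; 'c' and 'd' are silently dropped
pairs = list(zip('abcd', range(2)))
pairs  # [('a', 0), ('b', 1)]
```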
# ## Comparing String
str1 = 'azzzzz'
str2 = 'zaaaaa'
num1 = 199999
num2 = 911111
str1 > str2
'azzzzz' > 'Zaaaaa'
ord('Z')
ord('a')
str1 = 'azzzzz'
str2 = 'z'
str2 > str1
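# A minimal sketch of why the comparisons above behave this way: strings compare lexicographically by Unicode code point, character by character, and the first differing position decides.

```python
# Comparison is decided by the first differing character
assert 'azzzzz' > 'Zaaaaa'   # ord('a') == 97 > ord('Z') == 90
assert 'z' > 'azzzzz'        # 'z' beats 'a' in position 0; length is irrelevant here
assert 'ab' < 'abc'          # equal prefix: the shorter string is smaller
```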
# ## Comparing tuple
(0, 1, 2) < (0, 3, 4)
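# Tuples follow the same element-wise, left-to-right rule as strings (a sketch):

```python
assert (0, 1, 2) < (0, 3, 4)   # first elements tie, then 1 < 3 decides
assert (1, 2) < (1, 2, 0)      # equal prefix: the shorter tuple is smaller
```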
# ## DSU (Decorate --> Sort --> Undecorate)
def sort_words_by_length():
fin = open('words.txt')
lt = []
for line in fin:
line = line.strip()
lt.append((len(line), line))
lt.sort(reverse=True)
res = []
for _, word in lt:
res.append(word)
fin.close()
return res
x = sort_words_by_length()
print(x[0])
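# For comparison, the same ordering can be obtained without explicit decoration by passing a key function to `sorted` (a sketch with made-up data, not the words.txt file):

```python
# sorted(key=...) decorates and undecorates internally
words = ['fig', 'banana', 'apple']
by_length_desc = sorted(words, key=len, reverse=True)
by_length_desc  # ['banana', 'apple', 'fig']
```

# Note: the DSU version above breaks length ties by the word itself, while key=len is stable and keeps the input order for ties.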
# ## Sequence of Sequence
x = [3, 5, 6, 2, 6, 8,]
x
x = [[2,3,6], [9,7,4], [2,8,4,3]]
x = ((2,3,6), (9,7,4), [2,8,4,3])
x = 'banana'
x = [2,5,7]
x.append(56)
x
x = 'banana'
x.append(d)  # raises AttributeError: 'str' object has no attribute 'append'
x = ['b', 'a', 'n', 'a', 'n', 'a']
x.append('d')
x
x.insert(3, 'd')
x
x = [y for y in 'banana']
x
x = (34, 56, 12, 89)
# ## Sorted
y = sorted(x)
y
x
# ## Reversed
x = (34, 56, 12, 89)
y = reversed(x)
y
for item in y:
print(item)
x
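# One caveat: unlike `sorted`, which returns a new list, `reversed` returns a one-shot iterator (a sketch):

```python
x = (34, 56, 12, 89)
r = reversed(x)
assert list(r) == [89, 12, 56, 34]
assert list(r) == []  # the iterator is exhausted after one pass
```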
| July-12-2020.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: .NET (C#)
// language: C#
// name: .net-csharp
// ---
#r "nuget: Microsoft.ML"
using Microsoft.ML;
using Microsoft.ML.Data;
using System.Linq;
using static Microsoft.ML.Transforms.NormalizingTransformer;
// This example comes from the ML.NET documentation: https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.normalizationcatalog.normalizelogmeanvariance?view=ml-dotnet
class DataPoint
{
[VectorType(5)]
public float[] Features { get; set; }
}
var mlContext = new MLContext();
var samples = new List<DataPoint>()
{
new DataPoint(){ Features = new float[5] { 1, 1, 3, 0, float.MaxValue } },
new DataPoint(){ Features = new float[5] { 2, 2, 2, 0, float.MinValue } },
new DataPoint(){ Features = new float[5] { 0, 0, 1, 0, 0} },
new DataPoint(){ Features = new float[5] {-1,-1,-1, 1, 1} }
};
var data = mlContext.Data.LoadFromEnumerable(samples);
// NormalizeLogMeanVariance normalizes the data based on the computed mean and variance of the logarithm of the data. Uses Cumulative distribution function as output.
var normalize = mlContext.Transforms.NormalizeLogMeanVariance("Features", useCdf: true);
// NormalizeLogMeanVariance normalizes the data based on the computed mean and variance of the logarithm of the data.
var normalizeNoCdf = mlContext.Transforms.NormalizeLogMeanVariance("Features", useCdf: false);
var normalizeTransform = normalize.Fit(data);
var transformedData = normalizeTransform.Transform(data);
var normalizeNoCdfTransform = normalizeNoCdf.Fit(data);
var noCdfData = normalizeNoCdfTransform.Transform(data);
transformedData.GetColumn<float[]>("Features")
noCdfData.GetColumn<float[]>("Features")
// Let's get transformation parameters. Since we work with only one column we need to pass 0 as parameter for GetNormalizerModelParameters. If we have multiple columns transformations we need to pass index of InputOutputColumnPair.
normalizeTransform.GetNormalizerModelParameters(0)
// ERF is https://en.wikipedia.org/wiki/Error_function.
//
// Expected output:
// - The 1-index value in the resulting array would be produced by:
// - y = 0.5* (1 + ERF((Math.Log(x)- 0.3465736) / (0.3465736 * sqrt(2)))
normalizeNoCdfTransform.GetNormalizerModelParameters(0)
| NormalizeLogMeanVariance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Advanced Lane Finding Project
#
# The goals / steps of this project are the following:
#
# * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
# * Apply a distortion correction to raw images.
# * Use color transforms, gradients, etc., to create a thresholded binary image.
# * Apply a perspective transform to rectify binary image ("birds-eye view").
# * Detect lane pixels and fit to find the lane boundary.
# * Determine the curvature of the lane and vehicle position with respect to center.
# * Warp the detected lane boundaries back onto the original image.
# * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
#
# ---
# ## First, I'll compute the camera calibration using chessboard images
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import glob
import cv2
import math
import os
# %matplotlib inline
# ### Camera Calibration Helper Functions
# +
def find_corners(gray_img, nx, ny):
"""Find corners on a chessboard for camera calibration"""
return cv2.findChessboardCorners(gray_img, (nx, ny), None)
def draw_corners(img, nx, ny, corners, ret):
"""Draws chessboard corners"""
return cv2.drawChessboardCorners(img, (nx, ny), corners, ret)
def calibrate(objpoints, imgpoints, img_shape):
"""Calibrates camera"""
    return cv2.calibrateCamera(objpoints, imgpoints, img_shape, None, None)
# -
def get_calibration_points():
"""
Gets object points and image points for camera calibration
"""
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
nx = 9 # the number of inside corners in x
ny = 6 # the number of inside corners in y
# plot counter
counter = 0
# Make a list of calibration images
images = glob.glob('./camera_cal/calibration*.jpg')
for fname in images:
#read in each img
        # note: cv2.imread loads images as BGR
img = cv2.imread(fname)
# image shape
img_size = img.shape[1::-1]
#convert to grayscale
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find corners
ret, corners = find_corners(gray, nx, ny)
if ret:
objpoints.append(objp)
imgpoints.append(corners)
# draw and display the corners
img = draw_corners(img, nx, ny, corners, ret)
# source points
src = np.float32([corners[0], corners[nx-1], corners[-1], corners[-nx]])
offset = 100
# destination points
dst = np.float32([[offset, offset], [img_size[0]-offset, offset],
[img_size[0]-offset, img_size[1]-offset],
[offset, img_size[1]-offset]])
# Compute the perspective transform, M, given
# source and destination points
M = cv2.getPerspectiveTransform(src, dst)
# Warp an image using the perspective transform, M:
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
if counter < 1:
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
ax1.set_title('Original', size=30)
ax1.imshow(img)
ax2.set_title('Undistorted & Transformed', size=30)
ax2.imshow(warped)
counter += 1
return objpoints, imgpoints, img_size, corners
# +
# gets calibration points for camera calibration
objpoints, imgpoints, img_size, corners = get_calibration_points()
# only do this once
# calibrate camera
ret, mtx, dist, rvecs, tvecs = calibrate(objpoints, imgpoints, img_size)
# -
# ### Lane Detection Helper Functions
# +
def rgb_grayscale(img):
"""
Applies the Grayscale transform
Return: Grayscale img
"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def undistort(img, mtx, dist):
"""
Undistorts an image
    Return: Undistorted img
"""
return cv2.undistort(img, mtx, dist, None, mtx)
def cvt_to_hls(img):
"""
Applies a RGB to HLS color transform
Return: HLS img representation
"""
return cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
def sobel_x(channel):
"""
Takes derivative of x values
"""
return cv2.Sobel(channel, cv2.CV_64F, 1, 0)
def scale_sobel(abs_sobel):
"""
Absolute x derivative to accentuate lines away from horizontal.
"""
return np.uint8(255*abs_sobel/np.max(abs_sobel))
def get_src(img_size):
"""Returns source points"""
src = np.float32(
[[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
[((img_size[0] / 6) - 10), img_size[1]],
[(img_size[0] * 5 / 6) + 60, img_size[1]],
[(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
return src
def get_dst(img_size):
"""Returns destination points"""
# img width
width = img_size[0]
# img height
height = img_size[1]
# destination point for transformation
dst = np.float32(
[[(width / 4), 0],
[(width / 4), height],
[(width * 3 / 4), height],
[(width * 3 / 4), 0]])
return dst
# -
# ### Combined Binary Image
def get_color_thresh(img):
# Converts to HLS color space and separate the S channel
# Note: img is the undistorted image
hls = cvt_to_hls(img)
s_channel = hls[:,:,2]
# Grayscale image
gray = rgb_grayscale(img)
# Sobel x
sobelx = sobel_x(gray) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = scale_sobel(abs_sobelx)
# Threshold x gradient
thresh_min = 20
thresh_max = 100
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
# Threshold color channel
s_thresh_min = 170
s_thresh_max = 255
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
# Stack each channel to view their individual contributions in green and blue respectively
# Returns a stack of the two binary images, whose components you can see as different colors
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return combined_binary, color_binary
# ### Transform to Top View
# +
def transform(binary_img):
"""
Applies an image mask and transforms the image to a birds eye view.
Return: Top/birds view of lane.
"""
# image size (width, height)
img_size = binary_img.shape[1::-1]
# gets the bounding vertices for the mask (source points)
src = get_src(img_size)
# gets destination points
dst = get_dst(img_size)
# Compute the perspective transform, M, given
# source and destination points
M = cv2.getPerspectiveTransform(src, dst)
# Warp an image using the perspective transform, M
warped = cv2.warpPerspective(binary_img, M, img_size, flags=cv2.INTER_NEAREST)
# color = [255, 0, 0]
# thickness = 4
# cv2.line(warped, (dst[0][0], dst[0][1]), (dst[1][0], dst[1][1]), color, thickness)
# cv2.line(warped, (dst[2][0], dst[2][1]), (dst[3][0], dst[3][1]), color, thickness)
# cv2.line(warped, (dst[3][0], dst[3][1]), (dst[0][0], dst[0][1]), color, thickness)
return warped
# -
# ## Locate Lines
#
#
# ### Shifting Window
def locate_lines(binary_warped):
"""
Locates the left and right lane line in a binary_warped image.
"""
# Take a histogram of the bottom half of binary_warped image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Creates an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
# Finds the peak of the left and right halves of the histogram
# These are the starting points for the left and right lines
    midpoint = int(histogram.shape[0] // 2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# Number of sliding windows
nwindows = 9
# Sets height of windows
    window_height = int(binary_warped.shape[0] // nwindows)
# Identifies the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# set/update left_fit, right_fit class attributes
left_line.set_recent_poly_coef(left_fit)
right_line.set_recent_poly_coef(right_fit)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
# set/update left_fitx, right_fitx class attributes
left_line.set_recent_xfitted(left_fitx)
right_line.set_recent_xfitted(right_fitx)
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
return out_img, ploty
# ### Line Search Within Region
# +
def fit_poly(img_shape, leftx, lefty, rightx, righty):
# Fits a second order polynomial to each with np.polyfit()
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
# set/update left_fit, right_fit class attributes
left_line.set_recent_poly_coef(left_fit)
right_line.set_recent_poly_coef(right_fit)
# set/update left_fitx, right_fitx class attributes
left_line.set_recent_xfitted(left_fitx)
right_line.set_recent_xfitted(right_fitx)
return left_fitx, right_fitx, ploty, left_fit, right_fit
def search_around_poly(binary_warped):
"""
Searches for lane pixels within a margin of the previously
detected lane lines.
"""
# Width of the margin around the previous polynomial to search
    margin = 100
# Grab activated pixels
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# get average of n iterations of left_fit and right_fit
left_fit = left_line.get_avg_poly_coef()
right_fit = right_line.get_avg_poly_coef()
# Sets the area of search based on activated x-values
# within the +/- margin of our polynomial function.
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit new polynomials
left_fitx, right_fitx, ploty, left_fit, right_fit = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin,
ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin,
ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (255, 242, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (255, 242, 0))
result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
# f, (ax1) = plt.subplots(1, 1, figsize=(24, 9))
# ax1.imshow(out_img)
# Plot the polynomial lines onto the image
# plt.plot(left_fitx, ploty, color='yellow')
# plt.plot(right_fitx, ploty, color='yellow')
return result, ploty
# -
# ### Calculate Line Curvature
def get_curvature(ploty, img_size):
"""Calculates the curvature of polynomial functions in meters."""
left_fitx = left_line.recent_xfitted[-1]
right_fitx = right_line.recent_xfitted[-1]
# Conversions in x and y from pixels space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/640 # meters per pixel in x dimension
# Fit a second order polynomial to pixel positions in each lane line
# Fit new polynomials to x,y in world space
left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2)
# y-value where radius of curvature is calculated
# Maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curve_radius = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curve_radius = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
# update left and right curve radius
left_line.set_radius_of_curvature(left_curve_radius)
right_line.set_radius_of_curvature(right_curve_radius)
# set lane center
lines.set_lane_center()
# get img midpoint/car position
img_midpoint = img_size[0] / 2
# set deviation in pixels and meters
lines.set_deviation(img_midpoint, xm_per_pix)
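# A quick sanity check of the radius formula used above, R = (1 + (2Ay + B)^2)^(3/2) / |2A| for a fit x = Ay^2 + By + C (a sketch with made-up coefficients, not project data):

```python
import numpy as np

def curve_radius(A, B, y):
    """Radius of curvature of x = A*y**2 + B*y + C, evaluated at y."""
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / np.absolute(2 * A)

# At the vertex (y = 0, B = 0) the formula reduces to 1 / (2A)
assert np.isclose(curve_radius(0.5, 0.0, 0.0), 1.0)
```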
# ### Draw Predicted Lines
def draw_lines(img, warped, ploty):
# get the avg left_fitx, right_fitx from Line class
left_fitx = left_line.get_avg_xfitted()
right_fitx = right_line.get_avg_xfitted()
# Create an image to draw the lines on
warp_zero = np.zeros_like(warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (110, 0, 211))
# image size
img_size = img.shape[1::-1]
# get the bounding vertices for the mask (source points)
src = get_src(img_size)
# get destination points
dst = get_dst(img_size)
# inverse perspective transform
M = cv2.getPerspectiveTransform(dst, src)
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, M, img_size)
# Combine the result with the original image
return cv2.addWeighted(img, 1, newwarp, 0.5, 0)
# ### Combine Visual Results
def merge_outputs(binary_img, color_img, output_img, result):
"""Merges multiple visuals into one output image"""
# img width
width = result.shape[1]
# img height
height = result.shape[0]
# img midpoint
midpoint = int(width / 2)
img = np.zeros_like(result)
# resize images
color_img = cv2.resize(color_img, (0,0), fx=0.5, fy=0.5)
output_img = cv2.resize(output_img, (0,0), fx=0.5, fy=0.5)
# concat resized images vertically
vert = np.concatenate((output_img, color_img), axis=0)
result = np.concatenate((result, vert), axis=1)
# draw and fill rectangle for lane line info
cv2.rectangle(result, (0, 0), (470, 100),(0, 0, 0), -1)
# Display text info of left lane radius
cv2.putText(result, "Corner Radius: {} km".format(round(lines.get_radius_of_curvature()/1000, 1)),
(10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 228, 0), 3)
# get car deviation from center in meters
m_from_center = lines.dist_from_center_m
# get car deviation from center in pixels
px_from_center = lines.dist_from_center_px
# position of lane center
lane_center = lines.lane_center
# draw lane center
# cv2.line(result, (int(lane_center), height - 120), (int(lane_center), height - 70), (0, 228, 0), 4)
# draw current car position
# cv2.line(result, (int(midpoint), height - 110), (int(midpoint), height - 80), (0, 228, 0), 5)
if m_from_center > 0:
direction = 'right'
else:
direction = 'left'
# display text for car position
cv2.putText(result, "Deviation: {:.2f}m {}".format(np.absolute(lines.dist_from_center_m), direction),
(10, 80), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 228, 0), 3)
return result
# ## Pipeline
# +
def pipeline(img):
# undistort image
undist = undistort(img, mtx, dist)
# create copy of undistorted image
img = np.copy(undist)
# img size
img_size = img.shape[1::-1]
# get gradient thresholds
binary_image, color_img = get_color_thresh(undist)
    # transform img to aerial (birds-eye) view
warped = transform(binary_image)
if not lines.detected:
# Locate lane lines
output_img, ploty = locate_lines(warped)
# draw lines on located lines
result = draw_lines(img, warped, ploty)
        # Window method has found lane lines, set detected to true
lines.set_detected(True)
else:
# Locate lane lines within small region
output_img, ploty = search_around_poly(warped)
# draw lines on located lines
result = draw_lines(img, warped, ploty)
# Calculate lane curvature
get_curvature(ploty, img_size)
# combine visual results
final = merge_outputs(binary_image, color_img, output_img, result)
# To visualize output for test images, uncomment
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
# ax1.imshow(final)
# ax2.imshow(output_img)
return final
# -
# ### Line Class
# +
# Class to receive the characteristics of each line detection
class Line():
def __init__(self):
# was the line detected in the last iteration?
self.detected = False
# current xfitted
self.current_xfitted = None
# x values of the last n fits of the line
self.recent_xfitted = []
#polynomial coefficients for the most recent fit
self.poly_coef = []
#radius of curvature of the line in some units
self.radius_of_curvature = None
# center of lane
self.lane_center = None
#distance in meters of vehicle center from the line
self.dist_from_center_m = None
#distance in pixels of vehicle center from the line
self.dist_from_center_px = None
def set_detected(self, boolean):
"""Were lane lines located using window method."""
self.detected = boolean
def set_recent_xfitted(self, xfitted):
"""
Stores x values of the last 5 fits of the line
param: left_fitx, right_fitx
"""
self.current_xfitted = xfitted
self.recent_xfitted.append(xfitted)
if len(self.recent_xfitted) > 5:
self.recent_xfitted.pop(0)
def get_avg_xfitted(self):
"""
Returns the average x values of the fitted line over the
last 5 iterations.
Return: avg of self.recent_xfitted
"""
return np.average(self.recent_xfitted, axis=0)
def set_recent_poly_coef(self, fit):
"""
Stores polynomial coefficients over the last
5 iterations.
Params: left_fit or right_fit"""
self.poly_coef.append(fit)
if len(self.poly_coef) > 5:
self.poly_coef.pop(0)
def get_avg_poly_coef(self):
"""
Returns the polynomial coefficients averaged over
the last 5 iterations
Return: avg of self.poly_coef
"""
return np.average(self.poly_coef, axis=0)
def set_radius_of_curvature(self, radius):
"""Sets curvature radius for new line"""
self.radius_of_curvature = radius
def get_radius_of_curvature(self):
"""Get curvature radius"""
return (left_line.radius_of_curvature + right_line.radius_of_curvature) / 2
def set_lane_center(self):
"""Calculates center of lane from base of left and right lane lines."""
self.lane_center = (left_line.current_xfitted[-1] + right_line.current_xfitted[-1])/2.
def set_deviation(self, img_midpoint, xm_per_pix):
"""Set Car Deviation"""
self.dist_from_center_m = (img_midpoint - self.lane_center)*xm_per_pix #Convert to meters
self.dist_from_center_px = (img_midpoint - self.lane_center)
lines = Line()
left_line = Line()
right_line = Line()
# -
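# The append/pop(0) bookkeeping in the Line class could also be expressed with `collections.deque`, which drops old entries automatically once `maxlen` is reached (an alternative sketch, not what the class above uses):

```python
from collections import deque

recent = deque(maxlen=5)  # bounded buffer: the oldest entry falls off automatically
for fit in range(8):
    recent.append(fit)
list(recent)  # [3, 4, 5, 6, 7]
```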
# ### Test Images
# +
# Note: Lane identification will be obscured as lane positioning is averaged over 5 frames.
images = ['straight_lines1.jpg',
'straight_lines2.jpg',
'test1.jpg',
'test2.jpg',
'test3.jpg',
'test4.jpg',
'test5.jpg',
'test6.jpg']
for image in images:
img = mpimg.imread('test_images/' + image)
pipeline(img)
# -
# ### Create Video Output
# Import everything needed to edit/save/watch video clips
# from moviepy.editor import VideoFileClip
from moviepy.editor import *
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# you should return the final output (image where lines are drawn on lanes)
# read in colored image
return pipeline(image)
# +
white_output = 'test_videos_output/project_video.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
# clip1 = VideoFileClip("./project_video.mp4").subclip(0,5)
clip1 = VideoFileClip("./project_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# # %time white_clip.write_videofile(white_output, audio=False)
white_clip.write_videofile(white_output, audio=False)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # BIAFlows
#
# ## Nuclei Tracking 2D+t
#
# ### Fiji-workflow
#
# The workflow treats the time-dimension as z-dimension and does a 3D-segmentation of the objects. The resulting object slices are then reduced to a center point.
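# A rough illustration (in Python with scipy, not the Fiji macro itself) of
# the reduction step described above: label connected objects in the stack,
# then collapse each labelled object to its centre point. The toy stack and
# sizes below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# A tiny binary "time-as-z" stack containing one 2x2 nucleus.
stack = np.zeros((1, 8, 8), dtype=np.uint8)
stack[0, 2:4, 2:4] = 1

labels, n_objects = ndimage.label(stack)
centers = ndimage.center_of_mass(stack, labels, range(1, n_objects + 1))
print(n_objects, centers)  # 1 [(0.0, 2.5, 2.5)]
```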
# +
from getpass import getpass
publicKey = getpass("Please enter the public key: ")
privateKey = getpass("Please enter the private key: ")
argv = ['--cytomine_public_key', publicKey,
'--cytomine_host', 'biaflows.neubias.org',
'--cytomine_private_key', privateKey,
'--cytomine_id_project', '1695226',
'--cytomine_id_software', '11244427',
'--ij_gauss_radius', '3',
'--ij_threshold', '60',
'--ij_open_radius', '7']
# -
# Import CytomineJob and Job and update the status information. Set the problem class to particle-tracking (PrtTrk).
# +
import sys
from subprocess import call
from cytomine.models import Job
from neubiaswg5.helpers import NeubiasJob, prepare_data, upload_data, upload_metrics
from neubiaswg5 import CLASS_PRTTRK
jobID=-666
with NeubiasJob.from_cli(argv) as nj:
nj.job.update(status=Job.RUNNING, progress=0, statusComment="Initialisation...")
jobID = nj.job.id
problem_cls = CLASS_PRTTRK
# -
# #### Create local directories and download images
#
# Create the local working directories in a subfolder jobID of the user's home folder.
#
# - in: input images
# - out: output images
# - ground_truth: ground truth images
# - tmp: temporary path
in_images, gt_images, in_path, gt_path, out_path, tmp_path = prepare_data(problem_cls, nj, is_2d=False, **nj.flags)
# Check the downloaded input and ground-truth images. In this case there is only one time-series as input image and one time-series as ground-truth image.
print(in_path)
# !ls -alh $in_path
print(gt_path)
# !ls -alh $gt_path
# #### Call the image analysis workflow
# +
nj.job.update(progress=25, statusComment="Launching workflow...")
command = "/usr/bin/xvfb-run ./ImageJ-linux64 -macro macro.ijm \"input={}, output={}, gauss_rad={}, threshold={}, open_rad={}\" -batch".format(in_path, out_path, nj.parameters.ij_gauss_radius, nj.parameters.ij_threshold, nj.parameters.ij_open_radius)
return_code = call(command, shell=True, cwd="./fiji") # waits for the subprocess to return
if return_code != 0:
err_desc = "Failed to execute the ImageJ macro (return code: {})".format(return_code)
nj.job.update(progress=50, statusComment=err_desc)
raise ValueError(err_desc)
nj.job.update(progress=30, statusComment="Workflow finished...")
# -
# #### Visualize the result of the workflow
# +
macro = '\
open("'+in_path + '/' + str(in_images[0].object.id)+'.tif"); \n\
cellsStackID = getImageID(); \n\
run("Duplicate...", " "); \n\
cellsTitle = getTitle(); \n\
selectImage(cellsStackID); \n\
close(); \n\
open("'+out_path + '/' + str(in_images[0].object.id)+'.tif"); \n\
tracesStackID = getImageID(); \n\
run("Z Project...", "projection=[Max Intensity]"); \n\
run("3-3-2 RGB"); \n\
run("Maximum...", "radius=4"); \n\
run("8-bit"); \n\
tracesTitle = getTitle(); \n\
run("Merge Channels...", "c2="+tracesTitle+" c4="+cellsTitle+" create"); \n\
selectImage(tracesStackID); \n\
close(); \n\
overlayID = getImageID(); \n\
run("Capture Image"); \n\
selectImage(overlayID); \n\
close(); \n\
resultTitle=getTitle(); \n\
saveAs("jpeg", "'+tmp_path+'/'+str(in_images[0].object.id)+'"); \n\
close(); \n\
run("Quit"); \n\
'
file = open(tmp_path + "/visualize_tracks.ijm", "w")
file.write(macro)
file.close()
print(macro)
# +
command = "/usr/bin/xvfb-run ./ImageJ-linux64 -macro "+tmp_path + "/visualize_tracks.ijm -batch"
return_code = call(command, shell=True, cwd="./fiji") # waits for the subprocess to return
if return_code > 1:
err_desc = "Failed to execute the ImageJ macro (return code: {})".format(return_code)
nj.job.update(progress=50, statusComment=err_desc)
raise ValueError(err_desc)
# -
from IPython.display import Image
Image(filename = tmp_path+'/'+str(in_images[0].object.id)+'.jpg')
# #### Calculate metrics
# +
from neubiaswg5.metrics import computemetrics_batch
from cytomine.models import Property
import os
os.chdir("/home/jovyan/neubiaswg5-utilities/neubiaswg5/metrics")
import re
import shutil
import sys
import subprocess
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import confusion_matrix
import numpy as np
from scipy import ndimage
import tifffile as tiff
from scipy.spatial import cKDTree
from neubiaswg5 import *
from neubiaswg5 import CLASS_LNDDET
from img_to_xml import *
from img_to_seq import *
from skl2obj import *
from netmets_obj import netmets_obj
from node_sorter import swc_node_sorter
from node_sorter import findchildren
os.chdir("/home/jovyan/")
def computemetrics(infile, reffile, problemclass, tmpfolder, verbose=True, **extra_params):
outputs = _computemetrics(infile, reffile, problemclass, tmpfolder, **extra_params)
return outputs
def get_image_metadata(tiff):
import xml.etree.ElementTree as ET
return list(list(ET.fromstring(tiff.ome_metadata))[0])[0].attrib
def get_dimensions(tiff, time=False):
array = tiff.asarray()
T, Z = 1, 1
if array.ndim > 2:
metadata = get_image_metadata(tiff)
Y, X = int(metadata['SizeY']), int(metadata['SizeX'])
if array.ndim > 3 or time:
T = int(metadata['SizeT'])
if array.ndim > 3 or not time:
Z = int(metadata['SizeZ'])
else:
Y, X = array.shape
return T, Z, Y, X
def _computemetrics(infile, reffile, problemclass, tmpfolder, **extra_params):
# Remove all xml and txt (temporary) files in tmpfolder
filelist = [ f for f in os.listdir(tmpfolder) if (f.endswith(".xml") or f.endswith(".txt")) ]
for f in filelist:
os.remove(os.path.join(tmpfolder, f))
# Remove all (temporary) subdirectories in tmpfolder
for subdir in next(os.walk(tmpfolder))[1]:
shutil.rmtree(os.path.join(tmpfolder, subdir), ignore_errors=True)
metrics_dict = {}
params_dict = {}
# Read metadata from reference image (OME-TIFF)
img = tiff.TiffFile(reffile)
T, Z, Y, X = get_dimensions(img, time=False)
# Convert non null pixels coordinates to track files
ref_xml_fname = os.path.join(tmpfolder, "reftracks.xml")
tracks_to_xml(ref_xml_fname, img_to_tracks(reffile,X,Y,Z,T), True)
in_xml_fname = os.path.join(tmpfolder, "intracks.xml")
tracks_to_xml(in_xml_fname, img_to_tracks(infile,X,Y,Z,T), True)
res_fname = in_xml_fname + ".score.txt"
# Call tracking metric code
gating_dist = extra_params.get("gating_dist", 5)
# the fourth parameter represents the gating distance
os.system('java -jar /home/jovyan/neubiaswg5-utilities/bin/TrackingPerformance.jar -r ' + ref_xml_fname + ' -c ' + in_xml_fname + ' -o ' + res_fname + ' ' + str(gating_dist))
# Parse the output file created automatically in tmpfolder
with open(res_fname, "r") as f:
bchmetrics = [line.split(':')[0].strip() for line in f.readlines()]
metric_names = [
"PD", "NPSA", "FNPSB", "NRT", "NCT",
"JST", "NPT", "NMT", "NST", "NRD",
"NCD", "JSD", "NPD", "NMD", "NSD"
]
metrics_dict.update({name: value for name, value in zip(metric_names, bchmetrics)})
params_dict["GATING_DIST"] = gating_dist
return metrics_dict, params_dict
nj.job.update(progress=80, statusComment="Computing and uploading metrics...")
outfiles, reffiles = zip(*[
(os.path.join(out_path, "{}.tif".format(image.object.id)),
os.path.join(gt_path, "{}.tif".format(image.object.id)))
for image in in_images
])
for infile, reffile in zip(outfiles, reffiles):
    metrics = computemetrics(infile, reffile, problem_cls, tmp_path)  # extra params, e.g. gating_dist, go as keyword arguments
# -
path = tmp_path+'/'+"intracks.xml.score.txt"
print(path)
with open(path) as file:
for line in file:
print(line)
print("test")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''mujoco'': conda)'
# language: python
# name: python3
# ---
import sys
from pathlib import Path
curr_path = str(Path().absolute())
parent_path = str(Path().absolute().parent)
sys.path.append(parent_path) # add current terminal path to sys.path
# +
import gym
import torch
import datetime
from SAC.env import NormalizedActions
from SAC.agent import SAC
from common.utils import save_results, make_dir
from common.plot import plot_rewards
curr_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") # obtain current time
# -
class SACConfig:
def __init__(self) -> None:
self.algo = 'SAC'
self.env = 'Pendulum-v0'
self.result_path = curr_path+"/outputs/" +self.env+'/'+curr_time+'/results/' # path to save results
self.model_path = curr_path+"/outputs/" +self.env+'/'+curr_time+'/models/' # path to save models
self.train_eps = 300
self.train_steps = 500
self.eval_eps = 50
self.eval_steps = 500
self.gamma = 0.99
self.mean_lambda=1e-3
self.std_lambda=1e-3
self.z_lambda=0.0
self.soft_tau=1e-2
self.value_lr = 3e-4
self.soft_q_lr = 3e-4
self.policy_lr = 3e-4
self.capacity = 1000000
self.hidden_dim = 256
self.batch_size = 128
self.device=torch.device("cuda" if torch.cuda.is_available() else "cpu")
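# In SAC, soft_tau above typically controls a Polyak ("soft") update of the
# target value network: theta_target <- tau * theta + (1 - tau) * theta_target.
# The real update lives inside SAC.agent; the sketch below only illustrates
# the arithmetic on plain numbers.

```python
def soft_update(target_params, source_params, tau):
    """Blend source parameters into target parameters with factor tau."""
    return [tau * s + (1.0 - tau) * t
            for t, s in zip(target_params, source_params)]

# With tau = 0.5 the target moves halfway toward the source each call.
print(soft_update([0.0, 2.0], [1.0, 4.0], tau=0.5))  # [0.5, 3.0]
```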
def env_agent_config(cfg,seed=1):
    env = NormalizedActions(gym.make(cfg.env))  # use the env name from the config
env.seed(seed)
action_dim = env.action_space.shape[0]
state_dim = env.observation_space.shape[0]
agent = SAC(state_dim,action_dim,cfg)
return env,agent
def train(cfg,env,agent):
    print('Starting training!')
    print(f'Env: {cfg.env}, Algorithm: {cfg.algo}, Device: {cfg.device}')
    rewards = []
    ma_rewards = [] # moving average reward
for i_ep in range(cfg.train_eps):
state = env.reset()
ep_reward = 0
for i_step in range(cfg.train_steps):
action = agent.policy_net.get_action(state)
next_state, reward, done, _ = env.step(action)
agent.memory.push(state, action, reward, next_state, done)
agent.update()
state = next_state
ep_reward += reward
if done:
break
if (i_ep+1)%10==0:
print(f"Episode:{i_ep+1}/{cfg.train_eps}, Reward:{ep_reward:.3f}")
rewards.append(ep_reward)
if ma_rewards:
ma_rewards.append(0.9*ma_rewards[-1]+0.1*ep_reward)
else:
ma_rewards.append(ep_reward)
print('Complete training!')
return rewards, ma_rewards
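# The ma_rewards update above is an exponential moving average,
# ma_t = 0.9 * ma_{t-1} + 0.1 * r_t, seeded with the first reward. A
# standalone sketch of the same smoothing:

```python
def moving_average(rewards, alpha=0.1):
    """Exponentially smoothed rewards, seeded with the first value."""
    ma = []
    for r in rewards:
        ma.append(r if not ma else (1 - alpha) * ma[-1] + alpha * r)
    return ma

print(moving_average([10.0, 0.0, 0.0]))  # approximately [10.0, 9.0, 8.1]
```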
def eval(cfg,env,agent):
    print('Starting evaluation!')
    print(f'Env: {cfg.env}, Algorithm: {cfg.algo}, Device: {cfg.device}')
    rewards = []
    ma_rewards = [] # moving average reward
for i_ep in range(cfg.eval_eps):
state = env.reset()
ep_reward = 0
for i_step in range(cfg.eval_steps):
action = agent.policy_net.get_action(state)
next_state, reward, done, _ = env.step(action)
state = next_state
ep_reward += reward
if done:
break
if (i_ep+1)%10==0:
print(f"Episode:{i_ep+1}/{cfg.train_eps}, Reward:{ep_reward:.3f}")
rewards.append(ep_reward)
if ma_rewards:
ma_rewards.append(0.9*ma_rewards[-1]+0.1*ep_reward)
else:
ma_rewards.append(ep_reward)
    print('Complete evaluation!')
return rewards, ma_rewards
if __name__ == "__main__":
cfg=SACConfig()
# train
env,agent = env_agent_config(cfg,seed=1)
rewards, ma_rewards = train(cfg, env, agent)
make_dir(cfg.result_path, cfg.model_path)
agent.save(path=cfg.model_path)
save_results(rewards, ma_rewards, tag='train', path=cfg.result_path)
plot_rewards(rewards, ma_rewards, tag="train",
algo=cfg.algo, path=cfg.result_path)
# eval
env,agent = env_agent_config(cfg,seed=10)
agent.load(path=cfg.model_path)
rewards,ma_rewards = eval(cfg,env,agent)
save_results(rewards,ma_rewards,tag='eval',path=cfg.result_path)
plot_rewards(rewards,ma_rewards,tag="eval",env=cfg.env,algo = cfg.algo,path=cfg.result_path)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="HaF1C3dLUhtB"
# # **Random Forest - Credit Default Prediction**
# + [markdown] id="R2N0zYajUvNQ"
# In this lab, we will build a random forest model to predict whether a given customer defaults or not. Credit default is one of the most important problems in the banking and risk analytics industry. There are various attributes which can be used to predict default, such as demographic data (age, income, employment status, etc.) and (credit) behavioural data (past loans, payments, number of times a credit payment has been delayed by the customer, etc.).
#
# We'll start the process with data cleaning and preparation and then tune the model to find optimal hyperparameters.
# + [markdown] id="Q51MuTbQUyY3"
# <hr>
# + [markdown] id="Vv2m9nDXU2dg"
# ### **Data Understanding and Cleaning**
# + id="lZ5QH9brUeRs" executionInfo={"status": "ok", "timestamp": 1640533401373, "user_tz": -420, "elapsed": 523, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}}
# Importing the required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# To ignore warnings
import warnings
warnings.filterwarnings("ignore")
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 74} id="1d-amUapU-XY" executionInfo={"status": "ok", "timestamp": 1640533492718, "user_tz": -420, "elapsed": 84088, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="dc858a29-a525-4a98-e8d2-3b65a0451e49"
from google.colab import files
uploaded = files.upload()
# + colab={"base_uri": "https://localhost:8080/", "height": 270} id="sgZa6sa4U8H_" executionInfo={"status": "ok", "timestamp": 1640533523315, "user_tz": -420, "elapsed": 483, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="3bc49903-f0c1-4d1f-f4ca-c02ca8351445"
# Reading the csv file and putting it into 'df' object.
df = pd.read_csv('credit-card-default.csv')
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="0X7vyk1fVEw-" executionInfo={"status": "ok", "timestamp": 1640533539221, "user_tz": -420, "elapsed": 471, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="4cae6a27-8abe-4399-ab5d-0093fc98c119"
# Let's understand the type of columns
df.info()
# + [markdown] id="-8lvkIRBVRtB"
# In this case, we know that there are no major data quality issues, so we'll go ahead and build the model.
# + [markdown] id="wJ6s6fcPVTSX"
# <hr>
# + [markdown] id="8TB-GMFgVaMw"
# ### **Data Preparation and Model Building**
# + id="p5NGBX7-VJLl" executionInfo={"status": "ok", "timestamp": 1640533548057, "user_tz": -420, "elapsed": 492, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}}
# Importing test_train_split from sklearn library
from sklearn.model_selection import train_test_split
# + id="SRZvR6--VgS2" executionInfo={"status": "ok", "timestamp": 1640533555284, "user_tz": -420, "elapsed": 480, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}}
# Putting feature variable to X
X = df.drop('defaulted',axis=1)
# Putting response variable to y
y = df['defaulted']
# Splitting the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
# + [markdown] id="_FELYfsDVpIm"
# ### **Default Hyperparameters**
# Let's first fit a random forest model with default hyperparameters.
# + id="zB_c-kLWVkff" executionInfo={"status": "ok", "timestamp": 1640533560426, "user_tz": -420, "elapsed": 470, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}}
# Importing random forest classifier from sklearn library
from sklearn.ensemble import RandomForestClassifier
# Running the random forest with default parameters.
rfc = RandomForestClassifier()
# + colab={"base_uri": "https://localhost:8080/"} id="lQETOKNXVwr-" executionInfo={"status": "ok", "timestamp": 1640533576549, "user_tz": -420, "elapsed": 6947, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="ad274073-b1d5-40b3-ade6-03c81a55fdae"
# fit
rfc.fit(X_train,y_train)
# + id="uDIrxpzxVzzg" executionInfo={"status": "ok", "timestamp": 1640533581259, "user_tz": -420, "elapsed": 469, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}}
# Making predictions
predictions = rfc.predict(X_test)
# + id="dFHSci0cV42P" executionInfo={"status": "ok", "timestamp": 1640533582998, "user_tz": -420, "elapsed": 4, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}}
# Importing classification report and confusion matrix from sklearn metrics
from sklearn.metrics import classification_report,confusion_matrix, accuracy_score
# + colab={"base_uri": "https://localhost:8080/"} id="cmEMLDAAV_CG" executionInfo={"status": "ok", "timestamp": 1640533584869, "user_tz": -420, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="1e938a18-ca35-4686-9f8c-7827a5581cdc"
# Let's check the report of our default model
print(classification_report(y_test,predictions))
# + colab={"base_uri": "https://localhost:8080/"} id="0AjqL6vdWBmm" executionInfo={"status": "ok", "timestamp": 1640533588872, "user_tz": -420, "elapsed": 4, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="74030a91-25bc-4edf-9f2d-b726356f180a"
# Printing confusion matrix
print(confusion_matrix(y_test,predictions))
# + colab={"base_uri": "https://localhost:8080/"} id="Ag4UuUpeWHA3" executionInfo={"status": "ok", "timestamp": 1640533591483, "user_tz": -420, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="2024c6cf-2174-42a0-8c0e-88da295411eb"
print(accuracy_score(y_test,predictions))
# + [markdown] id="K38hBGjxWR42"
# So far so good, let's now look at the list of hyperparameters which we can tune to improve model performance.
# + [markdown] id="GFJXkQpEWUI-"
# <hr>
# + [markdown] id="aN_17WBpWZn2"
# ### **Hyperparameter Tuning**
# + [markdown] id="Bnr1O2k9WgB9"
# The following hyperparameters are present in a random forest classifier. Note that most of these hyperparameters actually belong to the decision trees that make up the forest.
#
#
# - **n_estimators**: integer, optional (default=100 in recent scikit-learn versions): The number of trees in the forest.
# - **criterion**: string, optional (default="gini"): The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain. Note: this parameter is tree-specific.
# - **max_features**: int, float, string or None, optional (default="auto"): The number of features to consider when looking for the best split:
#     - If int, then consider max_features features at each split.
#     - If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
#     - If "auto", then max_features=sqrt(n_features).
#     - If "sqrt", then max_features=sqrt(n_features) (same as "auto").
#     - If "log2", then max_features=log2(n_features).
#     - If None, then max_features=n_features.
#     - Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
# - **max_depth**: integer or None, optional (default=None): The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
# - **min_samples_split**: int, float, optional (default=2): The minimum number of samples required to split an internal node:
#     - If int, then consider min_samples_split as the minimum number.
#     - If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) is the minimum number of samples for each split.
# - **min_samples_leaf**: int, float, optional (default=1): The minimum number of samples required to be at a leaf node:
#     - If int, then consider min_samples_leaf as the minimum number.
#     - If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
# - **min_weight_fraction_leaf**: float, optional (default=0.): The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
# - **max_leaf_nodes**: int or None, optional (default=None): Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then the number of leaf nodes is unlimited.
# - **min_impurity_split**: float: Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold; otherwise it is a leaf.
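# The hedged snippet below shows how several of these hyperparameters are
# passed to RandomForestClassifier; the values are arbitrary illustrations,
# not tuned recommendations for this dataset.

```python
from sklearn.ensemble import RandomForestClassifier

rfc_tuned = RandomForestClassifier(
    n_estimators=100,       # number of trees in the forest
    criterion="gini",       # split-quality measure
    max_depth=4,            # cap tree depth to curb overfitting
    max_features="sqrt",    # features considered at each split
    min_samples_split=200,  # minimum samples to split an internal node
    min_samples_leaf=100,   # minimum samples required at a leaf
)
print(rfc_tuned.get_params()["max_depth"])  # 4
```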
# + [markdown] id="jpDc-U-QWiZm"
# <hr>
# + [markdown] id="YpxejPJrWmtX"
# ### **Tuning max_depth**
# + [markdown] id="92TI-5hiWpH-"
# Let's try to find the optimum values for ```max_depth``` and understand how the value of max_depth impacts the overall accuracy of the ensemble.
# + colab={"base_uri": "https://localhost:8080/"} id="lu9hksIaWNXI" executionInfo={"status": "ok", "timestamp": 1640533884171, "user_tz": -420, "elapsed": 65960, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="fbee4bfb-53e6-4a9e-c7b1-77348632e049"
# GridSearchCV to find optimal n_estimators
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# specify number of folds for k-fold CV
n_folds = 5
# parameters to build the model on
parameters = {'max_depth': range(2, 20, 5)}
# instantiate the model
rf = RandomForestClassifier()
# fit tree on training data
rf = GridSearchCV(rf, parameters,
cv=n_folds,
scoring="accuracy",return_train_score=True)
rf.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 308} id="loMFkbXpWvDS" executionInfo={"status": "ok", "timestamp": 1640533895298, "user_tz": -420, "elapsed": 486, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="e3e42592-aaf0-429a-f9b0-0243fe1459e9"
# scores of GridSearch CV
scores = rf.cv_results_
pd.DataFrame(scores).head()
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="hUOmh6iTWyiA" executionInfo={"status": "ok", "timestamp": 1640533899810, "user_tz": -420, "elapsed": 595, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="d5ca2db3-07ed-4a36-c85a-218fc1555501"
# plotting accuracies with max_depth
plt.figure()
plt.plot(scores["param_max_depth"],
scores["mean_train_score"],
label="training accuracy")
plt.plot(scores["param_max_depth"],
scores["mean_test_score"],
label="test accuracy")
plt.xlabel("max_depth")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# + [markdown] id="Ycet-yNaW40V"
# You can see that as we increase the value of max_depth, both train and test scores increase till a point, but after that the test score starts to decrease. The ensemble starts to overfit as we increase max_depth.
#
# Thus, controlling the depth of the constituent trees will help reduce overfitting in the forest.
# + [markdown] id="aFQS3rNVW6BY"
# <hr>
# + [markdown] id="frGAsqiMW-Gj"
# ### **Tuning n_estimators**
# + [markdown] id="uD3i-fwnXA-l"
# Let's try to find the optimum values for n_estimators and understand how the value of n_estimators impacts the overall accuracy. Notice that we'll specify an appropriately low value of max_depth, so that the trees do not overfit.
# + colab={"base_uri": "https://localhost:8080/"} id="9zO2X-JRW2KY" executionInfo={"status": "ok", "timestamp": 1640534540648, "user_tz": -420, "elapsed": 252525, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="85d79c8b-9a40-4554-ecf6-cbd5fe5f9e40"
# GridSearchCV to find optimal n_estimators
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# specify number of folds for k-fold CV
n_folds = 5
# parameters to build the model on
parameters = {'n_estimators': range(100, 1500, 400)}
# instantiate the model (note we are specifying a max_depth)
rf = RandomForestClassifier(max_depth=4)
# fit tree on training data
rf = GridSearchCV(rf, parameters,
cv=n_folds,
scoring="accuracy",return_train_score=True)
rf.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 308} id="cLqu43GtXKxd" executionInfo={"status": "ok", "timestamp": 1640534557716, "user_tz": -420, "elapsed": 506, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="5db17229-dbd9-4ba9-e879-4ca795d83f12"
# scores of GridSearch CV
scores = rf.cv_results_
pd.DataFrame(scores).head()
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="6K3FJ54-XOy1" executionInfo={"status": "ok", "timestamp": 1640534559715, "user_tz": -420, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="129425f7-43d9-4918-cfb5-fb3fbb20f7e6"
# plotting accuracies with n_estimators
plt.figure()
plt.plot(scores["param_n_estimators"],
scores["mean_train_score"],
label="training accuracy")
plt.plot(scores["param_n_estimators"],
scores["mean_test_score"],
label="test accuracy")
plt.xlabel("n_estimators")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# + [markdown] id="nFmDM3cOXb1l"
# <hr>
# + [markdown] id="l8fvUpYEXgtG"
# ### **Tuning max_features**
#
# Let's see how the model performance varies with ```max_features```, which is the maximum number of features considered for splitting at a node.
# + colab={"base_uri": "https://localhost:8080/"} id="C1U8hIE5XT7t" executionInfo={"status": "ok", "timestamp": 1640534913167, "user_tz": -420, "elapsed": 113949, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="aced0e68-d958-406e-b34a-53a30fcab67a"
# GridSearchCV to find optimal max_features
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# specify number of folds for k-fold CV
n_folds = 5
# parameters to build the model on
parameters = {'max_features': [4, 8, 14, 20, 24]}
# instantiate the model
rf = RandomForestClassifier(max_depth=4)
# fit tree on training data
rf = GridSearchCV(rf, parameters,
cv=n_folds,
scoring="accuracy",return_train_score=True)
rf.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 357} id="f8A14O_dXqGd" executionInfo={"status": "ok", "timestamp": 1640534926419, "user_tz": -420, "elapsed": 509, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="29e6b8b5-449e-4a47-c7ec-c20f3c91cc15"
# scores of GridSearch CV
scores = rf.cv_results_
pd.DataFrame(scores).head()
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="WaAVZVN6X4aF" executionInfo={"status": "ok", "timestamp": 1640534928515, "user_tz": -420, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="6dc4a1f8-9139-4acf-aab8-c5c6c9d30789"
# plotting accuracies with max_features
plt.figure()
plt.plot(scores["param_max_features"],
scores["mean_train_score"],
label="training accuracy")
plt.plot(scores["param_max_features"],
scores["mean_test_score"],
label="test accuracy")
plt.xlabel("max_features")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# + [markdown] id="CVP8vRtbYKO2"
# Apparently, the training and test scores *both* seem to increase as we increase max_features, and the model doesn't seem to overfit more with increasing max_features. Think about why that might be the case.
# + [markdown] id="ebeh5tV4YOIp"
# ### **Tuning min_samples_leaf**
# + [markdown] id="bH1XagRXYRTm"
# The hyperparameter **min_samples_leaf** is the minimum number of samples required to be at a leaf node:
# - If int, then consider min_samples_leaf as the minimum number.
# - If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
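# A small worked example of the float interpretation: with
# min_samples_leaf=0.01 on a training set of 21000 rows (an illustrative
# size), each leaf must contain at least ceil(0.01 * 21000) = 210 samples.

```python
from math import ceil

min_samples_leaf = 0.01   # fraction of training samples per leaf
n_samples = 21000         # illustrative training-set size
print(ceil(min_samples_leaf * n_samples))  # 210
```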
# + [markdown] id="R0lSi4bmYkzI"
# Let's now check the optimum value for min samples leaf in our case.
# + colab={"base_uri": "https://localhost:8080/"} id="WSigPg6mYDr2" executionInfo={"status": "ok", "timestamp": 1640535046928, "user_tz": -420, "elapsed": 81601, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="63742338-f1ac-44c4-de13-3cc451ff2751"
# GridSearchCV to find optimal min_samples_leaf
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# specify number of folds for k-fold CV
n_folds = 5
# parameters to build the model on
parameters = {'min_samples_leaf': range(100, 400, 50)}
# instantiate the model
rf = RandomForestClassifier()
# fit tree on training data
rf = GridSearchCV(rf, parameters,
cv=n_folds,
scoring="accuracy",return_train_score=True)
rf.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 357} id="DO031XYZYpGA" executionInfo={"status": "ok", "timestamp": 1640535099399, "user_tz": -420, "elapsed": 497, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="a4d9260b-807e-4955-e9a3-1aa586103c5a"
# scores of GridSearch CV
scores = rf.cv_results_
pd.DataFrame(scores).head()
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="T0qs-f9TYs1d" executionInfo={"status": "ok", "timestamp": 1640535101409, "user_tz": -420, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "04579009044830588081"}} outputId="31bc5ec2-b519-4d7e-be79-11b453e6b176"
# plotting accuracies with min_samples_leaf
plt.figure()
plt.plot(scores["param_min_samples_leaf"],
scores["mean_train_score"],
label="training accuracy")
plt.plot(scores["param_min_samples_leaf"],
scores["mean_test_score"],
label="test accuracy")
plt.xlabel("min_samples_leaf")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# + [markdown] id="2luTiojRYy2m"
# You can see that the model starts to overfit as you decrease the value of min_samples_leaf.
# + [markdown] id="bfanD8MnY2wI"
# ### **Tuning min_samples_split**
#
# Let's now look at the performance of the ensemble as we vary min_samples_split.
# +
# GridSearchCV to find optimal min_samples_split
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# specify number of folds for k-fold CV
n_folds = 5
# parameters to build the model on
parameters = {'min_samples_split': range(200, 500, 50)}
# instantiate the model
rf = RandomForestClassifier()
# fit tree on training data
rf = GridSearchCV(rf, parameters,
cv=n_folds,
scoring="accuracy",return_train_score=True)
rf.fit(X_train, y_train)
# +
# scores of GridSearch CV
scores = rf.cv_results_
pd.DataFrame(scores).head()
# +
# plotting accuracies with min_samples_split
plt.figure()
plt.plot(scores["param_min_samples_split"],
scores["mean_train_score"],
label="training accuracy")
plt.plot(scores["param_min_samples_split"],
scores["mean_test_score"],
label="test accuracy")
plt.xlabel("min_samples_split")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# + [markdown] id="8iQnvdxjZG-F"
# <hr>
# + [markdown] id="-q-YqNGRZK33"
# ### **Grid Search to Find Optimal Hyperparameters**
# + [markdown] id="4kQdLuMiZQoV"
# We can now find the optimal hyperparameters using GridSearchCV.
# +
# Create the parameter grid based on the results of random search
param_grid = {
'max_depth': [4,8,10],
'min_samples_leaf': range(100, 400, 200),
'min_samples_split': range(200, 500, 200),
'n_estimators': [100,200, 300],
'max_features': [5, 10]
}
# Create a base model
rf = RandomForestClassifier()
# Instantiate the grid search model
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
cv = 3, n_jobs = -1,verbose = 1)
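Before launching a grid search like the one above, it is worth estimating how many model fits it implies. A small self-contained sketch (mirroring the same hypothetical grid and the `cv=3` setting) computes the candidate and fit counts:

```python
from functools import reduce

# Same shape of grid as above; the ranges expand to short candidate lists.
param_grid = {
    'max_depth': [4, 8, 10],
    'min_samples_leaf': list(range(100, 400, 200)),   # [100, 300]
    'min_samples_split': list(range(200, 500, 200)),  # [200, 400]
    'n_estimators': [100, 200, 300],
    'max_features': [5, 10],
}

# Number of hyperparameter combinations = product of the per-parameter list lengths
n_candidates = reduce(lambda a, b: a * b, (len(v) for v in param_grid.values()))
n_fits = n_candidates * 3  # cv=3 folds per candidate
print(n_candidates, n_fits)  # 72 candidates, 216 fits
```

This explains why the grid-search cell takes several minutes to run even on a modest grid.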
# +
# Fit the grid search to the data
grid_search.fit(X_train, y_train)
# +
# printing the optimal accuracy score and hyperparameters
print('We can get accuracy of',grid_search.best_score_,'using',grid_search.best_params_)
# + [markdown] id="R7sMmTyNZedG"
# ### **Fitting the final model with the best parameters obtained from grid search.**
# +
# model with the best hyperparameters
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(bootstrap=True,
max_depth=10,
min_samples_leaf=100,
min_samples_split=200,
max_features=10,
n_estimators=100)
# +
# fit
rfc.fit(X_train,y_train)
# +
# predict
predictions = rfc.predict(X_test)
# +
# evaluation metrics
from sklearn.metrics import classification_report,confusion_matrix
# +
print(classification_report(y_test,predictions))
# +
print(confusion_matrix(y_test,predictions))
# +
# accuracy computed by hand from the confusion matrix: (TN + TP) / total
(6753+692)/(6753+692+305+1250)
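The arithmetic above is the accuracy read off the confusion matrix: correct predictions sit on the diagonal, so accuracy is the trace divided by the total count. A small NumPy sketch with the same four counts makes that explicit:

```python
import numpy as np

# Confusion matrix with rows = actual, cols = predicted (sklearn convention):
# [[TN, FP],
#  [FN, TP]]
cm = np.array([[6753, 1250],
               [305,   692]])

accuracy = np.trace(cm) / cm.sum()  # (6753 + 692) / 9000
print(round(accuracy, 4))
```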
# + id="nbpiNLx4m3Th"
# Source file: Random Forest/Random Forest - Credit Default Prediction.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "32dbdd89-05d7-49f3-8ea5-371168b411f9"}
# # Example of usage Spark OCR
# * Load images from S3
# * Preview it
# * Recognize text
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b20875fb-c880-4291-a55d-6f37c1776ef8"}
# ## Import OCR transformers and utils
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "c9833c34-4732-40e4-a06a-676aaed670d3"}
from sparkocr.transformers import *
from sparkocr.databricks import display_images
from pyspark.ml import PipelineModel
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "45ac4061-aabd-416f-8fb4-b364bcbe2b1e"}
# ## Define OCR transformers and pipeline
# * Transform binary data to Image schema using [BinaryToImage](https://nlp.johnsnowlabs.com/docs/en/ocr_pipeline_components#binarytoimage). More details about Image Schema [here](https://nlp.johnsnowlabs.com/docs/en/ocr_structures#image-schema).
# * Recognize text using [ImageToText](https://nlp.johnsnowlabs.com/docs/en/ocr_pipeline_components#imagetotext) transformer.
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "105d51f9-fbb4-430a-b5d9-564081458f4f"}
def pipeline():
    # Transform binary data to struct image format
binary_to_image = BinaryToImage()
binary_to_image.setInputCol("content")
binary_to_image.setOutputCol("image")
# Run OCR
ocr = ImageToText()
ocr.setInputCol("image")
ocr.setOutputCol("text")
ocr.setConfidenceThreshold(65)
pipeline = PipelineModel(stages=[
binary_to_image,
ocr
])
return pipeline
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "8de9ddca-5ca6-4a89-91a9-a6981bc3ee1d"}
# ## Download images from public S3 bucket to DBFS
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b3f675ab-34fe-48e8-b944-eb9491b76932"}
# %sh
OCR_DIR=/dbfs/tmp/ocr_1
if [ ! -d "$OCR_DIR" ]; then
mkdir $OCR_DIR
cd $OCR_DIR
wget https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/ocr/datasets/news.2B.0.png.zip
unzip news.2B.0.png.zip
fi
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "76d32262-de3a-43f2-9c41-92af2186b8bd"}
display(dbutils.fs.ls("dbfs:/tmp/ocr_1/0/"))
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3958a540-cbda-40d5-92c4-b1998f015b0b"}
# ## Read images as binary files from DBFS
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "defa28e3-5aeb-4f62-927b-91a4a8c949bd"}
images_path = "/tmp/ocr_1/0/*.png"
images_example_df = spark.read.format("binaryFile").load(images_path).cache()
display(images_example_df)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "8a1a26c9-36f1-4165-b253-dc78fa2db389"}
# ## Read data from s3 directly using credentials
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "30811dc1-c99a-4e11-9ea3-cd56124e3821"}
# ACCESS_KEY = ""
# SECRET_KEY = ""
# sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", ACCESS_KEY)
# sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", SECRET_KEY)
# imagesPath = "s3a://dev.johnsnowlabs.com/ocr/datasets/news.2B/0/*.tif"
# imagesExampleDf = spark.read.format("binaryFile").load(imagesPath).cache()
# display(imagesExampleDf)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "dcfbe2ae-403f-45be-bf7d-218ed6a860f9"}
# ## Display count of images
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "0eef571c-c738-43e9-a54d-db7552bbb28b"}
images_example_df.count()
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3c81a2fe-b722-4b06-bb76-ede9f12d41c6"}
# ## Preview images using _display_images_ function
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b74cef43-64f1-4dae-b7f4-6a6eac7ade95"}
display_images(BinaryToImage().transform(images_example_df), limit=3)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d36b1bfa-e27f-47ee-8e95-17af7399f452"}
# ## Run OCR pipelines
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "cada08fc-5d8e-4689-bb9b-b8bb944e2920"}
result = pipeline().transform(images_example_df.repartition(8)).cache()
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "8ac72356-9e98-4590-8f40-818f1f693331"}
# ## Display results
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "6fe87a33-6dd6-4217-8825-1b1b438818ec"}
display(result.select("text", "confidence"))
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "bc686346-2fd3-48ed-a046-3ffa7939fae7"}
# ## Clear cache
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "5eebc75f-f16c-4fe8-8ef2-ef0e91b66118"}
result.unpersist()
# Source file: databricks/python/SparkOcrSimpleExample.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
import itertools
import numpy as np
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
plt.style.use('fivethirtyeight')
import pandas as pd
import statsmodels.api as sm
import matplotlib
matplotlib.rcParams['axes.labelsize'] = 14
matplotlib.rcParams['xtick.labelsize'] = 12
matplotlib.rcParams['ytick.labelsize'] = 12
matplotlib.rcParams['text.color'] = 'k'
df = pd.read_excel(r"D:\Downloads\Sample - Superstore.xls")  # raw string avoids backslash-escape issues in the Windows path
furniture = df.loc[df['Category'] == 'Furniture']
furniture['Order Date'].min(), furniture['Order Date'].max()
cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID', 'Customer Name', 'Segment', 'Country', 'City', 'State', 'Postal Code', 'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name', 'Quantity', 'Discount', 'Profit']
furniture.drop(cols, axis=1, inplace=True)
furniture = furniture.sort_values('Order Date')
furniture.isnull().sum()
furniture = furniture.groupby('Order Date')['Sales'].sum().reset_index()
furniture = furniture.set_index('Order Date')
furniture.index
y = furniture['Sales'].resample('MS').mean()
y['2017':]
y.plot(figsize=(15, 6))
plt.show()
from pylab import rcParams
rcParams['figure.figsize'] = 18, 8
decomposition = sm.tsa.seasonal_decompose(y, model='additive')
fig = decomposition.plot()
plt.show()
p = d = q = range(0, 2)
pdq = list(itertools.product(p, d, q))
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            mod = sm.tsa.statespace.SARIMAX(y,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)
            results = mod.fit()
            print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
        except Exception:
            # some parameter combinations fail to converge; skip them
            continue
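The grid being searched is just the Cartesian product of `p`, `d` and `q`: each of the 8 `(p, d, q)` triples is paired with each of the 8 seasonal triples, so the loop fits 64 SARIMAX models. The construction can be checked in isolation:

```python
import itertools

p = d = q = range(0, 2)
pdq = list(itertools.product(p, d, q))
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in itertools.product(p, d, q)]

print(len(pdq), len(seasonal_pdq), len(pdq) * len(seasonal_pdq))  # 8 8 64
```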
mod = sm.tsa.statespace.SARIMAX(y,
order=(1, 1, 1),
seasonal_order=(1, 1, 0, 12),
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print(results.summary().tables[1])
results.plot_diagnostics(figsize=(16, 8))
plt.show()
pred = results.get_prediction(start=pd.to_datetime('2017-01-01'), dynamic=False)
pred_ci = pred.conf_int()
ax = y['2014':].plot(label='observed')
pred.predicted_mean.plot(ax=ax, label='One-step ahead Forecast', alpha=.7, figsize=(14, 7))
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.2)
ax.set_xlabel('Date')
ax.set_ylabel('Furniture Sales')
plt.legend()
plt.show()
y_forecasted = pred.predicted_mean
y_truth = y['2017-01-01':]
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
print('The Root Mean Squared Error of our forecasts is {}'.format(round(np.sqrt(mse), 2)))
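The MSE/RMSE above are plain elementwise operations on the aligned forecast and truth series. The same computation on small arrays, with hand-checkable numbers:

```python
import numpy as np

y_forecasted = np.array([10.0, 12.0, 14.0])
y_truth = np.array([11.0, 12.0, 16.0])

mse = ((y_forecasted - y_truth) ** 2).mean()  # (1 + 0 + 4) / 3
rmse = np.sqrt(mse)
print(round(mse, 4), round(rmse, 4))
```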
pred_uc = results.get_forecast(steps=100)
pred_ci = pred_uc.conf_int()
ax = y.plot(label='observed', figsize=(14, 7))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Furniture Sales')
plt.legend()
plt.show()
furniture = df.loc[df['Category'] == 'Furniture']
office = df.loc[df['Category'] == 'Office Supplies']
furniture.shape, office.shape
cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID', 'Customer Name', 'Segment', 'Country', 'City', 'State', 'Postal Code', 'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name', 'Quantity', 'Discount', 'Profit']
furniture.drop(cols, axis=1, inplace=True)
office.drop(cols, axis=1, inplace=True)
furniture = furniture.sort_values('Order Date')
office = office.sort_values('Order Date')
furniture = furniture.groupby('Order Date')['Sales'].sum().reset_index()
office = office.groupby('Order Date')['Sales'].sum().reset_index()
furniture = furniture.set_index('Order Date')
office = office.set_index('Order Date')
y_furniture = furniture['Sales'].resample('MS').mean()
y_office = office['Sales'].resample('MS').mean()
furniture = pd.DataFrame({'Order Date':y_furniture.index, 'Sales':y_furniture.values})
office = pd.DataFrame({'Order Date': y_office.index, 'Sales': y_office.values})
store = furniture.merge(office, how='inner', on='Order Date')
store.rename(columns={'Sales_x': 'furniture_sales', 'Sales_y': 'office_sales'}, inplace=True)
store.head()
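The inner merge above produces the `Sales_x`/`Sales_y` columns because both frames carry a `Sales` column, and pandas disambiguates duplicates with default suffixes. A tiny example with made-up numbers shows the same behavior and the subsequent rename:

```python
import pandas as pd

furniture = pd.DataFrame({'Order Date': ['2017-01', '2017-02'], 'Sales': [100, 110]})
office = pd.DataFrame({'Order Date': ['2017-01', '2017-02'], 'Sales': [90, 120]})

# overlapping 'Sales' columns get the default _x / _y suffixes
store = furniture.merge(office, how='inner', on='Order Date')
store = store.rename(columns={'Sales_x': 'furniture_sales', 'Sales_y': 'office_sales'})
print(list(store.columns))  # ['Order Date', 'furniture_sales', 'office_sales']
```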
plt.figure(figsize=(20, 8))
plt.plot(store['Order Date'], store['furniture_sales'], 'b-', label = 'furniture')
plt.plot(store['Order Date'], store['office_sales'], 'r-', label = 'office supplies')
plt.xlabel('Date'); plt.ylabel('Sales'); plt.title('Sales of Furniture and Office Supplies')
plt.legend();
first_date = store.loc[np.min(np.where(store['office_sales'] > store['furniture_sales'])[0]), 'Order Date']  # .ix is removed in modern pandas; .loc works on the default RangeIndex
print("Office supplies first produced higher sales than furniture on {}.".format(first_date.date()))
from fbprophet import Prophet
furniture = furniture.rename(columns={'Order Date': 'ds', 'Sales': 'y'})
furniture_model = Prophet(interval_width=0.95)
furniture_model.fit(furniture)
office = office.rename(columns={'Order Date': 'ds', 'Sales': 'y'})
office_model = Prophet(interval_width=0.95)
office_model.fit(office)
furniture_forecast = furniture_model.make_future_dataframe(periods=36, freq='MS')
furniture_forecast = furniture_model.predict(furniture_forecast)
office_forecast = office_model.make_future_dataframe(periods=36, freq='MS')
office_forecast = office_model.predict(office_forecast)
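`make_future_dataframe(periods=36, freq='MS')` extends the training dates by 36 month-start timestamps. The date arithmetic itself can be sketched with plain pandas (the start date below is made up for illustration):

```python
import pandas as pd

last_observed = pd.Timestamp('2017-12-01')
# generate 36 month-start dates after the last observed month
future = pd.date_range(start=last_observed, periods=36 + 1, freq='MS')[1:]

print(len(future), future[0].strftime('%Y-%m-%d'), future[-1].strftime('%Y-%m-%d'))
```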
plt.figure(figsize=(18, 6))
furniture_model.plot(furniture_forecast, xlabel = 'Date', ylabel = 'Sales')
plt.title('Furniture Sales');
furniture_names = ['furniture_%s' % column for column in furniture_forecast.columns]
office_names = ['office_%s' % column for column in office_forecast.columns]
merge_furniture_forecast = furniture_forecast.copy()
merge_office_forecast = office_forecast.copy()
merge_furniture_forecast.columns = furniture_names
merge_office_forecast.columns = office_names
forecast = pd.merge(merge_furniture_forecast, merge_office_forecast, how = 'inner', left_on = 'furniture_ds', right_on = 'office_ds')
forecast = forecast.rename(columns={'furniture_ds': 'Date'}).drop('office_ds', axis=1)
forecast.head()
plt.figure(figsize=(10, 7))
plt.plot(forecast['Date'], forecast['furniture_trend'], 'b-')
plt.plot(forecast['Date'], forecast['office_trend'], 'r-')
plt.legend(); plt.xlabel('Date'); plt.ylabel('Sales')
plt.title('Furniture vs. Office Supplies Sales Trend');
# Source file: Time series analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Benchmarking OpenBLAS
#
# 13 February 2019
#
# _<NAME> - CU: 175904_
#
# _<NAME> - CU: 175921_
#
# _<NAME> - CU: 175875_
#
# ## OpenBLAS
#
# We confirm that numpy is linked against a BLAS/MKL build.
import numpy as np
np.__config__.show()
# ## Benchmark Execution
# +
# #!/usr/bin/env python3
# -*- coding: UTF-8 -*-
# Based on http://stackoverflow.com/questions/11443302/compiling-numpy-with-openblas-integration
import numpy as np
from time import time
print("==========================================================================")
print("OpenBLAS configuration values")
np.__config__.show()
print("========================================================================\n")
# Set the seed for reproducibility
np.random.seed(0)
# Matrix size
size = 4096
# Matrix declarations
A, B = np.random.random((size, size)), np.random.random((size, size))
C, D = np.random.random((size * 128,)), np.random.random((size * 128,))
E = np.random.random((int(size / 2), int(size / 4)))
F = np.random.random((int(size / 2), int(size / 2)))
# Dot product
F = np.dot(F, F.T)
G = np.random.random((int(size / 2), int(size / 2)))
# Multiply the matrices 20 times (N) to get an average.
N = 20
t = time()
for i in range(N):
np.dot(A, B)
delta = time() - t
print('Dot product of two %dx%d matrices took %0.2f seconds on average.' % (size, size, delta / N))
del A, B
# Multiply the vectors 5000 times to get an average.
N = 5000
t = time()
for i in range(N):
np.dot(C, D)
delta = time() - t
print('Dot product of two vectors of length %d took %0.2f ms on average.' % (size * 128, 1e3 * delta / N))
del C, D
# SVD: average of 3.
N = 3
t = time()
for i in range(N):
np.linalg.svd(E, full_matrices = False)
delta = time() - t
print("SVD of a %dx%d matrix took %0.2f seconds." % (size / 2, size / 4, delta / N))
del E
# Cholesky: average of 3.
N = 3
t = time()
for i in range(N):
np.linalg.cholesky(F)
delta = time() - t
print("Cholesky decomposition of a %dx%d matrix took %0.2f seconds." % (size / 2, size / 2, delta / N))
# Eigenvalues and eigenvectors: average of 3.
t = time()
for i in range(N):
np.linalg.eig(G)
delta = time() - t
print("Eigendecomposition of a %dx%d matrix took %0.2f seconds." % (size / 2, size / 2, delta / N))
# -
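The benchmark cell repeats the same measure-N-times-and-average pattern for every operation. It can be factored into a small helper; this is a sketch, not part of the original script:

```python
from time import time

def avg_seconds(fn, repeats):
    """Run fn() `repeats` times and return the mean wall-clock time per run."""
    start = time()
    for _ in range(repeats):
        fn()
    return (time() - start) / repeats

# Example with a cheap operation
elapsed = avg_seconds(lambda: sum(range(1000)), repeats=5)
print(elapsed >= 0.0)
```

Each `np.dot`, `np.linalg.svd`, `np.linalg.cholesky` or `np.linalg.eig` loop above is an instance of this pattern with a different `fn` and `repeats`.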
# ## Performance Comparison
#
# These files were generated by running the `comp.py` script, which produces a CSV with the values we will use to test and compare performance.
# ### Intel MKL
# +
import pandas as pd
import numpy as np
df = pd.read_csv("compare.mkl_rt.csv")
df.head()
# -
# ### OpenBLAS
df_openblas = pd.read_csv("compare.openblas.csv")
df_openblas.head(3)
# ### Atlas
#
# Although it is labelled f77blas, this just means it was compiled with the Fortran 77 compiler.
df_atlas = pd.read_csv("compare.f77blas.csv")
df_atlas.head(3)
# ### Numpy
#
# Without OpenBLAS
df_noblas = pd.read_csv("compare.noblas.csv")
df_noblas.head(3)
# ## Cupy + CuBLAS
#
# With Nvidia CUDA
df_cupy = pd.read_csv("compare.cupy+cublas+10.200000.csv")
df_cupy.head(3)
# ## Plots
df.Operación.unique()
# +
# %matplotlib inline
from matplotlib import pyplot as plt
fig, axs = plt.subplots(3, 2, sharex=False, sharey=False)
fig.set_size_inches(12, 12)
## MKL runs
df2 = df[df.Operación == 'Producto Punto de Matrices']
axs[0, 0].set_title(df2.iloc[0,2])
axs[0, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="blue", label="MKL")
df2 = df[df.Operación == 'Producto de Matrices']
axs[0, 1].set_title(df2.iloc[0,2])
axs[0, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="blue", label="MKL")
df2 = df[df.Operación == 'Producto Punto de 2 Vectores']
axs[1, 0].set_title(df2.iloc[0,2])
axs[1, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="blue", label="MKL")
df2 = df[df.Operación == 'SVD']
axs[1, 1].set_title(df2.iloc[0,2])
axs[1, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="blue", label="MKL")
df2 = df[df.Operación == 'Cholesky']
axs[2, 0].set_title(df2.iloc[0,2])
axs[2, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="blue", label="MKL")
df2 = df[df.Operación == 'Eigen']
axs[2, 1].set_title(df2.iloc[0,2])
axs[2, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="blue", label="MKL")
# Axis labels
axs[0, 0].set_ylabel('log(Tiempo)')
axs[1, 0].set_ylabel('log(Tiempo)')
axs[2, 0].set_ylabel('log(Tiempo)')
axs[2, 0].set_xlabel('log(Tamaño)')
axs[2, 1].set_xlabel('log(Tamaño)')
# ## OpenBLAS runs
df_blas = pd.read_csv("compare.openblas.csv")
df2 = df_blas[df_blas.Operación == 'Producto Punto de Matrices']
axs[0, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="red", label="OpenBLAS")
df2 = df_blas[df_blas.Operación == 'Producto Punto de 2 Vectores']
axs[1, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="red", label="OpenBLAS")
df2 = df_blas[df_blas.Operación == 'SVD']
axs[1, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="red", label="OpenBLAS")
df2 = df_blas[df_blas.Operación == 'Cholesky']
axs[2, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="red", label="OpenBLAS")
df2 = df_blas[df_blas.Operación == 'Eigen']
axs[2, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="red", label="OpenBLAS")
# ## ATLAS runs
df_atlas = pd.read_csv("compare.f77blas.csv")
df2 = df_atlas[df_atlas.Operación == 'Producto Punto de Matrices']
axs[0, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="magenta", label="ATLAS")
df2 = df_atlas[df_atlas.Operación == 'Producto Punto de 2 Vectores']
axs[1, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="magenta", label="ATLAS")
df2 = df_atlas[df_atlas.Operación == 'SVD']
axs[1, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="magenta", label="ATLAS")
df2 = df_atlas[df_atlas.Operación == 'Cholesky']
axs[2, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="magenta", label="ATLAS")
df2 = df_atlas[df_atlas.Operación == 'Eigen']
axs[2, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="magenta", label="ATLAS")
# ## Runs without OpenBLAS
df_noblas = pd.read_csv("compare.noblas.csv")
df2 = df_noblas[df_noblas.Operación == 'Producto Punto de Matrices']
axs[0, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="black", label="Sin BLAS")
df2 = df_noblas[df_noblas.Operación == 'Producto Punto de 2 Vectores']
axs[1, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="black", label="Sin BLAS")
df2 = df_noblas[df_noblas.Operación == 'SVD']
axs[1, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="black", label="Sin BLAS")
df2 = df_noblas[df_noblas.Operación == 'Cholesky']
axs[2, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="black", label="Sin BLAS")
df2 = df_noblas[df_noblas.Operación == 'Eigen']
axs[2, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="black", label="Sin BLAS")
## CuBLAS runs
df_cupy = pd.read_csv("compare.cupy+cublas+10.200000.csv")
df2 = df_cupy[df_cupy.Operación == 'Producto Punto de Matrices']
axs[0, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="green", label="CuPY+CuBLAS")
df2 = df_cupy[df_cupy.Operación == 'Producto de Matrices']
axs[0, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="green", label="CuPY+CuBLAS")
df2 = df_cupy[df_cupy.Operación == 'Producto Punto de 2 Vectores']
axs[1, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="green", label="CuPY+CuBLAS")
df2 = df_cupy[df_cupy.Operación == 'SVD']
axs[1, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="green", label="CuPY+CuBLAS")
df2 = df_cupy[df_cupy.Operación == 'Cholesky']
axs[2, 0].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="green", label="CuPY+CuBLAS")
df2 = df_cupy[df_cupy.Operación == 'Eigen']
axs[2, 1].plot(np.log(df2.Tamaño), np.log(df2.Tiempo), color="green", label="CuPY+CuBLAS")
fig.legend()
plt.savefig("plot.png")
plt.show()
# -
# ## Conclusions
#
# From the plots above, comparing different sizes and operations, we find that the different compiled-library optimizations (Intel Math Kernel Library, OpenBLAS, NVIDIA cuBLAS, ATLAS and plain NumPy) show clearly different computational performance.
#
# The best performers are generally:
# * Intel MKL
# * OpenBLAS
# * NVIDIA CuBLAS
#
# While running without any BLAS library is much slower, it is notable that ATLAS does not achieve better performance than the two libraries mentioned above.
#
# Due to licensing restrictions we have not distributed the MKL library in a Docker image, but the other implementations used for these tests are available. With this library we also consistently observed erratic behavior in the Cholesky and matrix dot product benchmarks, where smaller matrices took longer to run.
#
# ### General notes on the tests
#
# The tests were run on a powerful machine with the following characteristics, in a controlled, non-shared environment (unlike public IaaS):
# * 64 GB of RAM
# * CentOS 7
# * Xeon Gold Skylake with 16 cores across 2 sockets
# * NVIDIA TESLA M10 - with 960 cores
# * Note: the containers are built on Ubuntu 16.04
#
# The plots use a logarithmic scale on both axes.
#
# On the y-axis a smaller number means less time, i.e. better performance.
#
# ## Bibliography and References:
#
# * [Building Numpy](https://www.numpy.org/devdocs/user/building.html#ubuntu)
# * [Building ATLAS + Numpy](https://tillahoffmann.github.io/2016/06/01/compiling-numpy-with-ATLAS.html)
# * [Sources of our Docker images](https://github.com/philwebsurfer/analisis-numerico-computo-cientifico/tree/master/analisisnum-jorgealtamirano)
# * [Docker Hub: OpenBLAS](https://hub.docker.com/r/philwebsurfer/openblas-py3)
# * [Docker Hub: ATLAS](https://hub.docker.com/r/philwebsurfer/atlas-py3)
# * [Docker Hub: noblas](https://hub.docker.com/r/philwebsurfer/noblas-py3)
# * [Matplotlib subplots](https://matplotlib.org/gallery/recipes/create_subplots.html)
# * [Matplotlib legends](https://matplotlib.org/users/legend_guide.html)
# * [Matplotlib titles](https://matplotlib.org/gallery/text_labels_and_annotations/titles_demo.html#sphx-glr-gallery-text-labels-and-annotations-titles-demo-py)
# Source file: analisisnum-jorgealtamirano/openblas-py3/benchmark.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Import Dependencies
import pandas as pd
# Scrape the data from the web using pandas
#Assign the 2016-2020 URLs to variables
url_2016 = 'https://en.wikipedia.org/wiki/2016_NFL_Draft'
url_2017 = 'https://en.wikipedia.org/wiki/2017_NFL_Draft'
url_2018 = 'https://en.wikipedia.org/wiki/2018_NFL_Draft'
url_2019 = 'https://en.wikipedia.org/wiki/2019_NFL_Draft'
url_2020 = 'https://en.wikipedia.org/wiki/2020_NFL_Draft'
#Read the 2016-2020 URLs into tables using Pandas
table_2016 = pd.read_html(url_2016)
table_2017 = pd.read_html(url_2017)
table_2018 = pd.read_html(url_2018)
table_2019 = pd.read_html(url_2019)
table_2020 = pd.read_html(url_2020)
#Check the variable type
type(table_2016)
#Check the Length of the table
len(table_2016)
# Clean the Data Using Pandas
#Convert the tables to a dataframe
df_2016_combine = table_2016[4]
df_2017_combine = table_2017[4]
df_2018_combine = table_2018[4]
df_2019_combine = table_2019[4]
df_2020_combine = table_2020[4]
#Clean the 2016 Dataframe
df_2016_combine = df_2016_combine.drop(columns = ['Unnamed: 0', 'Notes'])
df_2016_combine = df_2016_combine.rename(columns = {"Pick #": "Pick_no"})
df_2016_combine
#Clean the 2017 Dataframe
df_2017_combine = df_2017_combine.drop(columns = ['Unnamed: 0', 'Notes'])
df_2017_combine = df_2017_combine.rename(columns = {"Pick #": "Pick_no"})
df_2017_combine
#Clean the 2018 Dataframe
df_2018_combine = df_2018_combine.drop(columns = ['Unnamed: 0', 'Notes'])
df_2018_combine = df_2018_combine.rename(columns = {"Pick #": "Pick_no"})
df_2018_combine
#Clean the 2019 Dataframe
df_2019_combine = df_2019_combine.drop(columns = ['Unnamed: 0', 'Notes'])
df_2019_combine = df_2019_combine.rename(columns = {"Pick #": "Pick_no"})
df_2019_combine
#Clean the 2020 Dataframe
df_2020_combine = df_2020_combine.drop(columns = ['Unnamed: 0', 'Notes'])
df_2020_combine = df_2020_combine.rename(columns = {"Pick #": "Pick_no"})
df_2020_combine
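# +
# Hypothetical illustration: the per-year cleanup above can be written as one
# loop. The toy frames below stand in for the scraped Wikipedia tables (the
# column names mirror the real ones, but the rows are made up).
import pandas as pd
toy_tables = {
    year: pd.DataFrame({"Unnamed: 0": ["*"], "Pick #": [1], "Notes": [""], "Player": ["A. Player"]})
    for year in range(2016, 2021)
}
cleaned = {}
for year, df in toy_tables.items():
    df = df.drop(columns=["Unnamed: 0", "Notes"]).rename(columns={"Pick #": "Pick_no"})
    df["year"] = str(year)
    cleaned[year] = df
# -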
# Append the data into a single DataFrame to post it to the database
# Add a year column for data storage Purposes
df_2016_combine['year']='2016'
df_2017_combine['year']='2017'
df_2018_combine['year']='2018'
df_2019_combine['year']='2019'
df_2020_combine['year']='2020'
df_2016_combine
# +
# Concatenate the 5 years' DataFrames into one DataFrame for storage purposes
# (DataFrame.append was deprecated and later removed from pandas; pd.concat replaces it)
draft_df = pd.concat(
    [df_2020_combine, df_2019_combine, df_2018_combine, df_2017_combine, df_2016_combine],
    ignore_index=True, verify_integrity=True)
draft_df
| simon_fix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NN Regression with Keras
# - **Created by <NAME>**
# - **Created on Oct 4, 2019**
# General Libraries
import pandas as pd
import numpy as np
from math import sqrt
from platform import python_version
from IPython.display import Image
# ML Libraries - Sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics import *
from scipy.interpolate import interp1d
# ML Libraries - Keras
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.wrappers.scikit_learn import KerasRegressor
from keras.optimizers import *
from keras.utils.vis_utils import plot_model
# Load Plot libraries
import matplotlib.pyplot as plt
# Framework version
import tensorflow as tf  # used below but missing from the imports above
tf.logging.set_verbosity(tf.logging.ERROR)  # TF 1.x logging API
print('Python version:', python_version(), ', Keras version:', keras.__version__, ', TensorFlow version:', tf.__version__)
# ## 1. Load and show data
# The first thing we do is load the input dataset.
# Read raw data
dataURL = "../data/dataset-single.csv"
raw_data = pd.read_csv(dataURL)
# Dataframe rows and columns
raw_data.shape
# Show default data types
raw_data.dtypes
# Preview the first 5 lines of the loaded data
raw_data.head()
# ## 2. Prepare the data to Learn
# Now the data is prepared for training the NN model, based on the results of the Data Profiling.
# Create new dataframe
new_data = raw_data.copy(deep=True)
# List of variables to eliminate based on the Data Profiling
delete_cols = ["WellID", "SpgO", "SpgGP"]
delete_cols
# Remove non-relevant columns
new_data.drop(columns=delete_cols, axis=1, inplace=True)
# +
# Data quality: convert date to normalized integer
date_var = "Date"
date_fields = [date_var]
for field in date_fields:
if field in new_data:
new_data[field] = pd.to_numeric(pd.to_datetime(new_data[field]))
if field == date_var:
date_max = new_data[field].max()
new_data[field] = (new_data[field] / new_data[field].max())
# -
# Set deadlines values
deadline_list = ["2018-09-01", "2018-10-01"]
deadline = pd.to_numeric(pd.to_datetime(deadline_list))
date_val = deadline[0] / date_max
date_test = deadline[1] / date_max
print("Date_val:", date_val, ", date_test:", date_test)
# #### Showing new dataframe stats
# Show default data types
new_data.dtypes
# Preview the first 5 lines of the processed data
new_data.head()
# ## 3. Create Train/Validation/Test datasets
# Now the input dataset is separated into 3 new datasets: training (history minus 2 months), validation (last month) and testing (current month).
# Function that interpolates the real value (oil well test)
def get_estimated_value(kind_method=''):
    ### kind_method: '', 'cubic', 'nearest', 'previous', 'next' ###
temp_data = new_data[["Test_Oil"]].dropna(thresh=1)
x = list(temp_data.index)
y = list(temp_data.Test_Oil)
x_min = min(x)
x_max = max(x)
x_new = np.linspace(x_min, x_max, num=(x_max-x_min)+1, endpoint=True)
if kind_method == '':
f = interp1d(x, y)
else:
f = interp1d(x, y, kind=kind_method)
y_new = f(x_new)
return y_new
# Create pretty x axis labels
def get_x_labels(all_labels):
x_labels = []
for ix in range(len(all_labels)):
if ix % 100 == 0:
x_labels.append(all_labels[ix])
else:
x_labels.append('')
return x_labels
# Find deadlines indexes
split_val = int(new_data[new_data[date_var] == date_val].index[0])
split_test = int(new_data[new_data[date_var] == date_test].index[0])
print("Split validation index:", split_val, ", split test index:", split_test)
# Split into input (X) and output (Y) vectors
dataset = new_data.values
nCols = dataset.shape[1] - 1
x_data = dataset[:, 0:nCols]
y_data = dataset[:, nCols]
y_estimated = get_estimated_value()
xs = range(len(x_data))
xticks = get_x_labels(raw_data.Date)
# Plot chart
plt.figure(figsize = (18, 6))
plt.plot(xs, y_estimated, '--', color='green')
plt.plot(xs, y_data, 'o', color='darkgreen')
plt.legend(['Estimated Oil', 'Well Test Oil'], loc='best')
plt.title('Well Tests vs Estimated Oil', fontsize = 14)
plt.xlabel('Date', fontsize = 10)
plt.ylabel('Oil (bbls)', fontsize = 10)
plt.xticks(xs, xticks, fontsize = 10, rotation = 50)
plt.show()
# +
# Split into train-validation and test datasets
test_perc = (len(x_data) - split_test) / len(x_data)
x_train, x_test, y_train, y_test = train_test_split(x_data, y_estimated, test_size=test_perc, shuffle=False)
# Split into train and validation datasets
val_perc = (len(x_train) - split_val) / len(x_train)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=val_perc, shuffle=False)
print("Train rows:", len(x_train), ", Validation rows:", len(x_val), ", Test rows:", len(x_test))
# -
# ## 4. Train Model
# In **machine learning**, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned.
# The hyperparameters that we will use next were experimentally selected. Ideally, they would be chosen with a more systematic method such as grid search, random search or Bayesian optimisation.
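# +
# Illustration only: a tiny random search over a hyper-parameter grid, as a
# more systematic alternative to hand-picking values. The grid values are
# assumptions; in practice each sampled combination would be used to build,
# train and score a model on the validation set.
import itertools
import random
param_grid = {"units": [250, 500], "learn_rate": [0.01, 0.001], "batch_size": [250, 500]}
all_combos = [dict(zip(param_grid, values)) for values in itertools.product(*param_grid.values())]
random.seed(0)
sampled = random.sample(all_combos, k=4)  # random search tries a subset of the full grid
# -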
# Set NN hyper-params
curr_n = x_data.shape[1]
curr_model = 'larger'
curr_units = 500
curr_optimizer = 'adam'
curr_loss = 'mean_squared_error'
curr_metric = 'mse'
curr_learn_rate = 0.001
curr_activate = 'LeakyReLU'
curr_epochs = 5000
curr_batch_size = 500
# Create a ANN model
def create_model(curr_model, n, curr_units, curr_optimizer, curr_loss, curr_metric, curr_learn_rate, curr_activate):
model = Sequential()
# define model
if curr_model == "baseline":
# Create model
model.add(Dense(curr_units, input_dim=n, kernel_initializer='normal', activation='relu')) #leaky relu
model.add(Dense(1, kernel_initializer='normal'))
elif curr_model == "larger":
# Input - Layer
model.add(Dense(curr_units, input_dim=n, kernel_initializer='normal', activation='relu'))
model.add(Dense(int(curr_units / 2), kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
elif curr_model == "deep":
# Input - Layer
model.add(Dense(curr_units, input_dim=n, kernel_initializer='normal', activation='relu'))
# Hidden - Layers
model.add(Dropout(0.3, noise_shape=None, seed=None))
model.add(Dense(int(curr_units / 2), kernel_initializer='normal', activation = "relu"))
model.add(Dropout(0.2, noise_shape=None, seed=None))
model.add(Dense(int(curr_units / 2), kernel_initializer='normal', activation = "relu"))
# Hidden - Layers
model.add(Dense(1, kernel_initializer='normal'))
elif curr_model == "wider":
# Create model
model.add(Dense(curr_units, input_dim=n, kernel_initializer='normal', activation=curr_activate))
model.add(Dense(1, kernel_initializer='normal'))
elif curr_model == "lstm":
# Create model
model = Sequential()
# Show model summary
print(model.summary())
# Compile model
if curr_optimizer == "adam":
opAdam = Adam(lr=curr_learn_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model.compile(loss=curr_loss, optimizer=opAdam, metrics=[curr_metric])
# Return model
return model
# Create the model
model = create_model(curr_model, curr_n, curr_units, curr_optimizer, curr_loss, curr_metric, curr_learn_rate, curr_activate)
# Plot Keras model structure
plot_model(model, show_shapes=True, show_layer_names=True)
# Fit the model
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=curr_epochs, batch_size=curr_batch_size, verbose=0)
# ## 5. Make predictions and calculate error
# Calculate model errors (RMSE, MAE, MAPE)
def calculate_error(y_true, y_pred, eType):
error = 0
# Calculations
if eType == "RMSE":
error = sqrt(mean_squared_error(y_true, y_pred))
elif eType == "MAE":
error = mean_absolute_error(y_true, y_pred)
elif eType == "MAPE":
y_true, y_pred = np.array(y_true), np.array(y_pred)
error = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
# Return error metric value
return error
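# +
# Quick hand-check of the three metrics on toy values (independent of sklearn,
# using only numpy and math):
import numpy as np
from math import sqrt
y_true_toy = np.array([100.0, 200.0, 300.0])
y_pred_toy = np.array([110.0, 190.0, 330.0])
rmse_toy = sqrt(np.mean((y_true_toy - y_pred_toy) ** 2))        # root mean squared error
mae_toy = np.mean(np.abs(y_true_toy - y_pred_toy))              # mean absolute error
mape_toy = np.mean(np.abs((y_true_toy - y_pred_toy) / y_true_toy)) * 100  # mean absolute % error
# -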
# ### Make Train predictions
# Make train predictions
y_predict = model.predict(x_train, batch_size=curr_batch_size)
# Calculate validation errors
train_rmse = calculate_error(y_train, y_predict, "RMSE")
train_mae = calculate_error(y_train, y_predict, "MAE")
train_mape = calculate_error(y_train, y_predict, "MAPE")
print('Train RMSE:', train_rmse, ', train MAE:', train_mae, ', train MAPE:', train_mape)
# ### Make Validation predictions
# Make validation predictions
y_predict = model.predict(x_val, batch_size=curr_batch_size)
# Calculate validation errors
val_rmse = calculate_error(y_val, y_predict, "RMSE")
val_mae = calculate_error(y_val, y_predict, "MAE")
val_mape = calculate_error(y_val, y_predict, "MAPE")
print('Validation RMSE:', val_rmse, ', validation MAE:', val_mae, ', validation MAPE:', val_mape)
# ### Make Test predictions
# Make test predictions
y_predict = model.predict(x_test, batch_size=curr_batch_size)
# Calculate test errors
test_rmse = calculate_error(y_test, y_predict, "RMSE")
test_mae = calculate_error(y_test, y_predict, "MAE")
test_mape = calculate_error(y_test, y_predict, "MAPE")
print('Test RMSE:', test_rmse, ', test MAE:', test_mae, ', test MAPE:', test_mape)
# ## 6. Plot Results
# Make model predictions
y_predict = model.predict(x_data, batch_size=curr_batch_size)
len(y_predict)
# Plot chart
plt.figure(figsize = (18, 6))
plt.plot(xs, y_predict, '-', color='green')
plt.plot(xs, y_data, 'o', color='darkgreen')
plt.plot(xs, y_estimated + 500, '--', color='red')
plt.plot(xs, y_estimated - 500, '--', color='red')
plt.legend(['Predicted Oil', 'Well Test Oil', 'Upper Limit', 'Lower Limit'], loc='best')
plt.title('Well Tests vs Predicted Oil', fontsize = 14)
plt.xlabel('Date', fontsize = 10)
plt.ylabel('Oil (bbls)', fontsize = 10)
plt.xticks(xs, xticks, fontsize = 10, rotation = 50)
plt.show()
# Daily Difference
y_diff = [x1 - x2 for (x1, x2) in zip(y_predict, y_estimated)]
print('Avg Difference:', (sum(np.absolute(y_diff)) / len(y_diff))[0], 'bbls')
# Difference between Well Tests vs Oil Prediction
plt.figure(figsize = (18, 6))
plt.plot(xs, y_diff, '-', color='black')
plt.legend(['Difference'], loc='best')
plt.title('Difference between Well Tests vs Oil Prediction', fontsize = 14)
plt.xlabel('Date', fontsize = 10)
plt.ylabel('Oil (bbls)', fontsize = 10)
plt.xticks(xs, xticks, fontsize = 10, rotation = 50)
plt.show()
# <hr>
# <p><a href="https://ansegura7.github.io/Keras_RegressionNN/">« Home</a></p>
| code/RegressionNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using Cholesky and Singular Value Decomposition to generated correlated random numbers
# ## The problem:
# The ability to simulate correlated risk factors is key to many risk models. Historical Simulation achieves this implicitly, by using actual timeseries data for risk factors and applying changes for all risk factors for a given day, for a large number of days (250 or 500 typically). The empirically observed correlations, as well as the means and standard deviations, are implicitly embedded across the historical timeseries data sets.
#
# If we are doing *Monte Carlo* simulation however we need to do something different, since random drawings from a Normal (Gaussian) distribution will be uncorrelated - whereas real data will exhibit correlations. Therefore a technique must be developed to transform uncorrelated random variables into variables which exhibit the empirically observed correlations.
#
# In this Jupyter notebook we explore some techniques for producing correlated random variables and variations on these techniques.
# - Cholesky Factorisation : $LL^T=\Sigma$, using both covariance and correlation matrix variations to generate trials
# - Singular Value Decomposition : $UDV^T=\Sigma$ [TODO - help appreciated!]
# ## Theory - Cholesky Factorisation approach:
# Consider a random vector, $X$, consisting of uncorrelated random variables, each random variable, $X_i$, having zero mean and unit variance ($X\sim N(0,1)$). What we want is a technique for converting these standard normal variables into correlated variables which exhibit the empirically observed means and variances of the problem we are modelling.
#
#
# - Useful identities and results:
# - $\mathbb E[XX^T] = I$, where $X\sim N(0,1)$, since $Cov[X]=\mathbb E [XX^T] - \mathbb E[X] \mathbb E[X]^T$, and here $\mathbb E[X]=0$ and $Cov[X]=I$
# - To show that we can create new, correlated, random variables $Y$, where $Y=LX$ and
# - $L$ is the Cholesky factorisation matrix (see above "Cholesky"),
# - X is a vector of independent uncorrelated variables from a Normal distribution with mean of zero and variance of one : $\boxed {X\sim N(0,1)}$
# - $Cov[Y] = \mathbb E[YY^T] = \mathbb E[(LX)(LX)^T] = L\,\mathbb E[XX^T]\,L^T = LL^T = \Sigma$
#
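# +
# A minimal numeric sketch of the result above (numpy only): with Y = LX and
# LL^T = Sigma, the empirical covariance of the transformed draws approaches
# the target matrix as the sample size grows.
import numpy as np
rng = np.random.default_rng(0)
target = np.array([[1.0, 0.8],
                   [0.8, 1.0]])        # target covariance (also the correlation here)
L = np.linalg.cholesky(target)         # lower-triangular factor
X = rng.standard_normal((2, 200_000))  # uncorrelated N(0,1) draws
Y = L @ X                              # correlated draws
empirical = np.cov(Y)                  # should be close to `target`
# -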
import pandas as pd
from IPython.display import display, Math, Latex, IFrame
#import pandas.io.data as pd_io
from pandas_datareader import data, wb
import numpy as np
import scipy as sci
G=pd.DataFrame(np.random.normal(size=(10000000,5)))
m=pd.DataFrame(np.matmul(G.transpose(), G))
display(Math(r'Demonstration~of~~ \mathbb E[XX^T] = I, ~~where~X\sim N(0,1)'))
print(m/10000000)
import pandas as pd
from pandas_datareader import data, wb
import numpy as np
import scipy as sci
stocks=['WDC', 'AAPL', 'IBM', 'MSFT', 'ORCL']
p=data.DataReader(stocks,data_source='google')#[['Adj Close']]
print(type(p))
from pivottablejs import pivot_ui
pivot_ui(m)
df=p.ix[0]
#df.pop('ATML') get rid of duff entry with NaNs!! - handy as you can just remove (and optionally save) a chunk!!
df=np.log(df/df.shift(1) )
df=df.dropna()
print("Days:{}".format(len(df)))
corr=df.corr()
print(corr)
chol=np.linalg.cholesky(corr)
#chol=sci.linalg.cholesky(corr, lower=True)
print(chol)
sigma=df.std()
mu=df.mean()
print("sigma=\n{}\n mu=\n{}".format(sigma,mu))
#Now generate random normal samples with the observed means ("mu"s) and st_devs ("sigma"s)
#G_rands=np.random.normal(loc=mu,scale=sigma,size=(1000,len(sigma)))
G_rands=pd.DataFrame(np.random.normal(size=(1000000,len(sigma))))
#G_Corr_rand=G_rands.dot(chol)
G_Corr_rand=(chol.dot(G_rands.transpose())).transpose()
# Now apply the std dev and mean by multiplation and addition, respectively - return as pandas df
G_=pd.DataFrame(G_Corr_rand * np.broadcast_to(sigma,(1000000,len(sigma))) + np.broadcast_to(mu,(1000000,len(mu))))
print(G_.head())
print(corr)
print(G_.corr())
df.describe().T
# +
import pandas as pd
from pandas_datareader import data, wb
import numpy as np
import scipy as sci
stocks=['WDC', 'AAPL', 'IBM', 'MSFT', 'ORCL']
p=data.DataReader(stocks,data_source='yahoo')[['Adj Close']]
df=p.ix[0] #convert pandas "panel" to pandas "data frame"
df=np.log(df/df.shift(1) )
df=df.dropna()
cov=df.cov()
chol=np.linalg.cholesky(cov) # default is left/lower; use chol=sci.linalg.cholesky(cov, lower=False) otherwise
print ('Cholesky L=\n{}, \nL^T=\n{},\nLL^T=\n{}'.format(chol, chol.transpose(), chol.dot(chol.T)))
G_rands=pd.DataFrame(np.random.normal(size=(1000000,len(sigma))))
G_=pd.DataFrame((chol.dot(G_rands.transpose())).transpose())
print(G_.head())
print(cov)
print(G_.cov())
# -
#Check for tiny size - LL^T should be equal to cov, so diff should be negligible
chol.dot(chol.T) - cov
print((chol.dot(chol.T) - cov).max())
| Misc/.ipynb_checkpoints/Cholesky+and+SVD+Correlated+Random-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## CPT Poem Generator
#
# ### How to use CPT
# * In your terminal, type `git clone https://github.com/NeerajSarwan/CPT.git` to download CPT
# * If you don't have `git`, download and install it.
# * After downloading CPT, type `cd CPT` to enter the folder
# * Your code should be created under this folder, so that you can run the code below.
# * Python CPT open source: https://github.com/NeerajSarwan/CPT/blob/master/CPT.py
#
# ### Poem Generation Methods with CPT
# * Character based poem generation
# * Word based poem generation
# * To compare with LSTM poem generator: https://github.com/hanhanwu/Hanhan_Data_Science_Practice/blob/master/sequencial_analysis/try_poem_generator.ipynb
#
#
# * Download the sonnets text from : https://github.com/pranjal52/text_generators/blob/master/sonnets.txt
from CPT import *
import pandas as pd
sample_poem = open('sample_sonnets.txt').read().lower().replace('\n', '') # smaller data sample
all_poem = open('sonnets.txt').read().lower().replace('\n', '') # larger data sample
def generate_char_seq(whole_str, n):
"""
Generate a dataframe, each row contains a sequence with length n.
Next sequence is 1 character of the previous sequence.
param: whole_str: original text in string format
param: n: the length of each sequence
return: a dataframe that contains all the sequences.
"""
dct = {}
idx = 0 # the index of the dataframe, the key of each key-value in the dictionary
for i in range(len(whole_str)-n):
sub_str = whole_str[i:i+n]
dct[idx] = {}
for j in range(n):
dct[idx][j] = sub_str[j]
idx += 1
df = pd.DataFrame(dct)
return df
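# +
# Standalone restatement of the sliding-window logic above, on a tiny string,
# to show the shape of the output (each row is the previous row shifted by one
# character):
import pandas as pd
def char_windows(s, n):
    rows = {i: dict(enumerate(s[i:i + n])) for i in range(len(s) - n)}
    return pd.DataFrame(rows).T
demo = char_windows("abcdef", 3)
# -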
# ## Method 1 Part 1 - Character Based Poem Generator (Smaller Sample Data)
# I'm using 410 character sequence to train
train_seq_len = 410
training_df = generate_char_seq(sample_poem, train_seq_len).T
training_df.head()
# The testing data will use 20 characters, and I only choose 7 separate rows from all sequences.
## The poem output will try to predict characters based on the 20 characters in each row, 7 rows in total
test_seq_len = 20
all_testing_df = generate_char_seq(sample_poem, test_seq_len).T
testing_df = all_testing_df.iloc[[77, 99, 177, 199, 277, 299, 410],:]
testing_df.head()
# This Python open-source library has a slightly unusual model-input requirement, so I will use its own functions to load the data.
training_df.to_csv("train.csv", index=False)
testing_df.to_csv("test.csv")
model = CPT()
train, test = model.load_files("train.csv", "test.csv")
model.train(train)
predict_len = 10
predictions = model.predict(train,test,test_seq_len,predict_len)
predictions
# ### Generate the poem
for i in range(testing_df.shape[0]):
all_char_lst = testing_df.iloc[i].tolist()
all_char_lst.append(' ')
all_char_lst.extend(predictions[i])
print(''.join(all_char_lst))
# ### Observation
# * The data size is very small here. But compared with LSTM poem generation, CPT is much, much faster, and the output is more decent than the LSTM results.
# ## Method 1 Part 2 - Character Based Poem Generator (Larger Sample Data)
# I'm using 410 character sequence to train
train_seq_len = 410
training_df = generate_char_seq(all_poem, train_seq_len).T
training_df.head()
# The testing data will use 30 characters, and I choose 12 separate rows from all sequences.
## The poem output will try to predict characters based on the 30 characters in each row, 12 rows in total
test_seq_len = 30
all_testing_df = generate_char_seq(all_poem, test_seq_len).T
testing_df = all_testing_df.iloc[[1,2,3,4,5,77, 99, 177, 199, 277, 299, 410],:]
testing_df.head()
training_df.to_csv("all_train.csv", index=False)
testing_df.to_csv("all_test.csv")
model = CPT()
train, test = model.load_files("all_train.csv", "all_test.csv")
model.train(train)
predict_len = 10 # predict the next 10 characters
predictions = model.predict(train,test,test_seq_len,predict_len)
# ### Generate the poem
for i in range(testing_df.shape[0]):
all_char_lst = testing_df.iloc[i].tolist()
all_char_lst.append(' ')
all_char_lst.extend(predictions[i])
print(''.join(all_char_lst))
# ### Observations
# * At the very beginning, I tried to generate a poem with 7 selected rows, each row using 20 characters to predict the next 10 characters. It gave exactly the same output as the smaller data sample.
#   * This may indicate that, when the selected testing data is very small and tends to be unique in the training data, a smaller data input is enough to get the results.
# * Then I changed to 12 rows, each row using 30 characters to predict the next 10 characters.
#   * The first 5 rows come from continuous rows, meaning each row is shifted 1 character from its previous row. If you check their output, although it cannot be called accurate, the 5 rows have similar predictions.
#   * I think CPT can be more accurate when there are repeated sub-sequences in the training data, because the algorithm behind CPT is a prediction tree + inverted index + lookup table, similar to FP-growth in transaction prediction: the more repeats, the more accurate it is.
# ## Method 2 - Word Based Poem Generation
all_words = all_poem.split()
print(len(all_words))
# With selected sequence length to train
train_seq_len = 1000
training_df = generate_char_seq(all_words, train_seq_len).T
training_df.head()
# +
# The testing data will use 20 words, and I only choose 10 seperate rows from all sequences.
## The poem output will try to predict characters based on the 20 characters in each row, 10 rows in total
test_seq_len = 20
output_poem_rows = 10
all_testing_df = generate_char_seq(all_words, test_seq_len).T
selected_row_idx_lst = [train_seq_len*i for i in range(output_poem_rows)]
testing_df = all_testing_df.iloc[selected_row_idx_lst,:]
testing_df.head()
# -
training_df.to_csv("all_train_words.csv", index=False)
testing_df.to_csv("all_test_words.csv")
model = CPT()
train, test = model.load_files("all_train_words.csv", "all_test_words.csv")
model.train(train)
predict_len = 10 # predict the next 10 words
predictions = model.predict(train,test,test_seq_len,predict_len)
# ### Generate the poem
for i in range(testing_df.shape[0]):
all_char_lst = testing_df.iloc[i].tolist()
all_char_lst.extend(predictions[i])
print(' '.join(all_char_lst))
# ## Summary
# * After trying LSTM and CPT on the same data (Shakespeare's sonnets), with both character-based and the simplest word-based methods, we can see that word-based CPT creates the most decent results. Look at the final output: it just looks like a poem (difficult to understand, but you won't feel anything is wrong at first glance).
# * I think the Python CPT library is really good: although it's open source, you don't need to convert characters or words into numerical data, categorical data can be the input, and both model training and prediction are very fast.
| sequencial_analysis/CPT_poem_generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils as utils
import torchvision
from torchvision import datasets
from torchvision import transforms
from PIL import Image
import seaborn as sn
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import random
# ### MNIST Dataset
use_cuda = torch.cuda.is_available()
device = torch.device('cuda' if use_cuda else 'cpu')
default_batch_size = 32
loader_args = {'batch_size' : default_batch_size, 'shuffle' : True}
if use_cuda:
loader_args.update({'pin_memory' : True, 'num_workers' : 1})
testset = datasets.MNIST(root='../data', train=False, download=True, transform=transforms.ToTensor())
test_loader = utils.data.DataLoader(testset, **loader_args)
label_size = 10
# ### MNIST CNN Model
class MNISTClassifier(nn.Module):
def __init__(self, isize, osize):
super(MNISTClassifier, self).__init__()
fc1_isize = int((((isize - 2 - 2) / 2) ** 2) * 32)
self.conv1 = nn.Conv2d(1, 64, 3)
self.conv2 = nn.Conv2d(64, 32, 3)
self.pool = nn.MaxPool2d(2)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(fc1_isize, 128)
self.fc2 = nn.Linear(128, osize)
def forward(self, x):
return self.f_fc2(x)
# extended to access intermediate layer outputs
def f_conv1(self, x):
x = self.conv1(x)
x = F.relu(x)
return x
def f_conv2(self, x):
x = self.f_conv1(x)
x = self.conv2(x)
x = F.relu(x)
return x
def f_pool1(self, x):
x = self.f_conv2(x)
x = self.pool(x)
return x
def f_fc1(self, x):
x = self.f_pool1(x)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
return x
def f_fc2(self, x):
x = self.f_fc1(x)
x = self.dropout2(x)
x = self.fc2(x)
x = F.log_softmax(x, dim=1)
return x
model_file = '../models/mnist_classifier.pt'
model = torch.load(model_file)
model = model.to(device)
# ### Report Model Performance with Confusion Matrix
def predict(model, device, loader):
model.eval()
inputs = np.empty((0,1,28,28), dtype=float)
predictions = np.empty(0)
targets = np.empty(0)
with torch.no_grad():
for data, target in loader:
inputs = np.concatenate((inputs, data), axis=0)
data = data.to(device)
output = model(data)
prediction = output.argmax(dim=1)
prediction = prediction.cpu()
targets = np.concatenate((targets, target), axis=0)
predictions = np.concatenate((predictions, prediction), axis=0)
return (predictions, targets, inputs)
def predictions_to_matrix(predictions, targets, n_classes):
mtx = [[0 for i in range(n_classes)] for i in range(n_classes)]
for i in range(len(predictions)):
mtx[int(predictions[i])][int(targets[i])] += 1
return mtx
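# +
# Standalone check of the confusion-matrix indexing above
# (rows = predicted class, columns = actual class):
def confusion(preds, targs, n_classes):
    mtx = [[0] * n_classes for _ in range(n_classes)]
    for p, t in zip(preds, targs):
        mtx[int(p)][int(t)] += 1
    return mtx
demo_mtx = confusion([0, 1, 1], [0, 1, 0], 2)  # one correct 0, one correct 1, one 0 misread as 1
# -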
predictions, targets, inputs = predict(model, device, test_loader)
confusion_matrix = predictions_to_matrix(predictions, targets, label_size)
df = pd.DataFrame(confusion_matrix, index=[i for i in range(label_size)], columns=[i for i in range(label_size)])
plt.figure(figsize=(10, 10))
sn.heatmap(df, annot=True)
# ### Sample of Incorrect Predictions
tensor2image = transforms.ToPILImage()
def incorrect(predictions, targets, inputs):
ret = []
for i, (pred, targ) in enumerate(zip(predictions, targets)):
if pred != targ:
ret.append((i, targ, pred, inputs[i]))
return ret
incorrects = incorrect(predictions, targets, inputs)
sample_idxes = [random.randint(0, len(incorrects) - 1) for _ in range(25)]
incorrect_images = np.empty((0,1,28,28))
for i in sample_idxes:
incorrect_images = np.concatenate((incorrect_images, np.expand_dims(incorrects[i][3], axis=0)), axis=0)
incorrect_images = torch.from_numpy(incorrect_images)
incorrect_image_grid = torchvision.utils.make_grid(incorrect_images, nrow=5)
tensor2image(incorrect_image_grid)
sample_idx = random.randint(0, len(incorrects) - 1)
incorrect_image = torch.from_numpy(incorrects[sample_idx][3]).type(torch.FloatTensor)
tensor2image(incorrect_image)
print("Correct Label: {} Prediction Label: {}".format(incorrects[sample_idx][1], incorrects[sample_idx][2]))
# ### Visualizing Model Internals with sample image
incorrect_input = incorrect_image
incorrect_input = incorrect_input.unsqueeze_(0)
incorrect_input = incorrect_input.to(device)
conv1_output = model.f_conv1(incorrect_input)
conv1_output.size()
def create_output_grid(output, rowsize, layer_size, imgsize):
output = output.squeeze().cpu().detach()
output_images = torch.reshape(output, (layer_size, 1, imgsize, imgsize))
grid = torchvision.utils.make_grid(output_images, nrow=rowsize)
return grid
# ### Visualizing First Convolutional Layer output
conv1_output_image_grid = create_output_grid(conv1_output, 8, 64, 26)
tensor2image(conv1_output_image_grid)
# +
### Visualizing Second Convolutional Layer output
# -
conv2_output = model.f_conv2(incorrect_input)
conv2_output.size()
conv2_output_image_grid = create_output_grid(conv2_output, 6, 32, 24)
tensor2image(conv2_output_image_grid)
# ### Visualizing Max Pooling Layer
pool1_output = model.f_pool1(incorrect_input)
pool1_output.size()
pool1_output_image_grid = create_output_grid(pool1_output, 6, 32, 12)
tensor2image(pool1_output_image_grid)
| nst/ExploringMNISTNeuralNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 280} colab_type="code" id="7Tf7zVH68Fzy" outputId="fe870841-5b94-4352-addf-1a6f7eca4323"
import warnings
warnings.filterwarnings('ignore')
from keras.models import Sequential
from keras.layers import Conv2D, Dense, Activation, MaxPooling2D, Dropout, Flatten
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras.regularizers import l2
import tflearn.datasets.oxflower17 as oxflower17
from sklearn.model_selection import train_test_split
# + colab={} colab_type="code" id="KbKGz8jG8F0A"
def vgg16_model(img_shape=(224, 224, 3), classes = 1000):
vgg16 = Sequential()
# Convolutional Layer 1
    # Padding is "same" to keep the spatial resolution unchanged. (For a 3x3 filter, padding = 1)
vgg16.add(Conv2D(filters=64, kernel_size=3, strides=1, padding="same", input_shape=img_shape, kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 2
vgg16.add(Conv2D(filters=64, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Maxpooling Layer 1
vgg16.add(MaxPooling2D(pool_size=2, strides=2))
# Convolutional Layer 3
vgg16.add(Conv2D(filters=128, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 4
vgg16.add(Conv2D(filters=128, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Maxpooling Layer 2
vgg16.add(MaxPooling2D(pool_size=2, strides=2))
# Convolutional Layer 5
vgg16.add(Conv2D(filters=256, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 6
vgg16.add(Conv2D(filters=256, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 7
vgg16.add(Conv2D(filters=256, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Maxpooling Layer 3
vgg16.add(MaxPooling2D(pool_size=2, strides=2))
# Convolutional Layer 8
vgg16.add(Conv2D(filters=512, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 9
vgg16.add(Conv2D(filters=512, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 10
vgg16.add(Conv2D(filters=512, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Maxpooling Layer 4
vgg16.add(MaxPooling2D(pool_size=2, strides=2))
# Convolutional Layer 11
vgg16.add(Conv2D(filters=512, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 12
vgg16.add(Conv2D(filters=512, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Convolutional Layer 13
vgg16.add(Conv2D(filters=512, kernel_size=3, strides=1, padding="same", kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Maxpooling Layer 5
vgg16.add(MaxPooling2D(pool_size=2, strides=2))
# Dense Layer 1
vgg16.add(Flatten())
vgg16.add(Dropout(0.5))
vgg16.add(Dense(4096, kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Dense Layer 2
vgg16.add(Dropout(0.5))
vgg16.add(Dense(4096, kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('relu'))
# Dense Layer 3
vgg16.add(Dense(classes, kernel_regularizer=l2(0.0005)))
vgg16.add(Activation('softmax'))
return vgg16
# + colab={"base_uri": "https://localhost:8080/", "height": 121} colab_type="code" id="Nk3W4JDN8F0G" outputId="e5ff3c78-7e0a-45b4-eb15-8f31c639fe90"
# Import dataset
X, Y = oxflower17.load_data(one_hot=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="RzCpZm9m8F0S" outputId="28492a20-c973-4f32-9f61-35d327dbeef1"
print("X's shape: ", X.shape)
print("Y's shape: ", Y.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" id="2IGggOqB8F0Z" outputId="2d265103-c5e2-4caf-9279-6675ab6fb608"
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print("X_train's shape: ", X_train.shape)
print("Y_train's shape: ", Y_train.shape)
print("X_test's shape: ", X_test.shape)
print("Y_test's shape: ", Y_test.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="i4x7yYon8F0e" outputId="3892b3f7-99dc-4b7d-ea9e-6dc3fe4117c7"
vgg16 = vgg16_model((224,224,3), 17)
vgg16.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="Us3NM_t78F0j" outputId="aeb410a1-98fc-47c9-f12f-638b82044a30"
sgd = SGD(lr=0.1, momentum=0.9, decay=0.0005)
# pass the configured optimizer instance; the string 'sgd' would ignore the lr/momentum/decay set above
vgg16.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
vgg16.fit(X_train, Y_train, batch_size=128, epochs=400, validation_split=0.25, shuffle=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="vDkaoq9jjEfP" outputId="9d784ee5-9a49-4438-900c-0218f99c93b6"
loss_and_metrics = vgg16.evaluate(X_test, Y_test, batch_size=128)
print("Loss on test set: ", loss_and_metrics[0])
print("Accuracy on test set: ", loss_and_metrics[1])
| VGG16/VGG16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyzing your model with TensorFlow Model Analysis and the What-If Tool
# ### NB This only works in a Jupyter Notebook, NOT Jupyter Lab.
# Lab extensions have not been released for TFMA and the What-If Tool.
# +
import tensorflow_model_analysis as tfma
import tensorflow as tf
import sys
import os
# stop tf warnings
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
# -
# You will need a trained model and an evaluation dataset (TFRecords) as produced by the earlier steps in the pipeline.
_EVAL_DATA_FILE = 'data_tfrecord-00000-of-00001'
_MODEL_DIR = 'serving_model_dir_2000_steps/'
# ## TFMA
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=_MODEL_DIR, tags=[tf.saved_model.SERVING])
slices = [tfma.slicer.SingleSliceSpec(),
tfma.slicer.SingleSliceSpec(columns=['product'])]
eval_config=tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key='consumer_disputed')],
slicing_specs=[tfma.SlicingSpec(), tfma.SlicingSpec(feature_keys=['product'])],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='FalsePositives'),
tfma.MetricConfig(class_name='TruePositives'),
tfma.MetricConfig(class_name='FalseNegatives'),
tfma.MetricConfig(class_name='TrueNegatives')
])])
eval_result = tfma.run_model_analysis(
eval_shared_model=eval_shared_model,
eval_config=eval_config,
data_location=_EVAL_DATA_FILE,
output_path="./eval_result_2000_steps",
file_format='tfrecords',
slice_spec = slices)
# may take 2 goes
tfma.view.render_slicing_metrics(eval_result)
tfma.view.render_slicing_metrics(eval_result, slicing_spec=slices[1])
# ## Compare 2 models
# +
eval_shared_model_2 = tfma.default_eval_shared_model(
eval_saved_model_path='serving_model_dir_150_steps/', tags=[tf.saved_model.SERVING])
eval_result_2 = tfma.run_model_analysis(
eval_shared_model=eval_shared_model_2,
eval_config=eval_config,
data_location=_EVAL_DATA_FILE,
output_path="./eval_result_150_steps",
file_format='tfrecords',
slice_spec = slices)
# -
tfma.view.render_slicing_metrics(eval_result_2)
eval_results_from_disk = tfma.load_eval_results(
['./eval_result_2000_steps','./eval_result_150_steps'], tfma.constants.MODEL_CENTRIC_MODE)
# bug - only works reliably in Colab
tfma.view.render_time_series(eval_results_from_disk, slices[0])
# ## Validating against thresholds
eval_config_threshold=tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key='consumer_disputed')],
slicing_specs=[tfma.SlicingSpec(), tfma.SlicingSpec(feature_keys=['product'])],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='AUC')
],
thresholds={
'AUC':
tfma.config.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}))}
)])
# +
eval_shared_models = [
tfma.default_eval_shared_model(
model_name='candidate', # must have this exact name
eval_saved_model_path='serving_model_dir_150_steps/', tags=[tf.saved_model.SERVING]),
tfma.default_eval_shared_model(
model_name='baseline', # must have this exact name
eval_saved_model_path='serving_model_dir_2000_steps/', tags=[tf.saved_model.SERVING]),
]
eval_result = tfma.run_model_analysis(
eval_shared_models,
eval_config=eval_config_threshold,
data_location=_EVAL_DATA_FILE,
output_path="./eval_threshold",slice_spec = slices)
# -
tfma.load_validation_result('./eval_threshold')
tfma.view.render_slicing_metrics(eval_result)
# ## Fairness indicators
# https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md
# needs an environment without WIT, but with TF 2.x and TFX
# !pip install tensorboard_plugin_fairness_indicators
eval_config_fairness=tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key='consumer_disputed')],
slicing_specs=[tfma.SlicingSpec(), tfma.SlicingSpec(feature_keys=['product'])],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='FalsePositives'),
tfma.MetricConfig(class_name='TruePositives'),
tfma.MetricConfig(class_name='FalseNegatives'),
tfma.MetricConfig(class_name='TrueNegatives'),
tfma.MetricConfig(class_name='FairnessIndicators', config='{"thresholds":[0.25, 0.5, 0.75]}')
])])
eval_result = tfma.run_model_analysis(
eval_shared_model=eval_shared_model,
eval_config=eval_config_fairness,
data_location=_EVAL_DATA_FILE,
output_path="./eval_result_fairness",
file_format='tfrecords',
slice_spec = slices)
from tensorboard_plugin_fairness_indicators import summary_v2
writer = tf.summary.create_file_writer('./fairness_indicator_logs')
with writer.as_default():
    summary_v2.FairnessIndicators('./eval_result_fairness', step=1)
writer.close()
# %load_ext tensorboard
# %tensorboard --logdir=./fairness_indicator_logs
# ## The What-If Tool
from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget
eval_data = tf.data.TFRecordDataset(_EVAL_DATA_FILE)
eval_examples = [tf.train.Example.FromString(d.numpy()) for d in eval_data.take(1000)]
model = tf.saved_model.load(export_dir=_MODEL_DIR)
def predict(examples):
    preds = model.signatures['serving_default'](examples=tf.constant([example.SerializeToString() for example in examples]))
    return preds['outputs'].numpy()
config_builder = WitConfigBuilder(eval_examples).set_custom_predict_fn(predict)
WitWidget(config_builder)
# ### Debugging
# !pip install witwidget
# works with >2.1
# !pip show tensorflow
# works with >0.21.3
# !pip show tensorflow_model_analysis
# works with >1.6.0
# !pip show witwidget
# +
# may need to run this every time
# !jupyter nbextension install --py --symlink --sys-prefix witwidget
# !jupyter nbextension enable witwidget --py --sys-prefix
# then refresh browser page
# +
# may need to run this every time
# !jupyter nbextension enable --py widgetsnbextension --sys-prefix
# !jupyter nbextension install --py --symlink tensorflow_model_analysis --sys-prefix
# !jupyter nbextension enable --py tensorflow_model_analysis --sys-prefix
# then refresh browser page
# -
# !pip install widgetsnbextension
# !pip install -U ipywidgets
# !pip install jupyter_nbextensions_configurator
# !jupyter nbextension list
# !jupyter serverextension list
| chapters/model_analysis/model_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py3env
# language: python
# name: py3env
# ---
# <div class="alert alert-block alert-info">
# <h1 style='color:black'>Python for beginners</h1>
# <h1><i>Software WG tutorial at CNS*2021</i></h1>
# </div>
# <div class="alert alert-block alert-warning">
# <h2 style='color:black'>Part 1: Virtual environments and Python package installation</h2>
# </div>
#
# ### Using virtual environments
#
# Using virtual environments allows you to keep multiple projects isolated from each other and from your system. We will be using the built-in Python module `venv` to create and manage virtual environments. See the official documentation for more information: https://docs.python.org/3/library/venv.html.
#
# First create and enter a new directory:
#
# mkdir temp
# cd temp
#
# Enter the following into your terminal to get information about your Python installation:
#
# python3 --version
# which python3
#
# Enter the following into your terminal to create a virtual environment named `env`:
#
# python3 -m venv env
#
# Note that this created a new directory named `env`, which is where your virtual environment lives.
#
# Enter the following in your terminal to activate (enter) the virtual environment:
#
# source env/bin/activate
#
# You can see that you are in `env` because of the "(env)" in your terminal's prompt.
#
# Now see which Python you are using:
#
# python3 --version
# which python3
#
# It's the same Python as before, but now it's isolated in your virtual environment. You can exit `env` at any time by entering:
#
# deactivate
#
# And you can reactivate later by entering (from within your `temp` directory):
#
# source env/bin/activate
#
# To "uninstall" your virtual environment all you have to do is delete the `env` directory.
#
#
# ### Installing and managing packages using PyPI (pip)
#
# Installing Python packages in a virtual environment is simple using the Python Package Index (PyPI: https://pypi.org/). See their tutorial (https://packaging.python.org/tutorials/installing-packages/) and user's guide (https://pip.pypa.io/en/stable/user_guide/) for more information. A description of the commands and options available for `pip` (the "Package Installer for Python") can be seen here: https://pip.pypa.io/en/stable/cli/
#
# First, make sure you're in your virtual environment, then we'll update `pip` and its dependencies:
#
# python3 -m pip install --upgrade pip setuptools wheel
#
# #### Common `pip` commands
#
# List all installed packages (https://pip.pypa.io/en/stable/cli/pip_list/)
#
# python3 -m pip list
#
# Show more information about a package (https://pip.pypa.io/en/stable/cli/pip_show/)
#
# python3 -m pip show some_pkg
#
# Install a package from PyPI (https://pip.pypa.io/en/stable/cli/pip_install/)
#
#     python3 -m pip install some_pkg            # install latest version
#     python3 -m pip install some_pkg==1.4       # install specific version
#     python3 -m pip install "some_pkg>=1,<2"    # install within a range of versions (quoted so the shell does not interpret > and <)
#
# Install a package from version control software (the optional "-e" makes the package editable)
#
# python3 -m pip install -e git+https://git.repo/some_pkg.git#egg=SomeProject # from git
# python3 -m pip install -e git+https://git.repo/some_pkg.git@feature#egg=SomeProject # from a git branch
# python3 -m pip install -e hg+https://hg.repo/some_pkg#egg=SomeProject # from mercurial
# python3 -m pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomeProject # from svn
#
# Install an editable package from a local copy
#
# python3 -m pip install -e path/to/SomeProject
#
# Uninstall a package (https://pip.pypa.io/en/stable/cli/pip_uninstall/)
#
# python3 -m pip uninstall some_pkg
#
#
# ### Creating and working with Jupyter notebooks
#
# In order to use IPython and Jupyter notebooks in your virtual environment, you'll need to install a few packages and create a kernel from your virtual environment:
#
# python3 -m pip install --upgrade ipython ipykernel jupyter
# ipython kernel install --user --name=env
#
# Now you can launch Jupyter notebook with:
#
# jupyter notebook
#
# Or open a specific notebook:
#
# jupyter notebook "Python 101.ipynb"
#
# In the menu bar, click on `Kernel`, then `Change Kernel`, then `env`.
#
#
# ### Making virtual environments easier to manage
#
# We are going to add a couple functions to our `.bashrc` or `.zshrc` files.
#
# The first function will create and enter a virtual environment and install packages:
#
# venv_make () {
# echo
# echo "Preparing a virtual environment"
# echo "============================================================================="
# echo "Using Python version:"
# python3 --version
# echo "Using Python from:"
# which python3
#
# echo
# echo "Creating a virtual environment: python3 -m venv env"
# echo "-----------------------------------------------------------------------------"
# python3 -m venv env
#
# echo
# echo "Activating virtual environment: source env/bin/activate"
# echo "-----------------------------------------------------------------------------"
# source env/bin/activate
#
# echo
# echo "Updating pip: python3 -m pip install --upgrade pip setuptools wheel"
# echo "-----------------------------------------------------------------------------"
# python3 -m pip install --upgrade pip setuptools wheel
#
# echo
# echo "Installing iPython and Jupyter: python3 -m pip install --upgrade ipython"
# echo "-----------------------------------------------------------------------------"
# python3 -m pip install --upgrade ipython ipykernel jupyter
#
# echo
# echo "Creating a Jupyter kernel from env: ipython kernel install --user --name=env"
# echo "-----------------------------------------------------------------------------"
# ipython kernel install --user --name=env
#
# echo
# echo "============================================================================="
# echo "You are in your virtual environment, which lives here:"
# echo "$PWD/env/"
# echo
# echo "To deactivate, execute: deactivate"
# echo "To reactivate, execute: source env/bin/activate"
# echo "============================================================================="
# }
#
# The next function will activate a virtual environment:
#
# venv_activate () {
# echo
# echo "Activating virtual environment: source env/bin/activate"
# echo "To deactivate, execute: deactivate"
# source env/bin/activate
# echo
# }
#
# And finally, for consistency, we will add an alias to deactivate the virtual environment
#
# alias venv_deactivate='deactivate'
#
# Copy these into your `.bashrc` or `.zshrc` file to make virtual environments easy to set up and use.
#
# ### Prepare for Quick Python 101
#
# To prepare for the next section of this tutorial, please execute the following:
#
# git clone https://github.com/OCNS/SoftwareWG-events.git
# cd SoftwareWG-events/20210703-CNS2021/03_python
# venv_make
# jupyter notebook "Python 101.ipynb"
#
# <div class="alert alert-block alert-warning">
# <h2 style='color:black'>Part 2: Quick Python 101</h2>
# </div>
# **Topics**
# 1. print
# 2. variables
# 3. strings
# 4. list
# 5. dictionaries
# 6. conditions
# 7. loops
# 8. functions
# 9. error handling
# 10. numpy
# 11. matplotlib
# 12. import
# <div class="alert alert-block alert-success" style="color:black">
# <h3>1. print</h3>
# </div>
print("Hello World!")
print("This would go on the screen!") # this is how you would add a comment in Python
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# Print a message of your choice on the screen, and optionally add a comment as well.
# </div>
# here is a comment
print("Something to print here!")
# <div class="alert alert-block alert-success" style="color:black">
# <h3>2. using variables... and printing them</h3>
# <p>note: variable names can only contain alphanumeric characters and underscores (a-z, A-Z, 0-9, and _); they cannot start with a number, and they are case-sensitive</p>
# </div>
firstname = "Shailesh"
lastname = 'Appukuttan'
age = 34
print(firstname)
print(age)
firstname, lastname = "Shailesh", 'Appukuttan'
print(firstname)
print(lastname)
x = 10
y = 5
result = x+y
print(result)
x = 10
y = "5"
result = x+int(y)
print(result)
str(5) + "apples"
# ##### print... with string concatenation
print("Hello " + firstname)
print("Hello " + firstname + ". Age is " + age)
print("Hello " + firstname + ". Age is " + str(age))
# #### Several ways to print...
# ##### a) print... with multiple arguments
print("Hello", firstname, lastname, "! Your age is ", age, ".")
# ##### b) print... with %-formatting (not recommended)
print("Hello %s %s! Your age is %s." % (firstname, lastname, age))
# ##### c) print.... with str.format()
print("Hello {} {}! Your age is {}.".format(firstname, lastname, age))
print("Hello {1} {0}! Your age is {2}.".format(firstname, lastname, age))
# ##### d) print... f-Strings (formatted string literals) - preferred way
# Note the 'f' at the beginning
print(f"Hello {firstname} {lastname}! Your age is {age}.")
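# f-strings can also format values using a specifier after the colon. A small sketch (the values below are made up purely for illustration):

```python
price = 4.98765
count = 1234567
print(f"Price: {price:.2f}")   # round to two decimal places
print(f"Count: {count:,}")     # add a thousands separator
print(f"[{'hi':>8}]")          # right-align in a field of width 8
```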
mystr = "abcde"
print(type(mystr))
mystr = 'abcde'
print(type(mystr))
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# Create a variable named <code>day</code> with the value <code>Tuesday</code>.<br />
# Create another variable named <code>date</code> with the value <code>29</code>.<br />
# Now print, using f-string, a sentence with values of these two variables.
# </div>
day = "Tuesday"
date = 29
print(f"Today is {day}, and the date is {date}.")
# <div class="alert alert-block alert-success" style="color:black">
# <h3>3. strings</h3>
# </div>
print(firstname)
len(firstname)
firstname[0]
firstname[-1]
firstname[0] = 'q'
print(firstname.lower())
print(firstname.upper())
type(firstname)
help(str)
firstname.startswith("Sh")
firstname.endswith(".png")
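# Strings can also be sliced with the `[start:stop]` syntax. A quick sketch (using a sample name, not any variable defined above):

```python
name = "Shailesh"
print(name[0:3])    # characters at indexes 0, 1 and 2
print(name[2:])     # from index 2 to the end
print(name[:4])     # from the start up to (not including) index 4
print(name[::-1])   # the whole string, reversed
```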
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# Create two variables named <code>firstname</code> and <code>lastname</code> with values corresponding to your own name.<br />
# Create another variable named <code>fullname</code> by joining <code>firstname</code> and <code>lastname</code> (add space in between them).<br />
# Now print, using f-string, a sentence with <code>fullname</code> all in CAPITALS, and the length of your full name.
# </div>
firstname = "Adam"
lastname = "Bates"
fullname = firstname + " " + lastname
print(f"My name is {fullname}, and consists of {len(fullname)} characters;")
# <div class="alert alert-block alert-success" style="color:black">
# <h3>4. lists</h3>
# <p>Lists are collections of items. They can contain a mix of data types.</p>
# </div>
fruits = ["apples", "oranges", "mangoes"]
type(fruits)
len(fruits)
fruits.append("grapes")
fruits
fruits.append("pears", "peaches")
fruits.extend(["pears", "peaches"])
print(fruits)
fruits.append(["pears", "peaches"])
print(fruits)
fruits.append(10)
print(fruits)
len(fruits)
print(fruits[0])
print(fruits[-1])
del fruits[-1]
print(fruits)
del fruits[-1]
print(fruits)
fruits.remove("grapes")
print(fruits)
fruits.remove("grapes")
print(fruits)
item = fruits.pop()
print(item)
print(fruits)
item = fruits.pop(1)
print(item)
print(fruits)
fruits.insert(1, "oranges")
print(fruits)
print(fruits)
print(fruits + ["cats", "dogs"])
fruits = fruits + ["cats", "dogs"]
print(fruits)
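# Lists support slicing and sorting as well. A small sketch with a throwaway list of numbers:

```python
nums = [4, 1, 3, 2]
print(nums[1:3])         # slice: elements at indexes 1 and 2
print(sorted(nums))      # sorted copy; nums itself is unchanged
nums.sort(reverse=True)  # sort in place, in descending order
print(nums)
```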
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a list named <code>superheroes</code> with values <code>batman</code> and <code>superman</code>.<br />
# b) Add <code>wonder woman</code> to the end of this list.<br />
# c) Add <code>thor</code> and <code>iron-man</code> to this list - using a single statement.<br />
# d) Remove <code>batman</code> from the list.<br />
# e) Remove the first element from the list.<br />
# <br />You can print the list after each statement, to verify your operations.
# </div>
# +
# a)
superheroes = ["batman", "superman"]
print(superheroes)
# b)
superheroes.append("wonder woman")
print(superheroes)
# c)
superheroes.extend(["thor", "iron-man"])
print(superheroes)
# d)
superheroes.remove("batman")
print(superheroes)
# e)
superheroes.pop(0)
print(superheroes)
# -
# <div class="alert alert-block alert-success" style="color:black">
# <h3>5. dicts</h3>
# <p>A dictionary is a data type similar to a list, but items are accessed by keys instead of numeric indexes.</p>
# </div>
contacts = {"shailesh": "<EMAIL>", "ankur": "<EMAIL>"}
contacts["shailesh"]
contacts["joe"] = "<EMAIL>"
contacts
contacts["shailesh"] = "not available"
contacts
contacts.keys()
contacts.values()
contacts.items()
contacts
value = contacts.pop("shailesh")
print(value)
print(contacts)
del contacts["ankur"]
print(contacts)
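# Indexing a dict with a missing key raises a KeyError; the `get` method is a safer lookup. A small sketch (the contact data here is illustrative):

```python
contacts = {"ankur": "not available"}
print(contacts.get("ankur"))           # value for an existing key
print(contacts.get("joe"))             # missing key: returns None instead of raising KeyError
print(contacts.get("joe", "unknown"))  # missing key with a fallback default
```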
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a dict named <code>countries</code> with keys <code>France</code> and <code>England</code>, with <code>Paris</code> and <code>London</code> as their corresponding values.<br />
# b) Add a new key-value pair to this dict: <code>Wakanda</code> : <code>Golden City</code>.<br />
# c) Print all the countries (keys) currently in the dict.<br />
# d) Print all the capitals (values) currently in the dict.<br />
# e) Change the capital of <code>Wakanda</code> to <code>Birnin Zana</code>.<br />
# f) Delete <code>Wakanda</code> from the dict.<br />
# <br />You can print the dict after each statement, to verify your operations.
# </div>
# +
# a)
countries = {"France":"Paris", "England":"London"}
print(countries)
# b)
countries["Wakanda"] = "Golden City"
print(countries)
# c)
print(countries.keys())
# d)
print(countries.values())
# e)
countries["Wakanda"] = "Birnin Zana"
print(countries)
# f)
del countries["Wakanda"]
print(countries)
# -
# <div class="alert alert-block alert-success" style="color:black">
# <h3>6. conditional statements</h3>
# </div>
x = 1
x != 5
x < 5
x = 2
if (x % 2 == 0):
    print("Here")
    print("Even number!")
print("There")
# +
x = 5
if (x % 2 == 0):
    print("Even number!")
else:
    print("Odd number!")
print("There")
# +
x = int(input())
if x%2 == 0 and x < 10:
    print("even, small")
elif x%2 == 0 and x >= 10:
    print("even, large")
elif x%2 != 0 and x < 10:
    print("odd, small")
else:
    print("odd, large")
# +
#
# -
# ##### the "in" operator
fruits = ["apples", "oranges", "mangoes"]
"apples" in fruits
"Apples" in fruits
if "Apples" in fruits:
    print("yes")
else:
    print("no")
contacts = {"shailesh": "<EMAIL>", "ankur": "<EMAIL>"}
"joe" in contacts
"joe" in contacts.keys()
"<EMAIL>" in contacts
"<EMAIL>" in contacts.values()
"Shai" in "Shailesh"
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Accept a number (integer) from the user<br />
# b) Check the number to see if it is:<br />
# 1) between 0-9 -> then print <code>single digit number</code><br />
# 2) between 10-99 -> then print <code>double digit number</code><br />
# 3) greater than 99 -> then print <code>three or more digits</code><br />
# 4) if none of above -> then print <code>negative number</code><br />
# </div>
# +
x = int(input())
if x >= 0 and x <= 9:
    print("single digit number")
elif x >= 10 and x <= 99:
    print("double digit number")
elif x > 99:
    print("three or more digits")
else:
    print("negative number")
# -
# <div class="alert alert-block alert-success" style="color:black">
# <h3>7. loops</h3>
# </div>
# ##### a) for loop
fruits = ["apples", "oranges", "mangoes"]
for x in fruits:
    print(x)
for ind, x in enumerate(fruits):
    print(ind, x)
fruits = ["apples", "oranges", "mangoes"]
prices = [10, 12, 20]
for x, y in zip(fruits, prices):
    print(x, y)
contacts = {"shailesh": "<EMAIL>", "ankur": "<EMAIL>"}
for item in contacts:
    print(item)
for key, val in contacts.items():
    print(key, val)
for x in range(10):
    print(x)
for x in range(10, 18):
    print(x)
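# `range` also accepts a third argument, the step. A quick sketch:

```python
for x in range(0, 10, 2):  # start, stop, step
    print(x)
print(list(range(10, 0, -2)))  # a negative step counts down
```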
# ##### b) while loop
ctr = 0
while ctr < 10:
    print(ctr)
    ctr += 1
# ##### using 'continue'
ctr = 0
while ctr < 10:
    ctr += 1
    if ctr % 2 != 0:
        continue
    print(ctr)
# ##### using 'break'
sum = 0
while True:
    val = int(input())
    if val == 0:
        break
    sum = sum + val
print(f"Sum = {sum}")
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a list named <code>words</code> with values:<br />
# <code>adam, ben, cathy, alex, susan, peter</code><br />
# b) Loop through this list, and print only those names that contain the letter "a"<br />
# c) Again loop through this list, but quit on finding the first name without the letter "a"
# </div>
# +
words = ["adam", "ben", "cathy", "alex", "susan", "peter"]
for item in words:
    if "a" in item:
        print(item)
print("------------")
for item in words:
    if "a" in item:
        print(item)
    else:
        break
# -
# <div class="alert alert-block alert-success" style="color:black">
# <h3>8. functions</h3>
# <p>reusable blocks of code</p>
# </div>
def my_function():
    print("Hello World!")
    print("Something!")
my_function()
my_function()
my_function()
my_function()
my_function()
# ##### passing arguments to function
def my_function(name, age):
    print(f"The name is {name} and age is {age}!")
my_function("Shailesh", 34)
my_function()
def my_function(name="Empty", age=0):
    print(f"The name is {name} and age is {age}!")
my_function()
my_function("Shailesh", 34)
# ##### returning value from function
def add(num1, num2):
    return num1 + num2
x = add(10, 15)
print(x)
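# Arguments can also be passed by name (keyword arguments), which lets you skip defaults or reorder parameters. A small sketch with a made-up function:

```python
def describe(name, age=0, city="unknown"):
    return f"{name}, age {age}, from {city}"

print(describe("Ada", city="London"))      # skip 'age', set 'city' by name
print(describe(age=34, name="Shailesh"))   # keyword order does not matter
```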
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a dict named <code>countries</code> with keys <code>France</code> and <code>England</code>, with <code>Paris</code> and <code>London</code> as their corresponding values.<br />
# b) Create a function named <code>print_captial</code> that accepts a country name as argument.<br />
# c) Let the function use the default value of <code>Empty</code> for the above parameter (country).<br />
# d) Function should print the capital (value corresponding to key in dict).<br />
# e) Call this function with argument <code>France</code>.<br />
# f) Call this function with no argument.
# </div>
# +
countries = {"France":"Paris", "England":"London"}
def print_captial(country="Empty"):
    if country == "Empty":
        print("Error: No country specified!")
    else:
        print(f"Capital of {country} is {countries[country]}.")
print_captial("France")
print_captial()
# -
print_captial("India")
# <div class="alert alert-block alert-success" style="color:black">
# <h3>9. error handling: try... except</h3>
# </div>
def div(num1, num2):
    return num1/num2
result = div(10, 0)
result = div(10, 2)
print(result)
result = div(10, "2")
print(result)
def div(num1, num2):
    try:
        return num1/num2
    except TypeError:
        print("Inputs have to be numbers!")
    except ZeroDivisionError:
        print("Denominator cannot be zero!")
    except:
        print("Some unknown error!")
result = div(10, "2")
print(result)
result = div(10, 0)
# ##### using 'finally'
def div(num1, num2):
    try:
        return num1/num2
    except TypeError:
        print("Inputs have to be numbers!")
    finally:
        print("This always runs, whether or not there was an error!")
result = div(10, "2")
# ##### further extensions...
# - try... except... else<br/>
# <b><i>else</i></b> defines a block of code to be executed if no errors were raised
# <br/><br/>
# - try... except... finally<br/>
# <b><i>finally</i></b> defines a block of code to be executed at the end, regardless of error or not
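# The <i>else</i> branch can be sketched with a small made-up parsing function (the name <code>parse_int</code> is just for this example):

```python
def parse_int(text):
    try:
        value = int(text)
    except ValueError:
        print("Not a number!")
        return None
    else:  # runs only if no exception was raised
        print("Parsed successfully")
        return value

print(parse_int("42"))
print(parse_int("abc"))
```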
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a dict named <code>countries</code> with keys <code>France</code> and <code>England</code>, with <code>Paris</code> and <code>London</code> as their corresponding values<br />
# b) Create a function named <code>print_captial</code> that accepts a country name as argument and prints the capital (value corresponding to key in dict)<br />
# c) Call this function with argument <code>India</code>. Do you see an error message?<br />
# d) Handle this error message appropriately using <code>try...except</code>.
# </div>
# +
countries = {"France":"Paris", "England":"London"}
def print_captial(country="Empty"):
    try:
        if country == "Empty":
            print("Error: No country specified!")
        else:
            print(f"Capital of {country} is {countries[country]}.")
    except:
        print(f"No info available for: {country}")
print_captial("India")
# -
# <div class="alert alert-block alert-success" style="color:black">
# <h3>10. importing modules</h3>
# <p>modules are pieces of software that offer some specific functionality</p>
# </div>
from datetime import date
print(date.today())
from datetime import date, datetime
print(date.today())
print(datetime.today())
import datetime
print(datetime.date.today())
import datetime as dt
print(dt.date.today())
from datetime import date as d
print(d.today())
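# The `datetime` module can also format dates as text and parse text back into dates. A small sketch with a fixed example date:

```python
import datetime as dt

stamp = dt.datetime(2021, 7, 3, 14, 30)
print(stamp.strftime("%Y-%m-%d %H:%M"))                     # format a datetime as a string
print(dt.datetime.strptime("2021-07-03", "%Y-%m-%d").year)  # parse a string back into a datetime
```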
# ##### note... displaying variable vs printing variable
print(d.today())
d.today()
print(x)
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Import the package named <code>datetime</code> and use the alias <code>mydt</code><br />
# b) Using the imported module, print current date and time
# </div>
import datetime as mydt
print(mydt.date.today())
# <div class="alert alert-block alert-success" style="color:black">
# <h3>11. numpy</h3>
# <p>NumPy is a fundamental package for scientific computing in Python. It provides support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions that operate on these arrays.</p>
# </div>
numbers = [5,1,2,4,5,6,3,5,6,7,9,0]
sum = 0
for x in numbers:
    sum = sum + x
avg = sum/len(numbers)
print(f"Sum = {sum}")
print(f"Average = {avg}")
import numpy as np
mynums = np.array(numbers)
mynums
mynums[-1]
mynums.sum()
mynums.mean()
mynums.max()
mynums.min()
mynums * 2
# ##### ... if time permits, else skip to next section
np.zeros(5)
np.zeros((5, 2))
np.ones((5, 2))
np.ones((5, 2)) * 7
np.full((5, 2),7)
data = np.random.random((5,2))
print(data)
data[0][1]
data[0]
data[0, :]
data[:, 0]
data[:, :]
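# NumPy arrays can also be indexed with boolean masks, which is how you filter elements by a condition. A quick sketch with a throwaway array:

```python
import numpy as np

nums = np.array([5, 1, 2, 4, 9, 0])
mask = nums > 3             # elementwise comparison gives a boolean array
print(mask)
print(nums[mask])           # keep only the elements where the mask is True
print(nums[nums % 2 == 0])  # the comparison can also be written inline
```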
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a list named <code>numbers</code> with following values:<br />
# <code>[5,1,2,4,5,6,3,5,6,7,9,0]</code><br />
# b) Convert the list into a numpy array.<br />
# c) Print the <code>min</code> and <code>max</code> values in the array.<br />
# d) Update the array by multiplying each element by 2.<br />
# e) Print again the <code>min</code> and <code>max</code> values in the array.<br />
# </div>
numbers = [5,1,2,4,5,6,3,5,6,7,9,0]
import numpy as np
array = np.array(numbers)
print(f"max = {max(array)} and min = {min(array)}")
array = array * 2
print(f"max = {max(array)} and min = {min(array)}")
# <div class="alert alert-block alert-success" style="color:black">
# <h3>12. Matplotlib</h3>
# <p>Matplotlib is a plotting library</p>
# </div>
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,10)
print(x)
y1 = np.power(x,2)
y2 = np.power(x,3)
print(y1)
print(y2)
plt.plot(x, y1)
plt.plot(x, y2)
x = np.arange(0,10, 0.1)
y1 = np.sin(x)
y2 = np.cos(x)
np.arange(0,10, 0.1)
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y1)
plt.plot(x, y2)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
# +
plt.subplot(2, 1, 1)
plt.plot(x, y1, 'r')
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.subplot(2, 1, 2)
plt.plot(x, y2, 'b--')
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Cosine')
plt.legend(['Cosine'])
# +
plt.subplot(1, 2, 1)
plt.plot(x, y1, 'r')
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine')
plt.legend(['Sine'])
plt.subplot(1, 2, 2)
plt.plot(x, y2, 'b')
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Cosine')
plt.legend(['Cosine'])
# -
# <div class="alert alert-block alert-warning" style="color:black">
# <b>Exercise</b><br />
# a) Create a numpy array named <code>x</code> with values <code>10, 11, 12, ..., 19</code> (using numpy)<br />
# b) Create another numpy array named <code>y</code> with 10 random values (using numpy).<br />
# c) Plot a graph using <code>x</code> and <code>y</code> in red color.<br />
# d) Add labels to the x-axis, y-axis and a title.<br />
# </div>
# +
import numpy as np
x = np.arange(10,20)
print(x)
print(f"Num values in x: {len(x)}")
y = np.random.random(10)
print(y)
print(f"Num values in y: {len(y)}")
import matplotlib.pyplot as plt
plt.plot(x, y, 'r')
plt.xlabel('x values')
plt.ylabel('random values')
plt.title('Random values against x')
| 20210703-CNS2021/03_python/Python 101 - With Solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # Chapter 6. Visualization
# ## Introduction to matplotlib basics
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
# %matplotlib inline
plt.rc('font', family='Arial')
# ### Plotting curves
# +
# Plotting curves
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 5, 50)
y_cos = np.cos(x)
y_sin = np.sin(x)
plt.figure() # Initialize the figure
plt.plot(x,y_cos) # Plot the coordinate set as a line
plt.plot(x,y_sin)
plt.xlabel('x') # Add a label for the x axis
plt.ylabel('y') # Add a label for the y axis
plt.title('Title') # Add a title
plt.show() # Display the figure
# -
list(mpl.rcParams['axes.prop_cycle'])
# +
# This command lets you change the sequence of colors used
# mpl.rcParams['axes.prop_cycle'] = mpl.cycler('color', ['blue', 'red', 'green'])
# -
# ### Using subplot panels
# +
# Using subplot panels
import matplotlib.pyplot as plt
plt.subplot(1,2,1) # Define a panel with one row and two columns
# and activate plot 1
plt.plot(x,y_cos,'r--')
plt.title('cos') # Add a title
plt.subplot(1,2,2) # Define a panel with one row and two columns
# and activate plot 2
plt.plot(x,y_sin,'b-')
plt.title('sin')
plt.show()
# Sources of information on further customizing plots:
# http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D.set_linestyle
# http://matplotlib.org/api/colors_api.html
# http://matplotlib.org/api/markers_api.html
# http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D
# You can also set custom colors using RGB values (in the range [0-1])
# plt.plot(x,y_sin,'b-',color = (0.1,0.9,0.9))
# The curve will then be turquoise
# -
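# As a quick sketch of the format strings referenced in the links above (an added example: a color letter, a marker code, and a line style can be combined in one string). The Agg backend line is only an assumption so the snippet runs headless outside a notebook:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line inside a notebook
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 20)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), 'r--')   # 'r--': red dashed line
ax.plot(x, np.cos(x), 'g^')    # 'g^': green triangle markers, no connecting line
ax.plot(x, np.sin(x + 1), color=(0.1, 0.9, 0.9))  # custom RGB tuple in [0, 1]
print(len(ax.lines))  # three Line2D objects now live on the axes
```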
# ### Scatter plots
# +
# Scatter plot
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
D = make_blobs(n_samples=100, n_features=2, centers=3, random_state=7)
groups = D[1]
coordinates = D[0]
plt.plot(coordinates[groups==0,0], coordinates[groups==0,1], 'ys', label='Group 0') # Yellow squares
plt.plot(coordinates[groups==1,0], coordinates[groups==1,1], 'm*', label='Group 1') # Magenta stars
plt.plot(coordinates[groups==2,0], coordinates[groups==2,1], 'rD', label='Group 2') # Red diamonds
plt.ylim(-2,10) # Redefine the y-axis range
plt.yticks([10,6,2,-2]) # Redefine the y-axis tick marks
plt.xticks([-15,-5,5,15]) # Redefine the x-axis tick marks
plt.grid() # Add a grid
plt.annotate('Squares', (-12,2.5)) # Display text at the given coordinates
plt.annotate('Stars', (0,6))
plt.annotate('Diamonds', (10,3))
plt.legend(loc='lower left', numpoints= 1) # Add a legend with labels
plt.show()
# -
# ### Histograms
# +
# Histograms
import numpy as np
import matplotlib.pyplot as plt
x = np.random.normal(loc=0.0, scale=1.0, size=500)
z = np.random.normal(loc=3.0, scale=1.0, size=500)
plt.hist(np.column_stack((x,z)), bins=10, histtype='bar', color = ['c','b'], stacked=True)
plt.grid()
plt.show()
# Also try other parameters of plt.hist
# density=True (the old normed parameter was removed from recent matplotlib versions)
# histtype='step'
# stacked = False
# fill = False
# -
# ### Bar charts
# +
# Bar charts
from sklearn.datasets import load_iris
import numpy as np
import matplotlib.pyplot as plt
iris = load_iris()
average = np.mean(iris.data, axis=0)
std = np.std(iris.data, axis=0)
range_ = range(np.shape(iris.data)[1])
plt.subplot(1,2,1) # Define a panel with one row and two columns
# and activate plot 1
plt.title('Horizontal bars')
plt.barh(range_,average, color="r", xerr=std, alpha=0.4, align="center")
plt.yticks(range_, iris.feature_names)
plt.subplot(1,2,2) # Define a panel with one row and two columns
# and activate plot 2
plt.title('Vertical bars')
plt.bar(range_,average, color="b", yerr=std, alpha=0.4, align="center")
plt.xticks(range_, range_)
plt.show()
# -
# ### Displaying images
# Displaying images: the Olivetti faces dataset
from sklearn.datasets import fetch_olivetti_faces
import numpy as np
import matplotlib.pyplot as plt
dataset = fetch_olivetti_faces(shuffle=True, random_state=5)
photo = 1
for k in range(6):
plt.subplot(2,3,k+1)
plt.imshow(dataset.data[k].reshape(64,64), cmap=plt.cm.gray,
interpolation='nearest')
    plt.title('Person no. '+str(dataset.target[k]))
plt.axis('off')
plt.show()
# Displaying images: handwritten digits
from sklearn.datasets import load_digits
digits = load_digits()
for number in range(1,10):
plt.subplot(3, 3, number)
plt.imshow(digits.images[number],cmap='binary',
interpolation='none', extent=[0,8,0,8])
plt.grid()
plt.show()
# Displaying images: a handwritten digit close-up
plt.imshow(digits.images[0],cmap='binary',interpolation='none', extent=[0,8,0,8])
# The extent parameter sets the image's minimum and maximum horizontal and vertical coordinates
plt.grid()
plt.show()
# ## Selected graphics examples using pandas
import pandas as pd
print ('pandas version in use: %s' % pd.__version__)
from sklearn.datasets import load_iris
iris = load_iris()
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
groups = list(iris.target)
iris_df['groups'] = pd.Series([iris.target_names[k] for k in groups])
# ### Box plots and histograms
boxplots = iris_df.boxplot(return_type='axes')
boxplots = iris_df.boxplot(column='sepal length (cm)', by='groups', return_type='axes')
densityplot = iris_df.plot(kind='density')
single_distribution = iris_df['petal width (cm)'].plot(kind='hist', alpha=0.5)
# ### Scatter plots
colors_palette = {0: 'red', 1: 'yellow', 2:'blue'}
colors = [colors_palette[c] for c in groups]
simple_scatterplot = iris_df.plot(kind='scatter', x=0, y=1, c=colors)
hexbin = iris_df.plot(kind='hexbin', x=0, y=1, gridsize=10)
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed; pandas.plotting is the current module
colors_palette = {0: "red", 1: "green", 2: "blue"}
colors = [colors_palette[c] for c in groups]
matrix_of_scatterplots = scatter_matrix(iris_df, alpha=0.2, figsize=(6, 6), color=colors, diagonal='kde')
# ### Parallel coordinates
from pandas.plotting import parallel_coordinates
iris_df['groups'] = [iris.target_names[k] for k in groups]
pll = parallel_coordinates(iris_df,'groups')
# # Introduction to the seaborn package
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
sns.set()
x = np.linspace(0, 5, 50)
y_cos = np.cos(x)
y_sin = np.sin(x)
plt.figure()
plt.plot(x,y_cos)
plt.plot(x,y_sin)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sine and cosine functions')
plt.show()
sns.set_context("talk")
with sns.axes_style('whitegrid'):
plt.figure()
plt.plot(x,y_cos)
plt.plot(x,y_sin)
plt.show()
sns.set()
current_palette = sns.color_palette()
print (current_palette)
sns.palplot(current_palette)
new_palette=sns.color_palette('hls', 10)
sns.palplot(new_palette)
your_palette = sns.choose_colorbrewer_palette('sequential')
print (your_palette)
# ## Enhancing data exploration capabilities
import seaborn as sns
sns.set()
from sklearn.datasets import load_iris
iris = load_iris()
X_iris, y_iris = iris.data, iris.target
features_iris = [a[:-5].replace(' ','_') for a in iris.feature_names]
target_labels = {j: flower for j, flower in enumerate(iris.target_names)}
df_iris = pd.DataFrame(X_iris, columns=features_iris)
df_iris['target'] = [target_labels[y] for y in y_iris]
from sklearn.datasets import load_boston
boston = load_boston()
X_boston, y_boston = boston.data, boston.target
features_boston = np.array(['V'+'_'.join([str(b), a])
for a,b in zip(boston.feature_names,range(len(boston.feature_names)))])
df_boston = pd.DataFrame(X_boston, columns=features_boston)
df_boston['target'] = y_boston
df_boston['target_level'] = pd.qcut(y_boston,3)
with sns.axes_style("ticks"):
sns.factorplot(data=df_boston, x='V8_RAD', y="target")
with sns.axes_style("whitegrid"):
sns.regplot(data=df_boston, x='V12_LSTAT', y="target", order=3)
with sns.axes_style("whitegrid"):
sns.jointplot("V4_NOX", "V7_DIS", data=df_boston, kind='reg',order=3)
with sns.axes_style("darkgrid"):
chart = sns.FacetGrid(df_iris, col="target")
chart.map(plt.scatter, "sepal_length", "petal_length")
with sns.axes_style("darkgrid"):
chart = sns.FacetGrid(df_iris, col="target")
chart.map(sns.distplot, "sepal_length")
with sns.axes_style("darkgrid"):
chart = sns.FacetGrid(df_boston, col="target_level")
chart.map(sns.regplot, "V4_NOX", "V7_DIS")
with sns.axes_style("whitegrid"):
ax = sns.violinplot(x="target", y="sepal_length",
data=df_iris, palette="pastel")
sns.despine(offset=10, trim=True)
with sns.axes_style("whitegrid"):
chart = sns.pairplot(data=df_iris, hue="target", diag_kind="hist")
# # Interaktywne wizualizacje w przeglądarkach z użyciem pakietu Bokeh
import bokeh
print(bokeh.__version__)
# +
import numpy as np
from bokeh.plotting import figure, output_file, show
x = np.linspace(0, 5, 50)
y_cos = np.cos(x)
output_file("cosine.html")
p = figure()
p.line(x, y_cos, line_width=2)
show(p)
# -
from bokeh.io import output_notebook, reset_output
reset_output()
output_notebook()
p = figure()
p.line(x, y_cos, line_width=2)
show(p)
# +
# Converting a scatter plot from the matplotlib-based version
# to the bokeh version
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
from bokeh import mpl
from bokeh.plotting import show
D = make_blobs(n_samples=100, n_features=2, centers=3, random_state=7)
coord, groups = D[0], D[1]
plt.plot(coord[groups==0,0], coord[groups==0,1], 'ys')
plt.plot(coord[groups==1,0], coord[groups==1,1], 'm*')
plt.plot(coord[groups==2,0], coord[groups==2,1], 'rD')
plt.grid()
plt.annotate('Squares', (-12,2.5))
plt.annotate('Stars', (0,6))
plt.annotate('Diamonds', (10,3))
show(mpl.to_bokeh())
# -
# ## Advanced representations of learning from data
# ### Learning curves
# Learning curve
import numpy as np
from sklearn.model_selection import learning_curve, validation_curve  # sklearn.learning_curve was renamed to model_selection
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
digits = load_digits()
X, y = digits.data, digits.target
hypothesis = SGDClassifier(loss='log', shuffle=True, n_iter=5, penalty='l2', alpha=0.0001, random_state=3)
train_size, train_scores, test_scores = learning_curve(hypothesis, X, y, train_sizes=np.linspace(0.1,1.0,5),
cv=10, scoring='accuracy', exploit_incremental_learning=False, n_jobs=-1)
mean_train = np.mean(train_scores,axis=1)
upper_train = np.clip(mean_train + np.std(train_scores,axis=1),0,1)
lower_train = np.clip(mean_train - np.std(train_scores,axis=1),0,1)
mean_test = np.mean(test_scores,axis=1)
upper_test = np.clip(mean_test + np.std(test_scores,axis=1),0,1)
lower_test = np.clip(mean_test - np.std(test_scores,axis=1),0,1)
plt.plot(train_size,mean_train,'ro-', label='Training')
plt.fill_between(train_size, upper_train, lower_train, alpha=0.1, color='r')
plt.plot(train_size,mean_test,'bo-', label='Cross-validation')
plt.fill_between(train_size, upper_test, lower_test, alpha=0.1, color='b')
plt.grid()
plt.xlabel('Sample size') # Add a label for the x axis
plt.ylabel('Accuracy') # Add a label for the y axis
plt.legend(loc='lower right', numpoints= 1)
plt.show()
# ### Validation curves
# Validation curves
from sklearn.model_selection import validation_curve
testing_range = np.logspace(-5,2,8)
hypothesis = SGDClassifier(loss='log', shuffle=True, n_iter=5, penalty='l2', alpha=0.0001, random_state=3)
train_scores, test_scores = validation_curve(hypothesis, X, y, 'alpha', param_range=testing_range, cv=10, scoring='accuracy', n_jobs=-1)
mean_train = np.mean(train_scores,axis=1)
upper_train = np.clip(mean_train + np.std(train_scores,axis=1),0,1)
lower_train = np.clip(mean_train - np.std(train_scores,axis=1),0,1)
mean_test = np.mean(test_scores,axis=1)
upper_test = np.clip(mean_test + np.std(test_scores,axis=1),0,1)
lower_test = np.clip(mean_test - np.std(test_scores,axis=1),0,1)
plt.semilogx(testing_range,mean_train,'ro-', label='Training')
plt.fill_between(testing_range, upper_train, lower_train, alpha=0.1, color='r')
plt.semilogx(testing_range,mean_test,'bo-', label='Cross-validation')
plt.fill_between(testing_range, upper_test, lower_test, alpha=0.1, color='b')
plt.grid()
plt.xlabel('alpha parameter') # Add a label for the x axis
plt.ylabel('Accuracy') # Add a label for the y axis
plt.ylim(0.8,1.0)
plt.legend(loc='lower left', numpoints= 1)
plt.show()
# ### Feature importance
# Variable importance
from sklearn.datasets import load_boston
boston = load_boston()
X, y = boston.data, boston.target
feature_names = np.array([' '.join([str(b), a]) for a,b in zip(boston.feature_names,range(len(boston.feature_names)))])
from sklearn.ensemble import RandomForestRegressor
RF = RandomForestRegressor(n_estimators=100, random_state=101).fit(X, y)
importance = np.mean([tree.feature_importances_ for tree in RF.estimators_],axis=0)
std = np.std([tree.feature_importances_ for tree in RF.estimators_],axis=0)
indices = np.argsort(importance)
range_ = range(len(importance))
plt.figure()
plt.title("Variable importance in the Random Forests algorithm")
plt.barh(range_,importance[indices],
color="r", xerr=std[indices], alpha=0.4, align="center")
plt.yticks(range(len(importance)), feature_names[indices])
plt.ylim([-1, len(importance)])
plt.xlim([0.0, 0.65])
plt.show()
# ### Partial dependence plots based on GBT trees
# Partial dependence plots based on GBT trees
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble import GradientBoostingRegressor
GBM = GradientBoostingRegressor(n_estimators=100, random_state=101).fit(X, y)
features = [5,12,(5,12)]
fig, axs = plot_partial_dependence(GBM, X, features, feature_names=feature_names)
| python/R06/code/Wizualizacja.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
import cv2
from glob import glob
from tqdm import tqdm
# The **'*jpg'** means reading all the files ending with .jpg. And here all the files ending with **.jpg** are the image files that are to be processed.
DIR = glob('Image_data/Images/*jpg')
for image in DIR:
img = cv2.imread(image)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
break
# ## We will be using `ResNet50` architecture for the NN modelling
from keras.applications import ResNet50
temp_model = ResNet50(include_top=True)
temp_model.summary()
from keras.models import Model
last_layer = temp_model.layers[-2].output
model = Model(inputs=temp_model.input, outputs=last_layer)
model.summary()
for a in range(len(DIR)):
print(DIR[a].split("/")[1][7:])
# Generic structure of the dictionary:
# ```python
# dict = {'Image_name' : 'captions'}
# ```
# +
image__features = {}
count = 0
for image in tqdm(DIR):
img = cv2.imread(image)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))
img = img.reshape(1, 224, 224, 3)
pred = model.predict(img).reshape(2048,)
    img_name = image.split("/")[1][7:]  # use the current loop variable rather than a stale index from the earlier loop
image__features[img_name] = pred
count += 1
if count > 1499:
break
elif count % 50 == 0:
print(count)
# -
# # Text preprocessing
caption_file_path = './Image_data/captions.txt'
caption_file = open(caption_file_path, 'rb').read().decode('utf-8').split('\n')
caption_file
| model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Overlapping Labels
#
# In this exercise, we're going to **simulate** the issue we encounter when training machine learning models using targets that are returns that _overlap in time_. The core issue is that these labels are correlated and in fact _redundant_. We'll see what impact this has on our machine learning algorithm, and we'll have the opportunity to implement some of the solutions to the problem that we described in the lectures, and will later encounter in the project.
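# As a minimal numerical sketch of that core issue (an added illustration, not part of the exercise): two k-period forward return labels computed one step apart share k-1 of their k one-period components, so for i.i.d. returns their correlation is roughly (k-1)/k.

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=5000)  # simulated one-period returns
k = 5
# k-period forward return labels computed at every time step (overlapping)
labels = np.array([r[t:t + k].sum() for t in range(len(r) - k)])
# correlation between each label and its immediate neighbour
corr = np.corrcoef(labels[:-1], labels[1:])[0, 1]
print(f"lag-1 correlation of overlapping {k}-period labels: {corr:.2f}")  # near (k-1)/k = 0.8
```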
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
# %matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
# -
# ## Load data
# Let's load up a small dataset of artificial "toy" data that we can play with. The columns in these data are `A`, `B`, `C` and `D`. We will use the `E` column as targets.
df = pd.read_csv('dependent_labels_dataset.csv', index_col = 0)
df.head()
df.plot()
df.shape
# ## Create redundancy in the data
# In order to illustrate the effect of redundancy in the data, we are going to deliberately create an extreme version of this condition by **duplicating each row in the data 5 times**, using the function `create_redundant_data` below.
num_duplicates = 5
def create_redundant_data(df, num_duplicates):
"""
From the existing dataset, create a new dataset in which every original row is exactly duplicated `num_duplicates` times.
Re-order this new dataset according to the order of the original index.
Parameters
----------
df : pandas DataFrame
The original dataset.
Returns
-------
redundant_df : pandas DataFrame
The new, redundant dataset.
"""
redundant_df = df.copy()
for i in range(num_duplicates-1):
        redundant_df = pd.concat([redundant_df, df])  # DataFrame.append was removed in pandas 2.0
redundant_df.sort_index(axis=0, level='t', inplace=True)
return redundant_df
redundant_df = create_redundant_data(df, num_duplicates)
redundant_df.shape
# Create a function for scoring the model.
def model_score(m, X_train, y_train, X_valid, y_valid):
'''
Take in the model and training and validation datasets, and return the training accuracy score, validation
accuracy score, and out-of-bag score. Furthermore, print each of these results.
Parameters
----------
m : RandomForestClassifier instance
The trained model.
X_train : pandas DataFrame
The training features.
y_train : pandas Series
The training labels.
X_valid : pandas DataFrame
The validation features.
y_valid : pandas Series
The validation labels.
Returns
-------
train_score : float
The mean training accuracy.
valid_score : float
The mean validation accuracy.
oob_score : float
The out-of-bag score.
'''
train_score = m.score(X_train, y_train)
valid_score = m.score(X_valid, y_valid)
oob_score = m.oob_score_
print("train: %f, oob: %f, valid: %f" % (train_score, oob_score, valid_score))
return train_score, valid_score, oob_score
# ## Split data into train, valid and test sets
def make_splits(df, features, target, split_valid=0.20, split_test=0.20):
temp = df.dropna()
X = temp[features].copy()
y = temp[target].copy()
train_end = int(X.shape[0]*(1-split_valid-split_test))
valid_end = train_end + int(X.shape[0]*split_valid)
X_train, X_valid, X_test = X.iloc[:train_end,], X.iloc[(train_end+1):valid_end,], X.iloc[(valid_end+1):]
y_train, y_valid, y_test = y.iloc[:train_end,], y.iloc[(train_end+1):valid_end,], y.iloc[(valid_end+1):]
return X, X_train, X_valid, X_test, y_train, y_valid, y_test
features = ['A', 'B', 'C', 'D']
X, X_train, X_valid, X_test, y_train, y_valid, y_test = make_splits(
redundant_df,
features,
'E'
)
# ## Train one tree, take a look at it
from sklearn.ensemble import RandomForestClassifier
def instantiate_and_fit_one_tree(X_train, y_train):
"""
Instantiate a single decision tree and fit it on the training data. Return the fitted classifier.
Parameters
----------
X_train : pandas DataFrame
The training features.
y_train : pandas Series
The training labels.
Returns
-------
clf : DecisionTreeClassifier
The fitted classifier instance.
"""
# you can do this with a DecisionTreeClassifier or with a RandomForestClassifier, with n_estimators=1
clf = RandomForestClassifier(
n_estimators=1,
max_depth=3,
max_features=None,
bootstrap=True,
criterion='entropy',
random_state=0
)
clf.fit(X_train.values, y_train)
return clf
clf = instantiate_and_fit_one_tree(X_train, y_train)
# ! pip install graphviz
# +
import graphviz
from sklearn.tree import export_graphviz
def export_graph(classifier, feature_names):
"""
First, export the dot data from the fitted classifier. Then, create a graphviz Source object from the dot data,
and return it.
Parameters
----------
classifier : DecisionTreeClassifier
The single decision tree you created and fit above.
Returns
-------
graph : graphviz Source object
The Source object created with the graph information.
"""
    dot_data = export_graphviz(
        classifier.estimators_[0],
        out_file=None,
        feature_names=feature_names,
        filled=True, rounded=True,
        special_characters=True,
        rotate=False
    )
graph = graphviz.Source(dot_data);
return graph
# -
# display the single decision tree graph in the notebook
graph = export_graph(clf, features)
graph
# ## Redundant labels
# Let's see what happens when we train a full random forest on these redundant data.
def instantiate_and_fit_a_rf(n_estimators, X_train, y_train, min_samples_leaf=5):
"""
Instantiate a random forest classifier and fit it on the training data. Return the fitted classifier.
Make sure you use bootstrapping and calculate an out-of-bag score. Set `random_state` equal to an integer so that
when you fit the model again you get the same result.
Parameters
----------
n_estimators : int
The number of trees.
X_train : pandas DataFrame
The training features.
y_train : pandas Series
The training labels.
Returns
-------
clf : RandomForestClassifier
The fitted classifier instance.
"""
clf = RandomForestClassifier(
n_estimators=n_estimators,
max_features='sqrt',
min_samples_leaf=min_samples_leaf,
bootstrap=True,
oob_score=True,
n_jobs=-1,
criterion='entropy',
verbose=0,
random_state=0
)
clf.fit(X_train, y_train)
return clf
# +
train_score = []
valid_score = []
oob_score = []
tree_sizes = [10, 50, 100, 250]
for trees in tqdm(tree_sizes):
clf_red = instantiate_and_fit_a_rf(trees, X_train, y_train)
tr, va, oob = model_score(clf_red, X_train, y_train, X_valid, y_valid)
train_score.append(tr); valid_score.append(va); oob_score.append(oob)
# -
def plot_results(tree_sizes, train_score, oob_score, valid_score, title, y_range):
plt.plot(tree_sizes, train_score, 'xb-');
plt.plot(tree_sizes, oob_score, 'xg-');
plt.plot(tree_sizes, valid_score, 'xr-');
plt.title(title);
plt.xlabel('Number of Trees');
plt.ylabel('Accuracy')
plt.legend(['train','oob', 'valid'])
plt.ylim(y_range[0], y_range[1]);
y_range = []
y_range.append(0.45)
y_range.append(1.005)
plot_results(tree_sizes, train_score, oob_score, valid_score, 'Random Forest with Redundant Data (accuracy): train, validation, oob', y_range)
# What you can see from this run is that the OOB score virtually tracks the training score because the OOB data contains the same information as the training data. The validation score, calculated on unseen data, is much lower.
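# A back-of-the-envelope sketch of why the OOB data leaks (an added illustration; treating the five copies' bootstrap draws as independent is an approximation): a given row is out-of-bag for a tree with probability about e^-1, so the chance that all five identical copies of a row are simultaneously out-of-bag, leaving the OOB row with no in-bag twin, is only about e^-5.

```python
import math

num_duplicates = 5
p_oob = math.exp(-1)  # large-n limit of (1 - 1/n)**n: prob. a given row is out-of-bag
p_no_twin = p_oob ** num_duplicates  # all copies out-of-bag at once (independence approximation)
print(f"P(row is OOB)                 ~ {p_oob:.3f}")
print(f"P(OOB row has no in-bag twin) ~ {p_no_twin:.4f}")
```

So roughly 99% of OOB evaluations score a row whose exact duplicate was used for training, which is why the OOB score tracks the training score here.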
# ## Solution #1: use every 5th row
# In this section, implement the first solution to the redundancy issue by sub-sampling every 5th row of the data.
def create_subsampled_dataset(num_duplicates, X_train, y_train):
"""
Create the sub-sampled dataset according to the first solution proposed to the problem of overlapping labels.
Parameters
----------
num_duplicates : int
The number of duplications made earlier.
X_train : pandas DataFrame
The training features.
y_train : pandas Series
The training labels.
Returns
-------
X_train_sub : pandas DataFrame
The training features, subsampled.
y_train_sub : pandas Series
The training labels, subsampled.
"""
    X_train_sub = X_train[::num_duplicates]
    y_train_sub = y_train[::num_duplicates]
return X_train_sub, y_train_sub
X_train_sub, y_train_sub = create_subsampled_dataset(5, X_train, y_train)
# +
train_score = []
valid_score = []
oob_score = []
tree_sizes = [10, 50, 100, 250]
for trees in tqdm(tree_sizes):
clf_sub = instantiate_and_fit_a_rf(trees, X_train_sub, y_train_sub)
tr, va, oob = model_score(clf_sub, X_train_sub, y_train_sub, X_valid, y_valid)
train_score.append(tr); valid_score.append(va); oob_score.append(oob)
# -
plot_results(tree_sizes, train_score, oob_score, valid_score, 'Random Forest on Redundant Data, with Subsampling (accuracy): train, validation, oob', y_range)
# In the case of our artificial dataset, this completely removes the redundancy.
# ## Solution #2: reduce bag size
# In this section, implement the second solution we proposed—to use `BaggingClassifier` to randomly draw a smaller fraction of the original dataset's rows when creating each tree's dataset.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
base_clf = DecisionTreeClassifier(
criterion='entropy',
max_features='sqrt',
min_samples_leaf=5
)
def instantiate_and_fit_a_BaggingClassifier(n_estimators, base_estimator, X_train, y_train, max_samples = 0.2):
"""
Instantiate a Bagging Classifier and fit it on the training data. Return the fitted classifier.
Make sure you use bootstrapping and calculate an out-of-bag score. Set `random_state` equal to an integer so that
when you fit the model again you get the same result.
Parameters
----------
n_estimators : int
The number of trees.
base_estimator : DecisionTreeClassifier
The base estimator of the BaggingClassifier.
max_samples : float
The percentage of the number of original rows to draw for each bag.
X_train : pandas DataFrame
The training features.
y_train : pandas Series
The training labels.
Returns
-------
clf : BaggingClassifier
The fitted classifier instance.
"""
clf = BaggingClassifier(
base_estimator = base_estimator,
n_estimators = n_estimators,
max_samples = max_samples,
bootstrap=True,
oob_score=True,
n_jobs=-1,
verbose=0,
random_state=0
)
clf.fit(X_train, y_train)
return clf
# +
train_score = []
valid_score = []
oob_score = []
tree_sizes = [10, 50, 100, 250]
for trees in tqdm(tree_sizes):
clf_bag = instantiate_and_fit_a_BaggingClassifier(trees, base_clf, X_train, y_train)
tr, va, oob = model_score(clf_bag, X_train, y_train, X_valid, y_valid)
train_score.append(tr); valid_score.append(va); oob_score.append(oob)
# -
plot_results(tree_sizes, train_score, oob_score, valid_score, 'Random Forest on Redundant Data, with Reduced Bag Size (accuracy): train, validation, oob', y_range)
# As you can see, in the case of this small artificial dataset, this helped but didn't fully resolve the problem. The OOB score still vastly overestimates the validation score.
# ## Solution #3: bagged non-overlapping labels
# Finally, let's look at the last solution we proposed to this problem. We'll do this one for you for this small exercise, but you'll have the opportunity to work on it yourself later. We will fit separate models on each non-redundant subset of data, and then ensemble them together.
from sklearn.ensemble import VotingClassifier
from sklearn.base import clone
from sklearn.preprocessing import LabelEncoder
from sklearn.utils.validation import has_fit_parameter, check_is_fitted
from sklearn.utils.metaestimators import _BaseComposition
from sklearn.utils import Bunch
class NoOverlapVoter(VotingClassifier):
def __init__(self, base_estimator, overlap_increment=5):
self.est_list = []
for i in range(overlap_increment):
self.est_list.append(('clf'+str(i), clone(base_estimator)))
self.overlap_increment = overlap_increment
super().__init__(
self.est_list,
voting='soft'
)
@property
def oob_score_(self):
oob = 0
for clf in self.estimators_:
oob = oob + clf.oob_score_
return oob / len(self.estimators_)
def fit(self, X, y, sample_weight=None):
names, clfs = zip(*self.estimators)
self._validate_names(names)
self.le_ = LabelEncoder().fit(y)
self.classes_ = self.le_.classes_
self.estimators_ = []
transformed_y = self.le_.transform(y)
for i in range(self.overlap_increment):
self.estimators_.append(
clfs[i].fit(X[i::self.overlap_increment], transformed_y[i::self.overlap_increment])
)
self.named_estimators_ = Bunch(**dict())
for k, e in zip(self.estimators, self.estimators_):
self.named_estimators_[k[0]] = e
return self
# +
train_score = []
valid_score = []
oob_score = []
tree_sizes = [10, 50, 100, 250]
for trees in tqdm(tree_sizes):
clf = RandomForestClassifier(
n_estimators=trees,
max_features='sqrt',
min_samples_leaf=5,
bootstrap=True,
oob_score=True,
n_jobs=-1,
criterion='entropy',
verbose=0,
random_state=0
)
clf_nov = NoOverlapVoter(clf)
clf_nov.fit(X_train.reset_index()[['A','B','C','D']], y_train)
t, v, o = model_score(clf_nov, X_train, y_train, X_valid, y_valid)
train_score.append(t); valid_score.append(v); oob_score.append(o)
# -
plot_results(tree_sizes, train_score, oob_score, valid_score, 'Random Forest on Redundant Data, with NoOverlapVoter (accuracy): train, validation, oob', y_range)
# So in the case of this artificial dataset, this method performs as well as the first method, but this makes sense because each of the separate classifiers has the exact same information as the others. Combining them together should not yield much performance enhancement. You can then compare this result to the result that you'll see in the project, when this method is applied to real financial data.
# ## Check the performance on test set
# Finally, let's look at the performance of the third method on the held-out test set. For this, we'll train on all of the previous training and validation data.
X, X_train, X_valid, X_test, y_train, y_valid, y_test = make_splits(
redundant_df,
features,
'E',
split_valid = 0
)
# +
clf = RandomForestClassifier(
n_estimators=150,
max_features='sqrt',
min_samples_leaf=5,
bootstrap=True,
oob_score=True,
n_jobs=-1,
criterion='entropy',
verbose=False,
random_state=0
)
clf_nov = NoOverlapVoter(clf)
clf_nov.fit(X_train.reset_index()[['A','B','C','D']], y_train)
t, v, o = model_score(clf_nov, X_train, y_train, X_test, y_test)
train_score.append(t); valid_score.append(v); oob_score.append(o)
# -
# We see that the result on the held-out test set has improved somewhat.
| Quiz/m7/dependent_labels_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import seaborn as sns
import pandas as pd
from datetime import datetime
vue = ["2014-02-07", "2015-01-01", "2015-06-17", "2015-11-27", "2016-05-08", "2016-10-21", "2017-09-19", "2018-03-01", "2018-08-14", "2019-07-10", "2021-04-30"]
react = ["2013-11-13", "2015-10-11", "2016-04-01", "2016-10-04", "2017-09-05", "2018-02-18", "2018-08-04", "2019-01-17", "2019-07-03", "2019-12-17", "2020-06-01", "2021-04-30"]
angular = ["2015-03-05", "2015-07-19", "2015-12-09", "2016-04-21", "2016-09-15", "2017-02-01", "2017-06-21", "2018-04-01", "2021-04-30"]
aspnetcore = ["2015-05-03", "2015-09-27", "2016-02-11", "2016-11-08", "2017-08-07", "2018-05-04", "2021-04-30"]
gatsby = ["2015-07-22", "2016-02-18", "2016-05-31", "2016-09-09", "2017-03-11", "2017-07-09", "2018-06-10", "2018-10-19", "2019-02-28", "2019-11-19", "2020-08-09", "2020-12-19", "2021-04-30"]
total_len = len(vue+react+angular+aspnetcore+gatsby)
print(f"Number of data points: {total_len}")
# +
data = (
    [{"date": datetime.strptime(date, '%Y-%m-%d'), "project": "Vue.js"} for date in vue]
    + [{"date": datetime.strptime(date, '%Y-%m-%d'), "project": "React.js"} for date in react]
    + [{"date": datetime.strptime(date, '%Y-%m-%d'), "project": "Angular"} for date in angular]
    + [{"date": datetime.strptime(date, '%Y-%m-%d'), "project": "ASP.NET Core"} for date in aspnetcore]
    + [{"date": datetime.strptime(date, '%Y-%m-%d'), "project": "Gatsby"} for date in gatsby]
)  # parse every project's dates, not just Vue's, so the y axis gets real datetimes
sns.set_theme(context="paper", palette="muted", style="whitegrid", font='sans-serif', font_scale=1.2)
df = pd.DataFrame(data={'days': [record["date"] for record in data], 'projects': [record["project"] for record in data]})
g = sns.catplot(x="projects", y="days", kind="swarm", data=df, order=["Angular", "ASP.NET Core", "Gatsby", "React.js", "Vue.js"]).set(title="Analyzed Data Points")
g.set_axis_labels("Projects", "Year")
g.ax.xaxis.labelpad = 10
g.ax.yaxis.labelpad = 10
g.savefig('../../figures/others/data_points.pdf', format='pdf', bbox_inches="tight")
# -
| notebooks/analysis/data_points.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ... continue [here](https://livebook.manning.com/book/learn-quantum-computing-with-python-and-q-sharp/chapter-2/v-4/point-7622-337-337-0)
# %run ../globals.ipynb
# # Unitary operators and the Hadamard Operator
import numpy as np
ket0 = np.array(
[[1], [0]]
)
ket0
ket1 = np.array(
[[0], [1]]
)
ket1
ket_plus = (ket0 + ket1) / np.sqrt(2)
ket_plus
# ## Definition of Hadamard Operator
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
make_pretty(H, show_brackets=True)
make_pretty(H @ ket0)
make_pretty(H @ ket1)
PP % (H @ ket1)
X = np.array([[0, 1], [1, 0]])
PP % (X @ ket0)
(X @ ket0 == ket1).all()
PP % (X @ H @ ket0)
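# As a quick numerical sanity check (plain NumPy only, independent of the
# `make_pretty` / `PP` helpers from globals.ipynb), we can verify that H is
# unitary and maps |0⟩ to |+⟩:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])
ket_plus = (ket0 + ket1) / np.sqrt(2)

# Unitarity: H @ H^dagger == I (H is real, so its adjoint is just H.T)
assert np.allclose(H @ H.T, np.eye(2))

# H maps |0> to |+>, and applying H twice returns |0>
assert np.allclose(H @ ket0, ket_plus)
assert np.allclose(H @ (H @ ket0), ket0)
```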
| ch2/3. Unitary operators and the Hadamard Operator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Project Code
# #### Imports
# +
import pandas as pd
import csv
from sklearn.cluster import KMeans
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score
from sklearn.cluster import Birch
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_selection import VarianceThreshold
import math
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# #### PCA Variance
# +
df = pd.read_csv("genedata.csv")
dfd = df[df.columns[2:]]
sel = VarianceThreshold(threshold=(1.9)).fit_transform(dfd)
std = StandardScaler().fit_transform(sel)
pca = PCA(n_components=11)
principalComponents = pca.fit_transform(std)
features = range(pca.n_components_)
plt.bar(features, pca.explained_variance_ratio_)
plt.xlabel("Features")
plt.ylabel("Variance")
plt.xticks(features)
PCA_components = pd.DataFrame(principalComponents)
# +
df = pd.read_csv("msdata.csv")
dfd = df[df.columns[2:]]
sel = VarianceThreshold(threshold=(1.9)).fit_transform(dfd)
std = StandardScaler().fit_transform(sel)
pca = PCA(n_components=11)
principalComponents = pca.fit_transform(std)
features = range(pca.n_components_)
plt.bar(features, pca.explained_variance_ratio_)
plt.xlabel("Features")
plt.ylabel("Variance")
plt.xticks(features)
PCA_components = pd.DataFrame(principalComponents)
# -
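# The explained-variance ratios plotted above can also be computed directly
# from NumPy's SVD, without scikit-learn. A small sketch on synthetic data
# (the random matrix here is only an illustration, not the gene/MS data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)            # centre each feature, as PCA does

# Singular values give per-component variance: var_i = s_i**2 / (n - 1)
s = np.linalg.svd(Xc, compute_uv=False)
explained_variance = s ** 2 / (Xc.shape[0] - 1)
ratio = explained_variance / explained_variance.sum()

# Like pca.explained_variance_ratio_: sorted descending and sums to 1
assert np.all(np.diff(ratio) <= 1e-12)
assert np.isclose(ratio.sum(), 1.0)
```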
# ### Helper functions for analysing data
# +
# For checking how many features are left after using the threshold
dfd.var()[dfd.var() > 9].count()
# For-loops for checking which values provide the highest NMI
#### Jan, paste your code in here :DD ####
# -
# #### Analysing the data
# +
df = pd.read_csv("msdata.csv")
dfd = df[df.columns[2:]]
sel = VarianceThreshold(threshold=(4)).fit_transform(dfd)
std = StandardScaler().fit_transform(sel)
pca = PCA()
principalComponents = pca.fit_transform(std)
PCA_components = pd.DataFrame(principalComponents)
kmeans = KMeans(n_clusters=3).fit(PCA_components.iloc[:,:2])
label = kmeans.labels_
#np.savetxt(r'/path-to-dir/ms_labels.txt', label, fmt='%d')
kmeanNorm = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Msdata KMeans Normalized Mutual Information Score: {}".format(kmeanNorm))
brch = Birch(n_clusters=3).fit(PCA_components.iloc[:,:3])
label = brch.labels_
brchNorm = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Msdata Birch Normalized Mutual Information Score: {}".format(brchNorm))
agglo = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(PCA_components.iloc[:,:12])
label = agglo.labels_
aggloNorm = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Msdata Agglomerative clustering Normalized Mutual Information Score: {}".format(aggloNorm))
agglo = AgglomerativeClustering(n_clusters=3, linkage="average").fit(PCA_components.iloc[:,:12])
label = agglo.labels_
aggloNorm = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Msdata Agglomerative clustering Normalized Mutual Information Score: {}".format(aggloNorm))
agglo = AgglomerativeClustering(n_clusters=3, linkage="complete").fit(PCA_components.iloc[:,:12])
label = agglo.labels_
aggloNorm = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Msdata Agglomerative clustering Normalized Mutual Information Score: {}".format(aggloNorm))
agglo = AgglomerativeClustering(n_clusters=3, linkage="single").fit(PCA_components.iloc[:,:12])
label = agglo.labels_
aggloNorm = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Msdata Agglomerative clustering Normalized Mutual Information Score: {}".format(aggloNorm))
# +
df = pd.read_csv("genedata.csv")
dfd = df[df.columns[2:]]
sel = VarianceThreshold(threshold=(1.9)).fit_transform(dfd)
std = StandardScaler().fit_transform(sel)
pca = PCA(n_components=11)
principalComponents = pca.fit_transform(std)
PCA_components = pd.DataFrame(principalComponents)
kmeans = KMeans(n_clusters=5).fit(PCA_components.iloc[:,:12])
label = kmeans.labels_
kmeanNormalized = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Gene data KMeans Normalized Mutual Information Score: {}".format(kmeanNormalized))
brch = Birch(n_clusters=5).fit(PCA_components.iloc[:,:12])
label = brch.labels_
#np.savetxt(r'/path-to-dir/gene_labels.txt', label, fmt='%d')
brchNormalized = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Gene data Birch Normalized Mutual Information Score: {}".format(brchNormalized))
agglo = AgglomerativeClustering(n_clusters=5, linkage="ward").fit(PCA_components.iloc[:,:12])
label = agglo.labels_
aggloNormalized = normalized_mutual_info_score(df['class'], label, average_method="geometric")
print("Gene data Agglomerative Clustering Normalized Mutual Information Score: {}".format(aggloNormalized))
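# As a sanity check on the metric itself: NMI is 1.0 for a clustering that
# matches the true classes up to a relabelling, and 0 for one that carries no
# information about them. A toy example:

```python
from sklearn.metrics import normalized_mutual_info_score

true = [0, 0, 1, 1, 2, 2]

# Same partition with permuted labels -> perfect score
perfect = [2, 2, 0, 0, 1, 1]
assert normalized_mutual_info_score(true, perfect, average_method="geometric") > 0.999

# Each true class split evenly across clusters -> no mutual information
unrelated = [0, 1, 0, 1, 0, 1]
assert normalized_mutual_info_score(true, unrelated, average_method="geometric") < 0.01
```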
| project/Project Report - Code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="sn0FLHCiVO8a"
# # Introduction to Python
# + [markdown] id="ZLph_NtWVO8a"
# This notebook contains some programming exercises which will equip you with the programming skills that you require for the workshops ahead. Do not be afraid to make mistakes. Making mistakes while coding is perfectly normal. It is more important for you to learn how to rectify those mistakes. If you are stuck in any of the exercises, you can try to search for similar solutions online and also ask your friends or the facilitator for help.
# + [markdown] id="bgeyyUQBVO8b"
# Lists are like variables that let you store more than one value.
#
# Before we start coding, here are some important properties about lists:
# 1. Lists are wrapped in square brackets []
# 2. Lists can contain strings, integers, floats, Booleans or any combination of these.
# 3. Items in a list are separated by commas ,
#
# With the 3 properties in mind, let's start coding!
# Look at the code below and try running it.
# + id="in3rJK_FVO8b" outputId="2d5b8cb9-61e1-4a40-ce20-f699f92671b7" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1644214303526, "user_tz": -330, "elapsed": 6, "user": {"displayName": "SE MECH A 05 <NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMwdMl6Vx3d1ua_4P8rd6Xm9KwN1FqNyKG-yPk=s64", "userId": "01601407517104806532"}}
# Create an empty list
sampleList = []
print ("Empty List: ", sampleList)
# + id="sodgS1OyVO8b" outputId="f32533c8-797f-4f1f-b2c0-c74b91a2028e" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1644075844258, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME> 05 <NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMwdMl6Vx3d1ua_4P8rd6Xm9KwN1FqNyKG-yPk=s64", "userId": "01601407517104806532"}}
#.append() is a function that allows you to add values to a list
sampleList.append("Bob")
print (sampleList)
# + id="0T9pNjj9VO8c" outputId="ab68e34f-0fcb-4c90-989c-1ff0d21fba47"
# A list can contain strings, integers, floats, booleans or any combination of these
sampleList.append(False)
sampleList.append(9)
sampleList.append(10.5)
print ("A list can contain different data types, and items in a list are separated by commas: ", sampleList)
# + [markdown] id="2oveua1nVO8c"
# Here's a practice question!
# 1. Assign the following numbers to a list, X1 in the cell below. The numbers are 4, 1, 5, 7.
# 2. Now, change the 3rd number from 5 to 10. Print X1 to confirm that you have changed the number.
# 3. Add a value of 2 to all the items in the list in the cell below. Print X1 again to confirm the addition.
# + id="45qi3S-NVO8c" outputId="b36b883e-ae8b-4526-d7b6-c71ca1594cf0" colab={"base_uri": "https://localhost:8080/", "height": 253} executionInfo={"status": "error", "timestamp": 1644076736069, "user_tz": -330, "elapsed": 446, "user": {"displayName": "<NAME> 05 <NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMwdMl6Vx3d1ua_4P8rd6Xm9KwN1FqNyKG-yPk=s64", "userId": "01601407517104806532"}}
# TODO: Assign numbers (4,1,5,7) to a list, X1.
X1 = [4,1,5,7]
print(X1)
# TODO: Change the 3rd number, 5, to 10 and then print the list again.
#yourcodehere
X1[2] = 10
print(X1)
# TODO: Add a value of 2 to all the items in the list. (HINT: Use a for loop!)
#yourcodehere
for i in range(len(X1)):
    X1[i] = X1[i] + 2
print(X1)
# + [markdown] id="KtWVVweYVO8d"
# Expected results:
# 1. [4, 1, 5, 7]
# 2. [4, 1, 10, 7]
# 3. [6, 3, 12, 9]
# + [markdown] id="t6wxlN6BVO8d"
# Let's try applying the basic statistics we've just learnt in Python.
# Here are your challenge questions!
# 1. Find the mean of the data provided.
# 2. Find the median of the data provided.
# 3. Find the mode of the data provided.
#
# Raise your hand once you've completed the exercise, and the instructor will come around to look at the code!
# Once you are done, feel free to help your peers around you to conquer the challenge, but do not give them the answers!
# + id="42p5ejW0VO8d" outputId="b103cbf8-6f8f-4229-ebf5-2c2ec2b35321"
# Here's the dataset for the challenge questions:
dataset = [2, 1, 1, 4, 5, 8, 12, 4, 3, 8, 21, 1, 18, 5]
# TODO: Find the mean of the data provided.
# Step 1: Find out the number of items in the list.
number_of_items = #yourcodehere
print(number_of_items)
# Step 2: Find out the sum of items in list. Try doing this with the loop function.
sum_of_items = 0
#yourcodehere
print(sum_of_items)
# Step 3: Find the mean
mean = #yourcodehere
print(mean)
# + id="kxECc4qsVO8f" outputId="ac20ffdb-71e4-44d9-eaa5-df986ad31360"
# TODO: Find the median of the data provided. (HINT: Rearrange the items in the list)
# Step 1: Rearrange the items in the list in ascending order, i.e. (1,1,1,2,3,4,4,5,5,8,8,12,18,21)
#yourcodehere
print(dataset)
# Step 2: Find the middle index
middle_index = #yourcodehere
print (middle_index)
# Step 3: Find median
median = #yourcodehere
print (median)
# + id="r7Oa7eAwVO8g" outputId="b3a01c0f-c96e-4211-919b-d109831287a4"
# TODO: Find the mode of the data provided. (HINT: Use max())
mode = #yourcodehere
print (mode)
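# One possible reference solution for the three challenges, for checking your
# work after you've attempted the blanks above:

```python
dataset = [2, 1, 1, 4, 5, 8, 12, 4, 3, 8, 21, 1, 18, 5]

# Mean: sum of the items divided by the number of items
number_of_items = len(dataset)
sum_of_items = 0
for value in dataset:
    sum_of_items += value
mean = sum_of_items / number_of_items

# Median: middle value of the sorted data (even count: average the middle two)
ordered = sorted(dataset)
middle_index = number_of_items // 2
median = (ordered[middle_index - 1] + ordered[middle_index]) / 2

# Mode: the value that appears most often (per the notebook's hint, use max())
mode = max(set(dataset), key=dataset.count)
```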
# + id="UEcAnx8zVO8g"
| (1) [Jupyter - Youth] Module 5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory Data Analysis
# +
import sys
ppath = "../.."
if ppath not in sys.path:
sys.path.append(ppath)
import pygeostatistics as pygs
# -
data = pygs.gslib_reader.SpatialData('../../testData/test.gslib')
data.mean
print(data.summary)
import matplotlib.pyplot as plt
# %matplotlib notebook
fig, ax = plt.subplots()
data.pdf(ax)
type(ax)
from mpl_toolkits.mplot3d import Axes3D
fig3d = plt.figure()
ax3d = fig3d.add_subplot(111, projection='3d')
sc = data.scatter(ax3d)
fig3d.colorbar(sc)
| docs/tutorials/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Resources
#
#
#
#
# ### Python
#
# [Python 3 Documentation](https://docs.python.org/3/library/)
#
#
# ### General
#
# [Stackoverflow](https://stackoverflow.com/)
#
# ### YouTube vids
#
# [<NAME>](https://www.youtube.com/channel/UCWr0mx597DnSGLFk1WfvSkQ)
#
# [PyCon 2019](https://www.youtube.com/channel/UCxs2IIVXaEHHA4BtTiWZ2mQ)
#
# [Tech With Tim](https://www.youtube.com/channel/UC4JX40jDee_tINbkjycV4Sg)
#
# [Python Programmer](https://www.youtube.com/user/consumerchampion)
#
# [sentdex](https://www.youtube.com/user/sentdex)
#
# ### Markdown links
# [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
#
# [Markdown Guide](https://www.markdownguide.org/)
#
# [Markdown Table Generator](https://www.tablesgenerator.com/markdown_tables)
#
#
# ### Code
#
#
# ```df.reset_index(level=None, drop=False, inplace=False, col_level=0, col_fill='')```
#
# ```df = pd.DataFrame(np.random.randint(500,4000,size=(200, 1)), columns=list('A'))```
#
# ```df['randNumCol'] = np.random.randint(1, 6, df.shape[0])```
#
# ```
# # Declare a list that is to be converted into a column
# tradcounthvac = xyzhvac.x.count()
# tradehvac = tradcounthvac * ['hvac']
#
# tradcountelec = xyzelec.x.count()
# tradeelec = tradcountelec * ['elec']
#
# # Using 'Trade' as the column name
# # and equating it to the list
# xyzhvac['Trade'] = tradehvac
# xyzelec['Trade'] = tradeelec
# ```
#
#
#
# ### Packages
#
#
# ```! pip install pandas-profiling```
#
# ```! pip install plotly```
#
# ```! pip install cufflinks```
#
# ```! pip install plotly==4.2.1```
#
# ```!pip install dovpanda```
#
# ```import numpy as np```
#
# ```import pandas as pd```
#
# ```import pandas_profiling```
#
# ```import plotly.graph_objects as go```
#
# ```import dovpanda```
#
# ### pandas
# [What can you do with the new ‘Pandas’?](https://towardsdatascience.com/what-can-you-do-with-the-new-pandas-2d24cf8d8b4b)
#
# [Reordering Pandas DataFrame Columns: Thumbs Down On Standard Solutions](https://towardsdatascience.com/reordering-pandas-dataframe-columns-thumbs-down-on-standard-solutions-1ff0bc2941d5)
#
# [pandas Documentation](https://pandas.pydata.org/pandas-docs/stable/)
#
# [7 practical pandas tips when you start working with the library](https://towardsdatascience.com/7-practical-pandas-tips-when-you-start-working-with-the-library-e4a9205eb443)
#
# [dataframe transpose](https://www.geeksforgeeks.org/python-pandas-dataframe-transpose/)
#
# [Combining DataFrames with Pandas](https://datacarpentry.org/python-ecology-lesson/05-merging-data/)
#
# [dovpanda](https://github.com/dovpanda-dev/dovpanda)
#
#
# [Selecting Subsets of Data in Pandas: Part 1](https://medium.com/dunder-data/selecting-subsets-of-data-in-pandas-6fcd0170be9c)
#
# [10 simple Python tips to speed up your data analysis](https://thenextweb.com/syndication/2020/10/12/10-simple-python-tips-to-speed-up-your-data-analysis/)
#
# [15 Tips and Tricks to use in Jupyter Notebooks](https://towardsdatascience.com/15-tips-and-tricks-to-use-jupyter-notebook-more-efficiently-ef05ede4e4b9)
#
# ```result = df.transpose() ```
#
# Handy functions to examine your data: ```df.head(), df.describe(), df.info(), df.shape, df.sum(), df['Trade'].value_counts()```
#
# Reports: ```pandas_profiling.ProfileReport(df)```
#
# Import: ```import pandas_profiling```
#
# Save a dataframe to a csv ```df.to_csv```
#
# Create a Pandas Dataframe ```df = pd.DataFrame(data)```
#
# Read a csv file ```pd.read_csv('')```
#
# Read an excel file ```pd.read_excel('')```
#
# All rows that have a sepal length greater than 6 are dangerous ```df['is_dangerous'] = np.where(df['sepal length (cm)']>6, 'yes', 'no')```
#
# Max columns option ```pd.set_option('display.max_columns', 500)```
#
# Max row option ```pd.set_option('display.max_rows', 500)```
#
# to see columns ```df.columns```
#
# replace strings ```df.columns = df.columns.str.replace(' \(cm\)', '').str.replace(' ', '_')```
#
# ### plotly
#
# [plotly Graphing Libraries](https://plot.ly/python/)
#
# [Different Colors for Bars in Barchart by their Value](https://community.plot.ly/t/different-colors-for-bars-in-barchart-by-their-value/6527)
#
#
#
# ### Scikit-Learn
#
# [A beginner’s guide to Linear Regression in Python with Scikit-Learn](https://towardsdatascience.com/a-beginners-guide-to-linear-regression-in-python-with-scikit-learn-83a8f7ae2b4f)
#
# ### Notes
# +
# Codewars
# https://www.codewars.com/kata/550f22f4d758534c1100025a/train/python
# Sum of Numbers
# -
# map, filter and reduce
# +
# Expected behaviour (Codewars test cases):
# dirReduc(["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"]) == ["WEST"]
# dirReduc(["NORTH", "SOUTH", "EAST", "WEST"]) == []
# dirReduc(["NORTH", "EAST", "WEST", "SOUTH", "WEST", "WEST"]) == ["WEST", "WEST"]
# -
# Scratch notes:
# get_sum(-100, 500)
# c = add(10)
c = [1, 2, 3]
def f(c):
    return sum(c)
f(c)
# range(start, stop[, step])
# ["NORTH", "WEST", "SOUTH", "EAST"]
l = ["NORTH", "SOUTH", "EAST", "WEST"]
def dirReduc(arr):
    temp_dir = ""
    for i, direction in enumerate(list(arr)):
        if len(arr) < 2:
            pass
        else:
            if direction != temp_dir:
                # direction != temp_dir is already known here, so two
                # directions on the same axis must be opposites and cancel
                if temp_dir in ("NORTH", "SOUTH") and direction in ("NORTH", "SOUTH"):
                    arr.pop(i)
                    arr.pop(i - 1)
                    print(arr)
                if temp_dir in ("WEST", "EAST") and direction in ("WEST", "EAST"):
                    try:
                        arr.pop(i)
                        arr.pop(i - 1)
                        print(arr)
                    except IndexError:
                        return arr
            else:
                pass
        temp_dir = direction
    return arr
myList = ["W"]
len(myList)
l = ["NORTH", "SOUTH", "EAST", "WEST"]
# dirReduc(["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"]) == ["WEST"]
# dirReduc(["NORTH", "SOUTH", "EAST", "WEST"]) == []
# dirReduc(["NORTH", "EAST", "WEST", "SOUTH", "WEST", "WEST"]) == ["WEST", "WEST"]
l = ["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"]
dirReduc(l)
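# The attempt above mutates `arr` while enumerating it, which shifts the
# indices mid-loop. A stack-based version of the same reduction (cancel each
# adjacent pair of opposite directions) avoids that entirely:

```python
OPPOSITE = {"NORTH": "SOUTH", "SOUTH": "NORTH", "EAST": "WEST", "WEST": "EAST"}

def dir_reduc(arr):
    stack = []
    for direction in arr:
        # an opposite pair on top of the stack cancels out
        if stack and stack[-1] == OPPOSITE[direction]:
            stack.pop()
        else:
            stack.append(direction)
    return stack

# dir_reduc(["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"]) -> ["WEST"]
```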
#myList.remove(myList[2]) # Removes first instance of "item" from myList
myList.pop(0) # Removes and returns the item at myList[0]; note that "1 and 2" evaluates to 2, not both indices
a = 1
b = 2
if a == b:
pass
else:
print(a)
[]
for i, item in enumerate(myList):
    print(i, item)
| Python Code Challenges/.ipynb_checkpoints/Directions_reduction-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome to Kijang Emas analysis!
#
# 
#
# I found out around last week (18th March 2019) that our Bank Negara opened public APIs for certain data. It was really cool, and I want to help people get around with the data and what they can actually do with it!
#
# We are going to cover 2 things here,
#
# 1. Data Analytics
# 2. Predictive Modelling (Linear regression, ARIMA, LSTM)
#
# Hell, I know nothing about Kijang Emas.
#
# **Again, do not use this code to buy something in the real world (if you get a positive return, please donate some to me)**
import requests
from datetime import date
# ## Data gathering
#
# To get the data is really simple, use this link to get kijang emas data, https://www.bnm.gov.my/kijang-emas-prices
#
# A rest API is available at https://api.bnm.gov.my/portal#tag/Kijang-Emas
#
# Now, I want to get data from January 2020 - March 2021.
#
# https://api.bnm.gov.my/portal#operation/KELatest
# latest https://api.bnm.gov.my/public/kijang-emas
requests.get('https://api.bnm.gov.my/public/kijang-emas',
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},).json()
# by month year https://api.bnm.gov.my/public/kijang-emas/year/{year}/month/{month}
month= 12
year = 2020
print ('https://api.bnm.gov.my/public/kijang-emas/year/{}/month/{}'.format(year,month))
res=requests.get('https://api.bnm.gov.my/public/kijang-emas/year/{}/month/{}'.format(year,month),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},).json()
res['meta']['total_result']
# 2020 data
data_2020 = []
for i in range(12):
res=requests.get('https://api.bnm.gov.my/public/kijang-emas/year/2020/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json()
print('https://api.bnm.gov.my/public/kijang-emas/year/2020/month/%d'%(i + 1),res['meta']['total_result'])
data_2020.append(res)
# 2021 data
data_2021 = []
for i in range(3):
res=requests.get('https://api.bnm.gov.my/public/kijang-emas/year/2021/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json()
print('https://api.bnm.gov.my/public/kijang-emas/year/2021/month/%d'%(i + 1),res['meta']['total_result'])
data_2021.append(res)
# #### Take a peek at our data ya
data_2020[6]['data'][:5]
# Again, I got zero knowledge on kijang emas and I don't really care about the value, and I don't know what the values represent.
#
# Now I want to parse `effective_date` and `selling` from `one_oz`.
# +
timestamp, selling = [], []
for month in data_2020 + data_2021:
for day in month['data']:
timestamp.append(day['effective_date'])
selling.append(day['one_oz']['selling'])
len(timestamp), len(selling)
# -
# Going to import matplotlib and seaborn for visualization. I really like seaborn because of the font and colors, that's all, hah!
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set()
plt.figure(figsize = (15, 5))
plt.plot(selling)
plt.xticks(np.arange(len(timestamp))[::15], timestamp[::15], rotation = '45')
plt.show()
# ## Perfect!
#
# So now let's we start our Data analytics.
# #### Distribution study
plt.figure(figsize = (15, 5))
sns.histplot(data=selling,stat='density', kde=True)
plt.show()
# Look at this, already normal distribution, coincidence? (I really want to show off [unit scaling](https://en.wikipedia.org/wiki/Feature_scaling) skills!)
#
# In case you are interested in [data normalization](https://towardsdatascience.com/all-kinds-of-cool-feature-scalers-537e54bc22ab), you have to understand scalars. The intention of a scaler is to lower the variance of the data in order to make most of the predictions lay in the area with the most data. There are many different scalers, which can boost your accuracy:
#
# ### Rescaler
#
# Rescaling, or min-max normalization uses the minimum and maximum values to scale an array.
#
# $$x'=\frac{x-\min(x)}{\max(x)-\min(x)}$$
#
# I haven’t really found it to be all that useful for machine-learning. I would say check it out only for the information and learning because this scalar typically throws estimations off and destroys accuracy in my experience. In one situation, I was able to use a rescaler as a min-max filter for bad data outputs on an endpoint. Though this certainly doesn’t cover the lost ground, I think that it was definitely a cool use for it.
def rescaler(x):
return (x-x.min())/(x.max()-x.min())
plt.figure(figsize = (15, 5))
sns.histplot(rescaler(np.array(selling)),stat='density')
plt.show()
# ### Mean Normalization
#
# Mean Normalization is exactly what it sounds like, normalizing the data based on the mean. This one certainly could be useful, the only issue is that typically a z-score scalar does a lot better at normalizing the data than a mean normalizer.
#
# $$x'=\frac{x-mean(x)}{\max(x)-\min(x)}$$
#
# I haven’t used this one particularly that much, just as typically it returns a lower accuracy score than a standard scaler.
def mean_norm(x):
return (x-x.mean())/(x.max()-x.min())
plt.figure(figsize = (15, 5))
sns.histplot(mean_norm(np.array(selling)),stat='density')
plt.show()
# ### Arbitrary Rescale
#
# $$x'=a+\frac{(x-\min(x))(b-a)}{\max(x)-\min(x)}$$
#
# Arbitrary Rescale generalises min-max scaling to an arbitrary target range $[a, b]$. It is particularly useful when you have a small quartile gap, meaning that the median isn't far from the minimum or the maximum values.
def arb_rescaler(x, a=0, b=1):
    # rescale x into an arbitrary target range [a, b]
    return a + ((x - x.min()) * (b - a)) / (x.max() - x.min())
plt.figure(figsize = (15, 5))
sns.histplot(arb_rescaler(np.array(selling), a=0, b=10), stat='density')
plt.show()
# ### Standard Scaler
#
# A Standard Scaler, also known as z-score normalizer, is likely the best go-to for scaling continuous features. The idea behind StandardScaler is that it will transform your data such that its distribution will have a mean value 0 and standard deviation of 1.
#
# $$x'=\frac{x-\bar{x}}{\sigma}$$
#
# If you ever need an accuracy boost, this is the way to do it. I’ve used Standard Scalers a lot, probably everyday I use one at some point. For me, Standard Scaling has been the most useful out of all of the scalars, as it is for most people.
def standard_scaler(x):
return (x-x.mean())/(x.std())
plt.figure(figsize = (15, 5))
sns.histplot(standard_scaler(np.array(selling)),stat='density')
plt.show()
# ### Unit Length Scalar
#
# Another option we have on the machine-learning front is scaling to unit length. When scaling to vector unit length, we transform the components of a feature vector so that the transformed vector has a length of 1, or in other words, a norm of 1.
#
# $$x'=\frac{x}{||x||}$$
#
# There are different ways to define “length” such as as l1 or l2-normalization. If you use l2-normalization, “unit norm” essentially means that if we squared each element in the vector, and summed them, it would equal 1. While in L1 normalization we normalize each element in the vector, so the absolute value of each element sums to 1.
#
# Scaling to unit length can offer a similar result to z-score normalization, and I have certainly found it pretty useful. Unit Length Scalars use Euclidean distance on the denominator. Overall Unit Length Scaling can be very useful towards boosting your model’s accuracy.
#
# So given a matrix X, where the rows represent samples and the columns represent features of the sample, you can apply l2-normalization to normalize each row to a unit norm. This can be done easily in Python using sklearn.
from sklearn import preprocessing
# +
def unit_length_scaler_l2(x):
return preprocessing.normalize(np.expand_dims(x, axis=0), norm='l2')[0]
print(np.sum(unit_length_scaler_l2(np.array(selling, dtype=float)) ** 2, axis=0))
plt.figure(figsize = (15, 5))
sns.histplot(unit_length_scaler_l2(np.array(selling, dtype=float)), stat='density')
plt.show()
# +
def unit_length_scaler_l1(x):
return preprocessing.normalize(np.expand_dims(x, axis=0), norm='l1')[0]
print(np.sum(np.abs(unit_length_scaler_l1(np.array(selling, dtype=float))), axis=0))
plt.figure(figsize = (15, 5))
sns.histplot(unit_length_scaler_l1(np.array(selling, dtype=float)), stat='density')
plt.show()
# -
# Now let's convert our data into pandas, for lagging analysis.
import pandas as pd
df = pd.DataFrame({'timestamp':timestamp, 'selling':selling})
df.head()
def df_shift(df, lag = 0, start = 1, skip = 1, rejected_columns = []):
df = df.copy()
if not lag:
return df
cols = {}
for i in range(start, lag + 1, skip):
for x in list(df.columns):
if x not in rejected_columns:
if not x in cols:
cols[x] = ['{}_{}'.format(x, i)]
else:
cols[x].append('{}_{}'.format(x, i))
for k, v in cols.items():
columns = v
dfn = pd.DataFrame(data = None, columns = columns, index = df.index)
i = start - 1
for c in columns:
dfn[c] = df[k].shift(periods = i)
i += skip
df = pd.concat([df, dfn], axis = 1).reindex(df.index)
return df
# **Shifted and moving average are not the same.**
df_crosscorrelated = df_shift(
df, lag = 12, start = 4, skip = 2, rejected_columns = ['timestamp']
)
df_crosscorrelated['ma7'] = df_crosscorrelated['selling'].rolling(7).mean()
df_crosscorrelated['ma14'] = df_crosscorrelated['selling'].rolling(14).mean()
df_crosscorrelated['ma21'] = df_crosscorrelated['selling'].rolling(21).mean()
# ## Why do we lag or shift by certain units?
#
# Virality takes some time, impacts take some time, and the same goes for price per lot / unit.
#
# Now I want to `lag` until 12 units, `start` at 4 units shifted, and `skip` every 2 units.
df_crosscorrelated.head(21)
plt.figure(figsize = (20, 4))
plt.subplot(1, 3, 1)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_4'])
mse = (
(df_crosscorrelated['selling_4'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 4, average change: %f'%(mse))
plt.subplot(1, 3, 2)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_8'])
mse = (
(df_crosscorrelated['selling_8'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 8, average change: %f'%(mse))
plt.subplot(1, 3, 3)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_12'])
mse = (
(df_crosscorrelated['selling_12'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 12, average change: %f'%(mse))
plt.show()
# MSE keeps increasing and increasing!
plt.figure(figsize = (10, 5))
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_4'],
label = 'close vs shifted 4',
)
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_8'],
label = 'close vs shifted 8',
)
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_12'],
label = 'close vs shifted 12',
)
plt.legend()
plt.show()
fig, ax = plt.subplots(figsize = (15, 5))
df_crosscorrelated.plot(
x = 'timestamp', y = ['selling', 'ma7', 'ma14', 'ma21'], ax = ax
)
plt.xticks(np.arange(len(timestamp))[::10], timestamp[::10], rotation = '45')
plt.show()
# As you can see, even the 7-day moving average already fails to follow sudden trends (blue line), which means the **dilation rate required is less than 7 days — so fast!**
#
# #### How about correlation?
#
# We want to study the linear relationship between days: how many days are required to have an impact on future sold units?
# +
colormap = plt.cm.RdBu
plt.figure(figsize = (15, 5))
plt.title('cross correlation', y = 1.05, size = 16)
sns.heatmap(
df_crosscorrelated.iloc[:, 1:].corr(),
linewidths = 0.1,
vmax = 1.0,
cmap = colormap,
linecolor = 'white',
annot = True,
)
plt.show()
# -
# Based on this correlation map, look at selling vs selling_X:
#
# **the correlation of selling_X from 4 to 12 keeps getting lower, which means that if today's mean is 50, the mean 4 days ahead should be about 0.95 * 50, and so on.**
# #### Outliers
#
# Simple, we can use Z-score to detect outliers, which timestamps gave very uncertain high and low value.
std_selling = (selling - np.mean(selling)) / np.std(selling)
def detect(signal, threshold = 2.0):
    detected = []
    for i in range(len(signal)):
        if np.abs(signal[i]) > threshold:
            detected.append(i)
    return detected
# Based on the z-score table, 2.0 is already positioned at about 97.72% of the population.
#
# https://d2jmvrsizmvf4x.cloudfront.net/6iEAaVSaT3aGP52HMzo3_z-score-02.png
outliers = detect(std_selling)
plt.figure(figsize = (15, 7))
plt.plot(selling)
plt.plot(
np.arange(len(selling)),
selling,
'X',
label = 'outliers',
markevery = outliers,
c = 'r',
)
plt.legend()
plt.show()
# We can see that **we have both positive and negative outliers**. What happened in our local market on those days? We should study sentiment in local news to do a risk analysis.
# # Give us predictive modelling!
#
# Okay okay.
# ## Predictive modelling
#
# Like I said, I want to compare 3 models:
#
# 1. Linear regression
# 2. ARIMA
# 3. LSTM Tensorflow (sorry Pytorch, not used to it)
#
# Which models give the best accuracy and lowest error rate?
#
# **I want to split first timestamp 80% for train, another 20% timestamp for test.**
from sklearn.linear_model import LinearRegression
train_selling = selling[: int(0.8 * len(selling))]
test_selling = selling[int(0.8 * len(selling)) :]
# Beware of `:`!
future_count = len(test_selling)
future_count
# Our model should forecast 61 future days ahead.
# #### Linear regression
# %%time
linear_regression = LinearRegression().fit(
np.arange(len(train_selling)).reshape((-1, 1)), train_selling
)
linear_future = linear_regression.predict(
np.arange(len(train_selling) + future_count).reshape((-1, 1))
)
# Took me 594 us to train linear regression from sklearn. Very quick!
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = 'full trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
# Oh no, a linear fit says the trend is going down!
# #### ARIMA
#
# Stands for Auto-Regressive Integrated Moving Average.
#
# There are 3 important parameters in ARIMA(p, d, q). You can read more about `p`, `d` and `q` on Wikipedia, https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average.
#
# `p` for the order (number of time lags).
#
# `d` for degree of differencing.
#
# `q` for the order of the moving-average.
#
# Or,
#
# `p` is how far back we look (the number of lagged observations used).
#
# `d` is how many times the series is differenced before modelling.
#
# `q` is how many lagged forecast errors enter the moving-average part.
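# To make the role of `d` concrete: differencing subtracts consecutive observations, which removes a trend from the series. A minimal sketch with made-up numbers (not from this data set):

```python
import numpy as np

series = np.array([10, 13, 17, 22, 28])  # upward-trending toy series
diff_1 = np.diff(series, n=1)            # first differences (d = 1)
diff_2 = np.diff(series, n=2)            # second differences (d = 2)
print(diff_1)  # [3 4 5 6]
print(diff_2)  # [1 1 1]
```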
# +
import statsmodels.api as sm
from sklearn.preprocessing import MinMaxScaler
from itertools import product
Qs = range(0, 2)
qs = range(0, 2)
Ps = range(0, 2)
ps = range(0, 2)
D = 1
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
# -
# One practical issue with ARIMA is that very large values cause numerical trouble, so we scale the series first; the simplest option is min-max scaling.
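# Min-max scaling maps the series into [0, 1] via x' = (x - min) / (max - min); a plain-numpy sketch of what `MinMaxScaler` computes, on made-up values:

```python
import numpy as np

values = np.array([30.0, 45.0, 60.0, 90.0])  # hypothetical prices
scaled = (values - values.min()) / (values.max() - values.min())
print(scaled)
```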
minmax = MinMaxScaler().fit(np.array([train_selling]).T)
minmax_values = minmax.transform(np.array([train_selling]).T)
# Now, using a naive grid search over the parameter combinations, let's find which pair of parameters is best. **Lower AIC is better!**
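# For reference, AIC = 2k - 2 ln(L), where k is the number of estimated parameters and L the maximised likelihood; the grid search below simply keeps the candidate with the smallest value. A toy illustration with hypothetical numbers:

```python
def aic(num_params, log_likelihood):
    # AIC = 2k - 2 * ln(L); smaller means a better fit/complexity trade-off
    return 2 * num_params - 2 * log_likelihood

# hypothetical candidates: (k, maximised log-likelihood)
candidates = {'ARIMA(1,1,0)': (2, 120.0), 'ARIMA(2,1,2)': (5, 121.5)}
scores = {name: aic(k, ll) for name, (k, ll) in candidates.items()}
best = min(scores, key=scores.get)
print(scores, best)  # the simpler model wins despite the slightly lower likelihood
```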
# +
best_aic = float('inf')
for param in parameters_list:
try:
model = sm.tsa.statespace.SARIMAX(
minmax_values[:, 0],
order = (param[0], D, param[1]),
seasonal_order = (param[2], D, param[3], future_count),
).fit(disp = -1)
except Exception as e:
print(e)
continue
aic = model.aic
print(aic)
    if aic < best_aic:
best_model = model
best_aic = aic
print(best_model.specification)
print(best_model.model_orders)
arima_future = best_model.get_prediction(
start = 0, end = len(train_selling) + (future_count - 1)
)
arima_future = minmax.inverse_transform(
np.expand_dims(arima_future.predicted_mean, axis = 1)
)[:, 0]
# -
# ### Auto-ARIMA
# https://towardsdatascience.com/time-series-forecasting-using-auto-arima-in-python-bb83e49210cd
#
# Usually, in the basic ARIMA model, we need to provide the essential p, d and q values ourselves. We generate these values with statistical techniques: differencing to eliminate non-stationarity and plotting the ACF and PACF graphs. In Auto-ARIMA, the model itself searches for the optimal p, d and q values for the data set to provide better forecasting.
from pmdarima.arima import auto_arima
# #### Test for Stationarity
#
# Stationarity is an important concept in time-series and any time-series data should undergo a stationarity test before proceeding with a model.
#
# We use the ‘Augmented Dickey-Fuller Test’ to check whether the data is stationary or not which is available in the ‘pmdarima’ package.
from pmdarima.arima import ADFTest
adf_test = ADFTest(alpha = 0.05)
adf_test.should_diff(np.array(train_selling))
# From the above, we can conclude that the data is stationary. Hence, we would not need to use the “Integrated (I)” concept, denoted by value ‘d’ in time series to make the data stationary while building the Auto ARIMA model.
# #### Building Auto ARIMA model
#
# In the Auto ARIMA model, note that small p, d, q values represent non-seasonal components and capital P, D, Q represent seasonal components. It works similarly to hyperparameter tuning: different combinations of p, d and q are tried, and the final values are determined by the lowest AIC and BIC.
#
# Here, we are trying with the p, d, q values ranging from 0 to 5 to get better optimal values from the model. There are many other parameters in this model and to know more about the functionality, visit this link [here](https://alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html)
auto_arima_model = auto_arima(
    train_selling, start_p = 0, d = 1, start_q = 0,
    D = 1, start_Q = 0, max_P = 5, max_d = 5, max_Q = 5,
    m = 12, seasonal = True, error_action = 'warn', trace = True,
    suppress_warnings = True, stepwise = True, random_state = 20, n_fits = 50,
)
auto_arima_model.summary()
# In the basic ARIMA or SARIMA model, you need to perform differencing and plot ACF and PACF graphs to determine these values, which is time-consuming.
#
# However, if you are new to time series, it is still advisable to implement the basic ARIMA model with those statistical techniques to build intuition for the p, d and q values.
# #### Forecasting on the test data
#
# Using the trained model which was built in the earlier step to forecast the sales on the test data.
auto_arima_future = list(train_selling)  # copy, so that train_selling is not mutated by extend()
auto_arima_future.extend(auto_arima_model.predict(n_periods = len(test_selling)))
# +
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = 'full trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
ax.plot(auto_arima_future, label = 'forecast auto ARIMA')
ax.plot(train_selling, label = '80% train trend')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
# -
# Perfect!
#
# Now only one model is left:
#
# #### RNN + LSTM
import tensorflow as tf
class Model:
def __init__(
self,
learning_rate,
num_layers,
size,
size_layer,
output_size,
forget_bias = 0.1,
):
def lstm_cell(size_layer):
return tf.nn.rnn_cell.LSTMCell(size_layer, state_is_tuple = False)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell(
[lstm_cell(size_layer) for _ in range(num_layers)],
state_is_tuple = False,
)
self.X = tf.placeholder(tf.float32, (None, None, size))
self.Y = tf.placeholder(tf.float32, (None, output_size))
drop = tf.contrib.rnn.DropoutWrapper(
rnn_cells, output_keep_prob = forget_bias
)
self.hidden_layer = tf.placeholder(
tf.float32, (None, num_layers * 2 * size_layer)
)
self.outputs, self.last_state = tf.nn.dynamic_rnn(
drop, self.X, initial_state = self.hidden_layer, dtype = tf.float32
)
self.logits = tf.layers.dense(self.outputs[-1], output_size)
self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
self.cost
)
# **Naively defined neural network parameters, no grid search here. These parameters came from my dream, believe me :)**
num_layers = 1
size_layer = 128
epoch = 500
dropout_rate = 0.6
skip = 10
# The same goes for the LSTM: we need to scale our values because the LSTM uses sigmoid and tanh activations in the feed-forward pass, and we don't want vanishing gradients during backpropagation.
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
df_log.head()
tf.reset_default_graph()
modelnn = Model(
learning_rate = 0.001,
num_layers = num_layers,
size = df_log.shape[1],
size_layer = size_layer,
output_size = df_log.shape[1],
forget_bias = dropout_rate
)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# +
# %%time
for i in range(epoch):
init_value = np.zeros((1, num_layers * 2 * size_layer))
total_loss = 0
for k in range(0, df_log.shape[0] - 1, skip):
index = min(k + skip, df_log.shape[0] -1)
batch_x = np.expand_dims(
df_log.iloc[k : index, :].values, axis = 0
)
batch_y = df_log.iloc[k + 1 : index + 1, :].values
last_state, _, loss = sess.run(
[modelnn.last_state, modelnn.optimizer, modelnn.cost],
feed_dict = {
modelnn.X: batch_x,
modelnn.Y: batch_y,
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
total_loss += loss
total_loss /= ((df_log.shape[0] - 1) / skip)
if (i + 1) % 100 == 0:
print('epoch:', i + 1, 'avg loss:', total_loss)
# +
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
future_day = future_count
output_predict = np.zeros((df_log.shape[0] + future_day, df_log.shape[1]))
output_predict[0] = df_log.iloc[0]
upper_b = (df_log.shape[0] // skip) * skip
init_value = np.zeros((1, num_layers * 2 * size_layer))
for k in range(0, (df_log.shape[0] // skip) * skip, skip):
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(
df_log.iloc[k : k + skip], axis = 0
),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[k + 1 : k + skip + 1] = out_logits
if upper_b < df_log.shape[0]:
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(df_log.iloc[upper_b:], axis = 0),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[upper_b + 1 : df_log.shape[0] + 1] = out_logits
df_log.loc[df_log.shape[0]] = out_logits[-1]
future_day = future_day - 1
for i in range(future_day):
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(df_log.iloc[-skip:], axis = 0),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[df_log.shape[0]] = out_logits[-1]
df_log.loc[df_log.shape[0]] = out_logits[-1]
# -
df_log = minmax.inverse_transform(output_predict)
lstm_future = df_log[:,0]
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = 'full trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
ax.plot(lstm_future, label='forecast lstm')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
from sklearn.metrics import r2_score
from scipy.stats import pearsonr, spearmanr
# Accuracy based on correlation coefficient, **higher is better!**
def calculate_accuracy(real, predict):
r2 = r2_score(real, predict)
if r2 < 0:
r2 = 0
def change_percentage(val):
        # correlation lies in [-1, 1]; shift negative values up by 1 so the result is in (0, 1]
if val > 0:
return val
else:
return val + 1
pearson = pearsonr(real, predict)[0]
spearman = spearmanr(real, predict)[0]
pearson = change_percentage(pearson)
spearman = change_percentage(spearman)
return {
'r2': r2 * 100,
'pearson': pearson * 100,
'spearman': spearman * 100,
}
# Distance error for mse and rmse, **lower is better!**
def calculate_distance(real, predict):
mse = ((real - predict) ** 2).mean()
rmse = np.sqrt(mse)
return {'mse': mse, 'rmse': rmse}
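# As a sanity check, here are the two metrics computed by hand on toy arrays (hypothetical values, not from this data set):

```python
import numpy as np

real = np.array([1.0, 2.0, 3.0])
predict = np.array([1.0, 2.0, 4.0])
mse = ((real - predict) ** 2).mean()   # mean of squared errors: (0 + 0 + 1) / 3
rmse = np.sqrt(mse)                    # same units as the original series
print(mse, rmse)
```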
# #### Now let's check distance error using Mean Square Error and Root Mean Square Error
#
# Validating based on 80% training timestamps
linear_cut = linear_future[: len(train_selling)]
arima_cut = arima_future[: len(train_selling)]
lstm_cut = lstm_future[: len(train_selling)]
# Linear regression
calculate_distance(train_selling, linear_cut)
calculate_accuracy(train_selling, linear_cut)
# ARIMA
calculate_distance(train_selling, arima_cut)
calculate_accuracy(train_selling, arima_cut)
# LSTM
calculate_distance(train_selling, lstm_cut)
calculate_accuracy(train_selling, lstm_cut)
# **The LSTM learns better during the training session!**
#
# How about another 20%?
linear_cut = linear_future[len(train_selling) :]
arima_cut = arima_future[len(train_selling) :]
lstm_cut = lstm_future[len(train_selling) :]
# Linear regression
calculate_distance(test_selling, linear_cut)
calculate_accuracy(test_selling, linear_cut)
# ARIMA
calculate_distance(test_selling, arima_cut)
calculate_accuracy(test_selling, arima_cut)
# LSTM
calculate_distance(test_selling, lstm_cut)
calculate_accuracy(test_selling, lstm_cut)
# **LSTM is the best model based on testing!**
#
# Deep learning won again!
# I guess that's all for now. **Again, do not use these models to buy any stocks or trade on any trends!**
| misc/kijang-emas-bank-negara.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment on prime numbers day 3
#
# start from 2: by definition, 1 is not a prime number
for num in range(2, 200):
    for i in range(2, num):
        if (num % i) == 0:
            break
    else:
        # no divisor was found, so num is prime
        print(num)
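# The loop above tries every divisor below `num`; trial division only needs to go up to the square root of `num`, since any factor above it pairs with one below it. A sketch of the faster check:

```python
def is_prime(n):
    # trial division up to the integer square root of n
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print([n for n in range(1, 30) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```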
# # pilot assignment
#
# keep asking for the distance until the aircraft is cleared to land
while True:
    distance = int(input("enter distance from above:"))
    if distance == 1000:
        print("ready to land")
        break
    elif 1000 < distance < 5000:
        print("come down to 1000")
    else:
        # "phir kabhi aana" means "come again some other time"
        print("phir kabhi aana...GO AROUND TRY LATER")
| Assignment day 3 both.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from pyqubo import Array, Placeholder, solve_qubo, Constraint, Sum
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
# ## Traveling Salesman Problem (TSP)
#
# Find the shortest route that visits each city and returns to the origin city.
# +
def plot_city(cities, sol = {}):
n_city = len(cities)
cities_dict = dict(cities)
G = nx.Graph()
for city in cities_dict:
G.add_node(city)
# draw path
if sol:
city_order = []
for i, v in sol.items():
for j, v2 in v.items():
if v2 == 1:
city_order.append(j)
for i in range(n_city):
city_index1 = city_order[i]
city_index2 = city_order[(i+1) % n_city]
G.add_edge(cities[city_index1][0], cities[city_index2][0])
plt.figure(figsize=(3,3))
pos = nx.spring_layout(G)
nx.draw_networkx(G, cities_dict)
plt.axis("off")
plt.show()
def dist(i, j, cities):
pos_i = cities[i][1]
pos_j = cities[j][1]
return np.sqrt((pos_i[0] - pos_j[0])**2 + (pos_i[1] - pos_j[1])**2)
# -
# City names and coordinates list[("name", (x, y))]
cities = [
("a", (0, 0)),
("b", (1, 3)),
("c", (3, 2)),
("d", (2, 1)),
("e", (0, 1))
]
plot_city(cities)
# Prepare a binary array where bit $(i, j)$ represents visiting city $j$ at time $i$
n_city = len(cities)
x = Array.create('c', (n_city, n_city), 'BINARY')
# +
# Constraint: visit exactly one city at each time step.
time_const = 0.0
for i in range(n_city):
    # Wrapping the hamiltonian term in Constraint(...) marks it as a constraint for the compiler
time_const += Constraint((Sum(0, n_city, lambda j: x[i, j]) - 1)**2, label="time{}".format(i))
# Constraint: visit each city exactly once.
city_const = 0.0
for j in range(n_city):
city_const += Constraint((Sum(0, n_city, lambda i: x[i, j]) - 1)**2, label="city{}".format(j))
# -
# distance of route
distance = 0.0
for i in range(n_city):
for j in range(n_city):
for k in range(n_city):
d_ij = dist(i, j, cities)
distance += d_ij * x[k, i] * x[(k+1)%n_city, j]
# Construct hamiltonian
A = Placeholder("A")
H = distance + A * (time_const + city_const)
# Compile model
model = H.compile()
# Generate QUBO
feed_dict = {'A': 4.0}
qubo, offset = model.to_qubo(feed_dict=feed_dict)
sol = solve_qubo(qubo)
solution, broken, energy = model.decode_solution(sol, vartype="BINARY", feed_dict=feed_dict)
print("number of broken constraints = {}".format(len(broken)))
if len(broken) == 0:
plot_city(cities, solution["c"])
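# For a problem this small (5 cities), the QUBO result can be cross-checked by brute force: fix the start city and enumerate the 4! = 24 orderings of the remaining cities. A sketch reusing the same coordinates:

```python
import itertools, math

cities = {'a': (0, 0), 'b': (1, 3), 'c': (3, 2), 'd': (2, 1), 'e': (0, 1)}

def tour_length(order):
    # total closed-tour distance over consecutive city pairs
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

rest = [c for c in cities if c != 'a']
best = min((('a',) + perm for perm in itertools.permutations(rest)), key=tour_length)
print(best, round(tour_length(best), 3))
```

Because all five points lie in convex position, the optimal tour follows the convex hull; the QUBO solution, if no constraints are broken, should match this length.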
| notebooks/TSP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Root To Node Path In Binary Tree
#
# +
class BinaryTreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
def search(root, x):
    # assumes a BST: go left for smaller keys, right for larger ones
    if root == None:
        return False
    if root.data == x:
        return True
    elif root.data > x:
        return search(root.left, x)
    else:
        return search(root.right, x)
def printTreeDetailed(root):
if root==None:
return
print(root.data,end=":")
if root.left!=None:
print("L",root.left.data,end=",")
if root.right!=None:
print("R",root.right.data,end="")
print()
printTreeDetailed(root.left)
printTreeDetailed(root.right)
import queue
def takeTreeInputLevelWise():
q=queue.Queue()
print("Enter root")
rootData=int(input())
if rootData==-1:
return None
root=BinaryTreeNode(rootData)
q.put(root)
while(not(q.empty())):
current_node=q.get()
print("Enter left child of",current_node.data)
leftChildData=int(input())
if leftChildData!=-1:
leftChild=BinaryTreeNode(leftChildData)
current_node.left=leftChild
q.put(leftChild)
print("Enter right child of",current_node.data)
rightChildData=int(input())
if rightChildData!=-1:
rightChild=BinaryTreeNode(rightChildData)
current_node.right=rightChild
q.put(rightChild)
return root
def nodeToRootPath(root,s):
if root==None:
return None
if root.data==s:
l=list()
l.append(root.data)
return l
leftOutput=nodeToRootPath(root.left,s)
if leftOutput!=None:
leftOutput.append(root.data)
return leftOutput
rightOutput=nodeToRootPath(root.right,s)
if rightOutput!=None:
rightOutput.append(root.data)
return rightOutput
else:
return None
root=takeTreeInputLevelWise()
printTreeDetailed(root)
nodeToRootPath(root,5)
# -
# # 1. Find path in BST
# Given a BST and an integer k. Find and return the path from the node with data k and root (if a node with data k is present in given BST) in a list. Return empty list otherwise.
# Note: Assume that BST contains all unique elements.
# Input Format :
# The first line of input contains data of the nodes of the tree in level order form. The data of the nodes of the tree is separated by space. If any node does not have left or right child, take -1 in its place. Since -1 is used as an indication whether the left or right nodes exist, therefore, it will not be a part of the data of any node.
# The following line of input contains an integer, that denotes the value of k.
# Output Format :
# The first line and only line of output prints the data of the nodes in the path from node k to root. The data of the nodes is separated by single space.
# Constraints:
# Time Limit: 1 second
# Sample Input 1:
# 8 5 10 2 6 -1 -1 -1 -1 -1 7 -1 -1
# 2
# Sample Output 1:
# 2 5 8
# +
import queue
class BinaryTreeNode:
def __init__(self, data):
self.data = data
self.left = None
self.right = None
list_=[]
idPresent=False
def findPathBST(root,data):
global idPresent
if root is None:
if not idPresent:
return [],False
return list_,True
list_.append(root.data)
if root.data>data:
return findPathBST(root.left,data)
if root.data<data:
return findPathBST(root.right,data)
if root.data==data:
idPresent=True
return list_[::-1],idPresent
def buildLevelTree(levelorder):
index = 0
length = len(levelorder)
if length<=0 or levelorder[0]==-1:
return None
root = BinaryTreeNode(levelorder[index])
index += 1
q = queue.Queue()
q.put(root)
while not q.empty():
currentNode = q.get()
leftChild = levelorder[index]
index += 1
if leftChild != -1:
leftNode = BinaryTreeNode(leftChild)
currentNode.left =leftNode
q.put(leftNode)
rightChild = levelorder[index]
index += 1
if rightChild != -1:
rightNode = BinaryTreeNode(rightChild)
currentNode.right =rightNode
q.put(rightNode)
return root
# Main
levelOrder = [int(i) for i in input().strip().split()]
root = buildLevelTree(levelOrder)
data = int(input())
path = findPathBST(root,data)[0]
if path is not None:
for ele in path:
print(ele,end=' ')
# -
# # Structure Of BST Class
#
# +
class BinaryTreeNode:
def __init__(self,data):
self.data=data;
self.left=None
self.right=None
class BST:
def __init__(self):
self.root=None
self.numNodes=0
def printTree(self):
return
def isPresent(self,data):
return False
def insert(self,data):
return
def deleteData(self):
return False
def count(self):
return 0
b=BST()
b.insert(10)
b.insert(5)
b.insert(12)
print(b.isPresent(10))
print(b.isPresent(7))
print(b.isPresent(4))
print(b.isPresent(10))
print(b.count())
b.printTree()
# -
# # BST Class - Search & Print
#
# +
class BinaryTreeNode:
def __init__(self,data):
self.data=data;
self.left=None
self.right=None
class BST:
def __init__(self):
self.root=None
self.numNodes=0
    def printTreeHelper(self, root):
        if root == None:
            return
        print(root.data, end = ":")
        if root.left != None:
            print("L", root.left.data, end = ",")
        if root.right != None:
            print("R", root.right.data, end = "")
        print()
        self.printTreeHelper(root.left)
        self.printTreeHelper(root.right)
    def printTree(self):
        self.printTreeHelper(self.root)
    def isPresentHelper(self, root, data):
        if root == None:
            return False
        if root.data == data:
            return True
        if root.data > data:
            # search in the left subtree
            return self.isPresentHelper(root.left, data)
        else:
            # search in the right subtree
            return self.isPresentHelper(root.right, data)
    def isPresent(self, data):
        return self.isPresentHelper(self.root, data)
def insert(self,data):
return
def deleteData(self):
return False
def count(self):
return 0
b=BST()
b.insert(10)
b.insert(5)
b.insert(12)
print(b.isPresent(10))
print(b.isPresent(7))
print(b.isPresent(4))
print(b.isPresent(10))
print(b.count())
b.printTree()
# -
# # BST Class
# Implement the BST class which includes following functions -
# 1. search
# Given an element, find if that is present in BST or not. Return true or false.
# 2. insert -
# Given an element, insert that element in the BST at the correct position. If element is equal to the data of the node, insert it in the left subtree.
# 3. delete -
# Given an element, remove that element from the BST. If the element which is to be deleted has both children, replace that with the minimum element from right sub-tree.
# 4. printTree (recursive) -
# Print the BST in ithe following format -
# For printing a node with data N, you need to follow the exact format -
# N:L:x,R:y
# where, N is data of any node present in the binary tree. x and y are the values of left and right child of node N. Print the children only if it is not null.
# There is no space in between.
# You need to print all nodes in the recursive format in different lines.
#
#
# +
class BinaryTreeNode:
def __init__(self, data):
self.data = data
self.left = None
self.right = None
class BST:
def __init__(self):
self.root = None
self.numNodes = 0
def printTreeHelper(self,root):
if root==None:
return
print(root.data,end=":")
if root.left!=None:
print("L:%d"%root.left.data,end=",")
if root.right!=None:
print("R:%d"%root.right.data,end="")
print()
self.printTreeHelper(root.left)
self.printTreeHelper(root.right)
def printTree(self):
self.printTreeHelper(self.root)
def isPresentHelper(self,root,data):
if root==None:
return False
if root.data==data:
return True
if root.data>data:
#call on left
return self.isPresentHelper(root.left,data)
else:
#Call on right
return self.isPresentHelper(root.right,data)
def search(self,data):
return self.isPresentHelper(self.root,data)
def insertHelper(self,root,data):
if root==None:
node=BinaryTreeNode(data)
return node
if root.data>=data:
root.left=self.insertHelper(root.left,data)
return root
else:
root.right=self.insertHelper(root.right,data)
return root
def insert(self,data):
self.numNodes+=1
self.root=self.insertHelper(self.root,data)
def min(self,root):
if root==None:
return 10000
if root.left==None:
return root.data
return self.min(root.left)
def deleteDataHelper(self,root,data):
if root==None:
return False, None
if root.data<data:
deleted,newRightNode=self.deleteDataHelper(root.right,data)
root.right=newRightNode
return deleted,root
if root.data>data:
deleted,newLeftNode=self.deleteDataHelper(root.left,data)
root.left=newLeftNode
return deleted,root
#root is leaf
if root.left==None and root.right==None:
return True, None
# root has one child
if root.left==None:
return True,root.right
if root.right==None:
return True,root.left
#root has 2 children
replacement=self.min(root.right)
root.data=replacement
deleted,newRightNode=self.deleteDataHelper(root.right,replacement)
root.right=newRightNode
return True,root
def delete(self,data):
deleted,newRoot=self.deleteDataHelper(self.root,data)
if deleted:
self.numNodes-=1
self.root=newRoot
return deleted
def count(self):
return self.numNodes
b = BST()
q = int(input())
while (q > 0) :
li = [int(ele) for ele in input().strip().split()]
choice = li[0]
q-=1
if choice == 1:
data = li[1]
b.insert(data)
elif choice == 2:
data = li[1]
b.delete(data)
elif choice == 3:
data = li[1]
ans = b.search(data)
if ans is True:
print('true')
else:
print('false')
else:
b.printTree()
# -
# # Insert In BST
#
# +
class BinaryTreeNode:
def __init__(self,data):
self.data=data;
self.left=None
self.right=None
class BST:
def __init__(self):
self.root=None
self.numNodes=0
    def printTreeHelper(self, root):
        if root == None:
            return
        print(root.data, end = ":")
        if root.left != None:
            print("L", root.left.data, end = ",")
        if root.right != None:
            print("R", root.right.data, end = "")
        print()
        self.printTreeHelper(root.left)
        self.printTreeHelper(root.right)
    def printTree(self):
        self.printTreeHelper(self.root)
    def isPresentHelper(self, root, data):
        if root == None:
            return False
        if root.data == data:
            return True
        if root.data > data:
            # search in the left subtree
            return self.isPresentHelper(root.left, data)
        else:
            # search in the right subtree
            return self.isPresentHelper(root.right, data)
    def isPresent(self, data):
        return self.isPresentHelper(self.root, data)
def insertHelper(self,root,data):
if root==None:
node=BinaryTreeNode(data)
return node
if root.data>data:
root.left=self.insertHelper(root.left,data)
return root
else:
root.right=self.insertHelper(root.right,data)
return root
def insert(self,data):
self.numNodes+=1
self.root=self.insertHelper(self.root,data)
def deleteData(self):
return False
def count(self):
return self.numNodes
b=BST()
b.insert(10)
b.insert(5)
b.insert(12)
print(b.isPresent(10))
print(b.isPresent(7))
print(b.isPresent(4))
print(b.isPresent(10))
print(b.count())
b.printTree()
# -
# # Delete In BST - Code
#
# +
class BinaryTreeNode:
def __init__(self,data):
self.data=data;
self.left=None
self.right=None
class BST:
def __init__(self):
self.root=None
self.numNodes=0
def printTreeHelper(self,root):
if root==None:
return
print(root.data,end=":")
if root.left!=None:
print("L",root.left.data,end=",")
if root.right!=None:
print("R",root.right.data,end="")
print()
self.printTreeHelper(root.left)
self.printTreeHelper(root.right)
def printTree(self):
self.printTreeHelper(self.root)
def isPresentHelper(self,root,data):
if root==None:
return False
if root.data==data:
return True
if root.data>data:
#call on left
return self.isPresentHelper(root.left,data)
else:
#Call on right
return self.isPresentHelper(root.right,data)
def isPresent(self,data):
return self.isPresentHelper(self.root,data)
def insertHelper(self,root,data):
if root==None:
node=BinaryTreeNode(data)
return node
if root.data>data:
root.left=self.insertHelper(root.left,data)
return root
else:
root.right=self.insertHelper(root.right,data)
return root
def insert(self,data):
self.numNodes+=1
self.root=self.insertHelper(self.root,data)
def min(self,root):
if root==None:
return 10000
if root.left==None:
return root.data
return self.min(root.left)
def deleteDataHelper(self,root,data):
if root==None:
return False, None
if root.data<data:
deleted,newRightNode=self.deleteDataHelper(root.right,data)
root.right=newRightNode
return deleted,root
if root.data>data:
deleted,newLeftNode=self.deleteDataHelper(root.left,data)
root.left=newLeftNode
return deleted,root
#root is leaf
if root.left==None and root.right==None:
return True, None
# root has one child
if root.left==None:
return True,root.right
if root.right==None:
return True,root.left
#root has 2 children
replacement=self.min(root.right)
root.data=replacement
deleted,newRightNode=self.deleteDataHelper(root.right,replacement)
root.right=newRightNode
return True,root
def deleteData(self,data):
deleted,newRoot=self.deleteDataHelper(self.root,data)
if deleted:
self.numNodes-=1
self.root=newRoot
return deleted
def count(self):
return self.numNodes
b=BST()
b.insert(10)
b.insert(5)
b.insert(12)
print(b.isPresent(10))
print(b.isPresent(7))
print(b.deleteData(4))
print(b.deleteData(10))
print(b.count())
b.printTree()
b=BST()
b.insert(10)
b.insert(5)
b.insert(7)
b.insert(6)
b.insert(8)
b.insert(12)
b.insert(11)
b.insert(15)
b.printTree()
print(b.count())
b=BST()
b.insert(10)
b.insert(5)
b.insert(7)
b.insert(6)
b.insert(8)
b.insert(12)
b.insert(11)
b.insert(15)
b.printTree()
print(b.count())
b.deleteData(8)
b.printTree()
b=BST()
b.insert(10)
b.insert(5)
b.insert(7)
b.insert(6)
b.insert(8)
b.insert(12)
b.insert(11)
b.insert(15)
b.printTree()
print(b.count())
b.deleteData(8)
b.printTree()
b.deleteData(5)
b.printTree()
b=BST()
b.insert(10)
b.insert(5)
b.insert(7)
b.insert(6)
b.insert(8)
b.insert(12)
b.insert(11)
b.insert(15)
b.printTree()
print(b.count())
b.deleteData(10)
b.printTree()
# -
| Data Structure & Algorithm/Milestone 4/BST - 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="0UDQUSvQ3VQK"
import pandas as pd
import numpy as np
import torch
import re
import tqdm
# + colab={"base_uri": "https://localhost:8080/", "height": 511} id="WN_z0XzM3wwQ" outputId="f8603123-7588-45a9-bc83-b9c05b72fdf7"
dataset=pd.read_csv('/content/dataset.csv')
dataset.dropna(inplace = True)
dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 121} id="W6ADkaNo5QhW" outputId="f203bff5-74a3-4023-8024-268d7b49cc3e"
print("number of tweets belonging to classes 0,1 and 2")
dataset.groupby('class')['id'].nunique()
# + colab={"base_uri": "https://localhost:8080/", "height": 309} id="HNzXYxq96GI8" outputId="d310dd28-32e8-4f19-dc6e-d338a518af62"
dataset.groupby('class')['id'].nunique().plot(kind='bar',title='Plot of number of tweets belonging to a particular class')
# + [markdown] id="zM4BnF7ETQUN"
# # **Data Cleaning**
# + id="12kKtei07Cby"
from nltk import tokenize
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from gensim.models import Word2Vec
from sklearn.model_selection import train_test_split
# + colab={"base_uri": "https://localhost:8080/", "height": 69} id="57lpaYlZ7W6b" outputId="8df515e1-9148-41e3-e81f-f9caaf0dd62e"
import nltk
nltk.download('stopwords')
# + id="D-NFmeRM7E1T"
stop_words= set(stopwords.words('english'))
# + colab={"base_uri": "https://localhost:8080/", "height": 69} id="SRolts2eQtWd" outputId="39e193f8-b07f-48ed-bf2b-5b2521326f18"
import nltk
nltk.download('punkt')
# + id="1Ks21idy7j8b"
def clean_tweet(tweet):
    # Remove URLs first, before the punctuation stripping below mangles them
    tweet = re.sub(r'http[s]?://(?:[a-z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+', " ", tweet)
    tweet = re.sub("#", "", tweet)           # removing '#' from hashtags
    tweet = re.sub("[^a-zA-Z]", " ", tweet)  # removing punctuation and special characters
    tweet = re.sub(" +", " ", tweet)
    tweet = tweet.lower()
    tweet = word_tokenize(tweet)
    return_tweet = []
    for word in tweet:
        if word not in stop_words:
            return_tweet.append(word)
    return return_tweet
dataset["tweet"]=dataset["tweet"].apply(clean_tweet)
# + [markdown] id="fW-CYEDDRX9s"
#
# # **Word2Vec model to get the word embeddings.**
# + id="KVAUlPTgRdiP"
model = Word2Vec(dataset["tweet"].values, size=50, window=5, min_count=1, workers=4)
# + id="iQAlyU7DRmDT"
def get_features(tweet):
    features = []
    for word in tweet:
        features.append(model.wv[word])
    return np.mean(features, 0)
# + id="PAF7_UcoRurV"
dataset["features"]=dataset["tweet"].apply(get_features)
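`get_features` averages the Word2Vec vector of every word in a tweet into one fixed-length feature vector. A numpy-only sketch of that mean pooling — the 3-dimensional vectors below are made up; the real vectors come from `model.wv` and have 50 dimensions:

```python
# Mean pooling: a tweet's feature vector is the element-wise mean of its
# word vectors, so every tweet maps to the same dimensionality.
import numpy as np

word_vectors = {                      # stand-in for model.wv
    "farm": np.array([1.0, 0.0, 2.0]),
    "crop": np.array([3.0, 2.0, 0.0]),
}
tweet = ["farm", "crop"]
features = np.mean([word_vectors[w] for w in tweet], axis=0)
print(features)  # [2. 1. 1.]
```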
# + id="7ePsy5cHSBkl"
data = []
for i in dataset["features"].values:
    temp = []
    for j in i:
        temp.append(j)
    data.append(temp)
data = np.array(data)
# + id="k04GkOghSHrk"
from sklearn.preprocessing import label_binarize
Y = label_binarize(dataset["class"].values, classes=[0, 1, 2])
n_classes = Y.shape[1]
X_train, X_test, y_train, y_test = train_test_split(data, Y, test_size=0.2, random_state=42)
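`label_binarize` turns the integer class labels into one-hot rows (one column per class), which is what the per-class precision-recall curves further down rely on. The equivalent with plain numpy, on made-up labels:

```python
# One-hot encoding of integer labels: indexing an identity matrix by the
# label array gives one indicator row per sample.
import numpy as np

labels = np.array([0, 2, 1, 2])
Y = np.eye(3, dtype=int)[labels]   # one row per sample, one column per class
print(Y.tolist())  # [[1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1]]
```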
# + colab={"base_uri": "https://localhost:8080/", "height": 364} id="hIeqtdzwSbKM" outputId="d7a0c537-05ad-415f-c802-e7e1fbe18d71"
print(X_train)
print(y_train)
# + [markdown] id="tOyIgHzISsAT"
# # **LOGISTIC REGRESSION MODEL**
# + id="SN5UHd_PSwxG"
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn import svm
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# + colab={"base_uri": "https://localhost:8080/", "height": 364} id="Uk19QYlqS1w2" outputId="08394cba-6810-45a2-92cd-03a34ff1d3f7"
lr_clf = OneVsRestClassifier(LogisticRegression(random_state=0, solver='lbfgs',multi_class='multinomial'))
lr_clf.fit(X_train,y_train)
y_pred = lr_clf.predict(X_test)
f = f1_score(y_test, y_pred, average='micro')
print("F1 Score: ", f)
p = precision_score(y_test, y_pred, average='micro')
print("Precision Score: ", p)
r = recall_score(y_test, y_pred, average='micro')
print("Recall Score: ", r)
print("Accuracy: ", lr_clf.score(X_test,y_test))
y_score = lr_clf.predict_proba(X_test)
precision = dict()
recall = dict()
for i in range(n_classes):
    precision[i], recall[i], _ = precision_recall_curve(y_test[:, i],
                                                        y_score[:, i])
    plt.plot(recall[i], precision[i], lw=2, label='class {}'.format(i))
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend(loc = "best")
plt.title("Precision vs. Recall curve")
plt.show()
# + [markdown] id="wwwhdkyWTbc0"
# # ***SVM MODEL***
# + colab={"base_uri": "https://localhost:8080/", "height": 364} id="O3NHcj2_Texm" outputId="d3beb200-d683-4ad2-ee6b-ad5674baa49d"
svm_clf = OneVsRestClassifier(svm.SVC(gamma='scale', probability=True))
svm_clf.fit(X_train,y_train)
y_pred = svm_clf.predict(X_test)
f = f1_score(y_test, y_pred, average='micro')
print("F1 Score: ", f)
p = precision_score(y_test, y_pred, average='micro')
print("Precision Score: ", p)
r = recall_score(y_test, y_pred, average='micro')
print("Recall Score: ", r)
print("Accuracy: ", svm_clf.score(X_test,y_test))
y_score = svm_clf.predict_proba(X_test)
precision = dict()
recall = dict()
for i in range(n_classes):
    precision[i], recall[i], _ = precision_recall_curve(y_test[:, i],
                                                        y_score[:, i])
    plt.plot(recall[i], precision[i], lw=2, label='class {}'.format(i))
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend(loc = "center right")
plt.title("Precision vs. Recall curve")
plt.show()
| Basic_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.0
# language: julia
# name: julia-1.4
# ---
#import Pkg
#Pkg.add("GraphRecipes")
using VMLS, SparseArrays, LinearAlgebra, Plots
using LightGraphs, SimpleWeightedGraphs, GraphRecipes
#https://github.com/JuliaGraphs
using XLSX
# +
#dictionary {(node_from, node_to), edge}
# -
#IMPORT EXCEL FILE
#https://felipenoris.github.io/XLSX.jl/stable/tutorial/
xf = XLSX.readxlsx("./Networks/Target_Distribution_Distance_in_Hours.xlsx")
sh = xf["Sheet1"]
rng = sh["B2:AI35"]
print("excel file loaded")
# +
adjacency_matrix = copy(rng)
adjacency_matrix = Array{Float64}(adjacency_matrix)
# -
#SimpleWeightedGraph (undirected) example
#https://github.com/JuliaGraphs/SimpleWeightedGraphs.jl
#how to load adjacency matrix to simple weighted graph
#https://stackoverflow.com/questions/52392693/generating-a-weighted-and-directed-network-form-adjacency-matrix-in-julia
g = SimpleWeightedDiGraph(adjacency_matrix)
#https://github.com/JuliaGraphs/SimpleWeightedGraphs.jl/issues/12
sg = WGraph(g)
graphplot(sg)
enumerate_paths(dijkstra_shortest_paths(g, 1), 20)
#https://github.com/JuliaGraphs/LightGraphs.jl/blob/master/src/ShortestPaths/dijkstra.jl
| Network Diagram/Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Trends
# By <NAME> Nitishinskaya and <NAME>
#
# Notebook released under the Creative Commons Attribution 4.0 License.
#
# ---
#
# Trends estimate tendencies in data over time, such as overall rising or falling amid noise. They use only historical data and not any knowledge about the processes generating them.
#
# # Linear trend models
#
# A linear trend model assumes that the variable changes at a constant rate with time, and attempts to find a line of best fit. We want to find coefficients $b_0$, $b_1$ such that the series $y_t$ satisfies
# $$ y_t = b_0 + b_1t + \epsilon_t $$
# and so that the sum of the squares of the errors $\epsilon_t$ is minimized. This can be done using a linear regression. After we have fitted a linear model to our data, we predict the value of the variable to be $y_t = b_0 + b_1 t$ for future time periods $t$. We can also use these parameters to compare the rates of growth or decay of two data series.
#
# Let's find a linear trend model for the price of XLY, an ETF for consumer goods.
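The fitting step itself is ordinary least squares on the time index; a minimal numpy-only sketch on synthetic data (the slope 0.5, intercept 2 and noise level below are made up) before applying it to real prices:

```python
# Fit y_t = b0 + b1*t by least squares on a noisy synthetic series and
# recover the trend line.
import numpy as np

t = np.arange(100)
rng = np.random.RandomState(0)
y = 2.0 + 0.5 * t + rng.normal(0, 1, size=100)   # noisy upward trend
b1, b0 = np.polyfit(t, y, 1)    # least-squares slope and intercept
trend = b0 + b1 * t             # fitted values y_t = b0 + b1*t
```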
import numpy as np
import math
from statsmodels import regression
import statsmodels.api as sm
import matplotlib.pyplot as plt
# +
start = '2010-01-01'
end = '2015-01-01'
asset = get_pricing('XLY', fields='price', start_date=start, end_date=end)
dates = asset.index
def linreg(X,Y):
    # Running the linear regression
    x = sm.add_constant(X)
    model = regression.linear_model.OLS(Y, x).fit()
    a = model.params[0]
    b = model.params[1]
    # Return summary of the regression and plot results
    X2 = np.linspace(X.min(), X.max(), 100)
    Y_hat = X2 * b + a
    plt.plot(X2, Y_hat, 'r', alpha=0.9)  # Add the regression line, colored in red
    return model.summary()
_, ax = plt.subplots()
ax.plot(asset)
ticks = ax.get_xticks()
ax.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates
linreg(np.arange(len(asset)), asset)
# -
# The summary returned by the regression tells us the slope and intercept of the line, as well as giving us some information about how statistically valid the fit is. Note that the Durbin-Watson statistic is very low here, suggesting that the errors are correlated. The price of this fund is generally increasing, but because of the variance in the data, the line of best fit changes significantly depending on the sample we take. Because small errors in our model magnify with time, its predictions far into the future may not be as good as the fit statistics would suggest. For instance, we can see what will happen if we find a model for the data through 2012 and use it to predict the data through 2014.
# +
# Take only some of the data in order to see how predictive the model is
asset_short = get_pricing('XLY', fields='price', start_date=start, end_date='2013-01-01')
# Running the linear regression
x = sm.add_constant(np.arange(len(asset_short)))
model = regression.linear_model.OLS(asset_short, x).fit()
X2 = np.linspace(0, len(asset), 100)
Y_hat = X2 * model.params[1] + model.params[0]
# Plot the data for the full time range
_, ax = plt.subplots()
ax.plot(asset)
ticks = ax.get_xticks()
ax.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates
# Plot the regression line extended to the full time range
ax.plot(X2, Y_hat, 'r', alpha=0.9);
# -
# Of course, we can keep updating our model as we go along. Below we use all the previous prices to predict prices 30 days into the future.
# +
# Y_hat will be our predictions for the price
Y_hat = [0]*1100
# Start analysis from day 100 so that we have historical prices to work with
for i in range(100,1200):
    temp = asset[:i]
    x = sm.add_constant(np.arange(len(temp)))
    model = regression.linear_model.OLS(temp, x).fit()
    # Plug (i+30) into the linear model to get the predicted price 30 days from now
    Y_hat[i-100] = (i+30) * model.params[1] + model.params[0]
_, ax = plt.subplots()
ax.plot(asset[130:1230]) # Plot the asset starting from the first day we have predictions for
ax.plot(range(len(Y_hat)), Y_hat, 'r', alpha=0.9)
ticks = ax.get_xticks()
ax.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates;
# -
# # Log-linear trend models
#
# A log-linear trend model attempts to fit an exponential curve to a data set:
# $$ y_t = e^{b_0 + b_1 t + \epsilon_t} $$
#
# To find the coefficients, we can run a linear regression on the equation $ \ln y_t = b_0 + b_1 t + \epsilon_t $ with variables $t, \ln y_t$. (This is the reason for the name of the model — the equation is linear when we take the logarithm of both sides!)
#
# If $b_1$ is very small, then a log-linear curve is approximately linear. For instance, we can find a log-linear model for our data from the previous example, with fit statistics approximately the same as for the linear model.
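The back-transformation is the only extra step relative to the linear case; a sketch on exact exponential data (the parameters $b_0 = 1$, $b_1 = 0.05$ below are made up):

```python
# Log-linear fit: regress log(y) on t, then exponentiate the fitted line
# to get predictions on the original scale.
import numpy as np

t = np.arange(50)
y = np.exp(1.0 + 0.05 * t)             # y_t = e^{b0 + b1 t} with b0=1, b1=0.05
b1, b0 = np.polyfit(t, np.log(y), 1)   # linear regression on log(y)
y_hat = np.exp(b0 + b1 * t)            # back-transform to the original scale
```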
# +
def loglinreg(X,Y):
    # Running the linear regression on X, log(Y)
    x = sm.add_constant(X)
    model = regression.linear_model.OLS(np.log(Y), x).fit()
    a = model.params[0]
    b = model.params[1]
    # Return summary of the regression and plot results
    X2 = np.linspace(X.min(), X.max(), 100)
    Y_hat = (math.e)**(X2 * b + a)
    plt.plot(X2, Y_hat, 'r', alpha=0.9)  # Add the regression curve, colored in red
    return model.summary()
_, ax_log = plt.subplots()
ax_log.plot(asset)
ax_log.set_xticklabels([dates[i].date() for i in ticks[:-1]]) # Label x-axis with dates
loglinreg(np.arange(len(asset)), asset)
# -
# In some cases, however, a log-linear model clearly fits the data better.
# +
start2 = '2002-01-01'
end2 = '2012-06-01'
asset2 = get_pricing('AAPL', fields='price', start_date=start2, end_date=end2)
dates2 = asset2.index
_, ax2 = plt.subplots()
ax2.plot(asset2)
ticks2 = ax2.get_xticks()
ax2.set_xticklabels([dates2[i].date() for i in ticks2[:-1]]) # Label x-axis with dates
loglinreg(np.arange(len(asset2)), asset2)
# -
# # Summary
#
# From the above we see that trend models can provide a simple representation of a complex data series. However, the errors (deviations from the model) are highly correlated; so, we cannot apply the usual regression statistics to test for correctness, since the regression model assumes serially uncorrelated errors. This also suggests that the correlation can be used to build a finer model.
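The Durbin-Watson statistic referred to above can be computed directly from the residuals; values near 2 indicate uncorrelated errors, values near 0 strong positive autocorrelation (the two residual series below are synthetic):

```python
# Durbin-Watson: sum of squared successive differences of the residuals,
# divided by the sum of squared residuals.
import numpy as np

def durbin_watson(resid):
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.RandomState(0)
white = rng.normal(size=1000)    # uncorrelated noise -> statistic near 2
trending = np.cumsum(white)      # strongly autocorrelated -> statistic near 0
print(durbin_watson(white), durbin_watson(trending))
```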
| lectures/drafts/Trend models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Deep Nets with TF Abstractions
#
# Let's explore a few of the various abstractions that TensorFlow offers. You can check out the tf.contrib documentation for more options.
# # The Data
# To compare these various abstractions we'll use a dataset easily available from the SciKit Learn library. The data is comprised of the results of a chemical analysis of wines grown in the same region in Italy by three different cultivators. There are thirteen different
# measurements taken for different constituents found in the three types of wine. We will use the various TF Abstractions to classify the wine to one of the 3 possible labels.
#
# First let's show you how to get the data:
from sklearn.datasets import load_wine
wine_data = load_wine()
type(wine_data)
wine_data.keys()
print(wine_data.DESCR)
feat_data = wine_data['data']
labels = wine_data['target']
# ### Train Test Split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(feat_data,
labels,
test_size=0.3,
random_state=101)
# ### Scale the Data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaled_x_train = scaler.fit_transform(X_train)
scaled_x_test = scaler.transform(X_test)
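What the scaler is doing, sketched with numpy: the min and max are learned from the training data only, and the same affine map is then applied to the test data (the toy values below are made up):

```python
# Min-max scaling: fit (min, range) on the training column, apply the same
# transform to test data -- test values may land outside [0, 1].
import numpy as np

X_tr = np.array([[0.0], [5.0], [10.0]])
X_te = np.array([[2.5], [12.0]])
lo = X_tr.min(axis=0)
span = X_tr.max(axis=0) - lo
scaled_tr = (X_tr - lo) / span   # training columns mapped into [0, 1]
scaled_te = (X_te - lo) / span   # 12.0 maps above 1.0
```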
# # Abstractions
# ## Estimator API
import tensorflow as tf
from tensorflow import estimator
X_train.shape
feat_cols = [tf.feature_column.numeric_column("x", shape=[13])]
deep_model = estimator.DNNClassifier(hidden_units=[13,13,13],
feature_columns=feat_cols,
n_classes=3,
optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.01) )
input_fn = estimator.inputs.numpy_input_fn(x={'x':scaled_x_train},y=y_train,shuffle=True,batch_size=10,num_epochs=5)
deep_model.train(input_fn=input_fn,steps=500)
input_fn_eval = estimator.inputs.numpy_input_fn(x={'x':scaled_x_test},shuffle=False)
preds = list(deep_model.predict(input_fn=input_fn_eval))
predictions = [p['class_ids'][0] for p in preds]
from sklearn.metrics import confusion_matrix,classification_report
print(classification_report(y_test,predictions))
# ____________
# ______________
# # TensorFlow Keras
# ### Create the Model
from tensorflow.contrib.keras import models
dnn_keras_model = models.Sequential()
# ### Add Layers to the model
from tensorflow.contrib.keras import layers
dnn_keras_model.add(layers.Dense(units=13,input_dim=13,activation='relu'))
dnn_keras_model.add(layers.Dense(units=13,activation='relu'))
dnn_keras_model.add(layers.Dense(units=13,activation='relu'))
dnn_keras_model.add(layers.Dense(units=3,activation='softmax'))
# ### Compile the Model
from tensorflow.contrib.keras import losses,optimizers,metrics
# +
# explore these
# losses.
# +
#optimizers.
# -
losses.sparse_categorical_crossentropy
dnn_keras_model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# ### Train Model
dnn_keras_model.fit(scaled_x_train,y_train,epochs=50)
predictions = dnn_keras_model.predict_classes(scaled_x_test)
print(classification_report(y_test,predictions))
# # Layers API
#
# https://www.tensorflow.org/tutorials/layers
# ## Formatting Data
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
wine_data = load_wine()
feat_data = wine_data['data']
labels = wine_data['target']
X_train, X_test, y_train, y_test = train_test_split(feat_data,
labels,
test_size=0.3,
random_state=101)
scaler = MinMaxScaler()
scaled_x_train = scaler.fit_transform(X_train)
scaled_x_test = scaler.transform(X_test)
# ONE HOT ENCODED
onehot_y_train = pd.get_dummies(y_train).values
one_hot_y_test = pd.get_dummies(y_test).values
# ### Parameters
num_feat = 13
num_hidden1 = 13
num_hidden2 = 13
num_outputs = 3
learning_rate = 0.01
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
# ### Placeholder
X = tf.placeholder(tf.float32,shape=[None,num_feat])
y_true = tf.placeholder(tf.float32,shape=[None,3])
# ### Activation Function
actf = tf.nn.relu
# ### Create Layers
hidden1 = fully_connected(X,num_hidden1,activation_fn=actf)
hidden2 = fully_connected(hidden1,num_hidden2,activation_fn=actf)
output = fully_connected(hidden2,num_outputs)
# ### Loss Function
loss = tf.losses.softmax_cross_entropy(onehot_labels=y_true, logits=output)
# ### Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
# ### Init
init = tf.global_variables_initializer()
training_steps = 1000
with tf.Session() as sess:
    sess.run(init)
    for i in range(training_steps):
        # Feed the one-hot encoded labels to match the [None, 3] placeholder
        sess.run(train,feed_dict={X:scaled_x_train,y_true:onehot_y_train})
    # Get Predictions
    logits = output.eval(feed_dict={X:scaled_x_test})
    preds = tf.argmax(logits,axis=1)
    results = preds.eval()
from sklearn.metrics import confusion_matrix,classification_report
print(classification_report(y_test,results))
| Miscellaneous-Topics/00-Deep-Nets-with-TF-Abstractions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="mgIRFfegQApR"
# ### Eff7_20fold_base -----> (0 to 19) folds
#
# - In this notebook we are using Colab Pro with high RAM and a 16 GB GPU
# - this notebook is the same as the previous notebook 01-Eff5_20fold_base part1
# - except the encoder/model is changed to EfficientNet-B7
#
# #### Important points
# - for saving outputs we need to mount Drive
# - for downloading the preprocessed data we need to include the kaggle.json file for the Kaggle API
# - for saving outputs we need to give an output path
# - the output path is `output_dir` in the `args` class in `Run.py`
# + colab={"base_uri": "https://localhost:8080/"} id="RE0b1lGqPCy7" executionInfo={"status": "ok", "timestamp": 1606613838375, "user_tz": -330, "elapsed": 1401, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="4713e1df-d7da-4025-ddfa-1fd414a14783"
# !nvidia-smi
# + colab={"base_uri": "https://localhost:8080/"} id="XjH07ydpPoyO" executionInfo={"status": "ok", "timestamp": 1606613927008, "user_tz": -330, "elapsed": 85174, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="36c9f1c2-2d89-4469-ab26-6b181ca0d3cd"
from google.colab import drive
drive.mount("/content/drive")
# + id="1_JDA_DsPwsX"
# kaggle api for download preprocessed data
# ! mkdir /root/.kaggle
# ! cp '/content/drive/My Drive/kaggle.json' /root/.kaggle
# ! chmod 400 /root/.kaggle/kaggle.json
# !pip uninstall -y kaggle >> quit
# !pip install --upgrade pip >> quit
# !pip install kaggle==1.5.6 >> quit
# !kaggle -v >> quit
# + colab={"base_uri": "https://localhost:8080/"} id="RD2w3OaNQAqf" executionInfo={"status": "ok", "timestamp": 1606613954780, "user_tz": -330, "elapsed": 70157, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="e6381315-81fe-4052-9a75-754d9fccdcb6"
# !kaggle datasets download -d gopidurgaprasad/giz-nlp-agricultural-keyword-spotter
# !unzip giz-nlp-agricultural-keyword-spotter.zip >> quit
# + colab={"base_uri": "https://localhost:8080/"} id="o8yDh-uIQZoP" executionInfo={"status": "ok", "timestamp": 1606613991458, "user_tz": -330, "elapsed": 104231, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="d1aeacde-a481-4158-da0a-763b1778e9cd"
## install required packages
# !pip -q install timm
# !pip -q install albumentations
# !pip -q install soundfile
# !pip -q install torchlibrosa
# !pip -q install audiomentations
# !pip -q install catalyst
# !pip -q install transformers
# !pip -q install git+https://github.com/ildoonet/pytorch-gradual-warmup-lr.git
# + [markdown] id="N4BhmjAsQzX9"
# ### process data and create k-folds
# + id="tkuiGG1RQrc-"
import glob, os, random
import pandas as pd, numpy as np
from sklearn.model_selection import StratifiedKFold
# + colab={"base_uri": "https://localhost:8080/"} id="Suq6VHGTQ8cp" executionInfo={"status": "ok", "timestamp": 1606613992192, "user_tz": -330, "elapsed": 99647, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="0ba0f3b9-9b73-45aa-da44-4f2fb84cba38"
train_wav = glob.glob("audio_train/input/audio_train/*/*.wav")
test_wav = glob.glob("audio_test/input/audio_test/*.wav")
print(len(train_wav), len(test_wav))
train_df = pd.DataFrame({
"fn" : train_wav
}).sort_values("fn")
train_df["label"] = train_df.fn.apply(lambda x: x.split("/")[-2])
test_df = pd.DataFrame({
"fn" : test_wav
}).sort_values("fn")
print(train_df.shape, test_df.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="XpDtvNaeR-Je" executionInfo={"status": "ok", "timestamp": 1606613992705, "user_tz": -330, "elapsed": 98996, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="4961ea0c-97f0-4287-bb1b-70d60afa3f1f"
FOLDS = 20
SEED = 24
train_df.loc[:, 'kfold'] = -1
train_df = train_df.sample(frac=1, random_state=SEED).reset_index(drop=True)
X = train_df['fn'].values
y = train_df['label'].values
kfold = StratifiedKFold(n_splits=FOLDS)
for fold, (t_idx, v_idx) in enumerate(kfold.split(X, y)):
    train_df.loc[v_idx, "kfold"] = fold
print(train_df.kfold.value_counts())
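`StratifiedKFold`'s goal can be sketched in pure Python: deal each class's samples round-robin across folds, so every fold keeps roughly the same label mix. This is the idea, not sklearn's exact assignment; the toy labels below are made up:

```python
# Stratified fold assignment: a per-class counter deals samples of each
# label evenly across n_folds.
from collections import defaultdict

def stratified_folds(labels, n_folds):
    folds = [0] * len(labels)
    counters = defaultdict(int)
    for idx, label in enumerate(labels):
        folds[idx] = counters[label] % n_folds   # deal each class round-robin
        counters[label] += 1
    return folds

labels = ["maize", "maize", "beans", "maize", "beans", "beans"]
print(stratified_folds(labels, 2))  # [0, 1, 0, 0, 1, 0]
```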
# + id="PyAIDfpgSqvc"
train_df.to_csv("train_20folds_seed24_df.csv", index=False)
test_df.to_csv("test_df.csv", index=False)
# + colab={"base_uri": "https://localhost:8080/"} id="IqorozP9S7vx" executionInfo={"status": "ok", "timestamp": 1606614335765, "user_tz": -330, "elapsed": 2398, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="c5a3ea90-db5a-41fe-f480-2ad27b42f858"
# %%writefile Codes.py
CODE = {
'Pump': 0,
'Spinach': 1,
'abalimi': 2,
'afukirira': 3,
'agriculture': 4,
'akammwanyi': 5,
'akamonde': 6,
'akasaanyi': 7,
'akatunda': 8,
'akatungulu': 9,
'akawuka': 10,
'amakoola': 11,
'amakungula': 12,
'amalagala': 13,
'amappapaali': 14,
'amatooke': 15,
'banana': 16,
'beans': 17,
'bibala': 18,
'bulimi': 19,
'butterfly': 20,
'cabbages': 21,
'cassava': 22,
'caterpillar': 23,
'caterpillars': 24,
'coffee': 25,
'crop': 26,
'ddagala': 27,
'dig': 28,
'disease': 29,
'doodo': 30,
'drought': 31,
'ebbugga': 32,
'ebibala': 33,
'ebigimusa': 34,
'ebijanjaalo': 35,
'ebijjanjalo': 36,
'ebikajjo': 37,
'ebikolo': 38,
'ebikongoliro': 39,
'ebikoola': 40,
'ebimera': 41,
'ebinyebwa': 42,
'ebirime': 43,
'ebisaanyi': 44,
'ebisooli': 45,
'ebisoolisooli': 46,
'ebitooke': 47,
'ebiwojjolo': 48,
'ebiwuka': 49,
'ebyobulimi': 50,
'eddagala': 51,
'eggobe': 52,
'ejjobyo': 53,
'ekibala': 54,
'ekigimusa': 55,
'ekijanjaalo': 56,
'ekikajjo': 57,
'ekikolo': 58,
'ekikoola': 59,
'ekimera': 60,
'ekirime': 61,
'ekirwadde': 62,
'ekisaanyi': 63,
'ekitooke': 64,
'ekiwojjolo': 65,
'ekyeya': 66,
'emboga': 67,
'emicungwa': 68,
'emisiri': 69,
'emiyembe': 70,
'emmwanyi': 71,
'endagala': 72,
'endokwa': 73,
'endwadde': 74,
'enkota': 75,
'ennima': 76,
'ennimiro': 77,
'ennyaanya': 78,
'ensigo': 79,
'ensiringanyi': 80,
'ensujju': 81,
'ensuku': 82,
'ensukusa': 83,
'enva endiirwa': 84,
'eppapaali': 85,
'faamu': 86,
'farm': 87,
'farmer': 88,
'farming instructor': 89,
'fertilizer': 90,
'fruit': 91,
'fruit picking': 92,
'garden': 93,
'greens': 94,
'ground nuts': 95,
'harvest': 96,
'harvesting': 97,
'insect': 98,
'insects': 99,
'irish potatoes': 100,
'irrigate': 101,
'kaamulali': 102,
'kasaanyi': 103,
'kassooli': 104,
'kikajjo': 105,
'kikolo': 106,
'kisaanyi': 107,
'kukungula': 108,
'leaf': 109,
'leaves': 110,
'lumonde': 111,
'lusuku': 112,
'maize': 113,
'maize stalk borer': 114,
'maize streak virus': 115,
'mango': 116,
'mangoes': 117,
'matooke': 118,
'matooke seedlings': 119,
'medicine': 120,
'miceere': 121,
'micungwa': 122,
'mpeke': 123,
'muceere': 124,
'mucungwa': 125,
'mulimi': 126,
'munyeera': 127,
'muwogo': 128,
'nakavundira': 129,
'nambaale': 130,
'namuginga': 131,
'ndwadde': 132,
'nfukirira': 133,
'nnakati': 134,
'nnasale beedi': 135,
'nnimiro': 136,
'nnyaanya': 137,
'npk': 138,
'nursery bed': 139,
'obulimi': 140,
'obulwadde': 141,
'obumonde': 142,
'obusaanyi': 143,
'obutunda': 144,
'obutungulu': 145,
'obuwuka': 146,
'okufukirira': 147,
'okufuuyira': 148,
'okugimusa': 149,
'okukkoola': 150,
'okukungula': 151,
'okulima': 152,
'okulimibwa': 153,
'okunnoga': 154,
'okusaasaana': 155,
'okusaasaanya': 156,
'okusiga': 157,
'okusimba': 158,
'okuzifuuyira': 159,
'olusuku': 160,
'omuceere': 161,
'omucungwa': 162,
'omulimi': 163,
'omulimisa': 164,
'omusiri': 165,
'omuyembe': 166,
'onion': 167,
'orange': 168,
'pampu': 169,
'passion fruit': 170,
'pawpaw': 171,
'pepper': 172,
'plant': 173,
'plantation': 174,
'ppaapaali': 175,
'pumpkin': 176,
'rice': 177,
'seed': 178,
'sikungula': 179,
'sow': 180,
'spray': 181,
'spread': 182,
'suckers': 183,
'sugarcane': 184,
'sukumawiki': 185,
'super grow': 186,
'sweet potatoes': 187,
'tomatoes': 188,
'vegetables': 189,
'watermelon': 190,
'weeding': 191,
'worm': 192
}
INV_CODE = {v: k for k, v in CODE.items()}
# + colab={"base_uri": "https://localhost:8080/"} id="4zxgzmrfTE3L" executionInfo={"status": "ok", "timestamp": 1606614336534, "user_tz": -330, "elapsed": 1305, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="9429f76c-c426-4b91-ac73-9e2b0034bd06"
# %%writefile pytorch_utils.py
import numpy as np
import time
import torch
import torch.nn as nn
def move_data_to_device(x, device):
    if 'float' in str(x.dtype):
        x = torch.Tensor(x)
    elif 'int' in str(x.dtype):
        x = torch.LongTensor(x)
    else:
        return x
    return x.to(device)
def do_mixup(x, mixup_lambda):
    """Mixup x of even indexes (0, 2, 4, ...) with x of odd indexes
    (1, 3, 5, ...).
    Args:
      x: (batch_size * 2, ...)
      mixup_lambda: (batch_size * 2,)
    Returns:
      out: (batch_size, ...)
    """
    out = (x[0::2].transpose(0, -1) * mixup_lambda[0::2] +
           x[1::2].transpose(0, -1) * mixup_lambda[1::2]).transpose(0, -1)
    return out
def append_to_dict(dict, key, value):
    if key in dict.keys():
        dict[key].append(value)
    else:
        dict[key] = [value]
def forward(model, generator, return_input=False, return_target=False):
    """Forward data to a model.
    Args:
      model: object
      generator: object
      return_input: bool
      return_target: bool
    Returns:
      audio_name: (audios_num,)
      clipwise_output: (audios_num, classes_num)
      (ifexist) segmentwise_output: (audios_num, segments_num, classes_num)
      (ifexist) framewise_output: (audios_num, frames_num, classes_num)
      (optional) return_input: (audios_num, segment_samples)
      (optional) return_target: (audios_num, classes_num)
    """
    output_dict = {}
    device = next(model.parameters()).device
    time1 = time.time()
    # Forward data to a model in mini-batches
    for n, batch_data_dict in enumerate(generator):
        print(n)
        batch_waveform = move_data_to_device(batch_data_dict['waveform'], device)
        with torch.no_grad():
            model.eval()
            batch_output = model(batch_waveform)
        append_to_dict(output_dict, 'audio_name', batch_data_dict['audio_name'])
        append_to_dict(output_dict, 'clipwise_output',
                       batch_output['clipwise_output'].data.cpu().numpy())
        if 'segmentwise_output' in batch_output.keys():
            append_to_dict(output_dict, 'segmentwise_output',
                           batch_output['segmentwise_output'].data.cpu().numpy())
        if 'framewise_output' in batch_output.keys():
            append_to_dict(output_dict, 'framewise_output',
                           batch_output['framewise_output'].data.cpu().numpy())
        if return_input:
            append_to_dict(output_dict, 'waveform', batch_data_dict['waveform'])
        if return_target:
            if 'target' in batch_data_dict.keys():
                append_to_dict(output_dict, 'target', batch_data_dict['target'])
        if n % 10 == 0:
            print(' --- Inference time: {:.3f} s / 10 iterations ---'.format(
                time.time() - time1))
            time1 = time.time()
    for key in output_dict.keys():
        output_dict[key] = np.concatenate(output_dict[key], axis=0)
    return output_dict
def interpolate(x, ratio):
    """Interpolate data in time domain. This is used to compensate the
    resolution reduction in downsampling of a CNN.
    Args:
      x: (batch_size, time_steps, classes_num)
      ratio: int, ratio to interpolate
    Returns:
      upsampled: (batch_size, time_steps * ratio, classes_num)
    """
    (batch_size, time_steps, classes_num) = x.shape
    upsampled = x[:, :, None, :].repeat(1, 1, ratio, 1)
    upsampled = upsampled.reshape(batch_size, time_steps * ratio, classes_num)
    return upsampled
def pad_framewise_output(framewise_output, frames_num):
    """Pad framewise_output to the same length as input frames. The pad value
    is the same as the value of the last frame.
    Args:
      framewise_output: (batch_size, frames_num, classes_num)
      frames_num: int, number of frames to pad
    Outputs:
      output: (batch_size, frames_num, classes_num)
    """
    pad = framewise_output[:, -1:, :].repeat(1, frames_num - framewise_output.shape[1], 1)
    # tensor for padding
    output = torch.cat((framewise_output, pad), dim=1)
    # (batch_size, frames_num, classes_num)
    return output
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
def count_flops(model, audio_length):
    """Count flops. Code modified from others' implementation.
    """
    multiply_adds = True
    list_conv2d = []
    def conv2d_hook(self, input, output):
        batch_size, input_channels, input_height, input_width = input[0].size()
        output_channels, output_height, output_width = output[0].size()
        kernel_ops = self.kernel_size[0] * self.kernel_size[1] * (self.in_channels / self.groups) * (2 if multiply_adds else 1)
        bias_ops = 1 if self.bias is not None else 0
        params = output_channels * (kernel_ops + bias_ops)
        flops = batch_size * params * output_height * output_width
        list_conv2d.append(flops)
    list_conv1d = []
    def conv1d_hook(self, input, output):
        batch_size, input_channels, input_length = input[0].size()
        output_channels, output_length = output[0].size()
        kernel_ops = self.kernel_size[0] * (self.in_channels / self.groups) * (2 if multiply_adds else 1)
        bias_ops = 1 if self.bias is not None else 0
        params = output_channels * (kernel_ops + bias_ops)
        flops = batch_size * params * output_length
        list_conv1d.append(flops)
    list_linear = []
    def linear_hook(self, input, output):
        batch_size = input[0].size(0) if input[0].dim() == 2 else 1
        weight_ops = self.weight.nelement() * (2 if multiply_adds else 1)
        bias_ops = self.bias.nelement()
        flops = batch_size * (weight_ops + bias_ops)
        list_linear.append(flops)
    list_bn = []
    def bn_hook(self, input, output):
        list_bn.append(input[0].nelement() * 2)
    list_relu = []
    def relu_hook(self, input, output):
        list_relu.append(input[0].nelement() * 2)
    list_pooling2d = []
    def pooling2d_hook(self, input, output):
        batch_size, input_channels, input_height, input_width = input[0].size()
        output_channels, output_height, output_width = output[0].size()
        kernel_ops = self.kernel_size * self.kernel_size
        bias_ops = 0
        params = output_channels * (kernel_ops + bias_ops)
        flops = batch_size * params * output_height * output_width
        list_pooling2d.append(flops)
    list_pooling1d = []
    def pooling1d_hook(self, input, output):
        batch_size, input_channels, input_length = input[0].size()
        output_channels, output_length = output[0].size()
        kernel_ops = self.kernel_size[0]
        bias_ops = 0
        params = output_channels * (kernel_ops + bias_ops)
        flops = batch_size * params * output_length
        list_pooling1d.append(flops)  # fixed: was appending to list_pooling2d
    def foo(net):
        childrens = list(net.children())
        if not childrens:
            if isinstance(net, nn.Conv2d):
                net.register_forward_hook(conv2d_hook)
            elif isinstance(net, nn.Conv1d):
                net.register_forward_hook(conv1d_hook)
            elif isinstance(net, nn.Linear):
                net.register_forward_hook(linear_hook)
            elif isinstance(net, nn.BatchNorm2d) or isinstance(net, nn.BatchNorm1d):
                net.register_forward_hook(bn_hook)
            elif isinstance(net, nn.ReLU):
                net.register_forward_hook(relu_hook)
            elif isinstance(net, nn.AvgPool2d) or isinstance(net, nn.MaxPool2d):
                net.register_forward_hook(pooling2d_hook)
            elif isinstance(net, nn.AvgPool1d) or isinstance(net, nn.MaxPool1d):
                net.register_forward_hook(pooling1d_hook)
            else:
                print('Warning: flop of module {} is not counted!'.format(net))
            return
        for c in childrens:
            foo(c)
    # Register hook
    foo(model)
    device = next(model.parameters()).device
    input = torch.rand(1, audio_length).to(device)
    out = model(input)
    total_flops = sum(list_conv2d) + sum(list_conv1d) + sum(list_linear) + \
        sum(list_bn) + sum(list_relu) + sum(list_pooling2d) + sum(list_pooling1d)
    return total_flops
# + colab={"base_uri": "https://localhost:8080/"} id="pHGiEtFHTJ7w" executionInfo={"status": "ok", "timestamp": 1606614340223, "user_tz": -330, "elapsed": 2987, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="b77a3071-c693-4a4b-caea-3ae4ff89f92f"
# %%writefile Datasets.py
import random, glob
import numpy as np, pandas as pd
import soundfile as sf
import torch
from torch.utils.data import Dataset
from albumentations.pytorch.functional import img_to_tensor
from Codes import CODE, INV_CODE
class AudioDataset(Dataset):
def __init__(self, df, period=1, transforms=None, train=True):
self.period = period
self.transforms = transforms
self.train = train
self.wav_paths = df["fn"].values
if train:
self.labels = df["label"].values
else:
self.labels = np.zeros_like(self.wav_paths)
def __len__(self):
return len(self.wav_paths)
def __getitem__(self, idx):
wav_path, code = self.wav_paths[idx], self.labels[idx]
label = np.zeros(len(CODE), dtype='f')
y, sr = sf.read(wav_path)
if self.transforms:
y = self.transforms(samples=y, sample_rate=sr)
len_y = len(y)
effective_length = sr * self.period
if len_y < effective_length:
new_y = np.zeros(effective_length, dtype=y.dtype)
start = np.random.randint(effective_length - len_y)
new_y[start:start+len_y] = y
y = new_y#.astype(np.float)
elif len_y > effective_length:
start = np.random.randint(len_y - effective_length)
y = y[start:start + effective_length]#.astype(np.float32)
if self.train:
#label[CODE[code]] = 1
label = CODE[code]
else:
label = 0
return {
"waveform" : y, #torch.tensor(y, dtype=torch.double),
"target" : torch.tensor(label, dtype=torch.long)
}
def __get_labels__(self):
return self.labels
# + colab={"base_uri": "https://localhost:8080/"} id="ylpcfWr4TNkZ" executionInfo={"status": "ok", "timestamp": 1606614341870, "user_tz": -330, "elapsed": 2417, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="e2e5cede-876c-48d8-98b1-7f95fa5cb2df"
# %%writefile Augmentation.py
import audiomentations as A
augmenter = A.Compose([
A.AddGaussianNoise(p=0.4),
A.AddGaussianSNR(p=0.4),
#A.AddBackgroundNoise("../input/train_audio/", p=1)
#A.AddImpulseResponse(p=0.1),
#A.AddShortNoises("../input/train_audio/", p=1)
A.FrequencyMask(min_frequency_band=0.0, max_frequency_band=0.2, p=0.05),
A.TimeMask(min_band_part=0.0, max_band_part=0.2, p=0.05),
A.PitchShift(min_semitones=-0.5, max_semitones=0.5, p=0.05),
A.Shift(p=0.1),
A.Normalize(p=0.1),
A.ClippingDistortion(min_percentile_threshold=0, max_percentile_threshold=1, p=0.05),
A.PolarityInversion(p=0.05),
A.Gain(p=0.2)
])
test_augmenter = A.Compose([
A.AddGaussianNoise(p=0.3),
A.AddGaussianSNR(p=0.3),
#A.AddBackgroundNoise("../input/train_audio/", p=1)
#A.AddImpulseResponse(p=0.1),
#A.AddShortNoises("../input/train_audio/", p=1)
A.FrequencyMask(min_frequency_band=0.0, max_frequency_band=0.2, p=0.05),
A.TimeMask(min_band_part=0.0, max_band_part=0.2, p=0.05),
A.PitchShift(min_semitones=-0.5, max_semitones=0.5, p=0.05),
A.Shift(p=0.1),
A.Normalize(p=0.1),
A.ClippingDistortion(min_percentile_threshold=0, max_percentile_threshold=1, p=0.05),
A.PolarityInversion(p=0.05),
A.Gain(p=0.1)
])
# + colab={"base_uri": "https://localhost:8080/"} id="YwTGTAjWTYzk" executionInfo={"status": "ok", "timestamp": 1606614344957, "user_tz": -330, "elapsed": 3588, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="5551e433-d51c-49f1-c705-f2846cf8cbdd"
# %%writefile Models.py
import numpy as np
from functools import partial
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.modules.dropout import Dropout
from torch.nn.modules.linear import Linear
from torch.nn.modules.pooling import AdaptiveAvgPool2d, AdaptiveMaxPool2d
import timm
from timm.models.efficientnet import tf_efficientnet_b4_ns, tf_efficientnet_b3_ns, \
tf_efficientnet_b5_ns, tf_efficientnet_b2_ns, tf_efficientnet_b6_ns, tf_efficientnet_b7_ns, tf_efficientnet_b0_ns
from torchlibrosa.stft import Spectrogram, LogmelFilterBank
from torchlibrosa.augmentation import SpecAugmentation
from pytorch_utils import do_mixup, interpolate, pad_framewise_output
encoder_params = {
"resnest50d" : {
"features" : 2048,
"init_op" : partial(timm.models.resnest50d, pretrained=True, in_chans=1)
},
"densenet201" : {
"features": 1920,
"init_op": partial(timm.models.densenet201, pretrained=True)
},
"dpn92" : {
"features": 2688,
"init_op": partial(timm.models.dpn92, pretrained=True)
},
"dpn131": {
"features": 2688,
"init_op": partial(timm.models.dpn131, pretrained=True)
},
"tf_efficientnet_b0_ns": {
"features": 1280,
"init_op": partial(tf_efficientnet_b0_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
"tf_efficientnet_b3_ns": {
"features": 1536,
"init_op": partial(tf_efficientnet_b3_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
"tf_efficientnet_b2_ns": {
"features": 1408,
"init_op": partial(tf_efficientnet_b2_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
"tf_efficientnet_b4_ns": {
"features": 1792,
"init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
"tf_efficientnet_b5_ns": {
"features": 2048,
"init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
"tf_efficientnet_b6_ns": {
"features": 2304,
"init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
"tf_efficientnet_b7_ns": {
"features": 2560,
"init_op": partial(tf_efficientnet_b7_ns, pretrained=True, drop_path_rate=0.2, in_chans=1)
},
}
class AudioClassifier(nn.Module):
def __init__(self, encoder, sample_rate, window_size, hop_size, mel_bins, fmin, fmax, classes_num):
super().__init__()
window = 'hann'
center = True
pad_mode = 'reflect'
ref = 1.0
amin = 1e-10
top_db = None
# Spectrogram extractor
self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
win_length=window_size, window=window, center=center, pad_mode=pad_mode,
freeze_parameters=True)
# Logmel feature extractor
self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
freeze_parameters=True)
# Spec augmenter
self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
freq_drop_width=8, freq_stripes_num=2)
self.encoder = encoder_params[encoder]["init_op"]()
self.avg_pool = AdaptiveAvgPool2d((1, 1))
self.dropout = Dropout(0.3)
self.fc = Linear(encoder_params[encoder]['features'], classes_num)
def forward(self, input, spec_aug=False, mixup_lambda=None):
#print(input.type())
x = self.spectrogram_extractor(input.float()) # (batch_size, 1, time_steps, freq_bins)
x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
#if spec_aug:
# x = self.spec_augmenter(x)
if self.training:
x = self.spec_augmenter(x)
# Mixup on spectrogram
if mixup_lambda is not None:
x = do_mixup(x, mixup_lambda)
#pass
x = self.encoder.forward_features(x)
x = self.avg_pool(x).flatten(1)
x = self.dropout(x)
x = self.fc(x)
return x
def init_layer(layer):
"""Initialize a Linear or Convolutional layer. """
nn.init.xavier_uniform_(layer.weight)
if hasattr(layer, 'bias'):
if layer.bias is not None:
layer.bias.data.fill_(0.)
def init_bn(bn):
"""Initialize a Batchnorm layer. """
bn.bias.data.fill_(0.)
bn.weight.data.fill_(1.)
class ConvBlock(nn.Module):
def __init__(self, in_channels, out_channels):
super(ConvBlock, self).__init__()
self.conv1 = nn.Conv2d(in_channels=in_channels,
out_channels=out_channels,
kernel_size=(3, 3), stride=(1, 1),
padding=(1, 1), bias=False)
self.conv2 = nn.Conv2d(in_channels=out_channels,
out_channels=out_channels,
kernel_size=(3, 3), stride=(1, 1),
padding=(1, 1), bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.bn2 = nn.BatchNorm2d(out_channels)
self.init_weight()
def init_weight(self):
init_layer(self.conv1)
init_layer(self.conv2)
init_bn(self.bn1)
init_bn(self.bn2)
def forward(self, input, pool_size=(2, 2), pool_type='avg'):
x = input
x = F.relu_(self.bn1(self.conv1(x)))
x = F.relu_(self.bn2(self.conv2(x)))
if pool_type == 'max':
x = F.max_pool2d(x, kernel_size=pool_size)
elif pool_type == 'avg':
x = F.avg_pool2d(x, kernel_size=pool_size)
elif pool_type == 'avg+max':
x1 = F.avg_pool2d(x, kernel_size=pool_size)
x2 = F.max_pool2d(x, kernel_size=pool_size)
x = x1 + x2
else:
raise ValueError(f'Unknown pool_type: {pool_type}')
return x
class ConvBlock5x5(nn.Module):
def __init__(self, in_channels, out_channels):
super(ConvBlock5x5, self).__init__()
self.conv1 = nn.Conv2d(in_channels=in_channels,
out_channels=out_channels,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.init_weight()
def init_weight(self):
init_layer(self.conv1)
init_bn(self.bn1)
def forward(self, input, pool_size=(2, 2), pool_type='avg'):
x = input
x = F.relu_(self.bn1(self.conv1(x)))
if pool_type == 'max':
x = F.max_pool2d(x, kernel_size=pool_size)
elif pool_type == 'avg':
x = F.avg_pool2d(x, kernel_size=pool_size)
elif pool_type == 'avg+max':
x1 = F.avg_pool2d(x, kernel_size=pool_size)
x2 = F.max_pool2d(x, kernel_size=pool_size)
x = x1 + x2
else:
raise ValueError(f'Unknown pool_type: {pool_type}')
return x
class AttBlock(nn.Module):
def __init__(self, n_in, n_out, activation='linear', temperature=1.):
super(AttBlock, self).__init__()
self.activation = activation
self.temperature = temperature
self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
self.bn_att = nn.BatchNorm1d(n_out)
self.init_weights()
def init_weights(self):
init_layer(self.att)
init_layer(self.cla)
init_bn(self.bn_att)
def forward(self, x):
# x: (n_samples, n_in, n_time)
norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
cla = self.nonlinear_transform(self.cla(x))
x = torch.sum(norm_att * cla, dim=2)
return x, norm_att, cla
def nonlinear_transform(self, x):
if self.activation == 'linear':
return x
elif self.activation == 'sigmoid':
return torch.sigmoid(x)
class Cnn14(nn.Module):
def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
fmax, classes_num):
super(Cnn14, self).__init__()
window = 'hann'
center = True
pad_mode = 'reflect'
ref = 1.0
amin = 1e-10
top_db = None
# Spectrogram extractor
self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
win_length=window_size, window=window, center=center, pad_mode=pad_mode,
freeze_parameters=True)
# Logmel feature extractor
self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
freeze_parameters=True)
# Spec augmenter
self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
freq_drop_width=8, freq_stripes_num=2)
self.bn0 = nn.BatchNorm2d(64)
self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
self.fc1 = nn.Linear(2048, 2048, bias=True)
self.fc_audioset1 = nn.Linear(2048, classes_num, bias=True)
self.init_weight()
def init_weight(self):
init_bn(self.bn0)
init_layer(self.fc1)
init_layer(self.fc_audioset1)
def forward(self, input, mixup_lambda=None):
"""
Input: (batch_size, data_length)"""
x = self.spectrogram_extractor(input.float()) # (batch_size, 1, time_steps, freq_bins)
x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
x = x.transpose(1, 3)
x = self.bn0(x)
x = x.transpose(1, 3)
if self.training:
x = self.spec_augmenter(x)
# Mixup on spectrogram
if self.training and mixup_lambda is not None:
x = do_mixup(x, mixup_lambda)
x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = torch.mean(x, dim=3)
(x1, _) = torch.max(x, dim=2)
x2 = torch.mean(x, dim=2)
x = x1 + x2
x = F.dropout(x, p=0.5, training=self.training)
x = F.relu_(self.fc1(x))
#embedding = F.dropout(x, p=0.5, training=self.training)
#clipwise_output = torch.sigmoid(self.fc_audioset(x))
x = self.fc_audioset1(x)
#output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
return x
# + colab={"base_uri": "https://localhost:8080/"} id="wZ3Qtqm3ThGU" executionInfo={"status": "ok", "timestamp": 1606614346479, "user_tz": -330, "elapsed": 3158, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="389bf271-55b5-4dfd-a623-11141bd9bea1"
# %%writefile Utils.py
import torch
import numpy as np
from sklearn import metrics
from sklearn.metrics import log_loss
def logloss_metric(y_true, y_pred):
y_true = np.asarray(y_true).ravel()
y_pred = np.asarray(y_pred).ravel()
y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
loss = np.where(y_true == 1, -np.log(y_pred), -np.log(1 - y_pred))
return loss.mean()
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
class MetricMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.y_true = []
self.y_pred = []
def update(self, y_true, y_pred):
self.y_true.extend(y_true.cpu().detach().numpy().tolist())
self.y_pred.extend(torch.nn.functional.softmax(y_pred, dim=1).cpu().detach().numpy().tolist())
@property
def avg(self):
#self.logloss = torch.nn.CrossEntropyLoss()(torch.tensor(self.y_pred), torch.tensor(self.y_true)).item()#np.argmax(self.y_true, axis=1)
self.logloss = log_loss(self.y_true, self.y_pred, labels=range(0, 193))
self.acc = metrics.accuracy_score(self.y_true, np.argmax(self.y_pred, axis=1))
self.f1 = metrics.f1_score(self.y_true, np.argmax(self.y_pred, axis=1), labels=range(0, 193), average="micro")
return {
"logloss" : self.logloss,
"acc" : self.acc,
"f1" : self.f1
}
# + colab={"base_uri": "https://localhost:8080/"} id="gzqXc3o_UKds" executionInfo={"status": "ok", "timestamp": 1606614347428, "user_tz": -330, "elapsed": 1924, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="8a82f420-f487-4898-9d73-401b6d52546f"
# %%writefile Losses.py
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss
# + colab={"base_uri": "https://localhost:8080/"} id="4j1jV-lRUNpf" executionInfo={"status": "ok", "timestamp": 1606614347430, "user_tz": -330, "elapsed": 1358, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="4f5d3993-35e9-40cc-9b6c-9ad84056794a"
# %%writefile Functions.py
from tqdm import tqdm
import numpy as np
import torch, torch.nn as nn
import torch.nn.functional as F
from Utils import AverageMeter, MetricMeter
def train_epoch(args, model, loader, criterion, optimizer, scheduler, epoch):
losses = AverageMeter()
scores = MetricMeter()
model.train()
#scaler = torch.cuda.amp.GradScaler()
t = tqdm(loader)
for i, sample in enumerate(t):
optimizer.zero_grad()
input = sample['waveform'].to(args.device)
target = sample['target'].to(args.device)
#print(input.shape)
#with torch.cuda.amp.autocast(enabled=args.amp):
output = model(input)
loss = criterion(output, target)
#scaler.scale(loss).backward()
#scaler.step(optimizer)
#scaler.update()
loss.backward()
optimizer.step()
if scheduler and args.step_scheduler:
scheduler.step()
bs = input.size(0)
scores.update(target, output)
losses.update(loss.item(), bs)
t.set_description(f"Train E:{epoch} - Loss{losses.avg:0.4f}")
t.close()
return scores.avg, losses.avg
def valid_epoch(args, model, loader, criterion, epoch):
losses = AverageMeter()
scores = MetricMeter()
model.eval()
with torch.no_grad():
t = tqdm(loader)
for i, sample in enumerate(t):
input = sample['waveform'].to(args.device)
target = sample['target'].to(args.device)
output = model(input)
loss = criterion(output, target)
bs = input.size(0)
scores.update(target, output)
losses.update(loss.item(), bs)
t.set_description(f"Valid E:{epoch} - Loss:{losses.avg:0.4f}")
t.close()
return scores.avg, losses.avg
def test_epoch(args, model, loader):
model.eval()
pred_list = []
with torch.no_grad():
t = tqdm(loader)
for i, sample in enumerate(t):
input = sample["waveform"].to(args.device)
output = F.softmax(model(input), dim=1).cpu().detach().numpy().tolist()
pred_list.extend(output)
return pred_list
def TTA_epoch(args, model, loader, ntta=10):
tta_preds = []
for _ in range(ntta):
model.eval()
pred_list = []
with torch.no_grad():
t = tqdm(loader)
for i, sample in enumerate(t):
input = sample["waveform"].to(args.device)
output = F.softmax(model(input), dim=1).cpu().detach().numpy().tolist()
pred_list.extend(output)
tta_preds.append(pred_list)
return np.mean(tta_preds, axis=0)
# + colab={"base_uri": "https://localhost:8080/"} id="GRK1FOq6UROb" executionInfo={"status": "ok", "timestamp": 1606614419770, "user_tz": -330, "elapsed": 2047, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="c49b9f80-91e6-4fea-be1d-ea2054b30d8b"
# %%writefile Run.py
import warnings
warnings.filterwarnings('ignore')
import os, time, librosa, random
import numpy as np, pandas as pd
import torch, torch.nn as nn
import torch.nn.functional as F
from transformers import get_linear_schedule_with_warmup
from catalyst.data.sampler import DistributedSampler, BalanceClassSampler
from tqdm import tqdm
try:
import wandb
except ImportError:
wandb = False
import Codes
import Datasets
import Models
import Losses
import Functions
import Augmentation
class args:
DEBUG = False
amp = False
wandb = False
exp_name = "Eff7_20fold_base"
network = "AudioClassifier" #"Cnn14" #"AudioClassifier"
encoder = None #"ResNet38"
pretrain_weights = None #"/content/Cnn14_mAP=0.431.pth"
model_param = {
'encoder' : 'tf_efficientnet_b7_ns',
'sample_rate': 32000,
'window_size' : 1024,
'hop_size' : 320,
'mel_bins' : 64,
'fmin' : 50,
'fmax' : 14000,
'classes_num' : 193
}
losses = "CrossEntropyLoss" #"BCEWithLogitsLoss"
lr = 1e-3
step_scheduler = True
epoch_scheduler = False
period = 3
seed = 24
start_epoch = 0
epochs = 50
batch_size = 64
num_workers = 2
early_stop = 10
device = ('cuda' if torch.cuda.is_available() else 'cpu')
train_csv = "train_20folds_seed24_df.csv"
test_csv = "test_df.csv"
sub_csv = "SampleSubmission.csv"
output_dir = "/content/drive/MyDrive/ZINDI GIZ NLP Agricultural Keyword Spotter #3 place solution/weights" # <--- update output_dir path hear
def main(fold):
# Setting seed
seed = args.seed
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
args.fold = fold
args.save_path = os.path.join(args.output_dir, args.exp_name)
os.makedirs(args.save_path, exist_ok=True)
train_df = pd.read_csv(args.train_csv)
test_df = pd.read_csv(args.test_csv)
sub_df = pd.read_csv(args.sub_csv)
if args.DEBUG:
train_df = train_df.sample(1000)
train_fold = train_df[train_df.kfold != fold]
valid_fold = train_df[train_df.kfold == fold]
train_dataset = Datasets.AudioDataset(
df=train_fold,
period=args.period,
transforms=Augmentation.augmenter,
train=True
)
valid_dataset = Datasets.AudioDataset(
df=valid_fold,
period=args.period,
transforms=None,
train=True
)
test_dataset = Datasets.AudioDataset(
df=test_df,
period=args.period,
transforms=None,
train=False
)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=args.batch_size,
#sampler = BalanceClassSampler(labels=train_dataset.__get_labels__(), mode="upsampling"),
shuffle=True,
drop_last=True,
num_workers=args.num_workers
)
valid_loader = torch.utils.data.DataLoader(
valid_dataset,
batch_size=args.batch_size,
shuffle=False,
drop_last=False,
num_workers=args.num_workers
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=args.batch_size,
shuffle=False,
drop_last=False,
num_workers=args.num_workers
)
tta_dataset = Datasets.AudioDataset(
df=test_df,
period=args.period,
transforms=Augmentation.test_augmenter,
train=False
)
tta_loader = torch.utils.data.DataLoader(
tta_dataset,
batch_size=args.batch_size,
shuffle=False,
drop_last=False,
num_workers=args.num_workers
)
model = Models.__dict__[args.network](**args.model_param)
model = model.to(args.device)
if args.pretrain_weights:
print("---------------------loading pretrain weights")
model.load_state_dict(torch.load(args.pretrain_weights, map_location=args.device)["model"], strict=False)
model = model.to(args.device)
criterion = Losses.__dict__[args.losses]()
optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr)
num_train_steps = int(len(train_loader) * args.epochs)
num_warmup_steps = int(0.1 * args.epochs * len(train_loader))
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_train_steps)
best_logloss = np.inf
for epoch in range(args.start_epoch, args.epochs):
train_avg, train_loss = Functions.train_epoch(args, model, train_loader, criterion, optimizer, scheduler, epoch)
valid_avg, valid_loss = Functions.valid_epoch(args, model, valid_loader, criterion, epoch)
if args.epoch_scheduler:
scheduler.step()
content = f"""
{time.ctime()} \n
Fold:{args.fold}, Epoch:{epoch}, lr:{optimizer.param_groups[0]['lr']:.7}\n
Train Loss:{train_loss:0.4f} - LogLoss:{train_avg['logloss']:0.4f} --- ACC:{train_avg['acc']:0.4f} --- F1:{train_avg['f1']:0.4f}\n
Valid Loss:{valid_loss:0.4f} - LogLoss:{valid_avg['logloss']:0.4f} --- ACC:{valid_avg['acc']:0.4f} --- F1:{valid_avg['f1']:0.4f}\n
"""
print(content)
with open(f'{args.save_path}/log_{args.exp_name}.txt', 'a') as appender:
appender.write(content+'\n')
if valid_avg['logloss'] < best_logloss:
print(f"########## >>>>>>>> Model Improved From {best_logloss} ----> {valid_avg['logloss']}")
torch.save(model.state_dict(), os.path.join(args.save_path, f'fold-{args.fold}.bin'))
best_logloss = valid_avg['logloss']
torch.save(model.state_dict(), os.path.join(args.save_path, f'fold-{args.fold}_last.bin'))
model.load_state_dict(torch.load(os.path.join(args.save_path, f'fold-{args.fold}.bin'), map_location=args.device))
model = model.to(args.device)
target_cols = sub_df.columns.values.tolist()
test_pred = Functions.test_epoch(args, model, test_loader)
print(np.array(test_pred).shape)
tta_pred = Functions.TTA_epoch(args, model, tta_loader, ntta=10)
print(np.array(tta_pred).shape)
test_pred_df = pd.DataFrame({
"fn" : test_df.fn.values
})
test_pred_df["fn"] = test_pred_df["fn"].apply(lambda x: x.split("/")[-1])
test_pred_df["fn"] = test_pred_df["fn"].apply(lambda x: f"audio_files/{x}")
test_pred_df[list(Codes.CODE.keys())] = test_pred
test_pred_df = test_pred_df[target_cols]
test_pred_df.to_csv(os.path.join(args.save_path, f"fold-{args.fold}-submission.csv"), index=False)
print(os.path.join(args.save_path, f"fold-{args.fold}-submission.csv"))
tta_pred_df = pd.DataFrame({
"fn" : test_df.fn.values
})
tta_pred_df["fn"] = tta_pred_df["fn"].apply(lambda x: x.split("/")[-1])
tta_pred_df["fn"] = tta_pred_df["fn"].apply(lambda x: f"audio_files/{x}")
tta_pred_df[list(Codes.CODE.keys())] = tta_pred
tta_pred_df = tta_pred_df[target_cols]
tta_pred_df.to_csv(os.path.join(args.save_path, f"tta-fold-{args.fold}-submission.csv"), index=False)
print(os.path.join(args.save_path, f"tta-fold-{args.fold}-submission.csv"))
oof_pred = Functions.test_epoch(args, model, valid_loader)
oof_pred_df = pd.DataFrame({
"fn" : valid_fold.fn.values
})
oof_pred_df[list(Codes.CODE.keys())] = oof_pred
oof_pred_df = oof_pred_df[target_cols]
oof_pred_df.to_csv(os.path.join(args.save_path, f"oof-fold-{args.fold}.csv"), index=False)
if __name__ == "__main__":
for fold in range(0, 20):
if fold >= 0:
main(fold)
# + colab={"base_uri": "https://localhost:8080/"} id="sFu0YKb_rYwR" executionInfo={"status": "ok", "timestamp": 1606617815406, "user_tz": -330, "elapsed": 3394416, "user": {"displayName": "kaggle ai", "photoUrl": "", "userId": "18291610024750681979"}} outputId="9b42621a-ac0d-4061-fd25-6c38e3078d39"
# !python Run.py
# + id="FtF9T_t47Vnc"
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python3
# ---
# # Wine Quality Prediction - Unsupervised Learning / Clustering Project
#
# This data set is a modified version of the Kaggle Wine Quality Prediction dataset in which the target column has been removed. We therefore do not know how many classes there are, and we will try to determine the clusters with the help of the sklearn.cluster module.
#
# ## About the data
#
# This dataset is adapted from the Wine Data Set from https://archive.ics.uci.edu/ml/datasets/wine by removing the information about the types of wine for unsupervised learning.
#
# The following descriptions are adapted from the UCI webpage:
#
# These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines.
#
# The attributes are:
#
# Alcohol, Malic acid , Ash, Alcalinity of ash, Magnesium, Total phenols, Flavanoids, Nonflavanoid phenols, Proanthocyanins, Color intensity, Hue, OD280/OD315 of diluted wines, Proline
#
# ## Which clustering methods to use
# This is a learning project for me, so I will use the following clustering methods:
#
# * KMeans
#
# * Mean-Shift Clustering
#
# * Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
#
# * Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
#
# * Agglomerative Hierarchical Clustering
#
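# All five methods share the same scikit-learn estimator interface, so they are easy to swap in and out. A minimal sketch on synthetic blobs (illustrative data, not the wine set; the parameter values here are assumptions of the example):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, MeanShift, DBSCAN, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

# three well-separated synthetic blobs stand in for the wine features
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=42)

models = {
    'kmeans': KMeans(n_clusters=3, n_init=10, random_state=42),
    'meanshift': MeanShift(),
    'dbscan': DBSCAN(eps=1.0, min_samples=5),
    'gmm': GaussianMixture(n_components=3, random_state=42),
    'agglomerative': AgglomerativeClustering(n_clusters=3),
}

found = {}
for name, model in models.items():
    labels = model.fit_predict(X)  # the shared estimator interface
    found[name] = len(set(labels)) - (1 if -1 in labels else 0)  # ignore DBSCAN noise label
print(found)
```

On data this clean, all of the partitioning methods should recover the three groups; on the real wine data they will not necessarily agree.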
# +
# importing necessary packages
import pandas as pd
import numpy as np
import plotly.express as px
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# -
wine = pd.read_csv('wine-clustering.csv')
wine.sample(5)
# ## Descriptives
wine.info()
# no missing values; 178 rows, 13 columns. get descriptives
wine.describe()
# histograms
_ = wine.hist(figsize=(14,14), bins=20)
# +
print('Shape before outlier removal is : {}'.format(wine.shape))
for col in wine.columns:
q1, q3 = wine[col].quantile([0.25,0.75])
IQR = q3-q1
max_val = q3 + (1.5*IQR)
min_val = q1 - (1.5*IQR)
outliers = wine[(wine[col]>max_val) | (wine[col]<min_val)].index
wine.drop(outliers, axis=0, inplace=True)
print('Shape after outlier removal is : {}'.format(wine.shape))
# -
from scipy import stats
for column in wine.columns:
print(f"Skewness of {column} is : {stats.skew(wine[column])}")
print(f"Kurtosis of {column} is : {stats.kurtosis(wine[column])}")
# The assumption of normality is accepted if the data have skewness between -0.5 and 0.5 and kurtosis between -3.0 and 3.0. Some authors indicate that these bounds can be extended (e.g. skewness between -1 and 1, or even -3 and 3). So we might assume that the columns are normally distributed (or at least close to a normal distribution).
#
# For clustering and principal component analysis, we need our data to be on the same scale. So, I will implement Standard Scaler.
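# The rule of thumb above can be expressed as a small helper (`roughly_normal` is an illustrative name, not part of this notebook; the bounds used are the lenient ones, |skewness| <= 1 and |excess kurtosis| <= 3):

```python
import numpy as np
from scipy import stats

def roughly_normal(x, skew_limit=1.0, kurt_limit=3.0):
    # accept the normality assumption when both sample statistics fall inside the bounds
    return abs(stats.skew(x)) <= skew_limit and abs(stats.kurtosis(x)) <= kurt_limit

rng = np.random.default_rng(0)
print(roughly_normal(rng.normal(size=2000)))       # draws from a normal distribution
print(roughly_normal(rng.exponential(size=2000)))  # strongly right-skewed draws
```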
# scale the data
from sklearn.preprocessing import StandardScaler
ss= StandardScaler()
wine_scaled = ss.fit_transform(wine)
# ## PCA
# PCA
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(wine_scaled)
wine_pca = pca.transform(wine_scaled)
# explained variance ratio
wine_pca_var = np.round(pca.explained_variance_ratio_ * 100, 2)
print(f"Total variance explained {wine_pca_var.sum()}%")
print(f"Variance loads of each factor are : {wine_pca_var}")
pca.explained_variance_
# The explained variance gives the eigenvalues; in factor analysis we keep factors with eigenvalues greater than 1. In this sample, the first 3 components satisfy that criterion, so we use n_components = 3.
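# The eigenvalue-greater-than-one rule (the Kaiser criterion) can also be applied programmatically. A sketch on synthetic correlated data (not the wine data; the two-factor structure is an assumption of the example):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# two latent factors drive six observed columns, plus a little noise
latent = rng.normal(size=(300, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(300, 6))

pca = PCA().fit(StandardScaler().fit_transform(X))
n_keep = int((pca.explained_variance_ > 1).sum())
print('eigenvalues:', np.round(pca.explained_variance_, 2))
print('components kept by the Kaiser criterion:', n_keep)
```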
pca = PCA(n_components = 3)
pca.fit(wine_scaled)
wine_pca = pca.transform(wine_scaled)
# explained variance ratio
wine_pca_var = np.round(pca.explained_variance_ratio_ * 100, 2)
print(f"Total variance explained {wine_pca_var.sum()}%")
print(f"Variance loads of each factor are : {wine_pca_var}")
# +
# scree plot
plt.bar(x=range(1,len(wine_pca_var)+1), height = wine_pca_var)
plt.show()
# -
fig, (ax0,ax1,ax2) = plt.subplots(1,3, figsize=(18,6))
ax0.scatter(x=wine_pca[:,0], y= wine_pca[:,1])
ax0.set_title('Scatterplot between PC1 and PC2')
ax1.scatter(x=wine_pca[:,0], y= wine_pca[:,2])
ax1.set_title('Scatterplot between PC1 and PC3')
ax2.scatter(x=wine_pca[:,1], y= wine_pca[:,2])
ax2.set_title('Scatterplot between PC2 and PC3')
fig.suptitle('Scatter Matrix of Wine Dataset')
px.scatter_3d(x=wine_pca[:,0], y= wine_pca[:,1], z= wine_pca[:,2],
title='3D scatter plot of Principle Components')
#
# ## KMeans
# ### Kmeans implementation for wine_scaled
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
# elbow method for neighbors
neighbor_rows = []
for i in range(2, 10):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=42)
    kmeans.fit(wine_scaled)
    score_silhouette = silhouette_score(wine_scaled, kmeans.labels_, metric='euclidean')
    neighbor_rows.append({'clusters': i,
                          'inertia': kmeans.inertia_,
                          'silhouette_score': score_silhouette})
neighbors = pd.DataFrame(neighbor_rows)
neighbors
fig, (ax0,ax1) = plt.subplots(1,2, figsize=(12,6), sharex=True)
ax0.bar(x='clusters', height='inertia', data=neighbors)
ax0.set_xlabel('Number of Clusters')
ax1.bar(x='clusters',height='silhouette_score', data=neighbors)
ax1.set_xlabel('Silhouette Scores of Clusters')
_ = plt.figtext(0.5, 0.01,'As we can see from the plot, 3 is the optimal value for n_clusters',
ha='center', fontsize=18)
# +
kmeans = KMeans(n_clusters= 3, init='k-means++', random_state=42,)
kmeans_labels = kmeans.fit_predict(wine_scaled)
pd.Series(kmeans_labels).value_counts()
# -
# lets's label the data
wine['labels_kmeans0']= kmeans_labels
wine.groupby('labels_kmeans0').agg(['min','max','mean'])
# +
# plot first 3 columns of the Wine data, to see how the clustering work
fig = px.scatter_3d(x=wine.iloc[:,0], y= wine.iloc[:,1], z = wine.iloc[:,2], color=wine['labels_kmeans0'])
fig.show()
# -
# ### Kmeans with wine_pca data
# elbow method for neighbors
neighbor_rows = []
for i in range(2, 10):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=42)
    kmeans.fit(wine_pca)
    score_silhouette = silhouette_score(wine_pca, kmeans.labels_, metric='euclidean')
    neighbor_rows.append({'clusters': i,
                          'inertia': kmeans.inertia_,
                          'silhouette_score': score_silhouette})
neighbors = pd.DataFrame(neighbor_rows)
fig, (ax0,ax1) = plt.subplots(1,2, figsize=(12,6), sharex=True)
ax0.bar(x='clusters', height='inertia', data=neighbors)
ax0.set_xlabel('Number of Clusters')
ax1.bar(x='clusters',height='silhouette_score', data=neighbors)
ax1.set_xlabel('Silhouette Scores of Clusters')
_ = plt.figtext(0.5, 0.01,'As we can see from the plot, 3 is the optimal value for n_clusters',
ha='center', fontsize=18)
# +
kmeans = KMeans(n_clusters= 3, init='k-means++', random_state=42,)
kmeans_labels = kmeans.fit_predict(wine_pca)
pd.Series(kmeans_labels).value_counts()
# -
wine['labels_kmeans1']= kmeans_labels
# +
fig = px.scatter_3d(x=wine.iloc[:,0], y= wine.iloc[:,1], z = wine.iloc[:,2], color=wine['labels_kmeans1'])
fig.show()
# -
# ## Mean - Shift Clustering
from sklearn.cluster import MeanShift
ms = MeanShift()
ms.fit(wine.iloc[:,0:13])
labels = ms.labels_
cluster_centers = ms.cluster_centers_
# +
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
labels = ms.predict(wine.iloc[:,0:13])
wine['labels_ms'] = labels
# -
px.scatter_3d(x=wine.iloc[:,0], y= wine.iloc[:,1], z = wine.iloc[:,2], color=wine['labels_ms'])
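# Unlike k-means, mean shift has no `n_clusters` parameter; its behaviour is governed by the kernel bandwidth. A minimal sketch of letting scikit-learn estimate that bandwidth — on synthetic blobs, not the wine features, and with an arbitrary `quantile` value:

```python
# MeanShift's result depends heavily on the kernel bandwidth, which sklearn
# can estimate for us (smaller quantile -> smaller bandwidth -> more clusters).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=42)

bandwidth = estimate_bandwidth(X, quantile=0.2, random_state=42)
ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)
print("estimated bandwidth:", round(bandwidth, 3))
print("number of clusters found:", len(np.unique(labels)))
```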
# ## Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
neigh = NearestNeighbors(n_neighbors=20)
nbrs = neigh.fit(wine)
distances, indices = nbrs.kneighbors(wine)
# sort each point's distance to its nearest neighbour and plot the k-distance curve;
# the "elbow" of this curve is a common heuristic for choosing DBSCAN's eps
distances = np.sort(distances[:, 1])  # column 1 = distance to the nearest neighbour
plt.plot(distances)
db = DBSCAN(eps=40, min_samples=wine.shape[1] + 1)
y_pred = db.fit_predict(wine)  # fit_predict fits the model and returns the labels in one call
px.scatter_3d(x=wine.iloc[:,0], y= wine.iloc[:,1], z = wine.iloc[:,2], color=y_pred)
# +
n_clusters_ = len(set(y_pred)) - (1 if -1 in y_pred else 0)
n_noise_ = list(y_pred).count(-1)
print('Estimated number of clusters: %d' % n_clusters_)
print('Estimated number of noise points: %d' % n_noise_)
# -
# ## Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
# +
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components = 3).fit(wine.iloc[:,0:13])
# -
gmm_labels = gmm.predict(wine.iloc[:,0:13])
px.scatter_3d(x=wine.iloc[:,0], y= wine.iloc[:,1], z = wine.iloc[:,2], color=gmm_labels)
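# One thing worth noting about GMMs: unlike k-means, they give *soft* assignments. A small illustration on synthetic 1-D data (not the wine features), using `predict_proba`:

```python
# A GaussianMixture assigns each sample a probability of belonging to every
# component, rather than a single hard label.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
proba = gmm.predict_proba(X)               # shape (n_samples, n_components)
print(proba.shape)                         # (400, 2)
# each row is a probability distribution over the components
print(np.allclose(proba.sum(axis=1), 1.0)) # True
```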
# ## Agglomerative Hierarchical Clustering
from sklearn.cluster import AgglomerativeClustering
agg = AgglomerativeClustering(n_clusters=3)
agg.fit(wine.iloc[:,0:13])
wine['labels_agg'] = agg.labels_
px.scatter_3d(x=wine.iloc[:,0], y= wine.iloc[:,1], z = wine.iloc[:,2], color=wine['labels_agg'])
| Unsupervised Learning Methods on Wine Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dl_tf2
# language: python
# name: dl_tf2
# ---
# # Predicting house prices: a regression example
# Another common type of machine-learning problem is regression, which consists of predicting a continuous value instead of a discrete label: for instance, predicting the temperature tomorrow, given meteorological data; or predicting the time that a software project will take to complete, given its specifications.
#
# ## Dataset: The Boston Housing Price dataset
# We’ll attempt to predict the median price of homes in a given Boston suburb in the mid-1970s, given data points about the suburb at the time, such as the crime rate, the local property tax rate, and so on. The dataset has relatively few data points: only 506, split between 404 training samples and 102 test samples. And each feature in the input data (for example, the crime rate) has a different scale. For instance, some values are proportions, which take values between 0 and 1; others take values between 1 and 12, others between 0 and 100, and so on.
# +
import os, time
import tensorflow as tf
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.keras.backend.clear_session()
from tensorflow.keras.datasets import boston_housing
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
# Let’s look at the data:
print (train_data.shape, test_data.shape)
# -
train_data
train_targets
# The prices (the targets are expressed in thousands of dollars) are typically between $10,000 and $50,000. If that sounds cheap, remember that this was the mid-1970s, and these prices aren’t adjusted for inflation.
# ## Preparing the data
# It would be problematic to feed into a neural network values that all take wildly different ranges. The network might be able to automatically adapt to such heterogeneous data, but it would definitely make learning more difficult. A widespread best practice to deal with such data is to do feature-wise normalization: for each feature in the input data, subtract the mean of the feature and divide by the standard deviation, so that the feature is centered around 0 and has a unit standard deviation. This is easily done in Numpy. Note that the mean and standard deviation used to normalize the test data below are computed on the *training* data: you should never use any quantity computed on the test set.
#
mean = train_data.mean(axis=0)
train_data -= mean
std = train_data.std(axis=0)
train_data /= std
test_data -= mean
test_data /= std
# ## Model Architecture
# Because so few samples are available, you’ll use a very small network with two hidden layers, each with 64 units. In general, the less training data you have, the worse overfitting will be, and using a small network is one way to mitigate overfitting.
#
# +
from tensorflow.keras import models
from tensorflow.keras import layers
def build_model():
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
return model
# -
# ### Validating your approach using K-fold validation
# To evaluate your network while you keep adjusting its parameters (such as the number of epochs used for training), you could split the data into a training set and a validation set, as you did in the previous examples. But because you have so few data points, the validation set would end up being very small (for instance, about 100 examples). As a consequence, the validation scores might change a lot depending on which data points you chose to use for validation and which you chose for training: the validation scores might have a high variance with regard to the validation split. This would prevent you from reliably evaluating your model. The best practice in such situations is to use K-fold cross-validation. It consists of splitting the available data into K partitions (typically K = 4 or 5), instantiating K identical models, and training each one on K – 1 partitions while evaluating on the remaining partition. The validation score for the model is then the average of the K validation scores obtained. In terms of code, this is straightforward.
import numpy as np
k = 4
num_val_samples = len(train_data) // k
num_epochs = 100
all_scores = []
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
model = build_model()
model.fit(partial_train_data, partial_train_targets,
epochs=num_epochs, batch_size=1, verbose=0)
val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
all_scores.append(val_mae)
all_scores
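# The manual index slicing above can equivalently be done with scikit-learn's `KFold` helper — a sketch on toy arrays standing in for `train_data`/`train_targets`:

```python
# KFold generates the same K train/validation index splits as the manual
# slicing loop, without the concatenate bookkeeping.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)   # stand-in for train_data
y = np.arange(10)                  # stand-in for train_targets

kf = KFold(n_splits=4)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # train_idx indexes the K-1 training partitions, val_idx the held-out one
    print(f"fold {fold}: train={len(train_idx)} val={len(val_idx)}")
```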
# Let's train the network for a little longer: 500 epochs
# +
num_epochs = 500
all_mae_histories = []
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
model = build_model()
history = model.fit(partial_train_data, partial_train_targets,
validation_data=(val_data, val_targets),
epochs=num_epochs, batch_size=1, verbose=0)
mae_history = history.history['val_mae']
all_mae_histories.append(mae_history)
# -
average_mae_history = [np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)]
import matplotlib.pyplot as plt
plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
def smooth_curve(points, factor=0.9):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
smooth_mae_history = smooth_curve(average_mae_history[10:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
model = build_model()
model.fit(train_data, train_targets, epochs=80, batch_size=16, verbose=0)
test_mse_score, test_mae_score = model.evaluate(test_data, test_targets, verbose=0)
print(test_mse_score, test_mae_score)
| Complete/RegressionExampleComplete.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 17207, "status": "ok", "timestamp": 1617094076921, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="f-H7uxad8Tfz" outputId="004c421d-1074-4740-ef13-9315e1b48c66"
from google.colab import drive
drive.mount('/content/drive')
# + executionInfo={"elapsed": 2318, "status": "ok", "timestamp": 1617094092943, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="ykIACZHr6AFf"
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.applications.nasnet import NASNetLarge, NASNetMobile
from keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# + colab={"background_save": true} id="tB9NhTLuASRU"
WH = 331
IMAGE_SIZE = (WH, WH)
INPUT_SHAPE = IMAGE_SIZE + (3,)
VALIDATION_SPLIT = 0.2
BATCH_SIZE = 32
# + executionInfo={"elapsed": 1154, "status": "ok", "timestamp": 1617094108454, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="pA9ezFiw8aiJ"
gdrive_dir = "/content/drive/MyDrive"
working_dir = os.path.join(gdrive_dir, "CS3244 Project")
data_dir = os.path.join(working_dir, "landmarks/international/data_split/7")
model_root_dir = os.path.join(working_dir, "models/xihao")
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 932, "status": "ok", "timestamp": 1617096307949, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="UA9bq4Ym9HT2" outputId="19678831-9247-4486-c45d-b66c3d65eb15"
num_of_labels = len(os.listdir(data_dir))
print('number of local labels:', num_of_labels)
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1334, "status": "ok", "timestamp": 1617097211028, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="ab-zjc80_UdN"
# dataflow_kwargs = dict(target_size=IMAGE_SIZE, batch_size=BATCH_SIZE, interpolation="bilinear")
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale = 1./255,
validation_split = VALIDATION_SPLIT,
rotation_range = 30,
width_shift_range = 0.1,
height_shift_range = 0.1,
shear_range = 0.1,
zoom_range = 0.1,
brightness_range = [0.9,1.1],
fill_mode = 'nearest'
)
train_generator = train_datagen.flow_from_directory(
data_dir,
subset = "training",
shuffle = True,
target_size = IMAGE_SIZE ,
batch_size = BATCH_SIZE,
class_mode = 'categorical',
)
validation_datagen = ImageDataGenerator(
rescale=1./255,
validation_split = VALIDATION_SPLIT
)
validation_generator = validation_datagen.flow_from_directory(
data_dir,
subset = "validation",
shuffle = False,
target_size = IMAGE_SIZE,
batch_size = BATCH_SIZE,
class_mode = 'categorical'
)
# + executionInfo={"elapsed": 1070, "status": "ok", "timestamp": 1617097220615, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="UXbK840pZD7A"
input_layer = layers.InputLayer(
input_shape = INPUT_SHAPE,
batch_size = BATCH_SIZE,
name = 'input'
)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 706, "status": "ok", "timestamp": 1617097223494, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="FHKBkeFqeKN7" outputId="8c9b87b8-197a-4922-cb7d-9d7df040f5c2"
model = tf.keras.Sequential(input_layer)
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 11929, "status": "error", "timestamp": 1617097236589, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="kt99nvuUUDQk" outputId="2625608a-b814-4e16-8848-c3eb42e8405d"
model.add(
hub.KerasLayer("https://tfhub.dev/google/imagenet/nasnet_large/classification/4",
trainable = True)
)
# + executionInfo={"elapsed": 9198, "status": "aborted", "timestamp": 1617097236587, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="2hNEVTYkbzdi"
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dropout(0.2))
# model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(num_of_labels, activation='softmax'))
# + executionInfo={"elapsed": 716, "status": "ok", "timestamp": 1617097248619, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="0vHYeblCUioD"
build_arg = (None,) + INPUT_SHAPE
model.build(build_arg)
# + executionInfo={"elapsed": 758, "status": "ok", "timestamp": 1617097250535, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="-5SwjusUOWLX"
model.compile(
loss = 'categorical_crossentropy',
    optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001),
metrics = ['accuracy']
)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1034, "status": "ok", "timestamp": 1617097252349, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="ctcVc1x5OVB1" outputId="7a60f37f-7f0e-438d-c38b-b60cbc8c647a"
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 809, "status": "ok", "timestamp": 1617096709199, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="uM2klwLLJQWW" outputId="a8ccc74f-a33d-4f67-b981-3558cdd35966"
steps_per_epoch = int(train_generator.samples / BATCH_SIZE)
validation_steps = int(validation_generator.samples / BATCH_SIZE)
print("Steps per epoch:", steps_per_epoch)
print("Validation steps:", validation_steps)
# + colab={"base_uri": "https://localhost:8080/", "height": 511} executionInfo={"elapsed": 204055, "status": "error", "timestamp": 1617096913894, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="syLAqVmIOc_4" outputId="7b80dbc4-deec-4a10-e465-77cc24b7f850"
history = model.fit(
train_generator,
steps_per_epoch = steps_per_epoch,
epochs = 60,
validation_data = validation_generator,
# validation_steps = validation_steps
)
# + id="rW6Ufut-b7NV"
save_model_name = "imagenet_nasnet_large_classification_1"
save_model_dir = os.path.join(model_root_dir, save_model_name)
tf.keras.models.save_model(model, save_model_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 562} executionInfo={"elapsed": 4915740, "status": "ok", "timestamp": 1617092158324, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgK3ZbZ6ZwwAtwu_CJC4YeTQ6a6EialMtXg9RwfVQ=s64", "userId": "00445686767831085701"}, "user_tz": -480} id="w9YXgrQDP5RB" outputId="0f439b81-f788-4d59-d83a-38312e9860ef"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
| colabs/nasnet-base_model_with_layer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: MyPython
# language: Python
# name: mypython
# ---
# +
## new kernel.py
# ##%overwritefile
# ##%file:../../../jupyter-MyRust-kernel/jupyter_MyRust_kernel/kernel.py
# ###%file:rust_kernel.py
# ##%noruncode
#
# MyRust Jupyter Kernel
# generated by MyPython
#
# ##%include:../../src/head.py
import time
from .MyKernel import MyKernel
class RustKernel(MyKernel):
kernel_info={
'info':'[MyRust Kernel]',
'extension':'.rs',
'execsuffix':'.exe',
'needmain':'true',
'compiler':{
'cmd':'rustc',
'outfileflag':'-o',
'clargs':[],
'crargs':[],
},
'interpreter':{
'cmd':'',
'clargs':'',
'crargs':'',
},
}
implementation = 'jupyter_MyRust_kernel'
implementation_version = '1.0'
language = 'Rust'
language_version = ''
    language_info = {'name': 'rust',
'mimetype': 'text/rust',
'file_extension': kernel_info['extension']}
runfiletype='script'
banner = "Rust kernel.\n" \
"Uses Rust, compiles in rust, and creates source code files and executables in temporary folder.\n"
    main_head = "\n\nfn main() {\n"
    main_foot = "\n}"
## //%include:../../src/comm_attribute.py
## __init__
def __init__(self, *args, **kwargs):
super(RustKernel, self).__init__(*args, **kwargs)
self.runfiletype='script'
self.kernelinfo="[MyRustKernel{0}]".format(time.strftime("%H%M%S", time.localtime()))
## #############################################
# ##%include:src/compile_out_file.py
# ##%include:src/compile_with_sc.py
# ##%include:src/c_exec_sc_.py
# ##%include:src/do_rust_runcode.py
# ##%include:src/do_rust_compilecode.py
# ##%include:src/do_rust_create_codefile.py
# ##%include:src/do_rust_preexecute.py
# -
| kernel/Rust/l_mykernel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import os
# ### Import
data_path = os.path.join(os.path.pardir, 'data/raw/')
train_path = data_path + 'train.csv'
test_path = data_path + 'test.csv'
train_df = pd.read_csv(train_path, index_col='PassengerId')
test_df = pd.read_csv(test_path, index_col='PassengerId')
type(train_df)
train_df.info()
test_df.info()
train_df.head(1)
# ### Edit Data
test_df['Survived'] = -1
df = pd.concat((train_df, test_df))
df.info()
df.tail()
df[['Name', 'Age']][:2]
df.loc[5:7, ['Age','Parch']]
df[5:7][['Age', 'Fare']]
# ## Filter
df.loc[((df.Sex == 'male') & (df.Age == 22))]
# # Summary statistic
df.describe()
print(df.Age.median())
print(df.Age.quantile(.75))
print(f'variance = {df.Age.var()}')
print(f'Standard Deviation = {df.Age.std()}')
df.Name.unique()
df.Age.plot(kind='box')
# # Categorical Fields - Text
df.describe(include='all')
df.Sex.value_counts()
df.Sex.value_counts(normalize=True)
a = df[df.Survived == 1].Pclass.value_counts()
a
a.plot(kind='bar')
df.Age.plot(kind='kde')
df.Age.plot(kind='hist')
| notebooks/02-exploring-processing-data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Moving the Robot
# The Pioneer 3-DX robot is an all-purpose base, used for research and applications involving mapping, teleoperation, localization, monitoring and other behaviors.
#
# It is a so-called [**_differential-drive_** mobile platform](https://en.wikipedia.org/wiki/Differential_wheeled_robot), with a powered wheel on either side of the robot body, and a rear castor wheel for balance.
#
# Each wheel is powered by its own motor. The motion of the robot is determined by the speed on the wheels:
# * If both wheels are driven at the same direction and speed, the robot will move in a straight line.
# * If one speed is higher than the other one, the robot will turn towards the direction of the lower speed.
# * If both wheels are turned with equal speed in opposite directions, the robot will spin around the central point of its axis.
# [](http://www.guiott.com/CleaningRobot/C-Motion/Motion.htm)
#
# Let's see a Pioneer robot moving!
# + jupyter={"outputs_hidden": false}
# this is code cell -> click on it, then press Shift+Enter
from IPython.display import YouTubeVideo
YouTubeVideo('vasBnRS3tQk')
# -
# ### Initialization
# Throughout the course, some code is already written for you, and organized in modules called *packages*. The cell below is an initialization step that must be called at the beginning of each notebook. It can take a few seconds to run, so please be patient and wait until the running indicator `In[*]` becomes `In[2]`.
# + jupyter={"outputs_hidden": false}
# this is another code cell -> click on it, then press Shift+Enter
import packages.pioneer3dx as p3dx
p3dx.init()
# -
# ### Motion
# Let's move the robot on the simulator!
#
# You are going to use a *widget*, a Graphical User Interface (GUI) with two sliders for moving the robot in two ways: translation and rotation.
# + jupyter={"outputs_hidden": false}
# and this is again a code cell -> you already know what to do, don't you?
import packages.motion_widget
# -
# The cell above outputs two sliders, which control the translation and rotation of the robot. Initially both values are zero; move the slider left or right to change their values and move the robot.
#
# Once you are familiar with the motion of the robot, please proceed to the next notebook: [Motion Functions](Motion%20Functions.ipynb).
# ---
# #### Try-a-Bot: an open source guide for robot programming
# Developed by:
# [](http://robinlab.uji.es)
#
# Sponsored by:
# <table>
# <tr>
# <td style="border:1px solid #ffffff ;">
# <a href="http://www.ieee-ras.org"><img src="img/logo/ras.png"></a>
# </td>
# <td style="border:1px solid #ffffff ;">
# <a href="http://www.cyberbotics.com"><img src="img/logo/cyberbotics.png"></a>
# </td>
# <td style="border:1px solid #ffffff ;">
# <a href="http://www.theconstructsim.com"><img src="img/logo/theconstruct.png"></a>
# </td>
# </tr>
# </table>
#
# Follow us:
# <table>
# <tr>
# <td style="border:1px solid #ffffff ;">
# <a href="https://www.facebook.com/RobotProgrammingNetwork"><img src="img/logo/facebook.png"></a>
# </td>
# <td style="border:1px solid #ffffff ;">
# <a href="https://www.youtube.com/user/robotprogrammingnet"><img src="img/logo/youtube.png"></a>
# </td>
# </tr>
# </table>
#
| Unit 1/Moving the Robot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
from plotnine import *
import numpy as np
import seaborn as sns
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# ### Obtaining the data
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url, names=['sepal length','sepal width','petal length','petal width','target'])
df.head()
# ### Segregate the features data from the target or label data
features = ['sepal length', 'sepal width', 'petal length', 'petal width']
X = df.loc[:, features].values # or X = df.iloc[:, 0:4].values
y = df.loc[:, 'target'].values # or y = df.iloc[:, 4].values
# ### Scale the features data using a Standard Scaler
X_scaled = StandardScaler().fit_transform(X)
pd.DataFrame(data = X_scaled, columns = features).head()
# ### Create 2-D data using PCA
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(X_scaled)
principalDf = pd.DataFrame(data = principalComponents, columns = ['principal component 1', 'principal component 2'])
principalDf.head()
df[['target']].head()
finalDF = pd.concat([principalDf, df[['target']]], axis = 1)
finalDF.head()
pca.explained_variance_ratio_
pca.explained_variance_ratio_.sum()
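# Rather than hard-coding `n_components=2`, `PCA` also accepts a float in (0, 1): it then keeps the smallest number of components that explains at least that fraction of the variance. A sketch on random stand-in data (not the iris features):

```python
# PCA with a target fraction of explained variance instead of a fixed
# component count.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(150, 4))          # stand-in for the four iris features
X = StandardScaler().fit_transform(X)

pca95 = PCA(n_components=0.95).fit(X)
print("components kept:", pca95.n_components_)
print("variance explained:", pca95.explained_variance_ratio_.sum())
```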
# ### Visualize the 2-D Data
(sns
 .FacetGrid(finalDF, hue='target', height=7)  # 'size' was renamed to 'height' in seaborn 0.9
.map(plt.scatter, 'principal component 1', 'principal component 2')
.add_legend()
.set(
title='PCA Principal Components (n=2)',
xlabel='Principal Component 1',
ylabel='Principal Component 2'
))
plot = (ggplot(finalDF) +
aes(x = 'principal component 1', y = 'principal component 2', color = 'target') +
geom_point() +
ggtitle('PCA Principal Components (n=2)') +
xlab('Principal Component 1') +
ylab('Principal Component 2'))
plot
# +
import altair as alt
alt.renderers.enable('notebook')
alt.Chart(finalDF, title='PCA Principal Components (n=2)').mark_circle(size=60).encode(
x='principal component 1',
y='principal component 2',
color='target',
tooltip=['target', 'principal component 1', 'principal component 2']
).interactive()
| jupyter_notebooks/machine_learning/Feature_Engineering/PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Home Budget
# -----
# +
import requests
import pickle
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
from google_auth_oauthlib import flow
from apiclient.discovery import build
import sklearn as sk
import config.settings as settings
SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]
URL = 'https://sheets.googleapis.com/v4/spreadsheets/' + settings.SPREADSHEET_ID
def parse_google_auth(file):
"""
    parse_google_auth(file)
:param: file is a String with a path (relative or absolute) to the given JSON file.
This function requires a JSON file for a specific Google OAuth user.
This can be received from the Google Cloud Console for the linked project.
"""
log("Loading Google Sheet... ", end="")
try:
saved_token = open('config/token.bin', 'rb')
creds = pickle.load(saved_token)
log("Saved token found")
except FileNotFoundError:
saved_token = open('config/token.bin', 'wb+')
auth_flow = flow.InstalledAppFlow.from_client_secrets_file(file, scopes=SCOPES)
creds = auth_flow.run_local_server(open_browser=True)
pickle.dump(creds, saved_token)
log("New token saved")
finally:
saved_token.close()
service = build('sheets', 'v4', credentials=creds)
return service
def get_sheet_values(file_id=settings.SPREADSHEET_ID):
service = parse_google_auth("config/oauth2.json")
log("Getting sheet values:")
return_values = {}
for range_string in settings.SHEET_NAMES:
log("\t {}... ".format(range_string), end="")
request = service.spreadsheets().values().batchGet(
spreadsheetId=settings.SPREADSHEET_ID, ranges=range_string)
response = request.execute()
values = response['valueRanges'][0]['values']
return_values[range_string] = values
log("done")
return return_values
def log(string, end='\n'):
'''
For backwards compatibility
'''
    print(string, end=end)
vals = get_sheet_values()
# -
plt.style.use('fivethirtyeight')
bal_hist = pd.DataFrame(vals['Balance History'][1:], columns=vals['Balance History'][0][:-1])
bal_hist.set_index(pd.to_datetime(bal_hist.pop('Date') + ' ' + bal_hist.pop('Time')), inplace=True)
bal_hist['Balance'] = pd.to_numeric(bal_hist['Balance'].str.replace('$', '').str.replace(',', ''))
# bal_hist = bal_hist[bal_hist['Type'] != 'Credit']
check_hist = bal_hist[bal_hist['Type'] == 'Credit'][:'1971-01-01']
pivoted = check_hist.pivot_table(index=check_hist.index, columns='Account', values='Balance')
plt.figure()
for col in pivoted.columns:
this_col = pivoted[col].dropna()
this_col.rolling(window=30).median().plot.line()
txns = pd.DataFrame(vals['Transactions'][1:], columns=vals['Transactions'][0])
txns.set_index('Date', inplace=True)
txns = pd.concat([txns.pop('Amount').str.replace('$', '').str.replace(',','').astype(float), txns], axis=1)
amount_analysis = txns[['Amount', 'Category']].reset_index()
amount_analysis = amount_analysis[['Amount', 'Category']]
amount_cats = pd.get_dummies(amount_analysis.pop('Category'))
amount_cats
amount_analysis = pd.concat([amount_analysis, amount_cats], axis=1)
amount_analysis.pop('')
amount_corr = amount_analysis.corr(method='spearman').apply(lambda x: abs(x))
amounts = txns.reset_index()['Amount'].abs()
dates = txns.reset_index()['Date'].astype('datetime64').dt.day
anal = pd.concat([amounts,dates], axis=1)
import seaborn as sb
sb.heatmap(anal)
| jupyter/homebudget2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploring Raw Data
#
# Here are just some very simple examples of going through and inspecting the raw data, and making some plots using `ctapipe`.
# The data explored here are *raw Monte Carlo* data, which is Data Level "R0" in CTA terminology (e.g. it is before any processing that would happen inside a Camera or off-line)
# Setup:
from ctapipe.utils import get_dataset_path
from ctapipe.io import event_source, EventSeeker
from ctapipe.visualization import CameraDisplay
from ctapipe.instrument import CameraGeometry
from matplotlib import pyplot as plt
from astropy import units as u
# %matplotlib inline
# To read SimTelArray format data, ctapipe uses the `pyeventio` library (which is installed automatically along with ctapipe). The following lines however will load any data known to ctapipe (multiple `EventSources` are implemented, and chosen automatically based on the type of the input file).
#
# All data access first starts with an `EventSource`, and here we use a helper function `event_source` that constructs one. The resulting `source` object can be iterated over like a list of events. We also here use an `EventSeeker` which provides random-access to the source (by seeking to the given event ID or number)
source = event_source(get_dataset_path("gamma_test_large.simtel.gz"), max_events=100, back_seekable=True)
seeker = EventSeeker(source)
# ## Explore the contents of an event
#
# note that the R0 level is the raw data that comes out of a camera, and also the lowest level of monte-carlo data.
event = seeker[0] # get first event
event
# the event is just a class with a bunch of data items in it. You can see a more compact representation via:
print(repr(event.r0))
# printing the event structure, will currently print the value all items under it (so you get a lot of output if you print a high-level container):
print(event.mc)
print(event.r0.tels_with_data)
# note that the event has 2 telescopes in it: 38,40... Let's try the next one:
event = seeker[1] # get the next event
print(event.r0.tels_with_data)
# now, we have a larger event with many telescopes... Let's look at the data from **CT7**:
teldata = event.r0.tel[7]
print(teldata)
teldata
# Note that some values are unit quantities (`astropy.units.Quantity`) or angular quantities (`astropy.coordinates.Angle`), and you can easily manipulate them:
event.mc.energy
event.mc.energy.to('GeV')
event.mc.energy.to('J')
event.mc.alt
print("Altitude in degrees:", event.mc.alt.deg)
# ## Look for signal pixels in a camera
# again, `event.r0.tel[x]` contains a data structure for the telescope data, with some fields like `waveform`.
#
# Let's make a 2D plot of the sample data (sample vs pixel), so we can see which pixels contain Cherenkov light signals:
plt.pcolormesh(teldata.waveform[0]) # note the [0] is for channel 0
plt.colorbar()
plt.xlabel("sample number")
plt.ylabel("Pixel_id")
# Let's zoom in to see if we can identify the pixels that have the Cherenkov signal in them
plt.pcolormesh(teldata.waveform[0])
plt.colorbar()
plt.ylim(260,290)
plt.xlabel("sample number")
plt.ylabel("pixel_id")
print("waveform[0] is an array of shape (N_pix,N_slice) =",teldata.waveform[0].shape)
# Now we can really see that some pixels have a signal in them!
#
# Let's look at a 1D plot of pixel 270 in channel 0 and see the signal:
trace = teldata.waveform[0][270]
plt.plot(trace, ls='steps')
# Great! It looks like a *standard Cherenkov signal*!
#
# Let's take a look at several traces to see if the peaks are aligned:
for pix_id in [269,270,271,272,273,274,275,276]:
plt.plot(teldata.waveform[0][pix_id], label="pix {}".format(pix_id), ls='steps')
plt.legend()
#
# ## Look at the time trace from a Camera Pixel
#
# `ctapipe.calib.camera` includes classes for doing automatic trace integration with many methods, but before using that, let's just try to do something simple!
#
# Let's define the integration windows first:
# By eye, sample 8 to 13 seems reasonable for the signal window, and 20 to 29 for the pedestal (which we define as the sum of all noise: NSB + electronic)
for pix_id in [269,270,271,272,273,274,275,276]:
plt.plot(teldata.waveform[0][pix_id],'+-')
plt.fill_betweenx([0,1200],20,29,color='red',alpha=0.3, label='Ped window')
plt.fill_betweenx([0,1200],8,13,color='green',alpha=0.3, label='Signal window')
plt.legend()
# ## Do a very simplistic trace analysis
# Now, let's calculate a signal and background in the fixed windows we defined for this single event. Note we are ignoring the fact that cameras have 2 gains, and just using a single gain (channel 0, which is the high-gain channel):
data = teldata.waveform[0]
peds = data[:, 20:29].mean(axis=1)  # mean over the pedestal window (samples 20 to 28) for all pixels
sums = data[:, 8:13].sum(axis=1)/(13-8)  # average over the signal window (samples 8 to 12)
phist = plt.hist(peds, bins=50, range=[0,150])
plt.title("Pedestal Distribution of all pixels for a single event")
# let's now take a look at the pedestal-subtracted sums, which give us the signal estimate:
#
plt.plot(sums - peds)
plt.xlabel("pixel id")
plt.ylabel("Pedestal-subtracted Signal")
# Now, we can clearly see that the signal is centered at 0 where there is no Cherenkov light, and we can also clearly see the shower around pixel 250.
# we can also subtract the pedestals from the traces themselves, which would be needed to compare peaks properly
for ii in range(270,280):
plt.plot(data[ii] - peds[ii], ls='steps', label="pix{}".format(ii))
plt.legend()
# ## Camera Displays
#
# It's of course much easier to see the signal if we plot it in 2D with correct pixel positions!
#
# >note: the instrument data model is not fully implemented, so there is not a good way to load all the camera information (right now it is hacked into the `inst` sub-container that is read from the Monte-Carlo file)
camgeom = event.inst.subarray.tel[24].camera
title="CT24, run {} event {} ped-sub".format(event.r0.obs_id,event.r0.event_id)
disp = CameraDisplay(camgeom,title=title)
disp.image = sums - peds
disp.cmap = plt.cm.RdBu_r
disp.add_colorbar()
disp.set_limits_percent(95) # autoscale
# It looks like a nice signal! We have plotted our pedestal-subtracted trace integral, and see the shower clearly!
#
# Let's look at all telescopes:
#
# > note we plot here the raw signal, since we have not calculated the pedestals for each telescope
for tel in event.r0.tels_with_data:
plt.figure()
camgeom = event.inst.subarray.tel[tel].camera
title="CT{}, run {} event {}".format(tel,event.r0.obs_id,event.r0.event_id)
disp = CameraDisplay(camgeom,title=title)
disp.image = event.r0.tel[tel].waveform[0].sum(axis=1)
disp.cmap = plt.cm.RdBu_r
disp.add_colorbar()
disp.set_limits_percent(95)
# ## some signal processing...
#
# Let's try to detect the peak using the scipy.signal package:
# http://docs.scipy.org/doc/scipy/reference/signal.html
from scipy import signal
import numpy as np
# +
pix_ids = np.arange(len(data))
has_signal = sums > 300
widths = np.array([8,]) # peak widths to search for (let's fix it at 8 samples, about the width of the peak)
peaks = [signal.find_peaks_cwt(trace,widths) for trace in data[has_signal] ]
for p,s in zip(pix_ids[has_signal],peaks):
print("pix{} has peaks at sample {}".format(p,s))
plt.plot(data[p], ls='steps-mid')
plt.scatter(np.array(s),data[p,s])
# -
# clearly the signal needs to be filtered first, or an appropriate wavelet used, but the idea is nice
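# As a quick illustration of the filtering idea, here is a minimal sketch on a
# *synthetic* trace (a made-up Gaussian pulse plus noise, not ctapipe data):
# smoothing with a simple moving average before locating the peak already
# suppresses spurious noise maxima.

```python
import numpy as np

# Build a synthetic trace: a Gaussian pulse at sample 12 on top of white noise.
rng = np.random.default_rng(1)
t = np.arange(50)
trace = 100 * np.exp(-0.5 * ((t - 12) / 3) ** 2) + rng.normal(0, 10, size=t.size)

# Smooth with a 5-sample box filter, then locate the peak on the smoothed trace.
kernel = np.ones(5) / 5
smoothed = np.convolve(trace, kernel, mode="same")
print(int(np.argmax(smoothed)))  # lands near sample 12
```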
| docs/tutorials/raw_data_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/evitts1989/DS-Unit-2-Linear-Models/blob/master/Corey_Evitts_DS_Sprint_Challenge_5(2).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="VZf2akBaMjq8"
# _Lambda School Data Science, Unit 2_
#
# # Linear Models Sprint Challenge
#
# To demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.
#
# To earn a score of "3", also do all the stretch goals.
#
# You are permitted and encouraged to do as much data exploration as you want.
# + [markdown] colab_type="text" id="20OITf58NLQh"
# ### Part 1, Classification
# - 1.1. Do train/test split. Arrange data into X features matrix and y target vector
# - 1.2. Use scikit-learn to fit a logistic regression model
# - 1.3. Report classification metric: accuracy
#
# ### Part 2, Regression
# - 2.1. Begin with baselines for regression
# - 2.2. Do train/validate/test split
# - 2.3. Arrange data into X features matrix and y target vector
# - 2.4. Do one-hot encoding
# - 2.5. Use scikit-learn to fit a linear regression or ridge regression model
# - 2.6. Report validation MAE and $R^2$
#
# ### Stretch Goals, Regression
# - Make at least 2 visualizations to explore relationships between features and target. You may use any visualization library
# - Try at least 3 feature combinations. You may select features manually, or automatically
# - Report validation MAE and $R^2$ for each feature combination you try
# - Report test MAE and $R^2$ for your final model
# - Print or plot the coefficients for the features in your model
# + colab_type="code" id="BxoFSeX5OX5k" outputId="6b0b1d5c-c0ad-418f-f831-60e0cc59c3e5" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# If you're in Colab...
import sys
if 'google.colab' in sys.modules:
# !pip install category_encoders==2.*
# !pip install pandas-profiling==2.*
# !pip install plotly==4.*
# + [markdown] colab_type="text" id="Q7u1KtsnOi78"
# # Part 1, Classification: Predict Blood Donations 🚑
# Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.
#
# The goal is to predict whether the donor made a donation in March 2007, using information about each donor's history.
#
# Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need.
# + colab_type="code" id="gJzpgv-fO4rh" outputId="47c66ef0-bc7f-4160-d6a5-79daca0647d7" colab={"base_uri": "https://localhost:8080/", "height": 439}
import pandas as pd
donors = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
assert donors.shape == (748,5)
donors = donors.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
donors
# + [markdown] colab_type="text" id="oU4oE0LJMG7X"
# Notice that the majority class (did not donate blood in March 2007) occurs about 3/4 of the time.
#
# This is the accuracy score for the "majority class baseline" (the accuracy score we'd get by just guessing the majority class every time).
# + colab_type="code" id="TgRp5slvLzJs" outputId="32c68ebe-beaa-4be1-f5d8-8b03c9719ce6" colab={"base_uri": "https://localhost:8080/", "height": 68}
donors['made_donation_in_march_2007'].value_counts(normalize=True)
# + [markdown] colab_type="text" id="P66Fpcq1PYZl"
# ## 1.1. Do train/test split. Arrange data into X features matrix and y target vector
#
# Do these steps in either order.
#
# Use scikit-learn's train/test split function to split randomly. (You can include 75% of the data in the train set, and hold out 25% for the test set, which is the default.)
# + colab_type="code" id="InhicZeZPX8L" outputId="c05d1efb-84f5-48c0-fb65-e448dd9e97c2" colab={"base_uri": "https://localhost:8080/", "height": 221}
import numpy as np
from sklearn.model_selection import train_test_split
X, y = donors.drop(['made_donation_in_march_2007'], axis=1), donors['made_donation_in_march_2007']
y
# + id="RyMedwnMe5rh" colab_type="code" colab={}
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# + id="OnMu6XxLe9Oc" colab_type="code" outputId="4f5022c1-66b9-4a11-9789-9d3ca51a4014" colab={"base_uri": "https://localhost:8080/", "height": 419}
X_train
# + id="5BzJKbTXe9RM" colab_type="code" outputId="c7139a3f-c521-4d92-fc87-12c1dbd8f145" colab={"base_uri": "https://localhost:8080/", "height": 419}
X_test
# + [markdown] colab_type="text" id="ln9fqAghRmQT"
# ## 1.2. Use scikit-learn to fit a logistic regression model
#
# You may use any number of features
# + colab_type="code" id="a2jf_deRRl64" outputId="b43e31ed-f284-48ad-fe29-bb6a454b589a" colab={"base_uri": "https://localhost:8080/", "height": 170}
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
y_pred
# + [markdown] colab_type="text" id="Ah6EhiRVSusy"
# ## 1.3. Report classification metric: accuracy
#
# What is your model's accuracy on the test set?
#
# Don't worry if your model doesn't beat the majority class baseline. That's okay!
#
# _"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."_ —[<NAME>](https://en.wikiquote.org/wiki/John_Tukey)
#
# (Also, if we used recall score instead of accuracy score, then your model would almost certainly beat the baseline. We'll discuss how to choose and interpret evaluation metrics throughout this unit.)
#
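# As a toy illustration (made-up labels, not the transfusion data): the recall
# of the positive class for a baseline that always predicts the majority class
# is zero by construction, so any model that finds some true positives beats it.

```python
# Made-up labels: the baseline always predicts 0 (the majority class).
y_true = [0, 0, 0, 1, 1]
baseline_pred = [0, 0, 0, 0, 0]

def recall(y_true, y_pred, positive=1):
    """Recall = true positives / actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual_pos = sum(1 for t in y_true if t == positive)
    return tp / actual_pos if actual_pos else 0.0

print(recall(y_true, baseline_pred))  # 0.0 for the majority baseline
```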
# + colab_type="code" id="ZfJ2NFsASt9_" outputId="fd739a91-7a83-4661-a998-077efbc17a9b" colab={"base_uri": "https://localhost:8080/", "height": 34}
score = model.score(X_test, y_test)
print(score)
# + id="N7D7ZQV3htJg" colab_type="code" colab={}
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
# + id="bc1G0HvfhtM8" colab_type="code" outputId="c894318f-cbf3-4afc-b077-c889ec53ebc3" colab={"base_uri": "https://localhost:8080/", "height": 51}
cm = metrics.confusion_matrix(y_test, y_pred)
print(cm)
# + id="G0v2x8_EhtPK" colab_type="code" outputId="a0520f9c-4277-42f1-a288-47612ee59a93" colab={"base_uri": "https://localhost:8080/", "height": 520}
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
# + [markdown] colab_type="text" id="xDmZn3ApOM7t"
# # Part 2, Regression: Predict home prices in Ames, Iowa 🏠
#
# You'll use historical housing data. ***There's a data dictionary at the bottom of the notebook.***
#
# Run this code cell to load the dataset:
#
#
#
#
# + id="xuk7v_pqoKNs" colab_type="code" colab={}
import pandas as pd
import sys
URL = 'https://drive.google.com/uc?export=download&id=1522WlEW6HFss36roD_Cd9nybqSuiVcCK'
homes = pd.read_csv(URL)
assert homes.shape == (2904, 47)
# + id="JJjra1y4AvYV" colab_type="code" colab={}
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.metrics import accuracy_score
# + id="sj8EdHhquyda" colab_type="code" outputId="1954b5d1-b6a1-4de3-b8a2-95b4f252c823" colab={"base_uri": "https://localhost:8080/", "height": 439}
homes
# + [markdown] id="G_PJ1d1foWLe" colab_type="text"
# ## 2.1. Begin with baselines
#
# What is the Mean Absolute Error and R^2 score for a mean baseline? (You can get these estimated scores using all your data, before splitting it.)
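# As a sanity check with toy numbers (not the housing data): a predict-the-mean
# baseline has an R^2 of exactly 0, since the mean is the reference model in the
# R^2 definition, and its MAE is the mean absolute deviation from the mean.

```python
import numpy as np

# Toy target values (made up for illustration).
y = np.array([100.0, 150.0, 200.0, 350.0])
baseline = np.full_like(y, y.mean())        # predict the mean everywhere

mae = np.mean(np.abs(y - baseline))         # mean absolute deviation
ss_res = np.sum((y - baseline) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                    # exactly 0 for the mean baseline
print(mae, r2)
```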
# + id="e2pTco_RMbWb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d53a0fa5-5830-478f-8249-2624e5fd5b41"
target = homes['Overall_Qual']
majority_value = target.mode()[0]
majority_value
# + id="Rb5f5x7DMWVG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fef9e950-2101-4c39-d514-3dac5fea15a6"
from sklearn.metrics import accuracy_score
y_pred = [majority_value] * len(homes)
train_acc = accuracy_score(target, y_pred)
print(f'The training majority baseline is {train_acc*100:.02f}%')
# + id="vaXX83G6Lq3K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 156} outputId="79f71c8b-5351-4852-b35f-854b77c656ba"
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
import statistics
y_true = homes.SalePrice
y_pred = [homes.SalePrice.mean()] * len(homes)  # predict the overall mean for every row
print(y_true.head())
print(y_pred[:10])
# + id="SMa6IYntLw3T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="47da5837-a29e-4a4b-fc48-d8688adbd277"
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print('Mean absolute error:', mae)
print('R^2 score:', r2)
# + [markdown] colab_type="text" id="MIZt9ZctLQmf"
# ## 2.2. Do train/validate/test split
#
# Train on houses sold in the years 2006 - 2008. (1,920 rows)
#
# Validate on houses sold in 2009. (644 rows)
#
# Test on houses sold in 2010. (340 rows)
# + id="5hzMWYUBL1tm" colab_type="code" colab={}
mask = homes[(homes['Yr_Sold'] >= 2006) & (homes['Yr_Sold'] <= 2008)]
X_train = mask.drop(columns='SalePrice')
y_train = mask['SalePrice']
mask = homes[homes['Yr_Sold'] == 2009]
X_val = mask.drop(columns='SalePrice')
y_val = mask['SalePrice']
mask = homes[homes['Yr_Sold'] == 2010]
X_test = mask.drop(columns='SalePrice')
y_test = mask['SalePrice']
assert(len(X_train) == 1920)
assert(len(X_val) == 644)
assert(len(X_test) == 340)
# + id="twAbqkaaMmA7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="969e47a8-781e-4288-f37c-88fd230e8fbf"
train = homes[homes['Yr_Sold'] == 2006]
train = pd.concat([train, homes[homes['Yr_Sold'] == 2007]])
train = pd.concat([train, homes[homes['Yr_Sold'] == 2008]])
test = homes[homes['Yr_Sold'] == 2010]
val = homes[homes['Yr_Sold'] == 2009]
print(train.shape)
print(test.shape)
print(val.shape)
# + [markdown] id="1Oc2X-rLoY-1" colab_type="text"
# ## 2.3. Arrange data into X features matrix and y target vector
#
# Select at least one numeric feature and at least one categorical feature.
#
# Otherwise, you may choose whichever features and however many you want.
# + id="jpHlj-XrL7Sa" colab_type="code" colab={}
# some of the features that had high correlation with sale price from the profile report
numeric_features = ['1st_Flr_SF', 'Full_Bath', 'Gr_Liv_Area', 'Overall_Qual',
'TotRms_AbvGrd', 'Year_Built', 'Year_Remod/Add', 'Yr_Sold']
# + [markdown] id="6ysrjgQzolMX" colab_type="text"
# ## 2.4. Do one-hot encoding
#
# Encode your categorical feature(s).
# + id="t_2_tCu5L-zT" colab_type="code" colab={}
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
categorical_features = ['Foundation']
features = categorical_features + numeric_features
X_train_subset = X_train[features]
X_val_subset = X_val[features]
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
# + id="Ji_KIPMpM0TP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f8c24f74-52b8-438b-cf45-e23c867ca2d8"
y_train.shape
# + id="lZb8BO48M15M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="43e59ccf-8848-48cd-c2fe-e6e6f90363be"
X_train.shape
# + [markdown] id="gid8YdXnolO5" colab_type="text"
# ## 2.5. Use scikit-learn to fit a linear regression or ridge regression model
# Fit your model.
# + id="hElAhL5YMC8V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="473485b7-5d1f-4d72-8300-322c9a85752a"
from sklearn.linear_model import LinearRegression
model = LinearRegression(n_jobs = -1).fit(X_train_scaled, y_train)
print('Validation R^2:', model.score(X_val_scaled, y_val))  # LinearRegression.score returns R^2, not accuracy
# + [markdown] id="tfTWV7M8oqJH" colab_type="text"
# ## 2.6. Report validation MAE and $R^2$
#
# What is your model's Mean Absolute Error and $R^2$ score on the validation set? (You are not graded on how high or low your validation scores are.)
# + id="-xvH0K9_MHPo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ee4dfe8e-53ac-4023-c8a2-1bd3de58d94b"
print('Mean absolute error:', mean_absolute_error(y_val, model.predict(X_val_scaled)))
print('R^2 score:', r2_score(y_val, model.predict(X_val_scaled)))
# + [markdown] colab_type="text" id="PdkjBN1Dy_-A"
# # Data Dictionary
#
# Here's a description of the data fields:
#
# ```
# 1st_Flr_SF: First Floor square feet
#
# Bedroom_AbvGr: Bedrooms above grade (does NOT include basement bedrooms)
#
# Bldg_Type: Type of dwelling
#
# 1Fam Single-family Detached
# 2FmCon Two-family Conversion; originally built as one-family dwelling
# Duplx Duplex
# TwnhsE Townhouse End Unit
# TwnhsI Townhouse Inside Unit
#
# Bsmt_Half_Bath: Basement half bathrooms
#
# Bsmt_Full_Bath: Basement full bathrooms
#
# Central_Air: Central air conditioning
#
# N No
# Y Yes
#
# Condition_1: Proximity to various conditions
#
# Artery Adjacent to arterial street
# Feedr Adjacent to feeder street
# Norm Normal
# RRNn Within 200' of North-South Railroad
# RRAn Adjacent to North-South Railroad
# PosN Near positive off-site feature--park, greenbelt, etc.
# PosA Adjacent to postive off-site feature
# RRNe Within 200' of East-West Railroad
# RRAe Adjacent to East-West Railroad
#
# Condition_2: Proximity to various conditions (if more than one is present)
#
# Artery Adjacent to arterial street
# Feedr Adjacent to feeder street
# Norm Normal
# RRNn Within 200' of North-South Railroad
# RRAn Adjacent to North-South Railroad
# PosN Near positive off-site feature--park, greenbelt, etc.
# PosA Adjacent to postive off-site feature
# RRNe Within 200' of East-West Railroad
# RRAe Adjacent to East-West Railroad
#
# Electrical: Electrical system
#
# SBrkr Standard Circuit Breakers & Romex
# FuseA Fuse Box over 60 AMP and all Romex wiring (Average)
# FuseF 60 AMP Fuse Box and mostly Romex wiring (Fair)
# FuseP 60 AMP Fuse Box and mostly knob & tube wiring (poor)
# Mix Mixed
#
# Exter_Cond: Evaluates the present condition of the material on the exterior
#
# Ex Excellent
# Gd Good
# TA Average/Typical
# Fa Fair
# Po Poor
#
# Exter_Qual: Evaluates the quality of the material on the exterior
#
# Ex Excellent
# Gd Good
# TA Average/Typical
# Fa Fair
# Po Poor
#
# Exterior_1st: Exterior covering on house
#
# AsbShng Asbestos Shingles
# AsphShn Asphalt Shingles
# BrkComm Brick Common
# BrkFace Brick Face
# CBlock Cinder Block
# CemntBd Cement Board
# HdBoard Hard Board
# ImStucc Imitation Stucco
# MetalSd Metal Siding
# Other Other
# Plywood Plywood
# PreCast PreCast
# Stone Stone
# Stucco Stucco
# VinylSd Vinyl Siding
# Wd Sdng Wood Siding
# WdShing Wood Shingles
#
# Exterior_2nd: Exterior covering on house (if more than one material)
#
# AsbShng Asbestos Shingles
# AsphShn Asphalt Shingles
# BrkComm Brick Common
# BrkFace Brick Face
# CBlock Cinder Block
# CemntBd Cement Board
# HdBoard Hard Board
# ImStucc Imitation Stucco
# MetalSd Metal Siding
# Other Other
# Plywood Plywood
# PreCast PreCast
# Stone Stone
# Stucco Stucco
# VinylSd Vinyl Siding
# Wd Sdng Wood Siding
# WdShing Wood Shingles
#
# Foundation: Type of foundation
#
# BrkTil Brick & Tile
# CBlock Cinder Block
# PConc Poured Contrete
# Slab Slab
# Stone Stone
# Wood Wood
#
# Full_Bath: Full bathrooms above grade
#
# Functional: Home functionality (Assume typical unless deductions are warranted)
#
# Typ Typical Functionality
# Min1 Minor Deductions 1
# Min2 Minor Deductions 2
# Mod Moderate Deductions
# Maj1 Major Deductions 1
# Maj2 Major Deductions 2
# Sev Severely Damaged
# Sal Salvage only
#
# Gr_Liv_Area: Above grade (ground) living area square feet
#
# Half_Bath: Half baths above grade
#
# Heating: Type of heating
#
# Floor Floor Furnace
# GasA Gas forced warm air furnace
# GasW Gas hot water or steam heat
# Grav Gravity furnace
# OthW Hot water or steam heat other than gas
# Wall Wall furnace
#
# Heating_QC: Heating quality and condition
#
# Ex Excellent
# Gd Good
# TA Average/Typical
# Fa Fair
# Po Poor
#
# House_Style: Style of dwelling
#
# 1Story One story
# 1.5Fin One and one-half story: 2nd level finished
# 1.5Unf One and one-half story: 2nd level unfinished
# 2Story Two story
# 2.5Fin Two and one-half story: 2nd level finished
# 2.5Unf Two and one-half story: 2nd level unfinished
# SFoyer Split Foyer
# SLvl Split Level
#
# Kitchen_AbvGr: Kitchens above grade
#
# Kitchen_Qual: Kitchen quality
#
# Ex Excellent
# Gd Good
# TA Typical/Average
# Fa Fair
# Po Poor
#
# LandContour: Flatness of the property
#
# Lvl Near Flat/Level
# Bnk Banked - Quick and significant rise from street grade to building
# HLS Hillside - Significant slope from side to side
# Low Depression
#
# Land_Slope: Slope of property
#
# Gtl Gentle slope
# Mod Moderate Slope
# Sev Severe Slope
#
# Lot_Area: Lot size in square feet
#
# Lot_Config: Lot configuration
#
# Inside Inside lot
# Corner Corner lot
# CulDSac Cul-de-sac
# FR2 Frontage on 2 sides of property
# FR3 Frontage on 3 sides of property
#
# Lot_Shape: General shape of property
#
# Reg Regular
# IR1 Slightly irregular
# IR2 Moderately Irregular
# IR3 Irregular
#
# MS_SubClass: Identifies the type of dwelling involved in the sale.
#
# 20 1-STORY 1946 & NEWER ALL STYLES
# 30 1-STORY 1945 & OLDER
# 40 1-STORY W/FINISHED ATTIC ALL AGES
# 45 1-1/2 STORY - UNFINISHED ALL AGES
# 50 1-1/2 STORY FINISHED ALL AGES
# 60 2-STORY 1946 & NEWER
# 70 2-STORY 1945 & OLDER
# 75 2-1/2 STORY ALL AGES
# 80 SPLIT OR MULTI-LEVEL
# 85 SPLIT FOYER
# 90 DUPLEX - ALL STYLES AND AGES
# 120 1-STORY PUD (Planned Unit Development) - 1946 & NEWER
# 150 1-1/2 STORY PUD - ALL AGES
# 160 2-STORY PUD - 1946 & NEWER
# 180 PUD - MULTILEVEL - INCL SPLIT LEV/FOYER
# 190 2 FAMILY CONVERSION - ALL STYLES AND AGES
#
# MS_Zoning: Identifies the general zoning classification of the sale.
#
# A Agriculture
# C Commercial
# FV Floating Village Residential
# I Industrial
# RH Residential High Density
# RL Residential Low Density
# RP Residential Low Density Park
# RM Residential Medium Density
#
# Mas_Vnr_Type: Masonry veneer type
#
# BrkCmn Brick Common
# BrkFace Brick Face
# CBlock Cinder Block
# None None
# Stone Stone
#
# Mo_Sold: Month Sold (MM)
#
# Neighborhood: Physical locations within Ames city limits
#
# Blmngtn Bloomington Heights
# Blueste Bluestem
# BrDale Briardale
# BrkSide Brookside
# ClearCr Clear Creek
# CollgCr College Creek
# Crawfor Crawford
# Edwards Edwards
# Gilbert Gilbert
# IDOTRR Iowa DOT and Rail Road
# MeadowV Meadow Village
# Mitchel Mitchell
# Names North Ames
# NoRidge Northridge
# NPkVill Northpark Villa
# NridgHt Northridge Heights
# NWAmes Northwest Ames
# OldTown Old Town
# SWISU South & West of Iowa State University
# Sawyer Sawyer
# SawyerW Sawyer West
# Somerst Somerset
# StoneBr Stone Brook
# Timber Timberland
# Veenker Veenker
#
# Overall_Cond: Rates the overall condition of the house
#
# 10 Very Excellent
# 9 Excellent
# 8 Very Good
# 7 Good
# 6 Above Average
# 5 Average
# 4 Below Average
# 3 Fair
# 2 Poor
# 1 Very Poor
# ```
# + id="TS_-t-HWVV-t" colab_type="code" colab={}
| Corey_Evitts_DS_Sprint_Challenge_5(2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import glob, os
# +
def read_data_shield(fn, reindex=False):
data = pd.read_csv(fn, index_col=0, parse_dates=True)
if reindex:
        # pd.date_range replaces the DatetimeIndex(start=..., end=...) constructor
        # that was removed from pandas
        data.index = pd.date_range(start=data.index[0],
                                   end=data.index[-1],
                                   periods=len(data))
return data
folder = r'd:\git\elevation-barometry\data\closed_room'
fns = glob.glob(os.path.join(folder, '*.csv'))
fns.sort()
raw_data = [read_data_shield(fn, reindex=True) for fn in fns]
data_start = raw_data[0].index[0]
index_re = raw_data[0].index
data_re = []
var = raw_data[0].keys()[-1]
for data in raw_data:
diff = data.index[0]-data_start
data.index = data.index-diff
data_re.append(data[var].reindex(index_re, method='nearest', tolerance=pd.Timedelta(100, 'ms')))
ax = plt.subplot(111)
pressure = pd.concat(data_re, axis=1, keys=['sensor 1 [Pa]', 'sensor 2 [Pa]', 'sensor 3 [Pa]', 'sensor 4 [Pa]'])
pressure[1000:].plot(ax=ax, legend=False)
ax.set_ylabel('Pressure [Pa]')
# plt.savefig('Pressure_ts.png', dpi=300, bbox_inches='tight')
# data1 = raw_data[0][var][1000:].values
# data2 = raw_data[1][var][1000:].values
# print(len(data1), len(data2))
# diff = data1[:len(data2)]-data2
# diff.std()
# data = read_data_shield(fn, reindex=True)
# data[var][1000:].plot()
# plt.plot(data[var][0:100].values) #.plot()
# -
temp = raw_data[0][raw_data[0].keys()[0]]
pres = raw_data[0][raw_data[0].keys()[1]]
plt.plot(temp, pres, '.')
# +
# EXPERIMENT!
# show constantness of bias
# pd.plotting.scatter_matrix(pressure, alpha = 0.3, figsize = (14,8), diagonal = 'kde')
# plt.savefig('oven_scatter.png', dpi=300, bbox_inches='tight')
# show autocorrelation diagram
# +
# perform cross-over variance test, comparison with theoretical error
import numpy as np
from itertools import combinations
start = 1000
end = 46000
wins = np.arange(1, 500)
cc = list(combinations(pressure.columns, 2))
labels = ['sensor {:s}-{:s}'.format(key1.strip(' [Pa]').strip('sensor '), key2.strip(' [Pa]').strip('sensor ')) for key1, key2 in cc]
labels
cc
std = [[(pressure[key1][-end:]-pressure[key2][-end:]).rolling(window=win).mean().std() for win in wins] for key1, key2 in cc]
# .rename(columns=lambda x: x + 1)
# std = _.transpose().rename(lambda x: x + 1)
# also make an idealized white noise reduction with central limit assumption
white_noise_1 = [std[n][0:1]/np.sqrt(wins) for n in range(len(std))]
# glue together
# axs = []
plt.figure(figsize=(9, 4))
ax = plt.subplot(1, 1, 1)
for n, (std_, white_noise, label) in enumerate(zip(std, white_noise_1, labels)):
# ax = plt.subplot(len(std)/2, 2, n + 1)
# axs.append(ax)
if n == 0:
label = 'sensor error'
else:
label=None
l1 = ax.plot(np.arange(len(std_)), std_, color='grey', marker='.', linewidth=0., label=label)
l2 = ax.plot(np.arange(len(std_)), white_noise, label='theoretical error')
# ax.add_xlabel('Window size')
# ax.add_ylabel('$\sigma$ [Pa]')
ax.legend()
plt.xlabel('window size [0.1 sec]')
plt.ylabel(r'$\sigma$ [Pa]')  # raw string avoids the invalid-escape warning for \s
plt.savefig('error.png', dpi=300, bbox_inches='tight')
# plt.title('Impact of averaging on variance')
# ax = plt.subplot(212)
# ax.plot(np.arange(len(std[1])), std[1], marker='.', label='estimated mean')
# ax.plot(np.arange(len(std[1])), white_noise_1[1], label='central limit mean')
# plt.xlabel('Window size')
# plt.ylabel('$\mu$ [Pa]')
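# A quick synthetic check of the "theoretical error" curve above (made-up white
# noise, not the sensor data): averaging N independent samples shrinks the
# standard deviation by about 1/sqrt(N), as the central-limit assumption predicts.

```python
import numpy as np

# Generate seeded unit-variance white noise and average it in windows of `win`.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=200_000)
win = 25
means = noise[: len(noise) // win * win].reshape(-1, win).mean(axis=1)

# The std of the window means should be close to 1/sqrt(win) = 0.2.
print(means.std(), 1 / np.sqrt(win))
```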
# +
start=2000
bar_const = 8485.921788
diffs = [(pressure[key1][start:]-pressure[key2][start:]).rolling(window=1500).mean() -(pressure[key1][start:end]-pressure[key2][start:end]).mean() for key1, key2 in cc] #
diffs_z = []
for key1, key2 in cc:
pressure1 = pressure[key1]
pressure2 = pressure[key2]
bias = (pressure1[start:end] - pressure2[start:end]).mean()
    # add bias to pressure 2 as a new Series, so the shared `pressure`
    # DataFrame is not modified in place across loop iterations
    pressure2 = pressure2 + bias
# now compute dz
dz = (100*bar_const*np.log(pressure1[start:]/pressure2[start:])).rolling(window=1500).mean()
diffs_z.append(dz)
f = plt.figure(figsize=(12, 3))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
for diff, diff_z in zip(diffs, diffs_z):
diff.plot(ax=ax1, alpha=0.3, color='k')
diff_z.plot(ax=ax2, alpha=0.3, color='k')
ax1.set_ylabel('Pressure difference [Pa]')
ax2.set_ylabel('Elevation difference [cm]')
ax1.set_xlabel('time [DD-MM HH]')
ax2.set_xlabel('time [DD-MM HH]')
plt.savefig('Drift.png', dpi=300, bbox_inches='tight')
# -
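# A small sanity check of the hypsometric relation used above, with made-up
# pressures (`bar_const` is the scale-height constant from the notebook, in
# metres; the factor 100 converts the elevation difference to centimetres):

```python
import numpy as np

# dz [cm] = 100 * bar_const * ln(p1 / p2)
bar_const = 8485.921788           # scale height in metres (value from the notebook)
p1, p2 = 101325.0, 101313.0       # made-up pressures roughly 1 m apart [Pa]
dz_cm = 100 * bar_const * np.log(p1 / p2)
print(round(dz_cm, 1))            # on the order of 100 cm
```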
len(diffs_z)
| python/Oven_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Interpolation ([scarlet.interpolation](interpolation.ipynb))
#
# The [interpolation](interpolation.ipynb) module contains the methods needed to interpolate and "resample" images onto different pixel grids. These methods are used for multi-resolution deblending as well as any priors or constraints that take place in any space outside of the model scene.
# + raw_mimetype="text/restructuredtext" active=""
# Reference/API
# -------------
# .. automodule:: scarlet.interpolation
# :members:
| docs/interpolation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # importing the MLBox
from mlbox.preprocessing import *
from mlbox.optimisation import *
from mlbox.prediction import *
paths = ["train.csv","test.csv"]
target_name = "SalePrice"
# # reading and cleaning all files
rd = Reader(sep = ',')
df = rd.train_test_split(paths, target_name)
dft = Drift_thresholder()
df = dft.fit_transform(df)
# # tuning
import numpy as np  # the custom scorer below uses np, which the wildcard imports may not provide
mape = make_scorer(lambda y_true, y_pred: 100*np.sum(np.abs(y_true-y_pred)/y_true)/len(y_true), greater_is_better=False, needs_proba=False)
opt = Optimiser(scoring = mape, n_folds = 3)
opt.evaluate(None, df)
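# A toy check (made-up numbers) of the MAPE formula passed to `make_scorer`
# above: mean absolute percentage error = 100 * mean(|y_true - y_pred| / y_true).

```python
import numpy as np

# Hypothetical targets and predictions, just to exercise the formula.
y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 180.0, 400.0])

# Same expression as the lambda used for `scoring` in the notebook.
mape_value = 100 * np.sum(np.abs(y_true - y_pred) / y_true) / len(y_true)
print(mape_value)  # errors of 10%, 10%, 0% average to about 6.67
```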
# +
space = {
'ne__numerical_strategy':{"search":"choice",
"space":[0]},
'ce__strategy':{"search":"choice",
"space":["label_encoding","random_projection", "entity_embedding"]},
'fs__threshold':{"search":"uniform",
"space":[0.01,0.3]},
'est__max_depth':{"search":"choice",
"space":[3,4,5,6,7]}
}
best = opt.optimise(space, df,15)
# -
prd = Predictor()
prd.fit_predict(best, df)
| examples/regression/example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bead count
#
# This module will demonstrate how to count the beads in the cluster images:
#
# - Load cluster images.
# - Convert image to binary.
# - Scale image up to increase resolution.
# - Dilate the image to reduce the chance of close local maxima during watershedding.
# - Convert image to set.
# - Dilate image by factor x.
# - For all foreground pixels find connected pixels as new set with flood fill algorithm.
# - Get boundary boxes.
# - Extract subimages.
# - Write subimages to disk.
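# The counting idea above (flood fill to connected regions, then bounding boxes) can be sketched with `scipy.ndimage`; the custom `oiplib` helpers used below are not shown here, so this is an illustrative sketch rather than the module's actual implementation:

```python
import numpy as np
from scipy import ndimage

# Hypothetical tiny binary image with two separate "beads".
binary = np.zeros((8, 8), dtype=bool)
binary[1:3, 1:3] = True   # first bead
binary[5:7, 4:7] = True   # second bead

# Label connected foreground regions (the flood-fill step).
labels, num_beads = ndimage.label(binary)
print(num_beads)  # → 2

# Bounding boxes of each labelled region, as slices into the image.
boxes = ndimage.find_objects(labels)
print(boxes[0])   # slices delimiting the first bead
```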
# +
import math
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import skimage as ski
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.morphology import binary_erosion
from skimage import data, color
from skimage.transform import rescale, hough_circle, hough_circle_peaks
from skimage.filters import scharr
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte
import modules.oiplib as oiplib
gray2Binary = oiplib.gray2Binary
# -
# Load all clusters.
clusters = oiplib.loadImages("../images/clusters")
# +
# Determine bead count for all clusters.
beadCounts = {}
for cluster in clusters:
labelImg = oiplib.labelRegionWatershed(cluster)
labels = np.unique(labelImg)
beadCount = len(labels) - 1
if beadCounts.get(beadCount) is None:
beadCounts[beadCount] = 1
else:
beadCounts[beadCount] += 1
# +
# General histogram variables.
maxBeadCount = max(beadCounts.keys())
maxOccurrenceCount = max(beadCounts.values())
xAxis = np.arange(1, maxBeadCount + 1)
yAxis = np.arange(0, math.ceil(maxOccurrenceCount / 5) + 1) * 5
yHist = np.zeros(maxBeadCount)
yHistCum = np.zeros(maxBeadCount)
# Create histogram.
for key, value in beadCounts.items():
yHist[key - 1] = value
fig, ax = plt.subplots(figsize=(10, 10))
plot = ax.bar(xAxis, yHist)
ax.grid()
ax.set_axisbelow(True)
ax.set_title("Histogram of clusters per bead count")
ax.set_xlabel("Bead count")
ax.set_ylabel("Clusters with bead count")
ax.set_xticks(xAxis);
ax.set_yticks(yAxis);
| src/bead_count.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Files and Databases
# The idea of this workshop is to manipulate files (read, parse and write them) and to do the same with structured databases.
# ## Exercise 1
#
# Download the "All associations with added ontology annotations" file from the GWAS Catalog.
# + https://www.ebi.ac.uk/gwas/docs/file-downloads
#
# Describe the columns of the file (_what information are we looking at? What is it for? Why was it created?_)
# +
import pandas as pd
DF = pd.read_csv('../data/alternative.tsv', sep='\t')
DF
# -
# Which entities (tables) can you define?
#
# - Intermediate entities
# - Entity-relationship models
# - Foreign keys (the lines that connect entities)
# - How to insert data into MySQL from Python
#
#
# +
DF['Berri1'].plot() # plot easier
# -
# Create the database (copy the SQL code that was used)
# ## Exercise 2
#
# Read the file and store the information in the database, in the tables defined in __Exercise 1__.
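# A minimal sketch of the flow for Exercises 2-4 with `sqlite3` and pandas. The table and column names here are hypothetical placeholders for illustration, not the real GWAS schema:

```python
import sqlite3
import pandas as pd

# Hypothetical miniature stand-in for the GWAS associations table.
df = pd.DataFrame({'gene': ['TCF7L2', 'FTO'],
                   'disease': ['type 2 diabetes', 'obesity']})

conn = sqlite3.connect(':memory:')  # use a file path for a persistent DB
df.to_sql('associations', conn, index=False)  # Exercise 2: load into a table

# Exercise 3: which genes are related to which diseases?
result = pd.read_sql_query('SELECT gene, disease FROM associations', conn)

result.to_csv('query_result.csv', index=False)  # Exercise 4
conn.close()
```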
# ## Exercise 3
#
# Run a query against the database that answers a biological question
# (e.g. which genes are related to which diseases)
# ## Exercise 4
#
# Save the result of the previous query to a CSV file
| Camilo/.ipynb_checkpoints/Taller 2 - Archivos y Bases de Datos-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Constructing machine learning potential
# First of all, we need a dataset for fitting. A good example
# is [this one](https://archive.materialscloud.org/record/2020.110):
# +
# downloading dataset from https://archive.materialscloud.org/record/2020.110
# !wget "https://archive.materialscloud.org/record/file?file_id=b612d8e3-58af-4374-96ba-b3551ac5d2f4&filename=methane.extxyz.gz&record_id=528" -O methane.extxyz.gz
# !gunzip -k methane.extxyz.gz
# -
import numpy as np
import ase.io
import tqdm
from nice.blocks import *
from nice.utilities import *
from matplotlib import pyplot as plt
from sklearn.linear_model import BayesianRidge
# In the following cell, the parameters that control the subsequent calculations are defined, along with the hyperparameters of the model.
#
#
# The total number of structures in the methane dataset is huge, so it is a good idea to select a smaller subset of structures to speed up the calculations.
#
#
# Two out of the three steps of NICE require data to be fitted. In the PCA step, atomic environments are used to determine the matrix of a linear transformation that preserves the largest amount of information for **this particular dataset**. In the purifiers, the eliminated correlations are also dataset specific. However, it is absolutely not necessary
# to use the same amount of data to fit the NICE transformer and the subsequent machine learning model. Typically, the NICE transformer requires less data to be fitted. In addition, the fitting process requires a noticeable amount of RAM, so it is a good idea to restrict the amount of data for this step, which is controlled by the **environments_for_fitting** variable.
#
# **grid** defines the set of numbers of training configurations for which the error will be estimated, in order to get an idea of how good the model is depending on the number of training configurations.
# (yep, the NICE transformer uses more data for fitting than is used for regression at the first few points, but it is just a tutorial)
#
# In HYPERS dictionary parameters for initial spherical expansion are defined. For more detail we refer the reader to [librascal](https://github.com/cosmo-epfl/librascal) documentation.
# +
HARTREE_TO_EV = 27.211386245988
train_subset = "0:10000" #input for ase.io.read command
test_subset = "10000:15000" #input to ase.io.read command
environments_for_fitting = 1000 #number of environments to fit nice transfomers
grid = [150, 200, 350, 500, 750, 1000, 1500, 2000, 3000, 5000, 7500,
10000] #for learning curve
#HYPERS for librascal spherical expansion coefficients
HYPERS = {
'interaction_cutoff': 6.3,
'max_radial': 5,
'max_angular': 5,
'gaussian_sigma_type': 'Constant',
'gaussian_sigma_constant': 0.05,
'cutoff_smooth_width': 0.3,
'radial_basis': 'GTO'
}
# -
# This cell is the most important one: our model is defined here. As mentioned before,
# NICE is a sequence of standard transformations, where each one increases the body order by 1.
# The classes which implement this logic are **StandardBlock** and **StandardSequence**.
#
# **StandardSequence** consists of 1) an **initial scaler**, 2) an **initial pca**, and 3) a sequence of **standard blocks**.
#
# Let's imagine a uniform multiplication of the spherical expansion coefficients by some constant k. In this case, covariants of a given body order would be multiplied by k^(body order). In other words, the relative scale of different body orders would change. This might affect the subsequent regression, so it is a good idea to fix the scale in some proper way. This is done by the **initial scaler**. It has two modes - "**signal integral**" and "**variance**". In the first mode, it scales the coefficients so as to make the integral of the corresponding squared signal over the ball equal to one. In the second, it ensures that the variance of the coefficients' entries is one. In practice, the first mode gives better results. The second parameter of this class controls whether the coefficients are scaled individually, i.e. separately for each environment, or globally, thus preserving information about the scale of the signals relative to each other.
#
# **Initial pca** is the same PCA which is applied in each block; more details about it later. It is the first transformation applied to the coefficients after the **initial scaler**.
#
# As already mentioned in the theory, each block consists of two branches - one for covariants and one for invariants. Each branch consists of expansion, purification and PCA steps. During the expansion step in each block, features of the next body order are produced by a Clebsch-Gordan iteration between the features from the previous block and the spherical expansion coefficients after **initial_pca**. In the case of a full expansion (each with each), the number of features after the transformation would be incredibly huge, as already discussed in the theory. Thus, a thresholding heuristic is used. For each feature, a measure of how important it is is stored during the calculations. In a standard sequence these importances are just the explained variance ratios after the PCA step. During the expansion, a "pair importance" is defined for each pair of covariant vectors as the product of the previously discussed single-feature **importances**, and only a fixed number of the most important input pairs produce output. This fixed number is controlled by the **num_expand** parameter. If it is not specified (or set to **None**), a full expansion (each with each) will be performed.
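# The thresholding heuristic can be sketched in a few lines of numpy (the importance values below are made up purely for illustration):

```python
import numpy as np

# Hypothetical importances (e.g. explained variance ratios) of features
# from the previous block and of the spherical expansion coefficients.
imp_prev = np.array([0.5, 0.3, 0.2])
imp_coef = np.array([0.6, 0.4])

# "Pair importance" of every candidate product feature.
pair_imp = np.outer(imp_prev, imp_coef).ravel()

# Keep only the num_expand most important pairs for the expansion.
num_expand = 3
top = np.argsort(pair_imp)[::-1][:num_expand]
print(pair_imp[top])  # importances of the pairs that actually get expanded
```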
#
# The nature of the purifier step was discussed in the theory. The parameter **max_take** controls the number of features from previous body orders to use for purification. (Features are always stored in descending order of importance, and the first ones are used first.) If **max_take** is not specified (**None**), all available features are used.
# One additional parameter is the linear regressor to use. For example
#
# from sklearn.linear_model import Ridge<br>
# CovariantsPurifierBoth(regressor = Ridge(alpha = 42, fit_intercept = False), max_take = 10)
#
# or
#
# from sklearn.linear_model import Lars<br>
# InvariantsPurifier(regressor = Lars(n_nonzero_coefs = 7), max_take = 10)
#
# The default is Ridge(alpha = 1e-12), without fitting an intercept for the covariants purifier and with fitting an intercept for the invariants purifier.
#
# ***Important!*** always set **fit_intercept = False** on the regressor in the covariants purifier, since otherwise the resulting features would not be covariants - a vector with constant entries is not covariant. (This is not checked automatically, since the corresponding parameter might have a name other than "fit_intercept".)
#
# Custom regressors can be fed into purifiers; more details in the tutorial "Custom regressors into purifiers".
#
# The parameter of the PCA specifies the number of output features. If it is not specified (None), a full PCA is performed.
#
# **Both** in the class names refers to the fact that the transformations are applied simultaneously to even and odd features (more details in the tutorials "Calculating covariants" (what are even and odd features?) and "Constructor or non standard sequence" (classes that work without this separation?)).
#
# **Individual** in **IndividualLambdaPCAsBoth** refers to the fact that the transformations are independent for each lambda channel.
#
# Since we are interested only in invariants, it is not necessary for the last block to calculate covariants. Thus, the corresponding branch is filled with Nones.
#
# In this example, the parameters of the covariant and invariant branches (such as **num_expand** in the expansioners) are not dramatically different, but in real-life calculations they usually differ dramatically (see the examples folder).
#
#our model:
def get_nice():
return StandardSequence([
StandardBlock(ThresholdExpansioner(num_expand=150),
CovariantsPurifierBoth(max_take=10),
IndividualLambdaPCAsBoth(n_components=50),
ThresholdExpansioner(num_expand=300, mode='invariants'),
InvariantsPurifier(max_take=50),
InvariantsPCA(n_components=200)),
StandardBlock(ThresholdExpansioner(num_expand=150),
CovariantsPurifierBoth(max_take=10),
IndividualLambdaPCAsBoth(n_components=50),
ThresholdExpansioner(num_expand=300, mode='invariants'),
InvariantsPurifier(max_take=50),
InvariantsPCA(n_components=200)),
StandardBlock(None, None, None,
ThresholdExpansioner(num_expand=300, mode='invariants'),
InvariantsPurifier(max_take=50),
InvariantsPCA(n_components=200))
],
initial_scaler=InitialScaler(
mode='signal integral', individually=True))
# It is not necessary to always fill all the transformation steps. For example, the following block is valid:
#
# <pre>
# StandardBlock(ThresholdExpansioner(num_expand = 150),
# None,
# IndividualLambdaPCAsBoth(n_components = 50),
# ThresholdExpansioner(num_expand =300, mode = 'invariants'),
# InvariantsPurifier(max_take = 50),
# None)
# </pre>
#
# In this case, the purifying step in the covariants branch and the PCA step in the invariants branch would be omitted. The covariants and invariants branches are independent. In case of invalid combinations, such as
#
# <pre>
# StandardBlock(None,
# None,
# IndividualLambdaPCAsBoth(n_components = 50),
# ...)
# </pre>
#
# it would raise a ValueError with a description of the problem during initialization.
#
# All intermediate blocks must compute covariants. A block is considered to be computing covariants if it contains a covariant expansioner and a covariant PCA. The latter is required since the expansioners in subsequent blocks require not only the covariants themselves, but also their **importances** for thresholding.
# In this cell we read the structures, get a set of all the species in the dataset, and calculate the spherical expansion.
#
# **all_species** is a numpy array with ints, where 1 is H, 2 is He and so on.
#
# **coefficients** is the dictionary where the keys are central species, 1 and 6 in our case, and entries are numpy arrays shaped in the **[environment_index, radial/specie index, l, m]** way.
# +
train_structures = ase.io.read('methane.extxyz', index=train_subset)
test_structures = ase.io.read('methane.extxyz', index=test_subset)
all_species = get_all_species(train_structures + test_structures)
print("all species: ", all_species)
train_coefficients = get_spherical_expansion(train_structures, HYPERS,
all_species)
test_coefficients = get_spherical_expansion(test_structures, HYPERS,
all_species)
# -
# We are going to fit two NICE transformers, one on environments centered on H atoms and one on environments centered on C atoms.
# The following cells create them and perform the fitting:
#individual nice transformers for each atomic specie in the dataset
nice = {}
for key in train_coefficients.keys():
nice[key] = get_nice()
for key in train_coefficients.keys():
nice[key].fit(train_coefficients[key][:environments_for_fitting])
# It is not necessary to fit a different NICE transformer for each central species; see for example the qm9 examples in the examples folder
# Let's calculate the representations:
# +
train_features = {}
for specie in all_species:
train_features[specie] = nice[specie].transform(
train_coefficients[specie], return_only_invariants=True)
test_features = {}
for specie in all_species:
test_features[specie] = nice[specie].transform(test_coefficients[specie],
return_only_invariants=True)
# -
# The result is a nested dictionary. The first level keys are central species, and the inner level keys are body orders. Inside are **numpy arrays** with shapes **[environment_index, invariant_index]**:
#
# In this case, the number of training structures is 10k, and each structure contains 4 H atoms. Thus, the total number of H-centered environments is 40k.
for key in train_features[1].keys():
print("{} : {}".format(key, train_features[1][key].shape))
# Now we need to prepare for the subsequent linear regression. As already discussed in the theory, energy is an extensive property, and thus it is given as a sum of atomic contributions.
# Each atomic contribution depends on 1) the central species and 2) the environment. Thus, if each atomic contribution is given by a linear combination of the previously calculated NICE features, the structural features should have the following form: for each structure, the feature set is the concatenation of the representations for each species, where the representation for each species is the sum of the NICE representations over the atoms of that species in the structure.
#
# In our case, the representation of each environment has size 200 + 200 + 200 + 10 = 610, and we have two atomic species - H and C. Thus, the shape of the structural features should be **[number_of_structures, 610 * 2 = 1220]**:
train_features = make_structural_features(train_features, train_structures,
all_species)
test_features = make_structural_features(test_features, test_structures,
all_species)
print(train_features.shape)
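# What **make_structural_features** effectively computes can be sketched on toy arrays (the sizes here are made up for illustration):

```python
import numpy as np

# Hypothetical per-environment invariants for one structure:
# 3 H environments and 1 C environment, 4 features each.
h_feat = np.ones((3, 4))       # H-centered NICE features
c_feat = 2 * np.ones((1, 4))   # C-centered NICE features

# Sum the environments per species, then concatenate over species.
structural = np.concatenate([h_feat.sum(axis=0), c_feat.sum(axis=0)])
print(structural.shape)  # → (8,): 4 features per species x 2 species
```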
# Energies are a part of the dataset we previously downloaded:
# +
train_energies = [structure.info['energy'] for structure in train_structures]
train_energies = np.array(train_energies) * HARTREE_TO_EV
test_energies = [structure.info['energy'] for structure in test_structures]
test_energies = np.array(test_energies) * HARTREE_TO_EV
# -
# And the last step is to do the linear regression and plot the learning curve.
# +
def get_rmse(first, second):
return np.sqrt(np.mean((first - second)**2))
def get_standard_deviation(values):
return np.sqrt(np.mean((values - np.mean(values))**2))
def get_relative_performance(predictions, values):
return get_rmse(predictions, values) / get_standard_deviation(values)
def estimate_performance(regressor, data_train, data_test, targets_train,
targets_test):
regressor.fit(data_train, targets_train)
return get_relative_performance(regressor.predict(data_test), targets_test)
# -
errors = []
for el in tqdm.tqdm(grid):
errors.append(
estimate_performance(BayesianRidge(), train_features[:el],
test_features, train_energies[:el],
test_energies))
# In this smallest setup, the best RMSE turns out to be about 7%:
print(errors)
# The learning curve looks like this:
from matplotlib import pyplot as plt
plt.plot(grid, errors, 'bo')
plt.plot(grid, errors, 'b')
plt.xlabel("number of structures")
plt.ylabel("relative error")
plt.xscale('log')
plt.yscale('log')
plt.show()
# The pipeline in this tutorial was designed to explain all the intermediate steps, but it has one drawback - at some point, all atomic representations, along with all intermediate covariants for the whole dataset, are explicitly stored in RAM, which might become a bottleneck for big calculations. Indeed, only the structural invariant features are eventually needed, and their size is much smaller than the size of all atomic representations, especially if the dataset consists of large molecules. Thus, it is a good idea to calculate the structural features in small blocks and to discard the atomic representations of each block immediately. For this purpose there is the function **nice.utilities.transform_sequentially**. Its **block_size** parameter controls the size of each chunk. The larger the chunk, the more RAM is required for the calculations. On the other hand, for very small chunks the slow Python loops over lambda channels, with a separate Python class invoked for each lambda channel, might become a bottleneck (all the other indices are handled either by numpy vectorization or by cython loops). The other reason for a slowdown is multiprocessing. Thus, the transformation time per single environment decreases monotonically with **block_size** and eventually saturates. The default value of the **block_size** parameter should be fine in most cases.
#
# A full example can be found in examples/methane_home_pc or in examples/qm9_home_pc. Apart from that and the absence of markdown comments, these notebooks are almost identical to this tutorial (in qm9 a single NICE transformer is used for all central species). Thus, we recommend picking one of them as a code snippet.
| tutorials/constructing_machine_learning_potential.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Think Bayes
#
# This notebook presents code and exercises from Think Bayes, second edition.
#
# Copyright 2018 <NAME>
#
# MIT License: https://opensource.org/licenses/MIT
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
# -
# ### The height problem
#
# For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Suppose you learn that someone is 170 cm tall. What is the probability that they are male?
#
# Run this analysis again for a range of observed heights from 150 cm to 200 cm, and plot a curve that shows P(male) versus height. What is the mathematical form of this function?
# To represent the likelihood functions, I'll use `norm` from `scipy.stats`, which returns a "frozen" random variable (RV) that represents a normal distribution with given parameters.
#
# +
from scipy.stats import norm
dist_height = dict(male=norm(178, 7.7),
female=norm(163, 7.3))
# -
# Write a class that implements `Likelihood` using the frozen distributions. Here's starter code:
class Height(Suite):
def Likelihood(self, data, hypo):
"""
data: height in cm
hypo: 'male' or 'female'
"""
return 1
# +
# Solution
class Height(Suite):
def Likelihood(self, data, hypo):
"""
data: height in cm
hypo: 'male' or 'female'
"""
height = data
return dist_height[hypo].pdf(height)
# -
# Here's the prior.
suite = Height(['male', 'female'])
for hypo, prob in suite.Items():
print(hypo, prob)
# And the update:
suite.Update(170)
for hypo, prob in suite.Items():
print(hypo, prob)
# Compute the probability of being male as a function of height, for a range of values between 150 and 200.
# +
# Solution
def prob_male(height):
suite = Height(['male', 'female'])
suite.Update(height)
return suite['male']
# +
# Solution
heights = np.linspace(130, 210)
series = pd.Series(index=heights)
for height in heights:
series[height] = prob_male(height)
# +
# Solution
thinkplot.plot(series)
thinkplot.decorate(xlabel='Height (cm)',
ylabel='Probability of being male')
# -
# If you are curious, you can derive the mathematical form of this curve from the PDF of the normal distribution.
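# A sketch of that derivation, assuming equal prior probabilities for the two classes (as in the uniform prior above): by Bayes' rule,
#
# $$
# P(\text{male} \mid h) = \frac{\phi_M(h)}{\phi_M(h) + \phi_F(h)}
# = \frac{1}{1 + \exp\big(\log \phi_F(h) - \log \phi_M(h)\big)}
# $$
#
# where $\phi_M$ and $\phi_F$ are the normal PDFs defined above. Since the log of a normal PDF is quadratic in $h$, the exponent is a quadratic function of $h$; if the two standard deviations were equal, it would be linear in $h$ and the curve would be exactly a logistic (sigmoid) function of height.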
# ### How tall is A?
#
# Suppose I choose two residents of the U.S. at random. A is taller than B. How tall is A?
#
# What if I tell you that A is taller than B by more than 5 cm? How tall is A?
#
# For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm.
# Here are distributions that represent the heights of men and women in the U.S.
dist_height = dict(male=norm(178, 7.7),
female=norm(163, 7.3))
hs = np.linspace(130, 210)
ps = dist_height['male'].pdf(hs)
male_height_pmf = Pmf(dict(zip(hs, ps)));
ps = dist_height['female'].pdf(hs)
female_height_pmf = Pmf(dict(zip(hs, ps)));
# +
thinkplot.Pdf(male_height_pmf, label='Male')
thinkplot.Pdf(female_height_pmf, label='Female')
thinkplot.decorate(xlabel='Height (cm)',
ylabel='PMF',
title='Adult residents of the U.S.')
# -
# Use `thinkbayes2.MakeMixture` to make a `Pmf` that represents the height of all residents of the U.S.
# +
# Solution
from thinkbayes2 import MakeMixture
metapmf = Pmf([male_height_pmf, female_height_pmf])
mix = MakeMixture(metapmf)
mix.Mean()
# +
# Solution
thinkplot.Pdf(mix)
thinkplot.decorate(xlabel='Height (cm)',
ylabel='PMF',
title='Adult residents of the U.S.')
# -
# Write a class that inherits from Suite and Joint, and provides a Likelihood function that computes the probability of the data under a given hypothesis.
# +
# Solution
class Heights(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: lower bound on the height difference
hypo: h1, h2
"""
h1, h2 = hypo
return 1 if h1 - h2 > data else 0
# -
# Write a function that initializes your `Suite` with an appropriate prior.
# +
# Solution
# We could also use MakeJoint for this
def make_prior(A, B):
suite = Heights()
for h1, p1 in A.Items():
for h2, p2 in B.Items():
suite[h1, h2] = p1 * p2
return suite
# -
suite = make_prior(mix, mix)
suite.Total()
thinkplot.Contour(suite)
thinkplot.decorate(xlabel='B Height (cm)',
ylabel='A Height (cm)',
title='Posterior joint distribution')
# Update your `Suite`, then plot the joint distribution and the marginal distribution, and compute the posterior means for `A` and `B`.
# +
# Solution
suite.Update(0)
# +
# Solution
thinkplot.Contour(suite)
thinkplot.decorate(xlabel='B Height (cm)',
ylabel='A Height (cm)',
title='Posterior joint distribution')
# +
# Solution
posterior_a = suite.Marginal(0)
posterior_b = suite.Marginal(1)
thinkplot.Pdf(posterior_a, label='A')
thinkplot.Pdf(posterior_b, label='B')
thinkplot.decorate(xlabel='Height (cm)',
ylabel='PMF',
title='Posterior marginal distributions')
posterior_a.Mean(), posterior_b.Mean()
# +
# Solution
# Here's one more run of the whole thing, with a margin of 5 cm
suite = make_prior(mix, mix)
suite.Update(5)
posterior_a = suite.Marginal(0)
posterior_b = suite.Marginal(1)
posterior_a.Mean(), posterior_b.Mean()
# -
# ### Second tallest problem
#
# In a room of 10 randomly chosen U.S. residents, A is the second tallest. How tall is A? What is the probability that A is male?
# +
# Solution
# The prior for A and B is the mixture we computed above.
A = mix
B = mix;
# +
# Solution
def faceoff(player1, player2, data):
"""Compute the posterior distributions for both players.
player1: Pmf
player2: Pmf
data: margin by which player1 beats player2
"""
joint = make_prior(player1, player2)
joint.Update(data)
return joint.Marginal(0), joint.Marginal(1)
# +
# Solution
# We can think of the scenario as a sequence of "faceoffs"
# where A wins 8 and loses 1
for i in range(8):
A, _ = faceoff(A, B, 0)
_, A = faceoff(B, A, 0);
# +
# Solution
# Here's the posterior distribution for A
thinkplot.Pdf(A)
A.Mean()
# +
# Solution
# Now we can compute the total probability of being male,
# conditioned on the posterior distribution of height.
total = 0
for h, p in A.Items():
total += p * prob_male(h)
total
# +
# Solution
# Here's a second solution based on an "annotated" mix that keeps
# track of M and F
annotated_mix = Suite()
for h, p in male_height_pmf.Items():
annotated_mix['M', h] = p * 0.49
for h, p in female_height_pmf.Items():
annotated_mix['F', h] = p * 0.51
annotated_mix.Total()
# +
# Solution
# Here's an updated Heights class that can handle the
# annotated mix
class Heights2(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: who is taller, A or B
hypo: (MF1, h1), (MF2, h2)
"""
(_, hA), (_, hB) = hypo
if data == 'A':
return 1 if hA > hB else 0
if data == 'B':
return 1 if hB > hA else 0
# +
# Solution
# Everything else is pretty much the same
from thinkbayes2 import MakeJoint
def faceoff(player1, player2, data):
joint = Heights2(MakeJoint(player1, player2))
joint.Update(data)
return joint.Marginal(0), joint.Marginal(1)
# +
# Solution
A = annotated_mix
B = annotated_mix;
# +
# Solution
for i in range(8):
A, _ = faceoff(A, B, 'A')
A, _ = faceoff(A, B, 'B');
# +
# Solution
# Now the posterior distribution for A contains the
# probability of being male
A_male = Joint(A).Marginal(0)
# +
# Solution
# The posterior distribution for A also contains the
# posterior probability of height
A_height = Joint(A).Marginal(1)
thinkplot.Pdf(A_height)
A_height.Mean()
# +
# The two solutions are different by a little.
# Because the second problem completely enumerates
# the space of hypotheses, I am more confident
# that it is correct.
# The first solution is, I believe, an approximation
# that works pretty well in this case because the
# dependency it ignores is small.
# -
| solutions/.ipynb_checkpoints/height_soln-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 1
#
# The goal of this assignment is to supply you with machine learning models and algorithms. In this notebook, we will cover linear and nonlinear models, the concept of loss functions and some optimization techniques. All mathematical operations should be implemented in **NumPy** only.
#
#
# ## Table of contents
# * [1. Logistic Regression](#1.-Logistic-Regression)
# * [1.1 Linear Mapping](#1.1-Linear-Mapping)
# * [1.2 Sigmoid](#1.2-Sigmoid)
# * [1.3 Negative Log Likelihood](#1.3-Negative-Log-Likelihood)
# * [1.4 Model](#1.4-Model)
# * [1.5 Simple Experiment](#1.5-Simple-Experiment)
# * [2. Decision Tree](#2.-Decision-Tree)
# * [2.1 Gini Index & Data Split](#2.1-Gini-Index-&-Data-Split)
# * [2.2 Terminal Node](#2.2-Terminal-Node)
# * [2.3 Build the Decision Tree](#2.3-Build-the-Decision-Tree)
# * [3. Experiments](#3.-Experiments)
# * [3.1 Decision Tree for Heart Disease Prediction](#3.1-Decision-Tree-for-Heart-Disease-Prediction)
# * [3.2 Logistic Regression for Heart Disease Prediction](#3.2-Logistic-Regression-for-Heart-Disease-Prediction)
#
# ### Note
# Some of the concepts below have not (yet) been discussed during the lecture. These will be discussed further during the next lectures.
# ### Before you begin
#
# To check whether the code you've written is correct, we'll use **automark**. For this, we created for each of you an account with the username being your student number.
# +
import automark as am
# fill in your student number as your username
username = 'Your Username'
# to check your progress, you can run this function
am.get_progress(username)
# -
# So far all your tests are 'not attempted'. At the end of this notebook you'll need to have completed all tests. The output of `am.get_progress(username)` should at least match the example below. However, we encourage you to take a shot at the 'not attempted' tests!
#
# ```
# ---------------------------------------------
# | Your name / student number |
# | <EMAIL> |
# ---------------------------------------------
# | linear_forward | not attempted |
# | linear_grad_W | not attempted |
# | linear_grad_b | not attempted |
# | nll_forward | not attempted |
# | nll_grad_input | not attempted |
# | sigmoid_forward | not attempted |
# | sigmoid_grad_input | not attempted |
# | tree_data_split_left | not attempted |
# | tree_data_split_right | not attempted |
# | tree_gini_index | not attempted |
# | tree_to_terminal | not attempted |
# ---------------------------------------------
# ```
from __future__ import print_function, absolute_import, division # You don't need to know what this is.
import numpy as np # this imports numpy, which is used for vector- and matrix calculations
# This notebook makes use of **classes** and their **instances** that we have already implemented for you. It allows us to write less code and make it more readable. If you are interested in it, here are some useful links:
# * The official [documentation](https://docs.python.org/3/tutorial/classes.html)
# * Video by *sentdex*: [Object Oriented Programming Introduction](https://www.youtube.com/watch?v=ekA6hvk-8H8)
# * Antipatterns in OOP: [Stop Writing Classes](https://www.youtube.com/watch?v=o9pEzgHorH0)
# # 1. Logistic Regression
#
# We start with a very simple algorithm called **Logistic Regression**. It is a generalized linear model for 2-class classification.
# It can be generalized to the case of many classes and to non-linear cases as well. However, here we consider only the simplest case.
#
# Let us consider a dataset with 2 classes: class 0 and class 1. For a given test sample, logistic regression returns a value from $[0, 1]$ which is interpreted as the probability of belonging to class 1. The set of points for which the prediction is $0.5$ is called the *decision boundary*. It is a line in a plane or a hyperplane in a space.
#
# 
# Logistic regression has two trainable parameters: a weight $W$ and a bias $b$. For a vector of features $X$, the prediction of logistic regression is given by
#
# $$
# f(X) = \frac{1}{1 + \exp(-[XW + b])} = \sigma(h(X))
# $$
# where $\sigma(z) = \frac{1}{1 + \exp(-z)}$ and $h(X)=XW + b$.
#
# Parameters $W$ and $b$ are fitted by maximizing the log-likelihood (or minimizing the negative log-likelihood) of the model on the training data. For a training subset $\{X_j, Y_j\}_{j=1}^N$ the normalized negative log likelihood (NLL) is given by
#
# $$
# \mathcal{L} = -\frac{1}{N}\sum_j \log\Big[ f(X_j)^{Y_j} \cdot (1-f(X_j))^{1-Y_j}\Big]
# = -\frac{1}{N}\sum_j \Big[ Y_j\log f(X_j) + (1-Y_j)\log(1-f(X_j))\Big]
# $$
# There are different ways of fitting this model. In this assignment we consider Logistic Regression as a one-layer neural network. We use the following algorithm for the **forward** pass:
#
# 1. Linear mapping: $h=XW + b$
# 2. Sigmoid activation function: $f=\sigma(h)$
# 3. Calculation of NLL: $\mathcal{L} = -\frac{1}{N}\sum_j \Big[ Y_j\log f_j + (1-Y_j)\log(1-f_j)\Big]$
# In order to fit $W$ and $b$ we perform Gradient Descent ([GD](https://en.wikipedia.org/wiki/Gradient_descent)). We choose a small learning rate $\gamma$, and after each forward pass we update the parameters
#
# $$W_{\text{new}} = W_{\text{old}} - \gamma \frac{\partial \mathcal{L}}{\partial W}$$
#
# $$b_{\text{new}} = b_{\text{old}} - \gamma \frac{\partial \mathcal{L}}{\partial b}$$
#
# We use the backpropagation method ([BP](https://en.wikipedia.org/wiki/Backpropagation)) to calculate the partial derivatives of the loss function with respect to the parameters of the model.
#
# $$
# \frac{\partial\mathcal{L}}{\partial W} =
# \frac{\partial\mathcal{L}}{\partial h} \frac{\partial h}{\partial W} =
# \frac{\partial\mathcal{L}}{\partial f} \frac{\partial f}{\partial h} \frac{\partial h}{\partial W}
# $$
#
# $$
# \frac{\partial\mathcal{L}}{\partial b} =
# \frac{\partial\mathcal{L}}{\partial h} \frac{\partial h}{\partial b} =
# \frac{\partial\mathcal{L}}{\partial f} \frac{\partial f}{\partial h} \frac{\partial h}{\partial b}
# $$
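# A standard way to verify backpropagation formulas like the ones above is a numerical gradient check with central finite differences. The sketch below is ours and independent of the graded functions (the helper name `numerical_grad` is an assumption, not part of the assignment); it checks the analytic gradient of a simple quadratic loss.

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    """Central finite-difference estimate of df/dx, element by element."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f(x)
        x[idx] = orig - eps
        f_minus = f(x)
        x[idx] = orig  # restore the perturbed entry
        grad[idx] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad

# Toy check: L(W) = sum((XW)^2) has the analytic gradient 2 X^T X W
X = np.array([[1.0, -1.0], [2.0, 0.5]])
W = np.array([[0.3], [-0.7]])
analytic = 2 * X.T @ (X @ W)
numeric = numerical_grad(lambda w: np.sum((X @ w) ** 2), W)
print(np.max(np.abs(analytic - numeric)))
```

# If your gradient implementations below disagree with such a numerical estimate by more than roughly `1e-5`, there is most likely a bug.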
# ## 1.1 Linear Mapping
# First of all, you need to implement the forward pass of a linear mapping:
# $$
# h(X) = XW +b
# $$
# **Note**: here we use `n_out` as the dimensionality of the output. For logistic regression `n_out = 1`. However, we will work with cases of `n_out > 1` in the next assignments. You will **pass** the current assignment even if your implementation works only for `n_out = 1`. If your implementation also works for `n_out > 1`, you will not have to modify your method next week. All **numpy** operations are generic, so it is recommended to use numpy whenever possible.
def linear_forward(x_input, W, b):
"""Perform the mapping of the input
# Arguments
x_input: input of the linear function - np.array of size `(n_objects, n_in)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the output of the linear function
np.array of size `(n_objects, n_out)`
"""
#################
### YOUR CODE ###
#################
return output
# Let's check your first function. We set the matrices $X, W, b$:
# $$
# X = \begin{bmatrix}
# 1 & -1 \\
# -1 & 0 \\
# 1 & 1 \\
# \end{bmatrix} \quad
# W = \begin{bmatrix}
# 4 \\
# 2 \\
# \end{bmatrix} \quad
# b = \begin{bmatrix}
# 3 \\
# \end{bmatrix}
# $$
#
# And then compute
# $$
# XW = \begin{bmatrix}
# 1 & -1 \\
# -1 & 0 \\
# 1 & 1 \\
# \end{bmatrix}
# \begin{bmatrix}
# 4 \\
# 2 \\
# \end{bmatrix} =
# \begin{bmatrix}
# 2 \\
# -4 \\
# 6 \\
# \end{bmatrix} \\
# XW + b =
# \begin{bmatrix}
# 5 \\
# -1 \\
# 9 \\
# \end{bmatrix}
# $$
# +
X_test = np.array([[1, -1],
[-1, 0],
[1, 1]])
W_test = np.array([[4],
[2]])
b_test = np.array([3])
h_test = linear_forward(X_test, W_test, b_test)
print(h_test)
# -
am.test_student_function(username, linear_forward, ['x_input', 'W', 'b'])
# Now you need to implement the calculation of the partial derivatives of the loss function with respect to the parameters of the model. As these expressions are used for the updates of the parameters, we refer to them as gradients.
# $$
# \frac{\partial \mathcal{L}}{\partial W} =
# \frac{\partial \mathcal{L}}{\partial h}
# \frac{\partial h}{\partial W} \\
# \frac{\partial \mathcal{L}}{\partial b} =
# \frac{\partial \mathcal{L}}{\partial h}
# \frac{\partial h}{\partial b} \\
# $$
def linear_grad_W(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to W parameter of the function
dL / dW = (dL / dh) * (dh / dW)
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
            respect to the output of the dense layer (dL / dh)
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss
with respect to W parameter of the function
np.array of size `(n_in, n_out)`
"""
#################
### YOUR CODE ###
#################
return grad_W
am.test_student_function(username, linear_grad_W, ['x_input', 'grad_output', 'W', 'b'])
def linear_grad_b(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to b parameter of the function
dL / db = (dL / dh) * (dh / db)
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
            respect to the output of the linear function (dL / dh)
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss
with respect to b parameter of the linear function
np.array of size `(n_out,)`
"""
#################
### YOUR CODE ###
#################
return grad_b
am.test_student_function(username, linear_grad_b, ['x_input', 'grad_output', 'W', 'b'])
am.get_progress(username)
# ## 1.2 Sigmoid
# $$
# f = \sigma(h) = \frac{1}{1 + e^{-h}}
# $$
#
# The sigmoid function is applied element-wise. It does not change the dimensionality of the tensor, and its implementation is, in general, shape-agnostic.
def sigmoid_forward(x_input):
"""sigmoid nonlinearity
# Arguments
x_input: np.array of size `(n_objects, n_in)`
# Output
        the output of the sigmoid function
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return output
am.test_student_function(username, sigmoid_forward, ['x_input'])
# Now you need to implement the calculation of the partial derivative of the loss function with respect to the input of sigmoid.
#
# $$
# \frac{\partial \mathcal{L}}{\partial h} =
# \frac{\partial \mathcal{L}}{\partial f}
# \frac{\partial f}{\partial h}
# $$
#
# Tensor $\frac{\partial \mathcal{L}}{\partial f}$ comes from the loss function. Let's calculate $\frac{\partial f}{\partial h}$
#
# $$
# \frac{\partial f}{\partial h} =
# \frac{\partial \sigma(h)}{\partial h} =
# \frac{\partial}{\partial h} \Big(\frac{1}{1 + e^{-h}}\Big)
# = \frac{e^{-h}}{(1 + e^{-h})^2}
# = \frac{1}{1 + e^{-h}} \frac{e^{-h}}{1 + e^{-h}}
# = f(h) (1 - f(h))
# $$
#
# Therefore, in order to calculate the gradient of the loss with respect to the input of sigmoid function you need
# to
# 1. calculate $f(h) (1 - f(h))$
# 2. multiply it element-wise by $\frac{\partial \mathcal{L}}{\partial f}$
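# As a sanity check (this is our own snippet, not the graded function), the identity $\frac{\partial f}{\partial h} = f(h)(1-f(h))$ can be confirmed numerically:

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

h = np.array([-2.0, 0.0, 3.5])
analytic = sigmoid(h) * (1.0 - sigmoid(h))

# central finite-difference estimate of the derivative
eps = 1e-6
numeric = (sigmoid(h + eps) - sigmoid(h - eps)) / (2 * eps)

print(analytic)  # at h = 0 the slope is exactly 0.25, the sigmoid's maximum
print(np.max(np.abs(analytic - numeric)))
```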
def sigmoid_grad_input(x_input, grad_output):
"""sigmoid nonlinearity gradient.
Calculate the partial derivative of the loss
with respect to the input of the layer
# Arguments
x_input: np.array of size `(n_objects, n_in)`
grad_output: np.array of size `(n_objects, n_in)`
dL / df
# Output
the partial derivative of the loss
with respect to the input of the function
np.array of size `(n_objects, n_in)`
dL / dh
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, sigmoid_grad_input, ['x_input', 'grad_output'])
# ## 1.3 Negative Log Likelihood
# $$
# \mathcal{L}
# = -\frac{1}{N}\sum_j \Big[ Y_j\log \dot{Y}_j + (1-Y_j)\log(1-\dot{Y}_j)\Big]
# $$
#
# Here $N$ is the number of objects. $Y_j$ is the real label of an object and $\dot{Y}_j$ is the predicted one.
def nll_forward(target_pred, target_true):
"""Compute the value of NLL
for a given prediction and the ground truth
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the value of NLL for a given prediction and the ground truth
scalar
"""
#################
### YOUR CODE ###
#################
return output
am.test_student_function(username, nll_forward, ['target_pred', 'target_true'])
# Now you need to calculate the partial derivative of NLL with respect to its input.
#
# $$
# \frac{\partial \mathcal{L}}{\partial \dot{Y}}
# =
# \begin{pmatrix}
# \frac{\partial \mathcal{L}}{\partial \dot{Y}_0} \\
# \frac{\partial \mathcal{L}}{\partial \dot{Y}_1} \\
# \vdots \\
# \frac{\partial \mathcal{L}}{\partial \dot{Y}_{N-1}}
# \end{pmatrix}
# $$
#
# Let's do it step-by-step
#
# \begin{equation}
# \begin{split}
# \frac{\partial \mathcal{L}}{\partial \dot{Y}_0}
# &= \frac{\partial}{\partial \dot{Y}_0} \Big(-\frac{1}{N}\sum_j \Big[ Y_j\log \dot{Y}_j + (1-Y_j)\log(1-\dot{Y}_j)\Big]\Big) \\
# &= -\frac{1}{N} \frac{\partial}{\partial \dot{Y}_0} \Big(Y_0\log \dot{Y}_0 + (1-Y_0)\log(1-\dot{Y}_0)\Big) \\
# &= -\frac{1}{N} \Big(\frac{Y_0}{\dot{Y}_0} - \frac{1-Y_0}{1-\dot{Y}_0}\Big)
# = \frac{1}{N} \frac{\dot{Y}_0 - Y_0}{\dot{Y}_0 (1 - \dot{Y}_0)}
# \end{split}
# \end{equation}
#
# And for the other components it can be done in exactly the same way. So the result is the vector where each component is given by
# $$\frac{1}{N} \frac{\dot{Y}_j - Y_j}{\dot{Y}_j (1 - \dot{Y}_j)}$$
#
# Or if we assume all multiplications and divisions to be done element-wise the output can be calculated as
# $$
# \frac{\partial \mathcal{L}}{\partial \dot{Y}} = \frac{1}{N} \frac{\dot{Y} - Y}{\dot{Y} (1 - \dot{Y})}
# $$
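# The derived expression can be checked numerically before implementing the graded function. The sketch below is self-contained and uses our own helper `nll`, not the assignment's `nll_forward`:

```python
import numpy as np

def nll(y_pred, y_true):
    """Normalized negative log likelihood, as defined above."""
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_pred = np.array([[0.9], [0.2], [0.6]])
y_true = np.array([[1.0], [0.0], [1.0]])
n = y_true.shape[0]

# the formula derived above, applied element-wise
analytic = (y_pred - y_true) / (y_pred * (1 - y_pred)) / n

# central finite-difference estimate, one component at a time
eps = 1e-6
numeric = np.zeros_like(y_pred)
for i in range(n):
    up, down = y_pred.copy(), y_pred.copy()
    up[i] += eps
    down[i] -= eps
    numeric[i] = (nll(up, y_true) - nll(down, y_true)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))
```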
def nll_grad_input(target_pred, target_true):
"""Compute the partial derivative of NLL
with respect to its input
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the partial derivative
of NLL with respect to its input
np.array of size `(n_objects, 1)`
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, nll_grad_input, ['target_pred', 'target_true'])
am.get_progress(username)
# ## 1.4 Model
#
# Here we provide a model for you. It consists of the functions which you have implemented above.
class LogsticRegressionGD(object):
def __init__(self, n_in, lr=0.05):
super().__init__()
self.lr = lr
self.b = np.zeros(1, )
self.W = np.random.randn(n_in, 1)
def forward(self, x):
self.h = linear_forward(x, self.W, self.b)
y = sigmoid_forward(self.h)
return y
def update_params(self, x, nll_grad):
# compute gradients
grad_h = sigmoid_grad_input(self.h, nll_grad)
grad_W = linear_grad_W(x, grad_h, self.W, self.b)
grad_b = linear_grad_b(x, grad_h, self.W, self.b)
# update params
self.W = self.W - self.lr * grad_W
self.b = self.b - self.lr * grad_b
# ## 1.5 Simple Experiment
import matplotlib.pyplot as plt
# %matplotlib inline
# +
# Generate some data
def generate_2_circles(N=100):
    phi = np.linspace(0.0, np.pi * 2, N)
X1 = 1.1 * np.array([np.sin(phi), np.cos(phi)])
X2 = 3.0 * np.array([np.sin(phi), np.cos(phi)])
Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))
X = np.hstack([X1,X2]).T
return X, Y
def generate_2_gaussians(N=100):
X1 = np.random.normal(loc=[1, 2], scale=[2.5, 0.9], size=(N, 2))
X1 = X1 @ np.array([[0.7, -0.7], [0.7, 0.7]])
X2 = np.random.normal(loc=[-2, 0], scale=[1, 1.5], size=(N, 2))
X2 = X2 @ np.array([[0.7, 0.7], [-0.7, 0.7]])
Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))
X = np.vstack([X1,X2])
return X, Y
def split(X, Y, train_ratio=0.7):
size = len(X)
train_size = int(size * train_ratio)
indices = np.arange(size)
np.random.shuffle(indices)
train_indices = indices[:train_size]
test_indices = indices[train_size:]
return X[train_indices], Y[train_indices], X[test_indices], Y[test_indices]
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
X, Y = generate_2_circles()
ax1.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')
ax1.set_aspect('equal')
X, Y = generate_2_gaussians()
ax2.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')
ax2.set_aspect('equal')
# -
X_train, Y_train, X_test, Y_test = split(*generate_2_gaussians(), 0.7)
# +
# let's train our model
model = LogsticRegressionGD(2, 0.05)
for step in range(30):
Y_pred = model.forward(X_train)
loss_value = nll_forward(Y_pred, Y_train)
accuracy = ((Y_pred > 0.5) == Y_train).mean()
print('Step: {} \t Loss: {:.3f} \t Acc: {:.1f}%'.format(step, loss_value, accuracy * 100))
loss_grad = nll_grad_input(Y_pred, Y_train)
model.update_params(X_train, loss_grad)
print('\n\nTesting...')
Y_test_pred = model.forward(X_test)
test_accuracy = ((Y_test_pred > 0.5) == Y_test).mean()
print('Acc: {:.1f}%'.format(test_accuracy * 100))
# +
def plot_model_prediction(prediction_func, X, Y, hard=True):
u_min = X[:, 0].min()-1
u_max = X[:, 0].max()+1
v_min = X[:, 1].min()-1
v_max = X[:, 1].max()+1
U, V = np.meshgrid(np.linspace(u_min, u_max, 100), np.linspace(v_min, v_max, 100))
UV = np.stack([U.ravel(), V.ravel()]).T
c = prediction_func(UV).ravel()
if hard:
c = c > 0.5
plt.scatter(UV[:,0], UV[:,1], c=c, edgecolors= 'none', alpha=0.15)
plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'black')
plt.xlim(left=u_min, right=u_max)
plt.ylim(bottom=v_min, top=v_max)
    plt.gca().set_aspect('equal')
plt.show()
plot_model_prediction(lambda x: model.forward(x), X_train, Y_train, False)
plot_model_prediction(lambda x: model.forward(x), X_train, Y_train, True)
# +
# Now run the same experiment on 2 circles
# -
# # 2. Decision Tree
# The next model we look at is called **Decision Tree**. This type of model is non-parametric, meaning that, in contrast to **Logistic Regression**, it has no parameters that need to be trained.
#
# Let us consider a simple binary decision tree for deciding between the two classes "creditable" and "not creditable".
#
# 
#
# Each node, except the leaves, asks a question about the client in question. A decision is made by going from the root node to a leaf node while considering the client's situation. The client's situation, in this case, is fully described by the features:
# 1. Checking account balance
# 2. Duration of requested credit
# 3. Payment status of previous loan
# 4. Length of current employment
# In order to build a decision tree we need training data. To continue the previous example: we need a number of clients for whom we know the properties 1.-4. as well as their creditability.
# The process of building a decision tree starts with the root node and involves the following steps:
# 1. Choose a splitting criteria and add it to the current node.
# 2. Split the dataset at the current node into those examples that fulfil the criteria and those that do not.
# 3. Add a child node for each data split.
# 4. For each child node decide on either A. or B.:
# 1. Repeat from 1. step
# 2. Make it a leaf node: The predicted class label is decided by the majority vote over the training data in the current split.
# ## 2.1 Gini Index & Data Split
# Deciding on how to split your training data at each node is guided by the following two criteria:
# 1. Does the rule help me make a final decision?
# 2. Is the rule general enough such that it applies not only to my training data, but also to new unseen examples?
#
# When considering our previous example, splitting the clients by their handedness would not help us decide on their creditability. Knowing whether a rule will generalize is usually a hard call to make, but in practice we rely on the [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) principle: the fewer rules we use, the better we believe they generalize to previously unseen examples.
#
# One way to measure the quality of a rule is the [**Gini Index**](https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity).
# Since we only consider binary classification, it is calculated by:
# $$
# Gini = \sum_{n\in\{L,R\}}\frac{|S_n|}{|S|}\left( 1 - \sum_{c \in C} p_{S_n}(c)^2\right)\\
# p_{S_n}(c) = \frac{|\{\mathbf{x}_{i}\in \mathbf{X}|y_{i} = c, i \in S_n\}|}{|S_n|}, n \in \{L, R\}
# $$
# with $|C|=2$ being your set of class labels and $S_L$ and $S_R$ the two splits determined by the splitting criteria.
# While we only consider two class problems for decision trees, the method can also be applied when $|C|>2$.
# The lower the Gini score, the better the split. In the extreme case, where all class labels are the same within each split respectively, the Gini index takes the value $0$.
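# To make the formula concrete, here is an illustrative computation on a toy split (our own helper applying the formula above, not the graded `tree_gini_index`). With $S_L = \{1,1,1\}$ and $S_R = \{0,0,1\}$, the left term is $0$ and the right term is $\frac{3}{6}\big(1 - (\frac{2}{3})^2 - (\frac{1}{3})^2\big) = \frac{2}{9}$:

```python
import numpy as np

def gini_of_split(y_left, y_right, classes):
    """Weighted Gini index of a binary split, following the formula above."""
    total = len(y_left) + len(y_right)
    gini = 0.0
    for group in (y_left, y_right):
        if len(group) == 0:
            continue
        # 1 - sum of squared class proportions, weighted by the group size
        impurity = 1.0 - sum(np.mean(group == c) ** 2 for c in classes)
        gini += len(group) / total * impurity
    return gini

print(gini_of_split(np.array([1, 1, 1]), np.array([0, 0, 1]), [0, 1]))  # 2/9
print(gini_of_split(np.array([0, 0]), np.array([1, 1]), [0, 1]))        # pure splits score 0
```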
def tree_gini_index(Y_left, Y_right, classes):
"""Compute the Gini Index.
# Arguments
Y_left: class labels of the data left set
np.array of size `(n_objects, 1)`
Y_right: class labels of the data right set
np.array of size `(n_objects, 1)`
classes: list of all class values
# Output
gini: scalar `float`
"""
gini = 0.0
#################
### YOUR CODE ###
#################
return gini
am.test_student_function(username, tree_gini_index, ['Y_left', 'Y_right', 'classes'])
# At each node in the tree, the data is split according to a split criterion and each split is passed onto the left/right child respectively.
# Implement the following functions to return all rows in `X` and `Y` such that the left child gets all examples whose feature value is less than the split value, and the right child gets the rest.
# +
def tree_split_data_left(X, Y, feature_index, split_value):
"""Split the data `X` and `Y`, at the feature indexed by `feature_index`.
If the value is less than `split_value` then return it as part of the left group.
# Arguments
X: np.array of size `(n_objects, n_in)`
Y: np.array of size `(n_objects, 1)`
feature_index: index of the feature to split at
split_value: value to split between
# Output
(XY_left): np.array of size `(n_objects_left, n_in + 1)`
"""
X_left, Y_left = None, None
#################
### YOUR CODE ###
#################
XY_left = np.concatenate([X_left, Y_left], axis=-1)
return XY_left
def tree_split_data_right(X, Y, feature_index, split_value):
"""Split the data `X` and `Y`, at the feature indexed by `feature_index`.
If the value is greater or equal than `split_value` then return it as part of the right group.
# Arguments
X: np.array of size `(n_objects, n_in)`
Y: np.array of size `(n_objects, 1)`
feature_index: index of the feature to split at
split_value: value to split between
# Output
        (XY_right): np.array of size `(n_objects_right, n_in + 1)`
"""
X_right, Y_right = None, None
#################
### YOUR CODE ###
#################
XY_right = np.concatenate([X_right, Y_right], axis=-1)
return XY_right
# -
am.test_student_function(username, tree_split_data_left, ['X', 'Y', 'feature_index', 'split_value'])
am.test_student_function(username, tree_split_data_right, ['X', 'Y', 'feature_index', 'split_value'])
am.get_progress(username)
# Now to find the split rule with the lowest gini score, we brute-force search over all features and values to split by.
def tree_best_split(X, Y):
class_values = list(set(Y.flatten().tolist()))
r_index, r_value, r_score = float("inf"), float("inf"), float("inf")
r_XY_left, r_XY_right = (X,Y), (X,Y)
for feature_index in range(X.shape[1]):
for row in X:
XY_left = tree_split_data_left(X, Y, feature_index, row[feature_index])
XY_right = tree_split_data_right(X, Y, feature_index, row[feature_index])
XY_left, XY_right = (XY_left[:,:-1], XY_left[:,-1:]), (XY_right[:,:-1], XY_right[:,-1:])
gini = tree_gini_index(XY_left[1], XY_right[1], class_values)
if gini < r_score:
r_index, r_value, r_score = feature_index, row[feature_index], gini
r_XY_left, r_XY_right = XY_left, XY_right
return {'index':r_index, 'value':r_value, 'XY_left': r_XY_left, 'XY_right':r_XY_right}
# ## 2.2 Terminal Node
# The leaf nodes predict the label of an unseen example, by taking a majority vote over all training class labels in that node.
def tree_to_terminal(Y):
"""The most frequent class label, out of the data points belonging to the leaf node,
is selected as the predicted class.
# Arguments
Y: np.array of size `(n_objects)`
# Output
label: most frequent label of `Y.dtype`
"""
label = None
#################
### YOUR CODE ###
#################
return label
am.test_student_function(username, tree_to_terminal, ['Y'])
am.get_progress(username)
# ## 2.3 Build the Decision Tree
# Now we recursively build the decision tree, by greedily splitting the data at each node according to the gini index.
# To prevent the model from overfitting, we transform a node into a terminal/leaf node, if:
# 1. a maximum depth is reached.
# 2. the node does not reach a minimum number of training samples.
#
# +
def tree_recursive_split(X, Y, node, max_depth, min_size, depth):
XY_left, XY_right = node['XY_left'], node['XY_right']
del(node['XY_left'])
del(node['XY_right'])
# check for a no split
if XY_left[0].size <= 0 or XY_right[0].size <= 0:
node['left_child'] = node['right_child'] = tree_to_terminal(np.concatenate((XY_left[1], XY_right[1])))
return
# check for max depth
if depth >= max_depth:
node['left_child'], node['right_child'] = tree_to_terminal(XY_left[1]), tree_to_terminal(XY_right[1])
return
# process left child
if XY_left[0].shape[0] <= min_size:
node['left_child'] = tree_to_terminal(XY_left[1])
else:
node['left_child'] = tree_best_split(*XY_left)
tree_recursive_split(X, Y, node['left_child'], max_depth, min_size, depth+1)
# process right child
if XY_right[0].shape[0] <= min_size:
node['right_child'] = tree_to_terminal(XY_right[1])
else:
node['right_child'] = tree_best_split(*XY_right)
tree_recursive_split(X, Y, node['right_child'], max_depth, min_size, depth+1)
def build_tree(X, Y, max_depth, min_size):
root = tree_best_split(X, Y)
tree_recursive_split(X, Y, root, max_depth, min_size, 1)
return root
# -
# By printing the split criteria or the predicted class at each node, we can visualise the decision-making process.
# Both printing the tree and making a prediction can be implemented recursively, by going from the root to a leaf node.
# +
def print_tree(node, depth=0):
if isinstance(node, dict):
print('%s[X%d < %.3f]' % ((depth*' ', (node['index']+1), node['value'])))
print_tree(node['left_child'], depth+1)
print_tree(node['right_child'], depth+1)
else:
print('%s[%s]' % ((depth*' ', node)))
def tree_predict_single(x, node):
if isinstance(node, dict):
if x[node['index']] < node['value']:
return tree_predict_single(x, node['left_child'])
else:
return tree_predict_single(x, node['right_child'])
return node
def tree_predict_multi(X, node):
Y = np.array([tree_predict_single(row, node) for row in X])
return Y[:, None] # size: (n_object,) -> (n_object, 1)
# -
# Let's test our decision tree model on some toy data.
# +
X_train, Y_train, X_test, Y_test = split(*generate_2_circles(), 0.7)
tree = build_tree(X_train, Y_train, 4, 1)
Y_pred = tree_predict_multi(X_test, tree)
test_accuracy = (Y_pred == Y_test).mean()
print('Test Acc: {:.1f}%'.format(test_accuracy * 100))
# -
# We print the decision tree in [pre-order](https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)).
print_tree(tree)
plot_model_prediction(lambda x: tree_predict_multi(x, tree), X_test, Y_test)
# # 3. Experiments
# The [Cleveland Heart Disease](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) dataset aims at predicting the presence of heart disease based on other available medical information of the patient.
#
# Although the whole database contains 76 attributes, we focus on the following 14:
# 1. Age: age in years
# 2. Sex:
# * 0 = female
# * 1 = male
# 3. Chest pain type:
# * 1 = typical angina
# * 2 = atypical angina
# * 3 = non-anginal pain
# * 4 = asymptomatic
# 4. Trestbps: resting blood pressure in mm Hg on admission to the hospital
# 5. Chol: serum cholesterol in mg/dl
# 6. Fasting blood sugar: > 120 mg/dl
# * 0 = false
# * 1 = true
# 7. Resting electrocardiographic results:
# * 0 = normal
# * 1 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
# * 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria
# 8. Thalach: maximum heart rate achieved
# 9. Exercise induced angina:
# * 0 = no
# * 1 = yes
# 10. Oldpeak: ST depression induced by exercise relative to rest
# 11. Slope: the slope of the peak exercise ST segment
# * 1 = upsloping
# * 2 = flat
# * 3 = downsloping
# 12. Ca: number of major vessels (0-3) colored by flourosopy
# 13. Thal:
# * 3 = normal
# * 6 = fixed defect
#     * 7 = reversible defect
# 14. Target: diagnosis of heart disease (angiographic disease status)
# * 0 = < 50% diameter narrowing
# * 1 = > 50% diameter narrowing
#
# The 14th attribute is the target variable that we would like to predict based on the rest.
# We have prepared some helper functions to download and pre-process the data in `heart_disease_data.py`
import heart_disease_data
X, Y = heart_disease_data.download_and_preprocess()
X_train, Y_train, X_test, Y_test = split(X, Y, 0.7)
# Let's have a look at some examples
# +
print(X_train[0:2])
print(Y_train[0:2])
# TODO feel free to explore more examples and see if you can predict the presence of a heart disease
# -
# ## 3.1 Decision Tree for Heart Disease Prediction
# Let's build a decision tree model on the training data and see how well it performs
# +
# TODO: you are free to make use of code that we provide in previous cells
# TODO: play around with different hyper parameters and see how these impact your performance
tree = build_tree(X_train, Y_train, 5, 4)
Y_pred = tree_predict_multi(X_test, tree)
test_accuracy = (Y_pred == Y_test).mean()
print('Test Acc: {:.1f}%'.format(test_accuracy * 100))
# -
# How did changing the hyperparameters affect the test performance? Usually hyperparameters are tuned using a hold-out [validation set](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets#Validation_dataset) instead of the test set.
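# A typical tuning loop over a validation split looks like the sketch below. Here `evaluate` is our own dummy stand-in for "build a tree on the training split with these hyperparameters and measure accuracy on the validation split":

```python
import itertools

def evaluate(max_depth, min_size):
    """Dummy validation scorer for illustration only; in practice, train on the
    training split and return accuracy on the validation split."""
    return 1.0 / (1 + abs(max_depth - 5) + abs(min_size - 4))  # peaks at (5, 4) by construction

# exhaustive grid search over a small hyperparameter grid
best_score, best_params = -1.0, None
for max_depth, min_size in itertools.product([2, 3, 5, 8], [1, 2, 4, 8]):
    score = evaluate(max_depth, min_size)
    if score > best_score:
        best_score, best_params = score, (max_depth, min_size)

print(best_params, best_score)  # the dummy scorer selects (5, 4)
```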
# ## 3.2 Logistic Regression for Heart Disease Prediction
#
# Instead of manually going through the data to find possible correlations, let's try training a logistic regression model on the data.
# +
# TODO: you are free to make use of code that we provide in previous cells
# TODO: play around with different hyper parameters and see how these impact your performance
# -
# How well did your model perform? Was it actually better than guessing? Let's look at the empirical mean of the target.
Y_train.mean()
# So what is the problem? Let's have a look at the learned parameters of our model.
print(model.W, model.b)
# If you trained for sufficiently many steps, you'll probably see that some weights are much larger than others. Have a look at the range in which the parameters were initialized and at how much change we allow per step (the learning rate). Compare this to the scale of the input features. Here an important concept arises when we want to train on real-world data:
# [Feature Scaling](https://en.wikipedia.org/wiki/Feature_scaling).
#
# Let's try applying it on our data and see how it affects our performance.
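# A minimal sketch of min-max scaling with numpy is shown below (our own helper; note that the statistics must come from the training set only, so no information leaks from the test set):

```python
import numpy as np

def min_max_scale(X_train, X_test):
    """Scale every feature to [0, 1] using training-set statistics only."""
    x_min = X_train.min(axis=0)
    x_range = X_train.max(axis=0) - x_min
    x_range[x_range == 0] = 1.0  # guard against constant features
    return (X_train - x_min) / x_range, (X_test - x_min) / x_range

X_tr = np.array([[150.0, 0.0], [200.0, 1.0], [100.0, 1.0]])
X_te = np.array([[175.0, 0.0]])
X_tr_s, X_te_s = min_max_scale(X_tr, X_te)
print(X_tr_s.min(axis=0), X_tr_s.max(axis=0))  # every training column spans [0, 1]
print(X_te_s)  # [[0.75 0.  ]]
```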
# +
# TODO: Rescale the input features and train again
# -
# Notice that we did not need any rescaling for the decision tree. Can you think of why?
# Source file: week_2/ML.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dSwpGifpB7W9"
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow.keras as K
from tensorflow.keras.layers import Dense
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="t5PZJH8iCFma" outputId="f985216c-90da-4548-f5fe-25d769791ba8"
# Generate some random data
np.random.seed(0)
area = 2.5 * np.random.randn(100) + 25
price = 25 * area + 5 + np.random.randint(20,50, size = len(area))
data = np.array([area, price])
data = pd.DataFrame(data = data.T, columns=['area','price'])
plt.scatter(data['area'], data['price'])
plt.show()
# + id="_1MDPuLTW2Wy"
data = (data - data.min()) / (data.max() - data.min()) #Normalize
# + colab={"base_uri": "https://localhost:8080/"} id="vfy1amXuCHBD" outputId="0f70c62f-eabe-4389-8684-530ce4113e34"
model = K.Sequential([
#normalizer,
Dense(1, input_shape = [1,], activation=None)
])
model.summary()
# + id="lPF_CS2qCNjI"
model.compile(loss='mean_squared_error', optimizer='sgd')
# + colab={"base_uri": "https://localhost:8080/"} id="AFVBcartVIrg" outputId="7088dd5c-bb83-45b7-f88f-48a764cb94e9"
model.fit(x=data['area'],y=data['price'], epochs=100, batch_size=32, verbose=1, validation_split=0.2)
# + id="jE2w6574Y0pD"
y_pred = model.predict(data['area'])
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="LDFVIsvYCRS4" outputId="b6d79be1-2f6f-411c-803f-37fcb0d62fd7"
plt.plot(data['area'], y_pred, color='red',label="Predicted Price")
plt.scatter(data['area'], data['price'], label="Training Data")
plt.xlabel("Area")
plt.ylabel("Price")
plt.legend()
# + id="bN433O3bCVmH" colab={"base_uri": "https://localhost:8080/"} outputId="6f44e9cc-39ce-41b3-f499-7daae44449d5"
model.weights
# + id="nb22K749kzVK"
# Source file: Chapter_2/simple_linear_regression_using_keras_API.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8
# language: python
# name: python3
# ---
# <center>
# <img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
# </center>
#
# # **Space X Falcon 9 First Stage Landing Prediction**
#
# ## Web scraping Falcon 9 and Falcon Heavy Launches Records from Wikipedia
#
# Estimated time needed: **40** minutes
#
# In this lab, you will be performing web scraping to collect Falcon 9 historical launch records from a Wikipedia page titled `List of Falcon 9 and Falcon Heavy launches`
#
# [https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches](https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01)
#
# 
#
# Falcon 9 first stage will land successfully
#
# 
#
# Several examples of an unsuccessful landing are shown here:
#
# 
#
# More specifically, the launch records are stored in an HTML table shown below:
#
# 
#
# ## Objectives
#
# Web scrape Falcon 9 launch records with `BeautifulSoup`:
#
# * Extract a Falcon 9 launch records HTML table from Wikipedia
# * Parse the table and convert it into a Pandas data frame
#
# First let's import required packages for this lab
#
# !pip3 install beautifulsoup4
# !pip3 install requests
# +
import sys
import requests
from bs4 import BeautifulSoup
import re
import unicodedata
import pandas as pd
# -
# and we will provide some helper functions for you to process the web scraped HTML table
#
# +
def date_time(table_cells):
"""
    This function returns the date and time from the HTML table cell
    Input: the element of a table data cell
"""
return [data_time.strip() for data_time in list(table_cells.strings)][0:2]
def booster_version(table_cells):
"""
This function returns the booster version from the HTML table cell
    Input: the element of a table data cell
"""
out=''.join([booster_version for i,booster_version in enumerate( table_cells.strings) if i%2==0][0:-1])
return out
def landing_status(table_cells):
"""
This function returns the landing status from the HTML table cell
    Input: the element of a table data cell
"""
out=[i for i in table_cells.strings][0]
return out
def get_mass(table_cells):
mass=unicodedata.normalize("NFKD", table_cells.text).strip()
if mass:
mass.find("kg")
new_mass=mass[0:mass.find("kg")+2]
else:
new_mass=0
return new_mass
def extract_column_from_header(row):
"""
This function returns the landing status from the HTML table cell
Input: the element of a table data cell extracts extra row
"""
if (row.br):
row.br.extract()
if row.a:
row.a.extract()
if row.sup:
row.sup.extract()
colunm_name = ' '.join(row.contents)
# Filter the digit and empty names
if not(colunm_name.strip().isdigit()):
colunm_name = colunm_name.strip()
return colunm_name
# -
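# To see what `get_mass` produces for a typical payload-mass cell, here is the same normalise-and-slice logic applied to a plain string (bypassing BeautifulSoup); the sample text is a made-up cell value:

```python
import unicodedata

def get_mass_from_text(text):
    # Mirror of get_mass(): normalize unicode (e.g. non-breaking spaces), then
    # slice up to and including the "kg" unit
    mass = unicodedata.normalize("NFKD", text).strip()
    if mass:
        # Note: if "kg" is absent, find() returns -1 and the slice keeps only
        # the first character -- a quirk inherited from the helper above
        return mass[0:mass.find("kg") + 2]
    return 0

print(get_mass_from_text("5,000\xa0kg[10]"))  # a cell with a reference link -> '5,000 kg'
print(get_mass_from_text(""))                 # an empty cell -> 0
```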
# To keep the lab tasks consistent, you will be asked to scrape the data from a snapshot of the `List of Falcon 9 and Falcon Heavy launches` Wikipedia page captured on `9th June 2021`
#
static_url = "https://en.wikipedia.org/w/index.php?title=List_of_Falcon_9_and_Falcon_Heavy_launches&oldid=1027686922"
# Next, request the HTML page from the above URL and get a `response` object
#
# ### TASK 1: Request the Falcon9 Launch Wiki page from its URL
#
# First, let's perform an HTTP GET request for the Falcon 9 Launch HTML page and capture the HTTP response.
#
# Use the requests.get() method with the provided static_url
# and assign the response to an object
response = requests.get(static_url)
# Create a `BeautifulSoup` object from the HTML `response`
#
# Use BeautifulSoup() to create a BeautifulSoup object from a response text content
soup = BeautifulSoup(response.text, 'html.parser')  # name the parser explicitly to avoid a bs4 warning
# Print the page title to verify if the `BeautifulSoup` object was created properly
#
# Use soup.title attribute
print(soup.title)
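# Note that `soup.title` returns the `<title>` element itself, not its text; `.string` gives just the text. A minimal, self-contained illustration on a toy document:

```python
from bs4 import BeautifulSoup

soup_demo = BeautifulSoup('<html><head><title>Demo page</title></head></html>',
                          'html.parser')
print(soup_demo.title)         # the whole element: <title>Demo page</title>
print(soup_demo.title.string)  # just the text: Demo page
```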
# ### TASK 2: Extract all column/variable names from the HTML table header
#
# Next, we want to collect all relevant column names from the HTML table header
#
# Let's try to find all tables on the wiki page first. If you need to refresh your memory about `BeautifulSoup`, please check the external reference link towards the end of this lab
#
# Use the find_all function in the BeautifulSoup object, with element type `table`
# Assign the result to a list called `html_tables`
html_tables = soup.find_all('table')
# The third table is our target: it contains the actual launch records.
#
# Let's print the third table and check its content
first_launch_table = html_tables[2]
print(first_launch_table)
# You should be able to see the column names embedded in the table header elements `<th>` as follows:
#
# ```
# <tr>
# <th scope="col">Flight No.
# </th>
# <th scope="col">Date and<br/>time (<a href="/wiki/Coordinated_Universal_Time" title="Coordinated Universal Time">UTC</a>)
# </th>
# <th scope="col"><a href="/wiki/List_of_Falcon_9_first-stage_boosters" title="List of Falcon 9 first-stage boosters">Version,<br/>Booster</a> <sup class="reference" id="cite_ref-booster_11-0"><a href="#cite_note-booster-11">[b]</a></sup>
# </th>
# <th scope="col">Launch site
# </th>
# <th scope="col">Payload<sup class="reference" id="cite_ref-Dragon_12-0"><a href="#cite_note-Dragon-12">[c]</a></sup>
# </th>
# <th scope="col">Payload mass
# </th>
# <th scope="col">Orbit
# </th>
# <th scope="col">Customer
# </th>
# <th scope="col">Launch<br/>outcome
# </th>
# <th scope="col"><a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">Booster<br/>landing</a>
# </th></tr>
# ```
#
# Next, we just need to iterate through the `<th>` elements and apply the provided `extract_column_from_header()` to extract the column names one by one
#
# +
# Apply the find_all() function with the `th` element on first_launch_table,
# iterate over each th element and apply the provided extract_column_from_header() to get a column name,
# then append each non-empty column name (`if name is not None and len(name) > 0`) to a list called column_names
column_names = []
for th in first_launch_table.find_all('th'):
    name = extract_column_from_header(th)
    if name is not None and len(name) > 0:
        column_names.append(name)
# -
# Check the extracted column names
#
print(column_names)
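# To see why the list above contains an entry like `Date and time ( )`, here is `extract_column_from_header` (restated so the snippet is self-contained) applied to a stripped-down copy of one real header cell: the `<br/>` and `<a>` tags are extracted, and the remaining strings are joined.

```python
from bs4 import BeautifulSoup

def extract_column_from_header(row):
    """Return the column name from a <th> cell, or None for digit-only cells."""
    if row.br:
        row.br.extract()
    if row.a:
        row.a.extract()
    if row.sup:
        row.sup.extract()
    column_name = ' '.join(row.contents)
    if not column_name.strip().isdigit():
        return column_name.strip()

th_html = '<th scope="col">Date and<br/>time (<a href="#">UTC</a>)</th>'
th = BeautifulSoup(th_html, 'html.parser').th
print(repr(extract_column_from_header(th)))          # 'Date and time ( )'
print(extract_column_from_header(
    BeautifulSoup('<th>7</th>', 'html.parser').th))  # None (digit-only header)
```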
# ## TASK 3: Create a data frame by parsing the launch HTML tables
#
# We will create an empty dictionary with keys from the extracted column names in the previous task. Later, this dictionary will be converted into a Pandas dataframe
#
# +
launch_dict= dict.fromkeys(column_names)
# Remove an irrelevant column
del launch_dict['Date and time ( )']
# Initialize each value of launch_dict to an empty list
launch_dict['Flight No.'] = []
launch_dict['Launch site'] = []
launch_dict['Payload'] = []
launch_dict['Payload mass'] = []
launch_dict['Orbit'] = []
launch_dict['Customer'] = []
launch_dict['Launch outcome'] = []
# Added some new columns
launch_dict['Version Booster']=[]
launch_dict['Booster landing']=[]
launch_dict['Date']=[]
launch_dict['Time']=[]
# -
# Next, we just need to fill up the `launch_dict` with launch records extracted from table rows.
#
# Usually, HTML tables in Wiki pages are likely to contain unexpected annotations and other types of noise, such as reference links `B0004.1[8]`, missing values `N/A [e]`, and inconsistent formatting.
#
# To simplify the parsing process, we have provided an incomplete code snippet below to help you fill up the `launch_dict`. Complete the TODOs in the following code snippet, or write your own logic to parse all launch tables:
#
extracted_row = 0
# Extract each table
for table_number, table in enumerate(soup.find_all('table', "wikitable plainrowheaders collapsible")):
    # get each table row
    for rows in table.find_all("tr"):
        # check whether the first table heading is a number corresponding to a launch number
        if rows.th:
            if rows.th.string:
                flight_number = rows.th.string.strip()
                flag = flight_number.isdigit()
            else:
                flag = False
        else:
            flag = False
        # get the table data cells
        row = rows.find_all('td')
        # if the heading is a number, save the cells in the dictionary
        if flag:
            extracted_row += 1
            # Flight Number value
            launch_dict['Flight No.'].append(flight_number)
            #print(flight_number)
            datatimelist = date_time(row[0])
            # Date value
            date = datatimelist[0].strip(',')
            launch_dict['Date'].append(date)
            #print(date)
            # Time value
            time = datatimelist[1]
            launch_dict['Time'].append(time)
            #print(time)
            # Booster version
            bv = booster_version(row[1])
            if not bv:
                bv = row[1].a.string
            launch_dict['Version Booster'].append(bv)
            #print(bv)
            # Launch Site
            launch_site = row[2].a.string
            launch_dict['Launch site'].append(launch_site)
            #print(launch_site)
            # Payload
            payload = row[3].a.string
            launch_dict['Payload'].append(payload)
            #print(payload)
            # Payload Mass
            payload_mass = get_mass(row[4])
            launch_dict['Payload mass'].append(payload_mass)
            #print(payload_mass)
            # Orbit
            orbit = row[5].a.string
            launch_dict['Orbit'].append(orbit)
            #print(orbit)
            # Customer (some cells have no <a> tag, so fall back to the plain string)
            customer = row[6].a.string if row[6].a else row[6].string
            launch_dict['Customer'].append(customer)
            #print(customer)
            # Launch outcome
            launch_outcome = list(row[7].strings)[0]
            launch_dict['Launch outcome'].append(launch_outcome)
            #print(launch_outcome)
            # Booster landing
            booster_landing = landing_status(row[8])
            launch_dict['Booster landing'].append(booster_landing)
            #print(booster_landing)
# After you have filled in the parsed launch record values in `launch_dict`, you can create a dataframe from it.
#
df=pd.DataFrame(launch_dict)
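# Note that `pd.DataFrame` raises a `ValueError` if the lists in `launch_dict` ended up with unequal lengths (e.g. because a row failed to parse); wrapping each list in a `pd.Series` pads the shorter columns with NaN instead. A defensive variant, shown on hypothetical short lists:

```python
import pandas as pd

partial = {'Flight No.': ['1', '2', '3'], 'Payload': ['Dragon', 'CRS-1']}
df_safe = pd.DataFrame({key: pd.Series(value) for key, value in partial.items()})
print(df_safe.shape)  # (3, 2) -- the shorter 'Payload' column is padded with NaN
```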
# We can now export the dataframe to a <b>CSV</b> for the next section. To keep the answers consistent, and in case you have difficulty finishing this lab, the following labs will use a provided dataset so that each lab remains independent.
#
# <code>df.to_csv('spacex_web_scraped.csv', index=False)</code>
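# As a quick check that the export preserves the table, you can round-trip a small dataframe through an in-memory CSV buffer (the column values here are made up):

```python
import io
import pandas as pd

df_demo = pd.DataFrame({'Flight No.': ['1', '2'], 'Orbit': ['LEO', 'GTO']})
buf = io.StringIO()
df_demo.to_csv(buf, index=False)
buf.seek(0)
df_back = pd.read_csv(buf)
print(df_back.shape)          # (2, 2)
print(list(df_back.columns))  # ['Flight No.', 'Orbit']
```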
#
# ## Authors
#
# <a href="https://www.linkedin.com/in/yan-luo-96288783/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Yan Luo</a>
#
# <a href="https://www.linkedin.com/in/nayefaboutayoun/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01"><NAME></a>
#
# ## Change Log
#
# | Date (YYYY-MM-DD) | Version | Changed By | Change Description |
# | ----------------- | ------- | ---------- | --------------------------- |
# | 2021-06-09 | 1.0 | Yan Luo | Tasks updates |
# | 2020-11-10 | 1.0 | Nayef | Created the initial version |
#
# Copyright © 2021 IBM Corporation. All rights reserved.
#
| Data Collecion with Web Scraping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Convergence Testing:
# ### K-Point Convergence:
# Using a plane-wave energy cutoff of 520 eV, and Monkhorst-Pack k-grid densities of $i$ x $i$ x $i$ for $i$ ranging from 1 to 8 (only $i \geq 3$ produced converged SCF energies, so the plots below start there).
# + jupyter={"source_hidden": true}
import numpy as np
import matplotlib.pyplot as plt
kdensity = np.arange(3, 8.1)
kconv_energies = np.array([float(line.rstrip('\n'))
for line in open('VASP_Outputs/kconv_energy_list.txt')])
f, ax = plt.subplots(1, 2, figsize=(22, 10))
ax[0].plot(kdensity, 1000*abs(kconv_energies), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
ax[0].grid(True)
ax[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax[0].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax[0].set_title("Electronic SCF Energy Convergence wrt K-Point Density",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax[0].margins(0.1)
ax[0].ticklabel_format(useOffset=False)
plt.setp(ax[0].get_yticklabels(), rotation=20)
f.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
ax[1].ticklabel_format(useOffset=False)
ax[1].plot(kdensity[1:], 1000*abs(kconv_energies[1:]), color='red',
marker="s", label="Convergence E0", linewidth=2)
ax[1].grid(True)
ax[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax[1].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax[1].set_title("Electronic SCF Energy Convergence wrt K-Point Density",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax[1].margins(0.1)
plt.setp(ax[1].get_yticklabels(), rotation=20)
plt.show()
# -
# In this case, we see that, for k-densities of 4 and above, the energy converges within a range of 0.01 meV (or $1\times10^{-5}\ eV$). Because we set EDIFF = $1\times 10^{-6}\ eV$, the electronic SCF loop terminates when the energy difference between two successive SCF steps is less than 0.000001 eV, which may be why the convergence doesn't improve beyond a k-density of 6 or 7. To test this, we can reduce EDIFF to $1\times 10^{-8}\ eV$. Trying this:
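# The EDIFF stopping criterion can be illustrated with toy numbers (these are not real VASP energies): the SCF loop stops at the first step whose energy change falls below EDIFF.

```python
EDIFF = 1e-6  # eV, the stopping criterion used above
energies = [-42.0, -42.05, -42.052, -42.0520005]  # made-up successive SCF energies

for step in range(1, len(energies)):
    if abs(energies[step] - energies[step - 1]) < EDIFF:
        break  # self-consistency reached

print(step)  # 3 -- the last change (5e-7 eV) is below EDIFF
```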
# + jupyter={"source_hidden": true}
kdensity = np.arange(3, 8.1)
kconv_energies_newediff = np.array([float(line.rstrip('\n'))
for line in open('VASP_Outputs/kconv_energy_list_newediff.txt')])
f, ax = plt.subplots(1, 2, figsize=(22, 10))
ax[0].plot(kdensity, 1000*abs(kconv_energies_newediff), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
ax[0].grid(True)
ax[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax[0].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax[0].set_title("Electronic SCF Energy Convergence wrt K-Density (New EDIFF)",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax[0].margins(0.1)
ax[0].ticklabel_format(useOffset=False)
plt.setp(ax[0].get_yticklabels(), rotation=20)
f.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
ax[1].ticklabel_format(useOffset=False)
ax[1].plot(kdensity[1:], 1000*abs(kconv_energies_newediff[1:]), color='red',
marker="s", label="Convergence E0", linewidth=2)
ax[1].grid(True)
ax[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax[1].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax[1].set_title("Electronic SCF Energy Convergence wrt K-Density (New EDIFF)",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax[1].margins(0.1)
plt.setp(ax[1].get_yticklabels(), rotation=20)
plt.show()
# -
# No improvement. At this point, the likely limiting factor is the plane wave energy cutoff (set to 520 eV).
# + jupyter={"source_hidden": true}
# %matplotlib inline
kdensity = np.arange(3, 8.1)
kconv_energies_newencut = np.array([float(line.rstrip('\n'))
for line in open('VASP_Outputs/kenconv_energy_list.txt')])
kconv_newencut = np.zeros([6, ])
for i in range(len(kdensity)):  # Selecting energies for ENCUT of 800 eV
    kconv_newencut[i] = kconv_energies_newencut[i*13 + 12]
f2, ax2 = plt.subplots(1, 2, figsize=(22, 10))
ax2[0].plot(kdensity, 1000*abs(kconv_newencut), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
ax2[0].grid(True)
ax2[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax2[0].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax2[0].set_title("Electronic SCF Energy Convergence wrt K-Density (800 eV ENCUT)",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax2[0].margins(0.1)
ax2[0].ticklabel_format(useOffset=False)
plt.setp(ax2[0].get_yticklabels(), rotation=20)
f2.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
ax2[1].ticklabel_format(useOffset=False)
ax2[1].plot(kdensity[1:], 1000*abs(kconv_newencut[1:]), color='red',
marker="s", label="Convergence E0", linewidth=2)
ax2[1].grid(True)
ax2[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax2[1].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax2[1].set_title("Electronic SCF Energy Convergence wrt K-Density (800 eV ENCUT)",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax2[1].margins(0.1)
plt.setp(ax2[1].get_yticklabels(), rotation=20)
plt.show()
# -
# In this case, for k-densities greater than 3 x 3 x 3, the energy converges to within ~0.005 meV (or $5\times10^{-6}\ eV$), a slight improvement on the 0.01 meV obtained with ENCUT = 520 eV, but not by much. It appears that the limiting factor at this point is simply the ability of VASP to converge with the chosen GGA PBEsol potential and (Davidson) optimisation algorithm.
# + jupyter={"source_hidden": true}
ediff = np.array([kconv_energies[i]-kconv_energies[i-1]
for i in range(1, len(kconv_energies))])
f1, ax1 = plt.subplots(1, 2, figsize=(22, 7))
ax1[0].plot(kdensity[1:], np.log10(1000*abs(ediff)),
color='steelblue', marker="s", label="Convergence E0", linewidth=2)
ax1[0].grid(True)
ax1[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax1[0].set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10)
ax1[0].set_title("SCF Energy Difference wrt K-Point Density",
fontsize=20, pad=20) # pad is offset of title from plot
ax1[0].ticklabel_format(useOffset=False)
plt.setp(ax1[0].get_yticklabels(), rotation=20)
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax1[0].margins(0.1)
f1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
ax1[1].plot(kdensity[2:], np.log10(abs(1000*ediff[1:])),
color='red', marker="s", label="Convergence E0", linewidth=2)
ax1[1].grid(True)
ax1[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax1[1].set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10)
ax1[1].set_title("SCF Energy Difference wrt K-Point Density",
fontsize=20, pad=20) # pad is offset of title from plot
ax1[1].ticklabel_format(useOffset=False)
plt.setp(ax1[1].get_yticklabels(), rotation=20)
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax1[1].margins(0.1)
plt.show()
# -
# Note that EDIFF = $1\times 10^{-6}\ eV$ corresponds to $log_{10}(EDIFF) = -6$.
# + jupyter={"source_hidden": true}
ediff = np.array([kconv_energies_newediff[i]-kconv_energies_newediff[i-1]
for i in range(1, len(kconv_energies_newediff))])
f1, ax1 = plt.subplots(1, 2, figsize=(22, 7))
ax1[0].plot(kdensity[1:], np.log10(1000*abs(ediff)),
color='steelblue', marker="s", label="Convergence E0", linewidth=2)
ax1[0].grid(True)
ax1[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax1[0].set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10)
ax1[0].set_title("SCF Energy Difference wrt K-Density (New EDIFF)",
fontsize=20, pad=20) # pad is offset of title from plot
ax1[0].ticklabel_format(useOffset=False)
plt.setp(ax1[0].get_yticklabels(), rotation=20)
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax1[0].margins(0.1)
f1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
ax1[1].plot(kdensity[2:], np.log10(abs(1000*ediff[1:])),
color='red', marker="s", label="Convergence E0", linewidth=2)
ax1[1].grid(True)
ax1[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax1[1].set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10)
ax1[1].set_title("SCF Energy Difference wrt K-Density (New EDIFF)",
fontsize=20, pad=20) # pad is offset of title from plot
ax1[1].ticklabel_format(useOffset=False)
plt.setp(ax1[1].get_yticklabels(), rotation=20)
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax1[1].margins(0.1)
plt.show()
# -
# ### Plane Wave Energy Cutoff Convergence:
# "For ENCUT test, the choice of k-points does not affect the convergence trend of ENCUT. A moderate k-points would be fine."
# Using a Monkhorst-Pack k-grid density of 5 x 5 x 5, the plane wave energy cutoff was varied from 200 eV to 700 eV in 20 eV steps.
# + jupyter={"source_hidden": true}
ecutoff = np.arange(200, 700.5, 20)
econv_energies = np.array([float(line.rstrip('\n'))
for line in open('VASP_Outputs/econv_energy_list.txt')])
fec, axec = plt.subplots(1, 2, figsize=(22, 7))
axec[0].plot(ecutoff, 1000*abs(econv_energies), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
axec[0].grid(True)
axec[0].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
axec[0].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
axec[0].ticklabel_format(useOffset=False)
plt.setp(axec[0].get_yticklabels(), rotation=20)
axec[0].set_title("Electronic SCF Energy Convergence wrt ENCUT",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
axec[0].margins(0.1)
fec.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
axec[1].plot(ecutoff[3:], 1000*abs(econv_energies[3:]), color='red',
marker="s", label="Convergence E0", linewidth=2)
axec[1].grid(True)
axec[1].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
axec[1].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
axec[1].ticklabel_format(useOffset=False)
plt.setp(axec[1].get_yticklabels(), rotation=20)
axec[1].set_title("Electronic SCF Energy Convergence wrt ENCUT",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
axec[1].margins(0.1)
plt.show()
# + jupyter={"source_hidden": true}
ecdiff = np.array([econv_energies[i]-econv_energies[i-1]
for i in range(1, len(econv_energies))])
fec1, axec1 = plt.subplots(1, 1, figsize=(11, 7))
axec1.plot(ecutoff[1:], np.log10(1000*abs(ecdiff)), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
axec1.grid(True)
axec1.set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
axec1.set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10)
axec1.set_title("SCF Energy Difference wrt ENCUT", fontsize=20,
pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
axec1.margins(0.1)
fec1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
axec1.ticklabel_format(useOffset=False)
plt.setp(axec1.get_yticklabels(), rotation=20)
#axec1[1].plot(ecutoff[2:], np.log10(abs(ecdiff[1:])), color='red', marker="s", label="Convergence E0", linewidth = 2)
# axec1[1].grid(True)
#axec1[1].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad = 10)
#axec1[1].set_ylabel("Log_10(|SCF Energy Difference|) [eV]", labelpad = 10)
# axec1[1].set_title("SCF Energy Difference wrt ENCUT", fontsize = 20, pad = 20) # pad is offset of title from plot
# axec1[1].margins(0.1) # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
plt.show()
# -
# The convergence is not as close to log = -5 or -6 as expected. Above 500 eV, the energy converges within a range of 7 meV (i.e. 0.7 meV/atom, which is acceptable). Perhaps above an ENCUT of 500 eV, the k-grid density becomes the limiting factor, so increasing ENCUT no longer helps convergence? It is also interesting that in every single case, the SCF loop took exactly 15 steps...
# Maybe run a 2D convergence test (with larger variations in ENCUT so that it isn't super expensive) -> Vide infra.
# Note that the convergence limit could also just be the limit of VASP's ability (to converge) with the chosen PBEsol potential.
# + jupyter={"source_hidden": true}
ecutoff_newkgrid = np.arange(200, 700.5, 50)
econv_energies_newkgrid = np.array([float(line.rstrip('\n'))
for line in open('VASP_Outputs/econv_energy_list_newkgrid.txt')])
fec, axec = plt.subplots(1, 2, figsize=(22, 7))
axec[0].plot(ecutoff_newkgrid, 1000*abs(econv_energies_newkgrid), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
axec[0].grid(True)
axec[0].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
axec[0].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
axec[0].ticklabel_format(useOffset=False)
plt.setp(axec[0].get_yticklabels(), rotation=20)
axec[0].set_title("Electronic SCF Energy Convergence wrt ENCUT (New k-Grid)",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
axec[0].margins(0.1)
fec.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
axec[1].plot(ecutoff_newkgrid[3:], 1000*abs(econv_energies_newkgrid[3:]), color='red',
marker="s", label="Convergence E0", linewidth=2)
axec[1].grid(True)
axec[1].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
axec[1].set_ylabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
axec[1].ticklabel_format(useOffset=False)
plt.setp(axec[1].get_yticklabels(), rotation=20)
axec[1].set_title("Electronic SCF Energy Convergence wrt ENCUT (New k-Grid)",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
axec[1].margins(0.1)
plt.show()
# -
# With a k-grid density of 8 x 8 x 8, as opposed to 5 x 5 x 5 previously, the energy converges within a range of 7 meV above ENCUT values of 500 eV, constituting essentially no improvement.
# + jupyter={"source_hidden": true}
ecdiff_newkgrid = np.array([econv_energies_newkgrid[i]-econv_energies_newkgrid[i-1]
for i in range(1, len(econv_energies_newkgrid))])
fec1, axec1 = plt.subplots(1, 1, figsize=(11, 7))
axec1.plot(ecutoff_newkgrid[1:], np.log10(1000*abs(ecdiff_newkgrid)), color='steelblue',
marker="s", label="Convergence E0", linewidth=2)
axec1.grid(True)
axec1.set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
axec1.set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10)
axec1.set_title("SCF Energy Difference wrt ENCUT (New k-Grid)", fontsize=20,
pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
axec1.margins(0.1)
fec1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins
axec1.ticklabel_format(useOffset=False)
plt.setp(axec1.get_yticklabels(), rotation=20)
#axec1[1].plot(ecutoff[2:], np.log10(abs(ecdiff[1:])), color='red', marker="s", label="Convergence E0", linewidth = 2)
# axec1[1].grid(True)
#axec1[1].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad = 10)
#axec1[1].set_ylabel("Log_10(|SCF Energy Difference|) [eV]", labelpad = 10)
# axec1[1].set_title("SCF Energy Difference wrt ENCUT", fontsize = 20, pad = 20) # pad is offset of title from plot
# axec1[1].margins(0.1) # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
plt.show()
# -
# ## 2D Convergence Plot
# K-grid densities range from 3 x 3 x 3 to 8 x 8 x 8, with ENCUT varying from 200 to 800 eV in 50 eV increments
# + jupyter={"source_hidden": true}
import altair as alt
import numpy as np
import pandas as pd
kenconv_energies = np.array([float(line.rstrip('\n'))
for line in open('VASP_Outputs/kenconv_energy_list.txt')])*1000
kgrid_density = np.arange(3, 9)
encut_range = np.arange(200, 801, 50)
kenarray = np.zeros((len(encut_range), len(kgrid_density)))
# Array of energy values
for i in range(len(kgrid_density)):
    kenarray[:, i] = kenconv_energies[i*len(encut_range):(i+1)*len(encut_range)]
x, y = np.meshgrid(kgrid_density, encut_range)
# Convert this grid to columnar data expected by Altair
source = pd.DataFrame({'K-Grid Density': x.ravel(),
'Plane Wave Energy Cutoff (eV)': y.ravel(),
'Electronic SCF Energy (meV)': kenarray.ravel()})
alt.Chart(source,title = '2D Electronic SCF Energy Convergence Plot').mark_rect().encode(
x='K-Grid Density:O',
y=alt.Y("Plane Wave Energy Cutoff (eV)", sort='descending', type='ordinal'),
color='Electronic SCF Energy (meV):Q',
)
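# The column-slicing loop above is equivalent to a single reshape-and-transpose; a quick check with synthetic data standing in for the VASP energies:

```python
import numpy as np

n_encut, n_k = 13, 6  # 13 ENCUT values x 6 k-grid densities, as above
flat = np.arange(n_encut * n_k, dtype=float)  # stand-in for kenconv_energies

by_loop = np.zeros((n_encut, n_k))
for i in range(n_k):
    by_loop[:, i] = flat[i * n_encut:(i + 1) * n_encut]

by_reshape = flat.reshape(n_k, n_encut).T
print(np.array_equal(by_loop, by_reshape))  # True
```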
# + jupyter={"source_hidden": true}
source1 = pd.DataFrame({'K-Grid Density': x.ravel()[len(kgrid_density)*2:],
'Plane Wave Energy Cutoff (eV)': y.ravel()[len(kgrid_density)*2:],
'Electronic SCF Energy (meV)': kenarray.ravel()[len(kgrid_density)*2:]})
alt.Chart(source1,title = '2D Electronic SCF Energy Convergence Plot (Adjusted Scale)').mark_rect().encode(
x='K-Grid Density:O',
y=alt.Y("Plane Wave Energy Cutoff (eV)", sort='descending', type='ordinal'),
color='Electronic SCF Energy (meV):Q',
)
# -
# It seems that the plane wave energy cutoff has a much more significant effect on convergence than the k-grid density. For the k-grid density, a grid of 2 x 2 x 2 is too small for the SCF energy to be calculated (i.e. to become 'self-consistent' within the NELM limit of 100 electronic SCF loops), and 3 x 3 x 3 is approximately 1.4 meV off the converged energy value for higher k-densities (see 'K-Point Convergence' above), but k-grid densities above 4 x 4 x 4 do not significantly improve the convergence of the final energy.
# Hence, it is concluded that a k-density of 5 x 5 x 5 (or 4 x 4 x 4) and an ENCUT of 500 eV or more are sufficient to obtain a converged final energy from VASP for $Cs_2 AgSbBr_6$.
# + jupyter={"source_hidden": true}
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
# %matplotlib widget
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
# Plot the surface.
surf = ax.plot_surface(x, y, kenarray, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.grid(True)
ax.set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax.set_zlabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax.set_ylabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
ax.set_title("2D Electronic SCF Energy Convergence wrt K-Density & ENCUT",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax.margins(0.1)
ax.ticklabel_format(useOffset=False)
plt.setp(ax.get_yticklabels(), rotation=20)
fig.subplots_adjust(bottom=0.3, top=0.85)  # Adjusting specific margins (fig, not the earlier figure f)
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5, label = 'Final Energy [meV]')
plt.show()
# + jupyter={"source_hidden": true}
# %matplotlib widget
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
# Plot the surface.
surf = ax.plot_surface(x[2:], y[2:], kenarray[2:], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.grid(True)
ax.set_xlabel("K-Grid Density (i x i x i)", labelpad=10)
ax.set_zlabel("|Electronic Ground State SCF Energy| [meV]", labelpad=10)
ax.set_ylabel("Plane Wave Energy Cutoff [eV]", labelpad=10)
ax.set_title("2D Electronic SCF Energy Convergence wrt K-Density & ENCUT",
fontsize=20, pad=20) # pad is offset of title from plot
# Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph)
ax.margins(0.1)
ax.ticklabel_format(useOffset=False)
plt.setp(ax.get_yticklabels(), rotation=20)
fig.subplots_adjust(bottom=0.3, top=0.85)  # Adjusting specific margins (fig, not the earlier figure f)
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5, format = '%.0f', label = 'Final Energy [meV]')
plt.show()
# -
# Also, try with the other structures available on MP (i.e. the larger unit cell versions etc.)
# Also, try deliberately distorting the structure, inspecting it in VESTA, then running the structural optimisation and inspecting it again (to see whether it matches the result obtained when starting the relaxation from the undistorted POSCAR).
| VASP Convergence Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import cv2
import matplotlib.pyplot as plt
import pynq.lib.dma
from pynq import DefaultHierarchy, Overlay, Xlnk
# -
class MemoryDriver(DefaultHierarchy):
    def __init__(self, description):
        super().__init__(description)

    def execute(self, img):
        # Allocate physically contiguous buffers for the DMA transfer
        in_buffer = Xlnk().cma_array(
            shape=(img.shape[0] * img.shape[1]), dtype=np.float32)
        out_buffer = Xlnk().cma_array(
            shape=(img.shape[0] * img.shape[1]), dtype=np.float32)
        # Copy the flattened image into the input buffer
        for i, v in enumerate(np.reshape(img, (img.shape[0] * img.shape[1]))):
            in_buffer[i] = v
        self.axi_dma_0.sendchannel.transfer(in_buffer)
        self.axi_dma_0.recvchannel.transfer(out_buffer)
        self.axi_dma_0.sendchannel.wait()
        self.axi_dma_0.recvchannel.wait()
        result = np.reshape(out_buffer, img.shape)
        return result

    @staticmethod
    def checkhierarchy(description):
        return 'axi_dma_0' in description['ip']
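# The element-by-element copy in `execute()` can be replaced by a single vectorised assignment; the flatten/reshape round-trip itself (ignoring the hardware filter, which is not modelled here) is easy to verify in plain NumPy:

```python
import numpy as np

img = np.arange(12, dtype=np.float32).reshape(3, 4)

# vectorised equivalent of the enumerate() copy loop in execute()
in_buffer = np.empty(img.size, dtype=np.float32)
in_buffer[:] = img.ravel()

# what execute() does to out_buffer on the way back
restored = np.reshape(in_buffer, img.shape)
print(np.array_equal(img, restored))  # True
```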
overlay = Overlay('mean_f.bit')
img = cv2.imread('lena.png', 0)
plt.figure(figsize=(4, 4))
plt.imshow(img, 'gray')
# %%time
result = overlay.memory.execute(img)
plt.figure(figsize=(4, 4))
plt.imshow(result, 'gray')
| Mean-Single-Convolution/MEAN_IP_FLOAT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:simulate_expression_compendia] *
# language: python
# name: conda-env-simulate_expression_compendia-py
# ---
# # Human sample level analysis
#
# Main notebook to run sample-level simulation experiment using human gene expression data from recount2.
#
# Make sure to run ```download_data.R``` first to download the raw data
# +
# %load_ext autoreload
# %autoreload 2
# %load_ext rpy2.ipython
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
import os
import sys
import ast
import pandas as pd
import numpy as np
import random
import subprocess
from plotnine import (ggplot,
labs,
geom_line,
geom_point,
geom_errorbar,
aes,
ggsave,
theme_bw,
theme,
facet_wrap,
scale_color_manual,
guides,
guide_legend,
element_blank,
element_text,
element_rect,
element_line,
coords)
from sklearn.decomposition import PCA
import warnings
warnings.filterwarnings(action='ignore')
from simulate_expression_compendia_modules import pipeline
from ponyo import utils, train_vae_modules
from numpy.random import seed
randomState = 123
seed(randomState)
# -
# Read in config variables
base_dir = os.path.abspath(os.path.join(os.getcwd(),"../"))
config_file = os.path.abspath(os.path.join(base_dir,
"configs",
"config_test_human_sample_limma.tsv"))
params = utils.read_config(config_file)
# Load parameters
local_dir = params["local_dir"]
dataset_name = params['dataset_name']
analysis_name = params["simulation_type"]
correction_method = params["correction_method"]
lst_num_experiments = params["lst_num_experiments"]
train_architecture = params['NN_architecture']
# Input files
rpkm_data_file = os.path.join(
base_dir,
dataset_name,
"data",
"input",
"recount2_gene_RPKM_data_test.tsv")
assert os.path.exists(rpkm_data_file)
# ## Setup directories
utils.setup_dir(config_file)
# ## Pre-process data
# Output file
normalized_data_file = os.path.join(
base_dir,
dataset_name,
"data",
"input",
"recount2_gene_normalized_data_test.tsv.xz")
# Only run the pre-processing step if the normalized data file has NOT already been created
#if os.path.exists(normalized_data_file) == False:
train_vae_modules.normalize_expression_data(base_dir,
config_file,
rpkm_data_file,
normalized_data_file)
# ## Train VAE
# Directory containing log information from VAE training
vae_log_dir = os.path.join(
base_dir,
dataset_name,
"logs",
train_architecture)
# Train VAE
train_vae_modules.train_vae(config_file,
normalized_data_file)
# ## Run simulation experiment without noise correction
# Run simulation without correction
corrected = False
pipeline.run_simulation(config_file,
normalized_data_file,
corrected)
# ## Run simulation with correction applied
# Run simulation with correction
corrected = True
pipeline.run_simulation(config_file,
normalized_data_file,
corrected)
# ## Make figures
pca_ind = [0,1,2]
# +
# File directories
similarity_uncorrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_svcca_uncorrected_" + correction_method + ".pickle")
ci_uncorrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_ci_uncorrected_" + correction_method + ".pickle")
similarity_corrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_svcca_corrected_" + correction_method + ".pickle")
ci_corrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_ci_corrected_" + correction_method + ".pickle")
permuted_score_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_permuted.npy")
compendia_dir = os.path.join(
local_dir,
"experiment_simulated",
dataset_name + "_" + analysis_name)
# +
# Output files
svcca_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_svcca_"+correction_method+".svg")
svcca_png_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_svcca_"+correction_method+".png")
pca_uncorrected_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_pca_uncorrected_"+correction_method+".svg")
pca_corrected_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_pca_corrected_"+correction_method+".svg")
# +
# Load pickled files
uncorrected_svcca = pd.read_pickle(similarity_uncorrected_file)
err_uncorrected_svcca = pd.read_pickle(ci_uncorrected_file)
corrected_svcca = pd.read_pickle(similarity_corrected_file)
err_corrected_svcca = pd.read_pickle(ci_corrected_file)
permuted_score = np.load(permuted_score_file)
# -
# Concatenate error bars
uncorrected_svcca_err = pd.concat([uncorrected_svcca, err_uncorrected_svcca], axis=1)
corrected_svcca_err = pd.concat([corrected_svcca, err_corrected_svcca], axis=1)
# Add group label
uncorrected_svcca_err['Group'] = 'uncorrected'
corrected_svcca_err['Group'] = 'corrected'
# Concatenate dataframes
all_svcca = pd.concat([uncorrected_svcca_err, corrected_svcca_err])
all_svcca
# ### SVCCA
# +
# Plot
lst_num_partitions = list(all_svcca.index)
threshold = pd.DataFrame(
    np.tile(
        permuted_score,
        (len(lst_num_partitions), 1)),
    index=lst_num_partitions,
    columns=['score'])
panel_A = ggplot(all_svcca) \
+ geom_line(all_svcca,
aes(x=lst_num_partitions, y='score', color='Group'),
size=1.5) \
+ geom_point(aes(x=lst_num_partitions, y='score'),
color ='darkgrey',
size=0.5) \
+ geom_errorbar(all_svcca,
aes(x=lst_num_partitions, ymin='ymin', ymax='ymax'),
color='darkgrey') \
+ geom_line(threshold,
aes(x=lst_num_partitions, y='score'),
linetype='dashed',
size=1,
color="darkgrey",
show_legend=False) \
+ labs(x = "Number of Partitions",
y = "Similarity score (SVCCA)",
title = "Similarity across varying numbers of partitions") \
+ theme(
plot_background=element_rect(fill="white"),
panel_background=element_rect(fill="white"),
panel_grid_major_x=element_line(color="lightgrey"),
panel_grid_major_y=element_line(color="lightgrey"),
axis_line=element_line(color="grey"),
legend_key=element_rect(fill='white', colour='white'),
legend_title=element_text(family='sans-serif', size=15),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
) \
    + scale_color_manual(['#1976d2', '#b3e5fc'])
print(panel_A)
ggsave(plot=panel_A, filename=svcca_file, device="svg", dpi=300)
ggsave(plot=panel_A, filename=svcca_png_file, device="png", dpi=300)
# -
# ### Uncorrected PCA
# +
lst_num_experiments = [lst_num_experiments[i] for i in pca_ind]
all_data_df = pd.DataFrame()
# Get batch 1 data
experiment_1_file = os.path.join(
compendia_dir,
"Experiment_1_0.txt.xz")
experiment_1 = pd.read_table(
experiment_1_file,
header=0,
index_col=0,
sep='\t')
for i in lst_num_experiments:
print('Plotting PCA of 1 experiment vs {} experiments...'.format(i))
# Simulated data with all samples in a single batch
original_data_df = experiment_1.copy()
# Add grouping column for plotting
original_data_df['num_experiments'] = '1'
# Get data with additional batch effects added
experiment_other_file = os.path.join(
compendia_dir,
"Experiment_"+str(i)+"_0.txt.xz")
experiment_other = pd.read_table(
experiment_other_file,
header=0,
index_col=0,
sep='\t')
# Simulated data with i batch effects
experiment_data_df = experiment_other
# Add grouping column for plotting
experiment_data_df['num_experiments'] = 'multiple'
# Concatenate datasets together
combined_data_df = pd.concat([original_data_df, experiment_data_df])
# PCA projection
pca = PCA(n_components=2)
# Encode expression data into 2D PCA space
combined_data_numeric_df = combined_data_df.drop(['num_experiments'], axis=1)
combined_data_PCAencoded = pca.fit_transform(combined_data_numeric_df)
combined_data_PCAencoded_df = pd.DataFrame(combined_data_PCAencoded,
index=combined_data_df.index,
columns=['PC1', 'PC2']
)
# Variance explained
print(pca.explained_variance_ratio_)
# Add back in batch labels (i.e. labels = "batch_"<how many batch effects were added>)
combined_data_PCAencoded_df['num_experiments'] = combined_data_df['num_experiments']
    # Add column that designates which batch effect comparison (i.e. comparison of 1 batch vs 5 batches
    # is represented by label = 5)
combined_data_PCAencoded_df['comparison'] = str(i)
# Concatenate ALL comparisons
all_data_df = pd.concat([all_data_df, combined_data_PCAencoded_df])
# +
# Convert 'num_experiments' into categories to preserve the ordering
lst_num_experiments_str = [str(i) for i in lst_num_experiments]
num_experiments_cat = pd.Categorical(all_data_df['num_experiments'], categories=['1', 'multiple'])
# Convert 'comparison' into categories to preserve the ordering
comparison_cat = pd.Categorical(all_data_df['comparison'], categories=lst_num_experiments_str)
# Assign to a new column in the df
all_data_df = all_data_df.assign(num_experiments_cat = num_experiments_cat)
all_data_df = all_data_df.assign(comparison_cat = comparison_cat)
# -
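# A minimal illustration of why the `pd.Categorical` conversion above matters: plain string sorting would put '10' before '2', while an ordered Categorical preserves the intended numeric ordering. (This toy example uses made-up labels, not the notebook's data.)

```python
import pandas as pd

# An ordered Categorical sorts by category order, not alphabetically
s = pd.Series(pd.Categorical(['10', '2', '1'], categories=['1', '2', '10']))
print(s.sort_values().tolist())  # → ['1', '2', '10']
```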
all_data_df.columns = ['PC1', 'PC2', 'num_experiments', 'comparison', 'No. of experiments', 'Comparison']
# +
# Plot all comparisons in one figure
panel_B = ggplot(all_data_df[all_data_df['Comparison'] != '1'],
aes(x='PC1', y='PC2')) \
+ geom_point(aes(color='No. of experiments'),
alpha=0.2) \
+ facet_wrap('~Comparison') \
+ labs(x = "PC 1",
y = "PC 2",
title = "PCA of experiment 1 vs multiple experiments") \
+ theme_bw() \
+ theme(
legend_title_align = "center",
plot_background=element_rect(fill='white'),
legend_key=element_rect(fill='white', colour='white'),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
) \
+ guides(colour=guide_legend(override_aes={'alpha': 1})) \
+ scale_color_manual(['#bdbdbd', '#b3e5fc']) \
+ geom_point(data=all_data_df[all_data_df['Comparison'] == '1'],
alpha=0.1,
color='#bdbdbd')
print(panel_B)
ggsave(plot=panel_B, filename=pca_uncorrected_file, dpi=300)
# -
# ### Corrected PCA
# +
lst_num_experiments = [lst_num_experiments[i] for i in pca_ind]
all_corrected_data_df = pd.DataFrame()
# Get batch 1 data
experiment_1_file = os.path.join(
compendia_dir,
"Experiment_corrected_1_0.txt.xz")
experiment_1 = pd.read_table(
experiment_1_file,
header=0,
index_col=0,
sep='\t')
# Transpose data to df: sample x gene
experiment_1 = experiment_1.T
for i in lst_num_experiments:
print('Plotting PCA of 1 experiment vs {} experiments...'.format(i))
# Simulated data with all samples in a single batch
original_data_df = experiment_1.copy()
# Match format of column names in before and after df
original_data_df.columns = original_data_df.columns.astype(str)
# Add grouping column for plotting
original_data_df['num_experiments'] = '1'
# Get data with additional batch effects added and corrected
experiment_other_file = os.path.join(
compendia_dir,
"Experiment_corrected_"+str(i)+"_0.txt.xz")
experiment_other = pd.read_table(
experiment_other_file,
header=0,
index_col=0,
sep='\t')
# Transpose data to df: sample x gene
experiment_other = experiment_other.T
# Simulated data with i batch effects that are corrected
experiment_data_df = experiment_other
# Match format of column names in before and after df
experiment_data_df.columns = experiment_data_df.columns.astype(str)
# Add grouping column for plotting
experiment_data_df['num_experiments'] = 'multiple'
# Concatenate datasets together
combined_data_df = pd.concat([original_data_df, experiment_data_df])
# PCA projection
pca = PCA(n_components=2)
# Encode expression data into 2D PCA space
combined_data_numeric_df = combined_data_df.drop(['num_experiments'], axis=1)
combined_data_PCAencoded = pca.fit_transform(combined_data_numeric_df)
combined_data_PCAencoded_df = pd.DataFrame(combined_data_PCAencoded,
index=combined_data_df.index,
columns=['PC1', 'PC2']
)
# Add back in batch labels (i.e. labels = "batch_"<how many batch effects were added>)
combined_data_PCAencoded_df['num_experiments'] = combined_data_df['num_experiments']
    # Add column that designates which batch effect comparison (i.e. comparison of 1 batch vs 5 batches
    # is represented by label = 5)
combined_data_PCAencoded_df['comparison'] = str(i)
# Concatenate ALL comparisons
all_corrected_data_df = pd.concat([all_corrected_data_df, combined_data_PCAencoded_df])
# +
# Convert 'num_experiments' into categories to preserve the ordering
lst_num_experiments_str = [str(i) for i in lst_num_experiments]
num_experiments_cat = pd.Categorical(all_corrected_data_df['num_experiments'], categories=['1', 'multiple'])
# Convert 'comparison' into categories to preserve the ordering
comparison_cat = pd.Categorical(all_corrected_data_df['comparison'], categories=lst_num_experiments_str)
# Assign to a new column in the df
all_corrected_data_df = all_corrected_data_df.assign(num_experiments_cat = num_experiments_cat)
all_corrected_data_df = all_corrected_data_df.assign(comparison_cat = comparison_cat)
# -
all_corrected_data_df.columns = ['PC1', 'PC2', 'num_experiments', 'comparison', 'No. of experiments', 'Comparison']
# +
# Plot all comparisons in one figure
panel_C = ggplot(all_corrected_data_df[all_corrected_data_df['Comparison'] != '1'],
aes(x='PC1', y='PC2')) \
+ geom_point(aes(color='No. of experiments'),
alpha=0.2) \
+ facet_wrap('~Comparison') \
+ labs(x = "PC 1",
y = "PC 2",
title = "PCA of experiment 1 vs multiple experiments") \
+ theme_bw() \
+ theme(
legend_title_align = "center",
plot_background=element_rect(fill='white'),
legend_key=element_rect(fill='white', colour='white'),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
)\
+ guides(colour=guide_legend(override_aes={'alpha': 1})) \
+ scale_color_manual(['#bdbdbd', '#1976d2']) \
+ geom_point(data=all_corrected_data_df[all_corrected_data_df['Comparison'] == '1'],
alpha=0.1,
color='#bdbdbd')
print(panel_C)
ggsave(plot=panel_C, filename=pca_corrected_file, dpi=300)
# -
| Human_tests/Human_sample_limma.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# ## Library
# +
import platform
import os
import random
from sklearn.metrics import f1_score, classification_report
import efficientnet
import efficientnet.tfkeras as efn
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import sklearn
import matplotlib
import matplotlib.pyplot as plt
# +
SEED = 42
os.environ['PYTHONHASHSEED']=str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
# + tags=[]
print('Python version:', platform.python_version())
print('Tensorflow Version:', tf.__version__)
print('Tensorflow Addons Version:', tfa.__version__)
print('Efficientnet Version:', efficientnet.__version__)
print('Numpy Version:', np.__version__)
print('Matplotlib Version:', matplotlib.__version__)
# -
# ## Dataset
def fft(image):
fourier = []
for i in range(image.shape[2]):
img = image[:, :, i]
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
        magnitude_spectrum = 20 * np.log(np.abs(fshift) + 1e-8)  # small epsilon avoids log(0)
fourier.append(magnitude_spectrum)
fourier = (np.dstack(fourier)).astype(np.uint8)
return fourier
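# A tiny self-contained check of what `np.fft.fftshift` does in the helper above: for a constant image, all spectral energy sits in the zero-frequency (DC) term, which `fftshift` moves from the corner to the center of the spectrum.

```python
import numpy as np

# fft2 of a constant 4x4 image puts all energy in the DC term at index [0, 0];
# fftshift relocates it to the center of the array, at index [2, 2].
f = np.fft.fft2(np.ones((4, 4)))
print(np.abs(f)[0, 0])                   # 16.0 (sum of all pixels)
print(np.abs(np.fft.fftshift(f))[2, 2])  # 16.0, now at the center
```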
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# +
len_x_train = len(x_train)
len_x_test = len(x_test)
x_train_fft = x_train.copy()
x_test_fft = x_test.copy()
for i in range(len_x_train):
x_train_fft[i] = fft(x_train[i])
for i in range(len_x_test):
x_test_fft[i] = fft(x_test[i])
# -
# ## Check fourier transform image
def show_fft(image):
plt.figure()
plt.imshow(image, cmap='gray')
plt.title('Fourier Transform image')
plt.xticks([]), plt.yticks([])
plt.show()
show_fft(x_train_fft[np.random.randint(0, len_x_train)])
show_fft(x_train_fft[np.random.randint(0, len_x_train)])
show_fft(x_train_fft[np.random.randint(0, len_x_train)])
# ## Configure model
# +
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Activation, BatchNormalization, Dense, Concatenate
def get_efn0(name='efn0'):
efn0 = efn.EfficientNetB0(
input_shape=(32, 32, 3), include_top=False,
weights='noisy-student', pooling='max'
)
for layer in efn0.layers:
layer._name = f'{layer._name}_{name}'
return efn0
# + tags=[]
model_image = get_efn0('image')
model_fft = get_efn0('fft')
concatenate = Concatenate(name='concatenate')([model_image.output, model_fft.output])
dense_a = Dense(512)(concatenate)
bn_a = BatchNormalization()(dense_a)
act_a = Activation('relu')(bn_a)
dense_b = Dense(128)(act_a)
bn_b = BatchNormalization()(dense_b)
act_b = Activation('relu')(bn_b)
output = Dense(10, activation='softmax', name='output')(act_b)
model = Model(inputs=[model_image.input, model_fft.input], outputs=output)
model.compile(
optimizer=tfa.optimizers.RectifiedAdam(
lr=0.005,
total_steps=50,
warmup_proportion=0.1,
min_lr=0.0005,
),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
model.summary()
# -
# ## Train model
# + tags=[]
model.fit(
[x_train, x_train_fft], y_train,
batch_size=500, epochs=50, verbose=1
)
# -
# ## Test model
y_pred = model.predict([x_test, x_test_fft])
y_pred = np.argmax(y_pred, axis=-1)
# + tags=[]
f1 = f1_score(y_test, y_pred, average='weighted')
print('Weighted F1 Score:', f1)
# + tags=[]
print('Classification Report:')
print(classification_report(y_test, y_pred))
| hybrid-mlp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# prerequisite package imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
# %matplotlib inline
from solutions_univ import histogram_solution_1
# -
# We'll continue working with the Pokémon dataset in this workspace.
pokemon = pd.read_csv('./data/pokemon.csv')
pokemon.head()
# **Task**: Pokémon have a number of different statistics that describe their combat capabilities. Here, create a _histogram_ that depicts the distribution of 'special-defense' values taken. **Hint**: Try playing around with different bin width sizes to see what best depicts the data.
# YOUR CODE HERE
plt.hist(data=pokemon, x='special-defense')
# +
plt.figure(figsize=[10,5])
plt.subplot(1,2,1)
bin_edges = np.arange(0, pokemon['special-defense'].max()+10, 10)
plt.hist(data=pokemon, x='special-defense', bins=bin_edges, color='blue')
plt.subplot(1,2,2)
plt.hist(data=pokemon, x='special-defense', color='orange')
# -
sb.distplot(pokemon['special-defense'])
# run this cell to check your work against ours
histogram_solution_1()
pokemon[pokemon['special-defense'] >= 200]
| Histogram_Practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python3
# ---
# # **Space X Launch Sites Locations Analysis with Folium**
#
# The launch success rate may depend on many factors such as payload mass, orbit type, and so on. It may also depend on the location and proximities of a launch site, i.e., the initial position of rocket trajectories. Finding an optimal location for building a launch site certainly involves many factors and hopefully we could discover some of the factors by analyzing the existing launch site locations.
#
# ## Objectives
#
# This notebook contains the following tasks:
#
# * Mark all launch sites on a map
# * Mark the success/failed launches for each site on the map
# * Calculate the distances between a launch site to its proximities
#
# Once done, we should be able to find some geographical patterns about launch sites.
#
import sys
# Try to import folium; install it with pip if it's missing
try:
import folium
except ImportError as e:
# !{sys.executable} -m pip install folium
import folium
# Import folium MarkerCluster plugin
from folium.plugins import MarkerCluster
# Import folium MousePosition plugin
from folium.plugins import MousePosition
# ### Marking all launch sites on a map
#
# First, let's try to add each site's location on a map using its latitude and longitude coordinates
#
# The following dataset with the name `spacex_launch_geo.csv` is an augmented dataset with latitude and longitude added for each site.
#
# +
import pandas as pd
import numpy as np
spacex_csv_file = 'spacex_launch_geo.csv'
spacex_df=pd.read_csv(spacex_csv_file)
# -
# Now, you can take a look at what are the coordinates for each site.
#
spacex_df = spacex_df[['Launch Site', 'Lat', 'Long', 'class']]
launch_sites_df = spacex_df.groupby(['Launch Site'], as_index=False).first()
launch_sites_df = launch_sites_df[['Launch Site', 'Lat', 'Long']]
launch_sites_df
# The coordinates above are just plain numbers that cannot give you any intuitive insight into where those launch sites are. If you are very good at geography, you can interpret those numbers directly in your mind. If not, that's fine too. Let's visualize those locations by pinning them on a map.
#
# We first need to create a folium `Map` object, with an initial center location to be NASA Johnson Space Center at Houston, Texas.
#
# Start location is NASA Johnson Space Center
nasa_coordinate = [29.559684888503615, -95.0830971930759]
site_map = folium.Map(location=nasa_coordinate, zoom_start=10)
# We could use `folium.Circle` to add a highlighted circle area with a text label on a specific coordinate. For example,
#
# +
from folium.features import DivIcon
# Create an orange circle at NASA Johnson Space Center's coordinate with a popup label showing its name
circle = folium.Circle(nasa_coordinate, radius=1000, color='#d35400', fill=True).add_child(folium.Popup('NASA Johnson Space Center'))
# Create a marker at the same coordinate with a text-label icon showing its name
marker = folium.map.Marker(
nasa_coordinate,
# Creates an icon as a text label
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % 'NASA JSC',
)
)
site_map.add_child(circle)
site_map.add_child(marker)
# -
# and you should find a small orange circle near the city of Houston; zoom in to see it at a larger size.
#
# Now, let's add a circle for each launch site in the data frame `launch_sites_df`
#
# ### Creating and adding a `folium.Circle` and `folium.Marker` for each launch site on the site map
#
# +
# Initial the map
site_map = folium.Map(location=nasa_coordinate, zoom_start=5)
# For each launch site, add a Circle object based on its coordinate (Lat, Long) values. In addition, add Launch site name as a popup label
for index, site in launch_sites_df.iterrows():
circle = folium.Circle([site['Lat'], site['Long']], color='#d35400', radius=50, fill=True).add_child(folium.Popup(site['Launch Site']))
marker = folium.Marker(
[site['Lat'], site['Long']],
# Create an icon as a text label
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % site['Launch Site'],
)
)
site_map.add_child(circle)
site_map.add_child(marker)
site_map
# -
# * Launch Sites are in very close proximity to the coast
# ### Marking the success/failed launches for each site on the map
#
# Next, we try to enhance the map by adding the launch outcomes for each site, and see which sites have high success rates.
# Recall that data frame spacex_df has detailed launch records, and the `class` column indicates if this launch was successful or not
#
spacex_df.tail(10)
# Next, let's create markers for all launch records.
# If a launch was successful `(class=1)`, then we use a green marker and if a launch was failed, we use a red marker `(class=0)`
#
# Note that a launch only happens in one of the four launch sites, which means many launch records will have the exact same coordinate. Marker clusters can be a good way to simplify a map containing many markers having the same coordinate.
#
# + tags=[]
marker_cluster = MarkerCluster()
# -
# Creating a new column in the `spacex_df` data frame called `marker_color` to store the marker colors based on the `class` value
#
# +
def assign_marker_color(launch_outcome):
if launch_outcome == 1:
return 'green'
else:
return 'red'
spacex_df['marker_color'] = spacex_df['class'].apply(assign_marker_color)
spacex_df.tail(10)
# -
# #### For each launch result in `spacex_df` data frame, Adding a `folium.Marker` to `marker_cluster`
#
# +
site_map.add_child(marker_cluster)
for index, record in spacex_df.iterrows():
marker = folium.Marker([record['Lat'], record['Long']],
icon=folium.Icon(color='white', icon_color=record['marker_color']))
marker_cluster.add_child(marker)
site_map
# -
# #### Calculating the distances between a launch site to its proximities
#
# Next, we need to explore and analyze the proximities of launch sites.
#
# +
formatter = "function(num) {return L.Util.formatNum(num, 5);};"
mouse_position = MousePosition(
position='topright',
separator=' Long: ',
empty_string='NaN',
lng_first=False,
num_digits=20,
prefix='Lat:',
lat_formatter=formatter,
lng_formatter=formatter,
)
site_map.add_child(mouse_position)
site_map
# -
# We can calculate the distance between two points on the map from their `Lat` and `Long` values using the haversine formula:
#
# +
from math import sin, cos, sqrt, atan2, radians
def calculate_distance(lat1, lon1, lat2, lon2):
# approximate radius of earth in km
R = 6373.0
lat1 = radians(lat1)
lon1 = radians(lon1)
lat2 = radians(lat2)
lon2 = radians(lon2)
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
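# As a quick sanity check of the formula above (re-stated inline so this cell runs standalone): one degree of latitude should come out to roughly 111 km.

```python
from math import sin, cos, sqrt, atan2, radians

R = 6373.0  # approximate Earth radius in km, matching the function above
lat1, lon1, lat2, lon2 = map(radians, [0.0, 0.0, 1.0, 0.0])
a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
d = R * 2 * atan2(sqrt(a), sqrt(1 - a))
print(round(d, 1))  # ≈ 111.2 km for one degree of latitude
```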
# +
#Work out distance to coastline
coordinates = [
[28.56342, -80.57674],
[28.56342, -80.56756]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.56342, -80.56794],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
# +
#Distance to Highway
coordinates = [
[28.56342, -80.57674],
[28.411780, -80.820630]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.411780, -80.820630],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#252526;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
# +
#Distance to Florida City
coordinates = [
[28.56342, -80.57674],
[28.5383, -81.3792]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.5383, -81.3792],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#252526;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
# -
# Some questions and answers about our map data
#
# * Are launch sites in close proximity to railways? No
# * Are launch sites in close proximity to highways? No
# * Are launch sites in close proximity to coastline? Yes
# * Do launch sites keep certain distance away from cities? Yes
# #### Author : <NAME>
| launch_site_locations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Polynomial Fit Plot with Regression Transform
#
# This example shows how to overlay data with multiple fitted polynomials using the regression transform.
# +
import numpy as np
import pandas as pd
import altair as alt
# Generate some random data
rng = np.random.RandomState(1)
x = rng.rand(40) ** 2
y = 10 - 1.0 / (x + 0.1) + rng.randn(40)
source = pd.DataFrame({"x": x, "y": y})
# Define the degree of the polynomial fits
degree_list = [1, 3, 5]
base = alt.Chart(source).mark_circle(color="black").encode(
alt.X("x"), alt.Y("y")
)
polynomial_fit = [
base.transform_regression(
"x", "y", method="poly", order=order, as_=["x", str(order)]
)
.mark_line()
.transform_fold([str(order)], as_=["degree", "y"])
.encode(alt.Color("degree:N"))
for order in degree_list
]
alt.layer(base, *polynomial_fit)
| doc/gallery/poly_fit_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df = pd.read_excel('./dataFolders/Output/MothChart-LightLevel-FlowerShape.xlsx', sheet_name='Sheet1_copy_forIncidenceOfFligh')
df.head()
# +
# remove all data after 2018_10_23 # date after which we started collecting "data" for final paper
def f(x):
    # Truncate at the first row dated 2018_10_23 (inclusive); if that date
    # never occurs, return the frame unchanged.
    m = x['Date'].eq('2018_10_23')
    if m.any():
        x = x.loc[:m.idxmax()]
    return x
# -
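# A standalone illustration of the truncation logic above (with a made-up mini frame): `.loc[:m.idxmax()]` keeps rows up to and including the FIRST row matching the cutoff date.

```python
import pandas as pd

demo = pd.DataFrame({'Date': ['2018_10_21', '2018_10_22', '2018_10_23', '2018_10_24']})
m = demo['Date'].eq('2018_10_23')
truncated = demo.loc[:m.idxmax()] if m.any() else demo
print(truncated['Date'].tolist())  # → ['2018_10_21', '2018_10_22', '2018_10_23']
```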
new_df = f(df)
new_df.iloc[-1,:]
subset = new_df.dropna(subset=['Light Level'], axis=0)
subset = subset.dropna(subset=['Total trials'], axis=0)
dataset = subset.loc[:, ['Animal Name', 'Total trials', 'Successful trials']].copy()
# +
tt = dataset['Animal Name'].str.split('_', n = 2, expand = True)
dataset['lightLevel'] = tt[0]
dataset['lightLevel'] = dataset['lightLevel'].str[1:]
dataset['flowerType'] = tt[1]
dataset[['Total trials', 'Successful trials']] = dataset[['Total trials', 'Successful trials']].astype('int')
dataset.head()
# -
dataset.loc[dataset['Total trials'] == 0, 'incidence'] = False
dataset.loc[dataset['Total trials'] > 0, 'incidence'] = True
# +
l = []
total = []
flew = []
for ll, grp in dataset.groupby('lightLevel'):
    total_moths = len(grp)
    moth_that_flew = grp.incidence.sum()
    l.append(ll)
    total.append(total_moths)
    flew.append(moth_that_flew)
summary = pd.DataFrame({'lightLevel': l,
'totalMoths': total,
'mothThatFlew': flew})
summary['fraction_flew']=summary.mothThatFlew/summary.totalMoths
# -
summary['lightLevel'] = summary.lightLevel.astype('float')
sorted_summary = summary.sort_values(by=['lightLevel'])
sorted_summary
trans = sorted_summary.drop(axis=0, index=7)
trans
# +
import matplotlib.pyplot as plt
f = plt.figure(figsize = (3.5, 3.5))
plt.plot(np.log10(trans.lightLevel), trans.fraction_flew, 'o-')
plt.xticks(np.log10(trans.lightLevel).values, ['0.03', '0.06', '0.10', '0.30', '3.00', '50.00', '300.00'])
plt.xlabel('log10 (light level) in lux ')
plt.ylabel('Flight incidence')
f.savefig('./dataFolders/Output/Step6_v4/Figure/FlightIncidence_beforeExperimentStarted.pdf')
# -
np.log10(trans.lightLevel).values
| Step00-IncidenceFlightAtDifferentLightLevels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ebisu howto
# A quick introduction to using the library to schedule spaced-repetition quizzes in a principled, probabilistically-grounded, Bayesian manner.
#
# See https://fasiha.github.io/ebisu/ for details!
# +
import ebisu
defaultModel = (4., 4., 24.) # alpha, beta, and half-life in hours
# -
# Ebisu—this is what we’re here to learn about!
#
# Ebisu is a library that’s expected to be embedded inside quiz apps, to help schedule quizzes intelligently. It uses Bayesian statistics to let the app predict what the recall probability is for any fact that the student has learned, and to update that prediction based on the results of a quiz.
#
# Ebisu uses three numbers to describe its belief about the time-evolution of each fact’s recall probability. Its API consumes them as a 3-tuple, and they are:
#
# - the first we call “alpha” and must be ≥ 2 (well, technically, ≥1 is the raw minimum but unless you’re a professor of statistics, keep it more than two);
# - the second is “beta” and also must be ≥ 2. These two numbers encode our belief about the distribution of recall probabilities at one half-life after the most recent quiz;
# - the third element, which here is a half-life. This has units of time, and for this example, we’ll assume it’s in hours. It can be any positive float, but we choose the nice round number of 24 hours.
#
# For the nerds: alpha and beta parameterize a Beta distribution to describe our prior belief of the recall probability one half-life (one day) after a fact’s most recent quiz.
#
# For the rest of us: these three numbers mean we expect the recall probability for a newly-learned fact to be 50% after one day, but allow uncertainty: the recall probability after a day is “around” 33% to 67% (±1 standard deviation).
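# We can check what the (alpha=4, beta=4) prior implies directly, since the mean and standard deviation of a Beta distribution have closed forms. A minimal sketch using only the standard library (Ebisu itself is not needed for this):

```python
import math

# Closed-form moments of a Beta(alpha, beta) prior on recall probability
alpha, beta = 4.0, 4.0  # the first two elements of defaultModel
mean = alpha / (alpha + beta)
std = math.sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
print(mean)  # 0.5 — expected recall one half-life after the most recent quiz
print(std)   # ≈0.167, so ±1 standard deviation spans roughly 33% to 67%
```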
#
# ---
#
# Now. Let’s create a mock database of facts. Say a student has learned two facts, one on the 19th at 2200 hours and another the next morning at 0900 hours.
# +
from datetime import datetime, timedelta
date0 = datetime(2017, 4, 19, 22, 0, 0)
database = [dict(factID=1, model=defaultModel, lastTest=date0),
dict(factID=2, model=defaultModel, lastTest=date0 + timedelta(hours=11))]
# -
# After learning the second fact, at 0900, what does Ebisu expect each fact’s probability of recall to be, for each of the facts?
# +
oneHour = timedelta(hours=1)
now = date0 + timedelta(hours=11.1)
print("On {},".format(now))
for row in database:
recall = ebisu.predictRecall(row['model'],
(now - row['lastTest']) / oneHour,
exact=True)
print("Fact #{} probability of recall: {:0.1f}%".format(row['factID'], recall * 100))
# -
# Both facts are expected to still be firmly in memory—especially the second one since it was just learned! So the quiz app doesn’t ask the student to review anything yet—though if she wanted to, the quiz app would pick the fact most in danger of being forgotten.
#
# Note how we used `ebisu.predictRecall`, which accepts
# - the current model, and
# - the time elapsed since this fact’s last quiz,
#
# and returns a `float`.
#
# …
#
# Now a few hours have elapsed. It’s just past midnight on the 21st and the student opens the quiz app.
now = date0 + timedelta(hours=26.5)
print("On {},".format(now))
for row in database:
recall = ebisu.predictRecall(row['model'],
(now - row['lastTest']) / oneHour,
exact=True)
print("Fact #{} probability of recall: {:0.1f}%".format(row['factID'], recall * 100))
# Suppose the quiz app has been configured to quiz the student if the expected recall probability drops below 50%—which it did for fact 1! The app shows the flashcard once, analyzes the user's response, and sets the result of the quiz to `1` if passed and `0` if failed. It calls Ebisu to update the model, giving it this result as well as the `total` number of times it showed this flashcard (one time—Ebisu can support more advanced cases where an app reviews the same flashcard multiple times in a single review session, but let's keep it simple for now).
# +
row = database[0] # review FIRST question
result = 1 # success!
total = 1 # number of times this flashcard was shown (fixed)
newModel = ebisu.updateRecall(row['model'],
result,
total,
(now - row['lastTest']) / oneHour)
print('New model for fact #1:', newModel)
row['model'] = newModel
row['lastTest'] = now
# -
# Observe how `ebisu.updateRecall` takes
# - the current model,
# - the quiz result, and
# - the time elapsed since the last quiz,
#
# and returns a new model (the new 3-tuple of “alpha”, “beta” and time). We put the new model and the current timestamp into the database.
#
# Now. Suppose the student asks to review another fact—fact 2. It was learned just earlier that morning, and its recall probability is expected to be around 63%, but suppose the student fails this quiz, as sometimes happens.
# +
row = database[1] # review SECOND question
result = 0
newModel = ebisu.updateRecall(row['model'],
result,
total,
(now - row['lastTest']) / oneHour)
print('New model for fact #2:', newModel)
row['model'] = newModel
row['lastTest'] = now
# -
# The new parameters for this fact differ from the previous one because (1) the student failed this quiz while she passed the other, (2) different amounts of time had elapsed since the respective facts were last seen.
#
# Ebisu provides a method to convert parameters to “expected half-life”. It is *not* an essential feature of the API but can be useful:
for row in database:
meanHalflife = ebisu.modelToPercentileDecay(row['model'])
print("Fact #{} has half-life of ≈{:0.1f} hours".format(row['factID'], meanHalflife))
# Note how the half-life (the time between quizzes for expected recall probability to drop to 50%) for the first question increased from 24 to 29 hours after the student got it right, while it decreased to 20 hours for the second when she got it wrong. Ebisu has incorporated the fact that the second fact had been learned not that long ago and should have been strong, and uses the surprising quiz result to strongly adjust its belief about its recall probability.
#
# ---
#
# Suppose the user is tired of reviewing the first fact so often because it’s something they know very well. You could allow the user to delete this flashcard and add it again with a longer initial halflife. But Ebisu gives you a function that will explicitly rescale the halflife of the card as is: `ebisu.rescaleHalflife`, which takes a positive number to act as the halflife scale. In this case, the new halflife is *two* times the old halflife.
# +
database[0]['model'] = ebisu.rescaleHalflife(database[0]['model'], 2.0)
for row in database:
meanHalflife = ebisu.modelToPercentileDecay(row['model'])
print("Fact #{} has half-life of ≈{:0.1f} hours".format(row['factID'], meanHalflife))
# -
# If the user was worried that this flashcard was shown too *infrequently*, and wanted to see it three times as often, you might pass in `1/3` as the second argument.
#
# This short notebook shows the major functions in the Ebisu API:
# - `ebisu.predictRecall` to find out the expected recall probability for a fact right now, and
# - `ebisu.updateRecall` to update those expectations when a new quiz result is available.
# - `ebisu.modelToPercentileDecay` to find the time when the recall probability reaches a certain value.
# - `ebisu.rescaleHalflife` to adjust the halflife up and down without a quiz.
#
# For more advanced functionality, including non-binary fuzzy quizzes, do consult the [Ebisu](https://fasiha.github.io/ebisu/) website, which links to the API’s docstrings and explains how this all works in greater detail.
# ## Advanced topics
# ### Speeding up `predictRecall`
#
# Above, we used `predictRecall` with the `exact=True` keyword argument to have it return true probabilities. We can reduce runtime if we use the following:
# +
# As above: a bit slow to get exact probabilities
# %timeit ebisu.predictRecall(database[0]['model'], 100., exact=True)
# A bit faster alternative: get log-probabilities (this is the default)
# %timeit ebisu.predictRecall(database[0]['model'], 100., exact=False)
| EbisuHowto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
import emcee
import os
import sys
sys.path.insert(0, '../')
# + deletable=true editable=true
from libra import trappist1
planets = list('bcdefgh')
params = [trappist1(planet) for planet in planets]
impact_params = np.array([p.a*np.cos(np.radians(p.inc)) for p in params])
impact_params_upper = np.array([p.a*np.cos(np.radians(p.inc))+p.rp for p in params])
impact_params_lower = np.array([p.a*np.cos(np.radians(p.inc))-p.rp for p in params])
b_range = impact_params_lower.min(), impact_params_upper.max()
# + deletable=true editable=true
impact_params_lower, impact_params_upper
# + deletable=true editable=true
from libra import Star
# + deletable=true editable=true
s = Star.with_trappist1_spot_distribution()
n = 1000
trailed_img = np.ones((n, n))
n_steps = 90
for i in np.ones(n_steps) * 360/n_steps:
s.rotate(i*u.deg)
stacked_arr = np.array([s._compute_image(n=n), trailed_img])
trailed_img = np.min(stacked_arr, axis=0)
# + deletable=true editable=true
b_range
# + deletable=true editable=true
plt.imshow(trailed_img, cmap=plt.cm.Greys_r, extent=[0, 1, 0, 1], origin='lower')
plt.axhspan(0.5-b_range[0], 0.5-b_range[1]/2, color='r', alpha=0.5)
plt.savefig('trappist1_map_onehemisphere.png')
# + deletable=true editable=true
b_range
# + deletable=true editable=true
k2_time, k2_flux, k2_err = np.loadtxt('../libra/data/trappist1/trappist_rotation.txt', unpack=True)
k2_flux /= np.percentile(k2_flux, 95)
k2_time_original, k2_flux_original = k2_time.copy(), k2_flux.copy()
# slice in time
condition = (k2_time > 2457773) & (k2_time < 2457779)
k2_time, k2_flux, k2_err= k2_time[condition], k2_flux[condition], k2_err[condition]
from libra import trappist1_all_transits
# + deletable=true editable=true
model_times = np.arange(k2_time.min(), k2_time.max(), 1/60/60/24)
model_fluxes = trappist1_all_transits(model_times)
plt.plot(model_times, model_fluxes)
# + deletable=true editable=true
from astropy.io import fits
f = fits.getdata('../libra/data/trappist1/nPLDTrappist.fits')
t, f = f['TIME'] + 2454833.0, f['FLUX']
from scipy.signal import medfilt
f = medfilt(f, (1,))/np.median(f)
# + deletable=true editable=true
plt.plot(t, f)
plt.plot(model_times, model_fluxes, ls='--')
plt.xlim([k2_time.min(), k2_time.max()])
plt.ylim([0.98, 1.01])
# + deletable=true editable=true
from libra.starspots.star import trappist1_posteriors_path
posteriors = np.loadtxt('trappist1_spotmodel_posteriors.txt')#trappist1_posteriors_path)
plt.hist(posteriors[:, 0:9:3].ravel())
# + deletable=true editable=true
# nsteps = 100
# rotations = 360/nsteps * np.ones(nsteps) * u.deg
# times = 3.3 * np.linspace(0, 1, nsteps)
# #rotoations = np.linspace(0, 360, n_steps) * u.deg
# star = Star.with_trappist1_spot_distribution()
# fluxes = star.fractional_flux(times)
# for i, r in enumerate(rotations):
# fig, ax = plt.subplots(1, 2, figsize=(8, 4))
# ax[1].plot(times, fluxes)
# ax[1].scatter(times[i], fluxes[i], marker='o')
# star.rotate(r)
# star.plot(n=500, ax=ax[0])
# fig.tight_layout()
# fig.savefig('animation/{0:03d}.png'.format(i), bbox_inches='tight', dpi=200)
# plt.close()
| notebooks/trappist-1maps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 08. Memory game
# ### Exercise 1
# What will be printed out?
#
# **Answer:**
x = [1, 2, 3]
y = x
y.append(4)
print(x)
# ### Exercise 2
# What will be printed out?
#
# **Answer:**
# +
def change_it(y):
y.append(4)
x = [1, 2, 3]
change_it(x)
print(x)
# -
# ### Exercise 3
# What will be printed out?
#
# **Answer #1:**
x = 3
y = x
y += 5
print(x)
# **Answer #2:**
x = [1, 2, 3]
y = x
y[1] += x[1]
print(x)
# ### Exercise 4
# What will be printed?
#
# **Answer:**
x = [1, 2, 3]
y = tuple(x)
x.append(4)
print(y)
# ### Exercise 5
# What is going to happen?
#
# **Answer #1:**
# +
def change_it(y):
y = list(y)
y.append(4)
print(y)
x = [1, 2, 3]
change_it(x)
# -
# **Answer #2:**
# +
def change_it(y):
y = list(y)
y.append(4)
x = [1, 2, 3]
change_it(x)
print(x)
# -
# **Answer #3:**
x = [1, 2, 3]
y = tuple(x)
z = list(y)
z.append(4)
print(x)
# ### Exercise 6.
# Both `list(x)` and `x[:]` create a new list. This means that you can use either to create a copy of a list directly, without an intermediate tuple step. What will happen below?
#
# **Answer:**
x = [1, 2, 3]
y = x[:]
x.append(4)
print(y)
# ### Exercise 7.
# What would be an equivalent form using a normal for loop? Write both versions of code in Jupyter cells and check that the results are the same.
# original
numbers = [1, 2, 3]
numbers_as_strings = [str(item) for item in numbers]
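# One possible equivalent using a normal for loop (a sketch; the results should match the comprehension above):

```python
numbers = [1, 2, 3]
numbers_as_strings = []
for item in numbers:
    numbers_as_strings.append(str(item))
print(numbers_as_strings)  # ['1', '2', '3']
```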
# ### Exercise 8.
# Implement the code below using list comprehension. Check that results match.
# original
strings = ['1', '2', '3']
numbers = []
for astring in strings:
numbers.append(int(astring) + 10)
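# A possible list-comprehension version of the loop above (a sketch; check that the results match the original):

```python
strings = ['1', '2', '3']
numbers = [int(astring) + 10 for astring in strings]
print(numbers)  # [11, 12, 13]
```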
# ### Exercise 9.
# Create code that filters out all items below 2 and adds 4 to the remaining ones.
numbers = [0, 1, 2, 3]
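# One possible solution, as a sketch: a list comprehension with a filter clause that keeps items of 2 or more and adds 4 to each:

```python
numbers = [0, 1, 2, 3]
result = [n + 4 for n in numbers if n >= 2]
print(result)  # [6, 7]
```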
| docs/notebooks/08. Memory game.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Neural Architecture Search with DARTS
#
# In this example you will deploy a Katib experiment with the Differentiable Architecture Search (DARTS) algorithm using a Jupyter Notebook and the Katib SDK. Your Kubernetes cluster must have at least one GPU for this example.
#
# You can read more about how we use DARTS in Katib [here](https://github.com/kubeflow/katib/tree/master/pkg/suggestion/v1beta1/nas/darts).
#
# The notebook shows how to create an experiment, get it, check its status, and delete it.
# # Install required package
# !pip install kubeflow-katib
# ## Restart the Notebook kernel to use SDK package
from IPython.display import display_html
display_html("<script>Jupyter.notebook.kernel.restart()</script>",raw=True)
# ## Import required packages
from kubeflow.katib import KatibClient
from kubernetes.client import V1ObjectMeta
from kubeflow.katib import V1beta1Experiment
from kubeflow.katib import V1beta1AlgorithmSpec
from kubeflow.katib import V1beta1AlgorithmSetting
from kubeflow.katib import V1beta1ObjectiveSpec
from kubeflow.katib import V1beta1MetricsCollectorSpec
from kubeflow.katib import V1beta1CollectorSpec
from kubeflow.katib import V1beta1SourceSpec
from kubeflow.katib import V1beta1FilterSpec
from kubeflow.katib import V1beta1FeasibleSpace
from kubeflow.katib import V1beta1ExperimentSpec
from kubeflow.katib import V1beta1NasConfig
from kubeflow.katib import V1beta1GraphConfig
from kubeflow.katib import V1beta1Operation
from kubeflow.katib import V1beta1ParameterSpec
from kubeflow.katib import V1beta1TrialTemplate
from kubeflow.katib import V1beta1TrialParameterSpec
# ## Define experiment
#
# You have to create experiment object before deploying it. This experiment is similar to [this](https://github.com/kubeflow/katib/blob/master/examples/v1beta1/nas/darts-example-gpu.yaml) example.
#
# You can read more about DARTS algorithm settings [here](https://www.kubeflow.org/docs/components/hyperparameter-tuning/experiment/#differentiable-architecture-search-darts).
# +
# Experiment metadata
namespace = "anonymous"
experiment_name = "darts-example"
metadata = V1ObjectMeta(
name=experiment_name,
namespace=namespace
)
# Algorithm specification
algorithm_spec=V1beta1AlgorithmSpec(
algorithm_name="darts",
algorithm_settings=[
V1beta1AlgorithmSetting(
name="num_epochs",
value="2"
),
V1beta1AlgorithmSetting(
name="stem_multiplier",
value="1"
),
V1beta1AlgorithmSetting(
name="init_channels",
value="4"
),
V1beta1AlgorithmSetting(
name="num_nodes",
value="3"
),
]
)
# Objective specification. For DARTS, the Goal is omitted.
objective_spec=V1beta1ObjectiveSpec(
type="maximize",
objective_metric_name="Best-Genotype",
)
# Metrics collector specification.
# We should specify metrics format to get Genotype from training container.
metrics_collector_spec=V1beta1MetricsCollectorSpec(
collector=V1beta1CollectorSpec(
kind="StdOut"
),
source=V1beta1SourceSpec(
filter=V1beta1FilterSpec(
metrics_format=[
"([\\w-]+)=(Genotype.*)"
]
)
)
)
# Configuration for Neural Network (NN)
# This NN contains 2 layers and 5 different operations with various parameters.
nas_config=V1beta1NasConfig(
graph_config=V1beta1GraphConfig(
num_layers=2
),
operations=[
V1beta1Operation(
operation_type="separable_convolution",
parameters=[
V1beta1ParameterSpec(
name="filter_size",
parameter_type="categorical",
feasible_space=V1beta1FeasibleSpace(
list=["3"]
),
)
]
),
V1beta1Operation(
operation_type="dilated_convolution",
parameters=[
V1beta1ParameterSpec(
name="filter_size",
parameter_type="categorical",
feasible_space=V1beta1FeasibleSpace(
list=["3", "5"]
),
)
]
),
V1beta1Operation(
operation_type="avg_pooling",
parameters=[
V1beta1ParameterSpec(
name="filter_size",
parameter_type="categorical",
feasible_space=V1beta1FeasibleSpace(
list=["3"]
),
)
]
),
V1beta1Operation(
operation_type="max_pooling",
parameters=[
V1beta1ParameterSpec(
name="filter_size",
parameter_type="categorical",
feasible_space=V1beta1FeasibleSpace(
list=["3"]
),
)
]
),
V1beta1Operation(
operation_type="skip_connection",
),
]
)
# JSON trial template specification
trial_spec={
"apiVersion": "batch/v1",
"kind": "Job",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "training-container",
"image": "docker.io/kubeflowkatib/darts-cnn-cifar10:v1beta1-e294a90",
"command": [
'python3',
'run_trial.py',
'--algorithm-settings="${trialParameters.algorithmSettings}"',
'--search-space="${trialParameters.searchSpace}"',
'--num-layers="${trialParameters.numberLayers}"'
],
# Training container requires 1 GPU
"resources": {
"limits": {
"nvidia.com/gpu": 1
}
}
}
],
"restartPolicy": "Never"
}
}
}
}
# Template with trial parameters and trial spec
# Set retain to True to save trial resources after completion.
trial_template=V1beta1TrialTemplate(
retain=True,
trial_parameters=[
V1beta1TrialParameterSpec(
name="algorithmSettings",
description=" Algorithm settings of DARTS Experiment",
reference="algorithm-settings"
),
V1beta1TrialParameterSpec(
name="searchSpace",
description="Search Space of DARTS Experiment",
reference="search-space"
),
V1beta1TrialParameterSpec(
name="numberLayers",
description="Number of Neural Network layers",
reference="num-layers"
),
],
trial_spec=trial_spec
)
# Experiment object
experiment = V1beta1Experiment(
api_version="kubeflow.org/v1beta1",
kind="Experiment",
metadata=metadata,
spec=V1beta1ExperimentSpec(
max_trial_count=1,
parallel_trial_count=1,
max_failed_trial_count=1,
algorithm=algorithm_spec,
objective=objective_spec,
metrics_collector_spec=metrics_collector_spec,
nas_config=nas_config,
trial_template=trial_template,
)
)
# -
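# The `metrics_format` regular expression above can be exercised on its own. A minimal sketch — the sample stdout line is a hypothetical example for illustration, not real trial output:

```python
import re

# The metrics_format pattern from the metrics collector spec above
pattern = r"([\w-]+)=(Genotype.*)"
# Hypothetical stdout line from the training container
line = "Best-Genotype=Genotype(normal=[[('max_pooling', 0)]], reduce=[])"
match = re.match(pattern, line)
print(match.group(1))  # Best-Genotype
print(match.group(2))  # the Genotype value captured as the metric
```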
# You can print experiment's info to verify it before submission
# Print trial template container info
print(experiment.spec.trial_template.trial_spec["spec"]["template"]["spec"]["containers"][0])
# # Create experiment
#
# You have to create a Katib client to use the SDK
#
# TODO (andreyvelich): Current experiment link for NAS is incorrect.
# +
# Create client
kclient = KatibClient()
# Create experiment
kclient.create_experiment(experiment,namespace=namespace)
# -
# # Get experiment
#
# You can get the experiment by name and retrieve the required data
# +
exp = kclient.get_experiment(name=experiment_name, namespace=namespace)
print(exp)
print("-----------------\n")
# Get last status
print(exp["status"]["conditions"][-1])
# -
# # Get current experiment status
#
# You can check current experiment status
kclient.get_experiment_status(name=experiment_name, namespace=namespace)
# You can check if experiment is succeeded
kclient.is_experiment_succeeded(name=experiment_name, namespace=namespace)
# # Get best Genotype
#
# The best Genotype is currently stored in the optimal trial; the latest Genotype is the best one.
#
# Check trial logs to get more information about training process.
# +
opt_trial = kclient.get_optimal_hyperparameters(name=experiment_name, namespace=namespace)
best_genotype = opt_trial["currentOptimalTrial"]["observation"]["metrics"][0]["latest"]
print(best_genotype)
# -
# # Delete experiments
#
# You can delete experiments
kclient.delete_experiment(name=experiment_name, namespace=namespace)
| sdk/python/v1beta1/examples/nas-with-darts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: deep-rl-notebooks-poetry
# language: python
# name: deep-rl-notebooks-poetry
# ---
# # Chapter 2 - N-armed Bandits
# ### Deep Reinforcement Learning _in Action_
# ##### Listing 2.1
def get_best_action(actions):
best_action = 0
max_action_value = 0
for i in range(len(actions)): #A
cur_action_value = get_action_value(actions[i]) #B
if cur_action_value > max_action_value:
best_action = i
max_action_value = cur_action_value
return best_action
# ##### Listing 2.2
# +
import numpy as np
from scipy import stats
import random
import matplotlib.pyplot as plt
n = 10
probs = np.random.rand(n) #A
eps = 0.1
# -
# ##### Listing 2.3
def get_reward(prob, n=10):
    reward = 0
for i in range(n):
if random.random() < prob:
reward += 1
return reward
reward_test = [get_reward(0.7) for _ in range(2000)]
np.mean(reward_test)
sum = 0
x = [4,5,6,7]
for j in range(len(x)):
sum = sum + x[j]
sum
plt.figure(figsize=(9,5))
plt.xlabel("Reward",fontsize=22)
plt.ylabel("# Observations",fontsize=22)
plt.hist(reward_test,bins=9)
# ##### Listing 2.4
# 10 actions x 2 columns
# Columns: Count #, Avg Reward
record = np.zeros((n,2))
def get_best_arm(record):
arm_index = np.argmax(record[:,1],axis=0)
return arm_index
def update_record(record,action,r):
new_r = (record[action,0] * record[action,1] + r) / (record[action,0] + 1)
record[action,0] += 1
record[action,1] = new_r
return record
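# The incremental average in `update_record` can be sanity-checked against a plain batch mean. A standalone sketch (the function is repeated so the cell runs on its own):

```python
import numpy as np

def update_record(record, action, r):
    # Running mean: new_mean = (count * old_mean + r) / (count + 1)
    new_r = (record[action, 0] * record[action, 1] + r) / (record[action, 0] + 1)
    record[action, 0] += 1
    record[action, 1] = new_r
    return record

record = np.zeros((1, 2))  # columns: count, average reward
for r in [5, 7, 9]:
    record = update_record(record, 0, r)
print(record[0])  # count 3, running mean 7.0 == np.mean([5, 7, 9])
```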
# ##### Listing 2.5
fig,ax = plt.subplots(1,1)
ax.set_xlabel("Plays")
ax.set_ylabel("Avg Reward")
fig.set_size_inches(9,5)
rewards = [0]
for i in range(500):
if random.random() > 0.2:
choice = get_best_arm(record)
else:
choice = np.random.randint(10)
r = get_reward(probs[choice])
record = update_record(record,choice,r)
mean_reward = ((i+1) * rewards[-1] + r)/(i+2)
rewards.append(mean_reward)
ax.scatter(np.arange(len(rewards)),rewards)
# ##### Listing 2.6
def softmax(av, tau=1.12):
softm = ( np.exp(av / tau) / np.sum( np.exp(av / tau) ) )
return softm
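# A quick check that `softmax` returns a valid probability distribution: the entries sum to 1, and larger action values receive more probability mass. A standalone sketch (the function is repeated so the cell runs on its own):

```python
import numpy as np

def softmax(av, tau=1.12):
    return np.exp(av / tau) / np.sum(np.exp(av / tau))

p = softmax(np.array([1.0, 2.0, 3.0]), tau=1.0)
print(p.sum())  # ~1.0
print(p)        # monotonically increasing: p[0] < p[1] < p[2]
```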
probs = np.random.rand(n)
record = np.zeros((n,2))
fig,ax = plt.subplots(1,1)
ax.set_xlabel("Plays")
ax.set_ylabel("Avg Reward")
fig.set_size_inches(9,5)
rewards = [0]
for i in range(500):
p = softmax(record[:,1],tau=0.7)
choice = np.random.choice(np.arange(n),p=p)
r = get_reward(probs[choice])
record = update_record(record,choice,r)
mean_reward = ((i+1) * rewards[-1] + r)/(i+2)
rewards.append(mean_reward)
ax.scatter(np.arange(len(rewards)),rewards)
# ##### Listing 2.9
class ContextBandit:
def __init__(self, arms=10):
self.arms = arms
self.init_distribution(arms)
self.update_state()
def init_distribution(self, arms):
# Num states = Num Arms to keep things simple
self.bandit_matrix = np.random.rand(arms,arms)
#each row represents a state, each column an arm
def reward(self, prob):
reward = 0
for i in range(self.arms):
if random.random() < prob:
reward += 1
return reward
def get_state(self):
return self.state
def update_state(self):
self.state = np.random.randint(0,self.arms)
def get_reward(self,arm):
return self.reward(self.bandit_matrix[self.get_state()][arm])
def choose_arm(self, arm):
reward = self.get_reward(arm)
self.update_state()
return reward
# +
import numpy as np
import torch
arms = 10
N, D_in, H, D_out = 1, arms, 100, arms
# -
env = ContextBandit(arms=10)
state = env.get_state()
reward = env.choose_arm(1)
print(state)
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
torch.nn.ReLU(),
)
loss_fn = torch.nn.MSELoss()
env = ContextBandit(arms)
def one_hot(N, pos, val=1):
one_hot_vec = np.zeros(N)
one_hot_vec[pos] = val
return one_hot_vec
def running_mean(x,N=50):
c = x.shape[0] - N
y = np.zeros(c)
conv = np.ones(N)
for i in range(c):
y[i] = (x[i:i+N] @ conv)/N
return y
def train(env, epochs=5000, learning_rate=1e-2):
cur_state = torch.Tensor(one_hot(arms,env.get_state())) #A
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
rewards = []
for i in range(epochs):
y_pred = model(cur_state) #B
av_softmax = softmax(y_pred.data.numpy(), tau=2.0) #C
av_softmax /= av_softmax.sum() #D
choice = np.random.choice(arms, p=av_softmax) #E
cur_reward = env.choose_arm(choice) #F
one_hot_reward = y_pred.data.numpy().copy() #G
one_hot_reward[choice] = cur_reward #H
reward = torch.Tensor(one_hot_reward)
rewards.append(cur_reward)
loss = loss_fn(y_pred, reward)
optimizer.zero_grad()
loss.backward()
optimizer.step()
cur_state = torch.Tensor(one_hot(arms,env.get_state())) #I
return np.array(rewards)
rewards = train(env)
plt.plot(running_mean(rewards,N=500))
| notebooks/02_book.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# name: ir
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/maxigaarp/Gestion-De-Datos-en-R/blob/main/Proyecto_solucion_parcial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="4Lo_4HPFvv-g"
# # Group Project
#
# ## Due date: Sunday, August 29, 2021
#
# Weight: 33.3%
# + [markdown] id="kgbAuynqv0tI"
# This project consists of building a database in SQL plus a descriptive report of the data and the investigation. The goal of the project is to predict class attendance from monthly precipitation data.
#
# The database must combine information from 3 different sources:
#
# * Monthly Declared Attendance ([MINEDUC](http://datos.mineduc.cl/dashboards/19844/asistencia-declarada-mensual-por-ano/)), years 2015-2019:
# includes the attendance percentages and the number of school days worked for all students in the different schools of Chile.
# * Monthly precipitation reports ([CR2](https://www.cr2.cl/camels-cl/)): historical precipitation records from the different weather stations in Chile, with their geographic locations.
# * School locations (Own/Google Maps):
# geographic location data for the schools in Chile.
#
# To build this database you are expected to include at least 3 tables:
# * alumnos: relevant student information; include at least 3 attributes.
# * colegios: relevant school information; include at least 3 attributes, plus the name (or ID) of the nearest weather station and the distance to it, treating latitude and longitude as Cartesian coordinates.
# * estudia_en: relevant information linking school, month and student. This table must hold each student's monthly attendance and precipitation. The precipitation must be interpolated from the weather-station records and the geolocation of the schools.
#
# Note that the source databases are not error-free, so consolidating the information requires solving problems of consistency, completeness and validity. Among other things:
#
# * The data for each attribute must be in a single, adequate format. To achieve this, check the most important attributes: attendance and precipitation.
# * Use a single way of expressing null values (unify the different kinds of null markers, e.g. " ", NA, 0, -9999, etc.).
# * Choose a tolerable number of null values (at least more than 2) for the time series (the weather-station records). Interpolate the null values in the series.
#
# Note that the tables must be consolidated in SQL, loading them via RSQL. After that, select and apply a suitable algorithm to predict class attendance from monthly precipitation; it can be one of the following: k-means, linear regression, logistic regression.
#
# The project deliverable has two parts:
# * The code that builds the database and implements the prediction algorithm. It can be a Colab notebook or an R script, with comments only meant to guide the TA during grading.
# * A document of no more than 3 pages (not counting charts) with the documentation of the data (ER diagram, data types, description, domain, etc. You can follow the registration schemas of the MINEDUC databases.) and a summary of the project work (how the variables were generated, how they were processed and what results were obtained).
#
# For a better-quality report, use charts and tables, which may be included in appendices.
#
#
#
# Data available at: https://drive.google.com/drive/folders/17Or8k6rhYvkaeEn_pD10za8A9jluYfUF?usp=sharing
# + [markdown] id="5HNgUFHlj3iT"
# # Download data
# + colab={"base_uri": "https://localhost:8080/"} id="g0jgBUw8WV_g" outputId="45ba08d6-c0bb-4d16-a699-c916905087db"
install.packages("RSQLite")
# + colab={"base_uri": "https://localhost:8080/"} id="VcSuCfuTUuj5" outputId="4a7e7780-7bb8-474d-e4d3-4dba36b9a68e"
library(tidyverse)
library(data.table)
library(RSQLite)
# + [markdown] id="kbeOmka7jzhC"
# ## Attendance
# + id="f1cnRTbWv0Ko"
system("gdown --id 1-q1ydcu6afA3LQ9uxlh9J9B9kvEJvrRs")
# + id="5yBkQhxL9TuB"
unzip("/content/DatosProyecto.zip")
# + id="iyb0sbEJbwd7"
library(stringr)
# + id="VVb8HG-l9act"
lista <- list.files("/content/content/drive/MyDrive/Gestion de Datos/Datos Proyecto/Datos asistencia")
directorio <- "/content/content/drive/MyDrive/Gestion de Datos/Datos Proyecto/Datos asistencia/"
for (name in lista){
if (str_detect(name, ".rar")){
foldername<-substr(name, 1, nchar(name)-4)
system(paste("mkdir 'Asistencia/",foldername,"'",sep=""))
system(paste("unrar x '",directorio,name,"' 'Asistencia/",substr(name, 1, nchar(name)-4),"/'", sep=""),intern = T)
}
else {
unzip(paste(directorio,name,sep=""), exdir="Asistencia")
}
}
# + id="OTnXJiY5hBFh"
lista_de_csvs <- list.files("Asistencia",pattern = ".(CSV|csv)$", recursive = TRUE)
# + id="-f4U7Tr0XXmt"
conn <- dbConnect(RSQLite::SQLite(), "mineduc.db")
# + colab={"base_uri": "https://localhost:8080/"} id="ZfIl8nvsPdxP" outputId="35d5c890-247b-468c-f46b-8d33b50f8f7c"
colenames <- c("AGNO","RBD", "NOM_RBD", "COD_DEPE")
est_ennames <- c("MRUN", "RBD", "AGNO", "MES_ESCOLAR", "COD_ENSE", "COD_GRADO", "DIAS_TRABAJADOS", "ASIS_PROMEDIO")
alusnames <- c("AGNO", "MRUN", "FEC_NAC_ALU", "GEN_ALU")
for (i in 1:length(lista_de_csvs)) {
csvs <- lista_de_csvs[i]
data <- fread(paste("Asistencia/",csvs, sep=""), dec=",")
if ("NOM_REG_RBD_A" %in% toupper(names(data))){
data <- select(data, !"NOM_REG_RBD_A")
}
names(data) <- toupper(names(data))
coles <- data %>%
select(colenames) %>%
distinct()
est_en <- data %>%
select(est_ennames) %>%
distinct()
alus <- data %>%
select(alusnames) %>%
distinct()
apnd <- if (i==1) FALSE else TRUE
dbWriteTable(conn , name = "colegios",
value = coles,
row.names = FALSE, header = !apnd ,append=apnd,
colClasses='character')
dbWriteTable(conn , name = "estudia_en",
value = est_en,
row.names = FALSE, header = !apnd ,append=apnd,
colClasses='character')
dbWriteTable(conn , name = "alumnos",
value = alus,
row.names = FALSE, header = !apnd ,append=apnd,
colClasses='character')
}
# + [markdown] id="0_CbGI8JbEwG"
# # Nulls
# + [markdown] id="gnSqZ7hfocv8"
# Convert all null values to a single format
# + colab={"base_uri": "https://localhost:8080/", "height": 50} id="62itRNcgbEGK" outputId="b709c9b6-5a8a-4361-bc0b-63854ef0690d"
dbExecute(conn,"UPDATE alumnos
SET GEN_ALU = NULL
WHERE GEN_ALU=0")
dbExecute(conn,"UPDATE alumnos
SET FEC_NAC_ALU = NULL
WHERE FEC_NAC_ALU=190001 or FEC_NAC_ALU=180001")
# + [markdown] id="iVsT4RnRoiPX"
# Count null values
# + colab={"base_uri": "https://localhost:8080/", "height": 127} id="2JkOcjQoe6l6" outputId="961eaf14-67c8-4b49-e0a1-2d3979f22979"
dbGetQuery(conn, "select
sum(case when MRUN is null then 1 else 0 end) MRUN,
sum(case when AGNO is null then 1 else 0 end) AGNO,
sum(case when MES_ESCOLAR is null then 1 else 0 end) MES_ESCOLAR,
sum(case when RBD is null then 1 else 0 end) RBD,
sum(case when COD_ENSE is null then 1 else 0 end) COD_ENSE,
sum(case when COD_GRADO is null then 1 else 0 end) COD_GRADO,
sum(case when DIAS_TRABAJADOS is null then 1 else 0 end) DIAS_TRABAJADOS,
sum(case when ASIS_PROMEDIO is null then 1 else 0 end) ASIS_PROMEDIO
from estudia_en")
# + [markdown] id="6klNlke5gHkl"
# Keep only the school where the student has the highest attendance. Dropping students with many school enrolments would also be acceptable.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="f4GNOWKYgUAz" outputId="73e049a7-a91e-4fbc-d151-1db8c4cd1212"
dbGetQuery(conn, "
select
MRUN, AGNO, MES_ESCOLAR,
COUNT(*) AS COLEGIOS_INSCRITOS
FROM estudia_en
group by MRUN, AGNO, MES_ESCOLAR
ORDER BY COLEGIOS_INSCRITOS DESC
")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="nM_vB7yxjBYF" outputId="c4f5eb06-ce7a-4add-8c0c-676c4c41382e"
dbExecute(conn, "create table ESTUDIAEN as
select
*
FROM estudia_en
group by MRUN, AGNO, MES_ESCOLAR
having DIAS_TRABAJADOS=max(DIAS_TRABAJADOS)
")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="nLaw_hM8hhSZ" outputId="9642e7ad-d189-47e7-9d67-cb4ffa75d668"
dbExecute(conn, "
drop table estudia_en
")
# + [markdown] id="7ADvnbYGjxNB"
# ## Precipitation
# + colab={"base_uri": "https://localhost:8080/"} id="Ly2Sv7__qWwq" outputId="6701257b-8130-4905-e537-51f0d9c7e983"
library(tidyverse)
# + colab={"base_uri": "https://localhost:8080/"} id="SPvki2tZjaxu" outputId="0cca8bae-9a2c-4b95-93e6-faa19a8ceae6"
unzip("/content/content/drive/MyDrive/Gestion de Datos/Datos Proyecto/cr2_prAmon_2019.zip")
# + id="I-yMIavojqtc"
pp <- read.csv("/content/cr2_prAmon_2019/cr2_prAmon_2019.txt",na = "-9999", header =F)
pp <- setNames(as.data.frame(t(pp[,-1])),as.character(pp[,1]))
# + id="YorOEIgVGUj9"
ppp <- pp %>% select( c("codigo_estacion","nombre", "latitud","longitud") | "2015-01":"2019-12")%>%
pivot_longer(cols = "2015-01":"2019-12",
values_to = "Precipitacion",
names_to = c("Año", "Mes"),
names_pattern = "(....)-(..)")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Zjsrhtjo675n" outputId="a5026f56-e47b-45a2-c788-f46262376051"
ppp$Precipitacion<-as.numeric(ppp$Precipitacion)
ppp$Año<-as.numeric(ppp$Año)
ppp$Mes<-as.numeric(ppp$Mes)
ppp$longitud<-as.numeric(ppp$longitud)
ppp$latitud<-as.numeric(ppp$latitud)
ppp
# + [markdown] id="0GwjdwzkpM0Y"
# Keep only the stations with fewer than 5 NA values in the chosen years
# + id="NaoOGgzR2CAp"
ppp <- ppp %>% group_by(codigo_estacion) %>%
filter(sum(is.na(Precipitacion))<5) %>%
ungroup()
# + [markdown] id="kTVkjQUOrNdb"
# Fill the NA values with the monthly means
# + id="tcMf-vXW3_Da"
ppp <- ppp %>% group_by(codigo_estacion, Mes)%>%
mutate(Precipitacion = if_else(is.na(Precipitacion), mean(Precipitacion, na.rm = T), Precipitacion))
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="FE8uVycP7F-h" outputId="196703d0-593d-4f8a-c2c0-86788c0b3fe7"
ppp
# + id="kbYcILT4Nc5x"
#dbExecute(conn, "drop table precipitacion")
dbWriteTable(conn , name = "precipitacion",
value = ppp,
row.names = FALSE, header = TRUE ,
colClasses='character')
# + [markdown] id="NcC9vMWUj8uP"
# ## Geolocation
# + id="aLOOJzuVkI0B"
geocoles <- read.csv("/content/content/drive/MyDrive/Gestion de Datos/Datos Proyecto/colesgeo.csv", row.names=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="r32es3JSOsp8" outputId="24c90bcb-4080-49a3-d709-cf35cbf112aa"
head(geocoles)
# + [markdown] id="cnUlZ5xZrUqv"
# Filter to Chilean territory
# + id="W5GYBw-bThqC"
# note the spacing around "<": "lat <-17.4" would parse as an assignment in R
buenosgeocoles <- geocoles %>% filter((lat > -56.5) & (lat < -17.4) & (lon > -75) & (lon < -67))
# + id="3h8KeyoLNnt5"
dbWriteTable(conn, name = "geocoles",
value = buenosgeocoles,
row.names = FALSE, header = TRUE ,
colClasses='character')
# + [markdown] id="e9PDCv2wEbor"
# # Schools
# + [markdown] id="ZpsRpIZhrciv"
# Keep the most recent information available for each school (not strictly necessary)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="uc62ef9YEbQ_" outputId="759c177d-f410-4640-81b2-9b6b5bdb8625"
dbExecute(conn,"
create table colegios2 as
select *
from colegios
group by RBD
having AGNO=MAX(AGNO)
")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="N9iZC5FjEoCJ" outputId="adbf8352-728b-41ce-bf58-1d8373ba42fa"
dbExecute(conn,"drop table colegios")
# + [markdown] id="iVggv98uOUU-"
# # Merge
# + [markdown] id="NdG8Y7iKrk65"
# Build the combined table
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="YyvmX2K4OTlC" outputId="95d92425-43de-42ae-ea2a-7aac1c32e928"
dbListTables(conn)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="LH0-nuw5dGI9" outputId="2dc2d5f4-1ecf-46d5-b56d-a6d7825337ca"
dbGetQuery(conn,"
select distinct
PP.codigo_estacion,
PP.latitud,
PP.longitud
from Precipitacion as PP")
# + [markdown] id="M-Q2YSD5rvSF"
# Match schools to stations using the latitude and longitude of each
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="hhVnjqV7OcIx" outputId="daef3a33-b62b-4c94-9009-ce1fba291497"
#dbExecute(conn, "drop table colegios_full")
dbExecute(conn,"
create table colegios_full as
select
CI.*,
GC.lat as Latitud,
GC.lon as Longitud,
PP.codigo_estacion,
PP.latitud as Lat_Estacion,
PP.longitud as Lon_Estacion,
POWER(PP.longitud - GC.lon, 2)+ POWER(PP.latitud-GC.lat,2) as dist
from colegios2 as CI, geocoles as GC, (
select distinct
PP.codigo_estacion,
PP.latitud,
PP.longitud
from Precipitacion as PP) as PP
where CI.RBD=GC.RBD
group by CI.RBD
having dist=min(dist)
")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="_4wXxrJYF6LV" outputId="89d941df-7e6e-42de-ee81-d5cafe56dabe"
dbGetQuery(conn, "select * from colegios_full")
# + colab={"base_uri": "https://localhost:8080/", "height": 405} id="Rxv-aO1LGnnW" outputId="afbae580-fa78-4a49-ec82-6a3dd5692890"
dbGetQuery(conn,"select * from ESTUDIAEN limit 10")
# + colab={"base_uri": "https://localhost:8080/", "height": 405} id="qKOAng3dGr4p" outputId="07e8317a-9e36-4aa9-8b2b-beff83732ffc"
dbGetQuery(conn,"select * from Precipitacion limit 10")
# + id="BG9Z8Jk9iPGB"
q<-dbSendQuery(conn, "
select
ESTUDIAEN.ASIS_PROMEDIO,
colegios_full.latitud,
Precipitacion.Precipitacion
from ESTUDIAEN, colegios_full, Precipitacion
where ESTUDIAEN.RBD=colegios_full.RBD and
colegios_full.codigo_estacion=Precipitacion.codigo_estacion and
ESTUDIAEN.AGNO=Precipitacion.Año and
ESTUDIAEN.MES_ESCOLAR=Precipitacion.Mes
")
# + [markdown] id="8lv9-F8TJPWx"
# # Regression
# + [markdown] id="pRPWJxQwt8g6"
# dbSendQuery:
# + id="0T3rUS4Et7mm"
# + colab={"base_uri": "https://localhost:8080/"} id="mOhmFUu_tvJr" outputId="c1293c07-7c5e-4eab-efc1-f73ad95b15df"
conection<-dbSendQuery(conn, "
select
ESTUDIAEN.ASIS_PROMEDIO,
colegios_full.latitud,
Precipitacion.Precipitacion
from ESTUDIAEN, colegios_full, Precipitacion
where ESTUDIAEN.RBD=colegios_full.RBD and
colegios_full.codigo_estacion=Precipitacion.codigo_estacion and
ESTUDIAEN.AGNO=Precipitacion.Año and
ESTUDIAEN.MES_ESCOLAR=Precipitacion.Mes
limit 12000
")
# + [markdown] id="jndsELs2vix1"
# ASIS_PROMEDIO=0.7521-0.002685*Latitud-0.0001125*Precipitacion
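As a quick sanity check, the fitted coefficients above can be evaluated directly. The latitude and precipitation inputs below are arbitrary illustrative values, not data from the study:

```python
def predicted_attendance(latitud, precipitacion):
    # Coefficients taken from the fitted model reported above.
    return 0.7521 - 0.002685 * latitud - 0.0001125 * precipitacion

# e.g. a school at latitude -33 in a month with 50 mm of rain
p = predicted_attendance(-33.0, 50.0)
print(round(p, 4))  # 0.8351
```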
# + colab={"base_uri": "https://localhost:8080/", "height": 420} id="Wwhd4vm8uKQ2" outputId="8b70ea61-5c6e-4b2d-ea3e-96b4e0c75eb1"
m1<-glm(ASIS_PROMEDIO~Latitud+Precipitacion, data=dbFetch(conection, 6000))
summary(m1)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="lBIfJn9xwbxX" outputId="e5958774-e37f-4895-a9c5-13f33c01de25"
coefficients(m1)
# + colab={"base_uri": "https://localhost:8080/"} id="t5johw-CJWhh" outputId="56f077f3-09a3-493a-a91f-5dea09bb718e"
install.packages("speedglm")
# + id="i14FGd0lJURS"
library(speedglm)
# + id="Zb7pxXuiJei2"
make.data<-function(chunksize){
conection<-NULL
function(reset=FALSE){
if(reset){
conn <- dbConnect(RSQLite::SQLite(), "mineduc.db")
conection<<-dbSendQuery(conn, "
select
ESTUDIAEN.ASIS_PROMEDIO,
colegios_full.latitud,
Precipitacion.Precipitacion
from ESTUDIAEN, colegios_full, Precipitacion
where ESTUDIAEN.RBD=colegios_full.RBD and
colegios_full.codigo_estacion=Precipitacion.codigo_estacion and
ESTUDIAEN.AGNO=Precipitacion.Año and
ESTUDIAEN.MES_ESCOLAR=Precipitacion.Mes
")
} else{
rval<-dbFetch(conection,chunksize)
if ((nrow(rval)==0)) {
conection<<-NULL
rval<-NULL
}
return(rval)
}
}
}
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="HNCA8a8PD6hV" outputId="b639aabb-39cb-4e0d-84b8-010bf9d217da"
da<-make.data(chunksize=50)
da(reset=T) #1: opens the connection to mineduc.db and sends the query
da(reset=F) #2: fetches the first chunk of 50 rows
da(reset=F) #3: fetches the next chunk of 50 rows
da(reset=F) #4: fetches another chunk of up to 50 rows
da(reset=F) #5: eventually returns NULL once the result set is exhausted, ending the chunked read
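The reset/fetch protocol that `make.data` implements can be mirrored as a Python closure, with a plain list standing in for the database cursor (names here are hypothetical):

```python
def make_data(rows, chunksize):
    """reset=True 're-opens' the cursor; reset=False returns the next
    chunk, or None once the result set is exhausted (like the R code's
    NULL, which tells the caller the chunked read is finished)."""
    state = {"pos": 0}

    def fetch(reset=False):
        if reset:
            state["pos"] = 0          # rewind to the start of the data
            return None
        chunk = rows[state["pos"]:state["pos"] + chunksize]
        state["pos"] += chunksize
        return chunk if chunk else None

    return fetch

da = make_data(list(range(120)), chunksize=50)
da(reset=True)                       # open
chunks = []
while True:
    c = da()
    if c is None:                    # exhausted -> stop
        break
    chunks.append(len(c))
print(chunks)  # [50, 50, 20]
```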
# + colab={"base_uri": "https://localhost:8080/", "height": 370} id="ISID-aSVEHC6" outputId="7f8a9804-c0a0-4a36-cf47-8460a6d5cbfe"
da<-make.data(chunksize=10000000)
b1<-shglm(ASIS_PROMEDIO~Latitud+Precipitacion,datafun=da)
summary(b1)
# + id="uTPOudrFb2Bd"
da<-make.data(chunksize=50000)
da(reset=T)
d<- da(reset=F)
# + id="bV2xh5fjkxJw"
d <- d%>% filter((ASIS_PROMEDIO>=0) &(ASIS_PROMEDIO<=1))
# + id="td-z6Y53lEhB"
d <- d%>% filter(Precipitacion>=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="9ajRWLgxkDwt" outputId="68e0cb98-088f-460e-c72b-7a32bcd5962c"
library(ggplot2)
ggplot(d, aes(x=ASIS_PROMEDIO, y=Precipitacion)) + geom_point(size=2, shape=23)
| Proyecto_solucion_parcial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="g6lqayVYvCyQ" colab_type="text"
# # Text-to-Speech with Mozilla Tacotron+WaveRNN
#
# This is an English female voice TTS demo using open source projects [mozilla/TTS](https://github.com/mozilla/TTS/) and [erogol/WaveRNN](https://github.com/erogol/WaveRNN).
#
# For other deep-learning Colab notebooks, visit [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks).
#
# ## Install Mozilla TTS and WaveRNN
# + id="e-Hdw1q_X8JX" colab_type="code" colab={}
import os
import time
from os.path import exists, join, basename, splitext
git_repo_url = 'https://github.com/mozilla/TTS.git'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
# !git clone -q {git_repo_url}
# !cd {project_name} && git checkout Tacotron2-iter-260K-824c091
# !pip install -q gdown lws librosa Unidecode==0.4.20 tensorboardX git+git://github.com/bootphon/phonemizer@master localimport
# !apt-get install -y espeak
git_repo_url = 'https://github.com/erogol/WaveRNN.git'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
# !git clone -q {git_repo_url}
# !cd {project_name} && git checkout 8a1c152 && pip install -q -r requirements.txt
import sys
sys.path.append('TTS')
sys.path.append('WaveRNN')
from localimport import localimport
from IPython.display import Audio, display
# + [markdown] id="3_D59k2x_uGW" colab_type="text"
# ## Download pretrained models
# + id="klsVLR6w_u4P" colab_type="code" colab={}
# WaveRNN
# !mkdir -p wavernn_models tts_models
wavernn_pretrained_model = 'wavernn_models/checkpoint_433000.pth.tar'
if not exists(wavernn_pretrained_model):
# !gdown -O {wavernn_pretrained_model} https://drive.google.com/uc?id=12GRFk5mcTDXqAdO5mR81E-DpTk8v2YS9
wavernn_pretrained_model_config = 'wavernn_models/config.json'
if not exists(wavernn_pretrained_model_config):
# !gdown -O {wavernn_pretrained_model_config} https://drive.google.com/uc?id=1kiAGjq83wM3POG736GoyWOOcqwXhBulv
# TTS
tts_pretrained_model = 'tts_models/checkpoint_261000.pth.tar'
if not exists(tts_pretrained_model):
# !gdown -O {tts_pretrained_model} https://drive.google.com/uc?id=1otOqpixEsHf7SbOZIcttv3O7pG0EadDx
tts_pretrained_model_config = 'tts_models/config.json'
if not exists(tts_pretrained_model_config):
# !gdown -O {tts_pretrained_model_config} https://drive.google.com/uc?id=1IJaGo0BdMQjbnCcOL4fPOieOEWMOsXE-
# + [markdown] id="pDaahAVNENpT" colab_type="text"
# ## Initialize models
# + id="Ft1LHHdkA2Yc" colab_type="code" colab={}
#
# this code is copied from: https://github.com/mozilla/TTS/blob/master/notebooks/Benchmark.ipynb
#
import io
import torch
import time
import numpy as np
from collections import OrderedDict
from matplotlib import pylab as plt
import IPython
# %pylab inline
rcParams["figure.figsize"] = (16,5)
import librosa
import librosa.display
from TTS.models.tacotron import Tacotron
from TTS.layers import *
from TTS.utils.data import *
from TTS.utils.audio import AudioProcessor
from TTS.utils.generic_utils import load_config, setup_model
from TTS.utils.text import text_to_sequence
from TTS.utils.synthesis import synthesis
from TTS.utils.visual import visualize
def tts(model, text, CONFIG, use_cuda, ap, use_gl, speaker_id=None, figures=True):
t_1 = time.time()
waveform, alignment, mel_spec, mel_postnet_spec, stop_tokens = synthesis(model, text, CONFIG, use_cuda, ap, truncated=True, enable_eos_bos_chars=CONFIG.enable_eos_bos_chars)
if CONFIG.model == "Tacotron" and not use_gl:
mel_postnet_spec = ap.out_linear_to_mel(mel_postnet_spec.T).T
if not use_gl:
waveform = wavernn.generate(torch.FloatTensor(mel_postnet_spec.T).unsqueeze(0).cuda(), batched=batched_wavernn, target=11000, overlap=550)
print(" > Run-time: {}".format(time.time() - t_1))
if figures:
visualize(alignment, mel_postnet_spec, stop_tokens, text, ap.hop_length, CONFIG, mel_spec)
IPython.display.display(Audio(waveform, rate=CONFIG.audio['sample_rate']))
#os.makedirs(OUT_FOLDER, exist_ok=True)
#file_name = text.replace(" ", "_").replace(".","") + ".wav"
#out_path = os.path.join(OUT_FOLDER, file_name)
#ap.save_wav(waveform, out_path)
return alignment, mel_postnet_spec, stop_tokens, waveform
use_cuda = True
batched_wavernn = True
# initialize TTS
CONFIG = load_config(tts_pretrained_model_config)
from TTS.utils.text.symbols import symbols, phonemes
# load the model
num_chars = len(phonemes) if CONFIG.use_phonemes else len(symbols)
model = setup_model(num_chars, CONFIG)
# load the audio processor
ap = AudioProcessor(**CONFIG.audio)
# load model state
if use_cuda:
cp = torch.load(tts_pretrained_model)
else:
cp = torch.load(tts_pretrained_model, map_location=lambda storage, loc: storage)
# load the model
model.load_state_dict(cp['model'])
if use_cuda:
model.cuda()
model.eval()
print(cp['step'])
model.decoder.max_decoder_steps = 2000
# initialize WaveRNN
VOCODER_CONFIG = load_config(wavernn_pretrained_model_config)
with localimport('/content/WaveRNN') as _importer:
from models.wavernn import Model
bits = 10
wavernn = Model(
rnn_dims=512,
fc_dims=512,
mode="mold",
pad=2,
upsample_factors=VOCODER_CONFIG.upsample_factors, # set this depending on dataset
feat_dims=VOCODER_CONFIG.audio["num_mels"],
compute_dims=128,
res_out_dims=128,
res_blocks=10,
hop_length=ap.hop_length,
sample_rate=ap.sample_rate,
).cuda()
check = torch.load(wavernn_pretrained_model)
wavernn.load_state_dict(check['model'])
if use_cuda:
wavernn.cuda()
wavernn.eval()
print(check['step'])
# + [markdown] id="anTcrJwwvZAr" colab_type="text"
# ## Sentence to synthesize
# + id="GkxJ8J4dveaw" colab_type="code" colab={}
SENTENCE = 'Bill got in the habit of asking himself “Is that thought true?” And if he wasn’t absolutely certain it was, he just let it go.'
# + [markdown] id="VuegqdoVvk14" colab_type="text"
# ## Synthesize
# + id="Hx93hVb6Y8dA" colab_type="code" outputId="f2d2bb89-8f8b-4e01-edfe-045310a58cc0" colab={"base_uri": "https://localhost:8080/", "height": 92}
align, spec, stop_tokens, wav = tts(model, SENTENCE, CONFIG, use_cuda, ap, speaker_id=0, use_gl=False, figures=False)
| Deep-Learning-Notebooks/notebooks/Mozilla_TTS_WaveRNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''base'': conda)'
# language: python
# name: python373jvsc74a57bd0210f9608a45c0278a93c9e0b10db32a427986ab48cfc0d20c139811eb78c4bbc
# ---
# +
import random
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import torch,torchvision
from torch.nn import *
from tqdm import tqdm
import cv2
from torch.optim import *
# Preprocessing
from sklearn.preprocessing import (
StandardScaler,
RobustScaler,
MinMaxScaler,
MaxAbsScaler,
OneHotEncoder,
Normalizer,
Binarizer
)
# Decomposition
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
# Feature Selection
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel
# Model Eval
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score,train_test_split
from sklearn.metrics import mean_absolute_error,mean_squared_error
# Other
import pickle
import wandb
PROJECT_NAME = 'Weather-archive-Jena'
device = 'cuda:0'
np.random.seed(21)
random.seed(21)
torch.manual_seed(21)
# -
data = pd.read_csv('./data.csv')
data.to_csv('./Cleaned-Data.csv')
data.to_json('./cleaned-data.json')
data = torch.from_numpy(np.array(data['T (degC)'].tolist())).view(1,-1).to(device).float()
data_input = data[:1,:-1].to(device).float()
data_target = data[:1,1:].to(device).float()
# +
class Model(Module):
def __init__(self):
super().__init__()
self.hidden = 256
self.lstm1 = LSTMCell(1,self.hidden).to(device)
self.lstm2 = LSTMCell(self.hidden,self.hidden).to(device)
self.linear1 = Linear(self.hidden,1).to(device)
def forward(self,X,future=0):
preds = []
batch_size = X.size(0)
h_t1 = torch.zeros(batch_size,self.hidden).to(device)
c_t1 = torch.zeros(batch_size,self.hidden).to(device)
h_t2 = torch.zeros(batch_size,self.hidden).to(device)
c_t2 = torch.zeros(batch_size,self.hidden).to(device)
for X_batch in X.split(1,dim=1):
X_batch = X_batch.to(device)
h_t1,c_t1 = self.lstm1(X_batch,(h_t1,c_t1))
h_t1 = h_t1.to(device)
c_t1 = c_t1.to(device)
h_t2,c_t2 = self.lstm2(h_t1,(h_t2,c_t2))
h_t2 = h_t2.to(device)
c_t2 = c_t2.to(device)
pred = self.linear1(h_t2)
preds.append(pred)
for _ in range(future):
h_t1,c_t1 = self.lstm1(X_batch,(h_t1,c_t1))
h_t1 = h_t1.to(device)
c_t1 = c_t1.to(device)
h_t2,c_t2 = self.lstm2(h_t1,(h_t2,c_t2))
h_t2 = h_t2.to(device)
c_t2 = c_t2.to(device)
pred = self.linear1(h_t2)
preds.append(pred)
preds = torch.cat(preds,dim=1)
return preds
# +
model = Model().to(device)
criterion = MSELoss()
optimizer = LBFGS(model.parameters(),lr=0.8)
epochs = 100
# +
torch.save(data,'./data.pt')
torch.save(data,'./data.pth')
torch.save(data_input,'data_input.pt')
torch.save(data_input,'data_input.pth')
torch.save(data_target,'data_target.pt')
torch.save(data_target,'data_target.pth')
# +
torch.save(model,'custom-model.pt')
torch.save(model,'custom-model.pth')
torch.save(model.state_dict(),'custom-model-sd.pt')
torch.save(model.state_dict(),'custom-model-sd.pth')
torch.save(model,'model.pt')
torch.save(model,'model.pth')
torch.save(model.state_dict(),'model-sd.pt')
torch.save(model.state_dict(),'model-sd.pth')
# -
wandb.init(project=PROJECT_NAME,name='baseline')
for _ in tqdm(range(epochs)):
def closure():
optimizer.zero_grad()
preds = model(data_input)
loss = criterion(preds,data_target)
loss.backward()
wandb.log({'Loss':loss.item()})
return loss
optimizer.step(closure)
with torch.no_grad():
future = 100
preds = model(data_input,future)
loss = criterion(preds[:,:-future],data_target)
wandb.log({'Val Loss':loss.item()})
preds = preds[0].view(-1).cpu().detach().numpy()
n = data_input.shape[1]
plt.figure(figsize=(12,6))
plt.plot(np.arange(n),data_target.view(-1).cpu().detach().numpy(),'b')
plt.plot(np.arange(n,n+future),preds[n:],'r')
plt.savefig('./img.png')
plt.close()
wandb.log({'Img':wandb.Image(cv2.imread('./img.png'))})
wandb.finish()
# +
torch.save(model,'custom-model.pt')
torch.save(model,'custom-model.pth')
torch.save(model.state_dict(),'custom-model-sd.pt')
torch.save(model.state_dict(),'custom-model-sd.pth')
torch.save(model,'model.pt')
torch.save(model,'model.pth')
torch.save(model.state_dict(),'model-sd.pt')
torch.save(model.state_dict(),'model-sd.pth')
# -
| 00.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## K-Means Clustering
# **Overview**<br>
# <a href="https://archive.ics.uci.edu/ml/datasets/online+retail">Online retail is a transnational data set</a> which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail. The company mainly sells unique all-occasion gifts. Many customers of the company are wholesalers.
#
# The steps are broadly:
# 1. Read and understand the data
# 2. Clean the data
# 3. Prepare the data for modelling
# 4. Modelling
# 5. Final analysis and recommendations
# # 1. Read and visualise the data
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
from scipy.cluster.hierarchy import cut_tree
# -
# read the dataset
retail_df = pd.read_csv("Online+Retail.csv", sep=",", encoding="ISO-8859-1", header=0)
retail_df.head()
# basics of the df
retail_df.info()
# # 2. Clean the data
# missing values
round(100*(retail_df.isnull().sum())/len(retail_df), 2)
# drop all rows having missing values
retail_df = retail_df.dropna()
retail_df.shape
retail_df.head()
# new column: amount
retail_df['amount'] = retail_df['Quantity']*retail_df['UnitPrice']
retail_df.head()
# # 3. Prepare the data for modelling
# - R (Recency): Number of days since last purchase
# - F (Frequency): Number of transactions
# - M (Monetary): Total amount of transactions (revenue contributed)
# monetary
grouped_df = retail_df.groupby('CustomerID')['amount'].sum()
grouped_df = grouped_df.reset_index()
grouped_df.head()
# frequency
frequency = retail_df.groupby('CustomerID')['InvoiceNo'].count()
frequency = frequency.reset_index()
frequency.columns = ['CustomerID', 'frequency']
frequency.head()
# merge the two dfs
grouped_df = pd.merge(grouped_df, frequency, on='CustomerID', how='inner')
grouped_df.head()
retail_df.head()
# recency
# convert to datetime
retail_df['InvoiceDate'] = pd.to_datetime(retail_df['InvoiceDate'],
format='%d-%m-%Y %H:%M')
retail_df.head()
# compute the max date
max_date = max(retail_df['InvoiceDate'])
max_date
# compute the diff
retail_df['diff'] = max_date - retail_df['InvoiceDate']
retail_df.head()
# recency
last_purchase = retail_df.groupby('CustomerID')['diff'].min()
last_purchase = last_purchase.reset_index()
last_purchase.head()
# merge
grouped_df = pd.merge(grouped_df, last_purchase, on='CustomerID', how='inner')
grouped_df.columns = ['CustomerID', 'amount', 'frequency', 'recency']
grouped_df.head()
# number of days only
grouped_df['recency'] = grouped_df['recency'].dt.days
grouped_df.head()
# 1. outlier treatment
plt.boxplot(grouped_df['recency'])
# +
# two types of outliers:
# - statistical
# - domain specific
# +
# removing (statistical) outliers
Q1 = grouped_df.amount.quantile(0.05)
Q3 = grouped_df.amount.quantile(0.95)
IQR = Q3 - Q1
grouped_df = grouped_df[(grouped_df.amount >= Q1 - 1.5*IQR) & (grouped_df.amount <= Q3 + 1.5*IQR)]
# outlier treatment for recency
Q1 = grouped_df.recency.quantile(0.05)
Q3 = grouped_df.recency.quantile(0.95)
IQR = Q3 - Q1
grouped_df = grouped_df[(grouped_df.recency >= Q1 - 1.5*IQR) & (grouped_df.recency <= Q3 + 1.5*IQR)]
# outlier treatment for frequency
Q1 = grouped_df.frequency.quantile(0.05)
Q3 = grouped_df.frequency.quantile(0.95)
IQR = Q3 - Q1
grouped_df = grouped_df[(grouped_df.frequency >= Q1 - 1.5*IQR) & (grouped_df.frequency <= Q3 + 1.5*IQR)]
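One caveat worth flagging: the cells above label the 5th and 95th percentiles `Q1`/`Q3`, so the `1.5*IQR` fence is much wider than the textbook quartile rule. A self-contained sketch of the same trimming rule (pure Python, with hypothetical data) makes the cutoffs explicit:

```python
def trim_outliers(values, lo_q=0.05, hi_q=0.95, k=1.5):
    """Keep values inside [q_lo - k*iqr, q_hi + k*iqr], where q_lo/q_hi
    are the 5th/95th percentiles (named Q1/Q3 in the cells above) and
    iqr is their difference."""
    s = sorted(values)

    def quantile(q):
        # linear interpolation between closest ranks, as pandas does
        idx = q * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])

    q_lo, q_hi = quantile(lo_q), quantile(hi_q)
    iqr = q_hi - q_lo
    return [v for v in values if q_lo - k * iqr <= v <= q_hi + k * iqr]

data = list(range(100)) + [10_000]   # one extreme, invented outlier
trimmed = trim_outliers(data)
print(len(trimmed), max(trimmed))    # 100 99
```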
# +
# 2. rescaling
rfm_df = grouped_df[['amount', 'frequency', 'recency']]
# instantiate
scaler = StandardScaler()
# fit_transform
rfm_df_scaled = scaler.fit_transform(rfm_df)
rfm_df_scaled.shape
# -
rfm_df_scaled = pd.DataFrame(rfm_df_scaled)
rfm_df_scaled.columns = ['amount', 'frequency', 'recency']
rfm_df_scaled.head()
# ## Hopkins Statistics
#
# One more important data preparation step, which we have skipped in this demonstration, is the calculation of the Hopkins statistic. In Python, you can pass a dataframe to the Hopkins statistic function below to check whether the dataset is suitable for clustering. Simply apply the function to the main dataset and interpret the value it returns.
#
# You don't need to know how the Hopkins statistic algorithm works internally; it is enough to interpret the value it assigns to the dataframe.
# Because the algorithm uses some randomisation in its initialisation, repeated runs will return slightly different values, so it is advisable to run it a few times before deciding whether the data is suitable for clustering.
# +
from sklearn.neighbors import NearestNeighbors
from random import sample
from numpy.random import uniform
import numpy as np
from math import isnan
def hopkins(X):
d = X.shape[1]
#d = len(vars) # columns
n = len(X) # rows
m = int(0.1 * n)
nbrs = NearestNeighbors(n_neighbors=1).fit(X.values)
rand_X = sample(range(0, n, 1), m)
ujd = []
wjd = []
for j in range(0, m):
u_dist, _ = nbrs.kneighbors(uniform(np.amin(X,axis=0),np.amax(X,axis=0),d).reshape(1, -1), 2, return_distance=True)
ujd.append(u_dist[0][1])
w_dist, _ = nbrs.kneighbors(X.iloc[rand_X[j]].values.reshape(1, -1), 2, return_distance=True)
wjd.append(w_dist[0][1])
H = sum(ujd) / (sum(ujd) + sum(wjd))
if isnan(H):
print(ujd, wjd)
H = 0
return H
# -
#First convert the numpy array that you have to a dataframe
rfm_df_scaled = pd.DataFrame(rfm_df_scaled)
rfm_df_scaled.columns = ['amount', 'frequency', 'recency']
#Use the Hopkins statistic function by passing the above dataframe as a parameter
hopkins(rfm_df_scaled)
# # 4. Modelling
# k-means with some arbitrary k
kmeans = KMeans(n_clusters=4, max_iter=50)
kmeans.fit(rfm_df_scaled)
kmeans.labels_
# +
# help(KMeans)
# -
# ## Finding the Optimal Number of Clusters
#
# ### SSD
# +
# elbow-curve/SSD
ssd = []
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
for num_clusters in range_n_clusters:
kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
kmeans.fit(rfm_df_scaled)
ssd.append(kmeans.inertia_)
# plot the SSDs for each n_clusters
# ssd
plt.plot(ssd)
# -
# ### Silhouette Analysis
#
# $$\text{silhouette score}=\frac{p-q}{max(p,q)}$$
#
# $p$ is the mean distance to the points in the nearest cluster that the data point is not a part of
#
# $q$ is the mean intra-cluster distance to all the points in its own cluster.
#
# * The value of the silhouette score range lies between -1 to 1.
#
# * A score closer to 1 indicates that the data point is very similar to other data points in the cluster,
#
# * A score closer to -1 indicates that the data point is not similar to the data points in its cluster.
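The silhouette definition above can be computed by hand on a toy one-dimensional dataset. This is only a pedagogical sketch with invented points; the notebook itself relies on `sklearn.metrics.silhouette_score`:

```python
def silhouette_scores(points, labels):
    """Per-point silhouette s = (p - q) / max(p, q): q is the mean
    distance to the point's own cluster, p the mean distance to the
    other cluster (1-D points, two clusters, stdlib only)."""
    def mean_dist(x, group):
        return sum(abs(x - g) for g in group) / len(group)

    scores = []
    for i, (x, lab) in enumerate(zip(points, labels)):
        own = [p for j, (p, l) in enumerate(zip(points, labels)) if l == lab and j != i]
        other = [p for p, l in zip(points, labels) if l != lab]
        q, p = mean_dist(x, own), mean_dist(x, other)
        scores.append((p - q) / max(p, q))
    return scores

pts = [0.0, 0.2, 0.1, 10.0, 10.2]      # two tight, well-separated blobs
scores = silhouette_scores(pts, [0, 0, 0, 1, 1])
print(all(s > 0.9 for s in scores))    # True
```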
# +
# silhouette analysis
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
for num_clusters in range_n_clusters:
    # initialise kmeans
kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
kmeans.fit(rfm_df_scaled)
cluster_labels = kmeans.labels_
# silhouette score
silhouette_avg = silhouette_score(rfm_df_scaled, cluster_labels)
print("For n_clusters={0}, the silhouette score is {1}".format(num_clusters, silhouette_avg))
# -
# 2 clusters seem optimal according to the silhouette statistics, but 3 clusters look optimal from the graphical (elbow-curve) view
# final model with k=3
kmeans = KMeans(n_clusters=3, max_iter=50)
kmeans.fit(rfm_df_scaled)
kmeans.labels_
# assign the label
grouped_df['cluster_id'] = kmeans.labels_
grouped_df.head()
# plot
sns.boxplot(x='cluster_id', y='amount', data=grouped_df)
# ## Hierarchical Clustering
rfm_df_scaled.head()
grouped_df
# single linkage
mergings = linkage(rfm_df_scaled, method="single", metric='euclidean')
dendrogram(mergings)
plt.show()
# complete linkage
mergings = linkage(rfm_df_scaled, method="complete", metric='euclidean')
dendrogram(mergings)
plt.show()
# 3 clusters
cluster_labels = cut_tree(mergings, n_clusters=3).reshape(-1, )
cluster_labels
# assign cluster labels
grouped_df['cluster_labels'] = cluster_labels
grouped_df.head()
# plots
sns.boxplot(x='cluster_labels', y='recency', data=grouped_df)
# plots
sns.boxplot(x='cluster_labels', y='frequency', data=grouped_df)
# plots
sns.boxplot(x='cluster_labels', y='amount', data=grouped_df)
| 8. Machine Learning-2/7. Clustering/Clustering Python Lab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# call code to simulate action potentials via FHN biophysical model
import simFHN as fhn
import scipy as sp
# +
# pass in parameters to generate and plot the simulated data and phase portrait of the system
# %matplotlib inline
t = sp.arange(0.0, 100, .5)
a = 0.7
b = 0.8
[V, w2] = fhn.simFN(a,b,t,True,1)
# +
# generate noisy data and plot observations from two neurons over true intracellular membrane potential
import numpy as np
import matplotlib.pyplot as plt
obs1 = V + np.random.normal(0,.1,len(t))
obs2 = V + np.random.normal(0,.15,len(t))
plt.subplot(121)
time = np.arange((len(t)))
lo = plt.plot(time, obs1, 'purple', time, V, 'red')
plt.xlabel('time')
plt.ylabel('signal')
plt.title('noisy measurements vs intracellular membrane potential')
plt.legend(lo, ('measurement 1','intracellular voltage'), loc='lower left')
plt.subplot(122)
lo = plt.plot(time,obs2, 'green', time, V, 'red')
plt.xlabel('time')
plt.ylabel('signal')
plt.title('noisy measurements vs intracellular membrane potential')
plt.legend(lo, ('measurement 2','intracellular voltage'), loc='lower left')
plt.subplots_adjust(right=2.5, hspace=.95)
# -
# import auxiliary particle filter code
from apf_fhn import *
n_particles = 500
import numpy as np
Sigma = .15*np.asarray([[1, .15],[.15, 1]])
Gamma = .12*np.asarray([[1, .15], [.15, 1]])
B = np.diag([1,3])
T = len(t)
x_0 = [0, 0]
Obs = np.asarray([obs1]).T
I_ext = 1
# run particle filter
import timeit
start_time = timeit.default_timer()
[w, x, k] = apf(Obs, T, n_particles, 10, B, Sigma, Gamma, x_0, I_ext)
elapsed = timeit.default_timer() - start_time
print "time elapsed: ", elapsed, "seconds or", (elapsed/60.0), "minutes", "\ntime per iteration: ", elapsed/T
# visualize parameters
import matplotlib.pyplot as plt
# %matplotlib inline
#parts = np.array([np.array(xi) for xi in w])
plt.subplot(141)
plt.imshow(w)
plt.xlabel('time')
plt.ylabel('particle weights')
plt.title('weight matrix')
plt.subplot(142)
plt.imshow(x[:,:,0])
plt.xlabel('time')
plt.ylabel('particles')
plt.title('path matrix')
plt.subplot(143)
plt.imshow(x[:,:,1])
plt.xlabel('time')
plt.ylabel('particles')
plt.title('path matrix')
plt.subplot(144)
plt.imshow(k)
plt.xlabel('time')
plt.ylabel('p(y_n | x_{n-1})')
plt.title('posterior')
plt.subplots_adjust(right=2.5, hspace=.75)
# +
# examine particle trajectories over time
plt.subplot(141)
plt.plot(np.transpose(x[:,:,0]), alpha=.01, linewidth=1.5)
plt.xlabel('time')
plt.ylabel('displacement')
plt.title('particle path trajectories over time (dim 1)')
plt.subplot(142)
plt.plot(np.transpose(x[:,:,1]), alpha=.01, linewidth=1.5)
plt.xlabel('time')
plt.ylabel('displacement')
plt.title('particle path trajectories over time (dim 2)')
plt.subplot(143)
plt.plot(x[:,:,0])
plt.xlabel('particle')
plt.ylabel('time')
plt.title('particle variance (dim 1)')
plt.subplot(144)
plt.plot(x[:,:,1])
plt.xlabel('particle')
plt.ylabel('time')
plt.title('particle variance (dim 2)')
plt.subplots_adjust(right=2.5, hspace=.85)
# -
# average over particle trajectories to obtain predicted state means for APF output
predsignal1 = np.mean(x[:,:,0], axis=0)
predsignal2 = np.mean(x[:,:,1], axis=0)
x.shape
# +
# check raw signal before applying smoothing or shifting
time = np.arange(T)
plt.subplot(121)
plt.title('apf recovering V')
lo = plt.plot(time, V, 'r', time, predsignal1, 'b')
plt.xlabel('time')
plt.ylabel('signal')
plt.legend(lo, ('true value','prediction'))
plt.subplot(122)
plt.title('apf recovering w')
lo = plt.plot(time, w2, 'r', time, predsignal2, 'b')
plt.xlabel('time')
plt.ylabel('signal')
plt.legend(lo, ('true value','prediction'))
plt.subplots_adjust(right=1.5, hspace=.75)
# -
# shift and scale the signal
# TO DO: update code...
predsignal3 = predsignal2 + I_ext
w3 = w2[20:800]
predsignal4 = predsignal3[0:780]
print(len(w3), len(predsignal4))
# define a moving average to smooth the signals
def moving_average(a, n=7):
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
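# As a quick sanity check (a standalone sketch, not part of the original pipeline), the cumulative-sum trick above can be verified on a small ramp, where a 3-point moving average of 0..9 should return the window midpoints:

```python
import numpy as np

def moving_average(a, n=3):
    # cumulative-sum implementation of an n-point moving average
    ret = np.cumsum(a, dtype=float)
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n

print(moving_average(np.arange(10), n=3))  # [1. 2. 3. 4. 5. 6. 7. 8.]
```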
# +
# Smoothed Signal
plt.subplot(121)
plt.title('apf recovering V')
plt.xlabel('time')
plt.ylabel('signal')
plt.plot(moving_average(predsignal1))
plt.plot(V)
plt.subplot(122)
plt.title('apf recovering w')
plt.xlabel('time')
plt.ylabel('signal')
plt.plot(moving_average(predsignal2))
plt.plot(w2)
plt.subplots_adjust(right=1.5, hspace=.85)
# +
# Shifted and Scaled
plt.subplot(121)
plt.title('apf recovering V')
plt.xlabel('time')
plt.ylabel('signal')
plt.plot(moving_average(predsignal1))
plt.plot(V)
plt.subplot(122)
plt.title('apf recovering w')
plt.xlabel('time')
plt.ylabel('signal')
plt.plot(moving_average(predsignal4))
plt.plot(w3)
plt.subplots_adjust(right=1.5, hspace=.85)
| FHN Simulation 001.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
from collections import Counter
plt.style.use("seaborn")
# +
def make_chart_simple_line_chart():
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
# create a line chart, years on x-axis, gdp on y-axis
plt.plot(years, gdp, color='green', marker='o', linestyle='solid')
# add a title
plt.title("Nominal GDP")
# add a label to the y-axis
plt.ylabel("Billions of $")
plt.show()
make_chart_simple_line_chart()
# +
def make_chart_simple_bar_chart():
movies = ["<NAME>", "Ben-Hur", "Casablanca", "Gandhi", "West Side Story"]
num_oscars = [5, 11, 3, 8, 10]
    # use the index of each movie as its x-coordinate
xs = [i for i, _ in enumerate(movies)]
# plot bars with left x-coordinates [xs], heights [num_oscars]
plt.bar(xs, num_oscars)
plt.ylabel("# of Academy Awards")
plt.title("My Favorite Movies")
# label x-axis with movie names at bar centers
plt.xticks([i for i, _ in enumerate(movies)], movies)
plt.show()
make_chart_simple_bar_chart()
# + code_folding=[]
def make_chart_histogram():
grades = [83,95,91,87,70,0,85,82,100,67,73,77,0]
decile = lambda grade: grade // 10 * 10
histogram = Counter(decile(grade) for grade in grades)
    plt.bar([x for x in histogram.keys()], # x-coordinate of each decile bucket
histogram.values(), # give each bar its correct height
8) # give each bar a width of 8
plt.axis([-5, 105, 0, 5]) # x-axis from -5 to 105,
# y-axis from 0 to 5
plt.xticks([10 * i for i in range(11)]) # x-axis labels at 0, 10, ..., 100
plt.xlabel("Decile")
plt.ylabel("# of Students")
plt.title("Distribution of Exam 1 Grades")
plt.show()
make_chart_histogram()
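# The decile bucketing inside `make_chart_histogram` can be checked on its own, independent of any plotting (a standalone sketch using the same grades list):

```python
from collections import Counter

grades = [83, 95, 91, 87, 70, 0, 85, 82, 100, 67, 73, 77, 0]
decile = lambda grade: grade // 10 * 10   # floor each grade to its decile
histogram = Counter(decile(grade) for grade in grades)

print(histogram[80], histogram[90], histogram[0])  # 4 2 2
```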
# +
def make_chart_misleading_y_axis(mislead=True):
mentions = [500, 505]
years = [2013, 2014]
plt.bar([2013, 2014], mentions, 0.8)
plt.xticks(years)
plt.ylabel("# of times I heard someone say 'data science'")
# if you don't do this, matplotlib will label the x-axis 0, 1
# and then add a +2.013e3 off in the corner (bad matplotlib!)
plt.ticklabel_format(useOffset=False)
if mislead:
# misleading y-axis only shows the part above 500
plt.axis([2012.5,2014.5,499,506])
plt.title("Look at the 'Huge' Increase!")
else:
plt.axis([2012.5,2014.5,0,550])
plt.title("Not So Huge Anymore.")
plt.show()
make_chart_misleading_y_axis(True)
make_chart_misleading_y_axis(False)
# +
def make_chart_several_line_charts():
variance = [1,2,4,8,16,32,64,128,256]
bias_squared = [256,128,64,32,16,8,4,2,1]
total_error = [x + y for x, y in zip(variance, bias_squared)]
xs = range(len(variance))
# we can make multiple calls to plt.plot
# to show multiple series on the same chart
plt.plot(xs, variance, 'g-', label='variance') # green solid line
plt.plot(xs, bias_squared, 'r-.', label='bias^2') # red dot-dashed line
plt.plot(xs, total_error, 'b:', label='total error') # blue dotted line
# because we've assigned labels to each series
# we can get a legend for free
# loc=9 means "top center"
plt.legend(loc=9)
plt.xlabel("model complexity")
plt.title("The Bias-Variance Tradeoff")
plt.show()
make_chart_several_line_charts()
# +
def make_chart_scatter_plot():
friends = [70, 65, 72, 63, 71, 64, 60, 64, 67]
minutes = [175, 170, 205, 120, 220, 130, 105, 145, 190]
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i']
plt.scatter(friends, minutes)
# label each point
for label, friend_count, minute_count in zip(labels, friends, minutes):
plt.annotate(label,
xy=(friend_count, minute_count), # put the label with its point
xytext=(5, -5), # but slightly offset
textcoords='offset points')
plt.title("Daily Minutes vs. Number of Friends")
plt.xlabel("# of friends")
plt.ylabel("daily minutes spent on the site")
plt.show()
make_chart_scatter_plot()
# +
def make_chart_scatterplot_axes(equal_axes=False):
test_1_grades = [ 99, 90, 85, 97, 80]
test_2_grades = [100, 85, 60, 90, 70]
plt.scatter(test_1_grades, test_2_grades)
plt.xlabel("test 1 grade")
plt.ylabel("test 2 grade")
if equal_axes:
plt.title("Axes Are Comparable")
plt.axis("equal")
else:
plt.title("Axes Aren't Comparable")
plt.show()
make_chart_scatterplot_axes(False)
make_chart_scatterplot_axes(True)
# +
def make_chart_pie_chart():
plt.pie([0.95, 0.05], labels=["Uses pie charts", "Knows better"])
# make sure pie is a circle and not an oval
plt.axis("equal")
plt.show()
make_chart_pie_chart()
| code-emre/_03_VisualizingData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py37_keras
# language: python
# name: py37_keras
# ---
# + [markdown] colab_type="text"
# # Timeseries classification using Conv1D, LSTM
#
# **Author:** [hfawaz](https://github.com/hfawaz/), [<NAME>](https://github.com/vijaykrishnay/)<br>
# **Date created:** 2020/07/21<br>
# **Last modified:** 2020/12/15<br>
# **Description:** Training a timeseries classifier from scratch on the FordA dataset from the UCR/UEA archive.
# + [markdown] colab_type="text"
# ## Introduction
#
# This example shows how to do timeseries classification using a combination of Conv1D, LSTM layers starting from raw
# CSV timeseries files on disk. We demonstrate the workflow on the FordA dataset from the
# [UCR/UEA archive](https://www.cs.ucr.edu/%7Eeamonn/time_series_data_2018/).
# + [markdown] colab_type="text"
# ## Setup
# + colab_type="code"
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] colab_type="text"
# ## Load the data: the FordA dataset
#
# ### Dataset description
#
# The dataset we are using here is called FordA.
# The data comes from the UCR archive.
# The dataset contains 3601 training instances and another 1320 testing instances.
# Each timeseries corresponds to a measurement of engine noise captured by a motor sensor.
# For this task, the goal is to automatically detect the presence of a specific issue with
# the engine. The problem is a balanced binary classification task. The full description of
# this dataset can be found [here](http://www.j-wichard.de/publications/FordPaper.pdf).
#
# ### Read the TSV data
#
# We will use the `FordA_TRAIN` file for training and the
# `FordA_TEST` file for testing. The simplicity of this dataset
# allows us to demonstrate effectively how to use ConvNets for timeseries classification.
# In this file, the first column corresponds to the label.
# + colab_type="code"
def readucr(filename):
data = np.loadtxt(filename, delimiter="\t")
y = data[:, 0]
x = data[:, 1:]
return x, y.astype(int)
root_url = "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"
x_train, y_train = readucr(root_url + "FordA_TRAIN.tsv")
x_test, y_test = readucr(root_url + "FordA_TEST.tsv")
# -
x_train
# + [markdown] colab_type="text"
# ## Visualize the data
#
# Here we visualize one timeseries example for each class in the dataset.
# + colab_type="code"
classes = np.unique(np.concatenate((y_train, y_test), axis=0))
plt.figure()
for c in classes:
c_x_train = x_train[y_train == c]
plt.plot(c_x_train[0], label="class " + str(c))
plt.legend(loc="best")
plt.show()
plt.close()
# + [markdown] colab_type="text"
# ## Standardize the data
#
# Our timeseries already share a single length (176). However, their values
# usually span different ranges. This is not ideal for a neural network;
# in general we should seek to normalize the input values.
# For this specific dataset, the data is already z-normalized: each timeseries sample
# has a mean equal to zero and a standard deviation equal to one. This type of
# normalization is very common for timeseries classification problems, see
# [Bagnall et al. (2016)](https://link.springer.com/article/10.1007/s10618-016-0483-9).
#
# Note that the timeseries data used here are univariate, meaning we only have one channel
# per timeseries example.
# We will therefore transform the timeseries into a multivariate one with one channel
# using a simple reshaping via numpy.
# This will allow us to construct a model that is easily applicable to multivariate time
# series.
# + colab_type="code"
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
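# The per-sample z-normalization described above (already applied to FordA) comes down to a couple of lines of NumPy. A standalone sketch on synthetic data, not part of the loading code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 176))  # 4 synthetic samples

# subtract each sample's mean and divide by its standard deviation
x_norm = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

print(np.allclose(x_norm.mean(axis=1), 0), np.allclose(x_norm.std(axis=1), 1))  # True True
```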
# + [markdown] colab_type="text"
# Finally, in order to use `sparse_categorical_crossentropy`, we will have to count
# the number of classes beforehand.
# + colab_type="code"
num_classes = len(np.unique(y_train))
# + [markdown] colab_type="text"
# Now we shuffle the training set because we will be using the `validation_split` option
# later when training.
# + colab_type="code"
idx = np.random.permutation(len(x_train))
x_train = x_train[idx]
y_train = y_train[idx]
# + [markdown] colab_type="text"
# Standardize the labels to positive integers.
# The expected labels will then be 0 and 1.
# + colab_type="code"
y_train[y_train == -1] = 0
y_test[y_test == -1] = 0
# + [markdown] colab_type="text"
# ## Build a model
#
# We build a model that combines Conv1D layers with an LSTM branch, followed by fully connected layers. Dropout layers are used for regularization.
#
# The following hyperparameters (number of LSTM units, dense units, dropout rate, learning rate, batch size, use of BatchNorm) were tuned to arrive at the values below.
# + colab_type="code"
def make_model(input_shape):
dropout_rate=0.1
input_layer = keras.layers.Input(input_shape)
conv1a = keras.layers.Conv1D(filters=8, kernel_size=3, padding="same")(input_layer)
conv1a = keras.layers.BatchNormalization()(conv1a)
conv1a = keras.layers.ReLU()(conv1a)
conv1a = keras.layers.Dropout(dropout_rate)(conv1a)
conv1b = keras.layers.Conv1D(filters=8, kernel_size=5, padding="same")(input_layer)
conv1b = keras.layers.BatchNormalization()(conv1b)
conv1b = keras.layers.ReLU()(conv1b)
conv1b = keras.layers.Dropout(dropout_rate)(conv1b)
conv1 = keras.layers.Concatenate(axis=-1)([conv1a, conv1b])
conv2 = keras.layers.Conv1D(filters=24, kernel_size=3, padding="same")(conv1)
conv2 = keras.layers.BatchNormalization()(conv2)
conv2 = keras.layers.ReLU()(conv2)
conv2 = keras.layers.Dropout(dropout_rate)(conv2)
# conv3a_reduce = keras.layers.MaxPooling1D(pool_size=5, strides=5, padding="valid")(conv2)
conv3a_reduce = keras.layers.Conv1D(
filters=8, kernel_size=10, padding="valid", strides=10
)(conv2)
conv3a_reduce = keras.layers.BatchNormalization()(conv3a_reduce)
conv3a_reduce = keras.layers.ReLU()(conv3a_reduce)
conv3a_reduce = keras.layers.Dropout(dropout_rate)(conv3a_reduce)
conv3a_flat = keras.layers.Flatten()(conv3a_reduce)
dense_conv = keras.layers.Dense(4, activation="relu")(conv3a_flat)
lstm1 = keras.layers.LSTM(24, return_sequences=False)(conv1)
lstm1 = keras.layers.BatchNormalization()(lstm1)
lstm1 = keras.layers.Dropout(dropout_rate)(lstm1)
dense_lstm = keras.layers.Dense(4, activation="relu")(lstm1)
dense_combined = keras.layers.Concatenate()([dense_conv, dense_lstm])
output_layer = keras.layers.Dense(num_classes, activation="softmax")(dense_combined)
return keras.models.Model(inputs=input_layer, outputs=output_layer)
model = make_model(input_shape=x_train.shape[1:])
model.summary()
# + [markdown] colab_type="text"
# ## Train the model
# + colab_type="code"
epochs = 100
batch_size = 32
learning_rate = 0.001
model_name = "best_model_conv_lstm.h5"
rho = 0.9
#
callbacks = [
keras.callbacks.ModelCheckpoint(
model_name, save_best_only=True, monitor="val_loss"
),
keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5, patience=15, min_lr=0.00005
),
keras.callbacks.EarlyStopping(monitor="val_loss", patience=100, verbose=1),
]
model.compile(
# RMSProp worked better than Adam solver for this problem, model setup.
optimizer=keras.optimizers.RMSprop(learning_rate=learning_rate, rho=rho),
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
callbacks=callbacks,
validation_split=0.2,
verbose=1,
)
# + [markdown] colab_type="text"
# ## Evaluate model on test data
# + colab_type="code"
model = keras.models.load_model(model_name)
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy", test_acc)
print("Test loss", test_loss)
# + [markdown] colab_type="text"
# ## Plot the model's training and validation loss
# + colab_type="code"
metric = "sparse_categorical_accuracy"
plt.figure()
plt.plot(history.history[metric])
plt.plot(history.history["val_" + metric])
plt.title("model " + metric)
plt.ylabel(metric, fontsize="large")
plt.xlabel("epoch", fontsize="large")
plt.legend(["train", "val"], loc="best")
plt.show()
plt.close()
# + [markdown] colab_type="text"
# Training accuracy stabilizes around 200 epochs. Both validation and test accuracy are very close to the training accuracy, indicating that the model generalizes well.
| fordA-classification/timeseries_classification_conv_lstm.ipynb |
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorials
#
# > To help you get started
# The most important thing to remember is that each page of this documentation comes from a notebook. You can find them in the "nbs" folder in the [main repo](https://github.com/fastai/fastai2/tree/master/nbs). For tutorials, you can play around with the code and tweak it to do your own experiments. For the pages documenting the library, you will be able to see the source code and interact with all the tests.
# If you are just starting with the library, check out the beginner tutorials. They cover how to treat each application using the high-level API:
#
# - [vision](http://dev.fast.ai/tutorial.vision)
# - [text](http://dev.fast.ai/tutorial.text)
# - [tabular](http://dev.fast.ai/tutorial.tabular)
# - [collaborative filtering](http://dev.fast.ai/tutorial.collab)
#
# Once you are comfortable enough and want to start digging into the mid-level API, have a look at the intermediate tutorials:
#
# - [the data block API](http://dev.fast.ai/tutorial.datablock)
# - [a base training on Imagenette](http://dev.fast.ai/tutorial.imagenette)
# - [the mid-level data API in vision](http://dev.fast.ai/tutorial.pets)
# - [the mid-level data API in text](http://dev.fast.ai/tutorial.wikitext)
#
# And for even more experienced users that want to customize the library to their needs, check the advanced tutorials:
#
# - [Siamese model data collection and training](http://dev.fast.ai/tutorial.siamese)
| nbs/tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 11 yr dropout analysis redux
# +
from __future__ import (absolute_import, division,
print_function, unicode_literals)
import numpy as np
import scipy.stats as sst
import os
import matplotlib.pyplot as plt
try:
import pickle
except:
# Python 2.7 ... harumph!
import cPickle as pickle
from enterprise import constants as const
from enterprise.signals import parameter
from enterprise.signals import selections
from enterprise.signals import signal_base
from enterprise.signals import white_signals
from enterprise.signals import gp_signals
from enterprise.signals import deterministic_signals
from enterprise.signals import utils
from utils import models
from utils import hypermod
from utils.sample_helpers import JumpProposal, get_parameter_groups
from PTMCMCSampler.PTMCMCSampler import PTSampler as ptmcmc
from acor import acor
# %matplotlib inline
# -
# ## use informative priors on BWM params
# * burst epoch $t_0 \in 55421 \pm 25$ (66% CI)
# * fix sky location $(\cos\theta, \phi) = (0.10, 1.15)$
# * amplitude $\log_{10}A \in \mathcal{U}(-15, -11)$
# +
anomaly_costh = 0.10345571882717139
anomaly_phi = 1.15075142923366713
anomaly_skyloc = [anomaly_costh, anomaly_phi]
anomaly_t0 = 55421.5853669
anomaly_dt0 = 25.494436791912449
# -
# # Read in data
ephem = 'DE436'
datadir = '/home/pbaker/nanograv/data/'
slice_yr = 11.5
# +
# read in data pickles
filename = datadir + 'nano11_{}.pkl'.format(ephem)
with open(filename, "rb") as f:
psrs = pickle.load(f)
filename = datadir + 'nano11_setpars.pkl'
with open(filename, "rb") as f:
noise_dict = pickle.load(f)
# -
psrs = models.which_psrs(psrs, slice_yr, 3) # select pulsars
# # setup models
# ## custom BWM w/ dropout param
@signal_base.function
def bwm_delay(toas, pos, log10_h=-14.0, cos_gwtheta=0.0, gwphi=0.0,
gwpol=0.0, t0=55000, psrk=1, antenna_pattern_fn=None):
"""
Function that calculates the earth-term gravitational-wave
burst-with-memory signal, as described in:
    Seto et al., van Haasteren and Levin, Pshirkov et al., Cordes and Jenet.
This version uses the F+/Fx polarization modes, as verified with the
Continuous Wave and Anisotropy papers.
:param toas: Time-of-arrival measurements [s]
:param pos: Unit vector from Earth to pulsar
:param log10_h: log10 of GW strain
:param cos_gwtheta: Cosine of GW polar angle
:param gwphi: GW azimuthal polar angle [rad]
:param gwpol: GW polarization angle
:param t0: Burst central time [day]
:param antenna_pattern_fn:
User defined function that takes `pos`, `gwtheta`, `gwphi` as
arguments and returns (fplus, fcross)
:return: the waveform as induced timing residuals (seconds)
"""
# convert
h = 10**log10_h
gwtheta = np.arccos(cos_gwtheta)
t0 *= const.day
# antenna patterns
if antenna_pattern_fn is None:
apc = utils.create_gw_antenna_pattern(pos, gwtheta, gwphi)
else:
apc = antenna_pattern_fn(pos, gwtheta, gwphi)
# grab fplus, fcross
fp, fc = apc[0], apc[1]
# combined polarization
pol = np.cos(2*gwpol)*fp + np.sin(2*gwpol)*fc
# Define the heaviside function
heaviside = lambda x: 0.5 * (np.sign(x) + 1)
k = np.rint(psrk)
# Return the time-series for the pulsar
return k * pol * h * heaviside(toas-t0) * (toas-t0)
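# The waveform above is a ramp that switches on at the burst epoch, and the dropout parameter `psrk` is rounded to 0 or 1, turning the signal off or on for that pulsar. A toy standalone sketch of just that ramp (illustrative values, without the antenna-pattern and enterprise machinery):

```python
import numpy as np

def bwm_ramp(toas, t0, h, psrk):
    # earth-term memory ramp: zero before t0, growing linearly after
    heaviside = lambda x: 0.5 * (np.sign(x) + 1)
    k = np.rint(psrk)  # dropout: rounds to 0 (signal off) or 1 (signal on)
    return k * h * heaviside(toas - t0) * (toas - t0)

day = 86400.0
toas = np.array([0.0, 25.0, 75.0, 100.0]) * day  # TOAs in seconds
print(bwm_ramp(toas, t0=50 * day, h=1e-14, psrk=0.2))  # all zeros: k rounds to 0
print(bwm_ramp(toas, t0=50 * day, h=1e-14, psrk=0.8))  # nonzero only after t0
```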
# ## Signal blocks
# +
def wn_block(vary=False):
# define selection by observing backend
selection = selections.Selection(selections.by_backend)
# white noise parameters
if vary:
efac = parameter.Normal(1.0, 0.10)
equad = parameter.Uniform(-8.5, -5)
ecorr = parameter.Uniform(-8.5, -5)
else:
efac = parameter.Constant()
equad = parameter.Constant()
ecorr = parameter.Constant()
# white noise signals
ef = white_signals.MeasurementNoise(efac=efac, selection=selection)
eq = white_signals.EquadNoise(log10_equad=equad, selection=selection)
ec = white_signals.EcorrKernelNoise(log10_ecorr=ecorr, selection=selection)
# combine signals
wn = ef + eq + ec
return wn
def rn_block(prior='log-uniform', Tspan=None):
# red noise parameters
if prior == 'uniform':
log10_A = parameter.LinearExp(-20, -11)
elif prior == 'log-uniform':
log10_A = parameter.Uniform(-20, -11)
else:
raise ValueError('Unknown prior for red noise amplitude!')
gamma = parameter.Uniform(0, 7)
# red noise signal
powlaw = utils.powerlaw(log10_A=log10_A, gamma=gamma)
rn = gp_signals.FourierBasisGP(powlaw, components=30, Tspan=Tspan)
return rn
def bwm_block(t0_param, amp_prior='log-uniform',
skyloc=None, logmin=-18, logmax=-11,
use_k=False, name='bwm'):
# BWM parameters
amp_name = '{}_log10_A'.format(name)
if amp_prior == 'uniform':
log10_A_bwm = parameter.LinearExp(logmin, logmax)(amp_name)
elif amp_prior == 'log-uniform':
log10_A_bwm = parameter.Uniform(logmin, logmax)(amp_name)
pol_name = '{}_pol'.format(name)
pol = parameter.Uniform(0, np.pi)(pol_name)
t0_name = '{}_t0'.format(name)
t0 = t0_param(t0_name)
costh_name = '{}_costheta'.format(name)
phi_name = '{}_phi'.format(name)
if skyloc is None:
costh = parameter.Uniform(-1, 1)(costh_name)
phi = parameter.Uniform(0, 2*np.pi)(phi_name)
else:
costh = parameter.Constant(skyloc[0])(costh_name)
phi = parameter.Constant(skyloc[1])(phi_name)
# BWM signal
if use_k:
k = parameter.Uniform(0,1) # not common, one per PSR
bwm_wf = bwm_delay(log10_h=log10_A_bwm, t0=t0,
cos_gwtheta=costh, gwphi=phi, gwpol=pol,
psrk=k)
else:
bwm_wf = utils.bwm_delay(log10_h=log10_A_bwm, t0=t0,
cos_gwtheta=costh, gwphi=phi, gwpol=pol)
bwm = deterministic_signals.Deterministic(bwm_wf, name=name)
return bwm
# -
# ## build PTA
outdir = '/home/pbaker/nanograv/bwm/tests/11y_dropout'
# !mkdir -p $outdir
# +
amp_prior = 'log-uniform' # for detection
t0_prior = 'anomaly' # use Normal prior on t0
bayesephem = False
# find the maximum time span to set frequency sampling
tmin = np.min([p.toas.min() for p in psrs])
tmax = np.max([p.toas.max() for p in psrs])
Tspan = tmax - tmin
print("Tspan = {:f} sec ~ {:.2f} yr".format(Tspan, Tspan/const.yr))
if t0_prior == 'uniform':
# find clipped prior range for bwm_t0
clip = 0.05 * Tspan
t0min = (tmin + 2*clip)/const.day # don't search in first 10%
t0max = (tmax - clip)/const.day # don't search in last 5%
print("search for t0 in [{:.1f}, {:.1f}] MJD".format(t0min, t0max))
    t0 = parameter.Uniform(t0min, t0max)
elif t0_prior == 'anomaly':
print("search for t0 in [{:.1f} +/- {:.1f}] MJD".format(anomaly_t0, anomaly_dt0))
t0 = parameter.Normal(anomaly_t0, anomaly_dt0)
# +
# white noise
mod = wn_block(vary=False)
# red noise
mod += rn_block(prior=amp_prior, Tspan=Tspan)
# ephemeris model
if bayesephem:
eph = deterministic_signals.PhysicalEphemerisSignal(use_epoch_toas=True)
# timing model
mod += gp_signals.TimingModel(use_svd=False)
# bwm signal
mod += bwm_block(t0,
skyloc = anomaly_skyloc,
logmin=-15,
amp_prior=amp_prior,
use_k=True)
# -
pta = signal_base.PTA([mod(psr) for psr in psrs])
pta.set_default_params(noise_dict)
pta.summary()
# # Sample
# ## sampling groups
# +
# default groupings
groups = get_parameter_groups(pta)
# custom groupings
new_groups = []
# all params
new_groups.append(list(range(len(pta.param_names))))
# per psr params
for psr in pta.pulsars:
this_group = []
for par in pta.param_names:
if psr in par:
this_group.append(pta.param_names.index(par))
new_groups.append(this_group)
# all k params
this_group = []
for par in pta.param_names:
if '_bwm_psrk' in par:
this_group.append(pta.param_names.index(par))
new_groups.append(this_group)
# bwm params
this_group = []
for par in pta.param_names:
if par.startswith('bwm_'):
this_group.append(pta.param_names.index(par))
new_groups.append(this_group)
# -
# ## initial sampler
# +
# dimension of parameter space
x0 = np.hstack([p.sample() for p in pta.params])
ndim = len(x0)
# initial jump covariance matrix
cov = np.diag(np.ones(ndim) * 0.1**2)
sampler = ptmcmc(ndim, pta.get_lnlikelihood, pta.get_lnprior,
cov, groups=new_groups, outDir=outdir, resume=True)
# add prior draws to proposal cycle
jp = JumpProposal(pta)
sampler.addProposalToCycle(jp.draw_from_prior, 5)
sampler.addProposalToCycle(jp.draw_from_red_prior, 10)
sampler.addProposalToCycle(jp.draw_from_bwm_prior, 10)
if bayesephem:
sampler.addProposalToCycle(jp.draw_from_ephem_prior, 10)
# -
# ## save parameter file
outfile = outdir + '/params.txt'
with open(outfile, 'w') as f:
for pname in pta.param_names:
f.write(pname+'\n')
# ## Sample!
# +
N = int(3.0e+06)
sampler.sample(x0, N, SCAMweight=30, AMweight=15, DEweight=50)
# -
| postproc_nb/unsorted/11yr_dropout.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow-gpu]
# language: python
# name: conda-env-tensorflow-gpu-py
# ---
# # Vehicle Detection
#
# The Goal: Create a pipeline that detects cars in a video stream.
# ## Import Important Implementations
# +
import glob
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten, Lambda, Conv2D, MaxPooling2D, Input
# %matplotlib inline
import matplotlib.pylab as plt
# -
# ## Load Likeness Lots
# ### Augment Appearances
#
# Here I define a function that, given an image, returns a set of 4 images: the original image, the original image flipped, and the two previous images zoomed in on a random part of the image. This helps the network better detect cars both in the foreground and the background.
def random_zoom(img):
zoom_width = np.random.randint(32, 48)
zoom_x_offset = np.random.randint(0, 64-zoom_width)
zoom_y_offset = np.random.randint(0, 64-zoom_width)
zoomed = cv2.resize(
img[zoom_x_offset:zoom_x_offset+zoom_width, zoom_y_offset:zoom_y_offset+zoom_width],
(img.shape[1], img.shape[0])
)
return zoomed
def agument(img):
img_flipped = cv2.flip(img, 1)
zoomed = random_zoom(img)
zoomed_flipped = random_zoom(img_flipped)
return img, img_flipped, zoomed, zoomed_flipped
# ### Load Lots
#
# The images are loaded and augmented into `features`, and labels are created and added to `labels`.
# +
# TODO: turn this into a generator so all the images don't have to be loaded at once.
cars = glob.glob('./vehicles/*/*.png')
non_cars = glob.glob('./non-vehicles/*/*.png')
# Load Car Pictures
car_features = []
for path in cars:
img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
car_features.extend(agument(img))
car_features = np.array(car_features)
# Set Car labels to 1s
car_labels = np.ones(car_features.shape[0])
# Load Non-Car Pictures
non_car_features = []
for path in non_cars:
img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
non_car_features.extend(agument(img))
non_car_features = np.array(non_car_features)
non_car_labels = np.zeros(non_car_features.shape[0]) - 1  # non-car labels are -1 (matches the tanh output)
features = np.concatenate((car_features, non_car_features))
labels = np.concatenate((car_labels, non_car_labels))
print('Car shape: {}\nNon-car shape: {}\n\nFeature shape: {}\nLabel shape: {}'.format(
car_features.shape,
non_car_features.shape,
features.shape,
labels.shape
))
# -
# ### Example Exhibit
#
# A random image from the dataset is shown along with its label. (Run this box multiple times to see more than one image.)
idx = np.random.randint(features.shape[0])
plt.title('Label: {}'.format(labels[idx]))
plt.imshow(features[idx])
plt.show()
# ## Split Segments
#
# The training, testing, and validation sets are split.
# +
features_train, features_test, labels_train,labels_test = train_test_split(
features,
labels,
test_size=0.2,
random_state=1
)
features_val, features_test, labels_val, labels_test = train_test_split(
features_test,
labels_test,
test_size=0.5,
random_state=1
)
print('Train Size: {}\nVal Size: {}\nTest Size: {}'.format(
features_train.shape[0],
features_val.shape[0],
features_test.shape[0]
))
# -
# ## Define Model
#
# The Neural Network is defined here.
#
# _Attribution: The idea for using a Conv network like this came from [this github](https://github.com/HTuennermann/Vehicle-Detection-and-Tracking), but I implemented it from scratch._
# +
import tensorflow as tf
tf.reset_default_graph()
def create_model(general=False, drop_rate=0.25):
np.random.seed(42)
if general:
in_layer = Input(shape=(None, None, 3))
else:
in_layer = Input(shape=(64, 64, 3))
#in_layer = Input(shape=shape)
x = Lambda(lambda x: x/127. - 1.)(in_layer)
x = Conv2D(16, (5, 5), activation='elu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(32, (5, 5), activation='elu', padding='valid')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), activation='elu', padding='valid')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(32, (3, 3), activation='elu', padding='valid')(x)
x = Dropout(drop_rate)(x)
x = Conv2D(16, (3, 3), activation='elu', padding='valid')(x)
x = Dropout(drop_rate)(x)
x = Conv2D(1, (2, 2), activation="tanh")(x)
return in_layer, x
# in_layer, out_layer = create_model()
# Model(in_layer, out_layer).summary()
# -
# ## Train Model
#
# The network is then trained on the data loaded/augmented earlier.
# +
in_layer, out_layer = create_model()
out_layer = Flatten()(out_layer)
model = Model(in_layer, out_layer)
model.compile(loss='mse', optimizer='adam', metrics=['acc'])
model.fit(features_train, labels_train, batch_size=256, epochs=10, validation_data=(features_val, labels_val))
print('Test accuracy:', model.evaluate(features_test, labels_test, verbose=0)[1])
model.save_weights('model.h5')
# -
# ### Example Exhibit
#
# A random image from the testing set is run through the network to get a prediction.
# +
idx = np.random.randint(features_test.shape[0])
img = np.array([features_test[idx]])
plt.title('Label: {:0.2f}\nPrediction: {:0.5f}'.format(labels_test[idx], model.predict(img)[0][0]))
plt.imshow(features_test[idx])
plt.show()
# -
# ### Define get_heatmap Function
#
# This function runs the same model as before; however, instead of running it on the 64x64 images and predicting 1 value, it runs it across the entire image and produces a heatmap of how "car"-like each part of the image is.
# +
in_layer, out_layer = create_model(general=True)
model = Model(in_layer, out_layer)
model.load_weights('model.h5')
def get_heatmap(img):
return model.predict(np.array([img]))[0,:,:,0]
# -
# ### Define box_scale Function
#
# This function is used to scale up the heatmap.
# +
### Not used anymore
# def box_scale(img, box_size=32, scale=8, margin=32):
# scaled = np.zeros(shape=(img.shape[0]*scale+margin*2, img.shape[1]*scale+margin*2)).astype('float32')
# for (x, y), value in np.ndenumerate(img):
# x = (x*scale)+margin
# y = (y*scale)+margin
# if value > 0:
# scaled[x-box_size:x+box_size, y-box_size:y+box_size] += value
# return scaled
# -
# ### Define get_labels Function
#
# On the surface, `get_labels` is a wrapper for `scipy.ndimage.measurements.label`, but it actually does a little more than that: it lets me specify how high the peak within a label needs to be (`thresh`) for it to count, and at what threshold the image should be cut (i.e. how low the values must dip between two points for them to be classified as different mountains).
# +
from scipy.ndimage.measurements import label
def get_labels(img, thresh=32, crop_thresh=8, size_min=(20, 20), size_max=(330, 330)):
img = img.copy()
img[img < crop_thresh] = 0
labels, count = label(img)
my_count = 0
my_labels = np.zeros_like(labels)
for car_num in range(1, count+1):
# I don't quite understand advanced indexing, but I fiddled with this till it worked
pixels = img[labels == car_num]
nonzero = (labels == car_num).nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
x_min, x_max = np.amin(nonzerox), np.amax(nonzerox)
y_min, y_max = np.amin(nonzeroy), np.amax(nonzeroy)
x_size = x_max - x_min
y_size = y_max - y_min
if np.amax(pixels) > thresh and x_size > size_min[0] and x_size < size_max[0] and y_size > size_min[1] and y_size < size_max[1]:
my_count += 1
my_labels[labels == car_num] = my_count
return my_labels, my_count
# -
# ## Box Functions
#
# Here I define two functions: `get_boxes`, to get the boxes around each of the labels (as created in the function above), and `draw_boxes`, to draw those boxes onto an image. These are separate functions so that I can adjust the boxes to compensate for the cropping I'll do later.
# +
def get_boxes(labels):
boxes = []
for car_num in range(1, labels[1]+1):
nonzero = (labels[0] == car_num).nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
boxes.append(bbox)
#cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
return boxes
def draw_boxes(img, boxes, color=(0, 0, 255), thickness=6):
img = img.copy()
for box in boxes:
cv2.rectangle(img, (box[0][0], box[0][1]), (box[1][0], box[1][1]), color, thickness)
return img
# -
# ## Steps Separately
# #### Load Likeness
test_image_paths = glob.glob('./test_images/*.jpg')
# Load Image
img_original = cv2.cvtColor(cv2.imread(test_image_paths[4]), cv2.COLOR_BGR2RGB)
plt.imshow(img_original)
# #### Crop Image
# +
img_cropped = img_original[350:680]
plt.imshow(img_cropped)
# -
# #### Calculate Heatmap
# +
heatmap = get_heatmap(img_cropped)
plt.imshow(heatmap)
# -
# #### Scale Heatmap
# +
scaled = cv2.resize(heatmap, (img_cropped.shape[1], img_cropped.shape[0]), cv2.INTER_AREA)
scaled = (scaled+1)*0.5
plt.imshow(scaled)
# -
# #### Apply Blur
# +
kernel = np.ones((16,16),np.float32)/(16**2)
blur = cv2.filter2D(scaled, -1, kernel)
plt.imshow(blur)
# -
# #### Label Blobs
# +
labels = get_labels(blur, thresh=0.8, crop_thresh=0.3)
plt.imshow(labels[0])
# -
# #### Calculate Boxes
# +
boxes = np.array(get_boxes(labels))
boxed = draw_boxes(img_cropped, boxes)
plt.imshow(boxed)
# -
# #### Shift Boxes and Draw on Original Image
# +
# Shift boxes
boxes[:,:,1] += 350
boxed = draw_boxes(img_original, boxes)
plt.imshow(boxed)
# -
# ## Pipeline
#
# Bring it all together! (& add a cache!)
#
# The cache works in a straightforward way: instead of using the heatmap directly, I use a weighted average over the past 8 frames.
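# A quick standalone sketch (toy frames, not the notebook's real heatmaps) of what this
# weighted average does: with weights from `np.linspace(1, 0, num=8)`, the newest frame
# counts fully and the oldest frame's weight is zero.

```python
import numpy as np

cache_weights = np.linspace(1, 0, num=8)  # newest frame -> weight 1.0, oldest -> 0.0
frames = [np.full((2, 2), i, dtype=float) for i in range(8)]  # frame value i, 0 = newest
averaged = np.average(frames, axis=0, weights=cache_weights)
print(averaged)  # every pixel is 2.0: recent (low-valued) frames dominate the average
```

# Because older frames are discounted, a car must persist across several frames before
# (or after) it strongly affects the averaged heatmap, which suppresses one-frame flickers.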
# +
cache_weights = np.linspace(1, 0, num=8)
blur_size = 16
peak_needed = 0.8
valley_between = 0.15
# Values
cache = []
def pipeline(img):
global cache
img_cropped = img[350:680]
# Calculate Heatmap
heatmap = get_heatmap(img_cropped)
# Resize Heatmap
scaled = cv2.resize(heatmap, (img_cropped.shape[1], img_cropped.shape[0]), cv2.INTER_AREA)
# Scale heatmap between 0 & 1
scaled = (scaled+1)*0.5
cache.insert(0, scaled)
cache = cache[:len(cache_weights)]
# Ignore images until cache is filled
if len(cache) < len(cache_weights):
return img
# Average cache based on supplied weights
scaled = np.average(cache, axis=0, weights=cache_weights)
# Blur Heatmap
kernel = np.ones((blur_size,blur_size),np.float32)/(blur_size**2)
blur = cv2.filter2D(scaled, -1, kernel)
# Label heatmap
labels = get_labels(blur, thresh=peak_needed, crop_thresh=valley_between)
# Calculate boxes around labels
boxes = np.array(get_boxes(labels))
if len(boxes) > 0:
# Shift boxes to account for cropping
boxes[:,:,1] += 350
# Draw Boxes
boxed = draw_boxes(img, boxes)
else:
boxed = img
return boxed
# -
test_image_paths = glob.glob('./test_images/*.jpg')
for _ in range(len(cache_weights)):
plt.imshow(pipeline(cv2.cvtColor(cv2.imread(test_image_paths[0]), cv2.COLOR_BGR2RGB)))
cache = []
# +
from moviepy.editor import VideoFileClip
clip = VideoFileClip("project_video.mp4")
out_clip = clip.fl_image(pipeline)
out_clip.write_videofile('./output_video.mp4', audio=False)
# -
| Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.6 64-bit (''dksalaries'': conda)'
# name: python3
# ---
# +
import json
from pathlib import Path
import requests
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "59d410a8-ae3c-4da1-89fd-65d802f85ab5", "showTitle": false, "title": ""}
# # DK Object Model
#
# Partially documents the object model in the DraftKings API (skipping unused elements)
#
# * GetContestsDocument: (getcontests.json)
#
#
# * contests: list of ContestDocument (contest.json)
#
#
# * draft_groups: list of DraftGroupDocument (draftgroup.json)
#
#
# * game_sets: list of GameSetDocument (gamesets.json)
#
#
# * game_types: list of GameTypeDocument (gametypes.json)
#
#
# * DraftablesDocument: https://api.draftkings.com/draftgroups/v1/draftgroups/53019/draftables?format=json
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "9b0f3c4d-255c-4c0b-b4d3-79defe66f4a9", "showTitle": false, "title": ""}
from dksalaries.util import attr_boiler, camel_to_snake
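# `camel_to_snake` comes from this project's util module, so its exact behavior isn't shown
# here; a typical regex-based implementation of that kind of helper (an assumption about its
# behavior, not the library's actual code) looks like:

```python
import re

def camel_to_snake_sketch(name: str) -> str:
    """Convert CamelCase API field names (e.g. 'DraftGroupId') to snake_case."""
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)  # split before capitalized words
    s = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s)    # split lowercase/digit before uppercase
    return s.lower()

print(camel_to_snake_sketch('DraftGroupId'))  # draft_group_id
print(camel_to_snake_sketch('GameSetKey'))    # game_set_key
```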
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "91c86f05-4f97-4cc3-9e92-97a4a16e131c", "showTitle": false, "title": ""}
# ## ContestDocument
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "1f0fe87d-2e16-4987-ada7-083c0e76300f", "showTitle": false, "title": ""}
cdoc = {
'uc': 0,
'ec': 0,
'mec': 150,
'fpp': 10,
's': 1,
'n': 'NFL Showdown $4M Thursday Kickoff Millionaire [$1M to 1st + ToC Entry] (DAL vs TB)',
'attr': {'IsGuaranteed': 'true',
'LobbyClass': 'icon-millionaire',
'IsStarred': 'true',
'IsTournamentOfChamp': 'true'},
'nt': 123694,
'm': 473065,
'a': 10,
'po': 4021052.63,
'pd': {'Cash': '$4,000,000', 'LiveFinalSeat': '1 Live Final Seat'},
'tix': False,
'sdstring': 'Thu 8:20PM',
'sd': '/Date(1631233200000)/',
'id': 111604182,
'tmpl': 385514,
'pt': 1,
'so': -99999999,
'fwt': False,
'isOwner': False,
'startTimeType': 0,
'dg': 53018,
'ulc': 1,
'cs': 1,
'gameType': 'Showdown Captain Mode',
'ssd': None,
'dgpo': 8007629.3,
'cso': 0,
'ir': 2,
'rl': False,
'rlc': 0,
'rll': 99999,
'sa': True,
'freeWithCrowns': False,
'crownAmount': 5500,
'isBonusFinalized': False,
'isSnakeDraft': False
}
# -
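# Note that the `sd` and `MinStartTime` fields use the legacy ASP.NET JSON date format,
# `/Date(<milliseconds since epoch>)/`. A sketch for decoding it (the helper name is mine,
# not part of the DK API or this project):

```python
import re
from datetime import datetime, timezone

def parse_dk_date(value: str) -> datetime:
    """Decode an ASP.NET-style '/Date(1631233200000)/' string to an aware UTC datetime."""
    ms = int(re.search(r'/Date\((-?\d+)\)/', value).group(1))
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(parse_dk_date('/Date(1631233200000)/'))  # 2021-09-10 00:20:00+00:00
```

# That instant is Thursday 8:20 PM US Eastern, matching the contest's `sdstring` above.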
# ## DraftGroupDocument
dgdoc = {'DraftGroupId': 53019,
'ContestTypeId': 21,
'StartDate': '2021-09-12T17:00:00.0000000Z',
'StartDateEst': '2021-09-12T13:00:00.0000000',
'SortOrder': 1,
'DraftGroupTag': 'Featured',
'GameTypeId': 1,
'GameType': None,
'SportSortOrder': 0,
'Sport': 'NFL',
'GameCount': 13,
'ContestStartTimeSuffix': None,
'ContestStartTimeType': 0,
'Games': None,
'DraftGroupSeriesId': 2,
'GameSetKey': 'FBE061E5C4BADEC29A2BF302DE6DC97A',
'AllowUGC': True}
# ## GameSetDocument
gsdoc = {'GameSetKey': 'FBE061E5C4BADEC29A2BF302DE6DC97A',
'ContestStartTimeSuffix': None,
'Competitions': [{'GameId': 5745133,
'AwayTeamId': 354,
'HomeTeamId': 323,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Atlanta',
'AwayTeamCity': 'Philadelphia',
'HomeTeamName': 'Falcons',
'AwayTeamName': 'Eagles',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Atlanta',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'PHI @ ATL',
'FullDescription': '<NAME> @ Atlanta Falcons',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745139,
'AwayTeamId': 356,
'HomeTeamId': 324,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Buffalo',
'AwayTeamCity': 'Pittsburgh',
'HomeTeamName': 'Bills',
'AwayTeamName': 'Steelers',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Buffalo',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'PIT @ BUF',
'FullDescription': 'Pittsburgh Steelers @ Buffalo Bills',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745145,
'AwayTeamId': 347,
'HomeTeamId': 327,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Cincinnati',
'AwayTeamCity': 'Minnesota',
'HomeTeamName': 'Bengals',
'AwayTeamName': 'Vikings',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Cincinnati',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'MIN @ CIN',
'FullDescription': 'Minnesota Vikings @ Cincinnati Bengals',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745151,
'AwayTeamId': 359,
'HomeTeamId': 334,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Detroit',
'AwayTeamCity': 'San Francisco',
'HomeTeamName': 'Lions',
'AwayTeamName': '49ers',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Detroit',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'SF @ DET',
'FullDescription': 'San Francisco 49ers @ Detroit Lions',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745157,
'AwayTeamId': 355,
'HomeTeamId': 336,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Tennessee',
'AwayTeamCity': 'Arizona',
'HomeTeamName': 'Titans',
'AwayTeamName': 'Cardinals',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Tennessee',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'ARI @ TEN',
'FullDescription': 'Arizona Cardinals @ Tennessee Titans',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745163,
'AwayTeamId': 361,
'HomeTeamId': 338,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Indianapolis',
'AwayTeamCity': 'Seattle',
'HomeTeamName': 'Colts',
'AwayTeamName': 'Seahawks',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Indianapolis',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'SEA @ IND',
'FullDescription': 'Seattle Seahawks @ Indianapolis Colts',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745169,
'AwayTeamId': 357,
'HomeTeamId': 363,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Washington',
'AwayTeamCity': 'Los Angeles',
'HomeTeamName': 'Football Team',
'AwayTeamName': 'Chargers',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Washington',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'LAC @ WAS',
'FullDescription': 'Los Angeles Chargers @ Washington Football Team',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745175,
'AwayTeamId': 352,
'HomeTeamId': 364,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Carolina',
'AwayTeamCity': 'New York',
'HomeTeamName': 'Panthers',
'AwayTeamName': 'Jets',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Carolina',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'NYJ @ CAR',
'FullDescription': 'New York Jets @ Carolina Panthers',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745181,
'AwayTeamId': 365,
'HomeTeamId': 325,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Houston',
'AwayTeamCity': 'Jacksonville',
'HomeTeamName': 'Texans',
'AwayTeamName': 'Jaguars',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Houston',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'JAX @ HOU',
'FullDescription': '<NAME> @ Houston Texans',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1},
{'GameId': 5745187,
'AwayTeamId': 329,
'HomeTeamId': 339,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Kansas City',
'AwayTeamCity': 'Cleveland',
'HomeTeamName': 'Chiefs',
'AwayTeamName': 'Browns',
'StartDate': '2021-09-12T20:25:00.0000000Z',
'Location': 'Kansas City',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'CLE @ KC',
'FullDescription': 'Cleveland Browns @ Kansas City Chiefs',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 0,
'AwayTeamCompetitionOrdinal': 0,
'HomeTeamCompetitionCount': 0,
'AwayTeamCompetitionCount': 0},
{'GameId': 5745193,
'AwayTeamId': 345,
'HomeTeamId': 348,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'New England',
'AwayTeamCity': 'Miami',
'HomeTeamName': 'Patriots',
'AwayTeamName': 'Dolphins',
'StartDate': '2021-09-12T20:25:00.0000000Z',
'Location': 'New England',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'MIA @ NE',
'FullDescription': '<NAME> @ New England Patriots',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 0,
'AwayTeamCompetitionOrdinal': 0,
'HomeTeamCompetitionCount': 0,
'AwayTeamCompetitionCount': 0},
{'GameId': 5745199,
'AwayTeamId': 335,
'HomeTeamId': 350,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'New Orleans',
'AwayTeamCity': 'Green Bay',
'HomeTeamName': 'Saints',
'AwayTeamName': 'Packers',
'StartDate': '2021-09-12T20:25:00.0000000Z',
'Location': 'New Orleans',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'GB @ NO',
'FullDescription': 'Green Bay Packers @ New Orleans Saints',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 0,
'AwayTeamCompetitionOrdinal': 0,
'HomeTeamCompetitionCount': 0,
'AwayTeamCompetitionCount': 0},
{'GameId': 5745205,
'AwayTeamId': 332,
'HomeTeamId': 351,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'New York',
'AwayTeamCity': 'Denver',
'HomeTeamName': 'Giants',
'AwayTeamName': 'Broncos',
'StartDate': '2021-09-12T20:25:00.0000000Z',
'Location': 'New York',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'DEN @ NYG',
'FullDescription': '<NAME> @ New York Giants',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 0,
'AwayTeamCompetitionOrdinal': 0,
'HomeTeamCompetitionCount': 0,
'AwayTeamCompetitionCount': 0}],
'GameStyles': [{'GameStyleId': 1,
'SportId': 1,
'SortOrder': 2,
'Name': 'Classic',
'Abbreviation': 'CLA',
'Description': 'Create a 9-player lineup while staying under the $50,000 salary cap',
'IsEnabled': True,
'Attributes': None},
{'GameStyleId': 120,
'SportId': 1,
'SortOrder': 10,
'Name': 'Snake',
'Abbreviation': 'SNKB',
'Description': 'Snake draft a 7-player lineup',
'IsEnabled': True,
'Attributes': None}],
'SortOrder': 1,
'MinStartTime': '/Date(1631466000000)/',
'Tag': ''}
# ## GameTypeDocument
gtdoc = {'GameTypeId': 1,
'Name': 'Classic',
'Description': 'Create a 9-player lineup while staying under the $50,000 salary cap',
'Tag': '',
'SportId': 1,
'DraftType': 'SalaryCap',
'GameStyle': {'GameStyleId': 1,
'SportId': 1,
'SortOrder': 2,
'Name': 'Classic',
'Abbreviation': 'CLA',
'Description': 'Create a 9-player lineup while staying under the $50,000 salary cap',
'IsEnabled': True,
'Attributes': None},
'IsSeasonLong': False}
# ## CompetitionDocument
compdoc = {
'GameId': 5745133,
'AwayTeamId': 354,
'HomeTeamId': 323,
'HomeTeamScore': 0,
'AwayTeamScore': 0,
'HomeTeamCity': 'Atlanta',
'AwayTeamCity': 'Philadelphia',
'HomeTeamName': 'Falcons',
'AwayTeamName': 'Eagles',
'StartDate': '2021-09-12T17:00:00.0000000Z',
'Location': 'Atlanta',
'LastPlay': None,
'TeamWithPossession': 0,
'TimeRemainingStatus': 'Pre-Game',
'Sport': 'NFL',
'Status': 'Pre-Game',
'Description': 'PHI @ ATL',
'FullDescription': '<NAME> @ Atlanta Falcons',
'ExceptionalMessages': [],
'SeriesType': 0,
'NumberOfGamesInSeries': 1,
'SeriesInfo': None,
'HomeTeamCompetitionOrdinal': 1,
'AwayTeamCompetitionOrdinal': 1,
'HomeTeamCompetitionCount': 1,
'AwayTeamCompetitionCount': 1
}
# ## DraftablesDocument
url = 'https://raw.githubusercontent.com/sansbacon/dksalaries/main/tests/data/draftables.json'
dt = requests.get(url).json()
dt.keys()
dt['draftables'][0]
| notebooks/dk_object_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stable Matching
#
# (For [_Markets, Mechanisms, Machines_, Class 7](http://uvammm.github.io/class7).)
#
# ## Gale-Shapley Algorithm
def gale_shapley(A, B):
# This is non-defensively programmed for simplicity, but should include lots of checks and assertions.
pairings = {}
unpaired = set(a for a in A.keys())
proposals = {a: 0 for a in A.keys()}
while unpaired:
a = unpaired.pop()
ap = A[a]
choice = ap[proposals[a]]
proposals[a] += 1
if choice in pairings:
amatch = pairings[choice]
bp = B[choice]
if bp.index(a) < bp.index(amatch):
pairings[choice] = a
unpaired.add(amatch)
else:
unpaired.add(a)
else:
pairings[choice] = a
return [(a, b) for (b, a) in pairings.items()]
# The inputs are two sets with the same number of elements. Each set element is a name, followed by a list of that name's matches (in the opposite set), in preference order. We apologize in advance for the traditional sexist and speciesist assumptions in our example, not to mention the flawed understanding of the deeper subtleties of _Frozen_ these preference lists reveal.
frozen_females = {"Anna": ["Kristoff", "Olaf"],
"Elsa": ["Olaf", "Kristoff"]}
frozen_others = {"Kristoff": ["Anna", "Elsa"],
"Olaf": ["Elsa", "Anna"]}
gale_shapley(frozen_females, frozen_others)
gale_shapley(frozen_others, frozen_females)
# In this case, the results are the same. But the matching algorithm is asymmetric. For example:
A = {"Alice": ["Bob", "Billy", "Brian"],
"Amy": ["Billy", "Bob", "Brian"],
"Alyssa": ["Bob", "Brian", "Billy"]}
B = {"Bob": ["Alice", "Amy", "Alyssa"],
"Billy": ["Alice", "Amy", "Alyssa"],
"Brian": ["Alyssa", "Amy", "Alice"]}
gale_shapley(A, B)
gale_shapley(B, A)
Students = {"Avery": ["Computer Science", "Economics", "Music"],
"Blake": ["Music", "Computer Science", "Economics"],
"Corey": ["Economics", "Music", "Computer Science"] }
Majors = {"Computer Science": ["Blake", "Corey", "Avery"],
"Economics": ["Corey", "Avery", "Blake"],
"Music": ["Avery", "Blake", "Corey"]}
gale_shapley(Students, Majors)
gale_shapley(Majors, Students)
# Here the two runs differ: in each one, every member of the proposing side receives its first choice, illustrating that Gale-Shapley favors the proposers.
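# A small checker makes "stable" concrete: a matching is unstable if some unmatched pair
# both prefer each other over their assigned partners. This sketch assumes pairings in the
# (proposer, reviewer) tuple form that `gale_shapley` above returns.

```python
def is_stable(pairs, proposers, reviewers):
    """pairs: list of (proposer, reviewer) tuples. True if no blocking pair exists."""
    partner_of_p = dict(pairs)
    partner_of_r = {r: p for p, r in pairs}
    for p, prefs in proposers.items():
        for r in prefs:
            if r == partner_of_p[p]:
                break  # p likes its own partner at least this much; stop scanning
            if reviewers[r].index(p) < reviewers[r].index(partner_of_r[r]):
                return False  # p and r both prefer each other: blocking pair
    return True

students = {"Avery": ["Computer Science", "Economics", "Music"],
            "Blake": ["Music", "Computer Science", "Economics"],
            "Corey": ["Economics", "Music", "Computer Science"]}
majors = {"Computer Science": ["Blake", "Corey", "Avery"],
          "Economics": ["Corey", "Avery", "Blake"],
          "Music": ["Avery", "Blake", "Corey"]}

# Student-optimal matching (students propose): every student gets their first choice.
print(is_stable([("Avery", "Computer Science"), ("Blake", "Music"),
                 ("Corey", "Economics")], students, majors))  # True
# Giving every student their second choice is blocked by the pair (Blake, Music).
print(is_stable([("Avery", "Economics"), ("Blake", "Computer Science"),
                 ("Corey", "Music")], students, majors))      # False
```

# Checking all six perfect matchings of this instance this way shows that exactly two are stable: the student-optimal and the major-optimal one.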
| stablematching.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import Necessary Libraries
import pandas as pd  # data analysis library
import numpy as np  # numerical operations
import matplotlib.pyplot as plt  # basic visualization
import seaborn as sns  # statistical visualization
from sklearn.linear_model import LogisticRegression  # logistic regression model
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, plot_roc_curve  # classification metrics
import warnings
from sklearn.preprocessing import StandardScaler  # scale input variables to the same scale
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split, GridSearchCV  # split train/test data and tune hyperparameters
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from sklearn.svm import SVC
pd.set_option('display.max_columns', None)  # display all columns in the data
warnings.filterwarnings('ignore')  # suppress warnings
# %matplotlib inline
# #### Import Data
diabetes = pd.read_csv('https://raw.githubusercontent.com/susanli2016/Machine-Learning-with-Python/master/diabetes.csv')
diabetes.head()
diabetes.info()
# Looks like there are no null values
# #### EDA
diabetes['Pregnancies'].value_counts()
sns.countplot(x= 'Pregnancies', hue = 'Outcome', data = diabetes)
diabetes['Glucose'].nunique()
diabetes['Glucose'].plot.kde()
diabetes['BloodPressure'].nunique()
diabetes['BloodPressure'].plot.kde()
diabetes['SkinThickness'].nunique()
sns.scatterplot(x = diabetes['Glucose'], y = diabetes['DiabetesPedigreeFunction'])
diabetes['DiabetesPedigreeFunction']
diabetes[(diabetes['Age']==0)]
diabetes.describe()
sns.boxplot(x = diabetes['BMI'])
def filling(col):
median = diabetes[col].median()
diabetes[col]=diabetes[col].replace(0,median)
filling('BloodPressure')
diabetes['SkinThickness'].describe()
filling('Glucose')
filling('SkinThickness')
filling('Insulin')
filling('BMI')
diabetes['Outcome'].value_counts()
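# The outcome classes are imbalanced (about 500 negatives to 268 positives), which is why a
# hand-picked `class_weight` appears in the logistic regression below. A common alternative
# heuristic (an assumption, not what this notebook uses) is the "balanced" rule
# `n_samples / (n_classes * n_c)`:

```python
import numpy as np

y = np.array([0] * 500 + [1] * 268)  # class counts like this dataset's Outcome column
classes, counts = np.unique(y, return_counts=True)
balanced = {int(c): y.size / (classes.size * n) for c, n in zip(classes, counts)}
print(balanced)  # {0: 0.768, 1: ~1.43}: the minority class is weighted up
```

# Passing such a dict as `class_weight` penalizes mistakes on the minority class more heavily.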
# #### Scaling
sc = StandardScaler()
X_scaled = sc.fit_transform(diabetes.drop('Outcome', axis =1))
# #### Splitting
X_train, X_test, y_train, y_test = train_test_split(X_scaled, diabetes['Outcome'], test_size=0.3, random_state=123)
# #### Modelling
def metrics(y_true, y_pred):
print('Confusion Matrix:\n', confusion_matrix(y_true, y_pred))
print('\n\nAccuracy Score:\n', accuracy_score(y_true, y_pred))
print('\n\nClassification Report: \n', classification_report(y_true, y_pred))
def predictions(model,X_train, X_test, y_train, y_test):
model.fit(X_train, y_train)
#predictions
train_pred = model.predict(X_train)
test_pred = model.predict(X_test)
plot_roc_curve(model, X_train, y_train)
plt.show()
actual = [y_train, y_test]
pred = [train_pred, test_pred]
for i in range(0,2):
if i==0:
print('----Train Metrics----')
else:
print('----Test Metrics----')
metrics(actual[i], pred[i])
# #### Logistic Regression
lg_norm = LogisticRegression()
predictions(lg_norm, X_train, X_test, y_train, y_test)
lg = LogisticRegression(penalty='l2', class_weight={0:0.5, 1:0.7})
predictions(lg, X_train, X_test, y_train, y_test)
# #### KNN
knn = KNeighborsClassifier(n_neighbors=9,n_jobs=-1)
predictions(knn, X_train, X_test, y_train, y_test)
# #### Naive Bayes
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
predictions(nb, X_train, X_test, y_train, y_test)
# #### Random forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
predictions(rf, X_train, X_test, y_train, y_test)
# #### ADABOOST
ada = AdaBoostClassifier()
predictions(ada, X_train, X_test, y_train, y_test)
# ### XGBoost
xgb = XGBClassifier()
predictions(xgb, X_train, X_test, y_train, y_test)
# **Finding best parameter by using Grid Search**
params = {
'learning_rate' : [0.02,0.05, 0.08],
'max_depth' : [3, 4, 5, 6, 8],
'min_child_weight': [1, 3, 5],
    'gamma' : [0.0,0.1,0.2],  # keep below 1
    'colsample_bytree':[0.3,0.4,0.5]  # keep below 1
}
grid = GridSearchCV(xgb, params, n_jobs=-1, verbose=2)
grid.fit(X_train, y_train)
grid.best_params_
best_xgb=XGBClassifier(colsample_bytree=0.5,gamma=0.0,learning_rate=0.05,max_depth=3,min_child_weight=1)
predictions(best_xgb, X_train, X_test, y_train, y_test)
# #### CATBoost
cat = CatBoostClassifier()
predictions(cat, X_train, X_test, y_train, y_test)
params_cat = {
'learning_rate' : [0.02,0.05, 0.07],
'max_depth' : [3, 4, 5, 6, 8],
'min_child_samples': [1, 3, 5],
'l2_leaf_reg':[5,10,15]
}
grid_cat = GridSearchCV(cat, params_cat, cv=3, n_jobs=-1)
grid_cat.fit(X_train,y_train)
grid_cat.best_params_
best_cat = CatBoostClassifier(l2_leaf_reg=5, learning_rate=0.02, max_depth=3, min_child_samples=1)
predictions(best_cat, X_train, X_test, y_train, y_test)
# ### SVM
# +
svm = SVC()
predictions(svm, X_train, X_test, y_train, y_test)
| Classification Problems/05-Diabetes Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.9 64-bit
# name: python3
# ---
# # Goodies of the [Python Standard Library](https://docs.python.org/3/library/#the-python-standard-library)
# The Python Standard Library is part of your Python installation. It contains a wide range of packages which may be helpful while building your Python masterpieces. This notebook lists some of the commonly used packages and their main functionalities.
# ## [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime) for working with dates and times
# +
import datetime as dt
local_now = dt.datetime.now()
print('local now: {}'.format(local_now))
utc_now = dt.datetime.utcnow()
print('utc now: {}'.format(utc_now))
# You can access any value separately:
print('{} {} {} {} {} {}'.format(local_now.year, local_now.month,
local_now.day, local_now.hour,
local_now.minute, local_now.second))
print('date: {}'.format(local_now.date()))
print('time: {}'.format(local_now.time()))
# -
# ### `strftime()`
# For string formatting the `datetime`
# +
formatted1 = local_now.strftime('%Y/%m/%d-%H:%M:%S')
print(formatted1)
formatted2 = local_now.strftime('date: %Y-%m-%d time:%H:%M:%S')
print(formatted2)
# -
# ### `strptime()`
# For converting a datetime string into a `datetime` object
my_dt = dt.datetime.strptime('2000-01-01 10:00:00', '%Y-%m-%d %H:%M:%S')
print('my_dt: {}'.format(my_dt))
# ### [`timedelta`](https://docs.python.org/3/library/datetime.html#timedelta-objects)
# For working with time difference.
# +
tomorrow = local_now + dt.timedelta(days=1)
print('tomorrow this time: {}'.format(tomorrow))
delta = tomorrow - local_now
print('tomorrow - now = {}'.format(delta))
print('days: {}, seconds: {}'.format(delta.days, delta.seconds))
print('total seconds: {}'.format(delta.total_seconds()))
# -
# ### Working with timezones
# Let's first make sure [`pytz`](http://pytz.sourceforge.net/) is installed.
import sys
# !{sys.executable} -m pip install pytz
# +
import datetime as dt
import pytz
naive_utc_now = dt.datetime.utcnow()
print('naive utc now: {}, tzinfo: {}'.format(naive_utc_now, naive_utc_now.tzinfo))
# Localizing naive datetimes
UTC_TZ = pytz.timezone('UTC')
utc_now = UTC_TZ.localize(naive_utc_now)
print('utc now: {}, tzinfo: {}'.format(utc_now, utc_now.tzinfo))
# Converting localized datetimes to different timezone
PARIS_TZ = pytz.timezone('Europe/Paris')
paris_now = PARIS_TZ.normalize(utc_now)
print('Paris: {}, tzinfo: {}'.format(paris_now, paris_now.tzinfo))
NEW_YORK_TZ = pytz.timezone('America/New_York')
ny_now = NEW_YORK_TZ.normalize(utc_now)
print('New York: {}, tzinfo: {}'.format(ny_now, ny_now.tzinfo))
# -
# **NOTE**: If your project uses datetimes heavily, you may want to take a look at external libraries, such as [Pendulum](https://pendulum.eustace.io/docs/) and [Maya](https://github.com/kennethreitz/maya), which make working with datetimes easier for certain use cases.
# ## [`logging`](https://docs.python.org/3/library/logging.html#module-logging)
# +
import logging
# Handy way for getting a dedicated logger for every module separately
logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)
logger.debug('This is debug')
logger.info('This is info')
logger.warning('This is warning')
logger.error('This is error')
logger.critical('This is critical')
# -
# ### Logging exceptions
# There's a neat `exception` function in the `logging` module which will automatically log the stack trace in addition to the user-defined log entry.
try:
path_calculation = 1 / 0
except ZeroDivisionError:
logging.exception('All went south in my calculation')
# ### Formatting log entries
# +
import logging
# This is only required for Jupyter notebook environment
from importlib import reload
reload(logging)
my_format = '%(asctime)s | %(name)-12s | %(levelname)-10s | %(message)s'
logging.basicConfig(format=my_format)
logger = logging.getLogger('MyLogger')
logger.warning('Something bad is going to happen')
logger.error('Uups, it already happened')
# -
# ### Logging to a file
# +
import os
import logging
# This is only required for Jupyter notebook environment
from importlib import reload
reload(logging)
logger = logging.getLogger('MyFileLogger')
# Let's define a file_handler for our logger
log_path = os.path.join(os.getcwd(), 'my_log.txt')
file_handler = logging.FileHandler(log_path)
# And a nice format
formatter = logging.Formatter('%(asctime)s | %(name)-12s | %(levelname)-10s | %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
# If you want to see it also in the console, add another handler for it
# logger.addHandler(logging.StreamHandler())
logger.warning('Oops something is going to happen')
logger.error('<NAME> visits our place')
# -
# ## [`random`](https://docs.python.org/3/library/random.html) for random number generation
# +
import random
rand_int = random.randint(1, 100)
print('random integer between 1-100: {}'.format(rand_int))
rand = random.random()
print('random float between 0-1: {}'.format(rand))
# -
# The numbers from `random` are pseudorandom; if you need reproducible output, set the `seed`. This will reproduce the same sequence every run (try running the cell multiple times):
# +
import random
random.seed(5) # Setting the seed
# Let's print 10 random numbers
for _ in range(10):
print(random.random())
# -
# ## [`re`](https://docs.python.org/3/library/re.html#module-re) for regular expressions
# ### Searching occurrences
# +
import re
secret_code = 'qwret 8sfg12f5 fd09f_df'
# "r" at the beginning means raw format, use it with regular expression patterns
search_pattern = r'(g12)'
match = re.search(search_pattern, secret_code)
print('match: {}'.format(match))
print('match.group(): {}'.format(match.group()))
numbers_pattern = r'[0-9]'
numbers_match = re.findall(numbers_pattern, secret_code)
print('numbers: {}'.format(numbers_match))
# -
# ### Variable validation
# +
import re
def validate_only_lower_case_letters(to_validate):
pattern = r'^[a-z]+$'
return bool(re.match(pattern, to_validate))
print(validate_only_lower_case_letters('thisshouldbeok'))
print(validate_only_lower_case_letters('thisshould notbeok'))
print(validate_only_lower_case_letters('Thisshouldnotbeok'))
print(validate_only_lower_case_letters('thisshouldnotbeok1'))
print(validate_only_lower_case_letters(''))
| notebooks/beginner/notebooks/std_lib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:deeprl]
# language: python
# name: conda-env-deeprl-py
# ---
# # Chapter 8 - Intrinsic Curiosity Module
# #### Deep Reinforcement Learning *in Action*
# ##### Listing 8.1
import gym
from nes_py.wrappers import JoypadSpace #A
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT, COMPLEX_MOVEMENT #B
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = JoypadSpace(env, COMPLEX_MOVEMENT) #C
done = True
for step in range(2500): #D
if done:
state = env.reset()
state, reward, done, info = env.step(env.action_space.sample())
env.render()
env.close()
# ##### Listing 8.2
# +
import matplotlib.pyplot as plt
from skimage.transform import resize #A
import numpy as np
def downscale_obs(obs, new_size=(42,42), to_gray=True):
if to_gray:
return resize(obs, new_size, anti_aliasing=True).max(axis=2) #B
else:
return resize(obs, new_size, anti_aliasing=True)
# -
plt.imshow(env.render("rgb_array"))
plt.imshow(downscale_obs(env.render("rgb_array")))
# ##### Listing 8.4
# +
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from collections import deque
def prepare_state(state): #A
return torch.from_numpy(downscale_obs(state, to_gray=True)).float().unsqueeze(dim=0)
def prepare_multi_state(state1, state2): #B
state1 = state1.clone()
tmp = torch.from_numpy(downscale_obs(state2, to_gray=True)).float()
state1[0][0] = state1[0][1]
state1[0][1] = state1[0][2]
state1[0][2] = tmp
return state1
def prepare_initial_state(state,N=3): #C
state_ = torch.from_numpy(downscale_obs(state, to_gray=True)).float()
tmp = state_.repeat((N,1,1))
return tmp.unsqueeze(dim=0)
# -
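The rolling three-frame state that `prepare_multi_state` maintains can be sketched framework-free with NumPy: drop the oldest frame, shift the rest, and append the newest (a toy 3×2×2 buffer, not the notebook's tensors):

```python
import numpy as np

def push_frame(stack, frame):
    """Return a new stack with the oldest frame dropped and `frame` appended."""
    new_stack = np.empty_like(stack)
    new_stack[:-1] = stack[1:]   # shift frames 1..N-1 down one slot
    new_stack[-1] = frame        # newest frame goes last
    return new_stack

stack = np.zeros((3, 2, 2))      # 3 frames of 2x2 "pixels"
for t in range(1, 5):
    stack = push_frame(stack, np.full((2, 2), t))
print(stack[:, 0, 0])  # [2. 3. 4.] -- the three most recent frames
```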
# ##### Listing 8.5
def policy(qvalues, eps=None): #A
if eps is not None:
if torch.rand(1) < eps:
return torch.randint(low=0,high=7,size=(1,))
else:
return torch.argmax(qvalues)
else:
        return torch.multinomial(F.softmax(F.normalize(qvalues), dim=1), num_samples=1) #B
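The epsilon-greedy branch of the policy above can be illustrated without PyTorch (a toy sketch; `qvalues` here is a plain list rather than a tensor):

```python
import random

def epsilon_greedy(qvalues, eps):
    """With probability eps pick a random action, otherwise the greedy one."""
    if random.random() < eps:
        return random.randrange(len(qvalues))
    return max(range(len(qvalues)), key=lambda a: qvalues[a])

random.seed(0)
actions = [epsilon_greedy([0.1, 0.9, 0.3], eps=0.15) for _ in range(1000)]
# action 1 dominates (it is greedy), with roughly 15% exploration mixed in
print(actions.count(1) / len(actions))
```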
# ##### Listing 8.6
# +
from random import shuffle
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
class ExperienceReplay:
def __init__(self, N=500, batch_size=100):
self.N = N #A
self.batch_size = batch_size #B
self.memory = []
self.counter = 0
def add_memory(self, state1, action, reward, state2):
self.counter +=1
if self.counter % 500 == 0: #C
self.shuffle_memory()
if len(self.memory) < self.N: #D
self.memory.append( (state1, action, reward, state2) )
else:
rand_index = np.random.randint(0,self.N-1)
self.memory[rand_index] = (state1, action, reward, state2)
def shuffle_memory(self): #E
shuffle(self.memory)
def get_batch(self): #F
if len(self.memory) < self.batch_size:
batch_size = len(self.memory)
else:
batch_size = self.batch_size
if len(self.memory) < 1:
print("Error: No data in memory.")
return None
#G
ind = np.random.choice(np.arange(len(self.memory)),batch_size,replace=False)
batch = [self.memory[i] for i in ind] #batch is a list of tuples
state1_batch = torch.stack([x[0].squeeze(dim=0) for x in batch],dim=0)
action_batch = torch.Tensor([x[1] for x in batch]).long()
reward_batch = torch.Tensor([x[2] for x in batch])
state2_batch = torch.stack([x[3].squeeze(dim=0) for x in batch],dim=0)
return state1_batch, action_batch, reward_batch, state2_batch
# -
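The replacement rule in `add_memory` above (append until full, then overwrite a random slot) keeps the buffer roughly a uniform sample of recent experience without ever growing past capacity. A minimal pure-Python sketch of the same idea:

```python
import random

class TinyReplay:
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []

    def add(self, item):
        if len(self.memory) < self.capacity:
            self.memory.append(item)        # still filling up
        else:
            idx = random.randrange(self.capacity)
            self.memory[idx] = item         # overwrite a random old entry

buf = TinyReplay(capacity=5)
for i in range(100):
    buf.add(i)
print(len(buf.memory))  # 5 -- never grows past capacity
```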
# ##### Listing 8.7
# +
class Phi(nn.Module): #A
def __init__(self):
super(Phi, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
def forward(self,x):
x = F.normalize(x)
y = F.elu(self.conv1(x))
y = F.elu(self.conv2(y))
y = F.elu(self.conv3(y))
y = F.elu(self.conv4(y)) #size [1, 32, 3, 3] batch, channels, 3 x 3
y = y.flatten(start_dim=1) #size N, 288
return y
class Gnet(nn.Module): #B
def __init__(self):
super(Gnet, self).__init__()
self.linear1 = nn.Linear(576,256)
self.linear2 = nn.Linear(256,12)
def forward(self, state1,state2):
x = torch.cat( (state1, state2) ,dim=1)
y = F.relu(self.linear1(x))
y = self.linear2(y)
y = F.softmax(y,dim=1)
return y
class Fnet(nn.Module): #C
def __init__(self):
super(Fnet, self).__init__()
self.linear1 = nn.Linear(300,256)
self.linear2 = nn.Linear(256,288)
def forward(self,state,action):
action_ = torch.zeros(action.shape[0],12) #D
indices = torch.stack( (torch.arange(action.shape[0]), action.squeeze()), dim=0)
indices = indices.tolist()
action_[indices] = 1.
x = torch.cat( (state,action_) ,dim=1)
y = F.relu(self.linear1(x))
y = self.linear2(y)
return y
# -
# ##### Listing 8.8
class Qnetwork(nn.Module):
def __init__(self):
super(Qnetwork, self).__init__()
#in_channels, out_channels, kernel_size, stride=1, padding=0
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=(3,3), stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.linear1 = nn.Linear(288,100)
self.linear2 = nn.Linear(100,12)
def forward(self,x):
x = F.normalize(x)
y = F.elu(self.conv1(x))
y = F.elu(self.conv2(y))
y = F.elu(self.conv3(y))
y = F.elu(self.conv4(y))
y = y.flatten(start_dim=2)
y = y.view(y.shape[0], -1, 32)
y = y.flatten(start_dim=1)
y = F.elu(self.linear1(y))
y = self.linear2(y) #size N, 12
return y
# ##### Listing 8.9
# +
params = {
'batch_size':150,
'beta':0.2,
'lambda':0.1,
'eta': 1.0,
'gamma':0.2,
'max_episode_len':100,
'min_progress':15,
'action_repeats':6,
'frames_per_state':3
}
replay = ExperienceReplay(N=1000, batch_size=params['batch_size'])
Qmodel = Qnetwork()
encoder = Phi()
forward_model = Fnet()
inverse_model = Gnet()
forward_loss = nn.MSELoss(reduction='none')
inverse_loss = nn.CrossEntropyLoss(reduction='none')
qloss = nn.MSELoss()
all_model_params = list(Qmodel.parameters()) + list(encoder.parameters()) #A
all_model_params += list(forward_model.parameters()) + list(inverse_model.parameters())
opt = optim.Adam(lr=0.001, params=all_model_params)
# -
# ##### Listing 8.10
# +
def loss_fn(q_loss, inverse_loss, forward_loss):
loss_ = (1 - params['beta']) * inverse_loss
loss_ += params['beta'] * forward_loss
loss_ = loss_.sum() / loss_.flatten().shape[0]
loss = loss_ + params['lambda'] * q_loss
return loss
def reset_env():
"""
Reset the environment and return a new initial state
"""
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
return state1
# -
# ##### Listing 8.11
def ICM(state1, action, state2, forward_scale=1., inverse_scale=1e4):
state1_hat = encoder(state1) #A
state2_hat = encoder(state2)
state2_hat_pred = forward_model(state1_hat.detach(), action.detach()) #B
forward_pred_err = forward_scale * forward_loss(state2_hat_pred, \
state2_hat.detach()).sum(dim=1).unsqueeze(dim=1)
pred_action = inverse_model(state1_hat, state2_hat) #C
inverse_pred_err = inverse_scale * inverse_loss(pred_action, \
action.detach().flatten()).unsqueeze(dim=1)
return forward_pred_err, inverse_pred_err
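The intrinsic reward produced by the ICM is just the forward model's prediction error: transitions the agent can already predict yield little reward, surprising ones yield a lot. A toy NumPy sketch of that idea (hypothetical encoded-state vectors, not the encoder above):

```python
import numpy as np

def intrinsic_reward(predicted_next, actual_next, eta=1.0):
    """Squared prediction error of the forward model, scaled by 1/eta."""
    return (1.0 / eta) * np.sum((predicted_next - actual_next) ** 2)

familiar   = intrinsic_reward(np.array([1.0, 2.0]), np.array([1.0, 2.1]))
surprising = intrinsic_reward(np.array([1.0, 2.0]), np.array([4.0, -1.0]))
# the surprising transition earns far more intrinsic reward
print(familiar, surprising)
```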
# ##### Listing 8.12
def minibatch_train(use_extrinsic=True):
state1_batch, action_batch, reward_batch, state2_batch = replay.get_batch()
action_batch = action_batch.view(action_batch.shape[0],1) #A
reward_batch = reward_batch.view(reward_batch.shape[0],1)
forward_pred_err, inverse_pred_err = ICM(state1_batch, action_batch, state2_batch) #B
i_reward = (1. / params['eta']) * forward_pred_err #C
reward = i_reward.detach() #D
    if use_extrinsic: #E
reward += reward_batch
qvals = Qmodel(state2_batch) #F
reward += params['gamma'] * torch.max(qvals)
reward_pred = Qmodel(state1_batch)
reward_target = reward_pred.clone()
indices = torch.stack( (torch.arange(action_batch.shape[0]), \
action_batch.squeeze()), dim=0)
indices = indices.tolist()
reward_target[indices] = reward.squeeze()
q_loss = 1e5 * qloss(F.normalize(reward_pred), F.normalize(reward_target.detach()))
return forward_pred_err, inverse_pred_err, q_loss
# ##### Listing 8.13
epochs = 5000
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
eps=0.15
losses = []
episode_length = 0
switch_to_eps_greedy = 1000
state_deque = deque(maxlen=params['frames_per_state'])
e_reward = 0.
last_x_pos = env.env.env._x_position #A
ep_lengths = []
use_explicit = False
for i in range(epochs):
opt.zero_grad()
episode_length += 1
q_val_pred = Qmodel(state1) #B
if i > switch_to_eps_greedy: #C
action = int(policy(q_val_pred,eps))
else:
action = int(policy(q_val_pred))
for j in range(params['action_repeats']): #D
state2, e_reward_, done, info = env.step(action)
last_x_pos = info['x_pos']
if done:
state1 = reset_env()
break
e_reward += e_reward_
state_deque.append(prepare_state(state2))
state2 = torch.stack(list(state_deque),dim=1) #E
replay.add_memory(state1, action, e_reward, state2) #F
e_reward = 0
if episode_length > params['max_episode_len']: #G
if (info['x_pos'] - last_x_pos) < params['min_progress']:
done = True
else:
last_x_pos = info['x_pos']
if done:
ep_lengths.append(info['x_pos'])
state1 = reset_env()
last_x_pos = env.env.env._x_position
episode_length = 0
else:
state1 = state2
if len(replay.memory) < params['batch_size']:
continue
forward_pred_err, inverse_pred_err, q_loss = minibatch_train(use_extrinsic=False) #H
    loss = loss_fn(q_loss, inverse_pred_err, forward_pred_err) #I
loss_list = (q_loss.mean(), forward_pred_err.flatten().mean(),\
inverse_pred_err.flatten().mean())
losses.append(loss_list)
loss.backward()
opt.step()
# ##### Test Trained Agent
done = True
state_deque = deque(maxlen=params['frames_per_state'])
for step in range(5000):
if done:
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
q_val_pred = Qmodel(state1)
action = int(policy(q_val_pred,eps))
state2, reward, done, info = env.step(action)
state2 = prepare_multi_state(state1,state2)
state1=state2
env.render()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Univariate Linear Regression
# In this IPython notebook, I'm going to create a Univariate Linear Regression model without using scikit-learn or any other machine-learning library. The objective is to understand better what these libraries actually do under the hood and get a better intuition into how we can make these algorithms perform better. Before we begin, I'd like to take a moment and introduce a few notations that we're going to use throughout this notebook.
#
# **Notations:**
# - $m$ = Number of Training Examples
# - $X$ = Input Variables
# - $y$ = Output Variables
# - $(x^{(i)},y^{(i)})$ = $i^{th}$ Training Example
#
# Now let's get started, and I'll explain the rest along the way as we proceed through this notebook.
# Importing Necessary Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# The above libraries are the bare minimum I had to import to keep this notebook concise and avoid covering a lot of material that might be irrelevant for its purpose. These libraries are also proven to be fast, so I don't have to worry about things like writing my own matrix multiplication, which might have a huge time complexity.
# Okay, enough talk for now, let's start by importing our dataset
# (This dataset is from one of the Octave exercises in Andrew Ng's Machine Learning course)
data = pd.read_csv("./data.txt", sep=",", names=["Population in 10k", "Profit in $10k"])
# Taking a look at the imported data
data.head(5)
# Splitting columns from our dataset
X = data['Population in 10k']
y = data['Profit in $10k']
# Now I wanna plot and see how the dataset looks like
# I'm reducing the marker size a bit, so that the individual points close by don't overlap each other
plt.scatter(X, y, s=10, alpha=0.5)
plt.title('Scatter Plot of the Dataset')
plt.xlabel('Population in 10k')
plt.ylabel('Profit in $10k')
plt.show()
# Now I want to convert 'X' into a single column numpy array
X = np.array(X)
X.shape
# Few more notations I'm using here:
# - $R$ = Number of Rows
# - $C$ = Number of Columns
#
# As you can see above, the shape turned out to be a vector instead of an RxC matrix or 2D array. If you're not familiar with numpy, or new to numpy array indexing, let me try to elaborate a bit.
#
# ##### Shape: (R,)
# Here the array is basically single-indexed. For example, consider this array of 8 numbers:
#
# | Data | 9 | 1 | 5 | 3 | 7 | 3 | 2 | 5 |
# | ----- | - | - | - | - | - | - | - | - |
# | Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
#
# ##### Shape: (R,C)
# We want to have a 2D indexing in our array which will basically be a single column matrix (a number of rows with only one column):
#
# | Data | 9 | 1 | 5 | 3 | 7 | 3 | 2 | 5 |
# | --------- | - | - | - | - | - | - | - | - |
# | Index (i) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
# | Index (j) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
#
# **TL;DR:** `array.shape` basically returns the number of rows (R) and the number of columns (C) in the array/matrix
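The difference between the two shapes is easy to verify directly (a quick check, following the tables above):

```python
import numpy as np

a = np.array([9, 1, 5, 3, 7, 3, 2, 5])
print(a.shape)        # (8,)   -- a 1-D array, single index
b = a.reshape(8, 1)
print(b.shape)        # (8, 1) -- a column "matrix", indexed as b[i][j]
print(a[3], b[3][0])  # 3 3    -- same data, different indexing
```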
# Reshaping our numpy array
m = y.size
X = X.reshape(m,1)
X.shape
# ### Hypothesis
# Before we proceed further, it's important to understand what a hypothesis is. It is a straight line fit through our dataset, which tries to map the input features (X) to the output variable (y).
#
# \begin{align}
# h_\theta(x) = \theta _0 + \theta _1 x
# \end{align}
#
# **Notations:**
# - $h$ = Hypothesis
# - $\theta_0$ & $\theta_1$ = Parameters
#
# > A good hypothesis fits the evidence and can be used to make predictions about new observations or new situations.
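Once `X` carries a leading column of ones (added below), the hypothesis for every example becomes a single matrix product, $h = X\theta$. A small NumPy sketch with a toy `X` and `theta`:

```python
import numpy as np

X_toy = np.array([[1.0, 2.0],   # each row: [1, x] -- the 1 multiplies theta_0
                  [1.0, 3.0]])
theta_toy = np.array([[0.5],    # theta_0 (intercept)
                      [2.0]])   # theta_1 (slope)
h = X_toy @ theta_toy           # h_theta(x) = theta_0 + theta_1 * x, vectorized
print(h.ravel())                # [4.5 6.5]
```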
# Now I want to add a column of 1's to the 0th index of 'X' (Explanation Below)
# Also shift the existing data in the 0th index to the 1st index
X = np.append(np.ones((m, 1), dtype=int), X, axis=1)
X.shape
# I did the above to take into account the intercept term $\theta_0$. I added an extra column of 1's to the matrix 'X', so that I can treat $\theta_0$ as another feature. For this, I needed to place the new column in the 0th column index and shift the existing column of data from 0th to 1st index. Now the shape of the matrix will be (R,2).
# Let's initialize Theta as a 2x1 null array
theta = np.zeros((2,1), dtype=float)
theta.shape
# ### Cost Function & Gradient Descent
# Gradient Descent iteratively moves the parameters toward a local or global minimum of a function. Training succeeds when the Cost Function reaches (or gets very close to) its global minimum.
#
# \begin{align}
# J(\theta_0, \theta_1) = \frac{1}{2m} \sum^m_{i=1} (h_\theta(x^{(i)})-y^{(i)})^2
# \end{align}
#
# > A model learns by minimizing the Cost Function using Gradient Descent
#
# **Algorithm:** repeat until convergence {
# $$
# \theta_j := \theta_j - \alpha \frac{\partial}{\partial\theta_j} J(\theta_0, \theta_1)
# $$
# } (for $j = 0$ and $j = 1$)
#
# Simultaneously update $\theta_0$ and $\theta_1$:
# $temp0 := \theta_0 - \alpha \frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1)$
# $temp1 := \theta_1 - \alpha \frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1)$
# $\theta_0 := temp0$
# $\theta_1 := temp1$
#
# **Notation:**
# - $\alpha$ = Learning Rate
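The update rule above can also be written in vectorized form, $\theta := \theta - \frac{\alpha}{m} X^T(X\theta - y)$, which updates both parameters simultaneously without the temp variables. A sketch on toy data where the true relationship is $y = 1 + 2x$:

```python
import numpy as np

X_toy = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # intercept column + x
y_toy = np.array([[3.0], [5.0], [7.0]])                  # y = 1 + 2x exactly
theta_toy = np.zeros((2, 1))

alpha = 0.1
for _ in range(5000):
    grad = X_toy.T @ (X_toy @ theta_toy - y_toy) / len(y_toy)
    theta_toy = theta_toy - alpha * grad                 # simultaneous update
print(theta_toy.ravel())  # approximately [1. 2.]
```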
# Function for Computing Cost Function
def costFunction(theta):
J_sum = 0
for i in range(m):
J_sum += ((theta[0] + theta[1] * X[i][1]) - y[i])**2
J = (1/(2*m)) * J_sum
return J
# Let's see the cost when theta is 0
costFunction(theta)
# Function for Running Gradient Descent
def gradientDescent(theta, iterations, alpha):
J_history = np.zeros((iterations,1), dtype=float)
for i in range(iterations):
t1 = 0
t2 = 0
for k in range(m):
t1 = t1 + ((theta[0] + theta[1]*X[k][1]) - y[k])
t2 = t2 + ((theta[0] + theta[1]*X[k][1]) - y[k])*X[k][1]
theta[0] = theta[0] - (alpha/m)*t1
theta[1] = theta[1] - (alpha/m)*t2
J_history[i] = costFunction(theta)
return theta
# Now let's actually run the gradient descent
iterations = 1500
alpha = 0.01
theta = gradientDescent(theta, iterations, alpha)
# Let's see what values of theta we got
print("Theta 0: ", theta[0])
print("Theta 1: ", theta[1])
# Plotting the linear fit
plt.plot(X[:,1], np.dot(X, theta))
plt.scatter(X[:,1], y, s=10, alpha=0.5)
plt.title('Linear Fit through the Dataset')
plt.xlabel('Population in 10k')
plt.ylabel('Profit in $10k')
plt.show()
# Now let's see how our model performs on new data (Let's initialize a few random ones)
new_data_population = np.array([7.2, 18, 24, 13, 26])
new_data = new_data_population.reshape(5,1)
new_data = np.append(np.ones((5, 1), dtype=int), new_data, axis=1)
# Predicting the Results
predictions = np.zeros((5,1), dtype=float)
for i in range(5):
predictions[i][0] = abs(np.dot(new_data[i][:], theta))
print("Predicted Value of profit for ", new_data_population[i], " population (10k) : $", predictions[i][0], " (10k)")
# Plotting the results
plt.plot(X[:,1], np.dot(X, theta))
plt.scatter(X[:,1], y, s=10, alpha=0.5, color='c')
plt.scatter(new_data_population, predictions, s=150, alpha=0.5, color='r', marker='x')
plt.title('Predicted Results over Existing Dataset')
plt.xlabel('Population in 10k')
plt.ylabel('Profit in $10k')
plt.show()
# So that's it. Our univariate linear regression model works and it was able to predict the profits for some random populations. Now we can go ahead and create a similar linear regression model for datasets with multiple features, also known as **Multivariate Linear Regression**. But that's going to be another notebook soon.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="EqmubOp2whk3"
# # Question 1. Understanding k-means clustering
#
# Explain how k-means clustering works!
# + [markdown] id="gf8y3llqw369"
# Answer here:
#
#
# + [markdown] id="oXGsjqHCk7DW"
# ---
#
# Machine-learning methods fall into two groups: supervised learning and unsupervised learning. Unsupervised learning needs no labelled training data sets to learn from; instead, it groups unlabelled data.
#
# One unsupervised method is k-means clustering. K-means places k centroids, computes the distance from every data point to each centroid, and assigns each point to the centroid it is closest to.
#
# ---
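The description above can be condensed into one NumPy pass of k-means: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points (a minimal sketch on two obvious clusters):

```python
import numpy as np

def kmeans_step(points, centroids):
    """One assignment + update step of k-means."""
    # distance of every point to every centroid, shape (n_points, k)
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                      # nearest centroid
    new_centroids = np.array([points[labels == k].mean(axis=0)
                              for k in range(len(centroids))])
    return labels, new_centroids

pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
cents = np.array([[0.0, 0.0], [10.0, 10.0]])
labels, cents = kmeans_step(pts, cents)
print(labels)  # [0 0 1 1] -- two clear clusters
print(cents)   # centroids move to the cluster means: (0, 0.5) and (10, 10.5)
```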
# + [markdown] id="NoX-A3wElHoq"
# Download here: [pelanggan.csv](https://drive.google.com/uc?export=download&id=1jX_rLPfcCfzEEgy9xaoALmpqfU2s5TTB)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} executionInfo={"elapsed": 776, "status": "ok", "timestamp": 1617786708008, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "06287560445691811212"}, "user_tz": -420} id="rqRBoUBsw7S0" outputId="4795491d-6d80-4b6d-dbc1-f6afb60bd02d"
import pandas as pd
df = pd.read_csv('pelanggan_supermarket.csv')
df = df.fillna(method='ffill')
df.tail()
# + [markdown] id="AByM6XoYMkTy"
# # Question 2. Clustering the data with k-means
#
# In this exercise, cluster the Age (`Umur`) column against the Spending Score (`Skor Belanja (1-100)`) column. The data cannot be clustered as-is because it contains outliers and missing values, so work through the following steps:
#
# * Handle missing values using method='ffill'
# * Handle outliers using the interquartile range (IQR)
# * Rescale the data using StandardScaler
# * Cluster with k-means using k=2, and visualise the result
# * Use the elbow method to find the right number of clusters k
# * Use k=3 (the elbow method's recommendation) and visualise again
# * Compute the silhouette coefficient of both clustering predictions
#
# + id="l2u2IRm4Ou_9"
#code here
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
# df = df.fillna(method='ffill')
# K MEANS CLUSTERING
df_ = df[['Umur', 'Skor Belanja (1-100)']]
q3 = df_.quantile(.75)
q1 = df_.quantile(.25)
IQR = q3 - q1
df_clean = df_[(df_ >= q1 - 1.5 * IQR) & (df_ <= q3 + 1.5 * IQR)]
df_clean = df_clean.dropna(axis=0)  # reassign instead of inplace to avoid SettingWithCopyWarning
scaler = StandardScaler()
df_scaled = scaler.fit_transform(df_clean.astype(float))
kmeans = KMeans(n_clusters=2).fit(df_scaled)
labels = kmeans.labels_
df_scaled = pd.DataFrame(data=df_scaled, columns=['Umur', 'Skor Belanja (1-100)'])
df_scaled['label'] = labels
df_kmeans_0 = df_scaled[df_scaled['label'] == 0]
df_kmeans_1 = df_scaled[df_scaled['label'] == 1]
fig1, axes1 = plt.subplots(figsize=(12, 7))
axes1.scatter(x=df_kmeans_0['Umur'], y=df_kmeans_0['Skor Belanja (1-100)'], c='red', s=100, edgecolor='green', linestyle='-')
axes1.scatter(x=df_kmeans_1['Umur'], y=df_kmeans_1['Skor Belanja (1-100)'], c='blue', s=100, edgecolor='green', linestyle='-')
axes1.set_xlabel('Umur')
axes1.set_ylabel('Skor Belanja (1-100)')
centers = kmeans.cluster_centers_
axes1.scatter(x=centers[:, 0], y=centers[:, 1], c='black', s=500)
plt.show()
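The IQR fence used above — keep only values within $[Q_1 - 1.5\,\mathrm{IQR},\ Q_3 + 1.5\,\mathrm{IQR}]$ — can be checked on a tiny series:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 100])   # 100 is an obvious outlier
q1, q3 = s.quantile(.25), s.quantile(.75)
iqr = q3 - q1
kept = s[(s >= q1 - 1.5 * iqr) & (s <= q3 + 1.5 * iqr)]
print(list(kept))  # [1, 2, 3, 4, 5] -- 100 falls outside the fence
```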
# +
# ELBOW GRAPH
wcss = []
for i in range(1, 11):
    # fit on the feature columns only, not the appended 'label' column
    kmeans = KMeans(n_clusters=i, init='k-means++').fit(df_scaled[['Umur', 'Skor Belanja (1-100)']])
    wcss.append(kmeans.inertia_)
plt.figure(figsize=(10, 6))
plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method')
plt.xlabel('Number of Clusters')
plt.ylabel('WCSS')
plt.show()
# +
# K MEANS N_CLUSTERS = 3
N = 3
kmeans = KMeans(n_clusters=N, random_state=42).fit(df_scaled[['Umur', 'Skor Belanja (1-100)']])  # features only
labels2 = kmeans.labels_
df_scaled['label 2'] = labels2
colors = ['blue', 'red', 'green']
centers2 = kmeans.cluster_centers_
fig, axes = plt.subplots(figsize=(12, 8))
for i in range(N):
df_kmeans = df_scaled[df_scaled['label 2'] == i]
axes.scatter(x=df_kmeans['Umur'], y=df_kmeans['Skor Belanja (1-100)'], c=colors[i], s=100, edgecolor='green', linestyle='-')
axes.scatter(x=centers2[i, 0], y=centers2[i, 1], c='black', s=500)
axes.set_xlabel('Umur')
axes.set_ylabel('Skor Belanja (1-100)')
plt.show()
# +
# SILHOUETTE COEFFICIENT
from sklearn.metrics import silhouette_score
features = df_scaled[['Umur', 'Skor Belanja (1-100)']]  # score on the features, not the appended label columns
print(silhouette_score(features, labels))
print(silhouette_score(features, labels2))
# + [markdown] id="Fo1GIiVdFP4j"
# Expected output:
#
# n_cluster =2
#
# 
#
#
# Elbow graph
#
# 
#
# n_cluster =3
#
# 
# + [markdown] id="pt-NFlqTm-Og"
# # Question 3. Analysing the Clustering Results
#
# From the clustering results above, write a customer-segmentation analysis to help the supermarket grow.
# + [markdown] id="zNPDstJynpva"
# Answer here:
#
# - Customers above the average age mostly have below-average spending scores. This may be because their income has started to decline, or they have reached retirement age.
#
# - Customers below the average age, on the other hand, split into two groups: some have above-average spending scores and some below-average. This is most likely also influenced by each person's income.
#
# Conclusion:
#
# The supermarket owner should therefore stock more goods aimed at children, teenagers, and young adults, since those age ranges are the most likely to produce higher spending scores.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
customers = pd.read_csv('Ecommerce Customers')
customers.head()
customers.info()
customers.describe()
sns.jointplot(data=customers, x='Time on Website', y='Yearly Amount Spent')
sns.jointplot(data=customers, x='Time on App', y='Yearly Amount Spent')
sns.jointplot(data=customers, x='Time on App', y='Length of Membership', kind='hex')
sns.pairplot(customers)
sns.lmplot(data=customers, x='Length of Membership', y='Yearly Amount Spent')
# # Training and Testing Data
customers.columns
X = customers[['Avg. Session Length', 'Time on App',
'Time on Website', 'Length of Membership']]
y = customers['Yearly Amount Spent']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
# # Training Model
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train, y_train)
lm.coef_
# # Predicting Test Data
predictions = lm.predict(X_test)
plt.scatter(y_test, predictions)
plt.xlabel('Y Test (True Values)')
plt.ylabel('Predicted Values')
# # Evaluation
from sklearn import metrics
print('MAE :', metrics.mean_absolute_error(y_test, predictions))
print('MSE :', metrics.mean_squared_error(y_test, predictions))
print('RMSE :', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
metrics.explained_variance_score(y_test, predictions)
# # Residuals
sns.distplot(y_test - predictions, bins=50)
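The three metrics printed in the evaluation cell are simple averages over the residuals, and are easy to reproduce by hand (a sketch with toy arrays):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])
err = y_true - y_pred

mae  = np.mean(np.abs(err))   # mean absolute error
mse  = np.mean(err ** 2)      # mean squared error
rmse = np.sqrt(mse)           # root mean squared error
print(mae, mse, rmse)         # 0.5, ~0.4167, ~0.6455
```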
cdf = pd.DataFrame(lm.coef_, X.columns, columns=['Coefficient'])
cdf