# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Dpgofast/DS-Sprint-02-Storytelling-With-Data/blob/master/LS_DS_122_Choose_appropriate_visualizations.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="wWquCM1hCPmP" colab_type="text"
# _Lambda School Data Science_
# # Choose appropriate visualizations
# + [markdown] id="PmQu4BpHCPmS" colab_type="text"
# # Upgrade Seaborn
#
# Make sure you have at least version 0.9.0
# + id="sHXmzeZ4CPmT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 346} outputId="3a3aa137-56db-46aa-e055-d0cfbe17aa48"
# !pip install --upgrade seaborn
# + id="9pDZPwRkCPmZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="f5f0acfd-a9ea-470b-cdb9-977c622e31cc"
import seaborn as sns
sns.__version__
# + [markdown] id="XB-pPnVsCPmd" colab_type="text"
# # Fix misleading visualizations
# + id="VVjdqU9zCPmd" colab_type="code" colab={}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# + id="1_qivBxJCPmh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 237} outputId="8543e0d5-1796-4be8-cb27-613f3577fbd6"
# !wget https://raw.githubusercontent.com/LambdaSchool/DS-Sprint-02-Storytelling-With-Data/master/module2-choose-appropriate-visualizations/misleading.py
import misleading
# + [markdown] id="9bge4DH5CPmk" colab_type="text"
# #### Fix misleading plot #1
# + id="nWvOIwWSCPmk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 333} outputId="849dce7d-92f1-491d-9037-3d087646310c"
misleading.plot1()
# + id="eiBq4zWMCPmo" colab_type="code" colab={}
# + [markdown] id="mOaPIvpkCPmr" colab_type="text"
# #### Fix misleading plot #2
# + id="i3fatFC5CPms" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="07c8a45e-a370-46eb-d7e2-a80c7925168c"
misleading.plot2()
# + id="WlX-FyTxCPm2" colab_type="code" colab={}
# + [markdown] id="NJqGxTlDCPm6" colab_type="text"
# #### Fix misleading plot #3
# + id="7t34nmNpCPm7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 336} outputId="ec7436bc-a157-46cd-cee8-0c4bef7e9429"
misleading.plot3()
# + id="ablatiUFCPm9" colab_type="code" colab={}
# + [markdown] id="lHMOJt7OCPnA" colab_type="text"
# #### Fix misleading plot #4
# + [markdown] id="W1OzHl5WCPnA" colab_type="text"
# _If you're on Jupyter (not Colab) then uncomment and run this cell below:_
# + id="e3AYPWACCPnC" colab_type="code" colab={}
# import altair as alt
# alt.renderers.enable('notebook')
# + id="5FR0IoolCPnH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 348} outputId="d802972d-de83-4089-c6a4-0be805400908"
misleading.plot4()
# + id="MdJrBvYUCPnK" colab_type="code" colab={}
# + [markdown] id="vF-AQiNYCPnM" colab_type="text"
# #### Links
# - [How to Spot Visualization Lies](https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/)
# - [Where to Start and End Your Y-Axis Scale](http://stephanieevergreen.com/y-axis/)
# - [xkcd heatmap](https://xkcd.com/1138/)
# - [Surprise Maps: Showing the Unexpected](https://medium.com/@uwdata/surprise-maps-showing-the-unexpected-e92b67398865)
# + [markdown] id="Vla7AvcTCPnN" colab_type="text"
# # Use Seaborn to visualize distributions and relationships with continuous and discrete variables
#
# #### Links
# - [Seaborn tutorial](https://seaborn.pydata.org/tutorial.html)
# - [Seaborn example gallery](https://seaborn.pydata.org/examples/index.html)
# - [Chart Chooser](https://extremepresentation.typepad.com/files/choosing-a-good-chart-09.pdf)
# + [markdown] id="EuPdWCiCCPnP" colab_type="text"
# ## 1. Anscombe dataset
# + [markdown] id="P8UNq9hTCPnQ" colab_type="text"
# ### Load dataset
# + id="aog_taBeCPnT" colab_type="code" colab={}
df = sns.load_dataset('anscombe')
# + [markdown] id="sCd1xtJ-CPnX" colab_type="text"
# ### See the data's shape
# + id="Miy23oEfCPna" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3f4bb363-f3ca-424f-ba89-167a0ceabbb8"
df.shape
# + [markdown] id="1nGQVA4cCPnd" colab_type="text"
# ### See the data
# + id="jvgNlZpICPne" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1368} outputId="7d7a180d-85a7-49cd-ebf3-ee5fb90ebbf3"
df
# + [markdown] id="GJ3rsnssCPng" colab_type="text"
# ### [Group by](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) `'dataset'`
# + id="B88hI9EBCPnh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="06884536-e5fd-4859-fe0c-54c54bcc72ad"
grouped_df = df.groupby('dataset')
# + [markdown] id="-jPtyoVJCPnm" colab_type="text"
# ### [Describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html) the groups
# + id="BgirGkigCPnn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="cbbab9ce-8a97-4be8-8d17-46d52b5b4a6a"
grouped_df.describe()
# + [markdown] id="eG9vHqbnCPnr" colab_type="text"
# ### Get the [count](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html) for each column in each group
# + id="oWySHKwgCPns" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="d3ac3048-0964-41ce-d728-cda4d87d382d"
grouped_df.count()
# + [markdown] id="gsl2_xIcCPnx" colab_type="text"
# ### Get the [mean](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html) ...
# + id="Y23YR98BCPny" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="692c6030-658b-4968-d94b-7f2011414775"
grouped_df.mean()
# + [markdown] id="9Nw9jE7oCPn0" colab_type="text"
# ### Get the [standard deviation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.std.html) ...
# + id="widCE18GCPn1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="33e439aa-7dc2-435f-a163-37fff2b46e98"
grouped_df.std()
# + [markdown] id="GONEcV5yCPn6" colab_type="text"
# ### Get the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) ...
# + id="GKSxRAOjCPn7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 318} outputId="93e60611-8160-4b18-bd30-72ce442f85af"
grouped_df.corr()
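The group-then-aggregate pattern above works on any long-format frame. A tiny self-contained sketch with made-up data (not the Anscombe dataset) shows how identical group means can coexist with different spreads, which is the point the Anscombe quartet makes:

```python
import pandas as pd

# Hypothetical mini-dataset in the same long format as the Anscombe frame
demo = pd.DataFrame({
    'dataset': ['I', 'I', 'II', 'II'],
    'x': [1.0, 3.0, 1.0, 3.0],
    'y': [2.0, 6.0, 3.0, 5.0],
})
grouped = demo.groupby('dataset')

# Per-group summary statistics, mirroring the calls above
means = grouped.mean()  # identical for both groups
stds = grouped.std()    # different: group I's y values are more spread out
```

Both groups share the mean y = 4.0, yet their standard deviations differ, which is why plotting the groups (as done below) reveals structure the summary table hides.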
# + [markdown] id="HeYbxN50CPn9" colab_type="text"
# ### Use pandas to [plot](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) the groups, as scatter plots
# + id="ixtwwqAxCPn9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1458} outputId="e7a68ea4-63e7-4414-f102-1820ba3db53d"
grouped_df.plot.scatter('x','y')
# + [markdown] id="vgs5HNSFCPoA" colab_type="text"
# ### Use Seaborn to make [relational plots](http://seaborn.pydata.org/generated/seaborn.relplot.html)
# + id="Q3j6PJFHCPoB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 749} outputId="36a2fd2f-6f96-46bd-b5fc-44564c511989"
sns.relplot(x='x', y='y', col='dataset', data=df, col_wrap=2, hue='dataset');
# + [markdown] id="h_dPmiMICPoG" colab_type="text"
# ### Use Seaborn to make [linear model plots](http://seaborn.pydata.org/generated/seaborn.lmplot.html)
# + id="ySxI1r1tCPoI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 729} outputId="85e79030-0862-404a-cef6-d57e2c5aeab7"
sns.lmplot(x='x', y='y', col='dataset', data=df, col_wrap=2, hue='dataset', ci=None);
# + [markdown] id="olyjuBEOCPoK" colab_type="text"
# #### Links
# - [Seaborn examples: Anscombe's quartet](http://seaborn.pydata.org/examples/anscombes_quartet.html)
# - [Wikipedia: Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet)
# - [The Datasaurus Dozen](https://www.autodeskresearch.com/publications/samestats)
# + [markdown] id="YMEx30UmCPoL" colab_type="text"
# ## 2. Tips dataset
# + [markdown] id="YPHV6QTMCPoL" colab_type="text"
# ### Load dataset
# + id="ge3FNW4_CPoM" colab_type="code" colab={}
tips = sns.load_dataset('tips')
# + [markdown] id="nDvp3NzECPoO" colab_type="text"
# ### See the data's shape
# + id="fVcD_wCcCPoP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e675efc1-f7b9-43ef-817f-65207401c587"
tips.shape
# + [markdown] id="SxpASC6JCPoQ" colab_type="text"
# ### See the first 5 rows
# + id="0lUDHxxqCPoT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="91fdceed-b507-43ec-e6f5-7ea76f5f80f3"
tips.head()
# + [markdown] id="_SboxO1KCPoV" colab_type="text"
# ### Describe the data
# + id="zqjybxquCPoW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 288} outputId="93bb8748-9538-41f4-e600-dca4c5962251"
tips.describe()
# + [markdown] id="sWUk1JZYCPoc" colab_type="text"
# ### Make univariate [distribution plots](https://seaborn.pydata.org/generated/seaborn.distplot.html)
# + id="fQf7Sk8QCPoe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="449e386a-7640-4987-8b2a-32fa848e6a30"
sns.distplot(tips.tip);
# + id="D62W_lVHb7QW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} outputId="9a9e23fa-c775-42db-b8a2-b593099849cb"
sns.distplot(tips.total_bill);
# + [markdown] id="W-DUp2EACPou" colab_type="text"
# ### Make bivariate [relational plots](https://seaborn.pydata.org/generated/seaborn.relplot.html)
# + id="MzD4jYUMCPpE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="a78cc06d-6670-4a15-fd8d-7ecc4ce59929"
sns.relplot(x='tip', y='total_bill', data=tips);
# + [markdown] id="xwrSiupCCPpJ" colab_type="text"
# ### Make univariate [categorical plots](https://seaborn.pydata.org/generated/seaborn.catplot.html)
# + id="PL4PEkEjCPpO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="ad816d51-e6da-4f43-d6ad-e1bb4d1491a7"
tips.sex.value_counts()
# + id="Xq3fAOVfeAYk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 387} outputId="40edbbd1-ed7b-49bd-b138-3f2e1b44f3e8"
sns.catplot(x='sex', data=tips, kind='count')
# + [markdown] id="zEQmGcUYCPpR" colab_type="text"
# ### Make bivariate [categorical plots](https://seaborn.pydata.org/generated/seaborn.catplot.html)
# + id="xtHbT60tCPpS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 387} outputId="498d5140-0741-4630-9071-314c96c9ba56"
sns.catplot(x='sex', y='tip', data=tips, kind='strip')
# + id="_23wdhMSe166" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 387} outputId="d8a2a462-6b2f-4808-bb2d-7dfa13c5f1ad"
sns.catplot(x='sex', y='tip', data=tips, kind='bar');
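`kind='bar'` plots the mean of the numeric variable per category (with a confidence interval on top). The bar heights can be reproduced with a plain groupby; a sketch on a made-up miniature of the tips data:

```python
import pandas as pd

# Hypothetical mini 'tips'-like frame (not the real seaborn dataset)
mini_tips = pd.DataFrame({
    'sex': ['Male', 'Female', 'Male', 'Female'],
    'tip': [3.0, 2.0, 5.0, 4.0],
})

# catplot(kind='bar') draws exactly this per-category mean as bar heights
bar_heights = mini_tips.groupby('sex')['tip'].mean()
```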
# + [markdown] id="pF6Kz8m8CPpd" colab_type="text"
# ## 3. Flights
# + [markdown] id="4hTFxnIBCPpe" colab_type="text"
# ### Load dataset
# + id="IWT__iCyCPpe" colab_type="code" colab={}
flights = sns.load_dataset('flights')
# + [markdown] id="3Xr-2IYaCPph" colab_type="text"
# ### See the data's shape
# + id="F_YgjyJlCPph" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="18f44dc2-db30-44f6-a43f-d47c120bced0"
flights.shape
# + [markdown] id="O2T3sdUUCPpi" colab_type="text"
# ### See the first 5 rows
# + id="gs610DPcCPpj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="c79a9308-3191-48a1-bfa0-701798deb94c"
flights.head()
# + [markdown] id="zlXQsjVzCPpl" colab_type="text"
# ### Describe the data
# + id="jck2FnsRCPpl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 288} outputId="52c85e5f-26c8-491c-d9ef-b965ffc255e5"
flights.describe()
# + [markdown] id="AzY6wlp3CPpo" colab_type="text"
# ### Plot year & passengers
# + id="ZQ8ghKC3CPpp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="60b85694-763b-4773-d791-d3cc026056b1"
flights.plot.scatter('year','passengers');
# + [markdown] id="fvc8eZ1FCPpy" colab_type="text"
# ### Plot month & passengers
# + id="uNQb5UkFCPpz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 274} outputId="5bbb5349-8347-485a-a793-ebb776d36ecb"
flights.plot.line('month','passengers');
# + id="0ZIksLfds-rk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 415} outputId="be16b8df-ff04-478e-9ed6-97bf4001406f"
sns.catplot(x='month', y='passengers', data=flights)
plt.xticks(rotation=90);
# + [markdown] id="EHsz2rsiCPp5" colab_type="text"
# ### Create a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot_table.html) of passengers by month and year
# + id="_4D3QZ_aCPp5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 438} outputId="b40c7d72-e570-4713-e950-af8713d7c8e8"
table = flights.pivot_table(values='passengers', index='month', columns='year')
table
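The three arguments map to `values`, `index`, and `columns`. A minimal sketch with a made-up long-format frame shows the same long-to-wide reshaping with explicit keywords:

```python
import pandas as pd

# Hypothetical long-format frame, same shape as 'flights'
long_df = pd.DataFrame({
    'year':  [1949, 1949, 1950, 1950],
    'month': ['Jan', 'Feb', 'Jan', 'Feb'],
    'passengers': [112, 118, 115, 126],
})

# One row per month, one column per year, cell = (mean of) passengers
wide = long_df.pivot_table(values='passengers', index='month', columns='year')
```

The resulting wide table is exactly the month-by-year grid that `sns.heatmap` expects below.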
# + [markdown] id="urU10dnsCPp9" colab_type="text"
# ### Plot the pivot table as a [heat map](https://seaborn.pydata.org/generated/seaborn.heatmap.html)
# + id="-0cktqriCPp-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="f014a694-7b06-434b-f39e-8f1670fa5b58"
sns.heatmap(table,cmap='Greens');
# repo_path: LS_DS_122_Choose_appropriate_visualizations.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import squarify
import yfinance as yf
import plotly
import plotly.express as px
import seaborn as sns
# Configure plot styles
plt.ioff()
sns.set_context('talk')
sns.set_style("whitegrid")
# Configure plotly orca
plotly.io.orca.config.executable = r'C:\anaconda3\orca_app\orca.exe'
plotly.io.orca.config.save()
# -
# ## Data acquisition
excel_df = pd.read_excel('market_stock_value_scoring.xlsx', usecols=['Company','Symbol','Symbol_Yahoo','Description','MarketCap','MarketCap_USD','Currency','Country','Indices','Industries','Score_Long','Score_Short'])
# Rename by name rather than by position, so the column order returned by read_excel cannot scramble the labels
excel_df = excel_df.rename(columns={'Symbol_Yahoo': 'Yahoo Symbol', 'MarketCap': 'Market Cap', 'MarketCap_USD': 'Market Cap (USD)', 'Score_Long': 'Long Score', 'Score_Short': 'Short Score'})
excel_df
# ## Data transformation
source_df = excel_df[['Symbol','Yahoo Symbol','Country','Indices','Industries','Market Cap','Market Cap (USD)']]
source_df
# ## Data visualization
def get_color_palette(cmap, src):
norm = matplotlib.colors.Normalize(vmin=min(src), vmax=max(src))
return [cmap(norm(value)) for value in src]
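Under the hood, `get_color_palette` first rescales `src` linearly onto [0, 1] and then looks each value up in the colormap. The rescaling step can be sketched in plain Python (a toy reimplementation of what `matplotlib.colors.Normalize` does, with made-up counts):

```python
def normalize(values):
    # Linear rescale to [0, 1]: min(values) -> 0.0, max(values) -> 1.0
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical per-country stock counts
scaled = normalize([1, 5, 10])
```

Each scaled value is then passed to the colormap, which returns an RGBA tuple, so larger counts land further along the `RdYlGn` gradient.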
# ### By Country
gb_country = source_df.groupby(by='Country')
# + tags=[]
# Defining sizes, labels and colors
countries = [key for (key,group) in gb_country]
stock_count = [len(group) for (key,group) in gb_country]
colors = get_color_palette(matplotlib.cm.RdYlGn,stock_count)
# -
plt.subplots(figsize=(20,10))
squarify.plot(sizes=stock_count, label=countries, color=colors, pad=True, alpha=0.42)
plt.title('COUNTRIES BY STOCKS COUNT', size=20)
plt.axis('off')
plt.savefig("../images/COUNTRIES.png", bbox_inches='tight')
plt.show()
# ### By index
indices = np.unique(', '.join(source_df['Indices']).split(', '))
indices
def plot_by_indice(indice):
indice_df = excel_df[source_df['Indices'].str.contains(indice)].copy()
indice_df.sort_values(by='Market Cap (USD)', inplace=True, ascending=False)
colors = get_color_palette(matplotlib.cm.YlGn,indice_df['Market Cap (USD)'])
plt.clf()
plt.subplots(figsize=(20,10), )
squarify.plot(sizes=indice_df['Market Cap (USD)'], label=indice_df['Yahoo Symbol'], color=colors, pad=True, alpha=0.8, text_kwargs={'color':'#000', 'size':16})
plt.title(f'{indice} INDEX STOCKS', size=20)
plt.axis('off')
plt.savefig(f"../images/{indice}.png", bbox_inches='tight')
plt.show()
plt.close()
for indice in indices:
plot_by_indice(indice)
# ### By industry
industries = np.unique(', '.join(source_df['Industries']).split(', '))
industries
# +
industries_to_plot = []
for industry in industries:
industry_df = source_df[source_df['Industries'].str.contains(industry)].copy()
industries_to_plot.append((industry,industry_df,len(industry_df)))
industries_to_plot = sorted(industries_to_plot, key=lambda x: x[2], reverse=True)
# +
map_df = pd.DataFrame(columns=['Yahoo Symbol','Industry'])
for industry_to_plot in industries_to_plot:
valid = True
for industry_to_compare in industries_to_plot:
if (industry_to_plot[1] is industry_to_compare[1]):
continue
valid = not all(elem in list(industry_to_compare[1]['Yahoo Symbol']) for elem in list(industry_to_plot[1]['Yahoo Symbol']))
if(not valid):
break
if(valid):
industry_to_plot[1]['Industry'] = industry_to_plot[0]
        map_df = pd.concat([map_df, industry_to_plot[1][['Yahoo Symbol','Industry']]], ignore_index=True)
map_df = map_df[~map_df['Yahoo Symbol'].duplicated(keep='last')].reset_index(drop=True)
map_df = map_df.join(excel_df[['Company','Symbol','Yahoo Symbol','Description','Country','Currency','Market Cap','Market Cap (USD)','Long Score','Short Score']].set_index('Yahoo Symbol'), on='Yahoo Symbol')
map_df = map_df.sort_values(['Country', 'Market Cap (USD)'], ascending=[False, False])
map_df = map_df[['Industry','Company','Symbol','Yahoo Symbol','Description','Country','Currency','Market Cap','Market Cap (USD)','Long Score','Short Score']]
# -
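The nested loop above keeps only the industries whose symbol set is not fully contained in another industry's set. The same check reads more directly with Python sets; a sketch with a hypothetical industry-to-symbols mapping:

```python
# Hypothetical industry -> symbol sets; an industry is dropped when every one
# of its symbols also appears in some other industry (the 'valid' flag above)
industries = {
    'Tech':     {'AAPL', 'MSFT', 'NVDA'},
    'Hardware': {'AAPL', 'NVDA'},   # fully contained in Tech -> dropped
    'Energy':   {'XOM', 'CVX'},
}

kept = {
    name: symbols
    for name, symbols in industries.items()
    if not any(symbols <= other
               for other_name, other in industries.items()
               if other_name != name)
}
```

This guarantees that each stock ends up under a single industry in the treemap, rather than being double-counted across overlapping categories.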
map_df.to_csv("../files/market_stock_treemap_plotting.csv", index=False, float_format='%f')
map_df['Industry'] = map_df['Industry'].str.upper()
fig = px.treemap(map_df,
path=['Industry', 'Symbol'],
hover_data=['Company','Yahoo Symbol','Country'],
values='Market Cap (USD)',
color='Market Cap (USD)',
title='STOCKS CATEGORIZED BY INDUSTRIES',
width=1280,
height=760)
fig.show()
fig.write_image("../images/INDUSTRIES.png")
# repo_path: src/run.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # PETs/TETs – Hyperledger Aries / PySyft – City (Relying Party)
#
# ---
# <span style='background : yellow'>**Warning:**</span>
#
# The `SyMPC` package is still in beta version and therefore unstable. At the time of the development of this project, the function `.reconstruct()` (see Step ???) does not work with remote agents (i.e., one container per agent).
#
# Thus, the notebooks under the directory `xx/` demonstrate the `.reconstruct()` function locally (within one Docker container).
#
# ---
# + language="javascript"
# document.title = 'City'
# -
# ## PART 3: Connect with Manufacturers and Analyze Data
#
# **What:** Obtain data from Manufacturers in a trust- and privacy-preserving manner
#
# **Why:** Get manufacturers to share data anonymously to help the City analyze data
#
# **How:** <br>
# 1. [Initiate City's AgentCommunicationManager (ACM)](#1)
# 2. [Connect with anonymous agents via a multi-use SSI invitation](#2)
# 3. [Request VCs to verify agents are certified manufacturers](#3)
# 4. [Join Duet Connections to obtain encrypted data](#4)
#
# **Accompanying Agents and Notebooks:**
# * Manufacturer1: `03_connect_with_city.ipynb`
# * Manufacturer2: `03_connect_with_city.ipynb`
# * Manufacturer3: `03_connect_with_city.ipynb`
# + [markdown] pycharm={"name": "#%% md\n"}
# ---
#
# ### 0 - Setup
# #### 0.1 - Imports
# +
import os
import time
import syft as sy
from aries_cloudcontroller import AriesAgentController
from pprintpp import pprint
from sympc.session import Session
from sympc.session import SessionManager
from sympc.tensor import MPCTensor
from termcolor import colored
import libs.helpers as helpers
from libs.agent_connection_manager import RelyingParty
# -
# #### 0.2 – Variables
# Get relevant details from .env file
api_key = os.getenv("ACAPY_ADMIN_API_KEY")
admin_url = os.getenv("ADMIN_URL")
webhook_port = int(os.getenv("WEBHOOK_PORT"))
webhook_host = "0.0.0.0"
# ---
#
# <a id=1></a>
#
# ### 1 – Initiate City Agent
# #### 1.1 – Init ACA-Py agent controller
# Setup
agent_controller = AriesAgentController(admin_url,api_key)
print(f"Initialising a controller with admin api at {admin_url} and an api key of {api_key}")
# #### 1.2 – Start Webhook Server to enable communication with other agents
# @todo: is communication with other agents, or with other docker containers?
# Listen on webhook server
await agent_controller.init_webhook_server(webhook_host, webhook_port)
print(f"Listening for webhooks from agent at http://{webhook_host}:{webhook_port}")
# #### 1.3 – Init ACM Relying Party
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# The CredentialHolder registers relevant webhook servers and event listeners
city_agent = RelyingParty(agent_controller)
# -
# ---
#
# <a id=2></a>
#
# ### 2 – Establish a connection with the Manufacturer agents
#
# #### 2.1 – Create multi-use invitation
# Send out a multi-use invitation to all manufacturer agents (i.e., copy & paste the same invitation to manufacturer1, manufacturer2, manufacturer3). This represents a scenario where the City agent invites any agent to connect with them, and authenticate as manufacturers.
#
# The advantage of a multi-use invitation is that the City does not know who accessed the invitation link and is trying to get in contact with the City. The City agent only knows the names defined in `ACAPY_LABEL` (see the respective `.env` files), and cannot tell which agent is which manufacturer.
#
# **Note:** Please establish a connection with all three manufacturer agents.
# +
# Setup for connection with Manufacturer agents
alias = "Connection 1"
auto_accept = True # Accept response of Manufacturers agent right away
auto_ping = True
public = False # Do not use public DID
multi_use = True # Invitation can be used by multiple invitees
invitation = city_agent.create_connection_invitation(alias=alias, auto_accept=auto_accept, public=public, multi_use=multi_use, auto_ping=auto_ping)
# -
# <div style="font-size: 25px"><center><b>Break Point 1</b></center></div>
# <div style="font-size: 50px"><center>City → Manufacturer1 / Manufacturer2 / Manufacturer3</center></div><br>
# <center><b>Please open all manufacturer agents. <br> For each of the manufacturer agents, open the 03_connect_with_city.ipynb notebook and execute all steps until Break Point 2/3/4. <br> Use the same invitation from the City Agent in Step 2.1 for all Manufacturers.</b></center>
#
# #### 2.2 – Display all active connections
# Display all active connections
for conn in city_agent.get_active_connections():
conn.display()
# #### 2.3 – Fetch all connection_ids of the active connections with the respective manufacturer agents
# Fetch the connection_ids with the name provided in the active connections (see print from previous cell).
#
# +
# Get connection_id with Manufacturer1
connection_id_a1 = city_agent.get_connection_id("AnonymousAgent1")[-1] # We assume that there is only one connection with Manufacturer1
# Get connection_id with Manufacturer2
connection_id_a2 = city_agent.get_connection_id("AnonymousAgent2")[-1] # We assume that there is only one connection with Manufacturer2
# Get connection_id with Manufacturer3
connection_id_a3 = city_agent.get_connection_id("AnonymousAgent3")[-1] # We assume that there is only one connection with Manufacturer3
# -
# <a id=3></a>
# ### 3 – Send Proof Request
#
# #### 3.1 Define VC Presentation Request Object
#
# The cell below defines a generic presentation request object that can be sent across specific connections.
# In this case, the City agent asks the Manufacturers to prove the requested attribute `isManufacturer` from the manufacturer schema.
#
# +
# Define which schema we want a proof from
identifiers = helpers.get_identifiers()
schema_manufacturer_id = identifiers["manufacturer_schema_identifiers"]["schema_id"]
# Define the attributes we want from the schema
req_attrs = [{"name": "isManufacturer", "restrictions": [{"schema_id": schema_manufacturer_id}]}]
# Define proof_request
# The proof_request ignores predicates (e.g., range proofs) and revocation
manufacturer_proof_request = {
"name": "isManufacturer Proof Request",
"version": "1.0",
"requested_attributes": {f"0_{req_attr['name']}_uuid": req_attr for req_attr in req_attrs },
"requested_predicates": {}, #{f"0_{req_pred['name']}_GE_uuid": req_pred for req_pred in req_preds },
"non_revoked": {"to": int(time.time())}
}
print(colored("Manufacturer Proof Request:", attrs=["bold"]))
pprint(manufacturer_proof_request)
# -
# #### 3.2 – Send Proof Request
# The proof request asks the agent at the other end of `connection_id` to prove attributes defined within `manufacturer_proof_request` using the manufacturer schema.
#
# The resulting presentation request is encoded in base64 and packed into a DIDComm Message. The `@type` attribute in the presentation request defines the protocol present-proof and the message request-presentation.
#
# Overall, the proof-presentation procedure has six steps. **R** represents the Relying Party (here the City agent), and **H** the Holder (i.e., the manufacturers).
#
# | Step | Status | Agent | Description | Function |
# | --- | --- | --- | --- | --- |
# | 1 | `request_sent` | R | R requests a proof defined in `manufacturer_proof_request` | `send_proof_request()` |
# | 2 | `request_received` | H | H receives a proof request from R | - |
# | 3 | `presentation-sent` | H | H sends proof presentation to R | `send_proof_presentation()` |
# | 4 | `presentation-received` | R | R receives presentation from H | - |
# | 5 | `verified` | R | R verifies presentation received from H | `verify_proof_presentation()` |
# | 6 | `presentation_acked` | H | H knows, that R verified the presentation | - |
#
# +
# Send proof request to Manufacturer1 and get presentation_exchange_id for connection with manufacturer 1
presentation_exchange_id_a1 = city_agent.send_proof_request(
connection_id=connection_id_a1,
proof_request=manufacturer_proof_request,
comment="Please prove that you are an agent who is a certified manufacturer"
)
# Send proof request to Manufacturer2 and get presentation_exchange_id for connection with manufacturer 2
presentation_exchange_id_a2 = city_agent.send_proof_request(
connection_id=connection_id_a2,
proof_request=manufacturer_proof_request,
comment="Please prove that you are an agent who is a certified manufacturer"
)
# Send proof request to Manufacturer3 and get presentation_exchange_id for connection with manufacturer 3
presentation_exchange_id_a3 = city_agent.send_proof_request(
connection_id=connection_id_a3,
proof_request=manufacturer_proof_request,
comment="Please prove that you are an agent who is a certified manufacturer"
)
# -
# <div style="font-size: 25px"><center><b>Break Point 5</b></center></div>
# <div style="font-size: 50px"><center>City → Manufacturer1 / Manufacturer2 / Manufacturer3</center></div><br>
# <center><b>Please go to all manufacturer agents. <br> For each of the manufacturer agents, continue with executing Step 3</b></center>
#
# #### 3.3 – Verify Proof Presentations
#
# Once the proof presentations from the manufacturers have been received (see output under Step 3.2), you can verify whether they are valid.
# A presentation object contains three classes of attributes.
# * Revealed Attributes: Attributes that were signed by an issuer and have been revealed in the presentation process
# * Self Attested Attributes: Attributes that the prover has self attested to in the presentation object.
# * Predicate proofs: Attribute values that have been proven to meet some statement. (TODO: Show how you can parse this information)
#
# Execute the following cell. The `verify_proof_presentation()` function verifies the proof presentation and parses out the relevant information. The return value indicates whether the presentation proof is valid. In the use case at hand, because only one attribute is verified, the boolean denotes whether the agent is a manufacturer.
# +
a1_is_manufacturer = city_agent.verify_proof_presentation(presentation_exchange_id_a1)
a2_is_manufacturer = city_agent.verify_proof_presentation(presentation_exchange_id_a2)
a3_is_manufacturer = city_agent.verify_proof_presentation(presentation_exchange_id_a3)
print("AnonymousAgent1 is manufacturer:", a1_is_manufacturer)
print("AnonymousAgent2 is manufacturer:", a2_is_manufacturer)
print("AnonymousAgent3 is manufacturer:", a3_is_manufacturer)
# -
# <a id=4></a>
# ### 4 – Do Data Science
# Now that the City agent has verified that all agents are indeed manufacturers, join the individual Duet connections that the Manufacturers already sent to the City agent.
#
# #### 4.1 – Establish Duet connections
# Duet is a package that allows you to exchange encrypted data and run privacy-preserving arithmetic operations on them (e.g., through homomorphic encryption or secure multiparty computation). The ACM package is configured to allow the exchange of duet tokens. Only one duet connection can be established per aries connection.
#
# ##### Duet Connection with AnonymousAgent1
# +
# Set up connection_id used for duet token exchange
city_agent._update_connection(connection_id=connection_id_a1, is_duet_connection=True, reset_duet=True)
# Join duet established by external agent
duet_a1 = sy.join_duet(credential_exchanger=city_agent)
# Check that you can access the duet data store
duet_a1.store.pandas
# -
# ##### Duet Connection with AnonymousAgent2
# +
# Set up connection_id used for duet token exchange and join duet
city_agent._update_connection(connection_id=connection_id_a2, is_duet_connection=True, reset_duet=True)
duet_a2 = sy.join_duet(credential_exchanger=city_agent)
# Check that you can access the duet data store
duet_a2.store.pandas
# -
# ##### Duet Connection with AnonymousAgent3
# +
# Set up connection_id used for duet token exchange and join duet
city_agent._update_connection(connection_id=connection_id_a3, is_duet_connection=True, reset_duet=True)
duet_a3 = sy.join_duet(credential_exchanger=city_agent)
# Check that you can access the duet data store
duet_a3.store.pandas
# -
# #### 4.2 – Set up a Secure Multiparty Computation (SyMPC) session with all duet connections
# Create a duet session that is able to access the data from all three established duet connections.
#
# Then, initiate the duet session to enable Secure Multiparty Computation. The `Session` is used to send some config information only once between the parties. This information can be:
# * the ring size in which we do the computation
# * Reference to the parties involved
# * the precision and base
# * ...
#
# The `MPCTensor` is the tensor that holds reference to the shares owned by the different parties. Specifically, it is an orchestrator that can do computations on data that it does not see.
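The sharing scheme behind `MPCTensor` can be illustrated with a toy additive secret sharing sketch in plain Python. This is an illustration only, not the SyMPC implementation: each party holds one random-looking share, no single party can recover the secret, yet the shares can be added pointwise and later reconstructed.

```python
import random

def share(secret, n_parties, modulus=2**32):
    # Split `secret` into n additive shares that sum to it (mod modulus);
    # any n-1 shares alone look like uniform random noise
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=2**32):
    # Only the sum of ALL shares reveals the secret
    return sum(shares) % modulus

# Each manufacturer would hold one share; the orchestrator never sees the secrets
x_shares = share(421, 3)
y_shares = share(137, 3)

# Adding shares pointwise adds the secrets -- the basis of (x + y).reconstruct()
sum_shares = [(a + b) % 2**32 for a, b in zip(x_shares, y_shares)]
```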
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
session = Session(parties=[duet_a1, duet_a2, duet_a3])
SessionManager.setup_mpc(session)
session.get_protocol()
# + [markdown] pycharm={"name": "#%% md\n"}
# <div style="font-size: 25px"><center><b>Break Point 9</b></center></div>
# <div style="font-size: 50px"><center>City → Manufacturer1 / Manufacturer2 / Manufacturer3</center></div><br>
# <center><b>Please go to all manufacturer agents. <br> For each of the manufacturer agents, continue with executing Step 4.2</b></center>
#
# #### 4.3 – Access encrypted data from AnonymousAgent1, AnonymousAgent2, and AnonymousAgent3
# Access data from the encrypted duet stores by ID. Check which data the stores have to offer, and enter the names of the data entries to retrieve them.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
duet_a1.store.pandas
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
duet_a2.store.pandas
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
duet_a3.store.pandas
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# Retrieve encrypted data
x_secret = duet_a1.store["hourly-co2-per-zip_2021-08-19"] # describe local data to test sum, subtract, and multiply
y_secret = duet_a2.store["hourly-co2-per-zip_2021-08-19"]
z_secret = duet_a3.store["hourly-co2-per-zip_2021-08-19"]
# -
# Test the output of one of them
print(x_secret)
# + [markdown] pycharm={"name": "#%% md\n"}
# #### 4.4 – Share secrets with Agents in session
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# Convert encrypted data into MPCTensors and share the secrets
x = MPCTensor(secret=x_secret, shape=(1,), session=session) # @todo: adjust shape!
y = MPCTensor(secret=y_secret, shape=(1,), session=session)
z = MPCTensor(secret=z_secret, shape=(1,), session=session)
# -
# Test the output of one of them
print(z)
# #### 4.5 – Reconstruct data shares
# Let's first do some basic operations. Because these operations are performed via SMPC, the raw data never leaves the data owners' servers!
#
# Unfortunately, the following lines of code currently raise an error.
# [See Github issue](https://github.com/OpenMined/SyMPC/issues/282). If you want to see how the `.reconstruct()` function works, go to `xx/` and run the notebook.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# Do basic arithmetic operations on the MPCTensors
print("X + Y + Z = ",(x + y + z).reconstruct())
# -
# ---
#
# ### 5 – Terminate Controller
#
# When you have finished with this notebook, be sure to terminate the controller. This is especially important if your business logic runs across multiple notebooks.
await agent_controller.terminate()
# + [markdown] pycharm={"name": "#%% md\n"}
# ---
#
# ## You can close this notebook now
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# agents/city/notebooks/03_connect_with_manufacturers.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install p3_data
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pylab as plt
import matplotlib.dates as mdates
import matplotlib.cm as cm
import seaborn as sns
import json
from io import StringIO
import importlib
import p3_data
from p3_data import (glob_file_list, load_json_from_file, merge_dicts, plot_groups,
get_varying_column_names, filter_dataframe, take_varying_columns,
load_json_records_as_dataframe)
# Load result files from P3 Test Driver
src_files = []
#src_files += ['../../../tests/p3-RH-24-tests/p3_test_driver/results/*.json'] # Redhook
src_files += ['../../../tests/pulsar-2.5.2-AWS-20-tests/p3_test_driver/results/*.json'] # AWS
raw_df = load_json_records_as_dataframe(src=src_files, ignore_error=True)
# Clean raw results
def clean_result(result):
try:
r = result.copy()
r['utc_begin'] = pd.to_datetime(r['utc_begin'], utc=True)
r['utc_end'] = pd.to_datetime(r['utc_end'], utc=True)
r['git_commit'] = r['git_commit'].split(' ')[0]
r['driverName'] = r['driver']['name']
if r['driverName'] == 'Pulsar':
r = merge_dicts(r, r['driver']['client']['persistence'])
r = merge_dicts(r, r['workload'])
del r['workload']
r = merge_dicts(r, r['omb_results'])
if 'ansible_vars' in r and isinstance(r['ansible_vars'], dict):
r = merge_dicts(r, r['ansible_vars'])
if r['driverName'] == 'Pravega':
if 'pravegaVersion' not in r:
r['pravegaVersion'] = '0.6.0-2361.f273314-SNAPSHOT'
r['pravegaVersion'] = r['pravegaVersion'].replace('-SNAPSHOT','')
for k in list(r.keys()):
if 'Quantiles' in k:
r[k] = pd.Series(data=[float(q) for q in r[k].keys()], index=list(r[k].values())).sort_index() / 100
elif isinstance(r[k], list) and 'Rate' in k:
r[k] = pd.Series(r[k])
r['%sMean' % k] = r[k].mean()
r['numWorkloadWorkers'] = int(r.get('numWorkers', 0))
r['throttleEventsPerSec'] = r['producerRate']
r['publishRateEventsPerSecMean'] = r['publishRateMean']
r['publishRateMBPerSecMean'] = r['publishRateMean'] * r['messageSize'] * 1e-6
r['publishLatencyMsAvg'] = r['aggregatedPublishLatencyAvg']
r['publishLatencyMs50Pct'] = r['aggregatedPublishLatency50pct']
r['publishLatencyMs99Pct'] = r['aggregatedPublishLatency99pct']
r['endToEndLatencyMsAvg'] = r['aggregatedEndToEndLatencyAvg']
r['endToEndLatencyMs50Pct'] = r['aggregatedEndToEndLatency50pct']
r['endToEndLatencyMs99Pct'] = r['aggregatedEndToEndLatency99pct']
return pd.Series(r)
    except Exception as e:
        print('ERROR: %s: %s' % (result.get('test_uuid', '?'), e))
# raise e
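# The quantile handling inside `clean_result` turns a percentile-to-latency mapping into a CDF series (latency as the index, cumulative probability as the value). Sketched in isolation with a made-up quantile dict (the real keys and units come from the benchmark results):

```python
import pandas as pd

# Hypothetical quantile dict: keys are percentiles (as strings),
# values are latencies in milliseconds.
quantiles = {"50.0": 3.2, "95.0": 8.1, "99.0": 15.4}

# Build a CDF series: index = latency (ms), values = cumulative probability.
cdf = pd.Series(
    data=[float(q) for q in quantiles.keys()],
    index=list(quantiles.values()),
).sort_index() / 100

print(cdf)
```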
# +
# r = clean_result(raw_df.iloc[-1])
# pd.DataFrame(r)
# -
clean_df = raw_df.apply(clean_result, axis=1)
clean_df = clean_df.set_index('test_uuid', drop=False)
clean_df = clean_df[clean_df.error==False]
clean_df = clean_df.sort_values(['utc_begin'])
# Show list of columns
clean_df.columns.values
# Define columns that identify test parameters
param_cols = [
'numWorkloadWorkers',
'topics',
'partitionsPerTopic',
'producersPerTopic',
'subscriptionsPerTopic',
'consumerPerSubscription',
'testDurationMinutes',
'keyDistributor',
'git_commit',
'pulsarVersion',
]
# Define columns that are the output of the experiments
output_cols = [
'publishRateEventsPerSecMean',
'publishRateMBPerSecMean',
'publishLatencyMs50Pct',
'publishLatencyMs99Pct',
'endToEndLatencyMs50Pct',
'endToEndLatencyMs99Pct',
'utc_begin',
]
cols = param_cols + output_cols
# View most recent results
clean_df[cols].tail(38).T
# +
# Export to CSV
#clean_df[cols].to_csv('openmessaging-benchmark-results.csv')
# +
# df = clean_df[cols]
# df = df.sort_values(['messageSize','numWorkloadWorkers','producersPerTopic','throttleEventsPerSec','utc_begin'])
# df.head()
# -
# View distinct values of pulsarVersion and test counts
clean_df.groupby(['pulsarVersion']).size()
# First level of filtering
filt_df = filter_dataframe(
clean_df,
# driverName='Pravega',
# pravegaVersion='0.8.0-2508.30406cf',
# pravegaVersion='0.6.0-2386.23b7340',
# numWorkloadWorkers=2,
topics=1,
testDurationMinutes=1,
# size_of_test_batch=(2,1000), # between
# aggregatedEndToEndLatency50pct=(1,1e6),
)
# filt_df = filt_df[filt_df.size_of_test_batch > 1]
len(filt_df)
def latency_vs_throughput_table(df):
result_df = (df
.set_index(['publishRateMBPerSecMean'])
.sort_index()
[[
'aggregatedPublishLatency50pct',
'aggregatedPublishLatency95pct',
'aggregatedPublishLatency99pct',
'aggregatedEndToEndLatency50pct',
'aggregatedEndToEndLatency95pct',
'aggregatedEndToEndLatency99pct',
'test_uuid',
]]
.rename(columns=dict(
aggregatedPublishLatency50pct='Publish Latency p50',
aggregatedPublishLatency95pct='Publish Latency p95',
aggregatedPublishLatency99pct='Publish Latency p99',
aggregatedEndToEndLatency50pct='E2E Latency p50',
aggregatedEndToEndLatency95pct='E2E Latency p95',
aggregatedEndToEndLatency99pct='E2E Latency p99',
))
)
result_df.index.name = 'Publish Throughput (MB/s)'
return result_df
def plot_latency_vs_throughput(df):
assert len(df.messageSize.drop_duplicates().values) == 1
messageSize = df.messageSize.iloc[0]
partitionsPerTopic = df.partitionsPerTopic.iloc[0]
testDurationMinutes = df.testDurationMinutes.iloc[0]
plot_df = latency_vs_throughput_table(df)
title = 'Pulsar 2.5.2 AWS 03.06.20, Message Size %d, partitions: %d %dmin test' % (messageSize, partitionsPerTopic, testDurationMinutes)
ax = plot_df.plot(
logx=True,
logy=True,
figsize=(10,8),
grid=True,
title=title,
style=['x:b','x-.b','x-b','+:r','+-.r','+-r'])
ax.set_ylabel('Latency (ms)');
tick_formatter = matplotlib.ticker.LogFormatter()
ax.xaxis.set_major_formatter(tick_formatter)
ax.yaxis.set_major_formatter(tick_formatter)
ax.grid('on', which='both', axis='both')
# ## Message Size 100 B
filt_100_df = filter_dataframe(
filt_df,
messageSize=100,
producersPerTopic=1,
partitionsPerTopic=1,
)
# View varying columns
take_varying_columns(filt_100_df[filt_100_df.producerRate==100000]).T
# View distinct sets of parameters.
# There should only be one distinct set of parameters.
filt_100_df[param_cols].drop_duplicates().T
plot_latency_vs_throughput(filt_100_df)
latency_vs_throughput_table(filt_100_df)
# ## Message Size 10 KB
filt_10000_df = filter_dataframe(
filt_df,
messageSize=10000,
#producersPerTopic=1,
partitionsPerTopic=16,
)
# View distinct sets of parameters.
# There should only be one distinct set of parameters.
filt_10000_df[param_cols].drop_duplicates().T
plot_latency_vs_throughput(filt_10000_df)
latency_vs_throughput_table(filt_10000_df)
# ## Analyze 100 B events, 50,000 events/sec, various number of partitions and producers
filt_50000eps_df = filter_dataframe(
filt_df,
messageSize=100,
producerRate=-1,
).sort_values(['endToEndLatencyMs99Pct'], ascending=True)
len(filt_50000eps_df)
take_varying_columns(filt_50000eps_df[cols]).head(20)
# # Analyze Latency Distribution
test_uuid = filt_50000eps_df.iloc[0].name
test_uuid
df = clean_df
t = df[df.test_uuid==test_uuid].iloc[0]
# Cumulative Distribution Function
pubcdf = t.aggregatedPublishLatencyQuantiles
pubcdf.name = 'Publish Latency CDF'
# Probability Distribution Function (latency histogram)
pubpdf = pd.Series(index=pubcdf.index, data=np.gradient(pubcdf, pubcdf.index.values), name='Publish Latency PDF')
fig0, ax0 = plt.subplots()
ax1 = ax0.twinx()
pubpdf.plot(ax=ax0, xlim=[1,100], ylim=[0,None], style='r', title='Publish Latency PDF and CDF')
pubcdf.plot(ax=ax1, xlim=[1,100], secondary_y=True, logx=True, ylim=[0,1])
# ax0.set_ylabel('PDF');
# ax1.set_ylabel('CDF');
ax0.set_xlabel('Publish Latency (ms)');
tick_formatter = matplotlib.ticker.LogFormatter()
ax0.xaxis.set_major_formatter(tick_formatter)
ax0.grid('on', which='both', axis='both')
plt.show()
plt.close()
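# The PDF above is obtained by numerically differentiating the CDF with `np.gradient`. A quick sanity check on a distribution with a known density (uniform on [0, 10], whose PDF is the constant 0.1):

```python
import numpy as np

# For a uniform distribution on [0, 10], the CDF is linear and the PDF constant.
x = np.linspace(0.0, 10.0, 101)
cdf = x / 10.0
pdf = np.gradient(cdf, x)   # numerical derivative of the CDF
print(pdf[0], pdf[50])
```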
# Cumulative Distribution Function
e2ecdf = t.aggregatedEndToEndLatencyQuantiles
e2ecdf.name = 'E2E Latency CDF'
# Probability Distribution Function (latency histogram)
e2epdf = pd.Series(index=e2ecdf.index, data=np.gradient(e2ecdf, e2ecdf.index.values), name='E2E Latency PDF')
fig0, ax0 = plt.subplots()
ax1 = ax0.twinx()
e2epdf.plot(ax=ax0, xlim=[1,100], ylim=[0,None], style='r', title='E2E Latency PDF and CDF')
e2ecdf.plot(ax=ax1, xlim=[1,100], secondary_y=True, logx=True, ylim=[0,1])
# ax0.set_ylabel('PDF');
# ax1.set_ylabel('CDF');
ax0.set_xlabel('E2E Latency (ms)');
tick_formatter = matplotlib.ticker.LogFormatter()
ax0.xaxis.set_major_formatter(tick_formatter)
ax0.grid('on', which='both', axis='both')
plt.show()
plt.close()
# Combined publish and E2E latency CDF
fig0, ax0 = plt.subplots()
xlim=[1,25]
pubcdf.plot(ax=ax0, xlim=xlim, logx=True, ylim=[0,1], legend=True, figsize=(10,8))
e2ecdf.plot(ax=ax0, xlim=xlim, logx=True, ylim=[0,1], legend=True)
ax0.set_xlabel('E2E Latency (ms)');
tick_formatter = matplotlib.ticker.LogFormatter()
ax0.xaxis.set_major_formatter(tick_formatter)
ax0.grid('on', which='both', axis='both')
plt.show()
plt.close()
# ## Compare Two Sets
# Common filter
filt_df = filter_dataframe(
clean_df,
driverName='Pravega',
numWorkloadWorkers=4,
topics=1,
testDurationMinutes=15,
size_of_test_batch=(2,1000), # between
aggregatedEndToEndLatency50pct=(1,1e6),
messageSize=100,
producersPerTopic=32,
partitionsPerTopic=16,
)
len(filt_df)
# Set 1
filt1_df = filter_dataframe(
filt_df,
pravegaVersion='0.6.0-2361.f273314',
)
len(filt1_df)
# Set 2
filt2_df = filter_dataframe(
filt_df,
pravegaVersion='0.6.0-2386.23b7340',
)
len(filt2_df)
dfs = [filt1_df, filt2_df]
take_varying_columns(pd.concat(dfs)[param_cols]).drop_duplicates()
def plot_latency_vs_throughput_comparison(dfs, legend_cols=None, latencyMetric='Publish'):
fig0, ax0 = plt.subplots()
cmap = plt.get_cmap('Set1')
colors = cmap.colors[0:len(dfs)]
for index, (df, color) in enumerate(zip(dfs, colors)):
df = df.set_index(['publishRateMBPerSecMean']).sort_index()
name_cols = df.iloc[0][legend_cols]
name = ','.join(['%s=%s' % item for item in name_cols.to_dict().items()])
for percentile, style in [('50',':x'), ('95','-.x'), ('99','-x')]:
plot_df = df[['aggregated%sLatency%spct' % (latencyMetric, percentile)]]
plot_df.columns = ['%s %s Latency p%s' % (name, latencyMetric, percentile)]
plot_df.index.name = 'Publish Throughput (MB/s)'
plot_df.plot(
ax=ax0,
logx=True,
logy=True,
figsize=(10,8),
grid=True,
style=style,
color=color,
)
ax0.set_ylabel('Latency (ms)');
tick_formatter = matplotlib.ticker.LogFormatter()
ax0.xaxis.set_major_formatter(tick_formatter)
ax0.yaxis.set_major_formatter(tick_formatter)
ax0.grid('on', which='both', axis='both')
plot_latency_vs_throughput_comparison([filt1_df, filt2_df], legend_cols=['pravegaVersion'], latencyMetric='Publish')
plot_latency_vs_throughput_comparison([filt1_df, filt2_df], legend_cols=['pravegaVersion'], latencyMetric='EndToEnd')
# results-analyzer/Redhook/ra-pulsar-2.5.2_AWS.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="21256b6b-21dd-5dbb-7e8a-43af8b852cdd" colab={"base_uri": "https://localhost:8080/"} id="XVpcGLQTSRHy" outputId="7f76df1a-91d7-4733-f923-f2f22046d78a"
# !pip install imbalanced-ensemble
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import random
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from imbalanced_ensemble.ensemble import SMOTEBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_curve, auc
import time
# %matplotlib inline
# + [markdown] id="4ctpY1roS1lQ"
# ## Data Loading
# + [markdown] id="qFJpeHX6U56B"
# PIMA Dataset
# + id="99wBcMNVSZBv"
def load_pima_dataset():
data = pd.read_csv('diabetes.csv')
    data['Outcome'].value_counts().plot.bar()
plt.title('Pima Dataset')
plt.xlabel('Class')
plt.ylabel('Frequency')
plt.show()
X = data.drop(data.columns.values[-1:],axis=1)
y = data.drop(data.columns.values[:-1],axis=1)
return X.to_numpy(), y.to_numpy()
# + [markdown] id="zjjpRjd0U8xq"
# Credit Card Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 611} id="nbB4UzyLU39Y" outputId="91b9880c-97a7-41db-bd14-a6e2773bbf9b"
def load_credit_card_dataset():
data = pd.read_csv('creditcard.csv')
    data['Class'].value_counts().plot.bar()
plt.title('Credit Card Dataset')
plt.xlabel('Class')
plt.ylabel('Frequency')
plt.show()
data = data.drop(['Time', 'Amount'], axis=1)
X = data.drop(data.columns.values[-1:],axis=1)
y = data.drop(data.columns.values[:-1],axis=1)
return X.to_numpy(),y.to_numpy()
load_credit_card_dataset()
# + [markdown] id="b7W0QDfxVDqg"
# Synthetic Datasets
# + id="1vtZHpwGSZD1"
def load_synthetic_dataset(num_samples=100, num_features=4, num_classes=2, weights_list=[0.9, 0.1], random_state=0, test_size=0.2):
X,y = make_classification(n_samples=num_samples,n_features=num_features,n_classes=num_classes,weights=weights_list,random_state=random_state)
plt.scatter(X[:,0][y==1],X[:,1][y==1],c='g',marker='X')
plt.scatter(X[:,0][y==0],X[:,1][y==0],c='r',marker='.')
plt.show()
y = np.array([[float(i)] for i in y])
return X, y
# + id="FwBmUBi0UcjN"
# + [markdown] id="U22FPpobnDY2"
# ## Evaluation Metrics
# + id="u1CJjLf8nCYD"
def get_evaluation_metrics(X_test, y_test, y_predict):
fn = len(y_test[(y_predict==0) & (y_test==1)])
tp = len(y_predict[(y_predict==1) & (y_test==1)])
fp = len(y_predict[(y_predict==1) & (y_test==0)])
tn = len(y_predict[(y_predict==0) & (y_test==0)])
    majority_acc = tp / (tp + fn)
    minority_acc = tn / (tn + fp)
return fn, tp, fp, tn, majority_acc, minority_acc
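# As a sanity check, the same confusion counts can be computed on a toy prediction vector (hypothetical labels, not one of the datasets above):

```python
import numpy as np

# Toy predictions for a binary classifier (1 = positive class).
y_test    = np.array([1, 1, 0, 0, 1, 0])
y_predict = np.array([1, 0, 0, 1, 1, 0])

tp = int(np.sum((y_predict == 1) & (y_test == 1)))  # true positives
fn = int(np.sum((y_predict == 0) & (y_test == 1)))  # false negatives
fp = int(np.sum((y_predict == 1) & (y_test == 0)))  # false positives
tn = int(np.sum((y_predict == 0) & (y_test == 0)))  # true negatives
print(tp, fn, fp, tn)
```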
# + [markdown] id="srcXvReGSdnU"
# ## SMOTE
# + id="PpOhDwy9tdBY"
def get_knn(X, y, y_minority, k):
neighbours = []
for index_1 in range(len(X[:][y==y_minority])):
neighbours.append([])
for index_2 in range(len(X[:][y==y_minority])):
if index_2 != index_1:
dist = np.sum(np.square(X[:][y==y_minority][index_1]-X[:][y==y_minority][index_2]))
neighbours[index_1].append((dist, index_2))
neighbours[index_1].sort()
neighbours = np.array(neighbours)
k_neighbours = neighbours[:, :k]
return k_neighbours
# + id="X1-QJCobsTV8"
def nn_vect_select(X, y, y_minority, nnarray, k, os_percent):
init_size = len(X[:][y==y_minority])
factor = os_percent // 100
for index_1 in range(init_size):
neigh_nums = random.sample(range(0, k), factor)
for i in neigh_nums:
index_2 = int(nnarray[index_1][i][1])
ratio = random.random()
new_vals = X[:][y==y_minority][index_1] + ratio * (X[:][y==y_minority][index_2] - X[:][y==y_minority][index_1])
X = np.append(X,[new_vals],axis=0)
y = np.append(y,[y_minority])
return X, y
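# The core of the oversampling step above is linear interpolation between a minority sample and one of its nearest neighbours: `new = x1 + ratio * (x2 - x1)` with a random `ratio` in [0, 1). A minimal sketch with made-up points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical minority-class samples; a SMOTE synthetic point lies on
# the segment between a sample and one of its nearest minority neighbours.
x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, 6.0])
ratio = rng.random()                  # uniform in [0, 1)
synthetic = x1 + ratio * (x2 - x1)

# The synthetic point stays inside the bounding box of its parents.
print(synthetic)
```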
# + id="KmwNREK9SY4x"
def apply_smote(X, y, y_minority, test_size=0.3, random_state=0, os_percent=200, k=5):
y = y.ravel()
nnarray = get_knn(X, y, y_minority, k)
X, y = nn_vect_select(X, y, y_minority, nnarray, k, os_percent)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
clf = KNeighborsClassifier()
clf.fit(X_train,y_train)
y_predict = clf.predict(X_test)
y_scores = clf.predict_proba(X_test)
fpr, tpr, threshold = roc_curve(y_test, y_scores[:, 1])
roc_auc = auc(fpr, tpr)
fn, tp, fp, tn, majority_acc, minority_acc = get_evaluation_metrics(X_test, y_test, y_predict)
print('Over-sampling %: ', os_percent)
print('Precision: ', tp/(tp+fp))
print('Recall: ', tp/(tp+fn))
# print('Accuracy: ', (tp+tn)/(tp+fp+tn+fn))
print()
return X, y, fpr, tpr, roc_auc
# + [markdown] id="OnNh-IWwSiO4"
# ## SMOTEBoost
# + id="_OqiMCZZSY6i"
def apply_smote_boost(X, y, y_minority, test_size=0.3, random_state=0):
y = y.ravel()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
sm_boost = SMOTEBoostClassifier(random_state=random_state)
sm_boost.fit(X_train, y_train)
y_predict = sm_boost.predict(X_test)
y_scores = sm_boost.decision_function(X_test)
fpr, tpr, threshold = roc_curve(y_test, y_scores)
roc_auc = auc(fpr, tpr)
fn, tp, fp, tn, majority_acc, minority_acc = get_evaluation_metrics(X_test, y_test, y_predict)
print()
print('Precision: ', tp/(tp+fp))
print('Recall: ', tp/(tp+fn))
# print('Accuracy: ', (tp+tn)/(tp+fp+tn+fn))
return fpr, tpr, roc_auc
# + [markdown] id="E5S_5ZkEt7MH"
# ## Analysis
# + id="QL_-G2CNEsc3"
oversampling_percentage = [0,100,200,300]
colours = ['b', 'g', 'y', 'r']
# + id="FhdabOHGKGEx"
def evaluate_smote(X, y, test_size=0.3, random_state=0, os_percent=200, k=5):
newX, newy, fpr, tpr, roc_auc = apply_smote(X, y, 1, test_size=test_size, os_percent=os_percent, k=k)
return newX, newy, fpr, tpr, roc_auc
# + id="HcMm-oWCMF9H"
def evaluate_smote_boost(X, y, test_size=0.3, random_state=0):
fpr, tpr, roc_auc = apply_smote_boost(X, y, 1, test_size=test_size)
return fpr, tpr, roc_auc
# + [markdown] id="frsu0xkQJ31D"
# #### SMOTE+KNN on Pima
# + colab={"base_uri": "https://localhost:8080/", "height": 839} id="CSeCnlTOf61l" outputId="da252651-2139-4b21-ffd4-8773efee14de"
X, y = load_pima_dataset()
for i in range(0, len(oversampling_percentage)):
plt.title('Receiver Operating Characteristic')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.0])
plt.ylim([-0.1,1.01])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
newX, newY, fpr, tpr, roc_auc = evaluate_smote(X, y, os_percent=oversampling_percentage[i])
plt.plot(fpr, tpr, colours[i],label='Oversampling = %d%%, AUC = %0.3f'% (oversampling_percentage[i], roc_auc))
plt.legend()
plt.show()
# + [markdown] id="UQZsb7-bLwAU"
# #### SMOTEBoost on Pima
# + colab={"base_uri": "https://localhost:8080/", "height": 671} id="4zG4sXAdL0Zz" outputId="45d8ab57-8d4c-471b-f721-2cef931113a2"
X, y = load_pima_dataset()
plt.title('Receiver Operating Characteristic')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.0])
plt.ylim([-0.1,1.01])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
fpr, tpr, roc_auc = evaluate_smote_boost(X, y)
plt.plot(fpr, tpr, 'b',label='AUC = %0.3f'% (roc_auc))
print('\n\n')
plt.legend()
plt.show()
# + [markdown] id="G4ZSwh7pBUdY"
# #### SMOTEBoost on Credit Fraud
# + colab={"base_uri": "https://localhost:8080/", "height": 671} id="HIHkHFA4BTLV" outputId="d44ebe58-95bd-48dc-83c1-b725ccc11d19"
X, y = load_credit_card_dataset()
plt.title('Receiver Operating Characteristic')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.0])
plt.ylim([-0.1,1.01])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
fpr, tpr, roc_auc = evaluate_smote_boost(X, y)
plt.plot(fpr, tpr, 'b',label='AUC = %0.3f'% (roc_auc))
print('\n\n')
plt.legend()
plt.show()
# + [markdown] id="HPgKBTxlJ9eX"
# #### SMOTE+KNN on Synthetic Data
# + id="-zouHgRxKcU1"
number_samples = [1000]
oversampling_percentage = [0,100,200,300]
ir = [5, 10, 50]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="UFtfQl1VQAKa" outputId="c074b6a0-6c12-46d8-883b-b38b45fb00c4"
for j in range(len(ir)):
print('Imbalance Ratio: ', ir[j])
for num_samp in number_samples:
print('Number of Samples: ', num_samp)
X, y = load_synthetic_dataset(num_samples=num_samp, weights_list=[ir[j]/(ir[j]+1), 1/(ir[j]+1)], random_state=2)
for i in range(0, len(oversampling_percentage)):
plt.title('Receiver Operating Characteristic')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.0])
plt.ylim([-0.1,1.01])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
newX, newy, fpr, tpr, roc_auc = evaluate_smote(X, y, os_percent=oversampling_percentage[i])
plt.plot(fpr, tpr, colours[i],label='Oversampling = %d%%, AUC = %0.3f'% (oversampling_percentage[i], roc_auc))
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="nTXbsdcsit25" outputId="2451d11c-753a-4528-f941-6f4cfec735c0"
for j in range(len(ir)):
print('Imbalance Ratio: ', ir[j])
for num_samp in number_samples:
print('Number of Samples: ', num_samp)
X, y = load_synthetic_dataset(num_samples=num_samp, weights_list=[ir[j]/(ir[j]+1), 1/(ir[j]+1)], random_state=2)
for i in range(0, len(oversampling_percentage)):
newX, newy, fpr, tpr, roc_auc = evaluate_smote(X, y, os_percent=oversampling_percentage[i])
print('Over-sampling %: ', oversampling_percentage[i])
plt.scatter(X[:,0][y[:,0]==1],X[:,1][y[:,0]==1],c='g',marker='X')
plt.scatter(X[:,0][y[:, 0]==0],X[:,1][y[:, 0]==0],c='r',marker='.')
plt.show()
# + [markdown] id="1TunTl0YSO0t"
# #### SMOTEBoost on Synthetic Data
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="smmB1g9mSZLH" outputId="38a0dd81-f764-4a05-a40d-38ae75139dce"
for j in range(len(ir)):
print('Imbalance Ratio: ', ir[j])
for num_samp in number_samples:
print('Number of Samples: ', num_samp)
X, y = load_synthetic_dataset(num_samples=num_samp, weights_list=[ir[j]/(ir[j]+1), 1/(ir[j]+1)], random_state=2)
plt.title('Receiver Operating Characteristic')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.0])
plt.ylim([-0.1,1.01])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
fpr, tpr, roc_auc = evaluate_smote_boost(X, y)
plt.plot(fpr, tpr, 'b',label='AUC = %0.3f'% (roc_auc))
print('\n\n')
plt.legend()
plt.show()
# code/smote/smote_and_smoteboost_analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
#
# Wavelets for seismic modeling
# -----------------------------
#
# The :mod:`fatiando.seismic` package defines classes to evaluate and sample
# wavelets. Here is an example of how to use the
# :class:`~fatiando.seismic.RickerWavelet` class.
#
#
# +
import matplotlib.pyplot as plt
import numpy as np
from fatiando.seismic import RickerWavelet
# Make three wavelets to show how to evaluate them at given times.
w1 = RickerWavelet(f=5)
w2 = RickerWavelet(f=1, delay=0)
w3 = RickerWavelet(f=10, amp=-0.5, delay=0.7)
times = np.linspace(0, 1, 200)
# Call the wavelets like functions to get their values
v1 = w1(times)
v2 = w2(times)
v3 = w3(times)
# You can also sample the wavelets with a given time interval (dt). The default
# duration of the sampling is 2*delay, which guarantees that the whole wavelet
# is sampled.
w = RickerWavelet(f=60)
samples = w.sample(dt=0.001)
plt.figure(figsize=(8, 5))
ax = plt.subplot(1, 2, 1)
ax.set_title('Ricker wavelets')
ax.plot(times, v1, '-', label='f=5')
ax.plot(times, v2, '--', label='f=1')
ax.plot(times, v3, '-.', label='f=10')
ax.grid()
ax.legend()
ax.set_ylabel('Amplitude')
ax.set_xlabel('time (s)')
ax = plt.subplot(1, 2, 2)
ax.set_title('Wavelet sampled at 0.001s')
ax.plot(samples, '.k')
ax.grid()
ax.set_ylabel('Amplitude')
ax.set_xlabel('sample number')
plt.tight_layout()
plt.show()
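# For reference, the Ricker wavelet has a closed form, w(t) = (1 - 2π²f²t²)·exp(-π²f²t²), which peaks at 1 when t equals the delay. A plain-NumPy sketch (an assumed parametrization for illustration, not fatiando's source):

```python
import numpy as np

def ricker(t, f, amp=1.0, delay=0.0):
    # Standard Ricker ("Mexican hat") wavelet with peak frequency f (Hz).
    a = (np.pi * f * (t - delay)) ** 2
    return amp * (1 - 2 * a) * np.exp(-a)

t = np.linspace(-0.2, 0.2, 401)   # includes t = 0 exactly
w = ricker(t, f=25.0)
print(w.max())  # peak amplitude at t = 0
```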
# _downloads/wavelets.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Self-Driving Car Engineer Nanodegree
#
#
# ## Project: **Finding Lane Lines on the Road**
# ***
# In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
#
# Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
#
# In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
#
# ---
# Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
#
# **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
#
# ---
# **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
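# The averaging/extrapolation step can be sketched without OpenCV: compute a slope and intercept for each detected segment, average them per lane side, then extrapolate the averaged line to the bottom of the image. A toy example with hypothetical collinear left-lane segments (image coordinates, y growing downward):

```python
import numpy as np

# Hypothetical Hough segments as (x1, y1, x2, y2), all lying on one line.
segments = np.array([
    [100, 500, 150, 450],
    [200, 400, 250, 350],
    [260, 340, 300, 300],
])
slopes = (segments[:, 3] - segments[:, 1]) / (segments[:, 2] - segments[:, 0])
intercepts = segments[:, 1] - slopes * segments[:, 0]

# Average slope/intercept, then extrapolate to the bottom of a 540-px image.
m, b = slopes.mean(), intercepts.mean()
y_bottom = 540
x_bottom = int((y_bottom - b) / m)
print(m, b, x_bottom)
```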
#
# ---
#
# <figure>
# <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
# </figcaption>
# </figure>
# **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
# ## Import Packages
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# %matplotlib inline
# ## Read in an Image
# +
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# -
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for regions selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
# ## Helper Functions
# Below are some helper functions to help get you started. They should look familiar from the lesson!
# +
import math
def select_white_yellow(image):
"""
This filters an input image to have HSL (hue, saturation, lightness) spectrum
and put white/yellow mask for advanced grayscale type filtering,
which might have some advantage over simple RGB2GRAY convertion method in OpenCV
"""
# convert RGB to HLS
image_hsl = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
# Color mask : white
low = np.uint8([ 0, 200, 0])
high = np.uint8([255, 255, 255])
white_mask = cv2.inRange(image_hsl, low, high)
# Color mask : yellow
low = np.uint8([ 20, 0, 100])
high = np.uint8([ 30, 255, 255])
yellow_mask = cv2.inRange(image_hsl, low, high)
# combine the mask
mask = cv2.bitwise_or(white_mask, yellow_mask)
return cv2.bitwise_and(image, image, mask = mask)
def grayscale(img): #set back
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
#return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
#set empty lists
left_slope, right_slope = [], []
left_intercept,right_intercept = [], []
#for line polynomial (optional challenge)
leftx1, leftx2, lefty1, lefty2 = [], [], [], []
rightx1, rightx2, righty1, righty2 = [], [], [], []
#separate left and right for getting slope/intercept
for line in lines:
for x1,y1,x2,y2 in line:
    if x1 == x2:
        continue  # vertical segment: slope undefined, skip it
    slope = (y2-y1) / (x2-x1)
if slope > 0: #rightSide by Camera coord
right_slope.append(slope)
right_intercept.append(((y1 - slope*x1) + (y2 - slope*x2))/2)
#for line polynomial (optional challenge)
rightx1.append(x1)
rightx2.append(x2)
righty1.append(y1)
righty2.append(y2)
elif slope < 0: #leftSide by Camera coord
left_slope.append(slope)
left_intercept.append(((y1 - slope*x1) + (y2 - slope*x2))/2)
#for line polynomial (optional challenge)
leftx1.append(x1)
leftx2.append(x2)
lefty1.append(y1)
lefty2.append(y2)
#left : extrapolate
#extrapolating based on x from 0 ~ 1/2 size width
imshape = img.shape
x1_l = 0 #left most edge
x2_l = int(imshape[1]*0.45) #left neighbor
#left slope filter:select only the ones within std based on the mean
new_leftSlope = []
for ele in left_slope:
if ele < (np.mean(left_slope) + np.std(left_slope)) and ele > (np.mean(left_slope) - np.std(left_slope)):
new_leftSlope.append(ele)
#left intercept filter:select only the ones within std based on the mean
new_left_intercept = []
for ele in left_intercept:
if ele < (np.mean(left_intercept) + np.std(left_intercept)) and ele > (np.mean(left_intercept) - np.std(left_intercept)):
new_left_intercept.append(ele)
m_l = np.mean(new_leftSlope)
b_l = np.mean(new_left_intercept)
y1_l, y2_l = 0, 0
#sanity check before computing the extrapolated output
if not math.isnan(m_l*x1_l + b_l):
y1_l = int(m_l*x1_l + b_l)
else:
x1_l, x2_l, y1_l,y2_l = 0, 0, 0, 0
if not math.isnan(m_l*x2_l + b_l):
y2_l = int(m_l*x2_l + b_l)
else:
x1_l, x2_l, y1_l,y2_l = 0, 0, 0, 0
cv2.line(img, (x1_l, y1_l), \
(x2_l, y2_l), color, thickness)
#right :extrapolate
#extrapolating based on x from 1/2 ~ full size width
x1_r = int(imshape[1]*0.55) #right neighbor
x2_r = int(imshape[1]*0.95) #right most edge
#right slope filter:select only the ones within std based on the mean
new_rightSlope = []
for ele in right_slope:
if ele < (np.mean(right_slope) + np.std(right_slope)) and \
ele > (np.mean(right_slope) - np.std(right_slope)):
new_rightSlope.append(ele)
#right intercept filter:select only the ones within std based on the mean
new_right_intercept = []
for ele in right_intercept:
if ele < (np.mean(right_intercept) + np.std(right_intercept)) and \
ele > (np.mean(right_intercept) - np.std(right_intercept)):
new_right_intercept.append(ele)
m_r = np.mean(new_rightSlope)
b_r = np.mean(new_right_intercept)
y1_r, y2_r = 0, 0
#sanity check before computing the extrapolated output
if not math.isnan(m_r*x1_r + b_r):
y1_r = int(m_r*x1_r + b_r)
else:
x1_r, x2_r, y1_r, y2_r = 0, 0, 0, 0
if not math.isnan(m_r*x2_r + b_r):
y2_r = int(m_r*x2_r + b_r)
else:
x1_r, x2_r, y1_r, y2_r = 0, 0, 0, 0
cv2.line(img, (x1_r, y1_r), \
(x2_r, y2_r), color, thickness)
## Polyline Draw : for line polynomial (optional challenge)
#left
combx_left = (leftx1 + leftx2)
comby_left = (lefty1 + lefty2)
z_l = np.polyfit(combx_left, comby_left, 3)
#right
combx_right = (rightx1 + rightx2)
comby_right = (righty1 + righty2)
z_r = np.polyfit(combx_right, comby_right, 3)
#poly_lines (img,z_l, z_r)
## end: Polyline Draw
def poly_lines (img, z_l, z_r):
xp_l = np.linspace(0,img.shape[1]*0.4,10)
xp_r = np.linspace(img.shape[1]*0.6,img.shape[1]*0.95,10)
xy_l=np.array([np.array([int(x),int(z_l[3]+z_l[2]*x+ z_l[1]*pow(x,2)+ z_l[0]*pow(x,3))]) for x in xp_l])
xy_r=np.array([np.array([int(x),int(z_r[3]+z_r[2]*x+ z_r[1]*pow(x,2)+ z_r[0]*pow(x,3))]) for x in xp_r])
cv2.polylines(img, [ xy_l ], False, (255,255,255),15)
cv2.polylines(img, [ xy_r ], False, (255,255,255),15)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of hough_lines(): a blank (all-black) image with
    the detected lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
    initial_img * α + img * β + γ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
# -
# ## Test Images
#
# Build your pipeline to work on the images in the directory "test_images"
# **You should make sure your pipeline works well on these images before you try the videos.**
import os
import os.path
os.listdir("test_images/")
# ## Build a Lane Finding Pipeline
#
#
# Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
#
# Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
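# One way to organize that tuning is a simple grid sweep over candidate
# settings, rendering each combination for visual inspection. A sketch
# (parameter values are illustrative, not recommendations):

```python
from itertools import product

# candidate settings to render side by side and inspect visually
canny_lows = [40, 50, 60]
canny_highs = [150, 200]
hough_thresholds = [10, 20]

def sweep_params():
    # yield every valid combination; Canny expects low < high
    for low, high, votes in product(canny_lows, canny_highs, hough_thresholds):
        if low < high:
            yield low, high, votes

combos = list(sweep_params())
print(len(combos))  # 12 combinations to try
```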
# +
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
#Create output folder
folder = "/test_images_output"
os.makedirs(os.getcwd() + folder, exist_ok=True)
## Set Tuning parameters
cannyLow = 50
cannyHigh = 200
# Define the Hough transform parameters
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in a Hough grid cell)
min_line_len = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
#draw lane lines and save files - pipeline
for imgName in os.listdir("test_images/"):
#reading in an image
image = mpimg.imread('test_images/' + imgName)
#image = mpimg.imread('test_images/solidYellowCurve.jpg')
#setting vertices for region extraction : original image shape is (540, 960, 3)
imshape = image.shape
#grayscale and blur
#grayImg = grayscale(image)
grayImg = select_white_yellow(image)
blurImg = gaussian_blur(grayImg, 5)
#canny Edge detection
cannyImg = canny(blurImg, cannyLow, cannyHigh)
vertices = np.array([[(imshape[1]*0.1,imshape[0]*0.95),(imshape[1]*0.4, imshape[0]*0.6),
(imshape[1]*0.5,imshape[0]*0.6),(imshape[1]*0.9,imshape[0]*0.95)]], dtype=np.int32)
#region extraction along set vertices
regionImg = region_of_interest(cannyImg, vertices)
#hough line transformed lines added to the image
lineImg = hough_lines(regionImg, rho, theta, threshold, min_line_len, max_line_gap)
#overlay
weightImg = weighted_img(lineImg, image, 0.8, 1., 0.)
returnImg = weightImg
#mpimg.imread returns RGB while cv2.imwrite expects BGR, so swap channels before saving
returnImg = cv2.cvtColor(returnImg, cv2.COLOR_RGB2BGR)
#plt.imshow(image, cmap='gray')
#save the output image to the directory
cv2.imwrite(os.getcwd()+folder+'/returnImg_'+imgName, returnImg)
# -
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# +
def process_image(image):
# define parameters needed for helper functions (given inline)
kernel_size = 5 # gaussian blur
low_threshold = 60 # canny edge detection
high_threshold = 180 # canny edge detection
# Define the Hough transform parameters
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 50 # maximum gap in pixels between connectable line segments
# NOTE: The output you return should be a color image (3 channel) for processing video below
gray = grayscale(image) # convert to grayscale
blur_gray = gaussian_blur(gray, kernel_size) # add gaussian blur to remove noise
edges = canny(blur_gray, low_threshold, high_threshold) # perform canny edge detection
# extract image size and define vertices of the four sided polygon for masking
imshape = image.shape
vertices = np.array([[(imshape[1]*0.05,imshape[0]),(imshape[1]*0.44, imshape[0]*0.6),
(imshape[1]*0.55,imshape[0]*0.6),(imshape[1]*0.95,imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices) # retain information only in the region of interest
line_image = hough_lines(masked_edges, rho, theta, threshold,\
min_line_length, max_line_gap) # perform hough transform and retain lines with specific properties
lines_edges = weighted_img(line_image, image, 0.8, 1.) # draw the lines on the original image
return lines_edges
# -
# Let's try the one with the solid white lane on the right first ...
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
# Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# ## Improve the draw_lines() function
#
# **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
#
# **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
# Now for the one with the solid yellow lane on the left. This one's more tricky!
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
# %time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
# ## Writeup and Submission
#
# If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
#
# ## Optional Challenge
#
# Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
# %time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
# File: CarND-LaneLines-P1/.ipynb_checkpoints/P1-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Read Me
#
# This notebook allows one to collect images using python opencv using your computer's webcam.
# # 1. Import Dependencies
# !pip install opencv-python
# +
# Import opencv
import cv2
# Import uuid
import uuid
# Import Operating System
import os
# Import time
import time
# -
# # 2. Define Labels for the images we are collecting
labels = ['zhili','roy']
number_imgs = 5
print(labels)
len(labels)
# # 3. Setup Folders to store the images collected
IMAGES_PATH = os.path.join('collected-images')
if not os.path.exists(IMAGES_PATH):
if os.name == 'posix':
# !mkdir -p {IMAGES_PATH}
if os.name == 'nt':
# !mkdir {IMAGES_PATH}
for label in labels:
path = os.path.join(IMAGES_PATH, label)
if not os.path.exists(path):
# !mkdir {path}
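# Equivalently, the os.name check and shell magics in the cell above can be
# replaced by a portable pure-Python sketch using `os.makedirs`, which creates
# intermediate directories and tolerates existing ones on POSIX and Windows alike:

```python
import os

IMAGES_PATH = os.path.join('collected-images')
labels = ['zhili', 'roy']

# exist_ok=True makes this safe to re-run
for label in labels:
    os.makedirs(os.path.join(IMAGES_PATH, label), exist_ok=True)
```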
# # 4. Capture Images using webcam
#
# Press spacebar to capture an image using webcam
#
# Press ESC or q to quit the opencv window.
# +
# Capture Images 1 by 1
import cv2
#change class name to whatever that you are collecting, press space
class_name = 'zhili'
directory = IMAGES_PATH + '/'+ class_name
cam = cv2.VideoCapture(0)
cv2.namedWindow("Capturing Class {}, press SPACE to capture, press ESC to exit".format(class_name))
width = 1000
height = 1000
cam.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
img_counter = 1
while True:
ret, frame = cam.read()
if not ret:
print("failed to grab frame")
break
cv2.imshow("Capturing Class {}, press SPACE to capture, press ESC to exit".format(class_name), frame)
k = cv2.waitKey(1)
if k%256 == 27:
# ESC pressed
print("Escape hit, closing...")
break
elif k%256 == 32:
# SPACE BAR pressed
img_name = directory + "/{}_{}.jpg".format(class_name,img_counter)
# resize image to 200x200 pixels
frame = cv2.resize(frame,(200,200))
#print(frame.shape)
# write image to local directory
cv2.imwrite(img_name, frame)
print("{} written!".format(img_name))
img_counter += 1
if img_counter%10==0:
print('\n',img_counter)
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
# +
# Capture Images in a for loop (not recommended as opencv may crash due to lagging code)
for label in labels:
print('Collecting images for {}'.format(label))
#time.sleep(5)
cap = cv2.VideoCapture(0)
for imgnum in range(number_imgs):
#print('Collecting image {}'.format(imgnum))
ret, frame = cap.read()
imgname = os.path.join(IMAGES_PATH,label,label+'.'+'{}.jpg'.format(str(uuid.uuid1())))
cv2.imwrite(imgname, frame)
cv2.imshow('frame', frame)
#time.sleep(2)
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# File: notebooks/Image Collection.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.11 ('torch1.7')
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import unidecode
import matplotlib as mpl
from adjustText import adjust_text
import matplotlib.pyplot as plt
background = '#D7E5E5'
mpl.rcParams['font.family']= 'Candara'
mpl.rcParams['font.size'] = 12
mpl.rcParams['font.weight'] = 'bold'
mpl.rcParams['legend.title_fontsize'] = 15
mpl.rcParams['legend.fontsize'] = 12
mpl.rcParams['savefig.facecolor']='white'
mpl.rcParams['axes.titleweight'] = 'heavy'
mpl.rcParams['axes.labelweight'] = 'heavy'
url_shooting = 'https://fbref.com/en/comps/Big5/shooting/players/Big-5-European-Leagues-Stats'
url_passing = 'https://fbref.com/en/comps/Big5/passing/players/Big-5-European-Leagues-Stats'
data_shooting = pd.read_html(url_shooting, header=1)[0]
data_shooting = data_shooting[data_shooting['Player'] != 'Player']
data_shooting
data_passing = pd.read_html(url_passing, header=1)[0]
data_passing = data_passing[data_passing['Player'] != 'Player']
data_passing
data_shooting.info()
d = data_shooting[['Player', '90s', 'Gls', 'Sh', 'SoT', 'xG', 'npxG', 'npxG/Sh', 'G-xG', 'np:G-xG']]
player_position = pd.read_excel('Player Positions-FBRef format.xlsx')
player_position.drop_duplicates(inplace=True)
set(player_position['Position'])
player_data = pd.merge(player_position, d, on="Player")
player_data.drop_duplicates(inplace=True)
player_data.iloc[:, 3:] = player_data.iloc[:, 3:].astype(float)
# +
data = player_data[player_data['90s'].astype(float) >= 5.0].copy() # copy to avoid SettingWithCopyWarning
data['Glsp90'] = data['Gls']/data['90s']
data['xGp90'] = data['xG']/data['90s']
fig, ax = plt.subplots(figsize=(12, 8))
fig.set_dpi(200)
outlier_top = data['xGp90'].quantile(0.980)
outlier_bottom = data['xGp90'].quantile(0.03)
outlier_right = data['Glsp90'].quantile(0.980)
outlier_left = data['Glsp90'].quantile(0.03)
par_x = list(data['Glsp90'].astype(float))
par_y = list(data['xGp90'].astype(float))
plt.scatter(par_x, par_y, s=10, alpha=0.9, c='red', edgecolors='black')
txts = []
for i, txt in enumerate(list(data['Player'])):
if par_x[i]>outlier_right or par_y[i]>outlier_top:
if len(txt.split()) != 1:
name = txt[0] + ' ' + ' '.join(txt.split()[1:])
else:
name = txt
txts.append(plt.text(par_x[i], par_y[i], name))
adjust_text(txts, arrowprops=dict(arrowstyle='->', color='red'))
ax.set_xlabel('Goals per 90', fontsize=20)
ax.set_ylabel('Expected Goals (xG) per 90', fontsize=20)
ax.set_title('Goals vs Expected Goals (xG) per 90', fontsize=20, fontdict={'weight':'heavy'})
fig.text(0.125,0.04,'Top 5 Leagues 2021-22 | Minimum 5 90s', size = 10)
fig.text(0.125,0.02,'Data Source - Fbref.com. Prepared by <NAME>.', size=10)
# -
fig.savefig('Images/GvsxGp90.png', dpi=400, bbox_inches='tight')
# +
data = player_data[player_data['90s'].astype(float) >= 5.0]
who = ['Left-Back', 'Right-Back']
data = data[data['Position'].apply(lambda x: x in who)].copy() # copy to avoid SettingWithCopyWarning
data['Glsp90'] = data['Gls']/data['90s']
data['xGp90'] = data['xG']/data['90s']
fig, ax = plt.subplots(figsize=(12, 8))
fig.set_dpi(200)
outlier_top = data['xGp90'].quantile(0.900)
outlier_bottom = data['xGp90'].quantile(0.03)
outlier_right = data['Glsp90'].quantile(0.900)
outlier_left = data['Glsp90'].quantile(0.03)
par_x = list(data['Glsp90'].astype(float))
par_y = list(data['xGp90'].astype(float))
col_codes = data.Position.astype('category').cat.codes
scatter = plt.scatter(par_x, par_y, s=10, c=col_codes)
txts = []
for i, txt in enumerate(list(data['Player'])):
if par_x[i]>outlier_right or par_y[i]>outlier_top:
if len(txt.split()) != 1:
name = txt[0] + ' ' + ' '.join(txt.split()[1:])
else:
name = txt
txts.append(plt.text(par_x[i], par_y[i], name))
adjust_text(txts, arrowprops=dict(arrowstyle='->', color='red'))
ax.set_xlabel('Goals per 90', fontsize=20)
ax.set_ylabel('Expected Goals (xG) per 90', fontsize=20)
ax.set_title('Goals vs Expected Goals (xG) per 90 for Full-Backs', fontsize=20, fontdict={'weight':'heavy'})
ax.legend(handles=scatter.legend_elements()[0], labels=who, title="Position")
fig.text(0.125,0.04,'Top 5 Leagues 2021-22 | Minimum 5 90s', size = 10)
fig.text(0.125,0.02,'Data Source - Fbref.com. Prepared by <NAME>.', size=10)
# -
fig.savefig('Images/GvsxGp90_FB.png', dpi=400, bbox_inches='tight')
# File: shooting.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Compare growth rates of lineages to their parents
# +
import pickle
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import torch
from pyrocov import pangolin
matplotlib.rcParams["figure.dpi"] = 200
matplotlib.rcParams["axes.edgecolor"] = "gray"
matplotlib.rcParams["savefig.bbox"] = "tight"
matplotlib.rcParams['font.family'] = 'sans-serif'
matplotlib.rcParams['font.sans-serif'] = ['Arial', 'Avenir', 'DejaVu Sans']
# -
dataset = torch.load("results/mutrans.data.single.None.pt", map_location="cpu")
print(dataset.keys())
locals().update(dataset)
lineage_id = {name: i for i, name in enumerate(lineage_id_inv)}
results = torch.load("results/mutrans.final.pt")
for key in results:
print(key)
best_fit = list(results.values())[0]
print(best_fit.keys())
# +
best_rate_loc = best_fit["median"]["rate_loc"]
child_to_delta = {}
for c, child in enumerate(lineage_id_inv):
if child in ("A", "B"):
continue # ignore very early lineages
parent = pangolin.get_parent(pangolin.decompress(child))
assert parent is not None
while pangolin.compress(parent) not in lineage_id:
parent = pangolin.get_parent(parent)
p = lineage_id[pangolin.compress(parent)]
parent_rate = best_rate_loc[p].item()
child_rate = best_rate_loc[c].item()
child_to_delta[child] = child_rate - parent_rate
child_to_delta = {k: v for k, v in sorted(child_to_delta.items(), key=lambda x: -x[1])}
print("len(child_to_delta): {}".format(len(child_to_delta)))
# +
num_annotate = 10
cutoff = list(sorted(child_to_delta.values()))[-num_annotate]
fig, ax = plt.subplots(1, 1, figsize=(3, 2.5))
ax.hist(child_to_delta.values(), bins=50, log=True)
ax.set_ylim(0.75, 1000)
y_pos = 550.0
for child, delta in child_to_delta.items():
if delta >= cutoff:
ax.text(0.37, y_pos, '{}:'.format(child), fontsize=7.)
ax.text(0.77, y_pos, '{:.2f}'.format(delta), fontsize=7.)
y_pos /= 1.6
ax.set_xlabel("$\\log \\; R_{\\rm{child}} - \\log \\; R_{\\rm{parent}}$")
ax.set_ylabel("number of lineages")
plt.tight_layout()
plt.savefig('paper/child_parent_Rcomp.png')
# File: notebooks/delta_rate_hist.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R [conda env:biobombe]
# language: R
# name: conda-env-biobombe-r
# ---
# # Generate a matrix with a given covariance structure
#
# In this notebook, we simulate a matrix with 10,000 samples and 10 features.
# We artifically inject two distinct signals into the matrix.
#
# We sample the 10,000 samples from a given covariance matrix.
# This covariance matrix specifies two groups of correlated features.
#
# | Group | Correlated Features |
# | :---- | :------------------ |
# | 1 | 1, 2, 3 |
# | 2 | 5, 6, 7 |
#
# The remaining features (4, 8, 9, 10) are random Gaussian noise.
# The second group of features has lower correlation than the first group.
suppressPackageStartupMessages(library(dplyr))
set.seed(1234)
n = 10000
p = 10
# +
cov_mat = diag(p)
random_off_diag_structure <- abs(rnorm(n = length(cov_mat[lower.tri(cov_mat)]), mean = 0, sd = 0))
cov_mat[lower.tri(cov_mat)] <- random_off_diag_structure
cov_mat[2, 1] <- 0.95
cov_mat[3, 2] <- 0.90
cov_mat[3, 1] <- 0.93
cov_mat[6, 5] <- 0.90
cov_mat[7, 6] <- 0.85
cov_mat[7, 5] <- 0.88
cov_mat[upper.tri(cov_mat)] <- t(cov_mat)[upper.tri(cov_mat)]
# -
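# For readers working in Python rather than R, the same covariance construction
# can be sketched with NumPy (a sketch mirroring the R code above, not part of
# this notebook; indices are 0-based here):

```python
import numpy as np

p = 10
cov = np.eye(p)
# group 1 (features 1-3) and group 2 (features 5-7) of the markdown table above
pairs = {(1, 0): 0.95, (2, 1): 0.90, (2, 0): 0.93,
         (5, 4): 0.90, (6, 5): 0.85, (6, 4): 0.88}
for (i, j), v in pairs.items():
    cov[i, j] = cov[j, i] = v  # keep the matrix symmetric

# sample 10,000 observations from the multivariate normal
rng = np.random.default_rng(1234)
sim = rng.multivariate_normal(mean=np.zeros(p), cov=cov, size=10000)
```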
cov_mat
cov_mat %>%
dplyr::as_tibble(.name_repair = "minimal")
# +
feature_ids <- paste0("feature_", seq(1, nrow(cov_mat)))
cov_mat_df <- cov_mat %>%
dplyr::as_tibble(.name_repair = "minimal")
colnames(cov_mat_df) <- feature_ids
cov_mat_df <- cov_mat_df %>%
dplyr::mutate(feature_num = feature_ids) %>%
dplyr::select(feature_num, dplyr::everything())
out_file <- file.path("data", "simulated_covariance_structure.tsv")
cov_mat_df %>% readr::write_tsv(out_file)
cov_mat_df
# +
simulated_data <- MASS::mvrnorm(n = n, mu = rep(0, p), Sigma = cov_mat)
colnames(simulated_data) <- paste0("feature_", 1:ncol(simulated_data))
simulated_data <- simulated_data %>% dplyr::as_tibble(.name_repair = "minimal")
print(dim(simulated_data))
head(simulated_data)
# -
out_file <- file.path("data", "simulated_signal_n1000_p10.tsv")
simulated_data %>% readr::write_tsv(out_file)
# File: 11.simulation-feature-number/0.generate-simulated-data.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3.2 autograd
#
# Training networks with Tensors is convenient, but as the linear regression example at the end of the previous section showed, backpropagation had to be implemented by hand. For simple models such as linear regression this is manageable, but real applications often involve very complex network structures, where implementing backpropagation manually is time-consuming, error-prone, and hard to check. torch.autograd is an automatic differentiation engine built exactly for this purpose: it constructs the computation graph automatically from the inputs and the forward pass, and performs backpropagation.
#
# The computation graph (Computation Graph) is at the core of modern deep learning frameworks such as PyTorch and TensorFlow: it provides the theoretical basis for the efficient automatic differentiation algorithm, back propagation. Understanding computation graphs is a great help in practical programming. This section touches on some computation-graph basics, but readers are not required to have deep prior knowledge of them; for the fundamentals, Christopher Olah's article is recommended [^1].
#
# [^1]: http://colah.github.io/posts/2015-08-Backprop/
#
#
# ### 3.2.1 requires_grad
# PyTorch implements the computation-graph functionality in the autograd module, whose core data structure is the Variable. Since v0.4, Variable and Tensor have been merged; a tensor that requires gradients (requires_grad) can be regarded as a Variable. autograd records the operations performed on tensors and uses this record to build the computation graph.
#
# A Variable supports most of the functions tensors support, but not some `inplace` functions, because those modify the tensor itself, while backpropagation needs the original tensor cached in order to compute the backward gradients. To compute the gradients of the Variables, simply call the `backward` method of the root variable; autograd then backpropagates along the computation graph automatically and computes the gradient of every leaf node.
#
# `variable.backward(gradient=None, retain_graph=None, create_graph=None)` takes the following main parameters:
#
# - grad_variables: same shape as the variable; for `y.backward()`, grad_variables plays the role of $\textbf {dz} \over \textbf {dy}$ in the chain rule ${dz \over dx}={dz \over dy} \times {dy \over dx}$. It can also be a tensor or a sequence.
# - retain_graph: backpropagation caches some intermediate results, and these caches are cleared once the backward pass finishes; set this parameter to keep the caches, so the graph can be backpropagated through multiple times.
# - create_graph: build a computation graph for the backward pass itself, which makes higher-order derivatives possible via `backward of backward`.
#
# The description above may seem abstract; if it is not clear yet, don't worry, it is covered in detail in the second half of this section. First, a few examples.
from __future__ import print_function
import torch as t
#specify requires_grad when creating the tensor
a = t.randn(3,4, requires_grad=True)
# or
a = t.randn(3,4).requires_grad_()
# or
a = t.randn(3,4)
a.requires_grad=True
a
b = t.zeros(3,4).requires_grad_()
b
# equivalently: c = a + b
c = a.add(b)
c
d = c.sum()
d.backward() # backpropagate
d # d is still a tensor with requires_grad=True; operations on it should be done with care
d.requires_grad
a.grad
# Although c was not explicitly marked as requiring gradients, c depends on a,
# and a requires gradients, so c.requires_grad is automatically set to True
a.requires_grad, b.requires_grad, c.requires_grad
# Variables created by the user are leaf nodes, whose grad_fn is None
a.is_leaf, b.is_leaf, c.is_leaf
# c.grad is None: c is not a leaf node, and its gradient is only used to compute
# a's gradient, so although c.requires_grad is True, the gradient is freed
# as soon as it has been computed
c.grad is None
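# If the gradient of a non-leaf tensor like c is actually wanted, it can be kept
# by calling `.retain_grad()` before the backward pass (a small sketch,
# independent of the cells above):

```python
import torch as t

a = t.ones(2, 2, requires_grad=True)
c = a * 3            # non-leaf: its grad would normally be freed
c.retain_grad()      # ask autograd to keep c's gradient
c.sum().backward()
```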
# Compute the derivative of the following function:
# $$
# y = x^2\bullet e^x
# $$
# ๅฎ็ๅฏผๅฝๆฐๆฏ๏ผ
# $$
# {dy \over dx} = 2x\bullet e^x + x^2 \bullet e^x
# $$
# Check the error between autograd's result and the manually derived gradient.
# +
def f(x):
'''compute y'''
y = x**2 * t.exp(x)
return y
def gradf(x):
'''manually derived gradient'''
dx = 2*x*t.exp(x) + x**2*t.exp(x)
return dx
# -
x = t.randn(3,4, requires_grad = True)
y = f(x)
y
y.backward(t.ones(y.size())) # gradient must have the same shape as y
x.grad
# autograd's result matches the manually computed one
gradf(x)
# ### 3.2.2 Computation graph
#
# Under the hood, PyTorch's `autograd` uses a computation graph, a special directed acyclic graph (DAG) that records the relationships between operators and variables. Operators are usually drawn as rectangles and variables as ellipses. For example, the expression $ \textbf {z = wx + b}$ can be decomposed into $\textbf{y = wx}$ and $\textbf{z = y + b}$, whose computation graph is shown in Figure 3-3; `MUL` and `ADD` in the figure are operators, and $\textbf{w}$, $\textbf{x}$, $\textbf{b}$ are variables.
# 
# In this directed acyclic graph, $\textbf{X}$ and $\textbf{b}$ are leaf nodes, which are usually created by the user and do not depend on other variables. $\textbf{z}$ is called the root node and is the final target of the computation graph. Using the chain rule it is easy to compute the gradient of each leaf node:
# $${\partial z \over \partial b} = 1,\space {\partial z \over \partial y} = 1\\
# {\partial y \over \partial w }= x,{\partial y \over \partial x}= w\\
# {\partial z \over \partial x}= {\partial z \over \partial y} {\partial y \over \partial x}=1 * w\\
# {\partial z \over \partial w}= {\partial z \over \partial y} {\partial y \over \partial w}=1 * x\\
# $$
# With the computation graph in place, this chain-rule differentiation is carried out automatically by backpropagating through the graph, as illustrated in Figure 3-4.
#
# 
#
#
# In PyTorch's implementation, autograd records, as the user operates, every operation that produced the current variable, and builds a directed acyclic graph from this record. Each new operation changes the corresponding computation graph. At a lower level, the graph records operations as `Function` objects, and each variable's position in the graph can be inferred from its `grad_fn` attribute. During backpropagation, autograd traces this graph back from the current variable (the root node $\textbf{z}$) and applies the chain rule to compute the gradients of all leaf nodes. Every forward operation has a corresponding backward function that computes the gradients of its input variables; the names of these functions usually end in `Backward`. The code below explores the implementation details of autograd.
x = t.ones(1)
b = t.rand(1, requires_grad = True)
w = t.rand(1, requires_grad = True)
y = w * x # ็ญไปทไบy=w.mul(x)
z = y + b # ็ญไปทไบz=y.add(b)
x.requires_grad, b.requires_grad, w.requires_grad
# Although y.requires_grad was not set explicitly, y depends on w,
# which requires gradients, so y.requires_grad is True
y.requires_grad
x.is_leaf, w.is_leaf, b.is_leaf
y.is_leaf, z.is_leaf
# grad_fn shows this variable's backward function:
# z is the output of an add, so its backward function is AddBackward
z.grad_fn
# next_functions stores the inputs of grad_fn as a tuple whose elements are also Functions
# The first is y, the output of a multiplication (mul), so its backward function y.grad_fn is MulBackward
# The second is b, a leaf node created by the user; its grad_fn is None, but it has an AccumulateGrad node
z.grad_fn.next_functions
# A variable's grad_fn corresponds to the function in the backward graph
z.grad_fn.next_functions[0][0] == y.grad_fn
# The first is w, a leaf node that requires grad; its gradient is accumulated
# The second is x, a leaf node that does not require grad, so it is None
y.grad_fn.next_functions
# The grad_fn of a leaf node is None
w.grad_fn,x.grad_fn
# Computing the gradient of w requires the value of x (${\partial y\over \partial w} = x $). These values are saved as buffers during the forward pass and are automatically freed once the gradients have been computed. To backpropagate more than once, `retain_graph` must be specified to keep these buffers.
# use retain_graph to keep the buffers
z.backward(retain_graph=True)
w.grad
# backpropagating again accumulates the gradient; this is what the AccumulateGrad flag on w means
z.backward()
w.grad
# PyTorch uses a dynamic graph: the computation graph is rebuilt from scratch on every forward pass, so Python control flow (for, if, etc.) can be used to build the graph as needed. This is very useful in fields such as natural language processing: you do not have to construct every possible graph path in advance, because the graph is built at run time.
def abs(x):
    if x.item() > 0: return x
    else: return -x
x = t.ones(1,requires_grad=True)
y = abs(x)
y.backward()
x.grad
x = -1*t.ones(1)
x = x.requires_grad_()
y = abs(x)
y.backward()
print(x.grad)
y
x
x.requires_grad
cc=x*3
cc.requires_grad
def f(x):
    result = 1
    for ii in x:
        if ii.item() > 0:
            result = ii * result
    return result
x = t.arange(-2,4,dtype=t.float32).requires_grad_()
y = f(x) # y = x[3]*x[4]*x[5]
y.backward()
x.grad
# A variable's `requires_grad` attribute defaults to False. If any node has requires_grad set to True, then every node that depends on it has `requires_grad` True as well. This is easy to understand: for $ \textbf{x}\to \textbf{y} \to \textbf{z}$ with x.requires_grad = True, computing $\partial z \over \partial x$ requires, by the chain rule, $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x}$, and hence $ \frac{\partial z}{\partial y}$, so y.requires_grad is automatically marked True.
#
#
#
# Sometimes we may not want autograd to differentiate through tensors. Differentiation caches many intermediate results, adding extra memory (CPU/GPU) overhead, so we can turn automatic differentiation off. In scenarios that do not need backpropagation (such as inference, i.e. test-time prediction), turning autograd off gives some speedup and saves roughly half the memory, since no space needs to be allocated for gradients.
x = t.ones(1, requires_grad=True)
w = t.rand(1, requires_grad=True)
y = x * w
# y depends on w, and w.requires_grad = True
x.requires_grad, w.requires_grad, y.requires_grad
with t.no_grad():
    x = t.ones(1)
    w = t.rand(1, requires_grad = True)
    y = x * w
# y depends on w and x; although w.requires_grad = True, y's requires_grad is still False
x.requires_grad, w.requires_grad, y.requires_grad
# +
# t.no_grad??
# -
t.set_grad_enabled(False)
x = t.ones(1)
w = t.rand(1, requires_grad = True)
y = x * w
# y depends on w and x; although w.requires_grad = True, y's requires_grad is still False
x.requires_grad, w.requires_grad, y.requires_grad
# restore the default configuration
t.set_grad_enabled(True)
# If we want to modify a tensor's values without autograd recording the change, we can operate on tensor.data
# +
a = t.ones(3,4,requires_grad=True)
b = t.ones(3,4,requires_grad=True)
c = a * b
a.data # this is a tensor
# -
a.data.requires_grad # but it is already detached from the computation graph
d = a.data.sigmoid_() # sigmoid_ is an in-place operation and modifies a itself
d.requires_grad
a
# If we want to operate on a tensor without the operations being recorded, we can use tensor.data or tensor.detach()
a.requires_grad
# similar to tensor = a.data, but if the tensor is modified, backward may raise an error
tensor = a.detach()
tensor.requires_grad
# compute some statistics of the tensor without them being recorded
mean = tensor.mean()
std = tensor.std()
maximum = tensor.max()
tensor[0]=1
# The line below would raise "RuntimeError: one of the variables needed for gradient
# computation has been modified by an inplace operation",
# because c = a*b and b's gradient depends on a; modifying tensor in fact modifies a,
# so the gradient would no longer be correct
# c.sum().backward()
# During backpropagation, the derivatives of non-leaf nodes are freed as soon as they are computed. To inspect the gradients of these variables, there are two methods:
# - use the autograd.grad function
# - use a hook
#
# Both `autograd.grad` and `hook` are powerful tools; see the official API docs for details, as the examples here only cover basic usage. The `hook` method is recommended, but in practice you should avoid modifying the value of grad.
x = t.ones(3, requires_grad=True)
w = t.rand(3, requires_grad=True)
y = x * w
# y depends on w, and w.requires_grad = True
z = y.sum()
x.requires_grad, w.requires_grad, y.requires_grad
# the grad of a non-leaf node is freed once computed, so y.grad is None
z.backward()
(x.grad, w.grad, y.grad)
# Method 1: use grad to obtain the gradient of an intermediate variable
x = t.ones(3, requires_grad=True)
w = t.rand(3, requires_grad=True)
y = x * w
z = y.sum()
# gradient of z w.r.t. y; calls backward() implicitly
t.autograd.grad(z, y)
# +
# Method 2: use a hook
# A hook is a function whose input is the gradient; it should not have a return value
def variable_hook(grad):
    print('gradient of y:', grad)
x = t.ones(3, requires_grad=True)
w = t.rand(3, requires_grad=True)
y = x * w
# register the hook
hook_handle = y.register_hook(variable_hook)
z = y.sum()
z.backward()
# unless you need the hook every time, remember to remove it after use
hook_handle.remove()
# -
# Finally, let's look at the meaning of a variable's grad attribute and the `grad_variables` argument of the backward function. The conclusions first:
#
# - The gradient of variable $\textbf{x}$ is the gradient of the objective function ${f(x)} $ with respect to $\textbf{x}$: $\frac{df(x)}{dx} = (\frac {df(x)}{dx_0},\frac {df(x)}{dx_1},...,\frac {df(x)}{dx_N})$, with the same shape as $\textbf{x}$.
# - In y.backward(grad_variables), grad_variables plays the role of $\frac{\partial z}{\partial y}$ in the chain rule $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x}$. Here z is the objective function, usually a scalar, so the shape of $\frac{\partial z}{\partial y}$ matches the shape of variable $\textbf{y}$. `z.backward()` is to some extent equivalent to y.backward(grad_y). `z.backward()` can omit the grad_variables argument because $z$ is a scalar and $\frac{\partial z}{\partial z} = 1$.
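# The equivalence can be checked by hand for the example that follows ($y = x^2 + 2x$, $z = \sum y$, so $dz/dy_i = 1$). A pure-Python sketch of the same vector-Jacobian product:

```python
# Hand-computed chain rule for y = x**2 + 2*x, z = sum(y).
x = [0.0, 1.0, 2.0]
dz_dy = [1.0, 1.0, 1.0]            # the grad_variables vector, dz/dy_i = 1
dy_dx = [2 * xi + 2 for xi in x]   # elementwise derivative of y w.r.t. x
dz_dx = [gy * gx for gy, gx in zip(dz_dy, dy_dx)]
print(dz_dx)   # [2.0, 4.0, 6.0]
```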
x = t.arange(0,3, requires_grad=True,dtype=t.float)
y = x**2 + x*2
z = y.sum()
z.backward() # backpropagate from z
x.grad
x = t.arange(0,3, requires_grad=True,dtype=t.float)
y = x**2 + x*2
z = y.sum()
y_gradient = t.Tensor([1,1,1]) # dz/dy
y.backward(y_gradient) # backpropagate from y
x.grad
# Another point worth noting: only operations on variables can use autograd. If you operate directly on a variable's data, backpropagation cannot be used. Apart from parameter initialization, we generally do not modify the value of variable.data.
# The characteristics of computation graphs in PyTorch can be summarized as follows:
#
# - autograd builds the computation graph from the user's operations on variables. Operations on variables are abstracted as `Function`s.
# - Nodes that are not the output of any function (Function) and are created by the user are called leaf nodes; the `grad_fn` of a leaf node is None. Leaf variables that require grad carry the `AccumulateGrad` flag, because their gradients are accumulated.
# - Variables do not require grad by default, i.e. the `requires_grad` attribute defaults to False. If any node's requires_grad is set to True, then `requires_grad` is True for every node that depends on it.
# - A variable's `volatile` attribute defaults to False. If any variable's `volatile` attribute is set to True, the `volatile` attribute of every node that depends on it is True. Nodes with volatile=True are never differentiated; volatile takes precedence over `requires_grad`. (Note: `volatile` was removed in PyTorch 0.4 in favor of `torch.no_grad()`.)
# - When backpropagating multiple times, gradients are accumulated. The intermediate buffers of backpropagation are freed, so `retain_graph`=True must be specified to keep them for multiple backward passes.
# - The gradients of non-leaf nodes are freed as soon as they are computed; use `autograd.grad` or the `hook` technique to obtain the gradients of non-leaf nodes.
# - A variable's grad has the same shape as its data. Avoid modifying variable.data directly, because operations on data cannot be backpropagated by autograd.
# - The `grad_variables` argument of the backward function can be seen as an intermediate result of chain-rule differentiation; if the output is a scalar it can be omitted and defaults to 1.
# - PyTorch uses a dynamic-graph design, which makes it easy to inspect intermediate outputs and to design the graph structure dynamically.
#
# Most of this knowledge will not affect day-to-day use of PyTorch, but mastering it helps you understand PyTorch better and avoid many pitfalls.
# ### 3.2.3 Extending autograd
#
#
# Currently the vast majority of functions can be differentiated backwards with `autograd`. But what if you need to write a complex function that does not support automatic differentiation? Write a `Function` that implements its forward and backward passes. A `Function` corresponds to a rectangle in the computation graph: it receives arguments, computes, and returns results. An example follows.
#
# ```python
#
# class Mul(Function):
#
#     @staticmethod
#     def forward(ctx, w, x, b, x_requires_grad = True):
#         ctx.x_requires_grad = x_requires_grad
#         ctx.save_for_backward(w, x)
#         output = w * x + b
#         return output
#
#     @staticmethod
#     def backward(ctx, grad_output):
#         w, x = ctx.saved_tensors
#         grad_w = grad_output * x
#         if ctx.x_requires_grad:
#             grad_x = grad_output * w
#         else:
#             grad_x = None
#         grad_b = grad_output * 1
#         return grad_w, grad_x, grad_b
# ```
#
# Analysis:
#
# - A custom Function must inherit from autograd.Function; it has no constructor `__init__`, and both forward and backward are static methods
# - The outputs of backward correspond one-to-one with the inputs of forward, and the inputs of backward correspond one-to-one with the outputs of forward
# - The grad_output argument of backward is the `grad_variables` of t.autograd.backward
# - If an input does not need a gradient, return None for it directly; e.g. the forward input parameter x_requires_grad obviously cannot be differentiated, so simply return None
# - Backpropagation may need some intermediate results of the forward pass; these must be saved, otherwise they are freed once the forward pass finishes
#
# A Function is used via Function.apply(variable)
from torch.autograd import Function
class MultiplyAdd(Function):
    @staticmethod
    def forward(ctx, w, x, b):
        ctx.save_for_backward(w, x)
        output = w * x + b
        return output
    @staticmethod
    def backward(ctx, grad_output):
        w, x = ctx.saved_tensors
        grad_w = grad_output * x
        grad_x = grad_output * w
        grad_b = grad_output * 1
        return grad_w, grad_x, grad_b
# +
x = t.ones(1)
w = t.rand(1, requires_grad = True)
b = t.rand(1, requires_grad = True)
# run the forward pass
z=MultiplyAdd.apply(w, x, b)
# run the backward pass
z.backward()
# x does not require grad; its derivative is still computed internally, but then freed
x.grad, w.grad, b.grad
# +
x = t.ones(1)
w = t.rand(1, requires_grad = True)
b = t.rand(1, requires_grad = True)
#print('start forward pass')
z=MultiplyAdd.apply(w,x,b)
#print('start backward pass')
# call MultiplyAdd.backward
# outputs grad_w, grad_x, grad_b
z.grad_fn.apply(t.ones(1))
# -
# The reason forward takes tensors while backward takes variables is to support higher-order differentiation. Although backward's inputs and outputs are variables, in practice autograd.Function unpacks the input variables into tensors and wraps the resulting tensors back into variables before returning them. The reason backward also operates on variables is to be able to compute the gradient of the gradient (backward of backward). An example follows; see the documentation for more detailed usage of torch.autograd.grad.
x = t.tensor([5], requires_grad=True,dtype=t.float)
y = x ** 2
grad_x = t.autograd.grad(y, x, create_graph=True)
grad_x # dy/dx = 2 * x
grad_grad_x = t.autograd.grad(grad_x[0],x)
grad_grad_x # second derivative: d(2x)/dx = 2
# While this design gives `autograd` higher-order differentiation, it also restricts how tensors can be used, since the backward functions in autograd can only use operations already available on Variables. This design was added in version `0.2`. For more flexibility, and for compatibility with code written for older versions, PyTorch also provides another way to extend autograd: the decorator `@once_differentiable`, which automatically unpacks the input variables of backward into tensors and wraps the resulting tensors back into variables. With this feature we can conveniently use functions from numpy/scipy, no longer limited to the operations variables support. But, as the name hints, such a function can only be differentiated once: it cuts the backpropagation graph and no longer supports higher-order differentiation.
#
#
# Everything described above concerns new-style Functions. There is also a legacy Function style, which may have an `__init__` method and whose `forward` and `backward` need not be declared `@staticmethod`; as versions move on, this kind of Function is encountered less and less, so it is not covered further here.
#
# In addition, after implementing your own Function, you can use the `gradcheck` function to check whether the implementation is correct. `gradcheck` computes gradients by numerical approximation, which may carry some error; the tolerated error can be controlled via `eps`.
# For more on this topic, see the developers' discussion on GitHub [^3].
#
# [^3]: https://github.com/pytorch/pytorch/pull/1016
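# The idea behind `gradcheck`'s numerical approximation is a central difference. A minimal pure-Python sketch of that check (not the real gradcheck), applied to the sigmoid function used in the next example:

```python
# Compare an analytic gradient against a central-difference approximation,
# which agrees up to O(eps**2) error.
import math

def f(x):
    return 1.0 / (1.0 + math.exp(-x))        # sigmoid

def analytic_grad(x):
    s = f(x)
    return s * (1.0 - s)                     # sigmoid'(x) = s * (1 - s)

def numeric_grad(x, eps=1e-4):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x0 = 0.5
err = abs(analytic_grad(x0) - numeric_grad(x0))
print(err < 1e-6)   # True
```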
# The example below shows how to implement a sigmoid Function using Function.
class Sigmoid(Function):
    @staticmethod
    def forward(ctx, x):
        output = 1 / (1 + t.exp(-x))
        ctx.save_for_backward(output)
        return output
    @staticmethod
    def backward(ctx, grad_output):
        output, = ctx.saved_tensors
        grad_x = output * (1 - output) * grad_output
        return grad_x
# use numerical approximation to check whether the gradient formula is correct
test_input = t.randn(3,4, requires_grad=True).double()
t.autograd.gradcheck(Sigmoid.apply, (test_input,), eps=1e-3)
# +
def f_sigmoid(x):
    y = Sigmoid.apply(x)
    y.backward(t.ones(x.size()))
def f_naive(x):
    y = 1 / (1 + t.exp(-x))
    y.backward(t.ones(x.size()))
def f_th(x):
    y = t.sigmoid(x)
    y.backward(t.ones(x.size()))
x = t.randn(100, 100, requires_grad=True)
# %timeit -n 100 f_sigmoid(x)
# %timeit -n 100 f_naive(x)
# %timeit -n 100 f_th(x)
# -
# Clearly `f_sigmoid` is quite a bit faster than the function implemented purely with `autograd` elementwise add/multiply operations, because f_sigmoid's backward optimizes the backpropagation. The system's built-in interface (t.sigmoid) is faster still.
# ### 3.2.4 A small exercise: linear regression with Variable
# The previous section implemented linear regression with plain tensors; in this section we implement linear regression with autograd/Variable, to appreciate the convenience autograd brings.
import torch as t
# %matplotlib inline
from matplotlib import pyplot as plt
from IPython import display
import numpy as np
# +
# Set the random seed so the outputs below are reproducible across machines
t.manual_seed(1000)
def get_fake_data(batch_size=8):
    '''Generate random data: y = x*2 + 3, plus some noise'''
    x = t.rand(batch_size, 1) * 5
    y = x * 2 + 3 + t.randn(batch_size, 1)
    return x, y
# -
# Take a look at the distribution of the generated x-y data
x, y = get_fake_data()
plt.scatter(x.squeeze().numpy(), y.squeeze().numpy())
# +
# Randomly initialize the parameters
w = t.rand(1, 1, requires_grad=True)
b = t.zeros(1, 1, requires_grad=True)
losses = np.zeros(500)
lr = 0.005  # learning rate
for ii in range(500):
    x, y = get_fake_data(batch_size=32)
    # forward: compute the loss
    y_pred = x.mm(w) + b.expand_as(y)
    loss = 0.5 * (y_pred - y) ** 2
    loss = loss.sum()
    losses[ii] = loss.item()
    # backward: autograd computes the gradients
    loss.backward()
    # update the parameters
    w.data.sub_(lr * w.grad.data)
    b.data.sub_(lr * b.grad.data)
    # zero the gradients
    w.grad.data.zero_()
    b.grad.data.zero_()
    if ii % 50 == 0:
        # plot
        display.clear_output(wait=True)
        x = t.arange(0, 6).view(-1, 1).float()
        y = x.mm(w.data) + b.data.expand_as(x)
        plt.plot(x.numpy(), y.numpy())  # predicted
        x2, y2 = get_fake_data(batch_size=20)
        plt.scatter(x2.numpy(), y2.numpy())  # true data
        plt.xlim(0, 5)
        plt.ylim(0, 13)
        plt.show()
        plt.pause(0.5)
print(w.item(), b.item())
# -
plt.plot(losses)
plt.ylim(5,50)
# The biggest difference in the autograd implementation of linear regression is that we do not have to derive the backward pass ourselves: differentiation is automatic. This is useful not only in deep learning but in many machine learning problems. Also note that the gradients must be zeroed before each backward pass.
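# For contrast, the same fit with hand-derived gradients in plain Python, i.e. the bookkeeping that autograd spares us from writing. Toy noise-free data; the learning rate and step count are arbitrary choices for this sketch.

```python
# Linear regression with hand-derived gradients on y = 2x + 3 (no noise),
# so the exact parameters are recoverable.
data = [(xi, 2.0 * xi + 3.0) for xi in [0.0, 1.0, 2.0, 3.0, 4.0]]
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    grad_w = grad_b = 0.0
    for xi, yi in data:
        err = (w * xi + b) - yi       # d(0.5*err**2)/d(pred) = err
        grad_w += err * xi            # chain rule: d(pred)/d(w) = x
        grad_b += err                 # d(pred)/d(b) = 1
    w -= lr * grad_w / len(data)      # gradients are recomputed from zero on
    b -= lr * grad_b / len(data)      # every step: the "zero the grads" rule
print(round(w, 2), round(b, 2))       # 2.0 3.0
```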
#
# This chapter introduced PyTorch's two fundamental low-level data structures: Tensor and autograd's Variable. Tensor is an efficient multi-dimensional numerical data structure similar to a NumPy array, with a NumPy-like interface and simple, easy-to-use GPU acceleration. Variable is autograd's wrapper around Tensor that provides automatic differentiation, with an interface almost identical to Tensor's. `autograd` is PyTorch's automatic differentiation engine; it uses dynamic computation graphs to compute derivatives quickly and efficiently.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Collects all the repositories from the 55 programming languages
#
# ### Reference:
# <NAME>, "The top programming languages: Our latest rankings put Python on top-again - [Careers]," in IEEE Spectrum, vol. 57, no. 8, pp. 22-22, Aug. 2020, doi: 10.1109/MSPEC.2020.9150550.
import requests
import urllib
import csv
import time
from os import path
import datetime
# +
IEEE_Rankings = []
with open('ieee_rankings.csv') as f:
    reader = csv.reader(f)
    IEEE_Rankings = list(reader)[0]
ATTRIBUTES_TO_FETCH = ['name','fork','url','issues_url','labels_url','created_at',
'updated_at','language','forks_count','open_issues', 'watchers', 'stargazers_count']
# -
# # Define functions
# Define the functions we are going to use for the fetching task. Generate a GitHub access token and copy it into the variable below to increase the usage limits.
# +
REPOSITORY_API_BASE = 'https://api.github.com/search/repositories?'
RESULTS_PER_PAGE = 100
API_RESULTS_LIMIT = 1000
githubAccessToken = ''
fileName = 'repos.csv'
def ensureRequestCount(r):
    remaining = int(r.headers['X-RateLimit-Remaining'])
    print("Remaining limit: " + str(remaining))
    if remaining == 0:
        reset_time = datetime.datetime.fromtimestamp(int(r.headers['X-RateLimit-Reset']))
        seconds_until_reset = (reset_time - datetime.datetime.now()).total_seconds() + 10
        print("Limit exceeded, waiting for " + str(seconds_until_reset) + " seconds")
        time.sleep(seconds_until_reset)
def getRequest(url):
    if githubAccessToken != '':
        headers = {'Authorization': 'token ' + githubAccessToken}
    else:
        headers = {}  # requests expects a mapping here, not a string
    response = requests.get(url, headers=headers)
    ensureRequestCount(response)
    return response
def buildQueryUrl(language, pageNumber):
    QUERY_COMPONENTS = {
        "q": 'language:{0}'.format(language),
        "s": 'stars',
        "o": 'desc',
        "page": pageNumber,
        "per_page": RESULTS_PER_PAGE
    }
    return REPOSITORY_API_BASE + urllib.parse.urlencode(QUERY_COMPONENTS)
def getAllQueryUrls(language):
    request = getRequest(buildQueryUrl(language, 1))
    json_request = request.json()
    total_items = json_request['total_count']
    if total_items > API_RESULTS_LIMIT:
        total_items = API_RESULTS_LIMIT  # the search API only serves the first 1000 results
    queryUrls = []
    # ceiling division so a final partial page is not skipped
    totalPages = (total_items + RESULTS_PER_PAGE - 1) // RESULTS_PER_PAGE
    for page in range(1, totalPages + 1):
        queryUrls.append(buildQueryUrl(language, page))
    return queryUrls
def fetchAttributesFromRepo(item):
    items = []
    for attribute in ATTRIBUTES_TO_FETCH:
        items.append(item[attribute])
    return items
# -
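# A quick sanity check of the URL shape that `buildQueryUrl` composes, using only the stdlib (the base endpoint and component dict are copied from the cell above):

```python
# urlencode percent-encodes the colon in the "language:" qualifier and
# joins the components with '&' in insertion order.
import urllib.parse

base = 'https://api.github.com/search/repositories?'
components = {"q": 'language:Python', "s": 'stars', "o": 'desc', "page": 1, "per_page": 100}
url = base + urllib.parse.urlencode(components)
print(url)
```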
# # Fetch Repositories
# Fetch all the repositories based on languages in the IEEE Ranking, sorted by star count
for language in IEEE_Rankings:
    print("- Starting for " + language)
    queryUrls = getAllQueryUrls(language)
    for url in queryUrls:
        print("-- " + url)
        r = getRequest(url)
        rObject = r.json()
        for repo in rObject['items']:
            isNewFile = False
            if path.exists(fileName) == False:
                isNewFile = True
            with open(fileName, 'a', newline='') as csvfile:
                writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
                if isNewFile:
                    writer.writerow(ATTRIBUTES_TO_FETCH)  # write the header row once, on first creation
                    isNewFile = False
                writer.writerow(fetchAttributesFromRepo(repo))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysing PMRD Mature for counts
#
# ## To do:
#
# * Check how many miRNAs are expressed in each sample;
# * Check, by presence/absence, the miRNAs in each sample
#
# ## Let's go...
#
# Starting by loading the required libraries.
# %pylab inline
import matplotlib_venn
import pandas
import scipy
# Loading input table:
pmrd_mature_counts = pandas.read_csv("pmrd_mature_counts.tsv",
sep = "\t",
header = 0)
pmrd_mature_counts.head(10)
# Calculate the number of miRNAs present in each sample.
#
# We will consider a miRNA present if its normalized count is >= 1.
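# The presence rule reduces to counting values at or above a threshold. A sketch on made-up numbers (the threshold of 10 matches the value adopted later in this notebook):

```python
# Count "present" miRNAs: normalized count >= threshold.
norm_counts = [0.0, 3.2, 10.0, 57.9, 9.99]
THRESHOLD = 10   # the notebook raised this from 1 to 10 on 2019.02.01
present = sum(c >= THRESHOLD for c in norm_counts)
print(present)   # 2 (the values 10.0 and 57.9)
```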
# +
# There are some issues with column names on this table:
# - TNEF and MEF_norm columns have multiple white spaces following the name
# - TNH and TNH_norm are wrongly named TH and TH_norm
# To correct this, columns will be renamed!
pmrd_mature_counts.columns = ["miRNA", "accession",
"FB", "FEF", "FH",
"MB", "MEF", "MH",
"TNB", "TNEF", "TNH",
"FB_norm", "FEF_norm", "FH_norm",
"MB_norm", "MEF_norm", "MH_norm",
"TNB_norm", "TNEF_norm", "TNH_norm"]
samples_list = ["FB", "FEF", "FH", "MB", "MEF", "MH", "TNB", "TNEF", "TNH"]
pmrd_miRNAs_actives = dict()
for sample in samples_list:
    sample_miRNAs_present = sum(pmrd_mature_counts[sample + "_norm"] >= 10)  # Changed from 1 to 10 on 2019.02.01
    pmrd_miRNAs_actives[sample] = sample_miRNAs_present
print(pmrd_miRNAs_actives)
# -
matplotlib.pyplot.bar(pmrd_miRNAs_actives.keys(),
pmrd_miRNAs_actives.values(),
color = ["#003300", "#003300", "#003300",
"#336600", "#336600", "#336600",
"#666633", "#666633", "#666633"])
# With the exception of female flowers, the stage with the largest number of active miRNAs is always developmental stage B. The fact that in female plants most miRNAs peak in the middle of development (stage E/F) seems to indicate that the miRNA mechanism is used differently depending on flower type.
#
# ## Let's check which miRNAs are differentially present/absent
#
# ### Approach by flower type
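# The cells below all apply the same set algebra: exclusive to A is `A.difference(B, C)`, and shared by exactly A and B is `A.intersection(B).difference(C)`. A toy illustration with made-up miRNA names:

```python
# Set algebra used throughout the Venn analysis below.
A = {"miR-1", "miR-2", "miR-3"}
B = {"miR-2", "miR-3", "miR-4"}
C = {"miR-3", "miR-5"}
only_A = A.difference(B, C)                # exclusive to A: A - (B | C)
A_and_B = A.intersection(B).difference(C)  # in A and B but not in C
print(sorted(only_A), sorted(A_and_B))     # ['miR-1'] ['miR-2']
```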
# +
mirna_list = dict()
for sample in samples_list:
    mirna_list[sample] = set(pmrd_mature_counts.loc[pmrd_mature_counts[sample + "_norm"] >= 10]["miRNA"])  # Changed from 1 to 10 on 2019.02.01
# print(mirna_list)
venn_female = matplotlib_venn.venn3_unweighted([mirna_list["FB"], mirna_list["FEF"], mirna_list["FH"]],
set_labels = ("FB", "FEF", "FH")
)
#savefig('pmrd_madure_counts_veen_females.png')
relevant_miRNAs_female = list()
print("Exclusive to FB:")
miRNA_list_FB = mirna_list["FB"].difference(mirna_list["FEF"], mirna_list["FH"])
relevant_miRNAs_female.extend(miRNA_list_FB)
print(sorted(miRNA_list_FB))
print("Exclusive to FEF:")
miRNA_list_FEF = mirna_list["FEF"].difference(mirna_list["FB"], mirna_list["FH"])
relevant_miRNAs_female.extend(miRNA_list_FEF)
print(sorted(miRNA_list_FEF))
print("Exclusive to FH:")
miRNA_list_FH = mirna_list["FH"].difference(mirna_list["FB"], mirna_list["FEF"])
relevant_miRNAs_female.extend(miRNA_list_FH)
print(sorted(miRNA_list_FH))
print("Present only in FB + FEF:")
miRNA_list_FB_FEF = mirna_list["FB"].intersection(mirna_list["FEF"]).difference(mirna_list["FH"])
relevant_miRNAs_female.extend(miRNA_list_FB_FEF)
print(sorted(miRNA_list_FB_FEF))
print("Present only in FB + FH:")
miRNA_list_FB_FH = mirna_list["FB"].intersection(mirna_list["FH"]).difference(mirna_list["FEF"])
relevant_miRNAs_female.extend(miRNA_list_FB_FH)
print(sorted(miRNA_list_FB_FH))
print("Present only in FEF + FH:")
miRNA_list_FEF_FH = mirna_list["FEF"].intersection(mirna_list["FH"]).difference(mirna_list["FB"])
relevant_miRNAs_female.extend(miRNA_list_FEF_FH)
print(sorted(miRNA_list_FEF_FH))
print("List of miRNAs with differential presence:")
relevant_miRNAs_female = sorted(set(relevant_miRNAs_female))
print(relevant_miRNAs_female)
# +
venn_male = matplotlib_venn.venn3_unweighted([mirna_list["MB"], mirna_list["MEF"], mirna_list["MH"]],
set_labels = ("MB", "MEF", "MH")
)
#savefig('pmrd_madure_counts_veen_males.png')
relevant_miRNAs_male = list()
print("Exclusive to MB:")
miRNA_list_MB = mirna_list["MB"].difference(mirna_list["MEF"], mirna_list["MH"])
relevant_miRNAs_male.extend(miRNA_list_MB)
print(sorted(miRNA_list_MB))
print("Exclusive to MEF:")
miRNA_list_MEF = mirna_list["MEF"].difference(mirna_list["MB"], mirna_list["MH"])
relevant_miRNAs_male.extend(miRNA_list_MEF)
print(sorted(miRNA_list_MEF))
print("Exclusive to MH:")
miRNA_list_MH = mirna_list["MH"].difference(mirna_list["MB"], mirna_list["MEF"])
relevant_miRNAs_male.extend(miRNA_list_MH)
print(sorted(miRNA_list_MH))
print("Present only in MB + MEF:")
miRNA_list_MB_MEF = mirna_list["MB"].intersection(mirna_list["MEF"]).difference(mirna_list["MH"])
relevant_miRNAs_male.extend(miRNA_list_MB_MEF)
print(sorted(miRNA_list_MB_MEF))
print("Present only in MB + MH:")
miRNA_list_MB_MH = mirna_list["MB"].intersection(mirna_list["MH"]).difference(mirna_list["MEF"])
relevant_miRNAs_male.extend(miRNA_list_MB_MH)
print(sorted(miRNA_list_MB_MH))
print("Present only in MEF + MH:")
miRNA_list_MEF_MH = mirna_list["MEF"].intersection(mirna_list["MH"]).difference(mirna_list["MB"])
relevant_miRNAs_male.extend(miRNA_list_MEF_MH)
print(sorted(miRNA_list_MEF_MH))
print("List of miRNAs with differential presence:")
relevant_miRNAs_male = sorted(set(relevant_miRNAs_male))
print(relevant_miRNAs_male)
# +
venn_hermaphrodite = matplotlib_venn.venn3_unweighted([mirna_list["TNB"], mirna_list["TNEF"], mirna_list["TNH"]],
set_labels = ("TNB", "TNEF", "TNH")
)
#savefig('pmrd_madure_counts_veen_hermaphrodites.png')
relevant_miRNAs_hermaphrodite = list()
print("Exclusive to TNB:")
miRNA_list_TNB = mirna_list["TNB"].difference(mirna_list["TNEF"], mirna_list["TNH"])
relevant_miRNAs_hermaphrodite.extend(miRNA_list_TNB)
print(sorted(miRNA_list_TNB))
print("Exclusive to TNEF:")
miRNA_list_TNEF = mirna_list["TNEF"].difference(mirna_list["TNB"], mirna_list["TNH"])
relevant_miRNAs_hermaphrodite.extend(miRNA_list_TNEF)
print(sorted(miRNA_list_TNEF))
print("Exclusive to TNH:")
miRNA_list_TNH = mirna_list["TNH"].difference(mirna_list["TNB"], mirna_list["TNEF"])
relevant_miRNAs_hermaphrodite.extend(miRNA_list_TNH)
print(sorted(miRNA_list_TNH))
print("Present only in TNB + TNEF:")
miRNA_list_TNB_TNEF = mirna_list["TNB"].intersection(mirna_list["TNEF"]).difference(mirna_list["TNH"])
relevant_miRNAs_hermaphrodite.extend(miRNA_list_TNB_TNEF)
print(sorted(miRNA_list_TNB_TNEF))
print("Present only in TNB + TNH:")
miRNA_list_TNB_TNH = mirna_list["TNB"].intersection(mirna_list["TNH"]).difference(mirna_list["TNEF"])
relevant_miRNAs_hermaphrodite.extend(miRNA_list_TNB_TNH)
print(sorted(miRNA_list_TNB_TNH))
print("Present only in TNEF + TNH:")
miRNA_list_TNEF_TNH = mirna_list["TNEF"].intersection(mirna_list["TNH"]).difference(mirna_list["TNB"])
relevant_miRNAs_hermaphrodite.extend(miRNA_list_TNEF_TNH)
print(sorted(miRNA_list_TNEF_TNH))
print("List of miRNAs with differential presence:")
relevant_miRNAs_hermaphrodite = sorted(set(relevant_miRNAs_hermaphrodite))
print(relevant_miRNAs_hermaphrodite)
# +
relevant_miRNAs_by_flower_type = sorted(set(relevant_miRNAs_female
+ relevant_miRNAs_male
+ relevant_miRNAs_hermaphrodite))
print("List of miRNAs with differential presence in at least one flower type ({}):".format(len(relevant_miRNAs_by_flower_type)))
print(relevant_miRNAs_by_flower_type)
for miRNA in relevant_miRNAs_by_flower_type:
    # Collect values (one _norm column per sample, in samples_list order)
    miRNA_norm_counts = list()
    for sample in samples_list:
        miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA][sample + "_norm"]))
    # Plot
    dataplot = pandas.Series(miRNA_norm_counts,
                             index = samples_list)
    dataplot.plot(kind = "bar",
                  title = "Frequency of " + miRNA,
                  color = ["#003300", "#003300", "#003300",
                           "#336600", "#336600", "#336600",
                           "#666633", "#666633", "#666633"])
    threshold = pandas.Series([10, 10, 10, 10, 10, 10, 10, 10, 10],
                              index = samples_list)
    threshold.plot(kind = "line",
                   color = ["#660000"])
    plt.xlabel("Sample")
    plt.ylabel("Normalized counts")
    plt.show()
# -
# ### Approach by developmental stage
# +
venn_stage_b = matplotlib_venn.venn3_unweighted([mirna_list["FB"], mirna_list["MB"], mirna_list["TNB"]],
set_labels = ("FB", "MB", "TNB")
)
relevant_miRNAs_b = list()
print("Exclusive to FB:")
miRNA_list_FB = mirna_list["FB"].difference(mirna_list["MB"], mirna_list["TNB"])
relevant_miRNAs_b.extend(miRNA_list_FB)
print(sorted(miRNA_list_FB))
print("Exclusive to MB:")
miRNA_list_MB = mirna_list["MB"].difference(mirna_list["FB"], mirna_list["TNB"])
relevant_miRNAs_b.extend(miRNA_list_MB)
print(sorted(miRNA_list_MB))
print("Exclusive to TNB:")
miRNA_list_TNB = mirna_list["TNB"].difference(mirna_list["FB"], mirna_list["MB"])
relevant_miRNAs_b.extend(miRNA_list_TNB)
print(sorted(miRNA_list_TNB))
print("Present only in FB + MB:")
miRNA_list_FB_MB = mirna_list["FB"].intersection(mirna_list["MB"]).difference(mirna_list["TNB"])
relevant_miRNAs_b.extend(miRNA_list_FB_MB)
print(sorted(miRNA_list_FB_MB))
print("Present only in FB + TNB:")
miRNA_list_FB_TNB = mirna_list["FB"].intersection(mirna_list["TNB"]).difference(mirna_list["MB"])
relevant_miRNAs_b.extend(miRNA_list_FB_TNB)
print(sorted(miRNA_list_FB_TNB))
print("Present only in MB + TNB:")
miRNA_list_MB_TNB = mirna_list["MB"].intersection(mirna_list["TNB"]).difference(mirna_list["FB"])
relevant_miRNAs_b.extend(miRNA_list_MB_TNB)
print(sorted(miRNA_list_MB_TNB))
print("List of miRNAs with differential presence:")
relevant_miRNAs_b = sorted(set(relevant_miRNAs_b))
print(relevant_miRNAs_b)
# +
venn_stage_ef = matplotlib_venn.venn3_unweighted([mirna_list["FEF"], mirna_list["MEF"], mirna_list["TNEF"]],
set_labels = ("FEF", "MEF", "TNEF")
)
relevant_miRNAs_ef = list()
print("Exclusive to FEF:")
miRNA_list_FEF = mirna_list["FEF"].difference(mirna_list["MEF"], mirna_list["TNEF"])
relevant_miRNAs_ef.extend(miRNA_list_FEF)
print(sorted(miRNA_list_FEF))
print("Exclusive to MEF:")
miRNA_list_MEF = mirna_list["MEF"].difference(mirna_list["FEF"], mirna_list["TNEF"])
relevant_miRNAs_ef.extend(miRNA_list_MEF)
print(sorted(miRNA_list_MEF))
print("Exclusive to TNEF:")
miRNA_list_TNEF = mirna_list["TNEF"].difference(mirna_list["FEF"], mirna_list["MEF"])
relevant_miRNAs_ef.extend(miRNA_list_TNEF)
print(sorted(miRNA_list_TNEF))
print("Present only in FEF + MEF:")
miRNA_list_FEF_MEF = mirna_list["FEF"].intersection(mirna_list["MEF"]).difference(mirna_list["TNEF"])
relevant_miRNAs_ef.extend(miRNA_list_FEF_MEF)
print(sorted(miRNA_list_FEF_MEF))
print("Present only in FEF + TNEF:")
miRNA_list_FEF_TNEF = mirna_list["FEF"].intersection(mirna_list["TNEF"]).difference(mirna_list["MEF"])
relevant_miRNAs_ef.extend(miRNA_list_FEF_TNEF)
print(sorted(miRNA_list_FEF_TNEF))
print("Present only in MEF + TNEF:")
miRNA_list_MEF_TNEF = mirna_list["MEF"].intersection(mirna_list["TNEF"]).difference(mirna_list["FEF"])
relevant_miRNAs_ef.extend(miRNA_list_MEF_TNEF)
print(sorted(miRNA_list_MEF_TNEF))
print("List of miRNAs with differential presence:")
relevant_miRNAs_ef = sorted(set(relevant_miRNAs_ef))
print(relevant_miRNAs_ef)
# +
venn_stage_h = matplotlib_venn.venn3_unweighted([mirna_list["FH"], mirna_list["MH"], mirna_list["TNH"]],
set_labels = ("FH", "MH", "TNH")
)
relevant_miRNAs_h = list()
print("Exclusive to FH:")
miRNA_list_FH = mirna_list["FH"].difference(mirna_list["MH"], mirna_list["TNH"])
relevant_miRNAs_h.extend(miRNA_list_FH)
print(sorted(miRNA_list_FH))
print("Exclusive to MH:")
miRNA_list_MH = mirna_list["MH"].difference(mirna_list["FH"], mirna_list["TNH"])
relevant_miRNAs_h.extend(miRNA_list_MH)
print(sorted(miRNA_list_MH))
print("Exclusive to TNH:")
miRNA_list_TNH = mirna_list["TNH"].difference(mirna_list["FH"], mirna_list["MH"])
relevant_miRNAs_h.extend(miRNA_list_TNH)
print(sorted(miRNA_list_TNH))
print("Present only in FH + MH:")
miRNA_list_FH_MH = mirna_list["FH"].intersection(mirna_list["MH"]).difference(mirna_list["TNH"])
relevant_miRNAs_h.extend(miRNA_list_FH_MH)
print(sorted(miRNA_list_FH_MH))
print("Present only in FH + TNH:")
miRNA_list_FH_TNH = mirna_list["FH"].intersection(mirna_list["TNH"]).difference(mirna_list["MH"])
relevant_miRNAs_h.extend(miRNA_list_FH_TNH)
print(sorted(miRNA_list_FH_TNH))
print("Present only in MH + TNH:")
miRNA_list_MH_TNH = mirna_list["MH"].intersection(mirna_list["TNH"]).difference(mirna_list["FH"])
relevant_miRNAs_h.extend(miRNA_list_MH_TNH)
print(sorted(miRNA_list_MH_TNH))
print("List of miRNAs with differential presence:")
relevant_miRNAs_h = sorted(set(relevant_miRNAs_h))
print(relevant_miRNAs_h)
# +
relevant_miRNAs_by_developmental_stage = sorted(set(relevant_miRNAs_b
+ relevant_miRNAs_ef
+ relevant_miRNAs_h))
print("List of miRNAs with differential presence in at least one developmental stage ({}):".format(len(relevant_miRNAs_by_developmental_stage)))
print(relevant_miRNAs_by_developmental_stage)
for miRNA in relevant_miRNAs_by_developmental_stage:
# Colect values
miRNA_norm_counts = list()
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FH_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MH_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNH_norm"]))
# Plot
dataplot = pandas.Series(miRNA_norm_counts,
index = samples_list)
dataplot.plot(kind = "bar",
title = "Frequency of " + miRNA,
color = ["#003300", "#003300", "#003300",
"#336600", "#336600", "#336600",
"#666633", "#666633", "#666633"])
threshold = pandas.Series([10, 10, 10, 10, 10, 10, 10, 10, 10],
index = samples_list)
threshold.plot(kind = "line",
color = ["#660000"])
plt.xlabel("Sample")
plt.ylabel("Normalized counts")
plt.show()
# -
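# The nine per-sample `extend` calls above can be collapsed into a single loop over sample prefixes. A minimal sketch, using a tiny stand-in DataFrame rather than `pmrd_mature_counts` (`collect_norm_counts` is a hypothetical helper, not from the original code):

```python
import pandas as pd

# Tiny stand-in for pmrd_mature_counts with two samples.
counts = pd.DataFrame({
    "miRNA": ["miR-1", "miR-2"],
    "FB_norm": [10.0, 0.0],
    "FEF_norm": [3.0, 7.0],
})

def collect_norm_counts(df, mirna, sample_prefixes):
    """Return the normalized counts of one miRNA, one value per sample."""
    row = df[df["miRNA"] == mirna]
    return [float(row[p + "_norm"].iloc[0]) for p in sample_prefixes]

print(collect_norm_counts(counts, "miR-1", ["FB", "FEF"]))  # [10.0, 3.0]
```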
# ### List of relevant miRNAs regardless of origin
# +
relevant_miRNAs_all = sorted(set(relevant_miRNAs_by_developmental_stage
+ relevant_miRNAs_by_flower_type))
print("List of miRNAs with overall differential presence ({}):".format(len(relevant_miRNAs_all)))
print(relevant_miRNAs_all)
for miRNA in relevant_miRNAs_all:
# Collect values
miRNA_norm_counts = list()
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FH_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MH_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNH_norm"]))
# Plot
dataplot = pandas.Series(miRNA_norm_counts,
index = samples_list)
dataplot.plot(kind = "bar",
title = "Frequency of " + miRNA,
color = ["#003300", "#003300", "#003300",
"#336600", "#336600", "#336600",
"#666633", "#666633", "#666633"])
threshold = pandas.Series([10, 10, 10, 10, 10, 10, 10, 10, 10],
index = samples_list)
threshold.plot(kind = "line",
color = ["#660000"])
plt.xlabel("Sample")
plt.ylabel("Normalized counts")
plt.show()
# +
relevant_miRNAs_values = list()
for miRNA in relevant_miRNAs_all:
# Collect values
miRNA_norm_counts = list()
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["FH_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["MH_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNB_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNEF_norm"]))
miRNA_norm_counts.extend(set(pmrd_mature_counts[pmrd_mature_counts["miRNA"] == miRNA]["TNH_norm"]))
relevant_miRNAs_values.append(miRNA_norm_counts)
plt.figure(figsize = (5, 20))
plt.pcolor(relevant_miRNAs_values)
plt.yticks(np.arange(0.5, len(relevant_miRNAs_all), 1), relevant_miRNAs_all)
plt.xticks(np.arange(0.5, len(samples_list), 1), labels = samples_list)
plt.colorbar()
plt.show()
# -
relevant_miRNAs = pandas.DataFrame.from_records(relevant_miRNAs_values,
index = relevant_miRNAs_all,
columns = samples_list)
relevant_miRNAs
# +
# This list comes from differential expressed genes
differential_expressed = ['vvi-miR156e', 'vvi-miR156f', 'vvi-miR156g', 'vvi-miR156i', 'vvi-miR159a', 'vvi-miR159b', 'vvi-miR160c', 'vvi-miR160d', 'vvi-miR160e', 'vvi-miR164d', 'vvi-miR167a', 'vvi-miR167c', 'vvi-miR167e', 'vvi-miR169a', 'vvi-miR169c', 'vvi-miR169e', 'vvi-miR169k', 'vvi-miR169x', 'vvi-miR171c', 'vvi-miR171g', 'vvi-miR171i', 'vvi-miR172c', 'vvi-miR172d', 'vvi-miR2111*', 'vvi-miR2950', 'vvi-miR3623*', 'vvi-miR3624', 'vvi-miR3624*', 'vvi-miR3625', 'vvi-miR3625*', 'vvi-miR3626', 'vvi-miR3626*', 'vvi-miR3627', 'vvi-miR3627*', 'vvi-miR3629a*', 'vvi-miR3629c', 'vvi-miR3630*', 'vvi-miR3631b*', 'vvi-miR3632', 'vvi-miR3632*', 'vvi-miR3633b*', 'vvi-miR3634', 'vvi-miR3634*', 'vvi-miR3635', 'vvi-miR3635*', 'vvi-miR3637', 'vvi-miR3640*', 'vvi-miR393a', 'vvi-miR393b', 'vvi-miR394b', 'vvi-miR395a', 'vvi-miR395b', 'vvi-miR395c', 'vvi-miR395d', 'vvi-miR395e', 'vvi-miR395f', 'vvi-miR395g', 'vvi-miR395h', 'vvi-miR395i', 'vvi-miR395j', 'vvi-miR395k', 'vvi-miR395l', 'vvi-miR395m', 'vvi-miR396a', 'vvi-miR396b', 'vvi-miR396c', 'vvi-miR396d', 'vvi-miR397a', 'vvi-miR398a', 'vvi-miR399a', 'vvi-miR399b', 'vvi-miR399c', 'vvi-miR399h', 'vvi-miR479', 'vvi-miR482', 'vvi-miR535b', 'vvi-miR535c']
# List miRNAs found by both strategies
mirnas_both = sorted(set(relevant_miRNAs.index).intersection(differential_expressed))
print("There are {} miRNAs identified by both methods.".format(len(mirnas_both)))
mirnas_both_values = list()
mirnas_counts = pmrd_mature_counts
for miRNA in mirnas_both:
# Collect values
miRNA_norm_counts = list()
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FH_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MH_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNH_norm"]))
mirnas_both_values.append(miRNA_norm_counts)
mirnas_both_expression = pandas.DataFrame.from_records(mirnas_both_values,
index = mirnas_both,
columns = samples_list)
mirnas_both_expression
# +
# List miRNAs found only by presence/absence
mirnas_only_counts = sorted(set(relevant_miRNAs.index).difference(differential_expressed))
print("There are {} miRNAs identified only by presence/absence.".format(len(mirnas_only_counts)))
mirnas_only_counts_values = list()
for miRNA in mirnas_only_counts:
# Collect values
miRNA_norm_counts = list()
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FH_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MH_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNH_norm"]))
mirnas_only_counts_values.append(miRNA_norm_counts)
mirnas_only_counts_expression = pandas.DataFrame.from_records(mirnas_only_counts_values,
index = mirnas_only_counts,
columns = samples_list)
mirnas_only_counts_expression
# +
# List miRNAs found only by differential expression
mirnas_only_differential_expressed = sorted(set(differential_expressed).difference(relevant_miRNAs.index))
print("There are {} miRNAs identified only by differential expression.".format(len(mirnas_only_differential_expressed)))
mirnas_only_differential_expressed_values = list()
for miRNA in mirnas_only_differential_expressed:
# Collect values
miRNA_norm_counts = list()
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["FH_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["MH_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNB_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNEF_norm"]))
miRNA_norm_counts.extend(set(mirnas_counts[mirnas_counts["miRNA"] == miRNA]["TNH_norm"]))
mirnas_only_differential_expressed_values.append(miRNA_norm_counts)
mirnas_only_differential_expressed_expression = pandas.DataFrame.from_records(mirnas_only_differential_expressed_values,
index = mirnas_only_differential_expressed,
columns = samples_list)
mirnas_only_differential_expressed_expression
# -
# Source notebook: jupyter_notebooks/pmrd_mature_counts.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This project is to work with a dataset of used cars from eBay Kleinanzeigen, a [classifieds](https://en.wikipedia.org/wiki/Classified_advertising) section of the German eBay website.
#
# The dataset was originally scraped and uploaded to Kaggle. The original dataset isn't available on Kaggle anymore, but you can find it [here](https://data.world/data-society/used-cars-data).
#
# Two notes about this dataset:
# - 50,000 data points were sampled
# - The dataset was dirtied a bit
#
# The data dictionary provided with data is as follows:
#
# - dateCrawled - When this ad was first crawled. All field-values are taken from this date.
# - name - Name of the car.
# - seller - Whether the seller is private or a dealer.
# - offerType - The type of listing
# - price - The price on the ad to sell the car.
# - abtest - Whether the listing is included in an A/B test.
# - vehicleType - The vehicle Type.
# - yearOfRegistration - The year in which the car was first registered.
# - gearbox - The transmission type.
# - powerPS - The power of the car in PS.
# - model - The car model name.
# - kilometer - How many kilometers the car has driven.
# - monthOfRegistration - The month in which the car was first registered.
# - fuelType - What type of fuel the car uses.
# - brand - The brand of the car.
# - notRepairedDamage - If the car has a damage which is not yet repaired.
# - dateCreated - The date on which the eBay listing was created.
# - nrOfPictures - The number of pictures in the ad.
# - postalCode - The postal code for the location of the vehicle.
# - lastSeenOnline - When the crawler saw this ad last online.
#
# The aim of this project is to clean the data, analyze the included used car listings, and provide a report on the relationship between price and car condition, which is usually a combined result of car brand, car model, registration year, and mileage (odometer).
#
# Here we don't include the damage factor, as it is common sense that a car with unrepaired damage has a hard time finding a buyer.
#
# Finally, we will also explore the relative popularity of manual and automatic gearboxes in Germany.
#
# Let's start by importing the libraries we need and reading the dataset into pandas.
# +
import pandas as pd
import numpy as np
autos = pd.read_csv("autos.csv", encoding="Latin-1")
# -
autos
autos.info()
autos.head()
# From the work we did in the last screen, we can make the following observations:
#
# 1. The dataset contains 20 columns, most of which are strings.
# 2. Some columns have null values, but none have more than ~20% null values.
# 3. The column names use camelcase instead of Python's preferred snakecase, which means we can't just replace spaces with underscores.
#
# Let's convert the column names from camelcase to snakecase and reword some of the column names based on the data dictionary to be more descriptive.
new_column = []
for item in autos.columns:
new_column.append(item)
new_column[0] = 'data_crawled'
new_column[3] = 'offer_type'
new_column[5] = 'ab_test'
new_column[6] = 'vehicle_type'
new_column[7] = 'registration_year'
new_column[-1] = 'last_seen'
new_column[-2] = 'postal_code'
new_column[-3] = 'nr_pictures'
new_column[-4] = 'ad_created'
new_column[-5] = 'unrepaired_damage'
new_column[-8] = 'registration_month'
autos.columns = new_column
autos.head()
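# A camelcase-to-snakecase conversion can also be automated with a regular expression. A minimal sketch (note that the manual renames above additionally reworded some column names, which a regex alone would not do):

```python
import re

def camel_to_snake(name):
    """Insert an underscore before each uppercase letter, then lowercase."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_snake("yearOfRegistration"))  # year_of_registration
print(camel_to_snake("notRepairedDamage"))   # not_repaired_damage
```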
# Now let's do some basic data exploration to determine what other cleaning tasks need to be done. Initially we will look for:
#
# 1. Text columns where all or almost all values are the same. These can often be dropped as they don't have useful information for analysis.
# 2. Examples of numeric data stored as text which can be cleaned and converted.
autos.describe(include='all')
# Let's first find out which of these columns have mostly one value: "seller", "offer_type", "ab_test", "unrepaired_damage".
autos["seller"].value_counts()
autos["offer_type"].value_counts()
autos["ab_test"].value_counts()
autos["unrepaired_damage"].value_counts()
# Clearly, "seller" and "offer_type" have mostly one value and need to be dropped. "ab_test" serves other analysis purposes and has nothing to do with the scope of this project, so it can also be removed. Plus, as mentioned at the beginning of this project, we don't need to consider cars with damage, so those cars will also be removed.
cleaned_autos = autos.drop(["seller", "offer_type", "ab_test"], axis=1)
cleaned_autos = cleaned_autos.loc[autos["unrepaired_damage"] == "nein"]
cleaned_autos.describe(include='all')
cleaned_autos
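# Columns dominated by a single value can also be flagged programmatically instead of inspecting each `value_counts()` by hand. A sketch on a toy frame (`near_constant_columns` is a hypothetical helper, not part of this analysis):

```python
import pandas as pd

df = pd.DataFrame({
    "seller": ["privat"] * 99 + ["gewerblich"],
    "price": range(100),
})

def near_constant_columns(frame, threshold=0.95):
    """Return object columns whose most common value exceeds `threshold`."""
    flagged = []
    for col in frame.select_dtypes(include="object"):
        if frame[col].value_counts(normalize=True).iloc[0] > threshold:
            flagged.append(col)
    return flagged

print(near_constant_columns(df))  # ['seller']
```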
# Now let's work on the "price" and "odometer" columns. Let's first examine these two columns.
cleaned_autos["price"]
# +
# First, convert the string format into numerical format for analysis
cleaned_autos["price"] = cleaned_autos["price"].str.replace('$','')\
.str.replace(',','').astype('int64')
# -
# The mean is around 9000; however, the standard deviation is 1.248084e+05. This indicates that some prices fall far outside the normal range for used cars, usually 500 - 50000. Now let's remove cars priced lower than 500 or higher than 50000.
cleaned_autos = cleaned_autos.loc[cleaned_autos["price"] > 500]
cleaned_autos = cleaned_autos.loc[cleaned_autos["price"] < 50000]
cleaned_autos.rename({'price':'price_euro'},axis=1,inplace=True)
print(cleaned_autos["price_euro"].describe())
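# The two chained price filters above can also be expressed in one step with `Series.between` (assuming pandas >= 1.3 for the string `inclusive` keyword); a sketch on toy data:

```python
import pandas as pd

prices = pd.Series([100, 1200, 9999, 49000, 99999999])
# Strict bounds, matching the > 500 and < 50000 filters above.
kept = prices[prices.between(500, 50000, inclusive="neither")]
print(kept.tolist())  # [1200, 9999, 49000]
```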
# Let's perform the similar action on "Odometer" column.
cleaned_autos["odometer"]
cleaned_autos["odometer"].unique()
cleaned_autos["odometer"] = cleaned_autos["odometer"].\
str.replace(',','').str.replace('km','').astype('int64')
cleaned_autos.rename({"odometer":"odometer_km"},axis=1,inplace=True)
cleaned_autos["odometer_km"].unique()
# Let's now move on to the date columns and understand the date range the data covers.
cleaned_autos["data_crawled"].str[:10]
date_crawled = cleaned_autos["data_crawled"].str[:10].value_counts(normalize=True, dropna=False)
date_crawled.sort_index(ascending=False)
ad_created = cleaned_autos["ad_created"].str[:10].value_counts(normalize=True, dropna=False)
ad_created.sort_index(ascending=False)
last_seen = cleaned_autos["last_seen"].str[:10].value_counts(normalize=True, dropna=False)
last_seen.sort_index(ascending=False)
cleaned_autos["registration_year"].describe()
# From ad_created, the ads were created in either 2015 or 2016; therefore, the registration year should not be later than 2016. So now we keep only cars with a registration year between 1931 and 2016.
cleaned_autos = cleaned_autos.loc[cleaned_autos["registration_year"] <= 2016]
cleaned_autos["registration_year"].describe()
# So now we finally get to the point where we can analyze how price is tied to the brand of the car.
brands = cleaned_autos["brand"].value_counts(normalize=True)
brands
brands[:10].sum()
# Let's choose the top 10 brands, as they account for about 80% of the cars on the list. Now we will find the average price for each of them.
most_common_brands = brands[:10].index
most_common_brands
# +
brands_mean_price = {}
for brand in most_common_brands:
brand_only = cleaned_autos[cleaned_autos['brand'] == brand]
mean_price = brand_only['price_euro'].mean()
brands_mean_price[brand] = int(mean_price)
brands_mean_price
# -
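# The dictionary-building loop above can be replaced by a single `groupby`; a sketch on a toy frame:

```python
import pandas as pd

autos_toy = pd.DataFrame({
    "brand": ["audi", "audi", "opel"],
    "price_euro": [9000, 11000, 3000],
})

# Mean price per brand in one step.
mean_by_brand = autos_toy.groupby("brand")["price_euro"].mean()
print(mean_by_brand.to_dict())  # {'audi': 10000.0, 'opel': 3000.0}
```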
# We can see that there is a price gap among the top brands in the sales data. Cars manufactured by Audi, BMW and <NAME> tend to be priced higher than the competition. Opel is the least expensive of the top brands, while Volkswagen is in between. This could be one of the reasons for the popularity of Volkswagen cars.
# Source notebook: Guided Project_ Exploring eBay Car Sales Data/Basics.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: DESI Trac
# language: python
# name: desi
# ---
# # CNN Classifier Training Example
#
# This notebook demonstrates a basic 4-layer CNN trained to classify spectra from galaxies and galaxies + SNe Ia within 2 weeks (plus/minus) of max light.
#
# Required software:
# * TensorFlow2
# * [desihub software](https://desi.lbl.gov/trac/wiki/Pipeline/GettingStarted/Laptop) (with usual dependencies).
#
# Adding more spectral categories is straightforward.
# +
from desispec.io import read_spectra
from desitrip.preproc import rebin_flux, rescale_flux
import glob
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from astropy.table import Table
import os
import platform
mpl.rc('font', size=14)
# -
# ## Input Spectra
#
# Input DESI spectra, rebin and rescale them, and then divide them into training and test sets for the classifier.
def condition_spectra(coadd_files, truth_files):
"""Read DESI spectra, rebin to a subsampled logarithmic wavelength grid, and rescale.
Parameters
----------
coadd_files : list or ndarray
List of FITS files on disk with DESI spectra.
truth_files : list or ndarray
Truth files.
Returns
-------
fluxes : ndarray
Array of fluxes rebinned to a logarithmic wavelength grid.
"""
fluxes = None
for cf, tf in zip(coadd_files, truth_files):
spectra = read_spectra(cf)
wave = spectra.wave['brz']
flux = spectra.flux['brz']
ivar = spectra.ivar['brz']
truth = Table.read(tf, 'TRUTH')
truez = truth['TRUEZ']
# uid = truth ['TARGETID']
# # Pre-condition: remove spectra with NaNs and zero flux values.
# mask = np.isnan(flux).any(axis=1) | (np.count_nonzero(flux, axis=1) == 0)
# mask_idx = np.argwhere(mask)
# flux = np.delete(flux, mask_idx, axis=0)
# ivar = np.delete(ivar, mask_idx, axis=0)
# Rebin and rescale fluxes so that each is normalized between 0 and 1.
rewave, reflux, reivar = rebin_flux(wave, flux, ivar, truez, minwave=2500., maxwave=9500., nbins=150, log=True, clip=True)
rsflux = rescale_flux(reflux)
if fluxes is None:
fluxes = rsflux
else:
fluxes = np.concatenate((fluxes, rsflux))
return fluxes, wave
snia_truth = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/sneia*/*truth.fits')))
snia_coadd = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/sneia*/*coadd.fits')))
snia_flux, wave = condition_spectra(snia_coadd, snia_truth)
host_truth = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/host*/*truth.fits')))
host_files= np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/host*/*coadd.fits')))
host_flux, wave1 = condition_spectra(host_files, host_truth)
# +
nhost, nbins = host_flux.shape
nsnia, nbins = snia_flux.shape
nhost, nsnia, nbins
# -
set(wave==wave1)
# # Masking
# +
#create a mask sized maskSize at a random point in the spectra
import numpy as np
import matplotlib.pyplot as plt
def mask(wave,spectra, a,b):
wave, spectra = np.array(wave).tolist(), np.array(spectra).tolist()
trial1=[]
masklist=[]
left= min(wave, key=lambda x:abs(x-a))
right = min(wave, key=lambda x:abs(x-b))
l_i, r_i = wave.index(left),wave.index(right)
#l_i, r_i = np.where(wave==left),np.where(wave==right)
masklist= [i for i in range((l_i),(r_i+1))]
for i in range(len(spectra)):
if i in masklist:
trial1.append(0)
else:
trial1.append(spectra[i])
trial1 = np.asarray(trial1)
return trial1
# +
# To check it works- Checking against Amanda's code
#create a mask sized maskSize at a random point in the spectra
import numpy as np
import matplotlib.pyplot as plt
def mask1(spectra, maskSize):
random_150=np.random.randint(0,150-maskSize)
trial1=[]
l=0
masklist=[]
for i in range(maskSize):
masklist.append(random_150+i)
print(masklist)
for i in spectra:
if l in masklist:
trial1.append(0)
else:
trial1.append(i)
l=l+1
return trial1
# -
a = [i for i in range (0,150)]
# Amanda's
x = mask1([i for i in range (0,150)], 5)
plt.plot(x)
b = mask(a,a, 67,71)
plt.plot(b)
# +
""" The cell opens fit files from the Host folder
Then the second part removes the flux that either have no data (np.nan)
or just have 0 flux for all wavelengths"""
import glob
from astropy.io import fits
from scipy.ndimage import median_filter
# %matplotlib inline
files_host = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/host*/*coadd.fits')))
flux_host = []
for f in files_host:
h = fits.open(f)
wave = h['BRZ_WAVELENGTH'].data
flux = h['BRZ_FLUX'].data
flux_host.append(flux)
flux_host = np.concatenate(flux_host)
# print (len(flux_host))
flux_host_valid = []
x = 0
for flux in flux_host:
if (np.isnan(flux).any()) or (not np.any(flux)): #check for nan and 0(if the whole array is 0) respectively
x += 1
else:
flux_host_valid.append(flux)
print(x)
# -
# ## checking on real data
i = 1
plt.plot(wave, flux_host_valid[i])
#plt.axis([0, 6, 0, 20])
plt.show()
# Checking on the wavelength and flux
i = 1
h = mask(wave,flux_host_valid[i], 6000,8000)
plt.plot(wave,h)
#plt.axis([0, 6, 0, 20])
plt.show()
# ### Yay it works
# ### 5600A-5950A and 7450A-7750A zeropad
def mask_spectra(coadd_files, truth_files):
"""Read DESI spectra, zero out masked wavelength bands, rebin to a subsampled logarithmic wavelength grid, and rescale.
Parameters
----------
coadd_files : list or ndarray
List of FITS files on disk with DESI spectra.
truth_files : list or ndarray
Truth files.
Returns
-------
fluxes : ndarray
Array of fluxes rebinned to a logarithmic wavelength grid.
"""
fluxes = None
for cf, tf in zip(coadd_files, truth_files):
spectra = read_spectra(cf)
wave = spectra.wave['brz']
flux = spectra.flux['brz']
flux = mask(wave,flux,5600,5950)
flux = mask(wave,flux,7450,7750)
ivar = spectra.ivar['brz']
truth = Table.read(tf, 'TRUTH')
truez = truth['TRUEZ']
# uid = truth ['TARGETID']
# # Pre-condition: remove spectra with NaNs and zero flux values.
# mask = np.isnan(flux).any(axis=1) | (np.count_nonzero(flux, axis=1) == 0)
# mask_idx = np.argwhere(mask)
# flux = np.delete(flux, mask_idx, axis=0)
# ivar = np.delete(ivar, mask_idx, axis=0)
# Rebin and rescale fluxes so that each is normalized between 0 and 1.
rewave, reflux, reivar = rebin_flux(wave, flux, ivar, truez, minwave=2500., maxwave=9500., nbins=150, log=True, clip=True)
rsflux = rescale_flux(reflux)
if fluxes is None:
fluxes = rsflux
else:
fluxes = np.concatenate((fluxes, rsflux))
return fluxes, wave
# +
snia_truth = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/sneia*/*truth.fits')))
snia_coadd = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/sneia*/*coadd.fits')))
snia_flux_mask, wave_m = mask_spectra(snia_coadd, snia_truth)
host_truth = np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/host*/*truth.fits')))
host_files= np.sort(glob.glob((r'/scratch/vtiwari2/DESI Transient Sims/host*/*coadd.fits')))
host_flux_mask, wavem1 = mask_spectra(host_files, host_truth)
# -
x = np.array(snia_flux_mask == snia_flux).tolist()
False in x
# ### <span style="color:blue"> This is where the problem is, because the two flux arrays are still the same; I don't quite know where the code is going wrong</span>.
#
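# One possible cause (an assumption, not verified here): `mask()` indexes the bins of a single 1-D spectrum, while `flux` inside `mask_spectra` is a 2-D array of shape (n_spectra, n_bins), so the band never lines up with individual wavelength bins. A minimal vectorized sketch that zeroes a wavelength band across all rows — `zero_band` is a hypothetical helper, not part of the desihub API:

```python
import numpy as np

def zero_band(wave, flux2d, lo, hi):
    """Zero all flux values whose wavelength falls in [lo, hi].

    wave   : 1-D array of wavelengths, shape (n_bins,)
    flux2d : 2-D array of spectra, shape (n_spectra, n_bins)
    """
    band = (wave >= lo) & (wave <= hi)  # boolean mask over wavelength bins
    out = flux2d.copy()                 # avoid modifying the input in place
    out[:, band] = 0.0
    return out

wave = np.arange(5000, 8001, 100)       # 5000, 5100, ..., 8000
flux = np.ones((2, wave.size))
masked = zero_band(wave, flux, 5600, 5950)
print(masked[0, 5], masked[0, 6])       # 1.0 0.0
```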
# ### Checking
# Checking on the wavelength and flux
i = 1
plt.plot(host_flux[i])
plt.plot(host_flux_mask[i])
#plt.axis([0, 6, 0, 20])
plt.show()
# # <span style="color:red"> DISREGARD THE CODE BELOW </span>.
#
# ### Plot Spectra to Check Output
# +
fig, axes = plt.subplots(1,2, figsize=(14,5), sharex=True, sharey=True)
for i in range(0,500):
ax = axes[0]
ax.plot(host_flux[i], alpha=0.2)
ax = axes[1]
ax.plot(snia_flux[i], alpha=0.2)
axes[0].set_title('host spectra')
axes[1].set_title('SN spectra')
fig.tight_layout()
# -
# ### Set up Training Sets and Labels
#
# 0. "host" spectra based only on BGS templates
# 1. "snia" spectra based on BGS + SN Ia templates
x = np.concatenate([host_flux, snia_flux]).reshape(-1, nbins, 1)
y = np.concatenate([np.zeros(nhost), np.ones(nsnia)])
# ## CNN Network Setup
#
# Train network with TensorFlow+Keras.
import tensorflow as tf
from tensorflow.keras import utils, regularizers, callbacks, backend
from tensorflow.keras.layers import Input, Dense, Activation, ZeroPadding1D, BatchNormalization, Flatten, Reshape, Conv1D, MaxPooling1D, Dropout, Add, LSTM, Embedding
from tensorflow.keras.initializers import glorot_normal, glorot_uniform
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model, load_model
def network(input_shape, learning_rate=0.0005, reg=0.0032, dropout=0.7436, seed=None):
"""Define the CNN structure.
Parameters
----------
input_shape : int
Shape of the input spectra.
learning_rate : float
Learning rate.
reg : float
Regularization factor.
dropout : float
Dropout rate.
seed : int
Seed of initializer.
Returns
-------
model : tensorflow.keras.Model
A model instance of the network.
"""
X_input = Input(input_shape, name='Input_Spec')
# First convolutional layer.
with backend.name_scope('Conv_1'):
X = Conv1D(filters=8, kernel_size=5, strides=1, padding='same',
kernel_regularizer=regularizers.l2(reg),
bias_initializer='zeros',
kernel_initializer=glorot_normal(seed))(X_input)
X = BatchNormalization(axis=2)(X)
X = Activation('relu')(X)
X = MaxPooling1D(pool_size= 2)(X)
# Second convolutional layer.
with backend.name_scope('Conv_2'):
X = Conv1D(filters=16, kernel_size=5, strides=1, padding='same',
kernel_regularizer=regularizers.l2(reg),
bias_initializer='zeros',
kernel_initializer=glorot_normal(seed))(X)
X = BatchNormalization(axis=2)(X)
X = Activation('relu')(X)
X = MaxPooling1D(2)(X)
# Third convolutional layer.
with backend.name_scope('Conv_3'):
X = Conv1D(filters=32, kernel_size=5, strides=1, padding='same',
kernel_regularizer=regularizers.l2(reg),
bias_initializer='zeros',
kernel_initializer=glorot_normal(seed))(X)
X = BatchNormalization(axis=2)(X)
X = Activation('relu')(X)
X = MaxPooling1D(2)(X)
# Fourth convolutional layer.
with backend.name_scope('Conv_4'):
X = Conv1D(filters=64, kernel_size=5, strides=1, padding='same',
kernel_regularizer=regularizers.l2(reg),
bias_initializer='zeros',
kernel_initializer=glorot_normal(seed))(X)
X = BatchNormalization(axis=2)(X)
X = Activation('relu')(X)
X = MaxPooling1D(2)(X)
# Flatten to fully connected dense layer.
with backend.name_scope('Dense_Layer'):
X = Flatten()(X)
X = Dense(256, kernel_regularizer=regularizers.l2(reg),
activation='relu')(X)
X = Dropout(rate=dropout, seed=seed)(X)
# Output layer with sigmoid activation.
with backend.name_scope('Output_Layer'):
X = Dense(1, kernel_regularizer=regularizers.l2(reg),
activation='sigmoid',name='Output_Classes')(X)
model = Model(inputs=X_input, outputs=X, name='SNnet')
# Set up optimizer, loss function, and optimization metrics.
model.compile(optimizer=Adam(learning_rate=learning_rate), loss='binary_crossentropy',
metrics=['accuracy'])
return model
model = network((nbins, 1))
# ## Train and Test
#
# Split the data into training and testing (validation) samples and fit the network weights.
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
# Splitting the data
x = np.concatenate([host_flux, snia_flux]).reshape(-1, nbins, 1)
y = np.concatenate([np.zeros(nhost), np.ones(nsnia)])
x_train, x_test, y_train, y_test= train_test_split(x, y, test_size=0.25)
hist = model.fit(x_train, y_train, batch_size=65, epochs=30, validation_data=(x_test, y_test), shuffle=True)
# ## Performance
#
# ### Loss and Accuracy
#
# Plot loss and accuracy as a function of epoch.
# +
fig, axes = plt.subplots(1,2, figsize=(12,5), sharex=True)
nepoch = len(hist.history['loss'])
epochs = np.arange(1, nepoch+1)
ax = axes[0]
ax.plot(epochs, hist.history['accuracy'], label='accuracy')
ax.plot(epochs, hist.history['val_accuracy'], label='val_accuracy')
ax.set(xlabel='training epoch',
ylabel='accuracy',
xlim=(0, nepoch),
ylim=(0.5,1.0))
ax.legend(fontsize=12, loc='best')
ax.grid(ls=':')
ax = axes[1]
ax.plot(epochs, hist.history['loss'], label='loss')
ax.plot(epochs, hist.history['val_loss'], label='val_loss')
ax.set(xlabel='training epoch',
ylabel='loss',
xlim=(0, nepoch),
ylim=(0.,2.0))
ax.legend(fontsize=12, loc='best')
ax.grid(ls=':')
fig.tight_layout();
# -
# ### ROC Curve and Precision-Recall
from sklearn.metrics import roc_curve, auc, precision_recall_curve, average_precision_score
y_pred = model.predict(x_test).ravel()
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
pre, rec, _ = precision_recall_curve(y_test, y_pred)
# +
fig, axes = plt.subplots(1,2, figsize=(10,5), sharex=True, sharey=True)
ax = axes[0]
ax.plot(fpr, tpr, lw=2)
ax.plot((0,1), (0,1), 'k--', alpha=0.3)
ax.grid(ls=':')
ax.set(xlim=(-0.01,1.01), xlabel='FPR = FP / (FP + TN)',
ylim=(-0.01,1.01), ylabel='recall (TPR) = TP / (TP + FN)',
title='ROC: AUC = {:.3f}'.format(auc(fpr, tpr)),
aspect='equal')
ax = axes[1]
ax.plot(rec, pre, lw=2)
f_scores = np.linspace(0.1, 0.9, num=5)
lines = []
labels = []
for f_score in f_scores:
x_ = np.linspace(0.01, 1)
y_ = f_score * x_ / (2 * x_ - f_score)
l, = plt.plot(x_[y_ >= 0], y_[y_ >= 0], color='k', ls='--', alpha=0.3)
ax.annotate(' $F_{{1}}={0:0.1f}$'.format(f_score), xy=(1.01, y_[45]-0.02),
fontsize=12, alpha=0.8)
ax.grid(ls=':')
ax.set(xlabel='recall (TPR) = TP / (TP + FN)',
ylabel='precision = TP / (TP + FP)',
title='Average precision = {:.3f}'.format(average_precision_score(y_test, y_pred)),
aspect='equal')
fig.tight_layout()
# -
# ### Confusion Matrix
# +
fig, axes = plt.subplots(1,2, figsize=(12,5), sharex=True)
ax = axes[0]
ybins = np.linspace(0,1,41)
ax.hist(y_test, bins=ybins, alpha=0.5, label='true label')
ax.hist(y_pred[y_test==0], bins=ybins, alpha=0.5, label='prediction (host)')
ax.hist(y_pred[y_test==1], bins=ybins, alpha=0.5, label='prediction (SN Ia)')
ax.grid(ls=':')
ax.set(xlabel='label probability',
xlim=(-0.01, 1.01),
ylabel='count')
ax.legend(fontsize=12, loc='best')
ax = axes[1]
ybins = np.linspace(0,1,41)
ax.hist(y_test, bins=ybins, alpha=0.5, label='true label')
ax.hist(y_pred[y_test==0], bins=ybins, alpha=0.5, label='prediction (host)')
ax.hist(y_pred[y_test==1], bins=ybins, alpha=0.5, label='prediction (SN Ia)', log=True)
ax.grid(ls=':')
ax.set(xlabel='label probability',
xlim=(-0.01, 1.01),
ylabel='count')
fig.tight_layout()
# -
# ## <font color='red'>CM with y_pred > 0.5</font>
# +
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred > 0.5)
cmnorm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
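# Row-normalizing the confusion matrix as above makes each true-label row sum to 1, so each cell reads as P(prediction | true label); a quick numpy check on a toy matrix:

```python
import numpy as np

cm = np.array([[90, 10],
               [ 5, 95]])
# Divide each row by its total (true-label count).
cmnorm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
print(cmnorm[0])            # [0.9 0.1]
print(cmnorm.sum(axis=1))   # [1. 1.]
```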
# +
fig, ax = plt.subplots(1,1, figsize=(6,5))
im = ax.imshow(cmnorm, cmap='Blues', vmin=0, vmax=1)
cb = ax.figure.colorbar(im, ax=ax, fraction=0.046, pad=0.04)
cb.set_label('correct label probability')
ax.set(aspect='equal',
xlabel='predicted label',
xticks=np.arange(cm.shape[1]),
xticklabels=['host', 'SN Ia'],
ylabel='true label',
yticks=np.arange(cm.shape[1]),
yticklabels=['host', 'SN Ia'])
thresh = 0.5  # compare against the normalized values shown by the colormap
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(j, i, '{:.3f}\n({:d})'.format(cmnorm[i,j], cm[i,j]),
                ha='center', va='center',
                color='black' if cmnorm[i,j] < thresh else 'white')
fig.tight_layout()
# -
# ## <font color='red'>CM with y_pred > 0.9</font>
# +
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred > 0.9)
cmnorm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
fig, ax = plt.subplots(1,1, figsize=(6,5))
im = ax.imshow(cmnorm, cmap='Blues', vmin=0, vmax=1)
cb = ax.figure.colorbar(im, ax=ax, fraction=0.046, pad=0.04)
cb.set_label('correct label probability')
ax.set(aspect='equal',
xlabel='predicted label',
xticks=np.arange(cm.shape[1]),
xticklabels=['host', 'SN Ia'],
ylabel='true label',
yticks=np.arange(cm.shape[1]),
yticklabels=['host', 'SN Ia'])
thresh = 0.5  # compare against the normalized values shown by the colormap
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(j, i, '{:.3f}\n({:d})'.format(cmnorm[i,j], cm[i,j]),
                ha='center', va='center',
                color='black' if cmnorm[i,j] < thresh else 'white')
fig.tight_layout()
# -
# ## <font color='red'>CM with y_pred > 0.99</font>
# +
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred > 0.99)
cmnorm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
fig, ax = plt.subplots(1,1, figsize=(6,5))
im = ax.imshow(cmnorm, cmap='Blues', vmin=0, vmax=1)
cb = ax.figure.colorbar(im, ax=ax, fraction=0.046, pad=0.04)
cb.set_label('correct label probability')
ax.set(aspect='equal',
xlabel='predicted label',
xticks=np.arange(cm.shape[1]),
xticklabels=['host', 'SN Ia'],
ylabel='true label',
yticks=np.arange(cm.shape[1]),
yticklabels=['host', 'SN Ia'])
thresh = 0.5  # compare against the normalized values shown by the colormap
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(j, i, '{:.3f}\n({:d})'.format(cmnorm[i,j], cm[i,j]),
                ha='center', va='center',
                color='black' if cmnorm[i,j] < thresh else 'white')
fig.tight_layout()
# -
# # IGNORE
# Splitting
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
# +
"""Trying from the blog"""
# Classifier
keras_model = network((nbins, 1))
keras_model.fit(x_train, y_train, epochs=50, batch_size=64, verbose=1)
from sklearn.metrics import roc_curve
y_pred_keras = keras_model.predict(x_test).ravel()
fpr_keras, tpr_keras, thresholds_keras = roc_curve(y_test, y_pred_keras)
# Area under the curve
from sklearn.metrics import auc
auc_keras = auc(fpr_keras, tpr_keras)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
# +
############
import random
import pylab as pl
import numpy as np
from sklearn import svm, datasets
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import auc
"""The reference code uses decision_function, but that method exists only on
scikit-learn-style classifiers, not on the Keras Model class our classifier uses.
See: https://stats.stackexchange.com/questions/329857/what-is-the-difference-between-decision-function-predict-proba-and-predict-fun"""
y_score = keras_model.predict(x_test).ravel()
# Average precision score
from sklearn.metrics import average_precision_score
average_precision = average_precision_score(y_test, y_score)
print('Average precision-recall score: {0:0.2f}'.format(average_precision))
#https://scikit-plot.readthedocs.io/en/stable/Quickstart.html
# Used Professor Benzvi's code for the PR curve
from sklearn.metrics import precision_recall_curve
precision, recall, _ = precision_recall_curve(y_test, y_pred_keras)
plt.figure()
plt.step(recall, precision, where='post')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision Recall Curve with average precision of {0:0.2f}'.format(average_precision))
########
# +
from sklearn.metrics import confusion_matrix
y_pred = y_pred_keras
def plotConfusionMatrix(y_true, y_pred, classes=["Hosts", "Type IAs"], cmap=plt.cm.Oranges, title="Normalized Confusion Matrix"):
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
# Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred > 0.5)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
#print(cm)
print("Accuracy: ", accuracy_score(y_true, y_pred))
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# create text annotations
fmt = '.3f'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="blue" if cm[i, j] < thresh else "black")
fig.tight_layout()
plt.ylim([1.5, -.5])
plt.show()
return ax
y_pred = np.round(y_pred)
plotConfusionMatrix(y_true=y_test, y_pred=y_pred, title="Normalized Conf. Matrix")
# -
| desitrip/docs/nb/Masking_Vash .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# ## Windy Gridworld with King's Moves (Exercise 6.7)
# This is a modification of the Windy Gridworld problem but with 8 possible actions rather than the usual four.
# + deletable=true editable=true
import numpy as np
import matplotlib.pyplot as plt
# + deletable=true editable=true
# Coordinates in (row, column) format
INITIAL_STATE = [3, 0]
FINAL_STATE = [3, 7]
WIDTH = 10
HEIGHT = 7
ACTION_COUNT = 8
L, LU, U, RU, R, RD, D, LD = range(0, ACTION_COUNT)
MAX_X = WIDTH - 1
MAX_Y = HEIGHT - 1
# + deletable=true editable=true
class WindyGridworld:
up_draft = {0:0, 1:0, 2:0, 3:1, 4:1, 5:1, 6:2, 7:2, 8:1, 9:0}
pos = []
def __init__(self):
self.reset()
def step(self, action):
"""
Takes the given action and returns a tuple (next_state, reward, done)
"""
reward = 0
done = self.is_final()
if not done:
new_pos = np.copy(self.pos)
reward = -1
displacement = self.up_draft[self.pos[1]]
if action == L:
new_pos[1] -= 1
elif action == LU:
new_pos[0] -= 1
new_pos[1] -= 1
elif action == U:
new_pos[0] -= 1
elif action == RU:
new_pos[0] -= 1
new_pos[1] += 1
elif action == R:
new_pos[1] += 1
elif action == RD:
new_pos[0] += 1
new_pos[1] += 1
elif action == D:
new_pos[0] += 1
elif action == LD:
new_pos[0] += 1
new_pos[1] -= 1
# If final position is valid, move to new location
if not (new_pos[0] < 0 or new_pos[0] > MAX_Y or new_pos[1] < 0 or new_pos[1] > MAX_X):
self.pos = new_pos
# Apply upward translation due to wind
new_y = self.pos[0] - displacement
self.pos[0] = 0 if new_y < 0 else new_y
return np.copy(self.pos), reward, self.is_final()
def reset(self):
self.pos = np.copy(INITIAL_STATE)
return np.copy(self.pos)
def is_final(self):
return self.pos[0] == FINAL_STATE[0] and self.pos[1] == FINAL_STATE[1]
# + deletable=true editable=true
def epoch_greedy(Q, state):
qa = Q[state[0], state[1]]
prob = np.random.rand(1)
if prob > epsilon:
# exploit (greedy)
action_index = np.random.choice(np.flatnonzero(qa == qa.max()))
else:
# explore (random action)
action_index = np.random.randint(0, ACTION_COUNT)
return action_index
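The selection rule above can be sketched in isolation. This is an illustrative stdlib version (names are mine, not the notebook's): exploit the greedy action with probability 1 - epsilon, breaking ties at random as the `np.flatnonzero` trick does, and otherwise explore uniformly.

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick the greedy action with probability 1 - epsilon, else explore."""
    if random.random() > epsilon:
        best = max(q_values)
        # Break ties uniformly at random, like np.random.choice(np.flatnonzero(...)).
        return random.choice([i for i, q in enumerate(q_values) if q == best])
    return random.randrange(len(q_values))

# With epsilon = 0 the choice is (almost surely) always greedy.
print(epsilon_greedy([0.0, 2.5, 1.0], epsilon=0.0))
```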
# + deletable=true editable=true
# Same initialization as in text
gamma = 1 # no discount
epsilon = 0.1
alpha = 0.5
Q = np.zeros((HEIGHT, WIDTH, ACTION_COUNT))
steps = 8000
env = WindyGridworld()
step = 0
episodes = 0
s = env.reset()
x, y = [], [] # For plotting
while step < steps:
# Select action using policy derived from Q (e-greedy)
a = epoch_greedy(Q, s)
# Take action and observe next state and reward
s_, r, done = env.step(a)
# Choose A' from S' using policy derived from Q (e-greedy)
a_ = epoch_greedy(Q, s_)
# Update
if done:
Q[s[0], s[1], a] = Q[s[0], s[1], a] + alpha * (r - Q[s[0], s[1], a])
else:
Q[s[0], s[1], a] = Q[s[0], s[1], a] + alpha * (r + gamma * Q[s_[0], s_[1], a_] - Q[s[0], s[1], a])
s = s_
step += 1
if step % 100 == 0:
x.append(step)
y.append(episodes)
# Episode over, reset environment
if done:
s = env.reset()
episodes += 1
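A single SARSA backup from the loop above, on hypothetical numbers, makes the two update branches concrete: for a terminal transition the bootstrap term drops out, so the target is the reward alone.

```python
# Illustrative SARSA update on made-up values (same alpha/gamma as the notebook).
alpha, gamma = 0.5, 1.0

def sarsa_update(q_sa, r, q_next, done):
    # Terminal states contribute no future value, so the bootstrap term vanishes.
    target = r if done else r + gamma * q_next
    return q_sa + alpha * (target - q_sa)

# Non-terminal step: Q moves halfway toward r + gamma * Q(s', a').
print(sarsa_update(q_sa=-10.0, r=-1.0, q_next=-8.0, done=False))  # -9.5
# Terminal step: Q moves halfway toward the reward alone.
print(sarsa_update(q_sa=-10.0, r=-1.0, q_next=0.0, done=True))    # -5.5
```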
# + deletable=true editable=true
plt.plot(x, y)
plt.xlabel('Time steps')
plt.ylabel('Episodes')
plt.show()
# + [markdown] deletable=true editable=true
# In the same number of steps, the agent completes more episodes with eight actions than with four, so the richer action set is preferable for this gridworld.
#
# ## References
# 1. Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
# + deletable=true editable=true
| temporal difference/windy_gridworld_kings_moves.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
# ___
# ## Chapter 7 - Network Analysis with NetworkX
# ## Segment 4 - Analyzing a social network
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pylab import rcParams
import seaborn as sb
import networkx as nx
# -
# %matplotlib inline
rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
# +
DG = nx.gn_graph(7, seed = 25)
for line in nx.generate_edgelist(DG, data=False):
print(line)
DG.nodes[0]['name'] = 'Alice'
DG.nodes[1]['name'] = 'Bob'
DG.nodes[2]['name'] = 'Claire'
DG.nodes[3]['name'] = 'Dennis'
DG.nodes[4]['name'] = 'Esther'
DG.nodes[5]['name'] = 'Frank'
DG.nodes[6]['name'] = 'George'
# -
G = DG.to_undirected()
print(nx.info(DG))
# #### Considering degrees in a social network
DG.degree()
# #### Identifying successor nodes
nx.draw_circular(DG, node_color='bisque', with_labels=True)
list(DG.successors(3))
list(DG.neighbors(4))
list(G.neighbors(4))
| Ch07/07_04/07_04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rrsguim/PhD_Economics/blob/master/NN4BC/DL_LSTM_1df_EUR.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="dUeKVCYTbcyT"
# ------------------------------------------------------------------------------------------------------
# Copyright (c) 2020 <NAME>
#
# This work was done when I was at the University of California, Riverside, USA.
#
# It is part of my doctoral thesis in Economics at the Federal University of
#
# Rio Grande do Sul, Porto Alegre, Brazil.
#
#
# See full material at https://github.com/rrsguim/PhD_Economics
#
# The code below, under the Apache License, was inspired by
#
# *Classification on imbalanced data*, and
#
# *Introduction to the Keras Tuner*
#
# Copyright 2020 The TensorFlow Authors
#
# https://www.tensorflow.org/tutorials/structured_data/imbalanced_data
#
# https://www.tensorflow.org/tutorials/keras/keras_tuner
#
# -------------------------------------------------------------------------
# + [markdown] id="gJT7cOb44eDi"
# # Transfer Learning for Business Cycle Identification
# + [markdown] id="CJW6mqC85CRP"
# ## Setup
# + id="yJHVo_K_v20i"
from __future__ import absolute_import, division, print_function, unicode_literals
# + id="fYBlUQ5FvzxP"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
# + id="43tKdRoNecNy" colab={"base_uri": "https://localhost:8080/"} outputId="af437ad5-1ca6-4a4b-f5ce-effd07967ca0"
# !pip install -U keras-tuner
# + id="JM7hDSNClfoK"
import tensorflow as tf
from tensorflow import keras
import IPython
import kerastuner as kt
from kerastuner import RandomSearch
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# + id="c8o1FHzD-_y_"
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
# + [markdown] id="7JfLUlawto_D"
# ## Deep Learning | EURO
# + [markdown] id="oG4spZ-K5MkL"
# ### Data loading and pre-processing
# + [markdown] id="Rv6BaNhL5eLQ"
# #### Data
#
# All data refer to Euro area from 2005Q1 to 2019Q4.
#
# >Column| Source| Description| Feature Type | Data Type
# >------------|--------------------|----------------------|-----------------|----------------
# >GDP | FRED-MD | CLVMNACSCAB1GQEA19 - Real Gross Domestic Product (19 countries), s.a. | Numerical | float
# >Income | FRED-MD | NAEXKP02EZQ189S - Private Final Consumption Expenditure, s.a. | Numerical | float
# >Employment | FRED-MD | LFESEETTEZQ647S - Employees, s.a. | Numerical | float
# >Industry | FRED-MD | PRMNTO01EZQ657S - Total Manufacturing Production, s.a. | Numerical | float
# >Sales | FRED-MD | SLRTTO01EZQ657S - Volume of Total Retail Trade sales, s.a. | Numerical | float
# >Target | CEPR | CEPR based Recession Indicator (1 = true; 0 = false) | Classification | integer
# + id="pR_SnbMArXr7"
file = tf.keras.utils
EURO_raw_data = pd.read_csv('https://raw.githubusercontent.com/rrsguim/PhD_Economics/master/TL4BC/TL4BC_Euro_data_2005.csv')
# + id="rGVtGyAas2Hz"
EURO_raw_data.index = EURO_raw_data['DATE']
drop_DATE = EURO_raw_data.pop('DATE')
EURO_raw_data.index = pd.to_datetime(EURO_raw_data.index,infer_datetime_format=True)
EURO_raw_data.index = EURO_raw_data.index.to_period("Q")
# + [markdown] id="ABKRH0il6j3f"
# #### Examine the class label imbalance
# + id="HCJFrtuY2iLF" colab={"base_uri": "https://localhost:8080/"} outputId="c30b021b-f294-4c0e-a886-f48fe7a1334a"
neg, pos = np.bincount(EURO_raw_data['CEPR'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
# + [markdown] id="pScKKNZn66QT"
# #### Transform the features into log first difference.
# + id="Ef42jTuxEjnj"
log_data = EURO_raw_data.copy()
log_data['GDP'] = np.log(log_data['GDP'])
log_data['Income'] = np.log(log_data['Income'])
log_data['Employment'] = np.log(log_data['Employment'])
log_data['Industry'] = np.log(log_data['Industry'])
log_data['Sales'] = np.log(log_data['Sales'])
# + id="1NEm9gNt0LJo"
log_1df = log_data.copy()
log_1df['GDP'] = log_data['GDP'] - log_data['GDP'].shift(1)
log_1df['Income'] = log_data['Income'] - log_data['Income'].shift(1)
log_1df['Employment'] = log_data['Employment'] - log_data['Employment'].shift(1)
log_1df['Industry'] = log_data['Industry'] - log_data['Industry'].shift(1)
log_1df['Sales'] = log_data['Sales'] - log_data['Sales'].shift(1)
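The log first difference used above approximates the period-over-period growth rate (log(x_t) - log(x_{t-1}) ≈ x_t/x_{t-1} - 1 for small changes). A quick stdlib check with toy numbers, not the Euro data:

```python
import math

# A hypothetical series growing 2% per period.
series = [100.0, 102.0, 104.04]

# Log first difference vs. the exact growth rate.
log_diff = [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]
growth = [b / a - 1 for a, b in zip(series, series[1:])]

for ld, g in zip(log_diff, growth):
    print('log-diff {:.5f} vs growth {:.5f}'.format(ld, g))
```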
# + [markdown] id="4ncZt1hD0Sr2"
# Drop the first row, which contains NaNs introduced by the first difference, and inspect the last rows of the adjusted dataset.
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="KXT5fmYH0TOK" outputId="808b3dbe-d40c-4bca-8cc4-98b93b745fc0"
log_1df = log_1df[1:]
log_1df.tail()
# + [markdown] id="A9UejVk58KJ-"
# #### Inspect pre-processed data
# + id="brunxYyx8Nje" colab={"base_uri": "https://localhost:8080/", "height": 593} outputId="11b211f7-dec8-44bd-cfa7-f4053f5dec70"
plt.plot(drop_DATE[1:], log_1df['GDP'], label='GDP')
plt.plot(drop_DATE[1:], log_1df['Income'], label='Income')
plt.plot(drop_DATE[1:], log_1df['Employment'], label='Employment')
plt.plot(drop_DATE[1:], log_1df['Industry'], label='Industry')
plt.plot(drop_DATE[1:], log_1df['Sales'], label='Sales')
plt.bar(drop_DATE, EURO_raw_data['CEPR']/-20, width=1, linewidth=1, align='center', color="lightgray", label='CEPR')
plt.legend()
plt.show()
# + [markdown] id="n9RDMZQq8sV9"
# #### Split, but no shuffle
#
# Split the dataset into train, validation, and test sets. The validation set is used during model fitting to evaluate the loss and metrics; however, the model is not fit on this data. The test set is completely unused during training and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets, where overfitting is a significant concern given the scarcity of training data.
# + id="xfxhKg7Yr1-b"
# Use a utility from sklearn to split our dataset.
train_df, test_df = train_test_split(log_1df, test_size=0.7, random_state=0, shuffle=False)  # so the test set contains at least one recession
train_df, val_df = train_test_split(train_df, test_size=0.3, random_state=0, shuffle=False)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('CEPR'))
#bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('CEPR'))
test_labels = np.array(test_df.pop('CEPR'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
# + [markdown] id="a7rZsWJ89bdf"
# #### Normalize
#
# Normalize the input features using the sklearn StandardScaler.
# This will set the mean to 0 and standard deviation to 1.
#
# Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
# + id="IO-qEUmJ5JQg"
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
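The fit-on-train / apply-everywhere pattern above can be sketched with the stdlib (illustrative numbers; `StandardScaler` does the same per column): statistics come from the training split only, then the same transform, plus clipping, is applied to every split.

```python
def fit_standardizer(values):
    """Compute mean and (population) std on the training values only."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var ** 0.5

def transform(values, mean, std, clip=5.0):
    # Reuse the training statistics for validation/test; clip extremes.
    return [max(-clip, min(clip, (v - mean) / std)) for v in values]

train = [1.0, 2.0, 3.0, 4.0]
mean, std = fit_standardizer(train)
z = transform(train, mean, std)
print(z)
print(transform([100.0], mean, std))  # far-out test value gets clipped to 5.0
```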
# + [markdown] id="ESvy8DSYlGUU"
# Adjust model input dim because of LSTM
# + id="ZNQbG84ClEb8"
train_features = np.expand_dims(train_features, 1)
val_features = np.expand_dims(val_features, 1)
test_features = np.expand_dims(test_features, 1)
# + [markdown] id="ciKNINi7-dFB"
# #### Initial bias
#
# The correct bias to set can be derived from:
#
# $$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
# $$ b_0 = -log_e(1/p_0 - 1) $$
# $$ b_0 = log_e(pos/neg)$$
#
# Set that as the initial bias, and the model will give much more reasonable initial guesses.
# + id="F5KWPSjjstUS"
initial_bias = np.log([pos/neg])
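Plugging hypothetical class counts into $b_0 = \log_e(pos/neg)$ and pushing $b_0$ back through the sigmoid recovers the positive base rate, which is exactly why this bias gives reasonable initial guesses. A quick stdlib sanity check (the counts here are made up, not the CEPR data):

```python
import math

pos, neg = 9, 51          # hypothetical recession / expansion quarter counts
b0 = math.log(pos / neg)  # initial output-layer bias

p0 = 1.0 / (1.0 + math.exp(-b0))  # sigmoid(b0)
print(p0, pos / (pos + neg))      # both equal the positive base rate
```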
# + [markdown] id="lRgMMwGp-6cP"
# ### Define the model and metrics
#
# Define a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of recession. Also, pick the optimal set of hyperparameters with [Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner).
# + id="3JQDzUqT3UYG"
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(hp, metrics = METRICS, output_bias = initial_bias):
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential()
#model.add(keras.layers.Flatten(input_shape=(train_features.shape[-1],)))
# Tune the number of units in the layers
# Choose an optimal value between 16-256
hp_units = hp.Int('units', min_value = 16, max_value = 256, step = 16)
model.add(keras.layers.LSTM(units = hp_units, input_shape = (1,train_features.shape[-1],), dropout = 0.3)) #LSTM layer
model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) #Dense layer 1
model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) #Dense layer 2
model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) #Dense layer 3
model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) #Dense layer 4
    model.add(keras.layers.Dropout(0.5)) # To prevent overfitting
model.add(keras.layers.Dense(1, activation='sigmoid', bias_initializer=output_bias)) # Output layer
# Tune the learning rate for the optimizer
# Choose an optimal value from 0.01, 0.001, or 0.0001
hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4])
model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
loss = keras.losses.BinaryCrossentropy(),
metrics = metrics)
return model
# + id="Lu5g3-g6T7q0"
tuner = kt.Hyperband(make_model,
kt.Objective('val_auc', direction='max'), # Maximizes Area Under the ROC Curve
max_epochs = 10,
factor = 3,)
#project_name = 'TL4BC')
# + [markdown] id="XBkamfpAAG-Y"
# #### Understanding useful metrics
#
# Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
#
#
#
# * **False** negatives and **false** positives are samples that were **incorrectly** classified
# * **True** negatives and **true** positives are samples that were **correctly** classified
# * **Accuracy** is the percentage of examples correctly classified
# > $\frac{\text{true positives + true negatives}}{\text{total samples}}$
# * **Precision** is the percentage of **predicted** positives that were correctly classified
# > $\frac{\text{true positives}}{\text{true positives + false positives}}$
# * **Recall** is the percentage of **actual** positives that were correctly classified
# > $\frac{\text{true positives}}{\text{true positives + false negatives}}$
# * **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
#
#
# Read more:
# * [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
# * [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
# * [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
# * [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
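The bulleted definitions above reduce to a few ratios over the four confusion-matrix counts. A stdlib sketch with made-up counts (not model output):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, and recall from raw confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        'accuracy': (tp + tn) / total,
        'precision': tp / (tp + fp),   # of predicted positives, how many were right
        'recall': tp / (tp + fn),      # of actual positives, how many were found
    }

m = classification_metrics(tp=8, fp=2, tn=45, fn=5)
print(m)
```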
# + [markdown] id="CrOWQtQuIoz9"
# ### Train the model
# + [markdown] id="4hKix_JPH5lq"
# Before running the hyperparameter search, define a callback to clear the training outputs at the end of every training step.
# + id="0vbL_9RaH6Xv"
class ClearTrainingOutput(tf.keras.callbacks.Callback):
def on_train_end(*args, **kwargs):
IPython.display.clear_output(wait = True)
# + [markdown] id="LIzOvr5AIT8L"
# Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.model.fit in addition to the callback above.
# + id="sA-2VTjfIUfL" colab={"base_uri": "https://localhost:8080/", "height": 380} outputId="bac5c70e-90c2-454e-9bf4-e582dc072bdd"
tuner.search(train_features, train_labels,
epochs=50,
validation_data=(val_features, val_labels), callbacks = [ClearTrainingOutput()])
# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]
print(f"""
The hyperparameter search is complete. The optimal number of units in the
layers is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
# + [markdown] id="cVPnf6mIJodT"
# Retrain the model with the optimal hyperparameters from the search.
# + id="LqLQQDGKJqkk" colab={"base_uri": "https://localhost:8080/"} outputId="0d4ddd96-b115-48ca-e047-7ba9230bb74d"
# Build the model with the optimal hyperparameters and train it on the data
model = tuner.hypermodel.build(best_hps)
baseline_history = model.fit(train_features, train_labels,
epochs=50,
validation_data=(val_features, val_labels))
# + colab={"base_uri": "https://localhost:8080/"} id="TWVbFZaqV9um" outputId="fe8b93c3-e579-4f7c-bb2a-cedb2a209ceb"
model.summary()
# + [markdown] id="15V1uaAsM5bR"
# ### Results
# + [markdown] id="SQez8dYQO4G2"
# #### Compare with CEPR
# + id="VXgYxb3vKE9F" colab={"base_uri": "https://localhost:8080/", "height": 592} outputId="0f336826-85ac-4736-cee1-c3b57ed0487c"
EURODL_model = model.predict(test_features)
time_axis = range(0,test_labels.shape[0])
plt.bar(time_axis, test_labels.T, width=1, linewidth=1, align='center', color="lightgray", label='CEPR')
plt.plot(time_axis, EURODL_model, '-', color="royalblue", label='EURODL_model')
plt.legend()
plt.show()
# + [markdown] id="zjCmyIsVMuys"
# #### Confusion matrix
#
# We can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels.
# + id="l5n43LqOM_vG"
test_predictions_baseline = model.predict(test_features)
# + id="4ugClbDuNNat"
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Expansions Detected (True Negatives): ', cm[0][0])
print('Expansions Incorrectly Detected (False Positives): ', cm[0][1])
print('Recessions Missed (False Negatives): ', cm[1][0])
print('Recessions Detected (True Positives): ', cm[1][1])
print('Total Recessions: ', np.sum(cm[1]))
# + [markdown] id="6xKUhOdzNT6o"
# Evaluate the model on the TEST dataset and display the results for the metrics you created above.
# + id="ivZOj7SSNX69" colab={"base_uri": "https://localhost:8080/", "height": 605} outputId="f0197ec6-9171-464f-ade2-4fae181e5126"
baseline_results = model.evaluate(test_features, test_labels, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
| TL4BC/DL_LSTM_1df_EUR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import scipy.stats
import pandas as pd
import matplotlib
import matplotlib.pyplot as pp
from IPython import display
from ipywidgets import interact, widgets
# %matplotlib inline
# -
threads_number = 300
stats_base_filename = '/home/rzavalet/stats/base_' + str(threads_number) + '.csv'
stats_ac_filename = '/home/rzavalet/stats/ac_' + str(threads_number) + '.csv'
stats_aup_filename = '/home/rzavalet/stats/aup_' + str(threads_number) + '.csv'
stats_ac_aup_filename = '/home/rzavalet/stats/ac_aup_' + str(threads_number) + '.csv'
stats_base = pd.read_csv(stats_base_filename)
stats_ac = pd.read_csv(stats_ac_filename)
stats_aup = pd.read_csv(stats_aup_filename)
stats_ac_aup = pd.read_csv(stats_ac_aup_filename)
# +
stats_base['sample'] = range(stats_base['count'].size)
stats_base['sample'] = stats_base['sample'] + 1
stats_ac['sample'] = range(stats_ac['count'].size)
stats_ac['sample'] = stats_ac['sample'] + 1
stats_aup['sample'] = range(stats_aup['count'].size)
stats_aup['sample'] = stats_aup['sample'] + 1
stats_ac_aup['sample'] = range(stats_ac_aup['count'].size)
stats_ac_aup['sample'] = stats_ac_aup['sample'] + 1
# -
plot_style='o-'
line_styles=['o-','x-', '+-', '-']
tpm_df = pd.DataFrame({'Base': stats_base['tpm'],
'AC': stats_ac['tpm'],
'AUP': stats_aup['tpm'],
'AC_AUP': stats_ac_aup['tpm']})
tpm_df['sample'] = range(stats_aup['count'].size)
tpm_df['sample'] += 1
# +
# Plot the Number of transactions per minute for the different methods
tpm_df.plot('sample', kind='line', style=line_styles, figsize=(10, 5), grid=True,)
pp.title('Transactions per minute (TPM)')
pp.ylabel('TPM')
pp.xlabel('Sampling Period')
# -
ttpm_df = pd.DataFrame({'Base': stats_base['ttpm'],
'AC': stats_ac['ttpm'],
'AUP': stats_aup['ttpm'],
'AC_AUP': stats_ac_aup['ttpm']})
ttpm_df['sample'] = range(stats_aup['count'].size)
ttpm_df['sample'] += 1
# +
# Plot the Number of timely transactions per minute for the different methods
ttpm_df.plot('sample', kind='line', style=line_styles, figsize=(10, 5), grid=True,)
pp.title('Timely Transactions per minute (TTPM)')
pp.ylabel('TTPM')
pp.xlabel('Sampling Period')
# -
ttpm_rate_df = pd.DataFrame({'Base': stats_base['ttpm']/stats_base['tpm'],
                             'AC': stats_ac['ttpm']/stats_ac['tpm'],
                             'AUP': stats_aup['ttpm']/stats_aup['tpm'],
                             'AC_AUP': stats_ac_aup['ttpm']/stats_ac_aup['tpm']})
ttpm_rate_df['sample'] = range(ttpm_rate_df['AC'].size)
ttpm_rate_df['sample'] += 1
# +
# Plot the rate of timely transactions per minute for the different methods
ttpm_rate_df.plot('sample', kind='line', style=line_styles, figsize=(10, 5), grid=True,)
pp.title('Rate of Timely Transactions per minute (TTPM)')
pp.ylabel('TTPM rate')
pp.xlabel('Sampling Period')
# -
avg_delay_df = pd.DataFrame({'Base': stats_base['average_service_delay_ms'][1:],
'AC': stats_ac['average_service_delay_ms'][1:],
'AUP': stats_aup['average_service_delay_ms'][1:],
'AC_AUP': stats_ac_aup['average_service_delay_ms'][1:]})
avg_delay_df['sample'] = range(avg_delay_df['AC'].size)
avg_delay_df['sample'] += 2
# +
# Plot the average service delay for the different methods
avg_delay_df.plot('sample', kind='line', style=line_styles, figsize=(10, 5), grid=True, ylim=(0,200))
pp.title('Average Service Delay')
pp.ylabel('milliseconds')
pp.xlabel('Sampling Period')
# -
| JupyterNotebooks/LinuxCombinedExperimentGraphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Gensim
from gensim.test.utils import datapath
from gensim import utils
import gensim.models
# NLTK
from nltk import pos_tag
# Misc
import random
from IPython.display import clear_output
# -
class MyCorpus(object):
    """An iterator that yields sentences (lists of str)."""
def __iter__(self):
corpus_path = datapath("lee_background.cor")
for line in open(corpus_path):
# assume there's one document per line, tokens separated by whitespace
yield utils.simple_preprocess(line)
sentences = MyCorpus()
model = gensim.models.Word2Vec(sentences=sentences)
all_vec_words = list(model.wv.vocab.keys())
num_vec_words = len(all_vec_words)
# +
# Game constants
NUM_WORDS = 25
NUM_GREEN_WORDS = 1
NUM_RED_WORDS = 9
NUM_BLUE_WORDS = 8
NUM_COLUMNS = 5
NUM_ROWS = 5
# Color constants
GREEN = "green"
RED = "red"
BLUE = "blue"
BLACK = "black"
# Visual constants
WORD_WIDTH = 15
ROW_PADDING = 1
# +
# String helpers
def paint(text, colorNum):
return "\x1b[{}m{}\x1b[0m".format(colorNum, text)
red = lambda text: paint(text, 91)
blue = lambda text: paint(text, 94)
green = lambda text: paint(text, 92)
black = lambda text: text
gray = lambda text: paint(text, 37)
colorizer_map = {
RED: red,
BLUE: blue,
GREEN: green,
BLACK: black
}
pad = lambda word: word + (" " * (WORD_WIDTH - len(word)))
# -
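The helpers above wrap text in ANSI escape sequences and pad words to a fixed column width. A self-contained check of the format (same scheme as the cell above, reproduced here so it can run standalone):

```python
WORD_WIDTH = 15  # same constant as the game board

def paint(text, color_num):
    # \x1b[<n>m starts a color; \x1b[0m resets it.
    return "\x1b[{}m{}\x1b[0m".format(color_num, text)

def pad(word):
    return word + " " * (WORD_WIDTH - len(word))

sample = paint(pad("clue"), 91)  # 91 = bright red
print(repr(sample))
```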
class Word():
def __init__(self, string, color):
self.string = string
self.color = color
self.has_been_guessed = False
def __str__(self):
padded_string = pad(self.string)
colorizer = gray
if self.has_been_guessed:
colorizer = colorizer_map[self.color]
return colorizer(padded_string)
def guess(self):
self.has_been_guessed = True
# +
# Generate words
def get_color(wordIndex):
if wordIndex < NUM_GREEN_WORDS:
return GREEN
if wordIndex < NUM_GREEN_WORDS + NUM_RED_WORDS:
return RED
if wordIndex < NUM_GREEN_WORDS + NUM_RED_WORDS + NUM_BLUE_WORDS:
return BLUE
return BLACK
def generate_words():
words = []
strings = []
while len(words) < NUM_WORDS:
        index = random.randrange(num_vec_words)  # randrange excludes the upper bound, avoiding an IndexError
string = all_vec_words[index]
tags = pos_tag([string])
is_good_length = len(string) > 3 and len(string) < 10
is_noun = tags[0][1] == "NN"
is_dup = string in strings
if is_good_length and is_noun and not is_dup:
color = get_color(len(words))
words.append(Word(string, color))
random.shuffle(words)
return words
# -
class Game():
def __init__(self):
self.words = generate_words()
self.brain = self.init_brain()
def init_brain(self):
red_words = []
blue_words = []
bad_words = []
for word in self.words:
if word.color == RED:
red_words.append(word.string)
elif word.color == BLUE:
blue_words.append(word.string)
else:
bad_words.append(word.string)
# TODO: self.brain = Brain(red_words, blue_words, bad_words)
def get_word_strings(self):
return list(map(lambda word: word.string, self.words))
def get_user_guess(self):
guess = input()
if guess not in self.get_word_strings():
print("Invalid guess. Try again.")
return self.get_user_guess()
return guess
def print_board_state(self):
for i in range(NUM_ROWS):
row = ""
row_padding = "\n" * ROW_PADDING
for j in range(NUM_COLUMNS):
word_index = i * NUM_COLUMNS + j
word = self.words[word_index]
row += str(word)
print(row + row_padding)
def print_player(self, player_color):
colorizer = colorizer_map[player_color]
player = colorizer(player_color.upper())
print("Player: {}".format(player))
def guess_word(self, guess, player_color):
for word in self.words:
if word.string == guess:
word.guess()
return word.color == player_color
def give_hint(self, player_color):
# Get hint
# TODO: hint_word, num_hinted_words = self.brain.give_hint()
hint_word, num_hinted_words = "yote", 2
# Let user guess
num_guesses = num_hinted_words + 1
for i in range(num_guesses):
# Print board state
clear_output(wait = True)
self.print_player(player_color)
print("Hint: {}".format(hint_word))
print("Remaining guess: {}\n".format(num_guesses - i))
self.print_board_state()
# Get user guess
guess = self.get_user_guess()
is_guess_correct = self.guess_word(guess, player_color)
if not is_guess_correct:
break
# +
# Play the game
game = Game()
players = [RED, BLUE]
turn_index = 0
while True:
player = players[turn_index]
game.give_hint(player)
turn_index = (turn_index + 1) % 2
| board_builder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %run base.py
from sympy import init_printing
init_printing()
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# # Building the DBV function
# +
m=5
n=5
xvec2 = symbols(f'x1:{n+1}')
xvec2  # symbolic variables
rlist = []
h = Rational(1, n)
tlist = [Rational(i+1, n+1) for i in range(n)]
for i in range(m):  # i indexes the grid points, i.e. x_i and t_i
if i==0:#x_1
rlist.append(2*xvec2[i]-0-xvec2[i+1]+h**2 *(xvec2[i]+tlist[i]+1)**3/2)
elif i==n-1:  # x_n
rlist.append(2*xvec2[i]-xvec2[i-1]-0+h**2 *(xvec2[i]+tlist[i]+1)**3/2)
else:
rlist.append(2*xvec2[i]-xvec2[i-1]-xvec2[i+1]+h**2 *(xvec2[i]+tlist[i]+1)**3/2)
for rr in rlist:
rr
# -
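# The residuals built above come from a central-difference discretization of a
# two-point boundary value problem (a sketch of the assumed derivation; the
# original reference is not shown in this notebook):

```latex
% Discretize x''(t) = \tfrac{1}{2}\,(x + t + 1)^3 on (0, 1) with
% boundary conditions x(0) = x(1) = 0 on a grid t_1 < \dots < t_n,
% using the central difference
%
%   x''(t_i) \approx \frac{x_{i-1} - 2 x_i + x_{i+1}}{h^2},
%   \qquad x_0 = x_{n+1} = 0 .
%
% Moving everything to one side gives the residuals used in rlist:
%
%   r_i(x) = 2 x_i - x_{i-1} - x_{i+1}
%            + \frac{h^2}{2}\,\bigl(x_i + t_i + 1\bigr)^3
```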
# %%time
DBV = 0
for rx in rlist:
DBV += rx
DBV
foo_DBV = lambdify(xvec2,DBV,'numpy')
x00 = list(( ((i+1)/(n+1))*((i+1)/(n+1)-1) for i in range(n)))
x00
foo_DBV(*x00)
gexpr = get_g(DBV, xvec2)  # this loop is slow
gexpr
Gexpr = get_G(DBV, xvec2)
Gexpr
# %%time
xvec_DBV = symbols(f'x1:{n+1}')
x = modified_newton(DBV, xvec_DBV, x00, eps=1e-1, maxiter=5000)
print('x result:', x)
print("Function value:", foo_DBV(*x))
# > A special case appeared and caused the iteration to blow up
#
# When a singular case appears, the negative gradient direction is used instead
#
# x็ปๆ๏ผ [[-2.03847409e+166]
# [ 2.74493979e+000]
# [-8.32801226e-001]
# [-9.50239108e-001]
# [-1.06372814e+000]]
#
# Wall time: 36.5 ms
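# The negative-gradient fallback mentioned above can be sketched in a few lines.
# This is an illustrative stand-in, not the actual `modified_newton` from
# `base.py`, whose source is not shown here:

```python
import numpy as np

def descent_direction(G, g):
    # Newton direction d = -G^{-1} g; fall back to steepest descent -g
    # when the Hessian G is singular and the solve fails
    g = np.asarray(g, dtype=float)
    try:
        return np.linalg.solve(G, -g)
    except np.linalg.LinAlgError:
        return -g

# A singular Hessian triggers the negative-gradient fallback:
print(descent_direction(np.zeros((2, 2)), [1.0, -2.0]))  # [-1.  2.]
```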
# %%time
xvec_DBV = symbols(f'x1:{n+1}')
x = damped_newton(DBV, xvec_DBV, x00, eps=1e-1, maxiter=5000)
print('Result:', x)
print("Function value:", foo_DBV(*x))
# > The damped Newton method offers no remedy for the singular case
# %%time
xvec_DBV = symbols(f'x1:{n+1}')
x = quasi_newton(DBV, xvec_DBV,x00, eps=1e-1, maxiter=5000)
print('Result:', x)
print("Function value:", foo_DBV(*x))
# > Also stopped at the iteration limit, but with some optimization progress
#
# g called for the 5001st time
# f called for the 240001st time
# ็ปๆ๏ผ [[-3.09140573]
# [ 0.77552876]
# [-0.35757999]
# [-0.36318935]
# [-0.32936904]]
# Wall time: 5.09 s
# %%time
xvec_DBV = symbols(f'x1:{n+1}')
x = quasi_newton(DBV, xvec_DBV, x00, eps=1e-1, maxiter=5000,method='SR1')
print('Result:', x)
print("Function value:", foo_DBV(*x))
# > Also stopped at the iteration limit, but with some optimization progress
#
# g called for the 5001st time
# f called for the 240001st time
# ็ปๆ๏ผ [[-3.09140158]
# [ 0.77556638]
# [-0.35759143]
# [-0.36320125]
# [-0.32938126]]
# Wall time: 4.87 s
# %%time
xvec_DBV = symbols(f'x1:{n+1}')
x = quasi_newton(DBV, xvec_DBV, x00, eps=1e-1, maxiter=5000,method='DFP')
print('Result:', x)
print("Function value:", foo_DBV(*x))
# > Also stopped at the iteration limit, but with some optimization progress
#
# g called for the 5001st time
# f called for the 240001st time
# ็ปๆ๏ผ [[-3.09140353]
# [ 0.77546422]
# [-0.35757021]
# [-0.36317677]
# [-0.32935233]]
# Wall time: 4.97 s
| Discrete-boundary-value.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Calibrating Denoisers Using J-Invariance
#
# In this example, we show how to find an optimally calibrated
# version of any denoising algorithm.
#
# The calibration method is based on the `noise2self` algorithm of [1]_.
#
# .. [1] <NAME> & <NAME>. Noise2Self: Blind Denoising by Self-Supervision,
# International Conference on Machine Learning, p. 524-533 (2019).
#
# .. seealso::
# More details about the method are given in the full tutorial
# `sphx_glr_auto_examples_filters_plot_j_invariant_tutorial.py`.
#
# Calibrating a wavelet denoiser
#
#
# +
import numpy as np
from matplotlib import pyplot as plt
from skimage.data import chelsea
from skimage.restoration import calibrate_denoiser, denoise_wavelet
from skimage.util import img_as_float, random_noise
from functools import partial
# rescale_sigma=True required to silence deprecation warnings
_denoise_wavelet = partial(denoise_wavelet, rescale_sigma=True)
image = img_as_float(chelsea())
sigma = 0.3
noisy = random_noise(image, var=sigma ** 2)
# Parameters to test when calibrating the denoising algorithm
sigma_range = np.arange(0.05, 0.5, 0.05)
parameter_ranges = {'sigma': np.arange(0.1, 0.3, 0.02),
'wavelet': ['db1', 'db2'],
'convert2ycbcr': [True, False],
'multichannel': [True]}
# Denoised image using default parameters of `denoise_wavelet`
default_output = denoise_wavelet(noisy, multichannel=True, rescale_sigma=True)
# Calibrate denoiser
calibrated_denoiser = calibrate_denoiser(noisy,
_denoise_wavelet,
denoise_parameters=parameter_ranges)
# Denoised image using calibrated denoiser
calibrated_output = calibrated_denoiser(noisy)
fig, axes = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(15, 5))
for ax, img, title in zip(
axes,
[noisy, default_output, calibrated_output],
['Noisy Image', 'Denoised (Default)', 'Denoised (Calibrated)']
):
ax.imshow(img)
ax.set_title(title)
ax.set_yticks([])
ax.set_xticks([])
plt.show()
| digital-image-processing/notebooks/filters/plot_j_invariant.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Support Vector Machines for Binary Classification
#
# (NEEDS TO BE REWRITTEN FOR CONSISTENCY WITH FINAL VERSION OF THE NOTEBOOK).
#
# Creating binary classifiers from sample data is an example of supervised machine learning. This notebook shows how to create a class of binary classifiers known as support vector machines (SVM) from sample data using linear, quadratic, and conic programming. The first implementation produces linear support vector machines that separate the "feature space" with a hyperplane. The implementation uses a dual formulation that extends naturally to non-linear classification.
#
# Like many machine learning techniques based on regression, an SVM classifier can be computed from the solution to an optimization problem. The use of modeling languages and general purpose solvers can support small to medium-sized applications of this approach.
#
# The dual optimization problem is the basis for a second implementation. A technical feature of the dual problem extends support vector machines to nonlinear classifiers that have proven highly successful in a wide range of applications.
# ## Bibliographic Notes
#
# The development of support vector machines is largely attributed to Vladimir Vapnik and colleagues at AT&T Bell Laboratories during the 1990's. The seminal papers are highly readable and entry points to the literature.
#
# > <NAME>., <NAME>., & <NAME>. (1992, July). A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory (pp. 144-152). https://dl.acm.org/doi/10.1145/130385.130401
#
# > <NAME>., & <NAME>. (1995). Support-vector networks. Machine learning, 20(3), 273-297. https://link.springer.com/content/pdf/10.1007/bf00994018.pdf
#
# Support vector machines are a widely used method for supervised machine learning and described in tutorial blog postings and trade journal articles. Representative examples include
#
# > <NAME>. (2020). Support Vector Machines with Amazon Food Reviews https://medium.com/analytics-vidhya/support-vector-machines-with-amazon-food-reviews-9fe0428e09ef
#
# > http://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-linear-svm/
#
#
# ## Binary Classification
#
# Binary classifiers are functions that answer questions like "does this medical test indicate disease?", "will that customer like this movie?", "does this photo contain the image of a car?", or "is this banknote authentic or counterfeit?" The answer is based on the values of "features" that may include physical measurements, values representing the color of image pixels, or data collected from a web page. Depending on the application requirements, classifiers can be tuned for precision (meaning few false positives), recall (meaning few false negatives), or some trade-off between these qualities.
#
# * **Precision**. The number of true positives divided by the number of predicted positives. High precision implies a low false positive rate.
#
# * **Recall**. The number of true positives divided by the number of actual positives. High recall implies a low false negative rate.
#
# Consider, for example, a device that rejects counterfeit banknotes for a vending machine. A false positive would mean the vending machine rejects a genuine banknote, which would be frustrating to the user. Users of the vending machine, therefore, would prefer a device with high precision.
#
# On the other hand, a false negative would mean the vending machine would accept a counterfeit banknote. The owner of the vending machine, therefore, would prefer a device with high recall.
#
# The challenge of developing binary classifiers is to find features, and functions to evaluate those features, that provide the precision and recall needed for a particular application.
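# The two measures above can be computed directly from confusion-matrix counts;
# a quick sketch with hypothetical counts for a counterfeit detector:

```python
def precision(tp, fp):
    # true positives / predicted positives
    return tp / (tp + fp)

def recall(tp, fn):
    # true positives / actual positives
    return tp / (tp + fn)

# Hypothetical counts: 90 counterfeits caught, 10 genuine notes
# wrongly flagged, 30 counterfeits missed.
print(precision(90, 10))  # 0.9
print(recall(90, 30))     # 0.75
```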
# ## The Data Set
#
# The following data set contains data from a collection of genuine and counterfeit banknote specimens. The data includes four continuous statistical measures obtained from the wavelet transform of banknote images and a binary variable where 0 indicates genuine and 1 indicates counterfeit.
#
# https://archive.ics.uci.edu/ml/datasets/banknote+authentication
# ### Read data
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# read data set
df = pd.read_csv("data_banknote_authentication.txt", header=None)
df.columns = ["variance", "skewness", "curtosis", "entropy", "class"]
df.name = "Banknotes"
df.describe()
# -
# ### Select features and training sets
# +
# create training and validation test sets
df_train, df_test = train_test_split(df, test_size=0.2)
df_train = df_train.reset_index()
df_test = df_test.reset_index()
features = ["variance", "skewness"]
# separate into features and outputs
X_train = df_train[features]
y_train = 2 * df_train["class"] - 1
# separate into features and outputs
X_test = df_test[features]
y_test = 2 * df_test["class"] - 1
def plot_Xy(X, y, ax=None):
if ax is None:
fig, ax = plt.subplots()
X[y < 0].plot(x=0, y=1, kind="scatter", ax=ax, c="g", alpha=0.5, label="genuine")
X[y > 0].plot(
x=0, y=1, kind="scatter", ax=ax, c="r", alpha=0.5, label="counterfeit"
)
ax.axis("equal")
return ax
plot_Xy(X_train, y_train)
# -
# ## Linear Support Vector Machines (SVM)
#
# A linear support vector machine is a binary classifier that uses a linear expression to determine the classification.
#
# $$y = \text{sgn}\ ( w^\top x + b)$$
#
# where $w\in \mathbb{R}^p$ is a set of coefficients and $w^\top x$ is the dot product. In effect, the linear function divides the feature space $\mathbb{R}^p$ with a hyperplane specified by $w$ and $b$.
#
# A training or validation set consists of $n$ observations $(x_i, y_i)$ where $y_i = \pm 1$ and $x_i\in\mathbb{R}^p$ for $i=1, \dots, n$. The training task is to find coefficients $w\in\mathbb{R}^p$ and $b\in\mathbb{R}$ to achieve high precision and high recall for a validation set. All points $(x_i, y_i)$ for $i = 1, \dots, n$ in a training or validation set are successfully classified if
#
# $$
# \begin{align*}
# y_i (w^\top x_i + b) & > 0 & \forall i = 1, 2, \dots, n
# \end{align*}
# $$
#
# The strict inequality can be replaced by
#
# $$
# \begin{align*}
# y_i (w^\top x_i + b) & \geq 1 & \forall i = 1, 2, \dots, n
# \end{align*}
# $$
#
# which defines a **hard-margin** classifier where the size of the margin is determined by the scale of $w$ and $b$. The sample data displayed above shows it is not always possible to perfectly separate a data set into two classes. For that reason a **soft-margin** classifier is defined by slack variables $z_i \geq 0$
#
# $$y_i (w^\top x_i + b) \geq 1 - z_i $$
#
# For parameters $w$ and $b$, every point that satisfies the constraint with $z_i = 0$ is correctly classified with a margin of at least $\frac{1}{\|w\|}$ from the separating hyperplane. Points where $0 < z_i < 1$ will also be correctly classified, but inside the margin. Points with slack variable $z_i > 1$ will be misclassified.
#
# Given parameters $w$ and $b$, the **hinge-loss** function is defined as
#
# $$\ell(x, y) = \left(1 - y(w^\top x + b)\right)_+$$
#
# using the notation $\left(z\right)_+ = \max(0, z)$.
#
# The hinge-loss function has properties that make it useful for fitting linear support vector machines. For a properly classified point the hinge loss is less than one but never smaller than zero. For a misclassified point, however, the hinge loss is greater than one and grows in proportion to how far the feature vector lies from the separating hyperplane. Minimizing the sum of hinge losses locates a hyperplane that trades off maintaining a margin for correctly classified points against minimizing the distance between the hyperplane and misclassified points.
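# The behavior of the hinge loss is easy to check numerically; a small sketch
# with a made-up classifier and points:

```python
def hinge_loss(x, y, w, b):
    # (1 - y (w.x + b))_+ : zero beyond the margin, growing linearly
    # with distance for misclassified points
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return max(0.0, 1.0 - margin)

w, b = [1.0, 0.0], 0.0
print(hinge_loss([2.0, 0.0], +1, w, b))   # 0.0  (correct, beyond the margin)
print(hinge_loss([0.5, 0.0], +1, w, b))   # 0.5  (correct, inside the margin)
print(hinge_loss([-1.0, 0.0], +1, w, b))  # 2.0  (misclassified)
```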
#
#
# One approach to fitting a linear SVM is to assign a regularization term for $w$. In most formulations a norm $\|w\|$ is used for regularization, commonly a sum of squares such as $\|w\|_2^2$. Another choice is $\|w\|_1$ which, similar to Lasso regression, may result in sparse weighting vectors $w$ indicating which elements of the feature vector can be neglected for classification purposes. These considerations result in the objective function
#
# $$\min_{w, b}\left[ \frac{1}{n}\sum_{i=1}^n \left(1 - y_i(w^\top x_i + b)\right)_+ + \lambda \|w\|_1\right]$$
#
# which can be solved by linear programming.
#
# $$
# \begin{align*}
# \min_{z, w, b}\ & \frac{1}{n} \sum_{i=1}^n z_i + \lambda \|w\|_1 \\
# \text{s.t.}\qquad z_i & \geq 1 - y_i(w^\top x_i + b) & \forall i = 1, 2, \dots, n \\
# z_i & \geq 0 & \forall i = 1, 2, \dots, n
# \end{align*}
# $$
#
# This is the primal optimization problem in decision variables $w\in\mathbb{R}^p$, $b\in\mathbb{R}$, and $z\in\mathbb{R}^n$, a total of $n + p + 1$ unknowns with $2n$ constraints.
# ## Alternative Formulation of Linear SVM
#
# The standard formulation of a linear support vector machine uses training sets of $p$-element feature vectors $x_i\in\mathbb{R}^p$, a classification for those vectors, $y_i = \pm 1$, and a classifier defined by $w\in\mathbb{R}^p$ and $b\in\mathbb{R}$
#
# $$
# \begin{align*}
# y & = \text{sgn}(w^\top x + b)
# \end{align*}
# $$
#
# The parameter $b$ is an annoying term that unnecessarily clutters the presentation and derivations. As an alternative formulation, consider an augmented feature vector $\bar{x} = (1, x) \in \mathbb{R}^{p+1}$ and parameter vector $\bar{w} = (b, w) \in \mathbb{R}^{p+1}$. The linear SVM then becomes
#
# $$
# \begin{align*}
# y & = \text{sgn}(\bar{w}^\top \bar{x})
# \end{align*}
# $$
#
# If a hard-margin classifier exists for a training or validation set $(\bar{x}_i, y_i)$ for $i=1, \dots, n$ then it would satisfy
#
# $$
# \begin{align*}
# y_i \bar{w}^\top \bar{x}_i & \geq 1 & \forall i \in 1, 2, \dots, n
# \end{align*}
# $$
#
# The separating hyperplane consists of all points orthogonal to $\bar{w}$. The distance between $x_i$ and the separating hyperplane is
#
# $$\frac{\bar{w}^\top \bar{x}_i}{\|\bar{w}\|}$$
#
# The soft-margin classifier is found by solving
#
# $$
# \begin{align*}
# \min \frac{1}{2} \|\bar{w}\|_2^2 & + \frac{c}{n}\sum_{i=1}^n z_i \\
# \text{s.t.} \qquad z_i & \geq 1 - y_i \bar{w}^\top \bar{x}_i & \forall i = 1, 2, \dots, n \\
# z_i & \geq 0 & \forall i = 1, 2, \dots, n
# \end{align*}
# $$
#
# Recasting as a conic program
#
# $$
# \begin{align*}
# & \min_{r, \bar{w}, z}\ r + \frac{c}{n} 1^\top z\\
# \text{s. t.}\qquad & (r, 1, \bar{w}) \in Q_r^{3 + p} \\
# & z + F \bar{w} \geq 1 \\
# & z \geq 0 \\
# \end{align*}
# $$
#
# +
import pyomo.kernel as pmo
def svm_conic_primal(X, y, c):
n, p = X.shape
F = np.array([y[i]*np.append(1, X.loc[i, :].to_numpy()) for i in range(n)])
m = pmo.block()
m.r = pmo.variable()
m.w = pmo.variable_list()
for i in range(p + 1):
m.w.append(pmo.variable())
m.z = pmo.variable_list()
for i in range(n):
m.z.append(pmo.variable(lb=0))
m.primal = pmo.objective(expr=m.r + (c/n)*sum(m.z[i] for i in range(n)))
m.qr = pmo.conic.rotated_quadratic.as_domain(m.r, 1, m.w)
m.d = pmo.constraint_dict()
for i in range(n):
m.d[i] = pmo.constraint(body=m.z[i] + sum(F[i, j]*m.w[j] for j in range(p+1)), lb=1)
pmo.SolverFactory('mosek_direct').solve(m)
return m
c = 1.0
# %timeit svm_conic_primal(X_train, y_train, c)
m = svm_conic_primal(X_train, y_train, c)
n, p = X_train.shape
print(m.r.value)
for j in range(p+1):
print(j, m.w[j].value)
print()
for i in range(n):
if m.z[i].value > 1:
print(i, m.z[i].value)
# -
# Creating a differentiable Lagrangian with dual factors $\alpha_i$ for $i = 1, \dots, n$, the task is to find saddle points of
#
# $$
# \begin{align*}
# \mathcal{L} & = \frac{1}{2} \|\bar{w}\|_2^2 + \frac{c}{n}\sum_{i=1}^n z_i + \sum_{i=1}^n \alpha_i (1 - y_i \bar{w}^\top \bar{x}_i - z_i) + \sum_{i=1}^n \beta_i (-z_i) \\
# \end{align*}
# $$
#
# Taking derivatives with respect to the primal variables
#
# $$
# \begin{align*}
# \frac{\partial \mathcal{L}}{\partial z_i} & = \frac{c}{n} - \alpha_i - \beta_i = 0 \implies 0 \leq \alpha_i \leq \frac{c}{n}\\
# \frac{\partial \mathcal{L}}{\partial \bar{w}} & = \bar{w} - \sum_{i=1}^n \alpha_i y_i \bar{x}_i = 0 \implies \bar{w} = \sum_{i=1}^n \alpha_i y_i \bar{x}_i \\
# \end{align*}
# $$
#
# resulting in the dual formulation
#
# $$
# \begin{align*}
# \max_{\alpha_i}\ & \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i=1}^n\sum_{j=1}^n \alpha_i \alpha_j y_i y_j ( \bar{x}_i^\top \bar{x}_j ) \\
# \text{s. t.}\quad & \alpha_i \in \left[0, \frac{c}{n}\right] & i = 1, \dots, n \\
# \end{align*}
# $$
#
# Rearranging as a standard quadratic program in $n$ variables $\alpha_i$ for $i = 1, \dots, n$.
#
# $$
# \begin{align*}
# \min_{\alpha_i}\ & \frac{1}{2} \sum_{i=1}^n\sum_{j=1}^n \alpha_i \alpha_j y_i y_j ( \bar{x}_i^\top \bar{x}_j ) - \sum_{i=1}^n \alpha_i \\
# \text{s. t.}\quad & \alpha_i \in \left[0, \frac{c}{n}\right] & i = 1, \dots, n \\
# \end{align*}
# $$
#
# The $n \times n$ **Gram matrix** is defined as
#
# $$G = \begin{bmatrix}
# \bar{x}_1^\top \bar{x}_1 & \dots & \bar{x}_1^\top \bar{x}_n \\
# \vdots & \ddots & \vdots \\
# \bar{x}_n^\top \bar{x}_1 & \dots & \bar{x}_n^\top \bar{x}_n
# \end{bmatrix}$$
#
# where each entry is dot product of two vectors $\bar{x}_i, \bar{x}_j \in \mathbb{R}^{p+1}$.
#
# Compared to the primal, the dual formulation has reduced the number of decision variables from $n + p + 1$ to $n$. But this comes with the significant penalty of introducing a dense matrix with $n^2$ coefficients and a potential processing time of order $n^3$. For large training sets, $n \sim 10^4$ to $10^5$, this is prohibitively expensive.
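# The Gram matrix is a single matrix product of the augmented feature matrix
# with its transpose; its $n \times n$ size is exactly the storage cost noted
# above. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.normal(size=(n, p))
Xbar = np.hstack([np.ones((n, 1)), X])  # augmented feature vectors (1, x)

G = Xbar @ Xbar.T  # n x n matrix of inner products xbar_i . xbar_j
print(G.shape)              # (6, 6)
print(np.allclose(G, G.T))  # True: the Gram matrix is symmetric
```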
# ### Reformulation as a conic program
#
# Introduce the $n \times (p+1)$ matrix $F$ defined as
#
# $$F = \begin{bmatrix} y_1 \bar{x}_1^\top \\ y_2 \bar{x}_2^\top \\ \vdots \\ y_n \bar{x}_n^\top \end{bmatrix}$$
#
# Then, introducing an additional decision variable $r \geq 0$,
#
# $$
# \begin{align*}
# & \min_{r, \alpha}\ r - 1^\top \alpha\\
# \text{s. t.}\qquad & \alpha^\top F F^\top \alpha \leq 2 r \\
# & \alpha_i \in \left[0, \frac{c}{n}\right] & i = 1, \dots, n \\
# \end{align*}
# $$
#
# Using the notation $\mathcal{Q}^m_r$ for a rotated quadratic cone (for example, see https://docs.mosek.com/modeling-cookbook/cqo.html#equation-eq-sec-qo-modeling-qset2)
#
# $$\mathcal{Q}^m_r = \{z\in\mathbb{R}^m | 2z_1z_2 \geq z_3^2 + \cdots + z_m^2,\ z_1, z_2 \geq 0 \}$$
#
# The quadratic constraint is reformulated as a rotated quadratic cone
#
# $$\frac{1}{2}\alpha^\top F F^\top \alpha \leq r \iff (r, 1, F^\top \alpha) \in Q_r^{3 + p}$$
#
# The reformulated dual problem is then
#
# $$
# \begin{align*}
# & \min_{r, \alpha}\ r - 1^\top \alpha\\
# \text{s. t.}\qquad & (r, 1, F^\top \alpha) \in Q_r^{3 + p} \\
# & \alpha_i \in \left[0, \frac{c}{n}\right] & i = 1, \dots, n \\
# \end{align*}
# $$
#
# The conic reformulation eliminates the need to store an $n\times n$ Gram matrix.
#
# $$
# \begin{align*}
# & \min_{r, \alpha}\ r - 1^\top \alpha\\
# \text{s. t.}\qquad & (r, 1, z) \in Q_r^{3 + p} \\
# & z = F^\top \alpha \\
# & \alpha_i \in \left[0, \frac{c}{n}\right] & i = 1, \dots, n \\
# \end{align*}
# $$
#
# +
import pyomo.kernel as pmo
def svm_conic_dual(X, y, c):
n, p = X.shape
F = np.array([y[i]*np.append(1, X.loc[i, :].to_numpy()) for i in range(n)])
m = pmo.block()
m.r = pmo.variable()
m.a = pmo.variable_list()
for i in range(n):
m.a.append(pmo.variable(lb=0, ub=c/n))
m.z = pmo.variable_list()
for j in range(p + 1):
m.z.append(pmo.variable())
m.d = pmo.constraint_dict()
for j in range(p + 1):
#m.d[j] = pmo.linear_constraint(variables=[m.a, m.z[j]], coefficients=np.append(F[:,j], -1), rhs=0.0)
m.d[j] = pmo.constraint(body=sum(F[i, j]*m.a[i] for i in range(n)) - m.z[j], rhs=0)
m.o = pmo.objective(expr=m.r - sum(m.a[i] for i in range(n)))
m.q = pmo.conic.rotated_quadratic.as_domain(m.r, 1, m.z)
pmo.SolverFactory('mosek_direct').solve(m)
return m
c = 1.0
# %timeit svm_conic_dual(X_train, y_train, c)
m = svm_conic_dual(X_train, y_train, c)
print(m.r.value)
n = len(m.a)
for i in range(n):
if m.a[i].value > 1e-7 and m.a[i].value < c/n - 1e-7:
print(i, m.a[i].value)
# -
# ## Pyomo Implementation
# +
import numpy as np
import pyomo.environ as pyo
def svm_fit(X, y, lambd=0):
m = pyo.ConcreteModel()
# zero-based indexing
n, p = X.shape
m.n = pyo.RangeSet(0, n - 1)
m.p = pyo.RangeSet(0, p - 1)
m.w = pyo.Var(m.p)
m.b = pyo.Var()
m.z = pyo.Var(m.n, domain=pyo.NonNegativeReals)
m.wpos = pyo.Var(m.p, domain=pyo.NonNegativeReals)
m.wneg = pyo.Var(m.p, domain=pyo.NonNegativeReals)
@m.Constraint(m.n)
def hinge_loss(m, n):
return m.z[n] >= 1 - y[n] * (sum(m.w[p] * X.iloc[n, p] for p in m.p) + m.b)
@m.Objective(sense=pyo.minimize)
def objective(m):
return sum(m.z[n] for n in m.n) / n + lambd * sum(
m.wpos[p] + m.wneg[p] for p in m.p
)
pyo.SolverFactory("glpk").solve(m)
w = np.array([m.w[p]() for p in m.p])
b = m.b()
# return a binary classifier
def svm(x):
return np.sign(w @ x + b)
svm.w = w
svm.b = b
return svm
# create a linear SVM binary classifier
# %timeit svm = svm_fit(X_train, y_train)
# +
def svm_test(svm, X, y, plot=False):
y_pred = pd.Series([svm(X.loc[i, :]) for i in X.index])
true_pos = (y > 0) & (y_pred > 0)
false_pos = (y < 0) & (y_pred > 0)
false_neg = (y > 0) & (y_pred < 0)
true_neg = (y < 0) & (y_pred < 0)
tp = sum(true_pos)
fp = sum(false_pos)
fn = sum(false_neg)
tn = sum(true_neg)
print(f" Test Data (n = {len(y)})")
print(f" y = 1 y = -1")
print(f" Predict y = 1 {tp:4d} {fp:4d} precision = {tp/(tp + fp):5.3f}")
print(f" Predict y = -1 {fn:4d} {tn:4d} ")
print(f" Recall = {tp/(tp + fn):5.3f}")
if not plot:
return
def svm_line(svm, ax):
w = svm.w
b = svm.b
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
ax.plot(
[xmin, xmax], [-(w[0] * xmin + b) / w[1], -(w[0] * xmax + b) / w[1]], lw=3
)
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
fig, ax = plt.subplots(1, 1, figsize=(5.5, 5))
plot_Xy(X, y, ax)
svm_line(svm, ax)
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
plot_Xy(X[true_pos], y[true_pos], ax[0])
plot_Xy(X[true_neg], y[true_neg], ax[0])
svm_line(svm, ax[0])
ax[0].legend(["true positive", "true negative"])
plot_Xy(X[false_pos], y[false_pos], ax[1])
plot_Xy(X[false_neg], y[false_neg], ax[1])
svm_line(svm, ax[1])
ax[1].legend(["false positive", "false negative"])
svm = svm_fit(X_train, y_train)
svm_test(svm, X_test, y_test, plot=True)
# +
features = df.columns
# separate into features and outputs
X_train_full = df_train[features]
y_train_full = 2 * df_train["class"] - 1
# separate into features and outputs
X_test_full = df_test[features]
y_test_full = 2 * df_test["class"] - 1
# fit svm and test
svm = svm_fit(X_train_full, y_train_full)
svm_test(svm, X_test_full, y_test_full)
# -
# ## The SVM Dual
#
# Creating the dual of the support vector machine will turn out to have practical consequences. Creating the dual requires a differentiable objective function. For this reason, the regularization term is changed to the 2-norm of $w$
#
# $$
# \begin{align*}
# \min_{z, w, b}\ \frac{1}{2} \|w\|_2^2 + \frac{c}{n} \sum_{i=1}^n z_i \\
# \\
# \text{s.t.}\qquad 1 - y_i(w^\top x_i + b) - z_i & \leq 0 & \forall i = 1, \dots, n \\
# - z_i & \leq 0 & \forall i = 1, \dots, n
# \end{align*}
# $$
#
# where the regularization parameter has shifted to $c$ and the constraints are restated in standard form. This is a quadratic problem in $n + p + 1$ variables and $2n$ constraints.
#
# The Lagrangian $\mathcal{L}$ is
#
# $$
# \begin{align*}
# \mathcal{L} & = \frac{1}{2} \|w\|_2^2 + \frac{c}{n}\sum_{i=1}^n z_i + \sum_{i=1}^n \alpha_i (1 - y_i(w^\top x_i + b) - z_i) + \sum_{i=1}^n \beta_i (-z_i) \\
# \end{align*}
# $$
#
# where $2n$ non-negative Lagrange multipliers $\alpha_i \geq 0$ and $\beta_i \geq 0$ have been introduced for $i = 1,\dots,n$. Intuitively, the Lagrange variables are penalty weights assigned to the inequality constraints in a modified objective function. If the penalties are large enough, the constraints will be satisfied. Setting the partial derivatives of $\mathcal{L}$ with respect to the primal variables to zero gives
#
# $$
# \begin{align*}
# \frac{\partial \mathcal{L}}{\partial z_i} & = \frac{c}{n} - \alpha_i - \beta_i = 0 \implies 0 \leq \alpha_i \leq \frac{c}{n}\\
# \frac{\partial \mathcal{L}}{\partial w} & = w - \sum_{i=1}^n \alpha_i y_i x_i = 0 \implies w = \sum_{i=1}^n \alpha_i y_i x_i \\
# \frac{\partial \mathcal{L}}{\partial b} & = -\sum_{i=1}^n \alpha_i y_i = 0 \implies \sum_{i=1}^n \alpha_i y_i = 0 \\
# \end{align*}
# $$
# The dual problem is then
#
# $$
# \begin{align*}
# \max_{\alpha_i}\ & \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i=1}^n\sum_{j=1}^n \alpha_i \alpha_j y_i y_j ( x_i^\top x_j ) \\
# \text{s. t.}\quad & \sum_{i=1}^n \alpha_i y_i = 0 \\
# & \alpha_i \in \left[0, \frac{c}{n}\right] & i = 1, \dots, n \\
# \end{align*}
# $$
#
# Like the primal, the dual is a quadratic program. The dual, however, has only $n$ decision variables compared to $n + p + 1$ decision variables for the primal, and $n + 1$ constraints compared to $2n$ constraints for the primal. This reduction is significant for problems with many features (i.e., large $p$), or for large training sets (i.e., large $n$). The case of large $p$ becomes important when extending SVM to nonlinear classification using kernels.
#
# Note, however, that the reduced number of decision variables and constraints in the dual problem requires computing $\frac{n(n+1)}{2}$ inner products $(x_i^\top x_j)$ for $i \leq j$ and $i, j \in \{1, \dots, n\}$. The inner products can be arranged as a symmetric matrix
# $$
# \begin{align*}
# K = [k_{i,j}] = X X^\top = \begin{bmatrix}
# x_1^\top x_1 & x_1^\top x_2 & \dots & x_1^\top x_n \\
# x_2^\top x_1 & x_2^\top x_2 & \dots & x_2^\top x_n \\
# \vdots & \vdots & \ddots & \vdots \\
# x_n^\top x_1 & x_n^\top x_2 & \dots & x_n^\top x_n \\
# \end{bmatrix}
# \end{align*}
# $$
#
# where $X \in \mathbb{R}^{n\times p}$ is the matrix formed by the $n$ feature vectors $x_i$ for $i=1, 2, \dots, n$. The symmetry of $K$, which is known as the Gram matrix (or Grammian), is a consequence of the symmetry of the inner product for real number spaces. Once the dual has been solved for optimal multipliers $\alpha_i^*$, the primal parameters are recovered as
#
# $$
# \begin{align*}
# w^* & = \sum_{i=1}^n \alpha_i^* y_i x_i \\
# b^* & = y_k - (w^*)^\top x_k & \text{for any } k \text{ such that } 0 < \alpha_k < \frac{c}{n} \\
# \end{align*}
# $$
#
# which can be written entirely in terms of the inner product.
#
# $$
# \begin{align*}
# w^* & = \sum_{i=1}^n \alpha_i^* y_i x_i \\
# b^* & = y_k - \sum_{i=1}^n \alpha_i^* y_i x_i^\top x_k & \text{for any } k \text{ such that } 0 < \alpha_k < \frac{c}{n}
# \end{align*}
# $$
# Given a value for the feature vector $x\in\mathbb{R}^p$, the classifier $\hat{y} = \text{sgn}\ \left((w^*)^\top x + b^* \right)$ is then
#
# $$
# \begin{align*}
# \hat{y} & = \text{sgn}\ \left( y_k + \sum_{i=1}^n \alpha_i^* y_i x^\top_i (x - x_k) \right)\\
# \end{align*}
# $$
#
# This result has important consequences. The key point is that the dual optimization problem can be solved with knowledge of the inner products appearing in the Gram matrix $K$, and the resulting classifier needs only inner products of training set data with the difference $x - x_k$ for some $k$ found in the optimization calculation.
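# As a sketch of this consequence (kernelized SVMs are not implemented in this
# notebook), every inner product $x_i^\top x_j$ can be replaced by a kernel
# evaluation $k(x_i, x_j)$, for example a Gaussian (RBF) kernel, leaving the
# dual problem otherwise unchanged:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2) for all pairs of rows
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))
K = rbf_kernel(X, X)  # drop-in replacement for the Gram matrix X @ X.T
print(K.shape)                       # (5, 5)
print(np.allclose(np.diag(K), 1.0))  # True, since k(x, x) = exp(0) = 1
```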
# +
import numpy as np
import pyomo.environ as pyo
def svm_dual_fit(X, y, lambd=0):
m = pyo.ConcreteModel()
X_np = X.to_numpy()
K = X_np @ X_np.T
# zero-based indexing
n, p = X.shape
m.n = pyo.RangeSet(0, n - 1)
m.p = pyo.RangeSet(0, p - 1)
m.a = pyo.Var(m.n, bounds=(0, 1))
@m.Constraint()
def sumya(m):
return sum(y.loc[i] * m.a[i] for i in m.n) == 0
@m.Objective(sense=pyo.maximize)
def objective(m):
return sum(m.a[i] for i in m.n) - 0.5 * sum(
m.a[i] * m.a[j] * y.loc[i] * y.loc[j] * K[i, j] for i in m.n for j in m.n
)
pyo.SolverFactory("mosek_direct").solve(m)
def svm():  # TODO: recover w and b from the optimal multipliers and return a classifier
pass
return svm
# create a linear SVM binary classifier
m = svm_dual_fit(X_test, y_test)
# -
| _build/html/_sources/notebooks/05/svm-linear.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Model Layers
# This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers.
# + hide_input=true
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
# -
# ## Custom fastai modules
# + hide_input=true
show_doc(AdaptiveConcatPool2d, title_level=3)
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
# -
# The output will be `2*sz`, or just 2 if `sz` is None.
# The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.
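# A framework-free sketch of what `AdaptiveConcatPool2d` computes for output
# size 1: the global average and global max of each channel, concatenated,
# giving `2*sz` outputs per feature map (numpy stand-in, not the fastai code):

```python
import numpy as np

def concat_pool_1(x):
    # x: (channels, height, width) feature map
    avg = x.mean(axis=(1, 2))  # what AdaptiveAvgPool2d(1) computes
    mx = x.max(axis=(1, 2))    # what AdaptiveMaxPool2d(1) computes
    return np.concatenate([avg, mx])  # 2 * channels outputs

x = np.arange(18, dtype=float).reshape(2, 3, 3)
print(concat_pool_1(x))  # [ 4. 13.  8. 17.]
```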
#
# Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.
#
# We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adaptive Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit.
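# Before comparing them in training, here is a minimal NumPy sketch (an illustration, not fastai's implementation) of what concat pooling computes:

```python
import numpy as np

# Concat pooling with output size 1: global max pool and global average
# pool over the spatial dimensions, concatenated along the channel axis,
# so the channel count doubles.
def concat_pool(x):  # x: (batch, channels, H, W)
    avg = x.mean(axis=(2, 3))
    mx = x.max(axis=(2, 3))
    return np.concatenate([mx, avg], axis=1)

x = np.random.randn(4, 16, 8, 8)
out = concat_pool(x)  # shape (4, 32)
```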
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_max((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
# Now let's try with [Adaptive Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d).
def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_avg((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
# Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!
def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
# + hide_input=true
show_doc(Lambda, title_level=3)
# -
# This is very useful to use functions as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object. So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:
#
# `Lambda(lambda x: x.view(x.size(0),-1))`
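# A plain-Python sketch of the idea (illustrative only, not fastai's class): wrap an arbitrary function so it can sit in a sequence of layers.

```python
import numpy as np

# A Lambda-style layer: store a function and apply it on call.
class LambdaLayer:
    def __init__(self, func):
        self.func = func

    def __call__(self, x):
        return self.func(x)

# the reshaping function from the text, expressed with NumPy
flatten = LambdaLayer(lambda x: x.reshape(x.shape[0], -1))
x = np.zeros((8, 10, 1, 1))
y = flatten(x)  # shape (8, 10)
```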
# Let's see an example of how the shape of our output can change when we add this layer.
# +
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
)
model.cuda()
for xb, yb in data.train_dl:
    out = model(xb)
print(out.size())
break
# +
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
model.cuda()
for xb, yb in data.train_dl:
    out = model(xb)
print(out.size())
break
# + hide_input=true
show_doc(Flatten)
# -
# The function we built above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it.
# +
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
model.cuda()
for xb, yb in data.train_dl:
    out = model(xb)
print(out.size())
break
# + hide_input=true
show_doc(PoolFlatten)
# -
# We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten).
# +
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
PoolFlatten()
)
model.cuda()
for xb, yb in data.train_dl:
    out = model(xb)
print(out.size())
break
# -
# Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one.
# + hide_input=true
show_doc(ResizeBatch)
# -
a = torch.tensor([[1., -1.], [1., -1.]])[None]
print(a)
out = ResizeBatch(4)
print(out(a))
# + hide_input=true
show_doc(Debugger, title_level=3)
# -
# The debugger module allows us to peek inside a network while it's training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.
#
# For instance, if you run the following:
#
# ``` python
# model = nn.Sequential(
# nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
# Debugger(),
# nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
# nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
# )
#
# model.cuda()
#
# learner = Learner(data, model, metrics=[accuracy])
# learner.fit(5)
# ```
# ... you'll see something like this:
#
# ```
# /home/ubuntu/fastai/fastai/layers.py(74)forward()
# 72 def forward(self,x:Tensor) -> Tensor:
# 73 set_trace()
# ---> 74 return x
# 75
# 76 class StdUpsample(nn.Module):
#
# ipdb>
# ```
# + hide_input=true
show_doc(PixelShuffle_ICNR, title_level=3)
# + hide_input=true
show_doc(MergeLayer, title_level=3)
# + hide_input=true
show_doc(PartialLayer, title_level=3)
# + hide_input=true
show_doc(SigmoidRange, title_level=3)
# + hide_input=true
show_doc(SequentialEx, title_level=3)
# + hide_input=true
show_doc(SelfAttention, title_level=3)
# + hide_input=true
show_doc(BatchNorm1dFlat, title_level=3)
# -
# ## Loss functions
# + hide_input=true
show_doc(FlattenedLoss, title_level=3)
# -
# Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it
# - puts `axis` first in the output and target with a transpose
# - casts the target to `float` if `floatify=True`
# - squeezes the `output` to two dimensions if `is_2d`, otherwise to one dimension, and squeezes the target to one dimension
# - applies the instance of `func`.
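# A hand-rolled sketch of this flatten-then-apply pattern (an assumption about the shapes, not fastai's exact code), using a negative log-likelihood over log-softmax scores as the wrapped loss:

```python
import numpy as np

# Flatten output to (N, classes) and target to (N,) before applying the loss.
def flattened_nll(output, target):
    out2d = output.reshape(-1, output.shape[-1])  # (N, classes)
    tgt1d = target.reshape(-1)                    # (N,)
    logp = out2d - np.log(np.exp(out2d).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(tgt1d)), tgt1d].mean()

output = np.zeros((2, 2, 3))          # uniform scores over 3 classes
target = np.zeros((2, 2), dtype=int)
loss = flattened_nll(output, target)  # log(3) for uniform scores
```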
# + hide_input=true
show_doc(BCEFlat)
# + hide_input=true
show_doc(BCEWithLogitsFlat)
# + hide_input=true
show_doc(CrossEntropyFlat)
# + hide_input=true
show_doc(MSELossFlat)
# + hide_input=true
show_doc(NoopLoss)
# + hide_input=true
show_doc(WassersteinLoss)
# -
# ## Helper functions to create modules
# + hide_input=true
show_doc(bn_drop_lin, doc_string=False)
# -
# The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model.
#
# `n_in` is the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` the amount of dropout, and `actn` an optional activation function to add at the end.
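# A structural sketch of that recipe (tuples stand in for real PyTorch modules; this is an illustration, not the fastai source):

```python
# Build the BatchNorm -> Dropout -> Linear -> activation sequence described
# above, with each layer represented as a plain tuple.
def bn_drop_lin_sketch(n_in, n_out, bn=True, p=0.0, actn=None):
    layers = []
    if bn:
        layers.append(("BatchNorm1d", n_in))
    if p != 0:
        layers.append(("Dropout", p))
    layers.append(("Linear", n_in, n_out))
    if actn is not None:
        layers.append(("Activation", actn))
    return layers

head = bn_drop_lin_sketch(512, 10, p=0.5, actn="ReLU")
```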
# + hide_input=true
show_doc(conv2d)
# + hide_input=true
show_doc(conv2d_trans)
# + hide_input=true
show_doc(conv_layer, doc_string=False)
# -
# The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.
#
# `n_in` is the size of the input, `n_out` the size of the output, `ks` the kernel size, and `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally, if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.
# + hide_input=true
show_doc(embedding, doc_string=False)
# -
# Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`.
# + hide_input=true
show_doc(relu)
# + hide_input=true
show_doc(res_block)
# + hide_input=true
show_doc(sigmoid_range)
# + hide_input=true
show_doc(simple_cnn)
# -
# ## Initialization of modules
# + hide_input=true
show_doc(batchnorm_2d)
# + hide_input=true
show_doc(icnr)
# + hide_input=true
show_doc(trunc_normal_)
# + hide_input=true
show_doc(NormType)
# -
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
show_doc(Debugger.forward)
show_doc(Lambda.forward)
show_doc(AdaptiveConcatPool2d.forward)
show_doc(NoopLoss.forward)
show_doc(PixelShuffle_ICNR.forward)
show_doc(WassersteinLoss.forward)
show_doc(MergeLayer.forward)
show_doc(SigmoidRange.forward)
show_doc(SelfAttention.forward)
show_doc(SequentialEx.forward)
show_doc(SequentialEx.append)
show_doc(SequentialEx.extend)
show_doc(SequentialEx.insert)
show_doc(PartialLayer.forward)
show_doc(BatchNorm1dFlat.forward)
show_doc(Flatten.forward)
# ## New Methods - Please document or move to the undocumented section
# + hide_input=true
show_doc(View)
# -
#
# + hide_input=true
show_doc(ResizeBatch.forward)
# -
#
# + hide_input=true
show_doc(View.forward)
# -
#
| docs_src/layers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import featuretools as ft
import pandas as pd
pd.options.display.max_colwidth=800
# -
primitives = ft.list_primitives()
# aggregations
primitives[primitives['type'] == 'aggregation'].head(100)
# transformations
primitives[primitives['type'] == 'transform'].head(100)
| libs/featuretools/primitives.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# #####################
# Tensor Network Basics
# #####################
#
# The tensor network functionality in ``quimb`` aims to be easy to
# manipulate and interact with, without losing any generality or efficiency.
# Many of the manipulations try to mimic how one thinks graphically about
# tensor networks, which is helped by the ability to plot a graph of the
# network at any point.
#
# The functionality can be found in the submodule ``quimb.tensor``, and
# is not imported by default with the base ``quimb``:
# -
# %config InlineBackend.figure_formats = ['svg']
import quimb as qu
import quimb.tensor as qtn
# + raw_mimetype="text/restructuredtext" active=""
# The core functions of note are :class:`~quimb.tensor.tensor_core.Tensor`, :class:`~quimb.tensor.tensor_core.TensorNetwork`, :func:`~quimb.tensor.tensor_core.tensor_contract`, and :func:`~quimb.tensor.tensor_core.tensor_split`.
# + raw_mimetype="text/restructuredtext" active=""
# Creating Tensors
# ----------------
#
# Tensors are created using the class :class:`~quimb.tensor.tensor_core.Tensor`. Indices matching the shape of the data array must be supplied, and optionally tags which can be used to group it with others and/or uniquely identify it.
#
# For example, let's create the singlet state in tensor form, i.e., an index for each qubit:
# +
data = qu.bell_state('psi-').reshape(2, 2)
inds = 'k0', 'k1'
tags = 'KET'
ket = qtn.Tensor(data, inds, tags)
ket
# + raw_mimetype="text/restructuredtext" active=""
# In lots of ways this is like a normal n-dimensional array, but it can also be manipulated without having to keep track of which axis is which. Some key methods are:
#
# - :meth:`~quimb.tensor.tensor_core.Tensor.transpose`
# - :meth:`~quimb.tensor.tensor_core.Tensor.reindex`
# - :meth:`~quimb.tensor.tensor_core.Tensor.retag`
# - :meth:`~quimb.tensor.tensor_core.Tensor.fuse`
# - :meth:`~quimb.tensor.tensor_core.Tensor.squeeze`
#
# .. note::
#
# These all have inplace versions with an underscore appended,
# so that ``ket.transpose_('k1', 'k0')`` would perform a
# transposition on ``ket`` directly, rather than making a new
# tensor. This is a convention adopted elsewhere also.
#
# Let's also create some tensor paulis, with indices that act on the bell state and map the physical indices into two new ones:
# -
X = qtn.Tensor(qu.pauli('X'), inds=('k0', 'b0'), tags=['PAULI', 'X', '0'])
Y = qtn.Tensor(qu.pauli('Y'), inds=('k1', 'b1'), tags=['PAULI', 'Y', '1'])
# And finally, a random 'bra' to complete the inner product:
bra = qtn.Tensor(qu.rand_ket(4).reshape(2, 2), inds=('b0', 'b1'), tags={'BRA'})
# + raw_mimetype="text/restructuredtext" active=""
# Note how repeating an index is all that is required to define a contraction.
# If you want to join two tensors and have the index generated automatically
# you can use the :func:`~quimb.tensor.tensor_core.connect` function.
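# The same repeated-index rule can be seen in plain ``einsum`` notation (an illustration only): the shared index is summed over.

```python
import numpy as np

# Index 'j' appears in both operands, so it is contracted (summed over),
# exactly as a repeated tensor index defines a bond.
A = np.random.randn(2, 3)          # indices ('i', 'j')
B = np.random.randn(3, 4)          # indices ('j', 'k')
C = np.einsum('ij,jk->ik', A, B)   # equivalent to A @ B
```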
# + raw_mimetype="text/restructuredtext" active=""
# .. note::
#
# Indices and tags should be strings - though this is currently not enforced.
# A useful convention is to keep ``inds`` lower case, and ``tags`` upper.
#
# Creating Tensor Networks
# ------------------------
#
# We can now combine these into a :class:`~quimb.tensor.tensor_core.TensorNetwork` using the ``&`` operator overload:
# -
TN = ket.H & X & Y & bra
print(TN)
# + raw_mimetype="text/restructuredtext" active=""
# (note that ``.H`` conjugates the data but leaves the indices).
# We could also use the :class:`~quimb.tensor.tensor_core.TensorNetwork`
# constructor, which takes any sequence of tensors and/or tensor networks,
# and has various advanced options.
#
# The geometry of this network is completely defined by the repeated
# indices forming edges between tensors, as well as arbitrary tags
# identifying the tensors. The internal data of the tensor network
# allows efficient access to any tensors based on their ``tags`` or ``inds``.
#
# .. note::
#
# In order to naturally maintain the network's geometry, bonds (repeated
# indices) can be mangled when two tensor networks are combined.
# As a result of this, only exterior indices are guaranteed to
# keep their names - since these define the overall object.
# The :func:`~quimb.tensor.tensor_core.bonds` function can be used to
# find the names of indices connecting tensors if explicitly required.
#
#
# Any network can also be drawn using
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.draw`, which will pick
# a layout and also represent bond size as edge thickness, and optionally
# color the nodes based on ``tags``.
# -
TN.draw(color=['KET', 'PAULI', 'BRA'], figsize=(4, 4))
# Note the tags can be used to identify both paulis at once. But they could also be uniquely identified using their ``'X'`` and ``'Y'`` tags respectively:
TN.draw(color=['KET', 'X', 'BRA', 'Y'], figsize=(4, 4))
# + raw_mimetype="text/restructuredtext" active=""
# Contracting Tensors
# -------------------
#
# To completely contract a network we can use the ``^`` operator overload, and the ``all`` function.
# -
TN ^ all
# Or if we just want to contract the paulis:
print(TN ^ 'PAULI')
# Notice how the ``tags`` of the Paulis have been combined on the new tensor.
#
# The contraction order is optimized automatically using ``opt_einsum``, is cached,
# and can easily handle hundreds of tensors (though it uses a greedy algorithm and
# is not guaranteed to find the optimal path).
#
# A cumulative contract allows a custom 'bubbling' order:
# "take KET, then contract X in, then contract BRA *and* Y in, etc..."
print(TN >> ['KET', 'X', ('BRA', 'Y')])
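# A tiny NumPy illustration of why the contraction order matters: contracting the vector in first avoids ever forming a large intermediate, while giving the same result.

```python
import numpy as np

# (A @ B) @ v costs ~n^3 for the matrix-matrix product; A @ (B @ v) is
# two ~n^2 matrix-vector products and never builds an n x n intermediate.
n = 50
A = np.random.randn(n, n)
B = np.random.randn(n, n)
v = np.random.randn(n)
slow = (A @ B) @ v   # forms the n x n product first
fast = A @ (B @ v)   # same result, far cheaper
```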
# And a structured contract uses the tensor network's tagging structure (a string
# format specifier like ``"I{}"``) to perform a cumulative contract automatically,
# e.g. grouping the tensors of a MPS/MPO into segments of 10 sites.
# This can be slightly quicker than finding the full contraction path.
#
# When a TN has a structure, structured contractions can be used by specifying either an ``Ellipsis``:
#
# ``TN ^ ...`` # which means full, structured contract
#
# or a ``slice``:
#
# ``TN ^ slice(100, 200)`` # which means a structured contract of those sites only
# + raw_mimetype="text/restructuredtext" active=""
# The full api of contraction methods is:
#
# - :func:`quimb.tensor.tensor_core.tensor_contract`
# - :func:`quimb.tensor.tensor_core.Tensor.contract`
# - :func:`quimb.tensor.tensor_core.TensorNetwork.contract` (which dispatches to one of the below:)
# - :func:`quimb.tensor.tensor_core.TensorNetwork.contract_tags`
# - :func:`quimb.tensor.tensor_core.TensorNetwork.contract_cumulative` (overloaded on ``>>``)
# - :func:`quimb.tensor.tensor_core.TensorNetwork.contract_structured`
# - :func:`quimb.tensor.tensor_core.TensorNetwork.contract_between` (explicitly contract between two tensors only)
# -
print((TN ^ 'PAULI'))
# + raw_mimetype="text/restructuredtext" active=""
# Splitting Tensors
# -----------------
#
# Tensors can be decomposed ('split') using many different methods, all accessed through
# :func:`~quimb.tensor.tensor_core.tensor_split` or :meth:`~quimb.tensor.tensor_core.Tensor.split`.
#
# As an example, let's split the tensor tagged ``'KET'`` in our TN:
# +
# select any tensors matching the 'KET' tag - here only 1
Tk = TN['KET']
# now split it, creating a new tensor network of 2 tensors
Tk_s = Tk.split(left_inds=['k0'])
# note new index created
print(Tk_s)
# + raw_mimetype="text/restructuredtext" active=""
# Finally, let's replace the original 'KET' tensor with its split version:
# +
# remove the original KET tensor
del TN['KET']
# inplace add the split tensor network
TN &= Tk_s
# plot - should now have 5 tensors
TN.draw(color=['KET', 'PAULI', 'BRA'], figsize=(4, 4))
# + raw_mimetype="text/restructuredtext" active=""
# .. _tn-creation-graph-style:
#
# Graph Orientated Tensor Network Creation
# ----------------------------------------
#
# Another way to create tensor networks is define the tensors (nodes)
# first and the make indices (edges) afterwards. This is mostly enabled
# by the functions :meth:`~quimb.tensor.tensor_core.Tensor.new_ind` and
# :meth:`~quimb.tensor.tensor_core.Tensor.new_bond`. Take for example
# making a small periodic matrix product state with bond dimension 7:
# +
L = 10
# create the nodes, by default just the scalar 1.0
tensors = [qtn.Tensor() for _ in range(L)]
for i in range(L):
# add the physical indices, each of size 2
tensors[i].new_ind(f'k{i}', size=2)
# add bonds between neighbouring tensors, of size 7
tensors[i].new_bond(tensors[(i + 1) % L], size=7)
mps = qtn.TensorNetwork(tensors)
mps.draw()
# + raw_mimetype="text/restructuredtext" active=""
# Other Overloads
# ---------------
#
# You can also add tensors/networks together using ``|`` or the inplace ``|=``, which act like ``&`` and ``&=`` respectively, but are virtual, meaning that changes to the tensors propagate across all networks viewing it (see :class:`~quimb.tensor.tensor_core.TensorNetwork`).
#
# The ``@`` symbol is overloaded to combine the objects into a network and then immediately contract them all. In this way it mimics dense dot product. E.g.
# +
# make a 5 qubit tensor state
dims = [2] * 5
data = qu.rand_ket(32).A.reshape(*dims)
inds=['k0', 'k1', 'k2', 'k3', 'k4']
psi = qtn.Tensor(data, inds=inds)
# find the inner product with itself
psi.H @ psi
# -
# In this case, the conjugated copy ``psi.H`` has the same outer indices as ``psi`` and so the inner product is naturally formed.
# + raw_mimetype="text/restructuredtext" active=""
# Other `TensorNetwork` Features
# ------------------------------
#
# - Insert gauges between tensors with
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.insert_gauge`
# - Compress bonds between tensors with
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.compress_between`
# - Replace subnetworks with an SVD (or other decomposition) using
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.replace_with_svd`
# - Take the trace with
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.trace`
# - Convert to dense form with
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.to_dense`
# - Treat TN as a ``scipy``
# `LinearOperator <https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.LinearOperator.html>`_
# with
# :meth:`~quimb.tensor.tensor_core.TensorNetwork.aslinearoperator`
# - Automatically iterate over 'cut' bonds with :meth:`~quimb.tensor.tensor_core.TensorNetwork.cut_iter`
# to trade memory for time / parallelization.
# - Impose unitary/isometric constraints at the tensor level with ``left_inds`` and :meth:`~quimb.tensor.tensor_core.TensorNetwork.unitize`
# - Optimize tensors in arbitrary network geometries with respect to custom loss functions
# using ``tensorflow`` or ``pytorch`` - see :ref:`ex-tensorflow-tn-opt`.
# - Parametrized tensors - :class:`~quimb.tensor.tensor_core.PTensor` -
# can be used to represent tensor's data as a function acting on parameters.
#
# TNs are also picklable so they can be easily saved to and loaded from disk.
#
#
# Internal Structure
# ------------------
#
# A :class:`~quimb.tensor.tensor_core.TensorNetwork` stores its tensors in three dictionaries which allow them to be selected in constant time, regardless of network size, based on their ``tags`` and ``inds``. These are
#
# - ``TensorNetwork.tensor_map``: a mapping of unique string ids (``tids``) to each tensor
# - ``TensorNetwork.tag_map``: a mapping of every tag in the network to the set of ``tids``
# corresponding to tensors which have that tag.
# - ``TensorNetwork.ind_map``: a mapping of every index in the network to the set of ``tids``
#   corresponding to tensors which have that index.
#
# Thus the tensors with tag ``'HAM'`` in network ``tn`` would be ``(tn.tensor_map[tid] for tid in tn.tag_map['HAM'])`` etc. The geometry of the network can thus be completely defined by which indices appear twice, and how you label the tensors with tags in order to select them.
#
# This allows any tagging strategy/structure to be used to place/reference/remove tensors etc. For example the default tags a 1D tensor network uses are ``('I0', 'I1', 'I2', ...)`` with physical inds ``('k0', 'k1', 'k2', ...)``. A 2D network might use tags ``('I0J0', 'I0J1', 'I0J2', 'I1J0', ...)`` etc.
#
# To select a subset or partition a network into tensors that match any or all of a set of tags see :func:`~quimb.tensor.tensor_core.TensorNetwork.select` or :func:`~quimb.tensor.tensor_core.TensorNetwork.partition`.
#
# Finally, each :class:`~quimb.tensor.tensor_core.Tensor` also contains a ``weakref.ref`` to each :class:`~quimb.tensor.tensor_core.TensorNetwork` that is viewing it (its ``owners``), so that these maps can be updated whenever the tensor is modified directly.
#
#
# Contraction Backend & Strategy
# ------------------------------
#
# The tensor contractions can be performed with any backend supported by `opt_einsum <https://optimized-einsum.readthedocs.io/en/latest/backends.html>`_, including several which use the GPU. These are specified with the ``backend`` argument to :func:`~quimb.tensor.tensor_core.tensor_contract` and any related functions, or by setting a default backend using :func:`~quimb.tensor.tensor_core.set_contract_backend` and :func:`~quimb.tensor.tensor_core.set_tensor_linop_backend`.
#
# The strategy used to find contraction orders for tensor networks is specified using the ``optimize`` keyword - see the `opt_einsum docs <https://optimized-einsum.readthedocs.io/en/latest/path_finding.html>`_.
#
# There are also the following context-managers for temporarily setting the default contraction strategy, contraction backend, and TN linear operator backend.
# For example the following might be useful:
#
# .. code:: python3
#
# with qtn.contract_strategy('optimal'):
# # always find the optimal contraction sequence (exponentially slow for large networks!)
# ...
#
# with qtn.contract_backend('cupy'):
# # use the GPU array library cupy to perform any contractions
# ...
#
# with qtn.tensor_linop_backend('tensorflow'):
# # compile any tensor linear operators into tensorflow graphs before use
# ...
#
# .. note::
#
# Recent versions of ``opt_einsum`` have support for automatically detecting the correct backend library to use.
# Sometimes however you want to contract ``numpy`` tensors *via*, for example, ``cupy``, in which case this must
# still be specified.
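# The tid-based maps described under "Internal Structure" can be sketched with plain dictionaries (a toy illustration, not quimb's actual classes):

```python
# Tags and indices each map to sets of tensor ids, giving constant-time
# selection of tensors by tag regardless of network size.
tensor_map = {"t0": "tensor-0", "t1": "tensor-1", "t2": "tensor-2"}
tag_map = {"HAM": {"t0", "t2"}, "KET": {"t1"}}

# all tensors carrying the 'HAM' tag
ham_tensors = {tensor_map[tid] for tid in tag_map["HAM"]}
```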
| docs/tensor-basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bonus: Temperature Analysis I
import pandas as pd
from datetime import datetime as dt
# "tobs" is "temperature observations"
df = pd.read_csv('./Resources/hawaii_measurements.csv')
df.head()
# Convert the date column format from string to datetime
df.date=pd.to_datetime(df.date,infer_datetime_format=True)
# +
# Set the date column as the DataFrame index
df = df.set_index(df['date'])
df.head()
# -
# Drop the date column
df=df.drop(columns='date')
df.head()
# ### Compare June and December data across all years
from scipy import stats
# Filter data for desired months
#filter June data
juneData = df[df.index.month==6]
juneData.head()
# Filter data for desired months
#filter December data
decData = df[df.index.month==12]
decData.head()
# Identify the average temperature for June
juneData.mean()
# Identify the average temperature for December
decData.mean()
# Create collections of temperature data
#June Collection
juneTemp = juneData.tobs
juneTemp
# Create collections of temperature data
#December Collection
decTemp = decData.tobs
decTemp
# Run unpaired (independent) t-test
stats.ttest_ind(juneTemp,decTemp)
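# For reference, a hand-rolled sketch (with made-up temperatures) of the equal-variance unpaired t statistic that `stats.ttest_ind` computes:

```python
import numpy as np

# Pooled-variance two-sample t statistic for independent samples.
def t_stat(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

june = np.array([74.0, 76.0, 75.0, 77.0])  # made-up temperatures
dec = np.array([70.0, 71.0, 69.0, 72.0])
t = t_stat(june, dec)
```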
# ### Analysis
# The mean temperature difference between June and December is about 3.9 F, which is small. The t-test's low p-value indicates that the difference is statistically significant, but the practical difference is minor, so you can travel to Hawaii and enjoy roughly 70-degree temperatures year-round.
| Instructions/temp_analysis_bonus_1_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/abstractguy/lstm_autoencoder_classifier/blob/master/LSTM_autoencoder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="kZOZmPmIyyxr" colab_type="text"
# **Dataset: LSTM Autoencoder for Rare Event Binary Classification in Multivariate Time Series**
#
# https://arxiv.org/abs/1809.10717 (please cite this article, if using the dataset).
#
# https://towardsdatascience.com/extreme-rare-event-classification-using-autoencoders-in-keras-a565b386f098
#
# https://github.com/cran2367/autoencoder_classifier/blob/master/autoencoder_classifier.ipynb
# + id="MggK-vZ9ETZW" colab_type="code" colab={}
# Money maker by <NAME>.
AI = 'LSTM_autoencoder'
# + id="yZKnnbYDEUti" colab_type="code" colab={}
# Hardcode parameters.
if AI == 'LSTM':
main_ticker = 'AAPL'
batch_size = 64
epochs = 100
learning_rate = 0.000003
test_size = 0.2
lookback = 5 # Days of past data.
tickers_list = []
elif AI == 'LSTM_autoencoder':
main_ticker = 'FB'
batch_size = 64
epochs = 200
learning_rate = 0.00008
random_seed = 123
test_size = 0.2
lookback = 5 # Days of past data.
gain = 0.065 # Percentage of gain to consider good for trade.
tickers_list = ['GE',
'ADS',
'INTC',
'AAPL',
'NVDA',
'CSCO',
'AMD',
'AMZN',
'GOOG',
'MSFT',
'S',
'BAC',
'XLNX',
'WFC',
'^DJI',
'^GSPC',
'^NYA',
'^IXIC']
# + id="mJuuvLMCEVuc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="02d9ec83-6782-42d0-8e93-ccd4d6e2fbe1"
# %matplotlib inline
from math import sqrt
from os import chdir
from os.path import exists
from datetime import datetime
from tqdm import tqdm
from numpy.random import seed
from numpy import append, array, concatenate, count_nonzero, empty, empty_like, expand_dims, mean, nan, power, var, where, zeros
from pandas import concat, DataFrame, date_range, read_csv, Series
from pandas_datareader.data import DataReader
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.metrics import auc, classification_report, confusion_matrix, f1_score, mean_squared_error, precision_recall_curve, precision_recall_fscore_support, recall_score, roc_curve
from tensorflow import set_random_seed
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import load_model, Model, Sequential
from keras.layers import Dense, LSTM, RepeatVector, TimeDistributed
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, TerminateOnNaN
from keras.optimizers import Adam
from keras.utils import plot_model
from keras import optimizers, Sequential
from google.colab.drive import mount
from tensorboardcolab import TensorBoardColab, TensorBoardColabCallback
from seaborn import heatmap
import matplotlib.pyplot as plt
seed(7)
set_random_seed(11)
labels = ['Normal', 'Break']
# + id="s9PfXIoazLPd" colab_type="code" colab={}
path = '/content/gdrive/'
mount(path)
path = path + 'My Drive/LSTM_autoencoder/'
chdir(path)
# + id="koUXcR8eCYt6" colab_type="code" colab={}
def add_ticker_to_tickers(symbol, ticker, tickers):
if symbol not in tickers.columns:
tickers = tickers[tickers.index >= ticker.index.min()]
tickers = concat([tickers, ticker.to_frame(name=symbol)], axis='columns', join='inner')
return tickers
def download_ticker_from_yahoo(symbol, start=None, end=None):
if start is None or end is None:
start = datetime(1970, 1, 1)
end = datetime.now()
return DataReader(symbol, 'yahoo', start=start, end=end)
def get_stock(main_ticker, tickers_list, path=path):
csv_path = path + 'dataset.csv'
if exists(csv_path):
tickers = read_csv(csv_path)
else:
start = datetime(1970, 1, 1)
end = datetime.now()
tickers = DataReader(main_ticker, 'yahoo', start=start, end=end).High.dropna().to_frame(name=main_ticker)
tickers.index.name = 'Date'
for symbol in tqdm(tickers_list, unit='symbol'):
try:
ticker = download_ticker_from_yahoo(symbol, start=start, end=end)
tickers = add_ticker_to_tickers(symbol, ticker.High, tickers)
        except Exception:
            # skip symbols that fail to download or merge
            pass
business_days = (~tickers.dropna().asfreq('D').isna().any(axis='columns')).astype(int)
ticker = DataReader(main_ticker, 'yahoo', start=start, end=end)
ticker_y = ticker.Close / ticker.Open
#ticker_y = ticker.High
tickers = add_ticker_to_tickers(main_ticker + '_y', ticker_y, tickers)
tickers = tickers.asfreq('D').fillna(method='backfill')
tickers = add_ticker_to_tickers('Business_days', business_days, tickers)
tickers.to_csv(csv_path)
return tickers
def extract_column(dataset, column_name):
extracted = dataset[column_name].copy()
dataset = dataset.drop(columns=[column_name])
return dataset, extracted
def delta_time_series(data):
data = data.astype('float32').values
return data[1:] - data[:-1]
def delta_time_series_df(data):
data = data.astype('float32')
data = data - data.shift(periods=-1, fill_value=0.0, axis='index')
data.iloc[-1] = 0.0
return data
def delta_time_ratios_df(data):
data = data.astype('float32')
data = data / data.shift(periods=-1, fill_value=1.0, axis='index')
data.iloc[-1] = 1.0
return data
def plot_dataset(dataset):
plt.plot(dataset)
plt.xlabel('Days')
plt.ylabel('Derivatives')
plt.show()
def get_y_from_generator(generator):
'''Get all targets y from a TimeseriesGenerator instance.'''
y = None
for i in range(len(generator)):
batch_y = generator[i][1]
if y is None:
y = batch_y
else:
y = append(y, batch_y)
y = y.reshape((-1, 1))
print(y.shape)
return y
def binary_accuracy(a, b, name='training'):
'''Helper function to compute the match score of two binary numpy arrays.'''
a = a[:,0] > 0
b = b[:,0] > 0
assert len(a) == len(b)
print('Binary accuracy (' + name + ' data):', (a == b).sum() / len(a))
sign = lambda x: (1, -1)[x < 0]
def curve_shift(dataset, shift_by):
vector = dataset.y.copy()
for s in range(abs(shift_by)):
vector += vector.shift(sign(shift_by), fill_value=0)
dataset['ytmp'] = vector
dataset = dataset.drop(dataset[dataset['y'] == 1].index)
dataset = dataset.drop('y', axis=1).rename(columns={'ytmp': 'y'})
dataset['y'] = (dataset.y > 0).astype(int)
return dataset
def temporalize(X, y, lookback):
output_X = []
output_y = []
for i in range(len(X) - lookback - 1):
t = []
for j in range(1, lookback + 1):
t.append(X[[(i + j + 1)],:]) # Gather past records up to lookback.
output_X.append(t)
output_y.append(y[i + lookback + 1])
return output_X, output_y
# 3D -> 2D.
# (samples, timesteps, features) -> (samples, features).
def flatten(X):
flattened_X = empty((X.shape[0], X.shape[2])) # Sample X feature array.
for i in range(X.shape[0]):
flattened_X[i] = X[i,X.shape[1] - 1,:]
return(flattened_X)
# Scale samples individually.
# (samples, timesteps, features) -> (samples, timesteps, features).
def scale(X, scaler):
for i in range(X.shape[0]):
X[i,:,:] = scaler.transform(X[i,:,:])
return X
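The helper functions above do a fair amount of shape bookkeeping. The sketch below restates `temporalize` inline and runs it on toy NumPy data so the expected window shapes can be checked without the stock dataset; all names and sizes here are illustrative.

```python
import numpy as np

def temporalize_demo(X, y, lookback):
    # Mirror of temporalize() above: collect overlapping windows of past rows.
    output_X, output_y = [], []
    for i in range(len(X) - lookback - 1):
        t = [X[[(i + j + 1)], :] for j in range(1, lookback + 1)]
        output_X.append(t)
        output_y.append(y[i + lookback + 1])
    return output_X, output_y

X = np.arange(40, dtype='float32').reshape(20, 2)  # 20 samples, 2 features
y = np.zeros(20)
out_X, out_y = temporalize_demo(X, y, lookback=5)

arr = np.array(out_X).reshape(len(out_X), 5, 2)  # (samples, lookback, features)
print(arr.shape)           # (14, 5, 2)
print(arr[0, 0].tolist())  # first window starts at row 2 -> [4.0, 5.0]
```

This is the same reshape the notebook applies later with `X_train.reshape(X_train.shape[0], lookback, n_features)`.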
# + id="HQ1ce3K4Cfbl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1c38b98c-28cc-4710-8f94-7b3ccdf0d05a"
dataset_csv = get_stock(main_ticker, tickers_list)
# + id="5auDgZuAGn5h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 477} outputId="720970b0-600d-46f0-85fd-e3fbb2a76799"
dataset = dataset_csv.copy()
#dataset[[main_ticker + '_y']] = delta_time_ratios_df(dataset[[main_ticker + '_y']])
dataset, ticker_y = extract_column(dataset, main_ticker + '_y')
dataset, business_days = extract_column(dataset, 'Business_days')
#dataset = delta_time_series_df(dataset)
yesterday_date = dataset.index[-1]
yesterday_score = dataset.iloc[-1,dataset.columns == main_ticker]
dataset['Business_days'] = business_days
plot_dataset(dataset)
print(yesterday_date)
print(yesterday_score)
dataset.shape
# + id="gCdAtlPssP-G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="e1500810-7dee-4527-dce4-bb8ea361eb86"
plot_dataset(ticker_y)
ticker_y.size
# + id="U4ij_2mhC9hD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1bc5a421-df6d-417a-be6e-2a5b8459095b"
# Ground truth.
dataset['y'] = (ticker_y > (1.0 + (gain / 2.0))).astype(int)
print('Fraction of ones (keep below 0.05):',
      count_nonzero(dataset.y) / ticker_y.size)
# + id="dO_8_S2myyyL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="081364b5-e878-4b30-e728-12a1c9e13ac3"
print('Before shifting.')
dataset
# + colab_type="code" id="_X9gRQ_RKK2k" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="09f44003-bdb7-467d-e3bd-7e96ec5f9850"
print('After shifting.')
dataset = curve_shift(dataset, shift_by=-1)
dataset
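To make the effect of `curve_shift` concrete, here is the same logic applied to a tiny hand-made frame (the function body is restated so the cell stands alone): with `shift_by=-1` the positive label moves onto the row preceding the event, and the event row itself is dropped.

```python
import pandas as pd

sign = lambda x: (1, -1)[x < 0]

def curve_shift_demo(df, shift_by):
    # Mirror of curve_shift() above.
    vector = df.y.copy()
    for _ in range(abs(shift_by)):
        vector += vector.shift(sign(shift_by), fill_value=0)
    df = df.copy()
    df['ytmp'] = vector
    df = df.drop(df[df['y'] == 1].index)
    df = df.drop('y', axis=1).rename(columns={'ytmp': 'y'})
    df['y'] = (df.y > 0).astype(int)
    return df

toy = pd.DataFrame({'x': range(6), 'y': [0, 0, 0, 1, 0, 0]})
shifted = curve_shift_demo(toy, shift_by=-1)
print(shifted.y.tolist())  # [0, 0, 1, 0, 0] -- label moved one step earlier
```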
# + id="rosttrXFyyyc" colab_type="code" colab={}
# Converts the DataFrame to a numpy array.
input_X, input_y = extract_column(dataset, 'y')
input_X = input_X.values
input_y = input_y.values
# Number of features.
n_features = input_X.shape[1]
# + id="RgLp2Oyiyyyp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="bc4a5d09-b934-4b80-aabb-d3f20f258036"
# Test: the 3D tensors (arrays) for the LSTM are formed correctly.
print('First instance of y = 1 in the original data.')
display(dataset.iloc[(where(array(input_y) == 1)[0][0] - lookback):(where(array(input_y) == 1)[0][0] + 1),])
# Temporalize the data.
X, y = temporalize(X=input_X, y=input_y, lookback=lookback)
print('For the same instance of y=1, we are keeping past 5 samples in the 3D predictor array, X.')
display(DataFrame(concatenate(X[where(array(y) == 1)[0][0]], axis=0)))
# + [markdown] colab_type="text" id="jG9_gmc9L6RC"
# The two tables are the same. This confirms that we are correctly taking 5 past samples (= lookback), X(t):X(t-5), to predict y(t).
# + id="5pmWNnDCyyyz" colab_type="code" colab={}
X_train, X_test, y_train, y_test = train_test_split(array(X), array(y), test_size=test_size, random_state=random_seed)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=test_size, random_state=random_seed)
# + id="8VV76ST8yyy7" colab_type="code" colab={}
X_train_y0 = X_train[y_train==0]
X_train_y1 = X_train[y_train==1]
X_valid_y0 = X_valid[y_valid==0]
X_valid_y1 = X_valid[y_valid==1]
# + id="j3UzUymkyyy2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cbaef53c-2708-43ae-bb54-e12e90033555"
X_train.shape
# + id="VzDCf3styyzB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="59a4d084-542d-4d6b-f882-4e93b24bb10c"
X_train_y0.shape
# + id="c-JtRrOqyyzH" colab_type="code" colab={}
# (sample, channel, lookback, feature) -> (sample, lookback, feature).
X_train = X_train.reshape(X_train.shape[0], lookback, n_features)
X_test = X_test.reshape(X_test.shape[0], lookback, n_features)
X_valid = X_valid.reshape(X_valid.shape[0], lookback, n_features)
X_train_y0 = X_train_y0.reshape(X_train_y0.shape[0], lookback, n_features)
X_train_y1 = X_train_y1.reshape(X_train_y1.shape[0], lookback, n_features)
X_valid_y0 = X_valid_y0.reshape(X_valid_y0.shape[0], lookback, n_features)
X_valid_y1 = X_valid_y1.reshape(X_valid_y1.shape[0], lookback, n_features)
# + id="zqg4frnfyyzR" colab_type="code" colab={}
# Initialize a scaler using the training data.
scaler = StandardScaler().fit(flatten(X_train_y0))
X_train_y0_scaled = scale(X_train_y0, scaler)
X_train_y1_scaled = scale(X_train_y1, scaler)
X_train_scaled = scale(X_train, scaler)
# + id="UEY3ThchyyzZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="e03675c8-a5a2-4b7e-d5ed-ee885daf3142"
# Test scaling validity.
a = flatten(X_train_y0_scaled)
print('Column-wise mean (should be all zeros):', mean(a, axis=0).round(6))
print('Column-wise variance (should be all ones):', var(a, axis=0))
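As a quick consistency check away from the stock data, the toy cell below reproduces the fit-on-last-timestep scaling scheme (`flatten` + `scale` above) on synthetic windows: after transforming, the last timestep of every window has near-zero mean per feature, by construction. All sizes here are arbitrary.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X3d = rng.normal(loc=5.0, scale=2.0, size=(100, 5, 3))  # (samples, timesteps, features)

last_steps = X3d[:, -1, :]               # what flatten() extracts
scaler = StandardScaler().fit(last_steps)

X3d_scaled = X3d.copy()
for i in range(X3d_scaled.shape[0]):     # what scale() applies per window
    X3d_scaled[i] = scaler.transform(X3d_scaled[i])

print(np.abs(X3d_scaled[:, -1, :].mean(axis=0)).max())  # ~0 by construction
```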
# + id="7xv9aimpyyzf" colab_type="code" colab={}
# Scale test and validation sets.
X_valid_scaled = scale(X_valid, scaler)
X_valid_y0_scaled = scale(X_valid_y0, scaler)
X_test_scaled = scale(X_test, scaler)
# + id="IXkULB_Hyyzl" colab_type="code" colab={}
_, timesteps, n_features = X_train_y0_scaled.shape
# + id="iC2LW6CMyyzu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 489} outputId="9d9d8b11-f896-4d9f-f479-b6bd104da15e"
LSTM_autoencoder = Sequential()
# Encoder.
LSTM_autoencoder.add(LSTM(units=32,
activation='relu',
input_shape=(timesteps, n_features),
return_sequences=True))
LSTM_autoencoder.add(LSTM(units=16, activation='relu', return_sequences=False))
LSTM_autoencoder.add(RepeatVector(timesteps))
# Decoder.
LSTM_autoencoder.add(LSTM(units=16, activation='relu', return_sequences=True))
LSTM_autoencoder.add(LSTM(units=32, activation='relu', return_sequences=True))
LSTM_autoencoder.add(TimeDistributed(Dense(n_features)))
# Keep the parameter count below the number of non-dropped-out features.
LSTM_autoencoder.summary()
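The summary's parameter counts can be cross-checked by hand: a Keras LSTM layer has four gates, each with an input kernel, a recurrent kernel, and a bias. The sketch below uses an illustrative `n_features = 30`; the real value comes from the ticker dataset.

```python
def lstm_params(n_in, n_units):
    # 4 gates x (input kernel + recurrent kernel + bias) per unit.
    return 4 * (n_in + n_units + 1) * n_units

n_features = 30  # illustrative; the notebook derives this from the data
print(lstm_params(n_features, 32))  # first encoder LSTM -> 8064
print(lstm_params(32, 16))          # second encoder LSTM -> 3136
```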
# + id="xCGKhCicyyzz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="fce83dd0-ef71-4d32-cd79-d86161a4df26"
# LSTM autoencoder model for rare stock event prediction.
# Path to model weights (saved periodically).
filepath = path + 'LSTM_autoencoder.h5'
if exists(filepath):
LSTM_autoencoder = load_model(filepath)
else:
# Gradient descent optimization.
optimizer = Adam(lr=learning_rate, clipnorm=1., clipvalue=0.5)
# Training configuration.
LSTM_autoencoder.compile(loss='mean_squared_error', optimizer=optimizer)
# Save model weights after each epoch if validation loss decreased.
checkpointer = ModelCheckpoint(filepath=filepath,
save_best_only=True,
verbose=1)
# Control learning rate schedule when validation is not improving.
reduce_lr = ReduceLROnPlateau(factor=0.1,
patience=epochs // 25,
verbose=1,
min_lr=learning_rate / 1000)
# Various graphics.
tbc = TensorBoardColab()
    # Stop training if the loss ever becomes NaN (shouldn't happen).
    term_on_NaN = TerminateOnNaN()
history = LSTM_autoencoder.fit(X_train_y0_scaled,
X_train_y0_scaled,
epochs=epochs,
batch_size=batch_size,
validation_data=(X_valid_y0_scaled,
X_valid_y0_scaled),
callbacks=[checkpointer,
reduce_lr,
TensorBoardColabCallback(tbc),
term_on_NaN],
verbose=1).history
# + id="cw-O3G2nLjXB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="0f118d10-2675-45f5-f99b-c294c9b556a3"
# Print training and test loss histories.
try:
    plt.plot(history['loss'])
    plt.plot(history['val_loss'])
except NameError:
    # 'history' is undefined when the model was loaded from file.
    pass
# + id="PbAw7kQDyyz8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="9a8e7803-052b-4323-ec1a-800c96dde327"
print('Reconstruction error should be less than one.')
train_x_predictions = LSTM_autoencoder.predict(X_train_scaled)
mse = mean(power(flatten(X_train_scaled) - flatten(train_x_predictions), 2),
axis=1)
error = DataFrame({'Reconstruction_error': mse, 'True_class': y_train.tolist()})
groups = error.groupby('True_class')
fig, ax = plt.subplots()
for name, group in groups:
ax.plot(group.index,
group.Reconstruction_error,
marker='o',
ms=3.5,
linestyle='',
label = 'Break' if name == 1 else 'Normal')
ax.legend()
plt.title('Reconstruction error for different classes')
plt.ylabel('Reconstruction error')
plt.xlabel('Data point index')
plt.show()
# + id="hZvmE6JIyy0C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="7ee5e493-bf05-4c20-ce7a-d06142770b5c"
# Predictions on validation data.
valid_x_predictions = LSTM_autoencoder.predict(X_valid_scaled)
mse = mean(power(flatten(X_valid_scaled) - flatten(valid_x_predictions), 2),
axis=1)
error = DataFrame({'Reconstruction_error': mse, 'True_class': y_valid.tolist()})
precision_rt, recall_rt, threshold_rt = precision_recall_curve(error.True_class,
error.Reconstruction_error)
plt.plot(threshold_rt, precision_rt[1:], label='Precision')
plt.plot(threshold_rt, recall_rt[1:], label='Recall')
plt.title('Precision and recall for different threshold values')
plt.xlabel('Threshold')
plt.ylabel('Precision/Recall')
plt.legend()
plt.show()
# + id="oowOhqFXyy0G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="6d1bdf49-9bbf-44be-fc86-0c5e0826d4c3"
# Predictions on testing data.
test_x_predictions = LSTM_autoencoder.predict(X_test_scaled)
mse = mean(power(flatten(X_test_scaled) - flatten(test_x_predictions), 2),
axis=1)
error = DataFrame({'Reconstruction_error': mse, 'True_class': y_test.tolist()})
threshold_fixed = 0.3
groups = error.groupby('True_class')
fig, ax = plt.subplots()
for name, group in groups:
ax.plot(group.index,
group.Reconstruction_error,
marker='o',
ms=3.5,
linestyle='',
label= 'Break' if name == 1 else 'Normal')
ax.hlines(threshold_fixed,
ax.get_xlim()[0],
ax.get_xlim()[1],
colors='r',
zorder=100,
label='Threshold')
ax.legend()
plt.title('Reconstruction error for different classes')
plt.ylabel('Reconstruction error')
plt.xlabel('Data point index')
plt.show()
# + id="1UzmkPL0yy0K" colab_type="code" colab={}
pred_y = [1 if e > threshold_fixed else 0 for e in error.Reconstruction_error.values]
conf_matrix = confusion_matrix(error.True_class, pred_y)
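The fixed-threshold classification above is just an element-wise comparison. The toy cell below walks through the same step and tallies the confusion-matrix counts by hand; all numbers are made up for illustration.

```python
import numpy as np

errors = np.array([0.10, 0.20, 0.90, 0.40, 0.05, 0.70])  # reconstruction errors
truth  = np.array([0,    0,    1,    0,    0,    1])     # true classes
threshold = 0.3

pred = (errors > threshold).astype(int)
tp = int(((pred == 1) & (truth == 1)).sum())
fp = int(((pred == 1) & (truth == 0)).sum())
fn = int(((pred == 0) & (truth == 1)).sum())
tn = int(((pred == 0) & (truth == 0)).sum())
print(tp, fp, fn, tn)  # 2 1 0 3
```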
# + id="BYZpiO7Xyy0L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="77a3b83a-e924-4ca3-f069-f626d1bdaec9"
heatmap(conf_matrix,
xticklabels=labels,
yticklabels=labels,
annot=True,
fmt='d')
plt.title('Confusion matrix')
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.show()
# + id="Js7P_pZZmu_R" colab_type="code" colab={}
false_pos_rate, true_pos_rate, thresholds = roc_curve(error.True_class,
error.Reconstruction_error)
roc_auc = auc(false_pos_rate, true_pos_rate)
# + id="XlMlT2E2yy0O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="d184cdde-ca8d-4872-dfd6-3ce448259591"
plt.plot(false_pos_rate, true_pos_rate)
plt.plot([0, 1], [0, 1])
plt.title('Receiver operating characteristic curve (ROC)')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print('AUC = ' + str(round(roc_auc, 3)))
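The AUC reported above has a useful probabilistic reading: it equals the probability that a randomly chosen positive example gets a higher reconstruction error than a randomly chosen negative one. A small rank-based sketch (toy scores, not the notebook's data):

```python
import numpy as np

def auc_rank(scores, labels):
    # Probability a random positive outranks a random negative (ties count 1/2).
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = np.array([0.10, 0.40, 0.35, 0.80])
labels = np.array([0, 0, 1, 1])
print(auc_rank(scores, labels))  # 0.75
```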
# LSTM_autoencoder.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="BJegE3xDGDWy"
# $$
# \text{This is the cut-down companion Jupyter notebook of Chapter 5, Variational Quantum Eigensolver (VQE) Algorithm, of the book titled:}$$
# $$\text{ "Quantum Chemistry and Computing for the Curious: Illustrated with Python and Qiskit® code" and with ISBN-13: 978-1803243900.}$$
# + [markdown] id="zfvF0_5uIplY"
# The following MIT license only applies to the code, and not to the text and images. The authors are not granting a license to replicate or reuse the text and images in the companion Jupyter notebook.
#
# # MIT License
#
# Copyright (c) 2022 Packt
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# + [markdown] id="B-c-ynQ9eMA5"
# # 5. Variational Quantum Eigensolver (VQE) Algorithm
#
# + [markdown] id="-VltqWatg_ND"
# # Technical requirements
#
# ## Installing NumPy, and Qiskit and importing various modules
# Install NumPy with the following command:
# + id="7XhpijAbD1v4"
pip install numpy
# + [markdown] id="e86gdgsLD_pU"
# Install Qiskit with the following command:
# + id="5W0P77WaD7Yh"
pip install qiskit
# + [markdown] id="S3Qs6ee42nIF"
# Install Qiskit visualization support with the following command:
# + id="QTMbSY-62kEN"
pip install 'qiskit[visualization]'
# + [markdown] id="LM60UkgZVLaI"
# Install Qiskit Nature with the following command:
# + id="bXRZVmawVN3K"
pip install qiskit-nature
# + [markdown] id="pREHzIBxgoGB"
# Install PySCF with the following command:
# + id="Tu3pFrmffNhZ"
pip install pyscf
# + [markdown] id="_S1V_zmZGD1G"
# Install QuTiP with the following command:
# + id="2zcuvbASF-Qe"
pip install qutip
# + [markdown] id="2HSQs87JGuOK"
# Install ASE with the following command:
# + id="JGdC6bR_czCf"
pip install ase
# + [markdown] id="v6sRTZCwoLBw"
# Install PyQMC with the following command:
# + id="qoYWuDLll5Z3"
pip install pyqmc --upgrade
# + [markdown] id="4nIeJ3or_sBT"
# Install h5py with the following command:
# + id="88X3YHN-_ukQ"
pip install h5py
# + [markdown] id="gK8E5qN6miGi"
# Install SciPy with the following command:
# + id="jrkzrEr3mj1t"
pip install scipy
# + [markdown] id="OxaMfQHwEOdg"
# Import NumPy with the following command:
# + id="QhaIzlQ5EN3r"
import numpy as np
# + [markdown] id="6PSJLYf6cMeH"
# Import Matplotlib, a comprehensive library for creating static, animated, and interactive visualizations in Python with the following command:
#
#
# + id="nJSPiCdFcG-9"
import matplotlib.pyplot as plt
# + [markdown] id="N9_Mtr8sEl3I"
# Import the required functions and class methods. The array_to_latex() function returns a LaTeX representation of a complex array with dimension 1 or 2:
# + id="wplPi--1ogzl"
from qiskit.visualization import array_to_latex, plot_bloch_vector, plot_bloch_multivector, plot_state_qsphere, plot_state_city
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, transpile
from qiskit import execute, Aer
import qiskit.quantum_info as qi
from qiskit.extensions import Initialize
from qiskit.providers.aer import extensions # import aer snapshot instructions
# + [markdown] id="dVEh1h6Nb975"
# Import Qiskit Nature libraries with the following commands:
#
#
#
#
#
#
#
# + id="Hhf24btZbj7x"
from qiskit import Aer
from qiskit_nature.drivers import UnitsType, Molecule
from qiskit_nature.drivers.second_quantization import ElectronicStructureDriverType, ElectronicStructureMoleculeDriver
from qiskit_nature.problems.second_quantization import ElectronicStructureProblem
from qiskit_nature.mappers.second_quantization import ParityMapper, JordanWignerMapper, BravyiKitaevMapper
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.transformers.second_quantization.electronic import ActiveSpaceTransformer, FreezeCoreTransformer
from qiskit_nature.operators.second_quantization import FermionicOp
from qiskit_nature.circuit.library.initial_states import HartreeFock
from qiskit_nature.circuit.library.ansatzes import UCCSD
# + [markdown] id="znAo5E9yi4n7"
# Import the Qiskit Nature property framework with the following command:
# + id="P0qUpRxujCJq"
from qiskit_nature.properties import Property, GroupedProperty
# + [markdown] id="vn7jUiP7jUig"
# Import the ElectronicEnergy property with the following command:
# + id="3GUTmFRiWB8O"
# https://qiskit.org/documentation/nature/tutorials/08_property_framework.html
from qiskit_nature.properties.second_quantization.electronic import (
ElectronicEnergy,
ElectronicDipoleMoment,
ParticleNumber,
AngularMomentum,
Magnetization,
)
# + [markdown] id="4F9Z9O_zjkY3"
# Import the ElectronicIntegrals property with the following command:
# + id="vL7qxdOobzQ_"
from qiskit_nature.properties.second_quantization.electronic.integrals import (
ElectronicIntegrals,
OneBodyElectronicIntegrals,
TwoBodyElectronicIntegrals,
IntegralProperty,
)
from qiskit_nature.properties.second_quantization.electronic.bases import ElectronicBasis
# + [markdown] id="64cFXNNGkGuI"
# Import the Qiskit Aer statevector simulator and various algorithms with the following commands:
# + id="IA8RPyqukF4d"
from qiskit.providers.aer import StatevectorSimulator
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit_nature.algorithms import VQEUCCFactory, GroundStateEigensolver, NumPyMinimumEigensolverFactory, BOPESSampler
from qiskit.algorithms import NumPyMinimumEigensolver, VQE, HamiltonianPhaseEstimation, PhaseEstimation
from qiskit.circuit.library import TwoLocal
from qiskit.algorithms.optimizers import QNSPSA
from qiskit.opflow import StateFn, PauliExpectation, CircuitSampler, PauliTrotterEvolution
from functools import partial as apply_variation
# + [markdown] id="3q_1g7gbcZ1c"
# Import the PySCF gto and scf libraries with the following command:
# + id="AipShC3gcY0E"
from pyscf import gto, scf
# + [markdown] id="p7AEPOIM8jKr"
# Import the PyQMC API library with the following command:
# + id="nN8IdIUz8CTF"
import pyqmc.api as pyq
# + [markdown] id="jvZWffRz_nAS"
# Import h5py with the following command:
#
#
# + id="kptUG-hEAL3H"
import h5py
# + [markdown] id="6BY3rD8LW6NB"
# Import the ASE libraries, the Atoms object, molecular data, and visualizations with the following commands:
# + id="VW3r3lIJ3QJg"
from ase import Atoms
from ase.build import molecule
from ase.visualize import view
# + [markdown] id="koU0wN7hy2W9"
# Import the math libraries with the following commands:
# + id="NV8yxxgPywSj"
import cmath
import math
# + [markdown] id="SpAPcsCe5QaG"
# Import Python's statistical functions provided by the SciPy package with the following command:
# + id="Qc2EdXBO5O-U"
import scipy.stats as stats
# + [markdown] id="3RmYMhFPy5t8"
# Import QuTiP with the following command:
#
#
#
#
# + id="Fk7gy7fQzAri"
import qutip
# + [markdown] id="dMbTi7KfP3y9"
# Import time and datetime with the following command:
# + id="E8RSchxzrN4T"
import time, datetime
# + [markdown] id="EsiDs-xy4BXf"
# Import pandas and os.path with the following commands:
# + id="4BIIrijz3_t1"
import pandas as pd
import os.path
# + [markdown] id="7bPyD8rkBoCi"
# # 5.1. Variational method
# + [markdown] id="fLyYloHBUQoV"
# ## 5.1.2. Variational Monte Carlo methods
# + id="2WInFTkyQxTQ"
def p(x):
if x < 0:
y = 0
else:
y = np.exp(-x)
return(y)
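The target `p` above is the Exp(1) density (with the x < 0 cutoff), so it should integrate to 1 and have mean 1. A quick numerical check using a plain trapezoidal sum over a truncated grid:

```python
import numpy as np

xs = np.linspace(0, 50, 200001)  # truncate the negligible tail at x = 50
dx = xs[1] - xs[0]
density = np.exp(-xs)

integral = (0.5 * (density[:-1] + density[1:]) * dx).sum()
print(round(integral, 6))  # 1.0
```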
# + id="JFVMuHUUfAi1"
n = 10000 # Size of the Markov chain stationary distribution
# Use np.linspace to create an array of n numbers between 0 and n
index = np.linspace(0, n, num=n)
x = np.linspace(0, n, num=n)
x[0] = 3 # Initialize to 3
for i in range(1, n):
current_x = x[i-1]
# We add a N(0,1) random number to x
proposed_x = current_x + stats.norm.rvs(loc=0, scale=1, size=1, random_state=None)
A = min(1, p(proposed_x)/p(current_x))
r = np.random.uniform(0,1) # Generate a uniform random number in [0, 1]
if r < A:
        x[i] = proposed_x # Accept the move with probability min(1, A)
else:
x[i] = current_x # Otherwise "reject" move, and stay where we are
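Since the target is Exp(1), the chain's long-run average should settle near E[X] = 1. The self-contained rerun below repeats the sampler with a fixed seed and a short burn-in so the estimate is reproducible; the step count and burn-in length are arbitrary choices.

```python
import numpy as np

rng = np.random.RandomState(42)

def target(x):
    # Same unnormalized target as p(x) above.
    return 0.0 if x < 0 else np.exp(-x)

x_cur = 3.0
samples = []
for _ in range(20000):
    proposal = x_cur + rng.normal()  # N(0, 1) random-walk proposal
    if rng.uniform() < min(1.0, target(proposal) / target(x_cur)):
        x_cur = proposal             # accept; otherwise stay put
    samples.append(x_cur)

print(np.mean(samples[2000:]))  # should be close to 1
```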
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="o0kfuydBsikR" outputId="d815bbfa-8d83-44a2-f595-eb103807818a"
plt.plot(index, x, label="Trace plot")
plt.xlabel('Index')
plt.ylabel('MH value')
plt.legend()
plt.show()
# + [markdown] id="JZsstBNM5p2t"
# Figure 5.2 – Plot of the locations visited by the Markov chain x
# + colab={"base_uri": "https://localhost:8080/"} id="MXba7Tao8P-c" outputId="2cdc5f59-0b8b-4089-8b66-fcf013f6aee4"
q25, q75 = np.percentile(x, [25, 75])
bin_width = 2 * (q75 - q25) * len(x) ** (-1/3)
bins = round((x.max() - x.min()) / bin_width)
print("Freedman–Diaconis number of bins:", bins)
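The same Freedman-Diaconis rule (bin width = 2 * IQR * n^(-1/3)) can be exercised on a synthetic exponential sample, which mirrors the stationary distribution of the chain above; the sample size and seed here are arbitrary.

```python
import numpy as np

rng = np.random.RandomState(0)
sample = rng.exponential(size=1000)

q25, q75 = np.percentile(sample, [25, 75])
bin_width = 2 * (q75 - q25) * len(sample) ** (-1 / 3)
n_bins = round((sample.max() - sample.min()) / bin_width)
print(n_bins)
```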
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="is8MAaW17KQR" outputId="87baca88-4acd-4940-98f8-4f9e05d0ef8c"
plt.hist(x, density=True, bins=bins)
plt.ylabel('Density')
plt.xlabel('x');
# + [markdown] id="daX3yFS4yy06"
# Figure 5.3 – Histogram of the Markov chain x
# + id="O8uSBgdlNCiY"
def run_PySCF(molecule, pyqmc=True, show=True):
# Reset the files
for fname in ['mf.hdf5','optimized_wf.hdf5','vmc_data.hdf5','dmc.hdf5']:
if os.path.isfile(fname):
os.remove(fname)
mol_PySCF = gto.M(atom = [" ".join(map(str, (name, *coord))) for (name, coord) in molecule.geometry])
mf = scf.RHF(mol_PySCF)
mf.chkfile = "mf.hdf5"
conv, e, mo_e, mo, mo_occ = scf.hf.kernel(mf)
if show:
if conv:
print("PySCF restricted HF (RHF) converged ground-state energy: {:.12f}".format(e))
else:
print("PySCF restricted HF (RHF) ground-state computation failed to converge")
if pyqmc:
        # Construct a Slater-Jastrow wave function from the pyscf output.
        pyq.OPTIMIZE("mf.hdf5",
                     "optimized_wf.hdf5",  # Store optimized parameters in this file.
                     nconfig=100,          # Optimize using this many Monte Carlo samples/configurations.
                     max_iterations=4,     # 4 optimization steps.
                     verbose=False)
with h5py.File("optimized_wf.hdf5") as f:
iter = f['iteration']
energy = f['energy']
error = f['energy_error']
l = energy.shape[0]
e = energy[l-1]
err = error[l-1]
if show:
if err < 0.1:
print("Iteration, Energy, Error")
for k in iter:
print("{}: {:.4f} {:.4f}".format(k, energy[k], error[k]))
print("PyQMC Monte Carlo converged ground-state energy: {:.12f}, error: {:.4f}".format(e, err))
else:
print("PyQMC Monte Carlo failed to converge")
return conv, e
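The `gto.M` call inside `run_PySCF` builds its atom list by flattening each `(name, [x, y, z])` pair from the Qiskit Nature `Molecule` geometry into a PySCF atom string. The formatting step in isolation, on an illustrative H2 geometry:

```python
# Illustrative geometry in the same format as Molecule(geometry=...) below.
geometry = [('H', [0.0, 0.0, 0.0]), ('H', [0.0, 0.0, 0.735])]
atoms = [" ".join(map(str, (name, *coord))) for (name, coord) in geometry]
print(atoms)  # ['H 0.0 0.0 0.0', 'H 0.0 0.0 0.735']
```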
# + [markdown] id="HvCCPvHW_2Q2"
# ## 5.1.3. Quantum Phase Estimation (QPE)
# + id="Bag4FJG5CoJv"
def U(theta):
unitary = QuantumCircuit(1)
unitary.p(np.pi*2*theta, 0)
return unitary
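QPE reads the phase of `U` out as an n-bit binary fraction, so the estimate is exact only when theta is a multiple of 1/2^n; that is why the cells below pick theta = 1/2 + 1/4 + 1/8 for a 3-qubit estimate. A classical sketch of the rounding behaviour:

```python
def nearest_binary_fraction(theta, n_bits):
    # Best phase value representable with n_bits evaluation qubits.
    return round(theta * 2**n_bits) / 2**n_bits

print(nearest_binary_fraction(0.875, 3))  # 0.875 -- exactly representable
print(nearest_binary_fraction(0.8, 3))    # 0.75  -- rounded to 3 bits
```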
# + id="GMc84XyLDllg"
def do_qpe(unitary, nqubits=3, show=True):
state_in = QuantumCircuit(1)
state_in.x(0)
pe = PhaseEstimation(num_evaluation_qubits=nqubits, quantum_instance=quantum_instance)
result = pe.estimate(unitary, state_in)
phase_out = result.phase
if show:
print("Number of qubits: {}, QPE phase estimate: {}".format(nqubits, phase_out))
return(phase_out)
# + colab={"base_uri": "https://localhost:8080/"} id="-BClSbzXDu-v" outputId="9437ec3e-3725-464f-b67c-7e2f37d91c87"
quantum_instance = QuantumInstance(backend = Aer.get_backend('aer_simulator_statevector'))
theta = 1/2 + 1/4 + 1/8
print("theta: {}".format(theta))
unitary = U(theta)
result = do_qpe(unitary, nqubits=3)
# + colab={"base_uri": "https://localhost:8080/"} id="75807vgGEde2" outputId="43ab94bd-eab5-4ccc-c6bc-9819b28595f1"
theta = 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 + 1/256
print("theta: {}".format(theta))
unitary = U(theta)
result = do_qpe(unitary, nqubits=8)
# + [markdown] id="cbOkuGV3UB8l"
# ## 5.1.4. Description of the VQE algorithm
# + [markdown] id="So7eKKk7vLOb"
# ### Trial wavefunctions
#
# ### Setting-up the VQE solver
#
# + id="8pG_4qK4W4PM"
quantum_instance = QuantumInstance(backend = Aer.get_backend('aer_simulator_statevector'))
# + id="VhOBLKHgwinZ"
numpy_solver = NumPyMinimumEigensolver()
# + id="LyQzqQ32W88Q"
tl_circuit = TwoLocal(rotation_blocks = ['h', 'rx'], entanglement_blocks = 'cz',
entanglement='full', reps=2, parameter_prefix = 'y')
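A rough sanity check of the ansatz size: in this `TwoLocal` circuit the `h` rotations and `cz` entanglers carry no parameters, each `rx` layer carries one parameter per qubit, and (with Qiskit's default final rotation layer) `reps=2` yields three rotation layers. A sketch with an illustrative 4-qubit register:

```python
def twolocal_rx_params(n_qubits, reps):
    # One rx parameter per qubit per rotation layer; reps + 1 layers by default.
    return (reps + 1) * n_qubits

print(twolocal_rx_params(4, 2))  # 12 parameters for 4 qubits, reps=2
```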
# + id="pg9f1GE3XAYg"
vqe_tl_solver = VQE(ansatz = tl_circuit,
quantum_instance = QuantumInstance(Aer.get_backend('aer_simulator_statevector')))
# + id="PkxaNNdRxk4q"
vqe_ucc_solver = VQEUCCFactory(quantum_instance, ansatz=tl_circuit)
# + id="Ihi0Sbo0WFIk"
qnspsa_loss = []
def qnspsa_callback(nfev, x, fx, stepsize, accepted):
qnspsa_loss.append(fx)
# + [markdown] id="LM65eKkliwmY"
# # 5.2. Example chemical calculations
# + id="dfmNsOkfiwmZ"
def get_particle_number(problem, show=True):
particle_number = problem.grouped_property_transformed.get_property("ParticleNumber")
num_particles = (particle_number.num_alpha, particle_number.num_beta)
num_spin_orbitals = particle_number.num_spin_orbitals
if show:
print("Number of alpha electrons: {}".format(particle_number.num_alpha))
print("Number of beta electrons: {}".format(particle_number.num_beta))
print("Number of spin orbitals: {}".format(num_spin_orbitals))
return particle_number
# + id="jzxFDQN2iwmZ"
def fermion_to_qubit(f_op, second_q_op, mapper, truncate=20, two_qubit_reduction=False, z2symmetry_reduction=None, show=True):
if show:
print("Qubit Hamiltonian operator")
dmap = {"Jordan-Wigner": JordanWignerMapper(), "Parity": ParityMapper(), "Bravyi-Kitaev": BravyiKitaevMapper()}
qubit_op = None
qubit_converter = None
for k, v in dmap.items():
if k == mapper:
if show:
print("{} transformation ". format(mapper))
qubit_converter = QubitConverter(v, two_qubit_reduction=two_qubit_reduction, z2symmetry_reduction=z2symmetry_reduction)
if two_qubit_reduction:
qubit_op = qubit_converter.convert(second_q_op[0], num_particles=f_op.num_particles)
else:
qubit_op = qubit_converter.convert(second_q_op[0])
n_items = len(qubit_op)
if show:
print("Number of items in the Pauli list:", n_items)
if n_items <= truncate:
print(qubit_op)
else:
print(qubit_op[0:truncate])
return qubit_op, qubit_converter
# + id="7KtqcWwYiwma"
def run_vqe(name, f_op, qubit_converter, solver, show=True):
calc = GroundStateEigensolver(qubit_converter, solver)
start = time.time()
ground_state = calc.solve(f_op)
elapsed = str(datetime.timedelta(seconds = time.time()-start))
if show:
print("Running the VQE using the {}".format(name))
print("Elapsed time: {} \n".format(elapsed))
print(ground_state)
return ground_state
# + id="DrdDPmSy5lG0"
def run_qpe(particle_number, qubit_converter, qubit_op, n_ancillae=3, num_time_slices = 1, show=True):
initial_state = HartreeFock(particle_number.num_spin_orbitals,
(particle_number.num_alpha,
particle_number.num_beta), qubit_converter)
state_preparation = StateFn(initial_state)
evolution = PauliTrotterEvolution('trotter', reps=num_time_slices)
qpe = HamiltonianPhaseEstimation(n_ancillae, quantum_instance=quantum_instance)
result = qpe.estimate(qubit_op, state_preparation, evolution=evolution)
if show:
print("\nQPE initial Hartree Fock state")
display(initial_state.draw(output='mpl'))
eigv = result.most_likely_eigenvalue
print("QPE computed electronic ground state energy (Hartree): {}".format(eigv))
return eigv
# + id="-sLSVAzXC4_a"
def plot_energy_landscape(energy_surface_result):
if len(energy_surface_result.points) > 1:
plt.plot(energy_surface_result.points, energy_surface_result.energies, label="VQE Energy")
        plt.xlabel('Atomic distance deviation (Angstrom)')
plt.ylabel('Energy (hartree)')
plt.legend()
plt.show()
else:
print("Total Energy is: ", energy_surface_result.energies[0], "hartree")
print("(No need to plot, only one configuration calculated.)")
return
# + id="XBO9ULlLcW2B"
def plot_loss(loss, label, target):
plt.figure(figsize=(12, 6))
plt.plot(loss, 'tab:green', ls='--', label=label)
plt.axhline(target, c='tab:red', ls='--', label='target')
plt.ylabel('loss')
plt.xlabel('iterations')
plt.legend()
# + id="eN9lS_TNce2k"
def solve_ground_state(
molecule,
mapper ="Parity",
num_electrons=None,
num_molecular_orbitals=None,
transformers=None,
two_qubit_reduction=False,
z2symmetry_reduction = "Auto",
name_solver='NumPy exact solver',
solver=NumPyMinimumEigensolver(),
plot_bopes=False,
perturbation_steps=np.linspace(-1, 1, 3),
pyqmc=True,
n_ancillae=3,
num_time_slices=1,
loss=[],
label=None,
target=None,
show=True
):
# Defining the electronic structure molecule driver
driver = ElectronicStructureMoleculeDriver(molecule, basis='sto3g', driver_type=ElectronicStructureDriverType.PYSCF)
# Splitting into classical and quantum
if num_electrons != None and num_molecular_orbitals != None:
split = ActiveSpaceTransformer(num_electrons=num_electrons, num_molecular_orbitals=num_molecular_orbitals)
else:
split = None
# Defining a fermionic Hamiltonian operator
if split != None:
fermionic_hamiltonian = ElectronicStructureProblem(driver, [split])
elif transformers != None:
fermionic_hamiltonian = ElectronicStructureProblem(driver, transformers=transformers)
else:
fermionic_hamiltonian = ElectronicStructureProblem(driver)
# Use the second_q_ops() method [Qiskit_Nat_3] which returns a list of second quantized operators
second_q_op = fermionic_hamiltonian.second_q_ops()
# Get particle number
particle_number = get_particle_number(fermionic_hamiltonian, show=show)
if show:
# We set truncation to 1000 with the method set_truncation(1000)
second_q_op[0].set_truncation(1000)
# then we print the first 20 terms of the fermionic Hamiltonian operator of the molecule
print("Fermionic Hamiltonian operator")
print(second_q_op[0])
# Use the function fermion_to_qubit() to convert a fermionic operator to a qubit operator
if show:
print(" ")
qubit_op, qubit_converter = fermion_to_qubit(fermionic_hamiltonian, second_q_op, mapper=mapper, two_qubit_reduction=two_qubit_reduction, z2symmetry_reduction=z2symmetry_reduction, show=show)
    # Run the PySCF RHF method
if show:
print(" ")
conv, e = run_PySCF(molecule, pyqmc=pyqmc, show=show)
# Run QPE
eigv = run_qpe(particle_number, qubit_converter, qubit_op, n_ancillae=n_ancillae, num_time_slices=num_time_slices, show=show)
# Run VQE
if show:
print(" ")
ground_state = run_vqe(name_solver, fermionic_hamiltonian, qubit_converter, solver, show=show)
# Plot loss function
if loss:
plot_loss(loss, label, target)
if plot_bopes:
# Compute the potential energy surface as follows:
energy_surface = BOPESSampler(gss=GroundStateEigensolver(qubit_converter, solver), bootstrap=False)
# Fix enables using BOPESS together with Unitary Coupled Cluster (UCC) factory ansatz
# Set default to an empty dictionary instead of None:
energy_surface._points_optparams = {}
energy_surface_result = energy_surface.sample(fermionic_hamiltonian, perturbation_steps)
# Plot the energy as a function of atomic separation
plot_energy_landscape(energy_surface_result)
return fermionic_hamiltonian, particle_number, qubit_op, qubit_converter, ground_state
# + [markdown] id="2w4yOY4Tir3R"
# ## 5.2.1. Hydrogen molecule
# + id="hppW5zDalVCf"
hydrogen_molecule = Molecule(geometry=[['H', [0., 0., 0.]],
['H', [0., 0., 0.735]]],
charge=0, multiplicity=1)
# + [markdown] id="NiknHhW7S4HK"
# ### Varying the hydrogen molecule
# + id="eN_5Y6gqS4HL"
molecular_variation = Molecule.absolute_stretching
# + id="l88tg61lS4HL"
specific_molecular_variation = apply_variation(molecular_variation, atom_pair=(1, 0))
# + id="pmNyzKSMTEDy"
hydrogen_molecule_stretchable = Molecule(geometry=
[['H', [0., 0., 0.]],
['H', [0., 0., 0.735]]],
charge=0, multiplicity=1,
degrees_of_freedom=[specific_molecular_variation])
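The `absolute_stretching` variation moves the first atom of `atom_pair` along the axis joining the two atoms. A minimal NumPy sketch of that geometry update (an illustration of the idea only, not the Qiskit Nature implementation; `stretch` is a hypothetical helper):

```python
import numpy as np

def stretch(geometry, atom_pair, perturbation):
    """Move atom_pair[0] away from atom_pair[1] by `perturbation`
    along the line joining them (illustrative sketch)."""
    moved, fixed = atom_pair
    p_moved = np.array(geometry[moved][1], dtype=float)
    p_fixed = np.array(geometry[fixed][1], dtype=float)
    unit = (p_moved - p_fixed) / np.linalg.norm(p_moved - p_fixed)
    new_geometry = [list(g) for g in geometry]
    new_geometry[moved][1] = list(p_moved + perturbation * unit)
    return new_geometry

h2 = [['H', [0.0, 0.0, 0.0]], ['H', [0.0, 0.0, 0.735]]]
stretched = stretch(h2, atom_pair=(1, 0), perturbation=0.265)
# the H-H distance becomes 0.735 + 0.265 = 1.0 Angstrom
```

This is the geometric operation the `perturbation_steps` values are applied to when sampling the BOPES below.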
# + [markdown] id="ZOE-J6sPGLgJ"
# ### Solving for the Ground-state
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="byXv0Y18GFah" outputId="cc9c8f3e-208e-41e1-813a-ea169aa7e830"
H2_fermionic_hamiltonian, H2_particle_number, H2_qubit_op, H2_qubit_converter, H2_ground_state = \
solve_ground_state(hydrogen_molecule, mapper="Parity",
two_qubit_reduction=True, z2symmetry_reduction=None,
name_solver='NumPy exact solver', solver=numpy_solver)
# + [markdown] id="tGWPggHM86hT"
# Figure 5.5. Ground-state of the $\text{H}_{2}$ molecule with PySCF RHF and PyQMC Monte Carlo
#
# Figure 5.6. Ground-state of the $\text{H}_{2}$ molecule computed with VQE using the NumPy minimum eigensolver
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Du6sdRFVGFai" outputId="93dbad39-e219-4da8-d4ec-70a128a381c0"
H2_fermionic_hamiltonian, H2_particle_number, H2_qubit_op, H2_qubit_converter, H2_ground_state = \
solve_ground_state(hydrogen_molecule, mapper="Parity",
two_qubit_reduction=True, z2symmetry_reduction=None,
name_solver='Unitary Coupled Cluster (UCC) factory ansatz', solver=vqe_ucc_solver)
# + [markdown] id="DC4pTgwxeZuB"
# Figure 5.7. Ground-state of the $\text{H}_{2}$ molecule with VQE using the UCC factory ansatz.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="l7A3RRA7GFai" outputId="4fc57177-cfe0-4db1-b4b3-3c0775aebb0f"
H2_fermionic_hamiltonian, H2_particle_number, H2_qubit_op, H2_qubit_converter, H2_ground_state = \
solve_ground_state(hydrogen_molecule, mapper="Parity",
two_qubit_reduction=True, z2symmetry_reduction=None,
name_solver='Heuristic ansatz, the Two-Local circuit with SLSQP', solver=vqe_tl_solver)
# + [markdown] id="VnSvZTSWj0x_"
# Figure 5.8. Ground-state of the $\text{H}_{2}$ molecule with VQE using the Two-Local circuit and SLSQP
# + id="JDx_GsjTEyvl"
qnspsa_loss = []
ansatz = tl_circuit
fidelity = QNSPSA.get_fidelity(ansatz, quantum_instance, expectation=PauliExpectation())
qnspsa = QNSPSA(fidelity, maxiter=200, learning_rate=0.01, perturbation=0.7, callback=qnspsa_callback)
# + id="6FCB2ZXkDnfZ"
vqe_tl_QNSPSA_solver = VQE(ansatz=tl_circuit, optimizer=qnspsa,
quantum_instance=quantum_instance)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="NJVk9vZFDZEJ" outputId="174fa77b-2c4d-48e9-b3df-b42af5d51c98"
H2_fermionic_hamiltonian, H2_particle_number, H2_qubit_op, H2_qubit_converter, H2_ground_state = \
solve_ground_state(hydrogen_molecule, mapper="Parity",
two_qubit_reduction=True, z2symmetry_reduction=None, loss=qnspsa_loss, label='QN-SPSA', target=-1.857274810366,
name_solver='Two-Local circuit and the QN-SPSA optimizer', solver=vqe_tl_QNSPSA_solver)
# + [markdown] id="eZJzQtZ_xDcw"
# Figure 5.9 – Ground-state of the $\text{H}_{2}$ molecule with VQE using the Two-Local circuit and QN-SPSA
#
# Figure 5.10 – Plot of the loss function of the VQE using the Two-Local circuit and QN-SPSA for the $\text{H}_{2}$ molecule
# + [markdown] id="R6-QzelYUfAg"
# ### Computing the BOPES
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="2lA7lY1tGqB4" outputId="654148d0-a5e6-4b08-b76f-6a0ca940bf31"
perturbation_steps = np.linspace(-0.5, 2, 25) # 25 equally spaced points from -0.5 to 2, inclusive.
H2_stretchable_fermionic_hamiltonian, H2_stretchable_particle_number, H2_stretchable_qubit_op, H2_stretchable_qubit_converter, H2_stretchable_ground_state = \
solve_ground_state(hydrogen_molecule_stretchable, mapper="Parity",
two_qubit_reduction=True, z2symmetry_reduction=None,
name_solver='NumPy exact solver', solver=numpy_solver,
plot_bopes=True, perturbation_steps=perturbation_steps)
# + [markdown] id="ZmTA538SXFt7"
# Figure 5.12 – Plot of the BOPES of the hydrogen molecule
#
# ## 5.2.2. Lithium hydride molecule
# + id="NfP3LKlTDqQa"
LiH_molecule = Molecule(geometry=[['Li', [0., 0., 0.]],
['H', [0., 0., 1.5474]]],
charge=0, multiplicity=1)
# + [markdown] id="-PbSZl4xF8Ik"
# ### Varying the lithium hydride molecule
# + id="dWs_kPZsFbTs"
LiH_molecule_stretchable = Molecule(geometry=[['Li', [0., 0., 0.]],
['H', [0., 0., 1.5474]]],
charge=0, multiplicity=1,
degrees_of_freedom=[specific_molecular_variation])
# + [markdown] id="lgKIjsNeSN5F"
# ### Solving for the Ground-state
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="S27IqTdwPv_t" outputId="7372513f-5e16-4e1d-b726-40eaf3a13bf4"
LiH_fermionic_hamiltonian, LiH_particle_number, LiH_qubit_op, LiH_qubit_converter, LiH_ground_state = \
solve_ground_state(LiH_molecule, mapper="Parity",
transformers=[FreezeCoreTransformer(freeze_core=True, remove_orbitals=[4, 3])],
two_qubit_reduction=True, z2symmetry_reduction="auto",
name_solver='NumPy exact solver', solver=numpy_solver)
# + [markdown] id="9DhAWQeYXyET"
# Figure 5.13 – Ground-state of the $\text{LiH}$ molecule with PySCF RHF, PyQMC Monte Carlo and QPE
#
# Figure 5.14. Ground-state of the $\text{LiH}$ molecule with VQE using the NumPy minimum eigensolver
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="rEeqmchsUM8d" outputId="8cb9c3e6-f3ea-4783-c3e7-40bc740a5ae5"
LiH_fermionic_hamiltonian, LiH_particle_number, LiH_qubit_op, LiH_qubit_converter, LiH_ground_state = \
solve_ground_state(LiH_molecule, mapper="Parity",
transformers=[FreezeCoreTransformer(freeze_core=True, remove_orbitals=[4, 3])],
two_qubit_reduction=True, z2symmetry_reduction="auto",
name_solver='Heuristic ansatz, the Two-Local circuit with SLSQP', solver=vqe_tl_solver)
# + [markdown] id="-m1VgbftX6Cj"
# Figure 5.15 – Ground-state of the $\text{LiH}$ molecule with VQE using the Two-Local circuit and SLSQP
# + id="iPsEkUA2VfND"
qnspsa_loss = []
ansatz = tl_circuit
fidelity = QNSPSA.get_fidelity(ansatz, quantum_instance, expectation=PauliExpectation())
qnspsa = QNSPSA(fidelity, maxiter=500, learning_rate=0.01, perturbation=0.7, callback=qnspsa_callback)
# + id="vEvtcGM7yHFp"
vqe_tl_QNSPSA_solver = VQE(ansatz=tl_circuit, optimizer=qnspsa,
quantum_instance=quantum_instance)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="401O7P7ZV-FA" outputId="4590e27d-cc63-47f9-af0d-d568f4c27a1f"
LiH_fermionic_hamiltonian, LiH_particle_number, LiH_qubit_op, LiH_qubit_converter, LiH_ground_state = \
solve_ground_state(LiH_molecule, mapper="Parity",
transformers=[FreezeCoreTransformer(freeze_core=True, remove_orbitals=[4, 3])],
two_qubit_reduction=True, z2symmetry_reduction="auto", loss=qnspsa_loss, label='QN-SPSA', target=-1.0703584,
name_solver='Two-Local circuit and the QN-SPSA optimizer', solver=vqe_tl_QNSPSA_solver)
# + [markdown] id="CgnMCPC1Twal"
# Figure 5.16 – Ground-state of the $\text{LiH}$ molecule with VQE using the Two-Local circuit and QN-SPSA
#
# Figure 5.17 – Loss function of the VQE using the Two-Local circuit and QN-SPSA for the $\text{LiH}$ molecule
# + [markdown] id="5yq8fMpwA20w"
# ### Computing the BOPES
#
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Ofz1OF_Sc0Xy" outputId="4beb0cac-746b-4e9b-b06a-4984be1e7df0"
perturbation_steps = np.linspace(-0.8, 0.8, 10) # 10 equally spaced points from -0.8 to 0.8, inclusive.
LiH_stretchable_fermionic_hamiltonian, LiH_stretchable_particle_number, LiH_stretchable_qubit_op, LiH_stretchable_qubit_converter, LiH_stretchable_ground_state = \
solve_ground_state(LiH_molecule_stretchable, mapper="Parity",
transformers=[FreezeCoreTransformer(freeze_core=True, remove_orbitals=[4, 3])],
two_qubit_reduction=True, z2symmetry_reduction="auto",
name_solver='NumPy exact solver', solver=numpy_solver,
plot_bopes=True, perturbation_steps=perturbation_steps)
# + [markdown] id="E55CqTIJBQh5"
# Figure 5.19 – Plot of the Born-Oppenheimer Potential Energy Surface (BOPES) of the $\text{LiH}$ molecule
# + [markdown] id="yxiPpypwFF-Q"
# ## 5.2.3. Macro molecule
#
# + id="fQcyMo8_dAb2"
macro_ASE = Atoms('ONCHHHC', [(1.1280, 0.2091, 0.0000),
(-1.1878, 0.1791, 0.0000),
(0.0598, -0.3882, 0.0000),
(-1.3085, 1.1864, 0.0001),
(-2.0305, -0.3861, -0.0001),
(-0.0014, -1.4883, -0.0001),
(-0.1805, 1.3955, 0.0000)])
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="rczrZIh5dIWD" outputId="c341c84d-4050-48da-9638-78795b25973d"
view(macro_ASE, viewer='x3d')
# + [markdown] id="k5IRl-OkcPGk"
# Figure 5.20 – Macro molecule
# + id="IEl4uNqSHIPh"
molecular_variation = Molecule.absolute_stretching
# + id="4SFmEz-NHpco"
specific_molecular_variation = apply_variation(molecular_variation, atom_pair=(6, 1))
# + id="JdqvLGEgFXN5"
macromolecule = Molecule(geometry=
[['O', [1.1280, 0.2091, 0.0000]],
['N', [-1.1878, 0.1791, 0.0000]],
['C', [0.0598, -0.3882, 0.0000]],
['H', [-1.3085, 1.1864, 0.0001]],
['H', [-2.0305, -0.3861, -0.0001]],
['H', [-0.0014, -1.4883, -0.0001]],
['C', [-0.1805, 1.3955, 0.0000]]],
charge=0, multiplicity=1,
degrees_of_freedom=[specific_molecular_variation])
# + [markdown] id="GYcdp6gtjuK6"
# ### Solving for the Ground-state
# + colab={"base_uri": "https://localhost:8080/"} id="fljwX8YAaQO5" outputId="80d3fae2-a985-4749-8397-a82f8a3986aa"
print("Macro molecule")
print("Using the ParityMapper with two_qubit_reduction=True to eliminate two qubits")
print("Parameters ActiveSpaceTransformer(num_electrons=2, num_molecular_orbitals=2)")
print("Setting z2symmetry_reduction=\"auto\"")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="450d5367-d0ee-4d35-fea2-52ca0b559991" id="UXxbMvIv9DPh"
macro_fermionic_hamiltonian, macro_particle_number, macro_qubit_op, macro_qubit_converter, macro_ground_state = \
solve_ground_state(macromolecule, mapper="Parity",
num_electrons=2, num_molecular_orbitals=2,
two_qubit_reduction=True, z2symmetry_reduction="auto",
name_solver='NumPy exact solver', solver=numpy_solver, pyqmc=False)
# + [markdown] id="8wluc-v-DqcF"
# Figure 5.21 – First 20 terms of the fermionic Hamiltonian operator of the macro molecule
#
# Figure 5.22 – Qubit Hamiltonian operator of the outermost two electrons of the macro molecule
#
# Figure 5.23 – Total and electronic ground state energy of the macro molecule by PySCF and QPE respectively
#
# Figure 5.24 – Ground state of the macro molecule using the NumPy exact minimum eigensolver
#
# ### Computing the BOPES
# + colab={"background_save": true, "base_uri": "https://localhost:8080/", "height": 1000} outputId="633ded70-1d59-4536-b005-6f6ea6f67c22" id="MaYaGpTfNePL"
perturbation_steps = np.linspace(-0.5, 3, 10) # 10 equally spaced points from -0.5 to 3, inclusive.
macro_fermionic_hamiltonian, macro_particle_number, macro_qubit_op, macro_qubit_converter, macro_ground_state = \
solve_ground_state(macromolecule, mapper="Parity",
num_electrons=2, num_molecular_orbitals=2,
two_qubit_reduction=True, z2symmetry_reduction="auto",
name_solver='NumPy exact solver', solver=numpy_solver, pyqmc=False,
plot_bopes=True, perturbation_steps=perturbation_steps)
# + [markdown] id="5xp5rGc2HSk4"
#  of the macro molecule
# + [markdown] id="N9OUnvrLVe2S"
# # Summary
#
# + [markdown] id="CGfBDGZY82zw"
# # Questions
#
# 1. Does the variational theorem apply to excited states?
#
# + cellView="form" id="DtfjLkr-7wsF"
#@title Enter your answer Yes, No or ? for a solution, then execute cell.
answer = "" #@param {type:"string"}
solution = "Yes"
if answer == solution:
print("Correct")
elif answer == '?':
print(solution)
else:
print("Incorrect, please try again")
# + [markdown] id="08jG9COw-eFW"
# 2. True or False: The Metropolis-Hastings method is a way to approximate integration over spatial coordinates.
# + cellView="form" id="mNQGPNdv9f3V"
#@title Enter your answer True, False or ? for a solution, then execute cell.
answer = "" #@param {type:"string"}
solution = "True"
if answer == solution:
print("Correct")
elif answer == '?':
print(solution)
else:
print("Incorrect, please try again")
# + [markdown] id="9nRVSIzr9c0d"
# 3. True or False: VQE is only a quantum computing algorithm and does not require the use of classical computing.
# + cellView="form" id="JDF0IR8l9jOR"
#@title Enter your answer True, False or ? for a solution, then execute cell.
answer = "" #@param {type:"string"}
solution = "False"
if answer == solution:
print("Correct")
elif answer == '?':
print(solution)
else:
print("Incorrect, please try again")
# + [markdown] id="dHljHEO3ACw8"
# 4. Can UCCSD be mapped to qubits?
# + cellView="form" id="8ScHU5wKARKv"
#@title Enter your answer Yes, No or ? for a solution, then execute cell.
answer = "" #@param {type:"string"}
solution = "Yes"
if answer == solution:
print("Correct")
elif answer == '?':
print(solution)
else:
print("Incorrect, please try again")
| Chapter_05_Variational_Quantum_Eigensolver_(VQE)_algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import warnings
import os
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.layers import Bidirectional
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Flatten
from tensorflow.compat.v1.keras.layers import TimeDistributed
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import MaxPooling1D
from tensorflow.keras.layers import ConvLSTM2D
warnings.simplefilter('ignore')
countryName = 'Russia'
nFeatures = 1
nDaysMin = 3
k = 3
nValid = 10
nTest = 10
# -
dataDir = 'C:\\Users\\AMC\\Desktop\\Roshi\\Data'
confirmedFilename = os.path.join(dataDir, 'confirmed_july.csv')
deathsFilename = os.path.join(dataDir, 'deaths_july.csv')
recoveredFilename = os.path.join(dataDir, 'recovered_july.csv')
confirmed = pd.read_csv(confirmedFilename)
confirmed.head()
# split a univariate sequence into samples
def split_sequence(sequence, n_steps, k):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix + k >= len(sequence):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:end_ix+k]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
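A quick check of the windowing on a toy sequence (the same sliding-window logic is reproduced inline so the snippet is self-contained): with `n_steps=3` and `k=2`, each sample holds 3 past values and the target holds the next 2.

```python
import numpy as np

def split_sequence(sequence, n_steps, k):
    # same sliding-window logic as the notebook's split_sequence
    X, y = [], []
    for i in range(len(sequence)):
        end_ix = i + n_steps
        # note the >= also drops the final window that would exactly reach the end
        if end_ix + k >= len(sequence):
            break
        X.append(sequence[i:end_ix])
        y.append(sequence[end_ix:end_ix + k])
    return np.array(X), np.array(y)

X, y = split_sequence(list(range(1, 11)), n_steps=3, k=2)
# X[0] is [1, 2, 3] and y[0] is [4, 5]; X has shape (5, 3), y has shape (5, 2)
```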
# +
def meanAbsolutePercentageError(yTrueList, yPredList):
absErrorList = [np.abs(yTrue - yPred) for yTrue, yPred in zip(yTrueList, yPredList)]
absPcErrorList = [absError/yTrue for absError, yTrue in zip(absErrorList, yTrueList)]
MAPE = 100*np.mean(absPcErrorList)
return MAPE
def meanAbsolutePercentageError_kDay(yTrueListList, yPredListList):
# Store true and predictions for day 1 in a list, day 2 in a list and so on
# Keep each list of these lists in a respective dict with key as day #
yTrueForDayK = {}
yPredForDayK = {}
for i in range(len(yTrueListList[0])):
yTrueForDayK[i] = []
yPredForDayK[i] = []
for yTrueList, yPredList in zip(yTrueListList, yPredListList):
for i in range(len(yTrueList)):
yTrueForDayK[i].append(yTrueList[i])
yPredForDayK[i].append(yPredList[i])
# Get MAPE for each day in a list
MAPEList = []
for i in yTrueForDayK.keys():
MAPEList.append(meanAbsolutePercentageError(yTrueForDayK[i], yPredForDayK[i]))
return np.mean(MAPEList)
def meanForecastError(yTrueList, yPredList):
forecastErrors = [yTrue - yPred for yTrue, yPred in zip(yTrueList, yPredList)]
MFE = np.mean(forecastErrors)
return MFE
def meanAbsoluteError(yTrueList, yPredList):
absErrorList = [np.abs(yTrue - yPred) for yTrue, yPred in zip(yTrueList, yPredList)]
return np.mean(absErrorList)
def meanSquaredError(yTrueList, yPredList):
sqErrorList = [np.square(yTrue - yPred) for yTrue, yPred in zip(yTrueList, yPredList)]
return np.mean(sqErrorList)
def rootMeanSquaredError(yTrueList, yPredList):
return np.sqrt(meanSquaredError(yTrueList, yPredList))
def medianSymmetricAccuracy(yTrueList, yPredList):
'''https://helda.helsinki.fi//bitstream/handle/10138/312261/2017SW001669.pdf?sequence=1'''
logAccRatioList = [np.abs(np.log(yPred/yTrue)) for yTrue, yPred in zip(yTrueList, yPredList)]
MdSA = 100*(np.exp(np.median(logAccRatioList))-1)
return MdSA
def medianSymmetricAccuracy_kDay(yTrueListList, yPredListList):
# Store true and predictions for day 1 in a list, day 2 in a list and so on
# Keep each list of these lists in a respective dict with key as day #
yTrueForDayK = {}
yPredForDayK = {}
for i in range(len(yTrueListList[0])):
yTrueForDayK[i] = []
yPredForDayK[i] = []
for yTrueList, yPredList in zip(yTrueListList, yPredListList):
for i in range(len(yTrueList)):
yTrueForDayK[i].append(yTrueList[i])
yPredForDayK[i].append(yPredList[i])
# Get MdSA for each day in a list
MdSAList = []
for i in yTrueForDayK.keys():
MdSAList.append(medianSymmetricAccuracy(yTrueForDayK[i], yPredForDayK[i]))
return(np.mean(MdSAList))
# -
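Median symmetric accuracy is scale-free and symmetric: a prediction 10% above the truth and the corresponding prediction 10% below give the same error. A self-contained check (restating the formula from `medianSymmetricAccuracy` above):

```python
import numpy as np

def median_symmetric_accuracy(y_true, y_pred):
    # MdSA = 100 * (exp(median(|log(pred / true)|)) - 1)
    log_ratios = [abs(np.log(p / t)) for t, p in zip(y_true, y_pred)]
    return 100 * (np.exp(np.median(log_ratios)) - 1)

over = median_symmetric_accuracy([100.0], [110.0])   # 10% over-prediction
under = median_symmetric_accuracy([110.0], [100.0])  # the symmetric under-prediction
# both are (approximately) 10.0
```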
# Function to get all three frames for a given country
def getCountryCovidFrDict(countryName):
countryCovidFrDict = {}
for key in covidFrDict.keys():
dataFr = covidFrDict[key]
countryCovidFrDict[key] = dataFr[dataFr['Country/Region'] == countryName]
return countryCovidFrDict
# +
# Load all 3 csv files
covidFrDict = {}
covidFrDict['confirmed'] = pd.read_csv(confirmedFilename)
covidFrDict['deaths'] = pd.read_csv(deathsFilename)
covidFrDict['recovered'] = pd.read_csv(recoveredFilename)
countryCovidFrDict = getCountryCovidFrDict(countryName)
# date list
colNamesList = list(countryCovidFrDict['confirmed'])
dateList = [colName for colName in colNamesList if '/20' in colName]
dataList = [countryCovidFrDict['confirmed'][date].iloc[0] for date in dateList]
dataDict = dict(zip(dateList, dataList))
# Only keep the time series from the point where confirmed cases exceeded 100
daysSince = 100
nCasesGreaterDaysSinceList = []
datesGreaterDaysSinceList = []
for key in dataDict.keys():
if dataDict[key] > daysSince:
datesGreaterDaysSinceList.append(key)
nCasesGreaterDaysSinceList.append(dataDict[key])
XList, yList = split_sequence(nCasesGreaterDaysSinceList, nDaysMin, k)
XTrainList = XList[0:len(XList)-(nValid + nTest)]
XValidList = XList[len(XList)-(nValid+nTest):len(XList)-(nTest)]
XTestList = XList[-nTest:]
yTrain = yList[0:len(XList)-(nValid + nTest)]
yValid = yList[len(XList)-(nValid+nTest):len(XList)-(nTest)]
yTest = yList[-nTest:]
print('Total size of data points for LSTM:', len(yList))
print('Size of training set:', len(yTrain))
print('Size of validation set:', len(yValid))
print('Size of test set:', len(yTest))
# Reshape into (samples, timesteps, features) as expected by the LSTM layers
XTrain = XTrainList.reshape((XTrainList.shape[0], XTrainList.shape[1], nFeatures))
XValid = XValidList.reshape((XValidList.shape[0], XValidList.shape[1], nFeatures))
XTest = XTestList.reshape((XTestList.shape[0], XTestList.shape[1], nFeatures))
# -
# # Vanilla LSTM
# +
nNeurons = 100 # number of neurons
nFeatures = 1 # number of input features
bestValidMAPE = 100 # initialize the best validation MAPE to a high value
bestSeed = -1
for seed in range(100):
tf.random.set_seed(seed=seed)
# define model
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XValidNew = XValid.copy()
for day in range(k):
yPred = model.predict(np.float32(XValidNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XValidNew = np.delete(XValidNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XValidNew = np.append(XValidNew, yPred, axis=1)
# for yTrue, yPred in zip(yTest, yPredList):
# print(yTrue, yPred)
MAPE = meanAbsolutePercentageError_kDay(yValid, yPredListList)
print(seed, MAPE)
if MAPE < bestValidMAPE:
print('Updating best MAPE to {}...'.format(MAPE))
bestValidMAPE = MAPE
print('Updating best seed to {}...'.format(seed))
bestSeed = seed
# define model
print('Training model with best seed...')
tf.random.set_seed(seed=bestSeed)
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XTestNew = XTest.copy()
for day in range(k):
yPred = model.predict(np.float32(XTestNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XTestNew = np.delete(XTestNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XTestNew = np.append(XTestNew, yPred, axis=1)
MAPE = meanAbsolutePercentageError_kDay(yTest, yPredListList)
print('Test MAPE:', MAPE)
MdSA = medianSymmetricAccuracy_kDay(yTest, yPredListList)
print('Test MdSA:', MdSA)
yPredVanilla = yPredListList
# -
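The validation and test loops above forecast k days recursively: the model predicts one day, the oldest timestep is dropped from the input window (`np.delete`), and the prediction is appended (`np.append`) to form the next input. A sketch of that window update with a stand-in one-step "model" (a hypothetical predictor that just returns last value + 1, used only to make the mechanics visible):

```python
import numpy as np

def recursive_forecast(window, one_step_model, k):
    """Roll a (samples, timesteps, 1) window forward k steps."""
    window = window.copy()
    preds = []
    for _ in range(k):
        y_pred = one_step_model(window)                         # shape (samples, 1)
        preds.append(y_pred[:, 0].tolist())
        window = np.delete(window, 0, axis=1)                   # drop oldest timestep
        window = np.append(window, y_pred[:, :, None], axis=1)  # append prediction
    return preds

toy_model = lambda w: w[:, -1, :] + 1          # hypothetical stand-in for model.predict
window = np.array([[[1.0], [2.0], [3.0]]])     # one sample, 3 timesteps, 1 feature
forecast = recursive_forecast(window, toy_model, k=3)
# forecast is [[4.0], [5.0], [6.0]]
```

With a trained LSTM in place of `toy_model`, this is exactly the roll-forward performed for each of the k forecast days.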
model.summary()
# # Stacked LSTM
# +
nNeurons = 50
nFeatures = 1
bestValidMAPE = 100
bestSeed = -1
for seed in range(100):
tf.random.set_seed(seed=seed)
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', return_sequences=True, input_shape=(nDaysMin, nFeatures)))
model.add(LSTM(nNeurons, activation='relu'))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XValidNew = XValid.copy()
for day in range(k):
yPred = model.predict(np.float32(XValidNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XValidNew = np.delete(XValidNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XValidNew = np.append(XValidNew, yPred, axis=1)
# for yTrue, yPred in zip(yTest, yPredList):
# print(yTrue, yPred)
MAPE = meanAbsolutePercentageError_kDay(yValid, yPredListList)
print(seed, MAPE)
if MAPE < bestValidMAPE:
print('Updating best MAPE to {}...'.format(MAPE))
bestValidMAPE = MAPE
print('Updating best seed to {}...'.format(seed))
bestSeed = seed
# define model
print('Training model with best seed...')
tf.random.set_seed(seed=bestSeed)
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', return_sequences=True, input_shape=(nDaysMin, nFeatures)))
model.add(LSTM(nNeurons, activation='relu'))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XTestNew = XTest.copy()
for day in range(k):
yPred = model.predict(np.float32(XTestNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XTestNew = np.delete(XTestNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XTestNew = np.append(XTestNew, yPred, axis=1)
MAPE = meanAbsolutePercentageError_kDay(yTest, yPredListList)
print('Test MAPE:', MAPE)
MdSA = medianSymmetricAccuracy_kDay(yTest, yPredListList)
print('Test MdSA:', MdSA)
yPredStacked = yPredListList
# -
yPredStacked
model.summary()
# # Bi-directional LSTM
# +
# define model
nNeurons = 50
nFeatures = 1
bestValidMAPE = 100
bestSeed = -1
for seed in range(100):
tf.random.set_seed(seed=seed)
model = Sequential()
model.add(Bidirectional(LSTM(nNeurons, activation='relu'), input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XValidNew = XValid.copy()
for day in range(k):
yPred = model.predict(np.float32(XValidNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XValidNew = np.delete(XValidNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XValidNew = np.append(XValidNew, yPred, axis=1)
# for yTrue, yPred in zip(yTest, yPredList):
# print(yTrue, yPred)
MAPE = meanAbsolutePercentageError_kDay(yValid, yPredListList)
print(seed, MAPE)
if MAPE < bestValidMAPE:
print('Updating best MAPE to {}...'.format(MAPE))
bestValidMAPE = MAPE
print('Updating best seed to {}...'.format(seed))
bestSeed = seed
# define model
print('Training model with best seed...')
tf.random.set_seed(seed=bestSeed)
model = Sequential()
model.add(Bidirectional(LSTM(nNeurons, activation='relu'), input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XTestNew = XTest.copy()
for day in range(k):
yPred = model.predict(np.float32(XTestNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XTestNew = np.delete(XTestNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XTestNew = np.append(XTestNew, yPred, axis=1)
MAPE = meanAbsolutePercentageError_kDay(yTest, yPredListList)
print('Test MAPE:', MAPE)
MdSA = medianSymmetricAccuracy_kDay(yTest, yPredListList)
print('Test MdSA:', MdSA)
yPredBidirectional = yPredListList
# +
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib.dates import DateFormatter
#from statsmodels.tsa.statespace.sarimax import SARIMAX
# Format y tick labels
def y_fmt(y, pos):
decades = [1e9, 1e6, 1e3, 1e0, 1e-3, 1e-6, 1e-9 ]
suffix = ["G", "M", "k", "" , "m" , "u", "n" ]
if y == 0:
return str(0)
for i, d in enumerate(decades):
if np.abs(y) >= d:
val = y / float(d)
signf = len(str(val).split(".")[1])
if signf == 0:
return '{val:d} {suffix}'.format(val=int(val), suffix=suffix[i])
else:
if signf == 1:
if str(val).split(".")[1] == "0":
return '{val:d} {suffix}'.format(val=int(round(val)), suffix=suffix[i])
tx = "{" + "val:.{signf}f".format(signf=signf) + "} {suffix}"
return tx.format(val=val, suffix=suffix[i])
return y
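For reference, the `y_fmt` helper above collapses large tick values into engineering notation; a trimmed standalone version covering only the k/M/G cases shows the intended outputs:

```python
def eng_fmt(y):
    # simplified engineering-notation formatter (k/M/G only)
    for d, s in [(1e9, 'G'), (1e6, 'M'), (1e3, 'k')]:
        if abs(y) >= d:
            return '{:g} {}'.format(y / d, s)
    return '{:g}'.format(y)

# eng_fmt(1500) -> '1.5 k', eng_fmt(2_000_000) -> '2 M', eng_fmt(250) -> '250'
```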
# +
plt.figure(figsize=(10,10))
datesForPlottingList = datesGreaterDaysSinceList[-k:]
groundTruthList = nCasesGreaterDaysSinceList[-k:]
plt.ylabel('Number of confirmed cases in Russia', fontsize=20)
plt.plot(datesForPlottingList, groundTruthList, '-o', linewidth=3, label='Actual confirmed numbers');
plt.plot(datesForPlottingList, yPredVanilla[-1], '-o', linewidth=3, label='Vanilla LSTM predictions');
plt.plot(datesForPlottingList, yPredStacked[-1], '-o', linewidth=3, label='Stacked LSTM predictions');
plt.plot(datesForPlottingList, yPredBidirectional[-1], '-o', linewidth=3, label='Bidirectional LSTM predictions');
plt.xlabel('Date', fontsize=20);
plt.legend(fontsize=14);
plt.xticks(fontsize=16);
plt.yticks(fontsize=16);
ax = plt.gca()
ax.yaxis.set_major_formatter(FuncFormatter(y_fmt))
#date_form = DateFormatter("%d-%m")
#ax.xaxis.set_major_formatter(date_form)
# plt.grid(axis='y')
plt.savefig(os.path.join('Plots_3days_k3', 'predictions_{}.png'.format(countryName)), dpi=400)
plt.savefig(os.path.join('Plots_3days_k3', 'predictions_{}.pdf'.format(countryName)), dpi=400)
# -
RMSE = rootMeanSquaredError(groundTruthList, yPredVanilla)
print('Test RMSE:', RMSE)
RMSE = rootMeanSquaredError(groundTruthList, yPredVanilla[-1])
print('Test RMSE:', RMSE)
RMSE = rootMeanSquaredError(groundTruthList, yPredStacked)
print('Test RMSE:', RMSE)
RMSE = rootMeanSquaredError(groundTruthList, yPredStacked[-1])
print('Test RMSE:', RMSE)#Test RMSE: 2702.2739214139942
RMSE = rootMeanSquaredError(groundTruthList, yPredBidirectional)
print('Test RMSE:', RMSE)#Test RMSE: 51299.70042122786
RMSE = rootMeanSquaredError(groundTruthList, yPredBidirectional[-1])
print('Test RMSE:', RMSE)#Test RMSE: 2813.802043286287
MSE = meanSquaredError(groundTruthList, yPredBidirectional)
MSE
groundTruthList
yPredVanilla[-1]
#[805187.6, 810807.2, 816364.0]
yPredStacked[-1]
#[806959.94, 814447.25, 821993.7]
yPredBidirectional[-1]
#[806834.75, 814283.0, 821850.06]
#us
ustable = pd.read_csv('US_RMSE_2.csv')
ustable
ustable.plot.bar(x = 'Experiments', y = 'Vanilla RMSE')
ustable.plot(kind='bar', x = 'Experiments', y = 'Stacked RMSE')
# +
import matplotlib.pyplot as plot
ustable.plot.bar(x = 'Experiments for US')
plot.show(block=True);
# -
braziltable = pd.read_csv('brazil_RMSE_2.csv')
# +
#brazil
braziltable.plot.bar(x = 'Experiments for Brazil')
plot.show(block=True);
# -
#russia
russiatable = pd.read_csv('russia_RMSE_2.csv')
russiatable.plot.bar(x = 'Experiments for Russia')
plot.show(block=True);
| Russia_LSTM_3days_K3_Exp4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
INPUT_DIR = '/nrcan_p2/data/01_raw/20201221/doaj_extra_4/geosciences_redo/'
OUTPUT_DIR = '/nrcan_p2/data/01_raw/20201221/doaj/geosciences/'
STORE_DIR = '/nrcan_p2/data/01_raw/20201221/doaj_extra_4/geosciences_removed/'
unfinished_file_file = 'unfinished_files_doaj_3.txt'
with open(unfinished_file_file, 'r') as f:
unfinished_files = f.readlines()
len(unfinished_files)
import pathlib
display(unfinished_files[0])
unfinished_files = [pathlib.Path(x.strip()).name for x in unfinished_files]
unfinished_files
# +
import pathlib
from collections import defaultdict
unfinished_files_name_map = defaultdict(list)
for elem in unfinished_files:
elem = pathlib.Path(elem.strip())
name = elem.name
parent = elem.parent.name
print(name, parent)
unfinished_files_name_map[parent].append(name)
unfinished_files_name_map
# -
unfinished_files = ['_2076-3263_8_8_308_pdf.pdf']
# unfinished_files = ['_documents_GPL1724cor_noSI.pdf.pdf',
# '_documents_GPL1802err_noSI.pdf.pdf',
# '_documents_GPL1825cor_noSI.pdf.pdf',
# '_documents_GPL1907cor_noSI.pdf.pdf',
# '_documents_GPL1922cor_noSI.pdf.pdf',
# '_documents_GPL1925cor_noSI.pdf.pdf']
for file in unfinished_files:
f = pathlib.Path(OUTPUT_DIR) / file
fout = pathlib.Path(STORE_DIR) / file
#print(f,'->',fout)
if f.exists():
if not fout.exists():
print(f,'->',fout)
f.replace(fout)
for file in unfinished_files:
f = pathlib.Path(OUTPUT_DIR) / file
if f.exists():
print('still exists!')
for file in unfinished_files:
f = pathlib.Path(INPUT_DIR) / file
fout = pathlib.Path(OUTPUT_DIR) / file
if not fout.exists():
if f.exists():
print(f,'->',fout)
f.replace(fout)
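The move logic above relies on `pathlib.Path.replace`, which renames the file and silently overwrites any existing destination. A small self-contained demonstration in a temporary directory:

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src_dir = pathlib.Path(tmp) / 'output'
    dst_dir = pathlib.Path(tmp) / 'removed'
    src_dir.mkdir()
    dst_dir.mkdir()
    f = src_dir / 'paper.pdf'
    f.write_text('dummy')
    dst = dst_dir / f.name
    f.replace(dst)                 # rename; clobbers an existing target
    assert dst.read_text() == 'dummy'
    still_there = f.exists()       # False: the source is gone after the move
```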
| project_tools/data/01_raw/20201221/Redownloading doaj.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading a CSV file from Amazon S3 into the Iguazio file system or database
import pandas as pd
import v3io_frames as v3f
import os
client = v3f.Client('framesd:8081', container='users')
# ## Import sample file from S3 into iguazio file system (v3io)
# + language="sh"
# mkdir -p /v3io/${V3IO_HOME}/examples
#
# # Download a sample stocks file from Iguazio demo bucket in S3
# curl -L "iguazio-sample-data.s3.amazonaws.com/2018-03-26_BINS_XETR08.csv" > /v3io/${V3IO_HOME}/examples/stocks_example.csv
# -
# ## Read the file into a pandas DataFrame
# Note: the file can also be read directly over HTTP into a DataFrame (by passing the full URL, e.g. `pd.read_csv('http://deutsche-boerse...')`).
# +
# read a csv file into a data frame
df = pd.read_csv(os.path.join('/v3io/users/'+os.getenv('V3IO_USERNAME')+'/examples/stocks_example.csv'))
df.set_index('ISIN', inplace=True)
df.head()
# -
# ## Write file into iguazio database as key value table using v3io frames
tablename = os.path.join(os.getenv('V3IO_USERNAME')+'/stocks_example_tab')
client.write('kv', tablename, df)
# ## Read and write the file using Spark DF
# +
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("Iguazio file access notebook").getOrCreate()
file_path=os.path.join(os.getenv('V3IO_HOME_URL')+'/examples')
# Read the sample stocks.csv file into a Spark DataFrame, and let Spark infer the schema of the CSV file
df = spark.read.option("header", "true").csv(os.path.join(file_path)+'/stocks_example.csv')
# Show the DataFrame data
df.show()
# Write the DataFrame data to a stocks_tab table under "users" container and define "ISIN" column as a key
df.write.format("io.iguaz.v3io.spark.sql.kv").mode("append").option("key", "ISIN").option("allow-overwrite-schema", "true").save(os.path.join(file_path)+'/stocks_tab_spark/')
# -
# ## Read an iguazio table and write it back as a CSV
# +
#myDF2 = spark.read.format("io.iguaz.v3io.spark.sql.kv").load("v3io://users/iguazio/examples/stocks_tab_by_spark").where("TradedVolume>20000")
myDF2 = spark.read.format("io.iguaz.v3io.spark.sql.kv").load(os.path.join(file_path)+'/stocks_tab_spark').where("TradedVolume>20000")
# myDF2.write.csv('v3io://bigdata/examples/stocks_high_volume.csv')
myDF2.coalesce(1).write.mode('overwrite').csv(os.path.join(file_path)+'/stocks_high_volume.csv')
# note that using coalesce(1) is for storing the output as a single file
# -
# ## Viewing files
# Note: the table will appear as a directory under the v3io file system
# !ls -l /v3io/${V3IO_HOME}/examples/
# ## Remove all files and tables
# +
# clean data
# #!rm -rf /v3io/${V3IO_HOME}/examples/*
# -
# To release the compute and memory resources held by Spark, we recommend running the following command
spark.stop()
| data-ingestion-and-preparation/file-access.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="4zDysbXHvEGp"
# # Spark Preparation
# We check if we are in Google Colab. If this is the case, install all necessary packages.
#
# To run Spark in Colab, we first install the dependencies in the Colab environment: Apache Spark 3.2.1 with Hadoop 3.2, Java 8, and Findspark (to locate Spark on the system). The installation can be carried out inside the Colab notebook itself.
# Learn more from [A Must-Read Guide on How to Work with PySpark on Google Colab for Data Scientists!](https://www.analyticsvidhya.com/blog/2020/11/a-must-read-guide-on-how-to-work-with-pyspark-on-google-colab-for-data-scientists/)
# + id="JK6PZEMjROK9"
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
# + id="Gr_NGZ5AvIQy"
if IN_COLAB:
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q https://dlcdn.apache.org/spark/spark-3.2.1/spark-3.2.1-bin-hadoop3.2.tgz
# !tar xf spark-3.2.1-bin-hadoop3.2.tgz
# !mv spark-3.2.1-bin-hadoop3.2 spark
# !pip install -q findspark
# + id="9dby3bREvIaU"
if IN_COLAB:
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark"
# + id="rJTEDJzsvZRJ"
import findspark
findspark.init()
# + [markdown] id="_92TFSplRMHq"
# # Pyspark_Basic_RDD
# + id="jNHCZPHYRMHs"
#1 - import module
from pyspark import SparkContext
# + colab={"base_uri": "https://localhost:8080/", "height": 198} id="WFGEa6z_RMHy" outputId="ae2ec601-33a2-40fb-97fd-02ca3309b3b0"
#2 - Create SparkContext
sc = SparkContext.getOrCreate()
sc
# + colab={"base_uri": "https://localhost:8080/"} id="mWzSaEXS8WSW" outputId="e2235492-ca10-4162-e1ea-6061d4779fdf"
import multiprocessing
multiprocessing.cpu_count()
# + id="cA_ajGnk9RED"
#rdd.getNumPartitions
# + id="VzLxhL6bRMIg"
#3 - Print top 5 rows
def printRDD(data,num):
for line in data.take(num):
print(line)
# + colab={"base_uri": "https://localhost:8080/"} id="unlMjoNDxx2L" outputId="aacfe489-1b74-40bc-9e01-95e9093c27f1"
# !wget https://github.com/kaopanboonyuen/GISTDA2022/raw/main/dataset/iris.csv
# + colab={"base_uri": "https://localhost:8080/"} id="vqSOV55WRMIm" outputId="677d6f90-f9b1-42c3-f636-5ae40de12ffa"
#4 - Read file to spark RDD
rdd = sc.textFile("iris.csv")
rdd.cache()
# Attribute Information:
# 1. sepal length in cm
# 2. sepal width in cm
# 3. petal length in cm
# 4. petal width in cm
# 5. class:
# -- Iris-setosa
# -- Iris-versicolor
# -- Iris-virginica
printRDD(rdd,5)
# + colab={"base_uri": "https://localhost:8080/"} id="MzrieK0FRMIu" outputId="c3d2e84b-d14f-463f-fb92-1b9a4d4a8b50"
#5 - map
mapped_rdd = rdd.map(lambda line : line.split(","))
printRDD(mapped_rdd,5)
# + colab={"base_uri": "https://localhost:8080/"} id="EqMwUNOyRMI0" outputId="76ab4739-3f12-45b7-a9df-a8ada6301845"
#6 - flatMap
flatMaped_rdd = rdd.flatMap(lambda line : line.split(","))
printRDD(flatMaped_rdd,25)
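# The difference between `map` and `flatMap` can be seen with plain Python lists (a minimal sketch of the two transformations above, without Spark):

```python
# map keeps one output element per input line (a list of tokens per line);
# flatMap concatenates the per-line token lists into one flat sequence.
lines = ["5.1,3.5,1.4,0.2,Iris-setosa", "4.9,3.0,1.4,0.2,Iris-setosa"]

mapped = [line.split(",") for line in lines]                      # like rdd.map
flat_mapped = [tok for line in lines for tok in line.split(",")]  # like rdd.flatMap

print(len(mapped))       # 2 (one token list per line)
print(len(flat_mapped))  # 10 (all tokens flattened)
```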
# + colab={"base_uri": "https://localhost:8080/"} id="bICuISWBRMI5" outputId="3b2e15dd-5e22-40a8-8b62-06f3698ca93d"
#7 - create unique id
zipedWithUniqueId_rdd = rdd.zipWithUniqueId()
print("zipedWithUniqueId_rdd count : " + str(zipedWithUniqueId_rdd.count()))
printRDD(zipedWithUniqueId_rdd,5)
# + colab={"base_uri": "https://localhost:8080/"} id="TVMCXtehRMI_" outputId="e0adbaf7-d3cd-43ab-8897-586892b48c86"
#8 - sample data
sampled_rdd = zipedWithUniqueId_rdd.sample(withReplacement=False, fraction=0.5, seed=50)
print("rdd count : " + str(zipedWithUniqueId_rdd.count()))
print("sampled_rdd count : " + str(sampled_rdd.count()))
printRDD(sampled_rdd,5)
# + colab={"base_uri": "https://localhost:8080/"} id="qmLwYn8jRMJD" outputId="9fe7f7fa-a862-48e6-d923-f35d6026a130"
#9 - union and intersect
sampled1_rdd = zipedWithUniqueId_rdd.sample(withReplacement=False, fraction=0.5, seed=25)
sampled2_rdd = zipedWithUniqueId_rdd.sample(withReplacement=False, fraction=0.5, seed=50)
unioned_rdd = sampled1_rdd.union(sampled2_rdd)
intersected_rdd = sampled1_rdd.intersection(sampled2_rdd)
print("sampled1_rdd count : " + str(sampled1_rdd.count()))
print("sampled2_rdd count : " + str(sampled2_rdd.count()))
print("unioned_rdd count : " + str(unioned_rdd.count()))
print("intersected_rdd count : " + str(intersected_rdd.count()))
# + colab={"base_uri": "https://localhost:8080/"} id="vwR5gBeuRMJJ" outputId="14f986bb-0c40-4691-b953-b0599b98a03e"
#10 - distinct
label_rdd = mapped_rdd.map(lambda line : line[-1])
printRDD(label_rdd,5)
print("\n")
label_list = label_rdd.distinct().collect()
print(label_list)
# + colab={"base_uri": "https://localhost:8080/"} id="3VCbyo8mRMJQ" outputId="57f68ec9-f0b2-4b23-b6f1-6a04b6aa03c0"
#11 - zip 2 rdd together
feature_rdd = mapped_rdd.map(lambda line : line[0:-1])
printRDD(feature_rdd,5)
print("\n")
zip_rdd = feature_rdd.zip(label_rdd)
printRDD(zip_rdd,5)
print("\n")
zip_rdd = zip_rdd.map(lambda pair : pair[0] + [pair[1]])  # merge each (features, label) pair into a single list
# + colab={"base_uri": "https://localhost:8080/"} id="GCbbwyzFRMJV" outputId="0dd04b4a-b5d7-4124-ccaa-c42602120d06"
#12 - collect
data_list = rdd.collect()
#Too many results => not a good method when dealing with big data
print("data_list size : " + str(len(data_list)))
for data in data_list:
print(data)
# + colab={"base_uri": "https://localhost:8080/"} id="kZKsfG3ZRMJZ" outputId="7ced667e-efa3-4731-b356-065acf1365f8"
#13 - take
data_list = rdd.take(5)
#Select first n rows
print("data_list size : " + str(len(data_list)))
for data in data_list:
print(data)
# + colab={"base_uri": "https://localhost:8080/"} id="FYXq6DZWRMJe" outputId="d5d72c1e-7005-4891-add1-006c1091860e"
#14 - top
data_list = rdd.top(5)
#Select top n rows
print("data_list size : " + str(len(data_list)))
for data in data_list:
print(data)
| code/backup/2_Pyspark_Basic_RDD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Electronic structure
# ## Introduction
#
# The molecular Hamiltonian is
#
# $$
# \mathcal{H} = - \sum_I \frac{\nabla_{R_I}^2}{M_I} - \sum_i \frac{\nabla_{r_i}^2}{m_e} - \sum_I\sum_i \frac{Z_I e^2}{|R_I-r_i|} + \sum_i \sum_{j>i} \frac{e^2}{|r_i-r_j|} + \sum_I\sum_{J>I} \frac{Z_I Z_J e^2}{|R_I-R_J|}
# $$
#
# Because the nuclei are much heavier than the electrons, they do not move on the same time scale; therefore, the behavior of the nuclei and of the electrons can be decoupled. This is the Born-Oppenheimer approximation.
#
# Therefore, one can first tackle the electronic problem, with the nuclear coordinates entering only as parameters. The energy levels of the electrons in the molecule can be found by solving the non-relativistic time-independent Schrödinger equation,
#
# $$
# \mathcal{H}_{\text{el}} |\Psi_{n}\rangle = E_{n} |\Psi_{n}\rangle
# $$
#
# where
#
# $$
# \mathcal{H}_{\text{el}} = - \sum_i \frac{\nabla_{r_i}^2}{m_e} - \sum_I\sum_i \frac{Z_I e^2}{|R_I-r_i|} + \sum_i \sum_{j>i} \frac{e^2}{|r_i-r_j|}.
# $$
#
# In particular the ground state energy is given by:
# $$
# E_0 = \frac{\langle \Psi_0 | H_{\text{el}} | \Psi_0 \rangle}{\langle \Psi_0 | \Psi_0 \rangle}
# $$
# where $\Psi_0$ is the ground state of the system.
#
# However, the dimensionality of this problem grows exponentially with the number of degrees of freedom. To tackle this issue, we would like to prepare $\Psi_0$ on a quantum computer and measure the expectation value of the Hamiltonian (i.e. $E_0$) directly.
#
# So how do we do that concretely?
#
# ## The Hartree-Fock initial state
#
# A good starting point for solving this problem is the Hartree-Fock (HF) method. This method approximates the N-body problem by N one-body problems, in which each electron evolves in the mean field of the others. Solving the HF equations classically is efficient and yields the exact exchange energy, but includes no electron correlation. It is therefore usually a good starting point from which to add correlation.
#
# The Hamiltonian can then be re-expressed in the basis of the solutions of the HF method, also called Molecular Orbitals (MOs):
#
# $$
# \hat{H}_{elec}=\sum_{pq} h_{pq} \hat{a}^{\dagger}_p \hat{a}_q +
# \frac{1}{2} \sum_{pqrs} h_{pqrs} \hat{a}^{\dagger}_p \hat{a}^{\dagger}_q \hat{a}_r \hat{a}_s
# $$
# with the 1-body integrals
# $$
# h_{pq} = \int \phi^*_p(r) \left( -\frac{1}{2} \nabla^2 - \sum_{I} \frac{Z_I}{|R_I - r|} \right) \phi_q(r)
# $$
# and 2-body integrals
# $$
# h_{pqrs} = \int \frac{\phi^*_p(r_1) \phi^*_q(r_2) \phi_r(r_2) \phi_s(r_1)}{|r_1-r_2|}.
# $$
#
# The MOs ($\phi_u$) can be occupied or virtual (unoccupied). One MO can contain 2 electrons. However, in what follows we actually work with Spin Orbitals, each of which is associated with a spin up ($\alpha$) or spin down ($\beta$) electron. Thus a Spin Orbital can contain one electron or be unoccupied.
#
# We now show how to concretely realize these steps with Qiskit.
# Qiskit is interfaced with different classical codes which are able to find the HF solutions. Interfacing between Qiskit and the following codes is already available:
# * Gaussian
# * Psi4
# * PyQuante
# * PySCF
#
# In the following we set up a PySCF driver for the hydrogen molecule at equilibrium bond length (0.735 angstrom), in the singlet state and with no charge.
from qiskit.chemistry.drivers import PySCFDriver, UnitsType, Molecule
molecule = Molecule(geometry=[['H', [0., 0., 0.]],
['H', [0., 0., 0.735]]],
charge=0, multiplicity=1)
driver = PySCFDriver(molecule = molecule, unit=UnitsType.ANGSTROM, basis='sto3g')
# For further information about the drivers see https://qiskit.org/documentation/apidoc/qiskit.chemistry.drivers.html
# ## The mapping from fermions to qubits
#
# <img src="aux_files/jw_mapping.png" width="500">
#
# The Hamiltonian given in the previous section is expressed in terms of fermionic operators. To encode the problem into the state of a quantum computer, these operators must be mapped to spin operators (indeed the qubits follow spin statistics).
#
# There exist different mapping types with different properties. Qiskit already supports the following mappings:
# * The Jordan-Wigner 'jordan_wigner' mapping (Über das Paulische Äquivalenzverbot. In The Collected Works of Eugene Paul Wigner (pp. 109-129). Springer, Berlin, Heidelberg (1993)).
# * The Parity 'parity' (The Journal of chemical physics, 137(22), 224109 (2012))
# * The Bravyi-Kitaev 'bravyi_kitaev' (Annals of Physics, 298(1), 210-226 (2002))
# * The Bravyi-Kitaev Super Fast 'bksf' (Annals of Physics, 298(1), 210-226 (2002))
#
# The Jordan-Wigner mapping is particularly interesting as it maps each Spin Orbital to a qubit (as shown on the Figure above).
#
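# Concretely, under the Jordan-Wigner mapping each fermionic mode $p$ becomes one qubit, with the ladder operators mapped to Pauli strings (shown here in the standard textbook convention, as a sketch rather than Qiskit's internal code):

```latex
\hat{a}^{\dagger}_p \mapsto \frac{1}{2}\left(X_p - i Y_p\right) \otimes Z_{p-1} \otimes \cdots \otimes Z_0,
\qquad
\hat{a}_p \mapsto \frac{1}{2}\left(X_p + i Y_p\right) \otimes Z_{p-1} \otimes \cdots \otimes Z_0
```

# The string of $Z$ operators on the lower-indexed qubits enforces the fermionic anticommutation relations.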
# Here we set up an object which contains all the information about any transformation of the fermionic Hamiltonian to the qubits Hamiltonian. In this example we simply ask for the Jordan-Wigner mapping.
# +
from qiskit.chemistry.transformations import (FermionicTransformation,
FermionicTransformationType,
FermionicQubitMappingType)
fermionic_transformation = FermionicTransformation(
transformation=FermionicTransformationType.FULL,
qubit_mapping=FermionicQubitMappingType.JORDAN_WIGNER,
two_qubit_reduction=False,
freeze_core=False)
# -
# If we now transform this Hamiltonian for the given driver defined above we get our qubit operator:
qubit_op, _ = fermionic_transformation.transform(driver)
print(qubit_op)
print(fermionic_transformation.molecule_info)
# In the minimal (STO-3G) basis set 4 qubits are required. We can even lower the qubit count by using the Parity mapping, which allows us to get rid of two qubits by symmetry considerations.
fermionic_transformation_2 = FermionicTransformation(
transformation=FermionicTransformationType.FULL,
qubit_mapping=FermionicQubitMappingType.PARITY,
two_qubit_reduction=True,
freeze_core=False)
qubit_op_2, _ = fermionic_transformation_2.transform(driver)
print(qubit_op_2)
# This time only 2 qubits are needed.
#
# Another possibility is to use the Particle-Hole transformation (Physical Review A, 98(2), 022322 (2018)). This shifts the vacuum state to a state lying in the N-particle Fock space. In this representation the HF (reference) state has zero energy and the optimization procedure converges faster.
fermionic_transformation_3 = FermionicTransformation(
transformation=FermionicTransformationType.PARTICLE_HOLE,
qubit_mapping=FermionicQubitMappingType.JORDAN_WIGNER,
two_qubit_reduction=False,
freeze_core=False)
qubit_op_3, _ = fermionic_transformation_3.transform(driver)
print(qubit_op_3)
# The lists of available transformations and mappings are:
# +
print('*Transformations')
for fer_transform in FermionicTransformationType:
print(fer_transform)
print('\n*Mappings')
for fer_mapping in FermionicQubitMappingType:
print(fer_mapping)
# -
# Now that the Hamiltonian is ready, it can be used in a quantum algorithm to find information about the electronic structure of the corresponding molecule. Check out our tutorials on Ground State Calculation and Excited States Calculation to learn more about how to do that in Qiskit!
import qiskit.tools.jupyter
# %qiskit_version_table
# %qiskit_copyright
| tutorials/chemistry/01_electronic_structure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import cv2
import os
frames = []
summary_file = "./../vsumm-reinforce/Alin_summary_Day1_007"
input_video = "/scratch/anuj.rathore/Alin/Alin_Day1_007.avi"
output_video = "/scratch/anuj.rathore/output7.avi"
fps = 15
# standard frame module
# with open(summary_file) as f:
# for line in f:
# frames.append(int(line))
a = 1
#module to add when using subsampling
with open(summary_file) as f:
frame_num = 0
for line in f:
frame_num +=1
for i in range(4):
frames.append(frame_num*4 - i)
fourcc = cv2.VideoWriter_fourcc(*'MPEG')
out = cv2.VideoWriter(output_video, fourcc, fps, (1280,720))
if a !=0:
vidcap = cv2.VideoCapture(input_video)
success,image = vidcap.read()
count = 1
act = 0
while success:
success,image = vidcap.read()
if count in frames:
act +=1
out.write(image)
print '.',
count += 1
print count
out.release()
# -
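# With 4x temporal subsampling, summary line n stands for original frames 4n-3 .. 4n; the expansion loop in the cell above can be isolated as a small helper (a sketch; `expand_subsampled` is a hypothetical name):

```python
def expand_subsampled(frame_num, factor=4):
    """Map a subsampled frame index back to the original frame indices it covers."""
    return [frame_num * factor - i for i in range(factor)]

print(expand_subsampled(1))  # [4, 3, 2, 1]
print(expand_subsampled(2))  # [8, 7, 6, 5]
```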
int('4')*4 - 1
import os
os.system("ls -l /scratch/anuj.rathore/")
| feat_extract/create_summary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import pandas as pd
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
def reverse_complement(seq) :
seq_prime = ''
for j in range(0, len(seq)) :
if seq[j] == 'A' :
seq_prime = 'T' + seq_prime
elif seq[j] == 'C' :
seq_prime = 'G' + seq_prime
elif seq[j] == 'G' :
seq_prime = 'C' + seq_prime
elif seq[j] == 'T' :
seq_prime = 'A' + seq_prime
else :
seq_prime = seq[j] + seq_prime
return seq_prime
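# An equivalent, more compact reverse complement using a lookup dict (a sketch; behavior matches the function above, including passing non-ACGT characters through unchanged):

```python
COMPLEMENT = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}

def reverse_complement_fast(seq):
    # complement each base (unknown characters pass through) and reverse the order
    return ''.join(COMPLEMENT.get(b, b) for b in reversed(seq))

print(reverse_complement_fast('AAC'))   # GTT
print(reverse_complement_fast('ACGT'))  # ACGT
```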
# +
emitted_id = []
emitted_chrom = []
emitted_start = []
emitted_end = []
emitted_isoform_start = []
emitted_isoform_end = []
emitted_strand = []
emitted_isoform = []
emitted_search = []
emitted_reads = []
i = 0
with open('TandemUTR.hg19.gff3') as f:
for line in f:
if i > 0 :
lineparts = line[:-1].split('\t')
chrom = lineparts[0]
event_type = lineparts[2]
start = int(lineparts[3])
end = int(lineparts[4])
strand = lineparts[6]
id_str = lineparts[8]
if event_type == 'mRNA' :
emitted_id.append(chrom + ':' + str(start) + '-' + str(end))
emitted_chrom.append(chrom)
if strand == '+' :
emitted_start.append(end - 225)
emitted_end.append(end + 175)
emitted_search.append(chrom[3:] + ':' + str(end - 225) + '-' + str(end + 175))
else :
emitted_start.append(start - 175)
emitted_end.append(start + 225)
emitted_search.append(chrom[3:] + ':' + str(start - 175) + '-' + str(start + 225))
emitted_isoform_start.append(start)
emitted_isoform_end.append(end)
emitted_strand.append(strand)
#Prox = B, Dist = A
emitted_isoform.append(id_str.split(';')[0][-1])
emitted_reads.append(1)
i += 1
bed_df = pd.DataFrame({'chr' : emitted_chrom,
'start' : emitted_start,
'end' : emitted_end,
'gene' : emitted_id,
'reads' : emitted_reads,
'strand' : emitted_strand,
'search_region' : emitted_search,
'isoform' : emitted_isoform,
})
bed_df = bed_df[['chr', 'start', 'end', 'gene', 'reads', 'strand', 'search_region', 'isoform']]
bed_df = bed_df.sort_values(by='gene')
print(bed_df.head())
print(len(bed_df))
bed_df.to_csv('Emitted_Tandem_UTR_200up_200dn.bed', sep='\t', header=False, index=False)
# +
hg19_fai = '../apadb/hg19.fa.fai'
hg19_fa = '../apadb/hg19.fa'
# fasta
output_fa = 'Emitted_Tandem_UTR_200up_200dn_Seqs.fa'
# #!bedtools getfasta -name -s -fi "$hg19_fa" -bed "$output_bed" -fo | cut -d : -f-4 > "$output_fa"
# !bedtools getfasta -name -s -fi "$hg19_fa" -bed "Emitted_Tandem_UTR_200up_200dn.bed" -fo "$output_fa"
# file tops
# !head -5 "Emitted_Tandem_UTR_200up_200dn.bed" | column -t ; echo
# !head -10 "$output_fa" ; echo
# +
#Inflate sample whitelist
sample_set = {}
i = 0
with open('E-GEUV-1.sdrf.txt') as f:
for line in f:
if i > 0 :
lineparts = line[:-1].split('\t')
sample_set[lineparts[0]] = True
i += 1
#Inflate Tandem UTR events
event_dict = {}
with open('Emitted_Tandem_UTR_200up_200dn.bed') as f:
for line in f:
lineparts = line[:-1].split('\t')
event_id = lineparts[3]
event_dict[event_id] = {}
event_dict[event_id]['chrom'] = lineparts[0]
event_dict[event_id]['start'] = int(lineparts[1])
event_dict[event_id]['end'] = int(lineparts[2])
event_dict[event_id]['strand'] = lineparts[5]
event_dict[event_id]['isoform'] = lineparts[7]
event_dict[event_id]['ref'] = {}
event_dict[event_id]['ref']['samples'] = {}
event_dict[event_id]['var'] = {}
event_dict[event_id]['var']['samples'] = {}
event_dict[event_id]['seq_map'] = {}
i = 0
with open('Emitted_Tandem_UTR_200up_200dn_Seqs.fa') as f:
event_id = ''
for line in f:
linep = line[:-1]
if i % 2 == 0 :
event_id = linep[1:]
else :
event_dict[event_id]['seq'] = linep.upper()
i += 1
print(len(event_dict))
print(event_dict['chr10:102587376-102589698'])
# +
def add_snp(seq, strand, var_type, ref, var, start_pos, var_pos) :
rel_pos_start = var_pos - start_pos - 1
rel_pos_end = rel_pos_start + len(ref)
if rel_pos_start < 5 or rel_pos_end > 395 :
return '', 0
var_seq = seq[:]
rel_pos = 0
if strand == '-' :
var_seq = reverse_complement(var_seq)
if var_type == 'SNP' :
rel_pos = var_pos - start_pos - 1
if var_seq[rel_pos] == ref and rel_pos >= 0:
var_seq = var_seq[0:rel_pos] + var + var_seq[rel_pos+1:]
elif rel_pos != -1 :
print(seq)
print(rel_pos)
print(strand)
print(ref)
print(var)
print('ERROR (SNP): Sequence not aligned with genome reference.')
return '', -1
#else :
# return '', -1
elif var_type == 'INDEL' :
rel_pos_start = var_pos - start_pos - 1
rel_pos_end = rel_pos_start + len(ref)
rel_pos = rel_pos_start
if var_seq[rel_pos_start:rel_pos_end] == ref :
var_seq = var_seq[0:rel_pos_start] + var + var_seq[rel_pos_end:]
else :
print(var_seq)
print(rel_pos_start)
print(rel_pos_end)
print(var_seq[rel_pos_start:rel_pos_end])
print(ref)
print(var)
print('ERROR (INDEL): Sequence not aligned with genome reference.')
raise ValueError('INDEL: sequence not aligned with genome reference')
elif var_type == 'OTHER' and var == '<DEL>' :
rel_pos_start = var_pos - start_pos - 1
rel_pos_end = rel_pos_start + len(ref)
rel_pos = rel_pos_start
if var_seq[rel_pos_start:rel_pos_end] == ref :
var_seq = var_seq[0:rel_pos_start] + var_seq[rel_pos_end:]
else :
print('ERROR (DEL): Sequence not aligned with genome reference.')
raise ValueError('DEL: sequence not aligned with genome reference')
#elif var_type == 'OTHER' and ref == '<INS>' :
# rel_pos = var_pos - start_pos - 1
# var_seq = var_seq[0:rel_pos] + var + var_seq[rel_pos:]
else :
return '', 0
var_seq += ('X' * 20)
var_seq = var_seq[:400]
if strand == '-' :
var_seq = reverse_complement(var_seq)
rel_pos = 400 - (rel_pos + 1)
return var_seq, rel_pos
# +
#Inflate sample variant calls
valid_sample_dict = {}
event_i = 0
for event_id in event_dict :
seq = event_dict[event_id]['seq']
chrom = event_dict[event_id]['chrom']
start = event_dict[event_id]['start']
end = event_dict[event_id]['end']
strand = event_dict[event_id]['strand']
valid_sample_dict[event_id] = {}
call_file = 'snps2/' + event_id.replace(':', '_') + '_' + chrom[3:] + '_' + str(start) + '-' + str(end) + '.txt'
try :
with open(call_file) as f:
for line in f:
lineparts = line[:-1].split('\t')
snp_type = lineparts[0]
snp_pos = int(lineparts[2])
ref = lineparts[3]
var = lineparts[4]
if len(ref) > 10 or len(var) > 10 :
continue
for sample_index in range(5, len(lineparts)) :
sample_lineparts = lineparts[sample_index].split('=')
sample = sample_lineparts[0]
alleles = sample_lineparts[1].split('|')
if len(alleles) == 1 :
alleles = sample_lineparts[1].split('/')
if len(alleles) == 1 :
continue
allele1 = int(alleles[0])
allele2 = int(alleles[1])
if sample not in sample_set :
continue
valid_sample_dict[event_id][sample] = True
zyg = ''
if allele1 == 0 and allele2 == 0 :
continue
elif allele1 > 0 and allele2 > 0 :
zyg = 2
else :
zyg = 1
if sample not in event_dict[event_id]['var']['samples'] :
event_dict[event_id]['var']['samples'][sample] = {}
event_dict[event_id]['var']['samples'][sample]['seq'] = seq
event_dict[event_id]['var']['samples'][sample]['zyg'] = 2
event_dict[event_id]['var']['samples'][sample]['count'] = 0
event_dict[event_id]['var']['samples'][sample]['type'] = ''
event_dict[event_id]['var']['samples'][sample]['pos'] = ''
event_dict[event_id]['var']['samples'][sample]['snpid'] = ''
add_mut = False
if event_dict[event_id]['var']['samples'][sample]['count'] == 0 :
add_mut = True
if snp_type == 'SNP' and 'OTHER' not in event_dict[event_id]['var']['samples'][sample]['type'] and 'INDEL' not in event_dict[event_id]['var']['samples'][sample]['type'] :
add_mut = True
if add_mut == True :
var_seq, rel_pos = add_snp(event_dict[event_id]['var']['samples'][sample]['seq'], strand, snp_type, ref, var, start, snp_pos)
if var_seq != '' :
event_dict[event_id]['var']['samples'][sample]['seq'] = var_seq
event_dict[event_id]['var']['samples'][sample]['zyg'] = min(zyg, event_dict[event_id]['var']['samples'][sample]['zyg'])
event_dict[event_id]['var']['samples'][sample]['count'] += 1
if event_dict[event_id]['var']['samples'][sample]['type'] == '' :
event_dict[event_id]['var']['samples'][sample]['type'] = snp_type
else :
event_dict[event_id]['var']['samples'][sample]['type'] += ',' + snp_type
if event_dict[event_id]['var']['samples'][sample]['pos'] == '' :
event_dict[event_id]['var']['samples'][sample]['pos'] = str(rel_pos)
else :
event_dict[event_id]['var']['samples'][sample]['pos'] += ',' + str(rel_pos)
if event_dict[event_id]['var']['samples'][sample]['snpid'] == '' :
event_dict[event_id]['var']['samples'][sample]['snpid'] = str(chrom) + str(strand) + ':' + str(int(start)) + '-' + str(int(end)) + ':' + str(int(snp_pos)) + '/' + str(snp_type) + '/' + str(ref) + '/' + str(var)
else :
event_dict[event_id]['var']['samples'][sample]['snpid'] += ',' + str(chrom) + str(strand) + ':' + str(int(start)) + '-' + str(int(end)) + ':' + str(int(snp_pos)) + '/' + str(snp_type) + '/' + str(ref) + '/' + str(var)
#print('Number of variant samples for event ' + event_id + ': ' + str(len(event_dict[event_id]['var']['samples'])))
if event_i % 1000 == 0 :
print('Processed ' + str(event_i + 1) + ' events.')
except IOError :
print('ERROR: Could not open file: ' + call_file)
event_i += 1
for event_id in event_dict :
for sample in sample_set :
if sample not in event_dict[event_id]['var']['samples'] and sample in valid_sample_dict[event_id] :
event_dict[event_id]['ref']['samples'][sample] = {}
elif sample in valid_sample_dict[event_id] :
var_event = event_dict[event_id]['var']['samples'][sample]
if var_event['seq'] not in event_dict[event_id]['seq_map'] :
event_dict[event_id]['seq_map'][var_event['seq']] = {}
event_dict[event_id]['seq_map'][var_event['seq']][sample] = True
# +
print(event_dict['chr5:34019556-34020686'])
# +
#Inflate MISO expression
for sample in sample_set :
i = 0
with open('geuvadis/' + sample + '_summary/summary/geuvadis_output.miso_summary') as f:
for line in f:
if i > 0 :
lineparts = line[:-1].split('\t')
psi_mean = float(lineparts[1])
psi_low = float(lineparts[2])
psi_high = float(lineparts[3])
chrom = lineparts[7]
start_positions = lineparts[9].split(',')
end_positions = lineparts[10].split(',')
a_event_id = chrom + ":" + start_positions[0] + '-' + end_positions[0]
b_event_id = chrom + ":" + start_positions[1] + '-' + end_positions[1]
if a_event_id in event_dict and sample in event_dict[a_event_id]['var']['samples'] :
event_dict[a_event_id]['var']['samples'][sample]['psi_mean'] = psi_mean
event_dict[a_event_id]['var']['samples'][sample]['psi_low'] = psi_low
event_dict[a_event_id]['var']['samples'][sample]['psi_high'] = psi_high
elif a_event_id in event_dict and sample in event_dict[a_event_id]['ref']['samples'] :
event_dict[a_event_id]['ref']['samples'][sample]['psi_mean'] = psi_mean
event_dict[a_event_id]['ref']['samples'][sample]['psi_low'] = psi_low
event_dict[a_event_id]['ref']['samples'][sample]['psi_high'] = psi_high
if b_event_id in event_dict and sample in event_dict[b_event_id]['var']['samples'] :
event_dict[b_event_id]['var']['samples'][sample]['psi_mean'] = 1.0 - psi_mean
event_dict[b_event_id]['var']['samples'][sample]['psi_low'] = 1.0 - psi_high
event_dict[b_event_id]['var']['samples'][sample]['psi_high'] = 1.0 - psi_low
elif b_event_id in event_dict and sample in event_dict[b_event_id]['ref']['samples'] :
event_dict[b_event_id]['ref']['samples'][sample]['psi_mean'] = 1.0 - psi_mean
event_dict[b_event_id]['ref']['samples'][sample]['psi_low'] = 1.0 - psi_high
event_dict[b_event_id]['ref']['samples'][sample]['psi_high'] = 1.0 - psi_low
i += 1
# +
#Filter variant events
ci_limit = 0.25
for event_id in event_dict :
delete_list = []
for sample in event_dict[event_id]['var']['samples'] :
if 'psi_mean' not in event_dict[event_id]['var']['samples'][sample] :
delete_list.append(sample)
continue
if event_dict[event_id]['var']['samples'][sample]['psi_high'] - event_dict[event_id]['var']['samples'][sample]['psi_low'] > ci_limit :
delete_list.append(sample)
continue
if event_dict[event_id]['var']['samples'][sample]['count'] == 0 :
delete_list.append(sample)
continue
for sample in delete_list :
del event_dict[event_id]['var']['samples'][sample]
delete_list = []
for sample in event_dict[event_id]['ref']['samples'] :
if 'psi_mean' not in event_dict[event_id]['ref']['samples'][sample] :
delete_list.append(sample)
continue
if event_dict[event_id]['ref']['samples'][sample]['psi_high'] - event_dict[event_id]['ref']['samples'][sample]['psi_low'] > ci_limit :
delete_list.append(sample)
continue
for sample in delete_list :
del event_dict[event_id]['ref']['samples'][sample]
min_ref_samples = 5
min_var_samples = 1
delete_list = []
for event_id in event_dict :
for seq in event_dict[event_id]['seq_map'] :
delete_list_seq = []
for sample in event_dict[event_id]['seq_map'][seq] :
if sample not in event_dict[event_id]['var']['samples'] :
delete_list_seq.append(sample)
for sample in delete_list_seq :
del event_dict[event_id]['seq_map'][seq][sample]
if len(event_dict[event_id]['var']['samples']) <= min_var_samples :
delete_list.append(event_id)
elif len(event_dict[event_id]['ref']['samples']) <= min_ref_samples :
delete_list.append(event_id)
for event_id in delete_list :
del event_dict[event_id]
# +
print(len(event_dict))
# +
#Make Valid PAS lookup hierarchy
cano_pas1 = 'AATAAA'
cano_pas2 = 'ATTAAA'
valid_pas = []
valid_pas.append({})
valid_pas[0]['AATAAA'] = True
valid_pas[0]['ATTAAA'] = True
valid_pas.append({})
valid_pas[1]['AGTAAA'] = True
valid_pas[1]['TATAAA'] = True
valid_pas[1]['CATAAA'] = True
valid_pas[1]['GATAAA'] = True
valid_pas.append({})
for pos in range(0, 6) :
for base in ['A', 'C', 'G', 'T'] :
valid_pas[2][cano_pas1[:pos] + base + cano_pas1[pos+1:]] = True
valid_pas.append({})
for pos1 in range(0, 6) :
for pos2 in range(pos1 + 1, 6) :
for base1 in ['A', 'C', 'G', 'T'] :
for base2 in ['A', 'C', 'G', 'T'] :
valid_pas[3][cano_pas1[:pos1] + base1 + cano_pas1[pos1+1:pos2] + base2 + cano_pas1[pos2+1:]] = True
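# The tiered lookup can be exercised with a helper that returns the first (strongest) tier containing a hexamer, mirroring the search order used later when aligning sequences (a sketch; `pas_tier` is a hypothetical name, and only the first two tiers are rebuilt here for self-containment):

```python
valid_pas_demo = [
    {'AATAAA': True, 'ATTAAA': True},                                  # tier 0: canonical PAS
    {'AGTAAA': True, 'TATAAA': True, 'CATAAA': True, 'GATAAA': True},  # tier 1: common variants
]

def pas_tier(hexamer, hierarchy):
    # scan tiers in order of decreasing strength; first hit wins
    for tier, table in enumerate(hierarchy):
        if hexamer in table:
            return tier
    return -1  # not a recognized PAS in the given tiers

print(pas_tier('AATAAA', valid_pas_demo))  # 0
print(pas_tier('TATAAA', valid_pas_demo))  # 1
print(pas_tier('CCCCCC', valid_pas_demo))  # -1
```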
# +
def get_average_ref_psi(event) :
psi_mean = 0.0
psi_mean_count = 0.0
any_member = None
for sample in event['ref']['samples'] :
if event['ref']['samples'][sample]['psi_mean'] >= 0 :
psi_mean += event['ref']['samples'][sample]['psi_mean']
psi_mean_count += 1.0
any_member = event['ref']['samples'][sample]
return psi_mean / psi_mean_count, psi_mean_count, any_member
def get_average_var_psi(event_id, event, seq, zyg) :
psi_mean = 0.0
psi_mean_count = 0.0
any_member = None
for sample in event['seq_map'][seq] :
if event['var']['samples'][sample]['psi_mean'] >= 0 and event['var']['samples'][sample]['zyg'] == zyg :
psi_mean += event['var']['samples'][sample]['psi_mean']
psi_mean_count += 1.0
any_member = event['var']['samples'][sample]
if psi_mean_count <= 0 :
return -1, 0, None
return psi_mean / psi_mean_count, psi_mean_count, any_member
def align_seqs(ref_seq, var_seq, cut_start, cut_end, before_cut = 35, after_cut = 5) :
align_j = cut_start - 25
aligned = -1
for i in range(0, len(valid_pas)) :
for j in range(cut_start - before_cut, cut_start + after_cut) :
candidate_pas_ref = ref_seq[j:j+6]
candidate_pas_var = var_seq[j:j+6]
if candidate_pas_ref in valid_pas[i] or candidate_pas_var in valid_pas[i] :
align_j = j
aligned = i
break
if aligned != -1 :
break
aligned_ref_seq = (ref_seq[align_j-50:])[:186]
aligned_var_seq = (var_seq[align_j-50:])[:186]
return aligned_ref_seq, aligned_var_seq, aligned, get_mut_pos(aligned_ref_seq, aligned_var_seq)
def get_mut_pos(ref_seq, var_seq) :
mut_pos = ''
for j in range(0, len(ref_seq)) :
if ref_seq[j] != var_seq[j] :
mut_pos += str(j) + ','
return mut_pos[:-1]
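# As a quick sanity check of the mutation-position encoding, `get_mut_pos` can be exercised standalone (re-defined here so the snippet runs on its own):

```python
def get_mut_pos(ref_seq, var_seq):
    # Comma-separated 0-based positions at which the two sequences differ
    mut_pos = ''
    for j in range(len(ref_seq)):
        if ref_seq[j] != var_seq[j]:
            mut_pos += str(j) + ','
    return mut_pos[:-1]

print(get_mut_pos('AATAAA', 'AGTAAC'))  # positions 1 and 5 differ -> '1,5'
print(get_mut_pos('AATAAA', 'AATAAA'))  # identical sequences -> ''
```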
# +
#Deflate data set
with open('APA_Tandem_UTR_GEUV_With_Id.csv', 'w') as out_f :
header_fields = ['snp_id', 'snp_type', 'isoform', 'zyg', 'pas', 'ref_seq', 'var_seq', 'ref_ratio', 'var_ratio', 'diff', 'diff_logodds', 'snp_count', 'snp_pos', 'ref_samples', 'var_samples']
out_f.write('\t'.join(header_fields) + '\n')
for event_id in event_dict :
ref_seq = event_dict[event_id]['seq']
ref_psi, ref_count, ref_member = get_average_ref_psi(event_dict[event_id])
isoform = event_dict[event_id]['isoform']
for var_seq in event_dict[event_id]['seq_map'] :
aligned_ref_seq, aligned_var_seq, aligned, mut_pos = align_seqs(ref_seq, var_seq, 225, 225+1)
#HETEROZYGOUS VARIANT
var_psi, var_count, var_member = get_average_var_psi(event_id, event_dict[event_id], var_seq, 1)
if var_psi >= 0.0 and var_psi <= 1.0 and aligned != -1 and mut_pos != '' :
psi_limit = 0.15
var_sample_limit = 5
ref_sample_limit = 10
if np.abs(var_psi - ref_psi) >= psi_limit and var_count >= var_sample_limit and ref_count >= ref_sample_limit :
diff_logodds = str(round(np.log(var_psi / (1.0 - var_psi)) - np.log(ref_psi / (1.0 - ref_psi)), 2))
out_f.write(var_member['snpid'] + '\t' + var_member['type'] + '\t' + isoform + '\t' + 'HET' + '\t' + str(aligned) + '\t' + aligned_ref_seq + '\t' + aligned_var_seq + '\t' + str(ref_psi) + '\t' + str(var_psi) + '\t' + str(var_psi - ref_psi) + '\t' + diff_logodds + '\t' + str(var_member['count']) + '\t' + str(mut_pos) + '\t' + str(ref_count) + '\t' + str(var_count) + '\n')
elif np.abs(var_psi - ref_psi) >= 0.3 and var_count >= 3 and ref_count >= ref_sample_limit :
diff_logodds = str(round(np.log(var_psi / (1.0 - var_psi)) - np.log(ref_psi / (1.0 - ref_psi)), 2))
out_f.write(var_member['snpid'] + '\t' + var_member['type'] + '\t' + isoform + '\t' + 'HET' + '\t' + str(aligned) + '\t' + aligned_ref_seq + '\t' + aligned_var_seq + '\t' + str(ref_psi) + '\t' + str(var_psi) + '\t' + str(var_psi - ref_psi) + '\t' + diff_logodds + '\t' + str(var_member['count']) + '\t' + str(mut_pos) + '\t' + str(ref_count) + '\t' + str(var_count) + '\n')
#HOMOZYGOUS VARIANT
var_psi, var_count, var_member = get_average_var_psi(event_id, event_dict[event_id], var_seq, 2)
if var_psi >= 0.0 and var_psi <= 1.0 and aligned != -1 and mut_pos != '' :
psi_limit = 0.10
var_sample_limit = 5
ref_sample_limit = 10
if np.abs(var_psi - ref_psi) >= psi_limit and var_count >= var_sample_limit and ref_count >= ref_sample_limit :
diff_logodds = str(round(np.log(var_psi / (1.0 - var_psi)) - np.log(ref_psi / (1.0 - ref_psi)), 2))
out_f.write(var_member['snpid'] + '\t' + var_member['type'] + '\t' + isoform + '\t' + 'HOM' + '\t' + str(aligned) + '\t' + aligned_ref_seq + '\t' + aligned_var_seq + '\t' + str(ref_psi) + '\t' + str(var_psi) + '\t' + str(var_psi - ref_psi) + '\t' + diff_logodds + '\t' + str(var_member['count']) + '\t' + str(mut_pos) + '\t' + str(ref_count) + '\t' + str(var_count) + '\n')
elif np.abs(var_psi - ref_psi) >= 0.25 and var_count >= 2 and ref_count >= ref_sample_limit :
diff_logodds = str(round(np.log(var_psi / (1.0 - var_psi)) - np.log(ref_psi / (1.0 - ref_psi)), 2))
out_f.write(var_member['snpid'] + '\t' + var_member['type'] + '\t' + isoform + '\t' + 'HOM' + '\t' + str(aligned) + '\t' + aligned_ref_seq + '\t' + aligned_var_seq + '\t' + str(ref_psi) + '\t' + str(var_psi) + '\t' + str(var_psi - ref_psi) + '\t' + diff_logodds + '\t' + str(var_member['count']) + '\t' + str(mut_pos) + '\t' + str(ref_count) + '\t' + str(var_count) + '\n')
# +
df = pd.read_csv('APA_Tandem_UTR_GEUV_With_Id.csv', sep='\t')
print(df.head())
df = df.sort_values(by='diff')
df.to_csv('APA_Tandem_UTR_GEUV_With_Id_Sorted.csv', sep='\t', header=True, index=False)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Emulators: First example
#
# This example illustrates Bayesian inference on a time series, using [Adaptive Covariance MCMC](http://pints.readthedocs.io/en/latest/mcmc_samplers/adaptive_covariance_mcmc.html) with an emulator neural network.
#
# It follows on from [Sampling: First example](../sampling/first-example.ipynb).
#
# Like in the sampling example, I start by importing pints:
import pints
# Next, I create a model class using the "Logistic" toy model included in pints:
# +
import pints.toy as toy
class RescaledModel(pints.ForwardModel):
def __init__(self):
self.base_model = toy.LogisticModel()
def simulate(self, parameters, times):
# Run a simulation with the given parameters for the
# given times and return the simulated values
r, k = parameters
r = r / 50
k = k * 500
return self.base_model.simulate([r, k], times)
def simulateS1(self, parameters, times):
# Run a simulation with the given parameters for the given
# times and return the simulated values and their sensitivities
r, k = parameters
r = r / 50
k = k * 500
return self.base_model.simulateS1([r, k], times)
def n_parameters(self):
# Return the dimension of the parameter vector
return 2
model = toy.LogisticModel()
# -
# In order to generate some test data, I choose an arbitrary set of "true" parameters:
true_parameters = [0.015, 500]
start_parameters = [0.75, 1.0]
# And a number of time points at which to sample the time series:
import numpy as np
times = np.linspace(0, 1000, 400)
# Using these parameters and time points, I generate an example dataset:
org_values = model.simulate(true_parameters, times)
range_values = max(org_values) - min(org_values)
# And make it more realistic by adding gaussian noise:
noise = 0.05 * range_values
print("The noise is:", noise)
values = org_values + np.random.normal(0, noise, org_values.shape)
# Using matplotlib, I look at the noisy time series I just simulated:
# +
import matplotlib.pyplot as plt
plt.figure(figsize=(12,4.5))
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, values, label='Noisy data')
plt.plot(times, org_values, lw=2, label='Noise-free data')
plt.legend()
plt.show()
# -
# Now, I have enough data (a model, a list of times, and a list of values) to formulate a PINTS problem:
model = RescaledModel()
problem = pints.SingleOutputProblem(model, times, values)
# I now have some toy data, and a model that can be used for forward simulations. To make it into a probabilistic problem, a _noise model_ needs to be added. This can be done using the `GaussianLogLikelihood` function, which assumes independently distributed Gaussian noise over the data, and can calculate log-likelihoods:
#log_likelihood = pints.GaussianLogLikelihood(problem)
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
# This `log_likelihood` represents the _conditional probability_ $p(y|\theta)$: given a set of parameters $\theta$ and a series of $y=$ `values`, it can calculate the probability of observing those values if the true parameters are $\theta$.
#
# This can be used in a Bayesian inference scheme to find the quantity of interest:
#
# $p(\theta|y) = \frac{p(\theta)p(y|\theta)}{p(y)} \propto p(\theta)p(y|\theta)$
#
# To solve this, a _prior_ is defined, indicating an initial guess about what the parameters should be.
# Just as the likelihood is handled through its natural logarithm (the _log-likelihood_), the prior is specified as a _log-prior_. In log form, the relation above becomes:
#
# $\log p(\theta|y) \propto \log p(\theta) + \log p(y|\theta)$
#
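# The identity above can be checked numerically with a toy Gaussian likelihood and a uniform prior (a standalone sketch; the `_toy` functions are illustrative and not part of PINTS):

```python
import numpy as np

def log_likelihood_toy(theta, y, sigma=1.0):
    # Independent Gaussian noise around a constant model prediction theta
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - 0.5 * ((y - theta) / sigma)**2))

def log_prior_toy(theta, lower=-5.0, upper=5.0):
    # Uniform prior on [lower, upper]
    return -np.log(upper - lower) if lower <= theta <= upper else -np.inf

y = np.array([0.9, 1.1, 1.0])
theta = 1.0
# Unnormalised log-posterior = log-prior + log-likelihood
log_posterior_unnorm = log_prior_toy(theta) + log_likelihood_toy(theta, y)
print(log_posterior_unnorm)
```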
# In this example, it is assumed that we don't know too much about the prior except lower and upper bounds for each variable: We assume the first model parameter is somewhere on the interval $[0.01, 0.02]$, the second model parameter on $[400, 600]$, and the standard deviation of the noise is somewhere on $[1, 100]$.
# Create bounds for our parameters and get prior
#bounds = pints.RectangularBoundaries([0.01, 400], [0.02, 600])
bounds = pints.RectangularBoundaries([0.5, 0.8], [1.0, 1.2])
log_prior = pints.UniformLogPrior(bounds)
# With this prior, the numerator of Bayes' rule can be defined -- the unnormalised log posterior, $\log \left[ p(y|\theta) p(\theta) \right]$, which is the natural logarithm of the likelihood times the prior:
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Finally we create a list of guesses to use as initial positions. We'll run three MCMC chains so we create three initial positions:
xs = [
np.array(start_parameters) * 0.9,
np.array(start_parameters) * 1.05,
np.array(start_parameters) * 1.15,
]
# And this gives us everything we need to run an MCMC routine:
chains = pints.mcmc_sample(log_posterior, 3, xs)
# +
# Revert scaling
scaling_factors = [1/50, 500]
chains_rescaled = np.copy(chains)
chain_rescaled = chains_rescaled[0]
chain_rescaled = chain_rescaled[2000:]
chains = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain] for chain in chains])
# -
# ## Using Pints' diagnostic plots to inspect the results
#
# We can take a further look at the obtained results using Pints's [diagnostic plots](http://pints.readthedocs.io/en/latest/diagnostic_plots.html).
# First, we use the [trace](http://pints.readthedocs.io/en/latest/diagnostic_plots.html#pints.plot.trace) method to see if the three chains converged to the same solution.
import pints.plot
pints.plot.trace(chains)
plt.show()
# Based on this plot, it looks like the three chains become very similar after about 1000 iterations.
# To be safe, we throw away the first 2000 samples and continue our analysis with the first chain.
chain = chains[0]
chain = chain[2000:]
# We can also look for autocorrelation in the chains, using the [autocorrelation()](http://pints.readthedocs.io/en/latest/diagnostic_plots.html#pints.plot.autocorrelation) method. If everything went well, the samples in the chain should be relatively independent, so the autocorrelation should get quite low when the `lag` on the x-axis increases.
pints.plot.autocorrelation(chain)
plt.show()
# Now we can inspect the inferred distribution by plotting histograms:
# +
fig, axes = pints.plot.histogram([chain], ref_parameters=true_parameters)
# Show where the sample standard deviation of the generated noise is:
noise_sample_std = np.std(values - org_values)
#axes[-1].axvline(noise_sample_std, color='orange', label='Sample standard deviation of noise')
#axes[-1].legend()
fig.set_size_inches(14, 9)
plt.show()
# -
# Here we've analysed each parameter in isolation, but we can also look at correlations between parameters we found using the [pairwise()](http://pints.readthedocs.io/en/latest/diagnostic_plots.html#pints.plot.pairwise) plot.
#
# To speed things up, we'll first apply some _thinning_ to the chain:
thinned_chain = chain[::10]
pints.plot.pairwise(thinned_chain, kde=True, ref_parameters=true_parameters)
plt.show()
# Finally, we can look at the bit that really matters: the model predictions made with the parameters we found (a _posterior predictive check_). These can be plotted using the [series()](http://pints.readthedocs.io/en/latest/diagnostic_plots.html#pints.plot.series) method.
# +
fig, axes = pints.plot.series(chain, problem)
# Customise the plot, and add the original, noise-free data
fig.set_size_inches(12,4.5)
plt.plot(times, org_values, c='orange', label='Noise-free data')
plt.legend()
plt.show()
# +
#
# Basic emulator artificial neural network (ANN)
#
# This file is part of PINTS (https://github.com/pints-team/pints/) which is
# released under the BSD 3-clause license. See accompanying LICENSE.md for
# copyright notice and full license details.
#
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
import pints
import numpy as np
import tensorflow as tf
import keras
import copy
import os
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from keras.utils import HDF5Matrix
from sklearn.preprocessing import StandardScaler, MinMaxScaler
class NeuralNetwork2(pints.ProblemLogLikelihood):
"""
Abstract base class for emulators that are based on neural networks.
*Extends:* :class:`pints.ProblemLogLikelihood`
An instance of this class emulates the log-likelihood of the given problem.
Arguments:
``problem``
A :class:`pints.SingleOutputProblem` or :class:`pints.MultiOutputProblem`
whose log-likelihood is being emulated.
``X``
N by n_parameters matrix containing inputs for the training data.
``y``
N by 1 array of target values for each input vector.
``input_scaler``
Optional sklearn scaler instance; when given, a MinMaxScaler is
fitted to and applied to each input feature.
``output_scaler``
Optional sklearn scaler instance that will be fitted to and applied
to the output.
"""
def __init__(self, problem, X, y, input_scaler=None, output_scaler=None):
# Perform sanity checks for given data
if not (isinstance(problem, pints.SingleOutputProblem) or isinstance(problem, pints.MultiOutputProblem)):
raise ValueError("Given problem must extend SingleOutputProblem or MultiOutputProblem.")
super(NeuralNetwork2, self).__init__(problem)
# Store counts
self._no = problem.n_outputs()
self._np = problem.n_parameters()
self._nt = problem.n_times()
# Check that dimensions are valid: X is 3d (samples x pairs x features)
if X.ndim != 3:
raise ValueError("Input should be 3 dimensional")
X_r, X_c, X_t = X.shape
if (X_c != self._np):
raise ValueError("Input data should have %d features" % self._np)
# if given target array is 1d convert automatically
if y.ndim == 1:
y = y.reshape(len(y), 1)
if y.ndim != 2:
raise ValueError("Target array should be 2 dimensional (N, 1)")
y_r, y_c = y.shape
if y_c != 1:
raise ValueError("Target array should only have 1 feature")
if (X_r != y_r):
raise ValueError("Input and target dimensions don't match")
# Scale data for inputs and output
self._X = copy.deepcopy(X)
if input_scaler:
self._input_scaler = {}
for i in range(X.shape[1]):
self._input_scaler[i] = MinMaxScaler()
self._X[:, i, :] = self._input_scaler[i].fit_transform(X[:, i, :])
#self._input_scaler.fit(X)
#self._X = self._input_scaler.transform(X)
self._output_scaler = output_scaler
if output_scaler:
self._output_scaler.fit(y)
self._y = self._output_scaler.transform(y)
else:
self._y = copy.deepcopy(y) # use a copy to prevent original data from changing
def n_parameters(self):
return self._np
class RescaledMetrics(keras.callbacks.Callback):
def __init__(self, output_scaler):
self._output_scaler = output_scaler
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
# Predict log-likelihood using training data
X_train, y_train = self.model.X_train, self.model.y_train
y_hat_train = self.model.predict(X_train)
# Predict log-likelihood using validation data
X_val, y_val = self.validation_data[0], self.validation_data[1]
y_hat_val = self.model.predict(X_val)
# Rescale predictions
y_train = self._output_scaler.inverse_transform(y_train)
y_hat_train = self._output_scaler.inverse_transform(y_hat_train)
y_val = self._output_scaler.inverse_transform(y_val)
y_hat_val = self._output_scaler.inverse_transform(y_hat_val)
# Calculate metrics based on rescaled predictions
train_mae = np.mean(np.abs(y_hat_train - y_train))
train_mse = np.mean((y_hat_train - y_train)**2)
val_mae = np.mean(np.abs(y_hat_val - y_val))
val_mse = np.mean((y_hat_val - y_val)**2)
# Store rescaled metrics
logs["rescaled_mae"] = train_mae
logs["rescaled_mse"] = train_mse
logs["val_rescaled_mae"] = val_mae
logs["val_rescaled_mse"] = val_mse
return
class MultiLayerNN2(NeuralNetwork2):
"""
Single layer neural network emulator.
Extends :class:`NNEmulator`.
"""
def __init__(self, problem, X, y, input_scaler=None, output_scaler=None):
super(MultiLayerNN2, self).__init__(problem, X, y, input_scaler, output_scaler)
self._model = Sequential()
def __call__(self, xx):
""" Evaluate the emulator on a (current, proposed) parameter pair. """
# Inputs always come as a pair of parameter vectors: (current, proposed)
x = np.array(copy.deepcopy(xx)).reshape((2, self.n_parameters()))
if self._input_scaler:
for i in range(x.shape[0]):
x[i, :] = self._input_scaler[i].transform(x[i, :].reshape(1, -1))
#x = self._input_scaler.transform(x)
x1, x2 = x.shape
y = self._model.predict([x.reshape(1, x1, x2)])
if self._output_scaler:
y = self._output_scaler.inverse_transform(y)
return y
def set_parameters(self, layers=6, neurons=64, hidden_activation='relu', activation='sigmoid',
learning_rate=0.001,
regularize=True, loss='mse', metrics=['mae']):
""" Provide parameters to compile the model. """
initializer = tf.keras.initializers.he_uniform(seed=1234)
k = int(layers/2)
if regularize:
regularizer=tf.keras.regularizers.l2(0.01)
else:
regularizer=None
if hidden_activation == "relu":
hidden_activation = tf.keras.layers.LeakyReLU(alpha=0.1)
# Input layer
self._model.add(Flatten())
self._model.add(Dense(neurons,
activation=hidden_activation,
input_dim=self._np,
kernel_initializer=initializer,
kernel_regularizer=regularizer
))
# Hidden layers
for n in range(1, k):
self._model.add(Dense(neurons*(2**n),
activation=hidden_activation,
kernel_initializer=initializer,
kernel_regularizer=regularizer
))
for n in range(k-2, -1, -1):
self._model.add(Dense(neurons*(2**n),
activation=hidden_activation,
kernel_initializer=initializer,
kernel_regularizer=regularizer
))
# Output layer
self._model.add(Dense(1, activation=activation))
#opt = keras.optimizers.SGD(learning_rate, momentum=0.9)
opt = keras.optimizers.Adam(learning_rate)
self._model.compile(
loss=loss,
optimizer=opt,
metrics=metrics
)
def fit(self, epochs=50, batch_size=32, X_val=None, y_val=None, verbose=0):
""" Train the neural network and return its training history. """
if self._input_scaler and X_val is not None:
for i in range(X_val.shape[1]):
X_val[:, i, :] = self._input_scaler[i].transform(X_val[:, i, :])
#X_val = self._input_scaler.transform(X_val)
if self._output_scaler and y_val is not None:
y_val = np.array(y_val).reshape((len(y_val),1))
y_val = self._output_scaler.transform(y_val)
# Create a callback object to compute rescaled metrics
metrics_calculater = RescaledMetrics(self._output_scaler)
# Create a callback that saves the model's weights
checkpoint_path = "checkpoints/epoch_{epoch:02d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
weights_saver = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=0)
# Create a callback for early stopping if no progress is made
early_stopper = EarlyStopping(monitor='val_mean_absolute_error',min_delta=0,patience=5,verbose=0,mode='auto')
plateau_reducer = ReduceLROnPlateau(monitor='val_mean_absolute_error', factor=0.3, patience=50)
self._model.X_train = self._X
self._model.y_train = self._y
self._history = self._model.fit(self._X,
self._y,
epochs=epochs,
batch_size=batch_size,
callbacks=[early_stopper,
weights_saver,
metrics_calculater],
shuffle=True,
validation_data=(X_val, y_val),
verbose=verbose
)
return self._history
def summary(self):
return self._model.summary()
def evaluate(self, X_test, y_test, **kwargs):
""" Uses Keras's evaluate() method, so additional parameters can be provided. """
if self._input_scaler:
for i in range(X_test.shape[1]):
X_test[:, i, :] = self._input_scaler[i].transform(X_test[:, i, :])
if self._output_scaler:
# Scale targets to match the model's scaled output space
y_test = self._output_scaler.transform(y_test)
return self._model.evaluate(X_test, y_test, **kwargs)
def get_model(self):
""" Return model. """
return self._model
def get_model_history(self):
""" Returns the training history of the model. """
assert hasattr(self, "_history"), "Must first train NN"
return self._history
def name(self):
""" See :meth:`pints.NNEmulator.name()`. """
return 'Multi-layer neural network'
# -
n_parameters = 2
sigma = np.array(start_parameters) * 5e-05
sigma = np.array(sigma)
if np.prod(sigma.shape) == n_parameters:
# Convert from 1d array
sigma = sigma.reshape((n_parameters,))
sigma = np.diag(sigma)
else:
# Check if 2d matrix of correct size
sigma = sigma.reshape((n_parameters, n_parameters))
sigma
# +
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
input_parameters = log_prior.sample(2000)
input_proposals = [np.random.multivariate_normal(current, sigma) for current in input_parameters]
current_likelihoods = np.apply_along_axis(log_likelihood, 1, input_parameters)
proposed_likelihoods = np.apply_along_axis(log_likelihood, 1, input_proposals)
inputs = np.array([(curr, prop) for curr, prop in zip(input_parameters, input_proposals)])
outputs = np.array([prop - curr for curr, prop in zip(current_likelihoods, proposed_likelihoods)])
likelihoods = current_likelihoods
x = [p[0] for p in input_parameters]
y = [p[1] for p in input_parameters]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, list(likelihoods))
plt.show()
inputs
# -
inputs[1][0][1]
outputs
log_likelihood([0.84242199, 0.93938406]) - log_likelihood([0.84353595, 0.93088844])
# +
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(inputs, outputs, test_size=0.3, random_state=0)
emu = MultiLayerNN2(problem, X_train, y_train, input_scaler=MinMaxScaler(), output_scaler=StandardScaler())
emu.set_parameters(layers=6, neurons=64, hidden_activation='relu', activation='linear', learning_rate=0.0001)
hist = emu.fit(epochs=500, batch_size=32, X_val=X_valid, y_val=y_valid, verbose=0)
emu.summary()
# -
[[0.68598206, 0.83795799],
[0.68850398, 0.83024144]]
#test_x = np.array(([0.6, 0.8], [0.612, 0.809]))
#test_x = np.array(([0.68598206, 0.83795799], [0.68850398, 0.83024144]))
test_x = np.array(([0.7, 0.95], [0.71, 0.96]))
emu(test_x)
log_likelihood(test_x[1]) - log_likelihood(test_x[0])
# +
# summarize history for loss
print(hist.history.keys())
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(20,10))
ax1.title.set_text('Learning curves based on MSE')
ax2.title.set_text('Learning curves based on MAE')
ax1.plot(hist.history['loss'])
ax1.plot(hist.history['val_loss'])
ax1.set_ylabel('MSE')
ax1.set_xlabel('Epoch')
ax1.legend(['training', 'validation'], loc='upper left')
ax2.plot(hist.history['mean_absolute_error'])
ax2.plot(hist.history['val_mean_absolute_error'])
ax2.set_ylabel('MAE')
ax2.set_xlabel('Epoch')
ax2.legend(['training', 'validation'], loc='upper left')
ax3.plot(hist.history['rescaled_mse'])
ax3.plot(hist.history['val_rescaled_mse'])
ax3.set_ylabel('Rescaled MSE')
ax3.set_xlabel('Epoch')
ax3.legend(['training', 'validation'], loc='upper left')
ax4.plot(hist.history['rescaled_mae'])
ax4.plot(hist.history['val_rescaled_mae'])
ax4.set_ylabel('Rescaled MAE')
ax4.set_xlabel('Epoch')
ax4.legend(['training', 'validation'], loc='upper left')
plt.show()
# -
n_iterations = 1000
factor = 1.0
x0 = np.array(start_parameters) * factor
sigma0 = np.array(start_parameters) * 5e-05
log_posterior_emu = pints.LogPosterior(emu, log_prior)
# +
xs = []
diffs = []
orig_rule = []
first_rule = []
second_rule = []
step1 = [0] * n_iterations
step2 = [0] * n_iterations
alphas = []
orig = [0] * n_iterations
# Run MCMC methods
for n in range(0, n_iterations):
if n == 0:
# Current point and proposed point
current = x0
#current_log_pdf = log_posterior_emu(x0)
#true_current_log_pdf = log_posterior(x0)
proposed = None
# Acceptance rate and error monitoring
error = 0
accepted1 = 0
accepted2 = 0
orig_accepted = 0
# Check initial position
x0 = pints.vector(x0)
# Get number of parameters
n_parameters = len(x0)
# Check initial standard deviation
if sigma0 is None:
# Get representative parameter value for each parameter
sigma0 = np.abs(x0)
sigma0[sigma0 == 0] = 1
# Use to create diagonal matrix
sigma0 = np.diag(0.01 * sigma0)
else:
sigma0 = np.array(sigma0)
if np.prod(sigma0.shape) == n_parameters:
# Convert from 1d array
sigma0 = sigma0.reshape((n_parameters,))
sigma0 = np.diag(sigma0)
else:
# Check if 2d matrix of correct size
sigma0 = sigma0.reshape((n_parameters, n_parameters))
# Ask- Propose new point
#if proposed is None:
proposed = np.random.multivariate_normal(current, sigma0)
xx = [current, proposed]
# Tell
# Calculate logpdfs
fx = log_posterior_emu(xx)
true_fx = log_posterior(proposed) - log_posterior(current)
error += np.abs((true_fx - fx) / true_fx)
# Check if the proposed point can be accepted using the emulator
if np.isfinite(fx):
# Step 1 - Initial reject step:
u1 = np.log(np.random.uniform(0, 1))
#alpha1 = min(0, (fx - current_log_pdf)[0][0]) # either alpha1 or alpha2 must be 0
alpha1 = min(0, fx[0][0])
if alpha1 > u1:
accepted1 += 1
step1[n] = 1
# Step 2 - Metropolis step:
u2 = np.log(np.random.uniform(0, 1))
#alpha2 = min(0, (current_log_pdf - fx)[0][0])
alpha2 = min(0, -fx[0][0])
#if ((true_fx + alpha2) - (true_current_log_pdf + alpha1)) > u2:
if (true_fx + alpha2 - alpha1) > u2:
accepted2 += 1
step2[n] = 1
# Check if the proposed point can be accepted using standard MCMC
if np.isfinite(fx):
# Step 1 - Initial reject step:
u = np.log(np.random.uniform(0, 1))
alpha = log_posterior(proposed) - log_posterior(current)
if alpha > u:
orig_accepted += 1
orig[n] = 1
# Clear proposal
xs.append(proposed)
proposed = None
# Compute difference between emulator and true model
diff = true_fx - fx
diffs.append(diff[0][0])
orig_rule.append(alpha)
first_rule.append(alpha1)
if step1[n] == 1:
second_rule.append((true_fx + alpha2 - alpha1))
alphas.append(alpha2)
else:
second_rule.append(-200)
alphas.append(0)
# Compute mean relative emulator error and acceptance rates
mean_rel_error = error[0][0] / n_iterations
acceptances = accepted2 / n_iterations
acceptances1 = accepted1 / n_iterations
acceptances2 = accepted2 / accepted1 if accepted1 > 0 else 0.0
orig_acceptances = orig_accepted / n_iterations
# -
print("Overall:", acceptances)
print("1st-step:", acceptances1)
print("2nd-step:", acceptances2)
print("Original:", orig_acceptances)
# +
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context='notebook', style='whitegrid', palette='deep', font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, ax = plt.subplots(figsize=(15,6))
plt.xlabel('Iteration')
plt.ylabel('Difference (true - emulator)')
ax.plot(diffs[:50])
plt.show()
#fig.savefig("figures/gradients/differences-"+str(factor)+".png", bbox_inches='tight', dpi=200)
# -
# A proposal is accepted by the emulator scheme only if both stages passed
step_sum = [s1 + s2 for (s1, s2) in zip(step1, step2)]
step_sum1 = [s - 1 if s > 0 else s for s in step_sum]  # map {0,1,2} -> {0,0,1}
# Count how often the two-stage decision agrees with plain Metropolis
sum([s == o for (s, o) in zip(step_sum1, orig)][:50])
# +
fig, (ax1, ax2) = plt.subplots(2, figsize=(15,6))
plt.xlabel('Iteration')
plt.ylabel('Accepted')
ax1.plot(step1[:50], label='Step 1')
ax1.plot(step2[:50], label='Step 2')
ax1.legend()
ax2.plot(orig[:50], label='Metropolis')
ax2.plot(step_sum1[:50], label='Emulator')
plt.legend()
plt.show()
#fig.savefig("figures/alphas/acceptances-"+str(factor)+".png", bbox_inches='tight', dpi=200)
# -
first_rule[:10]
# +
fig, (ax1, ax2) = plt.subplots(2, figsize=(15,8))
plt.xlabel('Iteration')
plt.ylabel('Rule')
ax1.plot(first_rule[:50], label='Step 1')
ax1.plot(second_rule[:50], label='Step 2')
ax1.legend()
ax2.plot(orig_rule[:50], label='Metropolis')
ax2.plot(alphas[:50], label='Alpha 2')
ax2.legend()
plt.show()
#fig.savefig("figures/gradients/alphas-"+str(factor)+".png", bbox_inches='tight', dpi=200)
# +
import matplotlib.pyplot as plt
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
test_splits = 50 # number of splits along each axis
r_grid, k_grid, test_data = pints.generate_grid(bounds.lower(), bounds.upper(), test_splits)
model_prediction = pints.predict_grid(log_likelihood, test_data)
emu_prediction = pints.predict_grid(emu, test_data)
figsize=(20,10)
angle=(20, 300)
alpha=0.9
fontsize=16
labelpad=10
title = "Comparison of log-likelihood surfaces"
x_label = "growth rate (r)"
y_label = "carrying capacity (k)"
z_label = "log-likelihood"
fig = plt.figure(figsize=(15,10))
ax = plt.axes(projection='3d')
ax.plot_surface(r_grid, k_grid, model_prediction, cmap='Blues', edgecolor='none', alpha=alpha)
ax.plot_surface(r_grid, k_grid, emu_prediction, cmap='Reds', edgecolor='none', alpha=alpha)
#ax.view_init(60, 35)
ax.view_init(*angle)
#ax.set_title('surface')
plt.title(title, fontsize=fontsize*1.25)
ax.set_xlabel(x_label, fontsize=fontsize, labelpad=labelpad)
ax.set_ylabel(y_label, fontsize=fontsize, labelpad=labelpad)
ax.set_zlabel(z_label, fontsize=fontsize, labelpad=labelpad)
fake2Dline1 = mpl.lines.Line2D([0],[0], linestyle="none", c='blue', marker = 'o')
fake2Dline2 = mpl.lines.Line2D([0],[0], linestyle="none", c='red', marker = 'o')
ax.legend([fake2Dline1, fake2Dline2], ["True log-likelihood", "NN emulator log-likelihood"])
plt.show()
mape = np.mean(np.abs((model_prediction - emu_prediction) / model_prediction))
mape
# +
emu_prediction = np.apply_along_axis(emu, 1, chain_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, chain_rescaled).flatten()
diffs = (np.abs((model_prediction - emu_prediction) / model_prediction))
iters = np.linspace(0, 10000, len(chain_rescaled))
plt.figure(figsize=(10, 5))
plt.title("Emulator and model absolute differences along a chain of MCMC")
plt.xlabel("Number of iterations")
plt.ylabel("Likelihood")
plt.plot(iters, diffs, color = "Black")
plt.show()
diffs[-1]
# -
print(emu_prediction)
log_posterior_emu = pints.LogPosterior(emu, log_prior)
chains_emu = pints.mcmc_sample(log_posterior_emu, 3, xs)
# +
# Revert scaling
scaling_factors = [1/50, 500]
chains_emu_rescaled = np.copy(chains_emu)
chain_emu_rescaled = chains_emu_rescaled[0]
chain_emu_rescaled = chain_emu_rescaled[2000:]
chains_emu = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain] for chain in chains_emu])
# -
pints.plot.trace(chains_emu)
plt.show()
pints.plot.trace(chains)
plt.show()
chain_emu = chains_emu[0]
chain_emu = chain_emu[2000:]
chain = chains[0]
chain = chain[2000:]
fig, axes = pints.plot.histogram([chain_emu, chain], ref_parameters=true_parameters, sample_names=["Emulator", "MCMC"])
fig.set_size_inches(14, 9)
plt.show()
thinned_chain_emu = chain_emu[::10]
pints.plot.pairwise(thinned_chain_emu, kde=True, ref_parameters=true_parameters)
plt.show()
pints.plot.pairwise(thinned_chain, kde=True, ref_parameters=true_parameters)
plt.show()
# +
emu_prediction = np.apply_along_axis(emu, 1, chain_emu_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, chain_rescaled).flatten()
diffs = (np.abs((model_prediction - emu_prediction) / model_prediction))
iters = np.linspace(0, 10000, len(chain_emu_rescaled))
plt.figure(figsize=(10, 5))
plt.title("Emulator and model absolute differences along a chain of MCMC")
plt.xlabel("Number of iterations")
plt.ylabel("Likelihood")
plt.plot(iters, diffs, color = "Black")
plt.show()
diffs[-1]
# +
emu_prediction = np.apply_along_axis(emu, 1, chain_emu_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, chain_rescaled).flatten()
diffs = (model_prediction - emu_prediction)
iters = np.linspace(0, 10000, len(chain_emu_rescaled))
plt.figure(figsize=(10, 5))
plt.title("Emulator and model absolute differences along a chain of MCMC")
plt.xlabel("Number of iterations")
plt.ylabel("Difference")
plt.plot(iters, diffs, color="black")
plt.show()
diffs[-1]
# -
chain_emu
# +
# Create grid of parameters
x = [p[0] for p in chain_rescaled]
y = [p[1] for p in chain_rescaled]
xmin, xmax = np.min(x), np.max(x)
ymin, ymax = np.min(y), np.max(y)
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.contourf(xx, yy, ll, cmap='Blues', extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Reds', extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
# +
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood with MCMC samples')
ax2.title.set_text('Neural Network with MCMC samples')
# Create grid of parameters
x = [p[0] for p in chain_emu_rescaled]
y = [p[1] for p in chain_emu_rescaled]
xmin, xmax = np.min(x), np.max(x)
ymin, ymax = np.min(y), np.max(y)
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
# Sort according to differences in log-likelihood
idx = diffs.argsort()
x_sorted = np.array(x)[idx]
y_sorted = np.array(y)[idx]
diffs_sorted = diffs[idx]
# Add contour lines of log-likelihood
ax1.contourf(xx, yy, ll, cmap='Greys', extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='w')
# Plot chain_emu
ax1.set_xlim([xmin, xmax])
ax1.set_ylim([ymin, ymax])
im1 = ax1.scatter(x_sorted, y_sorted, c=diffs_sorted, s=70, edgecolor='k', cmap="RdYlGn_r")
# Add contour lines of emulated likelihood
ax2.contourf(xx, yy, ll_emu, cmap='Greys', extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='w')
# Plot chain_emu
ax2.set_xlim([xmin, xmax])
ax2.set_ylim([ymin, ymax])
im2 = ax2.scatter(x_sorted, y_sorted, c=diffs_sorted, s=70, edgecolor='k', cmap="RdYlGn_r")
fig.colorbar(im1, ax=ax1)
fig.colorbar(im2, ax=ax2)
plt.show()
# +
# Choose starting points for 3 mcmc chains
x0s = [
np.array(start_parameters) * 0.9,
np.array(start_parameters) * 1.05, #1.1
np.array(start_parameters) * 1.15,
]
# Choose a covariance matrix for the proposal step
#sigma0 = [0.01, 0.01] #np.abs(true_parameters) * 5e-3
#sigma0 = [0.1*0.75, 0.1] #np.abs(true_parameters) * 5e-3
#sigma0 = [[ 1.01547594e-05, -2.58358260e-06], [-2.58358260e-06, 1.22093040e-05]]
sigma0 = np.array(start_parameters) * 5e-05
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior_emu, 3, x0s, sigma0=sigma0, method=pints.EmulatedMetropolisMCMC, f=log_posterior)
# Add stopping criterion
mcmc.set_max_iterations(30000)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
emulated_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(emulated_chains)
# Discard warm up
emulated_chains_thinned = emulated_chains[:, 10000:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(emulated_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(emulated_chains_thinned[0])
# Show graphs
plt.show()
# +
# Choose starting points for 3 mcmc chains
x0s = [
np.array(start_parameters) * 0.9,
np.array(start_parameters) * 1.05, #1.1
np.array(start_parameters) * 1.15,
]
# Choose a covariance matrix for the proposal step
#sigma0 = [0.01, 0.01] #np.abs(true_parameters) * 5e-5
#sigma0 = [[ 1.01547594e-05, -2.58358260e-06], [-2.58358260e-06, 1.22093040e-05]]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, x0s, sigma0=sigma0, method=pints.MetropolisRandomWalkMCMC)
# Add stopping criterion
mcmc.set_max_iterations(30000)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
metropolis_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(metropolis_chains)
# Discard warm up
metropolis_chains_thinned = metropolis_chains[:, 10000:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(metropolis_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(metropolis_chains_thinned[0])
# Show graphs
plt.show()
# +
# Choose starting points for 3 mcmc chains
x0s = [
np.array(start_parameters) * 0.9,
np.array(start_parameters) * 1.05, #1.1
np.array(start_parameters) * 1.15,
]
# Choose a covariance matrix for the proposal step
#sigma0 = [0.01, 0.01] #np.abs(true_parameters) * 5e-5
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, x0s)
# Add stopping criterion
mcmc.set_max_iterations(30000)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
ac_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(ac_chains)
# Discard warm up
ac_chains_thinned = ac_chains[:, 10000:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(ac_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(ac_chains_thinned[0])
# Show graphs
plt.show()
# -
metropolis_chains.shape
metropolis_chains[0][:10]
emulated_chains[0][:10]
fig, axes = pints.plot.histogram([emulated_chains_thinned[0], metropolis_chains_thinned[0]], ref_parameters=start_parameters, sample_names=["Emulator", "MCMC"])
fig.set_size_inches(14, 9)
plt.show()
| examples/emulators/mcmc/first-example-emulator-test-training-on-diffs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Creating a Simple Application
#
# The Streams Python API allows a user to create a streams application using only Python. The API allows data sources, transformations, and sinks to be defined by performing operations on `Stream` and `Topology` objects.
#
# First, import the `Topology` class from the `streamsx.topology` package; it is used to begin creating your application. In addition, the `context` module must be imported so that your application can be submitted.
#
# Lastly, import the classes of the operators you wish to use. Here we use `Counter`, a source operator which counts from 0 to infinity, and `negative_one`, an operator which returns the negative of every number it is given.
from streamsx.topology.topology import Topology
from streamsx.topology import context
from my_module import Counter, negative_one
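# `my_module` is not shown in this notebook. A minimal sketch of what it might contain, based only on the descriptions above, follows; both class bodies here are assumptions, not the real implementation:

```python
class Counter:
    """Hypothetical source: when called, yields 0, 1, 2, ... indefinitely."""
    def __call__(self):
        n = 0
        while True:
            yield n
            n += 1


class negative_one:
    """Hypothetical transform: returns the negative of each number it is given."""
    def __call__(self, x):
        return -x
```

# For example, `negative_one()(5)` returns `-5`.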
# ## The Topology Object
# A Topology object is a container for the structure of your application. Give it a unique name, otherwise it may overwrite other compiled streams applications.
top = Topology("myTop")
# ## Define and Create Your Data Source
# By creating an instance of the `Counter` operator, we can use it as a data source by invoking the `source` method on the `Topology` object and passing it the `my_counter` object. The output of a source is a `Stream`, which represents the flow of data in your application. A stream consists of a potentially infinite sequence of Python objects upon which you can perform subsequent operations (such as multiplying by negative one).
# Define source
my_counter = Counter()
stream_1 = top.source(my_counter)
# ## Performing Operations On Your Data Stream
# A user might want to perform a number of operations on a stream of live data; for example, extracting sentiment from tweets, keeping track of a GPS device, or monitoring live traffic data. All of these operations take data items of one type and modify them or produce data of another type. With the Python API, this can be achieved by calling `map` on a Stream and passing it an operator.
#
# As mentioned before, `negative_one` is a callable class which takes a number and returns its negative.
# +
# multiply by negative one
neg_one = negative_one()
stream_2 = stream_1.map(neg_one)
# Print stream
stream_2.print()
# -
# ## Submission
# Submit the application to be run locally in a single process. The output will be sent to standard output and printed to the screen.
out = context.submit("STANDALONE", top.graph)
| samples/python/topology/notebooks/PyAPI/PyAPI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # HTTP requests
# This tutorial covers how to make requests via the HTTP protocol.
# For more information about related topics, see:
# * <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol">Hypertext Transfer Protocol (HTTP)</a>
# * <a href="https://en.wikipedia.org/wiki/JSON">JavaScript Object Notation</a>
# * <a href="https://en.wikipedia.org/wiki/HTML">HyperText Markup Language (HTML)</a>
#
# Keep in mind that in this tutorial we work only with static content; how to obtain dynamic web content is not covered here. If you want to deal with dynamic content, study <a href="http://selenium-python.readthedocs.io/">Selenium Python Bindings</a>.
#
# ## Get HTML page content
# This section shows how to get an HTTP response with two different libraries:
# * <a href="https://docs.python.org/3.4/library/urllib.html?highlight=urllib">urllib</a> (standard library in Python 3)
# * <a href="http://docs.python-requests.org/en/master/">Requests</a> (installable through pip)
#
# This tutorial mainly uses the Requests library, as the preferred option.
#
# ### urllib library
# An example of how to get the static content of a web page with urllib follows:
# + deletable=true editable=true
from urllib.request import urlopen
r = urlopen('http://www.python.org/')
data = r.read()
print("Status code:", r.getcode())
# + [markdown] deletable=true editable=true
# The variable `data` contains the returned HTML code (full page) as a string. You can process it, save it, or do anything else you need.
#
# ### Requests
# An example of how to get the static content of a web page with Requests follows.
# + deletable=true editable=true
import requests
r = requests.get("http://www.python.org/")
data = r.text
print("Status code:", r.status_code)
# + [markdown] deletable=true editable=true
# ## Get JSON data from an API
# This task is demonstrated on Open Notify - an open-source project that provides a simple programming interface for some of NASA's awesome data.
#
# The examples below cover how to obtain the current position of the ISS. With the Requests library it is possible to get the JSON from the API in the same way as HTML data.
# + deletable=true editable=true
import requests
r = requests.get("http://api.open-notify.org/iss-now.json")
obj = r.json()
print(obj)
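# The position fields can then be read out of the parsed dictionary. The sketch below uses a hard-coded sample payload in the shape Open Notify returns (the numeric values are illustrative), so it runs without a network connection:

```python
import json

# Sample payload shaped like the Open Notify /iss-now.json response.
sample = ('{"message": "success", "timestamp": 1500000000, '
          '"iss_position": {"latitude": "-12.4312", "longitude": "104.5177"}}')

obj = json.loads(sample)  # equivalent to what r.json() returns
latitude = float(obj["iss_position"]["latitude"])
longitude = float(obj["iss_position"]["longitude"])
print(latitude, longitude)  # -> -12.4312 104.5177
```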
# + [markdown] deletable=true editable=true
# The Requests method `json()` converts the JSON response to a Python dictionary. The next code block demonstrates how to get data from the obtained response.
#
# ## Persistent sessions with Requests
# Sessions with Requests are handy when you need to use the same cookies (session cookies, for example) or authentication across multiple requests.
# + deletable=true editable=true
s = requests.Session()
print("No cookies on start: ")
print(dict(s.cookies))
r = s.get('http://google.cz/')
print("\nA cookie from google: ")
print(dict(s.cookies))
r = s.get('http://google.cz/?q=cat')
print("\nThe cookie is persistent:")
print(dict(s.cookies))
# + [markdown] deletable=true editable=true
# Compare the output of the code above with the example below.
# + deletable=true editable=true
r = requests.get('http://google.cz/')
print("\nA cookie from google: ")
print(dict(r.cookies))
r = requests.get('http://google.cz/?q=cat')
print("\nDifferent cookie:")
print(dict(r.cookies))
# + [markdown] deletable=true editable=true
# ## Custom headers
# The headers of the response are easy to check; an example follows.
# + deletable=true editable=true
r = requests.get("http://www.python.org/")
print(r.headers)
# + [markdown] deletable=true editable=true
# The request headers can be modified in a simple way, as follows.
# + deletable=true editable=true
headers = {
"Accept": "text/plain",
}
r = requests.get("http://www.python.org/", headers=headers)
print(r.status_code)
# + [markdown] deletable=true editable=true
# More information about HTTP headers can be found at <a href="https://en.wikipedia.org/wiki/List_of_HTTP_header_fields">List of HTTP header fields wikipedia page</a>.
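# For comparison, custom headers can also be set with the standard-library urllib used earlier, by building a `Request` object. The sketch below only constructs the request; no HTTP call is actually made:

```python
from urllib.request import Request

# Build a request object carrying a custom Accept header.
req = Request("http://www.python.org/", headers={"Accept": "text/plain"})
print(req.full_url)              # http://www.python.org/
print(req.get_header("Accept"))  # text/plain
# Passing `req` to urlopen(req) would send it with these headers.
```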
| Making_HTTP_requests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.6 64-bit ('env')
# name: python386jvsc74a57bd0c48f87b51193a18c36f6ec1b39da40df380a4c3fb2fa54b3e413dc2378ec9052
# ---
# # Tube segmentation
from brainlit.utils import session
from brainlit.feature_extraction import *
import napari
url = "s3://open-neurodata/brainlit/brain1"
sess = session.NeuroglancerSession(url=url, url_segments=url+"_segments", mip=0)
SEGLIST=[101,103,106,107,109,11,111,112,115,11,12,120,124,126,127,129,13,132,133,136,137,14,140,141,142,143,144,145,146,147,149,150]
SEGLIST = SEGLIST[:1]
# + tags=[]
# # %%capture
nbr = NeighborhoodFeatures(url=url, radius=1, offset=[50,50,50], segment_url=url+"_segments")
nbr.fit(seg_ids=SEGLIST, num_verts=10, file_path='demo', batch_size=10)
# +
import glob, feather
import pandas as pd  # pd.concat is used below but pandas was not imported
feathers = glob.glob('*.feather')
for count, feather_file in enumerate(feathers):
if count == 0:
data = feather.read_dataframe(feather_file)
else:
df = feather.read_dataframe(feather_file)
data = pd.concat([data, df])
data.shape
# -
data.head()
# +
from sklearn.preprocessing import StandardScaler
X = data.iloc[:, 3:]
X = StandardScaler().fit_transform(X)
y = data["Label"]
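# `StandardScaler` standardizes each feature to zero mean and unit variance. What it does per column can be sketched with the standard library alone (illustrative numbers, not the notebook's data):

```python
from statistics import fmean, pstdev

def standardize(column):
    """Center a column on its mean and scale by its population std. dev."""
    mu, sigma = fmean(column), pstdev(column)
    return [(v - mu) / sigma for v in column]

col = [2.0, 4.0, 6.0, 8.0]
z = standardize(col)
print(z)  # symmetric around 0; z[-1] is about 1.3416
```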
# + tags=[]
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=4, activation="logistic", alpha=1, max_iter=1000).fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
# +
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
fpr, tpr, _ = roc_curve(y_test, y_score[:,1])
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('MLP ROC')
plt.legend(loc="lower right")
plt.show()
# -
from brainlit.feature_extraction.neighborhood import subsample
X.shape
from sklearn.linear_model import LogisticRegression
Xc_train, Xc_test, yc_train, yc_test = train_test_split(X, y, stratify=y, random_state=1)
clf = LogisticRegression(random_state=1, max_iter=2000).fit(Xc_train, yc_train)
yc_score = clf.predict_proba(Xc_test)
# +
fpr_c, tpr_c, _ = roc_curve(yc_test, yc_score[:,1])
roc_auc_c = auc(fpr_c, tpr_c)
plt.figure()
lw = 2
plt.plot(fpr_c, tpr_c, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc_c)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('LogRegression Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
# -
| docs/notebooks/pipelines/tubes_feature_extraction_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Essential Python Libraries
#
# ## NumPy
# - Cornerstone
# - Provides data structures, algorithms, and library glue needed for most scientific applications involving numerical data in Python.
#
# ## pandas
# - Provides high level data structures and functions designed to make working with structured data or tabular data fast, easy and expressive.
# - Blends ideas of NumPy with the flexible data manipulation capabilities of spreadsheets and relational databases (SQL)
#
# ## matplotlib
# - The most popular Python library for producing plots and other 2D data visualisations for publication.
#
# ## IPython & Jupyter
# - IPython: Does not provide any computational or data analytical tools by itself
# - It encourages execute-explore workflow rather than edit-compile-run
# - Helps get the job done faster by providing easy access to your OS's shell and filesystem.
#
# ## SciPy
# - Collection of packages addressing a number of different standard problem domains in scientific computing.
# - Together NumPy and SciPy form a reasonably complete and mature computational foundation for many traditional scientific computing applications.
#
# ## Scikit-learn
# - General purpose ML toolkit for Python programmers
# - Includes submodules for such models as:
# - Classification: SVM, nearest neighbours, random forest, logistic regression.
# - Regression: Lasso, ridge regression
# - Clustering: k-means, spectral clustering
# - Model selection: Grid search, cross-validation, metrics
#
# ## Statsmodels
# - Statistical analysis package
# - Contains algorithms for classical statistics and econometrics.
# - Regression models, variance, time series, visualisation of stat model results
#
# # Why Python for DA?
#
# - Popular
# - Large community
# - Support for libraries
# - Combined with general purpose software engineering, excellent option for building data applications
# - Suitable language for not only doing research and prototyping but also for building the production systems
# - Solves the two-language problem: Python can be the universal language in an organisation
#
# ## Should not be used when:
# - Raw speed matters: Python is slower than code written in a compiled language like Java
# - You need highly concurrent, multithreaded applications, which are challenging to build in Python
#
| Notes/Essential Python Libraries for Data Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas
import numpy as np
level = pandas.read_csv('calibration_level.csv')
level.head()
roll = pandas.read_csv('calibration_roll.csv')
roll.head()
ax = level.mag_x.plot()
level.mag_y.plot(ax=ax)
level['pitch'] = np.degrees(np.arctan2(level.acc_x, np.sqrt(level.acc_y**2 + level.acc_z**2)))  # standard tilt formula: acc_y, not acc_x, in the denominator
level['roll'] = np.degrees(np.arctan2(-level.acc_y, -level.acc_z))
level.pitch.plot()
level.roll.plot()
roll['pitch'] = np.degrees(np.arctan2(roll.acc_x, np.sqrt(roll.acc_y**2 + roll.acc_z**2)))  # standard tilt formula: acc_y, not acc_x, in the denominator
roll['roll'] = np.degrees(np.arctan2(-roll.acc_y, -roll.acc_z))
roll.pitch.plot(legend=True)
roll.roll.plot(legend=True)
ax = roll.plot('roll', 'mag_x')
roll.plot('roll', 'mag_y', ax=ax)
roll.plot('roll', 'mag_z', ax=ax, xlim=(-90, 90))
offset_x = (level.mag_x.max() + level.mag_x.min())/2
offset_y = (level.mag_y.max() + level.mag_y.min())/2
range_x = level.mag_x.max() - level.mag_x.min()
range_y = level.mag_y.max() - level.mag_y.min()
print(offset_x, range_x, offset_y, range_y)
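# The offsets above are hard-iron offset estimates (the midpoint of each axis's extremes) and the ranges are the full spans; dividing a centred reading by its range maps it onto roughly [-0.5, 0.5]. A tiny self-contained illustration (made-up numbers, not sensor data):

```python
raw = [120.0, 180.0, 150.0, 135.0]         # fake magnetometer axis readings
offset = (max(raw) + min(raw)) / 2         # hard-iron offset estimate
axis_range = max(raw) - min(raw)           # full span of readings
scaled = [(v - offset) / axis_range for v in raw]
print(scaled)  # -> [-0.5, 0.5, 0.0, -0.25]
```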
# +
y_flat = roll[(-3 < roll.roll) & (roll.roll < +3)]['mag_y'].mean()
# Adjust X and Y fields by calibration already got (dance calibration)
MagX = (roll.mag_x - offset_x) / range_x
MagY = (roll.mag_y - offset_y) / range_y
MagY_flat = (y_flat - offset_y) / range_y
raw_z = roll.mag_z
# Convert pitch and roll to radians for trig functions
roll_r = np.radians(roll.roll)
pitch_r = np.radians(roll.pitch)
def mag_y_comp_residuals(p):
MagY_comp = (MagX * np.sin(roll_r) * np.sin(pitch_r)) +\
(MagY * np.cos(roll_r)) - (((raw_z - p[0]) / p[1])* np.sin(roll_r) * np.cos(pitch_r))
return MagY_comp - MagY_flat
# -
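# The residual function above encodes the standard tilt-compensation formula for the magnetometer Y axis. With roll $\phi$ and pitch $\theta$, and the fitted Z offset $p_0$ and scale $p_1$, the compensated component being matched to the flat reading is:

```latex
M_y^{\mathrm{comp}} = M_x \sin\phi \sin\theta + M_y \cos\phi
                      - \frac{m_z^{\mathrm{raw}} - p_0}{p_1}\,\sin\phi \cos\theta
```

# `leastsq` then chooses $p_0$ and $p_1$ so that $M_y^{\mathrm{comp}}$ stays constant as the device rolls.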
from scipy.optimize import leastsq
res, ier = leastsq(mag_y_comp_residuals, (1, 1))
assert 1 <= ier <= 4
res
# +
MagZ = (raw_z - res[0]) / res[1]
MagY_comp = (MagX * np.sin(roll_r) * np.sin(pitch_r)) +\
(MagY * np.cos(roll_r)) - (MagZ * np.sin(roll_r) * np.cos(pitch_r))
# -
import matplotlib.pyplot as plt
plt.plot(roll.roll, MagZ, label='Z')
plt.plot(roll.roll, MagZ * np.sin(roll_r) * np.cos(pitch_r), label='Z adjust')
plt.xlabel('Roll (degrees)')
plt.legend()
plt.plot(roll.roll, MagY, label='Uncalibrated')
plt.plot(roll.roll, MagY_comp, label='Calibrated')
plt.xlabel('Roll (degrees)')
plt.legend()
| notes/roll_calibration/More roll calib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Wavelet Filtering
# One interesting feature of the wavelet transform is that it can be used as a bandpass filter. There are many methods for filtering noise out of a time series, but wavelet filtering is a very useful one as well.
#
# The whole method is briefly described in TC98, section 6, "Extensions to Wavelet Analysis".
#
# Let's see how we can get it done now.
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
import matplotlib
from waveletFunctions import *
# -
# NOTE: `sst` (the Niño 3 SST anomaly series) must be loaded before this point;
# the loading cell is not shown in this notebook.
n = len(sst)
dt = (1/4)
time = (np.arange(n))*dt + 1871.0 # construct time array
xlim = ([1871, 2000]) # plotting range
pad = 1. # pad the time series with zeroes (recommended)
dj = 0.25 # this will do 4 sub-octaves per octave
s0 = 2 * dt # this says start at a scale of 6 months
j1 = 7 / dj # this says do 7 powers-of-two with dj sub-octaves each
mother = 'MORLET' # options: 'MORLET', 'PAUL' e 'DOG'
##--- Wavelet Transform:
wave, period, scale, coi = wavelet(sst, dt, pad, dj, s0, j1, mother)
realpart = np.real(wave)
##--- filtering
cdelta = 0.776 # cdelta=0.776 for morlet wavelet; see table 2 (TC, 1998)
psi = math.pi**(-1/4) # for morlet wavelet; see table 2 (TC, 1998)
xnc = (dj*math.sqrt(dt))/(cdelta*psi) # 'constant' part for Eq. 29 (TC, 1998)
avg = np.logical_and(scale>=2.,scale<8.) # filtering scales
scale_avg = scale[:, np.newaxis].dot(np.ones(n)[np.newaxis, :]) # expand scale array
xnv = realpart/np.sqrt(scale_avg) # 'variable' part for Eq. 29
xn = xnc*sum(xnv[avg,:]) # Eq. 29
# This is really a no-brainer. The only thing you have to do is apply Eq. 29 from TC98 and be careful to use the correct parameters for the Morlet wavelet (you can use the other 'mothers' as well).
#
# As you can see above, we filter over the scales from 2 to 8 years, but you can select any range you want.
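# For reference, Eq. 29 of TC98 (the reconstruction formula the code implements) is, with the sum restricted to the scales being kept:

```latex
x_n = \frac{\delta j \,\sqrt{\delta t}}{C_\delta \,\psi_0(0)}
      \sum_{j=j_1}^{j_2} \frac{\Re\{ W_n(s_j) \}}{\sqrt{s_j}}
```

# For the Morlet wavelet, $C_\delta = 0.776$ and $\psi_0(0) = \pi^{-1/4}$, exactly the `cdelta` and `psi` constants in the code.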
# +
plt.figure(figsize=(12,4))
plt.xlim([1871,2000])
plt.xticks(range(1871, 2005, 25))
plt.plot(time, sst,'black',lw=1,label=u"Niño 3")
plt.plot(time,xn,'red',lw=1,label="2-8 yrs")
plt.xlabel('Years')
plt.ylabel(u'ASST ($\degree$C)')
plt.title(u'2-8 yrs Niño 3 Wavelet Reconstruction')
plt.legend(loc='upper right')
plt.tight_layout()
plt.show()
# -
# It's as simple as that. If you don't believe me, let's select another range for our filtering.
#
# 10-20 years looks good enough. Let's do it.
##--- filtering
cdelta = 0.776 # cdelta=0.776 for morlet wavelet; see table 2 (TC, 1998)
psi = math.pi**(-1/4) # for morlet wavelet; see table 2 (TC, 1998)
xnc = (dj*math.sqrt(dt))/(cdelta*psi) # 'constant' part for Eq. 29 (TC, 1998)
avg = np.logical_and(scale>=10.,scale<20.) # filtering scales
scale_avg = scale[:, np.newaxis].dot(np.ones(n)[np.newaxis, :]) # expand scale array
xnv = realpart/np.sqrt(scale_avg) # 'variable' part for Eq. 29
xn = xnc*sum(xnv[avg,:]) # Eq. 29
# +
plt.figure(figsize=(12,4))
plt.xlim([1871,2000])
plt.xticks(range(1871, 2005, 25))
plt.plot(time, sst,'black',lw=1,label=u"Niño 3")
plt.plot(time,xn,'red',lw=1,label="10-20 yrs")
plt.xlabel('Years')
plt.ylabel(u'ASST ($\degree$C)')
plt.title(u'10-20 yrs Niño 3 Wavelet Reconstruction')
plt.legend(loc='upper right')
plt.tight_layout()
plt.show()
# -
# You can see that we now have a new low-frequency time series derived from our original Niño 3 data. It's truly a piece of cake to do this kind of analysis now, isn't it?
# As we well know by now, no method is perfect, and wavelet analysis has its own set of issues which can make its applicability to climate science somewhat problematic$^{2}$. However, its usefulness has also been demonstrated many times.
# ## REFERENCES
# 1 - <NAME>, and <NAME>. "A practical guide to wavelet analysis." Bulletin of the American Meteorological society 79.1 (1998): 61-78.
#
# 2 - Huang, <NAME>., and <NAME>. "A review on Hilbert-Huang transform: Method and its applications to geophysical studies." Reviews of geophysics 46.2 (2008).
# ## RECOMMENDED READINGS
# 1 - <NAME>, <NAME>, and <NAME>. "Application of the cross wavelet transform and wavelet coherence to geophysical time series." Nonlinear processes in geophysics 11.5/6 (2004): 561-566.
#
# 2 - Daubechies, Ingrid. Ten lectures on wavelets. Society for industrial and applied mathematics, 1992.
#
# 3 - <NAME>, and <NAME>-Georgiou. "Wavelet analysis for geophysical applications." Reviews of geophysics 35.4 (1997): 385-412.
#
# 4 - Labat, David, et al. "Wavelet analysis of Amazon hydrological regime variability." Geophysical Research Letters 31.2 (2004).
| Jupyter-Notebooks/wave_filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
# -
def get_volt():
"""Measure voltage."""
v = np.random.normal(0, 4) # v: measurement noise.
volt_mean = 14.4 # volt_mean: mean (nominal) voltage [V].
volt_meas = volt_mean + v # volt_meas: measured voltage [V] (observable).
return volt_meas
def avg_filter(k, x_meas, x_avg):
    """Calculate average voltage using an average filter."""
alpha = (k - 1) / k
x_avg = alpha * x_avg + (1 - alpha) * x_meas
return x_avg
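# The recursion above is algebraically identical to the running arithmetic mean $\bar{x}_k = \frac{1}{k}\sum_{i=1}^{k} x_i$; a quick self-contained check:

```python
def avg_filter(k, x_meas, x_avg):
    """Recursive average: x_avg_k = ((k-1)/k) * x_avg_{k-1} + (1/k) * x_k."""
    alpha = (k - 1) / k
    return alpha * x_avg + (1 - alpha) * x_meas

samples = [3.0, 5.0, 10.0, 2.0]
x_avg = 0.0
for k, x in enumerate(samples, start=1):
    x_avg = avg_filter(k, x, x_avg)
print(x_avg)  # matches sum(samples) / len(samples) = 5.0 (up to float rounding)
```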
# Input parameters.
time_end = 10
dt = 0.2
time = np.arange(0, time_end, dt)
n_samples = len(time)
x_meas_save = np.zeros(n_samples)
x_avg_save = np.zeros(n_samples)
x_avg = 0
for i in range(n_samples):
k = i + 1
x_meas = get_volt()
x_avg = avg_filter(k, x_meas, x_avg)
x_meas_save[i] = x_meas
x_avg_save[i] = x_avg
plt.plot(time, x_meas_save, 'r*', label='Measured')
plt.plot(time, x_avg_save, 'b-', label='Average')
plt.legend(loc='upper left')
plt.title('Measured Voltages v.s. Average Filter Values')
plt.xlabel('Time [sec]')
plt.ylabel('Volt [V]')
plt.savefig('png/average_filter.png')
| Ch01.AverageFilter/average_filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=false editable=false nbgrader={"checksum": "ab0e5e5018743f75d0d59bd0e18ddc43", "grade": false, "grade_id": "2", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "slide"}
# ## Analysis of stock prices using PCA / Notebook 1
#
# We analyze the daily changes in stock prices using PCA and measure the dimension of stock sequences.
#
# We start by downloading the data and pre-processing it to make it ready for analysis using Spark.
# + deletable=false editable=false nbgrader={"checksum": "4dd5597eed487ae4dae4a56581e650c1", "grade": false, "grade_id": "3", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "skip"}
import sys,os
import numpy as np
from numpy.linalg import norm
import matplotlib.pyplot as plt
# %matplotlib inline
from time import time
import math
import pandas as pd
from glob import glob
import pickle
# + [markdown] deletable=false editable=false nbgrader={"checksum": "a8dd320b1af212ca56df97b2db554da6", "grade": false, "grade_id": "4", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ### Stock Info
# The Pickle file `Tickers.pkl` contains information about the stocks in the SP500.
#
# * `TickerInfo` - a pandas table that stores the name, sector, and sector ID for 505 stocks
# * `Tickers` - A list of the stocks that we are going to analyze (each student should eliminate a few of these stocks before doing their analysis)
#
# + deletable=false editable=false nbgrader={"checksum": "a88535ac8827b8146dd6ab2e894fce9e", "grade": false, "grade_id": "5", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
D=pickle.load(open('../Data/Tickers.pkl','rb'))
TickerInfo=D['TickerInfo']
Tickers=D['Tickers']
Sectors={'Consumer Discretionary':'CD',
'Consumer Staples':'CS',
'Energy':'EN',
'Financials':'FIN',
'Health Care':'HC',
'Industrials':'INDS',
'Information Technology':'IT',
'Materials':'MAT',
'Real Estate':'RE',
'Telecommunication Services':'TS',
'Utilities':'UTIL'}
# + deletable=false editable=false nbgrader={"checksum": "46f6655230974a575d3134dfef1c1cb7", "grade": false, "grade_id": "6", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
TickerInfo.head()
#TickerInfo.iloc[90]#['SECTOR_ID']
# + [markdown] deletable=false editable=false nbgrader={"checksum": "165c64a797f2bab08d27f1cd6bd4cd3b", "grade": false, "grade_id": "7", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ### Differences between the stocks lists
# In the following you will work with three different sets of stocks:
#
# 1. The stocks listed in `TickerInfo`
# 2. The stocks listed in `Tickers`
# 3. The stock files that you will download from S3.
#
# * The stocks you will analyze are those in `Tickers` less the one that you were instructed to remove.
# * The files you will download contain all of the stocks in `Tickers` plus a few stocks that are skipped because they are outliers.
# * The stocks in `TickerInfo` include most of the stocks in `Tickers`, but a few are missing. When we represent each stock by its `SECTOR_ID`, these stocks will be represented by their Ticker.
# + deletable=false editable=false nbgrader={"checksum": "f812c146c4ad251d24eae7c9a1a9ea66", "grade": false, "grade_id": "8", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
len(Tickers),len(TickerInfo)
# + [markdown] deletable=false editable=false nbgrader={"checksum": "cd9db91c3c35fad6c6521bac0dad0394", "grade": false, "grade_id": "9", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ### Stock and sector information
# `TickerInfo` is a pandas table containing, for each Ticker, the company name, the sector, and a sector ID. There are 11 sectors. Some, such as `Consumer Discretionary` and `Information Technology`, include many stocks, while others, such as `Telecommunication Services`, include very few.
# + deletable=false editable=false nbgrader={"checksum": "0c90bdefead75f3b5bdcb17b1c8c66db", "grade": false, "grade_id": "10", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
from collections import Counter
L=Counter(TickerInfo['Sector']).items()
print 'Sector ID\t\tSector Name\tNo. of Stocks'
print '=========\t\t===========\t============='
for l in L:
print '%s\t%30s\t%d'%(Sectors[l[0]],l[0],l[1])
# + [markdown] deletable=false editable=false nbgrader={"checksum": "ef5863fe4befbd0c1a73f13186693159", "grade": false, "grade_id": "11", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ### Download Data
# The data is a directory with .csv files, one for each stock. This directory has been tarred and uploaded to
# S3, at:
#
# https://mas-dse-open.s3.amazonaws.com/spdata_csv.tgz
#
# **Download and untar** the file. This creates a directory called **spdata_csv**
# + [markdown] deletable=false editable=false nbgrader={"checksum": "9720f8743bb2304e03f1708204b0aa0f", "grade": false, "grade_id": "12", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ## Read Data and create a single table
#
# In this notebook we read the stock-information `.csv` files, extract from them the column
# `Adj. Open` and combine them into a single `.csv` file containing all of the information that is relevant for later analysis.
#
# The end result should be a file called `SP500.csv` which stores the information described below.
# + [markdown] deletable=false editable=false nbgrader={"checksum": "a3157ad2335ef904e13696a5217bfca6", "grade": false, "grade_id": "13", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ### Convert files into pandas dataframes
#
# Read all of the relevant information into a large dictionary we call `Tables`.
#
# The key to this dictionary is the stock's ticker, which corresponds to the file name without the `.csv` extension.
#
# Read in all of the files in the directory `spdata_csv` other than:
#
# * Files for tickers that are not in the list `Tickers`.
# * Files for tickers that were listed in the email you got for this final.
#
#
# + [markdown] deletable=false editable=false nbgrader={"checksum": "0eb25e1d31d8fad9a5377b2f349a609c", "grade": false, "grade_id": "14", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# # Task 1:
# Create a function **getTables** which returns the dictionary *Tables* as described above.
#
# ###### <span style="color:blue">Code:</span>
# ```python
# Tables = getTables()
# print(type(Tables))
# print(len(Tables))
# Tables['IBM'].head()
# ```
#
# ###### <span style="color:magenta">Output:</span>
# ```python
# <class 'dict'>
# 476
#
# ```
# <p><img alt="" src="figs/IBM.jpg" style="height:180px" /></p>
#
# + run_control={"frozen": false, "read_only": false}
def getTables(Data_dir="../Data/spdata_csv/"):
    # Data_dir: directory path to spdata_csv; the default should be your local path to the data
    import os
    from glob import glob
    import pandas as pd
    Tables = {}
    # Tickers listed in the email for this final, to be excluded
    eliminate = ['SLM', 'TYC', 'EMC', 'BEAM', 'DFS']
    files = glob(Data_dir + '*.csv')
    for f in files:
        basename = os.path.basename(f)
        ticker = basename.split('.')[0]
        if ticker not in eliminate and ticker in Tickers:
            # Index by date and reverse so rows run from oldest to newest
            Tables[ticker] = pd.read_csv(Data_dir + basename).set_index('Date').iloc[::-1]
    return Tables  # <-- a dictionary as described above
# + deletable=false editable=false nbgrader={"checksum": "f3ceb8793991f4d5886c58921e8ab8b2", "grade": true, "grade_id": "ex1", "locked": true, "points": 1, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
from Tester import prepare as tasks
Tables = tasks.ex1(getTables)
# + [markdown] deletable=false editable=false nbgrader={"checksum": "d97876b71d84aa483b0bb436c5a2fc5e", "grade": false, "grade_id": "15", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# ### Computing diffs and combining into a single table
#
# The next step is to extract from each table the relevant prices, compute an additional quantity we call `diff` and create a single combined pandas dataframe.
#
# The price we use is the **Adjusted Open Price**, which is the price when the stock exchange opens in the morning. We use the **adjusted** price because it eliminates technical adjustments such as stock splits.
#
# It is more meaningful to predict *changes* in prices than prices themselves. We therefore compute for each stock a `Diffs` sequence in which $d(t)=\log \frac{p(t+1)}{p(t)}$, where $p(t)$ is the price at time $t$ and $d(t)$ is the price diff (the log price ratio).
#
# Obviously, if we have a price sequence of length $T$, then the length of the diff sequence will be $T-1$. To make the price sequence and the diff sequence the same length, we drop the last day's price from each price sequence.
#
# Join the stock tables by date, compute the diff sequence, and create one large pandas DataFrame where the row index is the date and there are two columns for each ticker. For example, for the ticker `IBM` there would be two columns, `IBM_P` and `IBM_D`. The first corresponds to the prices of the IBM stock, $p(t)$, and the second to the price difference, $d(t)$.
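As a small pure-Python sketch of this alignment (the prices below are hypothetical; the homework implementation itself uses pandas):

```python
import math

# Hypothetical price sequence p(t) of length T = 4
prices = [100.0, 102.0, 101.0, 103.0]

# d(t) = log(p(t+1) / p(t)) -- one element shorter than the prices
diffs = [math.log(prices[i + 1] / prices[i]) for i in range(len(prices) - 1)]

# Drop the last day's price so both sequences have length T - 1 = 3
aligned_prices = prices[:-1]

print(len(aligned_prices), len(diffs))  # 3 3
```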
# + [markdown] deletable=false editable=false nbgrader={"checksum": "c59f7bcbc03593bd5490d7cf8cbc3087", "grade": false, "grade_id": "16", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# # Task 2:
#
# Create a function **computeDiffs** which creates the dataframe `Diffs` as specified above.
#
#
# ###### <span style="color:blue">Code:</span>
# ```python
# Diffs = computeDiffs(Tables)
# print(type(Diffs))
# print(Diffs.shape[1])
# print("IBM_D" in Diffs.columns and "IBM_P" in Diffs.columns)
#
# ```
#
# ###### <span style="color:magenta">Output:</span>
# ```python
# <class 'pandas.core.frame.DataFrame'>
# 952
# True
# ```
#
# + run_control={"frozen": false, "read_only": false}
def computeDiffs(Tables):
    # Tables: the dictionary Tables that was computed in Task 1
    import math
    import pandas as pd
    Diffs = pd.DataFrame()
    for t in Tables:
        temp = Tables[t].copy()
        adjopen = temp['Adj. Open'].tolist()
        # d(t) = log(p(t+1) / p(t)); one element shorter than the price sequence
        diff = [math.log(adjopen[i + 1] / adjopen[i]) for i in range(len(adjopen) - 1)]
        # Drop the last day's price so prices and diffs align
        temp.drop(temp.index[-1], inplace=True)
        temp1 = pd.DataFrame(temp['Adj. Open'])
        temp1['tmp'] = pd.Series(diff, index=temp.index)
        temp1.columns = [t + '_P', t + '_D']
        Diffs = Diffs.join(temp1, how='outer')
    return Diffs  # <-- a dataframe as described above
# + deletable=false editable=false nbgrader={"checksum": "e7621daf3692573d52c50679d573e30b", "grade": true, "grade_id": "ex2", "locked": true, "points": 1, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
from Tester import prepare as tasks
Diffs = tasks.ex2(computeDiffs(Tables) )
# + deletable=false editable=false nbgrader={"checksum": "b997f37ccdd2ed08f500c676760ac697", "grade": false, "grade_id": "19", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# plot some stocks
Diffs[['AAPL_P','MSFT_P','IBM_P']].plot(figsize=(14,10));
plt.grid()
# + [markdown] deletable=false editable=false nbgrader={"checksum": "fd958a14777803fa99cf359fdbff1c39", "grade": false, "grade_id": "17", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
# Lastly, we save *Diffs* as a csv
# + deletable=false editable=false nbgrader={"checksum": "4b4ee082d57084678f832fda60ceea34", "grade": false, "grade_id": "18", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
Diffs.to_csv('../Data/SP500.csv')
# + deletable=false editable=false nbgrader={"checksum": "c48f3cb737fbe803e3ab4977e6046e76", "grade": false, "grade_id": "20", "locked": true, "schema_version": 1, "solution": false} run_control={"frozen": false, "read_only": false}
| Code/1. Prepare daily ratio file to be read as spark Dataframe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Unit 5 - Financial Planning
#
# +
# Initial imports
import json
import os
import requests
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
# %matplotlib inline
# -
# Load .env environment variables
load_dotenv()
# ## Part 1 - Personal Finance Planner
# ### Collect Crypto Prices Using the `requests` Library
# Set current amount of crypto assets
my_btc = 1.2
my_eth = 5.3
# +
# Crypto API URLs
btc_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=CAD"
eth_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=CAD"
response_data_btc = requests.get(btc_url)
response_data_eth = requests.get(eth_url)
# -
# Print the `response_data` variables
response_data_btc
response_data_eth
# +
# -------- #
# Fetch current BTC price
# Store response using `content` attribute
btc_data = response_data_btc.content
# Format BTC data as JSON
btc_json = response_data_btc.json()
# Use json.dumps to format data
print(json.dumps(btc_json, indent=8))
# -
# Display btc_json key path to get current price
btc_json['data']['1']['quotes']['USD'].keys()
# Select only the BTC price
btc_json['data']['1']['quotes']['USD']['price']
# +
# -------- #
# Fetch current ETH price
# Store response using `content` attribute
eth_data = response_data_eth.content
# Format ETH data as JSON
eth_json = response_data_eth.json()
# Use json.dumps to format data
print(json.dumps(eth_json, indent=8))
# -
# Display eth_json key path to get current price
eth_json['data']['1027']['quotes']['USD'].keys()
# Select only the ETH price
eth_json['data']['1027']['quotes']['USD']['price']
# +
# -------- #
# Compute current value of my cryptos (price times amount) and store each in its own variable
my_btc_value = float(my_btc) * btc_json['data']['1']['quotes']['USD']['price']
my_eth_value = float(my_eth) * eth_json['data']['1027']['quotes']['USD']['price']
# Print current crypto wallet balance
print(f"The current value of your {my_btc} BTC is ${my_btc_value:0.2f}")
print(f"The current value of your {my_eth} ETH is ${my_eth_value:0.2f}")
# -
# ### Collect Investments Data Using Alpaca: `SPY` (stocks) and `AGG` (bonds)
# Current amount of shares
my_agg = 200
my_spy = 50
# Set Alpaca API key and secret
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Create the Alpaca API object
api = tradeapi.REST(
    alpaca_api_key,
    alpaca_secret_key,
    api_version="v2"
)
# +
# Format current date as ISO format
current_date = pd.Timestamp("2021-07-04", tz="America/New_York").isoformat()
# Set the tickers
tickers = ["AGG", "SPY"]
# Set timeframe to '1D' for Alpaca API
timeframe = "1D"
# Get current closing prices for SPY and AGG
shares_data = {
"shares": [200, 50]
}
# Create the shares DataFrame - wasn't part of the HW but I wanted the practice!
agg_spy_df = pd.DataFrame(shares_data, index=tickers)
# Preview DataFrame
agg_spy_df
# +
# Set start and end datetimes between now and 5 years ago.
start_date = pd.Timestamp("2021-07-02", tz="America/New_York").isoformat()
end_date = pd.Timestamp("2021-07-02", tz="America/New_York").isoformat()
# Set the ticker information
tickers = ["AGG", "SPY"]
# Get current closing prices for SPY and AGG
agg_spy_df_ticker = api.get_barset(
tickers,
timeframe,
start=start_date,
end=end_date,
).df
# Display sample data
agg_spy_df_ticker
# +
# Pick AGG and SPY close prices
agg_close_price = agg_spy_df_ticker.iloc[0][3] # not in love with this method because it is not repeatable - have to modify code for each successive date
spy_close_price = agg_spy_df_ticker.iloc[0][8] # same
# Print AGG and SPY close prices
print(f"Current AGG closing price: ${agg_close_price}")
print(f"Current SPY closing price: ${spy_close_price}")
# +
# Compute the current value of shares
my_spy_value = float(spy_close_price)*float(my_spy)
my_agg_value = float(agg_close_price)*float(my_agg)
# Print current value of share
print(f"The current value of your {my_spy} SPY shares is ${my_spy_value:0.2f}")
print(f"The current value of your {my_agg} AGG shares is ${my_agg_value:0.2f}")
# -
# ### Savings Health Analysis
# +
# Set monthly household income
monthly_income = 12000
# Create savings DataFrame
crypto_amt = my_btc_value + my_eth_value
shares_amt = my_spy_value + my_agg_value
investments = ["Crypto", "Shares"]
portfolio_data = {
"amount": [crypto_amt, shares_amt]
}
df_savings = pd.DataFrame(portfolio_data, index=investments)
# Display savings DataFrame
display(df_savings)
# -
# Plot savings pie chart
# Create a pie chart to show the proportion of stocks in the portfolio
df_savings.plot.pie(y="amount", title="Savings Portfolio Composition")
# +
# Set ideal emergency fund
emergency_fund = monthly_income * 3
emergency_fund
# Calculate total amount of savings
total_savings = crypto_amt + shares_amt
total_savings
# Validate saving health
# Use `if` conditional statements to validate if the current savings are enough for an emergency fund. An ideal emergency fund should be equal to three times your monthly income.
# * If total savings are greater than the emergency fund, display a message congratulating the person for having enough money in this fund.
# * If total savings are equal to the emergency fund, display a message congratulating the person on reaching this financial goal.
# * If total savings are less than the emergency fund, display a message showing how many dollars away the person is from reaching the goal.
if total_savings > emergency_fund:
    print(f"Congratulations, your current total savings of ${total_savings:0.2f} exceeds the minimum recommended amount of emergency funds.")
elif total_savings == emergency_fund:
    print(f"Congratulations, your current total savings of ${total_savings:0.2f} meets the minimum recommended amount of emergency funds.")
else:
    print(f"Currently, your total savings are ${(emergency_fund - total_savings):0.2f} short of the recommended minimum amount of emergency funds.")
# -
# ## Part 2 - Retirement Planning
#
# ### Monte Carlo Simulation
# Set start and end dates of five years back from today.
# Sample results may vary from the solution based on the time frame chosen
start_date = pd.Timestamp('2016-07-05', tz='America/New_York').isoformat()
end_date = pd.Timestamp('2021-07-05', tz='America/New_York').isoformat()
# +
# Get 5 years' worth of historical data for SPY and AGG
# Set timeframe to '1D'
timeframe = "1D"
# Set the ticker information
tickers = ["AGG", "SPY"]
# Get 5 years' worth of historical price data
retirement_df = api.get_barset(
tickers,
timeframe,
start=start_date,
end=end_date,
limit=1000,
).df
# Display sample data
retirement_df.head(-1)
# +
# Configuring a Monte Carlo simulation to forecast 30 years cumulative returns
MC_4060_dist = MCSimulation(
portfolio_data = retirement_df,
weights = [.40,.60],
num_simulation = 500,
num_trading_days = 252*30
)
# -
# Printing the simulation input data
MC_4060_dist.portfolio_data.head()
# + tags=[]
# Running a Monte Carlo simulation to forecast 30 years cumulative returns
MC_4060_dist.calc_cumulative_return()
# -
# Plot simulation outcomes
line_plot = MC_4060_dist.plot_simulation()
# Plot probability distribution and confidence intervals
dist_plot = MC_4060_dist.plot_distribution()
# + [markdown] tags=[]
# ### Retirement Analysis
# +
# Fetch summary statistics from the Monte Carlo simulation results
MC4060_tbl = MC_4060_dist.summarize_cumulative_return()
# Print summary statistics
MC4060_tbl
# -
# ### Calculate the expected portfolio return at the 95% lower and upper confidence intervals based on a `$20,000` initial investment.
# +
# Set initial investment
initial_investment = 20000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $20,000 investment in stocks
MC4060_ci_lower = round(MC4060_tbl[8]*initial_investment,2)
MC4060_ci_upper = round(MC4060_tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
      f" over the next 30 years will end within the range of"
      f" ${MC4060_ci_lower} and ${MC4060_ci_upper}")
# -
# ### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `50%` increase in the initial investment.
# +
# Set initial investment
modified_initial_investment = 20000 * 1.5
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000 investment in stocks
MC4060_ci_lower = round(MC4060_tbl[8]*modified_initial_investment,2)
MC4060_ci_upper = round(MC4060_tbl[9]*modified_initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${modified_initial_investment} in the portfolio"
      f" over the next 30 years will end within the range of"
      f" ${MC4060_ci_lower} and ${MC4060_ci_upper}")
# -
# ## Optional Challenge - Early Retirement
#
#
# ### Five Years Retirement Option
# + tags=[]
# Configuring a Monte Carlo simulation to forecast 5 years cumulative returns
MC_5yr_dist = MCSimulation(
portfolio_data = retirement_df,
weights = [.20,.80],
num_simulation = 500,
num_trading_days = 252*5
)
# Print the simulation input data
MC_5yr_dist.portfolio_data.head(-1)
# + tags=[]
# Running a Monte Carlo simulation to forecast 5 years cumulative returns
MC_5yr_dist.calc_cumulative_return()
# -
# Plot simulation outcomes
MC_5yr_dist.plot_simulation()
# Plot probability distribution and confidence intervals
MC_5yr_dist.plot_distribution()
# +
# Fetch summary statistics from the Monte Carlo simulation results
MC_5yr_dist_tbl = MC_5yr_dist.summarize_cumulative_return()
# Print summary statistics
MC_5yr_dist_tbl
# +
# Set initial investment
_5yr_initial_investment = 80000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $80,000 investment in stocks
MC_5yr_ci_lower = round(MC_5yr_dist_tbl[8]*_5yr_initial_investment,2)
MC_5yr_ci_upper = round(MC_5yr_dist_tbl[9]*_5yr_initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${_5yr_initial_investment} in the portfolio"
      f" over the next 5 years will end within the range of"
      f" ${MC_5yr_ci_lower} and ${MC_5yr_ci_upper}."
      f" It appears that even after quadrupling the initial"
      f" investment and overweighting stocks (80%) relative to bonds (20%),"
      f" retirement in 5 years is unlikely.")
# -
# ### Ten Years Retirement Option
# +
# Configuring a Monte Carlo simulation to forecast 10 years cumulative returns
MC_10yr_dist = MCSimulation(
portfolio_data = retirement_df,
weights = [.20,.80],
num_simulation = 500,
num_trading_days = 252*10
)
# Print the simulation input data
MC_10yr_dist.portfolio_data.head(-1)
# -
# Running a Monte Carlo simulation to forecast 10 years cumulative returns
MC_10yr_dist.calc_cumulative_return()
# Plot simulation outcomes
MC_10yr_dist.plot_simulation()
# Plot probability distribution and confidence intervals
MC_10yr_dist.plot_distribution()
# +
# Fetch summary statistics from the Monte Carlo simulation results
MC_10yr_dist_tbl = MC_10yr_dist.summarize_cumulative_return()
# Print summary statistics
MC_10yr_dist_tbl
# +
# Set initial investment
_10yr_initial_investment = 60000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000 investment in stocks
MC_10yr_ci_lower = round(MC_10yr_dist_tbl[8]*_10yr_initial_investment,2)
MC_10yr_ci_upper = round(MC_10yr_dist_tbl[9]*_10yr_initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${_10yr_initial_investment} in the portfolio"
      f" over the next 10 years will end within the range of"
      f" ${MC_10yr_ci_lower} and ${MC_10yr_ci_upper}."
      f" The increased initial investment of $60000"
      f" along with overweighting stocks (80%) relative to bonds (20%)"
      f" does seem to make retirement in 10 years possible.")
| Week5_API_HW/Starter_Code/financial-planner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Pyspark (k8s cluster)
# language: python
# name: pyspark_k8s
# ---
# # Address functions
# > Functions related to deriving addresses from the FREG system
#
# ## Introduction
# With the transition to FREG, the Norwegian Tax Administration has connected to the cadastre (matrikkelen) API.
# As a result, FREG now contains considerably more information about people's addresses than before.
# It has also become easier than ever to link people and buildings.
#
# This notebook contains functions related to deriving address information.
# This can be done for different purposes.
#
# ## adr_26
# Address 26 is a combination of
# ```kommunenummer - gate/gårdsnummer - husnummer/bruksnummer - bokstav/festenummer - undernummer - bruksenhetsnummer```
# Together, these identify all approved(?) dwelling units (bruksenheter).
# In FREG, as in most other registers, adr_26 does not exist as a variable of its own and must therefore be assembled.
# This can be done with one of these functions.
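As a minimal pure-Python sketch (the component values below are made up), the six parts are zero-padded to fixed widths of 4, 5, 4, 4, 4 and 5 characters, which sum to the 26-character key:

```python
# Hypothetical component values; empty strings stand for unfilled fields.
# Widths: 4 + 5 + 4 + 4 + 4 + 5 = 26 characters in total.
components = [
    ("kommunenummer", "301", 4),
    ("gatenr_gaardsnr", "12345", 5),
    ("husnr_bruksnr", "12", 4),
    ("bokstav_festenr", "", 4),
    ("undernummer", "", 4),
    ("bruksenhetsnummer", "H0101", 5),
]

# Left-pad each value with zeros to its fixed width, then concatenate.
adr_26 = "".join((value or "0").rjust(width, "0") for _, value, width in components)

print(adr_26)       # 030112345001200000000H0101
print(len(adr_26))  # 26
```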
#
# ## Two official addresses
# In Norway we operate with two official addresses.
# One is the cadastral address (matrikkeladressen), which identifies a property; the other is the road address (vegadressen), which identifies buildings.
# An address point does not have both addresses at the same time, and it is up to the individual municipalities whether they use road or cadastral addresses.
# Today, all municipalities have switched to using road addresses, but this is work that takes time.
# A building therefore has either a cadastral or a road address, and there is a mix of both within the municipalities.
#
# FREG represents the two addresses in separate elements within every address element linked to a person.
# Fortunately, the structure of these fields is the same.
# +
#export
import os
import sys
def set_runpath(repo):
    repo = '/' + repo
    if os.getcwd()[:13] == '/home/runner/':
        os.chdir(os.getcwd()[:(os.getcwd().index(repo) + ((len(repo) * 2) + 1))])
    else:
        os.chdir(os.getcwd()[:(os.getcwd().index(repo) + (len(repo) + 1))])
    return print(os.getcwd())
# +
# default_exp freg_adresse_funcs
#hide
import os
import sys
set_runpath('stat-freg')
import toml
paths = toml.load('src/config/filstier.toml')
# +
#exports
import sys
import os
sys.path.append(os.path.abspath(os.getcwd()))
from src.scripts import production_settings as ps
from ssb_sparktools.processing import processing as stproc
import pyspark.sql.functions as F
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql import SQLContext
from pyspark.sql import Window
from pyspark.sql.functions import broadcast
from pyspark.sql.types import StringType
import string
# -
#hide
#local
prod_path, prod_modus = ps.prod_modus('freg_master')
#hide
#local
tversnitt_med_historik = spark.read.path(prod_path+paths['FREG-produkt']['LIVE_MASTER_PATH']).select('folkeregisteridentifikator',
'folkeregisterperson.identifikasjonsnummer',
'folkeregisterperson.bostedsadresse',
'folkeregisterperson.oppholdsadresse'
)
informasjons_element = stproc.unpack_parquet(tversnitt_med_historik, rootvar=['folkeregisteridentifikator'], rootdf=False, levels=1 )
# ## get_adresse_info
# As described above, FREG address information is packed into different information elements and, within those, different sub-elements.
# This function takes as input an information element with address information, given that all address elements contain sub-elements which hold the actual address information.
# To get at the latter we perform a secondary unpacking, limited to the inside of this function.
#
# Cadastral (matrikkel) and road addresses are processed separately before being combined at the end of the function.
#
# One of the requirements of the input is `rootvar`, a list of variables that should follow the address information through the process.
# To be able to join the address information back to the original dataset you need at least one variable in this list, and
# usually this will be `folkeregisteridentifikator`.
# If a person can be represented by more than one address in the dataset, you must also include a variable that, in combination with `folkeregisteridentifikator`, uniquely identifies an address instance.
# In normal production this will most likely be an `*_id` variable.
#
# As described in the introduction, road and cadastral addresses use different names for the different positions in the numeric address.
# To assemble unified address information, these differences must be harmonized. In the function this is done before the datasets are combined.
# In addition to harmonizing the column names, unfilled variables are filled with `0` to ensure that the total length of the numeric address stays stable.
#
# There is also a filter that removes empty addresses, that is, addresses where all 26 positions consist of `0`.
#
# The function returns a dataset with the variables you specify in `rootvar` as join keys.
# In addition to the combined adr_26 variable, the deconstructed address information is also returned.
#export
def husnummer_omkoding(husbokstav):
    import string
    # '99' followed by the letter's two-digit alphabet position (A=1, ..., Z=26)
    nummer = (string.ascii_uppercase.index(str.upper(husbokstav)) + 1)
    padded = '99' + '{0:02d}'.format(nummer)
    return padded
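To make the behaviour concrete, here is the same encoding restated as a self-contained snippet: a house letter maps to `'99'` followed by its two-digit alphabet position, and lower-case input is upper-cased first.

```python
import string

def husnummer_omkoding(husbokstav):
    # '99' followed by the letter's two-digit alphabet position (A=1, ..., Z=26)
    nummer = string.ascii_uppercase.index(str.upper(husbokstav)) + 1
    return '99' + '{0:02d}'.format(nummer)

print(husnummer_omkoding('a'))  # 9901
print(husnummer_omkoding('Z'))  # 9926
```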
#exports
def get_adresse_info(df, rootvar=['folkeregisteridentifikator']):
    """
    Function for deriving full address information.
    """
    adr = stproc.unpack_parquet(df,
                                rootvar=rootvar,
                                rootdf=False,
                                levels=1)
    base_list = ['kommunenummer',
                 'gatenr_gaardsnr',
                 'husnr_bruksnr',
                 'bokstav_festenr',
                 'undernummer',
                 'bruksenhetsnummer',
                 'bruksenhetstype',
                 'adresse_type',
                 ]
    select_list = rootvar + base_list
    vegadresser = (adr['vegadresse']
                   .filter(F.col('adressekode').isNotNull())
                   .withColumn('husnr_bruksnr', F.lpad(F.col('adressenummer.husnummer'), 4, '0'))
                   .withColumn('bokstav_festenr', F.lpad(F.col('adressenummer.husbokstav'), 4, '0'))
                   .withColumn('undernummer', F.lit('0000'))
                   .withColumn('gatenr_gaardsnr', F.lpad('adressekode', 5, '0'))
                   .withColumn('bruksenhetsnummer', F.lpad('bruksenhetsnummer', 5, '0'))
                   .withColumn('adresse_type', F.lit('O'))
                   .select(select_list)
                   .dropDuplicates()
                   .fillna('0000', subset=['undernummer', 'bokstav_festenr', 'kommunenummer', 'husnr_bruksnr'])
                   .fillna('00000', subset=['bruksenhetsnummer', 'gatenr_gaardsnr'])
                   .select('*', F.concat('kommunenummer', 'gatenr_gaardsnr', 'husnr_bruksnr', 'bokstav_festenr', 'undernummer', 'bruksenhetsnummer').alias('adr_26'))
                   )
    matrikkeladresser = (adr['matrikkeladresse']
                         .filter(F.col('matrikkelnummer').isNotNull())
                         .withColumn('kommunenummer', F.lpad(F.col('matrikkelnummer.kommunenummer'), 4, '0'))
                         .withColumn('gatenr_gaardsnr', F.lpad(F.col('matrikkelnummer.gaardsnummer'), 5, '0'))
                         .withColumn('husnr_bruksnr', F.lpad(F.col('matrikkelnummer.bruksnummer'), 4, '0'))
                         .withColumn('undernummer', F.lpad(F.col('undernummer'), 4, '0'))
                         .withColumn('bruksenhetsnummer', F.lpad('bruksenhetsnummer', 5, '0'))
                         .withColumn('bokstav_festenr', F.lpad(F.col('matrikkelnummer.festenummer'), 4, '0'))
                         .withColumn('adresse_type', F.lit('M'))
                         .select(select_list)
                         .dropDuplicates()
                         .fillna('0000', subset=['kommunenummer', 'husnr_bruksnr', 'bokstav_festenr', 'undernummer'])
                         .fillna('00000', subset=['bruksenhetsnummer', 'gatenr_gaardsnr'])
                         .select('*', F.concat('kommunenummer', 'gatenr_gaardsnr', 'husnr_bruksnr', 'bokstav_festenr', 'undernummer', 'bruksenhetsnummer').alias('adr_26'))
                         )
    ukjent_bosted = (adr['ukjentBosted']
                     .filter(F.col('bostedskommune').isNotNull())
                     .withColumnRenamed('bostedskommune', 'kommunenummer')
                     .withColumn('gatenr_gaardsnr', F.lit('00000'))
                     .withColumn('husnr_bruksnr', F.lit('0000'))
                     .withColumn('undernummer', F.lit('0000'))
                     .withColumn('bruksenhetsnummer', F.lit('00000'))
                     .withColumn('bokstav_festenr', F.lit('0000'))
                     .withColumn('adresse_type', F.lit('ukjent'))
                     .withColumn('bruksenhetstype', F.lit(None))
                     .select(select_list)
                     .select('*', F.concat('kommunenummer', 'gatenr_gaardsnr', 'husnr_bruksnr', 'bokstav_festenr', 'undernummer', 'bruksenhetsnummer').alias('adr_26'))
                     )
    adresse_info = vegadresser.union(matrikkeladresser).union(ukjent_bosted)
    # Map the all-zero placeholder back to null for bruksenhetsnummer
    adresse_info = adresse_info.withColumn('bruksenhetsnummer', F.when(F.col('bruksenhetsnummer') == '00000', F.lit(None)).otherwise(F.col('bruksenhetsnummer')))
    return adresse_info
# ### TEST: get_adresse_info
# The code below this point consists of unit tests written to verify the functionality of the get_adresse_info function.
# It mostly contains tests that only run locally, because the test data is not available on the remote test server.
# That may change over time, but for now this is how it is.
#
# The tests are gone through in groups, with a description of the purpose each group covers.
# This is not the final set of tests, but it is a start.
#hide
#local
test_tversnitt_med_historik = spark.read.parquet('../../../tests/test_data/updated_master').select('folkeregisteridentifikator',
'folkeregisterperson.identifikasjonsnummer',
'folkeregisterperson.bostedsadresse',
'folkeregisterperson.oppholdsadresse')
test_informasjons_element = stproc.unpack_parquet(test_tversnitt_med_historik, rootvar=['folkeregisteridentifikator'], rootdf=False, levels=1 )
# These tests check that the function does not drop data and that the data going in matches the data coming out.
# They also check that the function selects the correct number of units with road and cadastral addresses, respectively, from the input data.
#local
Adresse_info = get_adresse_info(test_informasjons_element['bostedsadresse'], rootvar=['folkeregisteridentifikator', 'bostedsadresse_id'])
assert Adresse_info.count() == test_informasjons_element['bostedsadresse'].select('folkeregisteridentifikator','bostedsadresse_id','matrikkeladresse', 'vegadresse', 'ukjentBosted').dropDuplicates().count()
assert Adresse_info.filter(F.col('adresse_type')=='M').count() == test_informasjons_element['bostedsadresse'].filter(F.col('matrikkeladresse').isNotNull()).count()
assert Adresse_info.filter(F.col('adresse_type')=='O').count() == test_informasjons_element['bostedsadresse'].filter(F.col('vegadresse').isNotNull()).count()
# Adr_26 and its components have specific lengths that ensure the final result is correct.
# Several of the components therefore require padding to reach the right length.
# The tests in this block check that the variables get the correct length.
#local
assert Adresse_info.where(F.length(F.col('adr_26'))<26).count() == 0
assert Adresse_info.where(F.length(F.col('kommunenummer'))<4).count() == 0
assert Adresse_info.where(F.length(F.col('gatenr_gaardsnr'))<5).count() == 0
assert Adresse_info.where(F.length(F.col('husnr_bruksnr'))<4).count() == 0
assert Adresse_info.where(F.length(F.col('bokstav_festenr'))<4).count() == 0
assert Adresse_info.where(F.length(F.col('undernummer'))<4).count() == 0
assert Adresse_info.where(F.length(F.col('bruksenhetsnummer'))<5).count() == 0
assert Adresse_info.where(F.length(F.col('adresse_type'))<1).count() == 0
# The get_adresse_info function should take the variables it receives as part of rootvar and add them to the result.
#local
valid_col = ['folkeregisteridentifikator',
'bostedsadresse_id',
'kommunenummer',
'gatenr_gaardsnr',
'husnr_bruksnr',
'bokstav_festenr',
'undernummer',
'bruksenhetsnummer',
'bruksenhetstype',
'adresse_type',
'adr_26'
]
assert Adresse_info.columns == valid_col
# The variable `adresse_type` is a constructed variable.
# We therefore have a test that checks that the variable only contains valid values.
# `bruksenhetsnummer` and `bruksenhetstype` are, in addition, classification variables.
# For that reason we have tests that check that the function does not introduce invalid values.
#local
assert Adresse_info.where(F.col('adresse_type').substr(0,1).isin(['M', 'O','ukjent'])==False).count() == 0
assert Adresse_info.where(F.col('bruksenhetsnummer').substr(0,1).isin(['0','H','U'])==False).count() == 0
assert Adresse_info.where(F.col('bruksenhetstype').isin(['bolig','fritidsbolig','null'])==False).count() == 0
# #hide
# The snippet below was added temporarily to complete an update of BEREG.
# This code will be deleted after the update.
#hide
#local
#Script to extract the dwelling unit numbers (bruksenhetsnummer) of all residents for the BEREG update. Delete after use!
personer = (informasjons_element['bostedsadresse'].filter(F.col('erGjeldende')=='true'))
adresser = get_adresse_info(personer)
adresser.show()
print(adresser.count())
(adresser.write
.option("valuation", "INTERNAL")
.option("state", "INPUT")
.path('/produkt/freg/temp/etablere_klargjoring/utgaatte_variabler/Bruksenhetsnummer_oppdatering')
)
| adresse_funcs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_latest_p36
# language: python
# name: conda_pytorch_latest_p36
# ---
# ## Audit and Improve Video Annotation Quality Using Amazon SageMaker Ground Truth
#
# This notebook walks through how to evaluate the quality of video annotations received from SageMaker Ground Truth annotators using several metrics.
#
# The standard functionality of this notebook works with the standard Conda Python3/Data Science kernel; however, there is an optional section that uses a PyTorch model to generate image embeddings.
#
# Start by importing the required libraries and initializing the session and other variables used in this notebook. By default, the notebook uses the default Amazon S3 bucket in the same AWS Region you use to run this notebook. If you want to use a different S3 bucket, make sure it is in the same AWS Region you use to complete this tutorial, and specify the bucket name for `bucket`.
# !pip install tqdm
# %pylab inline
import json
import os
import sys
import boto3
import sagemaker as sm
import subprocess
from glob import glob
from tqdm import tqdm
from PIL import Image
import datetime
import numpy as np
from matplotlib import patches
from plotting_funcs import *
from scipy.spatial import distance
# ## Prerequisites
#
# Create some of the resources you need to launch a Ground Truth audit labeling job in this notebook. To execute this notebook, you must create the following resources:
#
# * A work team: A work team is a group of workers that complete labeling tasks. If you want to preview the worker UI and execute the labeling task, you must create a private work team, add yourself as a worker to this team, and provide the following work team ARN. This [GIF](images/create-workteam-loop.gif) demonstrates how to quickly create a private work team on the Amazon SageMaker console. To learn more about private, vendor, and Amazon Mechanical Turk workforces, see [Create and Manage Workforces](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html).
# + pycharm={"name": "#%%\n"}
WORKTEAM_ARN = '<<ADD WORK TEAM ARN HERE>>'
print(f'This notebook will use the work team ARN: {WORKTEAM_ARN}')
# Make sure workteam arn is populated
assert (WORKTEAM_ARN != '<<ADD WORK TEAM ARN HERE>>')
# -
# * The IAM execution role you used to create this notebook instance must have the following permissions:
# * `AmazonSageMakerFullAccess`: If you do not require granular permissions for your use case, you can attach the [AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) policy to your IAM user or role. If you are running this example in a SageMaker notebook instance, this is the IAM execution role used to create your notebook instance. If you need granular permissions, see [Assign IAM Permissions to Use Ground Truth](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-security-permission.html#sms-security-permissions-get-started) for granular policy to use Ground Truth.
# * The AWS managed policy [AmazonSageMakerGroundTruthExecution](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AmazonSageMakerGroundTruthExecution). Run the following code snippet to see your IAM execution role name. This [GIF](images/add-policy-loop.gif) demonstrates how to attach this policy to an IAM role in the IAM console. For further instructions see the: [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) section in the *AWS Identity and Access Management User Guide*.
# * Amazon S3 permissions: When you create your role, you specify Amazon S3 permissions. Make sure that your IAM role has access to the S3 bucket that you plan to use in this example. If you do not specify a S3 bucket in this notebook, the default bucket in the AWS region in which you are running this notebook instance is used. If you do not require granular permissions, you can attach [AmazonS3FullAccess](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AmazonS3FullAccess) to your role.
# + pycharm={"name": "#%%\n"}
role = sm.get_execution_role()
role_name = role.split('/')[-1]
print('IMPORTANT: Make sure this execution role has the AWS managed policy AmazonSageMakerGroundTruthExecution attached.')
print('********************************************************************************')
print('The IAM execution role name:', role_name)
print('The IAM execution role ARN:', role)
print('********************************************************************************')
# + pycharm={"name": "#%%\n"}
sagemaker_cl = boto3.client('sagemaker')
# Make sure the bucket is in the same region as this notebook.
bucket = '<< YOUR S3 BUCKET NAME >>'
sm_session = sm.Session()
s3 = boto3.client('s3')
if(bucket=='<< YOUR S3 BUCKET NAME >>'):
bucket=sm_session.default_bucket()
region = boto3.session.Session().region_name
bucket_region = s3.head_bucket(Bucket=bucket)['ResponseMetadata']['HTTPHeaders']['x-amz-bucket-region']
assert bucket_region == region, f'Your S3 bucket {bucket} and this notebook need to be in the same region.'
print(f'IMPORTANT: make sure the role {role_name} has the access to read and write to this bucket.')
print('********************************************************************************************************')
print(f'This notebook will use the following S3 bucket: {bucket}')
print('********************************************************************************************************')
# -
# ## Download data
#
# Download a dataset from the Multi-Object Tracking Challenge, a commonly used benchmark for multi-object tracking. Depending on your connection speed, this can take 5-10 minutes. Unzip it and upload it to a `bucket` in Amazon S3.
#
# Disclosure regarding the Multiple Object Tracking Benchmark:
#
# Multiple Object Tracking Benchmark is created by <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. We have not modified the images or the accompanying annotations. You can obtain the images and the annotations [here](https://motchallenge.net/data/MOT17/). The images and annotations are licensed by the authors under [Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License](https://creativecommons.org/licenses/by-nc-sa/3.0/). The following paper describes Multiple Object Tracking Benchmark in depth: from the data collection and annotation to detailed statistics about the data and evaluation of models trained on it.
#
# MOT17: A Benchmark for Multi-Object Tracking.
# <NAME>, <NAME>, <NAME>, <NAME>, <NAME> [arXiv:1603.00831](https://arxiv.org/abs/1603.00831)
#
# Grab our data; this will take ~5 minutes
# !wget https://motchallenge.net/data/MOT17.zip -O /tmp/MOT17.zip
# unzip our data
# !unzip -q /tmp/MOT17.zip -d MOT17
# !rm /tmp/MOT17.zip
# send our data to S3; this will take a couple of minutes
# !aws s3 cp --recursive MOT17/MOT17/train s3://{bucket}/MOT17/train --quiet
# ## View images and labels
# The scene is a street setting with a large number of cars and pedestrians. Grab image paths and plot the first image.
# +
img_paths = glob('MOT17/MOT17/train/MOT17-13-SDP/img1/*.jpg')
img_paths.sort()
imgs = []
for imgp in img_paths:
img = Image.open(imgp)
imgs.append(img)
img
# -
# ## Load labels
# The MOT17 dataset has labels for each scene in a single text file. Load the labels and organize them into a frame-level dictionary so you can easily plot them.
# +
# grab our labels
labels = []
with open('MOT17/MOT17/train/MOT17-13-SDP/gt/gt.txt', 'r') as f:
for line in f:
labels.append(line.replace('\n','').split(','))
lab_dict = {}
for i in range(1,len(img_paths)+1):
lab_dict[i] = []
for lab in labels:
lab_dict[int(lab[0])].append(lab)
# -
# ## View MOT17 annotations
#
# In the existing MOT-17 annotations, the labels include both bounding box coordinates and unique IDs for each object being tracked. By plotting the following two frames, you can see how the objects of interest persist across frames. Since our video has a high number of frames per second, look at frame 1 and then frame 31 to see the same scene with approximately one second between frames. You can adjust the start index, end index, and step values to view different labeled frames in the scene.
# +
start_index = 1
end_index = 32
step = 30
for j in range(start_index, end_index, step):
# Create figure and axes
fig,ax = plt.subplots(1, figsize=(24,12))
ax.set_title(f'Frame {j}', fontdict={'fontsize':20})
# Display the image
ax.imshow(imgs[j])
for i,annot in enumerate(lab_dict[j]):
annot = np.array(annot, dtype=np.float32)
# if class is non-pedestrian display box
if annot[6] == 0:
rect = patches.Rectangle((annot[2], annot[3]), annot[4], annot[5], linewidth=1, edgecolor='r', facecolor='none')
ax.add_patch(rect)
plt.text(annot[2], annot[3]-10, f"Object {int(annot[1])}", bbox=dict(facecolor='white', alpha=0.5))
# -
# ## Evaluate labels
#
# For demonstration purposes, we've labeled three vehicles in one of the videos and inserted a few labeling anomalies into the annotations. Identifying mistakes and then sending directed recommendations for frames and objects to fix makes the label auditing process more efficient. If a labeler only has to focus on a few frames instead of a deep review of the entire scene, it can drastically improve speed and reduce cost.
#
# We have provided a JSON file containing intentionally flawed labels. For a typical Ground Truth Video job, this file is in the Amazon S3 output location you specified when creating your labeling job. This label file is organized as a sequential list of labels. Each entry in the list consists of the labels for one frame.
#
# For more information about Ground Truth's output data format, see the [Output Data](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-data-output.html) section of the *Amazon SageMaker Developer Guide*.
# +
# load labels
lab_path = 'SeqLabel.json'
with open(lab_path, 'r') as f:
flawed_labels = json.load(f)
img_paths = glob('MOT17/MOT17/train/MOT17-13-SDP/img1/*.jpg')
img_paths.sort()
# Let's grab our images
imgs = []
for imgp in img_paths:
img = Image.open(imgp)
imgs.append(img)
flawed_labels['tracking-annotations'][0]
# -
# ## View annotations
#
# We annotated 3 vehicles, one of which enters the scene at frame 9. View the scene starting at frame 9 to see all of our labeled vehicles.
# +
# let's view our tracking labels
start_index = 9
end_index = 16
step = 3
for j in range(start_index, end_index, step):
# Create figure and axes
fig,ax = plt.subplots(1, figsize=(24,12))
ax.set_title(f'Frame {j}')
# Display the image
ax.imshow(np.array(imgs[j]))
for i,annot in enumerate(flawed_labels['tracking-annotations'][j]['annotations']):
rect = patches.Rectangle((annot['left'], annot['top']), annot['width'], annot['height'], linewidth=1, edgecolor='r', facecolor='none')
ax.add_patch(rect)
plt.text(annot['left']-5, annot['top']-10, f"{annot['object-name']}", bbox=dict(facecolor='white', alpha=0.5))
# -
# ## Analyze tracking data
#
# Put the tracking data into a form that's easier to analyze.
#
# The following function turns our tracking output into a dataframe. You can use this dataframe to plot values and compute metrics to help you understand how the object labels move through the frames.
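`create_annot_frame` is imported from `plotting_funcs`, so its body isn't shown here. A minimal sketch of the flattening it performs might look like the following, assuming the annotation keys (`object-name`, `left`, `top`, `width`, `height`) shown earlier in this notebook; the function name is hypothetical:

```python
import pandas as pd

def create_annot_frame_sketch(tracking_annotations):
    """Flatten per-frame tracking annotations into one row per (frame, object)."""
    rows = []
    for frameid, frame in enumerate(tracking_annotations):
        for annot in frame['annotations']:
            rows.append({
                'frameid': frameid,
                'obj': annot['object-name'],
                'left': annot['left'],
                'top': annot['top'],
                'width': annot['width'],
                'height': annot['height'],
            })
    return pd.DataFrame(rows)
```

A long-format frame like this makes it easy to filter by object (`lab_frame[lab_frame.obj == 'Vehicle:1']`) and index values by `frameid`, which is exactly how the metric functions below use it.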
# generate dataframes
label_frame = create_annot_frame(flawed_labels['tracking-annotations'])
label_frame.head()
# ## View label progression plots
#
# The following plots illustrate how the coordinates of a given object progress through the frames of a video. Each bounding box has a left and top coordinate, representing the top-left point of the bounding box, plus height and width values that, together with that point, determine the other three corners of the box.
#
# In the following plots, the blue lines represent the progression of our 4 values (top coordinate, left coordinate, width, and height) through the video frames and the orange lines represent a rolling average of these values. If a video has 5 frames per second or more, the objects within the video (and therefore the bounding boxes drawn around them) should have some amount of overlap between frames. Our video has vehicles driving at a normal pace, so our plots should show a relatively smooth progression.
#
# You can also plot the deviation between the rolling average and the actual values of bounding box coordinates. You may want to look at frames in which the actual value deviates substantially from the rolling average.
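The rolling average and deviation used in these plots can be sketched with pandas; the toy series below stands in for one coordinate (say, `left`) and uses a window of 5 to match `roll_len=5`:

```python
import pandas as pd

# Toy coordinate progression with one suspicious jump at index 5.
coords = pd.Series([10, 11, 12, 13, 14, 30, 16, 17])
rolling_mean = coords.rolling(5).mean()
# Frames where the actual value deviates strongly from the rolling
# average are candidates for review.
deviation = coords - rolling_mean
```

Plotting `coords` against `rolling_mean` reproduces the blue/orange pairing described above, and large values of `deviation` flag the frames worth a closer look.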
# +
# plot out progression of different metrics
plot_timeseries(label_frame, obj='Vehicle:1', roll_len=5)
plot_deviations(label_frame, obj='Vehicle:1', roll_len=5)
# -
# ## Plot box sizes
#
# Combine the width and height values to examine how the size of the bounding box for a given object progresses through the scene. For Vehicle 1, we intentionally reduced the size of the bounding box on frame 139 and restored it on frame 141. We also removed a bounding box on frame 217. We can see both of these flaws reflected in our size progression plots.
#
# +
def plot_size_prog(annot_frame, obj='Vehicle:1', roll_len = 5, figsize = (17,10)):
"""
Plot size progression of a bounding box for a given object.
"""
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=figsize)
lframe_len = max(annot_frame['frameid'])
ann_subframe = annot_frame[annot_frame.obj==obj]
ann_subframe.index = list(np.arange(len(ann_subframe)))
size_vec = np.zeros(lframe_len+1)
size_vec[ann_subframe['frameid'].values] = ann_subframe['height']*ann_subframe['width']
ax.plot(size_vec)
ax.plot(pd.Series(size_vec).rolling(roll_len).mean())
ax.title.set_text(f'{obj} Size progression')
ax.set_xlabel('Frame Number')
ax.set_ylabel('Box size')
plot_size_prog(label_frame, obj='Vehicle:1')
plot_size_prog(label_frame, obj='Vehicle:2')
# -
# ## View box size differential
#
# Now, look at how the size of the box changes from frame to frame by plotting the actual size differential to get a better idea of the magnitude of these changes.
#
# You can also normalize the magnitude of the size changes by dividing the size differentials by the sizes of the boxes to express the differential as a percentage change from the original size of the box. This makes it easier to set thresholds beyond which you can classify this frame as potentially problematic for this object bounding box.
#
# The following plots visualize both the absolute size differential and the size differential as a percentage. You can also add lines representing where the bounding box changed by more than 20% in size from one frame to the next.
# +
# look at rolling size differential, try changing the object
def plot_size_diff(lab_frame, obj='Vehicle:1', hline=.5, figsize = (24,16)):
"""
Plot the sequential size differential between the bounding box for a given object between frames
"""
ann_subframe = lab_frame[lab_frame.obj==obj]
lframe_len = max(lab_frame['frameid'])
ann_subframe.index = list(np.arange(len(ann_subframe)))
size_vec = np.zeros(lframe_len+1)
size_vec[ann_subframe['frameid'].values] = ann_subframe['height']*ann_subframe['width']
size_diff = np.array(size_vec[:-1])- np.array(size_vec[1:])
norm_size_diff = size_diff/np.array(size_vec[:-1])
fig, ax = plt.subplots(ncols=1, nrows=2, figsize=figsize)
ax[0].plot(size_diff)
ax[0].set_title('Absolute size differential')
ax[1].plot(norm_size_diff)
ax[1].set_title('Normalized size differential')
ax[1].hlines(-hline,0,len(size_diff), colors='red')
ax[1].hlines(hline,0,len(size_diff), colors='red')
plot_size_diff(label_frame, obj='Vehicle:1', hline=.2)
# -
# If you normalize the size differential, you can use a threshold to identify which frames to flag for review. The preceding plot sets a threshold of 20% change from the previous box size; there are a few frames that exceed that threshold.
# +
def find_prob_frames(lab_frame, obj='Vehicle:2', thresh = .25):
"""
Find potentially problematic frames via size differential
"""
lframe_len = max(lab_frame['frameid'])
ann_subframe = lab_frame[lab_frame.obj==obj]
size_vec = np.zeros(lframe_len+1)
size_vec[ann_subframe['frameid'].values] = ann_subframe['height']*ann_subframe['width']
size_diff = np.array(size_vec[:-1])- np.array(size_vec[1:])
norm_size_diff = size_diff/np.array(size_vec[:-1])
norm_size_diff[np.where(np.isnan(norm_size_diff))[0]] = 0
norm_size_diff[np.where(np.isinf(norm_size_diff))[0]] = 0
problem_frames = np.where(np.abs(norm_size_diff)>thresh)[0]+1 # add 1 since the differential at index i compares frame i to frame i+1
worst_frame = np.argmax(np.abs(norm_size_diff))+1
return problem_frames, worst_frame
obj = 'Vehicle:1'
problem_frames, worst_frame = find_prob_frames(label_frame, obj=obj, thresh = .2)
print(f'Worst frame for {obj} is: {worst_frame}')
print('problem frames for', obj, ':',problem_frames.tolist())
# -
# ## View the frames with the largest size differential
#
# With the indices for the frames with the largest size differential, you can view them in sequence. In the following frames, you can see where our labeler made a mistake with Vehicle 1. There was a large difference between frame 216 and the subsequent frame 217, so frame 217 was flagged.
# +
start_index = worst_frame-1
# let's view our tracking labels
for j in range(start_index, start_index+3):
# Create figure and axes
fig,ax = plt.subplots(1, figsize=(24,12))
ax.set_title(f'Frame {j}')
# Display the image
ax.imshow(imgs[j])
for i,annot in enumerate(flawed_labels['tracking-annotations'][j]['annotations']):
rect = patches.Rectangle((annot['left'], annot['top']), annot['width'], annot['height'] ,linewidth=1,edgecolor='r',facecolor='none') # 50,100),40,30
ax.add_patch(rect)
plt.text(annot['left']-5, annot['top']-10, f"{annot['object-name']}", bbox=dict(facecolor='white', alpha=0.5)) #
plt.show()
# -
# ## Rolling IoU
#
# IoU (Intersection over Union) is a commonly used evaluation metric for object detection. It's calculated by dividing the area of overlap between two bounding boxes by the area of union for two bounding boxes. While it's typically used to evaluate the accuracy of a predicted box against a ground truth box, you can use it to evaluate how much overlap a given bounding box has from one frame of a video to the next.
#
# Since there are differences from one frame to the next, we would not expect a given bounding box for a single object to have 100% overlap with the corresponding bounding box from the next frame. However, depending on the frames per second (FPS) for the video, there often is only a small change between one frame and the next since the time elapsed between frames is only a fraction of a second. For higher FPS video, we would expect a substantial amount of overlap between frames. The MOT17 videos are all shot at 25 FPS, so these videos qualify. Operating with this assumption, you can use IoU to identify outlier frames where you see substantial differences between a bounding box in one frame to the next.
#
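The helper `bb_int_over_union` used below is imported from `plotting_funcs`. A standard implementation of IoU for boxes given as `[x1, y1, x2, y2]` (the format `calc_frame_int_over_union` constructs) looks roughly like this; the function name is a stand-in for the imported one:

```python
def bb_int_over_union_sketch(boxA, boxB):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    # coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])
    inter = max(0, xB - xA) * max(0, yB - yA)
    areaA = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    areaB = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
    union = areaA + areaB - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give an IoU of 1.0, disjoint boxes give 0.0, and partially overlapping boxes fall in between, which is why a low IoU between consecutive frames signals a suspect annotation.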
# +
# calculate rolling intersection over union
def calc_frame_int_over_union(annot_frame, obj, i):
lframe_len = max(annot_frame['frameid'])
annot_frame = annot_frame[annot_frame.obj==obj]
annot_frame.index = list(np.arange(len(annot_frame)))
coord_vec = np.zeros((lframe_len+1,4))
coord_vec[annot_frame['frameid'].values, 0] = annot_frame['left']
coord_vec[annot_frame['frameid'].values, 1] = annot_frame['top']
coord_vec[annot_frame['frameid'].values, 2] = annot_frame['width']
coord_vec[annot_frame['frameid'].values, 3] = annot_frame['height']
boxA = [coord_vec[i,0], coord_vec[i,1], coord_vec[i,0] + coord_vec[i,2], coord_vec[i,1] + coord_vec[i,3]]
boxB = [coord_vec[i+1,0], coord_vec[i+1,1], coord_vec[i+1,0] + coord_vec[i+1,2], coord_vec[i+1,1] + coord_vec[i+1,3]]
return bb_int_over_union(boxA, boxB)
# create list of objects
objs = list(np.unique(label_frame.obj))
# iterate through our objects to get rolling IoU values for each
iou_dict = {}
for obj in objs:
iou_vec = np.ones(len(np.unique(label_frame.frameid)))
ious = []
for i in label_frame[label_frame.obj==obj].frameid[:-1]:
iou = calc_frame_int_over_union(label_frame, obj, i)
ious.append(iou)
iou_vec[i] = iou
iou_dict[obj] = iou_vec
fig, ax = plt.subplots(nrows=1,ncols=3, figsize=(24,8), sharey=True)
ax[0].set_title(f'Rolling IoU {objs[0]}')
ax[0].set_xlabel('frames')
ax[0].set_ylabel('IoU')
ax[0].plot(iou_dict[objs[0]])
ax[1].set_title(f'Rolling IoU {objs[1]}')
ax[1].set_xlabel('frames')
ax[1].set_ylabel('IoU')
ax[1].plot(iou_dict[objs[1]])
ax[2].set_title(f'Rolling IoU {objs[2]}')
ax[2].set_xlabel('frames')
ax[2].set_ylabel('IoU')
ax[2].plot(iou_dict[objs[2]])
# -
# ## Identify low overlap frames
#
# With the IoU for your objects, you can set an IoU threshold and identify objects below it. The following code snippet identifies frames in which the bounding box for a given object has less than 50% overlap.
# +
## ID problem indices
iou_thresh = 0.5
vehicle = 1 # because index starts at 0, 0 -> vehicle:1, 1 -> vehicle:2, etc.
# use np.where to identify frames below our threshold.
inds = np.where(np.array(iou_dict[objs[vehicle]]) < iou_thresh)[0]
worst_ind = np.argmin(np.array(iou_dict[objs[vehicle]]))
print(objs[vehicle],'worst frame:', worst_ind)
# -
# ## Visualize low overlap frames
#
# With low overlap frames identified by the IoU metric, you can see that there is an issue with Vehicle 2 on frame 102. The bounding box for Vehicle 2 does not go low enough and clearly needs to be extended.
# +
start_index = worst_ind-1
# let's view our tracking labels
for j in range(start_index, start_index+3):
# Create figure and axes
fig,ax = plt.subplots(1, figsize=(24,12))
ax.set_title(f'Frame {j}')
# Display the image
ax.imshow(imgs[j])
for i,annot in enumerate(flawed_labels['tracking-annotations'][j]['annotations']):
rect = patches.Rectangle((annot['left'], annot['top']), annot['width'], annot['height'] ,linewidth=1,edgecolor='r',facecolor='none')
ax.add_patch(rect)
plt.text(annot['left']-5, annot['top']-10, f"{annot['object-name']}", bbox=dict(facecolor='white', alpha=0.5))
plt.show()
# -
# ## Embedding comparison (optional)
#
# The preceding two methods work because they are simple and are based on the reasonable assumption that objects in high-FPS video won't move too much from frame to frame. They can be considered more classical methods of comparison.
#
# Can we improve upon them? Try something more experimental to identify outliers: Generate embeddings for bounding box crops with an image classification model like ResNet and compare these across frames.
#
# Convolutional neural network image classification models have a final fully connected layer using a softmax function or another scaling activation function that outputs probabilities. If you remove the final layer of your network, your "predictions" are the image embedding that is essentially the neural network's representation of the image. If you isolate objects by cropping images, you can compare the representations of these objects across frames to identify any outliers.
#
# Start by importing a model from Torchhub and using a ResNet18 model trained on ImageNet. Since ImageNet is a very large and generic dataset, the network has learned information about images and is able to classify them into different categories. While a neural network more finely tuned on vehicles would likely perform better, a network trained on a large dataset like ImageNet should have learned enough information to indicate if images are similar.
#
# Note: As mentioned at the beginning of the notebook, if you wish to run this section, you'll need to use a PyTorch kernel.
# +
import torch
import torch.nn as nn
import torchvision.models as models
import cv2
from torch.autograd import Variable
from scipy.spatial import distance
# download our model from torchhub
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True)
model.eval()
# in order to get embeddings instead of a classification from a model we import, we need to remove the top layer of the network
modules=list(model.children())[:-1]
model=nn.Sequential(*modules)
# -
# ## Generate embeddings
#
# Use your headless model to generate image embeddings for your object crops. The following code iterates through images, generates crops of labeled objects, resizes them to 224x224x3 to work with your headless model, and then predicts the image crop embedding.
# +
img_crops = {}
img_embeds = {}
for j,img in tqdm(enumerate(imgs[:300])):
img_arr = np.array(img)
img_embeds[j] = {}
img_crops[j] = {}
for i,annot in enumerate(flawed_labels['tracking-annotations'][j]['annotations']):
# crop our image using our annotation coordinates
crop = img_arr[annot['top']:(annot['top'] + annot['height']), annot['left']:(annot['left'] + annot['width']), :]
# resize image crops to work with our model which takes in 224x224x3 sized inputs
new_crop = np.array(Image.fromarray(crop).resize((224,224)))
img_crops[j][annot['object-name']] = new_crop
# transpose array so that it follows (batch dimension, color channels, height, width);
# a plain reshape would scramble the pixel data instead of reordering the axes
new_crop = np.transpose(new_crop, (2,0,1))[np.newaxis,:]
torch_arr = torch.tensor(new_crop, dtype=torch.float)
# return image crop embedding from headless model
with torch.no_grad():
embed = model(torch_arr)
img_embeds[j][annot['object-name']] = embed.squeeze()
# -
# ## View image crops
#
# To generate image crops, use the bounding box label dimensions and then resize the cropped images. Look at a few of them in sequence.
# +
def plot_crops(obj = 'Vehicle:1', start=0, figsize = (20,12)):
fig, ax = plt.subplots(nrows=1, ncols=5, figsize=figsize)
for i,a in enumerate(ax):
a.imshow(img_crops[i+start][obj])
a.set_title(f'Frame {i+start}')
plot_crops(start=1)
# -
# ## Compute distance
#
# Compare image embeddings by computing the distance between sequential embeddings for a given object.
# +
def compute_dist(img_embeds, dist_func=distance.euclidean, obj='Vehicle:1'):
dists = []
inds = []
for i in img_embeds:
if (i>0)&(obj in list(img_embeds[i].keys())):
if (obj in list(img_embeds[i-1].keys())):
dist = dist_func(img_embeds[i-1][obj],img_embeds[i][obj]) # distance between frame at t0 and t1
dists.append(dist)
inds.append(i)
return dists, inds
obj = 'Vehicle:2'
dists, inds = compute_dist(img_embeds, obj=obj)
# look for distances that are 2 standard deviations greater than the mean distance
prob_frames = np.where(dists>(np.mean(dists)+np.std(dists)*2))[0]
prob_inds = np.array(inds)[prob_frames]
print(prob_inds)
print('The frame with the greatest distance is frame:', inds[np.argmax(dists)])
# -
# ## View outlier frames
#
# In outlier frame crops, you can see that we were able to catch the issue on frame 102, where the bounding box was off-center.
#
# While this method is fun to play with, it's substantially more computationally expensive than the more generic methods and is not guaranteed to improve accuracy. Using such a generic model will inevitably produce false positives. Feel free to try a model fine-tuned on vehicles, which would likely yield better results!
# +
def plot_crops(obj = 'Vehicle:1', start=0):
fig, ax = plt.subplots(nrows=1, ncols=5, figsize=(20,12))
for i,a in enumerate(ax):
a.imshow(img_crops[i+start][obj])
a.set_title(f'Frame {i+start}')
plot_crops(obj = obj, start=np.argmax(dists))
# -
# ## Combining the metrics
#
# Having explored several methods for identifying anomalous and potentially problematic frames, you can combine them and identify all of those outlier frames. While you might have a few false positives, they are likely to be in areas with a lot of action that you might want your annotators to review regardless.
# +
def get_problem_frames(lab_frame, flawed_labels, size_thresh=.25, iou_thresh=.4, embed=False, imgs=None, verbose=False, embed_std=2):
"""
Function for identifying potentially problematic frames using bounding box size, rolling IoU, and optionally embedding comparison.
"""
if embed:
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True)
model.eval()
modules=list(model.children())[:-1]
model=nn.Sequential(*modules)
frame_res = {}
for obj in list(np.unique(lab_frame.obj)):
frame_res[obj] = {}
lframe_len = max(lab_frame['frameid'])
ann_subframe = lab_frame[lab_frame.obj==obj]
size_vec = np.zeros(lframe_len+1)
size_vec[ann_subframe['frameid'].values] = ann_subframe['height']*ann_subframe['width']
size_diff = np.array(size_vec[:-1])- np.array(size_vec[1:])
norm_size_diff = size_diff/np.array(size_vec[:-1])
norm_size_diff[np.where(np.isnan(norm_size_diff))[0]] = 0
norm_size_diff[np.where(np.isinf(norm_size_diff))[0]] = 0
frame_res[obj]['size_diff'] = [int(x) for x in size_diff]
frame_res[obj]['norm_size_diff'] = [float(x) for x in norm_size_diff] # keep floats: casting to int would zero out the normalized values
try:
problem_frames = [int(x) for x in np.where(np.abs(norm_size_diff)>size_thresh)[0]]
if verbose:
worst_frame = np.argmax(np.abs(norm_size_diff))
print('Worst frame for', obj, 'is:', worst_frame)
except:
problem_frames = []
frame_res[obj]['size_problem_frames'] = problem_frames
iou_vec = np.ones(len(np.unique(lab_frame.frameid)))
for i in lab_frame[lab_frame.obj==obj].frameid[:-1]:
iou = calc_frame_int_over_union(lab_frame, obj, i)
iou_vec[i] = iou
frame_res[obj]['iou'] = iou_vec.tolist()
inds = [int(x) for x in np.where(iou_vec<iou_thresh)[0]]
frame_res[obj]['iou_problem_frames'] = inds
if embed:
img_crops = {}
img_embeds = {}
for j,img in tqdm(enumerate(imgs)):
img_arr = np.array(img)
img_embeds[j] = {}
img_crops[j] = {}
for i,annot in enumerate(flawed_labels['tracking-annotations'][j]['annotations']):
try:
crop = img_arr[annot['top']:(annot['top']+annot['height']),annot['left']:(annot['left']+annot['width']),:]
new_crop = np.array(Image.fromarray(crop).resize((224,224)))
img_crops[j][annot['object-name']] = new_crop
new_crop = np.transpose(new_crop, (2,0,1))[np.newaxis,:] # (batch, channels, height, width)
torch_arr = torch.tensor(new_crop, dtype=torch.float)
with torch.no_grad():
emb = model(torch_arr)
img_embeds[j][annot['object-name']] = emb.squeeze()
except:
pass
dists, inds = compute_dist(img_embeds, obj=obj)
# look for distances that are 2+ standard deviations greater than the mean distance
prob_frames = np.where(dists>(np.mean(dists)+np.std(dists)*embed_std))[0]
frame_res[obj]['embed_prob_frames'] = prob_frames.tolist()
return frame_res
# if you want to add in embedding comparison, set embed=True
num_images_to_validate = 300
embed = False
frame_res = get_problem_frames(label_frame, flawed_labels, size_thresh=.25, iou_thresh=.5, embed=embed, imgs=imgs[:num_images_to_validate])
prob_frame_dict = {}
all_prob_frames = []
for obj in frame_res:
prob_frames = list(frame_res[obj]['size_problem_frames'])
prob_frames.extend(list(frame_res[obj]['iou_problem_frames']))
if embed:
prob_frames.extend(list(frame_res[obj]['embed_prob_frames']))
all_prob_frames.extend(prob_frames)
prob_frame_dict = [int(x) for x in np.unique(all_prob_frames)]
prob_frame_dict
# -
# ## Command line interface
#
# For use outside of a notebook, you can use the following command line interface.
# +
# Usage for the CLI is like this
# # !{sys.executable} quality_metrics_cli.py run-quality-check --bucket mybucket \
# # --lab_path job_results/bag-track-mot20-test-tracking/annotations/consolidated-annotation/output/0/SeqLabel.json \
# # --save_path example_quality_output/bag-track-mot20-test-tracking.json
#To get the help text
# !{sys.executable} quality_metrics_cli.py run-quality-check --help
# -
# ## Launch a directed audit job
#
# Take a look at how to create a Ground Truth [video frame tracking adjustment job](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-video-object-tracking.html). Ground Truth provides a worker UI and infrastructure to streamline the process of creating this type of labeling job. All you have to do is specify the worker instructions, labels, and input data.
#
# With problematic annotations identified, you can launch a new audit labeling job. You can do this in SageMaker using the console; however, when you want to launch jobs in a more automated fashion, using the Boto3 API is very helpful.
# To create a new labeling job, first create your label categories so Ground Truth knows what labels to display for your workers. In this file, also specify the labeling instructions. You can use the outlier frames identified above to give directed instructions to your workers so they can spend less time reviewing the entire scene and focus more on potential problems.
# +
# create label categories
os.makedirs('tracking_manifests', exist_ok=True)
labelcats = {
"document-version": "2020-08-15",
"auditLabelAttributeName": "Person",
"labels": [
{
"label": "Vehicle",
"attributes": [
{
"name": "color",
"type": "string",
"enum": [
"Silver",
"Red",
"Blue",
"Black"
]
}
]
},
{
"label": "Pedestrian",
},
{
"label": "Other",
},
],
"instructions": {
"shortInstruction": f"Please draw boxes around pedestrians, with a specific focus on the following frames {prob_frame_dict}",
"fullInstruction": f"Please draw boxes around pedestrians, with a specific focus on the following frames {prob_frame_dict}"
}
}
filename = 'tracking_manifests/label_categories.json'
with open(filename,'w') as f:
json.dump(labelcats,f)
s3.upload_file(Filename=filename, Bucket=bucket, Key='tracking_manifests/label_categories.json')
LABEL_CATEGORIES_S3_URI = f's3://{bucket}/tracking_manifests/label_categories.json'
# -
# ## Generate manifests
#
# SageMaker Ground Truth operates using [manifests](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-input-data-input-manifest.html). When you use a modality like image classification, a single image corresponds to a single entry in a manifest and a given manifest contains paths for all of the images to be labeled. Because videos have multiple frames and you can have [multiple videos in a single manifest](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-video-manual-data-setup.html), a manifest is instead organized with a JSON sequence file for each video that contains the paths to frames in Amazon S3. This allows a single manifest to contain multiple videos for a single job.
#
# In this example, the image files are all split out, so you can just grab file paths. If your data is in the form of video files, you can use the Ground Truth console to split videos into video frames. To learn more, see [Automated Video Frame Input Data Setup](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-video-automated-data-setup.html). You can also use other tools like [ffmpeg](https://ffmpeg.org/) to split video files into individual image frames. The following block stores file paths in a dictionary.
#
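If your data is still in video files, a typical ffmpeg invocation extracts numbered JPEG frames. The helper below only assembles the command string (it does not execute it); the file names and frame rate are placeholders, not values from this notebook:

```python
import shlex

def ffmpeg_extract_cmd(video_path, out_dir, fps=30):
    """Build an ffmpeg command that splits a video into numbered JPEG frames."""
    return (f"ffmpeg -i {shlex.quote(video_path)} -vf fps={fps} "
            f"{shlex.quote(out_dir)}/%06d.jpg")

cmd = ffmpeg_extract_cmd('MOT17-13-SDP.mp4', 'frames', fps=25)
print(cmd)
# to run it: subprocess.run(shlex.split(cmd), check=True)
```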
# +
# get our target MP4 files
vids = glob('MOT17/MOT17/train/*')
vids.sort()
# we assume we have folders with the same name as the mp4 file in the same root folder
vid_dict = {}
for vid in vids:
files = glob(f"{vid}/img1/*jpg")
files.sort()
files = files[:300] # look at first 300 images
fileset = []
for fil in files:
fileset.append('/'.join(fil.split('/')[5:]))
vid_dict[vid] = fileset
# -
# With your image paths, you can iterate through frames and create a list of entries for each in your sequence file.
# +
# generate sequences
all_vids = {}
for vid in vid_dict:
frames = []
for i,v in enumerate(vid_dict[vid]):
frame = {
"frame-no": i+1,
"frame": f"{v.split('/')[-1]}",
"unix-timestamp": int(time.time())
}
frames.append(frame)
all_vids[vid] = {
"version": "2020-07-01",
"seq-no": np.random.randint(1,1000),
"prefix": f"s3://{bucket}/{'/'.join(vid.split('/')[1:])}/img1/",
"number-of-frames": len(vid_dict[vid]),
"frames": frames
}
# save sequences
for vid in all_vids:
with open(f"tracking_manifests/{vid.split('/')[-1]}_seq.json", 'w') as f:
json.dump(all_vids[vid],f)
# !cp SeqLabel.json tracking_manifests/SeqLabel.json
# -
# With your sequence file, you can create your manifest file. To create a new job with no existing labels, you can simply pass in a path to your sequence file. Since you already have labels and instead want to launch an adjustment job, point to the location of those labels in Amazon S3 and provide metadata for those labels in your manifest.
# +
# create manifest
manifest_dict = {}
for vid in all_vids:
source_ref = f"s3://{bucket}/tracking_manifests/{vid.split('/')[-1]}_seq.json"
annot_labels = f"s3://{bucket}/tracking_manifests/SeqLabel.json"
manifest = {
"source-ref": source_ref,
'Person': annot_labels,
"Person-metadata":{"class-map": {"2": "Vehicle"},
"human-annotated": "yes",
"creation-date": "2020-05-25T12:53:54+0000",
"type": "groundtruth/video-object-tracking"}
}
manifest_dict[vid] = manifest
# save videos as individual jobs
for vid in all_vids:
with open(f"tracking_manifests/{vid.split('/')[-1]}.manifest", 'w') as f:
json.dump(manifest_dict[vid],f)
print('Example manifest: ', manifest)
# -
# send data to s3
# !aws s3 cp --recursive tracking_manifests s3://{bucket}/tracking_manifests/
# ## Launch jobs (optional)
#
# Now that you've created your manifests, you're ready to launch your adjustment labeling job. Use this template for launching labeling jobs via [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html). In order to access the labeling job, make sure you followed the steps to create a private work team.
#
# +
# generate jobs
job_names = []
outputs = []
arn_region_map = {'us-west-2': '081040173940',
'us-east-1': '432418664414',
'us-east-2': '266458841044',
'eu-west-1': '568282634449',
'eu-west-2': '487402164563',
'ap-northeast-1': '477331159723',
'ap-northeast-2': '845288260483',
'ca-central-1': '918755190332',
'eu-central-1': '203001061592',
'ap-south-1': '565803892007',
'ap-southeast-1': '377565633583',
'ap-southeast-2': '454466003867'
}
region_account = arn_region_map[region]
LABELING_JOB_NAME = f"mot17-tracking-adjust-{int(time.time())}"
task = 'AdjustmentVideoObjectTracking'
job_names.append(LABELING_JOB_NAME)
INPUT_MANIFEST_S3_URI = f's3://{bucket}/tracking_manifests/MOT17-13-SDP.manifest'
human_task_config = {
"PreHumanTaskLambdaArn": f"arn:aws:lambda:{region}:{region_account}:function:PRE-{task}",
"MaxConcurrentTaskCount": 200, # Maximum of 200 objects will be available to the workteam at any time
"NumberOfHumanWorkersPerDataObject": 1, # We will obtain and consolidate 1 human annotation for each frame.
"TaskAvailabilityLifetimeInSeconds": 864000, # Your workteam has 10 days to complete all pending tasks.
"TaskDescription": f"Please draw boxes around vehicles, with a specific focus on the following frames {prob_frame_dict}",
# If using public workforce, specify "PublicWorkforceTaskPrice"
"WorkteamArn": WORKTEAM_ARN,
"AnnotationConsolidationConfig": {
"AnnotationConsolidationLambdaArn": f"arn:aws:lambda:{region}:{region_account}:function:ACS-{task}"
},
"TaskKeywords": [
"Image Classification",
"Labeling"
],
"TaskTimeLimitInSeconds": 14400,
"TaskTitle": LABELING_JOB_NAME,
"UiConfig": {
"HumanTaskUiArn": f'arn:aws:sagemaker:{region}:394669845002:human-task-ui/VideoObjectTracking'
}
}
createLabelingJob_request = {
"LabelingJobName": LABELING_JOB_NAME,
"HumanTaskConfig": human_task_config,
"InputConfig": {
"DataAttributes": {
"ContentClassifiers": [
"FreeOfPersonallyIdentifiableInformation",
"FreeOfAdultContent"
]
},
"DataSource": {
"S3DataSource": {
"ManifestS3Uri": INPUT_MANIFEST_S3_URI
}
}
},
"LabelAttributeName": "Person-ref",
"LabelCategoryConfigS3Uri": LABEL_CATEGORIES_S3_URI,
"OutputConfig": {
"S3OutputPath": f"s3://{bucket}/gt_job_results"
},
"RoleArn": role,
"StoppingConditions": {
"MaxPercentageOfInputDatasetLabeled": 100
}
}
print(createLabelingJob_request)
out = sagemaker_cl.create_labeling_job(**createLabelingJob_request)
outputs.append(out)
print(out)
# -
# ## Conclusion
#
# This notebook introduced how to measure the quality of annotations using statistical analysis and various quality metrics like IoU, rolling IoU, and embedding comparisons. It also demonstrated how to flag frames which may not be labeled properly using these quality metrics and how to send those frames for verification and audit jobs using SageMaker Ground Truth.
#
# Using this approach, you can perform automated quality checks on the annotations at scale, which reduces the number of frames humans need to verify or audit. Please try the notebook with your own data and add your own quality metrics for different task types supported by SageMaker Ground Truth. With this process in place, you can generate high-quality datasets for a wide range of business use cases in a cost-effective manner without compromising the quality of annotations.
# ## Cleanup
#
# Use the following command to stop your labeling job.
# cleanup
sagemaker_cl.stop_labeling_job(LabelingJobName=LABELING_JOB_NAME)
| ground_truth_labeling_jobs/video_annotations_quality_assessment/Ground_Truth_Video_Quality_Metrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: base_conda
# language: python
# name: base
# ---
import pandas as pd
import sklearn
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
import keras
# # Data overview
# ## Print shape of data
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train.shape
print("Image info :\n")
print("Number of images : ",len(x_train))
print("Image width : ",x_train[0].shape[0])
print("Image height : ",x_train[0].shape[1])
plt.imshow(x_train[0])
y_train[0]
plt.figure(figsize=(15,15))
for i in range(20):
ax = plt.subplot(4,5,i+1)
plt.title("Class : " + str(y_train[i]))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.imshow(x_train[i])
# # Model architecture
from keras import layers
from keras import models
# +
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
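You can sanity-check the spatial shapes reported by `model.summary()` by hand: each 'valid' 3x3 convolution shrinks each spatial dimension by 2, and each 2x2 max-pool halves it (floor division). A small sketch, independent of Keras:

```python
def conv_out(size, kernel=3):   # 'valid' convolution, stride 1
    return size - kernel + 1

def pool_out(size, pool=2):     # non-overlapping max pooling
    return size // pool

s = 28                      # MNIST input is 28x28
s = pool_out(conv_out(s))   # Conv2D(32) -> 26, MaxPool -> 13
s = pool_out(conv_out(s))   # Conv2D(64) -> 11, MaxPool -> 5
s = conv_out(s)             # Conv2D(64) -> 3
print(s, s * s * 64)        # -> 3 576 (features after Flatten)
```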
# +
from keras.datasets import mnist
from keras.utils import to_categorical
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
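`to_categorical` turns integer class labels into one-hot vectors. A minimal NumPy equivalent, for illustration only:

```python
import numpy as np

def one_hot(labels, num_classes):
    """One-hot encode integer labels, mirroring keras.utils.to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([5, 0, 3], 10)[0])  # 1.0 at index 5, zeros elsewhere
```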
# +
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=5, batch_size=64)
# -
test_loss, test_acc = model.evaluate(test_images, test_labels)
test_acc
model.save("../models/classifier99.h5")
# # Load model
new_model = keras.models.load_model("../models/classifier99.h5")
test_loss, test_acc2 = new_model.evaluate(test_images, test_labels)
test_acc2
| notebooks/1.0._MikeG27_Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bai
# language: python
# name: bai
# ---
# + [markdown] id="KMd5XFEMpmkG"
# <html>
# <body>
# <center>
# <h1><u>Assignment 2</u></h1>
# <h3> Customizing the code to run your own experiments</h3>
# </center>
# </body>
# </html>
# + colab={"base_uri": "https://localhost:8080/"} id="QqSvrxOoNccM" executionInfo={"status": "ok", "timestamp": 1613935775564, "user_tz": 300, "elapsed": 690, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}} outputId="0c42cc8c-eb10-4b31-e6f9-5a824ec2f27e"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="qvQQRsLvpmkM"
# ### Learning Outcomes
#
# In assignment 1 we mainly introduced a pipeline for coding ML projects - the data loader, loading models, calculating losses, and using an optimizer to update the weights of the model (i.e. learning the weights of the model). But we didn't look into the nuts and bolts of these pieces. Assignment 2 delves deeper into these pieces and shows how to customize them.
#
# There are two major components:
#
# - Learning how to load your own data.
# - Writing a custom CNN model to classify these images.
# + [markdown] id="CIBmTQh_pmkN"
# ### Please specify your Name, Email ID and forked repository url here:
# - Name: <NAME>
# - Email: <EMAIL>
# - Link to your forked github repository: https://github.com/jaleeli413/Harvard_BAI
# + id="JLZ79lZUpmkN" executionInfo={"status": "ok", "timestamp": 1613935775873, "user_tz": 300, "elapsed": 974, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
### General libraries useful for python ###
import os
import sys
from tqdm.notebook import tqdm
import json
import random
import pickle
import copy
from IPython.display import display
import ipywidgets as widgets
# + colab={"base_uri": "https://localhost:8080/"} id="gLOqAsD4yTjn" executionInfo={"status": "ok", "timestamp": 1613935776824, "user_tz": 300, "elapsed": 1908, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}} outputId="31966628-c686-4e99-d3f7-b045493dd8ba"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + colab={"base_uri": "https://localhost:8080/"} id="NBYo9vxspmkN" executionInfo={"status": "ok", "timestamp": 1613935776826, "user_tz": 300, "elapsed": 1873, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}} outputId="d3760a18-4e50-4232-b85f-52377785f783"
### Finding where you clone your repo, so that code upstream paths can be specified programmatically ####
git_dir = '/content/drive/MyDrive/Harvard_BAI'
print('Your github directory is :%s'%git_dir)
# + id="vSIu5jATTZn7" executionInfo={"status": "ok", "timestamp": 1613935776826, "user_tz": 300, "elapsed": 1852, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
assignment_2_folder = "%s/assignment_2"%git_dir
# + id="qBHqiYHSMY9t" executionInfo={"status": "ok", "timestamp": 1613935776827, "user_tz": 300, "elapsed": 1845, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
os.chdir(assignment_2_folder)
# + id="L2vk8XYjpmkO" executionInfo={"status": "ok", "timestamp": 1613935776827, "user_tz": 300, "elapsed": 1840, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
### Libraries for visualizing our results and data ###
from PIL import Image
import matplotlib.pyplot as plt
# + id="stYTIO0EpmkP" executionInfo={"status": "ok", "timestamp": 1613935777275, "user_tz": 300, "elapsed": 2285, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
### Import PyTorch and its components ###
import torch
import torchvision
from torchvision import transforms
import torch.nn as nn
import torch.optim as optim
# + [markdown] id="pw2HumdXpmkP"
# #### Let's load our flexible code-base which you will build on for your research projects in future assignments.
#
# Like assignment 1, we are loading in our code-base for convenient dataloading/model loading etc.
# + colab={"base_uri": "https://localhost:8080/"} id="5bU3tVqsFRT_" executionInfo={"status": "ok", "timestamp": 1613935779356, "user_tz": 300, "elapsed": 4344, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}} outputId="8742919b-344e-4561-ca59-2c52f570584d"
### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader
# + colab={"base_uri": "https://localhost:8080/"} id="2JmltNP-RfKX" executionInfo={"status": "ok", "timestamp": 1613935794540, "user_tz": 300, "elapsed": 19481, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}} outputId="113bc388-377f-4686-d364-0ad0017cd718"
### Setting up Weights and Biases for tracking your experiments. ###
## We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance. `Please make an account at wandb.ai, and follow the steps to login to your account!`
# %pip install --upgrade git+git://github.com/wandb/client.git@task/debug-init-wandb#egg=wandb
import wandb
wandb.login()
# + [markdown] id="nAVtuHh0pmkQ"
# #### See those paths printed above?
# + [markdown] id="MV_MPAjxpmkQ"
# As earlier, models i.e. architectures are being loaded from `res/models`. In this assignment we will be using the ResNet18 architecture, which is being loaded from the script `res/ResNet.py`
# + [markdown] id="wgpN78WapmkR"
# ### Specifying settings/hyperparameters for our code below ###
# + id="0Rp-0B0opmkR" executionInfo={"status": "ok", "timestamp": 1613935794543, "user_tz": 300, "elapsed": 19461, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'ResNet18'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_2'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
# + [markdown] id="pAONDMnApmkR"
# By changing above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on.
# + [markdown] id="cdgYVaVppmkR"
# ### Data Loading ###
# + [markdown] id="rSeW7ZgopmkS"
# The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we have only exposed you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions.
# + [markdown] id="fROn07mNpmkS"
# ### Let's load our own custom dataset. We will be using the Cats vs Dogs dataset from Kaggle.com
#
# Download the data from https://www.kaggle.com/c/dogs-vs-cats/data.
#
# Store it in `assignment_2/data/` and unzip the files.
#
# So, the train images should be inside the directory: Harvard_BAI/assignment_2/data/dogs-vs-cats/train/
# + [markdown] id="A_f4QqhkpmkT"
# Data Transforms tell PyTorch how to pre-process your data. Recall that images are stored with values between 0-255 usually. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
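The per-channel normalization described above maps pixel values to roughly zero mean and unit standard deviation. A minimal sketch with NumPy; the mean/std values are the standard ImageNet statistics also used in the transforms later in this notebook:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std  = np.array([0.229, 0.224, 0.225])

def normalize(img_uint8):
    """img_uint8: HxWx3 array with values 0-255 -> per-channel standardized floats."""
    x = img_uint8.astype('float32') / 255.0   # scale to [0, 1]
    return (x - mean) / std                   # subtract mean, divide by std

img = np.full((4, 4, 3), 128, dtype='uint8')  # a toy gray image
print(normalize(img)[0, 0])
```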
# + [markdown] id="faqQRpGuTLsd"
# ### Let's create a file list of all our image files
# + id="dDQyNNj627eo" executionInfo={"status": "ok", "timestamp": 1613936395902, "user_tz": 300, "elapsed": 203, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
train_folder_files = os.listdir('%s/data/dogs-vs-cats/train/'%assignment_2_folder)
random.shuffle(train_folder_files)
# + id="nKleAnDzI94c" executionInfo={"status": "ok", "timestamp": 1613936397428, "user_tz": 300, "elapsed": 205, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
total_points = len(train_folder_files)
# + id="xHkSt_fgJF3n" executionInfo={"status": "ok", "timestamp": 1613936398805, "user_tz": 300, "elapsed": 217, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
train_files = train_folder_files[:int(0.8*total_points)]
val_files = train_folder_files[int(0.8*total_points):]
# + id="0sTMxWIXTODZ" executionInfo={"status": "ok", "timestamp": 1613936400968, "user_tz": 300, "elapsed": 384, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
test_files = os.listdir('%s/data/dogs-vs-cats/test1/'%assignment_2_folder)
# + id="Y93LkDld3J2X" executionInfo={"status": "ok", "timestamp": 1613936403628, "user_tz": 300, "elapsed": 211, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
labels_dictionary = {}
with open('data/train_file_list.txt','w') as F:
for t in train_files:
file_path = '%s/data/dogs-vs-cats/train/%s'%(assignment_2_folder, t)
if 'dog' in t:
labels_dictionary[file_path] = 0
print(file_path, file = F)
elif 'cat' in t:
labels_dictionary[file_path] = 1
print(file_path, file = F)
with open('data/val_file_list.txt','w') as F:
for t in val_files:
file_path = '%s/data/dogs-vs-cats/train/%s'%(assignment_2_folder, t)
if 'dog' in t:
labels_dictionary[file_path] = 0
print(file_path, file = F)
elif 'cat' in t:
labels_dictionary[file_path] = 1
print(file_path, file = F)
with open('data/test_file_list.txt','w') as F:
for t in test_files:
file_path = '%s/data/dogs-vs-cats/test1/%s'%(assignment_2_folder, t)
labels_dictionary[file_path] = -1
print(file_path, file = F)
# + colab={"base_uri": "https://localhost:8080/", "height": 264} id="RVFUGStX3oKR" executionInfo={"status": "ok", "timestamp": 1613936407599, "user_tz": 300, "elapsed": 496, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}} outputId="7bc39416-7c75-4905-b294-a78d8b5e3932"
random_data_point = random.choice(list(labels_dictionary.keys()))
plt.imshow(Image.open(random_data_point))
plt.title('Category: %s'%labels_dictionary[random_data_point])
plt.axis('off')
plt.show()
# + [markdown] id="OcMXyppD-vsg"
# Above, you should see 0 if it's a dog, 1 if it's a cat, and -1 if it's an image for which we don't have a label (test set).
# + id="tz0i23ogApQF" executionInfo={"status": "ok", "timestamp": 1613936410908, "user_tz": 300, "elapsed": 204, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
with open('%s/data/labels_dictionary.p'%assignment_2_folder, 'wb') as F:
pickle.dump(labels_dictionary, F)
# + [markdown] id="e-IM8j6vKGTg"
# # Using our custom data-loader.
#
# Our data-loader is called cats_dogs_loader which is in `res/loader/cats_dogs_loader`. Read this file carefully! It's extremely important to understand this.
#
#
# An overview to help you understand the file: when you first call get_loader() below, you are only telling Python that you will be creating objects from the file cats_dogs_loader.
# + id="GMYXrJgn74-P" executionInfo={"status": "ok", "timestamp": 1613936412951, "user_tz": 300, "elapsed": 194, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
file_list_loader = get_loader('cats_dogs_loader')
# + id="BwVYrw4va5OL" executionInfo={"status": "ok", "timestamp": 1613936415337, "user_tz": 300, "elapsed": 437, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'test': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
# + [markdown] id="JuzzYUmjpmkT"
# In assignment 1 we just used `torchvision.datasets.MNIST` to load MNIST data. But now we can't rely on that function, as we have a custom dataset. Here we learn how to handle such custom data.
# + [markdown] id="1peYCIkede03"
# We use our custom file_list_loader to load data that we have downloaded and unzipped. Above, we created file lists which contain paths to our train, validation and test datasets; here we will pass these file lists to the file_list_loader.
#
# ### Open the file cats_dogs_loaders.py, you will see a class FileListFolder.
#
# In the file loader.py we load this class FileListFolder in the function get_loader. So, when we run get_loader("cats_dogs_loader") above, what is returned is the class FileListFolder. So, now when we run file_list_loader(), the arguments inside are passed to the class FileListFolder as described in cats_dogs_loader.py.
#
# Thus, the first time you pass this, the __init__ function is run i.e. an object of that class is initialized. As you can see, the __init__ function in cats_dogs_loader.py requires 3 attributes - a file list, a labels dictionary and a pytorch transform object. To create a new data loader, we need to create a file like cats_dogs_loader.py and make the necessary changes to it.
#
#
# The file lists contain paths to train/val/test files. The labels dictionary stores the category number for each of these files, and the transforms are the pre-processing PyTorch applies to our loaded images before training starts.
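The actual `FileListFolder` class lives in `res/loader/cats_dogs_loader.py`; the sketch below only illustrates the general shape such a file-list dataset takes. The names and details are illustrative: the real class would subclass `torch.utils.data.Dataset`, open each image with PIL, and apply the transform.

```python
import pickle

class FileListDataset:
    """Illustrative file-list dataset: paths come from a text file,
    labels from a pickled {path: label} dictionary."""
    def __init__(self, file_list_path, labels_pickle_path, transform=None):
        with open(file_list_path) as f:
            self.files = [line.strip() for line in f if line.strip()]
        with open(labels_pickle_path, 'rb') as f:
            self.labels = pickle.load(f)
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        path = self.files[idx]
        # a real loader would do: img = Image.open(path).convert('RGB')
        # followed by:            img = self.transform(img)
        return path, self.labels[path]
```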
# + id="JO9Oy04kITzd" executionInfo={"status": "ok", "timestamp": 1613936419171, "user_tz": 300, "elapsed": 288, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
dsets = {}
dsets['train'] = file_list_loader('%s/data/train_file_list.txt'%assignment_2_folder, '%s/data/labels_dictionary.p'%assignment_2_folder, data_transforms['train'])
dsets['val'] = file_list_loader('%s/data/val_file_list.txt'%assignment_2_folder, '%s/data/labels_dictionary.p'%assignment_2_folder, data_transforms['val'])
dsets['test'] = file_list_loader('%s/data/test_file_list.txt'%assignment_2_folder, '%s/data/labels_dictionary.p'%assignment_2_folder, data_transforms['test'])
# + id="kMPWCW6jCHFh" executionInfo={"status": "ok", "timestamp": 1613936420667, "user_tz": 300, "elapsed": 236, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
### Above, we created datasets. Now, we will pass them into pytorch's inbuild dataloaders,
### these will help us load batches of data for training.
dset_loaders = {}
dset_loaders['train'] = torch.utils.data.DataLoader(dsets['train'], batch_size=wandb_config['batch_size'], shuffle = True, num_workers=2,drop_last=False)
dset_loaders['val'] = torch.utils.data.DataLoader(dsets['val'], batch_size=wandb_config['batch_size'], shuffle = False, num_workers=2,drop_last=False)
dset_loaders['test'] = torch.utils.data.DataLoader(dsets['test'], batch_size=wandb_config['batch_size'], shuffle = True, num_workers=2,drop_last=False)
# + id="PEDTP2GnCUOC" executionInfo={"status": "ok", "timestamp": 1613936423267, "user_tz": 300, "elapsed": 205, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
data_sizes = {}
data_sizes['train'] = len(dsets['train'])
data_sizes['val'] = len(dsets['val'])
data_sizes['test'] = len(dsets['test'])
# + [markdown] id="_xonOzJQeO0Q"
# ## Loading model using PyTorch's in-built methods
# + id="-BtzO9dveSaF" executionInfo={"status": "ok", "timestamp": 1613936425066, "user_tz": 300, "elapsed": 320, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
model = torchvision.models.resnet18(pretrained = False)
# + [markdown] id="NKPyscMfebAw"
# Pytorch makes it easy to load many standard models like resnet18 above. There are many more available. You can see the list here - https://github.com/pytorch/vision/tree/master/torchvision/models.
#
# But, what if you want to build your own custom model? Below, we see how to load your custom models which you would write, and store in the `res/models/` folder. For current purposes, I have created a copy of the popular ResNet architectures in the folder which we will be loading and using.
# + [markdown] id="-39kO7WjpmkU"
# ### We will use the `get_model` functionality to load a CNN architecture.
# + id="ofvHKm47MxST" executionInfo={"status": "ok", "timestamp": 1613936427385, "user_tz": 300, "elapsed": 493, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
### Since we pass ResNet18 below, this will get relayed to
### the get_model function loaded from models.py.
### As you can see, that would load the resnet18 model from the file
### ResNet18.py at `res/models/`.
model = get_model('ResNet18', 1000)
in_filters = model.fc.in_features
model.fc = nn.Linear(in_features=in_filters, out_features=2)
model.cuda();
## above, we first load the ResNet18 architecture with a 1000-class output layer
## (matching the 1000 ImageNet categories). But our task does not have 1000 classes,
## but only 2 classes (dogs and cats), so we replace the final layer of our model
## with a Linear layer with out_features = 2
## finally, we move our model to the GPU with .cuda()
# + [markdown] id="zFXtAksGbif_"
# # Complete the code below to load a resnet34 model.
# + id="ZXhnbYKQbhG1" executionInfo={"status": "ok", "timestamp": 1613936431437, "user_tz": 300, "elapsed": 752, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjm-Ds-aUxtrQKOUlGls0Vk0WuAFuhS0e-pP6vt=s64", "userId": "12440174443621946212"}}
### Above we loaded a ResNet18 model.
### Read the file `res/models/models.py` and decide what you
### should fill below to load the ResNet34 model instead.
model = get_model('ResNet34', 1000)
in_filters = model.fc.in_features
model.fc = nn.Linear(in_features=in_filters, out_features=wandb_config['num_classes'])
model.cuda();
# + [markdown] id="qdfV0B5ob9vr"
# # What changes would you need to make to load a resnet50 model?
#
# As you can see, there is a function called resnet50 in `res/models/ResNet.py`, but this function is not loaded in models.py. So we would need to add another if branch to load resnet50, like the ones we already have for resnet18 and resnet34.
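# The `if`-dispatch described above can be sketched as follows (the constructors here are hypothetical stand-ins; the real ones are defined in `res/models/ResNet.py`):

```python
# Hypothetical sketch of the dispatch in models.py; the stand-in constructors
# below just record what they were asked for.
def resnet18(num_classes): return ('resnet18', num_classes)
def resnet34(num_classes): return ('resnet34', num_classes)
def resnet50(num_classes): return ('resnet50', num_classes)

def get_model(arch, num_classes):
    if arch == 'ResNet18':
        return resnet18(num_classes)
    elif arch == 'ResNet34':
        return resnet34(num_classes)
    elif arch == 'ResNet50':  # <-- the extra branch we would add
        return resnet50(num_classes)
    raise ValueError('Unknown architecture: %s' % arch)
```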
# + [markdown] id="TR9UeMmvpmkV"
# ### Curious what the model architecture looks like?
# + colab={"base_uri": "https://localhost:8080/"} id="Ibdp07pSpmkV" outputId="99286e1f-1647-4efd-f107-7942a3c4d286"
print(model)
# + [markdown] id="SI2cZxkvd8jw"
# ### Anatomy of the ResNet model
#
# As you can see above, a ResNet model contains many convolutional layers, ReLU activations, and pooling layers, among other components.
# + [markdown] id="UrUbaQqgpmkV"
# #### Below we have the function which trains, tests and returns the best model weights.
# + id="1baRVJnOpmkV"
def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters):
with wandb.init(project="HARVAR_BAI", config=hyperparameters):
if hyperparameters['run_name']:
wandb.run.name = hyperparameters['run_name']
config = wandb.config
best_model = model
best_acc = 0.0
print(config)
print(config.num_epochs)
for epoch_num in range(config.num_epochs):
wandb.log({"Current Epoch": epoch_num})
model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config)
best_acc, best_model = val_model(model, best_acc, best_model, dset_loaders, dset_sizes, config)
return best_model
# + [markdown] id="1e7k8r9ypmkW"
# #### The different steps of the train_model function are annotated inside the function below. Read them step by step.
# + id="z_8fxYJ_pmkW"
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
print('Starting training epoch...')
best_model = model
best_acc = 0.0
    ### model.train() puts the model in training mode (e.g. enables dropout and batch-norm statistics updates).
model.train()
running_loss = 0.0
running_corrects = 0
iters = 0
### We loop over the data loader we created above. Simply using a for loop.
for data in tqdm(dset_loaders['train']):
inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer.step()
        ### optimizer.step() updates the model's weights using the calculated gradients.
### Let's store these and log them using wandb. They will be displayed in a nice online
### dashboard for you to see.
iters += 1
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_loss = float(running_loss) / dset_sizes['train']
epoch_acc = float(running_corrects) / float(dset_sizes['train'])
wandb.log({"train_accuracy": epoch_acc})
wandb.log({"train_loss": epoch_loss})
return model
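# The running-average bookkeeping inside the loop above can be isolated into a small helper (a sketch, independent of PyTorch; batch_losses are the per-batch mean losses and batch_corrects the per-batch correct counts):

```python
def running_averages(batch_losses, batch_corrects, batch_size):
    """Mirror the per-iteration logging above: divide the accumulated
    loss/correct counts by the number of samples seen so far."""
    history = []
    running_loss, running_corrects = 0.0, 0
    for iters, (loss, corrects) in enumerate(zip(batch_losses, batch_corrects), start=1):
        running_loss += loss
        running_corrects += corrects
        seen = iters * batch_size
        history.append((running_loss / seen, running_corrects / seen))
    return history

# two batches of 4 samples each
hist = running_averages([0.8, 0.4], [2, 4], batch_size=4)
```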
# + id="qunChCjtpmkW"
def val_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
print('Starting testing epoch...')
    model.eval() ### puts the model in evaluation mode (disables dropout/batch-norm updates); we won't be updating weights while testing.
running_corrects = 0
iters = 0
for data in tqdm(dset_loaders['val']):
inputs, labels = data
if configs.use_gpu:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
iters += 1
running_corrects += torch.sum(preds == labels.data)
        wandb.log({"val_running_corrects": running_corrects/float(iters*len(labels.data))})
epoch_acc = float(running_corrects) / float(dset_sizes['val'])
wandb.log({"test_accuracy": epoch_acc})
### Code is very similar to train set. One major difference, we don't update weights.
### We only check the performance is best so far, if so, we save this model as the best model so far.
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
wandb.log({"best_accuracy": best_acc})
return best_acc, best_model
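# The best-model bookkeeping above boils down to a deep copy taken only when the validation accuracy improves; a PyTorch-free sketch:

```python
import copy

def keep_best(epoch_accs, models):
    """Deep-copy a model only when its validation accuracy beats the best so far."""
    best_acc, best_model = 0.0, None
    for acc, model in zip(epoch_accs, models):
        if acc > best_acc:
            best_acc = acc
            best_model = copy.deepcopy(model)
    return best_acc, best_model

# toy 'models' stand in for real network weights
best_acc, best_model = keep_best([0.61, 0.74, 0.70], [{'w': 1}, {'w': 2}, {'w': 3}])
```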
# + [markdown] id="3MTFyOmoL1Tu"
# # Make sure your runtime is a GPU runtime. If you changed your runtime, make sure to run your code again from the top.
# + id="91_E7yQ8pmkY"
### Criterion is simply specifying what loss to use. Here we choose cross entropy loss.
criterion = nn.CrossEntropyLoss()
### This specifies which optimizer to use. There are many options; here we choose Adam.
### The main difference between optimizers is how they update the weights based on the calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr = wandb_config['base_lr'])
if wandb_config['use_gpu']:
criterion.cuda()
model.cuda()
# + id="Yt5ctNudpmkY"
### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/"%wandb_config['git_dir']):
os.mkdir("%s/saved_models/"%wandb_config['git_dir'])
# + id="0Qg7kGgQTGB-"
#os.environ["WANDB_START_METHOD"] = "fork"
# + id="PtrlYCVMTfPv"
#print(os.environ["WANDB_START_METHOD"])
# + id="accYgj_-pmkY" colab={"base_uri": "https://localhost:8080/", "height": 426} outputId="146a202c-d1db-4dc4-efbe-43338fc76deb"
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, dset_loaders, data_sizes, wandb_config)
save_path = '%s/saved_models/%s_final.pt'%(wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path,'wb') as F:
torch.save(best_final_model,F)
# + [markdown] id="fbAT5FJCPONy"
# # If wandb gives you unexpected errors above, just run the code below.
#
# Sometimes wandb can run into an error when running on Google Colab; this is an active issue that wandb is looking into. As a workaround, the code below does not rely on wandb: it simply calculates the loss and plots it.
# + id="wLNXM8wgY9Zt"
from IPython.display import clear_output
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="mwEPrEpRWn3t" outputId="84b3f3e1-ad36-4053-bf59-7d4b00b91c34"
losses = []
for epoch in range(2):
for data in tqdm(dset_loaders['train']):
        inputs, labels = data
### If you are using a gpu, then script will move the loaded data to the GPU.
### If you are not using a gpu, ensure that wandb_configs['use_gpu'] is set to False above.
if wandb_config['use_gpu']:
inputs = inputs.float().cuda()
labels = labels.long().cuda()
else:
print('WARNING: NOT USING GPU!')
inputs = inputs.float()
labels = labels.long()
### We set the gradients to zero, then calculate the outputs, and the loss function.
### Gradients for this process are automatically calculated by PyTorch.
optimizer_ft.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
### At this point, the program has calculated gradient of loss w.r.t. weights of our NN model.
loss.backward()
optimizer_ft.step()
        losses.append(loss.item())  ### .item() extracts the scalar so we don't keep the whole autograd graph in memory
clear_output()
plt.plot(losses)
plt.show()
# + [markdown] id="zcxInTMApmkZ"
# ### Congratulations!
#
# You just completed your dog vs cats classification!
# + [markdown] id="tUC8UOzYpmkZ"
# # Deliverables for Assignment 2:
#
# ### Like assignment 1, the deliverables are two-fold:
#
# Please run this assignment through to the end, and then make two submissions:
#
# - Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on canvas.
# - Add, commit and push these changes to your github repository.
| assignment_2/Assignment2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Business Problem:
# There is a company, say VIEH Games Corporation. The CEO of this game development company has come up with a plan to make the company stronger. He has analysed the market, and from his industry knowledge and reports he knows that an effective way to attract new customers is to build a reputation in the mobile game industry. His plan: develop an iOS strategy game that attracts lots of positive attention and brings in a large audience. Since he is new to the mobile games market, he wants you (a data scientist) to help him answer his questions:
# - What types of strategy games get great user ratings?
# - What makes a mobile game popular?
#Importing Libraries
import numpy as np
import pandas as pd  # needed below for pd.read_csv and pd.crosstab
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# ### 1. Exploring Dataset
'''
Dataset Link
GitHub: https://github.com/viehgroup/YT/blob/main/How%20to%20approach%20a%20Data%20Science%20problem/appstore_games.csv
Google Drive: https://drive.google.com/file/d/1h41bWWaL3DHhNtYrlPNls0VNCCkJAAP6/view?usp=sharing
'''
games=pd.read_csv("appstore_games.csv")
games.head()
#will print (Row x Columns)
games.shape
# ### 2. Data Selection
updated_columns= {n: n.lower().replace(' ', '_') for n in games.columns}
updated_columns
games.rename(columns=updated_columns, inplace=True)
games.head()
games.set_index(keys='id', inplace=True)
games.head()
games.drop(columns=['url', 'icon_url'], inplace=True)
games.head()
games.info()
games['original_release_date']= pd.to_datetime(games['original_release_date'])
games['current_version_release_date']= pd.to_datetime(games['current_version_release_date'])
games.info()
#Calculate and sum the NaN/Null values
games.isnull().sum()
np.array_equal(games['average_user_rating'].isnull(), games['user_rating_count'].isnull())
games=games.loc[games['average_user_rating'].notnull()]
games=games.loc[games['user_rating_count'] >=30]
games.shape
games.isnull().sum()
# ### 3. Using Descriptive Statistics
games.describe()
games["user_rating_count"].sort_values(ascending=False).head(10)
(games["user_rating_count"] >= 1e5).sum()
n_popular = (games['user_rating_count'] >= 1e5).sum()
print('Games with at least 100,000 ratings: ', round(100 * n_popular / games['user_rating_count'].count(), 2), '%')
# <font color='blue'>**Analysis:** The data from the user rating counts, suggests that it is very hard for a strategy game to become hugely popular. The data tells us that less than 1% of the strategy games can be considered as popular (in terms of number of user rating) while more than 75% of the games have less than 1200 user ratings which indicates a very low user base.</font>
games['average_user_rating'].describe()
pd.set_option('display.float_format', lambda x: '%.2f' %x)
games['average_user_rating'].describe()
# <center> Extra Insight </center>
rating_4_5= (games['average_user_rating'] == 4.5).sum()
rating_4_5
proportion_4_5 = (games['average_user_rating'] == 4.5).mean()
print(proportion_4_5*100, '%')
# More than 47% of the games have 4.5 as their average rating. So if we build a strategy game, there is roughly a 47% chance we get a 4.5 average rating!
# ### 4. Univariate EDA
games.info()
#Converting Bytes into Megabytes (MBs)
games['size'] /= 1e6
games['size'].hist(bins=30, ec='red')
games['size'].describe()
games['size'].sort_values(ascending=False).head(12)
games.hist(figsize=(10, 4), bins=30, ec='red')
price_filter=games['price']<= games['price'].quantile(0.99)
user_rating_count_filter=games['user_rating_count']<= games['user_rating_count'].quantile(0.99)
size_filter=games['size']<= games['size'].quantile(0.99)
exclude_top_1= price_filter & user_rating_count_filter & size_filter
games[exclude_top_1].hist(figsize=(10, 4), bins=30, ec='red')
plt.tight_layout()
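# The quantile filter above works like this on a toy series: everything above the 99th percentile is dropped before plotting.

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 1000])   # one extreme outlier
trimmed = s[s <= s.quantile(0.99)]  # the 99th percentile falls well below 1000
```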
# 1. Most of the games are free; very few are paid.
# 2. Most of the paid games cost under 10 dollars.
# 3. 4.5 is the most common average user rating.
# 4. There are very few games whose rating is less than 3.
# 5. Large games (by download size) are rare.
games['age_rating'].value_counts()
games['age_rating'].value_counts().plot(kind='bar')
games['age_rating'].value_counts(normalize=True).plot(kind='bar')
age_percentage = 100*games['age_rating'].value_counts(normalize=True)
for x, y in age_percentage.items():
print("{}: {}%".format(x, round(y, 1)))
# 1. More than 50% of the games are rated 4+, meaning even kids can play them.
# 2. Very few games (less than 3.5%) are rated 17+.
# ## Bi-variate EDA
# On basis of variables, we have three cases
# 1. Numeric VS Numeric: Scatter Plot
# 2. Numeric VS Categorical: Box Plots (other choices depend on what we are trying to find)
# 3. Categorical VS Categorical: Contingency Table
# 1. Numerical VS Numerical
#
# What is relation between Size of the game and the average rating of the game?
sns.scatterplot(x='size', y='average_user_rating', data=games, s=20)
# <font color='blue'>**Analysis:** If we ignore the small games, we can clearly see that games of 2 GB or more tend to have high ratings: no large game has a rating below 3.5. A larger size usually means richer graphics, which may in turn drive the higher ratings.</font>
# 2. Numerical VS Categorical
ratings_mapping = {1.5: 'poor', 2.0: 'poor', 2.5: 'poor', 3.0: 'fair',
3.5: 'fair', 4.0: 'good', 4.5: 'good',
5.0: 'excellent'}
games['new_rating'] = games['average_user_rating'].map(ratings_mapping)
sns.boxplot(x='new_rating', y='size', data=games[games['size'] <= 600],
order=['poor', 'fair', 'good', 'excellent'])
# So for games under 600 MB, quality and graphics (for which size is a rough proxy) matter and are associated with higher ratings.
# 3. Categorical VS Categorical
#
# What is the relationship between Age and the average rating of the game?
pd.crosstab(games['age_rating'], games['new_rating'])
# We can't yet draw insight from the raw counts above; we need to check whether the two variables are related. For example, if more than 85% of the 4+ games were rated POOR while only around 10% of the 17+ games were, we could say there is a relation.
pd.crosstab(games['age_rating'], games['new_rating'], normalize='index')*100
# <font color='blue'>**Analysis:** Here we can see that the values in the EXCELLENT column are almost the same across rows, and the same holds for the other columns. So age doesn't play an important role in the ratings of the games.</font>
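# The effect of `normalize='index'` can be checked on a tiny hand-made example: each row of the crosstab is divided by its row total, so rows become comparable regardless of how many games fall in each age band.

```python
import pandas as pd

toy = pd.DataFrame({'age_rating': ['4+', '4+', '4+', '17+'],
                    'new_rating': ['good', 'good', 'poor', 'good']})
# each row now sums to 100 (%)
ct = pd.crosstab(toy['age_rating'], toy['new_rating'], normalize='index') * 100
```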
# #### Hypothetical Questions: Whether Free games have high rating than Paid games?
games['price'].unique()
games['price'] = games['price'].round()
games['price'].unique()
games['price'].value_counts().sort_values().plot(kind='bar')
games['new_price']= (games['price'] == 0).astype(int).map({0: 'Free', 1: 'Paid'})
sns.boxplot(x='new_price', y='average_user_rating', data=games)
# <font color='blue'>**Analysis:** From the plot above we can see that both box plots are nearly identical, which means price doesn't affect the ratings of the games.</font>
# #### Final Statement
# <font color='red'>**Analysis:** We analysed the Apple App Store strategy-game market, starting from more than 17 thousand strategy games on iOS. After ignoring games with fewer than 30 ratings (too few to build a reputation on), we were left with 4311 games. Our first finding was that it is very hard for a strategy game to become hugely popular: less than 1% of strategy games can be considered popular (in terms of number of user ratings), while more than 75% have fewer than 1200 user ratings, indicating a very low user base. We then analysed the factors affecting ratings. A new strategy game still has around a 47% chance of reaching an average user rating of 4.5 (though that says nothing about how many ratings it will collect in total). The size of the game plays an important role in its rating: richer graphics mean a larger size, and larger games tend to be rated higher. On the other hand, very few games are large, so the audience for a large game will be smaller than for a small one. Only 3.5% of the games are 17+, so making a 17+ (adult) game could be a drawback, since we would lose a large part of the audience; yet when comparing age directly with rating, we found that age does not affect ratings. Even a 17+ game may reach only a small audience, but its ratings will not suffer. Price likewise does not affect ratings: free and paid games have similar user ratings; the only difference is audience size, with free games attracting far more users than paid ones.
#
# So to sum up:
# Only the size of the game affects its ratings, whereas price and age affect the audience base.</font>
# ### Extra Activities you can perform
#
# 1. You can check whether supported languages affect the ratings. (HINT: make a language category and use the second case above, or encode the variables and run KNN.)
# 2. You can see which primary genre has the highest rating.
# 3. You can check whether the latest version of a game affects its ratings.
#
| How to approach a Data Science problem/Basic Statistics on Games Available on Apple Store.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
loan = pd.read_csv("loan.csv", sep=",")
loan.info()
# # Understanding the data
loan.head()
loan.columns
# # Data Cleaning
#Finding total number of values missing in each column
loan.isnull().sum()
#finding % of missing values in each column
round(loan.isnull().sum()/len(loan.index), 2)*100
# removing the columns having more than 90% missing values
missing_columns = loan.columns[100*(loan.isnull().sum()/len(loan.index)) > 90]
print(missing_columns)
# +
loan = loan.drop(missing_columns, axis=1)
print(loan.shape)
# -
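# The same >90%-missing filter can be checked on a toy frame:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'a': [1, 2, 3, 4],
                    'mostly_nan': [np.nan] * 4,   # 100% missing
                    'b': [1.0, np.nan, 3.0, 4.0]})  # 25% missing
missing_pct = 100 * toy.isnull().sum() / len(toy.index)
toy = toy.drop(toy.columns[missing_pct > 90], axis=1)  # drops 'mostly_nan' only
```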
100*(loan.isnull().sum()/len(loan.index))
loan.loc[:, ['desc', 'mths_since_last_delinq']].head()
loan = loan.drop(['desc', 'mths_since_last_delinq'], axis=1)
100*(loan.isnull().sum()/len(loan.index))
# Missing values in rows
loan.isnull().sum(axis=1)
len(loan[loan.isnull().sum(axis=1) > 5].index)
loan.info()
loan['int_rate'] = loan['int_rate'].apply(lambda x: pd.to_numeric(x.split("%")[0]))
loan.info()
loan = loan[~loan['emp_length'].isnull()]
import re
loan['emp_length'] = loan['emp_length'].apply(lambda x: re.findall(r'\d+', str(x))[0])
loan['emp_length'] = loan['emp_length'].apply(lambda x: pd.to_numeric(x))
loan.info()
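# The `re.findall` trick above pulls the first run of digits out of labels like `'10+ years'` or `'< 1 year'` (note the raw string for the `\d` escape):

```python
import re

def years_from_label(label):
    """Extract the leading numeric part of an emp_length label."""
    return int(re.findall(r'\d+', str(label))[0])

parsed = [years_from_label(x) for x in ['10+ years', '< 1 year', '3 years']]
```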
# # Data analyzation
behaviour_var = [
"delinq_2yrs",
"earliest_cr_line",
"inq_last_6mths",
"open_acc",
"pub_rec",
"revol_bal",
"revol_util",
"total_acc",
"out_prncp",
"out_prncp_inv",
"total_pymnt",
"total_pymnt_inv",
"total_rec_prncp",
"total_rec_int",
"total_rec_late_fee",
"recoveries",
"collection_recovery_fee",
"last_pymnt_d",
"last_pymnt_amnt",
"last_credit_pull_d",
"application_type"]
behaviour_var
df = loan.drop(behaviour_var, axis=1)
df.info()
df = df.drop(['title', 'url', 'zip_code', 'addr_state'], axis=1)
df['loan_status'] = df['loan_status'].astype('category')
df['loan_status'].value_counts()
# +
df = df[df['loan_status'] != 'Current']
df['loan_status'] = df['loan_status'].apply(lambda x: 0 if x=='Fully Paid' else 1)
df['loan_status'] = df['loan_status'].apply(lambda x: pd.to_numeric(x))
df['loan_status'].value_counts()
# -
# # Univariate Analysis
round(np.mean(df['loan_status']), 2)
sns.barplot(x='grade', y='loan_status', data=df)
plt.show()
def plot_cat(cat_var):
sns.barplot(x=cat_var, y='loan_status', data=df)
plt.show()
plot_cat('term')
plot_cat('grade')
plt.figure(figsize=(16, 6))
plot_cat('sub_grade')
plot_cat('verification_status')
plt.figure(figsize=(16, 6))
plot_cat('purpose')
df['issue_d'].head()
from datetime import datetime
df['issue_d'] = df['issue_d'].apply(lambda x: datetime.strptime(x, '%b-%y'))
# +
df['month'] = df['issue_d'].apply(lambda x: x.month)
df['year'] = df['issue_d'].apply(lambda x: x.year)
# -
df.groupby('year').year.count()
df.groupby('month').month.count()
plt.figure(figsize=(16, 6))
plot_cat('month')
plot_cat('year')
sns.distplot(df['loan_amnt'])
plt.show()
# +
def loan_amount(n):
if n < 5000:
return 'low'
elif n >=5000 and n < 15000:
return 'medium'
elif n >= 15000 and n < 25000:
return 'high'
else:
return 'very high'
df['loan_amnt'] = df['loan_amnt'].apply(lambda x: loan_amount(x))
# -
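# The hand-written `loan_amount` bins above can also be expressed with `pd.cut`, which makes the bin edges explicit (with `right=False` the intervals are closed on the left, matching the comparisons above):

```python
import pandas as pd

amounts = pd.Series([2500, 7000, 16000, 30000])
binned = pd.cut(amounts,
                bins=[0, 5000, 15000, 25000, float('inf')],
                labels=['low', 'medium', 'high', 'very high'],
                right=False)
```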
df['loan_amnt'].value_counts()
plot_cat('loan_amnt')
df['funded_amnt_inv'] = df['funded_amnt_inv'].apply(lambda x: loan_amount(x))
plot_cat('funded_amnt_inv')
# +
def int_rate(n):
if n <= 10:
return 'low'
elif n > 10 and n <=15:
return 'medium'
else:
return 'high'
df['int_rate'] = df['int_rate'].apply(lambda x: int_rate(x))
# -
plot_cat('int_rate')
# +
def dti(n):
if n <= 10:
return 'low'
elif n > 10 and n <=20:
return 'medium'
else:
return 'high'
df['dti'] = df['dti'].apply(lambda x: dti(x))
# -
plot_cat('dti')
# +
def funded_amount(n):
if n <= 5000:
return 'low'
elif n > 5000 and n <=15000:
return 'medium'
else:
return 'high'
df['funded_amnt'] = df['funded_amnt'].apply(lambda x: funded_amount(x))
# -
plot_cat('funded_amnt')
# +
def installment(n):
if n <= 200:
return 'low'
elif n > 200 and n <=400:
return 'medium'
elif n > 400 and n <=600:
return 'high'
else:
return 'very high'
df['installment'] = df['installment'].apply(lambda x: installment(x))
# -
plot_cat('installment')
# +
def annual_income(n):
if n <= 50000:
return 'low'
elif n > 50000 and n <=100000:
return 'medium'
elif n > 100000 and n <=150000:
return 'high'
else:
return 'very high'
df['annual_inc'] = df['annual_inc'].apply(lambda x: annual_income(x))
# -
plot_cat('annual_inc')
# +
df = df[~df['emp_length'].isnull()]
def emp_length(n):
if n <= 1:
return 'fresher'
elif n > 1 and n <=3:
return 'junior'
elif n > 3 and n <=7:
return 'senior'
else:
return 'expert'
df['emp_length'] = df['emp_length'].apply(lambda x: emp_length(x))
# -
plot_cat('emp_length')
# # Segmented univariate analysis
plt.figure(figsize=(16, 6))
plot_cat('purpose')
plt.figure(figsize=(16, 6))
sns.countplot(x='purpose', data=df)
plt.show()
main_purposes = ["credit_card","debt_consolidation","home_improvement","major_purchase"]
df = df[df['purpose'].isin(main_purposes)]
df['purpose'].value_counts()
sns.countplot(x=df['purpose'])
plt.show()
plt.figure(figsize=[10, 6])
sns.barplot(x='term', y="loan_status", hue='purpose', data=df)
plt.show()
# +
def plot_segmented(cat_var):
plt.figure(figsize=(10, 6))
sns.barplot(x=cat_var, y='loan_status', hue='purpose', data=df)
plt.show()
plot_segmented('term')
# -
plot_segmented('grade')
plot_segmented('home_ownership')
plot_segmented('year')
plot_segmented('emp_length')
plot_segmented('loan_amnt')
plot_segmented('int_rate')
plot_segmented('installment')
plot_segmented('dti')
plot_segmented('annual_inc')
df.groupby('annual_inc').loan_status.mean().sort_values(ascending=False)
# +
def diff_rate(cat_var):
default_rates = df.groupby(cat_var).loan_status.mean().sort_values(ascending=False)
    return (round(default_rates, 2), round(default_rates.iloc[0] - default_rates.iloc[-1], 2))
default_rates, diff = diff_rate('annual_inc')
print(default_rates)
print(diff)
# +
df_categorical = df.loc[:, df.dtypes == object]
df_categorical['loan_status'] = df['loan_status']
print([i for i in df.columns])
# -
d = {key: diff_rate(key)[1]*100 for key in df_categorical.columns if key != 'loan_status'}
print(d)
| code(EDA).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Advanced Tutorial 11: Model Calibration
#
# ## Overview
# In this tutorial, we will discuss the following topics:
# * [Calculating Calibration Error](#ta11error)
# * [Generating and Applying a Model Calibrator](#ta11calibrator)
# We'll start by getting the imports out of the way:
# +
import tempfile
import os
import fastestimator as fe
from fastestimator.architecture.tensorflow import LeNet
from fastestimator.backend import squeeze, reduce_mean
from fastestimator.dataset.data import cifair10
from fastestimator.op.numpyop.meta import Sometimes
from fastestimator.op.numpyop.multivariate import HorizontalFlip, PadIfNeeded, RandomCrop
from fastestimator.op.numpyop.univariate import CoarseDropout, Normalize, Calibrate
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
from fastestimator.summary.logs import visualize_logs
from fastestimator.trace.adapt import PBMCalibrator
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import CalibrationError, MCC
from fastestimator.util import to_number, to_list
import matplotlib.pyplot as plt
import numpy as np
label_mapping = {
'airplane': 0,
'automobile': 1,
'bird': 2,
'cat': 3,
'deer': 4,
'dog': 5,
'frog': 6,
'horse': 7,
'ship': 8,
'truck': 9
}
# -
# And let's define a function to build a generic ciFAIR10 estimator. We will show how to use combinations of extra traces and post-processing ops to enhance this estimator throughout the tutorial.
def build_estimator(extra_traces = None, postprocessing_ops = None):
batch_size=128
save_dir = tempfile.mkdtemp()
extra_traces = to_list(extra_traces)
postprocessing_ops = to_list(postprocessing_ops)
train_data, eval_data = cifair10.load_data()
test_data = eval_data.split(range(len(eval_data) // 2))
pipeline = fe.Pipeline(
train_data=train_data,
eval_data=eval_data,
test_data=test_data,
batch_size=batch_size,
ops=[Normalize(inputs="x", outputs="x", mean=(0.4914, 0.4822, 0.4465), std=(0.2471, 0.2435, 0.2616)),
PadIfNeeded(min_height=40, min_width=40, image_in="x", image_out="x", mode="train"),
RandomCrop(32, 32, image_in="x", image_out="x", mode="train"),
Sometimes(HorizontalFlip(image_in="x", image_out="x", mode="train")),
CoarseDropout(inputs="x", outputs="x", mode="train", max_holes=1),
],
num_process=0)
model = fe.build(model_fn=lambda: LeNet(input_shape=(32, 32, 3)), optimizer_fn="adam")
network = fe.Network(
ops=[
ModelOp(model=model, inputs="x", outputs="y_pred"),
CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
UpdateOp(model=model, loss_name="ce")
],
pops=postprocessing_ops) # <---- Some of the secret sauce will go here
traces = [
MCC(true_key="y", pred_key="y_pred"),
BestModelSaver(model=model, save_dir=save_dir, metric="mcc", save_best_mode="max", load_best_final=True),
]
traces = traces + extra_traces # <---- Most of the secret sauce will go here
estimator = fe.Estimator(pipeline=pipeline,
network=network,
epochs=21,
traces=traces,
log_steps=300)
return estimator
# <a id='ta11error'></a>
# ## Calculating Calibration Error
#
# Suppose you have a neural network that is performing image classification. For the sake of argument, let's imagine that the classification problem is to look at x-ray images and determine whether or not a patient has cancer. Let's further suppose that your model is very accurate: when it assigns a higher probability to 'cancer' the patient is almost always sick, and when it assigns a higher probability to 'healthy' the patient is almost always fine. It could be tempting to think that the job is done, but there is still a potential problem for real-world deployments of your model. Suppose a physician using your model runs an image and gets a report saying that it is 51% likely that the patient is healthy, and 49% likely that there is a cancerous tumor. In reality the patient is indeed healthy. From an accuracy point of view, your model is doing just fine. However, if the doctor sees that it is 49% likely that there is a tumor, they are likely to order a biopsy in order to be on the safe side. Taken to an extreme, suppose that your model always predicts a 49% probability of a tumor whenever it sees a healthy patient. Even though the model might have perfect accuracy, in practice it would always result in extra surgical procedures being performed. Ideally, if the model says that there is a 49% probability of a tumor, you would expect there to actually be a tumor in 49% of those cases. The discrepancy between a model's predicted probability of a class and the true probability of that class conditioned on the prediction is measured as the calibration error. Calibration error is notoriously difficult to estimate correctly, but FE provides a `Trace` for this based on a [2019 NeurIPS spotlight paper](https://papers.nips.cc/paper/2019/file/f8c0c968632845cd133308b1a494967f-Paper.pdf) titled "Verified Uncertainty Calibration".
#
# The `CalibrationError` trace can be used just like any other metric trace, though it also optionally can compute confidence intervals around the estimated error. Keep in mind that to measure calibration error you would want your validation dataset to have a reasonable real-world class distribution (only a small percentage of people in the population actually have cancer, for example). For the purpose of easy illustration we will be using the ciFAIR10 dataset, and computing a 95% confidence interval for the estimated calibration error of the model:
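# As rough intuition for what the trace measures, here is a minimal expected-calibration-error sketch with equal-width confidence bins (the verified estimator from the paper is considerably more sophisticated than this):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - mean confidence| over equal-width bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return err

# The x-ray model from above: it always says 51% 'healthy' on healthy patients.
# Accuracy is perfect, but the 0.51 confidences badly understate the true rate.
e = ece([0.51, 0.51, 0.51, 0.51], [1, 1, 1, 1])
```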
estimator = build_estimator(extra_traces=CalibrationError(true_key="y", pred_key="y_pred", confidence_interval=95))
# + jupyter={"outputs_hidden": true}
summary = estimator.fit("experiment1")
# -
estimator.test()
# Let's take a look at how the calibration error changed over training:
visualize_logs([summary], include_metrics={'calibration_error', 'mcc', 'ce'})
# As we can see from the graph above, calibration error is significantly noisier than classical metrics like MCC or accuracy. In this case it does seem to have improved somewhat with training, though the trend isn't strong enough to expect that simply training longer would eliminate the calibration error. Instead, we will see how to effectively calibrate a model after the fact:
# <a id='ta11calibrator'></a>
# ## Generating and Applying a Model Calibrator
#
# While many approaches to model calibration have been proposed, we will again leverage the Verified Uncertainty Calibration paper mentioned above to achieve highly sample-efficient model re-calibration. There are two steps involved. First, we use the `PBMCalibrator` trace to generate a 'Platt binner marginal calibrator'. This calibrator is separate from the neural network: it takes the network's outputs and returns calibrated outputs. One consequence of performing this calibration is that the output vector for a prediction will no longer sum to 1, since each class is calibrated independently.
#
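# A toy numeric illustration of this point — when each class's score is recalibrated independently, the resulting vector generally no longer sums to 1 (the calibration maps below are invented for the example):

```python
import numpy as np

# hypothetical per-class calibration maps (monotone re-mappings of each score)
calibrators = [lambda p: p ** 0.8, lambda p: p ** 1.3, lambda p: p ** 1.0]

softmax_out = np.array([0.5, 0.3, 0.2])   # sums to 1
calibrated = np.array([f(p) for f, p in zip(calibrators, softmax_out)])
print(calibrated.sum())                    # no longer exactly 1
```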
# Of course, simply having such a calibration object is not useful unless we apply it. To do so we use the `Calibrate` NumpyOp, which can load any calibrator object from disk and apply it during `Network` post-processing. Since we are using a best-model saver, we only save the calibrator object when `since_best` is 0, so that when we re-load the best model we also load the calibrator that matches it.
save_path = os.path.join(tempfile.mkdtemp(), 'calibrator.pkl')
estimator = build_estimator(extra_traces=[CalibrationError(true_key="y", pred_key="y_pred", confidence_interval=95),
PBMCalibrator(true_key="y", pred_key="y_pred", save_path=save_path, save_if_key="since_best_mcc", mode="eval"),
# We will also compare the MCC and calibration error between the original and calibrated samples:
MCC(true_key="y", pred_key="y_pred_calibrated", output_name="mcc (calibrated)", mode="test"),
CalibrationError(true_key="y", pred_key="y_pred_calibrated", output_name="calibration_error (calibrated)", confidence_interval=95, mode="test"),
],
postprocessing_ops = Calibrate(inputs="y_pred", outputs="y_pred_calibrated", calibration_fn=save_path, mode="test"))
# + jupyter={"outputs_hidden": true}
summary = estimator.fit("experiment2")
# -
estimator.test()
visualize_logs([summary], include_metrics={'calibration_error', 'mcc', 'ce', "calibration_error (calibrated)", "mcc (calibrated)"})
delta = summary.history['test']['mcc (calibrated)'][8211] - summary.history['test']['mcc'][8211]
relative_delta = delta / summary.history['test']['mcc'][8211]
print(f"mcc change after calibration: {delta} ({relative_delta*100}%)")
delta = summary.history['test']['calibration_error (calibrated)'][8211].y - summary.history['test']['calibration_error'][8211].y
relative_delta = delta / summary.history['test']['calibration_error'][8211].y
print(f"calibration error change after calibration: {delta} ({relative_delta*100}%)")
# As the graphs and values above show, using a Platt binner marginal calibrator can dramatically reduce a model's calibration error (in this case by over 80%) while sacrificing only a very small amount of model performance (here, less than a 1% reduction in MCC).
| tutorial/advanced/t11_model_calibration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Reference notebook:** https://www.kaggle.com/gogo827jz/rapids-svm-on-gpu-6000-models-in-1-hour
# + id="TRBnzsqfS7wu"
import pandas as pd
import numpy as np
from tqdm.notebook import tqdm
from time import time
# + [markdown] id="lCvaxid2jW7i"
# # Data loading
# + id="ZuzUhyN6jHTj"
sample_submission = pd.read_csv('../input/lish-moa/sample_submission.csv')
test_features = pd.read_csv('../input/lish-moa/test_features.csv')
train_features = pd.read_csv('../input/lish-moa/train_features.csv')
train_targets= pd.read_csv('../input/lish-moa/train_targets_scored.csv')
# + [markdown] id="5MbLl4I3llRU"
# # Data preprocessing
# + [markdown] id="PUWKOWEipS38"
# ### Categorical Pipelines
# + id="dIitFb4ulbS-"
def categorical_encoding(df):
df['cp_type'] = df['cp_type'].map({'trt_cp': 0, 'ctl_vehicle': 1})
df['cp_dose'] = df['cp_dose'].map({'D1': 0, 'D2': 1})
df['cp_time'] = df['cp_time'].map({24:1, 48:2, 72:3})
return df
# + [markdown] id="-IRggtc8qSRB"
# ## Preprocess train data
# + [markdown] id="J1Zlj9M3vW6p"
# ## Train
# + [markdown] id="dpbcZdHTqmw1"
# Drop the ID column, then encode the categorical features
# + id="C5nIl1qbqkkH"
df_train = train_features.copy()
df_train.drop('sig_id', axis = 1, inplace = True)
df_train = categorical_encoding(df_train)
# + [markdown] id="40saK2s7vY0F"
# ## Target
# + id="nmKrZohYvaUR"
train_targets.drop('sig_id', axis = 1, inplace = True)
# + [markdown] id="sxyEQpHF3BEd"
# ## Test
# + id="smXuRLPk3D2O"
df_test = test_features.copy()
df_test.drop('sig_id', axis = 1, inplace = True)
df_test = categorical_encoding(df_test)
# + [markdown] id="SfASA9pCwbUt"
# # SVC
# + id="qIsrghN6xEPT"
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import log_loss
from sklearn.preprocessing import StandardScaler
import datetime
# + id="2LJRR8C9rYvC"
scaler = StandardScaler()
X = scaler.fit_transform(df_train.values)
X_tt = scaler.transform(df_test.values)
# + id="SZR5p4tpv-mD" outputId="250499f0-ffdc-4673-e1ef-0043205a0c5e"
#Copies
res = train_targets.copy()
train = df_train.copy()
sample_submission.loc[:, train_targets.columns] = 0 # putting all columns to 0
res.loc[:, train_targets.columns] = 0 # putting all columns to 0
N_STARTS = 1  # number of random seeds
N_SPLITS = 3
for n in tqdm(range(train_targets.shape[1])):
    start_time = time()  # timing for the progress log
    target = train_targets.values[:, n]  # labels for target column n
    if target.sum() >= N_SPLITS:
        for seed in range(N_STARTS):
            skf = StratifiedKFold(n_splits = N_SPLITS, random_state = seed, shuffle = True)
            for j, (train_idx, test_idx) in enumerate(skf.split(target, target)):
                x_train, x_test = X[train_idx], X[test_idx]
                y_train, y_test = target[train_idx], target[test_idx]
                if y_train.sum() < 5:
                    print(f'Target {train_targets.columns[n]}: Seed {seed}: Fold {j}: few positive samples, probability estimates may be unreliable.')
                model = SVC(probability = True, cache_size = 2000)
                model.fit(x_train, y_train)
                # add this fold's averaged prediction on the scaled test features to the submission
                sample_submission.loc[:, train_targets.columns[n]] += model.predict_proba(X_tt)[:, 1] / (N_SPLITS * N_STARTS)
                # add the out-of-fold prediction so we can score the metric on the train set
                res.loc[test_idx, train_targets.columns[n]] += model.predict_proba(x_test)[:, 1] / N_STARTS
    col_score = log_loss(train_targets.loc[:, train_targets.columns[n]], res.loc[:, train_targets.columns[n]])
    print(f'[{str(datetime.timedelta(seconds = time() - start_time))[2:7]}] Target {train_targets.columns[n]}:', col_score)
# + id="8nzBrih3ycle"
| notebooks/svc-1-moa.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import networkx as nx
pd.read_csv('../data/summary.csv', parse_dates=True)
authors_churn = pd.read_csv('../data/authors.csv')
authors_churn
authors = pd.read_csv('authors.csv')
authors.head()
authors.describe()
# %matplotlib inline
authors.plot(kind='scatter', x='n-authors', y='n-revs');
authors.groupby('n-authors').describe()
communication = pd.read_csv('../data/communication.csv', parse_dates=True)
strength = communication['strength']
communication['normal_strength'] = strength.apply(lambda x: (x - strength.min()) / (strength.max() - strength.min()))
communication['strength'].quantile(.9)
communication[communication['author'].isin(data[0].head(50))].groupby(['author']).apply(lambda x: list(x.peer)).to_json('../data/data.json')
communication[communication['strength'] > 20].describe()
G=nx.from_pandas_dataframe(communication, 'author', 'peer', ['strength'])
# +
pos=nx.spring_layout(G)
edgewidth = [ d['strength'] for (u,v,d) in G.edges(data=True)]
nx.draw_networkx_edge_labels(G, pos)
nx.draw_networkx_nodes(G,pos)
nx.draw_networkx_labels(G,pos)
nx.draw_networkx_edges(G, pos, edge_color=edgewidth)
# -
nx.k_nearest_neighbors(G, weight='strength')
nx.pagerank(G, weight='strength')
# +
G=nx.from_pandas_dataframe(communication, 'author', 'peer', ['strength'])
d = nx.pagerank(G, weight='strength')
data = pd.DataFrame(d.items()).sort_values(by=1, ascending=False)
# -
communication[communication['author'].isin(data[0].head(50))]
| marathon-consul/Calculate factor-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction To Probability
# ## Challenge 1
#
# A and B are events of a probability space $(\Omega, \Sigma, P)$ such that $P(A) = 0.3$, $P(B) = 0.6$ and $P(A \cap B) = 0.1$
#
# Which of the following statements are false?
# * $P(A \cup B) = 0.6$
# * $P(A \cap B^{C}) = 0.2$
# * $P(A \cap (B \cup B^{C})) = 0.4$
# * $P(A^{C} \cap B^{C}) = 0.3$
# * $P((A \cap B)^{C}) = 0.9$
"""
your solution here:
1. P(A U B) = 0.8 rather than 0.6 because it should be (0.3+0.6)-0.1
2. P (A n B^c) = 0.12 because it should be P(A)*P(B-1) = 0.3*1-0.6= 0.12
3. It seems like ๐(๐ด โฉ(๐ตโช๐ต^c)): 0.3 * ๐ตโช๐ต^c=1 = 0.3*1 = 0.3
4. P(A^c n B^c) = (1-0.3)*(1-0.6) = 0.7*0.4 = 0.28
5. True
"""
# ## Challenge 2
# There is a box with 10 white balls, 12 red balls and 8 black balls. Calculate the probability of:
# * Taking a white ball out.
# * Taking a white ball out after taking a black ball out.
# * Taking a red ball out after taking a black and a red ball out.
# * Taking a red ball out after taking a black and a red ball out with reposition.
#
# **Hint**: Reposition means putting back the ball into the box after taking it out.
"""
your solution here
1. Only white: 10/20, which is 1/2
2. White after a black ball: P(black)*P(white given the black happened) = 0.4*10/19 = 0.2105
3. red after red after black: black, red, red: 8/10*12/19*11/18 = 0.155
4. 8/20*12/20*12/20 = 0.144
"""
# ## Challenge 3
#
# You are planning to go on a picnic today but the morning is cloudy. You hate rain so you don't know whether to go out or stay home! To help you make a decision, you gather the following data about rainy days:
#
# * 50% of all rainy days start off cloudy!
# * Cloudy mornings are common. About 40% of days start cloudy.
# * This month is usually dry so only 3 of 30 days (10%) tend to be rainy.
#
# What is the chance of rain during the day?
"""
your solution here
P(cloudy given rain) = .5
P(cloudy) = .4
P(rain) = .10
P(rain given cloudy) = P(rain)*P(cloudy given rain) / P(cloudy)
= 0.1*0.5 / 0.4
"""
# ## Challenge 4
#
# One thousand people were asked through a telephone survey whether they thought more street lighting is needed at night or not.
#
# Out of the 480 men that answered the survey, 324 said yes and 156 said no. On the other hand, out of the 520 women that answered, 351 said yes and 169 said no.
#
# We wonder if men and women have a different opinions about the street lighting matter. Is gender relevant or irrelevant to the question?
#
# Consider the following events:
# - The answer is yes, so the person that answered thinks that more street lighting is needed.
# - The person who answered is a man.
#
# We want to know if these events are independent, that is, if the fact of wanting more light depends on whether one is male or female. Are these events independent or not?
#
# **Hint**: To clearly compare the answers by gender, it is best to place the data in a table.
# +
# your code here
# -
"""
your solution here
To check for independence, if P(A|B) = P(A) then they are indepdent.
P(yes light given male) = P(yes light)
P(324/480) = P(675/1000)
Thus they are indepdent.
"""
| module-2/Probability-Intro/your-code/.ipynb_checkpoints/main-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# A simple implementation of a binary search tree
class Node:
def __init__(self, key, left = None, right = None):
self.key = key
self.left, self.right = left, right
def insert(self,key):
if key < self.key:
if self.left:
self.left.insert(key)
else:
self.left = Node(key)
elif key > self.key:
if self.right:
self.right.insert(key)
else:
self.right = Node(key)
else:
print("Error, key already contained in tree")
    def search(self, key):
        if key < self.key:
            if self.left:
                return self.left.search(key)  # propagate the result up the call chain
            print("{} not contained in tree".format(key))
            return None
        elif key > self.key:
            if self.right:
                return self.right.search(key)  # propagate the result up the call chain
            print("{} not contained in tree".format(key))
            return None
        else:
            print("{} contained in tree".format(key))
            return key
def inorder(self,arr): # print tree
# inorder for a binary search tree literally means "in" sorted "order"
# left root right
if self.left:
self.left.inorder(arr)
arr.append(self.key) # print(self.key, end = " ")
if self.right:
self.right.inorder(arr)
def preorder(self,arr): # root left right (root will be printed first)
arr.append(self.key) # print(self.key, end = " ")
if self.left:
self.left.preorder(arr)
if self.right:
self.right.preorder(arr)
def postorder(self,arr): # left right root (root will be printed last)
if self.left:
self.left.postorder(arr)
if self.right:
self.right.postorder(arr)
arr.append(self.key) # print(self.key, end = " ")
def build_binary_tree_from_array(X):
tree = Node(X[0])
for i in range(1,len(X)):
tree.insert(X[i])
return tree
def demo_binary_tree(X = [5,3,8,2,0,9,1,4,6,7]):
# build tree
tree = build_binary_tree_from_array(X)
# demo tree traversal algorithms (which output to lists)
inorder, preorder, postorder = [], [], []
tree.inorder(inorder)
tree.preorder(preorder)
tree.postorder(postorder)
print("Tree inorder traversal: ",inorder)
print("Tree preorder traversal: ",preorder)
print("Tree postorder traversal: ",postorder)
# demo search function
print("\nSearching for elements in tree:")
tree.search(5)
tree.search(-1)
tree.search(2)
demo_binary_tree()
# -
# ## Tree Traversals
#
# The three depth first tree traversal algorithms are called **preorder**, **inorder**, and **postorder** where the names give away the order in which the algorithm visits the root, left child, and right child.
#
# - **preorder** $\implies$ root - left - right
# - **inorder** $\implies$ left - root - right
# - **postorder** $\implies$ left - right - root
#
# It is possible to reconstruct a tree uniquely when given the **inorder** traversal and one other (including level order). We can do this by observing some properties of the lists of nodes that come out of the traversals.
#
# - In a **preorder** and **level-order** traversal, the root element is always printed *first*.
# - In a **postorder** traversal, the root element is always printed *last*.
# - In an **inorder** traversal, the root node - once located (via one of the other traversals) - separates the traversal of the left subtree from that of the right subtree.
#
# Using these simple facts, we can recursively build or copy a tree given an inorder traversal and one other.
#
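# Level order is mentioned above but not implemented in the `Node` class; a minimal breadth-first sketch (works with any node object exposing `key`, `left`, and `right` attributes):

```python
from collections import deque

def level_order(root):
    """Visit nodes level by level, left to right, returning their keys."""
    out, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        out.append(node.key)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return out
```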
# +
def build_tree_from_inorder_and_preorder(inorder, preorder):
    rootval = preorder[0]
    ind = inorder.index(rootval)  # the root's position splits inorder into left and right subtrees
    curNode = Node(rootval)
    n = len(inorder)
    # if inorder[0] is not the root, recursively build the left subtree
    if ind > 0:
        curNode.left = build_tree_from_inorder_and_preorder(inorder[0:ind], preorder[1:1+ind])
    # if inorder[n-1] is not the root, recursively build the right subtree
    if ind < n-1:
        curNode.right = build_tree_from_inorder_and_preorder(inorder[ind+1:], preorder[1+ind:])
    return curNode
def demo_build_tree_from_inorder_and_preorder(X = [5,3,8,2,0,9,1,4,6,7]):
print("Rebuilding a binary tree from inorder and preorder traversal:")
# build tree and get inorder and preorder traversals
tree = build_binary_tree_from_array(X)
inorder, preorder, postorder = [], [], []
tree.inorder(inorder)
tree.preorder(preorder)
tree.postorder(postorder)
# build a copy of tree from inorder and preorder traversals
tree2 = build_tree_from_inorder_and_preorder(inorder,preorder)
inorder2, preorder2, postorder2 = [], [], []
tree2.inorder(inorder2)
tree2.preorder(preorder2)
tree2.postorder(postorder2)
print("verify inorder traversals match: \n ",inorder2,"vs",inorder)
print("verify preorder traversals match: \n ",preorder2,"vs",preorder)
print("verify postorder traversals match: \n ",postorder2,"vs",postorder)
    if inorder2 == inorder and preorder2 == preorder and postorder2 == postorder:
        print("Bingo! Perfect match!\n\n")
    else:
        print("Mwaa aaahhh, something went wrong\n\n")
demo_build_tree_from_inorder_and_preorder()
demo_build_tree_from_inorder_and_preorder([0,1,2])
demo_build_tree_from_inorder_and_preorder([0])
# -
| binary_trees/BinaryTrees.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Memory management utils
# Utility functions for memory management. Currently primarily for GPU.
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
# + hide_input=true
show_doc(gpu_mem_get)
# -
# [`gpu_mem_get`](/utils.mem.html#gpu_mem_get)
#
# * for gpu returns `GPUMemory(total, free, used)`
# * for cpu returns `GPUMemory(0, 0, 0)`
# * for invalid gpu id returns `GPUMemory(0, 0, 0)`
# + hide_input=true
show_doc(gpu_mem_get_all)
# -
# [`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all)
# * for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`
# * for cpu returns `[]`
#
# + hide_input=true
show_doc(gpu_mem_get_free)
# -
#
# + hide_input=true
show_doc(gpu_mem_get_free_no_cache)
# -
#
# + hide_input=true
show_doc(gpu_mem_get_used)
# -
#
# + hide_input=true
show_doc(gpu_mem_get_used_no_cache)
# -
# [`gpu_mem_get_used_no_cache`](/utils.mem.html#gpu_mem_get_used_no_cache)
# + hide_input=true
show_doc(gpu_mem_get_used_fast)
# -
# [`gpu_mem_get_used_fast`](/utils.mem.html#gpu_mem_get_used_fast)
# + hide_input=true
show_doc(gpu_with_max_free_mem)
# -
# [`gpu_with_max_free_mem`](/utils.mem.html#gpu_with_max_free_mem):
# * for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`
# * for cpu returns: `None, 0`
#
# + hide_input=true
show_doc(preload_pytorch)
# -
# [`preload_pytorch`](/utils.mem.html#preload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by CUDA context.
# + hide_input=true
show_doc(GPUMemory, title_level=4)
# -
# [`GPUMemory`](/utils.mem.html#GPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.html#gpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all).
# + hide_input=true
show_doc(b2mb)
# -
# [`b2mb`](/utils.mem.html#b2mb) is a helper utility that just does `int(bytes/2**20)`
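# For instance (a sketch of the same arithmetic, not fastai's actual implementation):

```python
def b2mb(num_bytes):
    # bytes -> whole megabytes, truncating the remainder
    return int(num_bytes / 2**20)

print(b2mb(3 * 2**20 + 1))  # 3
```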
# ## Memory Tracing Utils
# + hide_input=true
show_doc(GPUMemTrace, title_level=4)
# -
# **Arguments**:
#
# * `silent`: a shortcut to make `report` and `report_n_reset` silent w/o needing to remove those calls - this can be done from the constructor, or alternatively you can call `silent` method anywhere to do the same.
# * `ctx`: default context note in reports
# * `on_exit_report`: auto-report on ctx manager exit (default `True`)
#
# **Definitions**:
#
# * **Delta Used** is the difference between current used memory and used memory at the start of the counter.
#
# * **Delta Peaked** is the memory overhead if any. It's calculated in two steps:
# 1. The base measurement is the difference between the peak memory and the used memory at the start of the counter.
# 2. Then if delta used is positive it gets subtracted from the base value.
#
# It indicates the size of the blip.
#
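# The two-step arithmetic can be sketched as follows (the function name and sample numbers are mine; amounts in MB):

```python
def mem_deltas(start_used, current_used, peak):
    """Reproduce the Delta Used / Delta Peaked definitions above."""
    delta_used = current_used - start_used
    delta_peaked = peak - start_used       # step 1: peak relative to the start
    if delta_used > 0:                     # step 2: remove memory that stayed allocated
        delta_peaked -= delta_used
    return delta_used, delta_peaked

# e.g. start at 1000, end at 1200, spike to 1500: the blip is 300MB
print(mem_deltas(1000, 1200, 1500))  # (200, 300)
```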
# **Warning**: currently the peak memory usage tracking is implemented using a python thread, which is very unreliable, since there is no guarantee the thread will get a chance at running at the moment the peak memory is occurring (or it might not get a chance to run at all). Therefore we need pytorch to implement multiple concurrent and resettable [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/cuda.html#torch.cuda.max_memory_allocated) counters. Please vote for this [feature request](https://github.com/pytorch/pytorch/issues/16266).
#
# **Usage Examples**:
#
# Setup:
# ```
# from fastai.utils.mem import GPUMemTrace
# def some_code(): pass
# mtrace = GPUMemTrace()
# ```
#
# Example 1: basic measurements via `report` (prints) and via [`data`](/tabular.data.html#tabular.data) (returns) accessors
# ```
# some_code()
# mtrace.report()
# delta_used, delta_peaked = mtrace.data()
#
# some_code()
# mtrace.report('2nd run of some_code()')
# delta_used, delta_peaked = mtrace.data()
# ```
# `report`'s optional `subctx` argument can be helpful if you have many `report` calls and you want to understand which is which in the outputs.
#
# Example 2: measure in a loop, resetting the counter before each run
# ```
# for i in range(10):
# mtrace.reset()
# some_code()
# mtrace.report(f'i={i}')
# ```
# `reset` resets all the counters.
#
# Example 3: like example 2, but having `report` automatically reset the counters
# ```
# mtrace.reset()
# for i in range(10):
# some_code()
# mtrace.report_n_reset(f'i={i}')
# ```
#
# The tracing starts immediately upon the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object creation, and stops when that object is deleted. But it can also be `stop`ed, `start`ed manually as well.
# ```
# mtrace.start()
# mtrace.stop()
# ```
# `stop` is in particular useful if you want to **freeze** the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object and to be able to query its data on `stop` some time down the road.
#
#
# **Reporting**:
#
# In reports you can print a main context passed via the constructor:
#
# ```
# mtrace = GPUMemTrace(ctx="foobar")
# mtrace.report()
# ```
# prints:
# ```
# โณUsed Peaked MB: 0 0 (foobar)
# ```
#
# and then add subcontext notes as needed:
#
# ```
# mtrace = GPUMemTrace(ctx="foobar")
# mtrace.report('1st try')
# mtrace.report('2nd try')
#
# ```
# prints:
# ```
# โณUsed Peaked MB: 0 0 (foobar: 1st try)
# โณUsed Peaked MB: 0 0 (foobar: 2nd try)
# ```
#
# Both context and sub-context are optional, and are very useful if you sprinkle [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) in different places around the code.
#
# You can silence report calls w/o needing to remove them via constructor or `silent`:
#
# ```
# mtrace = GPUMemTrace(silent=True)
# mtrace.report() # nothing will be printed
# mtrace.silent(silent=False)
# mtrace.report() # printing resumed
# mtrace.silent(silent=True)
# mtrace.report() # nothing will be printed
# ```
#
# **Context Manager**:
#
# [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) can also be used as a context manager:
#
# Report the used and peaked deltas automatically:
#
# ```
# with GPUMemTrace(): some_code()
# ```
#
# If you wish to add context:
#
# ```
# with GPUMemTrace(ctx='some context'): some_code()
# ```
#
# The context manager uses subcontext `exit` to indicate that the report comes after the context exited.
#
# The reporting is done automatically, which is especially useful in functions due to return call:
#
# ```
# def some_func():
# with GPUMemTrace(ctx='some_func'):
# # some code
# return 1
# some_func()
# ```
# prints:
# ```
# โณUsed Peaked MB: 0 0 (some_func: exit)
# ```
# so you still get a perfect report despite the `return` call here. `ctx` is useful for specifying the *context* in case you have many of those calls through your code and you want to know which is which.
#
# And, of course, instead of doing the above, you can use [`gpu_mem_trace`](/utils.mem.html#gpu_mem_trace) decorator to do it automatically, including using the function or method name as the context. Therefore, the example below does the same without modifying the function.
#
# ```
# @gpu_mem_trace
# def some_func():
# # some code
# return 1
# some_func()
# ```
#
# If you don't wish the automatic reporting, just pass `on_exit_report=False` in the constructor:
#
# ```
# with GPUMemTrace(ctx='some_func', on_exit_report=False) as mtrace:
# some_code()
# mtrace.report("measured in ctx")
# ```
#
# or the same w/o the context note:
# ```
# with GPUMemTrace(on_exit_report=False) as mtrace: some_code()
# print(mtrace) # or mtrace.report()
# ```
#
# And, of course, you can get the numerical data (in rounded MBs):
# ```
# with GPUMemTrace() as mtrace: some_code()
# delta_used, delta_peaked = mtrace.data()
# ```
# + hide_input=true
show_doc(gpu_mem_trace)
# -
# This allows you to decorate any function or method with:
#
# ```
# @gpu_mem_trace
# def my_function(): pass
# # run:
# my_function()
# ```
# and it will automatically print the report including the function name as a context:
# ```
# โณUsed Peaked MB: 0 0 (my_function: exit)
# ```
# In the case of methods it'll print a fully qualified method, e.g.:
# ```
# โณUsed Peaked MB: 0 0 (Class.function: exit)
# ```
#
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# + hide_input=true
show_doc(GPUMemTrace.report)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.silent)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.start)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.reset)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.peak_monitor_stop)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.stop)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.report_n_reset)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.peak_monitor_func)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.data_set)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.data)
# -
#
# + hide_input=true
show_doc(GPUMemTrace.peak_monitor_start)
# -
#
# ## New Methods - Please document or move to the undocumented section
| docs_src/utils.mem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# From: https://github.com/marcotcr/checklist/blob/master/notebooks/tutorials/3.%20Test%20types,%20expectation%20functions,%20running%20tests.ipynb
# -
import spacy
nlp = spacy.load("en_core_web_sm")
# +
dataset = ['This was a very nice movie directed by <NAME>.',
'<NAME> was brilliant.',
'I hated everything about this.',
'This movie was very bad.',
'I really liked this movie.',
'just bad.',
'amazing.',
]
pdataset = list(nlp.pipe(dataset))
# -
import checklist
from checklist.editor import Editor
from checklist.perturb import Perturb
from checklist.test_types import MFT, INV, DIR
# # Invariance Test
# +
# t = Perturb.perturb(pdataset, Perturb.change_names)
t = Perturb.perturb(dataset, Perturb.add_typos)
print('\n'.join(t.data[0][:3]))
print('...')
test = INV(**t)
# -
from pattern.en import sentiment
# +
import numpy as np
def predict_proba(inputs):
    # pattern's sentiment() returns a polarity score in [-1, 1]; rescale it to [0, 1]
    p1 = np.array([(sentiment(x)[0] + 1) / 2. for x in inputs]).reshape(-1, 1)
    p0 = 1 - p1
    return np.hstack((p0, p1))
# -
# Spot-check the predictions (they come from pattern's sentiment model)
predict_proba(['good', 'bad'])
from checklist.pred_wrapper import PredictorWrapper
wrapped_pp = PredictorWrapper.wrap_softmax(predict_proba)
test.run(wrapped_pp)
test.summary()
test.visual_summary()
# # Direction Test
def add_negative(x):
phrases = ['Anyway, I thought it was bad.', 'Having said this, I hated it', 'The director should be fired.']
return ['%s %s' % (x, p) for p in phrases]
dataset[0], add_negative(dataset[0])
from checklist.expect import Expect
monotonic_decreasing = Expect.monotonic(label=1, increasing=False, tolerance=0.1)
t = Perturb.perturb(dataset, add_negative)
test = DIR(**t, expect=monotonic_decreasing)
test.run(wrapped_pp)
test.summary()
test.visual_summary()
| future-lessons/checklist-testing/RunTests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spam or Legit
# +
import pandas as pd
import numpy as np
import spacy
import sys
import warnings
from gensim.corpora import Dictionary
from importlib import reload
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, GRU, Flatten
from tensorflow.keras.layers import Bidirectional, GlobalMaxPool1D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Convolution1D
from tensorflow.keras import initializers, regularizers, constraints, optimizers, layers
from sklearn.metrics import f1_score
from tensorflow.keras.models import load_model
# +
warnings.filterwarnings('ignore')
if sys.version[0] == '2':
reload(sys)
sys.setdefaultencoding("utf-8")
nlp = spacy.load('en')
# -
df = pd.read_csv('../datasets/spam.csv', delimiter=',', encoding='latin-1')
df.head()
df.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], axis=1, inplace=True)
df.head()
df = df[df.v1 != 'unsup']
df['v1'] = df['v1'].map({'ham': 0, 'spam': 1})
df.head()
df.v2.apply(lambda x: len(x.split(" "))).mean()
# +
MAX_SEQUENCE_LEN = 12
UNK = 'UNK'
PAD = 'PAD'
def text_to_id_list(text, dictionary):
return [dictionary.token2id.get(tok, dictionary.token2id.get(UNK))
for tok in text_to_tokens(text)]
def texts_to_input(texts, dictionary):
return sequence.pad_sequences(
list(map(lambda x: text_to_id_list(x, dictionary), texts)), maxlen=MAX_SEQUENCE_LEN,
padding='post', truncating='post', value=dictionary.token2id.get(PAD))
def text_to_tokens(text):
return [tok.text.lower() for tok in nlp.tokenizer(text)
if not (tok.is_punct or tok.is_quote)]
def build_dictionary(texts):
    d = Dictionary(text_to_tokens(t) for t in texts)
d.filter_extremes(no_below=3, no_above=1)
d.add_documents([[UNK, PAD]])
d.compactify()
return d
# -
dictionary = build_dictionary(df.v2)
dictionary.save('../utils/dictionary-spam')
len(dictionary)
df.head()
# +
from sklearn.model_selection import train_test_split
df_train, df_test = train_test_split(df, test_size=.1, random_state=31)
# -
x_train = texts_to_input(df_train.v2, dictionary)
x_test = texts_to_input(df_test.v2, dictionary)
x_train
x_test
y_train = df_train['v1']
y_test = df_test['v1']
x_train[:5]
# +
max_features = 1000
maxlen = 12
embed_size = 12
model = Sequential()
model.add(Embedding(len(dictionary), embed_size))
model.add(Bidirectional(LSTM(32, return_sequences = True)))
model.add(GlobalMaxPool1D())
model.add(Dense(20, activation="relu"))
model.add(Dropout(0.05))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 50
epochs = 4
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.2)
# -
prediction = model.predict(x_test)
y_pred = (prediction > 0.5)
print('F1-score: {0}'.format(f1_score(y_test, y_pred)))
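For reference, the binary F1 score printed above can be computed by hand from confusion counts (a self-contained sketch equivalent to `sklearn.metrics.f1_score` for 0/1 labels; `f1_binary` is a hypothetical helper):

```python
# F1 from true positives, false positives, and false negatives (binary case).
def f1_binary(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_binary([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # tp=2, fp=1, fn=1 -> 0.666...
```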
model.save('../models/spam/spam_model.h5')
| notebooks/spam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
import warnings
warnings.filterwarnings('ignore')
# + pycharm={"name": "#%%\n"}
data = pd.read_csv("../../Datas/gz2_hart16.csv")
df = data.copy()
# + pycharm={"name": "#%%\n"}
from IPython.core.display import display
def checkIfHasRowIncompatible(dataset, rowName1, rowName2):
    # combine the two boolean masks instead of chained indexing,
    # which misaligns indices and raises warnings
    return dataset[(dataset[rowName1] == 1) & (dataset[rowName2] == 1)].shape[0] > 0
def VisualiseDataset(dataset):
    print("Dataset head:\n----------\n")
    display(dataset.head(10))
    print("Dataset type information:\n----------\n")
    display(dataset.info())
    print("\n----------\nDataset shape:")
    display(dataset.shape)
    print("Dataset summary statistics:\n----------\n")
    display(dataset.describe())
    print("Percentage of missing values:\n----------\n")
    display((dataset.isna().sum() / dataset.shape[0]).sort_values())
    print("Value types:\n----------\n")
    display(dataset.dtypes.value_counts().plot.pie())
    print("Classes:\n-------------\n")
    display(dataset["gz2_class"].unique())
# + pycharm={"name": "#%%\n"}
VisualiseDataset(df)
# + pycharm={"name": "#%%\n"}
regexMap = {
r'^SB?a$': 'S0',
r'^Sa[\d,+,?]t$': 'Sa',
r'^Sb[\d,+,?]m$': 'Sb',
r'^Sc[\d,+,?]l$': 'Sc',
r'^SBa[\d,+,?]t$': 'SBa',
r'^SBb[\d,+,?]m$': 'SBb',
r'^SBc[\d,+,?]l$': 'SBc',
r'^Er$': 'E0',
r'^Ei$': 'E3-5',
r'^Ec$': 'E7',
}
# + pycharm={"name": "#%%\n"}
dfHubble = df.copy()  # copy so the mapping below does not mutate df in place
# + pycharm={"name": "#%%\n"}
def classToHubble(oldClass):
for key in regexMap:
if re.match(key, oldClass): return regexMap[key]
dfHubble['gz2_class'] = dfHubble['gz2_class'].map(classToHubble)
dfHubble.head()
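The mapping logic can be sanity-checked on a few sample class strings. This is a self-contained sketch using a subset of `regexMap` and a standalone copy of the matching function (`to_hubble` and `sample_map` are illustrative names, not part of the pipeline):

```python
import re

# Subset of the regexMap defined above, for illustration.
sample_map = {
    r'^SB?a$': 'S0',
    r'^Er$': 'E0',
    r'^Sa[\d,+,?]t$': 'Sa',
}

def to_hubble(old_class, regex_map):
    # Return the first Hubble class whose pattern matches; None otherwise.
    for pattern, hubble in regex_map.items():
        if re.match(pattern, old_class):
            return hubble
    return None

print(to_hubble('SBa', sample_map))   # -> 'S0'
print(to_hubble('Er', sample_map))    # -> 'E0'
print(to_hubble('Sa2t', sample_map))  # -> 'Sa'
print(to_hubble('xyz', sample_map))   # -> None
```

Classes that match no pattern map to `None`, which is why the notebook drops null `gz2_class` rows afterwards.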
# + pycharm={"name": "#%%\n"}
dfHubble = dfHubble[dfHubble['gz2_class'].notnull()]
# + pycharm={"name": "#%%\n"}
VisualiseDataset(dfHubble)
# + pycharm={"name": "#%%\n"}
pd.value_counts(dfHubble['gz2_class']).plot.bar()
# + pycharm={"name": "#%%\n"}
pd.value_counts(dfHubble['gz2_class'])
# + pycharm={"name": "#%%\n"}
dfHubble = dfHubble.rename(columns={"dr7objid": "OBJID", "ra": "RA", "dec":"DEC", "gz2_class":"TYPE"})
# + pycharm={"name": "#%%\n"}
dfHubble.to_csv('../../Datas/GZ2Hubble.csv', index=False)
| Pre-processing/GZ2/Galaxy_zoo_2_datas_pre-processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Build Clause Clusters with Book Boundaries
from tf.app import use
bhsa = use('bhsa')
F, E, T, L = bhsa.api.F, bhsa.api.E, bhsa.api.T, bhsa.api.L
from pathlib import Path
# +
# divide texts evenly into slices of 50 clauses
def cluster_clauses(N):
clusters = []
for book in F.otype.s('book'):
clauses = list(L.d(book,'clause'))
cluster = []
        for i, clause in enumerate(clauses, 1):
cluster.append(clause)
# create cluster of 50
if (i and i % N == 0):
clusters.append(cluster)
cluster = []
# deal with final uneven clusters
elif i == len(clauses):
if (len(cluster) / N) < 0.6:
clusters[-1].extend(cluster) # add to last cluster
else:
clusters.append(cluster) # keep as cluster
return {
clause:i+1 for i,clust in enumerate(clusters)
for clause in clust
}
# -
cluster_50 = cluster_clauses(50)
cluster_10 = cluster_clauses(10)
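The chunking rule in `cluster_clauses` — fixed-size slices, with a short final remainder folded into the previous chunk when it is under 60% of the target size — can be sketched on a plain list (hypothetical helper and data, same rule as above):

```python
# Sketch of the chunk-and-merge rule: slices of size n, with a final short
# remainder merged into the previous chunk when it is < 60% of n.
def chunk_with_merge(items, n):
    chunks = [items[i:i + n] for i in range(0, len(items), n)]
    if len(chunks) > 1 and len(chunks[-1]) / n < 0.6:
        chunks[-2].extend(chunks.pop())  # fold the short tail into the previous chunk
    return chunks

print([len(c) for c in chunk_with_merge(list(range(23)), 10)])  # -> [10, 13]
print([len(c) for c in chunk_with_merge(list(range(26)), 10)])  # -> [10, 10, 6]
```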
# ## Map Book-names to clause clusters
# +
# map book names for visualizing
# map grouped book names
book_map = {
'Genesis':'Gen',
'Exodus':'Exod',
'Leviticus':'Lev',
'Numbers':'Num',
'Deuteronomy':'Deut',
'Joshua':'Josh',
'Judges':'Judg',
'1_Samuel':'Sam',
'2_Samuel':'Sam',
'1_Kings':'Kgs',
'2_Kings':'Kgs',
'Isaiah':'Isa',
'Jeremiah':'Jer',
'Ezekiel':'Ezek',
# 'Hosea':'Hos',
# 'Joel':'Joel',
# 'Amos':'Amos',
# 'Obadiah':'Obad',
# 'Jonah':'Jonah',
# 'Micah':'Mic',
# 'Nahum':'Nah',
# 'Habakkuk':'Hab',
# 'Zephaniah':'Zeph',
# 'Haggai':'Hag',
# 'Zechariah':'Zech',
# 'Malachi':'Mal',
'Psalms':'Pss',
'Job':'Job',
'Proverbs':'Prov',
# 'Ruth':'Ruth',
# 'Song_of_songs':'Song',
# 'Ecclesiastes':'Eccl',
# 'Lamentations':'Lam',
# 'Esther':'Esth',
# 'Daniel':'Dan',
# 'Ezra':'Ezra',
# 'Nehemiah':'Neh',
'1_Chronicles':'Chr',
'2_Chronicles':'Chr'
}
# book of 12
for book in ('Hosea', 'Joel', 'Amos', 'Obadiah',
'Jonah', 'Micah', 'Nahum', 'Habakkuk',
'Zephaniah', 'Haggai', 'Zechariah',
'Malachi'):
book_map[book] = 'Twelve'
# Megilloth
for book in ('Ruth', 'Lamentations', 'Ecclesiastes',
'Esther', 'Song_of_songs'):
book_map[book] = 'Megil'
# Dan-Neh
for book in ('Ezra', 'Nehemiah', 'Daniel'):
book_map[book] = 'Dan-Neh'
# +
clustertypes = [cluster_50, cluster_10]
bookmaps = []
for clust in clustertypes:
bookmap = {'Gen':1}
prev_book = 'Gen'
for cl in clust:
book = T.sectionFromNode(cl)[0]
mbook = book_map.get(book, book)
if prev_book != mbook:
bookmap[mbook] = clust[cl]
prev_book = mbook
bookmaps.append(bookmap)
# -
# # Export
import json
# +
data = {
'50': {
'clusters': cluster_50,
'bookbounds': bookmaps[0],
},
'10': {
'clusters': cluster_10,
'bookbounds': bookmaps[1]
},
}
outpath = Path('/Users/cody/github/CambridgeSemiticsLab/time_collocations/results/cl_clusters')
if not outpath.exists():
outpath.mkdir()
with open(outpath.joinpath('clusters.json'), 'w') as outfile:
json.dump(data, outfile)
# -
| workflow/notebooks/make_data/clause_clusters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# *<NAME>*
# last modified: 03/31/2014
# <hr>
# I am really looking forward to your comments and suggestions to improve and extend this tutorial! Just send me a quick note
# via Twitter: [@rasbt](https://twitter.com/rasbt)
# or Email: [<EMAIL>](mailto:<EMAIL>)
# <hr>
# ### Problem Category
# - Statistical Pattern Recognition
# - Supervised Learning
# - Parametric Learning
# - Bayes Decision Theory
# - Univariate data
# - 2-class problem
# - different variances
# - Gaussian model (2 parameters)
# - No Risk function
# <hr>
# <p><a name="sections"></a>
# <br></p>
#
# # Sections
#
#
# <p>• <a href="#given">Given information</a><br>
# • <a href="#deriving_db">Deriving the decision boundary</a><br>
# • <a href="#plotting_db">Plotting the class conditional densities, posterior probabilities, and decision boundary</a><br>
# • <a href="#classify_rand">Classifying some random example data</a><br>
# • <a href="#emp_err">Calculating the empirical error rate</a><br>
#
#
#
#
#
#
#
#
# <hr>
# <p><a name="given"></a>
# <br></p>
#
# ## Given information:
#
# [<a href="#sections">back to top</a>] <br>
#
#
#
# #### Model: continuous univariate normal (Gaussian) model for the class-conditional densities
#
#
# $ p(x | \omega_j) \sim N(\mu_j, \sigma_j^2) $
#
# $ p(x | \omega_j) = \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_j}{\sigma_j}\bigg)^2 \bigg] } $
#
#
# #### Prior probabilities:
#
#
# $ P(\omega_1) = P(\omega_2) = 0.5 $
#
# #### Variances of the sample distributions
#
# $ \sigma_1^2 = 4, \quad \sigma_2^2 = 1 $
#
# #### Means of the sample distributions
#
# $ \mu_1 = 4, \quad \mu_2 = 10 $
#
#
# <br>
# <p><a name="deriving_db"></a>
# <br></p>
#
# ## Deriving the decision boundary
# [<a href="#sections">back to top</a>] <br>
# ### Bayes' Rule:
#
#
# $ P(\omega_j|x) = \frac{p(x|\omega_j) * P(\omega_j)}{p(x)} $
#
# ### Bayes' Decision Rule:
#
# Decide $ \omega_1 $ if $ P(\omega_1|x) > P(\omega_2|x) $ else decide $ \omega_2 $.
# <br>
#
#
# \begin{equation}
# \begin{aligned}
# &\Rightarrow \frac{p(x|\omega_1) * P(\omega_1)}{p(x)} > \frac{p(x|\omega_2) * P(\omega_2)}{p(x)}
# \end{aligned}
# \end{equation}
#
#
# We can drop $ p(x) $ since it is just a scale factor.
#
# $ \Rightarrow p(x|\omega_1) * P(\omega_1) > p(x|\omega_2) * P(\omega_2) $
#
# $ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > \frac{P(\omega_2)}{P(\omega_1)} $
#
# $ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > \frac{0.5}{0.5} $
#
# $ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > 1 $
#
# $ \Rightarrow \frac{1}{\sqrt{2\pi\sigma_1^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_1}{\sigma_1}\bigg)^2 \bigg] } > \frac{1}{\sqrt{2\pi\sigma_2^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_2}{\sigma_2}\bigg)^2 \bigg] } \quad \bigg| \quad ln $
#
#
# $ \Rightarrow ln(1) - ln\bigg({\sqrt{2\pi\sigma_1^2}}\bigg) -\frac{1}{2}\bigg( \frac{x-\mu_1}{\sigma_1}\bigg)^2 > ln(1) - ln\bigg({{\sqrt{2\pi\sigma_2^2}}}\bigg) -\frac{1}{2}\bigg( \frac{x-\mu_2}{\sigma_2}\bigg)^2 \quad \bigg| \quad \sigma_1^2 = 4, \quad \sigma_2^2 = 1,\quad \mu_1 = 4, \quad \mu_2 = 10 $
#
# $ \Rightarrow -ln({\sqrt{2\pi4}}) -\frac{1}{2}\bigg( \frac{x-4}{2}\bigg)^2 > -ln({{\sqrt{2\pi}}}) -\frac{1}{2}(x-10)^2 $
#
# $ \Rightarrow -\frac{1}{2} ln({2\pi}) - ln(2) -\frac{1}{8} (x-4)^2 > -\frac{1}{2}ln(2\pi) -\frac{1}{2}(x-10)^2 \quad \bigg| \; \times\; 2 $
#
# $ \Rightarrow -ln({2\pi}) - 2ln(2) - \frac{1}{4}(x-4)^2 > -ln(2\pi) - (x-10)^2 \quad \bigg| \; + ln(2\pi) $
#
# $ \Rightarrow -ln(4) - \frac{1}{4}(x-4)^2 > -(x-10)^2 \quad \big| \; \times \; 4 $
#
# $ \Rightarrow -4ln(4) - (x-4)^2 > - 4(x-10)^2 $
#
# $ \Rightarrow -8ln(2) - x^2 + 8x - 16 > - 4x^2 + 80x - 400 $
#
# $ \Rightarrow 3x^2 - 72x + 384 -8ln(2) > 0 $
#
# $ \Rightarrow x < 7.775 \quad or \quad x > 16.225 $
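As a quick numeric check of the boundary values, the roots of the quadratic $3x^2 - 72x + 384 - 8\ln(2) = 0$ can be found with NumPy (a verification sketch, not part of the original derivation):

```python
import numpy as np

# Roots of 3x^2 - 72x + (384 - 8 ln 2) = 0 give the two decision boundaries.
roots = np.sort(np.roots([3, -72, 384 - 8 * np.log(2)]))
print(roots)  # approximately [ 7.775, 16.225]
```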
# <p><a name="plotting_db"></a>
# <br></p>
#
# ## Plotting the class conditional densities, posterior probabilities, and decision boundary
#
# [<a href="#sections">back to top</a>] <br>
# +
# %pylab inline
import numpy as np
from matplotlib import pyplot as plt
def pdf(x, mu, sigma):
"""
Calculates the normal distribution's probability density
function (PDF).
"""
    term1 = 1.0 / ( np.sqrt(2*np.pi) * sigma )
term2 = np.exp( -0.5 * ( (x-mu)/sigma )**2 )
return term1 * term2
# generating some sample data
x = np.arange(-100, 100, 0.05)
# probability density functions
pdf1 = pdf(x, mu=4, sigma=2)  # sigma_1 = sqrt(sigma_1^2) = sqrt(4) = 2
pdf2 = pdf(x, mu=10, sigma=1)
# Class conditional densities (likelihoods)
plt.plot(x, pdf1)
plt.plot(x, pdf2)
plt.title('Class conditional densities (likelihoods)')
plt.ylabel('p(x)')
plt.xlabel('random variable x')
plt.legend(['p(x|w_1) ~ N(4,4)', 'p(x|w_2) ~ N(10,1)'], loc='upper left')
plt.ylim([0,0.5])
plt.xlim([-15,20])
plt.show()
# +
def posterior(likelihood, prior):
"""
Calculates the posterior probability (after Bayes Rule) without
the scale factor p(x) (=evidence).
"""
return likelihood * prior
# probability density functions
posterior1 = posterior(pdf(x, mu=4, sigma=2), 0.5)  # sigma_1 = sqrt(4) = 2
posterior2 = posterior(pdf(x, mu=10, sigma=1), 0.5)
# Class conditional densities (likelihoods)
plt.plot(x, posterior1)
plt.plot(x, posterior2)
plt.title('Posterior Probabilities w. Decision Boundaries')
plt.ylabel('P(w)')
plt.xlabel('random variable x')
plt.legend(['P(w_1|x)', 'P(w_2|x)'], loc='upper left')
plt.ylim([0,0.25])
plt.xlim([-15,20])
plt.axvline(7.775, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.axvline(16.225, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.annotate('R2', xy=(10, 0.2), xytext=(10, 0.22))
plt.annotate('R1', xy=(4, 0.2), xytext=(4, 0.22))
plt.annotate('R1', xy=(17, 0.2), xytext=(17.5, 0.22))
plt.show()
# -
# <p><a name="classify_rand"></a>
# <br></p>
#
# ## Classifying some random example data
#
# [<a href="#sections">back to top</a>] <br>
# +
# Parameters
mu_1 = 4
mu_2 = 10
sigma_1_sqr = 4
sigma_2_sqr = 1
# Generating 20 random samples drawn from a Normal Distribution for class 1 & 2
x1_samples = sigma_1_sqr**0.5 * np.random.randn(20) + mu_1
x2_samples = sigma_2_sqr**0.5 * np.random.randn(20) + mu_2
y = [0 for i in range(20)]
# Plotting sample data with a decision boundary
plt.scatter(x1_samples, y, marker='o', color='green', s=40, alpha=0.5)
plt.scatter(x2_samples, y, marker='^', color='blue', s=40, alpha=0.5)
plt.title('Classifying random example data from 2 classes')
plt.ylabel('P(x)')
plt.xlabel('random variable x')
plt.legend(['w_1', 'w_2'], loc='upper right')
plt.ylim([-0.1,0.1])
plt.xlim([0,20])
plt.axvline(7.775, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.axvline(16.225, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.annotate('R2', xy=(10, 0.03), xytext=(10, 0.03))
plt.annotate('R1', xy=(4, 0.03), xytext=(4, 0.03))
plt.annotate('R1', xy=(17, 0.03), xytext=(17.5, 0.03))
plt.show()
# -
# <p><a name="emp_err"></a>
# <br></p>
#
# ## Calculating the empirical error rate
#
# [<a href="#sections">back to top</a>] <br>
# +
w1_as_w2, w2_as_w1 = 0, 0
for x1,x2 in zip(x1_samples, x2_samples):
if x1 > 7.775 and x1 < 16.225:
w1_as_w2 += 1
    if x2 <= 7.775 or x2 >= 16.225:
w2_as_w1 += 1
emp_err = (w1_as_w2 + w2_as_w1) / float(len(x1_samples) + len(x2_samples))
print('Empirical Error: {}%'.format(emp_err * 100))
# -
| tests/others/2_stat_superv_parametric.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [Tic-Tac-Toe Checker](https://www.codewars.com/kata/tic-tac-toe-checker/python)
import numpy as np
# Sample board layout
board = [[0, 0, 1], [0, 1, 2], [2, 1, 0]]
def isSolved(board):
win_check = None
board = np.array(board)
neg_diag = [board[i][i] for i in list(range(3))]
pos_diag = [board[::-1][i][i] for i in list(range(3))]
if any(set(row) == {1} for row in board) or \
any(set(col) == {1} for col in board.T) or \
all(i == 1 for i in neg_diag) or \
all(i == 1 for i in pos_diag):
win_check = 1
elif any(set(row) == {2} for row in board) or \
any(set(col) == {2} for col in board.T) or \
all(i == 2 for i in neg_diag) or \
all(i == 2 for i in pos_diag):
win_check = 2
elif any(board.flatten() == 0):
win_check = -1
else:
win_check = 0
return win_check
isSolved(board)
# # [Are they the "same"?](https://www.codewars.com/kata/are-they-the-same/train/python)
a = [121, 144, 19, 161, 19, 144, 19, 11]
b = [121, 14641, 20736, 361, 25921, 361, 20736, 361]
def comp(array1, array2):
try:
return sorted(x**2 for x in array1) == sorted(array2)
    except TypeError:
return False
comp(a, b)
# # [To square(root) or not to square(root)](https://www.codewars.com/kata/to-square-root-or-not-to-square-root/train/python)
import numpy as np
a = [4, 3, 9, 7, 2, 1 ]
def square_or_square_root(arr):
return [int(np.sqrt(x)) if np.sqrt(x) % 1 == 0 else x ** 2 for x in arr]
square_or_square_root(a)
# # [Mexican Wave](https://www.codewars.com/kata/mexican-wave/python)
def wave(s):
    result = []
    for i in range(len(s)):
        if s[i] != ' ':
            result.append(s[:i] + s[i].upper() + s[i+1:])
    return result
wave(' herp derp ')
# # [Word a10n (abbreviation)](https://www.codewars.com/kata/word-a10n-abbreviation/python)
def abbreviate(s):
chars = ''
if s.isalpha() and len(s) >= 4:
return s[0] + str(len(s)-2) + s[-1]
elif len(s) <= 4:
return s
else:
for i in range(len(s)):
if s[i].isalpha():
chars += s[i]
else:
word = chars
chars = ''
return abbreviate(word) + s[i] + abbreviate(s[i+1:])
abbreviate('elephant-rides are really fun!')
abbreviate('elephant-ride')
abbreviate('?internationalization!')
# # [Dashatize it](https://www.codewars.com/kata/dashatize-it/train/python)
def dashatize(num):
if type(num) is int:
str_num = str(abs(num))
new = ''
for i in str_num:
if int(i) % 2 == 1:
new += '-' + i + '-'
else:
new += i
new = new.strip('-').replace('--', '-')
return new
elif num is None:
return 'None'
dashatize(274)
dashatize(5311)
dashatize(86320)
dashatize(974302)
dashatize(None)
dashatize(-567)
# # [Last digit of a large number](https://www.codewars.com/kata/last-digit-of-a-large-number/train/python)
def last_digit(n1, n2):
return pow(n1 % 10, n2, 10)
last_digit(2**200, 2**300)
# # [Data Reverse](https://www.codewars.com/kata/data-reverse/python)
def data_reverse(data):
length = 8
if len(data) == length:
return data
elif len(data) > length:
start = data[:-length]
end = data[-length:]
return end + data_reverse(start)
data1 = [1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,0,1,0,1,0,1,0]
data2 = [1,0,1,0,1,0,1,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1]
data_reverse(data1)
data_reverse(data2)
# # [Statistics for an Athletic Association](https://www.codewars.com/kata/statistics-for-an-athletic-association/python)
import functools
import numpy as np
# +
def convert(int_to_str):
secs = int_to_str % 60
mins = int_to_str // 60
hours = mins // 60
if mins >= 60:
mins -= hours * 60
return '|'.join([str(t).zfill(2) for t in [hours, mins, secs]])
def stat(strg):
if strg == '':
return ''
else:
splt_strg = strg.split(', ')
splt_time = [t.split('|') for t in splt_strg]
int_times = [[int(i) for i in time] for time in splt_time]
totalsecs = [functools.reduce(lambda a, b: (a*60) + b, lst) for lst in int_times]
rnge = convert(max(totalsecs) - min(totalsecs))
avg = convert(int(np.mean(totalsecs)))
medn = convert(int(np.median(totalsecs)))
return 'Range: {} Average: {} Median: {}'.format(rnge, avg, medn)
# -
s = "01|15|59, 1|47|16, 01|17|20, 1|32|34, 2|17|17"
stat(s)
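The `convert` helper above is equivalent to two `divmod` steps, which avoids the manual minutes adjustment (a compact alternative sketch; `convert_divmod` is a hypothetical name):

```python
# hh|mm|ss from a total-seconds count via two divmod steps.
def convert_divmod(total_secs):
    mins, secs = divmod(total_secs, 60)
    hours, mins = divmod(mins, 60)
    return '|'.join(str(t).zfill(2) for t in (hours, mins, secs))

print(convert_divmod(4760))  # 4760 s = 1 h 19 min 20 s -> '01|19|20'
```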
# # [Format a string of names like 'Bart, Lisa & Maggie'](https://www.codewars.com/kata/format-a-string-of-names-like-bart-lisa-and-maggie/train/python)
def namelist(names):
length = len(names)
if names == []:
return ''
elif length == 1:
return names[0]['name']
elif length == 2:
return names[0]['name'] + ' & ' + names[1]['name']
elif length > 2:
return names[0]['name'] + ', ' + namelist(names[1:])
n3 = [ {'name': 'Bart'}, {'name': 'Lisa'}, {'name': 'Maggie'} ]
namelist(n3)
# # [+1 Array](https://www.codewars.com/kata/plus-1-array/python)
def up_array(arr):
if type(arr) is list and len(arr) > 0 and all(10 > i >= 0 for i in arr):
return [int(s) for s in str(int(''.join([str(i) for i in arr])) + 1)]
else:
return None
up_array([10, 4, 32, 1])
up_array([1,-9])
up_array([2,3,9])
# # [Rot13](https://www.codewars.com/kata/rot13-1/python)
# +
import string
from codecs import encode as _dont_use_this_
def rot13(message):
new = ''
for c in message:
if c.isalpha() and c.islower():
new += string.ascii_lowercase[(string.ascii_lowercase.find(c)+13)%26:\
((string.ascii_lowercase.find(c)+13)%26)+1]
elif c.isalpha() and c.isupper():
new += string.ascii_uppercase[(string.ascii_uppercase.find(c)+13)%26:\
((string.ascii_uppercase.find(c)+13)%26)+1]
else:
new += c
return new
# -
rot13('test')
rot13('Test')
# # [Tribonacci Sequence](https://www.codewars.com/kata/tribonacci-sequence/train/python)
def tribonacci(signature, n):
tribs = []
def trib():
yield signature[0]
yield signature[1]
yield signature[2]
l = [signature[0], signature[1], signature[2]]
while True:
l = [l[-2], l[-1], sum(l[-3:])]
yield l[-1]
for i, t in enumerate(trib()):
if i == n:
break
tribs.append(t)
return tribs
tribonacci([1, 1, 2], 10)
| codewars/python/my_progress/2018/2018_11_python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Pandas Overview
# + [markdown] deletable=true editable=true
# * Data analysis package for use with Python
# * Data structures are the fundamental elements
# * 1D - Series
# * 2D - DataFrame
# * Easy to work with labeled (or relational) data in SQL tables or Excel spreadsheets
# * Documentation: http://pandas.pydata.org/pandas-docs/version/0.19.1/
# + [markdown] deletable=true editable=true
# # Import the Pandas Package
# + deletable=true editable=true
import pandas as pd
from pandas import Series, DataFrame
# -
# # Series
# A series is a 1-Dimensional object similar to an array. There will be an array of data labels corresponding to an array of data.
# #### Create series1 and display its indices and values:
series1 = Series([1,2,3,4,5])
series1.name = 'MyFirstSeries'
series1.index.name = 'Indx'
series1
series1.index
series1.values
# #### Create series2 with custom indices:
series2 = Series([10,20,30], index=['a','b','c'])
series2
# #### Series indexing and operations
series2[0] == series2['a'] #check value of an index
series1[series1 > 3] #get values greater than 3
series2 / 2 #scalar division of a series
series2.isnull() #check for nulls (multiple)
pd.isnull(series2)
List = ['c','a','b'] #get series values by passing in a List
series2[List]
series1 + series2
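`series1 + series2` above yields all-`NaN` because the integer and string indices never align. With partially overlapping indices, only shared labels produce values (a small self-contained illustration with hypothetical series):

```python
import pandas as pd

s_a = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s_b = pd.Series([10, 20], index=['b', 'c'])

# Addition aligns on index labels; 'a' has no partner in s_b, so it becomes NaN.
print(s_a + s_b)  # a -> NaN, b -> 12.0, c -> 23.0
```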
# + [markdown] deletable=true editable=true
# # Import DataStructure from File
# -
df = pd.read_csv("data1.csv", header=None) #import csv file
# + deletable=true editable=true
df.columns = ["ID","Name", "Birthday"]
# + deletable=true editable=true
df.head() #view top 5 lines
# + deletable=true editable=true
df.tail(1) #view last line
# + [markdown] deletable=true editable=true
# # Data Structure Indexing
# + deletable=true editable=true
df['Name'] #index based on column name (multiple)
df.Name
# -
df.iloc[0] #index based on row number (.ix is removed in modern pandas)
df[df['ID'] < 60] #index based on values
# #### Create new column w/lambda function which adds 1 to each row in 'ID'
df['ID+1'] = df.apply(lambda row: row['ID'] + 1, axis=1)
df
| pandas/pandas101.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv('/home/atrides/Desktop/R/statistics_with_Python/04_Exploring_Data_with_Graphs/Data_Files/ChickFlick.dat', sep='\t')
print(data.head())
_ = sns.barplot(x='film', y='arousal', data=data)
plt.show()
_ = sns.barplot(x='film', y='arousal', data=data, hue='gender')
plt.show()
| Python/statistics_with_Python/04_Exploring_Data_with_Graphs/Markdown_notebook/06_summaryBarChart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## the problem
# Even though error rates are low, creating transition matrices from predicted labels gives very different results from the same matrices created from ground truth labels.
#
# Why?
#
# `vak.core.predict` currently does not use the same function that `vak.core.learncurve.test` uses to find segments from predicted timebin labels. The `vak.core.predict` function is more computationally expensive because it finds times of onsets and offsets, while the `vak.core.learncurve.test` function just finds wherever labels change and returning the first label after each change point (which will be the same for the rest of the segment).
#
# So worst case scenario would be if those functions give different results.
# There are tests for this already but maybe they are missing something that only emerges from bigger datasets.
# ## load a network and get predictions
# you can ignore most of this code and scroll to comments below
# +
from configparser import ConfigParser
from glob import glob
import json
import os
from pathlib import Path
import shutil
import joblib
import numpy as np
import tensorflow as tf
import tqdm
import vak
# -
VDS_PATH = Path(
'/home/nickledave/Documents/data/BFSongRepository/vak/gy6or6/'
)
train_vds_path = str(VDS_PATH.joinpath('_prep_190726_153000.train.vds.json'))
# +
train_vds = vak.Dataset.load(json_fname=train_vds_path)
if train_vds.are_spects_loaded() is False:
train_vds = train_vds.load_spects()
X_train = train_vds.spects_list()
X_train = np.concatenate(X_train, axis=1)
Y_train = train_vds.lbl_tb_list()
Y_train = np.concatenate(Y_train)
# transpose so rows are time bins
X_train = X_train.T
n_classes = len(train_vds.labelmap)
print(n_classes)
# -
TWEETYNET_VDS_PATH = Path('/home/nickledave/Documents/repos/tweetynet/data/BFSongRepository/gy6or6/vds')
test_vds_path = list(TWEETYNET_VDS_PATH.glob('*test.vds.json'))[0]
num_replicates = 4
train_set_durs = [60, 120, 480]
# +
test_vds = vak.Dataset.load(json_fname=test_vds_path)
if test_vds.are_spects_loaded() is False:
test_vds = test_vds.load_spects()
if test_vds.labelmap != train_vds.labelmap:
raise ValueError(
f'labelmap of test set, {test_vds.labelmap}, does not match labelmap of training set, '
f'{train_vds.labelmap}'
)
def unpack_test():
"""helper function because we want to get back test set unmodified every time we go through
main loop below, without copying giant arrays"""
X_test = test_vds.spects_list()
X_test = np.concatenate(X_test, axis=1)
# transpose so rows are time bins
X_test = X_test.T
Y_test = test_vds.lbl_tb_list()
Y_test = np.concatenate(Y_test)
return X_test, Y_test
# just get X_test to make sure it has the right shape
X_test, _ = unpack_test()
if X_train.shape[-1] != X_test.shape[-1]:
raise ValueError(f'Number of frequency bins in training set spectrograms, {X_train.shape[-1]}, '
f'does not equal number in test set spectrograms, {X_test.shape[-1]}.')
freq_bins = X_test.shape[-1] # number of columns
# concatenate labels into one big string
# used for Levenshtein distance + syllable error rate
Y_train_labels = [voc.annot.labels.tolist() for voc in train_vds.voc_list]
Y_train_labels_for_lev = ''.join([chr(lbl) if type(lbl) is int else lbl
for labels in Y_train_labels for lbl in labels])
Y_test_labels = [voc.annot.labels.tolist() for voc in test_vds.voc_list]
Y_test_labels_for_lev = ''.join([chr(lbl) if type(lbl) is int else lbl
for labels in Y_test_labels for lbl in labels])
replicates = range(1, num_replicates + 1)
NETWORKS = vak.network._load()
# -
# concatenate labels into one big string
# used for Levenshtein distance + syllable error rate
Y_train_labels = [voc.annot.labels.tolist() for voc in train_vds.voc_list]
Y_train_labels_for_lev = ''.join([chr(lbl) if type(lbl) is int else lbl
for labels in Y_train_labels for lbl in labels])
Y_test_labels = [voc.annot.labels.tolist() for voc in test_vds.voc_list]
Y_test_labels_for_lev = ''.join([chr(lbl) if type(lbl) is int else lbl
for labels in Y_test_labels for lbl in labels])
config_path = str(
'/home/nickledave/Documents/repos/tweetynet/src/configs/config_BFSongRepository_gy6or6_.ini'
)
a_config = vak.config.parse_config(config_path)
train_set_dur = 60
replicate = 1
training_records_path = '/home/nickledave/Documents/data/BFSongRepository/vak/gy6or6/results_190726_153021'
spect_scaler = joblib.load(
os.path.join(training_records_path, 'spect_scaler'))
(net_name, net_config) = tuple(a_config.networks.items())[0]
X_test, Y_test = unpack_test()
# Normalize before reshaping to avoid even more convoluted array reshaping.
X_test = spect_scaler.transform(X_test)
# Notice we don't reshape Y_test
(X_test,
_,
num_batches_test) = vak.utils.data.reshape_data_for_batching(
X_test,
net_config.batch_size,
net_config.time_bins,
Y_test)
net_config_dict = net_config._asdict()
net_config_dict['n_syllables'] = n_classes
if 'freq_bins' in net_config_dict:
net_config_dict['freq_bins'] = freq_bins
results_dirname_this_net = os.path.join(training_records_path, net_name)
# +
net = NETWORKS[net_name](**net_config_dict)
# we use latest checkpoint when doing summary for learncurve, assume that's "best trained"
checkpoint_file = tf.train.latest_checkpoint(checkpoint_dir=results_dirname_this_net)
meta_file = glob(checkpoint_file + '*meta')
if len(meta_file) != 1:
raise ValueError('Incorrect number of meta files for last saved checkpoint.\n'
'For checkpoint {}, found these files:\n'
'{}'
.format(checkpoint_file, meta_file))
else:
meta_file = meta_file[0]
data_file = glob(checkpoint_file + '*data*')
if len(data_file) != 1:
raise ValueError('Incorrect number of data files for last saved checkpoint.\n'
'For checkpoint {}, found these files:\n'
'{}'
.format(checkpoint_file, data_file))
else:
data_file = data_file[0]
with tf.Session(graph=net.graph) as sess:
tf.logging.set_verbosity(tf.logging.ERROR)
net.restore(sess=sess,
meta_file=meta_file,
data_file=data_file)
for b in range(num_batches_test): # "b" is "batch number"
d = {
net.X: X_test[:, b * net_config_dict['time_bins']: (b + 1) * net_config_dict['time_bins'], :],
net.lng: [net_config_dict['time_bins']] * net_config_dict['batch_size']}
if 'Y_pred_test' in locals():
preds = sess.run(net.predict, feed_dict=d)
preds = preds.reshape(net_config_dict['batch_size'], -1)
Y_pred_test = np.concatenate((Y_pred_test, preds),
axis=1)
else:
Y_pred_test = sess.run(net.predict, feed_dict=d)
Y_pred_test = Y_pred_test.reshape(net_config_dict['batch_size'], -1)
# again get rid of zero padding predictions
Y_pred_test = Y_pred_test.ravel()[:Y_test.shape[0], np.newaxis]
test_err = np.sum(Y_pred_test != Y_test) / Y_test.shape[0]
# -
# ## okay, now look at predictions -- does `vak.test` output match `vak.predict`?
# We make sure `Y_pred_test` is an array.
Y_pred_test
Y_test_lbl_tb_list = test_vds.lbl_tb_list()
# Get the lengths of each of the individual labeled timebins vectors for each spectrogram, so we can split `Y_pred_test` up into vectors of the same sizes below.
Y_test_lens = [arr.shape for arr in Y_test_lbl_tb_list]
# But before we split them up, answer the question we asked above:
# how different is output of `lbl_tb2segments` (used by `vak.core.predict`) compared to output of `lbl_tb2label` (used by `vak.core.learncurve.test`)?
#
# First of all:
# do they return vectors of the same length?
Y_pred_test_seg = vak.utils.labels.lbl_tb2labels(Y_pred_test, train_vds.labelmap)
len(Y_pred_test_seg)
timebin_dur = set([voc.metaspect.timebin_dur for voc in train_vds.voc_list])
timebin_dur = timebin_dur.pop()
Y_pred_test_lbl, onsets, offsets = vak.utils.labels.lbl_tb2segments(Y_pred_test,
train_vds.labelmap,
timebin_dur)
Y_pred_test_lbl.shape
# Yes, vectors returned by each function are the same length.
#
# Okay, what is the edit distance between them?
# If 0, it's the same vector.
Y_pred_test_lbl_str = ''.join(Y_pred_test_lbl.tolist())
vak.metrics.levenshtein(Y_pred_test_seg, Y_pred_test_lbl_str)
# To be extra sure:
Y_pred_test_seg == Y_pred_test_lbl_str
# Okay, so that's not the problem -- we're getting the same result for all intents and purposes from `test` and `predict`.
#
# ## if that's not the problem, what is?
#
# So even though error is low, maybe we're not recovering the same segments from `predict` that we have in the test set?
#
# To figure that out, we need to go ahead and split up `Y_pred` into labeled timebin vectors of the same size as those in the original test set, segment each vector, and then look at the segments we get out.
starts = [0]
stops = []
current_start = 0
for a_len in Y_test_lens:
a_len = a_len[0]
stops.append(current_start + a_len)
current_start += a_len
if current_start < Y_test.shape[0]:
starts.append(current_start)
Y_pred_lbl_tb_list = []
for start, stop in zip(starts, stops):
Y_pred_lbl_tb_list.append(Y_pred_test[start:stop])
Y_pred_lens = [arr.shape for arr in Y_pred_lbl_tb_list]
all([pred_len == test_len for pred_len, test_len in zip(Y_pred_lens, Y_test_lens)])
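The start/stop bookkeeping above can also be expressed with `np.split` on the cumulative lengths (an equivalent sketch on toy stand-in data, not the notebook's actual arrays):

```python
import numpy as np

# Toy stand-in for Y_pred_test: split one long vector back into pieces
# whose lengths match the original per-file vectors.
concatenated = np.arange(10)
lengths = [3, 4, 3]
pieces = np.split(concatenated, np.cumsum(lengths)[:-1])

print([p.tolist() for p in pieces])  # -> [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]
```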
Y_pred_labels = []
Y_pred_onsets = []
Y_pred_offsets = []
for a_pred_lbl_tb in Y_pred_lbl_tb_list:
lbl, on, off = vak.utils.labels.lbl_tb2segments(a_pred_lbl_tb, train_vds.labelmap, timebin_dur)
Y_pred_labels.append(lbl)
Y_pred_onsets.append(on)
Y_pred_offsets.append(off)
Y_pred_labels[0]
Y_pred_labels[0].shape
Y_test_labels_from_seg = []
Y_test_onsets = []
Y_test_offsets = []
for a_test_lbl_tb in Y_test_lbl_tb_list:
lbl, on, off = vak.utils.labels.lbl_tb2segments(a_test_lbl_tb, train_vds.labelmap, timebin_dur)
Y_test_labels_from_seg.append(lbl)
Y_test_onsets.append(on)
Y_test_offsets.append(off)
Y_test_labels_from_seg[0]
Y_test_labels_from_seg[0].shape
len(Y_test_labels[0])
# At least for the first vector, there are more segments in the predicted labels.
#
# These could be segments that are not in the ground-truth labels because the person annotating the song removed them.
#
# As a sanity check, do we recover the ground truth labels if we apply `vak.utils.labels.lbl_tb2segments` to the ground truth label vector?
np.array_equal(Y_test_labels[0], Y_test_labels_from_seg[0])
# Yes, we do.
#
# So, yes, we're getting extra segments in our predictions somewhere.
#
# How frequent is this?
same_lengths = [Y_pred_seg.shape == Y_test_seg.shape for Y_pred_seg, Y_test_seg in zip(Y_pred_labels, Y_test_labels_from_seg)]
len_acc = sum(same_lengths) / len(same_lengths)
print(f'% with accurate length: {len_acc: 0.4f}')
# Only about 3% of them have the right length
#
# So what if we subtract the number of segments in the predicted labels from the number in the ground truth labels?
# If the number is negative, there are more segments in the predicted labels.
length_diffs = [Y_test_seg.shape[0] - Y_pred_seg.shape[0] for Y_pred_seg, Y_test_seg in zip(Y_pred_labels, Y_test_labels_from_seg)]
print(length_diffs[:5])
np.mean(length_diffs)
# Yes, there are more segments in the predicted labels.
#
# Two approaches to cleaning up:
# (1) remove segments lower than a certain duration
# + this might help if all the spurious segments are shorter than typical syllables
# + it won't help though if e.g. calls are being labeled as syllables, and those calls would have been segments in the ground truth data, but the annotator removed those segments since they weren't syllables
# +   problem: what label do we give a segment we want to throw away? If there is silence on both sides (probably almost all cases) we could just set it to silence?
#
# (2) remove segments based on syntax
# + throw away segments whose label occurs less often than some threshold frequency
# + this prevents us from doing an analysis where we ask if we recovered the original syntax, though
# + because of course we recover the original syntax if we use the original syntax to throw away things that don't match it
# + but I think this is a good way to show the work that actually needs to be done to get this to be useful in the lab, and highlights issues with previous work
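# A minimal sketch of approach (1), dropping predicted segments shorter than a minimum duration. This assumes onsets and offsets are in seconds (as returned by `lbl_tb2segments`) and the threshold is in milliseconds:

```python
import numpy as np

def filter_short_segments(labels, onsets, offsets, min_dur_ms):
    """Keep only segments whose duration is at least min_dur_ms milliseconds."""
    durs_ms = (offsets - onsets) * 1000
    keep = durs_ms >= min_dur_ms
    return labels[keep], onsets[keep], offsets[keep]
```

# What label, if any, the discarded time bins should get is the open question raised above.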
from scipy.io import loadmat
from glob import glob
# cd ~/Documents/data/BFSongRepository/gy6or6/032212/
notmats = glob('*.not.mat')
notmat0 = loadmat(notmats[0], squeeze_me=True)
min_dur = notmat0['min_dur']
# Visually inspecting onsets from first song in test set to compare with predicted onsets
Y_test_onsets[0]
Y_pred_onsets[0]
# Okay there's a couple extra predicted onsets.
#
# How many of them are less than the minimum duration for syllables we used when segmenting?
durs_test_0 = (Y_test_offsets[0] - Y_test_onsets[0]) * 1000
print(durs_test_0)
print("number of segments with duration less than minimum syllable duration used to segment: ", np.sum(durs_test_0 < min_dur))
durs_pred_0 = (Y_pred_offsets[0] - Y_pred_onsets[0]) * 1000
print(durs_pred_0)
print("number of segments with duration less than minimum syllable duration used to segment: ", np.sum(durs_pred_0 < min_dur))
# More than a couple in the predicted onsets array.
# What about across *all* the predicted onsets arrays?
durs_pred = []
lt_min_dur = []
for off, on in zip(Y_pred_offsets, Y_pred_onsets):
durs = (off - on) * 1000
durs_pred.append(durs)
lt_min_dur.append(np.sum(durs < min_dur))
print(lt_min_dur)
# Okay and how does that compare to the number of extra segments in each predicted labels array (regardless of whether the segments are less than the minimum duration)?
num_extra = []
for Y_pred_seg, Y_test_seg in zip(Y_pred_labels, Y_test_labels_from_seg):
num_extra.append(Y_pred_seg.shape[0]-Y_test_seg.shape[0])
print(num_extra)
# Hmm, looks similar.
#
# So what if we filtered out all the segments less than the minimum duration?
num_extra_minus_num_lt_min = [extra - lt_dur for extra, lt_dur in zip(num_extra, lt_min_dur)]
print(num_extra_minus_num_lt_min)
np.asarray(num_extra_minus_num_lt_min).mean()
# Looks like we'd do a lot better overall, although in a couple of cases we'd end up with fewer segments than there are syllables in the test set (?)
| doc/notebooks/compare-lbl_tb2labels-lbl_tb2segments-sober-repo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TFX on KubeFlow Pipelines Example
#
# This notebook should be run inside a KF Pipelines cluster.
#
# ### Install TFX and KFP packages
# !pip3 install tfx==0.13.0 --upgrade
# !pip3 install kfp --upgrade
# ### Enable DataFlow API for your GKE cluster
# <https://console.developers.google.com/apis/api/dataflow.googleapis.com/overview>
#
# ## Get the TFX repo with sample pipeline
#
# + tags=["parameters"]
# Directory and data locations (uses Google Cloud Storage).
import os
_input_bucket = '<your gcs bucket>'
_output_bucket = '<your gcs bucket>'
_pipeline_root = os.path.join(_output_bucket, 'tfx')
# Google Cloud Platform project id to use when deploying this pipeline.
_project_id = '<your project id>'
# -
# # copy the trainer code to a storage bucket as the TFX pipeline will need that code file in GCS
from tensorflow import gfile
gfile.Copy('utils/taxi_utils.py', _input_bucket + '/taxi_utils.py')
# ## Configure the TFX pipeline example
#
# Reload this cell by running the load command to get the pipeline configuration file
# ```
# # %load tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow.py
# ```
#
# Configure:
# - Set `_input_bucket` to the GCS directory where you've copied taxi_utils.py. I.e. gs://<my bucket>/<path>/
# - Set `_output_bucket` to the GCS directory where you want the results to be written
# - Set GCP project ID (replace my-gcp-project). Note that it should be project ID, not project name.
#
# The dataset in BigQuery has 100M rows; you can change the query parameters in the WHERE clause to limit the number of rows used.
#
# +
"""Chicago Taxi example using TFX DSL on Kubeflow."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.big_query_example_gen.component import BigQueryExampleGen
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.model_validator.component import ModelValidator
from tfx.components.pusher.component import Pusher
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.kubeflow.runner import KubeflowRunner
from tfx.orchestration.pipeline import PipelineDecorator
from tfx.proto import evaluator_pb2
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
# Python module file to inject customized logic into the TFX components. The
# Transform and Trainer both require user-defined functions to run successfully.
# Copy this from the current directory to a GCS bucket and update the location
# below.
_taxi_utils = os.path.join(_input_bucket, 'taxi_utils.py')
# Path which can be listened to by the model server. Pusher will output the
# trained model here.
_serving_model_dir = os.path.join(_output_bucket, 'serving_model/taxi_bigquery')
# Region to use for Dataflow jobs and CMLE training.
# Dataflow: https://cloud.google.com/dataflow/docs/concepts/regional-endpoints
# CMLE: https://cloud.google.com/ml-engine/docs/tensorflow/regions
_gcp_region = 'us-central1'
# A dict which contains the training job parameters to be passed to Google
# Cloud ML Engine. For the full set of parameters supported by Google Cloud ML
# Engine, refer to
# https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#Job
_cmle_training_args = {
'pythonModule': None, # Will be populated by TFX
'args': None, # Will be populated by TFX
'region': _gcp_region,
'jobDir': os.path.join(_output_bucket, 'tmp'),
'runtimeVersion': '1.12',
'pythonVersion': '2.7',
'project': _project_id,
}
# A dict which contains the serving job parameters to be passed to Google
# Cloud ML Engine. For the full set of parameters supported by Google Cloud ML
# Engine, refer to
# https://cloud.google.com/ml-engine/reference/rest/v1/projects.models
_cmle_serving_args = {
'model_name': 'chicago_taxi',
'project_id': _project_id,
'runtime_version': '1.12',
}
# The rate at which to sample rows from the Chicago Taxi dataset using BigQuery.
# The full taxi dataset is > 120M records. In the interest of resource
# savings and time, we've set the default for this example to be much smaller.
# Feel free to crank it up and process the full dataset!
_query_sample_rate = 0.001 # Generate a 0.1% random sample.
# TODO(zhitaoli): Remove PipelineDecorator after 0.13.0.
@PipelineDecorator(
pipeline_name='chicago_taxi_pipeline_kubeflow',
log_root='/var/tmp/tfx/logs',
pipeline_root=_pipeline_root,
additional_pipeline_args={
'beam_pipeline_args': [
'--runner=DataflowRunner',
'--experiments=shuffle_mode=auto',
'--project=' + _project_id,
'--temp_location=' + os.path.join(_output_bucket, 'tmp'),
'--region=' + _gcp_region,
],
# Optional args:
# 'tfx_image': custom docker image to use for components. This is needed
# if TFX package is not installed from an RC or released version.
})
def _create_pipeline():
"""Implements the chicago taxi pipeline with TFX."""
query = """
SELECT
pickup_community_area,
fare,
EXTRACT(MONTH FROM trip_start_timestamp) AS trip_start_month,
EXTRACT(HOUR FROM trip_start_timestamp) AS trip_start_hour,
EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS trip_start_day,
UNIX_SECONDS(trip_start_timestamp) AS trip_start_timestamp,
pickup_latitude,
pickup_longitude,
dropoff_latitude,
dropoff_longitude,
trip_miles,
pickup_census_tract,
dropoff_census_tract,
payment_type,
company,
trip_seconds,
dropoff_community_area,
tips
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE RAND() < {}""".format(_query_sample_rate)
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = BigQueryExampleGen(query=query)
# Computes statistics over data for visualization and example validation.
statistics_gen = StatisticsGen(input_data=example_gen.outputs.examples)
# Generates schema based on statistics files.
infer_schema = SchemaGen(stats=statistics_gen.outputs.output)
# Performs anomaly detection based on statistics and data schema.
validate_stats = ExampleValidator(
stats=statistics_gen.outputs.output, schema=infer_schema.outputs.output)
# Performs transformations and feature engineering in training and serving.
transform = Transform(
input_data=example_gen.outputs.examples,
schema=infer_schema.outputs.output,
module_file=_taxi_utils)
# Uses user-provided Python function that implements a model using TF-Learn.
trainer = Trainer(
module_file=_taxi_utils,
transformed_examples=transform.outputs.transformed_examples,
schema=infer_schema.outputs.output,
transform_output=transform.outputs.transform_output,
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000),
custom_config={'cmle_training_args': _cmle_training_args})
# Uses TFMA to compute evaluation statistics over features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs.examples,
model_exports=trainer.outputs.output,
feature_slicing_spec=evaluator_pb2.FeatureSlicingSpec(specs=[
evaluator_pb2.SingleSlicingSpec(
column_for_slicing=['trip_start_hour'])
]))
# Performs quality validation of a candidate model (compared to a baseline).
model_validator = ModelValidator(
examples=example_gen.outputs.examples, model=trainer.outputs.output)
# Checks whether the model passed the validation steps and pushes the model
# to a file destination if check passed.
pusher = Pusher(
model_export=trainer.outputs.output,
model_blessing=model_validator.outputs.blessing,
custom_config={'cmle_serving_args': _cmle_serving_args},
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
return [
example_gen, statistics_gen, infer_schema, validate_stats, transform,
trainer, model_analyzer, model_validator, pusher
]
pipeline = KubeflowRunner().run(_create_pipeline())
# -
# ## Submit pipeline for execution on the Kubeflow cluster
import kfp
run_result = kfp.Client().create_run_from_pipeline_package('chicago_taxi_pipeline_kubeflow.tar.gz', arguments={})
# ### Connect to the ML Metadata Store
# !pip3 install ml_metadata
# +
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
import os
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.mysql.host = os.getenv('MYSQL_SERVICE_HOST')
connection_config.mysql.port = int(os.getenv('MYSQL_SERVICE_PORT'))
connection_config.mysql.database = 'mlmetadata'
connection_config.mysql.user = 'root'
store = metadata_store.MetadataStore(connection_config)
# -
# Get all output artifacts
store.get_artifacts()
# +
# Get a specific artifact type
# TFX types
# types = ['ModelExportPath', 'ExamplesPath', 'ModelBlessingPath', 'ModelPushPath', 'TransformPath', 'SchemaPath']
store.get_artifacts_by_type('ExamplesPath')
| samples/core/tfx-oss/TFX Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## KNN: K-Nearest Neighbors
# ## Parametric versus non-parametric models
# Machine learning algorithms can be grouped into two categories: parametric and non-parametric models.
#
# Parametric models estimate a set of parameters from the training data in order to learn a function that can classify new data points without needing the training set. Examples of this type of learning are _the perceptron_, _logistic regression_, and _linear SVMs_.
#
# Non-parametric models cannot be characterized by a fixed set of parameters, because the number of parameters grows with the training data. Examples are _decision trees_, _random forests_, and _kernelized SVMs_.
#
# _KNN_ belongs to a subcategory of non-parametric models called **instance-based learning**, characterized by models that memorize the training data. Furthermore, _KNN_ is a special case known as **lazy learning**, which incurs zero cost during the learning process. Put another way, lazy learning extracts no discriminative information from the training data; it simply memorizes it.
# Steps of the algorithm:
# 1. Choose a $k$ and a distance metric.
# 2. Find the $k$ nearest neighbors of the sample to be classified.
# 3. Assign the class label by majority vote.
#
# 
# The advantage of a memory-based approach like this is that the algorithm adapts immediately when we collect new training data; the drawback is that the computational complexity of the algorithm grows linearly as we add new training samples.
#
# It is important to remember that **feature scaling is essential so that each feature contributes meaningfully to the distance computation.**
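# The three steps of the algorithm can be sketched from scratch (an illustrative toy implementation; the notebook uses scikit-learn's `KNeighborsClassifier` below):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    # steps 1-2: Euclidean distances from x to every training sample,
    # then the indices of the k nearest neighbors
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    # step 3: majority vote over the neighbors' labels
    return Counter(y_train[nearest]).most_common(1)[0][0]
```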
def plot_decision_regions(X, y, classifier, resolution=0.02):
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=cl,
edgecolor='black')
from sklearn import datasets
import numpy as np
datos = datasets.load_iris()
X = datos.data[:, [2, 3]]
y = datos.target
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, stratify = y)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
# +
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=3, metric='minkowski')
knn.fit(X_train_std, y_train)
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plot_decision_regions(X_train_std, y_train, classifier=knn)
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plot_decision_regions(X_test_std, y_test, classifier=knn)
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
# -
from sklearn.metrics import accuracy_score
y_pred_train = knn.predict(X_train_std)
y_pred_test = knn.predict(X_test_std)
print(f'Precision en los datos de entrenamiento: {accuracy_score(y_train, y_pred_train)}')
print(f'Precision en los datos de prueba: {accuracy_score(y_test, y_pred_test)}')
# In case of a tie, the sklearn implementation of the KNN algorithm will prefer the neighbors closest to the sample. If the distances are similar, the algorithm will choose the class label that appears first in the training data.
#
# The choice of $k$ is very important to avoid overfitting and underfitting. We must also choose a metric appropriate for the dataset; for numerical data, the most suitable choice is the Minkowski metric:
#
# $$d(\textbf{x}^{(i)}, \textbf{x}^{(j)})=\sqrt[p]{\sum_k|{x_k^{(i)}-x_k^{(j)}}|^p}$$
#
# With $p=2$ we get the classic Euclidean distance; with $p=1$, the Manhattan distance, and so on.
#
# More information at https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
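# The Minkowski metric above can be computed directly; a quick sketch to check against the library implementations:

```python
import numpy as np

def minkowski(x_i, x_j, p=2):
    # p = 2 gives the Euclidean distance, p = 1 the Manhattan distance
    return np.sum(np.abs(x_i - x_j) ** p) ** (1 / p)
```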
# ###### The curse of dimensionality
#
# KNN is very susceptible to overfitting because the feature space becomes increasingly sparse as the number of dimensions grows for a training dataset of fixed size. In a high-dimensional space, even the nearest neighbors are too far away to give a good estimate.
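# A quick numerical illustration of this effect (not part of the exercises): in high dimensions, distances to a query point concentrate, so the ratio between the nearest and farthest neighbor approaches 1 and "nearest" loses meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 1000):
    X = rng.random((500, d))                       # 500 uniform points in [0, 1]^d
    dists = np.linalg.norm(X[1:] - X[0], axis=1)   # distances to one query point
    print(d, round(dists.min() / dists.max(), 3))  # ratio tends toward 1 as d grows
```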
# <span class="burk">EXERCISES</span>
#
# 1. Use the file Social_Network_Ads.csv to classify its classes. Use the variables User Age, EstimatedSalary, Purchased. Plot, measure the accuracy on the training and test sets, and draw your conclusions.
#
# 2. Use the file usuarios_win_mac_lin and classify it. Measure the accuracy on the training and test sets and draw your conclusions.
#
# 3. Use the following instruction to load values for X and y:
#
#     datasets.make_classification(1000, 20, n_informative=3)
# Do the preprocessing step, classify the data, and measure its accuracy.
#
# More information about this instruction at: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html
| Semana-13/KNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Analysis
# Convenient jupyter setup
# %load_ext autoreload
# %autoreload 2
# %config IPCompleter.greedy=True
# set up plotting style with jupyterthemes.
from jupyterthemes import jtplot
jtplot.style(theme="grade3", context="notebook", ticks=True, grid=False)
| notebooks/vi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# +
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.model_selection import validation_curve
from sklearn.model_selection import learning_curve
from sklearn import metrics
from sklearn.metrics import fbeta_score, make_scorer
# -
import matplotlib
import matplotlib.pyplot as plt
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
X_train_df = pd.read_csv("../data/offline/X_train.csv", index_col=0)
y_train_df = pd.read_csv("../data/offline/y_train.csv", index_col=0)
X_train, X_test, y_train, y_test = train_test_split(X_train_df.values, y_train_df['SalePrice'].values, test_size=0.5, random_state=1729)
gbdt_regressor = GradientBoostingRegressor(learning_rate=0.12, n_estimators=30, min_samples_split=2, min_samples_leaf=2, max_depth=3)
plot_learning_curve(gbdt_regressor, 'gbdt', X_train, y_train, cv=5)
plt.show()
gbdt_regressor.fit(X_train, y_train)
gbdt_regressor.score(X_test, y_test)
X_train_df.columns
gbdt_regressor.feature_importances_
importance_df = pd.DataFrame([X_train_df.columns, gbdt_regressor.feature_importances_ ]).T
importance_df = importance_df.rename(columns={0:'feature', 1:'contribution'})
importance_df.sort_values('contribution')
importance_df[importance_df['feature'].str.find('Neighborhood') != -1].sort_values('contribution')
| offline/.ipynb_checkpoints/XgboostModel-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Vfi4TcINGIaS" colab_type="text"
# # Classifying DNA Sequences
#
# ## Step 1: Importing the Dataset
#
# The following code cells will import necessary libraries and import the dataset from the UCI repository as a Pandas DataFrame.
# + id="3y-xmxi0GIaU" colab_type="code" colab={}
# To make sure all of the correct libraries are installed, import each module and print the version number
import sys
import numpy
import sklearn
import pandas
# + id="hY2YBzRgGIaX" colab_type="code" colab={}
# Import, change module names
import numpy as np
import pandas as pd
# import the uci Molecular Biology (Promoter Gene Sequences) Data Set
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/molecular-biology/promoter-gene-sequences/promoters.data'
names = ['Class', 'id', 'Sequence']
data = pd.read_csv(url, names = names)
# + id="1x4oidOCGIab" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="8159d799-3cfa-48bb-ef63-c94ce764d419"
print(data.iloc[0])
# + [markdown] id="Poc5KtvgGIaf" colab_type="text"
# ## Step 2: Preprocessing the Dataset
#
# The data is not in a usable form; as a result, we will need to process it before using it to train our algorithms.
# + id="9OJ_Sp_2GIaf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="48aa2d07-7e35-4dcc-97d5-f09beaa5708e"
# Building our Dataset by creating a custom Pandas DataFrame
# Each column in a DataFrame is called a Series. Lets start by making a series for each column.
classes = data.loc[:, 'Class']
print(classes[:5])
# + id="hor7ZkDzGIaj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="2159b322-e0a8-42a4-9376-3191080fa360"
# generate list of DNA sequences
sequences = list(data.loc[:, 'Sequence'])
dataset = {}
# loop through sequences and split into individual nucleotides
for i, seq in enumerate(sequences):
# split into nucleotides, remove tab characters
nucleotides = list(seq)
nucleotides = [x for x in nucleotides if x != '\t']
# append class assignment
nucleotides.append(classes[i])
# add to dataset
dataset[i] = nucleotides
print(dataset[0])
# + id="h5bLSOrXGIam" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="07bf52bf-e581-4c86-b49c-5afb93349ff1"
# turn dataset into pandas DataFrame
dframe = pd.DataFrame(dataset)
print(dframe)
# + id="hm0AmWJJGIap" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="b1f49630-21e6-4b6f-f3f5-1ff8f2cdf325"
# transpose the DataFrame
df = dframe.transpose()
print(df.iloc[:5])
# + id="G--Gn52uGIat" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="935be7bc-0baf-41c4-cfc2-f1b9619dd35d"
# for clarity, lets rename the last dataframe column to class
df.rename(columns = {57: 'Class'}, inplace = True)
print(df.iloc[:5])
# + id="RJZj7rXQGIaw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 193} outputId="a6d1a1f5-ec5e-4ef2-a72d-aed5131a86c1"
# looks good! Let's start to familiarize ourselves with the dataset so we can pick the most suitable
# algorithms for this data
df.describe()
# + id="NOz7pwudGIaz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="d6f5f934-3d3f-45a2-fc4b-cda5da35c27e"
# describe does not tell us enough information since the attributes are text. Lets record value counts for each sequence
series = []
for name in df.columns:
series.append(df[name].value_counts())
info = pd.DataFrame(series)
details = info.transpose()
print(details)
# + id="s7hG11TeGIa2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 253} outputId="57d9ed3f-8dd2-462c-d3d0-9f76b70fe798"
# Unfortunately, we can't run machine learning algorithms on the data in 'String' formats. As a result, we need to switch
# it to numerical data. This can easily be accomplished using the pd.get_dummies() function
numerical_df = pd.get_dummies(df)
numerical_df.iloc[:5]
# + id="_YreOch6GIa6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="2de9cbb6-5ceb-4a9c-97af-b10c6531b139"
# We don't need both class columns. Lets drop one then rename the other to simply 'Class'.
df = numerical_df.drop(columns=['Class_-'])
df.rename(columns = {'Class_+': 'Class'}, inplace = True)
print(df.iloc[:5])
# + id="Z5AX0og-GIa_" colab_type="code" colab={}
# Use the model_selection module to separate training and testing datasets
from sklearn import model_selection
# Create X and Y datasets for training
X = np.array(df.drop(['Class'], axis=1))
y = np.array(df['Class'])
# define seed for reproducibility
seed = 1
# split data into training and testing datasets
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25, random_state=seed)
# + [markdown] id="e4pEa1EUGIbB" colab_type="text"
# ## Step 3: Training and Testing the Classification Algorithms
#
# Now that we have preprocessed the data and built our training and testing datasets, we can start to deploy different classification algorithms. It's relatively easy to test multiple models; as a result, we will compare and contrast the performance of ten different algorithms.
# + id="WkYWehHQGIbC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 887} outputId="d8c198d8-f181-43fc-fb22-8ecbcdab19f2"
# Now that we have our dataset, we can start building algorithms! We'll need to import each algorithm we plan on using
# from sklearn. We also need to import some performance metrics, such as accuracy_score and classification_report.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import classification_report, accuracy_score
# define scoring method
scoring = 'accuracy'
# Define models to train
names = ["Nearest Neighbors", "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "SVM Linear", "SVM RBF", "SVM Sigmoid"]
classifiers = [
KNeighborsClassifier(n_neighbors = 3),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1),
AdaBoostClassifier(),
GaussianNB(),
SVC(kernel = 'linear'),
SVC(kernel = 'rbf'),
SVC(kernel = 'sigmoid')
]
models = list(zip(names, classifiers))  # materialize so we can iterate over the models more than once
# evaluate each model in turn
results = []
names = []
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
# + id="nLZIzFEsGIbF" colab_type="code" colab={}
# Remember, performance on the training data is not that important. We want to know how well our algorithms
# can generalize to new data. To test this, let's make predictions on the validation dataset.
for name, model in models:
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(name)
print(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
# Accuracy - ratio of correctly predicted observations to the total observations.
# Precision - ratio of correctly predicted positive observations to all predicted positives (penalizes false positives).
# Recall (Sensitivity) - ratio of correctly predicted positive observations to all actual positives (penalizes false negatives).
# F1 score - harmonic mean of precision and recall, so it accounts for both false positives and false negatives.
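The definitions above are easy to check by hand on a tiny set of predictions. This standalone sketch (independent of the models trained here, using made-up labels) computes the same quantities that `classification_report` summarizes for the positive class.

```python
# Toy binary labels: compute accuracy, precision, recall, and F1 from first principles.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(accuracy, precision, recall, f1)
```

Running the same labels through `sklearn.metrics` reproduces these numbers.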
# + id="1h4q5fvoGIbH" colab_type="code" colab={}
| ML-Dna-Classification-master/DNA_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python modeling of the impact of social distancing and early termination of lockdown
# ### Dr. <NAME>, Fremont, CA 94536
# ---
#
# ## What is this demo about?
# The greatest [global crisis since World War II](https://www.bloomberg.com/opinion/articles/2020-03-24/coronavirus-recession-it-will-be-a-lot-like-world-war-ii) and the [largest global pandemic since the 1918–19 Spanish Flu](https://www.cdc.gov/flu/pandemic-resources/1918-pandemic-h1n1.html) are upon us today. Everybody is looking at the [daily rise of the death toll](https://www.worldometers.info/coronavirus/worldwide-graphs/) and the rapid, [exponential spread](https://www.forbes.com/sites/startswithabang/2020/03/17/why-exponential-growth-is-so-scary-for-the-covid-19-coronavirus/#2bf6b23f4e9b) of this novel strain of the virus.
#
# Data scientists, like so many people from all other walks of life, may also be feeling anxious. They may also be eager to see if they can contribute somehow to the fight against this highly infectious pathogen.
#
# There can be many avenues for data scientists and statistical modeling professionals to contribute to the cause. **In almost all cases, they should be working closely with domain experts: virologists, healthcare professionals, epidemiologists. Without such active collaboration and teamwork, it is dangerous and meaningless to embark on a project of predictive modeling or forecasting the spread of the disease and mitigation efforts**.
#
# However, even without venturing into actual predictive modeling, it is possible to demonstrate the efficacy of the only basic weapon that we all have against the COVID-19 virus, ___["social distancing"](https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-distancing.html)___, by a simple programmatic approach.
#
# In this notebook, we show how to build such a demonstration in Python, following a simple yet fundamental epidemiological theory.
#
# This is not an attempt to build any forecasting or predictive model. **No actual data**, other than some fictitious numbers, will be used to generate the plots and visualizations.
#
# The goal is to show that **with only a basic dynamical model**, it is possible to understand key concepts such as the **_"flattening the curve"_**, **_"herd immunity"_**, and **_"lifting the lockdown too quickly"_**. And, you can program such a model with rudimentary python and mathematical knowledge.
#
# ---
#
# ## The basic epidemiology model class
#
# ### A reference article
# We created a Python class called `SEIRclass` based on the [SEIR model of basic epidemiology](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology). The script is in the `SEIRclass.py` file and the class needs to be imported for this demo.
#
# 
#
# The idea is inspired by this article which lays down the epidemiological model and the dynamical equations clearly: **[Social Distancing to Slow the Coronavirus](https://towardsdatascience.com/social-distancing-to-slow-the-coronavirus-768292f04296)**
#
# Here is the Github repo corresponding to the article above: https://github.com/dgoldman0/socialdistancing
#
# ### Dynamical equations
#
# The basic dynamical equations are as follows,
#
# $$\frac{dS}{dt} = -\beta.S.I $$
#
# $$\frac{dE}{dt} = \beta.S.I - \alpha.E$$
#
# $$\frac{dI}{dt} = \alpha.E - \gamma.I$$
#
# $$\frac{dR}{dt} = \gamma.I$$
#
# where,
#
# - $\alpha$ is the inverse of the virus incubation period
# - $\beta$ is the average contact rate in the population
# - $\gamma$ is the inverse of the mean infectious period
# - S, E, I, and R represent the fraction of population in the _Susceptible_, _Exposed_, _Infected_, and _Recovered_ categories.
#
# The first two equations change slightly with the introduction of a fourth parameter $\rho$, which represents social mixing. The higher the $\rho$, the less the social distancing. It can take any value from 0 to 1.
#
# $$\frac{dS}{dt} = -\rho.\beta.S.I $$
#
# $$\frac{dE}{dt} = \rho.\beta.S.I - \alpha.E$$
#
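Before looking at the class itself, the four coupled ODEs above can be integrated with a simple forward-Euler step. The minimal sketch below is independent of the `SEIRclass` module used later, and the parameter values are purely illustrative.

```python
# Forward-Euler integration of the SEIR equations (illustrative parameter values).
alpha, beta, gamma, rho = 0.2, 1.75, 0.5, 0.9   # 1/incubation, contact rate, 1/infectious period, social mixing
s, e, i, r = 1 - 1/1000, 1/1000, 0.0, 0.0       # initial population fractions
dt, t_max = 0.1, 90

history = [(s, e, i, r)]
for _ in range(int(t_max / dt)):
    ds = -rho * beta * s * i          # dS/dt
    de = rho * beta * s * i - alpha * e  # dE/dt
    di = alpha * e - gamma * i        # dI/dt
    dr = gamma * i                    # dR/dt
    s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
    history.append((s, e, i, r))

print("final fractions (S, E, I, R):", s, e, i, r)
```

Note that the four derivatives sum to zero, so the total population fraction S + E + I + R stays at 1 throughout the simulation.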
# ### The core Python class
#
# Partial class definition is shown below,
#
# ```python
# # SEIR model class definition
# # Dr. <NAME>, Fremont, CA
# # April 2020
#
# import numpy as np
# import matplotlib.pyplot as plt
#
# class SEIR:
# def __init__(self,
# init_vals=[1 - 1/1000, 1/1000, 0, 0],
# params_=[0.2,1.75,0.5,0.9]):
# """
# Initializes and sets the initial lists and parameters
# """
# # Initial values
# self.s0 = init_vals[0]
# self.e0 = init_vals[1]
# self.i0 = init_vals[2]
# self.r0 = init_vals[3]
# # Lists
# self.s, self.e, self.i, self.r = [self.s0], [self.e0], [self.i0], [self.r0]
# # Dynamical parameters
# self.alpha = params_[0]
# self.beta = params_[1]
# self.gamma = params_[2]
# self.rho = params_[3]
# # All parameters together in a list
# self.params_ = [self.alpha,self.beta,self.gamma,self.rho]
# # All final values together in a list
# self.vals_ = [self.s[-1], self.e[-1], self.i[-1], self.r[-1]]
# ```
#
# ### Various methods
#
# We define various utility methods in this class,
#
# - `reinitialize`: Re-initializes the model with new values (of population)
# - `set_params`: Sets the dynamical parameters value
# - `reset`: Resets the internal lists to zero-state i.e. initial values
# - `run`: Runs the dynamical simulation for a given amount of time (with a given time step delta)
# - `plot`: Plots the basic results of the simulation on a time-axis
#
# ## Demo
# ### Import the class and examine parameters/variables
from SEIRclass import *
import numpy as np
s = SEIR()
s.alpha
s.s0
s.i
s.params_
s.vals_
# ### Run a simulation for 90 time units
r=s.run(t_max=90,dt=0.1)
# ### Plot results (all four variables - susceptible, exposed, infected, and recovered)
s.plot(r)
# ### Run simulation for various social distancing factors
# Note that, here, **higher value means less social distancing i.e. higher social mixing/interactions**.
#
# We observe that **higher degree of social mixing leads to higher peak for the infected population fraction**, which can potentially overwhelm the healthcare system capacity.
social_dist = [0.4,0.5,0.6,0.7,0.8]
plt.figure(figsize=(12,8))
for d in social_dist:
s = SEIR()
s.rho=d
r = s.run(t_max=90,dt=0.1)
plt.plot(r[:,2],lw=3)
plt.title('Flattening the curve with social distancing',fontsize=18)
plt.legend(["Social distancing factor: "+str(d) for d in social_dist],
fontsize=15)
plt.xlabel('Time Steps',fontsize=16)
plt.ylabel('Fraction of Population',fontsize=16)
plt.grid(True)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.show()
# ## Tweaking social distancing
#
# Now, we examine what happens with early termination of lockdown or "stay-at-home" measures, which aim to reduce social mixing i.e. increase social distancing.
#
# We start with a certain value of the social distancing factor, let it run for a certain amount of time, and then relax the norms i.e. increase the value of the social distancing factor, and see what happens to the infected population fraction. **We will observe a second peak appearing**. Depending on social and epidemiological factors, this second peak can be higher than the first one.
#
# For this study, we will use two SEIR models,
# - The first model will run for a certain amount of time with a low social mixing factor (strict social distancing norm)
# - The final values of the population proportion from this model will be fed into the second model.
# - The second model will run with a higher value of social mixing (relaxed social distancing norm)
# ### Parameters
#
# We will use this set of parameters for modeling the disease spread for this experiment.
p = [0.7,2.1,0.7,0.1]
# ### First model with the parameters and default initial values
s1 = SEIR(params_=p)
# ### Set 'social distancing factor' to 0.4 (i.e. 40% interaction, stay-at-home measures are working)
s1.rho=0.4
# ### Run the simulation for 90 time units
r1=s1.run(t_max=90,dt=0.1)
# ### Capture the final values as the new initial values for the next model
new_init = s1.vals_
# ### New model with the final values from the previous model as the initial values
s2 = SEIR(init_vals=new_init,
params_=p)
# ### Relaxing the 'social distancing factor' to 0.9 (social mixing is happening up to 90%)
s2.rho = 0.9
# ### Run the simulation for another 90 time units
# Note that we put `reset=False` for this simulation
r2=s2.run(t_max=90,dt=0.1,reset=False)
# ### Aggregating both results (adding simulated values up to 90+90 = 180 time units)
r3=np.vstack((r1,r2))
# ### Plotting only the infected fraction
# Note the two peaks. The first peak was subsiding when social distancing was relaxed and the second - a much higher peak - came upon us!
s2.plot_var(r3[:,2],var_name='Infected')
# ### "Herd immunity" - letting first 'lockdown' run for a longer time helps to reduce both peaks
#
# We also observe that letting the first 'lockdown' run for a longer time can potentially reduce the absolute value of both peaks. This is due to ___"herd immunity"___: **with a longer lockdown, the susceptible population gradually shrinks over time**, and therefore when the lockdown is relaxed, there is a smaller susceptible population for the virus to infect!
#
# We also note that beyond 150 days, the susceptible fraction does not go down significantly, i.e. there is no visible reduction from the 150-day case to the 180-day and 210-day cases. Therefore, **if the goal is to build herd immunity, extending the lockdown may be effective only up to a certain period**.
p = [0.7,2.1,0.7,0.4]
days = [60,90,120,150,180,210]
s = SEIR(params_=p)
fig,ax=plt.subplots(2,3,figsize=(15,12))
axes = ax.ravel()
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
for i,d in enumerate(days):
r=s.run(t_max=d,dt=0.1)
axes[i].plot(r[:,0],lw=3)
axes[i].set_title('Lockdown for {} days'.format(d),fontsize=16)
axes[i].set_xlabel('Time Steps',fontsize=14)
axes[i].set_ylabel('Susceptible fraction',fontsize=14)
axes[i].set_xlim(0,1800)
axes[i].set_ylim(0.5,1.0)
axes[i].grid(True)
axes[i].tick_params(labelsize=12)
plt.show()
# ### Reduce peaks by letting the first lockdown run longer
# The example of 'tweaking social distancing' above showed a second peak above 0.05, i.e. more than 5% of the population infected at once. Here, we show that letting the first lockdown run longer can potentially reduce both peaks: in this experiment the first peak is the higher of the two, and even it reaches only about 0.018, i.e. 1.8%.
s1 = SEIR(params_=p)
s1.rho=0.45
r1=s1.run(t_max=150,dt=0.1)
new_init = s1.vals_
s2 = SEIR(init_vals=new_init,
params_=p)
s2.rho = 0.9
r2=s2.run(t_max=135,dt=0.1,reset=False)
r3=np.vstack((r1,r2))
s2.plot_var(r3[:,2],var_name='Infected')
| SEIR-demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf2.4
# language: python
# name: tf2.4
# ---
# +
import sys
import tensorflow as tf
import scipy.io
from scipy.io import loadmat
import matplotlib.pyplot as plt
from skimage.util import montage as montage2d
from glob import glob
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:98% !important; }</style>"))
# -
sys.path.append("..")
from spatial_transform.aff_mnist_data import IMAGE_SIZE, IMAGE_SHAPE, IMAGE_NUM_CHANNELS, get_aff_mnist_data
from spatial_transform.spatial_transforms import AffineTransform, QuadraticTransform
from spatial_transform.st_blocks import SimpleSpatialTransformBlock
from spatial_transform.localization import StandardConvolutionalLocalizationLayer, CoordConvLocalizationLayer, LargeLocalizationLayer
from spatial_transform.interpolation import BilinearInterpolator
from spatial_transform.layers import RepeatWithSharedWeights
from spatial_transform.visualization import show_train_progress
train_img_data, train_img_label, validation_img_data, validation_img_label, test_img_data, test_img_label = get_aff_mnist_data()
# +
image = tf.keras.layers.Input(shape=IMAGE_SHAPE + (IMAGE_NUM_CHANNELS,))
size_after_transform = 28
spatial_transform = QuadraticTransform()
st_block = SimpleSpatialTransformBlock(
localization_layer = LargeLocalizationLayer(
spatial_transform_params_cls = spatial_transform.param_type,
init_scale = 1,
),
spatial_transform = spatial_transform,
interpolator = BilinearInterpolator(),
shape_out = (size_after_transform, size_after_transform)
)
stn_slx_chain = RepeatWithSharedWeights(layer=st_block, num_repetitions=3)
x = tf.image.resize(image, size=(size_after_transform,size_after_transform))
x = stn_slx_chain(x)
x = tf.keras.layers.Conv2D(16, [5, 5], activation='relu', padding="valid")(x)
x = tf.keras.layers.MaxPool2D()(x)
x = tf.keras.layers.Conv2D(16, [5, 5], activation='relu', padding="valid")(x)
x = tf.keras.layers.MaxPool2D()(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(10, activation=None)(x)
model = tf.keras.models.Model(inputs=image, outputs=x)
model.summary()
# -
model.compile(
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics = ['accuracy']
)
history = model.fit(
x = train_img_data,
y = train_img_label,
batch_size = 128,
epochs = 20,
validation_data = (test_img_data, test_img_label),
validation_batch_size = 1024,
)
show_train_progress(history)
# +
#model.save_weights("STN_quadratic_chain_backbone.h5")
# -
# ### Investigate transformed features
images = tf.cast(tf.expand_dims(test_img_data[0:20], 3), dtype=tf.float32)
labels = test_img_label[0:20]
x = tf.image.resize(images, size=(size_after_transform,size_after_transform))
transformed_images = stn_slx_chain(x)
for image, label in zip(transformed_images, labels):
print(label)
plt.imshow(image.numpy()[:,:,0])
plt.show()
| experiments/STN_quadratic_chain_backbone.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Iterated Dynamical Systems
#
#
# Digraphs from Integer-valued Iterated Functions
#
# Sums of cubes on 3N
# -------------------
#
# The number 153 has a curious property.
#
# Let 3N={3,6,9,12,...} be the set of positive multiples of 3. Define an
# iterative process f:3N->3N as follows: for a given n, take each digit
# of n (in base 10), cube it and then sum the cubes to obtain f(n).
#
# When this process is repeated, the resulting sequence n, f(n), f(f(n)), ...
# terminates in 153 after a finite number of iterations (the process ends
# because 153 = 1**3 + 5**3 + 3**3).
#
# In the language of discrete dynamical systems, 153 is the global
# attractor for the iterated map f restricted to the set 3N.
#
# For example: take the number 108
#
# f(108) = 1**3 + 0**3 + 8**3 = 513
#
# and
#
# f(513) = 5**3 + 1**3 + 3**3 = 153
#
# So, starting at 108 we reach 153 in two iterations,
# represented as:
#
# 108->513->153
#
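This arithmetic is easy to verify with a few lines of standalone Python (a self-contained check, separate from the `powersum` function defined later in this notebook):

```python
# Verify the orbit 108 -> 513 -> 153 by summing cubes of base-10 digits.
def sum_of_cubed_digits(n):
    return sum(int(d) ** 3 for d in str(n))

orbit = [108]
while orbit[-1] != sum_of_cubed_digits(orbit[-1]):  # stop at the fixed point
    orbit.append(sum_of_cubed_digits(orbit[-1]))
print(orbit)
```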
# Computing all orbits of 3N up to 10**5 reveals that the attractor
# 153 is reached in a maximum of 14 iterations. In this code we
# show that 13 cycles is the maximum required for all integers (in 3N)
# less than 10,000.
#
# The smallest number that requires 13 iterations to reach 153 is 177, i.e.,
#
# 177->687->1071->345->216->225->141->66->432->99->1458->702->351->153
#
# The resulting large digraphs are useful for testing network software.
#
# The general problem
# -------------------
#
# Given numbers n, a power p and base b, define F(n; p, b) as the sum of
# the digits of n (in base b) raised to the power p. The above example
# corresponds to f(n)=F(n; 3,10), and below F(n; p, b) is implemented as
# the function powersum(n,p,b). The iterative dynamical system defined by
# the mapping n:->f(n) above (over 3N) converges to a single fixed point;
# 153. Applying the map to all positive integers N, leads to a discrete
# dynamical process with 5 fixed points: 1, 153, 370, 371, 407. Modulo 3
# those numbers are 1, 0, 1, 2, 2. The function f above has the added
# property that it maps a multiple of 3 to another multiple of 3; i.e. it
# is invariant on the subset 3N.
#
#
# The squaring of digits (in base 10) results in cycles and the
# single fixed point 1. I.e., from a certain point on, the process
# starts repeating itself.
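This repetition is easy to observe directly. The self-contained sketch below (starting points chosen for illustration) iterates the digit-squaring map until a value repeats:

```python
# Iterate the digit-squaring map and return the orbit up to the first repeated value.
def sum_of_squared_digits(n):
    return sum(int(d) ** 2 for d in str(n))

def squaring_orbit(n):
    seen, orbit = set(), []
    while n not in seen:
        seen.add(n)
        orbit.append(n)
        n = sum_of_squared_digits(n)
    return orbit  # the next application of the map would revisit a value

print(squaring_orbit(7))   # a "happy number": reaches the fixed point 1
print(squaring_orbit(4))   # enters the well-known 8-cycle 4, 16, 37, 58, 89, 145, 42, 20
```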
#
# keywords: "Recurring Digital Invariant", "Narcissistic Number",
# "Happy Number"
#
# The 3n+1 problem
# ----------------
#
# There is a rich history of mathematical recreations
# associated with discrete dynamical systems. The most famous
# is the Collatz 3n+1 problem. See the function
# collatz_problem_digraph below. The Collatz conjecture
# --- that every orbit returns to the fixed point 1 in finite time
# --- is still unproven. Even the great <NAME> said "Mathematics
# is not yet ready for such problems", and offered $500
# for its solution.
#
# keywords: "3n+1", "3x+1", "Collatz problem", "Thwaite's conjecture"
#
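The Collatz map itself takes only a few lines; this standalone sketch (separate from the `collatz_problem_digraph` helper below) computes the orbit of a starting value:

```python
# Compute the Collatz orbit of n: halve if even, else 3n + 1, until reaching 1.
def collatz_orbit(n):
    orbit = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        orbit.append(n)
    return orbit

orbit = collatz_orbit(27)  # a famously long orbit for a small start
print(len(orbit) - 1, "steps, peak value", max(orbit))
```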
# +
import networkx as nx
nmax = 10000
p = 3
def digitsrep(n, b=10):
"""Return list of digits comprising n represented in base b.
n must be a nonnegative integer"""
if n <= 0:
return [0]
dlist = []
while (n > 0):
# Prepend next least-significant digit
dlist = [n % b] + dlist
# Floor-division
n = n // b
return dlist
def powersum(n, p, b=10):
"""Return sum of digits of n (in base b) raised to the power p."""
dlist = digitsrep(n, b)
sum = 0
for k in dlist:
sum += k**p
return sum
def attractor153_graph(n, p, multiple=3, b=10):
"""Return digraph of iterations of powersum(n,3,10)."""
G = nx.DiGraph()
for k in range(1, n + 1):
if k % multiple == 0 and k not in G:
k1 = k
knext = powersum(k1, p, b)
while k1 != knext:
G.add_edge(k1, knext)
k1 = knext
knext = powersum(k1, p, b)
return G
def squaring_cycle_graph_old(n, b=10):
"""Return digraph of iterations of powersum(n,2,10)."""
G = nx.DiGraph()
for k in range(1, n + 1):
k1 = k
G.add_node(k1) # case k1==knext, at least add node
knext = powersum(k1, 2, b)
G.add_edge(k1, knext)
while k1 != knext: # stop if fixed point
k1 = knext
knext = powersum(k1, 2, b)
G.add_edge(k1, knext)
if G.out_degree(knext) >= 1:
# knext has already been iterated in and out
break
return G
def sum_of_digits_graph(nmax, b=10):
def f(n): return powersum(n, 1, b)
return discrete_dynamics_digraph(nmax, f)
def squaring_cycle_digraph(nmax, b=10):
def f(n): return powersum(n, 2, b)
return discrete_dynamics_digraph(nmax, f)
def cubing_153_digraph(nmax):
def f(n): return powersum(n, 3, 10)
return discrete_dynamics_digraph(nmax, f)
def discrete_dynamics_digraph(nmax, f, itermax=50000):
G = nx.DiGraph()
for k in range(1, nmax + 1):
kold = k
G.add_node(kold)
knew = f(kold)
G.add_edge(kold, knew)
        while kold != knew and kold < itermax:
# iterate until fixed point reached or itermax is exceeded
kold = knew
knew = f(kold)
G.add_edge(kold, knew)
if G.out_degree(knew) >= 1:
# knew has already been iterated in and out
break
return G
def collatz_problem_digraph(nmax):
def f(n):
if n % 2 == 0:
return n // 2
else:
return 3 * n + 1
return discrete_dynamics_digraph(nmax, f)
def fixed_points(G):
"""Return a list of fixed points for the discrete dynamical
system represented by the digraph G.
"""
return [n for n in G if G.out_degree(n) == 0]
if __name__ == "__main__":
nmax = 10000
print("Building cubing_153_digraph(%d)" % nmax)
G = cubing_153_digraph(nmax)
print("Resulting digraph has", len(G), "nodes and",
G.size(), " edges")
print("Shortest path from 177 to 153 is:")
print(nx.shortest_path(G, 177, 153))
print("fixed points are %s" % fixed_points(G))
| NoSQL/NetworkX/plot_iterated_dynamical_systems.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TTV Retrieval for K2-24 (a sparsely sampled TTV system from *K2*)
#
# In this notebook, we will perform a dynamical retrieval for K2-24. This system is fairly representative of those observed by *K2* (and subsequent follow-up) in that the TTV phase coverage is relatively non-uniform compared to that of the *Kepler* TTV planets. But, as shown by Petigura et al. (2019), we can still place constraints on the masses! First, let's import packages and download data:
# +
# %matplotlib inline
import ttvnest
import numpy as np
import pandas as pd
k2_24 = ttvnest.io_utils.load_results('k2_24_rvprior.p')
# -
ttvnest.plot_utils.plot_eccentricity_posteriors(k2_24, outname = 'k2_24_rvprior')
| examples/k2-24/k2-24_derived_params.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install --quiet climetlab matplotlib
# # WeatherBench
# This is an attempt to reproduce this research: https://arxiv.org/abs/2002.00469. There is a notebook available at: https://binder.pangeo.io/v2/gh/pangeo-data/WeatherBench/master?filepath=quickstart.ipynb
import matplotlib.pyplot as plt
# + tags=[]
import climetlab as cml
# + tags=[]
ds = cml.load_dataset("weather-bench")
# + tags=[]
ds
# + tags=[]
print(ds.citation)
# -
z500 = ds.to_xarray()
z500
cml.plot_map(z500)
z500.z.isel(time=0).plot()
cml.plot_map(z500.z.isel(time=0))
climatology = z500.sel(time=slice('2016', '2016')).mean('time').load()
climatology.z.plot()
cml.plot_map(climatology.z)
climatology.z
| docs/examples/09-weatherbench.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Practice: scrape Baidu Baike
# **Here we build a scraper to crawl Baidu Baike starting from this [page](https://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711). We keep a history of the sub-URLs we have already visited so we can track the crawl path.**
# +
from bs4 import BeautifulSoup
from urllib.request import urlopen
import re
import random
base_url = "https://baike.baidu.com"
his = ["/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711"]
# -
# **Select the last sub url in "his", print the title and url.**
# +
url = base_url + his[-1]
html = urlopen(url).read().decode('utf-8')
soup = BeautifulSoup(html, features='lxml')
print(soup.find('h1').get_text(), ' url: ', his[-1])
# -
# **Find all sub-URLs on the current Baidu Baike item page, randomly select one, and append it to "his". If no valid sub-link is found, then pop the last URL from "his".**
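The `href` filter below relies on a small regular expression. This standalone snippet (with example percent-encoded paths) shows what `/item/(%.{2})+$` does and does not match:

```python
import re

# Matches hrefs of the form /item/ followed only by percent-encoded bytes, e.g. /item/%E7%88%AC
pattern = re.compile("/item/(%.{2})+$")

print(bool(pattern.search("/item/%E7%88%AC")))       # percent-encoded item page: matches
print(bool(pattern.search("/item/123456")))          # plain digits after /item/: no match
print(bool(pattern.search("/item/%E7%88%AC/5162")))  # trailing id after the encoded part: no match
```

The `$` anchor is what rejects links with a trailing numeric id, restricting the crawl to plain item pages.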
# +
# find valid urls
sub_urls = soup.find_all("a", {"target": "_blank", "href": re.compile("/item/(%.{2})+$")})
if len(sub_urls) != 0:
his.append(random.sample(sub_urls, 1)[0]['href'])
else:
# no valid sub link found
his.pop()
print(his)
# -
# **Put everything together and run 20 random iterations. See where we end up.**
# +
his = ["/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711"]
for i in range(20):
url = base_url + his[-1]
html = urlopen(url).read().decode('utf-8')
soup = BeautifulSoup(html, features='lxml')
print(i, soup.find('h1').get_text(), ' url: ', his[-1])
# find valid urls
sub_urls = soup.find_all("a", {"target": "_blank", "href": re.compile("/item/(%.{2})+$")})
if len(sub_urls) != 0:
his.append(random.sample(sub_urls, 1)[0]['href'])
else:
# no valid sub link found
his.pop()
# -
| notebook/2-4-practice-baidu-baike.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import tensorflow as tf
import numpy as np
from datetime import datetime as dt
import numpy.random as rnd
from sklearn.preprocessing import StandardScaler
from tensorflow.examples.tutorials.mnist import input_data
import sys
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
def plot_image(image, shape=[28, 28]):
plt.imshow(image.reshape(shape), cmap="Greys", interpolation="nearest")
plt.axis("off")
def plot_multiple_images(images, n_rows, n_cols, pad=2):
images = images - images.min() # make the minimum == 0, so the padding looks white
w,h = images.shape[1:]
image = np.zeros(((w+pad)*n_rows+pad, (h+pad)*n_cols+pad))
for y in range(n_rows):
for x in range(n_cols):
image[(y*(h+pad)+pad):(y*(h+pad)+pad+h),(x*(w+pad)+pad):(x*(w+pad)+pad+w)] = images[y*n_cols+x]
plt.imshow(image, cmap="Greys", interpolation="nearest")
plt.axis("off")
def show_reconstructed_digits(X, outputs, model_path = None, n_test_digits = 2):
sample = np.random.randint(0, mnist.test.num_examples - n_test_digits)
with tf.Session() as sess:
if model_path:
saver.restore(sess, model_path)
X_test = mnist.test.images[sample:sample + n_test_digits]
outputs_val = outputs.eval(feed_dict={X: X_test})
fig = plt.figure(figsize=(8, 3 * n_test_digits))
for digit_index in range(n_test_digits):
plt.subplot(n_test_digits, 2, digit_index * 2 + 1)
plot_image(X_test[digit_index])
plt.subplot(n_test_digits, 2, digit_index * 2 + 2)
plot_image(outputs_val[digit_index])
def show_reconstructed_noisy_digits(X, X_noisy,outputs, model_path = None, n_test_digits = 2):
sample = np.random.randint(0, mnist.test.num_examples - n_test_digits)
with tf.Session() as sess:
if model_path:
saver.restore(sess, model_path)
X_test = mnist.test.images[sample:sample + n_test_digits]
X_test_noisy = X_noisy.eval(feed_dict={X: X_test})
outputs_val = outputs.eval(feed_dict={X: X_test})
fig = plt.figure(figsize=(8, 3 * n_test_digits))
for digit_index in range(n_test_digits):
plt.subplot(n_test_digits, 2, digit_index * 2 + 1)
plot_image(X_test_noisy[digit_index])
plt.subplot(n_test_digits, 2, digit_index * 2 + 2)
plot_image(outputs_val[digit_index])
# +
rnd.seed(4)
m = 200
w1, w2 = 0.1, 0.3
noise = 0.1
angles = rnd.rand(m) * 3 * np.pi / 2 - 0.5
data = np.empty((m, 3))
data[:, 0] = np.cos(angles) + np.sin(angles)/2 + noise * rnd.randn(m) / 2
data[:, 1] = np.sin(angles) * 0.7 + noise * rnd.randn(m) / 2
data[:, 2] = data[:, 0] * w1 + data[:, 1] * w2 + noise * rnd.randn(m)
scaler = StandardScaler()
X_train = scaler.fit_transform(data[:100])
X_test = scaler.transform(data[100:])
# +
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6+
ax.plot(data[:, 0], data[:, 1], data[:, 2], "b.", label="data")
ax.legend()
plt.show()
# +
reset_graph()
n_inputs = 3
n_hidden = 2
n_outputs = n_inputs
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape = [None, n_inputs])
hidden = tf.contrib.layers.fully_connected(X, n_hidden, activation_fn = None)
outputs = tf.contrib.layers.fully_connected(hidden, n_outputs, activation_fn = None)
codings = hidden
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(reconstruction_loss)
init = tf.global_variables_initializer()
# +
n_iterations = 1000
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
training_op.run(feed_dict = {X : X_train})
codings_val = codings.eval(feed_dict = {X : X_test})
fig = plt.figure(figsize=(10,8))
plt.plot(codings_val[:,0], codings_val[:, 1], "b.")
plt.show()
# -
# # Stacked Autoencoder
mnist = input_data.read_data_sets("/tmp/data/")
# +
reset_graph()
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.001
X = tf.placeholder(tf.float32, shape = [None, n_inputs])
with tf.contrib.framework.arg_scope([tf.contrib.layers.fully_connected], activation_fn = tf.nn.elu,
weights_initializer = tf.contrib.layers.variance_scaling_initializer(),
weights_regularizer = tf.contrib.layers.l2_regularizer(l2_reg)):
hidden1 = tf.contrib.layers.fully_connected(X, n_hidden1)
hidden2 = tf.contrib.layers.fully_connected(hidden1, n_hidden2)
hidden3 = tf.contrib.layers.fully_connected(hidden2, n_hidden3)
outputs = tf.contrib.layers.fully_connected(hidden3, n_outputs, activation_fn = None)
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([reconstruction_loss] + reg_losses)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# +
n_epochs = 5
batch_size = 150
with tf.Session() as sess:
init.run()
n_batches = mnist.train.num_examples // batch_size
for epoch in range(n_epochs):
for iteration in range(n_batches):
X_batch, _ = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict = {X: X_batch})
save_path = saver.save(sess, "./CH15/models/stacked_autoencoder_mnist.ckpt")
# -
show_reconstructed_digits(X, outputs, "./CH15/models/stacked_autoencoder_mnist.ckpt")
# ## Tying weights
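Weight tying means the decoder reuses the transposed encoder matrices instead of learning its own, halving the number of weight parameters. A framework-agnostic NumPy sketch of the tied forward pass (sizes and initialization chosen for illustration; the TensorFlow cell below is the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden1, n_hidden2 = 784, 300, 150

# Only the encoder weights are free parameters...
W1 = rng.normal(scale=0.01, size=(n_inputs, n_hidden1))
W2 = rng.normal(scale=0.01, size=(n_hidden1, n_hidden2))
# ...the decoder reuses their transposes (biases, which stay untied, are omitted here).
W3, W4 = W2.T, W1.T

def elu(x):
    return np.where(x > 0, x, np.expm1(x))

X = rng.normal(size=(5, n_inputs))
h1 = elu(X @ W1)
h2 = elu(h1 @ W2)      # codings
h3 = elu(h2 @ W3)
outputs = h3 @ W4      # reconstruction, same shape as X
print(outputs.shape)
```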
# +
reset_graph()
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.001
X = tf.placeholder(tf.float32, shape = [None, n_inputs])
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights1 = tf.Variable(weights1_init, dtype = tf.float32, name = "weights1")
weights2 = tf.Variable(weights2_init, dtype = tf.float32, name = "weights2")
weights3 = tf.transpose(weights2, name = "weights3")
weights4 = tf.transpose(weights1, name = "weights4")
biases1 = tf.Variable(tf.zeros(n_hidden1), name = "biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name = "biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name = "biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name = "biases4")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
reg_losses = regularizer(weights1) + regularizer(weights2)
loss = reconstruction_loss + reg_losses
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# +
n_epochs = 5
batch_size = 150
with tf.Session() as sess:
    init.run()
    n_batches = mnist.train.num_examples // batch_size
    for epoch in range(n_epochs):
        for iteration in range(n_batches):
            X_batch, _ = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict = {X: X_batch})
    weights1_val = weights1.eval()
    for i in range(5):
        plt.subplot(1, 5, i + 1)
        plot_image(weights1_val.T[i])
    save_path = saver.save(sess, "./CH15/models/stacked_autoencoder_with_tied_weights_mnist.ckpt")
# -
show_reconstructed_digits(X, outputs, "./CH15/models/stacked_autoencoder_with_tied_weights_mnist.ckpt")
# ## Denoising autoencoder
# +
reset_graph()
n_inputs = 28 * 28
n_hidden1 = n_inputs
n_hidden2 = n_inputs
n_hidden3 = n_inputs
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.001
X = tf.placeholder(tf.float32, shape = [None, n_inputs])
X_noisy = X + tf.random_normal(tf.shape(X), stddev=0.05)
with tf.contrib.framework.arg_scope(
        [tf.contrib.layers.fully_connected],
        activation_fn = tf.nn.elu,
        weights_initializer = tf.contrib.layers.variance_scaling_initializer(),
        weights_regularizer = tf.contrib.layers.l2_regularizer(l2_reg)):
    hidden1 = tf.contrib.layers.fully_connected(X_noisy, n_hidden1)
    hidden2 = tf.contrib.layers.fully_connected(hidden1, n_hidden2)
    hidden3 = tf.contrib.layers.fully_connected(hidden2, n_hidden3)
    outputs = tf.contrib.layers.fully_connected(hidden3, n_outputs, activation_fn = None)
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([reconstruction_loss] + reg_losses)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# +
n_epochs = 5
batch_size = 150
with tf.Session() as sess:
    init.run()
    n_batches = mnist.train.num_examples // batch_size
    for epoch in range(n_epochs):
        for iteration in range(n_batches):
            X_batch, _ = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict = {X: X_batch})
    save_path = saver.save(sess, "./CH15/models/stacked_denoising_autoencoder_mnist.ckpt")
# -
show_reconstructed_noisy_digits(X, X_noisy, outputs, "./CH15/models/stacked_denoising_autoencoder_mnist.ckpt")
# ## Sparse autoencoder
# +
def kl_divergence(p, q):
    """Kullback-Leibler divergence between two Bernoulli distributions."""
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))
reset_graph()
n_inputs = 28 * 28
n_hidden1 = 1000
n_outputs = n_inputs
sparsity_target = 0.1
sparsity_weight = 0.2
learning_rate = 0.01
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape = [None, n_inputs])
hidden1 = tf.contrib.layers.fully_connected(X, n_hidden1, activation_fn = tf.nn.sigmoid)
outputs = tf.contrib.layers.fully_connected(hidden1, n_outputs, activation_fn = None)  # no weight regularizer here: the one defined before reset_graph() is stale, and its penalty was never added to `loss`
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
hidden1_mean = tf.reduce_mean(hidden1, axis = 0)
sparsity_loss = tf.reduce_sum(kl_divergence(sparsity_target, hidden1_mean))
loss = reconstruction_loss + sparsity_weight * sparsity_loss
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# +
n_epochs = 100
batch_size = 1000
with tf.Session() as sess:
    init.run()
    n_batches = mnist.train.num_examples // batch_size
    for epoch in range(n_epochs):
        for iteration in range(n_batches):
            X_batch, _ = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict = {X: X_batch})
    save_path = saver.save(sess, "./CH15/models/sparse_autoencoder_mnist.ckpt")
# -
show_reconstructed_digits(X, outputs, "./CH15/models/sparse_autoencoder_mnist.ckpt")
# ## Variational autoencoder
# +
reset_graph()
n_inputs = 28 * 28
n_hidden1 = 500
n_hidden2 = 500
n_hidden3 = 20 # codings
n_hidden4 = n_hidden2
n_hidden5 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_inputs])
with tf.contrib.framework.arg_scope(
        [tf.contrib.layers.fully_connected],
        activation_fn = tf.nn.elu,
        weights_initializer = tf.contrib.layers.variance_scaling_initializer()):
    hidden1 = tf.contrib.layers.fully_connected(X, n_hidden1)
    hidden2 = tf.contrib.layers.fully_connected(hidden1, n_hidden2)
    hidden3_mean = tf.contrib.layers.fully_connected(hidden2, n_hidden3, activation_fn=None)
    hidden3_gamma = tf.contrib.layers.fully_connected(hidden2, n_hidden3, activation_fn=None)
    noise = tf.random_normal(tf.shape(hidden3_gamma), dtype=tf.float32)
    hidden3 = hidden3_mean + tf.exp(0.5 * hidden3_gamma) * noise  # reparameterization trick
    hidden4 = tf.contrib.layers.fully_connected(hidden3, n_hidden4)
    hidden5 = tf.contrib.layers.fully_connected(hidden4, n_hidden5)
    logits = tf.contrib.layers.fully_connected(hidden5, n_outputs, activation_fn=None)
outputs = tf.sigmoid(logits)
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits)
reconstruction_loss = tf.reduce_sum(xentropy)
latent_loss = 0.5 * tf.reduce_sum(
tf.exp(hidden3_gamma) + tf.square(hidden3_mean) - 1 - hidden3_gamma)
loss = reconstruction_loss + latent_loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# +
n_digits = 60
n_epochs = 50
batch_size = 150
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = mnist.train.num_examples // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")  # not shown in the book
            sys.stdout.flush()  # not shown
            X_batch, _ = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X: X_batch})
        loss_val, reconstruction_loss_val, latent_loss_val = sess.run(
            [loss, reconstruction_loss, latent_loss], feed_dict={X: X_batch})  # not shown
        print("\r{}".format(epoch), "Train total loss:", loss_val, "\tReconstruction loss:", reconstruction_loss_val, "\tLatent loss:", latent_loss_val)  # not shown
        saver.save(sess, "./my_model_variational.ckpt")  # not shown
    codings_rnd = np.random.normal(size=[n_digits, n_hidden3])
    outputs_val = outputs.eval(feed_dict={hidden3: codings_rnd})
# -
# (source notebook: CH15 - Autoencoders.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="beIMs03HQHvo"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
# + id="KywW8_-bQeyf"
base = pd.read_csv('credit_card_clients.csv', header=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 275} id="5l0uBEgdQuCN" outputId="e1e47bf5-b52a-45e1-df5f-4bfa7b555401"
base.head()
# + [markdown] id="Ij9_QyJZUVcc"
#
# + id="41Cm7dGzQ1rP"
bill_cols = ['BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6']
base['BILL_TOTAL'] = base[bill_cols].sum(axis=1)
# + id="bn9s1W2VUxyV"
x = base.iloc[:, [1,25]].values
# + id="HjXHHHAZVBI6"
scaler = StandardScaler()
x = scaler.fit_transform(x)
# + id="nBtwOmc6VJo6"
wcss = list()
# + id="15TvkYV7Vkpt"
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, random_state=0)
    kmeans.fit(x)
    wcss.append(kmeans.inertia_)
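# As a sanity check on what `inertia_` measures (a sketch on made-up points, separate from the credit-card data): it is the within-cluster sum of squared distances from each point to its assigned centroid, the quantity the elbow plot tracks.

```python
import numpy as np
from sklearn.cluster import KMeans

pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
km = KMeans(n_clusters=2, random_state=0, n_init=10).fit(pts)

# recompute the within-cluster sum of squares by hand
manual = sum(np.sum((pts[km.labels_ == k] - c) ** 2)
             for k, c in enumerate(km.cluster_centers_))
assert np.isclose(manual, km.inertia_)
```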
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="w_Y5ToleXbmK" outputId="b7d8b79e-1b0e-4478-a2c5-0b79b3da7077"
plt.plot(range(1, 11), wcss)
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
# + id="gP9DTuq3fvkd"
kmean = KMeans(n_clusters=4, random_state=0)
previsoes = kmean.fit_predict(x)
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="qWqrlut2f6y-" outputId="7a722709-45ff-41c3-9448-671b724dc153"
plt.scatter(x[previsoes == 0, 0], x[previsoes == 0 , 1], s=100, c='red', label='cluster - 1')
plt.scatter(x[previsoes == 1, 0], x[previsoes == 1 , 1], s=100, c='orange', label='cluster - 2')
plt.scatter(x[previsoes == 2, 0], x[previsoes == 2 , 1], s=100, c='green', label='cluster - 3')
plt.scatter(x[previsoes == 3, 0], x[previsoes == 3 , 1], s=100, c='blue', label='cluster - 4')
plt.xlabel('Limit')
plt.ylabel('Spending')
plt.legend()
# + id="UfjYT--Bg4lS"
lista_client = np.column_stack((base, previsoes))
lista_client = lista_client[lista_client[:,26].argsort()]
# + colab={"base_uri": "https://localhost:8080/"} id="b-Ut74mthvBg" outputId="154ac935-0023-4a79-97b0-ac3dfb1d147c"
lista_client
# (source notebook: kmeans_credito.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Text Analysis
# ---
#
# # Table of Contents
#
# - [Introduction](#Introduction)
#
# - [Learning Outcomes](#Learning-Outcomes)
# - [Glossary of Terms](#Glossary-of-Terms)
#
# - [Setup](#Setup)
# - [Data Orientation](#Data-Orientation)
#
# - [Load the Data](#Load-the-Data)
# - [Explore the Data](#Explore-the-Data)
# - [How many facilities and types of facilities are in this dataset?](#How-many-facilities-and-types-of-facilities-are-in-this-dataset?)
#
# - [Topic Modeling](#Topic-Modeling)
#
# - [Preparing Text Data for NLP](#Preparing-Text-Data-for-NLP)
#
# - [Creating a matrix of features from text - Bag of N-gram Example](#Creating-a-matrix-of-features-from-text---Bag-of-N-gram-Example)
#
# - **_[Exercise 1 - convert corpus to matrix](#Exercise-1---convert-corpus-to-matrix)_**
#
# - [Calculating Word Counts](#Calculating-Word-Counts)
#
# - **_[Exercise 2 - getting word counts](#Exercise-2---getting-word-counts)_**
#
# - [Creating Text Corpus - choosing text to analyze](#Creating-Text-Corpus---choosing-text-to-analyze)
# - [Text Cleaning and Normalization](#Text-Cleaning-and-Normalization)
#
# - [First Description, Before Cleaning](#First-Description,-Before-Cleaning)
# - [First Description, After Cleaning](#First-Description,-After-Cleaning)
#
# - [Tokenizing text - breaking it into pieces](#Tokenizing-text---breaking-it-into-pieces)
# - [Removing meaningless text - Stopwords](#Removing-meaningless-text---Stopwords)
#
# - **_[Exercise 3 - practicing slicing](#Exercise-3---practicing-slicing)_**
#
# - [Topic Modeling on Cleaned Data](#Topic-Modeling-on-Cleaned-Data)
#
# - [Stemming and Lemmatization - Distilling text data](#Stemming-and-Lemmatization---Distilling-text-data)
# - [N-grams - Adding context by creating N-grams](#N-grams---Adding-context-by-creating-N-grams)
# - [TF-IDF - Weighting terms based on frequency](#TF-IDF---Weighting-terms-based-on-frequency)
# - **_[Exercise 4 - Refining a topic model](#Exercise-4---Refining-a-topic-model)_**
# - **_[Exercise 5 - Interpreting a model's "topics"](#Exercise-5---Interpreting-a-model's-"topics")_**
#
# - [Supervised Learning: Document Classification](#Supervised-Learning:-Document-Classification)
#
# - [Supervised Learning - Prepare the Data](#Supervised-Learning---Prepare-the-Data)
# - [Prepare Data for Document Classification](#Prepare-Data-for-Document-Classification)
# - [Model Training - Train Document Classification Model](#Model-Training---Train-Document-Classification-Model)
# - [Model Evaluation - Precision and Recall](#Model-Evaluation---Precision-and-Recall)
# - [Model Evaluation - Feature Importances](#Model-Evaluation---Feature-Importances)
#
# - **_[Exercise 6 - interpreting feature importances](#Exercise-6---interpreting-feature-importances)_**
#
# - [Model Evaluation - Cross-validation](#Model-Evaluation---Cross-validation)
#
# - **_[Exercise 7 - Try a 5-fold cross-validation](#Exercise-7---Try-a-5-fold-cross-validation)_**
#
# - [Model Output - Examples of Document Classification](#Model-Output---Examples-of-Document-Classification)
#
# - [Further Resources](#Further-Resources)
# - Back to [Table of Contents](#Table-of-Contents)
# ---
#
# ## Introduction
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# **Text analysis** is used to extract useful information from or summarize a large amount of unstructured text stored in documents. This opens up the opportunity of using text data alongside more conventional data sources (e.g. surveys and administrative data). The goal of text analysis is to take a large corpus of complex and unstructured text data and extract important and meaningful messages in a comprehensible way.
#
# Text analysis can help with the following tasks:
#
# * **Information Retrieval**: Find relevant information in a large database, such as a systematic literature review, that would be very time-consuming for humans to do manually.
#
# * **Clustering and Text Categorization**: Summarize a large corpus of text by finding the most important phrases, using methods like topic modeling.
#
# * **Text Summarization**: Create category-sensitive text summaries of a large corpus of text.
#
# * **Machine Translation**: Translate documents from one language to another.
#
# In this tutorial, we are going to analyze social services descriptions, using topic modeling to examine the content of our data and document classification to tag the type of facility each description comes from.
#
# ## Learning Outcomes
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# In this tutorial, you will...
# * Learn how to transform a corpus of text into a structured matrix format so that we can apply natural language processing (NLP) methods
# * Learn the basics and applications of topic modeling
# * Learn how to do document tagging and evaluate the results
#
#
# ## Glossary of Terms
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Glossary of Terms:
#
# * **Corpus**: A corpus is the set of all text documents used in your analysis; for example, your corpus of text may include hundreds of research articles.
#
# * **Tokenize**: Tokenization is the process by which text is separated into meaningful terms or phrases. In English this is easy to do for individual words, as they are separated by whitespace; however, it can get more complicated to automate determining which groups of words constitute meaningful phrases.
#
# * **Stemming**: Stemming is normalizing text by reducing all forms or conjugations of a word to the word's most basic form. In English, this can mean making a rule of removing the suffixes "ed" or "ing" from the end of all words, but it gets more complex. For example, "to go" is irregular, so you need to tell the algorithm that "went" and "goes" stem from a common lemma, and should be considered alternate forms of the word "go."
#
# * **TF-IDF**: TF-IDF (term frequency-inverse document frequency) is an example of feature engineering where the most important words are extracted by taking into account their frequency in individual documents and in the corpus of documents as a whole.
#
# * **Topic Modeling**: Topic modeling is an unsupervised learning method where groups of words that often appear together are clustered into topics. Typically, the words in one topic should be related and make sense (e.g. boat, ship, captain). Individual documents can fall under one topic or multiple topics.
#
# * **LDA**: LDA (Latent Dirichlet Allocation) is a type of probabilistic model commonly used for topic modeling.
#
# * **Stop Words**: Stop words are words that have little semantic meaning but occur very frequently, like prepositions, articles and common nouns. For example, every document (in English) will probably contain the words "and" and "the" many times. You will often remove them as part of preprocessing using a list of stop words.
#
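# The TF-IDF weighting described in the glossary can be computed by hand. Here is a minimal sketch on a made-up three-document corpus (the words and numbers are illustrative, not from the tutorial's data):

```python
import math

docs = [["cat", "sat", "mat"],
        ["cat", "cat", "hat"],
        ["dog", "sat"]]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / float(len(doc))             # frequency within the document
    n_containing = sum(term in d for d in corpus)
    idf = math.log(len(corpus) / float(n_containing))  # rarer terms score higher
    return tf * idf

# "dog" appears in only one document, so it outweighs the more common "sat"
assert tf_idf("dog", docs[2], docs) > tf_idf("sat", docs[2], docs)
```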
# # Setup
#
# - Back to [Table of Contents](#Table-of-Contents)
# +
# %pylab inline
import nltk
import ujson
import re
import time
import progressbar
import pandas as pd
from __future__ import print_function
from six.moves import zip, range
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, roc_auc_score, auc
from sklearn import preprocessing
from collections import Counter, OrderedDict
from nltk.corpus import stopwords
from nltk import SnowballStemmer
# +
#nltk.download('stopwords') #download the latest stopwords
# -
# # Data Orientation
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Our dataset for this tutorial is a set of descriptions of social services in PLACE. Details on the source data, and on how the subset we're using was created, can be found in the `data` folder in this tutorial.
#
# ## Load the Data
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# To start, we'll load the data into a pandas DataFrame from a CSV file.
df_socialservices_data = pd.read_csv('./data/socialservices.csv')
# ## Explore the Data
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Our text data table has 7 fields:
#
# - `FACID` - unique ID for each facility.
# - `facname` - name of the current facility.
# - `factype` - type of the current facility.
# - `facurl` - URL of facility's main web page.
# - `facloc` - Location of the facility.
# - `abouturl` - URL of "about" page used to collect text.
# - `textfromurl` - Text collected from about URL.
#
# Let's take a look at examples of the values:
df_socialservices_data.head()
# ## How many facilities and types of facilities are in this dataset?
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Next, let's get an idea of the contents of this data set. First, an overview:
# overview of contents of data file
df_socialservices_data.info()
# list unique facility types
df_socialservices_data.factype.unique()
# list unique facility names
df_socialservices_data.facname.unique()
# count of unique facility names
df_socialservices_data.facname.unique().shape
# There are X facilities, categorized into Y unique facility types: education, income, health, and safety net.
#
#
# # Topic Modeling
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# We are going to apply topic modeling, an unsupervised learning method, to our corpus to find the high-level topics in our corpus as a "first go" for exploring our data. Through this process, we'll discuss how to clean and preprocess our data to get the best results.
#
# Topic modeling is a broad subfield of machine learning and natural language processing. We are going to focus on a common modeling approach called Latent Dirichlet Allocation (LDA).
#
# To use topic modeling, we first have to assume that topics exist in our corpus, and that some small number of these topics can "explain" the corpus. Topics in this context refer to words from the corpus, in a list that is ranked by probability. A single document can be explained by multiple topics. For instance, an article on net neutrality would fall under the topic "technology" as well as the topic "politics." The set of topics used by a document is known as the document's allocation, hence the name Latent Dirichlet Allocation: each document has an allocation of latent topics drawn from a Dirichlet distribution.
#
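# To make the idea concrete before we apply it to real data, here is a toy LDA run on a hand-made word-count matrix (an illustrative sketch; note that recent versions of scikit-learn name the topic-count argument `n_components` rather than `n_topics`):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# 4 documents x 6 words; columns: boat, ship, captain, vote, senate, law
X = np.array([[3, 2, 1, 0, 0, 0],
              [2, 3, 2, 0, 0, 0],
              [0, 0, 0, 3, 2, 2],
              [0, 0, 1, 2, 3, 2]])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)  # each row is a document's topic mixture

# rows are probability distributions over the 2 topics
assert np.allclose(doc_topic.sum(axis=1), 1.0)
```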
# ## Preparing Text Data for NLP
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# The first important step in working with text data is cleaning and processing the data, which includes (but is not limited to):
#
# - forming a corpus of text
# - tokenization
# - removing stop-words
# - finding words co-located together (N-grams)
# - stemming and lemmatization
#
# Each of these steps will be discussed below.
#
# The ultimate goal is to transform our text data into a form an algorithm can work with, because a document or a corpus of text cannot be fed directly into an algorithm. Algorithms expect numerical feature vectors with certain fixed sizes, and can't handle documents, which are basically sequences of symbols with variable length. We will be transforming our text corpus into a *bag of n-grams* to be further analyzed. In this form our text data is represented as a matrix where each row refers to a specific document (here, a facility description) and each column is the occurrence of a word (feature).
# ### Creating a matrix of features from text - Bag of N-gram Example
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Ultimately, we want to take our collection of documents, corpus, and convert it into a matrix. Fortunately, `sklearn` has a pre-built object, `CountVectorizer`, that can tokenize, eliminate stopwords, identify n-grams, and stem our corpus, and output a matrix in one step. Before we apply the vectorizer to our corpus of data, let's apply it to a toy example so that we see what the output looks like and how a bag of words is represented.
def create_bag_of_words(corpus,
                        NGRAM_RANGE=(1, 1),
                        stop_words=None,
                        stem=False,
                        MIN_DF=0.05,
                        MAX_DF=0.95,
                        USE_IDF=False):
    """
    Turn a corpus of text into a bag-of-words.

    Parameters
    -----------
    corpus: ls
        list of documents in corpus
    NGRAM_RANGE: tuple
        range of N-grams. Default (1,1) (unigrams only)
    stop_words: ls
        list of commonly occurring words that have little semantic
        value
    stem: bool
        use a stemmer to stem words
    MIN_DF: float
        exclude words that have a frequency less than the threshold
    MAX_DF: float
        exclude words that have a frequency greater than the threshold
    USE_IDF: bool
        re-weight the raw counts with TF-IDF

    Returns
    -------
    bag_of_words: scipy sparse matrix
        scipy sparse matrix of text
    features:
        ls of words
    """
    # parameters for vectorizer
    ANALYZER = "word"  # unit of features are single words rather than phrases of words
    STRIP_ACCENTS = 'unicode'
    stemmer = nltk.SnowballStemmer("english")
    if stem:
        tokenize = lambda x: [stemmer.stem(i) for i in x.split()]
    else:
        tokenize = None
    vectorizer = CountVectorizer(analyzer=ANALYZER,
                                 tokenizer=tokenize,
                                 ngram_range=NGRAM_RANGE,
                                 stop_words=stop_words,
                                 strip_accents=STRIP_ACCENTS,
                                 min_df=MIN_DF,
                                 max_df=MAX_DF)
    bag_of_words = vectorizer.fit_transform(corpus)  # transform our corpus into a bag of words
    features = vectorizer.get_feature_names()
    if USE_IDF:
        NORM = None  # no document-length normalization
        SMOOTH_IDF = True  # prevents division-by-zero errors
        SUBLINEAR_IDF = True  # replace TF with 1 + log(TF)
        transformer = TfidfTransformer(norm=NORM,
                                       smooth_idf=SMOOTH_IDF,
                                       sublinear_tf=SUBLINEAR_IDF)
        # take the bag-of-words from the vectorizer and
        # then use TF-IDF to down-weight tokens found throughout the text
        tfidf = transformer.fit_transform(bag_of_words)
        return tfidf, features
    else:
        return bag_of_words, features
# create example corpus.
toy_corpus = [ 'this is document one', 'this is document two', 'text analysis on documents is fun' ]
# convert to bag of words
toy_bag_of_words, toy_features = create_bag_of_words( toy_corpus )
# review - our corpus:
toy_corpus
# features derived from the corpus
toy_features
# bag of words that results:
np_bag_of_words = toy_bag_of_words.toarray()
np_bag_of_words
# Our data has been transformed from a document into a 3 x 9 matrix, where each row in the matrix corresponds to a document, and each column corresponds to a feature (in the order they appear in `toy_features`). A 1 indicates the existence of the feature or word in the document, and a 0 indicates the word is not present.
#
# It is very common that this representation will be a "sparse" matrix, or a matrix that has a lot of 0s. With sparse matrices, it is often more efficient to keep track of which values *aren't* 0 and where those non-zero entries are located, rather than to save the entire matrix. To save space, the `scipy` library has special ways of storing sparse matrices in an efficient way.
#
# Our toy corpus is now ready to be analyzed. We used this toy example to illustrate how a document is turned into a matrix to be used in text analysis. When you're applying this to real text data, the matrix will be much larger and harder to interpret, but it's important that you know the process.
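# A quick look (with a tiny hand-made matrix, not our corpus) at how `scipy` stores a sparse matrix: only the non-zero values and their coordinates are kept.

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[1, 0, 0, 2],
                  [0, 0, 3, 0]])
sparse = csr_matrix(dense)

print(sparse.nnz)  # 3 stored values instead of 8 cells
assert (sparse.toarray() == dense).all()  # converts back losslessly
```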
# #### Exercise 1 - convert corpus to matrix
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# To check your knowledge, make your own toy corpus and turn it into a matrix.
#solution
exercise_corpus = [ 'Batman is friends with Superman',
'Superman is enemies with <NAME>',
'Batman is enemies with <NAME>' ]
exercise_bag_of_words, exercise_features = create_bag_of_words( exercise_corpus )
# convert bag of words to array
np_bag_of_words = exercise_bag_of_words.toarray()
# show features:
exercise_features
# output derived bag of words:
np_bag_of_words
# ---
# ### Calculating Word Counts
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# As an initial look into the data, we can examine the most frequently occuring words in our corpus. We can sum the columns of the bag_of_words and then convert to a numpy array. From here we can zip the features and word_count into a dictionary, and display the results.
def get_word_counts(bag_of_words, feature_names):
    """
    Get the ordered word counts from a bag_of_words

    Parameters
    ----------
    bag_of_words: obj
        scipy sparse matrix from CountVectorizer
    feature_names: ls
        list of words

    Returns
    -------
    word_counts: dict
        Dictionary of word counts
    """
    # convert bag of words to array
    np_bag_of_words = bag_of_words.toarray()
    # calculate word count.
    word_count = np.sum(np_bag_of_words, axis=0)
    # convert to flattened array.
    np_word_count = np.asarray(word_count).ravel()
    # create dict of words mapped to count of occurrences of each word.
    dict_word_counts = dict(zip(feature_names, np_word_count))
    # create ordered dictionary, sorted by descending count.
    orddict_word_counts = OrderedDict(
        sorted(dict_word_counts.items(), key=lambda x: x[1], reverse=True))
    return orddict_word_counts
# get ordered word counts for our example corpus.
get_word_counts(toy_bag_of_words, toy_features)
# Note that the words "document" and "documents" both appear separately in the list. Should they be treated as the same words, since one is just the plural of the other, or should they be considered distinct words? These are the types of decisions you will have to make in your preprocessing steps.
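# One way to resolve the "document" vs. "documents" question is stemming, previewed here with the SnowballStemmer the tutorial imports (a quick sketch; stemming is applied to the full corpus later on):

```python
from nltk import SnowballStemmer

stemmer = SnowballStemmer("english")
# both forms collapse to the same token
print(stemmer.stem("document"), stemmer.stem("documents"))
```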
# #### Exercise 2 - getting word counts
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Get the word counts of your exercise corpus.
#
get_word_counts(exercise_bag_of_words, exercise_features)
# ### Creating Text Corpus - choosing text to analyze
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# First we need to form our corpus, or the set of all descriptions from all websites. We can pull out the array of descriptions from the data frame using the data frame's `.values` attribute.
corpus = df_socialservices_data['textfromurl'].values #pull all the descriptions and put them in a numpy array
corpus
# ### Model topics with LDA
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# As a start to modeling topics, we define below a function "create_topics" that uses Latent Dirichlet Allocation (LDA) to find topics.
def create_topics(tfidf, features, N_TOPICS=3, N_TOP_WORDS=5):
    """
    Given a matrix of features of text data generate topics

    Parameters
    -----------
    tfidf: scipy sparse matrix
        sparse matrix of text features
    N_TOPICS: int
        number of topics (default 3)
    N_TOP_WORDS: int
        number of top words to display in each topic (default 5)

    Returns
    -------
    ls_keywords: ls
        list of keywords for each topic
    doctopic: array
        numpy array with percentages of topic that fit each category
    """
    with progressbar.ProgressBar(max_value=progressbar.UnknownLength) as bar:
        step = 0
        lda = LatentDirichletAllocation(n_topics=N_TOPICS,
                                        learning_method='online')  # object that will find N_TOPICS topics
        bar.update(step)
        step += 1
        doctopic = lda.fit_transform(tfidf)
        bar.update(step)
        step += 1
        ls_keywords = []
        for topic_idx, topic in enumerate(lda.components_):
            word_idx = np.argsort(topic)[::-1][:N_TOP_WORDS]
            keywords = ', '.join(features[i] for i in word_idx)
            ls_keywords.append(keywords)
            print(topic_idx, keywords)
            bar.update(step)
            step += 1
    return ls_keywords, doctopic
# Create a bag of words and set of features from our social services corpus:
corpus_bag_of_words, corpus_features = create_bag_of_words(corpus)
# Let's examine our features.
corpus_features
# The first aspect to notice about the feature list is that the first few entries are numbers that have no real semantic meaning. The feature lists also includes numerous other useless words, such as prepositions and articles, that will just add noise to our analysis.
#
# We can also notice the words *action* and *activities*, or the words *addition* and *additional*, are close enough to each other that it might not make sense to treat them as entirely separate words. Part of your cleaning and preprocessing duties will be manually inspecting your lists of features, seeing where these issues arise, and making decisions to either remove them from your analysis or address them separately.
#
# Let's get the count of the number of times that each of the words appears in our corpus.
get_word_counts( corpus_bag_of_words, corpus_features )
# Our top words are articles, prepositions and conjunctions that are not informative whatsoever, so we're probably not going to come up with anything interesting ("garbage in, garbage out").
#
# Nevertheless, let's forge blindly ahead and try to create topics, and see the quality of the results that we get.
ls_corpus_keywords, corpus_doctopic = create_topics(corpus_bag_of_words, corpus_features)
# These topics don't give us any real insight to what the data contains - one of the topics is "and, the, to, of, in"! There are some hints to the subjects of the websites ("YWCA", "youth") and their locations ("Evanston"), but the signal is being swamped by the noise.
#
# The word "click" also comes up. This word might be useful in some contexts, but since we scraped this data from websites, it's likely that "click" is more related to the website itself (e.g. "Click here to find out more") as opposed to the content of the website.
#
# We'll have to clean and process our data to get any meaningful information out of this text.
# ### Text Cleaning and Normalization
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# To clean and normalize text, we'll remove all special characters, numbers, and punctuation, so we're left with only the words themselves. Then we will make all the text lowercase; this uniformity will ensure that the algorithm doesn't treat "the" and "The" as different words, for example.
#
# To remove the special characters, numbers and punctuation we will use regular expressions.
#
#
# **Regular Expressions**, or "regexes" for short, let you find all the words or phrases in a document or text file that match a certain pattern. These rules are useful for pulling out useful information from a large amount of text. For example, if you want to find all email addresses in a document, you might look for everything that looks like *some combination of letters, _, .* followed by *@*, followed by more letters, and ending in *.com* or *.edu*. If you want to find all the credit card numbers in a document, you might look for everywhere you see the pattern "four numbers, space, four numbers, space, four numbers, space, four numbers." Regexes are also helpful if you are scraping information from websites, because you can use them to separate the content from the HTML code used for formatting the website.
#
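# The email-address pattern described above, written as an actual regular expression (a deliberately simplified, illustrative version; real-world address matching is considerably messier):

```python
import re

# some combination of letters, underscores and dots, then @, more letters,
# ending in .com or .edu
EMAIL_RE = re.compile(r'[\w.]+@[A-Za-z]+\.(?:com|edu)')
text = "Contact jane_doe@example.edu or bob.smith@test.com for details."
matches = EMAIL_RE.findall(text)
print(matches)
```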
# A full tutorial on regular expressions is outside the scope of this tutorial, but many good tutorials can be found online. [regex101.com](https://regex101.com) is also a great interactive tool for developing and checking regular expressions.
#
# >"Some people, when confronted with a problem, think
# >'I know, I'll use regular expressions.' Now they have two problems."
# > -- <NAME>
#
# *A word of warning:* Regexes can work much more quickly than plain text sorting; however, if your regular expressions are becoming overly complicated, it's a good idea to find a simpler way to do what you want to do. Any developer should keep in mind there is a trade-off between optimization and understandability. The general philosophy of programming in Python is that your code is meant to be as understandable by *people* as much as possible, because human time is more valuable than computer time. You should therefore lean toward understandability rather than overly optimizing your code to make it run as quickly as possible. Your future-self, code-reviewers, people who inherit your code, and anyone else who has to make sense of your code in the future will appreciate it.
#
# For our purposes, we are going to use a regular expression to match all characters that are not letters -- punctuation, quotes, special characters and numbers -- and replace them with spaces. Then we'll make all of the remaining characters lowercase.
#
# We will be using the `re` library in python for regular expression matching.
# +
# get rid of punctuation and digits, and make everything lowercase
import re           # regular expressions (harmless if already imported earlier)
import numpy as np  # harmless if already imported earlier

RE_PREPROCESS = re.compile( r'\W+|\d+' ) # matches runs of non-word characters or digits
#the code below works by looping through the array of text ("corpus")
#for a given piece of text ( "description" ) we invoke the `re.sub` command
#the `re.sub` command takes 3 arguments: (1) the regular expression to match,
#(2) what we want to substitute in place of that matching string (' ', a space)
#and (3) the text we want to apply this to.
#we then invoke the `lower()` method on the output of the `re.sub` command
#to make all the remaining characters lowercase.
#the result is a list, where each entry in the list is a cleaned version of the
#corresponding entry in the original corpus.
#we then make the list into a numpy array to use it in analysis
processed_corpus = np.array( [ re.sub( RE_PREPROCESS, ' ', description ).lower() for description in corpus ] )
# -
# Next, let's look at an example of the results of this cleanup.
# #### First Description, Before Cleaning
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# First, we'll look at the first description in our corpus, before it was cleaned:
corpus[0]
# This text includes a lot of useful information, but also includes some things we don't want or need. There are some weird special characters (...). There are also some numbers, which are informative and interesting to a human reading the text (phone numbers, addresses, ...), but when we break down the documents into individual words, the numbers will become meaningless. We'll also want to remove all punctuation, so that we can say any two things separated by a space are individual words.
# #### First Description, After Cleaning
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Now, let's look at this text after cleaning:
processed_corpus[0]
# All lowercase, and all numbers and special characters have been removed. Our text is now normalized.
# ### Tokenizing text - breaking it into pieces
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Now that we've cleaned our text, we can *tokenize* it by deciding which words or phrases are the most meaningful. Normally the `CountVectorizer` handles this for us, but in this case, we'll split our text into individual words manually to show how it is done.
#
# To go from a whole document to a list of individual words, we can use the `.split()` command. By default, this command splits based on spaces in between words, so we don't need to specify that explicitly.
tokens = processed_corpus[0].split()
tokens
# ### Removing meaningless text - Stopwords
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Stopwords are words that are found commonly throughout a text and carry little semantic meaning. Examples of common stopwords are prepositions ("to", "on", "in"), articles ("the", "an", "a"), conjunctions ("and", "or", "but") and common nouns. For example, the words *the* and *of* are totally ubiquitous, so they won't serve as meaningful features, whether to distinguish documents from each other or to tell what a given document is about. You may also run into words that you want to remove based on where you obtained your corpus of text or what it's about. There are many lists of common stopwords available for you to use, both for general documents and for specific contexts, so you don't have to start from scratch.
#
# We can eliminate stopwords by checking all the words in our corpus against a list of commonly occurring stopwords that comes with NLTK.
eng_stopwords = stopwords.words('english')
eng_stopwords
#sample of stopwords
#this is an example of slicing where we implicitly start at the beginning and move to the end
#we select every 10th entry in the array
eng_stopwords[::10]
# Notice that this list includes "weren" and "hasn" as well as single letters ("t"). Why do you think these are contained in the list of stopwords?
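# One reason: the preprocessing regex we used earlier (`\W+|\d+`) treats the apostrophe as a non-word character, so contractions get split into fragments. A quick sketch:

```python
import re

RE_PREPROCESS = re.compile(r'\W+|\d+')  # same pattern we used for cleaning

# "weren't" splits into "weren" and "t", which is why both are stopwords
print(re.sub(RE_PREPROCESS, ' ', "They weren't here").lower().split())
# ['they', 'weren', 't', 'here']
```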
# #### Exercise 3 - practicing slicing
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Try slicing to retrieve every 5th word.
eng_stopwords[::5]
# ---
# ## Topic Modeling on Cleaned Data
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Now that we've cleaned up our data a little bit, let's see what our bag of words looks like.
# create bag of words from processed_corpus
processed_bag_of_words, processed_features = create_bag_of_words( processed_corpus, stop_words = eng_stopwords )
dict_processed_word_counts = get_word_counts( processed_bag_of_words, processed_features )
dict_processed_word_counts
# Much better! Now this is starting to look like a reasonable representation of our corpus of text.
#
# We mentioned that, in addition to stopwords that are common across all types of text analysis problems, there will also be specific stopwords based on the context of your domain. Notice how the top words include words like "..."? It makes sense that these words are so common - so they won't be very helpful in analysis.
#
# One quick way to remove some of these domain-specific stopwords is by dropping some of your most frequent words. We'll start out by dropping the top 20. You'll want to change this number, playing with making it bigger and smaller, to see how it affects your resulting topics.
# +
# get top 20 stopwords (slice from start through 20 items in list)
top_20_words = list(dict_processed_word_counts.keys())[:20]
# create new stopword list by combining default stopwords with our top 20.
domain_specific_stopwords = eng_stopwords + top_20_words
# make a new bag of words excluding custom stopwords.
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords)
# -
# what do we have now?
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
# This is a bit better - although we still see some words that are probably very common ("..."), words like "..." will probably help us come up with more specific categories within the broader realm of social services. Let's see what topics we produce.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features)
# Now we are starting to get somewhere! We can manipulate the number of topics we want to find and the number of words to use for each topic to see if we can understand more from our corpus.
# look for 5 topics, include 10 words in each.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 5,
N_TOP_WORDS= 10)
# Some structure is starting to reveal itself - .... Adding more topics has revealed larger subtopics. Let's see if increasing the number of topics gives us more information.
#
# However, we can see that .... are still present - This is an iterative process - after seeing the results of some analysis, you will need to go back to the preprocessing step and add more words to your list of stopwords or change how you cleaned the data.
#
# Now let's try 10 topics with 15 words each:
# 10 topics, 15 words each
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# This looks like a good amount of topics for now. Some of the top words are quite similar, like "...." Let's move to stemming and lemmatization.
# ### Stemming and Lemmatization - Distilling text data
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# We can further process our text through *stemming and lemmatization*, or replacing words with their root or simplest form. For example "systems," "systematic," and "system" are all different words, but we can replace all these words with "system" without sacrificing much meaning.
#
# - A **lemma** is the original dictionary form of a word (e.g. the lemma for "lies," "lied," and "lying" is "lie").
# - The process of turning a word into its simplest form is **stemming**. There are several well known stemming algorithms -- Porter, Snowball, Lancaster -- that all have their respective strengths and weaknesses.
#
# For this tutorial, we'll use the Snowball Stemmer:
# Examples of how a Stemmer works:
stemmer = SnowballStemmer("english")
print(stemmer.stem('lies'))
print(stemmer.stem("lying"))
print(stemmer.stem('systematic'))
print(stemmer.stem("running"))
# Let's try creating a bag of stemmed words.
# include stemming when creating our bag of words.
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True)
# create topics with stemmed words.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# What do we think of these topics?
# ### N-grams - Adding context by creating N-grams
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Obviously, reducing a document to a bag of words means losing much of its meaning - we put words in certain orders, and group words together in phrases and sentences, precisely to give them more meaning. If you follow the processing steps we've gone through so far, splitting your document into individual words and then removing stopwords, you'll completely lose all phrases like "kick the bucket," "commander in chief," or "sleeps with the fishes."
#
# One way to address this is to break down each document similarly, but rather than treating each word as an individual unit, treat each group of 2 words, or 3 words, or *n* words, as a unit. We call this a "bag of *n*-grams," where *n* is the number of words in each chunk. Then you can analyze which groups of words commonly occur together (in a fixed order).
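# Before applying this to the whole corpus, here is a minimal sketch of n-gram construction from a token list (independent of the notebook's helper functions):

```python
def ngrams(tokens, n):
    """Return all n-grams (as tuples of adjacent tokens) from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "commander in chief".split()
print(ngrams(tokens, 2))  # [('commander', 'in'), ('in', 'chief')]
```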
#
# Let's transform our corpus into a bag of n-grams with *n*=2: a bag of 2-grams, AKA a bag of bi-grams.
# +
# create bag of words with stemmed words and 2-grams (NGRAM_RANGE = (0, 2)).
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True,
NGRAM_RANGE=(0,2))
# Create topics.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# -
# We can see that this lets us uncover patterns that we couldn't when we just used a bag of words: "north shore" and "domest violenc" now show up as features. Note that the bag still includes the individual words as well as the bi-grams.
# ### TF-IDF - Weighting terms based on frequency
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# A final step in cleaning and processing our text data is **Term Frequency-Inverse Document Frequency (TF-IDF)**. TF-IDF is based on the idea that the words (or terms) that are most related to a certain topic will occur frequently in documents on that topic, and infrequently in unrelated documents. TF-IDF re-weights words so that we emphasize words that are unique to a document and suppress words that are common throughout the corpus by inversely weighting terms based on their frequency within the document and across the corpus.
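# As a self-contained sketch of this weighting, using scikit-learn's `TfidfVectorizer` on a made-up toy corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# a made-up three-document corpus
toy_corpus = [
    "housing assistance and housing counseling",
    "health clinic and health screenings",
    "job training and housing assistance",
]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(toy_corpus)   # sparse (documents x vocabulary) matrix
print(tfidf.shape)

# "and" appears in every document, so it gets the lowest IDF of any term
vocab = vec.vocabulary_
print(vec.idf_[vocab['and']] == min(vec.idf_))  # True
```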
#
# Let's look at how using TF-IDF affects our bag of words:
# create bag of words including TF-IDF weighting.
processed_bag_of_words, processed_features = create_bag_of_words( processed_corpus,
stop_words = domain_specific_stopwords,
stem = True,
NGRAM_RANGE = ( 0, 2 ),
USE_IDF = True )
# let's see what we have:
dict_word_counts = get_word_counts( processed_bag_of_words, processed_features )
dict_word_counts
# The words counts have been reweighted to emphasize the more meaningful words of the corpus, while de-emphasizing the words that are found commonly throughout the corpus.
#
# How does this affect our topics?
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# ---
#
# ### Exercise 4 - Refining a topic model
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# You can only develop an intuition for the right number of topics and topic words suitable for a given problem by iterating until you find a good match.
#
# Change the number of topics and topic words until you get an intuition of how many words and topics are enough.
exercise_keywords, exercise_doctopic = create_topics( processed_bag_of_words,
processed_features,
N_TOPICS = 5,
N_TOP_WORDS= 25 )
exercise_keywords, exercise_doctopic = create_topics( processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 25 )
#grab the topic_id of the majority topic for each document and store it in a list
ls_topic_id = [np.argsort(processed_doctopic[comment_id])[::-1][0] for comment_id in range(len(corpus))]
df_socialservices_data['topic_id'] = ls_topic_id #add to the dataframe so we can compare with the job titles
# Now that each row is tagged with a topic ID, let's see how well the topics explain the social services by looking at the first topic, and seeing how similar the social services within that topic are to each other.
topic_num = 0
print(processed_keywords[topic_num])
df_socialservices_data[ df_socialservices_data.topic_id == topic_num ].head(10)
# ---
#
# ### Exercise 5 - Interpreting a model's "topics"
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Examine the other topic IDs, and see if the "topics" we identified make sense as groupings of social service agencies.
topic_num = 3
print(processed_keywords[topic_num])
df_socialservices_data[ df_socialservices_data.topic_id == topic_num ].head(10)
# ---
# # Supervised Learning: Document Classification
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Previously, we used topic modeling to infer relationships between social service facilities within the data. That is an example of unsupervised learning: we were looking to uncover structure in the form of topics, or groups of agencies, but we did not necessarily know the ground truth of how many groups we should find or which agencies belonged in which group.
#
# Now we turn our attention to supervised learning. In supervised learning, we have a *known* outcome or label (*Y*) that we want to produce given some data (*X*), and in general, we want to be able to produce this *Y* when we *don't* know it, or when we *only* have *X*.
#
# In order to produce labels we need to first have examples our algorithm can learn from, a "training set." In the context of text analysis, developing a training set can be very expensive, as it can require a large amount of human labor or linguistic expertise. **Document classification** is an example of supervised learning in which we want to characterize our documents based on their contents (*X*). A common example of document classification is spam e-mail detection. Another example of supervised learning in text analysis is *sentiment analysis*, where *X* is our documents and *Y* is the state of the author. This "state" is dependent on the question you're trying to answer, and can range from the author being happy or unhappy with a product to the author being politically conservative or liberal. Another example is *part-of-speech tagging*, where *X* is individual words and *Y* is the part-of-speech.
#
# In this section, we'll train a classifier to classify social service agencies. Let's see if we can label a new website as belonging to facility type "income" or "health."
# ## Supervised Learning - Prepare the Data
#
# - Back to [Table of Contents](#Table-of-Contents)
# look at counts
df_socialservices_data.factype.value_counts()
# make a mask column we can use to flag rows with facility type in our types of interest.
mask = df_socialservices_data.factype.isin(['income','health'])
# use mask to subset our data.
df_income_health = df_socialservices_data[mask]
# Split into training and testing sets (20% held back for testing):
# split into train and test sets.
df_train, df_test = train_test_split(df_income_health, test_size=0.20, random_state=17)
# Look at our training set:
# look at our training set.
df_train.head()
# make sure we only have the facility types we expect.
df_train['factype'].unique()
# look at the counts for each value.
Counter(df_train['factype'].values)
# Look at our testing set:
# look at our testing set.
df_test.head()
# make sure we only have the facility types we expect.
df_test['factype'].unique()
# look at the counts for each value.
Counter(df_test['factype'].values)
# ## Prepare Data for Document Classification
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# In order to feed our data into a classifier, we need to pull out the labels (*Y*) and a clean corpus of documents (*X*) for our training and testing sets.
# +
# prepare training data - get labels we'll train on.
train_labels = df_train.factype.values
# prepare training data - clean text.
train_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_train.textfromurl.values])
# prepare testing data - get labels we'll train on.
test_labels = df_test.factype.values
# prepare testing data - clean text.
test_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_test.textfromurl.values])
# make list of all labels across train and test (should just be 'income' and 'health')
labels = np.append(train_labels, test_labels)
# -
# Just as we had done in the unsupervised learning context, we have to transform our data. This time we have to transform our testing and training set into two different bags of words. The classifier will learn from the training set, and we will evaluate the classifier's performance on the testing set.
#
# First, we create a CountVectorizer that we'll use to convert our text documents to matrices of features based on words contained within our corpus.
# +
#parameters for vectorizer
ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
TOKENIZER = None
NGRAM_RANGE = (0,2) #Range for phrases of words
MIN_DF = 0.01 # Exclude words with document frequency less than the threshold
MAX_DF = 0.8  # Exclude words with document frequency greater than the threshold
vectorizer = CountVectorizer( analyzer = ANALYZER,
tokenizer = None, # alternatively tokenize_and_stem but it will be slower
ngram_range = NGRAM_RANGE,
stop_words = stopwords.words( 'english' ),
strip_accents = STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF )
# -
# Next, we create a TF-IDF transformer, and create our bags of words, then weight them using TF-IDF.
# +
NORM = None # no document-length normalization
SMOOTH_IDF = True # prevents division by zero errors
SUBLINEAR_IDF = True # replace TF with 1 + log(TF)
USE_IDF = True # flag to control whether to use TFIDF
transformer = TfidfTransformer( norm = NORM,
smooth_idf = SMOOTH_IDF,
sublinear_tf = True )
# timing code - start!
start_time = time.time()
# get the bag-of-words for train and test from the vectorizer and
# then use TFIDF to limit the tokens found throughout the text
train_bag_of_words = vectorizer.fit_transform( train_corpus ) # fit on the training corpus only, so no test-set information leaks into the features
test_bag_of_words = vectorizer.transform( test_corpus )
# if we use IDF, compute it here.
if USE_IDF:
train_tfidf = transformer.fit_transform(train_bag_of_words)
test_tfidf = transformer.transform(test_bag_of_words)
# Get list of the feature names, for passing to our model.
features = vectorizer.get_feature_names_out() # get_feature_names() was removed in newer scikit-learn
# timing code - done!
print('Time Elapsed: {0:.2f}s'.format( time.time() - start_time ) )
# -
# We cannot pass the labels "income" or "health" directly to the classifier. Instead, we need to encode them as 0s and 1s using the `LabelEncoder` from `sklearn`.
#relabel our labels as a 0 or 1
le = preprocessing.LabelEncoder()
le.fit(labels)
labels_binary = le.transform(labels)
list(zip(labels,labels_binary))
# We also need to create arrays of indices so we can access the training and testing sets accordingly.
train_size = df_train.shape[ 0 ]
train_set_idx = np.arange( 0, train_size )
test_set_idx = np.arange( train_size, len( labels ) )
train_labels_binary = labels_binary[ train_set_idx ]
test_labels_binary = labels_binary[ test_set_idx ]
# ## Model Training - Train Document Classification Model
#
# - Back to [Table of Contents](#Table-of-Contents)
# The classifier we are using in the example is LogisticRegression. As we saw in the Machine Learning tutorial, first we decide on a classifier, then we fit the classifier to the data to create a model. We can then test our model on the test set by passing the features (*X*) from our test set to get predicted labels. The model will output the probability of each document being classified as income or health.
# +
# create our LogisticRegression classifier (the liblinear solver supports the L1 penalty).
clf = LogisticRegression(penalty='l1', solver='liblinear')
# train the classifer to create our model.
mdl = clf.fit( train_tfidf, labels_binary[ train_set_idx ] )
# create scores for each of the documents predicting whether each refers to
# an income or health agency
y_score = mdl.predict_proba( test_tfidf )
# -
# ## Model Evaluation - Precision and Recall
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Now that we have calculated a score for each of our facility types of interest, we look at how well our model performed by outputting precision and recall curves at different cutoffs.
#
# First, we define the function that will do this work:
def plot_precision_recall_n( y_true, y_prob, model_name ):
"""
y_true: ls
ls of ground truth labels
y_prob: ls
ls of predic proba from model
model_name: str
str of model name (e.g, LR_123)
"""
from sklearn.metrics import precision_recall_curve
y_score = y_prob
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)
precision_curve = precision_curve[:-1]
recall_curve = recall_curve[:-1]
pct_above_per_thresh = []
number_scored = len(y_score)
for value in pr_thresholds:
num_above_thresh = len(y_score[y_score>=value])
pct_above_thresh = num_above_thresh / float(number_scored)
pct_above_per_thresh.append(pct_above_thresh)
pct_above_per_thresh = np.array(pct_above_per_thresh)
plt.clf()
fig, ax1 = plt.subplots()
ax1.plot(pct_above_per_thresh, precision_curve, 'b')
ax1.set_xlabel('percent of population')
ax1.set_ylabel('precision', color='b')
ax1.set_ylim(0,1.05)
ax2 = ax1.twinx()
ax2.plot(pct_above_per_thresh, recall_curve, 'r')
ax2.set_ylabel('recall', color='r')
ax2.set_ylim(0,1.05)
name = model_name
plt.title(name)
plt.show()
# Then we output the graphs for our model:
plot_precision_recall_n( labels_binary[ test_set_idx ], y_score[:,1], 'LR' )
# If we examine our precision-recall curve we can see that our precision is 1 up to 25 percent of the population. We can use a "precision at *k*" curve to see what percent of the corpus can be tagged by the classifier, and which should undergo a manual clerical review. Based on this curve, we might say that we can use our classifier to tag the 25% of the documents that have the highest scores as 1, and manually review the rest.
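#
# That tagging rule can be sketched with NumPy (the 25% figure is read off the curve above; the scores here are made up):

```python
import numpy as np

# made-up classifier scores for 8 documents
scores = np.array([0.95, 0.10, 0.80, 0.40, 0.99, 0.20, 0.65, 0.05])
k = 0.25  # auto-tag the top 25% of documents by score

cutoff = np.quantile(scores, 1 - k)   # score cutoff for the top 25%
auto_tagged = scores >= cutoff
print(auto_tagged.sum(), "of", len(scores), "documents auto-tagged")  # 2 of 8
```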
#
# Alternatively, we can try to maximize the entire precision-recall space. In this case we need a different metric - "Area Under Curve" (AUC).
def plot_precision_recall(y_true,y_score):
"""
Plot a precision recall curve
Parameters
----------
y_true: ls
ground truth labels
y_score: ls
score output from model
"""
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true,y_score[:,1])
plt.plot(recall_curve, precision_curve)
plt.xlabel('Recall')
plt.ylabel('Precision')
auc_val = auc(recall_curve,precision_curve)
print('AUC-PR: {0:1f}'.format(auc_val))
plt.show()
plt.clf()
plot_precision_recall(labels_binary[test_set_idx],y_score)
# The AUC shows how accurate our scores are under different cutoff thresholds. The model will output a score between 0 and 1. We specify a range of cutoff values and label all of the examples as 0 or 1 based on whether they are above or below each cutoff value. The closer our scores are to the true values, the more resilient they are to different cutoffs. For instance, if our scores were perfect, our AUC would be 1.
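# The same AUC-PR computation can be run end-to-end on made-up labels and scores (a self-contained sketch mirroring the plotting function above):

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

# made-up ground-truth labels and model scores
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.6])

precision_curve, recall_curve, _ = precision_recall_curve(y_true, y_scores)
auc_val = auc(recall_curve, precision_curve)   # area under the PR curve
print('AUC-PR: {0:.2f}'.format(auc_val))
```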
# ## Model Evaluation - Feature Importances
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Next, we look at the importance of different features (words) in our model.
#
# The function that will calculate these:
def display_feature_importances( coef, features, labels, num_features = 10 ):
"""
output feature importances
Parameters
----------
coef: numpy
feature importances
features: ls
feature names
labels: ls
labels for the classifier
num_features: int
number of features to output (default 10)
Example
--------
"""
    # use the coef argument that was passed in (not the global mdl)
    dict_feature_importances = dict( zip( features, coef ) )
orddict_feature_importances = OrderedDict(
sorted(dict_feature_importances.items(), key=lambda x: x[1]) )
ls_sorted_features = list(orddict_feature_importances.keys())
label0_features = ls_sorted_features[:num_features]
label1_features = ls_sorted_features[-num_features:]
print(labels[0],label0_features)
print(labels[1], label1_features)
display_feature_importances(mdl.coef_.ravel(), features, ['health','income'])
# The feature importances give us the words which are the most relevant for distinguishing the type of social service agency (between income and health). Some of these make sense ("city church" seems more likely to be health than income), but some don't make as much sense, or seem to be artifacts from the website that we should remove ("housing humancarelogo").
# ---
#
# ### Exercise 6 - interpreting feature importances
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Display the top 25 feature importances to get an intuition of which words are the most and least important.
# To get the top 25 feature importances, we need to know what to pass into the function. We can find out by consulting its docstring.
# From this docstring we can see that `num_features` is a keyword argument that is set to 10 by default. We can pass `num_features=25` into the keyword argument instead to get the top 25 feature importances.
display_feature_importances(mdl.coef_.ravel(),
features,
['health','income'],
num_features=25)
# ---
# ## Model Evaluation - Cross-validation
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Recall from the machine learning tutorial that we are seeking to find the most general pattern in the data, in order to have the most general model that will be successful at classifying new, unseen data. Our previous strategy above was the *out-of-sample holdout set*: we try to find a general pattern by randomly dividing our data into a training and test set based on some percentage split (e.g., 50-50 or 80-20). We train on the training set and evaluate on the test set, pretending that we don't have the labels for the test set. A significant drawback of this approach is that we may be lucky or unlucky with our random split, so our estimate of how we'd perform on truly new data may be overly optimistic or overly pessimistic. A possible solution is to create many random splits into training and testing sets and evaluate each split to estimate the performance of a given model.
#
# A more sophisticated holdout training and testing procedure is *cross-validation*. In cross-validation we split our data into *k* folds or partitions, where *k* is usually 5 or 10. We then iterate *k* times: in each iteration, one of the folds is used as a test set, and the remaining folds are combined to form the training set. We can then evaluate the performance at each iteration to estimate the performance of a given method. An advantage of cross-validation is that every example is used for testing exactly once and for training *k*-1 times.
#
# Define function to create test and train bags of words:
def create_test_train_bag_of_words(train_corpus, test_corpus):
"""
Create test and training set bag of words
Parameters
----------
train_corpus: ls
        ls of raw text for train corpus.
test_corpus: ls
        ls of raw text for test corpus.
Returns
-------
(train_bag_of_words,test_bag_of_words): scipy sparse matrix
bag-of-words representation of train and test corpus
features: ls
ls of words used as features.
"""
#parameters for vectorizer
    ANALYZER = "word" #unit of features are single words rather than phrases of words
    STRIP_ACCENTS = 'unicode'
    TOKENIZER = None
    NGRAM_RANGE = (0,2) #Range for phrases of words
    MIN_DF = 0.01 # Exclude words with document frequency less than the threshold
    MAX_DF = 0.8  # Exclude words with document frequency greater than the threshold
vectorizer = CountVectorizer(analyzer=ANALYZER,
tokenizer=None, # alternatively tokenize_and_stem but it will be slower
ngram_range=NGRAM_RANGE,
stop_words = stopwords.words('english'),
strip_accents=STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF)
NORM = None #turn on normalization flag
SMOOTH_IDF = True #prevents division by zero errors
SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
USE_IDF = True #flag to control whether to use TFIDF
transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = True)
#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
train_bag_of_words = vectorizer.fit_transform( train_corpus )
test_bag_of_words = vectorizer.transform( test_corpus )
if USE_IDF:
train_tfidf = transformer.fit_transform(train_bag_of_words)
test_tfidf = transformer.transform(test_bag_of_words)
    features = vectorizer.get_feature_names_out() # get_feature_names() was removed in newer scikit-learn
return train_tfidf, test_tfidf, features
# And now, use scikit-learn's StratifiedKFold object to generate our folds (we'll do 3 here as an example), and then train and validate across all combinations.
# +
from sklearn.model_selection import StratifiedKFold

# get our labels
train_labels_binary = le.transform(train_labels)

# create folds (sklearn.cross_validation was removed from scikit-learn;
# model_selection's StratifiedKFold takes n_splits and yields indices via .split())
cv = StratifiedKFold( n_splits = 3 )

# for each fold, get rows specified as train and test, then
# train and test our model with it.
for i, ( train, test ) in enumerate( cv.split( train_corpus, train_labels_binary ) ):
# break out train and test data.
cv_train = train_corpus[train]
cv_test = train_corpus[test]
# create bags of words and get feature names
bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train,
cv_test)
# fit our model and then use it to predict values in test set.
probas_ = clf.fit(bag_of_words_train,
train_labels_binary[train]).predict_proba(bag_of_words_test)
cv_test_labels = train_labels_binary[test]
# draw precision and recall curve
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels, probas_[:,1] )
# calculate and plot AUC
auc_val = auc( recall_curve,precision_curve )
plt.plot( recall_curve, precision_curve, label = 'AUC-PR {0} {1:.2f}'.format( i, auc_val ) )
#-- END loop over folds --#
# and, plot the collected graphs.
plt.ylim( 0, 1.05 )
plt.xlabel( 'Recall' )
plt.ylabel( 'Precision' )
plt.legend( loc = "lower left", fontsize = 'x-small' )
# -
# In this case we did 3-fold cross-validation and plotted precision-recall curves for each iteration. You can see that there is a marked difference between the iterations. We can then average the AUC-PR of each iteration to estimate the performance of our method.
# ---
# ### Exercise 7 - Try a 5-fold cross-validation
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Try 5-fold cross-validation.
# +
from sklearn.model_selection import StratifiedKFold

cv = StratifiedKFold(n_splits=5)
train_labels_binary = le.transform(train_labels)
for i, (train, test) in enumerate(cv.split(train_corpus, train_labels_binary)):
cv_train = train_corpus[train]
cv_test = train_corpus[test]
bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train,
cv_test)
probas_ = clf.fit(bag_of_words_train,
train_labels_binary[train]).predict_proba(bag_of_words_test)
cv_test_labels = train_labels_binary[test]
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels,
probas_[:,1])
auc_val = auc(recall_curve,precision_curve)
plt.plot(recall_curve, precision_curve, label='AUC-PR {0} {1:.2f}'.format(i,auc_val))
plt.ylim(0,1.05)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc="lower left", fontsize='x-small')
# -
# ---
# ## Model Output - Examples of Document Classification
#
# - Back to [Table of Contents](#Table-of-Contents)
#
# Look at our data:
df_test
# And, look at the text the model is most certain of being categorized as each facility type:
# +
num_comments = 2
label0_comment_idx = y_score[:,1].argsort()[:num_comments]
label1_comment_idx = y_score[:,1].argsort()[-num_comments:]
test_set_labels = labels[test_set_idx]
#convert back to the indices of the original dataset
top_comments_testing_set_idx = np.concatenate([label0_comment_idx,
label1_comment_idx])
# these are the comments (num_comments per label) the model is most sure of
for i in top_comments_testing_set_idx:
print(
u"""{}:{}\n---\n{}\n===""".format(test_set_labels[i],
y_score[i,1],
test_corpus[i]))
# -
# These are the top 2 examples that the model is the most sure of for each label. We can see our important feature words in the descriptions, which gives a hint of how the model made these classifications.
# # Further Resources
#
# - Back to [Table of Contents](#Table-of-Contents)
# A great resource for NLP in Python is
# [Natural Language Processing with Python](https://www.amazon.com/Natural-Language-Processing-Python-Analyzing/dp/0596516495).
| notebooks/session_09/Text_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# ## Problem 1: Set up Github and clone assignment repo.
#
# - Go to http://www.github.com and create an account.
# - Send your Github username to <EMAIL>.
# - Install Git - https://github.com/blog/1510-installing-git-from-github-for-mac. Make sure to install command line tools.
# - When I have received your email, you should get a confirmation that you have been added to the repo.
# - Click on this link: https://classroom.github.com/assignment-invitations/11415026d0459793405d3c1ff95cc259
# - Follow the instructions to clone that repo to your local machine.
# - You should type a command like:
#
# ```$ git clone https://github.com/Columbia-Intro-Data-Science/python-introduction-(your-github-username).git```
#
#
# **Next:** Solve the problems directly in this notebook, and then push to the repo above (not to the course repo!)
#
#
# The process should be to create a copy of this notebook and move it into the folder you created above. Then do this:
#
# ``` $ git add mynotebooksolutions.ipynb ```
#
# ``` $ git commit -m "added my homework" ```
#
# ``` $ git push origin master ```
#
# ## Problem 2: Sales Data Analysis
# +
# read data into a DataFrame
import pandas as pd
import pylab as plt
import seaborn
from sklearn.linear_model import LinearRegression
import numpy as np
import numpy.random as nprnd
import random
import json
pd.set_option('display.max_columns', 500)
# %matplotlib inline
df = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
df.head()
# -
df
#
# #### What are the features?
#
# - **TV:** advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
# - **Radio:** advertising dollars spent on Radio
# - **Newspaper:** advertising dollars spent on Newspaper
#
# #### Goal: Predict the number of sales in a given market based on the advertising in TV, Radio and Newspaper.
#
# ### Problem 2, Part 0: Plot box plots of the coefficient ranges
# Use df.boxplot()
df.boxplot()
# ### Problem 2, Part 1: Create scatter plots using plt.scatter()
#
# Create scatter plots of the advertising dollars spent on TV, Radio and Newspaper against the total Sales dollars gained. Fill in the parameters for scatter() below.
# +
fig = plt.figure()
fig.set_size_inches(18,5)
axes = fig.add_subplot(1,3,1)
plt.scatter(df.TV,df.Sales)
axes = plt.xlabel('TV')
axes = plt.ylabel('Sales')
axes = fig.add_subplot(1,3,2)
plt.scatter(df.Radio,df.Sales)
axes = plt.xlabel('Radio')
axes = plt.ylabel('Sales')
axes = fig.add_subplot(1,3,3)
plt.scatter(df.Newspaper,df.Sales)
axes = plt.xlabel('Newspaper')
axes = plt.ylabel('Sales')
# -
from pandas.plotting import scatter_matrix
scatter_matrix(df, alpha=1.0, figsize=(12, 12), diagonal='kde')
# Which of the variables seem correlated with one another? Which don't? Explain your answer
# From the scatter plots, TV and Sales appear strongly correlated, with an approximately linear relationship, while Radio and Newspaper show much weaker correlation with Sales. TV and Radio appear uncorrelated with each other: their scatter plot shows no discernible pattern.
# ### Problem 2, Part 2: Predict sales using sklearn
#
# - Split data into training and testing subsets.
# - Train model using LinearRegression() from sklearn.linear_model on training data.
# - Evaluate using RMSE and R^2 on testing set
#
#
# If you need help, please refer to this example:
#
# https://github.com/Columbia-Intro-Data-Science/APMAE4990-/blob/master/notebooks/Lecture%202%20-%20Regression%20Bookingdotcom%20Case%20Study.ipynb
#
# See where I split the data into testing/training and evaluate performance.
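# A minimal numpy-only sketch of the split-train-evaluate workflow described above, run on synthetic data rather than the Advertising dataset (the coefficients and noise level below are made up). It fits ordinary least squares in closed form instead of calling LinearRegression, but the RMSE and $R^2$ computations are the same ones asked for.

```python
import numpy as np

rng = np.random.RandomState(0)

# synthetic stand-in for the Advertising data (coefficients and noise are made up)
X = rng.rand(200, 3)
y = X.dot(np.array([3.0, 2.0, 0.5])) + 1.0 + 0.1 * rng.randn(200)

# random 80/20 split
idx = rng.permutation(len(X))
train, test = idx[:160], idx[160:]

# ordinary least squares with an explicit intercept column
A = np.column_stack([np.ones(len(train)), X[train]])
coef = np.linalg.lstsq(A, y[train], rcond=None)[0]

pred = np.column_stack([np.ones(len(test)), X[test]]).dot(coef)
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
r2 = 1.0 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print("RMSE: %.3f  R^2: %.3f" % (rmse, r2))
```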
from sklearn.linear_model import LinearRegression
# a) Set y to be the sales in df
y = df['Sales']
# b) Set X to be just the features described above in df
X = df[['TV','Radio','Newspaper']]
# c) Randomly split data into training and testing - 80% training, 20% testing.
# +
msk = list(range(1, len(X) + 1))  # the DataFrame index runs from 1 to len(X)
random.shuffle(msk)
train_index = msk[:int(len(X) * 0.8)]
test_index = msk[int(len(X) * 0.8):]
X_train = X.loc[train_index]
y_train = y.loc[train_index]
X_test = X.loc[test_index]
y_test = y.loc[test_index]
print len(y_test)
print len(y_train)
print len(X)
# -
# d) Train model on training data, and make predictions on testing data
# +
# Create linear regression object
regr = LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error on the test set
print("Mean squared error: %.2f"
      % np.mean((regr.predict(X_test) - y_test) ** 2))
# -
# e) Evaluate the R^2 on training data. Is this good? Bad? Why?
# +
plt.figure(figsize=(10,10))
plt.title('R^2')
plt.scatter(regr.predict(X_train),y_train)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
# -
# The result is good: $R^2$ is 0.94, meaning that about 94 percent of the variance in y is explained by X, so the model captures most of the signal. The remaining 6 percent can be attributed to noise or lurking variables not included in the model.
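# The interpretation above follows directly from the definition $R^2 = 1 - SS_{res}/SS_{tot}$. A small worked example (the values are made up):

```python
import numpy as np

# R^2 = 1 - SS_res / SS_tot
y_true = np.array([10.0, 12.0, 9.0, 15.0, 14.0])
y_pred = np.array([10.5, 11.5, 9.5, 14.0, 14.5])

ss_res = np.sum((y_true - y_pred) ** 2)          # 2.0
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # 26.0
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))  # -> 0.923
```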
# f) Make a scatter plot of your predictions vs the actual values on the testing data. Does it look like a good model?
plt.figure(figsize=(10,10))
plt.title('test')
plt.scatter(regr.predict(X_test),y_test)
plt.plot(y_test,y_test)
# It looks like a good model. We scatter-plot the predicted values against the true values and overlay the identity line; most of the test points lie close to the line, which means the prediction errors are small.
# g) Can you measure the importance of features in this model? What is something you should check before making conclusions?
#
# Try looking at LinearRegression().coef_
print('Coefficients: \n', regr.coef_)
# From the regression coefficients, Radio has the largest coefficient, followed by TV and Newspaper. But raw coefficients depend on the scale of each feature, so before drawing conclusions we must also account for each feature's variance: a feature with a small coefficient but large variance can still contribute substantially, and vice versa.
# h) What can you conclude from g) - can you think of a way to interpret the result? What should we have done to measure the importance of the features involved?
# Normalizing the coefficients is critical for feature selection; we should always take the feature variances into account. One way to measure the importance of a feature is the ratio between its coefficient and the coefficient's standard error (a t-statistic), which removes the effect of the feature's scale.
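# One common way to make regression coefficients comparable across features, in the spirit of the discussion above, is to rescale each coefficient by its feature's standard deviation. A sketch on synthetic data (the feature scales below are made up):

```python
import numpy as np

rng = np.random.RandomState(0)

# two features on very different scales (made-up example)
x1 = 100.0 * rng.randn(500)   # large-scale feature
x2 = rng.randn(500)           # unit-scale feature
y = 0.01 * x1 + 1.0 * x2 + 0.1 * rng.randn(500)

A = np.column_stack([x1, x2])
coef = np.linalg.lstsq(A, y, rcond=None)[0]

# raw coefficients suggest x2 is 100x more important; scaling each
# coefficient by its feature's standard deviation shows they matter equally
importance = coef * A.std(axis=0)
print(coef)        # roughly [0.01, 1.0]
print(importance)  # roughly [1.0, 1.0]
```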
# ## How could you have improved performance?
# *Hint:* Try plotting the data in three dimensions along with the hyperplane solution, and see if you can infer
# a new variable which will help, or try a nonlinear/non-parametric model
# +
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy
fig = plt.figure(1)
fig.set_size_inches(18,5)
axes = fig.add_subplot(1,1,1,projection='3d')
axes.scatter(df.TV,df.Radio,df.Sales,alpha=1)
axes.set_xlabel('TV')
axes.set_ylabel('Radio')
axes.set_zlabel('Sales')
x = numpy.linspace(0, 350, 30)
y = numpy.linspace(0, 60,30)
X, Y = numpy.meshgrid(x,y)
plane = numpy.array([regr.coef_[0] * x + regr.coef_[1] * y for x,y in zip(np.ravel(X), np.ravel(Y))])
Z = plane.reshape(X.shape)
axes.plot_surface(X,Y,Z, color='red', alpha=0.4)
fig = plt.figure(2)
fig.set_size_inches(18,5)
axes = fig.add_subplot(1,1,1,projection='3d')
axes.scatter(df.TV,df.Newspaper,df.Sales,alpha=1)
axes.set_xlabel('TV')
axes.set_ylabel('Newspaper')
axes.set_zlabel('Sales')
x = numpy.linspace(0, 350, 30)
y = numpy.linspace(0, 60,30)
X, Y = numpy.meshgrid(x,y)
plane = numpy.array([regr.coef_[0] * x + regr.coef_[2] * y for x,y in zip(np.ravel(X), np.ravel(Y))])
Z = plane.reshape(X.shape)
axes.plot_surface(X,Y,Z, color='red', alpha=0.4)
fig = plt.figure(3)
fig.set_size_inches(18,5)
axes = fig.add_subplot(1,1,1,projection='3d')
axes.scatter(df.Radio,df.Newspaper,df.Sales,alpha=1)
axes.set_xlabel('Radio')
axes.set_ylabel('Newspaper')
axes.set_zlabel('Sales')
x = numpy.linspace(0, 350, 30)
y = numpy.linspace(0, 60,30)
X, Y = numpy.meshgrid(x,y)
plane = numpy.array([regr.coef_[1] * x + regr.coef_[2] * y for x,y in zip(np.ravel(X), np.ravel(Y))])
Z = plane.reshape(X.shape)
axes.plot_surface(X,Y,Z, color='red', alpha=0.4)
plt.show()
# -
# From the first subplot, we can see that TV and Radio alone already predict Sales well: the fitted plane matches the data closely. In the third subplot the fit without TV also looks reasonable, but the data occupy only a small region of the plane. TV and Newspaper have higher variance, so the weights along those dimensions are more sensitive to the data; identifying and controlling the high-variance dimensions would make the model more stable.
#
# Moreover, the matrix scatter plot in Problem 2 shows that the distributions of TV, Radio and Newspaper are far from normal, so a non-parametric method may fit the data better.
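# As a concrete example of a non-parametric alternative, here is a minimal k-nearest-neighbours regressor written with plain numpy (the data and choice of k below are made up, not the Advertising dataset); it captures a non-linear target that a single linear fit could not:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    # average the targets of the k nearest training points (Euclidean distance)
    preds = []
    for q in X_query:
        dists = np.sqrt(((X_train - q) ** 2).sum(axis=1))
        nearest = np.argsort(dists)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

rng = np.random.RandomState(0)
X = rng.rand(300, 1)
y = np.sin(4.0 * X[:, 0])     # a non-linear target a single line cannot capture
pred = knn_predict(X[:250], y[:250], X[250:], k=5)
mse = np.mean((pred - y[250:]) ** 2)
print("test MSE: %.5f" % mse)
```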
# ## Problem 3: Gradient Descent and the learning rate
# By modifying the learning rate below, show how the convergence takes longer or doesn't converge at all.
# Can you explain in words or math why this is?
# +
from numpy import *
# y = mx + b
# m is slope, b is y-intercept
def compute_error_for_line_given_points(b, m, points):
totalError = 0
for i in range(0, len(points)):
x = points[i, 0]
y = points[i, 1]
totalError += (y - (m * x + b)) ** 2
return totalError / float(len(points))
def step_gradient(b_current, m_current, points, learningRate):
b_gradient = 0
m_gradient = 0
N = float(len(points))
for i in range(0, len(points)):
x = points[i, 0]
y = points[i, 1]
b_gradient += -(2/N) * (y - ((m_current * x) + b_current))
m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current))
new_b = b_current - (learningRate * b_gradient)
new_m = m_current - (learningRate * m_gradient)
return [new_b, new_m]
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
b = starting_b
m = starting_m
for i in range(num_iterations):
b, m = step_gradient(b, m, array(points), learning_rate)
return [b, m]
def run(num_iterations):
points = genfromtxt("/Users/apple/Documents/university/data science industry/test/adult/APMAE4990-/data/data.csv", delimiter=",")
learning_rate = 0.0001
initial_b = 0.0 # initial y-intercept guess
initial_m = 0.0 # initial slope guess
num_iterations = num_iterations
print "Starting gradient descent at b = {0}, m = {1}, error = {2}".format(initial_b, initial_m, compute_error_for_line_given_points(initial_b, initial_m, points))
print "Running..."
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
print "After {0} iterations b = {1}, m = {2}, error = {3}".format(num_iterations, b, m, compute_error_for_line_given_points(b, m, points))
for i in range(0,len(points)):
plt.scatter(points[i,0],points[i,1])
plt.scatter(points[i,0],m*points[i,0]+b,color='r')
run(100)
# -
# By changing the learning rate, we find that for small rates such as 0.00001 or 0.0001 the error decreases steadily and converges; the smaller the rate, the more iterations convergence takes. For a large rate such as 0.1, however, the error grows and oscillates instead of converging.
#
# The learning rate is the step size we take toward the minimum (or maximum). A small step size takes longer but eventually reaches the target. A step size that is too large overshoots to the other side of the curve and then oscillates around the target point, never converging.
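# The overshooting argument can be seen on the simplest possible objective, $f(m) = m^2$: each gradient-descent step multiplies $m$ by $(1 - 2\,lr)$, so the iterates shrink only when that factor has magnitude below one (here, $lr < 1$). A minimal sketch:

```python
def descend(lr, steps=50, m=1.0):
    # gradient descent on f(m) = m**2, whose gradient is 2*m;
    # each step computes m <- m * (1 - 2*lr)
    for _ in range(steps):
        m = m - lr * 2.0 * m
    return m

print(descend(0.1))   # small step: converges toward 0
print(descend(1.1))   # step too large: the iterates blow up
```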
# +
for num in range(0,10):
run(num)
plt.show()
# -
# ## Problem 3 Part 2
# Plot the error as a function of the number of iterations for various learning rates. Choose the rates
# so that it tells a story.
# +
import numpy
def errcal(num_iterations,learning_rate):
points = genfromtxt("/Users/apple/Documents/university/data science industry/test/adult/APMAE4990-/data/data.csv", delimiter=",")
initial_b = 0.0 # initial y-intercept guess
initial_m = 0.0 # initial slope guess
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
error = compute_error_for_line_given_points(b, m, points)
return error
for n in range(3, 7):
    lr = 10 ** (-n)
    print "learning rate is ", lr
    fig = plt.figure(1)
    error = numpy.ones(100)
    num = numpy.ones(100)
    for i in range(100):  # use a different loop variable than the outer loop
        num[i] = i
        error[i] = errcal(i, lr)
plt.plot(num,error)
plt.title("error vs iteration")
plt.xlabel("iteration number")
plt.ylabel("error")
plt.show()
# -
# As we can see from the figure, when the learning rate is 0.001 the error grows and does not converge. For smaller learning rates the error converges, and as the rate decreases further the convergence slows down.
# # Problem 4
# Take the model that you trained in Problem 2 (before any improvements), and let's try to determine
# the confidence we have in the coefficient estimates, and also try to optimize for the number of coefficients.
import pandas as pd
import statsmodels.api as sm
from sklearn.cross_validation import KFold
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.neighbors import KNeighborsClassifier as KNN
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.utils import shuffle
from sklearn.metrics import roc_curve, auc
import pylab
from sklearn import svm
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
# from mpl_toolkits.basemap import Basemap
import re
import pylab as plt
import seaborn
from sklearn.linear_model import LinearRegression
import numpy.random as nprnd
import random
# ### Problem 4.1. Lasso Regression
#
# a) Use $L^1$ regularization on the scaled dataset (but original, with no new features), to find the parameter
# which yields the best $R^2$. Plot your result.
#
# **Hint:** Take as your range of alpha:
# `alphas = np.logspace(-3,0.5,30)`
#
#
# b) Using the optimal constant, plot the feature coefficients - which one seems the least significant?
#
#
# c) Now repeat the above, but take an average off 5 folds using cross validation. Do a boxplot of the coefficients
# and their range of values.
# +
alphas = np.logspace(-3, 0.5, 30)
# the exercise asks for L^1 regularization, so use Lasso (Ridge is the L^2 penalty)
from sklearn.linear_model import Lasso
clf = Lasso()
coefs = []
for a in alphas:
    clf.set_params(alpha=a)
    clf.fit(X_train, y_train)
    coefs.append(clf.coef_)
axe = plt.gca()
axe.plot(alphas, coefs)
axe.set_xscale('log')
axe.set_xlim(axe.get_xlim()[::-1])  # reverse axis
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Lasso coefficients as a function of the regularization')
plt.axis('tight')
plt.show()
# # Create linear regression object
# alphas = np.logspace(-3,0.5,30)
# scores = []
# for alpha in alphas:
# regr = Ridge(alpha=alpha)
# # Train the model using the training sets
# regr.fit(X_train, y_train)
# scores.append(regr.score(X_test,y_test))
# plt.plot(alphas,scores)
# +
# Ridge?
# -
| .ipynb_checkpoints/Homework 1 - Github and Pandas-Copy1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import random
import os
import sys
import json
import numpy as np
from sklearn.cluster import KMeans
import math
from collections import Counter, defaultdict
import itertools
import time
# +
file_path = './hw6_clustering.txt'
seed = 1
random.seed(seed)
def data_generator():
data = np.loadtxt(file_path, delimiter=',')
data = data.tolist()
random.shuffle(data)
data_batches = []
for i in range(0, len(data), math.floor(0.2*len(data))):
d = np.array(data[i:i+math.floor(0.2*len(data))])
data_batches.append(d)
return data_batches
data_batches = data_generator()
# +
class stats:
def __init__(self, points):
# create the stats from points
self.point_indices = points[:, 0].tolist()
self.actual_clusters = points[:, 1].tolist()
self.n = len(points)
self.sum = np.sum(points[:, 2:], axis=0)
        self.sumsq = np.sum(np.power(points[:, 2:], 2), axis=0)
def update_stats(self, point):
self.point_indices.append(point[0])
self.actual_clusters.append(point[1])
self.n += 1
self.sum += point[2:]
self.sumsq += np.power(point[2:], 2)
return
def merge_with(self, cluster):
self.point_indices += cluster.point_indices
self.actual_clusters += cluster.actual_clusters
self.n += cluster.n
self.sum += cluster.sum
self.sumsq += cluster.sumsq
    def calculate_variance(self):
        # note: this actually returns the per-dimension standard deviation, sqrt(E[x^2] - E[x]^2)
        return np.power((self.sumsq/self.n) - np.power(self.sum/self.n, 2), 1/2)
def calculate_centroid(self):
return self.sum / self.n
def mahalanobis_distance(cluster, point):
return np.power(np.sum(np.power((point - cluster.calculate_centroid()) / cluster.calculate_variance(), 2)), 1/2)
def bfr(data, k, d, output_file_path):
global seed
rs = defaultdict(list)
ds = defaultdict(list)
cs = defaultdict(list)
output_file = open(output_file_path, "wt")
output_file.write("The intermediate results:\n")
for i, batch in enumerate(data):
if(i == 0):
# Run K-Means (e.g., from sklearn) with a large K (e.g., 5 times of the number of the input clusters) on the data in memory using the Euclidean distance as the similarity measurement.
model = KMeans(n_clusters=k*5, random_state=seed)
model = model.fit(batch[:, 2:])
# In the K-Means result from Step 2, move all the clusters that contain only one point to RS
index = defaultdict(list)
for pos, centroid_id in enumerate(model.labels_):
index[centroid_id].append(pos)
rest_of_data = []
for centroid_id, positions in index.items():
if(len(positions) == 1):
rs[centroid_id] = np.take(batch, positions, axis=0)
if(len(positions) > 1):
rest_of_data.append(np.take(batch, positions, axis=0))
rest_of_data = np.concatenate(rest_of_data, axis=0)
# Run K-Means again to cluster the rest of the data points with K = the number of input clusters.
model = KMeans(n_clusters=k, random_state=seed)
model = model.fit(rest_of_data[:, 2:])
# Use the K-Means result from Step 4 to generate the DS clusters (i.e., discard their points and generate statistics).
index = defaultdict(list)
for pos, centroid_id in enumerate(model.labels_):
index[centroid_id].append(pos)
for centroid_id, positions in index.items():
points = np.take(rest_of_data, positions, axis=0)
ds[centroid_id] = stats(points)
points = [x for x in rs.values()]
points = np.concatenate(points, axis=0)
model = KMeans(n_clusters=min(5*k, len(points)), random_state=seed)
model = model.fit(points[:, 2:])
index = defaultdict(list)
for pos, centroid_id in enumerate(model.labels_):
index[centroid_id].append(pos)
rs = defaultdict(list)
for centroid_id, positions in index.items():
p = np.take(points, positions, axis=0)
if(len(positions) == 1):
rs[centroid_id] = p
if(len(positions) > 1):
cs[centroid_id] = stats(p)
else:
for point in batch:
cluster_id, min_dist = min(list(map(lambda x: (x[0], mahalanobis_distance(x[1], point[2:])), ds.items())), key=lambda x: x[1])
if(min_dist < 2*math.pow(d, 1/2)):
ds[cluster_id].update_stats(point)
else:
if(len(cs) != 0):
cluster_id, min_dist = min(list(map(lambda x: (x[0], mahalanobis_distance(x[1], point[2:])), cs.items())), key=lambda x: x[1])
if(min_dist < 2*math.pow(d, 1/2)):
cs[cluster_id].update_stats(point)
else:
if(len(rs) == 0):
rs[0] = np.expand_dims(point, axis=0)
else:
rs[max(rs.keys())+1] = np.expand_dims(point, axis=0)
else:
if(len(rs) == 0):
rs[0] = np.expand_dims(point, axis=0)
else:
rs[max(rs.keys())+1] = np.expand_dims(point, axis=0)
points = [x for x in rs.values()]
points = np.concatenate(points, axis=0)
model = KMeans(n_clusters=min(5*k, len(points)), random_state=seed)
model = model.fit(points[:, 2:])
index = defaultdict(list)
for pos, centroid_id in enumerate(model.labels_):
index[centroid_id].append(pos)
rs = defaultdict(list)
for centroid_id, positions in index.items():
p = np.take(points, positions, axis=0)
if(len(positions) == 1):
rs[centroid_id] = p
if(len(positions) > 1):
if(len(cs) == 0):
cs[0] = stats(p)
else:
cs[max(cs.keys())+1] = stats(p)
# merge cs clusters if distance < 2 root d
to_be_merged = []
for c1, c2 in itertools.combinations(cs.keys(), 2):
dist = mahalanobis_distance(cs[c1], cs[c2].calculate_centroid())
            if(dist < 2*math.pow(d, 1/2)):
to_be_merged.append((c1, c2))
for (c1, c2) in to_be_merged:
if(c1 in cs and c2 in cs):
cs[c1].merge_with(cs[c2])
del cs[c2]
# after each round output
number_of_ds_points = sum([x.n for x in ds.values()])
number_of_clusters_cs = len(cs)
number_of_cs_points = sum([x.n for x in cs.values()])
number_of_rs_points = sum([len(x) for x in rs.values()])
if(i != len(data)-1):
output_file.write("Round {}: {},{},{},{}\n".format(i+1, number_of_ds_points, number_of_clusters_cs, number_of_cs_points, number_of_rs_points))
# after last round
# merge cs with ds with distance less than 2 root d
merged_cs = []
for k, c in cs.items():
point = c.calculate_centroid()
cluster_id, min_dist = min(list(map(lambda x: (x[0], mahalanobis_distance(x[1], point)), ds.items())), key=lambda x: x[1])
if(min_dist < 2*math.pow(d, 1/2)):
ds[cluster_id].merge_with(c)
merged_cs.append(k)
for k in merged_cs:
del cs[k]
number_of_ds_points = sum([x.n for x in ds.values()])
number_of_clusters_cs = len(cs)
number_of_cs_points = sum([x.n for x in cs.values()])
number_of_rs_points = sum([len(x) for x in rs.values()])
output_file.write("Round {}: {},{},{},{}\n".format(len(data), number_of_ds_points, number_of_clusters_cs, number_of_cs_points, number_of_rs_points))
gt = []
pred = []
original_index = []
cluster_id = 0
for i, x in ds.items():
gt += x.actual_clusters
pred += [cluster_id]*x.n
original_index += x.point_indices
cluster_id += 1
for i, x in cs.items():
gt += x.actual_clusters
pred += [cluster_id]*x.n
original_index += x.point_indices
cluster_id += 1
for i, x in rs.items():
gt += x[:, 1].tolist()
original_index += x[:, 0].tolist()
pred += [-1]*len(x)
gt = [int(x) for x in gt]
pred = [int(x) for x in pred]
original_index = [int(x) for x in original_index]
final_output = sorted([(x,y) for x,y in zip(original_index, pred)])
output_file.write("The clustering results:\n")
for x, y in final_output:
output_file.write("{},{}\n".format(x, y))
from sklearn.metrics.cluster import v_measure_score
print(v_measure_score(gt, pred))
output_file.close()
return ds, cs, rs
# -
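# The assignment rule used throughout `bfr` can be illustrated in isolation: a point joins a cluster when its normalized (Mahalanobis-style) distance is below $2\sqrt{d}$, where $d$ is the number of dimensions. A toy sketch with made-up cluster statistics:

```python
import numpy as np

# made-up cluster statistics: per-dimension centroid and standard deviation
centroid = np.array([0.0, 0.0])
std = np.array([1.0, 2.0])
point = np.array([1.0, 2.0])

# normalized (Mahalanobis-style) distance: each dimension is scaled by the
# cluster's standard deviation before taking the Euclidean norm
dist = np.sqrt(np.sum(((point - centroid) / std) ** 2))

d = len(point)              # number of dimensions
threshold = 2 * np.sqrt(d)  # assignment rule: join the cluster if dist < 2*sqrt(d)
print(dist, threshold, dist < threshold)  # -> 1.414... 2.828... True
```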
st = time.time()
d = data_batches[0].shape[-1]-2
ds, cs, rs = bfr(data_batches, 10, d, "output.txt")
print(time.time()-st)
from sklearn.metrics.cluster import v_measure_score
v_measure_score(gt, pred)
ds[6].calculate_variance()
mahalanobis_distance(ds[4], data_batches[1][1])
np.expand_dims(data_batches[0][0], axis=0)
d = {1:1, 2: 2}
del d[1]
d
model = KMeans(n_clusters=k*5, random_state=12)
model = model.fit(x)
clusters = {}
for i, v in enumerate(model.labels_):
if(v in clusters):
clusters[v].append(i)
else:
clusters[v] = []
mask = []
for i, v in clusters.items():
if(len(v) == 1):
mask.append(True)
else:
mask += [False] * len(v)
c = Counter(model.labels_)
set_rs = set()
for i, v in c.items():
if(v == 1):
set_rs.add(i)
set_rs
model.labels_
x[list(map(lambda x: x in set_rs, model.labels_))]
rs_index = list(map(lambda v: len(v[1])==1, clusters.items()))
x[rs_index, :]
| task.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.1 64-bit (''covid19'': conda)'
# language: python
# name: python38164bitcovid19condaae07e86b13ec4b53ae8497f9e83e3ada
# ---
# # Experiment plotting
#
# This notebook contains the code for plotting results for the different experiments.
# When run for the first time for a town, condensed summary files are created, which greatly speeds up subsequent generation of plots from the same summaries. It is possible to create the plots from just the condensed summaries located in 'summaries/condensed_summaries'.
# Note that this works for all experiments except the Rt plots, which still require the full summary file.
#
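# The condense-once-then-reuse behaviour described above follows a generic caching pattern, sketched below with illustrative names (this is not the library's actual implementation):

```python
import os
import pickle
import tempfile

def load_or_build(cache_path, build_fn):
    # build the expensive result once, then reload it from disk on later calls
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    result = build_fn()
    with open(cache_path, "wb") as f:
        pickle.dump(result, f)
    return result

path = os.path.join(tempfile.mkdtemp(), "condensed_summary.pk")
first = load_or_build(path, lambda: {"mean_infections": 42.0})   # builds and caches
second = load_or_build(path, lambda: {"never": "evaluated"})     # served from cache
print(second)  # -> {'mean_infections': 42.0}
```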
# %load_ext autoreload
# %autoreload 2
import os
import pandas as pd
import pickle
import itertools
from lib.measures import *
from lib.experiment import Experiment, Plot, Result, get_properties, load_summary_list, load_summary
from lib.data import collect_data_from_df
from lib.calibrationSettings import calibration_lockdown_dates, calibration_start_dates, calibration_mob_paths
from lib.calibrationFunctions import get_calibrated_params, downsample_cases
import lib.plot as lib_plot
from lib.plot import Plotter
import matplotlib.pyplot as plt
from lib.summary import load_condensed_summary, get_tracing_probability
from matplotlib import colors
import matplotlib
from collections import defaultdict
# +
commithash = '7d65965' #'f5cfa6f' #'6bbe7a3'
baseline_colors = ['#31a354', '#74c476'] # '#006d2c',
spect_colors = ['#08519c', '#3182bd', '#6baed6', '#bdd7e7']
pancast_colors = ['#bd0026', '#f03b20', '#fd8d3c', '#fecc5c', '#ffffb2']
LINE_WIDTH = 7.0
COL_WIDTH = 3.333
# -
# # Social distancing
# +
def plot_relative_reduction(*, country, area, mode, ps_adoption, show_reduction=True,
colors=None, log_yscale=False, figsize=None, ylim=None, xlabel=None,
box_plot=False, legend_is_left=True,
show_significance=None, sig_options=None, commithash=None):
ps_adoption = np.sort(ps_adoption)
if figsize is None:
figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL
if mode == 'r_eff':
if area == 'TU':
from lib.settings.town_settings_tubingen import town_population
elif area == 'BE':
from lib.settings.town_settings_bern import town_population
else:
raise NotImplementedError('Specify town population')
else:
town_population = None
if ylim is None:
if mode == 'r_eff' and show_reduction:
ylim = (0.0, 50)
elif mode == 'r_eff' and not show_reduction:
ylim= (1.0, 3.7)
plot_filename = (f'comparison-{mode}-{country}-{area}'
f'-reduction={show_reduction}'
f'-box_plot={box_plot}')
paths = [
[f'lockdown-{country}-{area}-{commithash}/'
f'lockdown-{country}-{area}'
f'-p_compliance={p_adoption}'
'.pk' for p_adoption in ps_adoption],
[f'k-groups-{country}-{area}-{commithash}/'
f'k-groups-{country}-{area}'
f'-K_groups=2-p_compliance={p_adoption}'
'.pk' for p_adoption in ps_adoption],
[f'vulnerable-groups-{country}-{area}-{commithash}/'
f'vulnerable-groups-{country}-{area}'
f'-p_compliance={p_adoption}'
'.pk' for p_adoption in ps_adoption],
# [f'conditional-measures-{country}-{area}-{commithash}/'
# f'conditional-measures-{country}-{area}'
# f'-max_incidence=50-p_compliance={p_adoption}'
# '.pk' for p_adoption in ps_adoption],
]
colors = ['#08519c', '#bd0026', '#fd8d3c', '#31a354',]
titles = ['Everyone',
'Alternating groups',
'Vulnerable groups',
'No measures']
baseline_path = (f'baseline-{country}-{area}-{commithash}/'
f'baseline-{country}-{area}'
'-expected_daily_base_expo_per100k=0.7142857142857143'
f'.pk')
plotter = Plotter()
plotter.compare_peak_reduction(path_list=paths,
baseline_path=baseline_path,
ps_adoption=ps_adoption,
area_population=town_population,
labels=titles,
mode=mode,
show_reduction=show_reduction,
box_plot=box_plot,
log_xscale=True,
log_yscale=log_yscale,
show_baseline=True,
ylim=ylim,
show_significance=show_significance,
sig_options=sig_options,
colors=colors,
xlabel=xlabel,
filename=plot_filename,
figsize=figsize,
figformat='neurips-double',
legend_is_left=legend_is_left)
# +
# LINE_WIDTH = 7.0
# COL_WIDTH = 3.333
# figsize = (1.2 * LINE_WIDTH / 2, 1.2 * LINE_WIDTH / 3 * 4.5/6)
# FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL = (LINE_WIDTH / 2, LINE_WIDTH / 3 * 4.5/6) # 2 tall
figsize = lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL
sig_options={'height': 1.0, 'same_height': False}
plot_relative_reduction(
country='CH',
area='BE',
mode='hosp',
ps_adoption=[1.0, 0.75, 0.5, 0.25, 0.05],
show_significance=None,
sig_options=sig_options,
show_reduction=True,
log_yscale=False,
box_plot=True,
# ylim=[0,3.5],
ylim=[0,105],
xlabel=f'\% reduced mobility',
figsize=figsize,
commithash=commithash
)
# -
plot_relative_reduction(
country='CH',
area='BE',
mode='r_eff',
ps_adoption=[1.0, 0.75, 0.5, 0.25],
show_significance=None,
sig_options=sig_options,
show_reduction=False,
log_yscale=False,
box_plot=True,
ylim=[0,3.0],
xlabel=f'\% reduced mobility',
legend_is_left='outside',
figsize=figsize,
commithash=commithash
)
# +
# def plot_over_time(*, country, area, quantity, mode, p_adoption=0.25,
# ymax=None, commithash=None):
# paths = [ f'baseline-{country}-{area}-{commithash}/'
# f'baseline-{country}-{area}'
# '-expected_daily_base_expo_per100k=0.7142857142857143'
# f'.pk',
# f'lockdown-{country}-{area}-{commithash}/'
# f'lockdown-{country}-{area}'
# f'-p_compliance={p_adoption}'
# '.pk',
# f'k-groups-{country}-{area}-{commithash}/'
# f'k-groups-{country}-{area}'
# f'-K_groups=2-p_compliance={p_adoption}'
# '.pk',
# f'vulnerable-groups-{country}-{area}-{commithash}/'
# f'vulnerable-groups-{country}-{area}'
# f'-p_compliance={p_adoption}'
# '.pk',
# # f'conditional-measures-{country}-{area}-{commithash}/'
# # f'conditional-measures-{country}-{area}'
# # f'-max_incidence=50-p_compliance={p_adoption}'
# # '.pk',
# ]
# labels = ['No measures',
# 'Everyone',
# 'Alternating groups',
# 'Vulnerable groups',
# 'Conditional']
# # colors = [baseline_colors[0], spect_colors[0], spect_colors[1], spect_colors[2]]
# colors = ['#31a354', '#08519c', '#bd0026', '#fd8d3c']
# plot_filename = f'{mode}-{quantity}-over-time-p_adoption_{p_adoption}'
# plotter = Plotter()
# figsize = (LINE_WIDTH / 3.3, LINE_WIDTH / 3 * 4.5/6)
# plotter.compare_quantity(
# paths,
# labels=labels,
# quantity=quantity,
# mode=mode,
# ymax=ymax,
# titles=f'{int(p_adoption *100)}\% reduced activity',
# colors=colors,
# filename=plot_filename,
# figsize=figsize,#lib_plot.FIG_SIZE_FULL_PAGE_TRIPLE_ARXIV,
# figformat='neurips-double',
# legend_is_left=False)
# plot_over_time(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=70000,
# p_adoption=0.25,
# commithash=commithash
# )
# plot_over_time(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=70000,
# p_adoption=0.5,
# commithash=commithash
# )
# plot_over_time(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=70000,
# p_adoption=0.75,
# commithash=commithash
# )
# plot_over_time(
# country='CH',
# area='BE',
# quantity='hosp', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=None,
# p_adoption=0.5,
# commithash=commithash
# )
# +
def plot_over_time_panel(*, country, area, quantity, mode,
ymax=None, commithash=None):
paths = []
for p_adoption in [0.25, 0.5, 0.75]:
paths.append([ f'baseline-{country}-{area}-{commithash}/'
f'baseline-{country}-{area}'
'-expected_daily_base_expo_per100k=0.7142857142857143'
f'.pk',
f'lockdown-{country}-{area}-{commithash}/'
f'lockdown-{country}-{area}'
f'-p_compliance={p_adoption}'
'.pk',
f'k-groups-{country}-{area}-{commithash}/'
f'k-groups-{country}-{area}'
f'-K_groups=2-p_compliance={p_adoption}'
'.pk',
f'vulnerable-groups-{country}-{area}-{commithash}/'
f'vulnerable-groups-{country}-{area}'
f'-p_compliance={p_adoption}'
'.pk',
# f'conditional-measures-{country}-{area}-{commithash}/'
# f'conditional-measures-{country}-{area}'
# f'-max_incidence=50-p_compliance={p_adoption}'
# '.pk',
])
labels = ['No measures',
'Everyone',
'Alternating groups',
'Vulnerable groups',
'Conditional']
# colors = [baseline_colors[0], spect_colors[0], spect_colors[1], spect_colors[2]]
colors = ['#31a354', '#08519c', '#bd0026', '#fd8d3c']
plot_filename = f'{mode}-{quantity}-over-time'
plotter = Plotter()
figsize = (LINE_WIDTH, LINE_WIDTH / 3 * 4.5/6)
plotter.compare_quantity(
paths,
labels=labels,
quantity=quantity,
mode=mode,
ymax=ymax,
titles=[f'{int(p_adoption *100)}\% reduced mobility' for p_adoption in [0.25, 0.5, 0.75]],
colors=colors,
filename=plot_filename,
figsize=figsize,#lib_plot.FIG_SIZE_FULL_PAGE_TRIPLE_ARXIV,
figformat='neurips-double',
legend_is_left=None)
plot_over_time_panel(
country='CH',
area='BE',
quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
ymax=40000,
commithash=commithash
)
# -
# # Site closures
# +
# def plot_over_time_closures(*, country, area, quantity, mode,
# ymax=None, commithash=None):
# paths = [f'baseline-{country}-{area}-{commithash}/'
# f'baseline-{country}-{area}'
# '-expected_daily_base_expo_per100k=0.7142857142857143'
# f'.pk',
# f'site-closures-{country}-{area}-{commithash}/'
# f'site-closures-{country}-{area}-closures=social'
# '.pk',
# f'site-closures-{country}-{area}-{commithash}/'
# f'site-closures-{country}-{area}-closures=social_education'
# '.pk',
# f'site-closures-{country}-{area}-{commithash}/'
# f'site-closures-{country}-{area}-closures=social_office'
# '.pk',
# f'site-closures-{country}-{area}-{commithash}/'
# f'site-closures-{country}-{area}-closures=social_education_office'
# '.pk',
# ]
# labels = ['No measures',
# 'social',
# 'social + education',
# 'social + office',
# 'social + office + education']
# colors = ['red', baseline_colors[0], spect_colors[0], spect_colors[1], spect_colors[2]]
# plot_filename = f'site-closures-{mode}-{quantity}-over-time'
# plotter = Plotter()
# plotter.compare_quantity(
# paths,
# labels=labels,
# quantity=quantity,
# mode=mode,
# ymax=ymax,
# colors=colors,
# filename=plot_filename,
# figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL,
# figformat='neurips-double',
# legend_is_left=False)
# plot_over_time_closures(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=100,
# commithash=commithash
# )
# -
# # Contact tracing
# +
# def plot_relative_reduction_tracing(*, country, area, mode, ps_adoption, ps_social_distancing,
# show_reduction=True,
# colors=None, log_yscale=False, figsize=None, ylim=None,
# box_plot=False, legend_is_left=True,
# show_significance=None, sig_options=None, commithash=None):
# ps_adoption = np.sort(ps_adoption)
# if figsize is None:
# figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL
# if mode == 'r_eff':
# if area == 'TU':
# from lib.settings.town_settings_tubingen import town_population
# elif area == 'BE':
# from lib.settings.town_settings_bern import town_population
# else:
# raise NotImplementedError('Specify town population')
# else:
# town_population = None
# if ylim is None:
# if mode == 'r_eff' and show_reduction:
# ylim = (0.0, 50)
# elif mode == 'r_eff' and not show_reduction:
# ylim= (1.0, 3.7)
# plot_filename = (f'tracing-comparison-{mode}-{country}-{area}'
# f'-reduction={show_reduction}'
# f'-box_plot={box_plot}')
# paths = []
# labels = []
# for p_social_distancing in ps_social_distancing:
# paths.append([f'tracing-social-distancing-{country}-{area}-{commithash}/'
# f'tracing-social-distancing-{country}-{area}'
# f'-p_tracing={p_adoption}'
# f'-p_social_distancing={p_social_distancing}'
# f'-test_lag=0.5'
# f'-tracing_threshold=0.007584090158343315'
# '.pk' for p_adoption in ps_adoption])
# labels.append(f'{int(p_social_distancing*100)}\% social distancing')
# baseline_path = (f'baseline-{country}-{area}-6bbe7a3/'
# f'baseline-{country}-{area}'
# '-expected_daily_base_expo_per100k=0.7142857142857143'
# f'.pk')
# plotter = Plotter()
# plotter.compare_peak_reduction(path_list=paths,
# baseline_path=baseline_path,
# ps_adoption=ps_adoption,
# area_population=town_population,
# labels=labels,
# mode=mode,
# show_reduction=show_reduction,
# box_plot=box_plot,
# log_xscale=True,
# log_yscale=log_yscale,
# ylim=ylim,
# show_significance=show_significance,
# sig_options=sig_options,
# colors=colors,
# filename=plot_filename,
# figsize=figsize,
# figformat='neurips-double',
# legend_is_left=legend_is_left)
# +
# # Main results section, manual tracing panel
# LINE_WIDTH = 7.0
# COL_WIDTH = 3.333
# figsize = (1.2 * LINE_WIDTH / 2, 1.2 * LINE_WIDTH / 3 * 4.5/6)
# sig_options={'height': 1.0, 'same_height': False}
# plot_relative_reduction_tracing(
# country='CH',
# area='BE',
# mode='cumu_infected',
# ps_adoption=[0.75, 0.5, 0.25, 0.1],
# ps_social_distancing=[0.5, 0.25, 0.1, 0.0],
# show_significance=False,
# show_reduction=True,
# log_yscale=True,
# sig_options=sig_options,
# box_plot=True,
# ylim=None,
# figsize=figsize,
# commithash='64bcf31'
# )
# +
def plot_relative_quantity_heatmap(*, country, area, mode,
ps_adoption, ps_social_distancing, interpolate, cmap, commithash):
filename = (
f'tracing-social-distancing-heatmap'
)
paths = [(p_social_distancing, p_adoption,
(f'tracing-social-distancing-{country}-{area}-{commithash}/'
f'tracing-social-distancing-{country}-{area}'
f'-p_tracing={p_adoption}'
f'-p_social_distancing={p_social_distancing}'
f'-test_lag=0.5'
f'-tracing_threshold=0.007584090158343315'
'.pk'))
for p_adoption in ps_adoption
for p_social_distancing in ps_social_distancing]
for p_adoption in [0.4, 0.6, 0.8, 0.9]:
for p_social_distancing in [0.3, 0.4]:
paths.append((p_social_distancing, p_adoption,
(f'tracing-social-distancing-{country}-{area}-5393ad6/'
f'tracing-social-distancing-{country}-{area}'
f'-p_tracing={p_adoption}'
f'-p_social_distancing={p_social_distancing}'
f'-test_lag=0.5'
f'-tracing_threshold=0.007584090158343315'
'.pk')))
paths = (paths,)
baseline_path = (f'tracing-social-distancing-{country}-{area}-{commithash}/'
f'tracing-social-distancing-{country}-{area}'
f'-p_tracing=0.0'
f'-p_social_distancing=0.0'
f'-test_lag=0.5'
f'-tracing_threshold=0.007584090158343315'
f'.pk')
LINE_WIDTH = 7.0
COL_WIDTH = 3.333
figsize = (LINE_WIDTH / 2.5, LINE_WIDTH / 3)
figsize = (LINE_WIDTH / 2 , LINE_WIDTH / 2.5)
# plots
plotter = Plotter()
plotter.relative_quantity_heatmap_acm(
xlabel=r'\% reduced mobility',
ylabel=r'\% adoption of contact tracing',
path_labels=['a', 'b'],
paths=paths,
mode=mode,
zmax=100,
baseline_path=baseline_path,
filename=filename,
# figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL,
figsize=figsize,
figformat='neurips-double',
interpolate=interpolate,
width_ratio=1,
cmap=cmap,
)
ps_adoption = [1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.0]
# ps_adoption = [1.0, 0.0]
ps_social_distancing=[0.5, 0.25, 0.1, 0.05, 0.0]
# ps_social_distancing=[0.5, 0.0]
for mode in ["cumu_infected"]:
plot_relative_quantity_heatmap(
mode=mode,
country='CH',
area='BE',
ps_adoption=ps_adoption[::-1],
ps_social_distancing=ps_social_distancing[::-1],
interpolate='linear',
cmap=plt.cm.RdYlGn,
commithash='64bcf31'
)
# +
# def plot_over_time_tracing(*, country, area, quantity, mode, ps_adoption, p_social_distancing,
# ymax=None, commithash=None):
# labels = []
# paths = [f'tracing-social-distancing-{country}-{area}-{commithash}/'
# f'tracing-social-distancing-{country}-{area}'
# f'-p_tracing={p_adoption}'
# f'-p_social_distancing={p_social_distancing}'
# f'-test_lag=0.5'
# f'-tracing_threshold=0.007584090158343315'
# '.pk' for p_adoption in ps_adoption]
# labels = [f'{int(p_adoption*100)}\% adoption' for p_adoption in ps_adoption]
# colors = [pancast_colors[0], spect_colors[3], spect_colors[2], spect_colors[1], spect_colors[0]]
# # colors = ['#bdd7e7', '#6baed6', '#3182bd', '#08519c', pancast_colors[0]][::-1]
# colors = ['#fed976', '#feb24c', '#fd8d3c', '#f03b20', '#bd0026'][::-1]
# plot_filename = f'{mode}-{quantity}-over-time-p_social_distancing={p_social_distancing}'
# plotter = Plotter()
# plotter.compare_quantity(
# paths,
# labels=labels,
# quantity=quantity,
# mode=mode,
# ymax=ymax,
# titles=f'{int(p_social_distancing *100)}\% reduction of activity',
# colors=colors,
# filename=plot_filename,
# figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL,
# figformat='neurips-double',
# legend_is_left=True)
# plot_over_time_tracing(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=30000,
# ps_adoption=[0.0, 0.1, 0.25, 0.5, 0.75],
# p_social_distancing=0.1,
# commithash='64bcf31',
# )
# plot_over_time_tracing(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=25000,
# ps_adoption=[0.0, 0.1, 0.25, 0.5, 0.75],
# p_social_distancing=0.25,
# commithash='64bcf31',
# )
# plot_over_time_tracing(
# country='CH',
# area='BE',
# quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
# mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
# ymax=1000,
# ps_adoption=[0.0, 0.1, 0.25, 0.5, 0.75],
# p_social_distancing=0.5,
# commithash='64bcf31',
# )
# +
def plot_over_time_tracing_panel(*, country, area, quantity, mode, ps_adoption, ps_social_distancing,
ymax=None, commithash=None):
labels = []
paths = []
for p_social_distancing in ps_social_distancing:
paths.append([f'tracing-social-distancing-{country}-{area}-{commithash}/'
f'tracing-social-distancing-{country}-{area}'
f'-p_tracing={p_adoption}'
f'-p_social_distancing={p_social_distancing}'
f'-test_lag=0.5'
f'-tracing_threshold=0.007584090158343315'
'.pk' for p_adoption in ps_adoption])
labels = [f'{int(p_adoption*100)}\% adoption' for p_adoption in ps_adoption]
colors = [pancast_colors[0], spect_colors[3], spect_colors[2], spect_colors[1], spect_colors[0]]
# colors = ['#bdd7e7', '#6baed6', '#3182bd', '#08519c', pancast_colors[0]][::-1]
colors = ['#fed976', '#feb24c', '#fd8d3c', '#f03b20', '#bd0026'][::-1]
colors = [ '#41ab5d', '#78c679', '#41b6c4','#2c7fb8','#253494',][::-1]
plot_filename = f'{mode}-{quantity}-over-time-tracing'
plotter = Plotter()
figsize = (LINE_WIDTH, LINE_WIDTH / 3 * 4.5/6)
plotter.compare_quantity(
paths,
labels=labels,
quantity=quantity,
mode=mode,
ymax=ymax,
titles=[f'{int(p_social_distancing *100)}\% reduced mobility' for p_social_distancing in ps_social_distancing],
colors=colors,
filename=plot_filename,
figsize=figsize, # lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV_TALL,
figformat='neurips-double',
legend_is_left=None)
plot_over_time_tracing_panel(
country='CH',
area='BE',
quantity='infected', # Allowed values: ['infected', 'hosp', 'dead']
mode='total', # Allowed values: ['total', 'daily', 'cumulative', 'weekly incidence']
ymax=40000,
ps_adoption=[0.0, 0.1, 0.25, 0.5, 0.75],
ps_social_distancing=[0.0, 0.1, 0.25],
commithash='64bcf31',
)
# -
# # Rt over time
# +
def plot_rt_panel(experiment, ps_adoption, p_social_distancing=0.25):
country = 'CH'
area = 'BE'
assert len(ps_adoption) == 4
paths = [f'tracing-social-distancing-{country}-{area}-64bcf31/'
f'tracing-social-distancing-{country}-{area}'
f'-p_tracing={p_adoption}'
f'-p_social_distancing={p_social_distancing}'
f'-test_lag=0.5'
f'-tracing_threshold=0.007584090158343315'
'.pk' for p_adoption in ps_adoption]
labels = [f'{int(p_adoption *100)}\% adoption' for p_adoption in ps_adoption]
# if experiment == 'main-manual':
# labels = [f'Only manual tracing',
# f'SPECTS, {int(p_adoption*100)}\% adopt.',
# f'PanCast(25\%), {int(p_adoption*100)}\% adopt.',
# f'PanCast(100\%), {int(p_adoption*100)}\% adopt.']
# elif experiment == 'no-manual':
# labels = [f'No contact tracing',
# f'SPECTS, {int(p_adoption*100)}\% adopt.',
# f'PanCast(25\%), {int(p_adoption*100)}\% adopt.',
# f'PanCast(100\%), {int(p_adoption*100)}\% adopt.']
# elif experiment == 'interoperation':
# labels = [f'Only manual tracing',
# f'0\% beacons, {int(p_adoption*100)}\% adopt.',
# f'25\% beacons, {int(p_adoption*100)}\% adopt.',
# f'100\% beacons, {int(p_adoption*100)}\% adopt.']
# names = [f'baseline',
# f'SPECTS-p_adoption={p_adoption}',
# f'beacons=0.25-p_adoption={p_adoption}',
# f'beacons=0.100-p_adoption={p_adoption}',
# ]
#_, filename = os.path.split(path)
plot_filename = 'rt-tracing'
LINE_WIDTH = 7.0
COL_WIDTH = 3.333
figsize = (0.5*LINE_WIDTH, 1.5 * LINE_WIDTH / 3 * 4.5/6)
figsize = (LINE_WIDTH / 2 , LINE_WIDTH / 2.5)
# plot
plotter = Plotter()
plotter.plot_daily_nbinom_rts_panel(
paths=paths,
titles=labels,
filename=plot_filename,
cmap_range=(0.5, 1.5),
figsize=figsize,
figformat='neurips-double',
ymax=3.3,
#xlim=(0, 185),
x_axis_dates=False,
show_legend=False,
subplots_adjust={'bottom':0.2, 'top': 0.98, 'left': 0.12, 'right': 0.96},
)
plot_rt_panel(experiment='main-manual',
ps_adoption=[0.0, 0.25, 0.5, 0.75],
p_social_distancing=0.1)
# !bash crop_pdfs.sh plots/rt-*.pdf
# -
# !bash crop_pdfs.sh plots/*.pdf
# # Parameter estimation
# +
def plot_model_fit_and_transferability(calibration_area='BE', ymax=None, show_legend=True):
validation_pairs = {
'BE': [('CH', 'BE')],
'JU': [('CH', 'JU')],
'TU': [('GER', 'TU')],
'KL': [('GER', 'KL')],
'RH': [('GER', 'RH')],
}
ymax = {
'GER': {'TU': 600, 'SB': 800, 'KL': 300, 'RH': 500, 'TR': 2000,},
'CH': {'VD': 2000, 'BE': 590, 'TI': 500, 'JU': 500, 'BS': 3000, 'LU' : 500}
}
for country, area in validation_pairs[calibration_area]:
print(country, area)
labels = ['Simulated cases']
# labels = [f'Simulation {area} ({calibration_area} params)']
paths = [f'validation-{calibration_area}-{commithash}/validation-{calibration_area}-validation_region={area}.pk']
plot_filename = f'Modelfit-{calibration_area}-{area}'
plotter = Plotter()
ts, predicted = plotter.plot_positives_vs_target(
paths=paths,
labels=labels,
country=country,
area=area,
ymax=ymax[country][area],
lockdown_label_y=ymax[country][area]/8,
filename=plot_filename,
# figsize=lib_plot.FIG_SIZE_NEURIPS_TRIPLE,
figsize=(2.3, 1.5),
figformat='neurips-double',
small_figure=True,
show_legend=show_legend)
plot_model_fit_and_transferability(calibration_area='BE')
plot_model_fit_and_transferability(calibration_area='TU')
plot_model_fit_and_transferability(calibration_area='JU')
plot_model_fit_and_transferability(calibration_area='KL')
plot_model_fit_and_transferability(calibration_area='RH')
# !bash crop_pdfs.sh plots/Modelfit-*.pdf
# +
plotter = Plotter()
plotter.beta_parameter_heatmap(
country='CH',
area='BE',
calibration_state='logs/calibration_be0_state.pk',
figsize=(2.7, 2.0),
# cmap='gist_heat_r',
cmap='viridis_r',
levels=12,
ceil=1e6
)
plotter.beta_parameter_heatmap(
country='CH',
area='JU',
calibration_state='logs/calibration_ju0_state.pk',
figsize=(2.7, 2.0),
# cmap='gist_heat_r',
cmap='viridis_r',
levels=12,
ceil=1e6
)
plotter.beta_parameter_heatmap(
country='GER',
area='TU',
calibration_state='logs/calibration_tu0_state.pk',
figsize=(2.7, 2.0),
# cmap='gist_heat_r',
cmap='viridis_r',
levels=12,
ceil=1e6
)
plotter.beta_parameter_heatmap(
country='GER',
area='KL',
calibration_state='logs/calibration_kl0_state.pk',
figsize=(2.7, 2.0),
# cmap='gist_heat_r',
cmap='viridis_r',
levels=12,
ceil=1e6,
)
plotter.beta_parameter_heatmap(
country='GER',
area='RH',
calibration_state='logs/calibration_rh0_state.pk',
figsize=(2.7, 2.0),
# cmap='gist_heat_r',
cmap='viridis_r',
levels=12,
ceil=1e6,
)
# !bash crop_pdfs.sh plots/bo-result-*.pdf
# -
# # Overdispersion
# +
def plot_exposure_nbinoms(path, day_eval_points=[28]):
for t0_days in day_eval_points:
t0 = t0_days * 24.0
label_range = []
plotter = lib_plot.Plotter()
plotter.plot_nbinom_distributions(
path=path,
filename=f'baseline-t0={t0_days}',
t0=t0,
label_range=label_range,
ymax=0.85,
figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV,
figformat='neurips-double',
)
plotter.plot_visit_nbinom_distributions(
path=path,
filename=f'baseline-t0={t0_days}',
t0=t0,
label_range=label_range,
ymax=0.85,
figsize=lib_plot.FIG_SIZE_FULL_PAGE_DOUBLE_ARXIV,
figformat='neurips-double',
)
path = (
f'baseline-CH-BE-f5cfa6f/'
f'baseline-CH-BE-expected_daily_base_expo_per100k=0.7142857142857143'
'.pk'
)
day_eval_points = [28]
plot_exposure_nbinoms(path, day_eval_points)
# !bash crop_pdfs.sh plots/nbin-secondary-*.pdf
# !bash crop_pdfs.sh plots/nbin-visit-*.pdf
# -
| sim/sim-plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit ('3.10.0')
# language: python
# name: python3
# ---
import odrive
from odrive.enums import *
from setup import *
from motor import *
# # Connect to motor card
# Find a connected ODrive (this will block until you connect one)
my_drive = find_odrive()
# # Config motor card
setup_odrive(my_drive.axis0)
# # Run calibration
run_calibration(my_drive.axis0)
blocked_motor_mode(my_drive.axis0)
# # Move motor to position
# do 1 revolution
go_to_position(my_drive.axis0, 360)
# return to 0
go_to_position(my_drive.axis0, 0)
| control/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="JSdgZ7tIC1Su"
# # Shell Basics
#
# > Light notes about interacting with the shell.
# + [markdown] id="DIMOLMGd6WOA"
# [](https://mybinder.org/v2/gh/bnia/dataguide/main?filepath=%2Fnotebooks%2F03_Shell_Basics.ipynb)
# [](https://colab.research.google.com/github/bnia/dataguide/blob/main/notebooks/03_Shell_Basics.ipynb)
# [](https://github.com/bnia/dataguide/tree/main/notebooks/03_Shell_Basics.ipynb)
# [](https://github.com/ellerbrock/open-source-badges/)
#
# [](https://github.com/bnia/dataguide/blob/main/LICENSE)
# [](https://bnia.github.io)
# []()
#
# [](https://github.com/bnia/dataguide)
# [](https://github.com/bnia/dataguide)
# [](https://github.com/bnia/dataguide)
# [](https://github.com/bnia/dataguide)
#
# [](https://twitter.com/intent/tweet?text=Check%20out%20this%20%E2%9C%A8%20colab%20by%20@bniajfi%20https://github.com/bnia/dataguide%20%F0%9F%A4%97)
# [](https://twitter.com/bniajfi)
# + [markdown] id="TxBmpy7IRmxA"
# - command line > terminal > console > shell
# - **command line** - text-based UI, a way to talk with the shell (Windows)
# - **terminal** - wrapper that runs the shell. Typically run via the CLI. (the command prompt)
# - **console** - refers to a physical terminal with I/O (Xbox, PlayStation, Wii)
# - **shell** - permits executing operating-system-level commands. (bash - Unix, PowerShell and cmd - Windows)
# - bash > cmd - more expressive and better supported.
# [geeksforgeeks](https://www.geeksforgeeks.org/difference-between-terminal-console-shell-and-command-line/)
# + [markdown] id="3sDvnRk7Inhz"
# The **Command Prompt** is a command line interpreter *application* available in most **Windows** operating systems. It's used to execute entered commands. Most of those commands automate tasks via scripts and batch files - [Lifewire](https://www.lifewire.com/command-prompt-2625840#:~:text=Command%20Prompt%20is%20a%20command,certain%20kinds%20of%20Windows%20issues.)
# + [markdown] id="XBTypMoEGqeN"
# Bash is a Unix shell. A bash file is a batch file, but not the other way around. A batch file is a text file containing a series of commands. - [tkalve](https://stackoverflow.com/questions/5079180/what-is-the-difference-between-batch-and-bash-files)
# + [markdown] id="FFYR40ydC1S6"
# Shell [scripts](https://www.howtogeek.com/67469/the-beginners-guide-to-shell-scripting-the-basics/)
#
# > Scripting allows you to use programming functions -- such as "for" loops, if/then/else statements, and so forth -- directly within your operating system's interface...
# >
# > It allows us to program commands in chains and have the system execute them as a scripted event, just like batch files.
# >
# > They allow for command substitution: invoke a command, like date, and use its output as part of a file-naming scheme.
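The `date` example can be tried directly in a cell (the filename pattern here is just an illustration, not part of the quoted guide):

```shell
# use command substitution to embed today's date in a file name
fname="backup-$(date +%Y-%m-%d).txt"
echo "created on $(date)" > "$fname"
ls -l "$fname"
```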
# + [markdown] id="rAmEyzHuDaIl"
# Types of "batch" files in [Windows](https://www.geeksforgeeks.org/writing-windows-batch-script/)
# ```
# - `INI` (*.ini) - Initialization file. These set the default variables for the system and programs.
# - `CFG` (*.cfg) - These are the configuration files.
# - `SYS` (*.sys) - System files; can sometimes be edited, mostly compiled machine code in new versions.
# - `COM` (*.com) - Command files. These are the executable files for all the DOS commands. In early versions there was a separate file for each command. Now, most are inside COMMAND.COM.
# - `CMD` (*.cmd) - These were the batch files used in NT operating systems.
# ```
# + [markdown] id="1dUM1CoPC1S4"
# And some basic commands of batch file
#
# ```
# - `echo` - Prints out the input string. It can be ON or OFF; if ECHO is ON, the command prompt will display each command it is executing.
# - `cls` - Clears the command prompt screen.
# - `title` - Changes the title text displayed on top of the prompt window.
# - `EXIT` - Exits the Command Prompt.
# - `pause` - Used to stop the execution of a Windows batch file.
# - `::` - Adds a comment in the batch file.
# - `COPY` - Copies a file or files.
# - `CAT` - Displays the inner content of a file.
# ```
# + id="N5Lcf8Q8C1Sx"
# sed '1d' gitgist.js;
# ! echo 'This pipe works but I have trouble with the wget one.?' > myoutput.txt
# + id="7ZSymAwDC1Sy" outputId="03d7944e-17ee-4720-8d96-02960cb9844e"
# !ls; pwd; whoami;
# + id="ubPMRV24C1Sz"
# A ; B  - Run A and then B, regardless of the success or failure of A
# A && B - Run B only if A succeeded
# A || B - Run B only if A failed
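The three operators above can be tried directly; the shell looks only at exit codes:

```shell
true ; echo "runs regardless"    # A ; B  -> B always runs
true && echo "A succeeded"       # A && B -> B runs only if A exits 0
false || echo "A failed"         # A || B -> B runs only if A exits non-zero
```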
| notebooks/03_Shell_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Merge Subroutine
#
# __Input:__ sorted arrays _C_ and _D_ (length $n/2$ each)
# __Output:__ sorted array B (length _n_)
# __Simplifying assumption:__ _n_ is even
# ***
# $i:=1$
# $j:=1$
# for $k:=1$ to _n_ do
# __if__ $C[i] < D[j]$ __then__
# $B[k]:=C[i]$ #populate output array
# $i:=i + 1$ #increment i
# __else__ #$D[j]<C[i]$
# $B[k]:=D[j]$
# $j:=j+1$
#
# +
def merge_subroutine():
    C = [1, 3, 5, 8, 10]
    D = [2, 4, 6, 9, 101]
    n = len(C) + len(D)
    B = [None] * n  # preallocate: a Python list can't be assigned past its end
    i = 0  # Python is 0-indexed, unlike the 1-indexed pseudocode
    j = 0
    for k in range(n):
        # guard against running past the end of C or D
        # (the pseudocode's simplifying assumptions gloss over this)
        if j >= len(D) or (i < len(C) and C[i] < D[j]):
            B[k] = C[i]  # populate output array
            i = i + 1  # increment i
        else:
            B[k] = D[j]
            j = j + 1
    print(B)

merge_subroutine()
# -
# # More Modern Approach
# Abandoning this approach for a slightly more modern one. [Translated from JS](https://medium.com/hackernoon/programming-with-js-merge-sort-deb677b777c0) and using slice instead of all of those `for` loops.
# +
import math
def mergeSort(arr):
#print(len(arr))
if len(arr) == 1: #no further subdivision possible!
return arr
MIDDLE = round(len(arr) / 2)
LEFT = arr[:MIDDLE] #using slice
RIGHT = arr[MIDDLE:]
return merge(mergeSort(LEFT), mergeSort(RIGHT))
def merge(left, right):
result = []
indexLeft = indexRight = 0
while (indexLeft < len(left) and indexRight < len(right)):
# One-liner to start the debugger here.
#from IPython.core.debugger import Tracer; Tracer()()
if left[indexLeft] < right[indexRight]:
result.append(left[indexLeft])
indexLeft += 1
else:
result.append(right[indexRight])
indexRight += 1
    # return result.concat(left.slice(indexLeft)).concat(right.slice(indexRight));
    return result + left[indexLeft:] + right[indexRight:]  # slices give the (possibly empty) leftover tails
#return result
myList = [123, 212, 123, 3221, 123, 421, 2, 4, 1, 5, 6, 99, 100, -999, 3]
print(mergeSort(myList))
# -
# playing around with slice syntax
a = ("a", "b", "c", "d", "e", "q", "f", "g", "h")
MIDDLE = round(len(a) / 2)
LEFT = a[:MIDDLE]
RIGHT = a[MIDDLE:]
#x = slice(2)
print(MIDDLE, LEFT, RIGHT)
print(a[1])
print(a[1] + a[5])
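One slice property worth noting for the merge step: slicing past the end of a list yields an empty list, so `left[i:] + right[j:]` safely appends whatever remains of either side after the main loop:

```python
left = [1, 3, 5]
right = [2, 4]

# state after the merge loop: right is exhausted, left still holds [5]
i, j = 2, 2
result = [1, 2, 3, 4]

# right[j:] is [] here, so concatenation just appends left's leftover tail
merged = result + left[i:] + right[j:]
print(merged)  # -> [1, 2, 3, 4, 5]
```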
# +
def someFunc(someArr):
print(len(someArr))
someFunc([1, 2, 3])
# +
#fiddling with the first part of the operation:
def mergeSort(arr):
print(len(arr))
if len(arr) == 1: #no further subdivision possible!
return arr
MIDDLE = round(len(arr) / 2)
LEFT = arr[MIDDLE:] #using slice
RIGHT = arr[:MIDDLE]
#return (merge(mergeSort(LEFT), mergeSort(RIGHT)))
print (MIDDLE, LEFT, RIGHT)
mergeSort([10, 20, 30, 40, 50])
# -
x = ["fee", "fie", "foe"]
y = ["eeny", "meeny", "miney"]
z = ["ecka", "bekka", "soda cracker"]
output = x + y + z
print(output)
# # Cheat time!
# My knowledge of python is a bit thin for this task. So stealing the answer from [CodeSpeedy](https://www.codespeedy.com/merge-sort-in-python/). Not too far off from what I'm doing above, except it actually works.
# +
def merge_sort(arr, begin, end):
if end - begin > 1:
middle = (begin + end)//2
merge_sort(arr, begin, middle)
merge_sort(arr, middle, end)
merge_list(arr, begin, middle, end)
def merge_list(arr, begin, middle, end):
left = arr[begin:middle]
right = arr[middle:end]
    k = begin  # k is the position in arr where the next merged element is written
i = 0
j = 0
while(begin + i < middle and middle + j < end):
if(left[i] <= right[j]):
arr[k] = left[i]
i = i + 1
else:
arr[k] = right[j]
j = j + 1
        k = k + 1  # advance the write position in arr
if begin + i < middle:
while k < end:
arr[k] = left[i]
i = i + 1
k = k + 1
else:
while k < end:
arr[k] = right[j]
j = j + 1
k = k + 1
arr = [10, 21, 3, -99, 11, 456, 12, 32, -0.1, 3, 77, 3.14, 91, 23, 54, 9]
merge_sort(arr, 0, len(arr))
print(arr)
# -
| algorithms/week01/MergeSort.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cocorepr
#
# > A tool to convert COCO datasets between different representations (for now, only Object Detection is supported).
# ## Installation
#
# ```bash
# $ pip install -U cocorepr
# ```
# ## Basic usage
#
# ```
# $ cocorepr --help
# usage: cocorepr [-h] [--in_json_file [IN_JSON_FILE [IN_JSON_FILE ...]]]
# [--in_json_tree [IN_JSON_TREE [IN_JSON_TREE ...]]]
# [--in_crop_tree [IN_CROP_TREE [IN_CROP_TREE ...]]] --out_path
# OUT_PATH --out_format {json_file,json_tree,crop_tree}
# [--seed SEED] [--max_crops_per_class MAX_CROPS_PER_CLASS]
# [--overwrite] [--indent INDENT] [--update] [--debug]
#
# Tool for converting datasets in COCO format between different representations
#
# optional arguments:
# -h, --help show this help message and exit
# --in_json_file [IN_JSON_FILE [IN_JSON_FILE ...]]
# Path to one or multiple json files storing COCO
# dataset in `json_file` representation (all json-based
# datasets will be merged).
# --in_json_tree [IN_JSON_TREE [IN_JSON_TREE ...]]
# Path to one or multiple directories storing COCO
# dataset in `json_tree` representation (all json-based
# datasets will be merged).
# --in_crop_tree [IN_CROP_TREE [IN_CROP_TREE ...]]
# Path to one or multiple directories storing COCO
# dataset in `crop_tree` representation (all crop-based
# datasets will be merged and will overwrite the json-
# based datasets).
# --out_path OUT_PATH Path to the output dataset (file or directory: depends
# on `--out_format`)
# --out_format {json_file,json_tree,crop_tree}
# --seed SEED Random seed.
# --max_crops_per_class MAX_CROPS_PER_CLASS
# If set, the tool will randomly select up to this
# number of crops (annotations) per each class
# (category) and drop the others.
# --overwrite If set, will delete the output file/directory before
# dumping the result dataset.
# --indent INDENT Indentation in the output json files.
# --update Whether to update objects with the same ID, but
# different content during the dataset merge. If not
# used and such objects are found - exception will be
# thrown. The update strategy: [in_json_tree,
# in_json_file, in_crop_tree], from left to right within
# each group, top-right one wins. Beware, crop_tree
# datasets are owerwritting and removing data from other
# datasets: consider first merging crop_tree with it's
# json_tree/file into json_tree/file and merge the
# resulting dataset with others.
# --debug
# ```
#
# This tool converts a dataset between three formats:
# - json file (a single json file) - common ML format,
# - json tree (a set of json chunks) - suitable for Git,
# - crop tree (a set of png crops of the object detection annotations) - used for cleaning the object detection dataset.
#
#
# While json-based formats are self-contained, crop-based format needs at least one json path in order to reconstruct the dataset:
# ```
# $ cocorepr \
# --in_crop_tree /path/to/tree \
# --out_path /tmp/crop_tree \
# --out_format crop_tree
# INFO: Arguments: Namespace(debug=False, in_crop_tree=[PosixPath('/path/to/tree')], in_json_file=[], in_json_tree=[], indent=4, out_format='crop_tree', out_path=PosixPath('/tmp/crop_tree'), overwrite=False)
# Traceback (most recent call last):
# File "/home/ay/.pyenv/versions/3.7.6/bin/cocorepr", line 33, in <module>
# sys.exit(load_entry_point('cocorepr', 'console_scripts', 'cocorepr')())
# File "/plain/github/nm/cocorepr/cocorepr/main.py", line 66, in main
# raise ValueError(f'Not found base dataset, please specify either of: '
# ValueError: Not found base dataset, please specify either of: --in_json_tree / --in_json_file (multiple arguments allowed)
# ```
#
#
# Options `--in_json_tree`, `--in_json_file` and `--in_crop_tree` expect 1 or more path to the specified dataset representation.
# If multiple values were passed, the datasets will be merged (enforcing all the elements to have unique `id` fields).
# ```
# $ cocorepr \
# --in_json_file /tmp/json_file/file1.json /tmp/json_file/file2.json \
# --in_json_tree /tmp/json_tree/dir1 /tmp/json_file/dir2 /tmp/json_file/dir3 \
# --in_crop_tree /tmp/crop_tree/dir1 /tmp/crop_tree/dir2 \
# --out_path /tmp/json_tree \
# --out_format json_tree
# ```
#
# The command above will load `json_file` dataset from `/tmp/json_file/file1.json`, then load `/tmp/json_file/file2.json` and merge it with the first one, then load the `json_tree` from `/tmp/json_tree/dir1` and merge it with the previous result, etc.
# Then it'll load the `crop_tree` from `/tmp/crop_tree/dir1` using meta-info from the previously constructed dataset and merge it with `/tmp/crop_tree/dir2`.
# The result will be written in form of `json_tree` to `/tmp/json_tree` (if directory exists, the tool will fail unless the `--overwrite` is specified).
#
# ## Motivation
#
# This tool was born at [Neu.ro](https://neu.ro) while we were working on an ML project for a client who needed a system that would process photos, detect objects and then classify them into one of a large number of classes. The client had large volumes of data, but the data was very noisy.
#
# Roughly, our solution comprised two models:
# 1. an Object Detection (`OD`) model, trained to find generic objects (similar to COCO classes: bottle, laptop, bus),
# 2. an Object Classification (`CL`) model, fine-tuned on the client's domain (for example: which exact brand of bottle, which type of laptop).
#
# While the first model could be trained on a generic dataset, the second task required a large amount of work with the client on cleaning the noisy data and preparing a fine-grained classification dataset.
#
# For historical reasons, both datasets were collected, cleaned and stored in COCO format. Fortunately, we didn't need to store image blobs -- the client's API guaranteed their availability and immutability, so we only had to store the image URL and some other metadata (`coco_url` and `id`; the other fields are optional):
#
# ```json5
# {
# "id": "49428", // image ID
# "coco_url": "http://images.cocodataset.org/train2017/000000049428.jpg", // URL of the immutable image blob
# // "license": 6,
# // "file_name": "000000049428.jpg",
# // "height": 427,
# // "width": 640,
# // "date_captured": "2013-11-15 04:30:29",
# // "flickr_url": "http://farm7.staticflickr.com/6014/5923365195_bee5603371_z.jpg"
# },
# ```
#
# While the COCO format is a natural fit for OD datasets, it can be bulky for CL datasets, which are concerned with the class of each annotation rather than with the images:
# ```json5
# {
# "id": "124710", // annotation ID
# "image_id": "140006", // image ID in the section "images"
# "category_id": "2", // class ID in the section "categories"
# "bbox": [496.52, 125.94, 143.48, 113.54], // crop coordinates in pixels: [x,y,w,h] (from top-left, x=horizontal)
# }
# ```
#
# In order to train a CL model, we want a certain number of "clean" crops for each class (a *crop* is a small picture cut out of a given image using the coordinates of a given annotation). To facilitate the manual process of selecting the clean crops, we would like them sorted into directories grouping them by class (category). After the cleaning, we would like to reconstruct this subset of the COCO dataset, register it in Git and then use it to train the model.
# This is where `cocorepr` comes in: it was created to automate these conversions between the different representations of a COCO dataset.
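# For illustration, a crop can be obtained by slicing the image array with the annotation's `bbox` (a sketch assuming the image is already loaded as a NumPy array; `crop_from_bbox` is a hypothetical helper, not part of cocorepr):

```python
import numpy as np

def crop_from_bbox(image, bbox):
    """Cut the annotation region out of an image array.

    bbox follows the COCO convention: [x, y, w, h] in pixels,
    measured from the top-left corner (x is horizontal).
    """
    x, y, w, h = (int(round(v)) for v in bbox)
    return image[y:y + h, x:x + w]

# toy image with the dimensions of the example above: height=427, width=640
image = np.zeros((427, 640, 3), dtype=np.uint8)
crop = crop_from_bbox(image, [496.52, 125.94, 143.48, 113.54])
print(crop.shape)  # (114, 143, 3)
```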
#
# Below you can find the detailed discussion of the COCO dataset representations.
#
# ---
# ## Representations of COCO dataset
# ### Json file
#
# This is a regular format for a COCO dataset: all the annotations are stored in a single json file:
#
# ```json5
# $ cat examples/coco_chunk/json_file/instances_train2017_chunk3x2.json
# {
# "licenses": [
# {
# "url": "http://creativecommons.org/licenses/by-nc-sa/2.0/",
# "id": "1",
# "name": "Attribution-NonCommercial-ShareAlike License"
# },
# ...
# ],
# "info": {
# "description": "COCO 2017 Dataset",
# "url": "http://cocodataset.org",
# "version": "1.0",
# "year": 2017,
# "contributor": "COCO Consortium",
# "date_created": "2017/09/01"
# },
# "categories": [
# {
# "supercategory": "person",
# "id": "1",
# "name": "person"
# },
# ...
# ],
# "images": [
# {
# "license": "6",
# "file_name": "000000049428.jpg",
# "coco_url": "http://images.cocodataset.org/train2017/000000049428.jpg",
# "height": 427,
# "width": 640,
# "date_captured": "2013-11-15 04:30:29",
# "flickr_url": "http://farm7.staticflickr.com/6014/5923365195_bee5603371_z.jpg",
# "id": "49428"
# },
# ...
# ],
# "annotations": [
# {
# "image_id": "140006",
# "bbox": [
# 496.52,
# 125.94,
# 143.48,
# 113.54
# ],
# "category_id": "2",
# "id": "124710"
# },
# ...
# ]
# }
# ```
#
# This format is used as the input format by many ML frameworks, but the json file is usually too big to be stored in a Git repository (over 50M), so we either need to store it under Git LFS (which does not show the diff, only the hash), or use other representations that are better adapted to Git.
# ### Json tree
#
# This format makes the dataset suitable for Git: it stores each element in a separate json chunk, thus enabling Git to do the diff at the level of individual chunks.
#
# ```
# $ cocorepr \
# --in_json_file examples/coco_chunk/json_file/instances_train2017_chunk3x2.json \
# --out_path $TMP \
# --out_format json_tree # --overwrite
# INFO:root:Arguments: Namespace(in_crop_tree_path=None, in_json_path=PosixPath('examples/coco_chunk/json_file/instances_train2017_chunk3x2.json'), out_format='json_tree', out_path=PosixPath('/tmp/json_tree'), overwrite=False)
# INFO:root:Loading json file from file: examples/coco_chunk/json_file/instances_train2017_chunk3x2.json
# INFO:root:Loaded: images=6, annotations=6, categories=3
# INFO:root:Dumping json tree to dir: /tmp/json_tree
# INFO:root:[+] Success: json_tree dumped to /tmp/json_tree: ['info.json', 'info', 'categories', 'annotations', 'licenses', 'images']
#
# $ tree /tmp/json_tree
# /tmp/json_tree
# ├── annotations
# │   ├── 124710.json
# │   ├── 124713.json
# │   ├── 131774.json
# │   ├── 131812.json
# │   ├── 183020.json
# │   └── 183030.json
# ├── categories
# │   ├── 1.json
# │   ├── 2.json
# │   └── 3.json
# ├── images
# │   ├── 117891.json
# │   ├── 140006.json
# │   ├── 289949.json
# │   ├── 49428.json
# │   ├── 537548.json
# │   └── 71345.json
# ├── info
# │   └── info.json
# └── licenses
#     ├── 1.json
#     ├── 2.json
#     ├── 3.json
#     ├── 4.json
#     ├── 5.json
#     ├── 6.json
#     ├── 7.json
#     └── 8.json
#
# 5 directories, 24 files
# ```
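# The chunking shown above can be sketched as follows (`dump_json_tree` is a hypothetical helper for illustration, not cocorepr's actual code):

```python
import json
from pathlib import Path

def dump_json_tree(dataset, out_dir):
    """Write each top-level COCO element into its own json chunk.

    List sections (images, annotations, ...) get one file per element,
    named after its `id`; scalar sections (e.g. `info`) get a single file.
    """
    out_dir = Path(out_dir)
    for section, content in dataset.items():
        section_dir = out_dir / section
        section_dir.mkdir(parents=True, exist_ok=True)
        if isinstance(content, list):
            for element in content:
                path = section_dir / ('%s.json' % element['id'])
                path.write_text(json.dumps(element, indent=4))
        else:
            (section_dir / ('%s.json' % section)).write_text(
                json.dumps(content, indent=4))
```

# Small per-element files like these let Git show meaningful diffs when single annotations are added or removed.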
# ### Crop tree
#
# This format is used to facilitate the manual cleaning of the CL dataset. The directory `crops` contains one directory per class, named `{sanitized-class-name}--{class-id}`, so that classes with similar names stay next to each other: for example, the car classes `Bugatti Veyron EB 16.4` and `Bugatti Veyron 16.4 Grand Sport` become `Bugatti_Veyron_EB_16_4--103209` and `Bugatti_Veyron_16_4_Grand_Sport--376319`, which is convenient since directories are usually sorted alphabetically. A human then goes through the crop pictures, deletes the "dirty" ones and makes sure that each class keeps enough "clean" crops. Afterwards, we can reconstruct the dataset in the json tree representation and register it in Git.
#
# ```bash
# $ cocorepr \
# --in_json_file examples/coco_chunk/json_file/instances_train2017_chunk3x2.json \
# --out_path /tmp/crop_tree \
# --out_format crop_tree
# INFO:root:Arguments: Namespace(in_crop_tree_path=None, in_json_path=PosixPath('examples/coco_chunk/json_file/instances_train2017_chunk3x2.json'), indent=4, out_format='crop_tree', out_path=PosixPath('/tmp/crop_tree'), overwrite=False)
# INFO:root:Loading json file from file: examples/coco_chunk/json_file/instances_train2017_chunk3x2.json
# INFO:root:Loaded: images=6, annotations=6, categories=3
# INFO:root:Detected input dataset type: json_file: examples/coco_chunk/json_file/instances_train2017_chunk3x2.json
# INFO:root:Dumping crop tree to dir: /tmp/crop_tree
# Processing images: 100%|██████████| 6/6 [00:03<00:00,  1.60it/s]
# INFO:root:[+] Success: crop_tree dumped to /tmp/crop_tree: ['crops', 'images']
#
# $ tree /tmp/crop_tree
# /tmp/crop_tree
# ├── crops
# │   ├── bicycle--2
# │   │   ├── 124710.png
# │   │   └── 124713.png
# │   ├── car--3
# │   │   ├── 131774.png
# │   │   └── 131812.png
# │   └── person--1
# │       ├── 183020.png
# │       └── 183030.png
# └── images
#     ├── 000000049428.jpg
#     ├── 000000071345.jpg
#     ├── 000000117891.jpg
#     ├── 000000140006.jpg
#     ├── 000000289949.jpg
#     └── 000000537548.jpg
#
# 5 directories, 12 files
# ```
#
# Now this tree can be manually cleaned by a human ("dirty" crops deleted), and we will be able to reconstruct the dataset.
# ## Showcase: single iteration of the dataset cleaning process
#
# Our setup:
# - Our dataset is stored in the git repository `/project/my-dataset` in the `json_tree` representation. This dataset suffers from incompleteness: some categories lack "clean" annotations.
# - The customer has provided us with additional data as two `json_file`s: `/inputs/annotations-new-1.json` and `/inputs/annotations-new-2.json`.
# - We would like to merge these datasets into a `crop_tree` representation, clean it manually, and then reconstruct a new dataset and save it in place in our git repository.
#
# *Step 1*: merge datasets `json_tree` + `json_file`x2 -> `crop_tree`:
# ```bash
# cocorepr \
# --in_json_tree /project/my-dataset \
# --in_json_file /inputs/annotations-new-1.json /inputs/annotations-new-2.json \
# --out_path /temp/my-dataset-crops \
# --out_format crop_tree \
# --overwrite \
# --debug
# # ls /temp/my-dataset-crops
# ```
#
# *Step 2*: manually clean the `crop_tree` in `/temp/my-dataset-crops`
#
# *Step 3*: re-construct the cleaned dataset:
#
# ```bash
# # first, verify that your original dataset has no uncommitted changes (they'll be lost)
# # cd /project/my-dataset
# git diff-index --quiet HEAD
#
# cocorepr \
# --in_crop_tree /temp/my-dataset-crops \
# --in_json_tree /project/my-dataset \
# --out_path /project/my-dataset \
# --out_format json_tree \
# --overwrite \
# --debug
# ```
#
# Now you can commit the changes of your dataset `/project/my-dataset`.
#
| nbs/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python3
# ---
# # Optimization - Response Surface Modelling (RSM)
# ### Imports
# +
import os,sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
import seaborn as sns
pd.set_option('display.max_columns', 500)
# %matplotlib inline
from IPython.display import Image
# -
# Response Surface Modelling (RSM) is an area that brings together techniques aimed at estimating the response surface of a problem in a simplified way.
# ## 1. Definitions
# After running the DOE, there will be a set of sampled points for which both the input factors and the response variable are known. From these points, it is useful to find a curve that models a function approximating the relationship between the factors and the response variable. This modelling is more accurate when the DOE was carried out with more points, or when the behaviour of the response variable is reasonably regular.
#
# These techniques fall into two classes:
# * <u>Interpolation:</u> when $y_i = f(x_i)$;
# * <u>Approximation:</u> when $|y_i - f(x_i)| \neq 0$.
#
# Thus, we look for a function $f(x_i)$ that approximates the value of the response variable $y_i$.
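# The difference between the two classes can be illustrated numerically with a few synthetic points: an interpolant passes through every sample exactly, while a least-squares line leaves nonzero residuals.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.7, 5.8, 6.1])   # sampled response values

# interpolation: the model reproduces every sample exactly, y_i = f(x_i)
interpolated = np.interp(x, x, y)

# approximation: a degree-1 least-squares fit leaves residuals |y_i - f(x_i)| != 0
beta = np.polyfit(x, y, 1)
residuals = y - np.polyval(beta, x)
```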
# ## 2. Techniques
# As mentioned, RSM techniques are approximation/interpolation techniques that aim to minimize the error between the **model function** $\hat{f}(x)$ and the response variable $y$.
# ### 2.1 Least squares
# This regression technique assumes that
#
# $$
# y = \hat{f}(\boldsymbol{X}, \boldsymbol{\beta}) + \epsilon
# $$
#
# where $\boldsymbol{X} = [x_1, ... , x_k]^T$ is the vector with the $k$ inputs of the experiment, $\boldsymbol{\beta} = [\beta_1, ..., \beta_m]$ is the vector with the $m$ coefficients of the chosen regression, and $\epsilon$ is the regression residual.
#
# Thus, the objective is to minimize $\epsilon$; that is, for a dataset with $N$ samples, we want to find parameters $\boldsymbol{\beta}$ such that:
#
# $$
# S = \sum_{i=1}^N \epsilon_i^2 = \sum_{i=1}^N (\hat{f}(x_i, \boldsymbol{\beta}) - y_i)^2
# \qquad \therefore \qquad
# \min_{\boldsymbol{\beta}} S
# $$
#
# To do so, the gradient is used to find the best approximation:
#
# $$
# \frac{\partial S}{\partial \boldsymbol{\beta}} \sim 0
# $$
#
# This is the most general formulation of the least-squares technique: from it, different modelling strategies for $\hat{f}$ arise, some linear and some non-linear, which in turn require different ways of optimizing the parameters $\boldsymbol{\beta}$. The table below illustrates some common choices of model for $\hat{f}$.
Image('../assets/tabela_minimos_quadrados.jpg', width=700)
# The most common of these models is linear regression, the first case in the table above.
#
# To evaluate this technique, the quantities known as regression parameters are used, defined as:
#
# * Normal regression parameter ($R$):
# $$
# R^2 = 1 - \frac{\sum_{i=1}^N (y_i-\hat{f}(x_i, \boldsymbol{\beta}))^2}{\sum_{i=1}^N (y_i-\overline{y})^2}
# $$
#
# where $\overline{y}$ is the mean of $y$.
#
# * Adjusted regression parameter ($R_{adj}$):
# $$
# R_{adj}^2 = 1 - \frac{\sum_{i=1}^N (y_i-\hat{f}(x_i, \boldsymbol{\beta}))^2}{\sum_{i=1}^N (y_i-\overline{y})^2} \cdot \frac{N-1}{N-m}
# $$
#
# where $N$ is the number of samples used in the regression and $m$ the number of model parameters.
#
# * Predictive regression parameter ($R_{pred}$):
# $$
# R_{pred}^2 = 1 - \frac{\sum_{i=1}^N (y_i-\hat{f}(x'_i, \boldsymbol{\beta}))^2}{\sum_{i=1}^N (y_i-\overline{y})^2}
# $$
#
# where $x'_i$ is the $i$-th sample that was **NOT** used in the regression, so as to assess the predictive power of the model.
#
#
# This technique is widely used for its simplicity and modelling power, but for problems where the behaviour of the response variable is non-linear and non-convex, it becomes inaccurate and very susceptible to local minima/maxima.
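# The linear case (first row of the table) and the $R^2$ parameters can be illustrated with a small numeric sketch on synthetic data:

```python
import numpy as np

# toy experiment: 8 noisy samples of a linear response y = 2 + 3x
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.05, size=x.size)

# least squares for f_hat(x, beta) = beta_0 + beta_1 * x
X = np.column_stack([np.ones_like(x), x])      # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - X @ beta
ss_res = np.sum(residuals ** 2)                # S from the formulation above
ss_tot = np.sum((y - y.mean()) ** 2)
n, m = len(y), len(beta)
r2 = 1 - ss_res / ss_tot                               # normal parameter
r2_adj = 1 - (ss_res / ss_tot) * (n - 1) / (n - m)     # adjusted parameter
```

# Since $(N-1)/(N-m) > 1$ for $m > 1$, the adjusted parameter is always smaller than the normal one, penalizing models with many coefficients.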
# ### 2.2 Shepard and K-Nearest
#
# ## 3. Application
# ### 3.1 Problem restatement
# To test the techniques, we will use the example problem from the "Introduction" notebook and the class created there to specify the optimization environment. If you do not remember the problem, we recommend re-reading Section 3 of the Introduction.
class Ambiente():
    '''
    Class that defines the simulation environment for the problem at hand:
    deciding how to allocate car production across factories, given that each
    factory has its own production cost and production time.
    '''
    def __init__(self, D, lambda_1=1, lambda_2=1):
        '''
        Environment initialization
        Parameters:
        -----------
        1. D {int}:
            Number of cars that need to be produced;
        2. lambda_1 and lambda_2 {float}:
            Tuning parameters of the sub-objective functions.
        '''
        # Class attributes
        self.D, self.lambda_1, self.lambda_2 = D, lambda_1, lambda_2
        self.n_fabricas = 3
        self.custo_por_carro = np.array([50, 30, 10]) # in thousands of BRL
        self.tempo_por_carro = np.array([1, 5, 10]) # in days
        # Maximum and minimum cost and production time for the given demand D
        self.max_custo, self.min_tempo = self.compute_costs([0, 0, self.D])
        self.min_custo, self.max_tempo = self.compute_costs([self.D, 0, 0])

    def norm(self, valor, maximo, minimo):
        '''
        Min-max normalization function
        Parameters:
        -----------
        1. valor {float}:
            Value to be normalized;
        2. maximo {float}:
            maximum value of the variable;
        3. minimo {float}:
            minimum value of the variable.
        Output:
        ------
        1. valor_normalizado {float}:
            Normalized value.
        '''
        valor_normalizado = (valor - minimo) / (maximo - minimo)
        return valor_normalizado

    def compute_costs(self, alocacao):
        '''
        Computes the production cost and time for a given allocation.
        Parameters:
        -----------
        1. alocacao {list or np.array}:
            Allocation defining how many cars each factory will produce.
        Outputs:
        -------
        1. custo_pedido {float}:
            Production cost, in thousands of BRL;
        2. tempo_pedido {float}:
            Production time, in days.
        '''
        # Turn the input into an np.array
        alocacao = np.array(alocacao)
        # Given the allocation, compute the production cost and time
        custo_pedido = np.sum(alocacao*self.custo_por_carro)
        tempo_pedido = np.sum(alocacao*self.tempo_por_carro)
        return custo_pedido, tempo_pedido

    def r(self, f1, f2, omega_1, omega_2):
        '''
        Computes r
        Parameters:
        -----------
        1. f1 and f2 {float}:
            Sub-objective functions.
        2. omega_1, omega_2 {float}:
            Weights of the sub-objective functions.
        Output:
        ------
        1. f {float}:
            Value of the objective function
        '''
        f = omega_1*f1 + omega_2*f2
        return f

    def funcao_objetivo(self, alocacao, omega_1, omega_2):
        '''
        Computes the objective function.
        Parameters:
        -----------
        1. alocacao {list or np.array}:
            Allocation defining how many cars each factory will produce.
        2. omega_1, omega_2 {float}:
            weights of the sub-objectives. They must sum to 1.
        Output:
        ------
        1. objetivo {float}:
            Result of the objective function.
        '''
        # Compute the cost and the required time
        custo, tempo = self.compute_costs(alocacao)
        # Compute the sub-objective functions
        f1, f2 = self.lambda_1*custo, self.lambda_2*tempo
        # Normalize them using the maximum and minimum cost and time
        f1_norm, f2_norm = self.norm(f1, self.min_custo, self.max_custo), self.norm(f2, self.min_tempo, self.max_tempo)
        # Compute the objective function (negated because this is a minimization problem)
        objetivo = -self.r(f1_norm, f2_norm, omega_1, omega_2)
        if np.sum(alocacao) != self.D: # Penalize solutions whose sum differs from D
            objetivo = -(np.abs(np.sum(alocacao) - self.D))
        return objetivo
env = Ambiente(20)
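# As a sanity check, the computation performed by `funcao_objetivo` can be reproduced by hand for the allocation `[10, 5, 5]` with equal weights and `lambda_1 = lambda_2 = 1` (the normalization bounds come from the extreme allocations `[0, 0, D]` and `[D, 0, 0]`):

```python
import numpy as np

D = 20
custo_por_carro = np.array([50, 30, 10])   # cost per car, thousands of BRL
tempo_por_carro = np.array([1, 5, 10])     # time per car, days
alocacao = np.array([10, 5, 5])            # candidate allocation, sums to D

custo = np.sum(alocacao * custo_por_carro)  # 700
tempo = np.sum(alocacao * tempo_por_carro)  # 85

# normalization bounds from the extreme allocations [0, 0, D] and [D, 0, 0]
custo_lo, custo_hi = 10 * D, 50 * D         # 200, 1000
tempo_lo, tempo_hi = 1 * D, 10 * D          # 20, 200

f1_norm = (custo - custo_lo) / (custo_hi - custo_lo)
f2_norm = (tempo - tempo_lo) / (tempo_hi - tempo_lo)
objetivo = -(0.5 * f1_norm + 0.5 * f2_norm)  # negated: minimization problem
print(round(objetivo, 4))  # -0.4931
```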
# ### 3.5 Conclusion
# XXX
# ## 4. References
# * [<NAME>. (2013). Optimization methods: from theory to design](https://link.springer.com/book/10.1007/978-3-642-31187-1)
| Capitulos/RSM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
data_dir = '/Users/boyuliu/pyprojects/Joann/Joann-Thailand-Project/notebooks/datasets/new_dataset/'
wv1 = pd.read_csv(
data_dir + 'regression_data_%s_20210520.csv' % 'wv_cases1')
wv1.head()
# -
wv1.columns
# ## calculate shock
# +
wv_data_file = 'wv_cases1'
df = wv1
df['demand_shock'] = None
for prov in df.province.unique():
    for ind in df.industry.unique():
        prov_row_idx = df[(df.province==prov)&(df.industry==ind)].index
        demand_data_points = df.loc[prov_row_idx, 'total_demand'].values
        std_demand = np.std(demand_data_points)
        # the first two points of each group have no usable history
        df.loc[prov_row_idx[:2], 'demand_shock'] = np.nan
        for time_pointer in range(2, len(prov_row_idx)):
            y = demand_data_points[max(0, time_pointer-4):time_pointer]
            expected_demand = np.mean(y)
            demand_shock = demand_data_points[time_pointer] - expected_demand
            df.loc[prov_row_idx[time_pointer], 'demand_shock'] = demand_shock/std_demand
print(df.shape)
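# For reference, the loop above can be expressed in vectorized form (`add_demand_shock` is a hypothetical helper sketching the same computation; it uses `ddof=0` to match `np.std`):

```python
import numpy as np
import pandas as pd

def add_demand_shock(df, window=4):
    """Vectorized sketch of the shock computation: for each
    (province, industry) series, the expected demand is the trailing mean
    of up to `window` previous points; the shock is the deviation from it,
    scaled by the group's std (ddof=0, matching np.std). The first two
    points of each group have no usable history and are left as NaN."""
    g = df.groupby(['province', 'industry'])['total_demand']
    expected = g.transform(lambda s: s.shift(1).rolling(window, min_periods=1).mean())
    std = g.transform(lambda s: s.std(ddof=0))
    shock = (df['total_demand'] - expected) / std
    return shock.where(g.cumcount() >= 2)

demo = pd.DataFrame({
    'province': ['A'] * 5,
    'industry': ['x'] * 5,
    'total_demand': [10.0, 12.0, 11.0, 15.0, 9.0],
})
print(add_demand_shock(demo).round(3).tolist())  # [nan, nan, 0.0, 1.943, -1.457]
```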
# +
# create placeholder
for offset in range(1, 9):
df['demand_shock_plus_%s' % offset] = None
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df['demand_shock_minus_%s' % offset] = None
for prov in df.province.unique():
for ind in df.industry.unique():
prov_row_idx = df[(df.province==prov)&(df.industry==ind)].index
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_plus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(offset)
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_minus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(-offset)
print(df.shape)
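# The lead/lag columns can also be built without the explicit per-group loop, since `groupby(...).shift()` already restricts the shift to each group (a sketch on toy data; behaviour should match the loop above):

```python
import pandas as pd

demo = pd.DataFrame({
    'province': ['A', 'A', 'A', 'B', 'B'],
    'industry': ['x'] * 5,
    'demand_shock': [1.0, 2.0, 3.0, 4.0, 5.0],
})

# groupby + shift never leaks values across the province/industry boundary
g = demo.groupby(['province', 'industry'])['demand_shock']
for offset in range(1, 3):
    demo['demand_shock_plus_%s' % offset] = g.shift(offset)    # lag
    demo['demand_shock_minus_%s' % offset] = g.shift(-offset)  # lead

print(demo['demand_shock_plus_1'].tolist())  # [nan, 1.0, 2.0, nan, 4.0]
```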
# +
for offset in range(1, 6):
df['perc_abuse_minus_%s' % offset] = None
for prov in df.province.unique():
for ind in df.industry.unique():
prov_row_idx = df[(df.province==prov)&(df.industry==ind)].index
for offset in range(1, 6):
df.loc[prov_row_idx, 'perc_abuse_minus_%s' % offset] = df.loc[prov_row_idx, 'perc_abuse'].shift(-offset)
# shift IV the other way by up to 2 months
print(df.shape)
# -
df.head()
# +
def yr_wk_to_float(yr_wk):
yr, wk = yr_wk.split('-')
return int(yr) + float(wk)/100
df['yr_wk_float'] = df.year_week.apply(yr_wk_to_float)
# +
wv_data_file = 'wv_cases1'
df.to_csv(
data_dir + 'regression_data_%s_causal_ma_detrend_20210520.csv' % wv_data_file,
index=False)
df.head()
# -
'regression_data_%s_causal_ma_detrend_20210520.csv' % wv_data_file
df.columns
df.industry.unique()
# ## Add USD IV
# folder = '/Users/boyuliu/Dropbox/Boyu-Joann/Data/exchange_rate/China/'
# iv = pd.read_csv(folder + 'weekly_CNY_exchange_rate.csv')
folder = '/Users/boyuliu/Dropbox/Boyu-Joann/Data/exchange_rate/'
iv = pd.read_csv(folder + 'weekly_exchange_rate.csv')
print(iv.year_week.max(), iv.year_week.min(), iv.shape)
iv.head()
# weekly = pd.merge(weekly, iv[['year_week','ex_rate']], on='year_week', how='left')
# weekly.shape
iv['month'] = iv['fake_date'].apply(lambda x: x[:-3])
iv['quarter'] = iv['fake_date'].apply(lambda x: x[:5] + str((int(x[5:7])-1)//3 + 1))
iv['ex_rate_diff'] = iv['ex_rate'] - iv['ex_rate'].shift(1)
# shift IV by up to 2 months
for offset in range(1, 9):
iv['ex_rate_diff_plus_%s' % offset] = iv['ex_rate_diff'].shift(offset)
# shift IV the other way by up to 2 months
for offset in range(1, 9):
iv['ex_rate_diff_minus_%s' % offset] = iv['ex_rate_diff'].shift(-offset)
# iv['ex_rate_diff_%s' % offset] = iv['ex_rate'] - iv['ex_rate'].shift(-offset)
iv.head()
# +
print(wv_data_file)
df = pd.read_csv(data_dir + 'regression_data_%s_causal_ma_detrend_20210520.csv' % wv_data_file)
print(df.shape)
df = pd.merge(df.drop(iv.columns[1:], axis=1), iv, on='year_week', how='left')
print(df.shape)
# dup_cols = [col for col in df.columns if col[-1]=='x' or col[-1]=='y']
# df = df.drop(dup_cols, axis=1)
df.to_csv(
data_dir + 'regression_data_%s_causal_ma_detrend_USD_20210520.csv' % wv_data_file,
index=False)
df.head()
# -
# ## plot effect of transformation
df = pd.read_csv(data_dir+'regression_data_wv_cases1_causal_ma_detrend_20210301.csv')
plot_order = df.groupby('province').mean().sort_values(by='total_demand', ascending=False).index.values
plot_order
#
# plot = sns.barplot(x='ID', y='Amount', data=df, order=plot_order)
# +
# # a4_dims = (24, 16)
# # a4_dims = (16, 12)
# a4_dims = (12, 6)
# fig, ax = plt.subplots(figsize=a4_dims)
# df['total_demand'] = df['total_demand'].astype(float)
# df['name_len'] = df['province'].apply(len)
# df.sort_values('name_len', inplace=True)
# bxplot = sns.boxplot(data=df, x='province', y='total_demand', order=plot_order)
# for item in bxplot.get_xticklabels():
# item.set_rotation(45)
# item.set_fontsize(8)
# plt.ylabel('Weekly total excess demand', fontsize=15, fontweight='black', color = '#333F4B')
# plt.xlabel('Province', fontsize=15, fontweight='black', color = '#333F4B')
# plt.title('Excess demand for Thai provinces', fontsize=15, fontweight='black', color = '#333F4B')
# plt.savefig('../../../plots/paper/demand_by_province.jpg', bbox_inches='tight')
# plt.show()
# +
# a4_dims = (24, 16)
# a4_dims = (16, 12)
a4_dims = (12, 6)
fig, ax = plt.subplots(figsize=a4_dims)
df['total_demand'] = df['total_demand'].astype(float)
df['name_len'] = df['province'].apply(len)
df.sort_values('name_len', inplace=True)
bxplot = sns.boxplot(data=df, x='province', y='total_demand', order=plot_order)
tick_locs = range(df.province.nunique())
plt.xticks(ticks=tick_locs, labels=tick_locs)
plt.ylabel('Weekly total labor shortage', fontsize=15, fontweight='black', color = '#333F4B')
plt.xlabel('Province', fontsize=15, fontweight='black', color = '#333F4B')
plt.title('Labor shortage among Thai provinces', fontsize=15, fontweight='black', color = '#333F4B')
plt.savefig('../../../plots/paper/demand_by_province_nolegend.jpg', bbox_inches='tight')
plt.show()
# +
# # a4_dims = (16, 12)
# a4_dims = (12, 6)
# fig, ax = plt.subplots(figsize=a4_dims)
# df['demand_shock'] = df['demand_shock'].astype(float)
# bxplot = sns.boxplot(data=df, x='province', y='demand_shock', order=plot_order)
# for item in bxplot.get_xticklabels():
# item.set_rotation(45)
# item.set_fontsize(8)
# plt.ylabel('Weekly total excess demand shock', fontsize=15, fontweight='black', color = '#333F4B')
# plt.xlabel('Province', fontsize=15, fontweight='black', color = '#333F4B')
# plt.title('Excess demand shocks for Thai provinces', fontsize=15, fontweight='black', color = '#333F4B')
# plt.savefig('../../../plots/paper/demand_shock_by_province.jpg', bbox_inches='tight')
# plt.show()
# +
# a4_dims = (16, 12)
a4_dims = (12, 6)
fig, ax = plt.subplots(figsize=a4_dims)
df['demand_shock'] = df['demand_shock'].astype(float)
bxplot = sns.boxplot(data=df, x='province', y='demand_shock', order=plot_order)
tick_locs = range(df.province.nunique())
plt.xticks(ticks=tick_locs, labels=tick_locs)
plt.ylabel('Weekly total labor shortage shock', fontsize=15, fontweight='black', color = '#333F4B')
plt.xlabel('Province', fontsize=15, fontweight='black', color = '#333F4B')
plt.title('Labor shortage shocks among Thai provinces', fontsize=15, fontweight='black', color = '#333F4B')
plt.savefig('../../../plots/paper/demand_shock_by_province_nolegend.jpg', bbox_inches='tight')
plt.show()
# -
plot_data = df.groupby('province').mean().sort_values(by='total_demand', ascending=False).reset_index()
plot_data['idx'] = plot_data.index
plot_data['legend'] = plot_data.apply(lambda row: str(row.idx)+ ' - '+str(row.province), axis=1)
print('\n'.join(plot_data.legend))
# print(plot_data.province.values)
# ## other shock periods
# +
wv_data_file = 'wv_cases1'
df = wv1
df['demand_shock'] = None
period_length = 8
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
demand_data_points = df.loc[prov_row_idx, 'total_demand'].values
std_demand = np.std(demand_data_points)
    df.loc[prov_row_idx[:2], 'demand_shock'] = np.nan  # first two points of each province have no history
# i = 0
for time_pointer in range(2, len(prov_row_idx)):
y = demand_data_points[max(0, time_pointer-period_length):time_pointer]
expected_demand = np.mean(y)
demand_shock = demand_data_points[time_pointer] - expected_demand
# print(x, y, expected_demand, demand_shock)
df.loc[prov_row_idx[time_pointer], 'demand_shock'] = demand_shock/std_demand
# i+=1
# if i>3: break
print(df.shape)
# +
# create placeholder
for offset in range(1, 9):
df['demand_shock_plus_%s' % offset] = None
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df['demand_shock_minus_%s' % offset] = None
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_plus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(offset)
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_minus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(-offset)
print(df.shape)
# +
def yr_wk_to_float(yr_wk):
yr, wk = yr_wk.split('-')
return int(yr) + float(wk)/100
df['yr_wk_float'] = df.year_week.apply(yr_wk_to_float)
# +
wv_data_file = 'wv_cases1'
df.to_csv(
data_dir + 'regression_data_%s_causal_ma_detrend_8w_20210301.csv' % wv_data_file,
index=False)
df.head()
# +
wv_data_file = 'wv_cases1'
df = wv1
df['demand_shock'] = None
period_length = 12
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
demand_data_points = df.loc[prov_row_idx, 'total_demand'].values
std_demand = np.std(demand_data_points)
    df.loc[prov_row_idx[:2], 'demand_shock'] = np.nan  # first two points of each province have no history
# i = 0
for time_pointer in range(2, len(prov_row_idx)):
y = demand_data_points[max(0, time_pointer-period_length):time_pointer]
expected_demand = np.mean(y)
demand_shock = demand_data_points[time_pointer] - expected_demand
# print(x, y, expected_demand, demand_shock)
df.loc[prov_row_idx[time_pointer], 'demand_shock'] = demand_shock/std_demand
# i+=1
# if i>3: break
print(df.shape)
# +
# create placeholder
for offset in range(1, 9):
df['demand_shock_plus_%s' % offset] = None
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df['demand_shock_minus_%s' % offset] = None
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_plus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(offset)
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_minus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(-offset)
print(df.shape)
# +
def yr_wk_to_float(yr_wk):
yr, wk = yr_wk.split('-')
return int(yr) + float(wk)/100
df['yr_wk_float'] = df.year_week.apply(yr_wk_to_float)
# +
wv_data_file = 'wv_cases1'
df.to_csv(
data_dir + 'regression_data_%s_causal_ma_detrend_12w_20210301.csv' % wv_data_file,
index=False)
df.head()
# -
wv_data_file = 'wv_cases1'
'regression_data_%s_causal_ma_detrend_12w_20210301.csv' % wv_data_file
# ## std of 12 weeks
# +
wv_data_file = 'wv_cases1'
df = wv1
df['demand_shock'] = None
period_length = 8
eps = 1e-6
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
demand_data_points = df.loc[prov_row_idx, 'total_demand'].values
# std_demand = np.std(demand_data_points)
    df.loc[prov_row_idx[:4], 'demand_shock'] = np.nan  # first four points of each province have no history
# i = 0
for time_pointer in range(4, len(prov_row_idx)):
y = demand_data_points[max(0, time_pointer-period_length):time_pointer]
expected_demand = np.mean(y)
demand_shock = demand_data_points[time_pointer] - expected_demand
running_std_demand = np.std(y)
# print(x, y, expected_demand, demand_shock)
df.loc[prov_row_idx[time_pointer], 'demand_shock'] = demand_shock/(running_std_demand+eps)
# i+=1
# if i>3: break
print(df.shape)
# +
# create placeholder
for offset in range(1, 9):
df['demand_shock_plus_%s' % offset] = None
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df['demand_shock_minus_%s' % offset] = None
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_plus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(offset)
# shift IV the other way by up to 2 months
for offset in range(1, 9):
df.loc[prov_row_idx, 'demand_shock_minus_%s' % offset] = df.loc[prov_row_idx, 'demand_shock'].shift(-offset)
print(df.shape)
# +
def yr_wk_to_float(yr_wk):
yr, wk = yr_wk.split('-')
return int(yr) + float(wk)/100
df['yr_wk_float'] = df.year_week.apply(yr_wk_to_float)
# +
wv_data_file = 'wv_cases1'
df.to_csv(
data_dir + 'regression_data_%s_causal_ma_detrend_8w_std_20210217.csv' % wv_data_file,
index=False)
df.head()
# -
df[pd.isnull(df['demand_shock'])]['year_week'].value_counts()
df[pd.isnull(df['demand_shock'])]['province'].value_counts()
# # bigger provinces
df = pd.read_csv(data_dir+'regression_data_wv_cases1_causal_ma_detrend_20210301.csv')
by_prov_counts = df.groupby('province').sum()['wv_count'].to_dict()
print(by_prov_counts)
lg_provs_40 = [prov for prov in by_prov_counts.keys() if by_prov_counts[prov]>=40]
df = df[df.province.isin(lg_provs_40)]
wv_data_file_lg_prov = 'regression_data_wv_cases1_causal_ma_detrend__province_40_20210301.csv'
df.to_csv(data_dir + wv_data_file_lg_prov)
df = pd.read_csv(data_dir+'regression_data_wv_cases1_causal_ma_detrend_20210301.csv')
# by_prov_counts = df.groupby('province').sum()['wv_count'].to_dict()
lg_provs_20 = [prov for prov in by_prov_counts.keys() if by_prov_counts[prov]>=20]
df = df[df.province.isin(lg_provs_20)]
wv_data_file_lg_prov = 'regression_data_wv_cases1_causal_ma_detrend__province_20_20210301.csv'
df.to_csv(data_dir + wv_data_file_lg_prov)
# by province
for prov in lg_provs_40:
df[df.province==prov].to_csv(data_dir + 'regression_data_wv_cases1_causal_ma_detrend_%s_20210301.csv' % prov)
lg_provs_40
len(lg_provs_40)
df.head()
# # extra iv labor abuse
df = pd.read_csv(
data_dir + 'regression_data_wv_cases1_causal_ma_detrend_20210301.csv')
print(df.shape)
df.head()
for offset in range(1, 8):
    df['perc_abuse_minus_%s' % offset] = None
for prov in df.province.unique():
    prov_row_idx = df[df.province==prov].index
    for offset in range(1, 8):
        df.loc[prov_row_idx, 'perc_abuse_minus_%s' % offset] = df.loc[prov_row_idx, 'perc_abuse'].shift(-offset)
df.head()
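The seven per-province shift assignments above can be written more compactly with a grouped shift. A sketch on toy data (the `toy` frame and its values are made up for illustration), assuming the real frame is ordered by week within each province as in the file loaded above:

```python
import pandas as pd

toy = pd.DataFrame({
    'province': ['A', 'A', 'A', 'B', 'B', 'B'],
    'perc_abuse': [1.0, 2.0, 3.0, 10.0, 20.0, 30.0],
})

# One grouped shift per offset replaces the explicit per-province loop
for offset in range(1, 3):
    toy['perc_abuse_minus_%d' % offset] = (
        toy.groupby('province')['perc_abuse'].shift(-offset))

# The shift never crosses province boundaries: the trailing rows of each group become NaN
print(toy)
```

This produces the same columns as the loop while letting pandas handle the province boundaries.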
df[['province', 'perc_abuse', 'perc_abuse_minus_1','perc_abuse_minus_2','perc_abuse_minus_3','perc_abuse_minus_4','perc_abuse_minus_5']].\
fillna(0).groupby('province').sum().head()
df.to_csv(
data_dir + 'regression_data_%s_extra_iv_20210318.csv' % 'wv_cases1',
index=False)
data_dir + 'regression_data_%s_extra_iv_20210318.csv' % 'wv_cases1'
# ## other shock periods
df = pd.read_csv(
data_dir + 'regression_data_wv_cases1_causal_ma_detrend_8w_20210301.csv')
print(df.shape)
df['perc_abuse_minus_1'] = None
df['perc_abuse_minus_2'] = None
df['perc_abuse_minus_3'] = None
df['perc_abuse_minus_4'] = None
df['perc_abuse_minus_5'] = None
df['perc_abuse_minus_6'] = None
df['perc_abuse_minus_7'] = None
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
df.loc[prov_row_idx, 'perc_abuse_minus_1'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-1)
df.loc[prov_row_idx, 'perc_abuse_minus_2'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-2)
df.loc[prov_row_idx, 'perc_abuse_minus_3'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-3)
df.loc[prov_row_idx, 'perc_abuse_minus_4'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-4)
df.loc[prov_row_idx, 'perc_abuse_minus_5'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-5)
df.loc[prov_row_idx, 'perc_abuse_minus_6'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-6)
df.loc[prov_row_idx, 'perc_abuse_minus_7'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-7)
df.to_csv(
data_dir + 'regression_data_wv_cases1_causal_extra_iv_8w_20210318.csv',
index=False)
df = pd.read_csv(
data_dir + 'regression_data_wv_cases1_causal_ma_detrend_12w_20210301.csv')
print(df.shape)
df['perc_abuse_minus_1'] = None
df['perc_abuse_minus_2'] = None
df['perc_abuse_minus_3'] = None
df['perc_abuse_minus_4'] = None
df['perc_abuse_minus_5'] = None
df['perc_abuse_minus_6'] = None
df['perc_abuse_minus_7'] = None
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
df.loc[prov_row_idx, 'perc_abuse_minus_1'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-1)
df.loc[prov_row_idx, 'perc_abuse_minus_2'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-2)
df.loc[prov_row_idx, 'perc_abuse_minus_3'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-3)
df.loc[prov_row_idx, 'perc_abuse_minus_4'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-4)
df.loc[prov_row_idx, 'perc_abuse_minus_5'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-5)
df.loc[prov_row_idx, 'perc_abuse_minus_6'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-6)
df.loc[prov_row_idx, 'perc_abuse_minus_7'] = df.loc[prov_row_idx, 'perc_abuse'].shift(-7)
df.to_csv(
data_dir + 'regression_data_wv_cases1_causal_extra_iv_12w_20210318.csv',
index=False)
# ## sandbox
# +
wv_data_file = 'wv_cases1'
df = wv1
df['demand_shock'] = None
for prov in df.province.unique():
prov_row_idx = df[df.province==prov].index
demand_data_points = df.loc[prov_row_idx, 'total_demand'].values
std_demand = np.std(demand_data_points)
df.loc[prov_row_idx[:2], 'demand_shock'] = np.nan # need at least two points to fit a line (first two rows of this province)
i = 0
for time_pointer in range(2, len(prov_row_idx)):
y = demand_data_points[max(0, time_pointer-4):time_pointer]
expected_demand = np.mean(y)
demand_shock = demand_data_points[time_pointer] - expected_demand
# print(x, y, expected_demand, demand_shock)
df.loc[prov_row_idx[time_pointer], 'demand_shock'] = demand_shock/std_demand
import pdb; pdb.set_trace()
i+=1
if i>3: break
print(df.shape)
# -
| notebooks/data_cleaning_w_2020/5-1. Detrend demand w industry - 0520.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### 1. What is the concept of an abstract superclass?
# > An abstract superclass is a class that defines methods and properties shared by its subclasses, but from which we do not create objects directly. To create an abstract superclass in Python, we import the `abc` module and inherit from `abc.ABC`; the methods that subclasses must implement are decorated with the `@abstractmethod` decorator.
#
#
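For illustration, a minimal sketch of an abstract superclass via the `abc` module (the class names are made up for this example):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract superclass: declares the interface, is never instantiated directly."""
    @abstractmethod
    def area(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

print(Square(3).area())  # 9
# Shape() raises TypeError: can't instantiate an abstract class
```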
# ### 2. What happens when a class statement's top level contains a basic assignment statement?
# > A basic assignment statement at the top level of a class body creates a class attribute. Class attributes are shared by all instances of the class and are inherited by its subclasses.
#
#
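A small sketch of this behaviour (names invented for the example):

```python
class Config:
    timeout = 30               # top-level assignment -> class attribute

class DbConfig(Config):        # subclass inherits the attribute
    pass

print(Config.timeout)      # 30
print(DbConfig.timeout)    # 30 - inherited by the subclass
print(DbConfig().timeout)  # 30 - also visible through instances
```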
# ### 3. Why does a class need to manually call a superclass's __init__ method?
# > Because Python does not call a superclass's `__init__` automatically when the subclass defines its own `__init__`. Calling it manually (e.g. via `super().__init__()`) lets the subclass reuse the parent's attribute initialization and then override or extend it as needed.
#
#
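A minimal sketch of the manual `__init__` call (class names invented for the example):

```python
class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    def __init__(self, name, breed):
        super().__init__(name)   # without this call, self.name would never be set
        self.breed = breed

d = Dog('Rex', 'collie')
print(d.name, d.breed)  # Rex collie
```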
# ### 4. How can you augment, instead of completely replacing, an inherited method?
# > Define a method of the same name in the child class and call the inherited version from inside it (e.g. with `super().method(...)`), adding the extra behaviour before or after that call instead of rewriting the whole method.
#
#
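A short sketch of augmenting rather than replacing an inherited method (names and the fixed timestamp are invented for the example):

```python
class Logger:
    def log(self, msg):
        return 'LOG: ' + msg

class TimestampLogger(Logger):
    def log(self, msg):
        # augment, don't replace: run the inherited version, then extend its result
        return '[12:00] ' + super().log(msg)

print(TimestampLogger().log('hi'))  # [12:00] LOG: hi
```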
# ### 5. How is the local scope of a class different from that of a function?
# > A variable defined in a class body becomes a class attribute, accessible to all methods of the class (via `self` or the class name) and inherited by subclasses; unlike a function's local scope, though, the class scope is not part of the lexical lookup chain inside methods. A variable defined inside a function is accessible only inside that function.
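A small sketch of the difference (names invented for the example):

```python
class Counter:
    start = 0                       # class-scope name, shared by all methods

    def bump(self):
        # Class scope is NOT in a method's lexical lookup chain:
        # a bare `start` here would raise NameError, so we qualify it.
        return Counter.start + 1

def f():
    x = 10                          # function-local: invisible outside f
    return x

print(Counter().bump())  # 1
print(f())               # 10
```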
| Python Advance Assignments/Assignment_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Multi-dimensional search
# Given a multi-dimensional array consisting of either integers, floats, strings or other nested arrays, return an array containing only numerical values that occur inside the original array (integers, floats).
def find_numbers(A, results=None):
    """Complexity analysis:
    O(N) time | O(N) space, where N is the total number of elements
    across all nesting levels (each element is visited exactly once).
    """
    results = [] if results is None else results
    for value in A:
        if type(value) is list:
            find_numbers(value, results)
        elif type(value) in (int, float):
            results.append(value)
    return results
A = [[[[1]], [2, 3],[], [[0, [3, 4], [7, 8, 9]]], [2.3, 9.8]]]
find_numbers(A, results=[])
| 2D-arrays/multi_dimensional_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/compi1234/pyspch/blob/master/test/audio_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ## Recorder
#
# Date: 17/05/2021, 20/09/2021
# +
# %matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display, clear_output, Audio, HTML
import ipywidgets as widgets
import numpy as np
import librosa
try:
import google.colab
IN_COLAB = True
# ! pip install git+https://github.com/compi1234/pyspch.git
except:
IN_COLAB = False
# verify the IPython version
import IPython
if IPython.version_info[0] >= 6:
Audio_args = {'normalize':False}
else:
print("Warning: you are using IPython<6 \n IPython.display.Audio() will automatically normalize audio output")
Audio_args = {}
import pyspch.audio as Spa
import pyspch.spg as Sps
import pyspch.display as Spd
# +
Symbols = { 'play':'\u25b6','reverse':'\u25C0' , 'pause':'\u23F8', 'stop': '\u23F9', 'record':'\u2b55'}
def box_layout():
return widgets.Layout(
border='solid 1px black',
margin='0px 10px 10px 0px',
padding='5px 5px 5px 5px'
)
def button_layout():
return widgets.Layout(
border='solid 1px black',
margin='0px 10px 10px 0px',
padding='5px 5px 5px 5px',
width = '40px',
height = '40px',
flex_shrink =2
)
class recorder(widgets.VBox):
def __init__(self,data=np.zeros((1000,)),sample_rate=16000,figsize=(12,6),dpi=72):
super().__init__()
sample_rates = [8000,11025,16000,22050,44100,48000]
self.data = data
self.sample_rate = sample_rate
self.rec_time = 2.0
self.line_color = '#0000ff'
self.figsize = figsize
self.dpi = dpi
self.wg_play_button = widgets.Button(description=Symbols['play'],layout=button_layout())
self.wg_record_button = widgets.Button(description=Symbols['record'],layout=button_layout())
self.wg_pause_button = widgets.Button(description=Symbols['pause'],layout=button_layout())
self.wg_clear_log_button = widgets.Button(description='Clear log')
self.wg_rectime = widgets.BoundedFloatText( value=2.0, min=0.5, max= 10., step=0.5,
description='Rec Time:',style={'description_width': '40%'}, disabled=False)
self.wg_samplerate = widgets.Dropdown(options=sample_rates,value=self.sample_rate,
description="Sampling Rate",style={'description_width': '40%'})
self.out = widgets.Output(layout=box_layout())
self.out.layout.width = '100%'
self.logscr = widgets.Output()
self.UI = widgets.VBox([
widgets.HBox([self.wg_play_button,self.wg_record_button,self.wg_pause_button,self.wg_rectime,self.wg_samplerate]),
self.wg_clear_log_button,
self.logscr],
layout=box_layout())
self.UI.layout.width = '100%'
# add as children
self.children = [self.out, self.UI]
self.wg_play_button.on_click(self.play_sound)
self.wg_record_button.on_click(self.record_sound)
self.wg_pause_button.on_click(self.pause_sound)
self.wg_clear_log_button.on_click(self.clear_log)
self.wg_rectime.observe(self.rectime_observe,'value')
self.wg_samplerate.observe(self.samplerate_observe,'value')
self.plot_data()
plt.close()
def plot_data(self):
with self.out:
clear_output(wait=True)
spg = Sps.spectrogram(self.data,sample_rate=self.sample_rate)
self.fig = Spd.PlotSpg(spgdata=spg,wavdata=self.data,sample_rate=self.sample_rate,figsize=self.figsize,dpi=self.dpi)
#self.fig = spchd.PlotWaveform(self.data,sample_rate=self.sample_rate)
display(self.fig)
def rectime_observe(self,change):
self.rec_time = change.new
def samplerate_observe(self,change):
self.sample_rate = change.new
def pause_sound(self,b):
with self.logscr:
#print("You didn't expect everything to work ?!! did you ... ")
Spa.stop()
def play_sound(self,b):
with self.logscr:
clear_output()
if(IN_COLAB):
print("IN_COLAB: Use the HTML button to play sound")
Spa.play(self.data,sample_rate=self.sample_rate,wait=False)
def record_sound(self,b):
with self.logscr:
self.data = Spa.record(self.rec_time,self.sample_rate,n_channels=1)
self.plot_data()
self.play_sound(b)
def clear_log(self,b):
with self.logscr: clear_output()
recorder()
# -
| demos/Recorder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Talktorial 8
#
# # Protein data acquisition: Protein Data Bank (PDB)
#
# #### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin
#
# <NAME>, <NAME> and <NAME>
# ## Aim of this talktorial
#
# In this talktorial, we conduct the groundwork for the next talktorial where we will generate a ligand-based ensemble pharmacophore for EGFR. Therefore, we
# (i) fetch all PDB IDs for EGFR from the PDB database,
# (ii) retrieve five protein-ligand structures, which have the best structural quality and are derived from X-ray crystallography, and
# (iii) align all structures to each other in 3D as well as extract and save the ligands to be used in the next talktorial.
#
# ## Learning Goals
#
# ### Theory
# * Protein Data Bank (PDB)
# * Python package PyPDB
#
# ### Practical
#
# * Select query protein
# * Get statistic on PDB entries for query protein
# * Get all PDB IDs for query protein
# * Get meta information on PDB entries
# * Filter and sort meta information on PDB entries
# * Get meta information of ligands from top structures
# * Draw top ligand molecules
# * Create protein-ligand ID pairs
# * Get the PDB structure files
# * Align PDB structures
#
# ## References
#
# * Protein Data Bank
# ([PDB website](http://www.rcsb.org/pdb))
# * PyPDB python package
# ([<i>Bioinformatics</i> (2016), <b>32</b>, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543))
# * PyPDB python package documentation
# ([PyPDB website](http://www.wgilpin.com/pypdb_docs/html/))
# * PyMol selection algebra
# ([PyMolWiki: selection algebra](https://pymolwiki.org/index.php/Selection_Algebra))
#
# ## Theory
#
# ### Protein Data Bank (PDB)
#
# The Protein Data Bank (PDB) is one of the most comprehensive structural biology information databases and a key resource in areas of structural biology, such as structural genomics and drug design. ([PDB website](http://www.rcsb.org/pdb))
#
# Structural data is generated from structural determination methods such as X-ray crystallography (most common method), nuclear magnetic resonance (NMR), and cryo electron microscopy (cryo-EM).
# For each entry, the database contains (i) the 3D coordinates of the atoms and the bonds connecting these atoms for proteins, ligand, cofactors, water molecules, and ions, as well as (ii) meta information on the structural data such as the PDB ID, the authors, the deposition date, the structural determination method used and the structural resolution.
#
# The structural resolution is a measure of the quality of the data that has been collected and has the unit Å (Ångström). The lower the value, the higher the quality of the structure.
#
# The PDB website offers the 3D visualization of the protein structures (with ligand interactions if available) and the structure quality metrics, as can be seen for the PDB entry of an example epidermal growth factor receptor (EGFR) with the [PDB ID 3UG5](https://www.rcsb.org/structure/3UG5).
#
# <img src="./images/protein-ligand-complex.png" align="above" alt="Image cannot be shown" width="400">
# <div align="center"> Figure 1: The protein structure (in gray) with an interacting ligand (in green) is shown for an example epidermal growth factor receptor (EGFR) with the PDB ID 3UG5 (figure by <NAME>).</div>
#
# ### PyPDB
#
# PyPDB is a python programming interface for the PDB and works exclusively in Python 3.
# This package facilitates the integration of automatic PDB searches within bioinformatics workflows and simplifies the process of performing multiple searches based on the results of existing searches.
# It also allows advanced querying of information on PDB entries.
# The PDB currently uses a RESTful API that allows for the retrieval of information via standard HTML vocabulary. PyPDB converts these objects into XML strings.
# ([<i>Bioinformatics</i> (2016), <b>32</b>, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543))
#
# A list of functions is provided on the PyPDB documentation website ([PyPDB website](http://www.wgilpin.com/pypdb_docs/html/)).
# ## Practical
# +
# Import necessary libraries
from pypdb import *
from pymol import *
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
IPythonConsole.ipython_useSVG=True
import pprint
import glob
import pandas as pd
from array import array
import numpy as np
import collections
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ### Select query protein
#
# We use EGFR as query protein for this talktorial. The UniProt ID of EGFR is `P00533`, which will be used in the following to query the PDB database.
#
# ### Get statistic on PDB entries for query protein
#
# First, we ask the question: How many PDB entries are deposited in the PDB for EGFR per year and how many in total?
#
# We can do a search on the [PDB website](http://www.rcsb.org/pdb) with the search term `P00533`.
# In October 2018, the PDB returned 179 search results.
#
# Using `pypdb`, we can find all deposition dates of EGFR structures from the PDB database. The number of deposited structures is needed to set the parameter `max_results` of the function `find_dates`.
# +
# Note: Parameter max_results default is 100, which is too low for EGFR
# If max_results > maximal number of EGFR structures: error,
# Therefore we checked beforehand how many results exist (#179)
# This database query may take a moment (minute to couple of minutes)
all_dates = find_dates("P00533", max_results=179)
# -
print("Number of EGFR structures found: " + str(len(all_dates)))
# Example of deposition dates
all_dates[:3]
# We extract the year from the deposition dates and calculate a depositions-per-year histogram.
# +
# Extract year
all_dates = np.asarray(all_dates)
all_years = np.asarray([int(depdate[:4]) for depdate in all_dates])
# Calculate histogram
bins = max(all_years)-min(all_years) # Bin number = year range
subs_v_time = np.histogram(all_years, bins)
# All entries (excluding 2018) are plotted
dates, num_entries = subs_v_time[1][:-1], subs_v_time[0]
# Show histogram
fig = plt.figure()
ax = plt.subplot(111)
ax.fill_between(dates, 0, num_entries)
ax.set_ylabel("New entries per year")
ax.set_xlabel("Year")
ax.set_title("PDB entries for EGFR")
plt.show()
# -
# ### Get all PDB IDs for query protein
#
# Now, we get all PDB structures for our query protein EGFR, using the `pypdb` function `make_query` and `do_search`.
# +
search_dict = make_query("P00533") # May run into timeout when max_results is 180 or more
found_pdb_ids = do_search(search_dict)
print("PDB IDs found for query: ")
print(found_pdb_ids)
print("\nNumber of structures: " + str(len(found_pdb_ids)))
# -
# ### Get meta information for PDB entries
#
# We use `describe_pdb` to get meta information about the structures, which is stored per structure as dictionary.
#
# Note: we only fetch meta information on PDB structures here, we do not fetch the structures (3D coordinates), yet.
# +
# This database query may take a moment
pdbs = []
for i in found_pdb_ids:
pdbs.append(describe_pdb(i))
pdbs[0]
# -
# ### Filter and sort meta information on PDB entries
#
# Since we want to use the information to filter for relevant PDB structures, we convert the data set from dictionary to DataFrame for easier handling.
pdbs = pd.DataFrame(pdbs)
pdbs.head()
print("Number of PDB structures for EGFR: " + str(len(pdbs)))
# We start filtering our dataset based on the following criteria:
#
# #### 1. Experimental method: X-ray diffraction
#
# We only keep structures resolved by `X-RAY DIFFRACTION`, the most commonly used structure determination method.
pdbs = pdbs[pdbs.expMethod =="X-RAY DIFFRACTION"]
print("Number of PDB structures for EGFR from X-ray: " + str(len(pdbs)))
# #### 2. Structural resolution
#
# We only keep structures with a resolution of 3 Å (Ångström) or lower. The lower the resolution value, the higher the quality of the structure (i.e. the higher the certainty that the assigned 3D coordinates of the atoms are correct). Below 3 Å, atomic orientations can be determined, which is why this value is often used as a threshold for structures relevant to structure-based drug design.
# +
pdbs_resolution = [float(i) for i in pdbs.resolution.tolist()]
pdbs = pdbs[[i <= 3.0 for i in pdbs_resolution]]
print("Number of PDB structures for EGFR from X-ray with resolution <= 3.0 Angström: " + str(len(pdbs)))
# -
# We sort the data set by the structural resolution.
pdbs = pdbs.sort_values(["resolution"],
ascending=True,
na_position='last')
# We check the top PDB structures (sorted by resolution):
pdbs.head()[["structureId", "resolution"]]
# #### 3. Ligand-bound structures
#
# Since we will create ensemble ligand-based pharmacophores in the next talktorial, we remove all PDB structures from our DataFrame which do not contain a bound ligand: we use the `pypdb` function `get_ligands` to check/retrieve the ligand(s) of a PDB structure. PDB-annotated ligands can be ligands and cofactors, but also solvents and ions. In order to keep only ligand-bound structures, we (i) remove all structures without any annotated ligand and (ii) remove all structures that do not contain any ligand with a molecular weight (MW) greater than 100 Da (Dalton), since many solvents and ions weigh less. Note: this is a simple, but not comprehensive, exclusion of solvents and ions.
# Get all PDB IDs from DataFrame
pdb_ids = pdbs["structureId"].get_values().tolist()
# +
# Remove structures
# (i) without ligand and
# (ii) without any ligands with molecular weight (MW) greater than 100 Da (Dalton)
mw_cutoff = 100.0 # Molecular weight cutoff in Da
# This database query may take a moment
removed_pdb_ids = []
for i in list(pdb_ids): # iterate over a copy, since we remove items from pdb_ids inside the loop
ligand_dict = get_ligands(i)
# (i) Remove structure if no ligand present
if ligand_dict["ligandInfo"] is None:
pdb_ids.remove(i) # Remove ligand-free PDB IDs from list
removed_pdb_ids.append(i) # Store ligand-free PDB IDs
# (ii) Remove structure if not a single annotated ligand has a MW above mw_cutoff
else:
# Get ligand information
ligs = ligand_dict["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if type(ligs) == dict:
ligs = [ligs]
# Get MW per annotated ligand
mw_list = [float(i["@molecularWeight"]) for i in ligs]
# Remove structure if not a single annotated ligand has a MW above mw_cutoff
if sum([mw > mw_cutoff for mw in mw_list]) == 0:
pdb_ids.remove(i) # Remove ligand-free PDB IDs from list
removed_pdb_ids.append(i) # Store ligand-free PDB IDs
print("PDB structures without a ligand (removed from our data set):")
print(removed_pdb_ids)
# -
print("Number of structures with ligand: " + str(len(pdb_ids)))
# ### Get meta information of ligands from top structures
#
# In the next talktorial, we will build ligand-based ensemble pharmacophores from the top `top_num` structures with the highest resolution.
top_num = 4 # Number of top structures
pdb_ids = pdb_ids[0:top_num]
pdb_ids
# We fetch the PDB information about the top `top_num` ligands using `get_ligands`, to be stored as *csv* file (as dictionary per ligand).
#
# If a structure contains several ligands, we select the largest ligand. Note: this is a simple, but not comprehensive, method to select the ligand bound in the binding site of a protein. This approach may also select a cofactor bound to the protein. Therefore, please check the automatically selected top ligands in PyMol before further usage.
# +
ligands_list = []
for i in pdb_ids:
ligands = get_ligands(i)["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if type(ligands) == dict:
ligands = [ligands]
weight = 0
this_lig = {}
# If several ligands contained, take largest
for lig in ligands:
if float(lig["@molecularWeight"]) > weight:
this_lig = lig
weight = float(lig["@molecularWeight"])
ligands_list.append(this_lig)
# Change the format to DataFrame
ligs = pd.DataFrame(ligands_list)
ligs
# -
ligs.to_csv('../data/T8/PDB_top_ligands.csv', header=True, index=False, sep='\t')
# ### Draw top ligand molecules
PandasTools.AddMoleculeColumnToFrame(ligs, 'smiles')
Draw.MolsToGridImage(mols=list(ligs.ROMol),
legends=list(ligs['@chemicalID']+', '+ligs['@structureId']),
molsPerRow=top_num)
# ### Create protein-ligand ID pairs
# +
pairs = collections.OrderedDict()
for idx, row in ligs.iterrows():
pairs[str(row['@structureId'])] = str(row['@chemicalID'])
print(pairs)
# -
# ### Get the PDB structure files
#
# We now fetch the PDB structure files, i.e. 3D coordinates of the protein, ligand (and if available other atomic or molecular entities such as cofactors, water molecules, and ions) from the PDB using the `pypdb` function `get_pdb_file`.
# Available file formats are *pdb* and *cif*, which store the 3D coordinates of the atoms of the protein (and ligand, cofactors, water molecules, and ions) as well as information on bonds between atoms. Here, we work with *pdb* files.
# Fetch pdb file and save locally
for prot, lig in pairs.items():
pdb_file = get_pdb_file(prot, filetype='pdb', compression=False)
with open('../data/T8/'+ prot + '.pdb', 'w') as f:
f.write(pdb_file)
# ### Align PDB structures
#
# Since we want to build ligand-based ensemble pharmacophores in the next talktorial, it is necessary to align all structures to each other in 3D. We will use the molecular visualization program PyMol for this task, which can also be used from within the Jupyter notebook. PyMol aligns two structures at a time, minimizing the distance between corresponding atoms of the two structures.
#
# First, we will launch PyMol from the command line (in quiet mode, i.e. the GUI will not open).
# Launch PyMol in quiet mode
pymol.pymol_argv = ['pymol','-qc']
pymol.finish_launching()
# For the alignment, we choose a reference structure file (immobile PDB, 'target') onto which the other ('query') structure files are superimposed using the PyMol command `cmd.align(query, target)`. All `cmd.` commands are commands for PyMol.
#
# We save the aligned structures with the new coordinates as *pdb* files. We also extract the ligand from the structure file and save it separately as *pdb* file to be used in the next talktorial.
# +
# Save alignment logs to file
f = open("../data/T8/alignments.log", "w")
# Variable distinguishing between immobile and mobile structure during alignment
immobile_pdb = True
refAlignTarget='non'
refAlignQuery='non'
# Align proteins on first protein
for prot, lig in pairs.items():
# Immobile structure (reference structure for alignment)
if immobile_pdb:
target = prot
f.write('Immobile target: ' + prot + '\n')
# Load pdb file (complex of protein and ligand)
targetFile = cmd.load('../data/T8/' + target + '.pdb')
# Store name for refined alignment
refAlignTarget='('+target+' within 5 of resn '+lig+')'
# Save complex as pdb file
cmd.save('../data/T8/' + target + '_algn.pdb', selection=target)
# Select only the ligand with the selected name
ligObj = cmd.select('ligand', target + ' and resn ' + lig)
# Save selection as pdb file
cmd.save('../data/T8/' + target + '_lig.pdb', selection='ligand', format='pdb')
# Delete ligand selection
cmd.delete(ligObj)
# Target selected
immobile_pdb = False
# Mobile structures (which are aligned to reference structure)
else:
query = prot
f.write('-- align %s to %s \n' %(query, target))
# Load pdb file (complex of protein and ligand)
queryFile = cmd.load('../data/T8/' + query + '.pdb')
# Align structures (proteins) with focus on binding site
refAlignQuery= '('+query+' within 5 of resn '+lig+')'
values = cmd.align(refAlignQuery, refAlignTarget)
# If structures cannot be aligned (i.e. if RMSD > 5A), skip alignment
if values[0] > 5:
f.write('--- bad alignment: skip structure\n')
else:
# Save complex as pdb file
cmd.save('../data/T8/' + query + '_algn.pdb', selection=query)
# Select only the ligand
ligObj = cmd.select('ligand', query + ' and resn ' + lig)
# Save selection as pdb file
cmd.save('../data/T8/' + query + '_lig.pdb', selection='ligand', format='pdb')
# Delete ligand selection
cmd.delete(ligObj)
# Delete "query" selection
cmd.delete(query)
# Delete "target" selection
cmd.delete(target)
f.close()
# -
# We quit the PyMol application.
# Quit PyMol
pymol.cmd.quit()
# We check the existence of all ligand *pdb* files. If files are missing, please check the protein-ligand structures by hand in PyMol.
mol_files = []
for file in glob.glob("../data/T8/*_lig.pdb"):
mol_files.append(file)
mol_files
# ## Discussion
#
# In this talktorial, we learned how to retrieve protein and ligand meta information and structural information from the PDB. We retained only X-ray structures and filtered our data by resolution and ligand availability. Ultimately, we aimed for an aligned set of ligands to be used in the next talktorial for the generation of ligand-based ensemble pharmacophores.
#
# In order to enrich information about ligands for pharmacophore modeling, it is advisable to not only filter by PDB structure resolution, but also to check for ligand diversity (see **talktorial 5** on molecule clustering by similarity) and to check for ligand activity (i.e. to include only potent ligands).
#
# ## Quiz
#
# 1. Summarize the kind of data that the Protein Data Bank contains.
# 2. Explain what the resolution of a structure stands for and how and why we filter for it in this talktorial.
# 3. Explain what an alignment of structures means and discuss the alignment performed in this talktorial.
| talktorials/8_PDB/T8_PDB.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.10 64-bit (''PythonData'': conda)'
# name: python3610jvsc74a57bd039cf81ff75551f3ea7268a5731633db08a0e59f6bdbd7d1519c41c672b5e67ae
# ---
import pandas as pd
import os
import glob
# import csv to explore data
df1 = pd.read_csv('data/JC-201903-citibike-tripdata.csv')
df1.dtypes
# convert starttime column to datetime format
# https://stackoverflow.com/questions/51898826/converting-object-column-in-pandas-dataframe-to-datetime
df1['starttime'] = pd.to_datetime(df1['starttime'], dayfirst=True)
df1['stoptime'] = pd.to_datetime(df1['stoptime'], dayfirst=True)
df1.dtypes
# extract year from starttime col and calculate age for each user
df1['year'] = df1['starttime'].dt.year
df1['age'] = df1['year'] - df1['birth year']
df1
# solution for importing and joining multiple csvs here: https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe
all_files = glob.glob(os.path.join("data", "*.csv"))
frame = pd.concat((pd.read_csv(f)) for f in all_files)
frame
frame.describe()
frame.dtypes
# convert starttime column to datetime format
# https://stackoverflow.com/questions/51898826/converting-object-column-in-pandas-dataframe-to-datetime
frame['starttime'] = pd.to_datetime(frame['starttime'], dayfirst=True)
frame['stoptime'] = pd.to_datetime(frame['stoptime'], dayfirst=True)
frame['start_year'] = frame['starttime'].dt.year
frame['age'] = frame['start_year'] - frame['birth year']
# extract year from starttime col and calculate age for each user - https://datascienceparichay.com/article/pandas-extract-year-from-datetime-column/
# frame['start_year'] = frame['starttime'].dt.year
frame['start_month'] = frame['starttime'].dt.month
frame['start_day'] = frame['starttime'].dt.day
frame['stop_year'] = frame['stoptime'].dt.year
frame['stop_month'] = frame['stoptime'].dt.month
frame['stop_day'] = frame['stoptime'].dt.day
frame.dtypes
frame
df_by_year = frame.groupby(['start_year', 'start_month'])['usertype'].count()
df_by_year.head()
| analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: firstEnv
# language: python
# name: firstenv
# ---
# # Interpreting ResNet Model With Score CAM
# This notebook loads the pretrained ResNet model given by [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.0) and performs image classification on selected images.
#
# Interpretations of the predictions are generated and visualized using Score CAM algorithm, specifically the `ScoreCAMInterpreter` class.
from PIL import Image
import paddle
import interpretdl as it
from interpretdl.data_processor.readers import read_image
from assets.resnet import ResNet50
# If you haven't done so, please first download the pretrained ResNet50 model by running the cell below or directly from [this link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_pretrained.pdparams).
# More pretrained models can be found in [PaddleClas Model Zoo](https://github.com/PaddlePaddle/PaddleClas/tree/e93711c43512a7ebcec07a0438aa87565df81084#Model_zoo_overview).
# download the model to assets/
# !wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_pretrained.pdparams -P assets/
# Initialize `paddle_model` and load weights. `ResNet50` is borrowed from PaddleClas [architectures](https://github.com/PaddlePaddle/PaddleClas/tree/e93711c43512a7ebcec07a0438aa87565df81084/ppcls/modeling/architectures).
# +
MODEL_PATH = "assets/ResNet50_pretrained.pdparams"
paddle_model = ResNet50()
state_dict = paddle.load(MODEL_PATH)
paddle_model.set_dict(state_dict)
# -
# Initialize the `ScoreCAMInterpreter`.
scorecam = it.ScoreCAMInterpreter(paddle_model, use_cuda=True)
# Before interpreting the image, we first take a look at the original image.
img_path = 'assets/catdog.png'
x = Image.fromarray(read_image(img_path)[0])
x
# Then, let Score CAM method help us `interpret` the image with respect to the predicted label, which is bull mastiff. We choose the last layer as the target layer.
heatmap = scorecam.interpret(
img_path,
'res5c.conv2',
labels=None,
visual=True,
save_path=None)
# Let's see what happens if our target label is 282 (tiger cat).
heatmap = scorecam.interpret(
img_path,
'res5c.conv2',
labels=[282],
visual=True,
save_path=None)
# Note that `ScoreCAMInterpreter` also supports multiple images as inputs. They can be either processed images or a list of image filepaths. Feel free to play around with it!
| tutorials/score_cam_tutorial_cv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import taskGLMPipeline_v2 as tgp
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from importlib import reload
tgp = reload(tgp)
import scipy.stats as stats
import pandas as pd
import statsmodels.sandbox.stats.multicomp as mc
import matplotlib
matplotlib.rcParams['font.family'] = 'FreeSans'
subjNums = ['013','014','016','017','018','021','023','024','026','027','028','030',
'031','032','033','034','035','037','038','039','040','041','042','043',
'045','046','047','048','049','050','053','055','056','057','058','062',
'063','066','067','068','069','070','072','074','075','076','077','081',
'085','086','087','088','090','092','093','094','095','097','098','099',
'101','102','103','104','105','106','108','109','110','111','112','114',
'115','117','119','120','121','122','123','124','125','126','127','128',
'129','130','131','132','134','135','136','137','138','139','140','141']
# # Load sample subject and plot task regression matrix
subj = '013'
X = tgp.loadTaskTiming(subj,'ALL')
designmat = X['taskDesignMat'] # remove the interaction terms
designmat = np.hstack((designmat[:,:28],designmat[:,-4:]))
stimCond = np.asarray(X['stimCond'])
stimCond = np.hstack((stimCond[:28],stimCond[-4:]))
plt.figure(figsize=(3,3))
sns.heatmap(designmat,cmap='binary',cbar=False)
# plt.yticks(np.arange(0,designmat.shape[0],1000),np.arange(0,designmat.shape[0],1000))
plt.yticks([])
# plt.xlabel('Regressors',fontsize=10)
# plt.xticks(np.arange(0,32,4),np.arange(0,32,4),rotation=0)
plt.xticks([])
# plt.yticks(np.arange(0,designmat.shape[1],designmat.shape[1]),np.arange(0,designmat.shape[0],1000)])
plt.ylabel('Time points (TRs)',fontsize=10)
plt.title('Design matrix sample subject',fontsize=12)
plt.tight_layout()
plt.savefig('designMatrix_013.png',dpi=300)
# # Run group-level statistics measuring the average cosine similarity of each pair of regressors
unitmats_all = []
for subj in subjNums:
X = tgp.loadTaskTiming(subj,'ALL')
designmat = X['taskDesignMat'] # remove the interaction terms
designmat = np.hstack((designmat[:,:28],designmat[:,-4:]))
stimCond = np.asarray(X['stimCond'])
stimCond = np.hstack((stimCond[:28],stimCond[-4:]))
designmat = designmat - np.mean(designmat,axis=0)
unitmat = np.divide(designmat,np.linalg.norm(designmat,axis=0))
unitmats_all.append(unitmat)
# #### Average cosine design matrices across subjects
# +
corrmat = []
for unitmat in unitmats_all:
# unitmat = unitmat - np.mean(unitmat,axis=0)
tmp = np.dot(unitmat.T,unitmat)
#tmp = np.corrcoef(unitmat.T)
corrmat.append(tmp)
# -
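# As a sanity check (synthetic data, not the subject design matrices): after mean-centering each column and dividing by its L2 norm, `unitmat.T @ unitmat` is exactly the matrix of cosine similarities between regressors:

```python
import numpy as np

# synthetic design matrix: 100 time points x 5 regressors
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# same normalization as used for the real design matrices:
# mean-center columns, divide by column norms
Xc = X - X.mean(axis=0)
U = Xc / np.linalg.norm(Xc, axis=0)
cos = U.T @ U

# one entry checked against the explicit cosine formula
a, b = Xc[:, 0], Xc[:, 1]
direct = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(cos[0, 1], direct)
assert np.allclose(np.diag(cos), 1.0)  # each regressor has cosine 1 with itself
```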
# #### Create dataframes
# +
rule_cond = ['RuleLogic_BOTH', 'RuleLogic_NOTBOTH', 'RuleLogic_EITHER',
'RuleLogic_NEITHER', 'RuleSensory_RED', 'RuleSensory_VERTICAL',
'RuleSensory_HIGH', 'RuleSensory_CONSTANT', 'RuleMotor_LMID',
'RuleMotor_LIND', 'RuleMotor_RMID', 'RuleMotor_RIND']
resp_cond = ['Response_LMID', 'Response_LIND', 'Response_RMID', 'Response_RIND']
df = {}
df['Subject'] = []
df['Cosine'] = []
df['Response'] = []
df['Stimulus'] = []
scount = 0
for subj in subjNums:
stim_ind = 0
for stim_ind in range(12,28):
stim = stimCond[stim_ind]
for resp_ind in range(28,32):
resp = stimCond[resp_ind]
df['Subject'].append(scount)
df['Response'].append(resp)
df['Stimulus'].append(stim)
df['Cosine'].append(corrmat[scount][stim_ind,resp_ind])
scount += 1
df = pd.DataFrame(df)
# -
# #### Visualize Cosine similarity of rules x stimuli x motor responses
# +
groupavg = np.mean(corrmat,axis=0)
t, p = stats.ttest_1samp(corrmat,0,axis=0)
triu_ind = np.triu_indices(groupavg.shape[0],k=1)
ntests = len(p[triu_ind])
q = p*ntests  # Bonferroni correction (ntests = number of unique regressor pairs)
sig_mat = np.multiply(groupavg,q<0.05)
# np.fill_diagonal(groupavg,0)
plt.figure(figsize=(3,3))
ax = sns.heatmap(sig_mat,square=True,cmap="Blues",cbar_kws={'fraction':0.046})
ax.invert_yaxis()
plt.xticks([])
plt.yticks([])
plt.title('Cosine similarity of\ntask regressors',fontsize=12)
plt.tight_layout()
plt.savefig('CosineMatrixTaskRegressors.png',dpi=300)
# -
# #### Plot counterbalancing of motor responses with task stimuli
stimticks = []
stimticks.extend(np.repeat('Color',4))
stimticks.extend(np.repeat('Orientation',4))
stimticks.extend(np.repeat('Pitch',4))
stimticks.extend(np.repeat('Constant',4))
plt.figure(figsize=(7.,3.5))
plt.title('Counterbalancing of motor responses with task stimuli', fontsize=12)
ax = sns.boxplot(x="Stimulus",y='Cosine',hue='Response',data=df,whis=0,showfliers = False,palette="Set2")
sns.stripplot(x="Stimulus",y='Cosine',hue='Response',data=df,dodge=True,palette="Set2")
plt.xticks(np.arange(len(stimticks)), stimticks,rotation=-45,fontsize=10);
plt.yticks(fontsize=10)
plt.xlabel('Stimulus',fontsize=12)
plt.ylabel('Cosine similarity',fontsize=12)
handles, labels = ax.get_legend_handles_labels()
l = plt.legend(handles[-4:], ['LMID','LIND','RMID','RIND'], loc=1, borderaxespad=0., prop={'size': 8})
plt.tight_layout()
plt.savefig('CounterbalancingMotorResponseXStim.png',dpi=300)
# #### Run t-tests for all stimuli
fs = []
ps = []
for stim in np.unique(df.Stimulus.values):
tmpdf = df.loc[df.Stimulus==stim]
    tmplmid = tmpdf.Cosine[tmpdf.Response=='Response_LMID']
    tmplind = tmpdf.Cosine[tmpdf.Response=='Response_LIND']
    tmprmid = tmpdf.Cosine[tmpdf.Response=='Response_RMID']
    tmprind = tmpdf.Cosine[tmpdf.Response=='Response_RIND']
f, p = stats.f_oneway(tmplmid.values,tmplind.values,tmprmid.values,tmprind.values)
fs.append(f)
ps.append(p)
qs = mc.fdrcorrection0(ps)[1]
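# For reference, the Benjamini-Hochberg adjustment applied by `fdrcorrection0` can be written in plain NumPy (a sketch of the standard procedure, not statsmodels' exact code):

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    # scale each sorted p-value by n / rank
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.minimum(scaled, 1.0)
    return adjusted

print(bh_adjust([0.01, 0.02, 0.03, 0.04]))
```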
| code/glmScripts/visualizeExampleDesignMatrix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def videoToImage(filename, index):  # note: the `index` argument is currently unused
    import cv2
    vc = cv2.VideoCapture(filename)  # open the video file
    c = 0
    rval = vc.isOpened()
    # timeF = 1  # frame-sampling interval
    while rval:  # read video frames in a loop
        c = c + 1
        rval, frame = vc.read()
        # if (c % timeF == 0):  # save every timeF-th frame
        #     cv2.imwrite('smallVideo/smallVideo' + str(c) + '.jpg', frame)  # save as an image
        if rval:
            cv2.imwrite('./car/' + filename + 'car-' + str(c).zfill(8) + '.jpg', frame)  # save as an image
            cv2.waitKey(1)
        else:
            break
# +
import os

s = os.listdir('./')
index = 0
for i in s:
    # print(i)
    if i.endswith('MOV'):
        videoToImage(i, index)
        index += 1
        print(i)
# +
import cv2
def resize(folder, filename, index):
print(folder +filename)
pic = cv2.imread(folder +filename)
pic = cv2.resize(pic,(100,80),interpolation=cv2.INTER_CUBIC)
cv2.imwrite(folder + index + '.jpg', pic)
# -
import os
movie_name = os.listdir('./pos/')
index = 0
for temp in movie_name:
    num = temp.rfind(']')  # find the index of the rightmost ']'
    # new_name = '[...]' + temp
new_name = temp[num+1:]
os.rename('./pos/'+temp,'./pos/'+ str(index) + '.jpg')
index+=1
import os
target_folder = './neg/'
s = os.listdir(target_folder)
pic_num = 1
for i in s:
#print(i)
if i.endswith('jpg'):
print(target_folder +i)
resize(target_folder,i, str(pic_num))
pic_num = pic_num + 1
import os
target_folder = './pos/'
s = os.listdir(target_folder)
pic_num = 1
for i in s:
#print(i)
#if i.endswith('jpg'):
print(target_folder +i)
resize(target_folder,i, str(pic_num))
pic_num = pic_num + 1
| detect/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A software bug
# ## Problem Definition
# You are using legacy software to solve a production-mix Continuous Linear Programming (CLP) problem in your company, where the objective is to maximise profits. The following table provides the solution of the primal and dual problems and a sensitivity analysis for the three decision variables that represent the quantities to produce of each product:
#
# |Variables | Solution | Reduced cost | Objective Coefficient | Lower bound | Upper bound|
# |---|---|---|---|---|---|
# |$x_1$ | 300.00 | ? | 30.00 | 24.44 | Inf|
# |$x_2$ | 33.33 | ? | 20.00 | -0.00 | 90.00|
# |$x_3$ | 0 | -8.33 | 40.00 | -Inf | 48.33 |
#
# Answer the following questions:
#
# **a.** Notice that there are some values missing (a bug in the software shows a ? sign instead of a numerical value, remember to use Python the next time around). Fill the missing values and explain your decision (1 point).
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# **b.** According to the provided solution, which of the three products has the highest impact on your objective function? Motivate your response (1 point).
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# **c.** What does the model tell you about product $x_3$? Is it profitable to manufacture under the current conditions modeled in the problem? Provide quantitative values to motivate your response (1 point).
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />**d.** Recall that in this type of problem, the objective coefficients represent the marginal profit per unit of product. What would happen if the objective coefficient of variable $x_3$ were increased above 50? Describe what would be the impact on the obtained solution (1 point)
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />The following table shows the values obtained for the variables related to the constraints:
#
# | Constraint | Right Hand Side | Shadow Price | Slack | Min RHS | Max RHS |
# |-----|-----|-----|----|-----|----|
# | $s_1$ | 400.00 | 6.67 | 0.00 | 300.00 | 525.00 |
# | $s_2$ | 600.00 | 11.67 | 0.00 | 0.00 | 800.00 |
# | $s_3$ | 600.00 | 0.00 | 166.67 | 433.33 | Inf |
#
# Bearing in mind that the constraints represent the availability of 3 limited resources (operating time in minutes) and that every constraint is of the "less than or equal" type, answer the following questions:
#
# **e.** Which decision variables are basic? How many are there and, is this result expected? Motivate your response (1 point).
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# **f.** Imagine that you had to cut down costs by reducing the availability of the limited resources in the model. Which one would you select? How much could you cut down without changing the production mix? Motivate your response (1 point)
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# **g.** Now, with the money saved, you could invest to increase the availability of other limited resources. Again, indicate which one would you select and by how much you would increase the availability of this resource without changing the production mix. Motivate your response (1 point)
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# **h.** How do the columns of the two tables relate to the primal and dual problem? Moreover, for each column, not considering upper and lower bounds, write down a brief description of each column and the relationship it has with the primal and dual problem (3 points).
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
# <br />
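# The quantities discussed above (shadow prices, slacks, binding constraints) can be reproduced in Python. A sketch with `scipy.optimize.linprog` on a small hypothetical production-mix LP, since the exam's constraint matrix is not given here:

```python
from scipy.optimize import linprog

# hypothetical LP (illustrative numbers only): maximize 3*x1 + 2*x2
# subject to x1 + x2 <= 4 and x1 + 3*x2 <= 6, with x1, x2 >= 0.
# linprog minimizes, so the objective is negated.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

profit = -res.fun                       # optimal profit: 12 at (x1, x2) = (4, 0)
shadow_prices = -res.ineqlin.marginals  # negate back for the maximization problem
slacks = res.slack                      # the second resource is not fully used
print(profit, shadow_prices, slacks)
```

Here the first constraint is binding (zero slack, positive shadow price) while the second has slack and a zero shadow price, which is exactly the pattern the questions above ask about.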
# + [markdown] pycharm={"name": "#%% md\n"}
#
| docs/source/CLP/solved/A software bug (Solved).ipynb |