# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv
# language: python
# name: venv
# ---
# # MPC Tensor - Party 1
# ### With Duet
# In this tutorial we will show you how to perform secure multi-party computation on data you cannot see. There are two parts/notebooks:
# * [POC-MPCTensor-Duet-Party1-DS](POC-MPCTensor-Duet-Party1-DS.ipynb) (this notebook). The data scientist is responsible for performing the secure computation.
# * [POC-MPCTensor-Duet-Party2-DO](POC-MPCTensor-Duet-Party2-DO.ipynb). The data owner stores data in their Duet server and makes it available to the data scientist.
# ## 0 - Libraries
# Let's import the main libraries
# +
import torch # tensor computation
import syft as sy # core library for remote execution
sy.load_lib("sympc") # openmined library which helps to perform mpc (see https://github.com/OpenMined/SyMPC)
sy.logger.add(sink="./example.log")
# -
# ## 1 - Duet Server and connection to Data Owner (Party 2)
# In this step let's launch a Duet server and connect to the Data Owner
# ### 1.1 - Launch a Duet Server
duet_p1 = sy.launch_duet()
# ### 1.2 - Connect to Data Owner (Party 2)
duet_p2 = sy.join_duet("341f4a9c6b78b0c8a4cc5e86e7af02e0")
# If you see a green message saying CONNECTED!, everything went well; go back to [POC-MPCTensor-Duet-Party2-DO](POC-MPCTensor-Duet-Party2-DO.ipynb) and complete the tutorial.
# ## 2 - Secure MultiParty Computation
# ### 2.1 Create a session
# Before doing any computation we need to set up a session. The session is used to exchange some configuration information once between the parties.
# This information can be:
# * the ring size in which we do the computation
# * the precision and base
# * the approximation methods we are using for different functions (TODO)
from sympc.session import Session
from sympc.tensor import MPCTensor
session = Session(parties=[duet_p1, duet_p2])
print(session)
# ### 2.2 Send the session to all the parties
Session.setup_mpc(session)
# ### 2.3 Private Operations
# Now we are ready to perform private operations. First, let's check which datasets are stored in the Data Owner's Duet server.
duet_p2.store.pandas
# ### 2.3.1 - Sum, subtract and multiply operations
# Let's first do some basic operations. Notice that these operations are performed via SMPC, so the raw data never leaves the data owner's server!
x_secret = duet_p2.store[0] # secret data to test sum, subtraction and multiplication
y = torch.Tensor([-5, 0, 1, 2, 3]) # some local data
x = MPCTensor(secret=x_secret, shape=(1,), session=session) # MPC Tensor from x_secret
print("X + Y =", (x + y).reconstruct())
print("X - Y =", (x - y).reconstruct())
print("X * Y =", (x * y).reconstruct())
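# Under the hood these private operations rest on additive secret sharing: each party holds a random-looking share of the secret, and addition works share-wise. The snippet below is not the SyMPC implementation, just a minimal pure-Python sketch of the idea over a fixed ring:

```python
import random

RING = 2 ** 32  # modulus of the ring the shares live in (illustrative choice)

def share(secret, n_parties=2):
    """Split a secret into n additive shares that sum to it mod RING."""
    shares = [random.randrange(RING) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Recover the secret by summing the shares mod RING."""
    return sum(shares) % RING

x_shares = share(7)
y_shares = share(5)
# Each party adds its own shares locally; no single party ever sees 7 or 5.
z_shares = [(a + b) % RING for a, b in zip(x_shares, y_shares)]
print(reconstruct(z_shares))  # 12
```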
# ### 2.3.2 - Matrix multiplication
# Slightly more complex operations such as matrix multiplication work as well.
# Remember that linear algebra is the basis of Deep Learning!
x_secret = duet_p2.store[1] # secret data with no access
x = MPCTensor(secret=x_secret, shape=(2,2), session=session) # MPC Tensor build from x_secret
print("X @ X =\n", (x @ x).reconstruct())
# ## Congratulations!!! - Time to Join the Community!
#
# Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
#
# ### Star PySyft and SyMPC on GitHub
# The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool tools we're building.
#
# * [Star PySyft](https://github.com/OpenMined/PySyft)
# * [Star SyMPC](https://github.com/OpenMined/SyMPC/)
#
# ### Join our Slack!
# The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at http://slack.openmined.org
#
# ### Join a Code Project!
# The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
#
# * [PySyft Good First Issue Tickets](https://github.com/OpenMined/PySyft/labels/Good%20first%20issue%20%3Amortar_board%3A)
# * [SyMPC Good First Issue Tickets](https://github.com/OpenMined/SyMPC/labels/good%20first%20issue)
#
# ### Donate
# If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
#
# * [OpenMined's Open Collective Page](https://opencollective.com/openmined)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import altair as alt
from altair import expr, datum
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import re
# +
import colorsys
from matplotlib.colors import to_hex, to_rgb
def scale_lightness(rgb, scale_l):
rgbhex = False
if "#" in rgb:
rgb = to_rgb(rgb)
rgbhex = True
# convert rgb to hls
h, l, s = colorsys.rgb_to_hls(*rgb)
# manipulate h, l, s values and return as rgb
c = colorsys.hls_to_rgb(h, min(1, l * scale_l), s=s)
if rgbhex:
c = to_hex(c)
return c
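# As a quick sanity check of what `scale_lightness` does (pure `colorsys`, no matplotlib needed): a factor below 1 darkens the colour by shrinking its HLS lightness channel.

```python
import colorsys

rgb = (1.0, 0.5, 0.0)  # an orange, as an (r, g, b) tuple in [0, 1]
h, l, s = colorsys.rgb_to_hls(*rgb)
darker = colorsys.hls_to_rgb(h, min(1, l * 0.7), s)
print(l, colorsys.rgb_to_hls(*darker)[1])  # lightness shrinks from 0.5 toward 0.35
```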
# +
LOCAL = False
if LOCAL:
local_suffix = "_local"
else:
local_suffix = ""
# -
# %%capture pwd
# !pwd
uid = pwd.stdout.split("/")[-1].split("\r")[0]
eco_git_home = (
"https://raw.githubusercontent.com/EconomicsObservatory/ECOvisualisations/main/"
)
eco_git_path = eco_git_home + "articles/" + uid + "/data/"
vega_embed = requests.get(eco_git_home + "guidelines/html/vega-embed.html").text
colors = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-colors.json").content
)
category_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-category-color.json").content
)
hue_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-single-hue-color.json").content
)
mhue_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-multi-hue-color.json").content
)
div_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-diverging-color.json").content
)
config = json.loads(
requests.get(eco_git_home + "guidelines/charts/eco-global-config.json").content
)
height = config["height"]
width = config["width"]
uid, height, width
# # Fig 1
# https://www.theccc.org.uk/wp-content/uploads/2020/06/Reducing-UK-emissions-Progress-Report-to-Parliament-Committee-on-Cli.._-002-1.pdf
# https://www.irena.org/publications/2020/Jun/Renewable-Power-Costs-in-2019
df1 = pd.read_excel(
"raw/200605-IRENADatafileRenewablePowerGenerationCostsin2019v11.xlsx",
sheet_name="Figures ES.2 & 1.3",
skiprows=7,
nrows=30,
usecols="B:L",
).dropna(how="any")
df1["Unnamed: 1"] = [
i + "_" + j
for i in ["pv", "onwind", "offwind", "csp"]
for j in ["p5", "avg", "p95"]
]
df2 = pd.read_excel(
"raw/200605-IRENADatafileRenewablePowerGenerationCostsin2019v11.xlsx",
sheet_name="Figures ES.2 & 1.3",
skiprows=7,
nrows=30,
usecols="M:Z",
).dropna(how="all")
df2["2010.1"] = df2["2010.1"].replace("Auctions and PPA database", np.nan)
df2.columns = range(2010, 2024)
df2 = df2.dropna(subset=[2020])
df2 = pd.concat([df1["Unnamed: 1"], df2], axis=1)
df2 = df2.dropna(subset=["Unnamed: 1"], axis=0)
df1["db"] = "lcoe"
df2["db"] = "auction"
df = pd.concat([df1, df2])
df["tech"] = df["Unnamed: 1"].astype(str).str.split("_").str[0]
df["conf"] = df["Unnamed: 1"].astype(str).str.split("_").str[1]
df = (
df.drop("Unnamed: 1", axis=1)
.set_index(["db", "tech", "conf"])
.stack()
.reset_index()
.set_index(["level_3", "tech", "db", "conf"])
.unstack()[0]
.reset_index()
)
df.columns = ["year", "tech", "db", "avg", "p5", "p95"]
f = "fig1_lcoe"
f1 = eco_git_path + f + ".csv"
df.to_csv("data/" + f + ".csv")
f += local_suffix
open("visualisation/" + f + ".html", "w").write(
vega_embed.replace(
"JSON_PATH", f1.replace("/data/", "/visualisation/").replace(".csv", ".json")
)
)
if LOCAL:
f1 = df
readme = "### " + f + '\n\n\n'
df.head()
base = alt.Chart(f1).encode(
x=alt.X(
"year:Q",
sort=[],
axis=alt.Axis(
grid=False,
titleAlign="center",
titleAnchor="middle",
title="",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
tickCount=10,
orient="bottom",
labelAngle=0,
format=".0f",
# zindex=1,
# offset=-43
),
)
)
areas = base.mark_area(opacity=0.2).encode(
y=alt.Y("p5:Q", sort=[]),
y2=alt.Y2("p95:Q"),
color=alt.Color(
"tech:N",
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-mid-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
)
lines = (
base.mark_line()
.encode(
y=alt.Y(
"avg:Q",
sort=[],
axis=alt.Axis(
grid=True,
gridOpacity=0.1,
gridColor=colors["eco-gray"],
title="$/kWh",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=27,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=7,
format=".2f",
),
),
color=alt.Color("tech:N", legend=None),
)
.transform_filter('datum.db=="lcoe"')
)
lines2 = (
base.mark_point(strokeOpacity=0.8, opacity=0.6)
.encode(
y=alt.Y("avg:Q", sort=[]),
fill=alt.Color(
"tech:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-mid-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
color=alt.Color(
"tech:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-mid-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
)
.transform_filter('datum.db=="auction"')
)
title = alt.TitleParams(
"Evolution of LCOE for renewable energy technologies",
subtitle=["Levelised cost of electricity. Source: CCC, based on IRENA (2020)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=12,
subtitleFontSize=11,
subtitleFontStyle="italic",
)
fossil = (
alt.Chart(
pd.DataFrame(
[{"x": 2008, "y": 0.050, "y2": 0.177}, {"x": 2024, "y": 0.050, "y2": 0.177}]
)
)
.mark_area(fill=colors["eco-gray"], opacity=0.2)
.encode(x="x:Q", y="y:Q", y2="y2:Q")
)
ftext = (
alt.Chart(pd.DataFrame([{"x": 2024, "y": 0.165, "t": "Fossil fuel range"}]))
.mark_text(
align="right", color=colors["eco-gray"], fontSize=10, dy=-7, dx=-5, angle=270
)
.encode(x="x:Q", y="y:Q", text="t:N")
)
text1 = (
alt.Chart(
pd.DataFrame(
[
{"x": 2023, "y": 0.055, "t": "Circles:"},
{"x": 2023.2, "y": 0.04, "t": "Auctions"},
{"x": 2023, "y": 0.025, "t": "& PPAs"},
{"x": 2018.5, "y": 0.31, "t": "Lines:"},
{"x": 2018.5, "y": 0.295, "t": "LCOE"},
{"x": 2018.7, "y": 0.28, "t": "database"},
]
)
)
.mark_text(align="right", color=colors["eco-gray"], fontSize=10, dx=-5)
.encode(x="x:Q", y="y:Q", text="t:N")
)
text2 = (
alt.Chart(
pd.DataFrame(
[
{"x": 2011, "y": 0.14, "t": "Offshore wind"},
{"x": 2010.5, "y": 0.064, "t": "Onshore wind"},
{"x": 2011, "y": 0.364, "t": "Concentrated solar"},
{"x": 2012.7, "y": 0.21, "t": "Solar PV"},
]
)
)
.mark_text(align="left", fontSize=10)
.encode(
x="x:Q",
y="y:Q",
text="t:N",
color=alt.Color(
"t:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-mid-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
)
)
layer1 = (
(
(fossil + lines + lines2 + ftext + text1 + text2).properties(
height=300, width=400
)
)
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + ".json")
layer1.save("visualisation/" + f + ".png", scale_factor=2.0)
layer1.save("visualisation/" + f + ".svg")
open("README.md", "w").write(readme)
layer1
base = alt.Chart(f1).encode(
x=alt.X(
"year:Q",
sort=[],
axis=alt.Axis(
grid=False,
titleAlign="center",
titleAnchor="middle",
title="",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
tickCount=10,
titleFontSize=12,
labelFontSize=12,
orient="bottom",
labelAngle=0,
format=".0f",
# zindex=1,
# offset=-43
),
)
)
areas = base.mark_area(opacity=0.2).encode(
y=alt.Y("p5:Q", sort=[]),
y2=alt.Y2("p95:Q"),
color=alt.Color(
"tech:N",
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-light-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
)
lines = (
base.mark_line()
.encode(
y=alt.Y(
"avg:Q",
sort=[],
axis=alt.Axis(
grid=True,
gridOpacity=0.1,
gridColor=colors["eco-gray"],
title="$/kWh",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=12,
labelFontSize=12,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=32,
titleY=-3,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=7,
format=".2f",
),
),
color=alt.Color("tech:N", legend=None),
)
.transform_filter('datum.db=="lcoe"')
)
lines2 = (
base.mark_point(strokeOpacity=0.8, opacity=0.6)
.encode(
y=alt.Y("avg:Q", sort=[]),
fill=alt.Color(
"tech:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-light-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
color=alt.Color(
"tech:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-light-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
)
.transform_filter('datum.db=="auction"')
)
title = alt.TitleParams(
"Evolution of LCOE for renewable energy technologies",
subtitle=["Levelised cost of electricity. Source: CCC, based on IRENA (2020)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=14,
subtitleFontSize=12,
color=colors["eco-dot"],
subtitleColor=colors["eco-dot"],
subtitleFontStyle="italic",
)
fossil = (
alt.Chart(
pd.DataFrame(
[{"x": 2008, "y": 0.050, "y2": 0.177}, {"x": 2024, "y": 0.050, "y2": 0.177}]
)
)
.mark_area(fill=colors["eco-gray"], opacity=0.2)
.encode(x="x:Q", y="y:Q", y2="y2:Q")
)
ftext = (
alt.Chart(pd.DataFrame([{"x": 2024, "y": 0.165, "t": "Fossil fuel range"}]))
.mark_text(
align="right", color=colors["eco-gray"], fontSize=10, dy=-7, dx=-5, angle=270
)
.encode(x="x:Q", y="y:Q", text="t:N")
)
text1 = (
alt.Chart(
pd.DataFrame(
[
{"x": 2023, "y": 0.055, "t": "Circles:"},
{"x": 2023.2, "y": 0.04, "t": "Auctions"},
{"x": 2023, "y": 0.025, "t": "& PPAs"},
{"x": 2018.5, "y": 0.31, "t": "Lines:"},
{"x": 2018.5, "y": 0.295, "t": "LCOE"},
{"x": 2018.7, "y": 0.28, "t": "database"},
]
)
)
.mark_text(align="right", color=colors["eco-gray"], fontSize=10, dx=-5)
.encode(x="x:Q", y="y:Q", text="t:N")
)
text2 = (
alt.Chart(
pd.DataFrame(
[
{"x": 2011, "y": 0.14, "t": "Offshore wind"},
{"x": 2010.5, "y": 0.064, "t": "Onshore wind"},
{"x": 2011, "y": 0.364, "t": "Concentrated solar"},
{"x": 2012.7, "y": 0.21, "t": "Solar PV"},
]
)
)
.mark_text(align="left", fontSize=10)
.encode(
x="x:Q",
y="y:Q",
text="t:N",
color=alt.Color(
"t:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-orange"],
colors["eco-light-blue"],
colors["eco-green"],
scale_lightness(colors["eco-yellow"], 0.7),
]
),
),
)
)
layer1 = (
(
(fossil + lines + lines2 + ftext + text1 + text2).properties(
height=300, width=400
)
)
.configure(font="Georgia", background=colors["eco-background"])
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + "_dark.json")
layer1.save("visualisation/" + f + "_dark.png", scale_factor=2.0)
layer1.save("visualisation/" + f + "_dark.svg")
readme = re.sub(f, f + "_dark", readme)
open("README.md", "a").write(readme)
layer1
# # Fig 2
# https://iea.blob.core.windows.net/assets/5e6b3821-bb8f-4df4-a88b-e891cd8251e3/WorldEnergyInvestment2021.pdf
# https://www.iea.org/data-and-statistics/charts/global-energy-supply-investment-by-sector-2019-2021-2
# https://iea.blob.core.windows.net/assets/a9da6027-f7c7-4aeb-9710-4f66906c59ab/WEI2021ForWEB.xlsx
df = pd.read_excel(
"raw/WEI2021ForWEB.xlsx", sheet_name="1.2", skiprows=41, nrows=1, usecols="C:Z"
)
labels = [
"Upstream",
"Mid/downstream",
"Coal supply",
"Low-carbon fuels",
"Renewable power",
"Fossil fuel power",
"Nuclear power",
"Electricity networks and battery storage",
]
df.columns = [str(i) + "_" + str(j) for i in labels for j in [2019, 2020, "2021E"]]
df = df.T.reset_index()
df["industry"] = df["index"].str.split("_").str[0]
df["year"] = df["index"].str.split("_").str[1]
df.columns = ["index", "value", "industry", "year"]
df1 = df.set_index(["year", "industry"]).loc["2019"].reset_index()
df1["year"] = 2018
df1["value"] = 0
df = pd.concat([df1, df])
df1["year"] = 2017
df = pd.concat([df1, df])
df["index"] = df["industry"] + "_" + df["year"].astype(str)
df = (
df.sort_values(["industry", "year"]).set_index("industry").loc[labels].reset_index()
)
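# The nested comprehension used for the column names iterates the outer loop (sectors) first and the inner loop (years) fastest, so names come out grouped by sector. A hypothetical two-sector example makes the ordering explicit:

```python
labels = ["Upstream", "Coal supply"]  # hypothetical subset of the real sectors
years = [2019, 2020, "2021E"]

cols = [str(i) + "_" + str(j) for i in labels for j in years]
print(cols)
# ['Upstream_2019', 'Upstream_2020', 'Upstream_2021E',
#  'Coal supply_2019', 'Coal supply_2020', 'Coal supply_2021E']
```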
f = "fig2_energy-investment"
f2 = eco_git_path + f + ".csv"
df.to_csv("data/" + f + ".csv")
f += local_suffix
open("visualisation/" + f + ".html", "w").write(
vega_embed.replace(
"JSON_PATH", f2.replace("/data/", "/visualisation/").replace(".csv", ".json")
)
)
if LOCAL:
f2 = df
readme = "### " + f + '\n\n\n'
df.head()
# +
base = alt.Chart(f2).encode(
x=alt.X("index:N", sort=[], axis=None),
y=alt.Y(
"value:Q",
sort=[],
axis=alt.Axis(
grid=True,
gridOpacity=0.1,
gridColor=colors["eco-gray"],
title="billion US$",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
labelFontSize=10,
titleFontWeight="normal",
titleX=-5,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=7,
format=".0f",
),
),
color=alt.Color(
"industry:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-dark-blue"],
colors["eco-purple"],
scale_lightness(colors["eco-yellow"], 0.7),
colors["eco-green"],
colors["eco-mid-blue"],
colors["eco-dot"],
colors["eco-light-blue"],
colors["eco-turquiose"],
]
),
),
)
bars = base.mark_bar(opacity=0.9).encode(x=alt.X("index:N", sort=[], axis=None))
axes = (
alt.Chart(
pd.DataFrame(
[
{"t": "Upstream", "y": 0, "x": "Upstream_2018"},
{
"t": "Mid/downstream",
"y": 0,
"x": "Mid/downstream_2018",
},
{
"t": "Coal supply",
"y": 0,
"x": "Coal supply_2018",
},
{
"t": "Low-carbon fuels",
"y": 0,
"x": "Low-carbon fuels_2018",
},
{
"t": "Renewable power",
"y": 0,
"x": "Renewable power_2018",
},
{
"t": "Fossil fuel power",
"y": 0,
"x": "Fossil fuel power_2018",
},
{
"t": "Nuclear power",
"y": 0,
"x": "Nuclear power_2018",
},
{
"t": "Electricity networks and battery storage",
"y": 0,
"x": "Electricity networks and battery storage_2018",
},
]
)
)
.mark_text(angle=270, align="left")
.encode(
x=alt.X("x:N", sort=None, axis=None),
y="y:Q",
text="t:N",
color=alt.Color("t:N", legend=None),
)
)
year = (
bars.mark_text(align="right", angle=60, dx=-8, dy=-3)
.encode(text="year:N")
.transform_filter("datum.industry=='Renewable power'")
.transform_filter(
alt.FieldOneOfPredicate(field="year", oneOf=["2019", "2020", "2021E"])
)
)
title = alt.TitleParams(
"Global energy supply investment by sector",
subtitle=["Billion US$ 2019. Source: IEA (2021)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=12,
subtitleFontSize=11,
subtitleFontStyle="italic",
)
layer1 = (
((bars + axes + year).properties(height=200, width=500))
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + ".json")
layer1.save("visualisation/" + f + ".png", scale_factor=2.0)
layer1.save("visualisation/" + f + ".svg")
open("README.md", "a").write(readme)
layer1
# +
base = alt.Chart(f2).encode(
x=alt.X("index:N", sort=[], axis=None),
y=alt.Y(
"value:Q",
sort=[],
axis=alt.Axis(
grid=True,
gridOpacity=0.1,
gridColor=colors["eco-gray"],
title="billion US$",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=12,
labelFontSize=12,
titleFontWeight="normal",
titleX=-5,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=7,
format=".0f",
),
),
color=alt.Color(
"industry:N",
legend=None,
scale=alt.Scale(
range=[
colors["eco-gray"],
colors["eco-orange"],
scale_lightness(colors["eco-yellow"], 0.7),
colors["eco-green"],
colors["eco-mid-blue"],
colors["eco-dot"],
colors["eco-light-blue"],
colors["eco-turquiose"],
]
),
),
)
bars = base.mark_bar(opacity=0.9).encode(x=alt.X("index:N", sort=[], axis=None))
axes = (
alt.Chart(
pd.DataFrame(
[
{"t": "Upstream", "y": 0, "x": "Upstream_2018"},
{
"t": "Mid/downstream",
"y": 0,
"x": "Mid/downstream_2018",
},
{
"t": "Coal supply",
"y": 0,
"x": "Coal supply_2018",
},
{
"t": "Low-carbon fuels",
"y": 0,
"x": "Low-carbon fuels_2018",
},
{
"t": "Renewable power",
"y": 0,
"x": "Renewable power_2018",
},
{
"t": "Fossil fuel power",
"y": 0,
"x": "Fossil fuel power_2018",
},
{
"t": "Nuclear power",
"y": 0,
"x": "Nuclear power_2018",
},
{
"t": "Electricity networks and battery storage",
"y": 0,
"x": "Electricity networks and battery storage_2018",
},
]
)
)
.mark_text(angle=270, align="left")
.encode(
x=alt.X("x:N", sort=None, axis=None),
y="y:Q",
text="t:N",
color=alt.Color("t:N", legend=None),
)
)
year = (
bars.mark_text(align="right", angle=60, dx=-8, dy=-3)
.encode(text="year:N")
.transform_filter("datum.industry=='Renewable power'")
.transform_filter(
alt.FieldOneOfPredicate(field="year", oneOf=["2019", "2020", "2021E"])
)
)
title = alt.TitleParams(
"Global energy supply investment by sector",
subtitle=["Billion US$ 2019. Source: IEA (2021)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=14,
subtitleFontSize=12,
color=colors["eco-dot"],
subtitleColor=colors["eco-dot"],
subtitleFontStyle="italic",
)
layer1 = (
((bars + axes + year).properties(height=200, width=500))
.configure(font="Georgia", background=colors["eco-background"])
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + "_dark.json")
layer1.save("visualisation/" + f + "_dark.png", scale_factor=2.0)
layer1.save("visualisation/" + f + "_dark.svg")
readme = re.sub(f, f + "_dark", readme)
open("README.md", "a").write(readme)
layer1
# -
# # Fig 3
# https://www.theccc.org.uk/publication/sixth-carbon-budget/
df = pd.read_excel(
"raw/Copy of The-Sixth-Carbon-Budget-Charts-and-data-in-the-report.xlsx",
sheet_name="Advice report Ch5&6",
skiprows=64,
nrows=12,
usecols="G:AL",
)
df["Sector and metric"] = df["Sector and metric"].str.replace(r"\(£b\)", "", regex=True).str.strip()
df = df.set_index("Sector and metric").stack().reset_index()
df.columns = ["sector", "year", "value"]
f = "fig3_investment"
f3 = eco_git_path + f + ".csv"
df.to_csv("data/" + f + ".csv")
f += local_suffix
open("visualisation/" + f + ".html", "w").write(
vega_embed.replace(
"JSON_PATH", f3.replace("/data/", "/visualisation/").replace(".csv", ".json")
)
)
if LOCAL:
f3 = df
readme = "### " + f + '\n\n\n'
df.head()
base = alt.Chart(f3).encode(
x=alt.X(
"year:Q",
sort=[],
axis=alt.Axis(
grid=False,
titleAlign="center",
titleAnchor="middle",
title="",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
tickCount=5,
orient="bottom",
labelAngle=0,
format=".0f",
# zindex=1,
# offset=-43
),
)
)
area1 = base.mark_area(opacity=0.7).encode(
y=alt.Y(
"value:Q",
stack=True,
axis=alt.Axis(
grid=True,
gridOpacity=0.1,
gridColor=colors["eco-gray"],
title="£ billion / year",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=30,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=7,
format=".0f",
),
),
color=alt.Color('sector:N',sort=[],legend=None,scale=alt.Scale(range=category_color))
).transform_filter('datum.sector!="Total"')
texts = area1.mark_text(align="left", dx=5).encode(text="sector:N").transform_filter("datum.year==2050")
title = alt.TitleParams(
"Capital investment costs and operating cost savings",
subtitle=["The Sixth Carbon Budget - Balanced Pathway. Source: CCC (2020)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=12,
subtitleFontSize=11,
subtitleFontStyle="italic",
)
layer1 = (
((area1+texts))
.properties(height=300, width=400)
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + ".json")
layer1.save("visualisation/" + f + ".png", scale_factor=2.0)
layer1.save("visualisation/" + f + ".svg")
open("README.md", "a").write(readme)
layer1
base = alt.Chart(f3).encode(
x=alt.X(
"year:Q",
sort=[],
axis=alt.Axis(
grid=False,
titleAlign="center",
titleAnchor="middle",
title="",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
tickCount=5,
titleFontSize=12,
labelFontSize=12,
orient="bottom",
labelAngle=0,
format=".0f",
# zindex=1,
# offset=-43
),
)
)
area1 = base.mark_area(opacity=0.7).encode(
y=alt.Y(
"value:Q",
stack=True,
axis=alt.Axis(
grid=True,
gridOpacity=0.1,
gridColor=colors["eco-gray"],
title="£ billion / year",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=12,
labelFontSize=12,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=30,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=7,
format=".0f",
),
),
color=alt.Color('sector:N',sort=[],legend=None,scale=alt.Scale(range=category_color))
).transform_filter('datum.sector!="Total"')
texts = area1.mark_text(align="left", dx=5).encode(text="sector:N").transform_filter("datum.year==2050")
title = alt.TitleParams(
"Capital investment costs and operating cost savings",
subtitle=["The Sixth Carbon Budget - Balanced Pathway. Source: CCC (2020)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=14,
subtitleFontSize=12,
color=colors["eco-dot"],
subtitleColor=colors["eco-dot"],
subtitleFontStyle="italic",
)
layer1 = (
((area1+texts))
.properties(height=300, width=400)
.configure(font="Georgia", background=colors["eco-background"])
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + "_dark.json")
layer1.save("visualisation/" + f + "_dark.png", scale_factor=2.0)
layer1.save("visualisation/" + f + "_dark.svg")
readme = re.sub(f, f + "_dark", readme)
open("README.md", "a").write(readme)
layer1
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # The Central Limit Theorem
# Sample N values from an arbitrary distribution f(x) and take their mean. That mean will not necessarily equal the mean of f(x). But if you repeat this a number of times, you'll see that the sample means are distributed *normally* around the mean of f(x) with a standard deviation $\sigma_N = \sigma_{f(x)}/\sqrt{N}$, where $\sigma_{f(x)}$ is the spread of the original distribution.
#
# Assumptions:
# * the initial distribution has a well-defined standard deviation (tails fall off more rapidly than $x^{-2}$)
# * data are uncorrelated
#
# ### CLT example
#
# How does the spread of the sample mean change with the number of samples N? Let's compare the distributions of the sample means for N = 20 and N = 100. Let's also see how the spread of these distributions varies as a function of N.
# Importing Libraries
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
# +
# Creating the parent distribution
mu = 3.0
sigma = 2.0
# Sampling
N1 = 20
N2 = 100
sample_means1 = [] # lists that will store the means of all Nrepeat samples
sample_means2 = []
Nrepeats = 1000
for i in range(Nrepeats):
samples1 = stats.norm.rvs(loc=mu,scale=sigma,size=N1) # draw one sample of size N1
samples2 = stats.norm.rvs(loc=mu,scale=sigma,size=N2) # draw one sample of size N2
samples1_mean = np.mean(samples1)
samples2_mean = np.mean(samples2)
sample_means1.append(samples1_mean)
sample_means2.append(samples2_mean)
print(np.mean(sample_means1),np.mean(sample_means2))
# -
# The two means are very similar.
plt.hist(sample_means1,histtype='step',label=r'N1=20')
plt.hist(sample_means2,histtype='step',label=r'N2=100')
plt.hist(stats.norm.rvs(loc=mu,scale=sigma,size=1000),histtype='step',label=r'Parent')
plt.xlabel(r'sample means ($\mu$)')
plt.ylabel(r'Freq. of occurrence')
plt.legend()
plt.show()
# We can see that the spread changes with N$_{sampling}$. How does it change?
# +
Ns=[5,10,20,50,100,200,500,1000]
spread_N = []
for i in Ns:
sample_means_i = []
Nrepeats2 = 100
for j in range(Nrepeats2):
samples = stats.norm.rvs(loc=mu,scale=sigma,size=i) # draw one sample of size i
samples_mean = np.mean(samples)
sample_means_i.append(samples_mean)
spread_N.append(np.std(sample_means_i))
print(spread_N)
# -
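# The measured spreads should track the CLT prediction $\sigma/\sqrt{N}$. A stdlib-only check (no scipy), using the same $\mu=3$, $\sigma=2$ as above:

```python
import random
import statistics

random.seed(0)
mu, sigma, n, repeats = 3.0, 2.0, 100, 2000

# Spread of the means of `repeats` samples, each of size n
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(repeats)]
measured = statistics.stdev(means)
predicted = sigma / n ** 0.5  # CLT prediction: sigma / sqrt(N) = 0.2
print(round(measured, 3), predicted)
```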
plt.plot(Ns,spread_N)
plt.xlabel(r'N of sample')
plt.ylabel(r'$\sigma$')
plt.show()
# So the larger the sample size N, the smaller the spread in the sample means ($\mu$).
# In the following example we can see how the CLT applies to various distributions.
# +
N = 30
dist = stats.norm(0, 1)
# dist = stats.uniform(-1, 2)
# dist = stats.dweibull(8.5)
# dist = stats.expon(1.0)
# dist = stats.lognorm(1.5, 0.5)
# dist = stats.beta(0.01, 10)
sample_means = [np.mean(dist.rvs(size = N)) for i in range(10000)]
gaussfit = stats.norm(np.mean(sample_means), np.std(sample_means))
pdf_x = np.linspace(dist.mean() - 5 * dist.std(), dist.mean() + 5 * dist.std(), 100)
pdf_y = dist.pdf(pdf_x)
plt.subplot(1, 2, 1)
plt.plot(pdf_x, pdf_y, "k-")
plt.title("PDF of " + dist.dist.name + "(" + ", ".join(map(str, dist.args)) + ")")
plt.subplot(1, 2, 2)
plt.title("Sampling distribution of $\mu$")
x = np.linspace(min(sample_means), max(sample_means), 100)
plt.plot(x, gaussfit.pdf(x), "r-")
plt.hist(sample_means, 30, density = True, histtype = "step", color = "k")
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 09 - Neural Networks - Multi Class Classifier
import struct
import numpy as np
import gzip
import urllib.request
import matplotlib.pyplot as plt
from array import array
from sklearn.neural_network import MLPClassifier
# Load the MNIST data into memory
# +
with gzip.open('../Datasets/train-images-idx3-ubyte.gz', 'rb') as f:
magic, size, rows, cols = struct.unpack(">IIII", f.read(16))
img = np.array(array("B", f.read())).reshape((size, rows, cols))
with gzip.open('../Datasets/train-labels-idx1-ubyte.gz', 'rb') as f:
magic, size = struct.unpack(">II", f.read(8))
labels = np.array(array("B", f.read()))
with gzip.open('../Datasets/t10k-images-idx3-ubyte.gz', 'rb') as f:
magic, size, rows, cols = struct.unpack(">IIII", f.read(16))
img_test = np.array(array("B", f.read())).reshape((size, rows, cols))
with gzip.open('../Datasets/t10k-labels-idx1-ubyte.gz', 'rb') as f:
magic, size = struct.unpack(">II", f.read(8))
labels_test = np.array(array("B", f.read()))
# -
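# For reference, the 16-byte header unpacked above follows the IDX format: a big-endian magic number (2051 for image files, 2049 for label files) followed by the dimensions. A sketch with synthetic bytes, so it runs without the dataset files:

```python
import struct

# Build a fake IDX image-file header: magic 2051, 60000 images of 28 x 28 pixels
header = struct.pack(">IIII", 2051, 60000, 28, 28)
# ">IIII" = big-endian, four unsigned 32-bit integers
magic, size, rows, cols = struct.unpack(">IIII", header)
print(magic, size, rows, cols)  # 2051 60000 28 28
```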
# Visualise a sample of the data
for i in range(10):
plt.subplot(2, 5, i + 1)
plt.imshow(img[i], cmap='gray');
plt.title(f'{labels[i]}');
plt.axis('off')
# ## Construct a Neural Network Model to Classify Digits 0 - 9
#
# In this model, as we are predicting classes 0 - 9, we require images from all of the available classes. However, given the size of the full MNIST set, limited system resources and the anticipated training time, we will sample only a small portion of it. We will select 5000 samples at random:
np.random.seed(0) # Give consistent random numbers
selection = np.random.choice(len(img), 5000, replace=False)  # sample without replacement to avoid duplicates
selected_images = img[selection]
selected_labels = labels[selection]
# In order to provide the image information to the Neural Network model we must first flatten the data out so that each image is 1 x 784 pixels in shape.
selected_images = selected_images.reshape((-1, rows * cols))
selected_images.shape
# Applying normalisation is important to facilitate efficient working of the gradient descent algorithm
selected_images = selected_images / 255.0
img_test = img_test / 255.0
# Let's construct the model, use the sklearn MLPClassifier API and call the fit function.
model = MLPClassifier(solver='sgd', hidden_layer_sizes=(100,), max_iter=1000, random_state=1,
learning_rate_init=.01)
model.fit(X=selected_images, y=selected_labels)
# Determine the score against the training set
model.score(X=selected_images, y=selected_labels)
# Display the first two predictions of the model against the training data
model.predict(selected_images)[:2]
plt.subplot(1, 2, 1)
plt.imshow(selected_images[0].reshape((28, 28)), cmap='gray');
plt.axis('off');
plt.subplot(1, 2, 2)
plt.imshow(selected_images[1].reshape((28, 28)), cmap='gray');
plt.axis('off');
# Examine the corresponding predicted probabilities for the first training sample
model.predict_proba(selected_images)[0]
# Compare the performance against the test set
model.score(X=img_test.reshape((-1, rows * cols)), y=labels_test)
| Exercise09/Exercise09.ipynb |
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .clj
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: Clojure (clojupyter=0.3.2=1)
;; language: clojure
;; name: python397jvsc74a57bd089a790bea3b9cadff6bf73491f7cf161e4e94bfc015afe6a535623fbe4142b79
;; ---
;; # Maps, Keywords, and Sets
;; Maps
;; +
;; A simple map
{"title" "Oliver Twist" "author" "Dickens" "published" 1838}
;; +
;; Creating the same map with the hash-map function
(hash-map "title" "Oliver Twist"
"author" "Dickens"
"published" 1838)
;; +
;; Get an item from a map
(def book {"title" "Oliver Twist"
"author" "Dickens"
"published" 1838})
(get book "published") ; Returns 1838.
;; +
;; Get an item from a map, using the map as a function
;; and the key as an arg
(book "published")
;; +
;; Search for a non-existing key (returns nil)
(book "copyright")
;; -
;; Keywords
;; +
;; Using keywords as map keys
(def book {:title "Oliver Twist" :author "Dickens" :published 1838})
(println "Title:" (book :title))
(println "By:" (book :author))
(println "Published:" (book :published))
;; +
;; Get an item from a map, using the map as a function
;; and the keyword as an arg
(book :title)
;; +
;; Get an item from a map, using the keyword as a function
;; and the map as an arg
(:title book)
;; +
;; Add a key-value pair to a map
(assoc book :page-count 362)
;; +
;; Add more than 1 key-value pair to a map
(assoc book :page-count 362 :title "War & Peace")
;; +
;; Remove a key-value pair from a map
(dissoc book :published)
;; +
;; Remove more than 1 key-value pair from a map
(dissoc book :title :author :published)
;; +
;; Trying to remove non-existing keys has no effect (an equal map is returned)
(dissoc book :paperback :illustrator :favorite-zoo-animal)
book
;; +
;; Get all the keys from a map
(keys book)
;; +
;; Get all the values from a map
(vals book)
;; -
;; Sets
;; +
;; A simple set
(def genres #{:sci-fi :romance :mystery}) ; Similar syntax to maps but with '#' added
(def authors #{"Dickens" "Austen" "King"})
;; +
;; By definition, sets don't accept duplicate elements
#{"Dickens" "Austen" "Dickens"}
;; +
;; Check if an element is in a set
(contains? authors "Austen")
;; -
(contains? genres "Austen")
;; +
;; Using the set as a function and an element as the arg
(authors "Austen") ; Existing element
;; -
(genres :historical) ; Non-existing element
;; +
;; Using the element as a function and the set as the arg
(:sci-fi genres) ; Existing element
;; -
(:historical genres) ; Non-existing element
;; +
;; Add an element to a set
(def more-authors (conj authors "Clarke"))
;; +
;; Remove an element from a set
(disj more-authors "King")
;; -
;; Issues with maps, sets and keywords
;; +
;; A simple map
(def book
{:title "Oliver Twist"
:author "Dickens"
:published 1838})
;; +
;; If a map was defined with keywords as keys,
;; strings can't be used to look up the values
(book "title")
;; +
;; Searching with a non-existing key returns nil
(book :some-key-that-is-clearly-not-there) ; Gives you nil.
;; +
;; But maps can contain nil as values, which can cause confusion
;; whether a key-value pair is really present
(def anonymous-book {:title "The Arabian Nights" :author nil})
(anonymous-book :author)
;; +
;; To avoid the last issue, use the 'contains?' function
;; to check if some key is in a map
(contains? anonymous-book :title)
;; -
(contains? anonymous-book :author)
(contains? anonymous-book :favorite-color)
;; +
;; The same logic applies to sets
(def possible-authors #{"Austen" "Dickens" nil})
;; -
(contains? possible-authors "Austen")
(contains? possible-authors "King")
(contains? possible-authors nil)
;; +
;; Another simple map
(def book {:title "Hard Times"
:author "Dickens"
:published 1838})
;; +
;; Some functions see maps as sequences of 2-element vectors
(first book)
;; -
(rest book)
(count book)
| chapter03/chapter03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Duel of sorcerers
# You are witnessing an epic battle between two powerful sorcerers: Gandalf and Saruman. Each sorcerer has 10 spells of variable power in their mind and they are going to throw them one after the other. The winner of the duel will be the one who wins more of those clashes between spells. Spells are represented as a list of 10 integers whose value equals the power of the spell.
# ```
# gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]
# saruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]
# ```
# For example:
# 1. The first clash is won by Saruman: 10 against 23, wins 23
# 2. The second clash is also won by Saruman: 11 against 66, wins 66
# 3. etc.
#
#
# You will create two variables, one for each sorcerer, where the sum of clashes won will be stored. Depending on which variable is greater at the end of the duel, you will show one of the following three results on the screen:
# * Gandalf wins
# * Saruman wins
# * Tie
#
# <img src="images/content_lightning_bolt_big.jpg" width="400">
# ## Solution
# +
# Assign spell power lists to variables
gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]
saruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]
# -
# Assign 0 to each variable that stores the victories
gandalf_victory = 0
saruman_victory = 0
# Execution of spell clashes
for i in range(len(gandalf)):
if gandalf[i]>saruman[i]:
gandalf_victory+=1
    elif saruman[i]>gandalf[i]:
        saruman_victory+=1
    else:
        pass  # a tie: neither sorcerer scores
# +
# We check who has won, do not forget the possibility of a draw.
# Print the result based on the winner.
if gandalf_victory>saruman_victory:
print("Gandalf wins")
elif saruman_victory>gandalf_victory:
print("Saruman wins")
else:
print("Tie")
# -
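# The clash loop above can also be written as a one-line tally per sorcerer using `zip` and `sum` (a sketch that repeats the spell lists so it is self-contained):

```python
gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]
saruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]

# True counts as 1, so summing the comparisons tallies the wins directly
gandalf_victory = sum(g > s for g, s in zip(gandalf, saruman))
saruman_victory = sum(s > g for g, s in zip(gandalf, saruman))
print(gandalf_victory, saruman_victory)  # 6 4
```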
# ## Goals
#
# 1. Treatment of lists
# 2. Use of **for loop**
# 3. Use of conditional **if-elif-else**
# 4. Use of the functions **range(), len()**
# 5. Print
# ## Bonus
#
# 1. Spells now have a name and there is a dictionary that relates that name to a power.
# 2. A sorcerer wins if he succeeds in winning 3 spell clashes in a row.
# 3. Average of each of the spell lists.
# 4. Standard deviation of each of the spell lists.
#
# ```
# POWER = {
# 'Fireball': 50,
# 'Lightning bolt': 40,
# 'Magic arrow': 10,
# 'Black Tentacles': 25,
# 'Contagion': 45
# }
#
# gandalf = ['Fireball', 'Lightning bolt', 'Lightning bolt', 'Magic arrow', 'Fireball',
# 'Magic arrow', 'Lightning bolt', 'Fireball', 'Fireball', 'Fireball']
# saruman = ['Contagion', 'Contagion', 'Black Tentacles', 'Fireball', 'Black Tentacles',
# 'Lightning bolt', 'Magic arrow', 'Contagion', 'Magic arrow', 'Magic arrow']
# ```
#
# Good luck!
# +
# 1. Spells now have a name and there is a dictionary that relates that name to a power.
# variables
POWER = {
'Fireball': 50,
'Lightning bolt': 40,
'Magic arrow': 10,
'Black Tentacles': 25,
'Contagion': 45
}
gandalf = ['Fireball', 'Lightning bolt', 'Lightning bolt', 'Magic arrow', 'Fireball',
'Magic arrow', 'Lightning bolt', 'Fireball', 'Magic arrow', 'Fireball']
saruman = ['Contagion', 'Contagion', 'Black Tentacles', 'Fireball', 'Black Tentacles',
'Lightning bolt', 'Magic arrow', 'Contagion', 'Magic arrow', 'Magic arrow']
# +
# Assign spell power lists to variables
gandalf_Power=[]
saruman_Power=[]
for spell in gandalf:
gandalf_Power.append(POWER[spell])
for spell in saruman:
saruman_Power.append(POWER[spell])
# +
# 2. A sorcerer wins if he succeeds in winning 3 spell clashes in a row.
# Execution of spell clashes
who_scores=[]
for i in range(len(saruman_Power)):
if gandalf_Power[i]>saruman_Power[i]:
who_scores.append('g')
elif gandalf_Power[i]<saruman_Power[i]:
who_scores.append('s')
else:
who_scores.append('tie')
# check for 3 wins in a row
# we check, for every entry in who_scores, whether its neighbours (left and right) are the same.
# we don't need to check the very first and very last entries themselves, since any streak
# containing them is detected through their neighbours.
sb_won = False
for i in range(1, len(who_scores)-1):
    if (who_scores[i] != 'tie') and (who_scores[i] == who_scores[i-1]) and (who_scores[i] == who_scores[i+1]):
        sb_won = True
        break
# check the winner
if(not sb_won):
print('Tie')
else:
    for i in range(1, len(who_scores)-1):
        if (who_scores[i] != 'tie') and (who_scores[i] == who_scores[i-1]) and (who_scores[i] == who_scores[i+1]):
            print(who_scores[i], 'won')
            break
# -
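# A more robust way to detect a 3-in-a-row streak is `itertools.groupby`, which collapses the score list into runs (a sketch with a hypothetical `who_scores` list; ties are excluded from streaks):

```python
from itertools import groupby

who_scores = ['s', 's', 'g', 's', 'g', 'g', 's', 'g', 'g', 'g']  # hypothetical clash outcomes
winner = None
# groupby yields (value, run) pairs for each maximal run of equal values
for key, run in groupby(who_scores):
    if key != 'tie' and len(list(run)) >= 3:
        winner = key
        break
print(winner)  # g
```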
# 3. Average of each of the spell lists.
ave_gandalf=sum(gandalf_Power)/len(gandalf_Power)
ave_saruman=sum(saruman_Power)/len(saruman_Power)
print('Gandalfs spell list average is', ave_gandalf, 'and Sarumans is', ave_saruman)
# +
# 4. Standard deviation of each of the spell lists.
import math
# I assume there exists a function for doing this, but I wanted to practice the steps.
diff_gandalf=[]
diff_saruman=[]
for i in range(len(gandalf_Power)):
diff_gandalf.append(gandalf_Power[i]-ave_gandalf)
for i in range(len(gandalf_Power)):
diff_saruman.append(saruman_Power[i]-ave_saruman)
square_gandalf=[]
square_saruman=[]
for i in range(len(diff_gandalf)):
square_gandalf.append(diff_gandalf[i]**2)
for i in range(len(diff_gandalf)):
square_saruman.append(diff_saruman[i]**2)
print('Standard deviation of Gandalfs spells list =', math.sqrt(sum(square_gandalf)/len(square_gandalf)))
print('Standard deviation of Sarumans spells list =', math.sqrt(sum(square_saruman)/len(square_saruman)))
# -
| duel/duel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import pandas as pd
import spikeextractors as se
import spiketoolkit as st
import spikewidgets as sw
import tqdm.notebook as tqdm
from scipy.signal import periodogram, spectrogram
import matplotlib.pyplot as plt
# # %matplotlib inline
# # %config InlineBackend.figure_format='retina'
import panel as pn
import panel.widgets as pnw
pn.extension()
from utils import *
# +
# Path to the data folder in the repo
data_path = r""
# !!! start assign jupyter notebook parameter(s) !!!
data_path = '2021-02-12_22-13-24_Or179_Or177_overnight'
# !!! end assign jupyter notebook parameter(s) !!!
# +
# Path to the raw data in the hard drive
with open(os.path.join(data_path, 'LFP_location.txt')) as f:
OE_data_path = f.read()
# -
# ### Get each bird's recording, and their microphone channels
# +
# Whole recording from the hard drive
recording = se.BinDatRecordingExtractor(OE_data_path,30000,40, dtype='int16')
# Note I am adding relevant ADC channels
# First bird
Or179_recording = se.SubRecordingExtractor(recording,channel_ids=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15, 32])
# Second bird
Or177_recording = se.SubRecordingExtractor(recording,channel_ids=[16, 17,18,19,20,21,22,23,24,25,26,27,28,29,30,31, 33])
# Bandpass filter microphone recordings
mic_recording = st.preprocessing.bandpass_filter(
se.SubRecordingExtractor(recording,channel_ids=[32,33]),
freq_min=500,
freq_max=1400
)
# +
# Get wav files
wav_names = [file_name for file_name in os.listdir(data_path) if file_name.endswith('.wav')]
wav_paths = [os.path.join(data_path,wav_name) for wav_name in wav_names]
# Get tranges for wav files in the actual recording
# OE_data_path actually contains the path all the way to the .bin. We just need the parent directory
# with the timestamp.
# Split up the path
OE_data_path_split= OE_data_path.split(os.sep)
# Take only the first three. os.path is weird so we manually add the separator after the
# drive name.
OE_parent_path = os.path.join(OE_data_path_split[0] + os.sep, *OE_data_path_split[1:3])
# Get all time ranges given the custom offset.
tranges=np.array([
get_trange(OE_parent_path, path, offset=datetime.timedelta(seconds=0), duration=3)
for path in wav_paths])
# -
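# The drive-letter quirk mentioned above can be seen in isolation with `ntpath` (the Windows flavour of `os.path`, so this sketch runs on any OS; the path is made up):

```python
import ntpath

parts = r"D:\recordings\2021-02-12\experiment.bin".split("\\")
# Joining the bare drive name drops the separator after it...
print(ntpath.join(parts[0], *parts[1:3]))         # D:recordings\2021-02-12
# ...so the separator has to be re-added manually after the drive name
print(ntpath.join(parts[0] + "\\", *parts[1:3]))  # D:\recordings\2021-02-12
```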
wav_df = pd.DataFrame({'wav_paths':wav_paths, 'wav_names':wav_names, 'trange0':tranges[:, 0], 'trange1':tranges[:, 1]})
wav_df.head()
# Connect the wav files to the recording. Manually inspect the values to gut-check yourself. If it is before 2021-02-21 at 11:00 am PST, you need to add a time delay.
wav_f,_,_,_=wav_df.loc[0,:]
wav_f, data_path
datetime.datetime(2021,2,23,8,11,1) - datetime.datetime(2021, 2, 22,22,0,20)
paths, name, tr0, tr1 = wav_df.loc[0,:]
sw.plot_spectrogram(mic_recording, trange= [tr0,tr1+10], freqrange=[300,4000], nfft=2**10, channel=32)
np.linspace(0,130,14)
# +
# Set up widgets
wav_selector = pnw.Select(options=list(range(len(wav_df))), name="Select song file")
# offset_selector = pnw.Select(options=np.linspace(-10,10,21).tolist(), name="Select offset")
window_radius_selector = pnw.Select(options=[10,20,30,40,60], name="Select window radius")
spect_chan_selector = pnw.Select(options=list(range(16)), name="Spectrogram channel")
spect_freq_lo = pnw.Select(options=np.linspace(0,130,14).tolist(), name="Low frequency for spectrogram (Hz)")
spect_freq_hi = pnw.Select(options=np.linspace(130,0,14).tolist(), name="Hi frequency for spectrogram (Hz)")
log_nfft_selector = pnw.Select(options=np.linspace(10,16,7).tolist(), name="magnitude of nfft (starts at 256)")
@pn.depends(
wav_selector=wav_selector.param.value,
# offset=offset_selector.param.value,
window_radius=window_radius_selector.param.value,
spect_chan=spect_chan_selector.param.value,
spect_freq_lo=spect_freq_lo.param.value,
spect_freq_hi=spect_freq_hi.param.value,
log_nfft=log_nfft_selector.param.value
)
def create_figure(wav_selector,
# offset,
window_radius, spect_chan,
spect_freq_lo, spect_freq_hi, log_nfft):
# Each column in each row to a tuple that we unpack
wav_file_path, wav_file_name, tr0, tr1 = wav_df.loc[wav_selector,:]
# Set up figure
fig,axes = plt.subplots(4,1, figsize=(16,12))
# Get wav file numpy recording object
wav_recording = get_wav_recording(wav_file_path)
# Apply offset and apply window radius
offset = 0
tr0 = tr0+ offset-window_radius
# Add duration of wav file
tr1 = tr1+ offset+window_radius+wav_recording.get_num_frames()/wav_recording.get_sampling_frequency()
'''Plot sound spectrogram (Hi fi mic)'''
sw.plot_spectrogram(wav_recording, channel=0, freqrange=[300,14000],ax=axes[0])
axes[0].set_title('Hi fi mic spectrogram')
'''Plot sound spectrogram (Lo fi mic)'''
if 'Or179' in wav_file_name:
LFP_recording = Or179_recording
elif 'Or177' in wav_file_name:
LFP_recording = Or177_recording
mic_channel = LFP_recording.get_channel_ids()[-1]
sw.plot_spectrogram(
mic_recording,
mic_channel,
trange=[tr0, tr1],
freqrange=[600,4000],
ax=axes[1]
)
axes[1].set_title('Lo fi mic spectrogram')
'''Plot LFP timeseries'''
chan_ids = np.array([LFP_recording.get_channel_ids()]).flatten()
sw.plot_timeseries(
LFP_recording,
channel_ids=chan_ids[1:4],
trange=[tr0, tr1],
ax=axes[2]
)
axes[2].set_title('Raw LFP')
# Clean lines
for line in plt.gca().lines:
line.set_linewidth(0.5)
'''Plot LFP spectrogram'''
sw.plot_spectrogram(
LFP_recording,
channel=chan_ids[spect_chan],
freqrange=[spect_freq_lo,spect_freq_hi],
trange=[tr0, tr1],
ax=axes[3],
nfft=int(2**log_nfft)
)
axes[3].set_title('LFP')
for i, ax in enumerate(axes):
ax.set_yticks([ax.get_ylim()[1]])
ax.set_yticklabels([ax.get_ylim()[1]])
ax.set_xlabel('')
# Show 30 Hz
ax.set_yticks([30, ax.get_ylim()[1]])
ax.set_yticklabels([30, ax.get_ylim()[1]])
return fig
# -
text = pnw.StaticText(value="<h3>OR177 Data Analysis Dashboard</h3>", align="center")
dash = pn.Column(
text,
pn.Row(wav_selector,
# offset_selector,
window_radius_selector,spect_chan_selector),
pn.Row(spect_freq_lo,spect_freq_hi,log_nfft_selector),
create_figure
);
dash
| code/exploratory/2021-02-12_22-13-24_Or179_Or177_overnight-output (1)-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # $\lambda$ variable for oblate ellipsoids
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Here, we follow the reasoning presented by [Webster (1904)](#webster-dynamics) for analyzing the ellipsoidal coordinate $\lambda$ describing a oblate ellipsoid. Let's consider an ellipsoid with semi-axes $a$, $b$, $c$ oriented along the $x$-, $y$-, and $z$-axis, respectively, where $0 < a < b = c$. This ellipsoid is defined by the following equation:
# <a id='eq1'></a>
# $$
# \frac{x^{2}}{a^{2}} + \frac{y^{2} + z^{2}}{b^{2}} = 1 \: . \tag{1}
# $$
# A quadric surface which is confocal with the ellipsoid defined in [equation 1](#eq1) can be described as follows:
# <a id='eq2'></a>
#
# $$
# \frac{x^{2}}{a^{2} + \rho} + \frac{y^{2} + z^{2}}{b^{2} + \rho}= 1 \: , \tag{2}
# $$
# where $\rho$ is a real number. We know that [equation 2](#eq2) represents an ellipsoid for $\rho$ satisfying the condition
# <a id='eq3'></a>
#
# $$
# \rho + a^{2} > 0 \: . \tag{3}
# $$
# Given $a$, $b$, and a $\rho$ satisfying [equation 3](#eq3), we may use [equation 2](#eq2) for determining a set of points $(x, y, z)$ lying on the surface of an ellipsoid confocal with that one defined in [equation 1](#eq1). Now, consider the problem of determining the ellipsoid which is confocal with that one defined in [equation 1](#eq1) and pass through a particular point $(x, y, z)$. This problem consists in determining the real number $\rho$ that, given $a$, $b$, $x$, $y$, and $z$, satisfies the [equation 2](#eq2).
# By rearranging [equation 2](#eq2), we obtain the following quadratic equation for $\rho$:
# $$
# f(\rho) = (a^{2} + \rho)(b^{2} + \rho) - (b^{2} + \rho) \, x^{2}
# - (a^{2} + \rho) \, (y^{2} + z^{2}) \: .
# $$
# Evaluating $f(\rho)$ at the limits of interest shows that:
# $$
# \begin{cases}
# f(\rho) > 0 \: &, \quad \rho \to \infty \\
# f(-a^{2}) < 0 \: & \\
# f(-b^{2}) > 0 \: &
# \end{cases} \: ,
# $$
# so $f$ changes sign once in the interval $\left( -a^{2} \, , \infty \right)$ and has a single root there.
# By rearranging this equation, we obtain a simpler one given by:
# <a id='eq4'></a>
#
# $$
# f(\rho) = p_{2} \, \rho^{2} + p_{1} \, \rho + p_{0} \: , \tag{4}
# $$
# where
# <a id='eq5'></a>
#
# $$
# p_{2} = 1 \: , \tag{5}
# $$
# <a id='eq6'></a>
#
# $$
# p_{1} = a^{2} + b^{2} - x^{2} - y^{2} - z^{2} \tag{6}
# $$
# and
# <a id='eq7'></a>
#
# $$
# p_{0} = a^{2} \, b^{2} - b^{2} \, x^{2} - a^{2} \, y^{2} - a^{2} \, z^{2} \: . \tag{7}
# $$
# Note that a particular $\rho$ satisfying [equation 2](#eq2) results in $f(\rho) = 0$ ([equation 4](#eq4)).
#
# In order to illustrate the parameter $\rho$, consider the constants $a$, $b$, $x$, $y$, and $z$ given in the cell below:
a = 11.
b = 20.
x = 21.
y = 23.
z = 30.
# By using these constants, we calculate the coefficients $p_{2}$ ([equation 5](#eq5)), $p_{1}$ ([equation 6](#eq6)) and $p_{0}$ ([equation 7](#eq7)) as follows:
p2 = 1.
p1 = a**2 + b**2 - (x**2) - (y**2) - (z**2)
p0 = (a*b)**2 - (b*x)**2 - (a*y)**2 - (a*z)**2
# In the sequence, we define a set of values for the variable $\rho$ in an interval $\left[ \rho_{min} \, , \rho_{max} \right]$ and evaluate the quadratic equation $f(\rho)$ ([equation 4](#eq4)).
rho_min = -b**2 - 500.
rho_max = -a**2 + 2500.
rho = np.linspace(rho_min, rho_max, 100)
f = p2*(rho**2) + p1*rho + p0
# Finally, the cell below shows the quadratic equation $f(\rho)$ ([equation 4](#eq4)) evaluated in the range $\left[ \rho_{min} \, , \rho_{max} \right]$ defined above.
# +
ymin = np.min(f) - 0.1*(np.max(f) - np.min(f))
ymax = np.max(f) + 0.1*(np.max(f) - np.min(f))
plt.close('all')
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(rho_min, rho_max)
plt.ylim(ymin, ymax)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.subplot(1,2,2)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(-600., 100.)
plt.ylim(-0.3*10**6, 10**6)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
#plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.tight_layout()
plt.show()
# -
# Remember that we are interested in a $\rho$ satisfying [equation 3](#eq3). Consequently, according to the figures shown above, we are interested in the largest root $\lambda$ of the quadratic equation $f(\rho)$ ([equation 4](#eq4)).
# The largest root $\lambda$ of $f(\rho)$ ([equation 4](#eq4)) is given by:
# <a id='eq8'></a>
#
# $$
# \lambda = \frac{-p_{1} + \sqrt{\Delta}}{2} \: , \tag{8}
# $$
# where
# <a id='eq9'></a>
#
# $$
# \Delta = p_{1}^{2} - 4 \, p_{0} \: . \tag{9}
# $$
# The cells below use the equations [8](#eq8) and [9](#eq9) to compute the root $\lambda$.
delta = p1**2 - 4.*p2*p0
lamb = (-p1 + np.sqrt(delta))/(2.*p2)
print 'lambda = %.5f' % lamb
# By substituing $\lambda$ in [equation 4](#eq4), we can verify that it is a root of $f(\rho)$.
f_lamb = p2*(lamb**2) + p1*lamb + p0
print 'f(lambda) = %.5f' % f_lamb
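# The same root can be cross-checked with `numpy.roots` (a sketch in Python 3 syntax, restating the constants so it is self-contained):

```python
import numpy as np

a, b, x, y, z = 11., 20., 21., 23., 30.
p2 = 1.
p1 = a**2 + b**2 - x**2 - y**2 - z**2
p0 = (a*b)**2 - (b*x)**2 - (a*y)**2 - (a*z)**2
# The largest root of the quadratic f(rho) is lambda
lamb = np.max(np.roots([p2, p1, p0]).real)
print(lamb)
```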
# ### References
# <a id='webster-dynamics'></a>
#
# * <NAME>. 1904. The Dynamics of Particles and of Rigid, Elastic and Fluid Bodies. University of Michigan.
| code/lambda_oblate_ellipsoids.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# +
import numpy as np
from calcbsimpvol import calcbsimpvol
S = np.asarray(268.55) # Underlying Price
K = np.asarray([275.0]) # Strike Price
tau = np.asarray([9/365]) # Time to Maturity
r = np.asarray(0.01) # Interest Rate
q = np.asarray(0.00) # Dividend Rate
cp = np.asarray(1) # Option Type
P = np.asarray([0.31]) # Market Price
imp_vol = calcbsimpvol(dict(cp=cp, P=P, S=S, K=K, tau=tau, r=r, q=q))
print(imp_vol)
# -
| imp_vol.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Export
#
# <img align="right" width="50%" src="./images/export_app.png">
#
#
# This application lets users export objects and data stored in `geoh5` to various file formats.
#
# The app currently supports:
#
# - ESRI Shapefile (**shp**)
# - Column file (**csv**)
# - Geotiff (1 or 3-band) (**tiff**)
# - UBC-GIF (Tensor or OcTree) (**msh, mod**)
#
#
# New user? Visit the [Getting Started](../installation.rst) page.
# ## Application
# The following sections provide details on the different parameters controlling the application. Interactive widgets shown below are for demonstration purposes only.
# +
from geoapps.export import Export
app = Export(h5file=r"../../../assets/FlinFlon.geoh5")
app()
# -
# ## Project Selection
#
# Select and connect to an existing **geoh5** project file containing data.
app.project_panel
# See the [Project Panel](base_application.ipynb#Project-Panel) page for more details.
# ## Object and Data Selection
#
# List of objects available for export from the target `geoh5` project.
app.data_panel
# ## Output type
#
# List of file formats currently supported.
app.file_type
# ### ESRI Shapefile
#
# Export option to **.shp** file format for `Points`, `Curve` objects.
# #### Projection
#
# Coordinate system assigned to the shapefile, either as ESRI, EPSG or WKT code.
app.projection_panel
# ### Comma Separated Values
#
# Export data to **csv** file format. The x, y and z coordinates of every node/cell are appended to the list of data by default.
# ### Geotiff
#
# Export option to **.geotiff** for `Grid2D` objects.
# #### Projection
#
# Coordinate system assigned to the geotiff, either as ESRI, EPSG or WKT code.
app.projection_panel
# #### Type
# Data type options exported to geotiff
app.data_type
# - **Float**: Single-band image containing the float value of selected data.
# - **RGB**: 3-band image containing the RGB color displayed in Geoscience ANALYST.
# ### UBC Model
#
# Export option for `BlockModel` and `Octree` objects to UBC mesh (**.msh**) and model (**.mod**) format.
# ## Output Panel
#
# Trigger the computation routine and store the result.
app.output_panel
# See the [Output Panel](base_application.ipynb#Output-Panel) page for more details.
app.plot_result = True
app.trigger.click()
# Need help? Contact us at <EMAIL>
| docs/content/applications/export.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A4. Loading sensortran files
# This example loads sensortran files. Only single-ended measurements are currently supported.
# Sensortran files are in binary format. The library requires the `*BinaryRawDTS.dat` and `*BinaryTemp.dat` files.
# +
import os
import glob
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
from dtscalibration import read_sensortran_files
# -
# The example data files are located in `./python-dts-calibration/tests/data`.
filepath = os.path.join('..', '..', 'tests', 'data', 'sensortran_binary')
print(filepath)
# +
filepathlist = sorted(glob.glob(os.path.join(filepath, '*.dat')))
filenamelist = [os.path.basename(path) for path in filepathlist]
for fn in filenamelist:
print(fn)
# -
# We will simply load in the binary files
ds = read_sensortran_files(directory=filepath)
# The object tries to gather as much metadata from the measurement files as possible (temporal and spatial coordinates, filenames, temperature probe measurements). All other configuration settings are loaded from the first file and stored as attributes of the `DataStore`. Sensortran's data files contain less information than the other manufacturers' devices, one missing item being the acquisition time. The acquisition time is needed for estimating variances, and is set to a constant 1 s.
print(ds)
# The sensortran files differ from those of other manufacturers in that they return the 'counts' of the Stokes and anti-Stokes signals. These are not corrected for offsets, which has to be done manually for proper calibration.
#
# Based on the data available in the binary files, the library estimates a zero-count to correct the signals, but this is not perfectly accurate or constant over time. For proper calibration, the offsets would have to be incorporated into the calibration routine.
# +
ds0 = ds.isel(time=0)
plt.figure()
ds0.ST.plot(label='Stokes signal')
plt.axhline(ds0.ST_zero.values, c='r', label="'zero' measurement")
plt.legend()
plt.title('')
plt.axhline(c='k')
# -
# After a correction and rescaling (for human readability) the data will look more like other manufacturers' devices
ds['ST'] = (ds.ST - ds.ST_zero)/1e4
ds['AST'] = (ds.AST - ds.AST_zero)/1e4
ds.isel(time=0).ST.plot(label='Stokes intensity')
ds.isel(time=0).AST.plot(label='anti-Stokes intensity')
plt.legend()
plt.axhline(c='k', lw=1)
plt.xlabel('')
plt.title('')
plt.ylim([-50,500])
| examples/notebooks/A4Load_sensortran_files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Imports
import pyaurorax
import datetime
import pandas as pd
# # Search
# set values
# search for conjunctions between any THEMIS-ASI instrument and any Swarm instrument
start = datetime.datetime(2020, 1, 1, 0, 0, 0)
end = datetime.datetime(2020, 1, 1, 6, 59, 59)
ground_params = [
{
"programs": ["themis-asi"]
}
]
space_params = [
{
"programs": ["swarm"]
}
]
distance = 100
# perform search
s = pyaurorax.conjunctions.search(start=start,
end=end,
ground=ground_params,
space=space_params,
default_distance=distance,
verbose=True)
# output data as a pandas dataframe
conjunctions = [c.__dict__ for c in s.data]
df = pd.DataFrame(conjunctions)
df.sort_values("start")
# # Search with metadata filters
# set values
# search for conjunctions between any (THEMIS-ASI or REGO) instrument and (any Swarm instrument
# with north B trace region = "north polar cap")
start = datetime.datetime(2019, 2, 1, 0, 0, 0)
end = datetime.datetime(2019, 2, 10, 23, 59, 59)
ground_params = [{
"programs": ["themis-asi", "rego"]
}]
space_params = [{
"programs": ["swarm"],
"ephemeris_metadata_filters": [
{
"key": "nbtrace_region",
"operator": "=",
"values": [
"north polar cap"
]
}
]
}]
# perform search
s = pyaurorax.conjunctions.search(start=start,
end=end,
ground=ground_params,
space=space_params,
default_distance=distance,
verbose=True)
# output data as a pandas dataframe
conjunctions = [c.__dict__ for c in s.data]
df = pd.DataFrame(conjunctions)
df.sort_values("start")
# # Search with multiple ground and space instruments and custom distances
# set values
# search for conjunctions between (any REGO or TREX instrument) and (any Swarm or THEMIS spacecraft)
start = datetime.datetime(2020, 1, 1, 0, 0, 0)
end = datetime.datetime(2020, 1, 4, 23, 59, 59)
ground_params = [
{
"programs": ["rego"]
},
{
"programs": ["trex"]
}
]
space_params = [
{
"programs": ["swarm"]
},
{
"programs": ["themis"]
}
]
distances = {
"ground1-ground2": None,
"ground1-space1": 500,
"ground1-space2": 500,
"ground2-space1": 500,
"ground2-space2": 500,
"space1-space2": None
}
# perform search
s = pyaurorax.conjunctions.search(start=start,
end=end,
ground=ground_params,
space=space_params,
default_distance=distance,
max_distances=distances,
verbose=True)
# output data as a pandas dataframe
conjunctions = [c.__dict__ for c in s.data]
df = pd.DataFrame(conjunctions)
df.sort_values("start")
# # Search between space instruments only
# set values
# search for conjunctions between any (Swarm A or Swarm B) instrument
# and (any THEMIS instrument with south B trace region = "south polar cap")
start = datetime.datetime(2019, 1, 1, 0, 0, 0)
end = datetime.datetime(2019, 1, 1, 23, 59, 59)
ground_params = []
space_params = [
{
"programs": ["swarm"],
"platforms": ["swarma", "swarmb"],
"hemisphere": ["southern"],
"ephemeris_metadata_filters": [
{
"key": "sbtrace_region",
"operator": "=",
"values": [
"south polar cap"
]
}
],
},
{
"programs": ["themis"],
}
]
distances = {
"space1-space2": 1000
}
# perform search
s = pyaurorax.conjunctions.search(start=start,
end=end,
ground=ground_params,
space=space_params,
max_distances=distances,
verbose=True)
# output data as a pandas dataframe
conjunctions = [c.__dict__ for c in s.data]
df = pd.DataFrame(conjunctions)
df.sort_values("start")
| examples/notebooks/search_conjunctions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dev
# language: python
# name: dev
# ---
# # Bitcoin Collections
#
# In this activity, you’ll import the data for the second Bitcoin dataset into a Pandas DataFrame.
#
# Instructions
#
# 1. Use the `Path` module with the `read_csv` function to read the `bitcoin_2` csv file into the DataFrame. Be sure to set the `DatetimeIndex`.
#
# 2. Using the Pandas `head` function, review the first five rows of the DataFrame to confirm the import.
#
# 3. Using the Pandas `tail` function, review the last five rows of the DataFrame.
#
# ### Import the required libraries and dependencies
# Import the required libraries and dependencies including
# Pandas and pathlib .
import pandas as pd
from pathlib import Path
# ### Import the bitcoin_1 CSV file and create a bitcoin_1 DataFrame
# +
# Read in the CSV file called "bitcoin_1.csv" using the Path module.
# The CSV file is located in the Resources folder.
# Set the index to the column "Timestamp"
# Set the parse_dates and infer_datetime_format parameters
bitcoin_1 = pd.read_csv(
Path('./Resources/bitcoin_1.csv'),
index_col="Timestamp",
parse_dates=True,
infer_datetime_format=True
)
# Review the first 5 rows of the 'bitcoin_1' DataFrame
bitcoin_1.head()
# -
# ## Step 1: Use the `Path` module with the `read_csv` function to read the `bitcoin_2` csv file into the DataFrame. Be sure to set the `DatetimeIndex`.
# Read in the CSV file called "bitcoin_2.csv" using the Path module.
# The CSV file is located in the Resources folder.
# Set the index to the column "Timestamp"
# Set the parse_dates and infer_datetime_format parameters
bitcoin_2 = pd.read_csv(
Path('./Resources/bitcoin_2.csv'),
index_col="Timestamp",
parse_dates=True,
infer_datetime_format=True
)
# ## Step 2: Using the Pandas `head` function, review the first five rows of the DataFrame to confirm the import.
# Using the head function, review the first 5 rows of the DataFrame
bitcoin_2.head()
# ## Step 3: Using the Pandas `tail` function, review the last five rows of the DataFrame.
# Using the tail function, review the last 5 rows of the DataFrame
bitcoin_2.tail()
| 01_Bitcoin_Collections/Bitcoin_Collections.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Think Bayes: Chapter 2
#
# This notebook presents example code and exercise solutions for Think Bayes.
#
# Copyright 2016 <NAME>
#
# MIT License: https://opensource.org/licenses/MIT
# +
from __future__ import print_function, division
%matplotlib inline
from thinkbayes2 import Hist, Pmf, Suite
# -
# ## The Pmf class
#
# I'll start by making a Pmf that represents the outcome of a six-sided die. Initially there are 6 values with equal probability.
# +
pmf = Pmf()
for x in [1,2,3,4,5,6]:
pmf[x] = 1
pmf.Print()
# -
# To be true probabilities, they have to add up to 1. So we can normalize the Pmf:
pmf.Normalize()
# The return value from `Normalize` is the sum of the probabilities before normalizing.
pmf.Print()
# A faster way to make a Pmf is to provide a sequence of values. The constructor adds the values to the Pmf and then normalizes:
pmf = Pmf([1,2,3,4,5,6])
pmf.Print()
# To extract a value from a Pmf, you can use `Prob`
pmf.Prob(1)
# Or you can use the bracket operator. Either way, if you ask for the probability of something that's not in the Pmf, the result is 0.
pmf[1]
# ## The cookie problem
#
# Here's a Pmf that represents the prior distribution.
pmf = Pmf()
pmf['Bowl 1'] = 0.5
pmf['Bowl 2'] = 0.5
pmf.Print()
# And we can update it using `Mult`
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
pmf.Print()
# Or here's the shorter way to construct the prior.
pmf = Pmf(['Bowl 1', 'Bowl 2'])
pmf.Print()
# And we can use `*=` for the update.
pmf['Bowl 1'] *= 0.75
pmf['Bowl 2'] *= 0.5
pmf.Print()
# Either way, we have to normalize the posterior distribution.
pmf.Normalize()
pmf.Print()
# ## The Bayesian framework
#
# Here's the same computation encapsulated in a class.
class Cookie(Pmf):
"""A map from string bowl ID to probability."""
def __init__(self, hypos):
"""Initialize self.
hypos: sequence of string bowl IDs
"""
Pmf.__init__(self)
for hypo in hypos:
self.Set(hypo, 1)
self.Normalize()
def Update(self, data):
"""Updates the PMF with new data.
data: string cookie type
"""
for hypo in self.Values():
like = self.Likelihood(data, hypo)
self.Mult(hypo, like)
self.Normalize()
mixes = {
'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}
def Likelihood(self, data, hypo):
"""The likelihood of the data under the hypothesis.
data: string cookie type
hypo: string bowl ID
"""
mix = self.mixes[hypo]
like = mix[data]
return like
# We can confirm that we get the same result.
pmf = Cookie(['Bowl 1', 'Bowl 2'])
pmf.Update('vanilla')
pmf.Print()
# But this implementation is more general; it can handle any sequence of data.
# +
dataset = ['vanilla', 'chocolate', 'vanilla']
for data in dataset:
pmf.Update(data)
pmf.Print()
# -
# ## The Monty Hall problem
#
# The Monty Hall problem might be the most contentious question in
# the history of probability. The scenario is simple, but the correct
# answer is so counterintuitive that many people just can't accept
# it, and many smart people have embarrassed themselves not just by
# getting it wrong but by arguing the wrong side, aggressively,
# in public.
#
# Monty Hall was the original host of the game show *Let's Make a
# Deal*. The Monty Hall problem is based on one of the regular
# games on the show. If you are on the show, here's what happens:
#
# * Monty shows you three closed doors and tells you that there is a
# prize behind each door: one prize is a car, the other two are less
# valuable prizes like peanut butter and fake finger nails. The
# prizes are arranged at random.
#
# * The object of the game is to guess which door has the car. If
# you guess right, you get to keep the car.
#
# * You pick a door, which we will call Door A. We'll call the
# other doors B and C.
#
# * Before opening the door you chose, Monty increases the
# suspense by opening either Door B or C, whichever does not
# have the car. (If the car is actually behind Door A, Monty can
# safely open B or C, so he chooses one at random.)
#
# * Then Monty offers you the option to stick with your original
# choice or switch to the one remaining unopened door.
#
# The question is, should you "stick" or "switch" or does it
# make no difference?
#
# Most people have the strong intuition that it makes no difference.
# There are two doors left, they reason, so the chance that the car
# is behind Door A is 50%.
#
# But that is wrong. In fact, the chance of winning if you stick
# with Door A is only 1/3; if you switch, your chances are 2/3.
#
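Before the Bayesian treatment, the 1/3 vs. 2/3 claim can be checked with a quick Monte Carlo simulation (stdlib only; not part of the book's code):

```python
import random

def monty_trial(switch, rng):
    """Play one round of the Monty Hall game; return True if we win the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = 0  # we always pick Door A
    # Monty opens a door that is neither our pick nor the car
    openable = [d for d in doors if d != pick and d != car]
    opened = rng.choice(openable)
    if switch:
        # switch to the one remaining unopened door
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
rng = random.Random(42)
stick_wins = sum(monty_trial(False, rng) for _ in range(n)) / n
rng = random.Random(42)
switch_wins = sum(monty_trial(True, rng) for _ in range(n)) / n
print(stick_wins, switch_wins)
```

With 100,000 trials each, sticking wins roughly a third of the time and switching roughly two thirds, matching the analysis below.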
# Here's a class that solves the Monty Hall problem.
class Monty(Pmf):
"""Map from string location of car to probability"""
def __init__(self, hypos):
"""Initialize the distribution.
hypos: sequence of hypotheses
"""
Pmf.__init__(self)
for hypo in hypos:
self.Set(hypo, 1)
self.Normalize()
def Update(self, data):
"""Updates each hypothesis based on the data.
data: any representation of the data
"""
for hypo in self.Values():
like = self.Likelihood(data, hypo)
self.Mult(hypo, like)
self.Normalize()
def Likelihood(self, data, hypo):
"""Compute the likelihood of the data under the hypothesis.
hypo: string name of the door where the prize is
data: string name of the door Monty opened
"""
if hypo == data:
return 0
elif hypo == 'A':
return 0.5
else:
return 1
# And here's how we use it.
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
# ## The Suite class
#
# Most Bayesian updates look pretty much the same, especially the `Update` method. So we can encapsulate the framework in a class, `Suite`, and create new classes that extend it.
#
# Child classes of `Suite` inherit `Update` and provide `Likelihood`. So here's the short version of `Monty`
class Monty(Suite):
def Likelihood(self, data, hypo):
if hypo == data:
return 0
elif hypo == 'A':
return 0.5
else:
return 1
# And it works.
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
# ## The M&M problem
#
# M&Ms are small candy-coated chocolates that come in a variety of
# colors. Mars, Inc., which makes M&Ms, changes the mixture of
# colors from time to time.
#
# In 1995, they introduced blue M&Ms. Before then, the color mix in
# a bag of plain M&Ms was 30% Brown, 20% Yellow, 20% Red, 10%
# Green, 10% Orange, 10% Tan. Afterward it was 24% Blue , 20%
# Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown.
#
# Suppose a friend of mine has two bags of M&Ms, and he tells me
# that one is from 1994 and one from 1996. He won't tell me which is
# which, but he gives me one M&M from each bag. One is yellow and
# one is green. What is the probability that the yellow one came
# from the 1994 bag?
#
# Here's a solution:
class M_and_M(Suite):
"""Map from hypothesis (A or B) to probability."""
mix94 = dict(brown=30,
yellow=20,
red=20,
green=10,
orange=10,
tan=10,
blue=0)
mix96 = dict(blue=24,
green=20,
orange=16,
yellow=14,
red=13,
brown=13,
tan=0)
hypoA = dict(bag1=mix94, bag2=mix96)
hypoB = dict(bag1=mix96, bag2=mix94)
hypotheses = dict(A=hypoA, B=hypoB)
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
hypo: string hypothesis (A or B)
data: tuple of string bag, string color
"""
bag, color = data
mix = self.hypotheses[hypo][bag]
like = mix[color]
return like
# And here's an update:
suite = M_and_M('AB')
suite.Update(('bag1', 'yellow'))
suite.Update(('bag2', 'green'))
suite.Print()
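As a sanity check (not in the original text), the same two updates can be done with plain arithmetic: under hypothesis A the yellow M&M comes from the 1994 mix (weight 20) and the green from the 1996 mix (weight 20); under hypothesis B the weights are 14 and 10.

```python
# Unnormalized likelihoods of (yellow from bag1, green from bag2)
like_A = 20 * 20  # hypothesis A: bag1 is the 1994 bag
like_B = 14 * 10  # hypothesis B: bag1 is the 1996 bag
posterior_A = like_A / (like_A + like_B)
print(round(posterior_A, 4))
```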
# **Exercise:** Suppose you draw another M&M from `bag1` and it's blue. What can you conclude? Run the update to confirm your intuition.
suite.Update(('bag1', 'blue'))
suite.Print()
# **Exercise:** Now suppose you draw an M&M from `bag2` and it's blue. What does that mean? Run the update to see what happens.
# +
# Solution
# suite.Update(('bag2', 'blue'))
# throws ValueError: Normalize: total probability is zero.
# -
# ## Exercises
# **Exercise:** This one is from one of my favorite books, <NAME>'s "Information Theory, Inference, and Learning Algorithms":
#
# > <NAME> had a twin brother who died at birth. What is the probability that Elvis was an identical twin?
#
# To answer this one, you need some background information: According to the Wikipedia article on twins: "Twins are estimated to be approximately 1.9% of the world population, with monozygotic twins making up 0.2% of the total, and 8% of all twins."
# +
# Solution
# Here's a Pmf with the prior probability that Elvis
# was an identical twin (taking the fact that he was a
# twin as background information)
pmf = Pmf(dict(fraternal=0.92, identical=0.08))
# +
# Solution
# And here's the update. The data is that the other twin
# was also male, which has likelihood 1 if they were identical
# and only 0.5 if they were fraternal.
pmf['fraternal'] *= 0.5
pmf['identical'] *= 1
pmf.Normalize()
pmf.Print()
# -
# **Exercise:** Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability `p` and C with probability `1-p`. If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of `p`? What if Monty opens C?
#
# Hint: you might want to use SymPy to do the algebra for you.
from sympy import symbols
p = symbols('p')
# +
# Solution
# Here's the solution if Monty opens B.
pmf = Pmf('ABC')
pmf['A'] *= p
pmf['B'] *= 0
pmf['C'] *= 1
pmf.Normalize()
pmf['A'].simplify()
# +
# Solution
# When p=0.5, the result is what we saw before
pmf['A'].evalf(subs={p:0.5})
# +
# Solution
# When p=0.0, we know for sure that the prize is behind C
pmf['C'].evalf(subs={p:0.0})
# +
# Solution
# And here's the solution if Monty opens C.
pmf = Pmf('ABC')
pmf['A'] *= 1-p
pmf['B'] *= 1
pmf['C'] *= 0
pmf.Normalize()
pmf['A'].simplify()
# -
# **Exercise:** According to the CDC, "Compared to nonsmokers, men who smoke are about 23 times more likely to develop lung cancer and women who smoke are about 13 times more likely." Also, among adults in the U.S. in 2014:
#
# > Nearly 19 of every 100 adult men (18.8%)
# > Nearly 15 of every 100 adult women (14.8%)
#
# If you learn that a woman has been diagnosed with lung cancer, and you know nothing else about her, what is the probability that she is a smoker?
# +
# Solution
# In this case, we can't compute the likelihoods individually;
# we only know the ratio of one to the other. But that's enough.
# Two ways to proceed: we could include a variable in the computation,
# and we would see it drop out.
# Or we can use "unnormalized likelihoods", for want of a better term.
# Here's my solution.
pmf = Pmf(dict(smoker=15, nonsmoker=85))
pmf['smoker'] *= 13
pmf['nonsmoker'] *= 1
pmf.Normalize()
pmf.Print()
# -
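The same update can be verified with plain arithmetic, without `thinkbayes2`: multiply the prior odds by the likelihood ratio and normalize.

```python
# Prior: about 15% of adult women smoke; likelihood ratio for lung cancer is 13:1
p_smoker = 0.15 * 13
p_nonsmoker = 0.85 * 1
posterior = p_smoker / (p_smoker + p_nonsmoker)
print(round(posterior, 4))
```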
# **Exercise** In Section 2.3 I said that the solution to the cookie problem generalizes to the case where we draw multiple cookies with replacement.
#
# But in the more likely scenario where we eat the cookies we draw, the likelihood of each draw depends on the previous draws.
#
# Modify the solution in this chapter to handle selection without replacement. Hint: add instance variables to Cookie to represent the hypothetical state of the bowls, and modify Likelihood accordingly. You might want to define a Bowl object.
# +
# Solution
# We'll need an object to keep track of the number of cookies in each bowl.
# I use a Hist object, defined in thinkbayes2:
bowl1 = Hist(dict(vanilla=30, chocolate=10))
bowl2 = Hist(dict(vanilla=20, chocolate=20))
bowl1.Print()
# +
# Solution
# Now I'll make a Pmf that contains the two bowls, giving them equal probability.
pmf = Pmf([bowl1, bowl2])
pmf.Print()
# +
# Solution
# Here's a likelihood function that takes `hypo`, which is one of
# the Hist objects that represents a bowl, and `data`, which is either
# 'vanilla' or 'chocolate'.
# `likelihood` computes the likelihood of the data under the hypothesis,
# and as a side effect, it removes one of the cookies from `hypo`
def likelihood(hypo, data):
like = hypo[data] / hypo.Total()
if like:
hypo[data] -= 1
return like
# +
# Solution
# Now for the update. We have to loop through the hypotheses and
# compute the likelihood of the data under each hypothesis.
def update(pmf, data):
for hypo in pmf:
pmf[hypo] *= likelihood(hypo, data)
return pmf.Normalize()
# +
# Solution
# Here's the first update. The posterior probabilities are the
# same as what we got before, but notice that the number of cookies
# in each Hist has been updated.
update(pmf, 'vanilla')
pmf.Print()
# +
# Solution
# So when we update again with a chocolate cookie, we get different
# likelihoods, and different posteriors.
update(pmf, 'chocolate')
pmf.Print()
# +
# Solution
# If we get 10 more chocolate cookies, that eliminates Bowl 1 completely
for i in range(10):
update(pmf, 'chocolate')
print(pmf[bowl1])
# -
| code/chap02soln.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # iterations
# +
# The sequence we want to analyze
seq = 'GACAGACUCCAUGCACGUGGGUAUCUGUC'
# Initialize GC counter
n_gc = 0
# Initialize sequence length
len_seq = 0
# Loop through the sequence and count the G's and C's
for base in seq:
    len_seq += 1
    if base in "GCgc":
        n_gc += 1

# Divide by the length of the sequence
n_gc / len_seq
# -
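As an aside (not part of the lesson), the same GC fraction can be computed in one line with a generator expression, since `True` counts as 1 when summed:

```python
seq = 'GACAGACUCCAUGCACGUGGGUAUCUGUC'

# Count bases that are G or C (either case) and divide by the length
gc_frac = sum(base in "GCgc" for base in seq) / len(seq)
print(round(gc_frac, 4))
```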
"g" in "GCgc"
len(seq)
# +
# We'll do one through 5
my_integers = [1, 2, 3, 4, 5]
# Double each one
for n in my_integers:
n *= 2
# Check out the result
my_integers
# -
for i in range(10):
print(i, end=' ')
# +
for i in range(2, 10):
print(i, end=' ')
# Print a newline
print()
# Print even numbers, 2 through 9
for i in range(2, 10, 2):
print(i, end=' ')
# -
list(range(10))
# +
my_integers = [1, 2, 3, 4, 5]
# Since len(my_integers) = 5, this takes i from 0 to 4,
# exactly the indices of my_integers
for i in range(len(my_integers)):
my_integers[i] *= 2
my_integers
# +
# Initialize sequence length
len_seq = 0

# Loop through sequence and print index of G's
for base in seq:
    if base in 'Gg':
        print(len_seq, end=' ')
    len_seq += 1
# +
# The enumerate function
# Initialize G counter
n_g = 0
# Loop through sequence and print index of G's
for i, base in enumerate(seq):
if base in 'Gg':
print(i, end=' ')
# -
# Print index and identity of bases
for i, base in enumerate(seq):
print(i, base)
# +
my_integers = [0, 1, 2, 3, 4, 5, 6]
for i, _ in enumerate(my_integers):
    my_integers[i] *= 2
my_integers
# +
names = ('Dunn', 'Ertz', 'Lavelle', 'Rapinoe')
positions = ('D', 'MF', 'MF', 'F')
numbers = (19, 8, 16, 15)
for num, pos, name in zip(numbers, positions, names):
print(num, name, pos)
# +
count_up = ('ignition', 1, 2, 3, 4, 5, 6, 7, 8 ,9, 10)
for count in reversed(count_up):
print(count)
# -
# # While
# +
# Define start codon
start_codon = 'AUG'
# Initialize sequence index
i = 0
# Scan sequence until we hit the start codon
while seq[i:i+3] != start_codon:
    i += 1

# Show the result
print('The start codon starts at index', i)
# +
# Define codon of interest
codon = 'GCC'
# Initialize sequence index
i = 0
# Scan sequence until we hit the start codon or the end of the sequence
while seq[i:i+3] != codon and i < len(seq):
i += 1
# Show the result
if i == len(seq):
print('Codon not found in sequence.')
else:
print('The codon starts at index', i)
# -
# # break and else
# +
# Define start codon
start_codon = 'AUG'
# Scan sequence until we hit the start codon
for i in range(len(seq)):
if seq[i:i+3] == start_codon:
print('The start codon starts at index', i)
break
else:
print('Codon not found in sequence.')
# +
my_integers = [0, 2, 3, 5, 6]
my_integers_reversed = my_integers[-1::-1]
print(my_integers_reversed)
for i, _ in enumerate(my_integers_reversed):
    my_integers_reversed[i] += 2
print(my_integers_reversed)
# -
my_integers[-1::-1]
| lesson_06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="WTkoaG9Jzkey"
import re
from collections import Counter
from tqdm import tqdm
import json
from torch.utils.data import Dataset, DataLoader
import torch
# + [markdown] colab_type="text" id="kdhZGkwZzke2"
# # ConvAI dataset
#
# 
#
#
#
# #### How would you solve this problem based on what you have learned so far?
# + [markdown] colab_type="text" id="bHYcyMp6XeoO"
# ## Raw Data
#
# This is how a raw input/target sample from the training data of the ConvAI dataset looks:
#
# ```
# {
# "text": "your persona: i had a gig at local theater last night.\nyour persona: i work as a stand up comedian.\nyour persona: i come from a small town.\nyour persona: my favorite drink is cuba libre.\nyour persona: i did a few small roles in tv series.\nwe all live in a yellow submarine , a yellow submarine . morning !\nhi ! that is a great line for my next stand up .\nlol . i am shy , anything to break the ice , and i am a beatles fan .\ni can tell . i am not , you can see me in some tv shows\nreally ? what shows ? i like tv , it makes me forget i do not like my family\nwow , i wish i had a big family . i grew up in a very small town .\ni did too . i do not get along with mine . they have no class .\njust drink some cola with rum and you'll forget about them !\nput the lime in the coconut as well . . .\nnah , plain cuba libre , that's what we drank yesterday at the theater .\ni prefer mojitos . watermelon or cucumber .",
# "labels": ["those are really yummy too , but not my favorite ."],
# "reward": 0,
# "episode_done": true,
# "id": "convai2:self:no_cands"
# }
# ```
#
# * "text" is the input we need = [your persona + dialogue so far]
#
# * "labels" is the output we need = [sentence that model should response]
#
# ## Tokenization
#
# Here tokenization is done using a regular expression, as in the ParlAI framework (where the dataset comes from).
# + colab={} colab_type="code" id="RevAPsmCzke3"
RETOK = re.compile(r'\w+|[^\w\s]|\n', re.UNICODE)
# + colab={"base_uri": "https://localhost:8080/", "height": 450} colab_type="code" executionInfo={"elapsed": 940, "status": "ok", "timestamp": 1584066480343, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="W_JMpWkEzke5" outputId="f960d122-8d30-4cfd-edbd-7f5e28dec0a0"
# example of parsed text
RETOK.findall('your persona: i had a gig at local theater last night.\nyour persona: i work as a stand up comedian.')
# + [markdown] colab_type="text" id="4Eh_8POhzke8"
# # ConvAI dictionary
#
# The dataset comes with a precomputed dictionary; it looks like this:
#
# For each word there is a corresponding count. The counts for the special symbols are artificial placeholders, not real counts.
#
# ```
# __null__ 1000000003
# __start__ 1000000002
# __end__ 1000000001
# __unk__ 1000000000
# . 276863
# i 270789
# you 93655
# your 91941
# a 89140
# ? 85346
# persona 80372
# \n 80365
# : 80365
# , 79513
# to 79240
# my 73999
# ' 68126
# do 55199
# is 53581
# the 49955
# ```
#
# The `ChatDictionary` class implements loading of that file, along with a few helper functions.
# + colab={} colab_type="code" id="7UboLeurzke9"
class ChatDictionary(object):
"""
Simple dict loader
"""
def __init__(self, dict_file_path):
self.word2ind = {} # word:index
self.ind2word = {} # index:word
self.counts = {} # word:count
dict_raw = open(dict_file_path, 'r').readlines()
for i, w in enumerate(dict_raw):
_word, _count = w.strip().split('\t')
if _word == '\\n':
_word = '\n'
self.word2ind[_word] = i
self.ind2word[i] = _word
self.counts[_word] = _count
def t2v(self, tokenized_text):
return [self.word2ind[w] if w in self.counts else self.word2ind['__unk__'] for w in tokenized_text]
def v2t(self, list_ids):
return ' '.join([self.ind2word[i] for i in list_ids])
def pred2text(self, tensor):
    result = []
    for i in range(tensor.size(0)):
        token = self.ind2word[tensor[i].item()]
        if token == '__end__' or token == '__null__':  # null is pad
            break
        result.append(token)
    return ' '.join(result)
def __len__(self):
return len(self.counts)
# + [markdown] colab_type="text" id="o8c6mwy5zke_"
# # Dataset class
#
# The `ChatDataset` class should be familiar by now; nothing fancy there.
# + colab={} colab_type="code" id="jGpA2vYIzkfA"
class ChatDataset(Dataset):
"""
Json dataset wrapper
"""
def __init__(self, dataset_file_path, dictionary, dt='train'):
super().__init__()
json_text = open(dataset_file_path, 'r').readlines()
self.samples = []
for sample in tqdm(json_text):
sample = sample.rstrip()
sample = json.loads(sample)
_inp_toked = RETOK.findall(sample['text'])
_inp_toked_id = dictionary.t2v(_inp_toked)
sample['text_vec'] = torch.tensor(_inp_toked_id, dtype=torch.long)
# train and valid have different key names for target
if dt == 'train':
_tar_toked = RETOK.findall(sample['labels'][0]) + ['__end__']
elif dt == 'valid':
_tar_toked = RETOK.findall(sample['eval_labels'][0]) + ['__end__']
_tar_toked_id = dictionary.t2v(_tar_toked)
sample['target_vec'] = torch.tensor(_tar_toked_id, dtype=torch.long)
self.samples.append(sample)
def __getitem__(self, i):
return self.samples[i]['text_vec'], self.samples[i]['target_vec']
def __len__(self):
return len(self.samples)
# + [markdown] colab_type="text" id="nuMz60avzkfC"
# # Padding and batching
#
# `pad_tensor` function implements padding of a given tensor using the specified PAD token.
#
# `batchify` uses the previous function to build a minibatch that is ready to be packed.
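The padding logic can be sketched with plain Python lists, independent of torch (an illustration, not the class code): each sequence is right-padded with the pad token up to the longest sequence in the batch, and the original lengths are kept for later packing.

```python
def pad_lists(seqs, pad_token=0):
    """Right-pad a list of token lists to equal length; return padded lists and lengths."""
    lengths = [len(s) for s in seqs]
    max_len = max(lengths)
    padded = [s + [pad_token] * (max_len - len(s)) for s in seqs]
    return padded, lengths

batch = [[5, 6, 7], [1, 2], [9]]
padded, lengths = pad_lists(batch)
print(padded, lengths)
```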
# + colab={} colab_type="code" id="GtILdxv1zkfD"
def pad_tensor(tensors, sort=True, pad_token=0):
rows = len(tensors)
lengths = [len(i) for i in tensors]
max_t = max(lengths)
output = tensors[0].new(rows, max_t)
output.fill_(pad_token) # 0 is a pad token here
for i, (tensor, length) in enumerate(zip(tensors, lengths)):
output[i,:length] = tensor
return output, lengths
def batchify(batch):
inputs = [i[0] for i in batch]
labels = [i[1] for i in batch]
input_vecs, input_lens = pad_tensor(inputs)
label_vecs, label_lens = pad_tensor(labels)
return {
"text_vecs": input_vecs,
"text_lens": input_lens,
"target_vecs": label_vecs,
"target_lens": label_lens,
}
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" executionInfo={"elapsed": 13104, "status": "ok", "timestamp": 1584066492537, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="p8PmLljdzkfF" outputId="ee9696b8-2471-458e-8eb4-7a6c3d0a01a9"
# loading datasets and dictionary
# downloading pretrained models and data
### DOWNLOADING THE FILES
import os
### persona chat dataset
if not os.path.exists('./dict'):
# !wget "https://nyu.box.com/shared/static/sj9f87tofpicll89xbc154pmbztu5q4h" -O './dict'
if not os.path.exists('./train.jsonl'):
# !wget "https://nyu.box.com/shared/static/aqp0jyjaixjmukm5asasivq2bcfze075.jsonl" -O './train.jsonl'
if not os.path.exists('./valid.jsonl'):
# !wget "https://nyu.box.com/shared/static/eg4ivddtqib2hkf1k8rkxnmzmo0cq27p.jsonl" -O './valid.jsonl'
if not os.path.exists('./chat_model_best_22.pt'):
# !wget "https://nyu.box.com/shared/static/24zsynuks8nzg7530tgakzh8o62id9xa.pt" -O './chat_model_best_22.pt'
chat_dict = ChatDictionary('./dict')
train_dataset = ChatDataset('./train.jsonl', chat_dict)
valid_dataset = ChatDataset('./valid.jsonl', chat_dict, 'valid')
# -
# This is what the input to our model looks like now:
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" executionInfo={"elapsed": 13084, "status": "ok", "timestamp": 1584066492539, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="k-1axU0H8XlI" outputId="ee22d008-e43b-4f8c-989a-7b2d0908d27e"
train_dataset[0]
# + colab={} colab_type="code" id="owl9sDsrzkfH"
train_loader = DataLoader(train_dataset, shuffle=True, collate_fn=batchify, batch_size=256)
valid_loader = DataLoader(valid_dataset, shuffle=False, collate_fn=batchify, batch_size=256)
# + [markdown] colab_type="text" id="jMazqS5YzkfK"
# # Seq2seq model with attention
#
# 
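The core attention computation for a single decoder step can be sketched in plain Python: dot-product scores against each encoder state, a large negative value at masked (padding) positions, a softmax over the scores, and a weighted sum of the encoder states. This is an illustration only, independent of the torch implementation.

```python
import math

def attend(query, keys, mask):
    """Single-step dot-product attention over a list of key vectors."""
    # dot-product scores; masked positions get a large negative value
    scores = [sum(q * k for q, k in zip(query, key)) if m else -1e6
              for key, m in zip(keys, mask)]
    # numerically stable softmax
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # weighted sum of the key vectors gives the context vector
    context = [sum(w * key[d] for w, key in zip(weights, keys))
               for d in range(len(keys[0]))]
    return weights, context

weights, context = attend([1.0, 0.0],
                          [[1.0, 0.0], [0.0, 1.0], [9.9, 9.9]],
                          [True, True, False])
print([round(w, 3) for w in weights])
```

Note that the masked third position gets essentially zero weight even though its raw dot product would be largest.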
# + colab={} colab_type="code" id="8qjSc8aPzkfK"
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class EncoderRNN(nn.Module):
"""Encodes the input context."""
def __init__(self, vocab_size, embed_size, hidden_size, num_layers, pad_idx=0, dropout=0, shared_lt=None):
super().__init__()
self.vocab_size = vocab_size
self.embed_size = embed_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.dropout = nn.Dropout(p=dropout)
self.pad_idx = pad_idx
if shared_lt is None:
self.embedding = nn.Embedding(self.vocab_size, self.embed_size, pad_idx)
else:
# share embedding with decoder
self.embedding = shared_lt
self.gru = nn.GRU(
self.embed_size, self.hidden_size, num_layers=self.num_layers, batch_first=True, dropout=dropout if num_layers > 1 else 0,
)
def forward(self, text_vec, text_lens, hidden=None):
embedded = self.embedding(text_vec)
# build the attention mask: True where the token is not padding, False at pad positions
attention_mask = text_vec.ne(self.pad_idx)
embedded = self.dropout(embedded)
output, hidden = self.gru(embedded, hidden)
return output, hidden, attention_mask
class DecoderRNN(nn.Module):
"""Generates a sequence of tokens in response to context."""
def __init__(self, vocab_size, embed_size, hidden_size, num_layers, dropout=0):
super().__init__()
self.vocab_size = vocab_size
self.embed_size = embed_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.dropout = nn.Dropout(p=dropout)
self.embedding = nn.Embedding(self.vocab_size, self.embed_size, 0)
self.gru = nn.GRU(
self.embed_size, self.hidden_size, num_layers=self.num_layers, batch_first=True, dropout=dropout if num_layers > 1 else 0,
)
self.attention = AttentionLayer(self.hidden_size, self.embed_size)
self.out = nn.Linear(self.hidden_size, self.vocab_size)
self.longest_label = 100
def decode_forced(self, ys, encoder_states, xs_lens):
encoder_output, encoder_hidden, attention_mask = encoder_states
batch_size = ys.size(0)
target_length = ys.size(1)
longest_label = max(target_length, self.longest_label)
starts = torch.Tensor([1]).long().to(self.embedding.weight.device).expand(batch_size, 1).long() # expand to batch size
# Teacher forcing: Feed the target as the next input
y_in = ys.narrow(1, 0, ys.size(1) - 1)
decoder_input = torch.cat([starts, y_in], 1)
decoder_output, decoder_hidden, attn_w_log = self.forward(decoder_input, encoder_hidden, encoder_states)
_, preds = decoder_output.max(dim=2)
return decoder_output, preds, attn_w_log
def forward(self, text_vec, decoder_hidden, encoder_states):
emb = self.embedding(text_vec)
emb = self.dropout(emb)
seqlen = text_vec.size(1)
encoder_output, _, attention_mask = encoder_states
output = []
attn_w_log = []
for i in range(seqlen):
decoder_output, decoder_hidden = self.gru(emb[:,i,:].unsqueeze(1), decoder_hidden)
# compute attention at each time step
decoder_output_attended, attn_weights = self.attention(decoder_output, decoder_hidden, encoder_output, attention_mask)
output.append(decoder_output_attended)
attn_w_log.append(attn_weights)
output = torch.cat(output, dim=1).to(text_vec.device)
scores = self.out(output)
return scores, decoder_hidden, attn_w_log
class AttentionLayer(nn.Module):
def __init__(self, hidden_size, embedding_size):
super().__init__()
input_dim = hidden_size
self.linear_out = nn.Linear(hidden_size+input_dim, input_dim, bias=False)
self.softmax = nn.Softmax(dim=-1)
self.tanh = nn.Tanh()
def forward(self, decoder_output, decoder_hidden, encoder_output, attention_mask):
batch_size, seq_length, hidden_size = encoder_output.size()
encoder_output_t = encoder_output.transpose(1,2)
attention_scores = torch.bmm(decoder_output, encoder_output_t).squeeze(1)
attention_scores.masked_fill_((~attention_mask), -10e5)
attention_weights = self.softmax(attention_scores)
mix = torch.bmm(attention_weights.unsqueeze(1), encoder_output)
combined = torch.cat((decoder_output.squeeze(1), mix.squeeze(1)), dim=1)
output = self.linear_out(combined).unsqueeze(1)
output = self.tanh(output)
return output, attention_weights
class seq2seq(nn.Module):
"""
Generic seq2seq model with attention mechanism.
"""
def __init__(self, opts):
super().__init__()
self.opts = opts
self.decoder = DecoderRNN(
vocab_size=self.opts['vocab_size'],
embed_size=self.opts['embedding_size'],
hidden_size=self.opts['hidden_size'],
num_layers=self.opts['num_layers_dec'],
dropout=self.opts['dropout'],
)
self.encoder = EncoderRNN(
vocab_size=self.opts['vocab_size'],
embed_size=self.opts['embedding_size'],
hidden_size=self.opts['hidden_size'],
num_layers=self.opts['num_layers_enc'],
dropout=self.opts['dropout'],
shared_lt=self.decoder.embedding
)
    def train(self, mode=True):
        """Set both submodules to train/eval mode (keeps self.training in sync)."""
        self.encoder.train(mode)
        self.decoder.train(mode)
        self.training = mode
        return self

    def eval(self):
        return self.train(False)
# + colab={} colab_type="code" id="tynCTwJUzkfN"
num_gpus = torch.cuda.device_count()
if num_gpus > 0:
current_device = 'cuda'
else:
current_device = 'cpu'
load_pretrained = True
if load_pretrained is True:
if current_device == 'cuda':
model_pt = torch.load('./chat_model_best_22.pt')
else:
model_pt = torch.load('./chat_model_best_22.pt', map_location=torch.device('cpu'))
opts = model_pt['opts']
model = seq2seq(opts)
model.load_state_dict(model_pt['state_dict'])
model.to(current_device)
else:
opts = {}
opts['vocab_size'] = len(chat_dict)
opts['hidden_size'] = 512
opts['embedding_size'] = 256
opts['num_layers_enc'] = 2
opts['num_layers_dec'] = 2
opts['dropout'] = 0.3
opts['encoder_shared_lt'] = True
model = seq2seq(opts)
model.to(current_device)
# + colab={} colab_type="code" id="lSCRkORjzkfP"
criterion = nn.CrossEntropyLoss(ignore_index=0, reduction='sum')
optimizer = torch.optim.Adam(model.parameters(), 0.01, amsgrad=True)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=10)
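# Why `ignore_index=0` and `reduction='sum'`: padding tokens (id 0) contribute nothing to
# the loss, and the training loop divides the summed loss by the number of non-pad tokens.
# A minimal sketch with uniform logits (toy values, for illustration only):

```python
import math
import torch
import torch.nn as nn

crit = nn.CrossEntropyLoss(ignore_index=0, reduction='sum')

logits = torch.zeros(3, 5)         # 3 tokens, vocab of 5, uniform scores
targets = torch.tensor([2, 1, 0])  # the last target is padding (id 0)

loss = crit(logits, targets)
num_tokens = targets.ne(0).sum().item()  # count of non-pad tokens

# each non-pad token contributes -log(1/5) = log(5); the pad token contributes nothing
assert abs(loss.item() - 2 * math.log(5)) < 1e-5
assert num_tokens == 2
```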
# + colab={"base_uri": "https://localhost:8080/", "height": 381} colab_type="code" executionInfo={"elapsed": 14473, "status": "error", "timestamp": 1584066493971, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="V7A9Yrv0zkfR" outputId="10960239-8507-4059-ef77-33889b3d7e61"
plot_cache = []
best_val_loss = 100
if not load_pretrained:
for epoch in range(100):
model.train()
sum_loss = 0
sum_tokens = 0
for i, batch in enumerate(train_loader):
optimizer.zero_grad()
            text_vecs = batch['text_vecs'].to(current_device)
            target_vecs = batch['target_vecs'].to(current_device)
encoded = model.encoder(text_vecs, batch['text_lens'])
decoder_output, preds, attn_w_log = model.decoder.decode_forced(target_vecs, encoded, batch['text_lens'])
scores = decoder_output.view(-1, decoder_output.size(-1))
loss = criterion(scores, target_vecs.view(-1))
sum_loss += loss.item()
num_tokens = target_vecs.ne(0).long().sum().item()
loss /= num_tokens
sum_tokens += num_tokens
loss.backward()
optimizer.step()
if i % 100 == 0:
avg_train_loss = sum_loss/sum_tokens
print("iter {} train loss = {}".format(i, sum_loss/sum_tokens))
        val_loss = 0
        val_tokens = 0
        model.eval()
        with torch.no_grad():  # no gradients needed for validation
            for i, batch in enumerate(valid_loader):
                text_vecs = batch['text_vecs'].to(current_device)
                target_vecs = batch['target_vecs'].to(current_device)
                encoded = model.encoder(text_vecs, batch['text_lens'])
                decoder_output, preds, attn_w_log = model.decoder.decode_forced(target_vecs, encoded, batch['text_lens'])
                scores = decoder_output.view(-1, decoder_output.size(-1))
                loss = criterion(scores, target_vecs.view(-1))
                num_tokens = target_vecs.ne(0).long().sum().item()
                val_tokens += num_tokens
                val_loss += loss.item()
avg_val_loss = val_loss/val_tokens
scheduler.step(avg_val_loss)
print("Epoch {} valid loss = {}".format(epoch, avg_val_loss))
plot_cache.append( (avg_train_loss, avg_val_loss) )
if avg_val_loss < best_val_loss:
best_val_loss = avg_val_loss
torch.save({
'state_dict': model.state_dict(),
'opts': opts,
'plot_cache': plot_cache,
}, f'./chat_model_best_{epoch}.pt')
# + colab={} colab_type="code" id="H06IUKipzkfU"
if load_pretrained is True:
plot_cache = model_pt['plot_cache']
# + colab={"base_uri": "https://localhost:8080/", "height": 281} colab_type="code" executionInfo={"elapsed": 963, "status": "ok", "timestamp": 1584066520693, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="8yU7rhB3zkfW" outputId="beacd0be-96b0-4a84-f929-7aa6eff2d9e0"
import matplotlib.pyplot as plt
import numpy
epochs = numpy.array(list(range(len(plot_cache))))
plt.plot(epochs, [i[0] for i in plot_cache], label='Train loss')
plt.plot(epochs, [i[1] for i in plot_cache], label='Valid loss')
plt.legend()
plt.title('Loss curves')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} colab_type="code" executionInfo={"elapsed": 1143, "status": "ok", "timestamp": 1584066520889, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="rHBkh9jFzkfZ" outputId="653dc30c-240c-43fc-a870-55acf0ed5a1c"
import matplotlib.pyplot as plt
import numpy
epochs = numpy.array(list(range(len(plot_cache))))
plt.plot(epochs, [2**(i[0]/numpy.log(2)) for i in plot_cache], label='Train ppl')
plt.plot(epochs, [2**(i[1]/numpy.log(2)) for i in plot_cache], label='Valid ppl')
plt.legend()
plt.title('PPL curves')
plt.show()
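# Since the loss is an average negative log-likelihood in nats, perplexity is simply
# `exp(loss)`; the `2**(loss/log 2)` expression used for the plot is the same quantity.
# A quick sanity check (the loss value is illustrative):

```python
import math

loss = 1.5                             # average NLL in nats (illustrative value)
ppl_exp = math.exp(loss)               # perplexity by definition
ppl_base2 = 2 ** (loss / math.log(2))  # form used in the plotting cell

assert abs(ppl_exp - ppl_base2) < 1e-9
```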
# + colab={} colab_type="code" id="JqnniuRCzkfm"
# saving the model; be careful not to overwrite a good model here
if False:
torch.save({
'state_dict': model.state_dict(),
'opts': opts,
'plot_cache': plot_cache,
}, './chat_model.pt')
# + [markdown] colab_type="text" id="tAg5sK8UtD9s"
# ## Greedy Search
# + colab={} colab_type="code" id="KCa2Nhjszkfc"
def greedy_search(model, batch, batch_size, max_len=100):
model.eval()
text_vecs = batch['text_vecs'].to(current_device)
encoded = model.encoder(text_vecs, batch['text_lens'])
encoder_output, encoder_hidden, attention_mask = encoded
# 1 is __start__
starts = torch.Tensor([1]).long().to(model.decoder.embedding.weight.device).expand(batch_size, 1).long() # expand to batch size
decoder_hidden = encoder_hidden
# greedy decoding here
preds = [starts]
scores = []
# track if each sample in the mini batch is finished
# if all finished, stop predicting
finish_mask = torch.Tensor([0]*batch_size).byte().to(model.decoder.embedding.weight.device)
xs = starts
_attn_w_log = []
for ts in range(max_len):
decoder_output, decoder_hidden, attn_w_log = model.decoder(xs, decoder_hidden, encoded) # decoder_output: [batch, time, vocab]
_scores, _preds = torch.log_softmax(decoder_output, dim=-1).max(dim=-1)
preds.append(_preds)
_attn_w_log.append(attn_w_log)
scores.append(_scores.view(-1)*(finish_mask == 0).float())
finish_mask += (_preds == 2).byte().view(-1)
        if finish_mask.bool().all():  # every sample has produced __end__ (token 2)
            break
xs = _preds
preds = torch.cat(preds, dim=-1)
return preds
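# The per-step selection inside `greedy_search` above reduces to a log-softmax followed
# by a max over the vocabulary dimension. A toy single step (logit values assumed):

```python
import torch

decoder_output = torch.tensor([[[0.1, 2.0, 0.5]]])  # [batch=1, time=1, vocab=3]
scores, preds = torch.log_softmax(decoder_output, dim=-1).max(dim=-1)

assert preds.item() == 1   # greedy pick: the highest-scoring token id
assert scores.item() < 0   # log-probabilities are always negative here
```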
# + colab={} colab_type="code" id="Bt-R4axzzkfe"
# artificial example (try removing the question marks; the result will be different)
#inputs = RETOK.findall("hello , where are you from?")
inputs = RETOK.findall("your persona: i live in texas.\n hello , where are you ? ?")
test_batch = {
'text_vecs': torch.tensor([chat_dict.t2v(inputs)], dtype=torch.long, device=model.decoder.embedding.weight.device),
'text_lens': torch.tensor([len(inputs)], dtype=torch.long)
}
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1091, "status": "ok", "timestamp": 1584066520893, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="fi_ZrKq4zkfg" outputId="c3b9192b-3434-4f3c-b46a-891733cd7abc"
output = greedy_search(model, test_batch, 1)
chat_dict.v2t(output[0].tolist())
# + [markdown] colab_type="text" id="IFPNPNEatYuT"
# ### Nucleus Sampling
# + [markdown] colab_type="text" id="jmccoyUof4aS"
#
# Reference to the original paper: https://openreview.net/pdf?id=rygGQyrFvH. See section 3.1 for details; here is the relevant excerpt:
#
# $$\begin{aligned} P^{\prime}\left(x | x_{1: i-1}\right) &=\left\{\begin{array}{ll}P\left(x | x_{1: i-1}\right) / p^{\prime} & \text { if } x \in V^{\left(p_{\text {nucleus }}\right)} \\ 0 & \text { otherwise }\end{array}\right.\\ p^{\prime} &=\sum_{x \in V^{\left(p_{\text {nucleus }}\right)}} P\left(x | x_{1: i-1}\right) \end{aligned}$$
#
# where $V^{(p_{\text{nucleus}})} \subset V$ is the top-$p$ vocabulary, defined as the smallest subset such that:
# $$\sum_{x \in V^{\left(p_{\text {nucleus }}\right)}} P\left(x | x_{1: i-1}\right) \geq p_{\text {nucleus }}$$
#
#
#
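# The definition above can be checked on a toy distribution: with probabilities
# (0.5, 0.3, 0.15, 0.05) and $p_{nucleus} = 0.7$, the smallest subset whose mass reaches
# 0.7 is the top two tokens, which are then renormalized by $p'$. A minimal sketch
# (toy values assumed; this is not the `Nucleus` class used below):

```python
import torch

probs = torch.tensor([0.5, 0.3, 0.15, 0.05])  # already sorted, descending
p = 0.7

# exclusive cumulative sum: probability mass strictly before each token
mask = (probs.cumsum(dim=-1) - probs) >= p    # True = outside the nucleus
kept = probs.clone()
kept[mask] = 0
kept = kept / kept.sum()                      # renormalize by p' = 0.8

assert mask.tolist() == [False, False, True, True]
assert torch.allclose(kept, torch.tensor([0.625, 0.375, 0.0, 0.0]))
```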
# + colab={} colab_type="code" id="r-YzYzektYuU"
# modified from Top-K and Nucleus Sampling from ParlAI
class Nucleus(object):
    def __init__(self, p):
        self.p = p
def select_paths(self, logprobs):
probs = torch.softmax(logprobs, dim=-1)
sprobs, sinds = probs.sort(dim=-1, descending=True)
org=sprobs.clone()
mask = (sprobs.cumsum(dim=-1) - sprobs[:, :1]) >= self.p
sprobs[mask] = 0
sprobs.div_(sprobs.sum(dim=-1).unsqueeze(1))
choice = torch.multinomial(sprobs[0][0], 1)
tok_id = sinds[0][0][choice]
# back to log
score = org[0][0][choice].log().detach().data.cpu().numpy()[0]
return (tok_id, score)
# + colab={} colab_type="code" id="B-bRDwekN_hy"
def sampling_with_nucleus(model, batch, batch_size, p, previous_hypo=None, verbose=True, sample=100):
model.eval()
text_vecs = batch['text_vecs'].to(current_device)
encoded = model.encoder(text_vecs, batch['text_lens'])
encoder_output, encoder_hidden, attention_mask = encoded
all_logpro=[]
unique_tokens=set()
for times in range(sample):
starts = torch.Tensor([1]).long().to(model.decoder.embedding.weight.device).expand(batch_size, 1).long() # expand to batch size
decoder_hidden = encoder_hidden
preds = []
scores = []
xs = starts
_attn_w_log = []
for ts in range(50):
score, decoder_hidden, attn_w_log = model.decoder(xs, decoder_hidden, encoded) # decoder_output: [batch, time, vocab]
N = Nucleus(p)
tok_ids,sc = N.select_paths(score)
t_tok_ids = tok_ids.data.cpu().numpy()[0]
preds.append(t_tok_ids)
unique_tokens.add(t_tok_ids)
_attn_w_log.append(attn_w_log)
scores.append(sc)
eos_token = chat_dict.word2ind['__end__']
if tok_ids==eos_token:
break
xs = torch.Tensor([t_tok_ids]).long().to(model.decoder.embedding.weight.device).expand(batch_size, 1).long() # expand to batch size
all_logpro.append(sum(scores))
# printing some sample results
pred_sentence = chat_dict.v2t(preds)
if verbose:
print(pred_sentence)
avg_logpro = sum(all_logpro)/len(all_logpro)
unique_num = len(unique_tokens)
return preds, avg_logpro, unique_num, pred_sentence
# + [markdown] colab_type="text" id="sBckP2IHtYua"
# ### Some samples from our sampling method:
# + colab={} colab_type="code" id="6gRamdDOtYud"
valid_loader_single = DataLoader(valid_dataset, shuffle=False, collate_fn=batchify, batch_size=1)
valid_sample = next(iter(valid_loader_single))
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" executionInfo={"elapsed": 1192, "status": "ok", "timestamp": 1584066521051, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="vT7XKzk5y6WF" outputId="23e97ec9-5f16-4926-9768-5c805fd3d839"
print("Input:\n", chat_dict.v2t(valid_sample['text_vecs'][0].tolist()))
print("Target output:\n", chat_dict.v2t(valid_sample['target_vecs'][0].tolist()))
# + colab={"base_uri": "https://localhost:8080/", "height": 433} colab_type="code" executionInfo={"elapsed": 946, "status": "ok", "timestamp": 1584066537591, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="mbtVutJktYug" outputId="dd65ca77-ea35-41f3-8e42-686bfba1ba1c"
p_values=[0.1,0.5,0.9]
n_samples = 5
p_logprob=[]
p_u_num=[]
for p in p_values:
print("When p={}, generate {} example decoding sentence:".format(p, n_samples))
output,res,u_num, _ = sampling_with_nucleus(model, valid_sample, 1, p, sample=n_samples)
p_logprob.append(round(res,2))
p_u_num.append(u_num)
print("\n")
# + [markdown] colab_type="text" id="uDeZ1OtEzkfp"
# # Beam search
#
# We have learned what beam search is, but how do we implement it? There are plenty of possible design choices along the way. Here we use a so-called `Beam` class which handles the intermediate bookkeeping.
#
# 
# + colab={} colab_type="code" id="E0-a9bg0zkfq"
import math
from operator import attrgetter
class _HypothesisTail(object):
"""Hold some bookkeeping about a hypothesis."""
# use slots because we don't want dynamic attributes here
__slots__ = ['timestep', 'hypid', 'score', 'tokenid']
def __init__(self, timestep, hypid, score, tokenid):
self.timestep = timestep
self.hypid = hypid
self.score = score
self.tokenid = tokenid
class Beam(object):
"""
    This class keeps information about partial hypotheses and performs the beam step.
"""
def __init__(
self,
beam_size,
padding_token=0,
bos_token=1,
eos_token=2,
min_length=3,
min_n_best=3,
device='cpu',
# for iterbeam below
similarity_metric='hamming',
similarity_threshold=0,
):
self.beam_size = beam_size
self.min_length = min_length
self.eos = eos_token
self.bos = bos_token
self.pad = padding_token
self.device = device
# recent score for each hypo in the beam
self.scores = None
# self.scores values per each time step
self.all_scores = [torch.Tensor([0.0] * beam_size).to(self.device)]
# backtracking id to hypothesis at previous time step
self.bookkeep = []
# output tokens at each time step
self.outputs = [
torch.Tensor(self.beam_size).long().fill_(self.bos).to(self.device)
]
# keeps tuples (score, time_step, hyp_id)
self.finished = []
self.eos_top = False
self.eos_top_ts = None
self.n_best_counter = 0
self.min_n_best = min_n_best
self.partial_hyps = [[self.bos] for i in range(beam_size)]
# iterbeam related below
self.history_hyps = []
self.similarity_metric = similarity_metric
self.similarity_threshold = similarity_threshold
self.banned_tokens = set()
def get_output_from_current_step(self):
"""Get the output at the current step."""
return self.outputs[-1]
def get_backtrack_from_current_step(self):
"""Get the backtrack at the current step."""
return self.bookkeep[-1]
##################### ITER-BEAM BLOCKING PART START #####################
def hamming_distance(self, t1, t2):
dist = 0
for tok1, tok2 in zip(t1,t2):
if tok1 != tok2:
dist += 1
return dist
def edit_distance(self, t1, t2):
import editdistance
dist = editdistance.eval(t1, t2)
return dist
def similarity_check(self, active_hyp, previous_hyps, metric='hamming', threshold=0):
banned_tokens = []
active_len = len(active_hyp)
for observed_hyp, _banned_tokens in previous_hyps.items():
if len(observed_hyp) != active_len:
continue
if metric == 'hamming':
dist = self.hamming_distance(observed_hyp, active_hyp)
if metric == 'edit':
dist = self.edit_distance(observed_hyp, active_hyp)
if dist <= threshold:
banned_tokens.extend(_banned_tokens)
return list(set(banned_tokens))
##################### ITER-BEAM BLOCKING PART END ########################
def select_paths(self, logprobs, prior_scores, previous_hyps):
"""Select the next vocabulary item in these beams."""
# beam search actually looks over all hypotheses together so we flatten
beam_scores = logprobs + prior_scores.unsqueeze(1).expand_as(logprobs)
# iterbeam blocking part
current_length = len(self.all_scores)
if len(previous_hyps) > 0 and current_length > 0:
for hyp_id in range(beam_scores.size(0)):
active_hyp = tuple(self.partial_hyps[hyp_id])
banned_tokens = self.similarity_check(active_hyp, previous_hyps, metric=self.similarity_metric, threshold=self.similarity_threshold)
if len(banned_tokens) > 0:
beam_scores[:, banned_tokens] = -10e5
flat_beam_scores = beam_scores.view(-1)
best_scores, best_idxs = torch.topk(flat_beam_scores, self.beam_size, dim=-1)
voc_size = logprobs.size(-1)
        # recover the backtracking hypothesis id via integer division by voc_size
        hyp_ids = torch.div(best_idxs, voc_size, rounding_mode='floor')
# get the actual word id from residual of the same division
tok_ids = best_idxs % voc_size
return (hyp_ids, tok_ids, best_scores)
def advance(self, logprobs, previous_hyps):
"""Advance the beam one step."""
current_length = len(self.all_scores) - 1
if current_length < self.min_length:
# penalize all eos probs to make it decode longer
for hyp_id in range(logprobs.size(0)):
logprobs[hyp_id][self.eos] = -10e5
if self.scores is None:
logprobs = logprobs[0:1] # we use only the first hyp now, since they are all same
self.scores = torch.zeros(1).type_as(logprobs).to(logprobs.device)
hyp_ids, tok_ids, self.scores = self.select_paths(logprobs, self.scores, previous_hyps)
# clone scores here to avoid referencing penalized EOS in the future!
self.all_scores.append(self.scores.clone())
self.outputs.append(tok_ids)
self.bookkeep.append(hyp_ids)
self.partial_hyps = [
self.partial_hyps[hyp_ids[i]] + [tok_ids[i].item()]
for i in range(self.beam_size)
]
self.history_hyps.extend(self.partial_hyps)
# check new hypos for eos label, if we have some, add to finished
for hypid in range(self.beam_size):
if self.outputs[-1][hypid] == self.eos:
self.scores[hypid] = -10e5
# this is finished hypo, adding to finished
eostail = _HypothesisTail(
timestep=len(self.outputs) - 1,
hypid=hypid,
score=self.all_scores[-1][hypid],
tokenid=self.eos,
)
self.finished.append(eostail)
self.n_best_counter += 1
if self.outputs[-1][0] == self.eos:
self.eos_top = True
if self.eos_top_ts is None:
self.eos_top_ts = len(self.outputs) - 1
def is_done(self):
"""Return whether beam search is complete."""
return self.eos_top and self.n_best_counter >= self.min_n_best
def get_top_hyp(self):
"""
Get single best hypothesis.
:return: hypothesis sequence and the final score
"""
return self._get_rescored_finished(n_best=1)[0]
def _get_hyp_from_finished(self, hypothesis_tail):
"""
Extract hypothesis ending with EOS at timestep with hyp_id.
:param timestep:
timestep with range up to len(self.outputs) - 1
:param hyp_id:
id with range up to beam_size - 1
:return:
hypothesis sequence
"""
hyp_idx = []
endback = hypothesis_tail.hypid
for i in range(hypothesis_tail.timestep, -1, -1):
hyp_idx.append(
_HypothesisTail(
timestep=i,
hypid=endback,
score=self.all_scores[i][endback],
tokenid=self.outputs[i][endback],
)
)
endback = self.bookkeep[i - 1][endback]
return hyp_idx
def _get_pretty_hypothesis(self, list_of_hypotails):
"""Return hypothesis as a tensor of token ids."""
return torch.stack([ht.tokenid for ht in reversed(list_of_hypotails)])
def _get_rescored_finished(self, n_best=None, add_length_penalty=False):
"""
Return finished hypotheses according to adjusted scores.
Score adjustment is done according to the Google NMT paper, which
penalizes long utterances.
:param n_best:
number of finalized hypotheses to return
:return:
list of (tokens, score) pairs, in sorted order, where:
- tokens is a tensor of token ids
- score is the adjusted log probability of the entire utterance
"""
# if we never actually finished, force one
if not self.finished:
self.finished.append(
_HypothesisTail(
timestep=len(self.outputs) - 1,
hypid=0,
score=self.all_scores[-1][0],
tokenid=self.eos,
)
)
rescored_finished = []
for finished_item in self.finished:
if add_length_penalty:
current_length = finished_item.timestep + 1
# these weights are from Google NMT paper
length_penalty = math.pow((1 + current_length) / 6, 0.65)
else:
length_penalty = 1
rescored_finished.append(
_HypothesisTail(
timestep=finished_item.timestep,
hypid=finished_item.hypid,
score=finished_item.score / length_penalty,
tokenid=finished_item.tokenid,
)
)
# Note: beam size is almost always pretty small, so sorting is cheap enough
srted = sorted(rescored_finished, key=attrgetter('score'), reverse=True)
if n_best is not None:
srted = srted[:n_best]
return [
(self._get_pretty_hypothesis(self._get_hyp_from_finished(hyp)), hyp.score)
for hyp in srted
]
# + [markdown] colab_type="text" id="7EDSKb1uzkfs"
# # Model manipulation
#
# As you noticed, after the top-k we select the best tails of the current hypotheses, and the corresponding previous-hypothesis ids can end up in a different order. *We must reorder the hidden buffers of our model accordingly; otherwise the decoding will be wrong.*
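# The reordering itself is just an `index_select` along the batch (or beam) dimension of
# each buffer. A toy example (shapes and values assumed for illustration):

```python
import torch

# GRU hidden state: [num_layers=1, batch=3, hidden=2]
hidden = torch.tensor([[[0., 0.], [1., 1.], [2., 2.]]])
indices = torch.tensor([2, 0, 1])  # new order of hypotheses after topk

reordered = hidden.index_select(1, indices)  # reorder along the batch dim

assert reordered[0].tolist() == [[2.0, 2.0], [0.0, 0.0], [1.0, 1.0]]
```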
# + colab={} colab_type="code" id="WdMugjnfzkfs"
def reorder_encoder_states(encoder_states, indices):
"""Reorder encoder states according to a new set of indices."""
enc_out, hidden, attention_mask = encoder_states
# LSTM or GRU/RNN hidden state?
if isinstance(hidden, torch.Tensor):
hid, cell = hidden, None
else:
hid, cell = hidden
if not torch.is_tensor(indices):
# cast indices to a tensor if needed
indices = torch.LongTensor(indices).to(hid.device)
hid = hid.index_select(1, indices)
if cell is None:
hidden = hid
else:
cell = cell.index_select(1, indices)
hidden = (hid, cell)
enc_out = enc_out.index_select(0, indices)
attention_mask = attention_mask.index_select(0, indices)
return enc_out, hidden, attention_mask
def reorder_decoder_incremental_state(incremental_state, inds):
    """Reorder the decoder hidden state (GRU tensor or LSTM tuple) along the beam dim."""
    if torch.is_tensor(incremental_state):
        # gru
        return torch.index_select(incremental_state, 1, inds).contiguous()
    elif isinstance(incremental_state, tuple):
        # lstm: recurse into (hidden, cell)
        return tuple(
            reorder_decoder_incremental_state(x, inds)
            for x in incremental_state)
def get_nbest_list_from_beam(beam, dictionary, n_best=None, add_length_penalty=False):
if n_best is None:
n_best = beam.min_n_best
nbest_list = beam._get_rescored_finished(n_best=n_best, add_length_penalty=add_length_penalty)
nbest_list_text = [(dictionary.v2t(i[0].cpu().tolist()), i[1].item()) for i in nbest_list]
return nbest_list_text
# + colab={} colab_type="code" id="PJa3QIWVzkfu"
def generate_with_beam(beam_size, min_n_best, model, batch, batch_size,
previous_hyps=None, similarity_metric='hamming',
similarity_threshold=0, verbose=False):
"""
    This function takes a model, a batch and beam settings, and performs beam-search decoding.
"""
beams = [ Beam(beam_size,
min_n_best=min_n_best,
eos_token=chat_dict.word2ind['__end__'],
padding_token=chat_dict.word2ind['__null__'],
bos_token=chat_dict.word2ind['__start__'],
device=current_device,
similarity_metric=similarity_metric,
similarity_threshold=similarity_threshold) for _ in range(batch_size)]
repeated_inds = torch.arange(batch_size).to(current_device).unsqueeze(1).repeat(1, beam_size).view(-1)
text_vecs = batch['text_vecs'].to(current_device)
encoder_states = model.encoder(text_vecs, batch['text_lens'])
model.eval()
encoder_states = reorder_encoder_states(encoder_states, repeated_inds) # no actual reordering here, but repeating beam size times each sample in the minibatch
encoder_output, encoder_hidden, attention_mask = encoder_states
incr_state = encoder_hidden # we init decoder hidden with last encoder_hidden
# 1 is a start token id
starts = torch.Tensor([1]).long().to(model.decoder.embedding.weight.device).expand(batch_size*beam_size, 1).long() # expand to batch_size * beam_size
decoder_input = starts
with torch.no_grad():
for ts in range(100):
if all((b.is_done() for b in beams)):
break
score, incr_state, attn_w_log = model.decoder(decoder_input, incr_state, encoder_states)
            score = score[:, -1:, :]  # take the last time step (keeping the time dim)
score = score.view(batch_size, beam_size, -1)
score = torch.log_softmax(score, dim=-1)
for i, b in enumerate(beams):
if not b.is_done():
# make mock previous_hyps if not used #
if previous_hyps is None:
previous_hyps = [{} for i in range(batch_size)]
b.advance(score[i], previous_hyps[i])
incr_state_inds = torch.cat([beam_size * i + b.get_backtrack_from_current_step() for i, b in enumerate(beams)])
incr_state = reorder_decoder_incremental_state(incr_state, incr_state_inds)
selection = torch.cat([b.get_output_from_current_step() for b in beams]).unsqueeze(-1)
decoder_input = selection
beam_preds_scores = [list(b.get_top_hyp()) for b in beams]
beams_best_pick = get_nbest_list_from_beam(beams[0], chat_dict, n_best=1)[0][0]
if verbose:
for bi in range(batch_size):
print(f'batch {bi}')
for i in get_nbest_list_from_beam(beams[bi], chat_dict, n_best=min_n_best):
print(i)
return beam_preds_scores, beams, beams_best_pick
# + [markdown] colab_type="text" id="n_iwsZbxzkfx"
# # Generating some predictions
# + colab={"base_uri": "https://localhost:8080/", "height": 121} colab_type="code" executionInfo={"elapsed": 1036, "status": "ok", "timestamp": 1584066545953, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="LuQRVmWOzkfx" outputId="a3b341b5-203f-4e41-afd7-3e3818da008c"
batch_size = 1
beam_size = 5
beam_n_best = 5 #return top beam_n_best outputs
valid_loader_single = DataLoader(valid_dataset, shuffle=False, collate_fn=batchify, batch_size=batch_size)
valid_sample = next(iter(valid_loader_single))
beam_preds_scores, beams, _ = generate_with_beam(beam_size,
beam_n_best,
model,
valid_sample,
batch_size=batch_size,
verbose=True)
# + [markdown] colab_type="text" id="V-ckVtETzkfz"
# ## Hm, why are they so similar? Let's build a visualization tool
# -
# Run this from your terminal (if you haven't installed it already)
#
# sudo apt install libgraphviz-dev
# + colab={"base_uri": "https://localhost:8080/", "height": 176} colab_type="code" executionInfo={"elapsed": 9787, "status": "ok", "timestamp": 1584066554717, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="SfrqBYOl1Nvv" outputId="1b5f73b9-2284-4b0f-a4d1-bd7cf497938f"
# !pip install --install-option="--include-path=/usr/local/include/graphviz/" --install-option="--library-path=/usr/local/lib/graphviz" pygraphviz
# + colab={} colab_type="code" id="cRQ8XToqzkf0"
import matplotlib.pyplot as plt
import networkx as nx
from networkx.drawing.nx_agraph import write_dot, graphviz_layout
def get_beam_dot(beam: Beam, plot_size=30):
"""Create pydot graph representation of the beam.
"""
graph = nx.DiGraph()
    outputs = numpy.array([i.tolist() for i in beam.outputs])
    bookkeep = numpy.array([i.tolist() for i in beam.bookkeep])
    all_scores = numpy.array([i.tolist() for i in beam.all_scores])
max_ts = outputs.shape[0]
labels_dict = {}
node_color_map = []
for i in range(max_ts):
if i == 0:
# only one start
start_node = f"t_{0}__hid_{0}__tok_{outputs[i][0]}__sc_{all_scores[i][0]}"
#start_node = {"time":0, "hypid": 0, "token": outputs[i][0], "score": all_scores[i][0]}
graph.add_node(start_node)
labels_dict[start_node] = chat_dict.ind2word[outputs[i][0]]
node_color_map.append('aliceblue')
continue
for hypid, token in enumerate(outputs[i]): # go over each token on this level
backtrack_hypid = bookkeep[i-1][hypid]
backtracked_node = f"t_{i-1}__hid_{backtrack_hypid}__tok_{outputs[i-1][backtrack_hypid]}__sc_{all_scores[i-1][backtrack_hypid]}"
current_score = all_scores[i][hypid]
node = f"t_{i}__hid_{hypid}__tok_{token}__sc_{current_score}"
graph.add_node(node)
graph.add_edge(backtracked_node, node)
if token == 2:
node_color_map.append('pink')
labels_dict[node] = "__end__\n{:.{prec}f}".format(current_score, prec=4)
else:
node_color_map.append('aliceblue')
labels_dict[node] = chat_dict.ind2word[token]
# same layout using matplotlib with no labels
plt.figure(figsize=(plot_size,plot_size))
plt.title('Beam tree')
    pos = graphviz_layout(graph, prog='dot')
nx.draw(graph, pos, labels=labels_dict, with_labels=True, arrows=True, font_size=24, node_size=5000, font_color='black', alpha=1.0, node_color=node_color_map)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 11977, "status": "ok", "timestamp": 1584066556933, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="YXKZxJd6zkf2" outputId="ef97e217-5f3b-44a2-a2ab-76185b153aa9"
get_beam_dot(beams[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 502} colab_type="code" executionInfo={"elapsed": 11967, "status": "ok", "timestamp": 1584066556936, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="XUk24yVRzkf-" outputId="a5eb38e0-bda1-4541-91d6-254ba412dfb0"
# lets try bigger beam size
batch_size = 1
beam_size = 20
beam_n_best = 20
# shuffling to make different examples
valid_loader_single = DataLoader(valid_dataset, shuffle=True, collate_fn=batchify, batch_size=batch_size)
valid_sample = next(iter(valid_loader_single))
print(f"Input : {chat_dict.v2t(valid_sample['text_vecs'][0].tolist())}\n")
beam_preds_scores, beams, _ = generate_with_beam(beam_size, beam_n_best, model, valid_sample, batch_size=batch_size, verbose=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 19105, "status": "ok", "timestamp": 1584066564088, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="vwjx899JzkgA" outputId="2401c416-0080-4ed3-ecfb-03fba31f06f2"
get_beam_dot(beams[0], 70)
# + [markdown] colab_type="text" id="DEhmU8JBzkgD"
# # To be explored by the students on their own:
#
# ### Iterative beam search: do not explore tokens you have already seen in the previous iteration
#
# There are many strategies that work in a similar way: diverse beam search, RL-based rescoring of beam hypotheses, iterative beam search.
# Here we keep it simple: on each iteration we record the observed search space and, in the following iterations, block the continuation tokens of hypotheses that are too similar to ones already seen.
#
# There are many ways to decide that one hypothesis is similar to another. Here we pick a similarity metric, e.g. `edit distance`, and a minimum threshold it must exceed to count as *different*.
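# For equal-length hypotheses the Hamming distance is just the count of differing
# positions. A self-contained sketch of the similarity check (mirrors
# `Beam.hamming_distance` above; the hypotheses and threshold are illustrative):

```python
def hamming_distance(t1, t2):
    """Number of positions at which two equal-length sequences differ."""
    return sum(1 for a, b in zip(t1, t2) if a != b)

observed = (1, 5, 7, 2)  # hypothesis seen in a previous beam iteration
active = (1, 5, 9, 2)    # current partial hypothesis

dist = hamming_distance(observed, active)
threshold = 0            # dist <= threshold means "too similar": block its continuations

assert dist == 1
assert not (dist <= threshold)  # with threshold 0, these two count as different
```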
# + [markdown] colab={} colab_type="code" id="3pjNFAUCzkgE"
# def fill_prefixes(prefix_dict, history_hyps):
# for hyp in history_hyps:
# for j in range(len(hyp)):
# _prefix = tuple(hyp[:j])
# if _prefix in prefix_dict:
# if hyp[j] in prefix_dict[_prefix]:
# continue
# else:
# prefix_dict[_prefix].append(hyp[j])
# else:
# prefix_dict[_prefix] = [hyp[j]]
#
# def iterative_beam(num_iterations, beam_size, n_best_beam, model, batch, batch_size=1, similarity_metric='hamming', similarity_threshold=0, verbose=True):
#
# prefix_dict = [{} for i in range(batch_size)]
# outputs = []
#
# for beam_iter in range(num_iterations):
# beam_preds_scores, beams, _ = generate_with_beam(beam_size, n_best_beam, model, batch, batch_size=batch_size, previous_hyps=prefix_dict, similarity_metric=similarity_metric, similarity_threshold=similarity_threshold)
#
#
#
# outputs.append((beam_preds_scores, beams))
#
# for i, _dict in enumerate(prefix_dict):
# fill_prefixes(_dict, beams[i].history_hyps)
#
#
# if verbose:
# for bi in range(batch_size):
# for i in range(num_iterations):
# print(f'Iter {i}')
# for j in get_nbest_list_from_beam(outputs[i][1][bi], chat_dict, n_best_beam):
# print(j)
#
# return outputs, prefix_dict
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 19080, "status": "ok", "timestamp": 1584066564089, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="8fOi2NzIzkgI" outputId="d70bfbd0-cd23-4d61-c5b3-05d76cec0c80"
# batch_size = 1
# iter_beam_size = 2
# beam_n_best = 2
# beam_iter = 10
#
# print(f"Input : {chat_dict.v2t(valid_sample['text_vecs'][0].tolist())}\n")
#
# outputs, prefix_dict = iterative_beam(beam_iter, iter_beam_size, beam_n_best, model, valid_sample, batch_size=batch_size, similarity_metric='edit', similarity_threshold=3, verbose=True)
#
# print('\n\n\n')
#
# batch_size = 1
# iter_beam_size = 5
# beam_n_best = 5
# beam_iter = 4
#
# outputs, prefix_dict = iterative_beam(beam_iter, iter_beam_size, beam_n_best, model, valid_sample, batch_size=batch_size, similarity_metric='edit', similarity_threshold=5, verbose=True)
# + [markdown] colab_type="text" id="c5BTUz8w0vSt"
# ## Interactive Chatbot
# + colab={} colab_type="code" id="QzV860PDtYu5"
# REFERENCE:https://pytorch.org/tutorials/beginner/chatbot_tutorial.html#run-evaluation
import unicodedata
import re
# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
    '''
    Convert all letters to lowercase and strip non-letter characters except basic punctuation.
    '''
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
def tokenHistory(history,input_sentence):
# add the input sentence to all previous history
history += ' \n '+input_sentence
# parse input history
_inp_toked = RETOK.findall(history)
_inp_toked_id = chat_dict.t2v(_inp_toked)
input_vecs = torch.tensor([_inp_toked_id], dtype=torch.long)
length_input = torch.tensor([len(input_vecs[0])], dtype=torch.int64)
token_history ={'text_vecs': input_vecs, 'text_lens': length_input}
return history, token_history
def ChatBot(model, persona, beam_size=5, prob_ns=0.5, decode_method="Beam"):
assert( decode_method in ['Beam', 'NS', 'Greedy'])
history = persona # should be initialize with persona "words" string
    while True:
        try:
            # Get input sentence
            input_sentence = input('> ')
            # Check if it is quit case
            if input_sentence == 'q' or input_sentence == 'quit': break
            while not input_sentence:
                print('Prompt should not be empty!')
                input_sentence = input('> ')
# Normalize sentence
input_sentence = normalizeString(input_sentence)
# add the input sentence to all previous history
history += ' \n '+input_sentence
# parse input history
_inp_toked = RETOK.findall(history)
_inp_toked_id = chat_dict.t2v(_inp_toked)
input_vecs = torch.tensor([_inp_toked_id], dtype=torch.long)
length_input = torch.tensor([len(input_vecs[0])], dtype=torch.int64)
batch_history ={'text_vecs': input_vecs, 'text_lens': length_input}
# Evaluate sentence
if decode_method=="Beam":
_, _, output_words = generate_with_beam(beam_size, beam_n_best, model, batch_history, batch_size=1, verbose=False)
elif decode_method=="NS":
_, _, _, output_words = sampling_with_nucleus(model, batch_history, 1, prob_ns, previous_hypo=None, verbose=None, sample=1)
elif decode_method=="Greedy":
output = greedy_search(model, batch_history, 1, max_len = 50)
output_words = chat_dict.v2t(output[0].tolist())
# add bot output to history
history += '\n'+output_words
# Format and print response sentence
print('Bot:'+output_words.replace('__start__','').replace('__end__',''))
except KeyError:
print("Error: Encountered unknown word.")
# + [markdown] colab_type="text" id="7GoOpLtBtYu7"
# ### Interactions:
# + colab={} colab_type="code" id="Z-pO0PhbtYu7"
# input "q" or "quit" to quit the interactive session.
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" executionInfo={"elapsed": 443, "status": "ok", "timestamp": 1584067564358, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="zqZ-3dHwWnjU" outputId="9272498b-a2e6-438d-d723-f8695a88dd07"
persona1 = "your persona : i love to drink wine and dance in the moonlight . \n your persona : i am very strong for my age . \n your persona : i ' m 100 years old . \n your persona : i feel like i might live forever ."
print(persona1)
# -
ChatBot(model, persona1, decode_method='Greedy')
ChatBot(model, persona1, prob_ns = 0.3, decode_method='NS')
# + colab={"base_uri": "https://localhost:8080/", "height": 381} colab_type="code" executionInfo={"elapsed": 154299, "status": "ok", "timestamp": 1584067718600, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="Zuf7HUyfwlj2" outputId="230287a2-ce50-4d87-9b6b-c09e48a331c3"
ChatBot(model, persona1, beam_size=5, decode_method='Beam')
# -
ChatBot(model, persona1, beam_size = 8, decode_method='Beam')
# + [markdown] colab={} colab_type="code" id="e17_9-WgzkgM"
# ## ChatBot Comparing All Decoding Method
# -
def ChatBotAll(model, persona, beam_size=5, prob_ns=0.5):
# three different history for all three decoding methods
    history_beam, history_ns, history_greedy = persona, persona, persona  # each initialized with the persona string
    while True:
        try:
            # Get input sentence
            input_sentence = input('> ')
            # Check if it is quit case
            if input_sentence == 'q' or input_sentence == 'quit': break
            while not input_sentence:
                print('Prompt should not be empty!')
                input_sentence = input('> ')
# Normalize sentence
input_sentence = normalizeString(input_sentence)
# Evaluate sentence
# "Beam":
history_beam, token_history = tokenHistory(history_beam,input_sentence)
_, _, output_words = generate_with_beam(beam_size, beam_n_best, model, token_history, batch_size=1, verbose=False)
# Format and print response sentence
print('Beam Bot:'+output_words.replace('__start__','').replace('__end__',''))
# add bot output to history
history_beam += '\n'+output_words
# "NS":
history_ns, token_history = tokenHistory(history_ns,input_sentence)
_, _, _, output_words = sampling_with_nucleus(model, token_history, 1, prob_ns, previous_hypo=None, verbose=None, sample=1)
print('NS Bot:'+output_words.replace('__start__','').replace('__end__',''))
history_ns += '\n'+output_words
# "greedy":
history_greedy, token_history = tokenHistory(history_greedy,input_sentence)
output = greedy_search(model, token_history, 1, max_len=100)
output_words = chat_dict.v2t(output[0].tolist())
print('Greedy Bot:'+output_words.replace('__start__','').replace('__end__',''))
history_greedy += '\n'+output_words
except KeyError:
print("Error: Encountered unknown word.")
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" executionInfo={"elapsed": 349, "status": "ok", "timestamp": 1584067734625, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="z5UVZZ_mwfW5" outputId="ec4b5f46-b6b4-44ef-b94b-d4094b9f8044"
persona2 = "your persona : i love disneyland and mickey mouse . \n your persona : i love to spend time with my family . \n your persona : i ' m a baby delivery nurse . \n your persona : i walk three miles every day ."
print(persona2)
# -
ChatBotAll(model, persona2, beam_size = 15, prob_ns=0.3)
# + colab={"base_uri": "https://localhost:8080/", "height": 312} colab_type="code" executionInfo={"elapsed": 135227, "status": "ok", "timestamp": 1584067873138, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="YlAikPXMyO7Y" outputId="f7f14c53-d4c8-4b78-da91-0796a807cfd9"
ChatBotAll(model, persona2, beam_size=5, prob_ns=0.3)
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" executionInfo={"elapsed": 453, "status": "ok", "timestamp": 1584067880299, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05585133387136783154"}, "user_tz": 240} id="VMjrBD8Fw3ct" outputId="f7c58881-baa7-4c39-d953-e6894a2941c9"
persona3 = "your persona : i love to drink fancy tea . \n your persona : i have a big library at home . \n your persona : i ' m a museum tour guide . \n your persona : i ' m partly deaf ."
print(persona3)
# + colab={} colab_type="code" id="yPG2HHkawryC"
ChatBotAll(model, persona3, beam_size=5, prob_ns=0.3)
# -
persona4 = "your persona : i study machine learning . \n your persona : i live in kigali . \n your persona : i ' m very smart . \n your persona : i love nlp ."
print(persona4)
ChatBotAll(model, persona4, beam_size=3, prob_ns=0.3)
| Part 02/003_Decoding/chat/chat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import boto3
import sagemaker
import pandas as pd
from sagemaker import image_uris
from sagemaker import TrainingInput
from sagemaker.session import Session
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True, as_frame=True)
dados = pd.concat([y, X], axis=1)
dados.to_csv("data/dados.csv", header=False, index=False)
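SageMaker's built-in XGBoost expects CSV training data with the label in the first column and no header row, which is why `y` is concatenated ahead of `X` and the file is written with `header=False` and `index=False`. A quick stdlib-only check of that layout (illustrative values, no AWS calls):

```python
import csv
import io

# Toy rows mirroring the label-first, headerless layout written above.
rows = [(0, 5.1, 3.5), (2, 6.2, 3.4)]  # (label, feature1, feature2)
buf = io.StringIO()
csv.writer(buf).writerows(rows)

first_line = buf.getvalue().splitlines()[0]
print(first_line)  # label leads each row -> "0,5.1,3.5"
```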
bucket = sagemaker.Session().default_bucket()
xgboost_container = sagemaker.image_uris.retrieve("xgboost", "us-east-1", "1.2-1")
role = "arn:aws:iam::885248014373:role/service-role/AmazonSageMaker-ExecutionRole-20210305T230941"
# initialize hyperparameters
hyperparameters = {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"objective":"multi:softmax",
"num_round":"2",
"num_class": "3"}
estimator = sagemaker.estimator.Estimator(image_uri=xgboost_container,
role=role,
hyperparameters=hyperparameters,
instance_count=1,
instance_type='ml.m5.2xlarge',
volume_size=5,
output_path=f"s3://{bucket}")
input_data = sagemaker.Session().upload_data(path="data", bucket=bucket)
input_data
train_input = TrainingInput(input_data, content_type="csv")
estimator.fit({'train': train_input})
# deploy the trained model to a real-time endpoint
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.t2.medium")
endpointName = "sagemaker-xgboost-2021-03-09-03-10-34-846"  # replace with the name of your deployed endpoint
features = pd.DataFrame(X)
features.to_csv("data/features.csv", header=False, index=False)
with open("data/features.csv") as f:
er = f.read()
# +
import boto3
client = boto3.client("sagemaker-runtime")
predictions = []
for line in er.splitlines():
    response = client.invoke_endpoint(EndpointName=endpointName,
                                      Body=line,
                                      ContentType="csv")
    predictions.append(response["Body"].read().decode("utf-8"))
# -
predictions[:5]
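The loop above sends one CSV row per `invoke_endpoint` call, so the request bodies are simply the non-empty lines of the serialized feature file. A stdlib sketch of that splitting (illustrative values, no AWS calls):

```python
# Hypothetical serialized features, mirroring data/features.csv.
csv_blob = "5.1,3.5,1.4,0.2\n6.2,3.4,5.4,2.3\n"
payloads = [line for line in csv_blob.splitlines() if line]

print(len(payloads))  # one request body per data row -> 2
```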
| notebooks/.ipynb_checkpoints/Untitled-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: rtc_analysis [conda env:.local-rtc_analysis]
# language: python
# name: conda-env-.local-rtc_analysis-py
# ---
# <img src="NotebookAddons/blackboard-banner.png" width="100%" />
# <font face="Calibri">
# <br>
# <font size="5"> <b>Prepare a SAR Data Stack</b><img style="padding: 7px" src="NotebookAddons/UAFLogo_A_647.png" width="170" align="right"/></font>
#
# <br>
# <font size="4"> <b> <NAME> and <NAME>; Alaska Satellite Facility </b> <br>
# </font>
#
# <font size="3"> This notebook downloads an ASF-HyP3 RTC project and prepares a deep multi-temporal SAR image data stack for use in other notebooks.</font></font>
# <hr>
# <font face="Calibri" size="5" color="darkred"> <b>Important Note about JupyterHub</b> </font>
# <br><br>
# <font face="Calibri" size="3"> <b>Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.</b> </font>
#
# +
# %%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
kernel.execute(command)
# + pycharm={"name": "#%%\n"}
from IPython.display import Markdown
from IPython.display import display
# user = !echo $JUPYTERHUB_USER
# env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
# -
# <hr>
# <font face="Calibri">
#
# <font size="5"> <b> 0. Importing Relevant Python Packages </b> </font>
#
# <font size="3">In this notebook we will use the following scientific libraries:
# <ol type="1">
# <li> <b><a href="https://www.gdal.org/" target="_blank">GDAL</a></b> is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background.</li>
# <li> <b><a href="http://www.numpy.org/" target="_blank">NumPy</a></b> is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays and matrices, and an extensive collection of high-level mathematical functions and implemented methods makes it possible to perform various operations with these objects. </li>
# </font>
# <br>
# <font face="Calibri" size="3"><b>Our first step is to import them:</b> </font>
# +
# %%capture
import copy
from datetime import datetime, timedelta, timezone
import json # for loads
from pathlib import Path
import re
import shutil
import warnings
from osgeo import gdal
import numpy as np
from IPython.display import display, clear_output, Markdown
import asf_notebook as asfn
from hyp3_sdk import Batch, HyP3
# -
# <hr>
# <font face="Calibri">
#
# <font size="5"> <b> 1. Load Your Own Data Stack Into the Notebook </b> </font>
#
# <font size="3"> This notebook assumes that you've created your own data stack over your personal area of interest using the <a href="https://www.asf.alaska.edu/" target="_blank">Alaska Satellite Facility's</a> value-added product system HyP3, available via <a href="https://search.asf.alaska.edu/#/" target="_blank">ASF Data Search (Vertex)</a>. HyP3 is an ASF service used to prototype value added products and provide them to users to collect feedback.
#
# We will retrieve HyP3 data via the hyp3_sdk. As both HyP3 and the Notebook environment sit in the <a href="https://aws.amazon.com/" target="_blank">Amazon Web Services (AWS)</a> cloud, data transfer is quick and cost effective.</font>
# </font>
# <hr>
# <font face="Calibri" size="3"> Before we download anything, create a working directory for this analysis.
# <br><br>
# <b>Select or create a working directory for the analysis:</b></font>
while True:
    data_dir = Path(asfn.input_path("\nPlease enter the name of a directory in which to store your data for this analysis."))
if data_dir == Path('.'):
continue
if data_dir.is_dir():
        contents = list(data_dir.glob('*'))
        if len(contents) > 0:
            choice = asfn.handle_old_data(data_dir, contents)
if choice == 1:
if data_dir.exists():
shutil.rmtree(data_dir)
data_dir.mkdir()
break
elif choice == 2:
break
else:
clear_output()
continue
else:
break
else:
data_dir.mkdir()
break
# + [markdown] slideshow={"slide_type": "subslide"}
# <font face="Calibri" size="3"><b>Define absolute path to analysis directory:</b></font>
# -
analysis_directory = Path.cwd().joinpath(data_dir)
print(f"analysis_directory: {analysis_directory}")
# <font face="Calibri" size="3"><b>Create a HyP3 object and authenticate</b> </font>
hyp3 = HyP3(prompt=True)
# <font face="Calibri" size="3"><b>Select a product type to download:</b> </font>
job_types = ['RTC_GAMMA', 'INSAR_GAMMA', 'AUTORIFT']
job_type = asfn.select_parameter(job_types)
job_type
# **Decide whether to search for a HyP3 project or jobs unattached to a project**
options = ['project', 'projectless jobs']
search_type = asfn.select_parameter(options, '')
print("Select whether to search for HyP3 Project or HyP3 Jobs unattached to a project")
display(search_type)
# <font face="Calibri" size="3"><b>List projects containing active products of the type chosen in the previous cell and select one:</b></font>
# +
my_hyp3_info = hyp3.my_info()
active_projects = dict()
if search_type.value == 'project':
for project in my_hyp3_info['job_names']:
batch = Batch()
batch = hyp3.find_jobs(name=project, job_type=job_type.value).filter_jobs(running=False, include_expired=False)
if len(batch) > 0:
active_projects.update({batch.jobs[0].name: batch})
if len(active_projects) > 0:
display(Markdown("<text style='color:darkred;'>Note: After selecting a project, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.</text>"))
display(Markdown("<text style='color:darkred;'>Otherwise, you will rerun this code cell.</text>"))
print('\nSelect a Project:')
project_select = asfn.select_parameter(active_projects)
display(project_select)
if search_type.value == 'projectless jobs' or len(active_projects) == 0:
project_select = False
if search_type.value == 'project':
print(f"There were no {job_type.value} jobs found in any current projects.\n")
jobs = hyp3.find_jobs(job_type=job_type.value).filter_jobs(running=False, include_expired=False)
orphaned_jobs = Batch()
for j in jobs:
if not j.name:
orphaned_jobs += j
jobs = orphaned_jobs
if len(jobs) > 0:
print(f"Found {len(jobs)} {job_type.value} jobs that are not part of a project.")
print(f"Select the jobs you wish to download")
jobs = {i.files[0]['filename']: i for i in jobs}
jobs_select = asfn.select_mult_parameters(jobs, '', width='500px')
display(jobs_select)
else:
print(f"There were no {job_type.value} jobs found that are not part of a project either.")
# -
# <font face="Calibri" size="3"><b>Select a date range of products to download:</b> </font>
# +
if project_select:
batch = project_select.value
else:
batch = Batch()
for j in jobs_select.value:
batch += j
display(Markdown("<text style='color:darkred;'>Note: After selecting a date range, you should select the next cell before hitting the 'Run' button or typing Shift/Enter.</text>"))
display(Markdown("<text style='color:darkred;'>Otherwise, you may simply rerun this code cell.</text>"))
print('\nSelect a Date Range:')
dates = asfn.get_job_dates(batch)
date_picker = asfn.gui_date_picker(dates)
date_picker
# -
# <font face="Calibri" size="3"><b>Save the selected date range and remove products falling outside of it:</b> </font>
date_range = asfn.get_slider_vals(date_picker)
date_range[0] = date_range[0].date()
date_range[1] = date_range[1].date()
print(f"Date Range: {str(date_range[0])} to {str(date_range[1])}")
batch = asfn.filter_jobs_by_date(batch, date_range)
# <font face="Calibri" size="3"><b>Gather the available paths and orbit directions for the remaining products:</b></font>
display(Markdown("<text style='color:darkred;'><text style='font-size:150%;'>This may take some time for projects containing many jobs...</text></text>"))
batch = asfn.get_paths_orbits(batch)
paths = set()
orbit_directions = set()
for p in batch:
paths.add(p.path)
orbit_directions.add(p.orbit_direction)
paths.add('All Paths')
display(Markdown(f"<text style=color:blue><text style='font-size:175%;'>Done.</text></text>"))
# <hr>
# <font face="Calibri" size="3"><b>Select a path or paths (use shift or ctrl to select multiple paths):</b></font>
display(Markdown("<text style='color:darkred;'>Note: After selecting a path, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.</text>"))
display(Markdown("<text style='color:darkred;'>Otherwise, you will simply rerun this code cell.</text>"))
print('\nSelect a Path:')
path_choice = asfn.select_mult_parameters(paths)
path_choice
# <font face="Calibri" size="3"><b>Save the selected flight path/s:</b></font>
flight_path = path_choice.value
if flight_path:
    if 'All Paths' in flight_path:
        print('Flight Path: All Paths')
    else:
        print(f"Flight Path: {flight_path}")
else:
    print("WARNING: You must select a flight path in the previous cell, then rerun this cell.")
# <font face="Calibri" size="3"><b>Select an orbit direction:</b></font>
if len(orbit_directions) > 1:
    display(Markdown("<text style='color:red;'>Note: After selecting a flight direction, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.</text>"))
    display(Markdown("<text style='color:red;'>Otherwise, you will simply rerun this code cell.</text>"))
    print('\nSelect a Flight Direction:')
direction_choice = asfn.select_parameter(orbit_directions, 'Direction:')
direction_choice
# <font face="Calibri" size="3"><b>Save the selected orbit direction:</b></font>
direction = direction_choice.value
print(f"Orbit Direction: {direction}")
# <font face="Calibri" size="3"><b>Filter jobs by path and orbit direction:</b></font>
batch = asfn.filter_jobs_by_path(batch, flight_path)
batch = asfn.filter_jobs_by_orbit(batch, direction)
print(f"There are {len(batch)} products to download.")
# <font face="Calibri" size="3"><b>Download the products, unzip them into a directory named after the product type, and delete the zip files:</b> </font>
# +
products_path = analysis_directory.joinpath(job_type.value)
if not products_path.is_dir():
products_path.mkdir()
print(f"\nProject: {batch.jobs[0].name}")
project_zips = batch.download_files(products_path)
for z in project_zips:
asfn.asf_unzip(str(products_path), str(z))
z.unlink()
# -
# <font face="Calibri" size="3"><b>Determine the available polarizations if downloading RTC products:</b></font>
rtc = batch.jobs[0].job_type == 'RTC_GAMMA'
insar = batch.jobs[0].job_type == 'INSAR_GAMMA'
autorift = batch.jobs[0].job_type == 'AUTORIFT'
if rtc:
polarizations = asfn.get_RTC_polarizations(str(products_path))
polarization_power_set = asfn.get_power_set(polarizations)
# <font face="Calibri" size="3"><b>Select a polarization:</b></font>
if rtc:
polarization_choice = asfn.select_parameter(sorted(polarization_power_set), 'Polarizations:')
else:
polarization_choice = None
polarization_choice
# <font face="Calibri" size="3"><b>Create a paths variable, holding the relative path to the tiffs or NetCDFs:</b></font>
if rtc:
polarization = polarization_choice.value
print(polarization)
if len(polarization) == 2:
        regex = r"\w[\--~]{{5,300}}(_|-){}\.(tif|tiff)$".format(polarization)
dbl_polar = False
else:
        regex = r"\w[\--~]{{5,300}}(_|-){}(v|V|h|H)\.(tif|tiff)$".format(polarization[0])
dbl_polar = True
elif insar:
    regex = r"\w*_ueF_\w*\.tif$"
elif autorift:
    regex = r"\w*ASF_OD\.nc$"
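A standalone check of how an RTC pattern of this shape selects files: the character class `[\--~]` covers printable ASCII from `-` to `~`, and the suffix pins the polarization and extension. The granule names below are hypothetical, used only to exercise the pattern:

```python
import re

# Single-polarization VV pattern of the same shape as the one built above.
pattern = r"\w[\--~]{5,300}(_|-)VV\.(tif|tiff)$"
names = [
    "S1B_IW_20210101T015959_DVP_RTC30_G_gpuned_0A1B_VV.tif",
    "S1B_IW_20210101T015959_DVP_RTC30_G_gpuned_0A1B_ls_map.tif",
]
vv_tiffs = [n for n in names if re.search(pattern, n)]

print(vv_tiffs)  # only the _VV.tif granule matches
```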
# <font face="Calibri" size="3"><b>Write functions to collect and print the paths of the tiffs or NetCDFs:</b></font>
# +
def get_product_paths(regex, pths):
product_paths = list()
paths = Path().glob(pths)
for pth in paths:
tiff_path = re.search(regex, str(pth))
if tiff_path:
product_paths.append(pth)
return product_paths
def print_product_paths(product_paths):
print("Tiff paths:")
for p in product_paths:
print(f"{p}\n")
# -
# <font face="Calibri" size="3"><b>Write a function to collect the product acquisition dates:</b></font>
def get_dates(product_paths):
dates = []
for pth in product_paths:
dates.append(asfn.date_from_product_name(str(pth)).split('T')[0])
return dates
# <font face="Calibri" size="3"><b>Collect and print the paths of the tiffs or NetCDFs:</b></font>
# +
rel_prod_path = products_path.relative_to(Path.cwd())
if rtc:
product_pth = f"{str(rel_prod_path)}/*/*{polarization[0]}*.tif*"
elif insar:
product_pth = f"{str(rel_prod_path)}/*/*.tif*"
elif autorift:
product_pth = f"{str(rel_prod_path)}/*"
product_paths = get_product_paths(regex, product_pth)
print_product_paths(product_paths)
# -
# <hr>
# <font face="Calibri" size="4"> <b>1.2 Fix multiple UTM Zone-related issues</b> <br>
# <br>
# <font face="Calibri" size="3">If multiple UTM zones exist in your data set, the following code cells will identify the predominant UTM zone and reproject the rest into that zone. This step must be completed prior to merging frames or performing any analysis. AutoRIFT products do not come with projection metadata and so will not be reprojected.</font>
# <br><br>
# <font face="Calibri" size="3"><b>Use gdal.Info to determine the UTM definition types and zones in each product:</b></font>
if not autorift:
coord_choice = asfn.select_parameter(["UTM", "Lat/Long"], description='Coord Systems:')
coord_choice
if not autorift:
utm_zones = []
utm_types = []
print('Checking UTM Zones in the data stack ...\n')
    for product in product_paths:
        info = gdal.Info(str(product), options=['-json'])
        wkt = info['coordinateSystem']['wkt']
        zone = wkt.split('ID')[-1].split(',')[1][0:-2]
        utm_zones.append(zone)
        typ = wkt.split('ID')[-1].split('"')[1]
        utm_types.append(typ)
print(f"UTM Zones:\n {utm_zones}\n")
print(f"UTM Types:\n {utm_types}")
# <font face="Calibri" size="3"><b>Identify the most commonly used UTM Zone in the data:</b></font>
if not autorift:
if coord_choice.value == 'UTM':
utm_unique, counts = np.unique(utm_zones, return_counts=True)
a = np.where(counts == np.max(counts))
predominant_utm = utm_unique[a][0]
print(f"Predominant UTM Zone: {predominant_utm}")
else:
predominant_utm = '4326'
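The `np.unique`/`np.max` majority vote above is equivalent to taking the most common zone. Here is a stdlib sketch with `collections.Counter` on illustrative zone IDs (the real values come from the `gdal.Info` loop above):

```python
from collections import Counter

# Hypothetical UTM zone list for a small stack.
utm_zones = ['32606', '32606', '32607', '32606', '32607']
predominant_utm, count = Counter(utm_zones).most_common(1)[0]

print(predominant_utm, count)  # -> 32606 3
```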
# <font face="Calibri" size="3"><b>Reproject all tiffs to the predominant UTM zone:</b></font>
if not autorift:
    reproject_indices = [i for i, j in enumerate(utm_zones) if j != predominant_utm]  # indices in utm_zones that need to be reprojected
    print('--------------------------------------------')
    print(f'Reprojecting {len(reproject_indices)} files')
    print('--------------------------------------------')
    for k in reproject_indices:
temppath = f"{str(product_paths[k].parent)}/r{product_paths[k].name}"
print(temppath)
cmd = f"gdalwarp -overwrite {product_paths[k]} {temppath} -s_srs {utm_types[k]}:{utm_zones[k]} -t_srs EPSG:{predominant_utm}"
# print(cmd)
# !{cmd}
product_paths[k].unlink()
# <font face="Calibri" size="3"><b>Update product_paths with any new filenames created during reprojection:</b></font>
product_paths = get_product_paths(regex, product_pth)
print_product_paths(product_paths)
# <hr>
# <font face="Calibri" size="4"> <b>1.3 Merge multiple frames from the same date.</b></font>
# <br><br>
# <font face="Calibri" size="3"> You may notice duplicates in your acquisition dates. As HyP3 processes SAR data on a frame-by-frame basis, duplicates may occur if your area of interest is covered by two consecutive image frames. In this case, two separate images are generated that need to be merged together before time series processing can commence. Currently we only merge RTCs.
# <br><br>
# <b>Create a directory in which to store the reprojected and merged RTCs:</b></font>
if not autorift:
output_dir_path = analysis_directory.joinpath(f"{job_type.value}_tiffs")
print(output_dir_path)
if not output_dir_path.is_dir():
output_dir_path.mkdir()
# <font face="Calibri" size="3"><b>Create a set from the date list, removing any duplicates:</b></font>
if rtc:
dates = get_dates(product_paths)
print(dates)
unique_dates = set(dates)
print(unique_dates)
# <font face="Calibri" size="3"><b>Determine which dates have multiple frames. Create a dictionary with each date as a key mapped to an empty string:</b></font>
if rtc:
dup_date_batches = [{}]
    for date in unique_dates:
        count = dates.count(date)
        if (dbl_polar and count > 2) or (not dbl_polar and count > 1):
            dup_date_batches[0].update({date: ""})
if dbl_polar:
dup_date_batches.append(copy.deepcopy(dup_date_batches[0]))
print(dup_date_batches)
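The duplicate-date detection above boils down to counting how many frames share an acquisition date (more than 1 per date for single-pol, more than 2 for a dual-pol stack, since each date then has one frame per polarization). A compact sketch on illustrative single-pol dates:

```python
# Hypothetical single-polarization acquisition dates.
dates = ["20200101", "20200101", "20200113", "20200125", "20200125"]
dup_dates = sorted({d for d in dates if dates.count(d) > 1})

print(dup_dates)  # these dates need their frames merged
```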
# <font face="Calibri" size="3"><b>Update the key values in dup_paths with the string paths to all the tiffs for each date:</b></font>
if rtc:
if dbl_polar:
polar_list = [polarization.split(' ')[0], polarization.split(' ')[2]]
else:
polar_list = [polarization]
for i, polar in enumerate(polar_list):
        polar_path_regex = rf"(\w|/)*_{polar}\.(tif|tiff)$"
polar_paths = get_product_paths(polar_path_regex, product_pth)
for pth in polar_paths:
date = asfn.date_from_product_name(str(pth)).split('T')[0]
if date in dup_date_batches[i]:
dup_date_batches[i][date] = f"{dup_date_batches[i][date]} {str(pth)}"
for d in dup_date_batches:
print(d)
print("\n")
# <font face="Calibri" size="3"><b>Merge all the frames for each date, save the results to the output directory, and delete the original tiffs.</b></font>
if rtc and len(dup_date_batches[0]) > 0:
for i, dup_dates in enumerate(dup_date_batches):
        polar_regex = r"(?<=_)(vh|VH|vv|VV)(?=\.tiff?)"
polar = re.search(polar_regex, dup_dates[list(dup_dates)[0]])
if polar:
polar = f'_{polar.group(0)}'
else:
polar = ''
for dup_date in dup_dates:
# print(f"\n\n{dup_dates[dup_date]}")
output = f"{str(output_dir_path)}/merged_{dup_date}T999999{polar}{product_paths[0].suffix}"
gdal_command = f"gdal_merge.py -o {output} {dup_dates[dup_date]}"
print(f"\n\nCalling the command: {gdal_command}\n")
# !$gdal_command
for pth in dup_dates[dup_date].split(' '):
path = Path(pth)
if path and path.is_file():
path.unlink()
print(f"Deleting: {str(pth)}")
# <hr>
# <font face="Calibri" size="3"> <b>Verify that all duplicate dates were resolved:</b> </font>
if rtc:
product_paths = get_product_paths(regex, product_pth)
for polar in polar_list:
polar_product_pth = product_pth.replace('V*', polar)
polar_product_paths = get_product_paths(regex, polar_product_pth)
dates = get_dates(polar_product_paths)
if len(dates) != len(set(dates)):
print(f"Duplicate dates still present!")
else:
print(f"No duplicate dates are associated with {polar} polarization.")
# <font face="Calibri" size="3"><b>Print the updated paths to all remaining non-merged tiffs:</b></font>
print_product_paths(product_paths)
# <font face="Calibri" size="3"><b>Move all remaining unmerged tiffs into the output directory, and choose whether to save or delete the directory holding the remaining downloaded product files. AutoRIFT NetCDFs will remain in their original directory:</b></font>
if not autorift:
choices = ['save', 'delete']
print("Do you wish to save or delete the directory containing auxiliary product files?")
else:
choices = []
save_or_del = asfn.select_parameter(choices)
save_or_del
if not autorift:
for tiff in product_paths:
tiff.rename(f"{output_dir_path}/{tiff.name}")
if save_or_del.value == 'delete':
shutil.rmtree(products_path)
product_paths = get_product_paths(regex, product_pth)
# <font face="Calibri" size="3"><b>Print the path where you saved your tiffs or NetCDFs.</b></font>
if rtc or insar:
print(str(output_dir_path))
elif autorift:
print(str(products_path))
# + [markdown] pycharm={"name": "#%% md\n"}
# <font face="Calibri" size="2"> <i>Prepare_RTC_Stack_HyP3_v2.ipynb - Version 1.3.1 - October 2021
# <br>
# <b>Version Changes</b>
# <ul>
# <li>Identify and download jobs not belonging to a project</li>
# </ul>
# </i>
# </font>
| SAR_Training/English/Master/Prepare_Data_Stack_Hyp3_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/hello-seaborn).**
#
# ---
#
# In this exercise, you will write your first lines of code and learn how to use the coding environment for the micro-course!
#
# ## Setup
#
# First, you'll learn how to run code, and we'll start with the code cell below. (Remember that a **code cell** in a notebook is just a gray box containing code that we'd like to run.)
# - Begin by clicking inside the code cell.
# - Click on the blue triangle (in the shape of a "Play button") that appears to the left of the code cell.
# - If your code was run successfully, you will see `Setup Complete` as output below the cell.
#
# 
# The code cell below imports and configures the Python libraries that you need to complete the exercise.
#
# Click on the cell and run it.
# +
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
# Set up code checking
import os
if not os.path.exists("../input/fifa.csv"):
os.symlink("../input/data-for-datavis/fifa.csv", "../input/fifa.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex1 import *
print("Setup Complete")
# -
# The code you just ran sets up the system to give you feedback on your work. You'll learn more about the feedback system in the next step.
#
# ## Step 1: Explore the feedback system
#
# Each exercise lets you test your new skills with a real-world dataset. Along the way, you'll receive feedback on your work. You'll see if your answer is right, get customized hints, and see the official solution (_if you'd like to take a look!_).
#
# To explore the feedback system, we'll start with a simple example of a coding problem. Complete the following steps in order:
# 1. Run the code cell below without making any edits. It will show the following output:
# > <font color='#ccaa33'>Check:</font> When you've updated the starter code, `check()` will tell you whether your code is correct. You need to update the code that creates variable `one`
#
# This means you need to change the code to set the variable `one` to something other than the blank provided below (`____`).
#
#
# 2. Replace the underline with a `2`, so that the line of code appears as `one = 2`. Then, run the code cell. This should return the following output:
# > <font color='#cc3333'>Incorrect:</font> Incorrect value for `one`: `2`
#
# This means we still have the wrong answer to the question.
#
#
# 3. Now, change the `2` to `1`, so that the line of code appears as `one = 1`. Then, run the code cell. The answer should be marked as <font color='#33cc33'>Correct</font>. You have now completed this problem!
# +
# Fill in the line below
one = 1
# Check your answer
step_1.check()
# -
# In this exercise, you were responsible for filling in the line of code that sets the value of variable `one`. **Don't edit the code that checks your answer.** You'll need to run the lines of code like `step_1.check()` and `step_2.check()` just as they are provided.
#
# This problem was relatively straightforward, but for more difficult problems, you may like to receive a hint or view the official solution. Run the code cell below now to receive both for this problem.
step_1.hint()
step_1.solution()
# ## Step 2: Load the data
#
# You are ready to get started with some data visualization! You'll begin by loading the dataset from the previous tutorial.
#
# The code you need is already provided in the cell below. Just run that cell. If it shows <font color='#33cc33'>Correct</font> result, you're ready to move on!
# +
# Path of the file to read
fifa_filepath = "../input/fifa.csv"
# Read the file into a variable fifa_data
fifa_data = pd.read_csv(fifa_filepath, index_col="Date", parse_dates=True)
# Check your answer
step_2.check()
# -
# Next, recall the difference between comments and executable code:
# - **Comments** are preceded by a pound sign (`#`) and contain text that appears faded and italicized. They are completely ignored by the computer when the code is run.
# - **Executable code** is code that is run by the computer.
#
# In the code cell below, every line is a comment:
# ```python
# # Uncomment the line below to receive a hint
# #step_2.hint()
# #step_2.solution()
# ```
#
# If you run the code cell below without making any changes, it won't return any output. Try this now!
# Uncomment the line below to receive a hint
step_2.hint()
# Uncomment the line below to see the solution
step_2.solution()
# Next, remove the pound sign before `step_2.hint()` so that the code cell above appears as follows:
# ```python
# # Uncomment the line below to receive a hint
# step_2.hint()
# #step_2.solution()
# ```
# When we remove the pound sign before a line of code, we say we **uncomment** the line. This turns the comment into a line of executable code that is run by the computer. Run the code cell now, which should return the <font color='#3366cc'>Hint</font> as output.
#
# Finally, uncomment the line to see the solution, so the code cell appears as follows:
# ```python
# # Uncomment the line below to receive a hint
# step_2.hint()
# step_2.solution()
# ```
# Then, run the code cell. You should receive both a <font color='#3366cc'>Hint</font> and the <font color='#33cc99'>Solution</font>.
#
# If at any point you're having trouble with coming up with the correct answer to a problem, you are welcome to obtain either a hint or the solution before completing the cell. (So, you don't need to get a <font color='#33cc33'>Correct</font> result before running the code that gives you a <font color='#3366cc'>Hint</font> or the <font color='#33cc99'>Solution</font>.)
#
# ## Step 3: Plot the data
#
# Now that the data is loaded into the notebook, you're ready to visualize it!
#
# Run the next code cell without changes to make a line chart. The code may not make sense yet - you'll learn all about it in the next tutorial!
# +
# Set the width and height of the figure
plt.figure(figsize=(16,6))
# Line chart showing how FIFA rankings evolved over time
sns.lineplot(data=fifa_data)
# Check your answer
step_3.a.check()
# -
# Some questions won't require you to write any code. Instead, you'll interpret visualizations.
#
# As an example, consider the question: Considering only the years represented in the dataset, which countries spent at least 5 consecutive years in the #1 ranked spot?
#
# To receive a <font color='#3366cc'>Hint</font>, uncomment the line below, and run the code cell.
# +
#step_3.b.hint()
# -
# Once you have an answer, check the <font color='#33cc99'>Solution</font> to get credit for completing the problem and to ensure your interpretation is right.
# Check your answer (Run this code cell to receive credit!)
step_3.b.solution()
# Congratulations - you have completed your first coding exercise!
#
# # Keep going
#
# Move on to learn to create your own **[line charts](https://www.kaggle.com/alexisbcook/line-charts)** with a new dataset.
# ---
#
#
#
#
# *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161291) to chat with other Learners.*
| pre_exercises/data_visualisation/exercise-hello-seaborn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Procedure: Uncertainty propagation for matrix-based LCA
# ### Method: Analytic uncertainty propagation (Taylor approximation)
# Author: <NAME> {evelyne [dot] groen [at] gmail [dot] com}
#
# Last update: 25/10/2016
#
#
#
# +
import numpy as np
A_det = np.matrix('10 0; -2 100') #A-matrix
B_det = np.matrix('1 10') #B-matrix
f = np.matrix('1000; 0') #Functional unit vector f
g_LCA = B_det * A_det.I * f
print("The deterministic result is:", g_LCA[0,0])
# -
# ### Step 1: Calculate partial derivatives
# NB: this is a vectorized implementation of the MatLab code that was originally written by <NAME> & <NAME>
#
#
# +
s = A_det.I * f #scaling vector s: inv(A_det)*f
Lambda = B_det * A_det.I  #B_det*inv(A)
dgdA = -(s * Lambda).T #Partial derivatives A-matrix
Gamma_A = np.multiply((A_det/g_LCA), dgdA) #For free: the multipliers of the A-matrix
print("The multipliers of the A-matrix are:")
print(Gamma_A)
dgdB = s.T #Partial derivatives B-matrix
Gamma_B = np.multiply((B_det/g_LCA), dgdB) #For free too: the multipliers of the B-matrix
print("The multipliers of the B-matrix are:")
print(Gamma_B)
# -
# ### Step 2: Determine output variance
# +
CV = 0.05 #Coefficient of variation set to 5% (CV = sigma/mu)
var_A = np.power(abs(CV*A_det),2) #Variance of the A-matrix (var =sigma^2)
var_B = np.power(abs(CV*B_det),2) #Variance of the B-matrix
P = np.concatenate((np.reshape(dgdA, 4), dgdB), axis=1) #P contains partial derivatives of both A and B
var_P = np.concatenate((np.reshape(var_A, 4), var_B), axis=1) #var_P contains all variances of each parameter in A and B
var_g = np.sum(np.multiply(np.power(P, 2), var_P)) #Total output variance (first order Taylor)
print("The total output variance equals:", var_g)
# -
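As a sanity check on the first-order Taylor estimate, the analytic output variance can be compared against a simple Monte Carlo simulation. This is a sketch, not part of the original procedure: it assumes every entry of A and B is independently normally distributed with the same 5% coefficient of variation used above.

```python
import numpy as np

A_det = np.array([[10.0, 0.0], [-2.0, 100.0]])
B_det = np.array([[1.0, 10.0]])
f = np.array([[1000.0], [0.0]])
CV = 0.05

rng = np.random.default_rng(0)
n_runs = 20000
g_samples = np.empty(n_runs)
for k in range(n_runs):
    # perturb every entry of A and B with an independent normal, sigma = CV * |mu|
    A = A_det + CV * np.abs(A_det) * rng.standard_normal(A_det.shape)
    B = B_det + CV * np.abs(B_det) * rng.standard_normal(B_det.shape)
    g_samples[k] = (B @ np.linalg.solve(A, f))[0, 0]

print("Monte Carlo mean of g:", g_samples.mean())     # close to the deterministic 120
print("Monte Carlo variance of g:", g_samples.var())  # close to the analytic var_g
```

For small CVs the sampled variance agrees well with the first-order Taylor estimate; the gap grows as the CV increases, which is the main limitation of the analytic method.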
# ### Step 3: Calculate the contribution to the output variance by the individual input parameters
KIA = np.multiply(np.power(P, 2), var_P)/var_g
KIA_procent = [KIA[0][0,k]*100 for k in range(0,6)]
print("The contribution to the output variance of each input parameter equals (in %):")
print(KIA_procent)
# +
#Visualize: make a bar plot
import matplotlib.pyplot as plt
x_label=[ 'A(1,1)','A(1,2)', 'A(2,1)', 'A(2,2)', 'B(1,1)', 'B(1,2)']
x_pos = range(6)
plt.bar(x_pos, KIA_procent, align='center')
plt.xticks(x_pos, x_label)
plt.title('Global sensitivity analysis: squared standardized regression coefficients')
plt.ylabel('KIA (%)')
plt.xlabel('Parameter')
plt.show()
# -
| Code/KIA_LCA_evelynegroen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
with open("/notebooks/js_script_prechunks.json", "r") as ptj:
    sources = ptj.readlines()
from collections import defaultdict
from tqdm import tqdm, trange
chunks = defaultdict(list)
num = 0
# for l in tqdm(sources[:500]):
for ii in trange(int(len(sources)/1000)):
    for l in tqdm(sources[ii*1000:(ii+1)*1000]):
        if '+++++\n' in l:
            num += 1
        else:
            chunks[num].append(l)
            first_line = chunks.get(num)
            if first_line is not None:
                l_splits = first_line[0].split(" ")
                # for s in l_splits:
                #     print(f"\t\t========{s}")
    # with open("/content/trial.txt", "a+") as f:
    with open(f"/notebooks/sberbank_rugpts/huge_dataset_{ii}.txt", "a+") as f:
        for j in range(len(chunks)):
            for i in range(max(len(chunks[j]), 3)):
                prompt = str(''.join(chunks[j][:i]))
                # a reasonable chunk of code for completion
                lng = 5
                completion = str(''.join(chunks[j][i:i+lng]))
                if not (prompt == '' or completion == ''):
                    f.write(f"{prompt}\n {completion}\n")
                first_line = chunks.get(j)
                if first_line is not None:
                    l_splits = first_line[0].split(" ")
                    for s in range(len(l_splits)):
                        # print(f"\t\t========{l_splits[s]}")
                        prompt = str(" ".join(l_splits[:s]))
                        completion = str(" ".join(l_splits[s:]))
                        if not (prompt == '' or completion == ''):
                            f.write(f"{prompt}\n {completion}\n")
| dataset_chunks_creator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/bommankondapraveenkumar/datadime/blob/master/fib.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="qfsT6jPCLsJd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="9eb2cc1c-6335-4f7d-b156-f10a7f7ad8fb"
def Fib(n):
    if n <= 0:  # also reject n == 0, otherwise the recursion below never terminates
        print("Incorrect input")
    elif n == 1:
        return 0
    elif n == 2:
        return 1
    else:
        return Fib(n-1) + Fib(n-2)

print(Fib(int(input())))
# + id="eTm25tQKREHe" colab_type="code" colab={}
# + id="FHXF3w2kN8Be" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="8e28736a-f263-4080-b417-a555df485279"
#prog_1:-nth fibonacci
n=int(input("which term? "))
f=[]
f.insert(0,0)
f.insert(1,1)
for i in range(2, n+1):
    f.insert(i, f[i-1] + f[i-2])
print(f"{n}th term is {f[n-1]}")
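The recursive `Fib` above recomputes the same subproblems exponentially many times. A hypothetical memoized variant (using `functools.lru_cache`, and keeping the same 1-indexed convention where the 1st term is 0) runs in linear time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # same convention as Fib above: fib_memo(1) == 0, fib_memo(2) == 1
    if n < 1:
        raise ValueError("n must be a positive integer")
    if n == 1:
        return 0
    if n == 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(10))  # 34
print(fib_memo(50))  # 7778742049, computed instantly thanks to the cache
```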
| fib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.3 32-bit
# name: python383jvsc74a57bd003e80f00e8a0b9204e9c928296cca598eb1fee14ca0655e081f97bf8e0459b57
# ---
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
pd.options.mode.chained_assignment = None
class_df.columns
# +
# Comparing drug use against grades
relevant_cols = ['If you have used recreational drugs, which ones? (If you have not used drugs, indicate as such)',
'What was your final grade for 101?',
'What was your final grade for 101L?',
'What was your final grade for 111? ',
'What was your final grade for 113?',
'What was your final grade for 121?',
'What was your final grade for 161?',
'What was your final grade for 181?']
drugs_grades = class_df[relevant_cols]
drugs_grades.head()
# +
def get_average(row):
    return (row['What was your final grade for 101?']*0.5 + row['What was your final grade for 101L?']*0.5
            + row['What was your final grade for 113?'] *0.5 + row['What was your final grade for 121?'] + row['What was your final grade for 161?']
            + row['What was your final grade for 181?'] + row['What was your final grade for 111? '])/5.5
# drugs_grades = drugs_grades.assign(Average=drugs_grades.mean(axis=1))
drugs_grades['Average'] = drugs_grades.apply(get_average, axis=1)
drugs_grades.head()
# +
drugs_grades = drugs_grades[drugs_grades['What was your final grade for 101?'].notna()]
drugs_grades.head(10)
# -
drugs_grades.groupby('If you have used recreational drugs, which ones? (If you have not used drugs, indicate as such)').mean()
drugs_grades = drugs_grades.reset_index()
drugs_grades.head()
# +
drugs_grades.loc[drugs_grades['If you have used recreational drugs, which ones? (If you have not used drugs, indicate as such)'] != 'I did not use drugs', 'If you have used recreational drugs, which ones? (If you have not used drugs, indicate as such)'] = 'I used drugs'
drugs_grades.head(10)
# +
drugs_grades = drugs_grades[drugs_grades['Average'].notna()]
drugs_grades.head(20)
# -
index = ['I used drugs', 'I did not use drugs']
average_per_drugs = (drugs_grades.groupby('If you have used recreational drugs, which ones? (If you have not used drugs, indicate as such)').mean()
.sort_values(by='Average'))
# average_per_drugs = average_per_drugs.reset_index()
average_per_drugs
# +
sns.set(rc={'figure.figsize':(7.5,10)})
ax = sns.boxplot(x='If you have used recreational drugs, which ones? (If you have not used drugs, indicate as such)',
y='Average',
data = drugs_grades)
ax.set_ylabel("Average")
ax.set_xlabel("Drugs")
ax.set_title("Drug Usage vs Grade Average")
ax.set(ylim=(60,100))
# -
drugs_grades
ax = average_per_drugs[['Average']].plot(kind='bar', title ="Drug Usage vs Grade Average", figsize=(15, 10), legend=True, fontsize=12)
ax.set_xlabel("Drugs", fontsize=12)
ax.set_ylabel("Average", fontsize=12)
plt.show()
| Lifestyle/Drugs _v_grades.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import random
from collections import defaultdict
from ast import literal_eval
from collections import Counter
import re
import unicodedata
# from py_functions.nlp_preprocessing import *
# from py_functions.topic_modeling import *
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS, CountVectorizer
import spacy
import pickle
from sklearn.metrics.pairwise import cosine_similarity
from textblob import TextBlob
import seaborn as sns
import matplotlib.pyplot as plt
import textstat
from semantic_text_similarity.models import WebBertSimilarity
# sp_nlp = spacy.load('en_core_web_sm')
pd.set_option("display.max_rows", 50)
pd.set_option("display.max_columns", None)
pd.set_option('display.max_colwidth', None)
# pd.reset_option('display.max_colwidth')
# %load_ext autoreload
# %autoreload 2
# -
df = pd.read_csv("Data/data_NLP_round1.csv")
# + jupyter={"outputs_hidden": true}
df[df.title == 'Trump Picks State Department Spokeswoman Heather Nauert As Next UN Ambassador']
# -
df.info()
# + jupyter={"outputs_hidden": true}
df.title.unique()
# -
df_expanded = pd.read_csv('Data/paras_expanded_ready_for_modeling.csv')
df_sentences = pd.read_csv('Data/sent_expanded_ready_for_modeling.csv')
df_expanded.head()
polarity = lambda x: TextBlob(x).sentiment.polarity
subjectivity = lambda x: TextBlob(x).sentiment.subjectivity
df_news = df_expanded.groupby(['number','global_bias'], as_index = False).agg({'title':(lambda x: x.iloc[0]), 'news_source':(lambda x: x.iloc[0]),
'text_ascii':(lambda x: x.iloc[0]),'text_final':(lambda x: ','.join(x))})
df_news = df_expanded.drop_duplicates(subset=['number','global_bias'], ignore_index=True)
df_news['polar'] = df_news.text_ascii.map(polarity)
df_news['subje'] = df_news.text_ascii.map(subjectivity)
# +
# count_excl_quest_marks = X.apply(lambda x: self.count_regex(r'!|\?', x))
# -
df_news.groupby('global_bias')
df_news.news_source.value_counts()
plt.figure(figsize=(10,10))
sns.scatterplot(x='subje', y = 'polar', data=df_news[(df_news.news_source == 'Fox News (Online News)') | (df_news.news_source == 'Washington Post')], hue = 'global_bias')
plt.ylim([-0.2,0.4])
plt.xlim([0.1,0.6])
# +
# df_news['grade_level'] = df_news.text_ascii.map(textstat.text_standard)
# +
df_news[df_news.news_source == 'New York Times (News)'].grade_level.value_counts()
# sns.barplot(x='grade_level', y='grade_level' ,data = df_news[df_news.news_source == 'New York Times (News)'])
# +
pd.set_option('display.max_colwidth', None)
df_news.title.sample()
# -
df_news[df_news.number == 1353][['number','global_bias','title','news_source','text_ascii']].values
df_news[df_news.number == 2501]
df_news.iloc[580][['number','global_bias','title','news_source','text_ascii']]
| 3-Text_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Learning Introduction
# > A very gentle introduction to the Deep Neural networks and some of it's terminology.
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [jupyter]
# - image: ../images/neuron.jpeg
# ## Introduction
#
# Deep Learning is a technique to extract and transform data from an input dataset by using a `deep` network of neural network layers. "Deep" means that the number of layers is large, possibly more than 100.
# Layers in any deep neural network fall into one of the following categories:
# - **Input Layer:** This is the layer where the input is applied to the network.
# - **Hidden Layers:** These are all the layers between the input layer and the output layer of a neural network. Each layer has multiple neurons (described in the next section). A neuron applies weights (linear function) to the input received and directs it through an activation function (non-linear function). Each hidden layer receives input as the output of the previous layer, applies transformations on the input, and gives output to the next layer.
# - **Output Layer:** This layer computes the output of the network in the format we want. E.g. In classification problem if there are C classes, then generally output layer gives a C length vector containing probabilities for each class, and we predict the class with the highest probability.
#
# The output of the neural network is compared against the true-output, and a loss-value is calculated using a loss function.
#
# A loss function takes as input the network's output $(\hat{y})$ and the true output $(y)$ and computes a scalar value which depicts our happiness or unhappiness with the result. E.g. if we have 5 classes, i.e. $C=5$, and we get $y=2$ but $\hat{y}=1$, it means that our network classifies the input into class 1, while the ground truth says it belongs to class 2. To give feedback of our *unhappiness* to the network, the loss value should be a high positive number. If $\hat{y} = y$ then the loss should be 0. A high positive loss value means *unhappiness* and vice-versa, because the network tries to minimize the loss value, as we will see in the next sections.
#
# An image of a neural network, with 3 hidden layers (which is not so deep) is shown below. Here each node is a neuron and edges are weights.
# 
# +
#hide
# +
#hide
# -
# ## Neuron
# A neuron is the fundamental block of a neural network. Each neuron has weights, bias, and an activation function associated with it, as shown in the figure below. It receives inputs $x_i$, each input $x_i$ is multiplied with weight $w_i$, and bias $b$ is added to the final product, therefore $sum = \Sigma{(w_i.x_i)} + b$.
# Now activation function $\phi()$ is applied on the sum, so $y = \phi( \Sigma{(w_i.x_i)} + b )$. Some most commonly used activation functions are RELU, sigmoid, etc.
#
# 
# +
#hide
# -
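The formula $y = \phi(\Sigma{(w_i.x_i)} + b)$ can be written directly with NumPy. This is a toy sketch with ReLU as the activation; the weights, bias, and inputs are made up for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def neuron(x, w, b, phi=relu):
    # y = phi(sum_i(w_i * x_i) + b)
    return phi(np.dot(w, x) + b)

x = np.array([1.0, 2.0, 3.0])    # inputs
w = np.array([0.5, -1.0, 0.25])  # weights
b = 0.5                          # bias
print(neuron(x, w, b))  # relu(0.5 - 2.0 + 0.75 + 0.5) = relu(-0.25) = 0.0
```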
# ## Parameters Updation
# When we feed an input into the neural network then it gives output $\hat{y}$. Let's have $y$ as ground truth label, and loss is $L = f(y, \hat{y})$, where $f()$ is our loss function.
# We know that layer $L_i$ takes input as an output of layer $L_{i-1}$, which in turn takes input from the output of layer $L_{i-2}$, and so on. The point is that layer $L_i$ output depends on all the layers before it. Therefore the final neural network output $y$ could be thought of as a complex function taking all the network parameters (weights and bias of all neurons of all the layers) as input to that function.
# Mathematically, if $N()$ is a neural network function involving all parameters, and input is x, then loss $L = f(y, N(x))$
#
# Now we can compute the derivative of L w.r.t. each parameter of the neural network, $\frac{\partial{L}}{\partial{p}}$, for every parameter p of the network. $\frac{\partial{L}}{\partial{p}}$ gives the direction of steepest ascent of the loss L w.r.t. parameter p, i.e. the direction in which a small change in p increases L the most. Therefore, if we move p in the exact opposite direction, that is the direction of steepest descent, and L will decrease the most. So, we can update parameter p as:
# $p = p - \alpha\frac{\partial{L}}{\partial{p}}$, where $\alpha$ is known as the learning rate, the length of the step we take in the steepest descent direction.
#
# This is known as the classic Gradient Descent Algorithm for parameters updation. We can update all the parameters in a similar way, i.e. by computing gradient of loss L w.r.t. to a parameter, and then applying Gradient Descent Algorithm
#
#
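The update rule above can be seen on a one-parameter toy loss (an illustrative example, not from the post): for $L(p) = (p-3)^2$ the gradient is $2(p-3)$, and gradient descent converges to the minimum at $p = 3$.

```python
def gradient_descent(p0, grad, alpha=0.1, steps=100):
    p = p0
    for _ in range(steps):
        p = p - alpha * grad(p)  # move against the gradient (steepest descent)
    return p

grad = lambda p: 2.0 * (p - 3.0)  # gradient of L(p) = (p - 3)^2
print(gradient_descent(p0=0.0, grad=grad))  # converges to ~3.0
```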
# ## Network Training
# Suppose we have set of n training examples as $\{(x_1, y_1), (x_2, y_2), ... , (x_n, y_n)\}$, where $x_i$ is the $i^{th}$ training example and $y_i$ is the true class label for $i^{th}$ training example. We can initialize all the network parameters with randomly small values, and update them after each iteration. The training steps could be defined as:
# - Computing neural network output on $x_1, x_2, ..., x_n$.
# - Computing loss as $f(y_1, \hat{y_1}), f(y_2, \hat{y_2}), ..., f(y_n, \hat{y_n}) $, where $\hat{y_1}$ is class predicted by the neural network, and taking average value of loss
# - We compute gradients of average loss w.r.t. all the network parameters and update their values so as to minimize the loss.
#
# Keep on repeating the above steps until the loss converges.
# Going through all the training examples for once is known as 1 epoch. We can continue to train for multiple epochs until the loss converges.
#
# Once training is done, we'll end up with such network weights which are far better than initial random weights in prediction, and we can use the same weights for inference on new unseen data.
#
#
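The three training steps can be sketched end-to-end on a toy problem. This is an illustrative example, not from the post: a single-neuron "network" $\hat{y} = wx + b$ with squared loss, trained by full-batch gradient descent to recover $w=2$, $b=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * X + 1.0                        # ground truth: w = 2, b = 1

w, b = 0.0, 0.0                          # initial (here: zero) parameters
alpha = 0.1                              # learning rate
for epoch in range(500):                 # one pass over all examples = 1 epoch
    y_hat = w * X + b                    # step 1: network output on all examples
    loss = np.mean((y_hat - y) ** 2)     # step 2: average loss
    dw = np.mean(2.0 * (y_hat - y) * X)  # step 3: gradients of the average loss...
    db = np.mean(2.0 * (y_hat - y))
    w -= alpha * dw                      # ...and the parameter updates
    b -= alpha * db

print(w, b, loss)  # w ~ 2.0, b ~ 1.0, loss ~ 0
```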
# ## Conclusion
# I have covered a very basic understanding of deep learning models and terminology. In practice, the models which are deployed in production are very advanced and different, but the fundamental ideas remain the same. We will delve into a lot more deep learning topics in later posts.
# +
#hide
import cv2
filename = 'images/blog11_1.png'
img = cv2.imread(filename,cv2.IMREAD_COLOR)
print(img.shape)
img1 = cv2.resize(img, (600,450))
cv2.imshow("image", img1)
cv2.waitKey(4000)
cv2.destroyAllWindows()
cv2.imwrite(filename, img1)
# -
| _notebooks/2020-10-14-DLforCNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8
# language: python
# name: py38
# ---
# ## What is a Variable?
#
# A variable is any characteristic, number, or quantity that can be measured or counted. They are called 'variables' because the value they take may vary, and it usually does. The following are examples of variables:
#
# - Age (21, 35, 62, ...)
# - Gender (male, female)
# - Income (GBP 20000, GBP 35000, GBP 45000, ...)
# - House price (GBP 350000, GBP 570000, ...)
# - Country of birth (China, Russia, Costa Rica, ...)
# - Eye colour (brown, green, blue, ...)
# - Vehicle make (Ford, Volkswagen, ...)
#
# Most variables in a data set can be classified into one of two major types:
#
# - **Numerical variables**
# - **Categorical variables**
#
# ===================================================================================
#
# ## Numerical Variables
#
# The values of a numerical variable are numbers. They can be further classified into:
#
# - **Discrete variables**
# - **Continuous variables**
#
#
# ### Discrete Variable
#
# In a discrete variable, the values are whole numbers (counts). For example, the number of items bought by a customer in a supermarket is discrete. The customer can buy 1, 25, or 50 items, but not 3.7 items. It is always a round number. The following are examples of discrete variables:
#
# - Number of active bank accounts of a borrower (1, 4, 7, ...)
# - Number of pets in the family
# - Number of children in the family
#
#
# ### Continuous Variable
#
# A variable that may contain any value within a range is continuous. For example, the total amount paid by a customer in a supermarket is continuous. The customer can pay GBP 20.50, GBP 13.10, GBP 83.20 and so on. Other examples of continuous variables are:
#
# - House price (in principle, it can take any value) (GBP 350000, 57000, 100000, ...)
# - Time spent surfing a website (3.4 seconds, 5.10 seconds, ...)
# - Total debt as percentage of total income in the last month (0.2, 0.001, 0, 0.75, ...)
#
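The distinction is easy to see on synthetic data (a made-up sketch, separate from the demo below): a count variable only ever takes a handful of whole-number values, while a continuous one takes essentially a different value for every observation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
toy = pd.DataFrame({
    "number_of_pets": rng.integers(0, 5, size=1000),  # discrete: counts 0, 1, 2, 3, 4
    "income": rng.normal(40000, 10000, size=1000),    # continuous: any value in a range
})

print(toy["number_of_pets"].nunique())  # 5 -- only whole-number counts appear
print(toy["income"].nunique())          # 1000 -- essentially every value is distinct
```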
# =============================================================================
#
# ## In this demo: Peer to peer lending (Finance)
#
# In this demo, we will use a toy data set which simulates data from a peer-to-peer finance company to inspect discrete and continuous numerical variables.
#
# - You should have downloaded the **Datasets** together with the Jupyter notebooks in **Section 1**.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# +
# let's load the dataset
# Variable definitions:
#-------------------------
# disbursed_amount: loan amount given to the borrower
# interest: interest rate
# income: annual income
# number_open_accounts: open accounts (more on this later)
# number_credit_lines_12: accounts opened in the last 12 months
# target: loan status(paid or being repaid = 1, defaulted = 0)
data = pd.read_csv('../loan.csv')
data.head()
# -
data.info()
# ### Continuous Variables
# +
# let's look at the values of the variable disbursed_amount
# this is the amount of money requested by the borrower
# this variable is continuous, it can take in principle
# any value
data['disbursed_amount'].unique()
# +
# let's make a histogram to get familiar with the
# distribution of the variable
fig = data['disbursed_amount'].hist(bins=50)
fig.set_title('Loan Amount Requested')
fig.set_xlabel('Loan Amount')
fig.set_ylabel('Number of Loans')
# -
# The values of the variable vary across the entire range of loan amounts typically disbursed to borrowers. This is characteristic of continuous variables.
# +
# let's do the same exercise for the variable interest rate,
# which is the interest charged by the finance company to the borrowers
# this variable is also continuous, it can take in principle
# any value within the range
data['interest'].unique()
# +
# let's make a histogram to get familiar with the
# distribution of the variable
fig = data['interest'].hist(bins=30)
fig.set_title('Interest Rate')
fig.set_xlabel('Interest Rate')
fig.set_ylabel('Number of Loans')
# -
# We see that the values of the variable vary continuously across the variable range. The values are the interest rate charged to borrowers.
# +
# Now, let's explore the income declared by the customers,
# that is, how much they earn yearly.
# this variable is also continuous
fig = data['income'].hist(bins=100)
# for better visualisation, I display only specific
# range in the x-axis
fig.set_xlim(0, 400000)
# title and axis legends
fig.set_title("Customer's Annual Income")
fig.set_xlabel('Annual Income')
fig.set_ylabel('Number of Customers')
# -
# The majority of salaries are concentrated towards values in the range 30-70k, with only a few customers earning higher salaries. The values of the variable, vary continuously across the variable range, because this is a continuous variable.
# ### Discrete Variables
# Let's explore the variable "Number of open credit lines in the borrower's credit file" (number_open_accounts in the dataset).
#
# This variable represents the total number of credit items (for example, credit cards, car loans, mortgages, etc) that is known for that borrower.
#
# By definition it is a discrete variable, because a borrower can have 1 credit card, but not 3.5 credit cards.
# +
# let's inspect the values of the variable
# this is a discrete variable
data['number_open_accounts'].dropna().unique()
# +
# let's make a histogram to get familiar with the
# distribution of the variable
fig = data['number_open_accounts'].hist(bins=100)
# for better visualisation, I display only specific
# range in the x-axis
fig.set_xlim(0, 30)
# title and axis legends
fig.set_title('Number of open accounts')
fig.set_xlabel('Number of open accounts')
fig.set_ylabel('Number of Customers')
# -
# Histograms of discrete variables have this typical broken shape, as not all the values within the variable range are present in the variable. As I said, the customer can have 3 credit cards, but not 3.5 credit cards.
#
# Let's look at another example of a discrete variable in this dataset: **Number of installment accounts opened in past 12 months** ('number_credit_lines_12' in the dataset).
#
# Installment accounts are those that at the moment of acquiring them, there is a set period and amount of repayments agreed between the lender and borrower. An example of this is a car loan, or a student loan. The borrower knows that they will pay a fixed amount over a fixed period, for example 36 months.
# +
# let's inspect the variable values
data['number_credit_lines_12'].unique()
# +
# let's make a histogram to get familiar with the
# distribution of the variable
fig = data['number_credit_lines_12'].hist(bins=50)
fig.set_title('Number of installment accounts opened in past 12 months')
fig.set_xlabel('Number of installment accounts opened in past 12 months')
fig.set_ylabel('Number of Borrowers')
# -
# The majority of the borrowers have none or 1 installment account, with only a few borrowers having more than 2.
# ### A variation of discrete variables: the binary variable
#
# Binary variables, are discrete variables, that can take only 2 values, therefore binary.
# +
# A binary variable, can take 2 values. For example in
# the variable "target":
# either the loan is defaulted (1) or not (0)
data['target'].unique()
# +
# let's make a histogram, although histograms for
# binary variables do not make a lot of sense
fig = data['target'].hist()
fig.set_xlim(0, 2)
fig.set_title('Defaulted accounts')
fig.set_xlabel('Defaulted')
fig.set_ylabel('Number of Loans')
# -
# As we can see, the variable shows only 2 values, 0 and 1, and the majority of the loans are OK.
#
# **That is all for this demonstration. I hope you enjoyed the notebook, and see you in the next one.**
print(f'{data.target.value_counts(normalize=True)*100}')
data.target.value_counts(normalize=True)
data.target.value_counts(normalize=False)
| Section-02-Types-of-Variables/02.1-Numerical-Variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from numpy import pi
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize']=(10,8)
plt.rcParams['font.size']=14
plt.rcParams['image.cmap']='plasma'
plt.rcParams['axes.linewidth']=2
from cycler import cycler
cols = plt.get_cmap('tab10').colors
plt.rcParams['axes.prop_cycle']=cycler(color=cols)
def plot_2d(m, title=''):
    plt.imshow(m)
    plt.xticks([])
    plt.yticks([])
    plt.title(title)
N=200
t=np.arange(0,N)
trend=0.001*(t-100)**2
p1,p2=20,30
periodic1=2*np.sin(2*pi*t/p1)
periodic2=0.75*np.sin(2*pi*t/p2)
np.random.seed(123)
noise=2*(np.random.rand(N)-0.5)
F = trend+periodic1+periodic2+noise
plt.plot(t,F,lw=2.5)
plt.plot(t,trend,alpha=0.75)
plt.plot(t,periodic1,alpha=0.75)
plt.plot(t,periodic2,alpha=0.75)
plt.plot(t,noise,alpha=0.75)
plt.legend(['Toy Series ($F$)','Trend','Periodic #1','Periodic #2','Noise'])
plt.xlabel('$t$')
plt.ylabel('$F(t)$')
plt.title('The Toy Time Series and its Components')
plt.show()
# -
| tools/.ipynb_checkpoints/ssa_toy-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="YhrD_hhFj5n2"
# ## Computer Networks and the Internet, Pace University, Spring 2021
# ## Last update: 2.9.2021
# + [markdown] id="NyFWzOjoj5n9"
# ## Last class:
#
# 1. Programming language -> Interpreter -> Machine language
# 1. IDLE (editor + interpreter), we will stick to Jupyter notebooks for now
# 1. Interactive mode vs. Script mode
# 1. Python3 instead of Python2
# 1. No type in declaration, type conversion on the fly
# 1. `import sys` for command line arguments
#
# + [markdown] id="fTCRB_Ruj5n-"
# * print function - prints a textual representation to the console
# + id="Y718XiZrj5n_" outputId="847fecbb-d286-4ee4-ca67-8165d4d2e803" colab={"base_uri": "https://localhost:8080/"}
print ("Hello World")
# + id="lXFoWaAVj5oA"
# + [markdown] id="_m3_n3RPn9cf"
# * How about reading input?
# + colab={"base_uri": "https://localhost:8080/", "height": 200} id="W76dyYUXoD9F" outputId="523d488e-f315-49dc-a9ad-735fd5ac7ff7"
m = int(input("enter a positive integer "))
print(m)
# + [markdown] id="mfcnc7TOj5oA"
# ### Variables, types
# + [markdown] id="chsYinDbj5oB"
# * int - integers: ..., -3, -2, -1, 0, 1, 2, 3, ...
# + id="6wH7uUSEj5oB"
# + id="x4cm48ZJj5oC"
# + [markdown] id="P2P74ebmj5oC"
# * float - floating point numbers, decimal point fractions: -3.2, 1.5, 1e-8, 3.2e5
# + id="zjZnkpNlj5oD"
# + [markdown] id="nivpFqmej5oD"
# * str - character strings, text: "intro2CS", 'python'
# + id="zVRPr6Vej5oD"
# + id="J-9wvi4Lj5oD"
# + [markdown] id="t_TnZB1Sj5oE"
# * bool - boolean values: True and False
# + id="Nbm0J5gyj5oE"
# + [markdown] id="GhmN7LSsj5oE"
# ## Operators
# + [markdown] id="8QBejY-8j5oF"
# ### Mathematical operators
# + [markdown] id="qt608Sn-j5oF"
# Addition:
# + id="bzDNbaC1j5oF"
# + id="JE8MPU3ij5oF"
# + id="l5hrIKHCj5oG"
# + [markdown] id="U_PHeGHZj5oG"
# Subtraction:
# + id="dI8fDROzj5oG"
# + [markdown] id="STulJz9xj5oG"
# Multiplication:
# + id="6IipYZkij5oH"
# + [markdown] id="tRI2PyZOj5oH"
# Division - float and integral with / and //:
# + id="n2kjIwFej5oH"
# + [markdown] id="uCXN6uKQj5oH"
# Power:
# + id="8XA4n2afj5oH"
# + [markdown] id="NMAttXt2j5oI"
# Modulo:
# + id="F1VGV5Jgj5oI"
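A compact sketch of the arithmetic operators above (the operand values are illustrative):

```python
# Each arithmetic operator applied to small example values
add_r = 7 + 3     # 10
sub_r = 7 - 3     # 4
mul_r = 7 * 3     # 21
fdiv_r = 7 / 2    # 3.5  (float division)
idiv_r = 7 // 2   # 3    (integer division)
pow_r = 2 ** 10   # 1024
mod_r = 7 % 2     # 1
print(add_r, sub_r, mul_r, fdiv_r, idiv_r, pow_r, mod_r)
```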
# + [markdown] id="JvLju325j5oI"
# ### String operators
# + [markdown] id="422YY4eHj5oI"
# String concatenation using +:
# + id="8X28u9Goj5oJ"
# + [markdown] id="PTo4CnOFj5oJ"
# String duplication using *:
# + id="Zu62mwj4j5oJ"
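A short sketch of both string operators (the strings are illustrative):

```python
# + concatenates strings, * duplicates them
course = "intro" + "2CS"   # 'intro2CS'
cheer = "na" * 4           # 'nananana'
print(course, cheer)
```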
# + [markdown] id="ZLW-b1Vkj5oJ"
# Strings vs. numbers:
# + id="A2YJvIp5j5oJ"
# + id="uBh9AUBaj5oK"
# + id="h4Jdy3SIj5oK"
# + id="_B6_98s5j5oK"
# + [markdown] id="UESrbKFbj5oK"
# ### Comparisons
# + id="RCIYPLjbj5oL"
# + id="13DdkRIqj5oL"
# + id="djXSCW8jj5oL"
# + id="0o6s93_Gj5oL"
# + id="-F6rhr4yj5oL"
# + id="hh0K_u6Pj5oM"
# + id="PMmDD0nrj5oM"
# + id="Ho0Zhdigj5oM"
# + id="Ga7woZufj5oM" outputId="0dd0303d-da3c-4c6e-ab47-ac192ff8a366"
# + id="oaPNm6Kpj5oN"
# + id="vqRqdzuFj5oN"
# + id="aBYyJZodj5oN"
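A minimal sketch of the comparison operators; each expression evaluates to a `bool`:

```python
a, b = 3, 5
less = a < b          # True
equal = a == b        # False
not_equal = a != b    # True
chained = 1 < a < b   # comparisons can be chained: True
print(less, equal, not_equal, chained)
```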
# + [markdown] id="rjP8Q2bjj5oN"
# ### Logical operators
# + [markdown] id="CaggIpaJj5oO"
# * not:
# + id="XsXPgmVkj5oO"
# + [markdown] id="mxRaK9gaj5oO"
# * and:
# + id="9DamX5X4j5oO"
# + id="kwtZ8PJgj5oO"
# + id="ix3rEcQrj5oP"
# + [markdown] id="lZz649J_j5oP"
# * or:
# + id="Krr8rstmj5oP"
# + id="jH-PqZ-Kj5oP"
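A small sketch combining `not`, `and`, and `or` (the values are illustrative):

```python
x = 7
in_range = (x > 0) and (x < 10)    # True
negated = not in_range             # False
either = (x < 0) or (x % 2 == 1)   # True; `or` stops at the first truthy operand
print(in_range, negated, either)
```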
# + [markdown] id="OCtr8yXCj5oP"
# ## Conversions
# + [markdown] id="A_h3dBNrj5oQ"
# Use the functions `int()`, `float()`, and `str()` to convert between types (we will talk about *functions* next time):
# + id="36JMH80Sj5oQ"
# + id="SGeAAh0Hj5oQ"
# + id="W7go6_CRj5oR"
# + id="zL4PNo7Pj5oR"
# + id="4VPmmkvfj5oS"
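A sketch of the three conversion functions chained together:

```python
i = int("42")    # str -> int
f = float(i)     # int -> float
s = str(f)       # float -> str: '42.0'
print(i, f, s)
```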
# + [markdown] id="r07mN9Kwj5oS"
# ## Flow control
# + [markdown] id="CbHVYiTzj5oS"
# ### Conditional statements
# + [markdown] id="M1-_2hJsj5oS"
# The `if` condition formula - replace conditions and statements with meaningful code:
#
# if *condition*:
# *statement*
# *statement*
# ...
# elif *condition*: # 0 or more elif clauses
# *statement*
# *statement*
# ...
# else: # optional
# *statement*
# *statement*
#
# Example:
# + id="1-aasBKHj5oS"
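One possible example for the formula above, classifying the sign of a number (the variable names are illustrative):

```python
n = -7
if n > 0:
    sign = "positive"
elif n < 0:
    sign = "negative"
else:
    sign = "zero"
print(sign)
```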
# + [markdown] id="jvZt3Vu0j5oT"
# ### Loops
# + [markdown] id="jLWROAIcj5oT"
# * While:
# + [markdown] id="ZeQjCZgKj5oT"
# while *condition*:
# *statement*
# *statement*
#
# Example - count how many times 0 appears in an integer number:
# + id="wI2JDgwej5oU"
# + id="iNH8r8A5j5oU"
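One possible `while` solution for counting zeros in an integer (it assumes `num` is a positive integer, as noted later in this notebook):

```python
num = 1020304
count = 0
n = num
while n > 0:
    if n % 10 == 0:   # the last digit is 0
        count += 1
    n //= 10          # drop the last digit
print(count)
```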
# + [markdown] id="JPvcCAAxj5oU"
# * For:
# + [markdown] id="n0q1EpLbj5oV"
# for *variable* in *iterable*:
# *statement*
# *statement*
#
# Example - solve the same problem with a `str` type instead of `int`:
# + id="Csg1OlCzj5oV"
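A possible `for` solution that walks over the string form of the number:

```python
num = 1020304
count = 0
for digit in str(num):
    if digit == "0":
        count += 1
print(count)
```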
# + [markdown] id="g9aDlLQ7j5oV"
# Builtin solution:
# + id="v5g_eo2Ij5oV"
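The builtin `str.count` solves the same problem in one call:

```python
num = 1020304
count = str(num).count("0")
print(count)
```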
# + [markdown] id="Vw9kYfGaj5oW"
# ### Efficiency
# + [markdown] id="eRN-PSKjj5oW"
# We can measure which solution is faster:
# + id="55jJ7PUdj5oW"
# + id="cXpM_Xpbj5oW"
# + id="vxqtC0_5j5oW"
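One way to time the alternatives with the standard `timeit` module (absolute numbers depend on the machine):

```python
import timeit

num = 1020304
t_builtin = timeit.timeit(lambda: str(num).count("0"), number=10_000)
t_for = timeit.timeit(lambda: sum(1 for d in str(num) if d == "0"), number=10_000)
print(f"builtin: {t_builtin:.4f}s  for-loop: {t_for:.4f}s")
```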
# + [markdown] id="qTEvznxCj5oX"
# The builtin solution is 4 times faster than the `for` solution, which is 3 times faster than the `while` solution.
# + [markdown] id="hxyNYDfRj5oX"
# ### Other notes
# + [markdown] id="VocwueEOj5oX"
# * The `while` solution will not work for `num <= 0`
# * The `while` solution will not work for non-numerals (e.g, `num = "<NAME> is awesome!"`)
# * The builtin solution is implemented with C and that is why it is faster
# + [markdown] id="vXxWaz1Ho5Q9"
#
# + [markdown] id="MCfg0w0docIt"
# # Exercise: Collatz Conjecture
#
# * The [Collatz Conjecture](http://en.wikipedia.org/wiki/Collatz_conjecture) (also known as the *3n+1* conjecture) is the conjecture that the following process is finite for every natural number:
# > If the number $n$ is even, divide it by two ($n/2$); if it is odd, multiply it by 3 and add 1 ($3n+1$). Repeat this process until you get the number 1.
#
#
#
#
# + [markdown] id="5koLxNtzoysp"
#
# ## Implementation
# We start with the "Half Or Triple Plus One" process:
# + id="dFAI1hC4pFA1"
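One possible implementation of the process for a single starting number (the starting value is illustrative):

```python
n = 6
steps = 0
while n != 1:
    if n % 2 == 0:
        n = n // 2       # even: halve
    else:
        n = 3 * n + 1    # odd: triple plus one
    steps += 1
print(steps)  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
```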
# + [markdown] id="ezkA6RabpFkc"
# ## Next
# we add another loop that will run the conjecture check on a range of numbers:
# + id="fnyeKb2ApUDG"
# + [markdown] id="FtdmIjWYpTus"
# `range` goes hand in hand with the `for` loop
# + id="bg2OZhrlpF8R"
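A sketch of the outer loop over a range of starting numbers; each number that reaches 1 is recorded:

```python
reached = []
for start in range(1, 11):
    n = start
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    reached.append(start)
print(reached)
```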
# + [markdown] id="RPdi26IFqQC5"
# # Lists
#
# * Lists are sequences of values.
# * Lists can contain a mix of types:
# + id="-ieTVEp4qVNS"
# + [markdown] id="4U-ExGS7qYrd"
# Lists are indexable, starting at 0:
# + id="y4CcgU08qZX9"
# + [markdown] id="9HOG6m5Pqe51"
# Negative indices are counted from the tail:
# + id="JvrWn7Qqqfxm"
# + id="4JZ71Xh5qjpV"
# + [markdown] id="_WhqBEUJqkMf"
# Lists can be sliced:
# + id="vqi-LmUoqnn6"
# + id="Un21Eqp1qn3r"
# + id="HRCnZkBBqq5d"
# + id="UqluQKXbqrEL"
# + id="ysrOz-yZqrPh"
# + id="nH6SePjvqrZN"
# + id="F45wqAJQqrkP"
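A sketch of indexing and slicing on a mixed-type list:

```python
items = ["intro", 2, "CS", 3.14, True]
first = items[0]      # 'intro'
last = items[-1]      # True (negative index counts from the tail)
middle = items[1:4]   # [2, 'CS', 3.14]
print(first, last, middle)
```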
# + [markdown] id="qK9-Knc2qsPO"
# Lists can be concatenated
# + id="nJPkR5w9quyy"
# + id="pDe2fkwhqu_y"
# + id="yz0GgcClqvLC"
# + [markdown] id="YHwvz1Nqq0HL"
# Lists have a rich set of functions
# + id="SHM646hPrLmE"
# + id="HzXYhz1trLw7"
# + id="5QSZmSlyrL6S"
# + [markdown] id="yAT9g6VErMNG"
# Lists are iterable
# + id="cRIGGJ8srOLj"
# + id="kr1-uIKzrOZ0"
# + id="F-yzEbLmrSjH"
# + id="Ucd2tynJr25w"
# + id="JmC_xu_lrSsR"
# + id="1Njn9EYHrS1G"
# + id="iUOoiN0-rS9P"
# + [markdown] id="ti7qxChirTh_"
# **NEW**
# ### A list can be created using a list comprehension. The syntax is:
#
# [**expression** for **variable** in **iterable** if **condition**]
#
# The `if` **condition** part is optional; both the **expression** and the **condition** can use the **variable**.
#
# Example: Create a list of the squares of numbers between 1 and 10:
# + id="JYmPK0SErjGv"
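One possible comprehension for the squares example:

```python
squares = [x ** 2 for x in range(1, 11)]
print(squares)
```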
# + [markdown] id="QTLK86D-rpBT"
# Try: Create a list of the square roots of odd numbers between 1 and 20:
# + id="HKTcAo2CrspV"
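A possible answer to the exercise, using the optional `if` part:

```python
roots = [x ** 0.5 for x in range(1, 21) if x % 2 == 1]
print(roots)
```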
# + [markdown] id="RgbYW6jzuCTE"
# # Exercise: Grades problem
#
# Given a list of grades, count how many are above the average.
# + id="54fWPS5SuLdK"
grades = [33, 55,45,87,88,95,34,76,87,56,45,98,87,89,45,67,45,67,76,73,33,87,12,100,77,89,92]
# + [markdown] id="wPu1gIlKuWE5"
# Use `for` loop
#
# + id="zoZ_elC6uPT2"
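One possible `for`-loop answer (two passes: one for the average, one for the count):

```python
grades = [33, 55, 45, 87, 88, 95, 34, 76, 87, 56, 45, 98, 87, 89, 45,
          67, 45, 67, 76, 73, 33, 87, 12, 100, 77, 89, 92]
average = sum(grades) / len(grades)
above = 0
for g in grades:
    if g > average:
        above += 1
print(above)
```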
# + [markdown] id="DWhBZpr4uY05"
# How about trying **list comprehension**?
# + id="OMOLlPthudQ8"
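The same count as a one-line list comprehension:

```python
grades = [33, 55, 45, 87, 88, 95, 34, 76, 87, 56, 45, 98, 87, 89, 45,
          67, 45, 67, 76, 73, 33, 87, 12, 100, 77, 89, 92]
average = sum(grades) / len(grades)
above = len([g for g in grades if g > average])
print(above)
```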
# + [markdown] id="rMxHoOIUujnX"
# # Functions!
# ## Extremely simple syntax
# `def func_name(list of params):`
#
# + id="lawJmiWCvBEU"
# + id="UxH8csHsvBQ9"
# + id="Kz0g-L-bvBbQ"
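A minimal sketch of the syntax, wrapping the grades logic into a reusable function (the name is illustrative):

```python
def count_above(values, threshold):
    """Count how many values exceed the threshold."""
    return len([v for v in values if v > threshold])

print(count_above([1, 5, 10], 4))
```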
# + [markdown] id="FuoSEbOMj5oX"
# ## Credits:
#
# This notebook is part of the [Extended introduction to computer science](http://tau-cs1001-py.wikidot.com/) course at Tel-Aviv University.
#
# The notebook was written using Python 3.2 and IPython 0.13.1.
#
# This work is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/).
| python_materials/3_1 basic condition loops and funcs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.8 64-bit
# language: python
# name: python3
# ---
# +
import btk
import cv2 as cv
import dataset_generators as dgen
import datetime
import matplotlib.pyplot as plt
import numpy as np
import os
import random
import tensorflow as tf
import time
from PIL import Image, ImageDraw
from tensorflow import keras
from keras import layers
tf.config.experimental.set_memory_growth(tf.config.experimental.list_physical_devices('GPU')[0], True)
plt.style.use(os.environ['style'])
# +
#Demo set
""" tx, ty = dgen.gen_tdet_data(1, dgen.evalfonts, dgen.chars, True)
for i, x in enumerate(tx):
display(Image.fromarray(x))
display(ty[i]) """
""" num = 90000
for i, x in enumerate(valx_1[num:num + 10]):
display(Image.fromarray(x))
display(valy_1[i]) """
# +
trainx_1, trainy_1, valx_1, valy_1 = btk.depickler('trainx-1645175114', 'trainy-1645175114', 'valx-1645175114', 'valy-1645175114', 'ocr')
trainx_2, trainy_2, valx_2, valy_2 = btk.depickler('trainx-1645175477', 'trainy-1645175477', 'valx-1645175477', 'valy-1645175477', 'ocr')
trainx_3, trainy_3, valx_3, valy_3 = btk.depickler('trainx-1645175777', 'trainy-1645175777', 'valx-1645175777', 'valy-1645175777', 'ocr')
""" trainx, trainy = dgen.gen_tdet_data(21500, dgen.trainfonts, dgen.chars, True)
valx, valy = dgen.gen_tdet_data(7150, dgen.evalfonts, dgen.chars, False)
trainx = np.array([np.flip(x, random.randint(0, 1)) if random.randint(0, 3) == 0 else x for x in trainx])
valx = np.array([np.rot90(x, random.randint(1, 3)) if random.randint(0, 3) == 0 else x for x in valx])
btk.pickle_set(trainx, trainy, valx, valy, 'ocr') """
# +
def model_init():
inp = keras.Input(shape=(64, 64, 1))
x = layers.Rescaling(1.0/255)(inp)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization(axis=3)(x)
for features in [32, 64, 128]:
res = x
x = layers.Conv2D(features, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization(axis=3)(x)
x = layers.Conv2D(features, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization(axis=3)(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Dropout(0.1)(x)
res = layers.Conv2D(features, 1, strides=2, activation='relu')(res)
x = layers.add([x, res])
x = layers.BatchNormalization(axis=3)(x)
res = x
x = layers.Conv2D(features, 3, padding='same', activation='relu')(x)
x = layers.add([x, res])
x = layers.BatchNormalization(axis=3)(x)
x = layers.Conv2D(128, 3, padding='same')(x)
x = layers.BatchNormalization(axis=3)(x)
x = layers.Activation("relu")(x)
x = layers.GlobalMaxPooling2D()(x)
x = layers.Dropout(0.3678)(x)
outp = layers.Dense(1, activation='sigmoid')(x)
opti = keras.optimizers.Adam(learning_rate=0.001)
mod = keras.Model(inp, outp, name='tseg')
mod.compile(optimizer=opti, loss='binary_crossentropy', metrics=['accuracy'])
return mod
tdet = model_init()
tdet.summary()
# -
if "tnum" not in locals():
tnum = 0
measure = 'accuracy'
imgent = btk.DataGen(trainx_1, trainy_1, 144)
imgenv = btk.DataGen(valx_1, valy_1, 48)
log_dir = f"tblogs/detection/0/{tnum}/"
tbcall = keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
tnum += 1
tdet = model_init()
tstart = time.time()
history = tdet.fit(
x = imgent,
validation_data=imgenv,
epochs=64,
steps_per_epoch=32,
validation_steps=32,
callbacks=[tbcall],
verbose=0
)
tstop = time.time()
ttime = tstop - tstart
btk.mlstats(history.history, measure, ttime)
# +
if 'best_score' not in locals():
best_score = 0.9
metric = 'accuracy'
log_dir = "tblogs/detection/1/"
for i in range(7):
if i in [0, 3, 6]:
trainx_1 = np.array([np.flip(x, random.randint(0, 1)) for x in trainx_1])
valx_1 = np.array([np.rot90(x, random.randint(1, 3)) for x in valx_1])
tgen = btk.DataGen(trainx_1, trainy_1, 144)
vgen = btk.DataGen(valx_1, valy_1, 48)
elif i in [1, 4]:
trainx_2 = np.array([np.flip(x, random.randint(0, 1)) for x in trainx_2])
valx_2 = np.array([np.rot90(x, random.randint(1, 3)) for x in valx_2])
tgen = btk.DataGen(trainx_2, trainy_2, 144)
vgen = btk.DataGen(valx_2, valy_2, 48)
elif i in [2, 5]:
trainx_3 = np.array([np.flip(x, random.randint(0, 1)) for x in trainx_3])
valx_3 = np.array([np.rot90(x, random.randint(1, 3)) for x in valx_3])
tgen = btk.DataGen(trainx_3, trainy_3, 144)
vgen = btk.DataGen(valx_3, valy_3, 48)
tbcall = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
estop = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=32, restore_best_weights=True)
calls = [tbcall]
if i == 6:
calls = [estop, tbcall]
history = tdet.fit(
x = tgen,
validation_data=vgen,
epochs=64,
steps_per_epoch=32,
validation_steps=32,
callbacks=calls,
verbose=0
)
score = (max(history.history.get(f"val_{metric}")) + (2.7182818**-(min(history.history.get("val_loss"))))
+ (sum(history.history.get(f"val_{metric}")[-7:]) / 7) + (sum(2.7182818**-(np.array(history.history.get("val_loss")[-7:]))) / 7)
) / 4
btk.mlstats(history.history, metric)
if score > best_score:
print(f'Model Updated New Rating: {score} | Old Rating: {best_score}')
tdet.save('models\\tdet-3')
best_score = score
# -
tdet = tf.keras.models.load_model('models\\tdet-2')
#tdet.save('models\\tdet-1')
# +
def contour_bounds(img) -> list[tuple[int, int, int, int]]:
conts = cv.findContours(img, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)[0]
#display(htree)
#display(conts)
#imc = cv.drawContours(cv.cvtColor(img, cv.COLOR_GRAY2BGR), conts, -1, (0,255,0), 2)
#display(Image.fromarray(imc))
bounds = []
for x in conts:
xset = [y[0][0] for y in x]
yset = [y[0][1] for y in x]
bounds.append((min(yset), max(yset), min(xset), max(xset)))
bounds = [x for x in bounds if (x[1] - x[0]) * (x[3] - x[2]) > 512 and x[1] - x[0] > 6]
return bounds
def expand_coordinates(bounds: tuple[int, ...], idims: tuple[int, int], stretch: tuple[float, float] = (0.085, 0.085)) -> tuple[int, ...]:
"""Expands coordinate inputs slightly to account for imprecise text detection"""
for i, x in enumerate(bounds):
ygap = (x[1] - x[0]) * stretch[0]
xgap = (x[3] - x[2]) * stretch[1]
bounds[i] = [round(x[0] - (ygap * 1.5)), round(x[1] + (ygap * 0.5)), round(x[2] - xgap), round(x[3] + xgap)]
for i, x in enumerate(bounds):
if x[0] < 0:
bounds[i][0] = 0
if x[1] > idims[0]:
bounds[i][1] = idims[0]
if x[2] < 0:
bounds[i][2] = 0
if x[3] > idims[1]:
bounds[i][3] = idims[1]
return bounds
def sat_check(img):
img = cv.Canny(img, 50, 200, apertureSize=3)
img = cv.GaussianBlur(img, (9, 1), 9)
img = cv.GaussianBlur(img, (1, 5), 5)
img = cv.morphologyEx(img, cv.MORPH_CLOSE, np.ones((2, 2), np.uint8), iterations=4)
img[img > 0] = 255
if np.sum(img) / (img.size * 255) > 0.5:
return True
def drawbox(img: Image, coords: list) -> Image:
for x in coords:
ImageDraw.Draw(img).rectangle((x[2], x[0], x[3], x[1]), outline=127, width=3)
return img
def txt_detect(img: np.ndarray, sdims: tuple[int, int], sens: float, sfactor: float = True) -> list[tuple[np.ndarray, float]]:
"""
Use ML model to determine if text appears within an image
Args:
img (np.ndarray): Image to analyze
sens (float): Sensitivity threshold for text detection
Returns:
        list[tuple[np.ndarray, float]]: Coordinates and scores of regions that have text within the image
"""
if sfactor:
sfactor = img.size
oshape = img.shape
sc_coeff = (sfactor / img.size)**0.5
img = btk.resize(img, (round(img.shape[0] * sc_coeff), round(img.shape[1] * sc_coeff)))
img = btk.fit2dims(img, sdims)
coord_scale = ((oshape[0] / img.shape[0], oshape[1] / img.shape[1]))
slices = btk.img_slicer(img, sdims, sdims, 2)
coords = btk.gen_index(img.shape, sdims, sdims, 2)
predictions = tdet.predict(slices)
tcords = []
for i, x in enumerate(predictions):
if x > sens:
tcords.append((coords[i], x[0]))
for i, x in enumerate(tcords):
tcords[i] = (np.array([x[0][0] * coord_scale[0],
x[0][1] * coord_scale[0],
x[0][2] * coord_scale[1],
x[0][3] * coord_scale[1]]).astype('uint16'), x[1])
return tcords
def txt_heat_mapper(img: np.ndarray, sdims: tuple[int, int], sens: float) -> np.ndarray:
"""Generates a heatmap for text locations on an image"""
img = btk.fit2dims(img, sdims)
iarea = img.size
score_card = np.zeros((img.shape[0], img.shape[1]), dtype='float16')
""" if iarea < 1048576:
cords = txt_detect(img, sdims, sens - ((1 - sens) - ((1 - sens) ** 1.666)), iarea * 4)
for x in cords:
score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] = score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] + (np.ones((x[0][1] - x[0][0], x[0][3] - x[0][2])) * x[1]) """
if iarea < 2097152:
cords = txt_detect(img, sdims, sens - ((1 - sens) - ((1 - sens) ** 1.333)), iarea * 2)
for x in cords:
score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] = score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] + (np.ones((x[0][1] - x[0][0], x[0][3] - x[0][2])) * x[1])
cords = txt_detect(img, sdims, sens, iarea)
for x in cords:
score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] = score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] + (np.ones((x[0][1] - x[0][0], x[0][3] - x[0][2])) * x[1])
if iarea > 2097152:
cords = txt_detect(img, sdims, sens ** 0.666, iarea / 2)
for x in cords:
score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] = score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] + (np.ones((x[0][1] - x[0][0], x[0][3] - x[0][2])) * x[1])
""" if iarea > 4194304:
cords = txt_detect(img, sdims, sens ** 0.333, iarea / 4)
for x in cords:
score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] = score_card[x[0][0]:x[0][1], x[0][2]:x[0][3]] + (np.ones((x[0][1] - x[0][0], x[0][3] - x[0][2])) * x[1]) """
""" img = btk.fit2dims(img, (64, 64))
combined = cv.addWeighted(img, 0.3, score_card.astype('uint8') * 50, 0.7, 1)
display(Image.fromarray(combined)) """
return score_card
def textract_images(img: np.ndarray, coords: tuple[int, ...]) -> list[np.ndarray]:
    """Extract image regions defined by coordinate list input"""
imarr = np.array(img)
extracted = [imarr[x[0]:x[1], x[2]:x[3]] for x in coords]
return extracted
def trimmer(iar):
img = np.array(iar)
hgram = np.histogram(img.flatten(), 255, (0, 255))
pix_mode = hgram[0].argmax()
if pix_mode > 127:
img = np.invert(img)
pix_mode = 255 - pix_mode
img = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 21, 29)
img = np.invert(img)
img = cv.medianBlur(img, 3)
img[img < 127] = 0
img[img > 0] = 255
img = cv.Sobel(img, cv.CV_64F, 0, 1, ksize=-1)
img = cv.Sobel(img, cv.CV_64F, 0, 1, ksize=1)
img = cv.Sobel(img, cv.CV_64F, 1, 0, ksize=1)
img = cv.Sobel(img, cv.CV_64F, 1, 0, ksize=-1)
img = np.uint8(np.absolute(img))
img = cv.medianBlur(img, 3)
img = cv.Sobel(img, cv.CV_64F, 0, 1, ksize=-1)
img = np.uint8(np.absolute(img))
img = cv.erode(img, np.ones((2, 2)), iterations=1)
img = cv.GaussianBlur(img, (9, 1), 9, 1)
img = cv.morphologyEx(img, cv.MORPH_OPEN, np.ones((2, 2), np.uint8), iterations=2)
img[img > 0] = 255
img = cv.GaussianBlur(img, (7, 1), 7, 1)
img = cv.morphologyEx(img, cv.MORPH_CLOSE, np.ones((2, 2), np.uint8), iterations=1)
img[img > 0] = 255
bounds = contour_bounds(img)
bounds = expand_coordinates(bounds, iar.shape, (0.06, 0.03))
for x in bounds.copy():
if x[1] - x[0] + 6 > x[3] - x[2]:
if x[3] - x[2] < 128:
bounds.remove(x)
for x in bounds.copy():
for y in bounds.copy():
if x[0] > y[0] and x[1] < y[1] and x[2] > y[2] and x[3] < y[3]:
try:
bounds.remove(x)
                except ValueError:
                    pass
return [(iar[x[0]:x[1], x[2]:x[3]], x) for x in bounds]
def get_text(img: np.ndarray, sdims: tuple[int, int], sens: float) -> list[np.ndarray]:
"""
Identify and extract image regions containing text
Args:
img (np.ndarray): Image
sdims (tuple[int, int]): Dimensions of the image slices
sens (float): Sensitivity of the detection model
Returns:
list[np.ndarray]: Image regions containing text in array form
"""
iar = btk.grey_np(img)
img = cv.adaptiveThreshold(iar, 254, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 17, 15)
img = np.invert(img)
img = cv.medianBlur(img, 3)
img[img < 127] = 0
img[img > 0] = 255
mtrx = txt_heat_mapper(img, sdims, sens)
img = np.rot90(img, 2)
mtrx += np.rot90(txt_heat_mapper(img, sdims, sens), 2)
img = cv.adaptiveThreshold(iar, 254, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 17, 21)
img = cv.medianBlur(img, 3)
mtrx += txt_heat_mapper(img, sdims, sens)
img = np.rot90(img, 2)
mtrx += np.rot90(txt_heat_mapper(img, sdims, sens), 2)
mtrx = np.array(mtrx, dtype='uint8')
img = btk.fit2dims(iar, (64, 64))
""" combined = cv.addWeighted(img, 0.3, mtrx * 15, 0.7, 1)
display(Image.fromarray(combined)) """
hgram = np.histogram(mtrx.flatten(), 9)
mtrx[mtrx < hgram[1][hgram[0].argmin()] - 1] = 0
mtrx[mtrx > 0] = 255
mtrx = cv.GaussianBlur(mtrx, (67, 31), 49)
mtrx[mtrx > 0] = 255
boxes = contour_bounds(mtrx)
for x in boxes.copy():
for y in boxes.copy():
if x[0] > y[0] and x[1] < y[1] and x[2] > y[2] and x[3] < y[3]:
try:
boxes.remove(x)
                except ValueError:
                    pass
iar = btk.fit2dims(iar, (64, 64))
boxes = [(iar[x[0]:x[1], x[2]:x[3]], x) for x in boxes]
txt_areas = {}
for x in boxes:
for y in trimmer(x[0]):
txt_areas.update({(y[1][0] + x[1][0], y[1][1] + x[1][0], y[1][2] + x[1][2], y[1][3] + x[1][2]): y[0]})
return txt_areas
# +
tlist = [6, 10, 12, 32, 37]
for x in tlist:
with Image.open(f"testimgs\\t{x}.png") as f:
iar = np.array(f)
im = Image.fromarray(iar)
test = get_text(iar, (64, 64), 0.95)
if test:
for y in test.values():
display(Image.fromarray(y))
| detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# <div class="contentcontainer med left" style="margin-left: -50px;">
# <dl class="dl-horizontal">
# <dt>Title</dt> <dd> HLine Element</dd>
# <dt>Dependencies</dt> <dd>Matplotlib</dd>
# <dt>Backends</dt> <dd><a href='./HLine.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/HLine.ipynb'>Bokeh</a></dd>
# </dl>
# </div>
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
# The ``HLine`` element is a type of annotation that marks a position along the y-axis. Here is an ``HLine`` element that marks the mean of a points distributions:
# %%opts HLine (color='blue' linewidth=6) Points (color='#D3D3D3')
xs = np.random.normal(size=100)
ys = np.random.normal(size=100) * xs
hv.Points((xs,ys)) * hv.HLine(ys.mean())
| examples/reference/elements/matplotlib/HLine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Unit Testing `GiRaFFE_NRPy`: Fluxes of $\tilde{S}_i$
#
# ### Author: <NAME>
#
# This notebook validates our new, NRPyfied HLLE solver against the function from the original `GiRaFFE` that calculates the flux for $\tilde{S}_i$ according to the method of Harten, Lax, van Leer, and Einfeldt (HLLE), assuming that we have calculated the values of the flux on the cell faces according to the piecewise-parabolic method (PPM) of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf), modified for the case of GRFFE.
#
# **Module Status:** <font color=green><b> Validated: </b></font> This code has passed unit tests against the original `GiRaFFE` version.
#
# **Validation Notes:** Once this is completed, it will show the validation of [Tutorial-GiRaFFE_NRPy_Ccode_library-Stilde-flux](../Tutorial-GiRaFFE_NRPy_Ccode_library-Stilde-flux.ipynb).
#
# It is, in general, good coding practice to unit test functions individually to verify that they produce the expected and intended output. Here, we expect our functions `GRFFE__S_*__flux.C` to produce identical output to the function `GRFFE__S_i__flux.C` in the original `GiRaFFE`. It should be noted that the two codes handle the parameter `flux_dirn` (the direction in which the code is presently calculating the flux through the cell) differently; in the original `GiRaFFE`, the function `GRFFE__S_i__flux()` expects a parameter `flux_dirn` with value 1, 2, or 3, corresponding to the functions `GRFFE__S_0__flux()`, `GRFFE__S_1__flux()`, and `GRFFE__S_2__flux()`, respectively, in `GiRaFFE_NRPy`.
# We'll write this in C because the codes we want to test are already written that way, and we would like to avoid modifying the files as much as possible. To do so, we will write the C code as a string literal, and then print it to a file. We will begin by including core functionality. We will also define standard parameters needed for GRFFE and NRPy+.
out_string = """
// These are common packages that we are likely to need.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "string.h" // Needed for strncmp, etc.
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#include <time.h> // Needed to set a random seed.
// Standard GRFFE parameters:
const double GAMMA_SPEED_LIMIT = 10.0;
const int Nxx_plus_2NGHOSTS[3] = {1,1,1};
// Parameter to avoid division by zero:
const double TINYDOUBLE = 1.0e-100;
// Standard NRPy+ memory access:
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
"""
# Now, we'll manually write the gridfunction `#define`s that NRPy+ needs.
out_string += """
// Let's also #define the NRPy+ gridfunctions
#define ALPHA_FACEGF 0
#define GAMMADET_FACEGF 1
#define GAMMA_FACEDD00GF 2
#define GAMMA_FACEDD01GF 3
#define GAMMA_FACEDD02GF 4
#define GAMMA_FACEDD11GF 5
#define GAMMA_FACEDD12GF 6
#define GAMMA_FACEDD22GF 7
#define GAMMA_FACEUU00GF 8
#define GAMMA_FACEUU11GF 9
#define GAMMA_FACEUU22GF 10
#define BETA_FACEU0GF 11
#define BETA_FACEU1GF 12
#define BETA_FACEU2GF 13
#define VALENCIAV_RU0GF 14
#define VALENCIAV_RU1GF 15
#define VALENCIAV_RU2GF 16
#define B_RU0GF 17
#define B_RU1GF 18
#define B_RU2GF 19
#define VALENCIAV_LU0GF 20
#define VALENCIAV_LU1GF 21
#define VALENCIAV_LU2GF 22
#define B_LU0GF 23
#define B_LU1GF 24
#define B_LU2GF 25
#define U4UPPERZERO_LGF 26
#define U4UPPERZERO_RGF 27
#define STILDE_FLUXD0GF 28
#define STILDE_FLUXD1GF 29
#define STILDE_FLUXD2GF 30
#define NUM_AUXEVOL_GFS 31
"""
# The last things NRPy+ will require are the definition of type `REAL` and, of course, the functions we are testing. These files are generated on the fly.
# +
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
out_dir = "Validation"
cmd.mkdir(out_dir)
import GiRaFFE_HO.Stilde_flux as stflux
# Let's also add some standard NRPy+ macros here. We can also use IDX3 here
# to define and index that we'll keep reusing.
Stilde_flux_string_pre = """
//-----------------------------------------------------------------------------
// Compute the flux for advecting S_i .
//-----------------------------------------------------------------------------
static inline void GRFFE__S_i__flux(const int i0,const int i1,const int i2, REAL *auxevol_gfs) {
int idx = IDX3(i0,i1,i2);
"""
# We'll need a post-string as well to close the function.
Stilde_flux_string_post = """
}
"""
# We will pass values of the gridfunction on the cell faces into the function. This requires us
# to declare them as C parameters in NRPy+. We will denote this with the _face infix/suffix.
alpha_face,gammadet_face = gri.register_gridfunctions("AUXEVOL",["alpha_face","gammadet_face"])
gamma_faceDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gamma_faceDD","sym01")
gamma_faceUU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gamma_faceUU","sym01")
beta_faceU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","beta_faceU")
# We'll need some more gridfunctions, now, to represent the reconstructions of BU and ValenciavU
# on the right and left faces
Valenciav_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_rU",DIM=3)
B_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_rU",DIM=3)
Valenciav_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_lU",DIM=3)
B_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_lU",DIM=3)
# And the function to which we'll write the output data:
Stilde_fluxD = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Stilde_fluxD",DIM=3)
# And now, we'll write the files
DIM = 3
for flux_dirn in range(DIM):
out_string_NRPy = Stilde_flux_string_pre.replace("GRFFE__S_i__flux","GRFFE__S_"+str(flux_dirn)+"__flux")
# Function call goes here
stflux.calculate_Stilde_flux(flux_dirn,True,alpha_face,gammadet_face,gamma_faceDD,gamma_faceUU,beta_faceU,Valenciav_rU,B_rU,Valenciav_lU,B_lU)
Stilde_flux_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Stilde_fluxD0"),rhs=stflux.Stilde_fluxD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Stilde_fluxD1"),rhs=stflux.Stilde_fluxD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Stilde_fluxD2"),rhs=stflux.Stilde_fluxD[2]),\
]
Stilde_flux_kernel = fin.FD_outputC("returnstring",Stilde_flux_to_print,params="outCverbose=False")
out_string_NRPy += Stilde_flux_kernel
out_string_NRPy += Stilde_flux_string_post
with open(os.path.join(out_dir,"GRFFE__S_"+str(flux_dirn)+"__flux.C"), "w") as file:
file.write(out_string_NRPy)
# -
out_string += """
// The NRPy+ versions of the function. These should require relatively little modification.
// We will need this define, though:
#define REAL double
#include "GRFFE__S_0__flux.C"
#include "GRFFE__S_1__flux.C"
#include "GRFFE__S_2__flux.C"
"""
# Next, we'll include the files from the old `GiRaFFE`. But before we can do so, we should define modified versions of the CCTK macros.
out_string += """
#define CCTK_REAL double
#define DECLARE_CCTK_PARAMETERS //
struct cGH{};
const cGH cctkGH;
"""
# We'll also need to download the files in question from the `GiRaFFE` bitbucket repository. This code was originally written by <NAME> in the IllinoisGRMHD documentation; we have modified it to download the files we want. Of note is the addition of the `for` loop since we need three files (The function `GRFFE__S_i__flux()` depends on two other files for headers and functions).
# +
# First download the original IllinoisGRMHD source code
import urllib
original_file_url = ["https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/GiRaFFE/src/GiRaFFE_headers.h",\
"https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/GiRaFFE/src/inlined_functions.C",\
"https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/GiRaFFE/src/GRFFE__S_i__flux.C"\
]
original_file_name = ["GiRaFFE_headers-original.h",\
"inlined_functions-original.C",\
"GRFFE__S_i__flux-original.C"\
]
for i in range(len(original_file_url)):
original_file_path = os.path.join(out_dir,original_file_name[i])
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_file_code = urllib.request.urlopen(original_file_url[i]).read().decode('utf-8')
except:
original_file_code = urllib.urlopen(original_file_url[i]).read().decode('utf-8')
    # Write the downloaded original source code to file
with open(original_file_path,"w") as file:
file.write(original_file_code)
# We add the following lines to append includes to the code we're writing
out_string += """#include \""""
out_string += original_file_name[i]
out_string +=""""
"""
# -
# Now we can write a main function. In this function, we will fill all relevant arrays with (appropriate) random values. That is, if a certain gridfunction should never be negative, we will make sure to only generate positive numbers for it. We must also contend with the fact that in NRPy+, we chose to use the Valencia 3-velocity $v^i_{(n)}$, while in ETK, we used the drift velocity $v^i$; the two are related by $$v^i = \alpha v^i_{(n)} - \beta^i.$$
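# The same conversion can be sketched numerically. A minimal Python example with hypothetical values for the lapse $\alpha$, shift $\beta^i$, and Valencia 3-velocity $v^i_{(n)}$, illustrating the relation the C code below applies to the randomly generated velocities:

```python
# Hypothetical sample values - chosen only to illustrate v^i = alpha*v^i_(n) - beta^i.
alpha = 0.9                    # lapse
betaU = [0.1, -0.2, 0.05]      # shift vector beta^i
valenciavU = [0.3, -0.4, 0.1]  # Valencia 3-velocity v^i_(n)

# Drift velocity, component by component:
driftvU = [alpha * v - b for v, b in zip(valenciavU, betaU)]
print(driftvU)
```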
out_string +="""
int main() {
// We'll define all indices to be 0. No need to complicate memory access
const int i0 = 0;
const int i1 = 0;
const int i2 = 0;
// This is the array to which we'll write the NRPy+ variables.
REAL *auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS);
// These are the arrays to which we will write the ETK variables.
CCTK_REAL METRIC_LAP_PSI4[NUMVARS_METRIC_AUX];
CCTK_REAL Ur[MAXNUMVARS];
CCTK_REAL Ul[MAXNUMVARS];
CCTK_REAL FACEVAL[NUMVARS_FOR_METRIC_FACEVALS];
CCTK_REAL cmax; CCTK_REAL cmin;
CCTK_REAL st_x_flux; CCTK_REAL st_y_flux; CCTK_REAL st_z_flux;
// Now, it's time to make the random numbers.
const long int seed = time(NULL); // seed = 1570632212; is an example of a seed that produces
// bad agreement for high speeds
srand(seed); // Set the seed
printf("seed for random number generator = %ld; RECORD IF AGREEMENT IS BAD\\n\\n",seed);
// We take care to make sure the corresponding quantities have the SAME value.
auxevol_gfs[IDX4(ALPHA_FACEGF, i0,i1,i2)] = (double)rand()/RAND_MAX;
const double alpha = auxevol_gfs[IDX4(ALPHA_FACEGF, i0,i1,i2)];
METRIC_LAP_PSI4[LAPSE] = alpha;
//METRIC_LAP_PSI4[LAPM1] = METRIC_LAP_PSI4[LAPSE]-1;
auxevol_gfs[IDX4(GAMMA_FACEDD00GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*0.2-0.1;
auxevol_gfs[IDX4(GAMMA_FACEDD01GF, i0,i1,i2)] = (double)rand()/RAND_MAX*0.2-0.1;
auxevol_gfs[IDX4(GAMMA_FACEDD02GF, i0,i1,i2)] = (double)rand()/RAND_MAX*0.2-0.1;
auxevol_gfs[IDX4(GAMMA_FACEDD11GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*0.2-0.1;
auxevol_gfs[IDX4(GAMMA_FACEDD12GF, i0,i1,i2)] = (double)rand()/RAND_MAX*0.2-0.1;
auxevol_gfs[IDX4(GAMMA_FACEDD22GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*0.2-0.1;
// Generated by NRPy+:
const double gammaDD00 = auxevol_gfs[IDX4(GAMMA_FACEDD00GF, i0,i1,i2)];
const double gammaDD01 = auxevol_gfs[IDX4(GAMMA_FACEDD01GF, i0,i1,i2)];
const double gammaDD02 = auxevol_gfs[IDX4(GAMMA_FACEDD02GF, i0,i1,i2)];
const double gammaDD11 = auxevol_gfs[IDX4(GAMMA_FACEDD11GF, i0,i1,i2)];
const double gammaDD12 = auxevol_gfs[IDX4(GAMMA_FACEDD12GF, i0,i1,i2)];
const double gammaDD22 = auxevol_gfs[IDX4(GAMMA_FACEDD22GF, i0,i1,i2)];
/*
* NRPy+ Finite Difference Code Generation, Step 2 of 1: Evaluate SymPy expressions and write to main memory:
*/
const double tmp0 = gammaDD11*gammaDD22;
const double tmp1 = pow(gammaDD12, 2);
const double tmp2 = gammaDD02*gammaDD12;
const double tmp3 = pow(gammaDD01, 2);
const double tmp4 = pow(gammaDD02, 2);
const double tmp5 = gammaDD00*tmp0 - gammaDD00*tmp1 + 2*gammaDD01*tmp2 - gammaDD11*tmp4 - gammaDD22*tmp3;
const double tmp6 = 1.0/tmp5;
auxevol_gfs[IDX4(GAMMA_FACEUU00GF, i0, i1, i2)] = tmp6*(tmp0 - tmp1);
auxevol_gfs[IDX4(GAMMA_FACEUU11GF, i0, i1, i2)] = tmp6*(gammaDD00*gammaDD22 - tmp4);
auxevol_gfs[IDX4(GAMMA_FACEUU22GF, i0, i1, i2)] = tmp6*(gammaDD00*gammaDD11 - tmp3);
auxevol_gfs[IDX4(GAMMADET_FACEGF, i0, i1, i2)] = tmp5;
METRIC_LAP_PSI4[PSI6] = sqrt(auxevol_gfs[IDX4(GAMMADET_FACEGF, i0,i1,i2)]);
METRIC_LAP_PSI4[PSI2] = pow(METRIC_LAP_PSI4[PSI6],1.0/3.0);
METRIC_LAP_PSI4[PSI4] = METRIC_LAP_PSI4[PSI2]*METRIC_LAP_PSI4[PSI2];
const double Psim4 = 1.0/METRIC_LAP_PSI4[PSI4];
METRIC_LAP_PSI4[PSIM4] = Psim4;
// Copied from the ETK implementation
CCTK_REAL gtxxL = gammaDD00*Psim4;
CCTK_REAL gtxyL = gammaDD01*Psim4;
CCTK_REAL gtxzL = gammaDD02*Psim4;
CCTK_REAL gtyyL = gammaDD11*Psim4;
CCTK_REAL gtyzL = gammaDD12*Psim4;
CCTK_REAL gtzzL = gammaDD22*Psim4;
/*********************************
* Apply det gtij = 1 constraint *
*********************************/
const CCTK_REAL gtijdet = gtxxL * gtyyL * gtzzL + gtxyL * gtyzL * gtxzL + gtxzL * gtxyL * gtyzL -
gtxzL * gtyyL * gtxzL - gtxyL * gtxyL * gtzzL - gtxxL * gtyzL * gtyzL;
/*const CCTK_REAL gtijdet_Fm1o3 = fabs(1.0/cbrt(gtijdet));
gtxxL = gtxxL * gtijdet_Fm1o3;
gtxyL = gtxyL * gtijdet_Fm1o3;
gtxzL = gtxzL * gtijdet_Fm1o3;
gtyyL = gtyyL * gtijdet_Fm1o3;
gtyzL = gtyzL * gtijdet_Fm1o3;
gtzzL = gtzzL * gtijdet_Fm1o3;*/
FACEVAL[GXX] = gtxxL;
FACEVAL[GXY] = gtxyL;
FACEVAL[GXZ] = gtxzL;
FACEVAL[GYY] = gtyyL;
FACEVAL[GYZ] = gtyzL;
FACEVAL[GZZ] = gtzzL;
FACEVAL[GUPXX] = ( gtyyL * gtzzL - gtyzL * gtyzL )/gtijdet;
FACEVAL[GUPYY] = ( gtxxL * gtzzL - gtxzL * gtxzL )/gtijdet;
FACEVAL[GUPZZ] = ( gtxxL * gtyyL - gtxyL * gtxyL )/gtijdet;
auxevol_gfs[IDX4(BETA_FACEU0GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
const double betax = auxevol_gfs[IDX4(BETA_FACEU0GF, i0,i1,i2)];
FACEVAL[SHIFTX] = betax;
auxevol_gfs[IDX4(BETA_FACEU1GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
const double betay = auxevol_gfs[IDX4(BETA_FACEU1GF, i0,i1,i2)];
FACEVAL[SHIFTY] = betay;
auxevol_gfs[IDX4(BETA_FACEU2GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
const double betaz = auxevol_gfs[IDX4(BETA_FACEU2GF, i0,i1,i2)];
FACEVAL[SHIFTZ] = betaz;
/* Generate physically meaningful speeds */
auxevol_gfs[IDX4(VALENCIAV_RU0GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ur[VX] = alpha*auxevol_gfs[IDX4(VALENCIAV_RU0GF, i0,i1,i2)]-betax;
auxevol_gfs[IDX4(VALENCIAV_RU1GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ur[VY] = alpha*auxevol_gfs[IDX4(VALENCIAV_RU1GF, i0,i1,i2)]-betay;
auxevol_gfs[IDX4(VALENCIAV_RU2GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ur[VZ] = alpha*auxevol_gfs[IDX4(VALENCIAV_RU2GF, i0,i1,i2)]-betaz;
/* Superluminal speeds for testing */
/*auxevol_gfs[IDX4(VALENCIAV_RU0GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*9.0;
Ur[VX] = alpha*auxevol_gfs[IDX4(VALENCIAV_RU0GF, i0,i1,i2)]-betax;
auxevol_gfs[IDX4(VALENCIAV_RU1GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*9.0;
Ur[VY] = alpha*auxevol_gfs[IDX4(VALENCIAV_RU1GF, i0,i1,i2)]-betay;
auxevol_gfs[IDX4(VALENCIAV_RU2GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*9.0;
Ur[VZ] = alpha*auxevol_gfs[IDX4(VALENCIAV_RU2GF, i0,i1,i2)]-betaz;*/
auxevol_gfs[IDX4(B_RU0GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ur[BX_CENTER] = auxevol_gfs[IDX4(B_RU0GF, i0,i1,i2)];
auxevol_gfs[IDX4(B_RU1GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ur[BY_CENTER] = auxevol_gfs[IDX4(B_RU1GF, i0,i1,i2)];
auxevol_gfs[IDX4(B_RU2GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ur[BZ_CENTER] = auxevol_gfs[IDX4(B_RU2GF, i0,i1,i2)];
/* Generate physically meaningful speeds */
auxevol_gfs[IDX4(VALENCIAV_LU0GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ul[VX] = alpha*auxevol_gfs[IDX4(VALENCIAV_LU0GF, i0,i1,i2)]-betax;
auxevol_gfs[IDX4(VALENCIAV_LU1GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ul[VY] = alpha*auxevol_gfs[IDX4(VALENCIAV_LU1GF, i0,i1,i2)]-betay;
auxevol_gfs[IDX4(VALENCIAV_LU2GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ul[VZ] = alpha*auxevol_gfs[IDX4(VALENCIAV_LU2GF, i0,i1,i2)]-betaz;
/* Superluminal speeds for testing */
/*auxevol_gfs[IDX4(VALENCIAV_LU0GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*9.0;
Ul[VX] = alpha*auxevol_gfs[IDX4(VALENCIAV_LU0GF, i0,i1,i2)]-betax;
auxevol_gfs[IDX4(VALENCIAV_LU1GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*9.0;
Ul[VY] = alpha*auxevol_gfs[IDX4(VALENCIAV_LU1GF, i0,i1,i2)]-betay;
auxevol_gfs[IDX4(VALENCIAV_LU2GF, i0,i1,i2)] = 1.0+(double)rand()/RAND_MAX*9.0;
Ul[VZ] = alpha*auxevol_gfs[IDX4(VALENCIAV_LU2GF, i0,i1,i2)]-betaz;*/
auxevol_gfs[IDX4(B_LU0GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ul[BX_CENTER] = auxevol_gfs[IDX4(B_LU0GF, i0,i1,i2)];
auxevol_gfs[IDX4(B_LU1GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ul[BY_CENTER] = auxevol_gfs[IDX4(B_LU1GF, i0,i1,i2)];
auxevol_gfs[IDX4(B_LU2GF, i0,i1,i2)] = (double)rand()/RAND_MAX*2.0-1.0;
Ul[BZ_CENTER] = auxevol_gfs[IDX4(B_LU2GF, i0,i1,i2)];
printf("Valencia 3-velocity (right): %.4e, %.4e, %.4e\\n",auxevol_gfs[IDX4(VALENCIAV_RU0GF, i0,i1,i2)],auxevol_gfs[IDX4(VALENCIAV_RU1GF, i0,i1,i2)],auxevol_gfs[IDX4(VALENCIAV_RU2GF, i0,i1,i2)]);
printf("Valencia 3-velocity (left): %.4e, %.4e, %.4e\\n\\n",auxevol_gfs[IDX4(VALENCIAV_LU0GF, i0,i1,i2)],auxevol_gfs[IDX4(VALENCIAV_LU1GF, i0,i1,i2)],auxevol_gfs[IDX4(VALENCIAV_LU2GF, i0,i1,i2)]);
printf("Below are the numbers we care about. These are the Significant Digits of Agreement \\n");
printf("between the HLLE fluxes computed by NRPy+ and ETK. Each row represents a flux \\n");
printf("direction; each entry therein corresponds to a component of StildeD. Each pair \\n");
printf("of outputs should show at least 10 significant digits of agreement. \\n\\n");
// Now, we'll run the NRPy+ and ETK functions, once in each flux_dirn.
// We'll compare the output in-between each
GRFFE__S_0__flux(0,0,0, auxevol_gfs);
GRFFE__S_i__flux(0,0,0,1,Ul,Ur,FACEVAL,METRIC_LAP_PSI4,cmax,cmin,st_x_flux,st_y_flux,st_z_flux);
printf("SDA: %.1f, %.1f, %.1f\\n",1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)]-st_x_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)])+fabs(st_x_flux))),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)]-st_y_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)])+fabs(st_y_flux))),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)]-st_z_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)])+fabs(st_z_flux))));
GRFFE__S_1__flux(0,0,0, auxevol_gfs);
GRFFE__S_i__flux(0,0,0,2,Ul,Ur,FACEVAL,METRIC_LAP_PSI4,cmax,cmin,st_x_flux,st_y_flux,st_z_flux);
printf("SDA: %.1f, %.1f, %.1f\\n",1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)]-st_x_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)])+fabs(st_x_flux))),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)]-st_y_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)])+fabs(st_y_flux))),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)]-st_z_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)])+fabs(st_z_flux))));
GRFFE__S_2__flux(0,0,0, auxevol_gfs);
GRFFE__S_i__flux(0,0,0,3,Ul,Ur,FACEVAL,METRIC_LAP_PSI4,cmax,cmin,st_x_flux,st_y_flux,st_z_flux);
printf("SDA: %.1f, %.1f, %.1f\\n",1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)]-st_x_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)])+fabs(st_x_flux))),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)]-st_y_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)])+fabs(st_y_flux))),
1.0-log10(2.0*fabs(auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)]-st_z_flux)/(fabs(auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)])+fabs(st_z_flux))));
printf("Note that in the case of very high velocities, numerical error will accumulate \\n");
printf("and reduce agreement significantly due to a catastrophic cancellation. \\n\\n");
/*printf("NRPy+ Results: %.16e, %.16e, %.16e\\n",auxevol_gfs[IDX4(STILDE_FLUXD0GF, i0,i1,i2)],auxevol_gfs[IDX4(STILDE_FLUXD1GF, i0,i1,i2)],auxevol_gfs[IDX4(STILDE_FLUXD2GF, i0,i1,i2)]);
printf("ETK Results: %.16e, %.16e, %.16e\\n\\n",st_x_flux,st_y_flux,st_z_flux);*/
}
"""
# Finally, we will write out the string that we have built as a C file.
# +
import cmdline_helper as cmd
import time
with open(os.path.join(out_dir,"Stilde_flux_unit_test.C"),"w") as file:
file.write(out_string)
# -
# And now, we will compile and run the C code. We also make python calls to time how long each of these steps takes.
# +
print("Now compiling, should take ~2 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(out_dir,"Stilde_flux_unit_test.C"), os.path.join(out_dir,"Stilde_flux_unit_test"))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# os.chdir(out_dir)
print("Now running...\n")
start = time.time()
# cmd.Execute(os.path.join("Stilde_flux_unit_test"))
# !./Validation/Stilde_flux_unit_test
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# os.chdir(os.path.join("../"))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="EghzKHYs20XW"
# # Beginner's Python — Session 7 + 8 Biochemistry Answers
# + id="pW5l74yU20XY"
#Import some modules.
import numpy as np
import matplotlib.pyplot as plt
import ast
import urllib
import os
from IPython.display import Image
from tqdm.notebook import tqdm
# + colab={"base_uri": "https://localhost:8080/"} id="VMmD26ek20XZ" outputId="d84c5f60-94e5-4299-ecfb-1e0fee5e82f5"
# Install RDKit - For Google Colab.
# download & extract
url = 'https://anaconda.org/rdkit/rdkit/2018.09.1.0/download/linux-64/rdkit-2018.09.1.0-py36h71b666b_1.tar.bz2'
# !curl -L $url | tar xj lib
# move to python packages directory
# !mv lib/python3.6/site-packages/rdkit /usr/local/lib/python3.6/dist-packages/
x86 = '/usr/lib/x86_64-linux-gnu'
# !mv lib/*.so.* $x86/
# rdkit need libboost_python3.so.1.65.1
# !ln -s $x86/libboost_python3-py36.so.1.65.1 $x86/libboost_python3.so.1.65.1
import rdkit
# + [markdown] id="f42fLE9t20Xa"
# ## **Building and Manipulating Dictionaries**
# + [markdown] id="9YUwqzvZ20Xa"
# **As covered in session 5 + 6, mRNA has a start codon ('AUG') and stop codons ('UAG', 'UAA', and 'UGG'). Create a dictionary called `markers` containing `'start'` and `'stop'` as keys, and the codons as values.**
# + [markdown] id="wAfTIMw220Xb"
# **HINT:** You can't have multiple instances of the same key; instead, store multiple values as a list tied to said key.
# + id="gFsp5bYB20Xb"
# Create a dictionary called `markers`.
markers = {
'start': 'AUG',
'stop': ['UAG','UAA', 'UGG']
}
# + [markdown] id="yILSRURL20Xb"
# **Print the dictionary**
# + colab={"base_uri": "https://localhost:8080/"} id="DQlD8_kK20Xc" outputId="0b42a44d-f54e-4e97-b4a1-3313823ae3d0"
# Print the dictionary.
print(markers)
# + [markdown] id="JRqOwqeR20Xc"
# **Print the 2nd stop codon from the `markers` dictionary.**
# + [markdown] id="BII8Ncje20Xc"
# **TIP:** When assigning multiple values to a key, the values are stored in a list. Therefore, access using `[relevant index]` as seen before. Multiple square brackets may be required.
# + colab={"base_uri": "https://localhost:8080/"} id="JKJlXxVF20Xd" outputId="0ce86d87-3043-4934-c3d5-bfbc3dd9497d"
# Print the value for the 2nd stop codon.
print(markers['stop'][1])
# + [markdown] id="poz4Rzrw20Xd"
# **There was an error in one of the stop codons. The codon `'UGG'` is meant to be `'UGA'`. Correct this error by reassigning the value for the relevant index of the `'stop'` key in the cell below. Print that index to confirm the changes.**
# + id="yOgyz1YS20Xd"
# Replace the incorrect codon with the correct one.
markers['stop'][2] = 'UGA'
# + colab={"base_uri": "https://localhost:8080/"} id="gqz_DEuP20Xd" outputId="01df3f83-f775-4396-93ca-077bd7572aae"
# Print the index value you corrected.
print(markers['stop'][2])
# + [markdown] id="bFtgZ9nL20Xe"
# ## **DNA Sequencer pt.III**
# + [markdown] id="9xd2Nkk320Xe"
# ### **Extracting an mRNA sequence**
# + [markdown] id="9Gne-GCQ20Xe"
# **Isolating the coding regions of mRNA is extremely important in sequencing; below is code that isolates the coding regions of a given sequence. The line `start = dna.find()` is incomplete: within the brackets should be the 3-letter start codon. Instead of just typing out `'AUG'` in string format, extract the string from the `markers` dictionary defined above.**
# + id="3faZSGkj20Xf"
# Function
def sequence_extract(dna):
if not isinstance(dna,str):
        raise TypeError('Input must be in string format')
new = []
start = dna.find(markers['start'])
for i in range(start+3, len(dna)):
if dna[i:i+3] == "UAG" or dna[i:i+3] == "UAA" or dna[i:i+3] == "UGA":
break
else:
new.append(str(dna[i]))
for i in range(0, len(new)):
print(new[i], end = '')
# + id="3g0kmtRB20Xf"
# Use this short DNA string to test the function
dna1 = "AAAAUGUGCGGUGCGAAAUGCACGGCGAAAACGAAAAAAAAAUAG"
error = 54000
# + [markdown] id="Blvb4cWp20Xf"
# **Edit the `sequence_extract()` function to raise a `TypeError` when the input is not in the string format, choose an error message of your liking. Then run the function on the `error` value.**
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="dE_nf2GZ20Xf" outputId="e3d4e194-6b80-4c47-b274-6df628cc302a"
sequence_extract(error)
# + [markdown] id="dAHO8GFO20Xg"
# **Test the function on `dna1`.**
# + colab={"base_uri": "https://localhost:8080/"} id="qRePFbjW20Xg" outputId="920afccf-7ff2-4ae4-e770-fd066f5923c4"
sequence_extract(dna1)
# + [markdown] id="glxLGbYB20Xg"
# ### **mRNA sequence to amino acid sequence**
# + [markdown] id="t5VHihNe20Xg"
# **In the cell below, a dictionary containing codons and the amino acids they code for has been stored as `dictionary`. Run `ast.literal_eval()` on `dictionary` and immediately store the result as a variable called `codons`. This allows the .txt file to be read by Python as a dictionary rather than just as a string.**
# + id="Yulgo6Lj20Xh"
# Underneath, run ast.literal_eval() on `dictionary`.
import urllib.request
base = 'https://raw.githubusercontent.com/warwickdatasciencesociety/beginners-python/master/session-eight/subject_questions/'
dictionary = urllib.request.urlopen(base+'biochem_resources/mRNA.txt').read().decode('utf-8')
codons = ast.literal_eval(dictionary)
# + [markdown] id="hjY7EZXk20Xh"
# **To test the `codons` dictionary, print the value for the key `'AAA'`; this will be the amino acid it codes for.**
# + colab={"base_uri": "https://localhost:8080/"} id="8it1E_zT20Xh" outputId="06c81d97-6a5e-4ef7-af7d-cc4b247b6aca"
# Print value for "AAA" codon.
print(codons['AAA'])
# + [markdown] id="Fb58OxqE20Xh"
# **Loop through the `codons` dictionary, printing both the keys and values.**
# + [markdown] id="gI81HQzg20Xh"
# **BONUS:** Add the `end = "(your separator of choice)"` argument to your print statement to keep the result compact.
# + colab={"base_uri": "https://localhost:8080/"} id="yaCk8_TD20Xh" outputId="44ead142-7a05-4c3c-e10e-a7baae2e6c1a"
# Loop through the `codons` dictionary.
for codon, amino in codons.items():
print(codon, "codes for ", amino,end = ' - ')
# + [markdown] id="ltIYSo6520Xi"
# **The below function, `translate()` takes an mRNA sequence and returns the amino acid sequence by referencing the `codons` dictionary.**
# + id="cFiHKx_b20Xi"
# Run this cell.
def translate(seq):
table = codons
aa_sequence = ""
if len(seq)%3 == 0:
for i in range(0, len(seq), 3):
codon = seq[i:i + 3]
aa_sequence += table[codon]
return aa_sequence
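# The same translation loop can be sanity-checked on a toy codon table (the three entries below are illustrative stand-ins, not the full `codons` dictionary):

```python
# Toy codon table for illustration only - not the full `codons` dictionary.
toy_table = {'AUG': 'Met', 'UUU': 'Phe', 'AAA': 'Lys'}

def toy_translate(seq, table):
    aa_sequence = ""
    if len(seq) % 3 == 0:  # only translate sequences made of complete codons
        for i in range(0, len(seq), 3):
            aa_sequence += table[seq[i:i + 3]]
    return aa_sequence

print(toy_translate('AUGUUUAAA', toy_table))  # MetPheLys
```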
# + [markdown] id="gbAa0eqL20Xi"
# **The function `read_seq` reads .txt files and extracts their contents. Replace `"\n"` with `""`; this has already been done similarly for `"\r"`.**
# + id="ZqZd0zm220Xi"
# Replace all "\n" (newlines) in the .txt document and replace with "".
def read_seq(inputfile):
base = 'https://raw.githubusercontent.com/warwickdatasciencesociety/beginners-python/master/session-eight/subject_questions/'
seq = urllib.request.urlopen(base+inputfile).read().decode('utf-8')
seq = seq.replace("\n", "")
seq = seq.replace("\r", "")
return seq
# + [markdown] id="tzJdnt8S20Xj"
# **Run the `read_seq` function on the `'biochem_resources/opioid_receptor_mrna.txt'` file.**
# + colab={"base_uri": "https://localhost:8080/", "height": 123} id="x5cnAVnc20Xj" outputId="6bc134e5-a2b6-47d8-ce97-2e9bc02e6dd8"
# Run `read_seq` on the file directory given above.
read_seq('biochem_resources/opioid_receptor_mrna.txt')
# + [markdown] id="Csd0KpoD20Xj"
# **Store the result as a variable and use it to run the `translate()` function. The result will be the amino acid sequence of the $ \delta$ - $\mu$ opioid receptor protein.**
# + colab={"base_uri": "https://localhost:8080/", "height": 71} id="so7yJws320Xj" outputId="e736a5ef-d06b-4653-8d8b-4f485cbe5c93"
# Store result of `read_seq()` as a variable and input it into the `translate()` function.
opioid_rt = read_seq('biochem_resources/opioid_receptor_mrna.txt')
translate(opioid_rt)
# + [markdown] id="BMbjqjBU20Xk"
# ## **Using RDKit**
# + [markdown] id="LJM-pBmH20Xk"
# RDKit is a useful tool for chemists and biologists alike; we will be exploring some of its functionality.
# + id="PWbcnEs320Xk"
# Import some useful RDKit modules.
from rdkit import Chem
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import AllChem
from rdkit import DataStructs
from rdkit.Chem import Descriptors
from rdkit.Chem.Draw import SimilarityMaps
from rdkit import RDLogger
# + id="nlDCB_LU20Xk"
# Displays 2D structures in .svg format rather than .png
IPythonConsole.ipython_useSVG=True
# + [markdown] id="-eR4JBoi20Xk"
# The structures of compounds can be stored in the **SMILES** format, which is extremely useful computationally. Below is the **SMILES String** for **Morphine**. Morphine is an opiate used as pain medication; it targets the **$ \delta$ - $\mu$ opioid receptor** (amino acid sequence obtained in the previous exercise).
# + [markdown] id="eaGbeu5-20Xk"
# **SMILES (morphine):** `'CN1CC[C@]23C4=C5C=CC(O)=C4O[C@H]2[C@@H](O)C=C[C@H]3[C@H]1C5'`
# + [markdown] id="M_qlpFFF20Xl"
# **In the cell below, input the SMILES string into the `Chem.MolFromSmiles()` function and run the cell. It should display the 2D structure of morphine.**
# + colab={"base_uri": "https://localhost:8080/", "height": 171} id="En7hhLml20Xl" outputId="ceb71b54-2882-4668-9277-96dfbc01eb68"
# Displaying morphine's 2D structure.
cpd_1 = Chem.MolFromSmiles('CN1CC[C@]23C4=C5C=CC(O)=C4O[C@H]2[C@@H](O)C=C[C@H]3[C@H]1C5')
cpd_1
# + [markdown] id="4gqgsNXw20Xl"
# RDKit can be used to get the properties of compounds, below is the code to calculate the molecular weight for morphine.
# + colab={"base_uri": "https://localhost:8080/"} id="PRtg3qM_20Xl" outputId="bfe76202-0fd1-4607-f381-76c959e143e8"
# Run this cell.
# Molecular weight.
mw = Descriptors.MolWt(cpd_1)
mw
# + [markdown] id="IwimkWDQ20Xm"
# **Using code similar to the above cell, use `Descriptors.NumValenceElectrons()` to compute the number of valence electrons that morphine has.**
# + colab={"base_uri": "https://localhost:8080/"} id="itsPmC1y20Xm" outputId="a487a851-3dae-44a3-f637-4d38c66ae146"
# Run this cell.
# Valence electrons count.
valence = Descriptors.NumValenceElectrons(cpd_1)
valence
# + [markdown] id="9DwWda8p20Xm"
# Diacetylmorphine is a prodrug; when it enters the body, it is converted to morphine. Thus they have similar structures.
# + [markdown] id="9d_4lQfC20Xm"
# **Using the SMILES string given below for diacetylmorphine (heroin), calculate its molecular weight, the number of valence electrons, and display its 2D structure. Use code similar to the example above for morphine.**
# + [markdown] id="kKSQ0GN520Xm"
# **SMILES (diacetylmorphine):** `'CC(OC1=C(O[C@@H]2[C@]34CCN(C)[C@@H]([C@@H]4C=C[C@@H]2OC(C)=O)C5)C3=C5C=C1)=O'`
# + colab={"base_uri": "https://localhost:8080/", "height": 171} id="0b_GhWIR20Xm" outputId="a20964db-3377-4b3d-b57e-9c61ad42a17f"
# Diacetylmorphine's 2D structure.
cpd_2 = Chem.MolFromSmiles('CC(OC1=C(O[C@@H]2[C@]34CCN(C)[C@@H]([C@@H]4C=C[C@@H]2OC(C)=O)C5)C3=C5C=C1)=O')
cpd_2
# + colab={"base_uri": "https://localhost:8080/"} id="mNJI-1YW20Xn" outputId="c5aba6ba-6bd8-4fbf-ee8f-5a7daceadf5e"
# Molecular weight.
mw = Descriptors.MolWt(cpd_2)
mw
# + colab={"base_uri": "https://localhost:8080/"} id="ge2PqVzR20Xn" outputId="5f82b495-6064-4d09-f98e-f881d753c0f1"
# Valence electrons count.
valence = Descriptors.NumValenceElectrons(cpd_2)
valence
# + [markdown] id="2q_Nqisp20Xn"
# ### **Similar molecules**
# + [markdown] id="eqYSiUI120Xn"
# The ability to quantitatively assess the similarity of molecules based on their structure is an extremely useful concept in the drug discovery process. It enables researchers to find more effective analogues of already existing drugs. Below, the similarity between **morphine** and **diacetylmorphine** is calculated; maximum similarity (same molecule) would yield a value of **1**.
# + colab={"base_uri": "https://localhost:8080/"} id="jq3q5Q0y20Xo" outputId="eb5275f3-2ff3-4dbc-d914-326377357158"
# Run this cell.
ms = [cpd_1, cpd_2]
fps = [Chem.RDKFingerprint(x) for x in ms]
DataStructs.FingerprintSimilarity(fps[0], fps[1])
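# For context: by default, `DataStructs.FingerprintSimilarity` uses the Tanimoto coefficient - the number of 'on' bits two fingerprints share, divided by the number of 'on' bits in either. A minimal sketch of the same idea on plain Python sets of bit positions:

```python
def tanimoto(bits_a, bits_b):
    """Tanimoto coefficient over two sets of 'on' bit positions."""
    union = len(bits_a | bits_b)
    return len(bits_a & bits_b) / union if union else 0.0

print(tanimoto({1, 2, 3, 4}, {2, 3, 4, 5}))  # 3 shared bits / 5 total = 0.6
```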
# + [markdown] id="OYj8yp8i20Xo"
# Using the built-in fingerprinting function from the cell above, we will attempt to write a program that finds the 3 most similar molecules to morphine from a large database of compounds and their respective SMILES strings.
# + [markdown] id="tFHbrO6Y20Xo"
# First the .csv file must be read and processed. The header will not be required, and neither will the final line (as it is empty). These will need to be removed.
# + [markdown] id="aVdHK2DX20Xo"
# **Using the `del` statement in the cell below, remove the header and the bottom line (last index) from `lines`.**
# + [markdown] id="MSwGB6Jb20Xo"
# **HINT:** Accessing the last element in a list was covered in session 4.
# + colab={"base_uri": "https://localhost:8080/"} id="xaiildOE20Xo" outputId="9dce69be-e671-41dc-b955-b0f683e9653e"
# Open, read and tailor the smiles database.
base = 'https://raw.githubusercontent.com/warwickdatasciencesociety/beginners-python/master/session-eight/subject_questions/'
f = urllib.request.urlopen(base+'biochem_resources/smiles.csv').read().decode('utf-8')
lines = f.split('\n')
print(lines[0])
del lines[0]
del lines[-1]
# + [markdown] id="-D_I4dSL20Xp"
# **Run the cell below.**
# + id="wob7zuK620Xp"
# This creates a new list, `molecules`.
molecules = []
for l in lines:
elements = l.split('\t')
molecules.append({
'name': elements[0],
'smiles': elements[1]
})
# + [markdown] id="rqcH0CBD20Xp"
# **The function below, `nearest_3()` will loop through the `molecules` list and assess similarity to the query input using RDKit's `DataStructs.FingerprintSimilarity()` function. Run the cell, then read through the comments to gain insight into how the function works.**
# + id="C1rqW5AJ20Xq"
# The function takes the input SMILES string and returns the most similar values.
def nearest_3(query_smiles):
#Setting baseline similarities.
max_similarity = 0
max_similarity_2 = 0
max_similarity_3 = 0
    # Converting the query molecule to the Mol format and getting the molecule's fingerprint.
    query_fp = Chem.RDKFingerprint(Chem.MolFromSmiles(query_smiles))
    # Loop through the `molecules` database - `tqdm()` adds a progress bar.
    for test_mol in tqdm(molecules, desc='BEST MATCH'):
        # Transforming each encountered molecule into the necessary format.
test_fp = Chem.RDKFingerprint(Chem.MolFromSmiles(test_mol['smiles']))
similarity = DataStructs.FingerprintSimilarity(test_fp, query_fp)
#This will continuously update the `best_mol` variable with the most similar molecule.
#The similarity cannot be equal to 1 otherwise the loop will return the query molecule as most similar.
if max_similarity < similarity < 1:
max_similarity = similarity
best_mol = test_mol
    #Repetition of the code to obtain the 2nd most similar molecule from the database
for test_mol_2 in tqdm(molecules, desc='SECOND MATCH'):
test_fp_2 = Chem.RDKFingerprint(Chem.MolFromSmiles(test_mol_2['smiles']))
similarity_2 = DataStructs.FingerprintSimilarity(test_fp_2, query_fp)
#Basically the same as the loop above, however the maximum similarity is now set to the "best match's" similarity
#This ensures that the most similar molecule found this time will not include the "best match" from the first loop
if max_similarity_2 < similarity_2 < max_similarity:
max_similarity_2 = similarity_2
second_mol = test_mol_2
    #Repetition of the code to obtain the 3rd most similar molecule from the database
for test_mol_3 in tqdm(molecules, desc='THIRD MATCH'):
test_fp_3 = Chem.RDKFingerprint(Chem.MolFromSmiles(test_mol_3['smiles']))
similarity_3 = DataStructs.FingerprintSimilarity(test_fp_3, query_fp)
if max_similarity_3 < similarity_3 < max_similarity_2:
max_similarity_3 = similarity_3
third_mol = test_mol_3
#The values that the function will return - TAKE NOTE
return best_mol,second_mol, third_mol, max_similarity, max_similarity_2, max_similarity_3
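# As a design note, the three loops above fingerprint every database molecule three times. The same top-3 search can be done in one pass by scoring each molecule once and keeping the three best. A minimal sketch on plain (name, similarity) pairs, with the fingerprinting step omitted:

```python
import heapq

def nearest_3_single_pass(scored):
    """scored: iterable of (molecule, similarity) pairs, similarity in [0, 1].
    Returns the 3 most similar entries, excluding exact matches (similarity 1)."""
    return heapq.nlargest(3, (pair for pair in scored if pair[1] < 1),
                          key=lambda pair: pair[1])

print(nearest_3_single_pass([('a', 0.4), ('b', 1.0), ('c', 0.9), ('d', 0.7), ('e', 0.2)]))
```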
# + [markdown] id="-Pa1GpEE20Xq"
# **Store the SMILES string of morphine into a variable name of your choice.**
# + [markdown] id="h6MhmZII20Xq"
# **SMILES (morphine):** `'CN1CC[C@]23C4=C5C=CC(O)=C4O[C@H]2[C@@H](O)C=C[C@H]3[C@H]1C5'`
# + id="s90Hydbv20Xq"
# Store smiles string in a variable
morphine = 'CN1CC[C@]23C4=C5C=CC(O)=C4O[C@H]2[C@@H](O)C=C[C@H]3[C@H]1C5'
# + [markdown] id="dQHE9Cnm20Xq"
# **Run the `nearest_3` function on the morphine SMILES string, store the result as `run_1`.** (It may take a few minutes)
# + colab={"base_uri": "https://localhost:8080/", "height": 368, "referenced_widgets": ["2471735f6cbb48878d7e06ebcaa32bdf", "d9d41f75fbf54959865d50a13ae13dfb", "d679e55cb27743ebaf8847a22ef71f4d", "4766814bf77c4598a26bbf57dca9d3c2", "723a32d7927e42a19c37191727cf351b", "796bcdf869154160beb920634c8f9ab2", "1e6e98e4a81d4f41be0e1f264186d31b", "53e6d9766c4c4559b20f9ec84d699e30", "5e17035fff54463db88b166cd2616448", "5c604b6b5f3f482cb9bac1eb38d441df", "d8f2e138a794409487c69709ee2a6498", "d987ef10c2a94613b97302ae46508eb1", "<KEY>", "<KEY>", "de7d03c5e7044fe299e1570f6edbe9f0", "d63abec2782944bea8a8305a1e69aee8", "<KEY>", "7523e8de410241c383d91ebb041eee0a", "5bb4af00f71e4487a8c81e5b94b0cb51", "<KEY>", "c4513f7ef7c041ba8d7aff3e1be7d277", "<KEY>", "981a0b4128e6437b9c219a6253389272", "<KEY>"]} id="tNFpTfiE20Xr" outputId="8ec0b70f-e674-440d-c765-ea3fb2679e2a"
# Remember to store the result in a variable called `run_` - This can all be done in 1 line.
run_1 = nearest_3(morphine)
# + [markdown] id="lqQn12Bd20Xr"
# **What is `run_1`'s `type()` ?**
# + colab={"base_uri": "https://localhost:8080/"} id="Y-Q9JIvh20Xr" outputId="f6ba977b-e446-40f2-ff32-1c78f2cb386f"
#Find out what type() `run_1` is.
type(run_1)
# + [markdown] id="52ly-6AW20Xr"
# **Print `run_1`.**
# + colab={"base_uri": "https://localhost:8080/"} id="zKwBdhEn20Xr" outputId="60891e66-014b-4214-ec2f-bd43c4b7de44"
run_1
# + [markdown] id="sJ9Toeqr20Xr"
# **Print the 2nd most similar molecule's results.**
# + [markdown] id="cw0_SRMD20Xs"
# **HINT:** Look at which values the function returns, and print that index.
# + colab={"base_uri": "https://localhost:8080/"} id="72S1N85320Xs" outputId="c360d355-22cd-4ed7-d797-300af5a90eb0"
# Results summary for the 2nd match.
run_1[1]
# + [markdown] id="J5ZQGf6j20Xs"
# **Print the similarity (<1) of the 3rd most simlar molecule to the query.**
# + [markdown] id="OT4Z46OA20Xs"
# **HINT:** Look at which values the function returns.
# + colab={"base_uri": "https://localhost:8080/"} id="6Ng7wnZ620Xs" outputId="19247003-526d-4d38-f4da-96c69b0b60e9"
# Similarity for 3rd match.
run_1[5]
# + [markdown] id="uSsdiwge20Xs"
# **Print the name of the most similar molecule found from the result.**
# + colab={"base_uri": "https://localhost:8080/"} id="C5ShJBcc20Xt" outputId="485af2fb-f1cd-4501-b8b5-8509a08dc4fa"
# HINT: view the result of this cell - how do you access elements from this type of data?
type(run_1[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 37} id="SRLCmQwl20Xt" outputId="de4078e8-be56-424e-a7fa-bcc012ef16e2"
# Print the name of the most similar result.
run_1[0]['name']
# + [markdown] id="NlYHzBF020Xt"
# **The cells below will produce a visual of the result - If there are any errors, make sure you have assigned the result of the function to `run_1`**
# + id="zD0USIYy20Xt"
# Conversion of the results into an appropriate format to be drawn from.
query_m = Chem.MolFromSmiles(morphine)
match_m = Chem.MolFromSmiles(run_1[0]['smiles'])
match_2_m = Chem.MolFromSmiles(run_1[1]['smiles'])
match_3_m = Chem.MolFromSmiles(run_1[2]['smiles'])
# + colab={"base_uri": "https://localhost:8080/", "height": 921} id="yVnjuLVt20Xt" outputId="65ae7f1d-514c-456d-843b-446984716578"
# Draws all the results and stores the images in a grid.
Draw.MolsToGridImage(
(query_m, match_m, match_2_m, match_3_m),
    legends=('MORPHINE (Query)', 'Best Match: ' + run_1[0]['name'], '2. ' + run_1[1]['name'], '3. ' + run_1[2]['name']),
molsPerRow=2, subImgSize=(450, 450)
)
# + id="ecVeaPH46mWJ"
| session-eight/subject_questions/biochem_session_7and8_answers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="9RHioPUvpaX3"
# # **PEARSON'S CHI-SQUARE CRITERION FOR COMPARING SEVERAL GROUPS BY THE DISTRIBUTION OF A FEATURE**
# + [markdown] colab_type="text" id="gJvivgECtRy7"
# ## *Theoretical Background*
# + [markdown] colab_type="text" id="l_CIvVzgqAA1"
# The chi-square criterion for analysing contingency tables was developed and proposed in 1900 by the English mathematician, statistician, biologist and philosopher, founder of mathematical statistics and one of the founders of biometrics, Karl Pearson (1857-1936).
#
# *Contingency tables* are a visual (tabular) representation of the relationship between two __*qualitative*__ features.
# This relationship can be interpreted as the dependence of the distribution over one feature on the gradations of the other. Examples include the dependence of mortality rates on the treatment scheme in groups receiving different drugs, or the relationship between the number of people practising different sports and the frequency of traumatic incidents (an assessment of how injury-prone different sports are).
#
# The rows of a contingency table correspond to the values of one variable, the columns to the values of the other. *To build a contingency table, quantitative scales must first be grouped into intervals.* The domain of the random variable is split into $k$ non-overlapping intervals:
#
# $$x_{0}<x_{1}<...<x_{k-1}<x_{k},$$
#
# where $x_{0}$ is the lower bound of the domain of the random variable and $x_{k}$ is the upper bound.
#
# The cell at the intersection of a row and a column holds the frequency of the joint occurrence of the corresponding values of the two features.
# The sum of the frequencies along a row is called the marginal frequency of the row; the sum along a column is the marginal frequency of the column.
# A contingency table may contain absolute as well as relative frequencies (as fractions or percentages). Relative frequencies may be computed with respect to:
# a) the marginal frequency of the row;
# b) the marginal frequency of the column;
# c) the sample size.
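The three normalisation options listed here map directly onto `pd.crosstab`. A small sketch with made-up sport/injury data (all names and values below are illustrative, not from the lab data):

```python
import pandas as pd

# Made-up qualitative data: sport practised vs. whether an injury occurred
df = pd.DataFrame({
    "sport":  ["run", "run", "ski", "ski", "ski", "run"],
    "injury": ["yes", "no", "yes", "yes", "no", "no"],
})

# Absolute frequencies with marginal totals (row and column sums)
abs_freq = pd.crosstab(df["sport"], df["injury"], margins=True)

# Relative frequencies: a) per row, b) per column, c) per sample size
by_row = pd.crosstab(df["sport"], df["injury"], normalize="index")
by_col = pd.crosstab(df["sport"], df["injury"], normalize="columns")
by_all = pd.crosstab(df["sport"], df["injury"], normalize="all")
print(abs_freq)
```

Each `normalize` option corresponds to one of the three relative-frequency conventions described above.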
# + [markdown] colab_type="text" id="EVMo4PIltX48"
# The statistical hypotheses for this criterion are:
# * Main (null) hypothesis: the features are not related.
# * Competing (alternative) hypothesis: the features are related.
# + [markdown] colab_type="text" id="LSU7rmy8yIAB"
# ## Calculation Algorithm
# + [markdown] colab_type="text" id="V-cBzKp5tkos"
# In the classical setting, Pearson's criterion compares the distribution of a feature between two groups. The algorithm:
#
#
# * *Build the contingency table*, where the columns are the groups being compared and the rows are the gradations of the feature under study.
#
# Feature | Group 1 | Group 2 |
# ------------- | ------------- |--------- |
# Feature gradation 1 | $n_{11}$ | $n_{21}$ |
# Feature gradation 2 | $n_{12}$ | $n_{22}$ |
# Feature gradation 3 | $n_{13}$ | $n_{23}$ |
#
# $n_{11}$ - the frequency with which gradation 1 occurs in group 1
#
# $n_{12}$ - the frequency with which gradation 2 occurs in group 1
#
# $n_{13}$ - the frequency with which gradation 3 occurs in group 1
#
# $n_{21}$ - the frequency with which gradation 1 occurs in group 2
#
# $n_{22}$ - the frequency with which gradation 2 occurs in group 2
#
# $n_{23}$ - the frequency with which gradation 3 occurs in group 2
#
# We treat the first group as the experimental one and the second as the theoretical one.
#
# * Check the equality of the frequency sums $\sum n_{i}=\sum \grave{n}_{i}$. If the sums differ, equalise them while preserving the percentage ratios between the frequencies within the group.
#
# * Compute the difference between the experimental (empirical) and control (theoretical) frequencies for every gradation:
#
# Feature | Group 1 | Group 2 | $(n_{i}-\grave{n}_{i})^2$ |
# ------------- | ------------- |--------- |-------------- |
# Feature gradation 1 | $n_{11}$ | $n_{21}$ |$(n_{11}-\grave{n}_{21})^2$ |
# Feature gradation 2 | $n_{12}$ | $n_{22}$ |$(n_{12}-\grave{n}_{22})^2$ |
# Feature gradation 3 | $n_{13}$ | $n_{23}$ |$(n_{13}-\grave{n}_{23})^2$ |
#
# * Divide the resulting squares by the theoretical frequencies (the control group's data):
#
# Feature | Group 1 | Group 2 | $(n_{i}-\grave{n}_{i})^2$ | $\frac{(n_{i}-\grave{n}_{i})^2}{\grave{n}_{i}}$ |
# ------------- | ------------- |--------- |-------------- |-------------- |
# Feature gradation 1 | $n_{11}$ | $n_{21}$ |$(n_{11}-\grave{n}_{21})^2$ |$\frac{(n_{11}-\grave{n}_{21})^2}{\grave{n}_{21}}$ |
# Feature gradation 2 | $n_{12}$ | $n_{22}$ |$(n_{12}-\grave{n}_{22})^2$ |$\frac{(n_{12}-\grave{n}_{22})^2}{\grave{n}_{22}}$ |
# Feature gradation 3 | $n_{13}$ | $n_{23}$ |$(n_{13}-\grave{n}_{23})^2$ |$\frac{(n_{13}-\grave{n}_{23})^2}{\grave{n}_{23}}$ |
#
#
# * Sum the obtained values, denoting the sum $\chi_{emp}^2$.
#
# * Determine the degrees of freedom of the criterion:
#
# $$r=m-1,$$
#
# where $m$ is the number of gradations of the feature (rows in the contingency table).
#
# * Look up in the table the critical value for the chosen significance level $\alpha$ and the computed number of degrees of freedom.
#
# * If $\chi_{emp}^2 > \chi_{critical}^2$, the differences between the distributions are statistically significant at the given significance level.
#
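These steps can be condensed into a few lines of pure Python. The sketch below uses made-up frequencies and a hypothetical helper name; it is not the implementation used later in this lab:

```python
# Sketch of the algorithm above: rescale group 1 so its total matches
# group 2's, square the differences, divide by the theoretical
# frequencies, and sum.
def chi2_empirical(empirical, theoretical):
    scale = sum(theoretical) / sum(empirical)  # equalise the frequency sums
    return sum((n * scale - t) ** 2 / t
               for n, t in zip(empirical, theoretical))

# Three gradations (m = 3), so r = m - 1 = 2 degrees of freedom.
stat = chi2_empirical([30, 50, 20], [25, 55, 20])
print(round(stat, 4))  # 1.4545, below the 0.05 critical value of 5.991 for r = 2
```

With the statistic below the critical value, the null hypothesis (no relationship) would not be rejected for this toy example.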
# + [markdown] colab_type="text" id="IdynD2W28NS9"
# ## Table of Critical Values
# + [markdown] colab_type="text" id="6jyV7r8d8RyR"
# The table of critical values can be downloaded from:
#
# https://drive.google.com/open?id=1-525zNUUxYAbY3FStFy79B9O3UMkcuan
# + [markdown] colab_type="text" id="iZobJ1GO8_TY"
# ## Task
# + [markdown] colab_type="text" id="BiWG1gAq9C5S"
# 1. Take real data from kaggle or generate random data containing the distribution of some feature in two groups.
# 2. Build a chart showing the distribution of the feature in the groups.
# 3. Write a function that assesses the relationship between the feature and the group using Pearson's criterion. The function must assess the difference at two levels - 0.001 and 0.05 - depending on the significant_level parameter passed to it.
# The function must handle the case when the sums of the feature frequencies differ between the groups.
# 4. Check the difference between the groups on the data from step 1. As a result, build a contingency table of the following form:
#
# Feature | Group 1 | Group 2 |
# ------------- | ------------- |--------- |
# Feature gradation 1 | $n_{11} $ | $n_{21}$ |
# Feature gradation 2 | $n_{12}$ | $n_{22}$ |
# Feature gradation 3 | $n_{13}$ | $n_{23}$ |
#
# Add columns with the percentage distribution over the gradations within each group.
# Print separately the result of the criterion (whether there is a statistically significant difference between the groups).
# + [markdown] colab={} colab_type="code" id="zyXdWWXf_Tm8"
# 1) Take real data from kaggle or generate random data containing the distribution of some feature in two groups.
# +
import pandas as pd
import seaborn as sns
from collections import defaultdict
import matplotlib.pyplot as plt
db = pd.read_csv("CardioGoodFitness.csv", encoding = 'utf-8')
db.head()
# +
single = defaultdict(int)
partnered = defaultdict(int)
indexes = ["worst_fit", "bad_fit", "average_fit", "good_fit", "excellent_fit"]
for row, value in db.iterrows():
if value["MaritalStatus"] == "Single":
single[value["Fitness"]] += 1
if value["MaritalStatus"] == "Partnered":
partnered[value["Fitness"]] += 1
df_chunk = pd.DataFrame({"Single" : single,
"Partnered" : partnered}).sort_index()
df_chunk.insert(0, "Fit", indexes, True)
df_chunk
# -
# 2. Build a chart showing the distribution of the feature in the groups.
# +
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(10,8))
sns.barplot(x="Fit", y="Single", data=df_chunk, ax=ax1)
sns.barplot(x="Fit", y="Partnered", data=df_chunk, ax=ax2);
# -
# 3. Write a function that assesses the relationship between the feature and the group using Pearson's criterion. The function must assess the difference at two levels - 0.001 and 0.05 - depending on the significant_level parameter passed to it. It must handle the case when the sums of the feature frequencies differ between the groups.
# +
def is_significant(data,significant_level):
# get keys.
keys = data.keys().tolist()
keys.extend(['(ni-ni`)^2', '((ni-ni`)^2)/ni`'])
group1_sum = sum(data[keys[1]])
group2_sum = sum(data[keys[2]])
    # calculate n_d (group 1 rescaled to group 2's total).
if(group1_sum != group2_sum):
n_d = []
for i in data[keys[1]]:
n_d.append((((i * 100)/group1_sum)*group2_sum)/100)
data[keys[1]] = n_d
    # calculate (ni-ni`)^2 and ((ni-ni`)^2)/ni`.
for row in data.itertuples():
data.at[row.Index, keys[3]] = (data.at[row.Index, keys[1]]-data.at[row.Index, keys[2]])**2
data.at[row.Index, keys[4]] = data.at[row.Index, keys[3]]/data.at[row.Index, keys[2]]
    # look up ch_pirson, the critical value from the table.
ch_pirson = pd.read_excel("Таблиця критичних значень для критерію Пірсона.xlsx").iloc[
len(data)-1, 1 if (significant_level == 0.01) else 2
]
    # calculate ch_em, the empirical chi-square statistic.
ch_em = sum(data[keys[4]])
# return the result.
return ch_em > ch_pirson
print(is_significant(df_chunk, 0.01))
df_chunk
# -
# 4) Check the difference between the groups on the data from step 1. As a result, build a contingency table of the following form:
# <table>
# <tr>
# <th>Feature</th> <th>Group 1</th> <th>Group 2</th>
# </tr>
# <tr>
# <th>Feature gradation 1</th> <th> 𝑛11</th> <th>𝑛21 </th>
# </tr>
# <tr>
# <th>Feature gradation 2</th> <th>𝑛12 </th> <th>𝑛22 </th>
# </tr>
# <tr>
# <th>Feature gradation 3</th> <th>𝑛13</th> <th>𝑛23 </th>
# </tr>
# </table>
#
# Add columns with the percentage distribution over the gradations within each group. Print separately the result of the criterion (whether there is a statistically significant difference between the groups).
# +
total_single = sum([x for x in df_chunk['Single']])
total_partnered = sum([x for x in df_chunk['Partnered']])
single_percents = [(part/total_single)*100 for part in df_chunk['Single']]
partnered_percents = [(part/total_partnered)*100 for part in df_chunk['Partnered']]
df_chunk.insert(2, 'Single %', single_percents)
df_chunk.insert(4, 'Partnered %', partnered_percents)
df_chunk
# -
# Print separately the result of the criterion (whether there is a statistically significant difference between the groups).
result = pd.DataFrame (
[is_significant(df_chunk, 0.01), is_significant(df_chunk, 0.05)],
index=["0.01", "0.05"],
columns=["Is significant"]
)
result.index.name = "Level"
result
# <h1>Conclusion</h1>
# Based on the comparison of the feature distribution between the two groups (using Pearson's criterion), we may assume there is no statistically significant difference at the 0.05 and 0.01 significance levels.
| lab5/lab5_PogrebenkoBS81.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from typing import List
# DP + bitmask (state compression): TLE
class Solution:
def minimumTimeRequired(self, jobs: List[int], k: int) -> int:
        # For each worker, any job can either be assigned to them or not
n = len(jobs)
dp = [[float('inf')] * (1 << n) for _ in range(k + 1)]
times = [0] * (1 << n)
for state in range(1 << n):
sum_time = 0
for i in range(n):
if ((state >> i) & 1) == 1:
sum_time += jobs[i]
times[state] = sum_time
dp[0][0] = 0
for i in range(1, k+1):
for state in range(1 << n):
sub_state = state
while sub_state > 0:
dp[i][state] = min(dp[i][state], max(dp[i-1][state-sub_state], times[sub_state]))
sub_state = (sub_state - 1) & state
return dp[-1][-1]
# -
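The inner `while` loop in the DP above relies on a standard bit trick: `sub_state = (sub_state - 1) & state` visits every non-empty sub-mask of `state` in decreasing order. A standalone check (the `submasks` helper name is just for illustration):

```python
def submasks(state):
    # Enumerate all non-empty subsets of the bits set in `state`
    # via the classic sub = (sub - 1) & state iteration.
    sub = state
    while sub > 0:
        yield sub
        sub = (sub - 1) & state

print(sorted(submasks(0b101)))  # → [1, 4, 5]
```

For a mask with b set bits this yields all 2^b - 1 non-empty sub-masks, which is why the DP transition enumerates every way to split a job set between one worker and the rest.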
# +
from typing import List
from functools import lru_cache
# Binary search + bitmask (state compression): TLE
class Solution:
def minimumTimeRequired(self, jobs: List[int], k: int) -> int:
        # For each worker, any job can either be assigned to them or not
        # @lru_cache(None)
        def dfs(state, th, m):  # assign the remaining jobs in `state` to workers m..k-1; no worker's share may exceed th
if state == 0:
return True
if m == k:
return False
subset = state
while subset > 0:
if times[subset] <= th and dfs(state - subset, th, m+1):
return True
subset = (subset - 1) & state
return False
n = len(jobs)
dp = [[float('inf')] * (1 << n) for _ in range(k + 1)]
times = [0] * (1 << n)
for state in range(1 << n):
sum_time = 0
for i in range(n):
if ((state >> i) & 1) == 1:
sum_time += jobs[i]
times[state] = sum_time
left, right = 1, sum(jobs)
while left < right:
mid = left + (right - left) // 2
if dfs((1<<n)-1, mid, 0):
right = mid
else:
left = mid + 1
return left
# -
solution = Solution()
solution.minimumTimeRequired(jobs = [1,2,4,7,8], k = 2)
# +
from typing import List
from functools import lru_cache
# DFS + binary search + state compression: TLE
class Solution:
def minimumTimeRequired(self, jobs: List[int], k: int) -> int:
def dfs(workers, th, idx):
            if idx == len(jobs):  # all jobs have been assigned
return True
            failure = {}  # memo of failed attempts: workload -> smallest job that failed from it
            for j in range(k):
                if workers[j] + jobs[idx] > th:
                    continue
                # prune: a job at least as large as one that already failed from this workload will fail too
                if workers[j] in failure and jobs[idx] >= failure[workers[j]]:
                    continue
                workers[j] += jobs[idx]
                if dfs(workers, th, idx + 1):
                    return True
                workers[j] -= jobs[idx]
                # record the smallest job that failed from workload workers[j] under threshold th
                if workers[j] in failure:
                    failure[workers[j]] = min(jobs[idx], failure[workers[j]])
                else:
                    failure[workers[j]] = jobs[idx]
return False
jobs.sort(reverse=True)
left, right = 1, sum(jobs)
while left < right:
            workers = [0] * k  # total work time assigned to each worker
mid = left + (right - left) // 2
            if dfs(workers, mid, 0):  # 0 means start by assigning job idx=0
right = mid
else:
left = mid + 1
return left
# -
solution = Solution()
solution.minimumTimeRequired(jobs = [1,2,4,7,8], k = 2)
# +
from typing import List
from functools import lru_cache
# DFS + binary search
class Solution:
def minimumTimeRequired(self, jobs: List[int], k: int) -> int:
def dfs(workers, th, idx):
if idx == len(jobs):
return True
            failure = {}  # memo of failed attempts: workload -> smallest job that failed from it
            for i in range(k):
                # no worker's total time may exceed the threshold
                if jobs[idx] + workers[i] > th:
                    continue
                # prune: a job at least as large as one that already failed from this workload will fail too
                if workers[i] in failure and jobs[idx] >= failure[workers[i]]:
                    continue
                workers[i] += jobs[idx]
                if dfs(workers, th, idx+1):
                    return True
                workers[i] -= jobs[idx]
                if workers[i] in failure:
                    failure[workers[i]] = min(failure[workers[i]], jobs[idx])
                else:
                    failure[workers[i]] = jobs[idx]
return False
jobs.sort(reverse=True)
left, right = 1, sum(jobs)
while left < right:
workers = [0] * k
mid = left + (right - left) // 2
if dfs(workers, mid, 0):
right = mid
else:
left = mid + 1
return left
# -
solution = Solution()
solution.minimumTimeRequired(jobs = [1,2,4,7,8], k = 2)
| Back Tracking/0127/1723. Find Minimum Time to Finish All Jobs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Introduction
# This is a tutorial for using Forest to analyze Beiwe data. We will also create some time series plots using the generated summary statistics. There are four parts to this tutorial.
#
# 1. Check Python version and download Forest.
# 2. Download sample data.
# 3. Process data using forest.
# 4. Creating time series plots.
# ## Check Python Version and Download Forest
# Before we begin, we need to check the current distribution of Python. Note that forest is built using Python 3.8.
from platform import python_version
import sys
# - Print the python version and the path to the Python interpreter.
print(python_version()) ## Prints your version of python
print(sys.executable) ## Prints your current python installation
# *The output should display two lines.*
#
# 1. The Python version installed- make sure you are not using a version of Python that is earlier than 3.8
# 2. The path to where Python is currently installed
# - You may need to install git, pip, and forest. To do so, enter the lines below in a command-line shell. If they are already installed, you can skip to the next step.
# +
# conda install git pip
# pip install git+https://github.com/onnela-lab/forest.git@main
# -
# ## Download Sample Data
#
# For this tutorial, we will be using publicly released data from the Beiwe Research Platform that is available through Zenodo. The Beiwe Research Platform collects high-density data from a variety of smartphone sensors including GPS, WiFi, Bluetooth, and accelerometer. Further information on the dataset and Beiwe can be found at https://github.com/mkiang/beiwe_data_sample.
import wget
import zipfile
import os
# - For **source_url**, enter the "url to the dataset".
# - For **dest_dir**, enter the "path to the destination folder".
source_url = "https://zenodo.org/record/1188879/files/data.zip?download=1"
dest_dir = os.getcwd()
zip_fpath = wget.download(source_url, out = dest_dir)
# *The output should display the download progress if this code is running correctly. Note this is a large data file (~740 MB of data). This will take between ten minutes and an hour or more, depending on the speed of your internet connection. Once the download is complete, a zip file should be saved in the destination folder.*
# - Unzip the file downloaded in the previous block into a folder called **data**. The subfolders in this directory contain data produced by the Beiwe app.
with zipfile.ZipFile(zip_fpath, 'r') as zip_ref:
zip_ref.extractall(dest_dir)
# - Remove the downloaded zip file to save space on your computer
os.remove(zip_fpath)
# - Verify that the process of downloading and unzipping the data is complete.
# check the unzipped dir exists
data_dir = "data"
if os.path.isdir(data_dir):
print("Data Successfully Downloaded and Unzipped")
# *The output should say "Data Successfully Downloaded and Unzipped" if this code was successful.*
# ## Process Data using Forest
# - Using the Forest library developed by the Onnela lab, we compute daily GPS and communication summary statistics
# First, we generate the GPS-related summary statistics by using the **gps_stats_main** function under the **traj2stat.py** in the Jasmine tree of Forest. This code will take about 15-30 minutes to run, depending on your machine.
# - For **data_dir**, enter the "path to the data file directory".
# - For **output_dir**, enter the "path to the file directory where output is to be stored".
# - For **tz_str**, enter the time zone where the study was conducted. Here, it's **"America/New_York."** We can use "pytz.all_timezones" to check all options.
# - For **options**, there are 'daily' or 'hourly' or 'both' for the temporal resolution for summary statistics. Here, we chose **"daily."**
# - For **save_traj**, it's "True" if you want to save the trajectories as a csv file, "False" if you don't (default: False). Here, we chose **"True."**
# +
import forest.jasmine.traj2stats
data_dir = "data/onnela_lab_gps_testing"
output_dir = "gps_output"
tz_str = "America/New_York"
option = "daily"
save_traj = True
forest.jasmine.traj2stats.gps_stats_main(data_dir, output_dir, tz_str, option, save_traj)
# -
# *The output should describe how the data is being processed. If this is working correctly, you will see something like:*
#
# ><i>User: tcqrulfj
# Read in the csv files ...
# Collapse data within 10 second intervals ...
# Extract flights and pauses ...
# Infer unclassified windows ...
# Merge consecutive pauses and bridge gaps ...
# Selecting basis vectors ...
# Imputing missing trajectories ...
# Tidying up the trajectories...
# Calculating the daily summary stats...</i>
# Second, we compute the call and text-based summary statistics by using the **log_stats_main** function under the **log_stats.py** in the Willow tree of Forest
# - For **data_dir**, enter the "path to the data file directory".
# - For **output_dir**, enter the "path to the file directory where output is to be stored".
# - For **tz_str**, enter the time zone where the study was conducted. Here, it's **"America/New_York."**
# - For **options**, it's 'daily' or 'hourly' or 'both' for the temporal resolution for summary statistics. Here, we chose **"daily."**
# +
import forest.willow.log_stats
data_dir = "data/onnela_lab_gps_testing"
output_dir = "comm_output"
tz_str = "America/New_York"
option = "daily"
forest.willow.log_stats.log_stats_main(data_dir, output_dir, tz_str, option)
# -
# *The output should describe how the data is being processed (e.g., read, collapse, extracted...imputing, tidying, and calculating daily summary stats).*
#
# >*Note- this isn't currently working on our sample dataset.*
# The outputs of **gps_stats_main** and **log_stats_main** are generated with respect to each subject in the study folder.
# - The following code is used to concatenate these files into a single csv for the **GPS summaries**.
# +
import numpy as np
import pandas as pd
import os
import sys
from pathlib import Path
from datetime import datetime
from datetime import timedelta
import math
from functools import reduce
# Path to subdirectory
direc = os.getcwd()
data_dir = os.path.join(direc,"gps_output")
# initialize dataframe list
df_list = []
# loop through all directories - select folder
for subdir, dirs, files in os.walk(data_dir):
# loop through files in list
for file in files:
# obtain subject study_id
file_dir = os.path.join(data_dir,file)
subject_id = os.path.basename(file_dir)[:-4]
if file[-4:] == ".csv":# only read in csv files
temp_df = pd.read_csv(file_dir)
temp_df.insert(loc=0, column="Date", value=pd.to_datetime(temp_df[['day', 'month', 'year']]))
temp_df.insert(loc=0, column='Beiwe_ID', value=subject_id)
df_list.append(temp_df)
if len(df_list) > 0:
# concatenate dataframes within list --> Final Data for trajectories
response_data = pd.concat(df_list, axis=0).reset_index()
response_data = response_data.drop(['index','day', 'month', 'year'], axis=1)
    # print the first few observations
print(response_data.head())
# Write results to CSV
response_filename = 'gps_summary.csv'
path_resp = os.path.join(direc, response_filename)
# write to csv
response_data.to_csv(path_resp, index=False)
else:
print("Error: No data found")
# -
# *The output should show the data for the first five observations in the concatenated dataset.*
# - The following code is used to concatenate these files into a single csv for the **communication summaries**.
# +
# (use study_id and timestamp)
# Path to subdirectory
direc = os.getcwd()
data_dir = os.path.join(direc,"comm_output")
# initialize dataframe list
df_list = []
# loop through all directories - select folder
for subdir, dirs, files in os.walk(data_dir):
# loop through files in list
for file in files:
# obtain patient study_id
file_dir = os.path.join(data_dir,file)
print(file_dir)
subject_id = os.path.basename(file_dir)[:-4]
if file[-4:] == ".csv":
temp_df = pd.read_csv(file_dir)
temp_df.insert(loc=0, column="Date", value=pd.to_datetime(temp_df[['day', 'month', 'year']]))
temp_df.insert(loc=0, column='Beiwe_ID', value=subject_id)
df_list.append(temp_df)
# concatenate dataframes within list --> Final Data for trajectories
if len(df_list) > 0:
response_data = pd.concat(df_list, axis=0).reset_index()
response_data = response_data.drop(['index','day', 'month', 'year'], axis=1)
    # print the first few observations
print(response_data.head())
# Write results to CSV
response_filename = 'comm_summary.csv'
path_resp = os.path.join(direc, response_filename)
# write to csv
response_data.to_csv(path_resp, index=False)
else:
print("Error: No data found")
# -
# *The output should show the data for the first five observations in the concatenated dataset.*
# ## Plot Data
# Now, we will generate some time series plots using the summary statistics.
# - To read the file, we need to define **response_filename** with the concatenated dataset. Here, we are using 'gps_summary.csv'.
# +
import matplotlib.pyplot as plt
import os
import pandas as pd
direc = os.getcwd()
response_filename = 'gps_summary.csv'
path_resp = os.path.join(direc, response_filename)
# read data
response_data = pd.read_csv(path_resp)
# -
# The data needs to be sorted according to date. The following code will sort and create 4 even time intervals in the plot.
# +
## Make sure the data is sorted according to date
response_data.sort_values('Date', inplace = True)
response_data.reset_index(drop = True, inplace = True)
def time_series_plot(var_to_plot, ylab = '', xlab = 'Date', num_x_ticks = 4):
    for key, grp in response_data.groupby(['Beiwe_ID']):
        plt.plot(grp.Date, grp[var_to_plot], label=key)
title = f"Time Series Plot of {var_to_plot}"
plt.title(title)
plt.xlabel(xlab)
plt.ylabel(ylab)
    ## get evenly spaced tick indices
tick_indices = [(i * (len(response_data.Date.unique()) - 1)) // (num_x_ticks - 1) for i in range(num_x_ticks) ]
plt.xticks(response_data.Date.unique()[tick_indices])
plt.show()
# -
# - You can now create time series plots using **time_series_plot('variable')**.
time_series_plot('dist_traveled', ylab = "km")
# *The output displays a time series plot for the variable, "dist_traveled."*
time_series_plot('sd_flight_length', ylab = "km")
# *The output displays a time series plot for the variable, "sd_flight_length."*
| tutorials/forest_usage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="9nkDv5dppU6B"
# # NESTS algorithm **Kopuru Vespa Velutina Competition**
#
# Purpose: Bring together weather data, geographic data, food availability data, and identified nests in each municipality of Biscay in order to have a dataset suitable for analysis and potential predictions in a Machine Learning model.
#
# Outputs: QUEENtrain and QUEENpredict datasets *(WBds03_QUEENtrain.csv & WBds03_QUEENpredict.csv)*
#
# @authors:
# * <EMAIL>
# * <EMAIL>
# * <EMAIL>
# * <EMAIL>
# -
# ## Get the data
import pandas as pd
import numpy as np
df01 = pd.read_csv('../../../Input_open_data/ds01_PLANTILLA-RETO-AVISPAS-KOPURU.csv', sep=";")
df02 = pd.read_csv('../../../Input_open_data/ds02_datos-nidos-avispa-asiatica.csv', sep=",")
df03 = pd.read_csv('../../../Input_open_data/ds03_APICULTURA_COLMENAS_KOPURU.csv', sep=";")
df04 = pd.read_csv('../../../Input_open_data/ds04_FRUTALES-DECLARADOS-KOPURU.csv', sep=";")
WBdf01 = pd.read_csv('../Feeder_months/WBds01_GEO.csv', sep=',')
WBdf02 = pd.read_csv('../Feeder_months/WBds02_METEO.csv', sep=',')
df_population = pd.read_csv('../../../Other_open_data/population.csv', sep=',')
# ## Data cleanup
# ### Getting the names right
# +
# Dropping and Renaming columns in accordance to the DataMap
# DataMap's URL: https://docs.google.com/spreadsheets/d/1Ad7s4IOmj9Tn2WcEOz4ArwedTzDs9Y0_EaUSm6uRHMQ/edit#gid=0
df01.columns = ['municip_code', 'municip_name', 'nests_2020']
df01.drop(columns=['nests_2020'], inplace=True) # just note that this is the final variable to predict in the competition
df02.drop(columns=['JARDUERA_ZENBAKIA/NUM_ACTUACION', 'ERABILTZAILEA_EU/USUARIO_EU', 'ERABILTZAILEA_CAS/USUARIO_CAS', 'HELBIDEA/DIRECCION', 'EGOERA_EU/ESTADO_EU', 'ITXIERA_DATA/FECHA CIERRE', 'ITXIERAKO AGENTEA_EU/AGENTE CIERRE_EU', 'ITXIERAKO AGENTEA_CAS/AGENTE CIERRE_CAS'], inplace=True)
df02.columns = ['waspbust_id', 'year', 'nest_foundDate', 'municip_name', 'species', 'nest_locType', 'nest_hight', 'nest_diameter', 'nest_longitude', 'nest_latitude', 'nest_status']
df03.drop(columns=['CP'], inplace=True)
df03.columns = ['municip_name','municip_code','colonies_amount']
df04.columns = ['agriculture_type','municip_code','municip_name']
# -
# We don't have the "months" specified for any of the records in 2017 ('nest_foundDate' is incorrect for this year), so we'll drop those records
df02 = df02.drop(df02[df02['year'] == 2017].index, inplace = False)
# Cleaning municipality names in ds02 with names from ds01
df02_wrong_mun = ['ABADIÑO' ,'<NAME>' ,'<NAME>-<NAME>' ,'AJANGIZ' ,'ALONSOTEGI' ,'AMOREBIETA-ETXANO' ,'AMOROTO' ,'ARAKALDO' ,'ARANTZAZU' ,'AREATZA' ,'ARRANKUDIAGA' ,'ARRATZU' ,'ARRIETA' ,'ARRIGORRIAGA' ,'ARTEA' ,'ARTZENTALES' ,'ATXONDO' ,'AULESTI' ,'BAKIO' ,'BALMASEDA' ,'BARAKALDO' ,'BARRIKA' ,'BASAURI' ,'BEDIA' ,'BERANGO' ,'BERMEO' ,'BERRIATUA' ,'BERRIZ' ,'BUSTURIA' ,'DERIO' ,'DIMA' ,'DURANGO' ,'EA' ,'ELANTXOBE' ,'ELORRIO' ,'ERANDIO' ,'EREÑO' ,'ERMUA' ,'ERRIGOITI' ,'ETXEBARRI' ,'ETXEBARRIA', 'ETXEBARRIa','FORUA' ,'FRUIZ' ,'GALDAKAO' ,'GALDAMES' ,'GAMIZ-FIKA' ,'GARAI' ,'GATIKA' ,'GAUTEGIZ ARTEAGA' ,'GERNIKA-LUMO' ,'GETXO' ,'GETXO ' ,'GIZABURUAGA' ,'GORDEXOLA' ,'GORLIZ' ,'GUEÑES' ,'IBARRANGELU' ,'IGORRE' ,'ISPASTER' ,'IURRETA' ,'IZURTZA' ,'KARRANTZA HARANA/VALLE DE CARRANZA' ,'KARRANTZA HARANA-VALLE DE CARRANZA' ,'KORTEZUBI' ,'LANESTOSA' ,'LARRABETZU' ,'LAUKIZ' ,'LEIOA' ,'LEKEITIO' ,'LEMOA' ,'LEMOIZ' ,'LEZAMA' ,'LOIU' ,'MALLABIA' ,'MAÑARIA' ,'MARKINA-XEMEIN' ,'MARURI-JATABE' ,'MEÑAKA' ,'MENDATA' ,'MENDEXA' ,'MORGA' ,'MUNDAKA' ,'MUNGIA' ,'MUNITIBAR-ARBATZEGI' ,'MUNITIBAR-ARBATZEGI GERRIKAITZ' ,'MURUETA' ,'MUSKIZ' ,'MUXIKA' ,'NABARNIZ' ,'ONDARROA' ,'OROZKO' ,'ORTUELLA' ,'OTXANDIO' ,'PLENTZIA' ,'PORTUGALETE' ,'SANTURTZI' ,'SESTAO' ,'SONDIKA' ,'SOPELA' ,'SOPUERTA' ,'SUKARRIETA' ,'TRUCIOS-TURTZIOZ' ,'UBIDE' ,'UGAO-MIRABALLES' ,'URDULIZ' ,'URDUÑA/ORDUÑA' ,'URDUÑA-ORDUÑA' ,'VALLE DE TRAPAGA' ,'VALLE DE TRAPAGA-TRAPAGARAN' ,'ZALDIBAR' ,'ZALLA' ,'ZAMUDIO' ,'ZARATAMO' ,'ZEANURI' ,'ZEBERIO' ,'ZIERBENA' ,'ZIORTZA-BOLIBAR' ]
df02_correct_mun = ['Abadiño' ,'Abanto y Ciérvana-Abanto Zierbena' ,'Abanto y Ciérvana-Abanto Zierbena' ,'Ajangiz' ,'Alonsotegi' ,'Amorebieta-Etxano' ,'Amoroto' ,'Arakaldo' ,'Arantzazu' ,'Areatza' ,'Arrankudiaga' ,'Arratzu' ,'Arrieta' ,'Arrigorriaga' ,'Artea' ,'Artzentales' ,'Atxondo' ,'Aulesti' ,'Bakio' ,'Balmaseda' ,'Barakaldo' ,'Barrika' ,'Basauri' ,'Bedia' ,'Berango' ,'Bermeo' ,'Berriatua' ,'Berriz' ,'Busturia' ,'Derio' ,'Dima' ,'Durango' ,'Ea' ,'Elantxobe' ,'Elorrio' ,'Erandio' ,'Ereño' ,'Ermua' ,'Errigoiti' ,'Etxebarri' , 'Etxebarria', 'Etxebarria','Forua' ,'Fruiz' ,'Galdakao' ,'Galdames' ,'Gamiz-Fika' ,'Garai' ,'Gatika' ,'Gautegiz Arteaga' ,'Gernika-Lumo' ,'Getxo' ,'Getxo' ,'Gizaburuaga' ,'Gordexola' ,'Gorliz' ,'Güeñes' ,'Ibarrangelu' ,'Igorre' ,'Ispaster' ,'Iurreta' ,'Izurtza' ,'Karrantza Harana/Valle de Carranza' ,'Karrantza Harana/Valle de Carranza' ,'Kortezubi' ,'Lanestosa' ,'Larrabetzu' ,'Laukiz' ,'Leioa' ,'Lekeitio' ,'Lemoa' ,'Lemoiz' ,'Lezama' ,'Loiu' ,'Mallabia' ,'Mañaria' ,'Markina-Xemein' ,'Maruri-Jatabe' ,'Meñaka' ,'Mendata' ,'Mendexa' ,'Morga' ,'Mundaka' ,'Mungia' ,'Munitibar-Arbatzegi Gerrikaitz' ,'Munitibar-Arbatzegi Gerrikaitz' ,'Murueta' ,'Muskiz' ,'Muxika' ,'Nabarniz' ,'Ondarroa' ,'Orozko' ,'Ortuella' ,'Otxandio' ,'Plentzia' ,'Portugalete' ,'Santurtzi' ,'Sestao' ,'Sondika' ,'Sopela' ,'Sopuerta' ,'Sukarrieta' ,'Trucios-Turtzioz' ,'Ubide' ,'Ugao-Miraballes' ,'Urduliz' ,'Urduña/Orduña' ,'Urduña/Orduña' ,'Valle de Trápaga-Trapagaran' ,'Valle de Trápaga-Trapagaran' ,'Zaldibar' ,'Zalla' ,'Zamudio' ,'Zaratamo' ,'Zeanuri' ,'Zeberio' ,'Zierbena' ,'Ziortza-Bolibar',]
df02.municip_name.replace(to_replace = df02_wrong_mun, value = df02_correct_mun, inplace = True)
df02.shape
# Translate the `species` variable contents to English
df02.species.replace(to_replace=['AVISPA ASIÁTICA', 'AVISPA COMÚN', 'ABEJA'], value=['Vespa Velutina', 'Common Wasp', 'Wild Bee'], inplace=True)
# +
# Translate the contents of the `nest_locType` and `nest_status` variables to English
# But note that this data is of no use from a "forecasting" standpoint eventually, since we will predict with a one-year offset (and thus, use things like weather mostly)
df02.nest_locType.replace(to_replace=['CONSTRUCCIÓN', 'ARBOLADO'], value=['Urban Environment', 'Natural Environment'], inplace=True)
df02.nest_status.replace(to_replace=['CERRADA - ELIMINADO', 'CERRADA - NO ELIMINABLE', 'PENDIENTE DE GRUPO'], value=['Nest Terminated', 'Cannot Terminate', 'Pending classification'], inplace=True)
# -
# ### Getting the dates right
# Including the addition of a `year_offset` variable to comply with the competition's rules
# +
# Change 'nest_foundDate' to the "datetime" format
df02['nest_foundDate'] = pd.to_datetime(df02['nest_foundDate'])
# Create a "month" variable in the main dataframe
df02['month'] = pd.DatetimeIndex(df02['nest_foundDate']).month
# Create a "year_offset" variable in the main dataframe
# IMPORTANT: THIS REFLECTS OUR ASSUMPTION THAT `YEAR-1` DATA CAN BE USED TO PREDICT `YEAR` DATA, AS MANDATED BY THE COMPETITION'S BASE REQUIREMENTS
df02['year_offset'] = pd.DatetimeIndex(df02['nest_foundDate']).year - 1
# -
df02.columns
df02.shape
# ### Creating distinct dataFrames for each `species`
# + tags=[]
df02.species.value_counts()
# -
df02_vespas = df02.loc[df02.species == 'Vespa Velutina', :]
df02_wasps = df02.loc[df02.species == 'Common Wasp', :]
df02_bees = df02.loc[df02.species == 'Wild Bee', :]
df02_vespas.shape
# ## Create a TEMPLATE dataframe with the missing municipalities and months
template = pd.read_csv('../../../Input_open_data/ds01_PLANTILLA-RETO-AVISPAS-KOPURU.csv', sep=";")
template.drop(columns='NIDOS 2020', inplace=True)
template.columns = ['municip_code', 'municip_name']
template['year2019'] = 2019
template['year2018'] = 2018
template['year2017'] = 2017
template = pd.melt(template, id_vars=['municip_code', 'municip_name'], value_vars=['year2019', 'year2018', 'year2017'], value_name = 'year_offset')
template.drop(columns='variable', inplace=True)
for i in range(1,13,1):
template[i] = i
template = pd.melt(template, id_vars=['municip_code', 'municip_name', 'year_offset'],\
value_vars=[1,2,3,4,5,6,7,8,9,10,11,12], value_name = 'month')
template.drop(columns='variable', inplace=True)
template.shape
112*12*3 == template.shape[0]
template.columns
df02_vespas.shape[0] - template.shape[0]
# ## Merge the datasets
# ### Match each `municip_name` to its `municip_code` as per the competition's official template (i.e. `df01`)
# +
# Merge dataFrames df01 and df02 by 'municip_name', in order to identify every wasp nest with its 'municip_code'
# The intention is that 'all_the_queens-wasps' will be the final dataFrame to use in the ML model eventually
all_the_queens_wasps = pd.merge(df02_vespas, df01, how = 'left', on = 'municip_name')
# +
# check if there are any municipalities missing from the df02 dataframe, and add them if necessary
df01.municip_code[~df01.municip_code.isin(all_the_queens_wasps.municip_code.unique())]
# -
# ### Impute the municipalities and months missing from the dataset
all_the_queens_wasps = pd.merge(all_the_queens_wasps, template,\
how = 'outer', left_on = ['municip_code', 'municip_name', 'year_offset', 'month'],\
right_on = ['municip_code', 'municip_name', 'year_offset', 'month'])
# + tags=[]
all_the_queens_wasps.isnull().sum()
# + tags=[]
for col in ['waspbust_id', 'year', 'nest_foundDate', 'species', 'nest_locType',
            'nest_hight', 'nest_diameter', 'nest_longitude', 'nest_latitude', 'nest_status']:
    all_the_queens_wasps[col] = all_the_queens_wasps[col].fillna(value='no registers')
#all_the_queens_wasps.isnull().sum()
# -
all_the_queens_wasps.shape
# ### Counting the amount of wasp nests in each municipality, for each month and year
# ... and dropping some variables along the way.
# Namely: **species** (keeping the Vespa Velutina only), **nest_foundDate**, **nest_locType**, **nest_hight**, **nest_diameter**, **nest_longitude**, **nest_latitude**, **nest_status**
# Filtering the rest of variables now, and counting
all_the_queens_wasps = all_the_queens_wasps.loc[:, ['waspbust_id', 'year', 'municip_name', 'municip_code', 'month', 'year_offset']]\
.groupby(by =['year', 'municip_name', 'municip_code', 'month', 'year_offset'], as_index = False).count()
# let's rename the id to NESTS, now that it has been counted
all_the_queens_wasps.rename(columns = {"waspbust_id":"NESTS"}, inplace = True)
all_the_queens_wasps.columns
# verifying that the DataFrame has the right number of rows
all_the_queens_wasps.shape[0] == 112*12*3
# for all those "outer merge" rows with no associated year, set their NESTS to zero
all_the_queens_wasps.loc[all_the_queens_wasps.year == 'no registers', ['NESTS']] = 0
# + tags=[]
#all_the_queens_wasps.isnull().sum()
# -
# ### Food sources
# Group df03 by 'municip_code' because there are multiple rows for each municipality (and we need a 1:1 relationship)
df03 = df03.groupby(by = 'municip_code', as_index= False).colonies_amount.sum()
# +
# Now merge df03 to add number of bee hives (which is a food source for the wasp) in each municipality
# Note that NaNs (unknown amount of hives) are replaced with zeroes for the 'colonies_amount' variable
all_the_queens_wasps = pd.merge(all_the_queens_wasps, df03, how = 'left', on = 'municip_code')
all_the_queens_wasps.colonies_amount.fillna(value=0, inplace=True)
# -
all_the_queens_wasps.shape
# + tags=[]
#all_the_queens_wasps.isnull().sum()
# +
# Group df04 (agricultural food sources) by municipality code, after appending variables with the amount of each type of agricultural product
aux = df04.copy(deep=True)
aux.drop(columns=['municip_name'], inplace=True)
txakoli_string = df04.agriculture_type[45]  # the Txakoli label, read from the data
crop_flags = {'food_fruit': 'FRUTALES', 'food_apple': 'MANZANO', 'food_txakoli': txakoli_string,
              'food_kiwi': 'AKTINIDIA (KIWI)', 'food_pear': 'PERAL',
              'food_blueberry': 'ARANDANOS', 'food_raspberry': 'FRAMBUESAS'}
for col, crop in crop_flags.items():
    aux[col] = (aux['agriculture_type'] == crop).astype('int')
aux = aux.groupby(by='municip_code', as_index=False).sum()
df04 = aux.copy(deep=True)
# +
# Now merge df04 to add number of each type of food source ('agriculture_type') present in each municipality
# Any municipality not present in df04 will get assigned 'zero' food sources for any given type of fruit
all_the_queens_wasps = pd.merge(all_the_queens_wasps, df04, how = 'left', on= 'municip_code')
for col in ['food_fruit', 'food_apple', 'food_txakoli', 'food_kiwi',
            'food_pear', 'food_blueberry', 'food_raspberry']:
    all_the_queens_wasps[col] = all_the_queens_wasps[col].fillna(value=0)
# -
all_the_queens_wasps.shape
# + tags=[]
#all_the_queens_wasps.isnull().sum()
# -
# ### Geographic
# Here, a very important assumption regarding which station corresponds to each municipality is being brought from the HONEYCOMB script
# Adding weather station code to each municipality in all_the_queens_wasps. "No municipality left behind!"
all_the_queens_wasps = pd.merge(all_the_queens_wasps, WBdf01, how = 'left', on= 'municip_code')
all_the_queens_wasps.shape
# + tags=[]
#all_the_queens_wasps.isnull().sum()
# -
all_the_queens_wasps.year_offset.value_counts()
# ### Weather
#
# MANDATORY ASSUMPTION: As per the competition's rules, 2020 weather data cannot be used to predict 2020's number of wasp nests.
#
# Therefore, **this merge links 2018's wasp nests to 2017's weather data for each corresponding month** (all of which falls under the $2017$ value for `year_offset`).
#
# Likewise, **2019's wasp nests are linked to 2018's weather data for the corresponding month** (all of which falls under the $2018$ value for `year_offset`).
#
# Finally, the $2019$ value for `year_offset` contains zero NESTS and the year 2019's weather which we will use to predict 2020's number of NESTS (the target variable of the competition)
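The offset logic above can be sketched on toy data (hypothetical values, for illustration only): joining the nests' `year_offset` against the weather table's `year` is exactly what pairs each season with the *previous* year's weather.

```python
import pandas as pd

# Nests observed in 2018 and 2019 carry year_offset 2017 and 2018 respectively
nests = pd.DataFrame({'municip_code': [1, 1], 'year_offset': [2017, 2018], 'NESTS': [3, 5]})
weather = pd.DataFrame({'year': [2017, 2018, 2019], 'mean_temp': [13.2, 13.8, 14.1]})

# Joining year_offset (left) to year (right) attaches the PREVIOUS year's weather
merged = pd.merge(nests, weather, how='left', left_on='year_offset', right_on='year')
print(merged[['year_offset', 'NESTS', 'mean_temp']])
```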
# Now, merge the Main 'all_the_queens_wasps' dataFrame with the weather data 'WBdf02' dataFrame
all_the_queens_wasps = pd.merge(all_the_queens_wasps, WBdf02, how = 'left',\
left_on = ['station_code', 'month', 'year_offset'],\
right_on = ['station_code', 'month', 'year'])
# + tags=[]
# note that this relabels `year` from the `all_the_queens_wasps` dataframe as `year_x`, and likewise as `year_y` from the WBdf02 dataframe
all_the_queens_wasps.columns
# -
all_the_queens_wasps_TRAIN = all_the_queens_wasps.loc[all_the_queens_wasps.year_offset.isin([2017, 2018]),:]
all_the_queens_wasps_PREDICT = all_the_queens_wasps.loc[all_the_queens_wasps.year_offset.isin([2019]),:]
# ### Adding `Population`, a publicly available dataset
# +
# Adding population by municipality
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, df_population, how = 'left',\
left_on= ['municip_code', 'year_offset'],\
right_on = ['municip_code', 'year'])
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, df_population, how = 'left',\
left_on= ['municip_code', 'year_offset'],\
right_on = ['municip_code', 'year'])
# -
all_the_queens_wasps_TRAIN.shape
all_the_queens_wasps_PREDICT.shape
all_the_queens_wasps_PREDICT.shape[0] + all_the_queens_wasps_TRAIN.shape[0] == template.shape[0]
# ## Further cleanup
#dropping unnecessary/duplicate columns
all_the_queens_wasps_TRAIN.drop(columns=['year_y','code_merge', 'merge_cod', 'year_x', 'index', 'MMM'], inplace=True)
# + tags=[]
all_the_queens_wasps_TRAIN.columns
# -
all_the_queens_wasps_PREDICT.drop(columns=['year_y', 'code_merge', 'merge_cod', 'year_x', 'index', 'MMM'], inplace=True)
# + tags=[]
all_the_queens_wasps_PREDICT.columns
# -
# ## Final check
# + tags=[]
all_the_queens_wasps.isnull().sum()
# + tags=[]
# check how many rows (municipalities) there are in the dataframe for each year/month combination
pd.crosstab(all_the_queens_wasps.year_offset, all_the_queens_wasps.month)
# + tags=[]
# this loop helps verify which municipality may be missing from any given year/month combination
for i in range(1,13,1):
print(df01.municip_code[~df01.municip_code.isin\
(all_the_queens_wasps.loc[(all_the_queens_wasps.month == i) &\
(all_the_queens_wasps.year_offset == 2019),:].\
municip_code.unique())])
# -
all_the_queens_wasps_TRAIN.NESTS.sum() == df02_vespas.shape[0]
all_the_queens_wasps_PREDICT.NESTS.sum() == 0
# + [markdown] tags=[]
# ## Export the TRAINING dataset for the model
# A dataset which relates the weather from a previous year (12 months ago) to an amount of NESTS in any given year (and month).
# -
all_the_queens_wasps_TRAIN.to_csv('WBds03_QUEENtrainMONTHS.csv', index=False)
# ## Export the PREDICTION dataset for the model
all_the_queens_wasps_PREDICT.to_csv('WBds03_QUEENpredictMONTHS.csv', index=False)
| B_Submissions_Kopuru_competition/2021-05-19_submit/Batch_OLS/workerbee03_NESTSmonths.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# > **Copyright (c) 2020 <NAME>**<br><br>
# > **Copyright (c) 2021 Skymind Education Group Sdn. Bhd.**<br>
# <br>
# Licensed under the Apache License, Version 2.0 (the "License");
# <br>you may not use this file except in compliance with the License.
# <br>You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0/
# <br>
# <br>Unless required by applicable law or agreed to in writing, software
# <br>distributed under the License is distributed on an "AS IS" BASIS,
# <br>WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# <br>See the License for the specific language governing permissions and
# <br>limitations under the License.
# <br>
# <br>
# **SPDX-License-Identifier: Apache-2.0**
# <br>
# # Data Cleaning
# ## Introduction
# This notebook goes through a necessary step of any data science project - data cleaning. Data cleaning is a time consuming and unenjoyable task, yet it's a very important one. Keep in mind, "garbage in, garbage out". Feeding dirty data into a model will give us results that are meaningless.
#
# Specifically, we'll be walking through:
#
# 1. **Getting the data** - in this case, we'll be scraping data from a website
# 2. **Cleaning the data** - we will walk through popular text pre-processing techniques
# 3. **Organizing the data** - we will organize the cleaned data in a format that is easy to feed into other algorithms
#
# The output of this notebook will be clean, organized data in two standard text formats:
#
# 1. **Corpus** - a collection of text
# 2. **Document-Term Matrix** - word counts in matrix format
# ## Problem Statement
# As a reminder, our goal is to look at transcripts of various comedians and note their similarities and differences. Specifically, I'd like to know if <NAME>'s comedy style is different from that of other comedians, since she's the comedian that got me interested in stand up comedy.
# # Notebook Content
#
# * [Getting The Data](#Getting-The-Data)
#
#
# * [Cleaning The Data](#Cleaning-The-Data)
#
#
# * [Organizing The Data](#Organizing-The-Data)
#
# * [Corpus](#Corpus)
# * [Document-Term Matrix](#Document-Term-Matrix)
#
#
# * [Additional Exercises](#Additional-Exercises)
# ## Getting The Data
# Luckily, there are wonderful people online that keep track of stand up routine transcripts. [Scraps From The Loft](http://scrapsfromtheloft.com) makes them available for non-profit and educational purposes.
#
# To decide which comedians to look into, I went on IMDB and looked specifically at comedy specials that were released in the past 5 years. To narrow it down further, I looked only at those with greater than a 7.5/10 rating and more than 2000 votes. If a comedian had multiple specials that fit those requirements, I would pick the most highly rated one. I ended up with a dozen comedy specials.
# +
# Web scraping, pickle imports
import requests
from bs4 import BeautifulSoup
import pickle
# Scrapes transcript data from scrapsfromtheloft.com
def url_to_transcript(url):
'''Returns transcript data specifically from scrapsfromtheloft.com.'''
page = requests.get(url).text
soup = BeautifulSoup(page, "lxml")
text = [p.text for p in soup.find(class_="post-content").find_all('p')]
print(url)
return text
# URLs of transcripts in scope
urls = ['http://scrapsfromtheloft.com/2017/05/06/louis-ck-oh-my-god-full-transcript/',
'http://scrapsfromtheloft.com/2017/04/11/dave-chappelle-age-spin-2017-full-transcript/',
'http://scrapsfromtheloft.com/2018/03/15/ricky-gervais-humanity-transcript/',
'http://scrapsfromtheloft.com/2017/08/07/bo-burnham-2013-full-transcript/',
'http://scrapsfromtheloft.com/2017/05/24/bill-burr-im-sorry-feel-way-2014-full-transcript/',
'http://scrapsfromtheloft.com/2017/04/21/jim-jefferies-bare-2014-full-transcript/',
'http://scrapsfromtheloft.com/2017/08/02/john-mulaney-comeback-kid-2015-full-transcript/',
'http://scrapsfromtheloft.com/2017/10/21/hasan-minhaj-homecoming-king-2017-full-transcript/',
'http://scrapsfromtheloft.com/2017/09/19/ali-wong-baby-cobra-2016-full-transcript/',
'http://scrapsfromtheloft.com/2017/08/03/anthony-jeselnik-thoughts-prayers-2015-full-transcript/',
'http://scrapsfromtheloft.com/2018/03/03/mike-birbiglia-my-girlfriends-boyfriend-2013-full-transcript/',
'http://scrapsfromtheloft.com/2017/08/19/joe-rogan-triggered-2016-full-transcript/']
# Comedian names
comedians = ['louis', 'dave', 'ricky', 'bo', 'bill', 'jim', 'john', 'hasan', 'ali', 'anthony', 'mike', 'joe']
# +
# # Actually request transcripts (takes a few minutes to run)
# transcripts = [url_to_transcript(u) for u in urls]
# +
# # Pickle files for later use
# # Make a new directory to hold the text files
# # !mkdir transcripts
# for i, c in enumerate(comedians):
# with open("transcripts/" + c + ".txt", "wb") as file:
# pickle.dump(transcripts[i], file)
# -
# Load pickled files
data = {}
for i, c in enumerate(comedians):
with open("../../../resources/day_04/transcripts/" + c + ".txt", "rb") as file:
data[c] = pickle.load(file)
# Double check to make sure data has been loaded properly
data.keys()
# More checks
data['louis'][:2]
# ## Cleaning The Data
# When dealing with numerical data, data cleaning often involves removing null values and duplicate data, dealing with outliers, etc. With text data, there are some common data cleaning techniques, which are also known as text pre-processing techniques.
#
# With text data, this cleaning process can go on forever. There's always an exception to every cleaning step. So, we're going to follow the MVP (minimum viable product) approach - start simple and iterate. Here are a bunch of things you can do to clean your data. We're going to execute just the common cleaning steps here and the rest can be done at a later point to improve our results.
#
# **Common data cleaning steps on all text:**
# * Make text all lower case
# * Remove punctuation
# * Remove numerical values
# * Remove common non-sensical text (\n)
# * Tokenize text
# * Remove stop words
#
# **More data cleaning steps after tokenization:**
# * Stemming / lemmatization
# * Parts of speech tagging
# * Create bi-grams or tri-grams
# * Deal with typos
# * And more...
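As a rough illustration of those post-tokenization steps, here is a minimal, dependency-free sketch. The toy stop-word list and naive suffix stripper are stand-ins for real tools (e.g. NLTK's stopword corpus and PorterStemmer), not what this notebook uses later.

```python
# Toy stop-word list; real projects use a full list from NLTK or scikit-learn
toy_stopwords = {'the', 'a', 'and', 'is', 'to'}

def tokenize(text):
    '''Lowercase and split on whitespace -- the simplest possible tokenizer.'''
    return text.lower().split()

def naive_stem(word):
    '''Toy stemmer: strip a few common suffixes (a stand-in for a real stemmer).'''
    for suffix in ('ing', 'ed', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

tokens = [naive_stem(t) for t in tokenize('The crowd is cheering and cheered')
          if t not in toy_stopwords]
print(tokens)  # 'cheering' and 'cheered' collapse to the same stem
```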
# Let's take a look at our data again
next(iter(data.keys()))
# Notice that our dictionary is currently in key: comedian, value: list of text format
next(iter(data.values()))
# We are going to change this to key: comedian, value: string format
def combine_text(list_of_text):
'''Takes a list of text and combines them into one large chunk of text.'''
combined_text = ' '.join(list_of_text)
return combined_text
# Combine it!
data_combined = {key: [combine_text(value)] for (key, value) in data.items()}
# +
# We can either keep it in dictionary format or put it into a pandas dataframe
import pandas as pd
pd.set_option('display.max_colwidth', 150)
data_df = pd.DataFrame.from_dict(data_combined).transpose()
data_df.columns = ['transcript']
data_df = data_df.sort_index()
data_df
# -
# Let's take a look at the transcript for Ali Wong
data_df.transcript.loc['ali']
# +
# Apply a first round of text cleaning techniques
import re
import string
def clean_text_round1(text):
'''Make text lowercase, remove text in square brackets, remove punctuation and remove words containing numbers.'''
text = text.lower()
    text = re.sub(r'\[.*?\]', '', text)
    text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
    text = re.sub(r'\w*\d\w*', '', text)
return text
round1 = lambda x: clean_text_round1(x)
# -
# Let's take a look at the updated text
data_clean = pd.DataFrame(data_df.transcript.apply(round1))
data_clean
# +
# Apply a second round of cleaning
def clean_text_round2(text):
'''Get rid of some additional punctuation and non-sensical text that was missed the first time around.'''
text = re.sub('[‘’“”…]', '', text)
text = re.sub('\n', '', text)
return text
round2 = lambda x: clean_text_round2(x)
# -
# Let's take a look at the updated text
data_clean = pd.DataFrame(data_clean.transcript.apply(round2))
data_clean
# **NOTE:** This data cleaning aka text pre-processing step could go on for a while, but we are going to stop for now. After going through some analysis techniques, if you see that the results don't make sense or could be improved, you can come back and make more edits such as:
# * Mark 'cheering' and 'cheer' as the same word (stemming / lemmatization)
# * Combine 'thank you' into one term (bi-grams)
# * And a lot more...
# ## Organizing The Data
# I mentioned earlier that the output of this notebook will be clean, organized data in two standard text formats:
# 1. **Corpus** - a collection of text
# 2. **Document-Term Matrix** - word counts in matrix format
# ### Corpus
# We already created a corpus in an earlier step. The definition of a corpus is a collection of texts, and they are all put together neatly in a pandas dataframe here.
# Let's take a look at our dataframe
data_df
# +
# Let's add the comedians' full names as well
full_names = ['<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>',
'<NAME>', '<NAME>', '<NAME>', '<NAME>.', '<NAME>', '<NAME>']
data_df['full_name'] = full_names
data_df
# -
# Let's pickle it for later use
data_df.to_pickle("models/corpus.pkl")
# ### Document-Term Matrix
# For many of the techniques we'll be using in future notebooks, the text must be tokenized, meaning broken down into smaller pieces. The most common tokenization technique is to break down text into words. We can do this using scikit-learn's CountVectorizer, where every row will represent a different document and every column will represent a different word.
#
# In addition, with CountVectorizer, we can remove stop words. Stop words are common words that add no additional meaning to text such as 'a', 'the', etc.
# +
# We are going to create a document-term matrix using CountVectorizer, and exclude common English stop words
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(stop_words='english')
data_cv = cv.fit_transform(data_clean.transcript)
data_dtm = pd.DataFrame(data_cv.toarray(), columns=cv.get_feature_names())
data_dtm.index = data_clean.index
data_dtm
# -
# Let's pickle it for later use
data_dtm.to_pickle("models/dtm.pkl")
# Let's also pickle the cleaned data (before we put it in document-term matrix format) and the CountVectorizer object
data_clean.to_pickle('models/data_clean.pkl')
pickle.dump(cv, open("models/cv.pkl", "wb"))
# ## Additional Exercises
# 1. Can you add an additional regular expression to the clean_text_round2 function to further clean the text?
# 2. Play around with CountVectorizer's parameters. What is ngram_range? What is min_df and max_df?
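As a starting point for exercise 2, a small sketch of those parameters on toy documents (`max_df` behaves analogously to `min_df` but caps how *common* a term may be):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ['thank you thank you very much', 'thank you all']

# ngram_range=(1, 2): count single words AND two-word phrases like "thank you"
# min_df=2: keep only terms that appear in at least 2 documents
cv = CountVectorizer(ngram_range=(1, 2), min_df=2)
cv.fit_transform(docs)
print(sorted(cv.vocabulary_))  # only the terms shared by both documents survive
```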
# # Contributors
#
# **Author**
# <br><NAME>
# # References
#
# 1. [Natural Language Processing in Python](https://www.youtube.com/watch?v=xvqsFTUsOmc&t=6s)
| nlp-labs/Day_04/Statistical_Models/1-Data-Cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Magma Differentiation Plotter ##
# This is meant as a virtual alternative to using jelly beans or bingo chips for the magma differentiation lab. In the lab, you start with a set number of cations (chips/jelly beans) in a magma chamber drawn on paper, and you systematically remove them as you crystallize minerals during fractionation. This notebook simulates this by showing colored bar charts to represent the cations in the magma chamber and the cations that have been taken up in crystallized minerals.
#
# Using this notebook requires no coding. You just need to be able to run the individual cells below (either with the run button above or by pressing shift+enter).
#
# To start, run the import cell below. Nothing should happen, but a little number should appear to the left of the cell indicating it has run.
import numpy as np
import matplotlib.pyplot as plt
# ### Initial Magma Chamber ###
# Run the cell below to create an initial magma chamber to start your lab. This will also reset your magma chamber if you make a mistake while differentiating.
# +
Si = 170
Ti = 4
Al = 58
Fe = 46
Mg = 45
Ca = 32
Na = 17
K = 5
cations = np.array([Si,Ti,Al,Fe,Mg,Ca,Na,K])
crystallized = np.zeros(8)
names = ['Si','Ti','Al','Fe','Mg','Ca','Na','K']
testcolors = ['red','cyan','purple','blue','pink','yellow','green','magenta']
fig,axs = plt.subplots(1,2,sharey=True)
axs[0].bar(names,cations,color=testcolors)
axs[1].bar(names,crystallized,color=testcolors)
axs[0].set_title('Magma Chamber')
axs[1].set_title('Crystallized')
plt.tight_layout()
plt.show()
print('Magma Chamber')
for x in range(8):
print(names[x],'('+testcolors[x]+'):',cations[x])
print('')
print('Crystallized')
for x in range(8):
print(names[x],'('+testcolors[x]+'):',crystallized[x])
# -
# ## Crystallize Minerals ##
# When you run the cell below, you will be asked how many of each cation to remove from your magma chamber as you crystallize a set of minerals. Refer to your lab to determine how many. The output will show you how much of each cation is in your magma chamber and how much is crystallized after the differentiation. Run this cell once for each fractionation event in your lab.
# +
subtract = np.zeros(8)
print('Enter which cations to remove from magma chamber')
for x in range(8):
subtract[x] = float(input(names[x]+': '))
cations = cations-subtract
crystallized = crystallized + subtract
fig,axs = plt.subplots(1,3,sharey=True)
axs[0].bar(names,cations,color=testcolors)
axs[1].bar(names,crystallized,color=testcolors)
axs[2].bar(names,subtract,color=testcolors)
axs[0].set_title('Magma Chamber')
axs[1].set_title('Total Crystallized')
axs[2].set_title('Crystallized This Step')
plt.tight_layout()
plt.show()
print('Magma Chamber')
for x in range(8):
print(names[x],'('+testcolors[x]+'):',cations[x])
print('')
print('Total Crystallized')
for x in range(8):
print(names[x],'('+testcolors[x]+'):',crystallized[x])
print('')
print('Crystallized This Step')
for x in range(8):
print(names[x],'('+testcolors[x]+'):',subtract[x])
# -
| igneous_petrology/magma_differentiation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h2>Introduction</h2>
#
# Ask a home buyer to describe their dream house, and they probably won't begin with the height of the basement ceiling or the proximity to an east-west railroad. But this playground competition's dataset proves that much more influences price negotiations than the number of bedrooms or a white-picket fence.
#
# With 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, the task is to predict the final price of each home.
#
# Our goal is to reach an accuracy (R² score) above 90%
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler
# -
df = pd.read_csv('../input/house-prices-advanced-regression-techniques/train.csv')
df.head()
# <h2>I - Exploratory Data Analysis</h2>
# <h3>Check the correlation of all columns vs the label</h3>
# <h5>Columns with the highest correlation with SalePrice</h5>
corr = df.corr()['SalePrice'].sort_values(ascending=False).head(20).to_frame()
plt.figure(figsize=(10,5))
sns.heatmap(corr)
# <h4>Plot of OverallQual vs SalePrice</h4>
plt.figure(figsize=(15,10))
sns.jointplot(x='OverallQual', y='SalePrice', data=df)
# <h4>Plot of GrLivArea vs SalePrice</h4>
plt.figure(figsize=(15,10))
sns.jointplot(x='GrLivArea', y='SalePrice', data=df)
plt.figure(figsize=(15,10))
sns.countplot(x='Neighborhood', data=df, order=df['Neighborhood'].value_counts().index)
plt.xticks(rotation=60)
# **Neighborhood VS Saleprice**
# + _kg_hide-output=true
Neighborhood = dict(zip(df['Neighborhood'].unique().tolist(), range(len(df['Neighborhood'].unique().tolist()))))
df.replace({'Neighborhood': Neighborhood}, inplace=True)
plt.figure(figsize=(15,10))
sns.barplot(x='Neighborhood', y='SalePrice', data=df)
plt.xlabel('Neighborhood')
plt.xticks([*range(0, len(Neighborhood))], Neighborhood, rotation=60)
# -
# **House Style VS Sale Price**
HouseStyle = dict(zip(df['HouseStyle'].unique().tolist(), range(len(df['HouseStyle'].unique().tolist()))))
df.replace({'HouseStyle': HouseStyle}, inplace=True)
plt.figure(figsize=(15,10))
sns.barplot(x='HouseStyle', y='SalePrice', data=df)
plt.xlabel('HouseStyle')
plt.xticks([*range(0, len(HouseStyle))], HouseStyle, rotation=60)
# **Basement VS Sale Price**
BsmtFinType1 = dict(zip(df['BsmtFinType1'].unique().tolist(), range(len(df['BsmtFinType1'].unique().tolist()))))
df.replace({'BsmtFinType1': BsmtFinType1}, inplace=True)
plt.figure(figsize=(15,10))
sns.barplot(x='BsmtFinType1', y='SalePrice', data=df)
plt.xlabel('BsmtFinType1')
plt.xticks([*range(0, len(BsmtFinType1))], BsmtFinType1, rotation=60)
# **Building Type VS Sale Price**
BldgType = dict(zip(df['BldgType'].unique().tolist(), range(len(df['BldgType'].unique().tolist()))))
df.replace({'BldgType': BldgType}, inplace=True)
plt.figure(figsize=(15,10))
sns.barplot(x='BldgType', y='SalePrice', data=df)
plt.xlabel('BldgType')
plt.xticks([*range(0, len(BldgType))], BldgType, rotation=60)
# <h2>II - Feature Engineering</h2>
# **We need to do something about the NA values**
plt.figure(figsize=(15,10))
df.isnull().mean().sort_values(ascending=False).plot()
# +
df['FireplaceQu'] = df['FireplaceQu'].fillna(value='NF')
df.drop(columns=['PoolQC', 'MiscFeature', 'Alley', 'Fence'], inplace=True)
df['LotFrontage'] = df['LotFrontage'].fillna(value=df['LotFrontage'].mean())
df['GarageType'] = df['GarageType'].fillna(value='NoGar')
df['GarageYrBlt'] = df['GarageYrBlt'].fillna(value=df['GarageYrBlt'].mean())
df['GarageQual'] = df['GarageQual'].fillna(value='NoGar')
df['GarageFinish'] = df['GarageFinish'].fillna(value='NoGar')
df['GarageCond'] = df['GarageCond'].fillna(value='NoGar')
df['BsmtFinType2'] = df['BsmtFinType2'].fillna(value='NoBasement')
df['BsmtExposure'] = df['BsmtExposure'].fillna(value='NoBasement')
df['BsmtQual'] = df['BsmtQual'].fillna(value='NoBasement')
df['BsmtCond'] = df['BsmtCond'].fillna(value='NoBasement')
df['MasVnrType'] = df['MasVnrType'].fillna(value='None')
df['MasVnrArea'] = df['MasVnrArea'].fillna(value=0.0)
Electrical = dict(zip(df['Electrical'].unique().tolist(), range(len(df['Electrical'].unique().tolist()))))
df.replace({'Electrical': Electrical}, inplace=True)
df['Electrical'] = df['Electrical'].fillna(value=0)
# -
df.isnull().mean().sort_values(ascending=False)
# **Convert string values to a number representation for training**
for column in df.columns:
if(df[column].dtype == 'object'):
df.replace({column: dict(zip(df[column].unique().tolist(), range(len(df[column].unique().tolist()))))}, inplace=True)
df.head()
# **Create more feature**
df['totalArea'] = df['TotalBsmtSF'] + df['1stFlrSF'] + df['2ndFlrSF'] + df['GrLivArea'] + df['GarageArea']
df['Bathrooms'] = df['FullBath'] + df['HalfBath'] * 0.5
df['Year average'] = (df['YearRemodAdd'] + df['YearBuilt']) / 2
# > **Correlations of the new features**
new_corr = pd.DataFrame({'Feature Name': ['Total Area', 'Bathrooms', 'Year Average'],
'Corr': [df['totalArea'].corr(df['SalePrice']), df['Bathrooms'].corr(df['SalePrice']), df['Year average'].corr(df['SalePrice'])]})
new_corr
# <h2>III - Training</h2>
y = df['SalePrice']
X = df.drop(columns='SalePrice')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
scaler= MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
models = {
'linear regression': LinearRegression(),
'gradient boosting regressor': GradientBoostingRegressor(n_estimators=2000, max_depth=1),
'lasso regression': Lasso()
}
# +
score_df = pd.DataFrame({'Model': [], 'Accuracy': []})
for key, value in models.items():
model = value
model.fit(X_train,y_train)
score = model.score(X_test, y_test)
score_df = pd.concat([score_df, pd.DataFrame([{'Model': key, 'Accuracy': score * 100}])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
# -
score_df
# <h2>IV - Conclusion</h2>
# * Gradient Boosting Regressor helps us achieve a 90% accuracy score
# * Adding total area feature helps!
| dataset_0/notebook/house-prices-advance-regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ungraded Lab: Implement a Siamese network
# This lab will go through creating and training a multi-input model. You will build a basic Siamese Network to find the similarity or dissimilarity between items of clothing. For Week 1, you will just focus on constructing the network. You will revisit this lab in Week 2 when we talk about custom loss functions.
# ## Imports
# + colab={} colab_type="code" id="nVHTXTZpkdtM"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, Lambda
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.python.keras.utils.vis_utils import plot_model
from tensorflow.keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageFont, ImageDraw
import random
# -
# ## Prepare the Dataset
#
# First define a few utilities for preparing and visualizing your dataset.
# + colab={} colab_type="code" id="iSQMl9cZkgDx"
def create_pairs(x, digit_indices):
'''Positive and negative pair creation.
Alternates between positive and negative pairs.
'''
pairs = []
labels = []
n = min([len(digit_indices[d]) for d in range(10)]) - 1
for d in range(10):
for i in range(n):
z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
pairs += [[x[z1], x[z2]]]
inc = random.randrange(1, 10)
dn = (d + inc) % 10
z1, z2 = digit_indices[d][i], digit_indices[dn][i]
pairs += [[x[z1], x[z2]]]
labels += [1, 0]
return np.array(pairs), np.array(labels)
def create_pairs_on_set(images, labels):
digit_indices = [np.where(labels == i)[0] for i in range(10)]
pairs, y = create_pairs(images, digit_indices)
y = y.astype('float32')
return pairs, y
def show_image(image):
plt.figure()
plt.imshow(image)
plt.colorbar()
plt.grid(False)
plt.show()
# -
# You can now download and prepare our train and test sets. You will also create pairs of images that will go into the multi-input model.
# + colab={} colab_type="code" id="ook7lKQakomz"
# load the dataset
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# prepare train and test sets
train_images = train_images.astype('float32')
test_images = test_images.astype('float32')
# normalize values
train_images = train_images / 255.0
test_images = test_images / 255.0
# create pairs on train and test sets
tr_pairs, tr_y = create_pairs_on_set(train_images, train_labels)
ts_pairs, ts_y = create_pairs_on_set(test_images, test_labels)
# -
# You can see a sample pair of images below.
# + colab={} colab_type="code" id="BhTpANwipLIk"
# array index
this_pair = 8
# show images at this index
show_image(ts_pairs[this_pair][0])
show_image(ts_pairs[this_pair][1])
# print the label for this pair
print(ts_y[this_pair])
# + colab={} colab_type="code" id="lbgAYQW0zT_4"
# print other pairs
show_image(tr_pairs[:,0][0])
show_image(tr_pairs[:,0][1])
show_image(tr_pairs[:,1][0])
show_image(tr_pairs[:,1][1])
# -
# ## Build the Model
#
# Next, you'll define some utilities for building our model.
# + colab={} colab_type="code" id="wMo2HbKLkuAa"
def initialize_base_network():
input = Input(shape=(28,28,), name="base_input")
x = Flatten(name="flatten_input")(input)
x = Dense(128, activation='relu', name="first_base_dense")(x)
x = Dropout(0.1, name="first_dropout")(x)
x = Dense(128, activation='relu', name="second_base_dense")(x)
x = Dropout(0.1, name="second_dropout")(x)
x = Dense(128, activation='relu', name="third_base_dense")(x)
return Model(inputs=input, outputs=x)
def euclidean_distance(vects):
x, y = vects
sum_square = K.sum(K.square(x - y), axis=1, keepdims=True)
return K.sqrt(K.maximum(sum_square, K.epsilon()))
def eucl_dist_output_shape(shapes):
shape1, shape2 = shapes
return (shape1[0], 1)
# -
# Let's see how our base network looks. This is where the two inputs will pass through to generate an output vector.
# + colab={} colab_type="code" id="8FjSLg_LoJAy"
base_network = initialize_base_network()
plot_model(base_network, show_shapes=True, show_layer_names=True, to_file='base-model.png')
# -
# Let's now build the Siamese network. The plot will show two inputs going to the base network.
# + colab={} colab_type="code" id="Qe4YNz0kkwq5"
# create the left input and point to the base network
input_a = Input(shape=(28,28,), name="left_input")
vect_output_a = base_network(input_a)
# create the right input and point to the base network
input_b = Input(shape=(28,28,), name="right_input")
vect_output_b = base_network(input_b)
# measure the similarity of the two vector outputs
output = Lambda(euclidean_distance, name="output_layer", output_shape=eucl_dist_output_shape)([vect_output_a, vect_output_b])
# specify the inputs and output of the model
model = Model([input_a, input_b], output)
# plot model graph
plot_model(model, show_shapes=True, show_layer_names=True, to_file='outer-model.png')
# -
# ## Train the Model
#
# You can now define the custom loss for our network and start training.
# + colab={} colab_type="code" id="HswzRyDAk-V7"
def contrastive_loss_with_margin(margin):
def contrastive_loss(y_true, y_pred):
'''Contrastive loss from Hadsell-et-al.'06
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
'''
square_pred = K.square(y_pred)
margin_square = K.square(K.maximum(margin - y_pred, 0))
return K.mean(y_true * square_pred + (1 - y_true) * margin_square)
return contrastive_loss
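# A quick NumPy sanity check of the formula above, with illustrative values: a coincident similar pair and a dissimilar pair beyond the margin both contribute zero loss, while badly placed pairs are penalized.

```python
import numpy as np

def contrastive_loss_np(y_true, y_pred, margin=1.0):
    # y_pred is the euclidean distance between the two embeddings
    square_pred = np.square(y_pred)
    margin_square = np.square(np.maximum(margin - y_pred, 0.0))
    return np.mean(y_true * square_pred + (1 - y_true) * margin_square)

print(contrastive_loss_np(np.array([1.0]), np.array([0.0])))  # → 0.0 (similar pair, zero distance)
print(contrastive_loss_np(np.array([0.0]), np.array([1.5])))  # → 0.0 (dissimilar pair beyond margin)
print(contrastive_loss_np(np.array([1.0, 0.0]), np.array([0.9, 0.1])))  # ≈ 0.81 (both pairs badly placed)
```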
# + colab={} colab_type="code" id="UIGaA9TMlBCc"
rms = RMSprop()
model.compile(loss=contrastive_loss_with_margin(margin=1), optimizer=rms)
history = model.fit([tr_pairs[:,0], tr_pairs[:,1]], tr_y, epochs=20, batch_size=128, validation_data=([ts_pairs[:,0], ts_pairs[:,1]], ts_y))
# -
# ## Model Evaluation
#
# As usual, you can evaluate our model by computing the accuracy and observing the metrics during training.
# + colab={} colab_type="code" id="RYwU4CIhlIE4"
def compute_accuracy(y_true, y_pred):
'''Compute classification accuracy with a fixed threshold on distances.
'''
pred = y_pred.ravel() < 0.5
return np.mean(pred == y_true)
# + colab={} colab_type="code" id="IyfJWzjYlKMg"
loss = model.evaluate(x=[ts_pairs[:,0],ts_pairs[:,1]], y=ts_y)
y_pred_train = model.predict([tr_pairs[:,0], tr_pairs[:,1]])
train_accuracy = compute_accuracy(tr_y, y_pred_train)
y_pred_test = model.predict([ts_pairs[:,0], ts_pairs[:,1]])
test_accuracy = compute_accuracy(ts_y, y_pred_test)
print("Loss = {}, Train Accuracy = {} Test Accuracy = {}".format(loss, train_accuracy, test_accuracy))
# + colab={} colab_type="code" id="3obxy4EBlMyI"
def plot_metrics(metric_name, title, ylim=5):
plt.title(title)
plt.ylim(0,ylim)
plt.plot(history.history[metric_name],color='blue',label=metric_name)
plt.plot(history.history['val_' + metric_name],color='green',label='val_' + metric_name)
plot_metrics(metric_name='loss', title="Loss", ylim=0.2)
# + colab={} colab_type="code" id="E9KLCFiClP9Q"
# Matplotlib config
def visualize_images():
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# utility to display a row of digits with their predictions
def display_images(left, right, predictions, labels, title, n):
plt.figure(figsize=(17,3))
plt.title(title)
plt.yticks([])
plt.xticks([])
plt.grid(None)
left = np.reshape(left, [n, 28, 28])
left = np.swapaxes(left, 0, 1)
left = np.reshape(left, [28, 28*n])
plt.imshow(left)
plt.figure(figsize=(17,3))
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] > 0.5: t.set_color('red') # bad predictions in red
plt.grid(None)
right = np.reshape(right, [n, 28, 28])
right = np.swapaxes(right, 0, 1)
right = np.reshape(right, [28, 28*n])
plt.imshow(right)
# -
# You can see sample results for 10 pairs of items below.
# + colab={} colab_type="code" id="VRxB-Tmemzt9"
y_pred_train = np.squeeze(y_pred_train)
indexes = np.random.choice(len(y_pred_train), size=10)
display_images(tr_pairs[:, 0][indexes], tr_pairs[:, 1][indexes], y_pred_train[indexes], tr_y[indexes], "clothes and their dissimilarity", 10)
| Custom Models, Layers, and Loss Functions with TensorFlow/Week 1 Functional APIs/Lab_3_siamese-network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# %matplotlib inline
# # Jointly Normal Random Variables #
# Galton's observations about oval scatter plots became the foundation of multiple regression, one of the most commonly used methods in data analysis. Inference in multiple regression and its modern variants is often based on *multivariate normal* models.
#
# In this chapter we will study what it means for a collection of random variables to be *jointly normally distributed*. We will introduce matrix notation for linear combinations of random variables and then study the main properties of the multivariate normal distribution. This is the necessary groundwork for using multivariate normal models in prediction, which we will do in the next chapter.
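# As a preview of the matrix notation, a small NumPy sketch with illustrative numbers: if $X$ is multivariate normal with mean vector $\mu$ and covariance matrix $\Sigma$, then the linear combination $a^TX$ is normal with mean $a^T\mu$ and variance $a^T\Sigma a$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 1.0])
sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
a = np.array([2.0, -1.0])        # coefficients of the linear combination

print(a @ mu)                    # theoretical mean: -1.0
print(a @ sigma @ a)             # theoretical variance: 4.0

# empirical check: simulate and compare
x = rng.multivariate_normal(mu, sigma, size=200_000)
combo = x @ a
print(combo.mean(), combo.var()) # close to -1.0 and 4.0
```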
| content/Chapter_23/00_Multivariate_Normal_RVs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import IPy
import time
# -
ipv6_all = pd.read_csv("C:/Users/webDrag0n/Downloads/ipv6_all.csv")
ipv6_all.head()
network = ipv6_all["network"]
network[1]
total_length = len(network)
print(total_length)
# ### Binary search
#target_ip = "fd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b"
target_ip = "2a0b:3143:2000::"
# +
start = time.perf_counter()
total_length = len(network)
i_0 = 0
i_1 = total_length // 2 - 1  # integer division: i_1 must be an int to index the Series
while True:
#print(str(i_0) + " | " + str(i_1))
if IPy.IP(target_ip) > IPy.IP(network[i_0]) and IPy.IP(target_ip) < IPy.IP(network[i_1]):
i_1 = int(i_0 + (i_1 - i_0) / 2)
elif IPy.IP(target_ip) > IPy.IP(network[i_1]):
temp = i_1 - i_0
i_0 = i_1
i_1 += int(np.ceil(temp / 2))  # keep i_1 an int; np.ceil returns a float
if i_1 >= total_length:
i_1 = total_length - 1
if (i_0 == i_1):
if IPy.IP(target_ip) in IPy.IP(network[i_0]):
print("found")
else:
print("not found")
break
print(i_0)
end = time.perf_counter()
print("total time: ", end - start)
# -
network[i_0]
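# The interval-halving loop above can be replaced by the stdlib `bisect` module once the network list is sorted by starting address. A sketch using the stdlib `ipaddress` package (rather than IPy, which this notebook uses) and an illustrative network list:

```python
import bisect
import ipaddress

# illustrative, sorted list standing in for the CSV's "network" column
networks = [ipaddress.ip_network(n) for n in
            ['2001:db8::/48', '2a0b:3143:2000::/40', 'fd00::/8']]
starts = [int(n.network_address) for n in networks]

def find_network(ip_str):
    addr = ipaddress.ip_address(ip_str)
    # rightmost network whose starting address is <= addr
    i = bisect.bisect_right(starts, int(addr)) - 1
    if i >= 0 and addr in networks[i]:
        return networks[i]
    return None

print(find_network('2a0b:3143:2000::'))  # → 2a0b:3143:2000::/40
print(find_network('2001:db7::'))        # → None
```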
| IPv6_search/IPv6_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 5: House Candidate Positioning Graph Recreations
# In this lab, we'll be recreating Figure 1 from the paper [Candidate Positioning in U.S. Elections](https://www-jstor-org.libproxy.berkeley.edu/stable/2669364?seq=1#metadata_info_tab_contents). The figure we will be recreating shows the estimated issue positions of all Democrats and Republicans running for House seats in 2000 plotted against the conservatism of their district. We'll see that candidates tend to take positions in line with the conservatism of their district, with little deviation across party lines.
# Run the next cell to import the libraries we'll be using to do our analysis
import pandas as pd
import json
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import linregress
# Below, you'll find a chart containing labels of the columns in the dataset we'll be working with for this lab:
#
# | Variable | Meaning |
# |--------|------------------------|
# | Gore | % of candidate's district that voted for Gore in '00 |
# | Bush '00 | % of candidate's district that voted for Bush in '00 |
# | location | candidate's state and district number |
# | first_name | candidate's first name |
# | middle_name | candidate's middle name |
# | last_name | candidate's last name |
# | votes_with_party_pct | % of times the candidate voted with their party in the previous Congressional session |
# | votes_against_party_pct | % of times the candidate voted against their party in the previous Congressional session |
# | party | candidate's party |
# | Member Position | 0-1 scale for how conservative a candidate's stances are (0=lowest conservative, 1=highest conservative) |
# ## Load Data
# For our analysis, we'll be using district-level data on House members in the 106th Congress and their district behavior from the 2000 presidential election.
#
# We'll begin by loading our file housedata.csv into a pandas dataframe named df.
filename = "housedata.csv"
df = pd.read_csv(filename)
df
# ## Cleaning Data
# Before we can begin manipulating our data to recreate our table, we must first clean the data. The following cells will walk you through dropping unnecessary columns and removing null values that could disrupt our analysis.
# ### Drop Columns
# Since we are mainly interested in the voting patterns of the members and their districts, there are a few columns currently included in df that we can get rid of. First, we'll start with an example. Then, you'll get to write your own code to drop certain columns.
#
# Run the following cell to drop the "State" column:
#Example
df = df.drop(['State'], axis=1)
df
# Now it's your turn! In the following cell, write some code that drops the following columns: suffix, gender, geoid, district
#Use this cell to drop the specified columns
#...
df = df.drop(['suffix', 'gender', 'geoid', 'district'], axis=1)
df
# Great job! You have successfully dropped all unneeded columns.
# ### Removing Null Values
# Taking a look at the dataset, we'll see that some rows contain "NaN" in the last_name column. For the purpose of our analysis, we want to exclude these rows because they can disrupt what we are able to do with the data.
#
# The following cell provides an example for how you can drop rows containing "NaN" in the first_name column.
#Example
df.dropna(subset=['first_name'])
# Now it's your turn! Write some code that will drop rows containing "NaN" in the last_name column.
#Use this cell to drop rows in the last_name column containing "NaN"
#df = ...
#df
df = df.dropna(subset=['last_name'])
df
# ## Graphing the Data
# This section will walk you through how to create a scatterplot and fit linear regressions to our data.
# +
#Graphing the scatterplot
sns.lmplot(x="Bush '00", y='Member Position', hue="party",
data=df,markers=["o", "x"], palette="Set1")
#Adjusting scatterplot labels
sns.set(style='ticks')
plt.xlabel("District Conservatism")
plt.ylabel("Member's Position")
plt.title("Member's Position in 2000 by District Conservatism")
#Adding regression line analysis
democrats = df[df.party == 'D']
republicans = df[df.party == 'R']
d = linregress(democrats["Bush '00"], democrats["Member Position"])
r = linregress(republicans["Bush '00"], republicans["Member Position"])
print("Democratic slope: " + str(d.slope))
print("Republican slope: " + str(r.slope))
# -
# ### Observations
# Now that we've successfully recreated the graph, it's time to make some observations and inferences based on what we see. Please write a brief 1-2 sentence answer for each of the following questions:
# 1. Interpret the slopes of the regressions for the Republican data clump and the Democrat data clump. No need to get too specific mathematically, just observe the general trend and think about what it suggests about the relationship between candidates' position taking and their districts' political leanings.
# *Question 1 answer here*
# 2. Politically, why might we see the trends displayed in the graph?
# *Question 2 answer here*
# ## The End
# Congratulations! You have finished this lab on House candidate positioning.
| lab/lab5/AnsolebehereSF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Write a NumPy program to test whether none of the elements of a given array is zero
# #### Write a NumPy program to create an array of 10 zeros, 10 ones, 10 fives.
# #### Write a NumPy program to create an array of all the even integers from 30 to 70.
# #### Write a NumPy program to generate a random number between 0 and 1.
# #### Write a NumPy program to create a vector with values from 0 to 20 and change the sign of the numbers in the range from 9 to 15.
# #### Write a NumPy program to create a vector of length 5 filled with arbitrary integers from 0 to 10.
# #### Write a NumPy program to create a 10x10 matrix, in which the elements on the borders will be equal to 1, and inside 0.
# #### Write a NumPy program to add a vector to each row of a given matrix.
# #### Write a NumPy program to convert a given array into a list and then convert it back into an array again.
# #### Write a NumPy program to create a 3x3 matrix with values ranging from 2 to 10.
# #### Write a NumPy program to reverse an array (first element becomes last).
# #### Write a NumPy program to create a 8x8 matrix and fill it with a checkerboard pattern.
# #### Write a NumPy program to append values to the end of an array.
# #### Write a NumPy program to convert the values of Centigrade degrees into Fahrenheit degrees. Centigrade values are stored into a NumPy array.
# #### Write a NumPy program to get the unique elements of an array.
# #### Write a NumPy program to find the indices of the maximum and minimum values along the given axis of an array.
# #### Write a NumPy program to sort an along the first, last axis of an array.
# #### Write a NumPy program to create a contiguous flattened array.
# #### Write a NumPy program to interchange two axes of an array.
# #### Write a NumPy program to concatenate two 2-dimensional arrays. Sample arrays: ([[0, 1, 3], [5, 7, 9]], [[0, 2, 4], [6, 8, 10]]
# #### Write a NumPy program (using numpy) to sum of all the multiples of 3 or 5 below 100.
# #### Write a NumPy program to how to add an extra column to an numpy array.
# #### Write a NumPy program to count the frequency of unique values in numpy array.
# #### Write a NumPy program to extract all the elements of the first row from a given (4x4) array.
# #### Write a NumPy program to extract first and second elements of the first and second rows from a given (4x4) array
# #### Write a NumPy program to extract first, third and fifth elements of the third and fifth rows from a given (6x6) array.
# #### Write a NumPy program to create a random 10x4 array and extract the first five rows of the array and store them into a variable.
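# A couple of the exercises above can be sketched as follows (one possible solution each, not the only one):

```python
import numpy as np

# even integers from 30 to 70
evens = np.arange(30, 71, 2)
print(evens[:5])        # → [30 32 34 36 38]

# 8x8 checkerboard pattern of 0s and 1s
board = np.zeros((8, 8), dtype=int)
board[1::2, ::2] = 1    # odd rows, even columns
board[::2, 1::2] = 1    # even rows, odd columns
print(board[:2])
```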
| Chapter_1/Numpy_Exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sample, Explore, and Clean Taxifare Dataset
#
# **Learning Objectives**
# - Practice querying BigQuery
# - Sample from large dataset in a reproducible way
# - Practice exploring data using Pandas
# - Identify corrupt data and clean accordingly
#
# ## Introduction
# In this notebook, we will explore a dataset corresponding to taxi rides in New York City to build a Machine Learning model that estimates taxi fares. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Such a model would also be useful for ride-hailing apps that quote you the trip price in advance.
# ### Set up environment variables and load necessary libraries
PROJECT = "qwiklabs-gcp-00-34ffb0f0dc65" # Replace with your PROJECT
REGION = "us-central1" # Choose an available region for Cloud MLE
import os
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
# Check that the Google BigQuery library is installed and if not, install it.
# !pip freeze | grep google-cloud-bigquery==1.21.0 || pip install google-cloud-bigquery==1.21.0
# %load_ext google.cloud.bigquery
# ## View data schema and size
#
# Our dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/): Google's petabyte scale, SQL queryable, fully managed cloud data warehouse. It is a publicly available dataset, meaning anyone with a GCP account has access.
#
# 1. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=nyc-tlc&d=yellow&t=trips&page=table) to access the dataset.
# 2. In the web UI, below the query editor, you will see the schema of the dataset. What fields are available, what does each mean?
# 3. Click the 'details' tab. How big is the dataset?
# ## Preview data
#
# Let's see what a few rows of our data looks like. Any cell that starts with `%%bigquery` will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook.
#
# BigQuery supports [two flavors](https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#comparison_of_legacy_and_standard_sql) of SQL syntax: legacy SQL and standard SQL. The preferred flavor is standard SQL because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such we start the query with `#standardSQL`.
#
# There are over 1 billion rows in this dataset and it is 130 GB in size, so let's retrieve a small sample
# %%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
`nyc-tlc.yellow.trips`
WHERE RAND() < .0000001 -- sample a small fraction of the data
# ### Preview data (alternate way)
#
# Alternatively we can use BigQuery's web UI to execute queries.
#
# 1. Open the [web UI](https://console.cloud.google.com/bigquery)
# 2. Paste the above query minus the `%%bigquery` part into the Query Editor
# 3. Click the 'Run' button or type 'CTRL + ENTER' to execute the query
#
# Query results will be displayed below the Query editor.
# ## Sample data repeatably
#
# There's one issue with using `RAND() < N` to sample. It's non-deterministic. Each time you run the query above you'll get a different sample.
#
# Since repeatability is key to data science, let's instead use a hash function (which is deterministic by definition) and then sample using the modulo operation on the hashed value.
#
# We obtain our hash values using:
#
# `ABS(FARM_FINGERPRINT(CAST(hashkey AS STRING)))`
#
# Working from inside out:
#
# - `CAST()`: Casts hashkey to string because our hash function only works on strings
# - `FARM_FINGERPRINT()`: Hashes strings to 64bit integers
# - `ABS()`: Takes the absolute value of the integer. This is not strictly necessary but it makes the following modulo operations more intuitive since we don't have to account for negative remainders.*
#
#
# The `hashkey` should be:
#
# 1. Unrelated to the objective
# 2. Sufficiently high cardinality
#
# Given these properties we can sample our data repeatably using the modulo operation.
#
# To get a 1% sample:
#
# `WHERE MOD(hashvalue,100) = 0`
#
# To get a *different* 1% sample change the remainder condition, for example:
#
# `WHERE MOD(hashvalue,100) = 55`
#
# To get a 20% sample:
#
# `WHERE MOD(hashvalue,100) < 20` Alternatively: `WHERE MOD(hashvalue,5) = 0`
#
# And so forth...
#
# We'll use `pickup_datetime` as our hash key because it meets our desired properties. If such a column doesn't exist in the data you can synthesize a hashkey by concatenating multiple columns.
#
# Below we sample 1/5000th of the data. The syntax is admittedly less elegant than `RAND() < N`, but now each time you run the query you'll get the same result.
#
# \**Tech note: Taking the absolute value doubles the chances of hash collisions, but since there are 2^64 possible hash values and fewer than 2^30 hash keys, the collision risk is negligible.*
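# The same idea can be sketched in Python, with `hashlib.md5` standing in for `FARM_FINGERPRINT` (any stable hash works; the point is determinism, not the particular function):

```python
import hashlib

def stable_bucket(hashkey, buckets=100):
    # same key always maps to the same bucket, unlike RAND()
    digest = hashlib.md5(str(hashkey).encode('utf-8')).hexdigest()
    return int(digest, 16) % buckets

rows = ['2014-01-01 12:00:00', '2014-01-01 12:00:01', '2014-01-02 08:30:00']

# a "1% sample": keep rows whose bucket is 0; re-running the
# comprehension always selects the identical subset
sample = [r for r in rows if stable_bucket(r) == 0]
print(sample == [r for r in rows if stable_bucket(r) == 0])  # → True
```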
# #### **Exercise 1**
#
# Modify the BigQuery query above to produce a repeatable sample of the taxi fare data.
# Replace the RAND operation above with a FARM_FINGERPRINT operation that will yield a repeatable 1/5000th sample of the data.
# %%bigquery --project $PROJECT
# TODO: Your code goes here
# ## Load sample into Pandas dataframe
#
# The advantage of querying BigQuery directly as opposed to the web UI is that we can supplement SQL analysis with Python analysis. A popular Python library for data analysis on structured data is [Pandas](https://pandas.pydata.org/), and the primary data structure in Pandas is called a DataFrame.
#
# To store BigQuery results in a Pandas DataFrame we have to query the data with a slightly different syntax.
#
# 1. Import the `google.cloud` `bigquery` module
# 2. Create a variable called `bq` which is equal to the BigQuery Client `bigquery.Client()`
# 3. Store the desired SQL query as a Python string
# 4. Execute `bq.query(query_string).to_dataframe()` where `query_string` is what you created in the previous step
#
# **This will take about a minute**
#
# *Tip: Use triple quotes for a multi-line string in Python*
#
# *Tip: You can measure execution time of a cell by starting that cell with `%%time`*
# #### **Exercise 2**
#
# Store the results of the query you created in the previous TODO above in a Pandas DataFrame called `trips`.
# You will need to import the `bigquery` module from Google Cloud and store the query as a string before executing the query. Then,
# - Create a variable called `bq` which contains the BigQuery Client
# - Copy/paste the query string from above
# - Use the BigQuery Client to execute the query and save it to a Pandas dataframe
# +
from google.cloud import bigquery
bq = # TODO: Your code goes here
query_string = """
# TODO: Your code goes here
"""
trips = # TODO: Your code goes here
# -
# ## Explore dataframe
print(type(trips))
trips.head()
# The Python variable `trips` is now a Pandas DataFrame. The `.head()` function above prints the first 5 rows of a DataFrame.
#
# The rows in the DataFrame may be in a different order than when using `%%bigquery`, but the data is the same.
#
# It would be useful to understand the distribution of each of our columns, which is to say the mean, min, max, standard deviation, etc.
#
# A DataFrame's `.describe()` method provides this. By default it only analyzes numeric columns. To include stats about non-numeric column use `describe(include='all')`.
trips.describe()
# ## Distribution analysis
#
# Do you notice anything off about the data? Pay attention to `min` and `max`. Latitudes should be between -90 and 90, and longitudes should be between -180 and 180, so clearly some of this data is bad.
#
# Furthermore, some trip fares are negative and some passenger counts are 0, which doesn't seem right. We'll clean this up later.
# ## Investigate trip distance
#
# Looks like some trip distances are 0 as well, let's investigate this.
trips[trips["trip_distance"] == 0][:10] # first 10 rows with trip_distance == 0
# It appears that trips are being charged substantial fares despite having 0 distance.
#
# Let's graph `trip_distance` vs `fare_amount` using the Pandas [`.plot()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) method to corroborate.
# %matplotlib inline
trips.plot(x = "trip_distance", y = "fare_amount", kind = "scatter")
# It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
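# The same filters can also be applied to the Pandas sample directly; a minimal sketch with toy rows standing in for the real data:

```python
import pandas as pd

toy_trips = pd.DataFrame({
    'trip_distance': [0.0, 2.5, 1.2],
    'fare_amount':   [52.0, 9.5, 1.0],
})
# keep positive distances and at least the $2.50 minimum fare
clean = toy_trips[(toy_trips['trip_distance'] > 0)
                  & (toy_trips['fare_amount'] >= 2.5)]
print(len(clean))  # → 1
```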
# ## Identify correct label
#
# Should we use `fare_amount` or `total_amount` as our label? What's the difference?
#
# To make this clear let's look at some trips that included a toll.
# #### **Exercise 3**
#
# Use the pandas DataFrame indexing to look at a subset of the trips dataframe created above where the `tolls_amount` is positive.
#
# **Hint**: You can index the dataframe over values which have `trips['tolls_amount'] > 0`.
# +
# TODO: Your code goes here
# -
# What do you see looking at the samples above? Does `total_amount` always reflect the `fare_amount` + `tolls_amount` + `tip`? Why would there be a discrepancy?
#
# To account for this, we will use the sum of `fare_amount` and `tolls_amount`
# ## Select useful fields
#
# What fields do you see that may be useful in modeling taxifare? They should be
#
# 1. Related to the objective
# 2. Available at prediction time
#
# **Related to the objective**
#
# For example we know `passenger_count` shouldn't have any effect on fare because fare is calculated by time and distance. Best to eliminate it to reduce the amount of noise in the data and make the job of the ML algorithm easier.
#
# If you're not sure whether a column is related to the objective, err on the side of keeping it and let the ML algorithm figure out whether it's useful or not.
#
# **Available at prediction time**
#
# For example `trip_distance` is certainly related to the objective, but we can't know the value until a trip is completed (depends on the route taken), so it can't be used for prediction.
#
# **We will use the following**
#
# `pickup_datetime`, `pickup_longitude`, `pickup_latitude`, `dropoff_longitude`, and `dropoff_latitude`.
# ## Clean the data
#
# We need to do some clean-up of the data:
#
# - Filter to latitudes and longitudes that are reasonable for NYC
# - the pickup longitude and dropoff_longitude should lie between -70 degrees and -78 degrees
# - the pickup_latitude and dropoff_latitude should lie between 37 degrees and 45 degrees
# - We shouldn't include fare amounts less than $2.50
# - Trip distances and passenger counts should be non-zero
# - Have the label reflect the sum of fare_amount and tolls_amount
#
# Let's change the BigQuery query appropriately, and only return the fields we'll use in our model.
# %%bigquery --project $PROJECT
#standardSQL
SELECT
(tolls_amount + fare_amount) AS fare_amount, -- create label that is the sum of fare_amount and tolls_amount
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
TODO: Your code goes here
TODO: Your code goes here
-- create a repeatable 1/5000th sample
AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 5000) = 1
# We now have a repeatable and clean sample we can use for modeling taxi fares.
# Copyright 2019 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| courses/machine_learning/deepdive/01_bigquery/labs/a_sample_explore_clean.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CPU performance measuring and estimation
# ## Report by <NAME>
# In this project, my goal is to be able to look at a CPU and tell which factors make it fast. We can do this by examining the attributes that can affect CPU speed and analyzing them to come up with rules and guidelines for choosing a CPU that performs well.
# The first thing we can do is gather and look at what these attributes are.
# # Building graphs and models
# + [markdown] tags=["hide", "remove_cell"]
# First we import our utility scripts and libraries
# + tags=["hide", "remove_cell"]
import sys
import os
import pandas as pd
import numpy as np
import matplotlib as mp
import seaborn as sns
import matplotlib.pyplot as plt
project_dir = '/home/atoris/course-project-thomas-wright/src'
if project_dir not in sys.path:
sys.path.insert(0, project_dir)
# -
import datautil as du
# + tags=["hide", "remove_cell"]
url = 'https://www.cpubenchmark.net/mid_range_cpus.html'
filename = 'cpu_data_encoded.csv'
df = du.load_data(url, filename)
# -
# Now we can start to graph our data
# +
amd = df[df['brand_Amd'] == 1]
intel = df[df['brand_Intel'] == 1]
def graph(x, y):
ax = intel.plot(y=y, x=x, kind='scatter', label="Intel", color="Blue")
amd.plot(y=y, x=x, kind='scatter', label="Amd", ax=ax, color="red")
graph('st_score', 'Turbo Speed')
# -
# We can see a basic trend of a higher clock speed allowing for better single-threaded performance. Here are a few more graphs that I thought had interesting results.
# +
graph('mt_score', 'Cores')
graph('price', 'Typical TDP')
graph('st_score', 'Cores')
# -
# # CPU attribute graph builder
# We can plot any attribute against another attribute; here you can choose what you want to compare and build a graph to visualize the comparison.
# +
import ipywidgets as widgets
columns = ['price', 'Clockspeed', 'Turbo Speed', 'Threads', 'Cores', 'Typical TDP', 'mt_score', 'st_score']
def graph_update(x, y, ax):
intel.plot(y=y, x=x, kind='scatter', label="Intel", ax=ax, color="Blue")
amd.plot(y=y, x=x, kind='scatter', label="Amd", ax=ax, color='Red')
im = widgets.interact_manual(
X_axis=columns, Y_axis=columns,)
def plot(X_axis='Cores', Y_axis='Turbo Speed', grid=True):
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
graph_update(X_axis, Y_axis, ax)
ax.grid(grid)
im = im(plot)
im.widget.children[3].description = "Create Graph"
# -
# # Creating the model
# Next we can split up our data and feed it into a Classifier.
# +
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
import seaborn as sns
np.random.seed(10)
sns.set_theme()
df.dropna(subset=['Typical TDP'], inplace=True)
features = df.drop(['mt_score', 'name', 'Socket', 'st_score', 'price'], axis=1)
target = df['mt_score'].tolist()
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=.25)
clf = RandomForestClassifier(n_estimators=250, max_features=9)
clf.fit(X_train, y_train)
y_test_pred = clf.predict(X_test)
# -
# Finally we can use our model to draw some conclusions. Here we can look at how each factor contributes to the multithreaded performance of a CPU.
# +
labels = pd.Index(['Clockspeed', 'Turbo Speed', 'Threads', 'Cores', 'Typical TDP', 'class_Desktop', 'class_Laptop', 'class_Server', 'brand_Amd', 'brand_Intel'])
importance = pd.Series(data=clf.feature_importances_, index=labels)
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
sns.set_style("white")
plt.pie(x=importance, labels=labels, autopct="%.1f%%")
plt.title("Feature importance on Multithreaded score", fontsize=14);
fig.savefig('../data/Feature_importance.png')
# -
# # Using a model for predictions
# Finally we can create our own hypothetical CPU and see how well it would perform in a multithreaded application.
# +
im = widgets.interact_manual(
clock_speed=widgets.IntSlider(min=1000, max=6000, step=1, value=3800),
turbo_speed=widgets.IntSlider(min=1000, max=6000, step=1, value=3800),
cores=widgets.IntSlider(min=1, max=64, step=1, value=4),
threads=widgets.IntSlider(min=1, max=128, step=1, value=8),
tdp=widgets.IntSlider(min=10, max=300, step=1, value=95),
cpu_class = widgets.RadioButtons(
options=['desktop', 'server', 'laptop'],
description='Platform:',
disabled=False),
brand = widgets.RadioButtons(
options=['intel', 'amd'],
description='Brand:',
disabled=False)
)
def predict(clock_speed=3800, turbo_speed=4100, cores=4, threads=8, tdp=95, cpu_class='desktop', brand='intel'):
data = {'Clockspeed' : [clock_speed/1000],
'Turbo Speed' : [turbo_speed/1000],
'Threads' : [threads],
'Cores' : [cores],
'Typical TDP' : [tdp],
'class_Desktop' : [0],
'class_Server' : [0],
'class_Laptop' : [0],
'brand_Amd' : [0],
'brand_Intel' : [0]}
if(cpu_class == 'desktop'):
data['class_Desktop'] = 1
if(cpu_class == 'server'):
data['class_Server'] = 1
if(cpu_class == 'laptop'):
data['class_Laptop'] = 1
if(brand == 'intel'):
data['brand_Intel'] = 1
if(brand == 'amd'):
data['brand_Amd'] = 1
df_data = pd.DataFrame(data)
result = clf.predict(df_data)
    print("\nThis CPU configuration would have an estimated Multithreaded score of " + str(result[0]) + ".\n")
im = im(predict)
im.widget.children[0].description = "Clock Speed (Mhz)"
im.widget.children[1].description = "Turbo Speed (Mhz)"
im.widget.children[2].description = "Core Count"
im.widget.children[3].description = "Thread Count"
im.widget.children[4].description = "TDP"
im.widget.children[7].description = "Generate"
# -
# # Building a better model
# The results from that don't seem very accurate for the given inputs, so pruning our less important features may help us get a better prediction.
# We can also switch to a linear regression model, which suits the data better now that we know which features to exclude.
# +
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
features = df.drop(['mt_score', 'name', 'Socket', 'st_score', 'price', 'class_Laptop', 'class_Desktop', 'class_Server', 'brand_Amd', 'brand_Intel'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=.25)
clf2 = LinearRegression()
clf2.fit(X_train, y_train)
im2 = widgets.interact_manual(
clock_speed=widgets.IntSlider(min=1000, max=6000, step=1, value=3800),
turbo_speed=widgets.IntSlider(min=1000, max=6000, step=1, value=3800),
cores=widgets.IntSlider(min=1, max=64, step=1, value=4),
threads=widgets.IntSlider(min=1, max=128, step=1, value=8),
tdp=widgets.IntSlider(min=10, max=300, step=1, value=95)
)
def predict(clock_speed=3800, turbo_speed=4100, cores=4, threads=8, tdp=95):
data = {'Clockspeed' : [clock_speed/1000],
'Turbo Speed' : [turbo_speed/1000],
'Threads' : [threads],
'Cores' : [cores],
'Typical TDP' : [tdp]}
df_data = pd.DataFrame(data)
result = clf2.predict(df_data)
    print("\nThis CPU configuration would have an estimated Multithreaded score of " + str(result[0]) + ".\n")
im2 = im2(predict)
im2.widget.children[0].description = "Clock Speed (Mhz)"
im2.widget.children[1].description = "Turbo Speed (Mhz)"
im2.widget.children[2].description = "Core Count"
im2.widget.children[3].description = "Thread Count"
im2.widget.children[4].description = "TDP"
im2.widget.children[5].description = "Generate"
# -
# # Final Thoughts
# I think this project came out rather well given the amount of data that was available to the model. There are a few issues with the model, one being that it sees core count as a negative attribute, but overall I am pleased with the final performance of the CPU multithreaded score estimation tool.
# The data I used was from www.cpubenchmark.net, and is still available there, but if you want to use the data that I cleaned and gathered without pinging their server hundreds of times, it's available at the project's GitHub: https://github.com/cpsc6300/course-project-thomas-wright.
| notebooks/3_Simple_Graphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab assignments
# ## DES
# DES is a block cipher algorithm for symmetric encryption.
# - Operates on 64-bit data blocks
# - The key size is 64 bits (56 key bits + 8 parity bits)
# - Uses 16 rounds of encryption with a Feistel network; a separate subkey is generated for each round
# - To encrypt data larger than 64 bits, one of the following modes of operation is used:
#     - ECB (electronic code book) - 64-bit blocks are encrypted in order, independently of each other
#     - CBC (cipher block chaining) - each 64-bit plaintext block (except the first) is XORed with the previous ciphertext block
#     - CFB (cipher feedback) / OFB (output feedback) - similar to CBC, but use other xor-based schemes
# - If a data block is smaller than 64 bits, padding is used, e.g. PKCS5 or, in its generalized form, PKCS7
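# As a sketch of the PKCS#7 idea mentioned above (the class below uses its own simplified padding helper):

```python
# Minimal PKCS#7 sketch with block size 8, as used by DES.
# Each missing byte is filled with the pad length itself.
def pkcs7_pad(data: bytes, block_size: int = 8) -> bytes:
    pad_len = block_size - len(data) % block_size  # always 1..block_size
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(padded: bytes) -> bytes:
    return padded[:-padded[-1]]

padded = pkcs7_pad(b"hello")       # three bytes of value 3 are appended
assert pkcs7_unpad(padded) == b"hello"
assert len(pkcs7_pad(b"12345678")) == 16  # aligned input gets a full extra block
```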
# #### Algorithm
# +
import bitarray
import itertools
from collections import deque
class DES(object):
_initial_permutation = [
58, 50, 42, 34, 26, 18, 10, 2,
60, 52, 44, 36, 28, 20, 12, 4,
62, 54, 46, 38, 30, 22, 14, 6,
64, 56, 48, 40, 32, 24, 16, 8,
57, 49, 41, 33, 25, 17, 9, 1,
59, 51, 43, 35, 27, 19, 11, 3,
61, 53, 45, 37, 29, 21, 13, 5,
63, 55, 47, 39, 31, 23, 15, 7
]
_final_permutation = [
40, 8, 48, 16, 56, 24, 64, 32,
39, 7, 47, 15, 55, 23, 63, 31,
38, 6, 46, 14, 54, 22, 62, 30,
37, 5, 45, 13, 53, 21, 61, 29,
36, 4, 44, 12, 52, 20, 60, 28,
35, 3, 43, 11, 51, 19, 59, 27,
34, 2, 42, 10, 50, 18, 58, 26,
33, 1, 41, 9, 49, 17, 57, 25
]
_expansion_function = [
32, 1, 2, 3, 4, 5,
4, 5, 6, 7, 8, 9,
8, 9, 10, 11, 12, 13,
12, 13, 14, 15, 16, 17,
16, 17, 18, 19, 20, 21,
20, 21, 22, 23, 24, 25,
24, 25, 26, 27, 28, 29,
28, 29, 30, 31, 32, 1
]
_permutation = [
16, 7, 20, 21, 29, 12, 28, 17,
1, 15, 23, 26, 5, 18, 31, 10,
2, 8, 24, 14, 32, 27, 3, 9,
19, 13, 30, 6, 22, 11, 4, 25
]
_pc1 = [
57, 49, 41, 33, 25, 17, 9,
1, 58, 50, 42, 34, 26, 18,
10, 2, 59, 51, 43, 35, 27,
19, 11, 3, 60, 52, 44, 36,
63, 55, 47, 39, 31, 23, 15,
7, 62, 54, 46, 38, 30, 22,
14, 6, 61, 53, 45, 37, 29,
21, 13, 5, 28, 20, 12, 4
]
_pc2 = [
14, 17, 11, 24, 1, 5,
3, 28, 15, 6, 21, 10,
23, 19, 12, 4, 26, 8,
16, 7, 27, 20, 13, 2,
41, 52, 31, 37, 47, 55,
30, 40, 51, 45, 33, 48,
44, 49, 39, 56, 34, 53,
46, 42, 50, 36, 29, 32
]
_left_rotations = [
1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1
]
_sbox = [
# S1
[
[14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
[0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
[4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
[15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13]
],
# S2
[
[15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10],
[3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5],
[0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15],
[13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9]
],
# S3
[
[10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8],
[13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1],
[13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7],
[1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12]
],
# S4
[
[7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15],
[13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9],
[10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4],
[3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14]
],
# S5
[
[2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9],
[14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6],
[4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14],
[11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3]
],
# S6
[
[12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11],
[10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8],
[9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6],
[4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13]
],
# S7
[
[4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1],
[13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6],
[1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2],
[6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12]
],
# S8
[
[13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7],
[1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2],
[7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8],
[2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11]
]
]
def __init__(self, key):
self.key = key
def encrypt(self, message):
padded = self.pkcs7_padding(message, pad=True)
result = []
for block in padded:
result += self.encrypt_64bit(''.join(str(i) for i in block))
return result
def decrypt(self, message, msg_in_bits=False):
bits_array_msg = []
if msg_in_bits:
bits_array_msg = message
else:
bits_array_msg = self._string_to_bitsarray(message)
if len(bits_array_msg) % 64 != 0:
raise ValueError('Ciphered code must be a multiple of 64')
blocks_lst = [
bits_array_msg[i:i + 64] for i in range(0, len(bits_array_msg), 64)
]
result = []
for block in blocks_lst:
decrypted = self.decrypt_64bit(block, msg_in_bits=True)
bl = list(
''.join(chr(int(
''.join(
map(str, decrypted[i:i + 8])),
2)) for i in range(0, len(decrypted), 8)))
bl = self._unpad(bl)
result += bl
return ''.join(result)
def encrypt_64bit(self, message):
return self.crypt(message, encrypt=True)
def decrypt_64bit(self, message, msg_in_bits=False):
return self.crypt(message, encrypt=False, msg_in_bits=msg_in_bits)
def crypt(self, message, encrypt=True, msg_in_bits=False):
bits_array_msg = []
if msg_in_bits:
bits_array_msg = message
else:
bits_array_msg = self._string_to_bitsarray(message)
bits_array_key = self._string_to_bitsarray(self.key)
if len(bits_array_msg) != 64:
raise ValueError('Message must be 64 bit!')
if len(bits_array_key) != 64:
raise ValueError('Key must be 64 bit!')
# Compute 16 48-bit subkeys
subkeys = self.get_subkeys(bits_array_key)
# Convert the message using the initial permutation block
msg = [bits_array_msg[i - 1] for i in self._initial_permutation]
L, R = msg[:32], msg[32:]
if encrypt:
for i in range(16):
prev_r = R
r_feistel = self.feistel_function(R, subkeys[i])
R = [L[i] ^ r_feistel[i] for i in range(32)]
L = prev_r
else:
for i in reversed(range(16)):
prev_l = L
l_feistel = self.feistel_function(L, subkeys[i])
L = [R[i] ^ l_feistel[i] for i in range(32)]
R = prev_l
before_final_permute = L + R
return [before_final_permute[i - 1] for i in self._final_permutation]
def pkcs7_padding(self, message, block_size=8, pad=True):
msg = list(message)
blocks_lst = [
msg[i:i + block_size] for i in range(0, len(msg), block_size)
]
s = block_size
return [
self._pad(b, s) if len(b) < block_size else b for b in blocks_lst
] if pad else blocks_lst
def feistel_function(self, r_32bit, subkey_48bit):
r_48bit = [r_32bit[i - 1] for i in self._expansion_function]
subkey_xor_r = [r_48bit[i] ^ subkey_48bit[i] for i in range(48)]
# Divide subkey_xor_r into 8 6-bit blocks for computing s-boxes
b_6_bit_blocks = [subkey_xor_r[i:i + 6] for i in range(0, 48, 6)]
# Compute 8 s-boxes and concatenate them into 32-bit vector
after_sboxes_32bit = list(itertools.chain(*[
self.compute_s_box(
self._sbox[i], b_6_bit_blocks[i]) for i in range(8)
]))
# Compute the permutation and return the 32-bit block
return [int(after_sboxes_32bit[i - 1]) for i in self._permutation]
def compute_s_box(self, sbox, b_6_bit):
row = int(''.join(str(x) for x in [b_6_bit[0], b_6_bit[5]]), 2)
col = int(''.join(str(x) for x in b_6_bit[1:5]), 2)
s_box_res = sbox[row][col]
return list('{0:04b}'.format(s_box_res))
def get_subkeys(self, bits_array_key):
# Extract 8 parity bits from the key (8, 16, 24, 32, 40, 48, 56, 64)
# key_56bit = bits_array_key
# del key_56bit[7::8]
# Compute Permuted Choice 1 on the key
key_56bit = [bits_array_key[i - 1] for i in self._pc1]
# Split the key into two 28-bit subkeys
key_56_left, key_56_right = [
key_56bit[i:i + 28] for i in range(0, 56, 28)
]
# Compute 16 48-bit keys using left rotations and permuted choice 2
subkeys_48bit = []
C, D = key_56_left, key_56_right
for i in range(16):
C_deque, D_deque = deque(C), deque(D)
C_deque.rotate(-self._left_rotations[i])
D_deque.rotate(-self._left_rotations[i])
C, D = list(C_deque), list(D_deque)
CD = C + D
subkeys_48bit.append([CD[i - 1] for i in self._pc2])
return subkeys_48bit
def _string_to_bitsarray(self, string):
        ba = bitarray.bitarray()
        # bitarray's fromstring() was removed in newer versions; frombytes() is equivalent
        ba.frombytes(string.encode('utf8'))
return [1 if i else 0 for i in ba.tolist()]
def _pad(self, arr, block_size):
z = block_size - len(arr)
return arr + [z] * z
def _unpad(self, arr):
if str(arr[-1]).isdigit():
arr_str = ''.join(str(i) for i in arr)
i = j = int(arr[-1])
while arr_str[-1] == str(j) and i > 0:
arr_str = arr_str[:-1]
i -= 1
return list(arr_str)
else:
return arr
# -
# #### Running the algorithm
# +
d = DES('qwertyui')
cipher = d.encrypt('hello world!')
print("Ciphered bits:\n", cipher)
deciphered = d.decrypt(cipher, msg_in_bits=True)
print("Deciphered text:\n", deciphered)
# -
# ## Hash function
# - Produces an 8-hex-digit (32-bit) digest
# - After processing the input, the variables a, b, c and d are concatenated in hexadecimal in the order d, c, a, b.
# #### Function code:
def hash_function(s=b''):
a, b, c, d = 0xa0, 0xb1, 0x11, 0x4d
for byte in bytearray(s):
a ^= byte
b = b ^ a ^ 0x55
c = b ^ 0x94
d = c ^ byte ^ 0x74
return format(d << 24 | c << 16 | a << 8 | b, '08x')
# #### Demonstration:
# +
str1 = 'hello'
str2 = 'hello!'
str3 = 'Hello World'
print('Hash for %s:\t\t' % str1, hash_function(bytes(str1, 'ascii')))
print('Hash for %s:\t' % str2, hash_function(bytes(str2, 'ascii')))
print('Hash for %s:\t' % str3, hash_function(bytes(str3, 'ascii')))
# -
# #### Collisions
# Searching for collisions of the hash function over random 4-byte strings:
# +
from random import choice, seed
ascii = ''.join([chr(i) for i in range(33, 127)])
seed(37)
found = {}
for j in range(5000):
# Build a 4 byte random string
s = bytes(''.join([choice(ascii) for _ in range(4)]), 'ascii')
h = hash_function(s)
if h in found:
v = found[h]
if v == s:
# Same hash, but from the same source string
continue
print(h, found[h], s)
found[h] = s
# -
# #### Collision probability curve over a chosen interval
# Computing collision probabilities for hash samples of a given size, using the birthday paradox principle.
# +
# %matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
# Calculate the probability of collision among 10000 keys using Birthday Paradox Principle
n = num_of_all_hashes = 16 ** 8  # 4294967296 possible 32-bit (8-hex-digit) digests
keys = 10000
probUnique = 1.0
keys_arr = np.array(range(1, keys))
coll_probs = []
for k in range(1, keys):
probUnique = probUnique * (n - (k - 1)) / n
coll_probs.append((1 - math.exp(-0.5 * k * (k - 1) / n)))
plt.plot(keys_arr, coll_probs)
plt.show()
# -
#
# ## Diffie-Hellman protocol
# Allows two or more parties to obtain a shared secret key over a communication channel that is not protected from eavesdropping. The resulting key is used to encrypt further exchanges with symmetric encryption algorithms.
#
# It relies on modular exponentiation and the commutativity of repeated exponentiation in modular arithmetic.
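# A toy numeric sketch of that symmetry (illustrative small numbers; real use requires large primes):

```python
p, g = 23, 5              # public: prime modulus and generator
a, b = 6, 15              # private keys of the two parties
A = pow(g, a, p)          # one party publishes A
B = pow(g, b, p)          # the other publishes B
shared = pow(B, a, p)     # (g^b)^a mod p
assert shared == pow(A, b, p) == 2  # both sides derive the same secret
```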
# #### Implementation
# +
from hashes.hash_function import hash_function
from binascii import hexlify
try:
import ssl
random_function = ssl.RAND_bytes
random_provider = "Python SSL"
except (AttributeError, ImportError):
import OpenSSL
random_function = OpenSSL.rand.bytes
random_provider = "OpenSSL"
class DiffieHellman(object):
def __init__(self, generator=2, prime=11,
key_length=540, private_key=None):
self.generator = generator
self.key_length = key_length
self.prime = prime
if private_key:
self.private_key = private_key
else:
self.private_key = self.gen_private_key(self.key_length)
self.public_key = self.gen_public_key()
def get_random(self, bits):
_rand = 0
_bytes = bits
while _rand.bit_length() < bits:
_rand = int.from_bytes(random_function(_bytes), byteorder='big')
return _rand
def gen_private_key(self, bits):
return self.get_random(bits)
def gen_public_key(self):
return pow(self.generator, self.private_key, self.prime)
def gen_secret(self, private_key, other_key):
return pow(other_key, private_key, self.prime)
def gen_key(self, other_key):
self.shared_secret = self.gen_secret(self.private_key, other_key)
try:
_shared_secret_bytes = self.shared_secret.to_bytes(
self.shared_secret.bit_length() // 8 + 1, byteorder='big'
)
except AttributeError:
_shared_secret_bytes = str(self.shared_secret)
self.key = hash_function(_shared_secret_bytes)
def get_key(self):
return self.key
def get_shared_secret(self):
return self.shared_secret
def show_params(self):
print('Parameters:')
print('Prime [{0}]: {1}'.format(self.prime.bit_length(), self.prime))
print(
'Generator [{0}]: {1}\n'
.format(self.generator.bit_length(), self.generator))
print(
'Private key [{0}]: {1}\n'
.format(self.private_key.bit_length(), self.private_key))
print(
'Public key [{0}]: {1}'
.format(self.public_key.bit_length(), self.public_key))
def show_results(self):
print('Results:')
print(
'Shared secret [{0}]: {1}'
.format(self.shared_secret.bit_length(), self.shared_secret))
print(
'Shared key [{0}]: {1}'.format(len(self.key),
hexlify(bytes(self.key, 'ascii'))))
# -
# #### Running the algorithm:
# +
p = 11
g = 2
d1 = DiffieHellman(generator=g, prime=p, key_length=3)
d2 = DiffieHellman(generator=g, prime=p, key_length=3)
d1.gen_key(d2.public_key)
d2.gen_key(d1.public_key)
d1.show_params()
d1.show_results()
d2.show_params()
d2.show_results()
if d1.get_key() == d2.get_key():
print('Shared keys match!')
print('Key: ', hexlify(bytes(d1.key, 'ascii')))
print('Hashed key: ', d1.get_key())
else:
print("Shared secrets didn't match!")
    print("Shared secret A: ", d1.gen_secret(d1.private_key, d2.public_key))
    print("Shared secret B: ", d2.gen_secret(d2.private_key, d1.public_key))
# -
# ## The RSA algorithm
# RSA is an asymmetric cryptographic algorithm that uses public and private keys.
#
# - The public key is the pair (e, N), where e is the public exponent and N = p * q is the modulus
# - The private key is the pair (d, N), where d is the multiplicative inverse of e modulo Euler's totient of N
# - Data is encrypted with the public key and decrypted with the private key.
# - Euler's totient is the expression (p - 1) * (q - 1), where p and q are random primes
# - Encryption and decryption consist of modular exponentiation applied to the data with the corresponding key
#
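# Before the full class, here is a toy numeric sketch with textbook-sized primes (these particular numbers are illustrative only; `pow(e, -1, phi)` needs Python 3.8+):

```python
p, q = 61, 53
N = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e, here 2753
m = 65                     # a plaintext encoded as a number < N
c = pow(m, e, N)           # encrypt with the public key
assert pow(c, d, N) == m   # decrypting with the private key recovers m
```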
# #### Implementation:
# +
import json
from random import randint
from base64 import b64encode, b64decode
class RSA(object):
def __init__(self):
self.p, self.q = self.gen_p_q()
self.N = self.p * self.q
self.phi = self.euler_function(self.p, self.q)
self.public_key_pair = (self.gen_public_exponent(), self.N)
self.private_key_pair = (self.gen_private_exponent(), self.N)
def encrypt(self, message, public_key_pair):
return self.crypt(message, public_key_pair, encrypt=True)
def decrypt(self, cipher, private_key_pair):
return self.crypt(cipher, private_key_pair, encrypt=False)
def crypt(self, message, key_pair, encrypt=True):
if encrypt:
msg = [ord(c) for c in message]
r = [pow(i, key_pair[0], key_pair[1]) for i in msg]
return b64encode(bytes(json.dumps(r), 'utf8')).decode('utf8'), r
else:
msg = json.loads(b64decode(message).decode('utf8'))
r = [pow(i, key_pair[0], key_pair[1]) for i in msg]
return ''.join(chr(i) for i in r), r
def get_public_key_pair(self):
return self.public_key_pair
def get_private_key_pair(self):
return self.private_key_pair
def get_phi(self):
return self.phi
def get_p_q(self):
return self.p, self.q
def gen_public_exponent(self):
for e in reversed(range(self.phi)):
if self.fermat_primality(e):
self.e = e
return e
def gen_private_exponent(self):
self.d = self.mod_multiplicative_inv(self.e, self.phi)[0]
return self.d
def euler_function(self, p, q):
return (p - 1) * (q - 1)
def gen_p_q(self):
p_c, q_c = randint(2, 100000), randint(2, 100000)
p = q = None
gen1 = gen2 = self.eratosthenes_sieve()
bigger = max(p_c, q_c)
for i in range(bigger):
if p_c > 0:
p = next(gen1)
if q_c > 0:
q = next(gen2)
p_c -= 1
q_c -= 1
return p, q
def fermat_primality(self, n):
if n == 2:
return True
if not n & 1:
return False
return pow(2, n - 1, n) == 1
def extended_euclide(self, b, n):
# u*a + v*b = gcd(a, b)
# return g, u, v
x0, x1, y0, y1 = 1, 0, 0, 1
while n != 0:
q, b, n = b // n, n, b % n
x0, x1 = x1, x0 - q * x1
y0, y1 = y1, y0 - q * y1
return b, x0, y0
def mod_multiplicative_inv(self, a, b):
g, u, v = self.extended_euclide(a, b)
return b + u, a - v
def eratosthenes_sieve(self):
D = {}
q = 2
while True:
if q not in D:
yield q
D[q * q] = [q]
else:
for p in D[q]:
D.setdefault(p + q, []).append(p)
del D[q]
q += 1
def print_info(self):
print('p = %d\nq = %d' % (self.p, self.q))
print('N = %d\nphi = %d' % (self.N, self.phi))
print('e = %d\nd = %d' % (self.e, self.d))
# -
# #### Running the algorithm:
# +
rsa = RSA()
message = 'hello!'
cipher = rsa.encrypt(message, rsa.get_public_key_pair())
print('Cipher:\n', cipher[0])
deciphered = rsa.decrypt(cipher[0], rsa.get_private_key_pair())
print('Deciphered text:\n', deciphered[0])
rsa.print_info()
# -
# ## Digital signature
# An implementation of a simplified digital signature scheme.
#
# $$H_i = (H_{i-1} + M_i)^2 \mod n,H_0 = 0$$
#
# $$S = H ^ d \mod n$$
#
# $$H' = S^e \mod n$$
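# The round trip can be checked numerically with the toy key pair used in the demonstration (n = 91 = 7 * 13, so φ(n) = 72, and e * d = 5 * 29 = 145 ≡ 1 mod 72):

```python
n, e, d = 91, 5, 29
H = 42                    # a pretend hash value
S = pow(H, d, n)          # sign with the private exponent
assert pow(S, e, n) == H  # verification recovers the original hash
```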
# #### Implementation:
# +
import re
class Signature(object):
def __init__(self, public_key, private_key, n):
self.public_key = public_key
self.private_key = private_key
self.n = n
def sign(self, message, private_key):
H = self.hash_function(message)
signature = pow(H, private_key, self.n)
return '@' + str(signature) + '@' + message
def verify(self, message, public_key):
        regex = re.compile(r'@\d+@')
match = regex.search(message)
if match:
signature = int(match.group(0).strip('@'))
msg = regex.sub('', message)
H1 = pow(signature, public_key, self.n)
H2 = self.hash_function(msg)
return H1 == H2
def hash_function(self, message):
H = 0
for c in [ord(c) for c in message]:
H = (H + c)**2 % self.n
return H
def get_public_key(self):
return self.public_key
def get_private_key(self):
return self.private_key
# -
# #### Signature demonstration:
# +
public_key = 5
private_key = 29
n = 91
signature = Signature(public_key, private_key, n)
message = 'hello!'
signed_message = signature.sign(message, signature.get_private_key()) # sign the message
print('Initial message: ', message)
print('Signed message: ', signed_message)
if signature.verify(signed_message, public_key):
print('Verification successful! Message was not modified.\n')
else:
print('Verification error! Message was modified.\n')
modified_message = signed_message + ' hi!' # modify signed message
print('Modified message: ', modified_message)
if signature.verify(modified_message, public_key):
print('Verification successful! Message was not modified.')
else:
print('Verification error! Message was modified')
# -
# # Practical exercises
# ## <NAME>
# #### Encryption:
def cipher(text, alphabet='abcdefghijklmnopqrstuvwxyz', key=0):
result = ""
alphabet = alphabet.lower()
n = len(alphabet)
for char in text:
if char.isalpha():
new_char = alphabet[(alphabet.find(char.lower()) + key) % n]
result += new_char if char.islower() else new_char.upper()
else:
result += char
return result
# #### Decryption:
def decipher(text, alphabet='abcdefghijklmnopqrstuvwxyz', key=0):
result = ""
alphabet = alphabet.lower()
n = len(alphabet)
for char in text:
if char.isalpha():
new_char = alphabet[(alphabet.find(char.lower()) - key + n) % n]
result += new_char if char.islower() else new_char.upper()
else:
result += char
return result
# #### Running the algorithm:
# +
alphabet = 'АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ'
text = 'Съешь же ещё этих мягких французских булок, да выпей чаю.'
key = 3
ciphered_phrase = cipher(text, alphabet, key)
print(ciphered_phrase)
deciphered_phrase = decipher(ciphered_phrase, alphabet, key)
print(deciphered_phrase)
# -
# #### Cryptanalysis
# The Caesar cipher can be broken by using the letter frequencies of the alphabet and picking the shift with the lowest cross-entropy
# ##### Code for breaking English-language ciphertexts:
# +
import numpy as np
ENGLISH_FREQS = [
0.08167, 0.01492, 0.02782, 0.04253, 0.12702, 0.02228,
0.02015, 0.06094, 0.06966, 0.00153, 0.00772, 0.04025,
0.02406, 0.06749, 0.07507, 0.01929, 0.00095, 0.05987,
0.06327, 0.09056, 0.02758, 0.00978, 0.02360, 0.00150,
0.01974, 0.00074
]
# Returns the cross-entropy of the given string with respect to
# the English unigram frequencies, which is a positive
# floating-point number.
def get_entropy(str):
sum, ignored = 0, 0
for c in str:
if c.isalpha():
sum += np.log(ENGLISH_FREQS[ord(c.lower()) - 97])
else:
ignored += 1
return -sum / np.log(2) / (len(str) - ignored)
# Returns the entropies when the given string is decrypted with
# all 26 possible shifts, where the result is an array of tuples
# (int shift, float entropy) -
# e.g. [(0, 2.01), (1, 4.95), ..., (25, 3.73)].
def get_all_entropies(str):
result = []
for i in range(0, 26):
result.append((i, get_entropy(decipher(str, key=i))))
return result
def cmp_to_key(mycmp):
'Convert a cmp= function into a key= function'
class K(object):
def __init__(self, obj, *args):
self.obj = obj
def __lt__(self, other):
return mycmp(self.obj, other.obj) < 0
def __gt__(self, other):
return mycmp(self.obj, other.obj) > 0
def __eq__(self, other):
return mycmp(self.obj, other.obj) == 0
def __le__(self, other):
return mycmp(self.obj, other.obj) <= 0
def __ge__(self, other):
return mycmp(self.obj, other.obj) >= 0
def __ne__(self, other):
return mycmp(self.obj, other.obj) != 0
return K
def comparator(x, y):
if x[1] < y[1]:
return -1
elif x[1] > y[1]:
return 1
elif x[0] < y[0]:
return -1
elif x[0] > y[0]:
return 1
else:
return 0
# -
# ##### Demonstration:
# +
text = 'hello'
ciphered = cipher(text, alphabet='abcdefghijklmnopqrstuvwxyz', key=14)
print('Initial text: ', text)
print('Ciphered text: ', ciphered)
entropies = get_all_entropies(ciphered)
entropies.sort(key=cmp_to_key(comparator))
best_shift = entropies[0][0]
cracked_val = decipher(ciphered, key=best_shift)
print('\nBest guess:')
print('%d rotations\nDeciphered text: %s\n' % (best_shift, cracked_val))
print('=========\nFull circle:')
for i in range(0, 26):
print('%d -\t%s' % (i, decipher(ciphered, key=i)))
# -
# ## <NAME>
# <NAME> is a polyalphabetic method of encrypting alphabetic text using a keyword.
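# Arithmetically, letter i of the ciphertext is C_i = (M_i + K_i) mod 26, with the keyword repeated to the message length. A minimal sketch (uppercase Latin letters only; the class below does the same via a tabula recta):

```python
def vigenere(msg, key, sign=1):
    # sign=1 encrypts, sign=-1 decrypts
    shift = lambda c, k: chr((ord(c) - 65 + sign * (ord(k) - 65)) % 26 + 65)
    return ''.join(shift(c, key[i % len(key)]) for i, c in enumerate(msg))

assert vigenere('ATTACKATDAWN', 'LEMON') == 'LXFOPVEFRNHR'
assert vigenere('LXFOPVEFRNHR', 'LEMON', sign=-1) == 'ATTACKATDAWN'
```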
# #### Implementation:
# +
import string
class Vigenere(object):
def __init__(self, key):
self.key = key
self.tabula_recta = self.generate_tabula_recta()
def encrypt(self, msg):
msg_l = len(msg)
key = self.adjust_key(self.key, msg_l)
return ''.join(self.tabula_recta[msg[i]][key[i]] for i in range(msg_l))
def decrypt(self, msg):
msg_l, t = len(msg), self.tabula_recta
key = self.adjust_key(self.key, msg_l)
return ''.join(
list(t[key[i]].keys())[
list(t[key[i]].values()).index(msg[i])
] for i in range(msg_l)
)
def generate_tabula_recta(self):
alphabet = a = list(string.ascii_uppercase)
n = len(alphabet)
tabula_recta = dict()
for index, row_c in enumerate(alphabet):
tabula_recta[row_c] = dict()
for col_c in alphabet:
tabula_recta[row_c][col_c] = a[(a.index(col_c) + index) % n]
return tabula_recta
def adjust_key(self, key, length):
key_len = len(key)
        return ''.join([key[i % key_len] for i in range(length)])  # (i + key_len) % key_len reduces to i % key_len
def get_key(self):
return self.key
def get_tabula_recta(self):
return self.tabula_recta
def pretty_print(self, d, space=3, fill='-'):
strs = ''.join('{{{0}:^{1}}}'.format(
str(i), str(space)) for i in range(len(d) + 1)
)
std = sorted(d)
print(strs.format(" ", *std))
for x in std:
print(strs.format(x, *(d[x].get(y, fill) for y in std)))
# -
# #### Tabula Recta (the Vigenère table):
v = Vigenere('TEST')
v.pretty_print(v.get_tabula_recta(), space=3)
# #### Algorithm demonstration:
# +
key = 'LEMON'
message = 'ATTACKATDAWN'
vigenere = Vigenere(key)
cipher = vigenere.encrypt(message)
deciphered = vigenere.decrypt(cipher)
print('Key:\t\t', key)
print('Message:\t', message)
print('Ciphered:\t', cipher)
print('Deciphered:\t', deciphered)
# -
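# The tabula recta lookup above is equivalent to modular arithmetic on letter indices: encryption computes $C_i = (P_i + K_i) \bmod 26$, and decryption subtracts instead. A minimal sketch, independent of the class above (uppercase A-Z input assumed):

```python
# Vigenère via letter arithmetic, with A=0 ... Z=25.
def vigenere_mod(msg, key, decrypt=False):
    sign = -1 if decrypt else 1
    return ''.join(
        chr((ord(c) - 65 + sign * (ord(key[i % len(key)]) - 65)) % 26 + 65)
        for i, c in enumerate(msg)
    )

print(vigenere_mod('ATTACKATDAWN', 'LEMON'))        # -> LXFOPVEFRNHR
print(vigenere_mod('LXFOPVEFRNHR', 'LEMON', True))  # -> ATTACKATDAWN
```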
# ## Vernam Cipher (One-Time Pad)
# The algorithm exists in several variants: for example, the original variant, which XORs the message with the secret key, and the Vernam cipher modulo m, in which the characters of the plaintext, ciphertext, and key take values from the residue ring over the alphabet's character set.
# #### Algorithm code:
# +
import string
import random
class OneTimePad(object):
def __init__(self, key=None, alphabet=string.ascii_uppercase):
self.alphabet = alphabet
self.key = key
def crypt(self, msg, mode='original', encrypt=True):
if not self.key:
self.key = self.generate_secure_key(len(msg))
if mode == 'original':
return self.crypt_original(msg)
else:
return self.crypt_by_mod(msg, encrypt)
def crypt_original(self, msg):
return ''.join(
chr(ord(msg[i]) ^ ord(self.key[i])) for i in range(len(msg))
)
def crypt_by_mod(self, msg, encrypt=True):
if encrypt:
return self.encrypt_by_mod(msg)
else:
return self.decrypt_by_mod(msg)
def encrypt_by_mod(self, msg):
a, l, a_l = self.alphabet, len(msg), len(self.alphabet)
return ''.join(
a[(a.index(msg[i]) + a.index(self.key[i])) % a_l] for i in range(l)
)
def decrypt_by_mod(self, msg):
a, l, a_l = self.alphabet, len(msg), len(self.alphabet)
return ''.join(
a[(a.index(msg[i]) - a.index(self.key[i])) % a_l] for i in range(l)
)
def generate_secure_key(self, length):
return ''.join(
random.SystemRandom().choice(self.alphabet) for _ in range(length)
)
# -
# #### Demonstration:
# +
print('===== The original XOR version =====')
message = 'ALLSWELLTHATENDSWELL'
key = '<KEY>'
pad = OneTimePad(key)
cipher = pad.crypt(message)
deciphered = pad.crypt(cipher)
print('Original:\t', message)
print('The key:\t', key)
print('Cipher\t\t', [ord(c) for c in cipher])
print('Deciphered:\t', deciphered)
print('\n======= The modulo version =======')
message = 'HELLO'
pad = OneTimePad()
cipher = pad.crypt(message, mode='mod', encrypt=True)
deciphered = pad.crypt(cipher, mode='mod', encrypt=False)
print('Original:\t', message)
print('The key:\t', pad.key)
print('Cipher\t\t', cipher)
print('Deciphered:\t', deciphered)
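# Note that the XOR version uses the same `crypt` call for both directions: XOR is its own inverse, since $(m \oplus k) \oplus k = m$ for any message and key.

```python
# XOR encryption and decryption are the same operation.
m, k = 0b1011, 0b0110
c = m ^ k              # encrypt
assert c ^ k == m      # applying the key again recovers the message
print(bin(c))  # -> 0b1101
```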
| Bpid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dziban: Exploring Edits
# ## Example 1
#
# A user is analyzing the cars dataset. They begin by asking the question, 'How does horsepower relate to MPG?' They can formulate this query as follows:
from dziban.chart import Chart
from vega_datasets import data
chart = Chart(data.cars())
chart.see('Miles_per_Gallon', 'Horsepower', name="scatter")
# A trend is visible: increasing **horsepower** is correlated with decreasing **MPG**. To confirm this, they may wish to view an aggregate metric: the mean of the horsepower in relation to MPG.
chart['Horsepower'].aggregate('mean')
# A **cold** recommendation would yield the following.
chart.see()
# Note that a few things have changed that may be confusing. First, the _x_ and _y_ axes were swapped: the pattern seen in the previous visualization has been flipped. Second, the _zero_ baseline for MPG has been removed, which again makes comparison difficult. A better visualization would be one that has been **anchored**, and thus changes the fewest items whilst optimizing effectiveness.
chart.see(anchor="scatter", name="meaned")
chart.see('+Origin', anchor="meaned", name="origins")
| examples/old/Exploring Edits Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/usm.jpg" width="480" height="240" align="center"/>
# <center>
# <strong> <font size="6"> MAT281 - 2nd Semester 2020</font> </strong>
#
# </center>
# <center>
# <strong> <font size="3"> Professor: <NAME></font> </strong>
#
# </center>
import numpy as np
# ## Problem 01
# The ```SMA()``` function is as follows:
def SMA(Array, Numero):
    stop = np.shape(Array)[0] - Numero + 1  # stop here so we do not iterate past the end
    # each entry is the mean of a window of length Numero
    AF = [np.cumsum(Array[i:i + Numero], dtype=float)[Numero - 1] / Numero for i in range(0, stop)]
    return np.array(AF)  # finally, convert the result to a NumPy array
a = np.array([5,3,8,10,2,1,5,1,0,2])
SMA(a,2)
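# As a side note (not part of the required solution), an equivalent vectorized moving average can be computed with `np.convolve` using a uniform kernel in `'valid'` mode:

```python
import numpy as np

def sma_convolve(arr, n):
    # mean filter: convolve with n ones divided by n, keeping complete windows only
    return np.convolve(arr, np.ones(n) / n, mode='valid')

a = np.array([5, 3, 8, 10, 2, 1, 5, 1, 0, 2])
print(sma_convolve(a, 2))  # -> [4.  5.5 9.  6.  1.5 3.  3.  0.5 1. ]
```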
# ## Problem 02
# For this function, note that in some cases the last row could end up with fewer elements than the number of columns; in that case the last row is dropped, i.e., there are cases where the matrix will not contain every element of the array.
#
# With that in mind, the ```Strides()``` function looks like this.
def Strides(Arreglo, Columnas, Desfase):
    largo = np.shape(Arreglo)[0]  # length of the array
    Matriz = []
    for i in range(0, largo - Desfase, Columnas - Desfase):  # advance according to the overlap
        if i + Columnas <= largo:  # keep complete rows only; note <= (with < the final
            Matriz.append(Arreglo[i:i + Columnas])  # complete window was wrongly dropped)
    return np.array(Matriz)  # finally, convert the result to a NumPy array
b = np.array([1,2,3,4,5,6,7,8,9,10])
Strides(b,4,2)
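# For comparison, NumPy (>= 1.20) ships a helper for building overlapping windows directly; a sketch with window length `Columnas` and step `Columnas - Desfase`, keeping every complete window:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def strides_np(arr, cols, overlap):
    step = cols - overlap
    # all length-cols windows, then keep every step-th starting position
    return sliding_window_view(arr, cols)[::step]

b = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(strides_np(b, 4, 2))
```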
# ## Problem 03
# For the ```EsCuadradoMagico()``` function we create the helper ```EsCuaYSonCon()```, which checks that the matrix is square and that its entries are the consecutive numbers from 1 to $n^2$. Both functions end up as follows:
# +
def EsCuaYSonCon(Matrix):
    n, m = np.shape(Matrix)
    if n != m:
        return False  # check that the matrix is square
    for i in range(1, n ** 2 + 1):  # check that every number from 1 to n squared appears
        if i not in Matrix:
            return False
    return True
def EsCuadradoMagico(Matrix):
    if not EsCuaYSonCon(Matrix):  # verify the preconditions
        return False  # (square matrix + consecutive numbers from 1 to n squared)
    n, m = np.shape(Matrix)
    SumasF, SumasC = Matrix.sum(axis = 1), Matrix.sum(axis = 0)  # arrays with the sum of each row and column
    SumaD1, SumaD2 = sum(Matrix.diagonal()), sum(np.fliplr(Matrix).diagonal())  # sums of the two diagonals
    if SumaD1 == SumaD2:
        for i in range(0, n):
            if SumasF[i] != SumaD1 or SumasC[i] != SumaD1:  # every row AND column sum must match
                return False  # (the original used `and`, which let mismatched rows slip through)
        return True
    return False
# -
A = np.array([[4,9,2],[3,5,7],[8,1,6]])
B = np.array([[4,2,9],[3,5,7],[8,1,6]])
print(EsCuadradoMagico(A))
print(EsCuadradoMagico(B))
| Labs/Laboratorio-02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ker_fastai
# language: python
# name: ker_fastai
# ---
# # Learning Rate Range Test on CIFAR-10 with fastai
# This notebook uses the [fastai](https://github.com/fastai/fastai) library.
#
# It is meant to reproduce the learning rate range test that fastai published [in this notebook](https://github.com/sgugger/Deep-Learning/blob/master/Cyclical%20LR%20and%20momentums.ipynb) and presented in [this blog post](https://towardsdatascience.com/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6).
#
# As the linked notebook is based on an old version of fastai, we modify the code to comply with the latest version of the library.
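# Conceptually, the range test sweeps the learning rate exponentially from a small start value to a large end value over a series of mini-batches, recording the loss at each step; the suggested rate is read off the resulting loss curve. The schedule itself is just a geometric progression (names below are illustrative, not fastai API):

```python
import math

def lr_schedule(start_lr, end_lr, n):
    # n geometrically spaced learning rates from start_lr to end_lr
    return [start_lr * (end_lr / start_lr) ** (i / (n - 1)) for i in range(n)]

lrs = lr_schedule(1e-7, 100, 100)
print(lrs[0], lrs[-1])
```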
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
# Set seeds for reproducibility
# +
import torch
import numpy as np
import random as rn
np.random.seed(42) # cpu vars
rn.seed(12345) # Python
torch.manual_seed(1234) # cpu vars
torch.cuda.manual_seed(1234)
torch.cuda.manual_seed_all(1234) # gpu vars
torch.backends.cudnn.deterministic = True #needed
torch.backends.cudnn.benchmark = False
# -
# ## fastai reference result
# The tests are run on CIFAR-10 with an SGD optimizer; the result, shown in the following picture, leads to selecting a final range of 0.08 - 0.8 (to 3).
from IPython.display import Image
Image(url="https://miro.medium.com/max/346/1*aSQd2ZN4Kl3DXPJzw3kTpQ.jpeg", width=600, height=300)
# ## LRRT tests
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
from fastai.vision import *
path = untar_data(URLs.CIFAR)
# Data augmentation: random horizontal flip and reflection padding of 4 pixels.
ds_tfms = ([*rand_pad(4, 32), flip_lr(p=0.5)], [])
data = ImageDataBunch.from_folder(path, valid='test', ds_tfms=ds_tfms, bs=512).normalize(cifar_stats)
img,label = data.train_ds[0]
print(label)
img
img.shape
# ### Resnet-56 model
# Basic block of the ResNet variant for CIFAR-10 (no bottlenecks).
class BasicBlock(nn.Module):
def __init__(self, ch_in, ch_out, stride=1):
super().__init__()
self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=stride, padding=1, bias=False)
        torch.nn.init.kaiming_uniform_(self.conv1.weight, a=0, nonlinearity='relu')
self.bn1 = nn.BatchNorm2d(ch_out)
self.conv2 = nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1, bias=False)
        torch.nn.init.kaiming_uniform_(self.conv2.weight, a=0, nonlinearity='relu')
self.bn2 = nn.BatchNorm2d(ch_out)
if stride != 1 or ch_in != ch_out:
self.shortcut = nn.Sequential(
nn.Conv2d(ch_in, ch_out, kernel_size=1, stride=stride, bias=False)#,
#nn.BatchNorm2d(ch_out)
# removing this additional BN layer to have the very same model as in
# https://keras.io/examples/cifar10_resnet/
)
            torch.nn.init.kaiming_uniform_(self.shortcut[0].weight, a=0, nonlinearity='relu')
def forward(self, x):
shortcut = self.shortcut(x) if hasattr(self, 'shortcut') else x
out = self.conv1(x)
out = self.bn2(self.conv2(F.relu(self.bn1(out))))
out += shortcut
return F.relu(out)
# Resnet for cifar10 with 56 convolutional layers.
#
# We override the default fastai initialization and use the same as in the keras implementation: ```kaiming_uniform_``` (the in-place initializer; the un-suffixed name is a deprecated alias):
#
# ```
# torch.nn.init.kaiming_uniform_(self.conv1.weight, a=0, nonlinearity='relu')
# ```
class ResNet(nn.Module):
def __init__(self, num_blocks, num_classes=10):
super().__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=False)
        torch.nn.init.kaiming_uniform_(self.conv1.weight, a=0, nonlinearity='relu')
self.bn1 = nn.BatchNorm2d(16)
self.layer1 = self.make_group_layer(16, 16, num_blocks[0], stride=1)
self.layer2 = self.make_group_layer(16, 32, num_blocks[1], stride=2)
self.layer3 = self.make_group_layer(32, 64, num_blocks[2], stride=2)
self.linear = nn.Linear(64, num_classes)
        torch.nn.init.kaiming_uniform_(self.linear.weight, a=0, nonlinearity='relu')
def make_group_layer(self,ch_in, ch_out, num_blocks, stride):
layers = [BasicBlock(ch_in, ch_out, stride)]
for i in range(num_blocks-1):
layers.append(BasicBlock(ch_out, ch_out, stride=1))
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = F.adaptive_avg_pool2d(out, 1)
out = out.view(out.size(0), -1)
        return F.log_softmax(self.linear(out), dim=1)  # explicit dim avoids the implicit-dim deprecation warning
# ### SGD Optimizer
# The default optimizer is Adam. We switch to SGD.
import functools
my_optim = functools.partial(torch.optim.SGD, lr=0.01, momentum=0.9)
learn = Learner(data, ResNet([9,9,9]), metrics=accuracy, opt_func=my_optim).to_fp32()
learn.crit = F.nll_loss
learn.opt_func
learn.loss_func
learn.summary()
# ### Single run of the Learning Rate Finder
learn.lr_find(wd=1e-4, end_lr=100)
learn.recorder.plot(suggestion=True)
# ### Compute averages on multiple runs of LRRT
# +
import pickle
import numpy as np
from matplotlib import pyplot as plt
import os
from lr_finder_tools.visualization import plot_lr_loss, plot_lrrt_curves_by_max_lr, stack_tests
# -
def multiple_runs_lr_find(n, learn=False, filepath='./lr_find_data.pkl'):
suggested_lrs = []
max_lrs = []
losses = []
input_learn = learn
for i in range(n):
if not input_learn:
learn = Learner(data, ResNet([9,9,9]), metrics=accuracy, opt_func=my_optim).to_fp32()
learn.crit = F.nll_loss
learn.lr_find(wd=1e-4, end_lr=100)
learn.recorder.plot(suggestion=True)
losses.append(learn.recorder.losses)
suggested_lrs.append(learn.recorder.min_grad_lr)
min_loss_idx = np.argmin(learn.recorder.losses)
max_lrs.append(learn.recorder.lrs[min_loss_idx]/10)
plt.close()
lrs = 1e-7*(100/1e-7)**np.linspace(0, 1, num=100)
lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res = stack_tests(lrs, losses, suggested_lrs, max_lrs)
with open(filepath, 'wb') as fp:
pickle.dump([lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res], fp)
return lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res
NUM_EXPERIMENTS = 20
# ### Multiple runs with same weight initialization and random batch sampling
lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res = multiple_runs_lr_find(
n = NUM_EXPERIMENTS,
learn = learn,
filepath = './lr_find_data_reset_weight.pkl'
)
with open('./lr_find_data_reset_weight.pkl', 'rb') as fp:
lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res = pickle.load(fp)
plot_lr_loss(lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res, epoch_ratio="1.0", optimizer='sgd', bs=512, ylim_top_ratio=0.25)
plot_lrrt_curves_by_max_lr(lrs_res, all_losses_res, all_max_lr_res)
# ### Multiple runs with different weight initialization and random batch sampling
lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res = multiple_runs_lr_find(
n = NUM_EXPERIMENTS,
learn = False,
filepath = './lr_find_data_random_weight.pkl'
)
with open('./lr_find_data_random_weight.pkl', 'rb') as fp:
lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res = pickle.load(fp)
plot_lr_loss(lrs_res, all_losses_res, all_min_lr_res, all_max_lr_res, epoch_ratio="1.0", optimizer='sgd', bs=512)
plot_lrrt_curves_by_max_lr(lrs_res, all_losses_res, all_max_lr_res)
| fastai_lr_finder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="UzNwUgtwKEQf"
# # EDA
# +
# internal imports
# OPTIONAL: Load the "autoreload" extension so that code can change
# %load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
# %autoreload 2
import sys, os
sys.path.insert(0, os.path.abspath('..'))
from src.data import extract_media_data as emd
from src.data import checker
from src.data import preprocessor
from src.models import topic_modeling as tm
# + id="0X87BxfhKEQk"
import numpy as np
import pandas as pd
import seaborn as sn
from typing import List
from tqdm import tqdm
# + id="gAGz0aRdD3T4"
#Visualizations
import seaborn as sns
import matplotlib.pyplot as plt
import pyLDAvis
import pyLDAvis.sklearn
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
#Natural Language Processing (NLP)
import spacy
import gensim
from spacy.tokenizer import Tokenizer
from gensim.corpora import Dictionary
from gensim.models.ldamulticore import LdaMulticore
from gensim.models import CoherenceModel
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from pprint import pprint
# from wordcloud import STOPWORDS
# stopwords = set(STOPWORDS)
# -
# data before the war
data_before = pd.read_csv('../data/interim/new_data.csv')
# data after the war
data_after = pd.read_csv('../data/interim/data_cleaned_version_1.csv', index_col=[0])
# + [markdown] id="aEmbnp3HKEQm"
# ## Overview
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 476, "status": "ok", "timestamp": 1649791981346, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="R5e4pSK4Q8ex" outputId="ed5689e2-db34-410d-ac9b-6b01de9ca8ad"
data_before.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 530} executionInfo={"elapsed": 112, "status": "ok", "timestamp": 1649791484617, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="XK96kGX7OvBC" outputId="16c93b75-8fa3-4357-8915-3092ea4557a4"
# checker.check_data(data_before)
data_before.head()
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 714, "status": "ok", "timestamp": 1649791992242, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="ueIqGKkGKEQm" outputId="eea6331f-3c79-461d-a2b7-be2940108b6e"
data_after.info()
# + [markdown] id="Aysy2ScxRD52"
# ## Concate two dataframe together
# + id="A5E-xmX2RHBJ"
df = pd.concat([data_before, data_after])
df.reset_index(inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 698, "status": "ok", "timestamp": 1649792299581, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="a8hr9ifSSJwo" outputId="52a889d7-6df2-4585-e635-97cba1141cca"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 49} executionInfo={"elapsed": 1422, "status": "ok", "timestamp": 1649792326653, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="Y3ArVfoPKEQq" outputId="3bb41b35-45e7-446f-eea8-193435db0f5b"
# only these three columns have missing values
# luckily they are not too important for our model building, so we can drop these columns
checker.check_missing_value(df, df.columns)
# -
# most tweets do not get likes
checker.check_zeros(df, df.columns)
# + [markdown] id="zG51Rk5bhukl"
# # Preprocessing
# + [markdown] id="bHtAfLxyIBGB"
# ## Get Media list
# + id="UVqd5-kVIVcm"
url = "https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts"
media_account_dict = emd.get_media_dict(url)
# + [markdown] id="QfGB078qih5Y"
# ## Filter tweets from media and ordinary people
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 664, "status": "ok", "timestamp": 1649792376555, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="bvgSLPMiKEQr" outputId="ccd9d107-1450-433b-e6e0-8f32df1eb103"
media_tweets_df = df[df.username.isin(media_account_dict.values())]
# -
media_tweets_df.info()
# + [markdown] id="I9qeCZ1QTu_h"
# Since there are not too many tweets (only 63) generated from the media list, we can also filter on Twitter accounts whose follower count is larger than a threshold, for example 1 million.
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 276, "status": "ok", "timestamp": 1649793879275, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="ML4oq0U6UQyP" outputId="99907755-6dd8-4c2c-cfb9-5efdda51318c"
follower_threshold = 100000
influencer_tweets = df[df.followers>=follower_threshold].copy()
normal_tweets = df[df.followers<follower_threshold].copy()  # normal users fall below the threshold
# + id="qp642mCeZjdH"
# convert to pandas date time for easy processing
df['tweetcreatedts'] = pd.to_datetime(df['tweetcreatedts'])
# set the tweet timestamp as the index
df = df.set_index('tweetcreatedts', drop=False)
# -
date_list = [str(i) for i in np.unique(df.index.date)]
df[df.followers>follower_threshold].loc[date_list[0]]
# + [markdown] id="2TslbhIpeQrP"
# ## Data Split
# + id="UqhZjSpueQT3"
# split all the tweets to tweets per day
date_list = [str(i) for i in np.unique(df.index.date)]
text_dict = {}
media_text_dict = {}
normal_text_dict = {}
for date in date_list:
text_dict[date] = list(df.loc[date].text)
media_text_dict[date] = list(df[df.followers>follower_threshold].loc[date].text)
normal_text_dict[date] = list(df[df.followers<=follower_threshold].loc[date].text)
# + [markdown] id="9e9p7qsJKEQr"
# ## Data Cleaning
# -
for date, media_text, normal_text in tqdm(zip(date_list, media_text_dict.values(), normal_text_dict.values())):
media_cleaned_text = [preprocessor.clean_message(i) for i in media_text]
normal_cleaned_text = [preprocessor.clean_message(i) for i in normal_text]
media_text_dict[date] = media_cleaned_text
normal_text_dict[date] = normal_cleaned_text
text_dict[date] = media_cleaned_text + normal_cleaned_text
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 105299, "status": "ok", "timestamp": 1649798605107, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="1yOnoZ99kwBb" outputId="cc0422cb-cd1a-491d-8afd-ced11f9cc6fc"
for date, text_list in tqdm(text_dict.items()):
# TODO: loop through all the text_list instead of 1000 entries
cleaned_text = [preprocessor.clean_message(i) for i in text_list[:1000]]
text_dict[date] = cleaned_text
# + [markdown] id="jWhsoQjPLDyr"
# # Modeling
# + [markdown] id="zUdiyDRWLTeu"
# ## Baseline
# + id="wPOESJ0IryF4"
# let's first get the baseline model from tweets data on 01-17
tweet_list = text_dict['2022-01-17']
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 107, "status": "ok", "timestamp": 1649799031287, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="V_bdEsnPe74h" outputId="4dcf0869-67df-4506-b6d1-887a60c58f07"
vectorizer = CountVectorizer(
analyzer='word',
min_df=3,# minimum required occurences of a word
lowercase=True,# convert all words to lowercase
token_pattern='[a-zA-Z0-9]{3,}',# num chars > 3
max_features=5000,# max number of unique words
)
data_matrix = vectorizer.fit_transform(tweet_list)
# + id="Eu1W-nple-fU"
# I will use LDA to create topics along with the probability distribution for each word in our vocabulary for each topic
lda_model = LatentDirichletAllocation(
n_components=5, # Number of topics
learning_method='online',
random_state=20,
n_jobs = -1 # Use all available CPUs
)
lda_output = lda_model.fit_transform(data_matrix)
# + [markdown] id="9ndigpY2LPeS"
# ## Evaluation
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 2220, "status": "ok", "timestamp": 1649801415716, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="XkwTwL_50pTz" outputId="71178916-ceeb-4de7-90af-70e82a5f0393"
# Log Likelyhood: Higher the better
print("Log Likelihood: ", lda_model.score(data_matrix))
# Perplexity: Lower the better. Perplexity = exp(-1. * log-likelihood per word)
print("Perplexity: ", lda_model.perplexity(data_matrix))
# See model parameters
pprint(lda_model.get_params())
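# As a quick sanity check of the relation stated above, perplexity is the exponential of the negative per-word log-likelihood (the numbers below are hypothetical, not taken from this model):

```python
import math

total_log_likelihood = -6907.755  # hypothetical total log-likelihood (base e)
num_words = 1000
perplexity = math.exp(-total_log_likelihood / num_words)
print(perplexity)  # about 1000: higher likelihood per word -> lower perplexity
```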
# + [markdown] id="pMncQdmf1DEL"
# ## Visualization
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 2874, "status": "ok", "timestamp": 1649799362112, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="SKVKYh5YfBCT" outputId="c6d14f2f-8837-46f2-ea62-8c881e14fda2"
#pyLDAvis extracts information from a fitted LDA topic model to inform an interactive web-based visualization
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda_model, data_matrix, vectorizer, mds='tsne')
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 129, "status": "ok", "timestamp": 1649799390469, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="wDVOPqXwfZfJ" outputId="41145b34-f388-4471-d977-31eb51058555"
# top 5 most frequent words from each topic that found by LDA
for i,topic in enumerate(lda_model.components_):
print('Top 5 words for topic:',i)
print([vectorizer.get_feature_names()[i] for i in topic.argsort()[-5:]])
print('\n')
# + [markdown] id="8KHURDvn1k_u"
# ## Hyperparameter Tuning
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 114931, "status": "ok", "timestamp": 1649801766824, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="tTLkNNqH1ouT" outputId="8c5dcf49-7579-416d-c424-5dda0c8ab70f"
# Define Search Param
search_params = {'n_components': [3, 5, 10, 15], 'learning_decay': [.5, .7, .9]}
# Init the Model
lda = LatentDirichletAllocation()
# Init Grid Search Class
model = GridSearchCV(lda, param_grid=search_params)
# Do the Grid Search
model.fit(data_matrix)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 529, "status": "ok", "timestamp": 1649802202595, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="MaUars9Q28Xm" outputId="7d0cfd74-ac78-47cd-a2ce-09fcdabf6798"
# Best Model
best_lda_model = model.best_estimator_
# Model Parameters
print("Best Model's Params: ", model.best_params_)
# Log Likelihood Score
print("Best Log Likelihood Score: ", model.best_score_)
# Perplexity
print("Model Perplexity: ", best_lda_model.perplexity(data_matrix))
# + [markdown] id="MaV2cJEE5BB0"
# ## Topic Modeling Per Day
# -
daily_topics = tm.generate_daily_topic(text_dict, 3, 0.5)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 75621, "status": "ok", "timestamp": 1649805727799, "user": {"displayName": "<NAME>", "userId": "00967194380598162450"}, "user_tz": 240} id="ffET-Npa_-T9" outputId="352b95e9-4be0-419c-d46f-b5afad7837e0"
media_daily_topics = tm.generate_daily_topic(media_text_dict, 3, 0.5)
# + id="pjZzkvQbFHXC"
normal_daily_topics = tm.generate_daily_topic(normal_text_dict, 3, 0.5)
# -
daily_topics
media_daily_topics
normal_daily_topics
# ## Data persistence
daily_topics.to_csv('../data/processed/daily_topics.csv')
media_daily_topics.to_csv('../data/processed/influencer_daily_topics.csv')
normal_daily_topics.to_csv('../data/processed/normal_daily_topics.csv')
| notebooks/2.0-zy-topic-modeling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Prerequisite packages
# !pip install transformers
# ### Variable initialization
# For this implementation I chose the pretrained GPT-2 model, loaded via AutoModelForCausalLM and AutoTokenizer
# +
import re
import json
from sklearn.model_selection import train_test_split
from transformers import TextDataset, DataCollatorForLanguageModeling # Preprocessing
from transformers import Trainer, TrainingArguments, AutoModelForCausalLM, AutoTokenizer # Training
from transformers import pipeline # Testing
INPUT_FILENAME = "./dataset/recipes_raw_nosource_ar.json"
# INPUT_FILENAME = "./dataset/recipes_raw_nosource_epi.json"
# INPUT_FILENAME = "./dataset/recipes_raw_nosource_fn.json"
TRAIN_PATH = "train_dataset.txt"
TEST_PATH = "test_dataset.txt"
OUTPUT_PATH = "./model"
LOG_PATH = "./logs"
TOKENIZER_NAME = "gpt2"
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
# -
# ### Data preprocessing
# +
with open(INPUT_FILENAME) as f:
data = json.load(f)
def build_file(data, filepath):
file = open(filepath, 'w')
string = ''
for texts in data:
if 'instructions' in texts:
instructions = texts['instructions']
            instructions = re.sub(r'[^\x00-\x7f]', r' ', instructions)  # Replace non-ASCII characters with spaces
            instructions = re.sub(r"\s+", " ", instructions)  # Collapse whitespace runs into single spaces
string += instructions + " "
file.write(string)
file.close()
data_list = []
for key, value in data.items():  # the keys are recipe hashes; only the values are needed
data_list.append(value)
train, test = train_test_split(data_list, test_size=0.15)
build_file(train, TRAIN_PATH)
build_file(test, TEST_PATH)
print("Train dataset length: " + str(len(train)))
print("Test dataset length: " + str(len(test)))
# -
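# A quick check of the two substitutions used in `build_file`: the first replaces non-ASCII characters with spaces, the second collapses runs of whitespace.

```python
import re

raw = "Pr\u00e9heat  the\toven\n to 350F."
step1 = re.sub(r'[^\x00-\x7f]', r' ', raw)  # non-ASCII -> space
step2 = re.sub(r"\s+", " ", step1)          # collapse whitespace runs
print(step2)  # -> "Pr heat the oven to 350F."
```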
# ### Dataset and trainer initialization
# Fine-tuning proved most effective after 5 epochs; further training showed little to no improvement.
# +
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path=TRAIN_PATH,
block_size=128)
test_dataset = TextDataset(
tokenizer=tokenizer,
file_path=TEST_PATH,
block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False,
)
training_args = TrainingArguments(
output_dir=OUTPUT_PATH,
logging_dir=LOG_PATH,
overwrite_output_dir=True, # Overwrite the output directory
num_train_epochs=5,
warmup_steps=500, # Number of steps used for a linear warmup from 0
eval_steps=500, # Number of update steps between two evaluations.
per_device_train_batch_size=64, # Batch size for training
per_device_eval_batch_size=64, # Batch size for evaluation
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=test_dataset,
)
# -
# ### Training the model and saving the model
trainer.train()
trainer.save_model()
# ### Testing the results
# +
model = pipeline('text-generation', model=OUTPUT_PATH, tokenizer=TOKENIZER_NAME)
model("Chicken soup")
| pytorch-trainer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''pytorch_env'': conda)'
# name: python385jvsc74a57bd0e016d4dd0e064b7c1aff94752bda7a13d8e29c3d77200207baf8d0194d31e52e
# ---
# +
import sys; sys.path.append('../')
import os
import numpy as np
import torch
import torch.optim as optim
import torch.nn as nn
import torchvision.transforms as tf
from torchvision.datasets import CIFAR10
import matplotlib.pyplot as plt
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# -
from python.deepgen.models import dcgan
from python.deepgen.training import adversarial, loss
from python.deepgen.utils.image import Unnormalize
from python.deepgen.utils.init import Initializer
from python.deepgen.datasets import common
# +
latent_channels = 128
image_channels = 3
hidden_layers = 3
base_channels = 64
lr = 2e-4
init_map = {
nn.Conv2d: {'weight': (nn.init.normal_, {'mean':0.0, 'std':0.02}),
'bias': (nn.init.constant_, {'val': 0.0})},
nn.ConvTranspose2d: {'weight': (nn.init.normal_, {'mean':0.0, 'std':0.02}),
'bias': (nn.init.constant_, {'val': 0.0})},
}
init_fn = Initializer(init_map)
def build_model():
model = dcgan.vanilla_dcgan(latent_channels, image_channels,
hidden_layers, base_channels=base_channels,
init_fn=init_fn)
model['gen'].to(device)
model['disc'].to(device)
gen_opt = optim.Adam(model['gen'].parameters(), lr, betas=(0.5, 0.999))
disc_opt = optim.Adam(model['disc'].parameters(), lr, betas=(0.5, 0.999))
optimizers = {'gen': gen_opt, 'disc': disc_opt}
return model, optimizers
# -
# # CIFAR 10
# +
cifar10n, cifar10_mean, cifar10_std = common.cifar10n()
batch_size = 32
shuffle = True
num_workers = 0
dataloader = torch.utils.data.DataLoader(
cifar10n, batch_size, shuffle, num_workers=num_workers)
samples = torch.split(next(iter(dataloader)), 1)
num_rows = int(np.ceil(batch_size / 4))
num_cols = 4
fig, axes = plt.subplots(num_rows, num_cols, figsize=(2*num_cols, 2*num_rows))
unnorm_cifar10 = Unnormalize(cifar10_mean, cifar10_std)
output_transform = tf.Compose([
Unnormalize(cifar10_mean, cifar10_std),
tf.Lambda(lambda x: x.squeeze(0)),
tf.ToPILImage('RGB'),
])
for sample, ax in zip(samples, axes.ravel()):
ax.imshow(output_transform(sample))
for ax in axes.ravel():
ax.axis('off')
fig.tight_layout()
# -
# ## BCELoss
# +
model, optims = build_model()
criterions = [nn.BCEWithLogitsLoss(reduction='mean')]
samplers = {
'latent': lambda bs: torch.rand(bs, latent_channels, 1, 1)*2-1,
'labels': lambda bs: (torch.ones(bs, 1, 1, 1), torch.zeros(bs, 1, 1, 1))
}
trainer = adversarial.Trainer(model, optims, samplers, criterions, device)
num_epochs = 32
output_dir = '../models/dcgan_cifar10_bce'
# -
history = trainer.train(dataloader, num_epochs, output_dir, output_transform, True)
# ## WGAN-GP
# +
model, optims = build_model()
criterions = [loss.Wasserstein1()]
reg = [loss.GradPenalty(10)]
samplers = {
'latent': lambda bs: torch.rand(bs, latent_channels, 1, 1)*2-1,
'labels': lambda bs: (torch.ones(bs, 1, 1, 1), (-1)*torch.ones(bs, 1, 1, 1)),
}
trainer = adversarial.Trainer(model, optims, samplers, criterions, device, reg=reg)
num_epochs = 16
output_dir = '../models/dcgan_cifar10_wgp'
# -
history = trainer.train(dataloader, num_epochs, output_dir, output_transform, True)
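The `loss.GradPenalty(10)` regularizer above is project code; presumably it implements the standard WGAN-GP penalty $\lambda\,\mathbb{E}[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2]$ evaluated at points interpolated between real and fake samples. A framework-free numpy sketch of that formula for a toy linear critic $D(x) = w \cdot x$, whose input gradient is just $w$ (purely illustrative - the real penalty is computed with autograd):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # weights of a toy linear critic D(x) = w @ x
real = rng.normal(size=(32, 8))      # batch of "real" samples
fake = rng.normal(size=(32, 8))      # batch of "generated" samples

# interpolate between real and fake with a random eps per sample
eps = rng.uniform(size=(32, 1))
x_hat = eps * real + (1 - eps) * fake

# for a linear critic the gradient w.r.t. the input is w everywhere
grad = np.broadcast_to(w, x_hat.shape)
grad_norm = np.linalg.norm(grad, axis=1)

lam = 10.0  # penalty coefficient, matching GradPenalty(10)
penalty = lam * np.mean((grad_norm - 1.0) ** 2)
print(penalty >= 0.0)  # True
```

In the real trainer the gradient would come from `torch.autograd.grad` on the critic's output, but the penalty term itself is this same squared deviation of the gradient norm from 1.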
| notebooks/00_dcgan_grad_penalty.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Generative models
# ## Intro and in the context of enhancer classification
# - Generative models try to learn what composes one enhancer class vs. another
# - Generative models generally fall into the unsupervised learning class
# ### What is the "generative" in generative model?
# - Basically a probability distribution that we can draw from (sample)
#
# $L$ = $10^5$ enhancers
#
# Given a sequence, we could call it enhancer or not enhancer. A generative model would assign a probability to a given sequence. The model is generative because we can sample from it.
#
# If the generative model $P(X)$ were perfect, sampling from $P(X)$ would be the same as sampling from the set of all actual enhancers.
# ### What does it mean to "fit" a generative model?
# - We do not actually get to see all of $L$
# - We have a sample from $L$ that represents some proportion of $L$
# - The model tries to determine what region of space the samples sit within
#
# #### An example
#
# - Trying to classify apples and oranges
# - Input data is images, labels is fruit ID
#
# To classify, $P(y|x)$ only needs a simple feature such as "is the color orange in the image" (the discriminative model approach).
#
# But if you want to sample the model and get pictures of apples, the model has to have a more complex understanding of what makes an apple an apple and an orange an orange.
# ## Generative models and sequences
# - We are given some sequence and we want to generate sequences that look like it. This could be done by:
#     - Estimating the probability of each letter and sampling from that discrete distribution
#     - Randomly rearranging the letters
#     - A 1st-order Markov model
#
# ### How can the above be formalized?
#
# - Set of variables $X_{ij}$ representing the $j$th letter of sequence $i$
# - $X_{ijk}$ where $k \in \{1, 2, 3, 4\}$, the integers representing the identities of the nucleotides, and $X_{ijk}=1$ if $X_{ij} = k$
# - $k$ is asking the question "is the identity of this nucleotide the one specified by the value of $k$?", making this a binary variable
# - Also a form of "one-hot" encoding
#
# $P(X_{ijk} = 1 \mid \sigma) = \sigma_k$: probability of a single base
#
# So the probability of all the input data (the complete set of sequences) is the product of the probabilities of all nucleotides of every sequence:
#
# $P(X \mid \sigma) = \prod_{i}\prod_{j}\prod_{k} P(X_{ijk} = 1 \mid \sigma)^{X_{ijk}} = \prod_{i}\prod_{j}\prod_{k} \sigma_k^{X_{ijk}}$
#
# $\sigma$ is a parameter that would actually be trained. Fit $\sigma$ to the data so the distribution we end up with looks as much like the training data as possible.
# ### Now we have the notation, how do we train?
# - We want to find the right $\sigma$s that will match our training data
# - We can do this using a maximum-likelihood approach
# - Calculate the likelihood of all sequences in the data set, set its derivative to zero, and solve
# $L = \prod_{i}\prod_{j}\prod_{k}\sigma_k^{X_{ijk}}$
#
# Take log of all terms
#
# $\log(L) = \sum_{i} \sum_{j} \sum_{k} X_{ijk}\log(\sigma_k)$
# We have a constrained problem (each $\sigma_k$ between 0 and 1, summing to 1) but are solving it in an unconstrained way.
# Could we have defined $\sigma_k$ as a function of another variable $a_k$ and then optimized $a_k$ instead?
#
# - Look into Lagrange multipliers: solving optimization problems when you have a specific kind of constraint.
#     - To enforce the constraint, rewrite it as a function that equals 0 (here $\sum_k \sigma_k - 1 = 0$).
#     - Then take the original likelihood function and add the Lagrange multiplier times that constraint function
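Working the Lagrange step through with the constraint $\sum_k \sigma_k - 1 = 0$ gives the intuitive answer $\hat{\sigma}_k = \sum_{ij} X_{ijk} / \sum_{ijk} X_{ijk}$, i.e. the empirical base frequencies. A quick numerical check that this choice beats nearby alternatives on the log-likelihood (the toy counts are made up):

```python
import numpy as np

counts = np.array([40, 30, 20, 10])  # toy counts of A, C, G, T in the training data

def log_lik(sigma, counts):
    # log(L) = sum_k (count of base k) * log(sigma_k)
    return np.sum(counts * np.log(sigma))

sigma_mle = counts / counts.sum()  # the Lagrange-multiplier solution

# compare against other valid distributions on the simplex: the MLE should win
rng = np.random.default_rng(0)
for _ in range(100):
    alt = rng.dirichlet(np.ones(4))
    assert log_lik(sigma_mle, counts) >= log_lik(alt, counts)
print(sigma_mle)  # [0.4 0.3 0.2 0.1]
```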
# This independent-bases model is not biologically plausible
# ### Other approaches
#
# It would be nice to be able to look back $k$ positions, but then we need $4^k$ parameters in our lookup table. This number of parameters will soon outweigh the amount of data that we have.
#
# Instead, something smarter would be to assume there is something common across all enhancers: build a model where most bases are generated using the lookup table, but each enhancer contains one copy of a short (e.g. 6-base) motif.
#
# - First predict where the 6 bp motif is, then generate the rest of the sequence
# - This is called a mixture model, and it is powerful because we can focus the complexity where it makes the most sense.
#
# ## For next Tuesday
#
# - Gerald will post a file of sequences and wants us to write Python code to read them and calculate $X_{ijk}$, which is a 3D matrix.
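The assignment above can be sketched as follows; here the sequences are simply given as a list of equal-length strings rather than read from the (not yet posted) file:

```python
import numpy as np

def one_hot_encode(sequences):
    """Build the 3D matrix X[i, j, k]: 1 if the j-th letter of
    sequence i is nucleotide k (order A, C, G, T), else 0."""
    alphabet = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    n, length = len(sequences), len(sequences[0])
    X = np.zeros((n, length, 4), dtype=int)
    for i, seq in enumerate(sequences):
        for j, base in enumerate(seq):
            X[i, j, alphabet[base]] = 1
    return X

seqs = ["ACGT", "AAGT", "ACGA"]
X = one_hot_encode(seqs)
print(X.shape)        # (3, 4, 4)
print(X.sum(axis=2))  # every position carries exactly one 1
```

Summing over the $i$ and $j$ axes of `X` then gives exactly the per-base counts needed for the $\hat{\sigma}_k$ estimate discussed above.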
| notes/09-30-21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="7Zn9RKf78p1-" colab_type="code" outputId="e598d8d0-1340-4afd-9ebc-e0d3eff89ee5" executionInfo={"status": "ok", "timestamp": 1581671203282, "user_tz": -60, "elapsed": 7447, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 289}
# !pip install eli5
# + id="537enoc18v61" colab_type="code" colab={}
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
import eli5
from eli5.sklearn import PermutationImportance
from ast import literal_eval
from tqdm import tqdm_notebook
# + id="0El5fOfd9vg4" colab_type="code" outputId="e0b8cea4-c87c-49db-a614-5a5929ef7856" executionInfo={"status": "ok", "timestamp": 1581671462626, "user_tz": -60, "elapsed": 590, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# cd 'drive/My Drive/Colab Notebooks/matrix/matrix'
# + id="0KtYDOHe9zQ-" colab_type="code" outputId="7189f598-860a-408d-d86d-524bd2622735" executionInfo={"status": "ok", "timestamp": 1581671470324, "user_tz": -60, "elapsed": 2266, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 50}
# ls
# + id="nw1XyWLL90vV" colab_type="code" outputId="44e9908b-b0f4-4ee2-bbe8-8da353e3e39a" executionInfo={"status": "ok", "timestamp": 1581671536791, "user_tz": -60, "elapsed": 3024, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
df=pd.read_csv("data/men_shoes.csv",low_memory=False)
df.shape
# + id="d5foOAGA-Ex2" colab_type="code" colab={}
def run_model(feats, model=RandomForestRegressor(max_depth=5)):
    X = df[feats].values
    y = df['prices_amountmin'].values
    scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
    return np.mean(scores), np.std(scores)
# + id="u_H06SyXAUGy" colab_type="code" colab={}
df['brand_cat']=df.brand.map(lambda x: str(x).lower()).factorize()[0]
# + id="0OaZgk7tAW8I" colab_type="code" outputId="3052e54b-a256-48b7-a646-007fadd0dc55" executionInfo={"status": "ok", "timestamp": 1581672490913, "user_tz": -60, "elapsed": 3982, "user": {"displayName": "Pawe\u0142 Krawczyk", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
model=RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
run_model(['brand_cat'], model)
# + id="A4Lo2kgyAbEx" colab_type="code" outputId="70a82a8e-aa34-4c38-9e7e-657d5425b058" executionInfo={"status": "ok", "timestamp": 1581672392481, "user_tz": -60, "elapsed": 557, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 669}
df.sample(5)
# + id="HqXBeLCdBWSF" colab_type="code" colab={}
def parse_features(x):
output_dict={}
if str(x)=='nan': return output_dict
features= literal_eval(x.replace('\\"','"'))
for i in features:
key=i['key'].lower().strip()
value=i['value'][0].lower().strip()
output_dict[key]= value
return output_dict
df['features_parsed'] = df.features.map(parse_features)
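A standalone check of the parsing logic above; the sample feature string is made up, mimicking the dataset's stringified list-of-dicts format:

```python
from ast import literal_eval

def parse_features(x):
    # same logic as the cell above, repeated here so it runs standalone
    output_dict = {}
    if str(x) == 'nan':
        return output_dict
    features = literal_eval(x.replace('\\"', '"'))
    for i in features:
        key = i['key'].lower().strip()
        value = i['value'][0].lower().strip()
        output_dict[key] = value
    return output_dict

raw = '[{"key": "Color", "value": ["Black "]}, {"key": "Gender", "value": ["Men"]}]'
print(parse_features(raw))          # {'color': 'black', 'gender': 'men'}
print(parse_features(float('nan'))) # {} - missing rows come back empty
```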
# + id="aXpILNTgU8Gb" colab_type="code" outputId="62e91381-5dce-4459-a6dd-fca76b611449" executionInfo={"status": "ok", "timestamp": 1581680883939, "user_tz": -60, "elapsed": 1544, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
keys =set()
df['features_parsed'].map(lambda x: keys.update(x.keys()))
len(keys)
# + id="9xDMg7hxVe3c" colab_type="code" outputId="0e04df40-f604-4752-a76c-3adde198e9ac" executionInfo={"status": "ok", "timestamp": 1581681558153, "user_tz": -60, "elapsed": 5728, "user": {"displayName": "Pawe\u014<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["e76a41d3f7b2492e8fea4348bf3e56cf", "70b347d3d8224699946096293a55e7d6", "8c797a48fcfa4190860aea242a2ae75a", "30c8ba93f8044da1a7a5302af6c36ece", "285fafeab3e542c2a906fb7a5033d9ef", "90587e4d2f9e49c687d7f3362722fceb", "<KEY>", "4b07c8368d814440be42f1473e2e2f72"]}
def names(key):
return 'cat_'+key
for key in tqdm_notebook(keys):
df[names(key)]=df.features_parsed.map(lambda feats: feats[key] if key in feats else np.nan)
# + id="CNVXPWlUkSuU" colab_type="code" outputId="dfc91a10-4b54-4819-c798-d143da0edc90" executionInfo={"status": "ok", "timestamp": 1581681602299, "user_tz": -60, "elapsed": 582, "user": {"displayName": "Pawe\u01<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 134}
df.columns
# + id="1kKzs9U6kc5w" colab_type="code" outputId="76cdbb61-5fb8-46bf-d7dd-ffeb2f8f4eb5" executionInfo={"status": "ok", "timestamp": 1581681700508, "user_tz": -60, "elapsed": 895, "user": {"displayName": "Pawe\u01<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
df[df['cat_athlete'].isnull()].shape
# + id="4sI5iAq6kv66" colab_type="code" colab={}
keys_stat = {}
for key in keys:
    keys_stat[key] = df[names(key)].notnull().sum() / df.shape[0] * 100
# + id="9P4WR_OalcIe" colab_type="code" outputId="8c5f08d5-c9b9-4eef-ccc3-5f712a9a6869" executionInfo={"status": "ok", "timestamp": 1581682139996, "user_tz": -60, "elapsed": 572, "user": {"displayName": "Pawe\u014<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 101}
{k:v for k,v in keys_stat.items() if v>30}
# + id="N6UkaIFVmUo0" colab_type="code" colab={}
# df['cat_brand_cat']=df.cat_brand.map(lambda x: str(x).lower()).factorize()[0]
# df['cat_color_cat']=df.cat_color.map(lambda x: str(x).lower()).factorize()[0]
# df['cat_gender_cat']=df.cat_gender.map(lambda x: str(x).lower()).factorize()[0]
# df['cat_manufacturer part number_cat']=df['cat_manufacturer part number'].map(lambda x: str(x).lower()).factorize()[0]
# df['cat_material']=df.cat_material.map(lambda x: str(x).lower()).factorize()[0]
# df['cat_sport']=df.cat_sport.map(lambda x: str(x).lower()).factorize()[0]
# df['cat_style']=df.cat_style.map(lambda x: str(x).lower()).factorize()[0]
# df['cat_condition']=df.cat_condition.map(lambda x: str(x).lower()).factorize()[0]
for key in keys:
df[names(key)+'_cat2']=df[names(key)].map(lambda x: str(x).lower()).factorize()[0]
# + id="dh4eTFqDrkzz" colab_type="code" outputId="f0d6103e-f260-4b19-e1fe-974541a6ffe4" executionInfo={"status": "ok", "timestamp": 1581683539857, "user_tz": -60, "elapsed": 581, "user": {"displayName": "Pawe\u014<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
feats_cat=[x for x in df.columns if 'cat2' in x]
feats_cat
# + id="WLo1upYmnbdR" colab_type="code" outputId="c56c3fc3-d84b-4ce9-e64b-6dcac21e0d7a" executionInfo={"status": "ok", "timestamp": 1581683785144, "user_tz": -60, "elapsed": 112083, "user": {"displayName": "Pawe\u0142 Krawczyk", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
model=RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
cols=['cat_brand_cat','cat_gender_cat', 'cat_material','cat_style','cat_condition']
cols+=feats_cat
cols=list(set(cols))
run_model(cols, model)
# + id="RBgmjlC9n2k6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="21792a4f-4c25-4247-bf50-8b2670d9e164" executionInfo={"status": "ok", "timestamp": 1581684978484, "user_tz": -60, "elapsed": 59, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mABPVUrvXjzM9yxYQJmS4A5SJPnqPkBJX2pfhF10A=s64", "userId": "00166699122305166239"}}
X= df[cols].values
y=df.prices_amountmin.values
m=RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
m.fit(X,y)
perm=PermutationImportance(m, random_state=1).fit(X,y)
eli5.show_weights(perm, feature_names=cols)
# + id="esx3X9VHpQrV" colab_type="code" colab={}
| day5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spatial Data frames, arcpy and the Python API for local data
# This notebook works with local data in file geodatabases, using spatial data frames and arcpy to access local data and geoprocessing. You will need arcpy installed (by installing ArcGIS Pro) to use this.
# import arcpy as well as the arcgis api for python
import arcpy
import os
from arcgis import GIS
from arcgis.features import SpatialDataFrame
# Get the sample data from ArcGIS Online, by downloading and unzipping the file https://github.com/dstubbins/notebook-examples/blob/master/OSOpenRoadsSample.gdb.zip
# Use arcpy to list the contents of a local file geodatabase
# +
arcpy.env.workspace = "C:\\Data\\OSOpenRoadsSample.gdb"
datasets = arcpy.ListDatasets(feature_type='feature')
datasets = [''] + datasets if datasets is not None else []
for ds in datasets:
for fc in arcpy.ListFeatureClasses(feature_dataset=ds):
path = os.path.join(arcpy.env.workspace, ds, fc)
print(path)
# -
# Create a spatially enabled dataframe from a featureclass in a geodatabase
sdf = SpatialDataFrame.from_featureclass('C:\\Data\\OSOpenRoadsSample.gdb\\SZ_RoadLink')
sdf.tail()
# describe the data
sdf.info()
sdf[['OBJECTID']].count()
# # Do some Pandas Stuff
# How many of each type of road?
sdfgroup=sdf.groupby(['class'])['OBJECTID'].count()
sdfgroup
# Make a graph
import matplotlib.pyplot as plt
sdfgroup.plot.bar()
# # Query the dataframe
sdfARoads = sdf[sdf['class']=='A Road']
sdfARoads['OBJECTID'].count()
# # Add a map to our notebook
agol=GIS()
m1 = agol.map()
m1.basemap='os_open_background'
m1
# Set the map extent
m1.extent={'spatialReference': {'latestWkid': 27700, 'wkid': 27700},
'xmin': 424993.2379115089,
'ymin': 73324.90305999425,
'xmax': 488162.6350836366,
'ymax': 99783.28931010008}
m1.mode='3D'
m1.mode='2D'
sdfARoads.spatial.plot(map_widget=m1,
symbol_type='simple',
colors = 'Reds_r')
sdfBRoads=sdf[sdf['class']=='B Road']
sdfBRoads['OBJECTID'].count()
sdfBRoads.spatial.plot(map_widget=m1,
symbol_type='simple',
colors = 'Blues_r')
# # Spatial Joins in pandas
# intersect the A roads and the B Roads
sdfjoin=sdfBRoads.spatial.join(sdfARoads)
sdfjoin.info()
# how many intersections?
sdfjoin['OBJECTID_left'].count()
# How many Unique classified roads intersect?
sdfjoin['OBJECTID_left'].nunique()
sdfjoin.spatial.plot(m1)
# # Use ArcPy Geoprocessing
inputFeature = "C:\\Data\\OSOpenRoadsSample.gdb\\SZ_RoadLink"
outputFeature = "C:\\Data\\bufferoutput.shp"
bufferdist = '50 meters'
arcpy.Buffer_analysis(inputFeature, outputFeature, bufferdist, "", "", "ALL")
sdfBuffer = SpatialDataFrame.from_featureclass('C:\\Data\\bufferoutput.shp')
sdfBuffer.spatial.plot(m1)
| arcpysdfdemo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import time
import tqdm
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as utils
import torch.nn.init as init
from torch.autograd import Variable
mode = 'ROC'
f_rnd = pd.read_hdf("/data/t3home000/spark/LHCOlympics_previous/LHC-Olympics/Code/Nsubjettiness_mjj.h5")
f_rnd.columns
f_rnd.head()
if mode == 'ROC':
    dt = f_rnd.values
else:
    dt_PureBkg = f_PureBkg.values  # NOTE: f_PureBkg is never loaded above; this branch assumes a separate pure-background file
index_list = [0,1,2,3,4,5,7,8,9,10,11,12]
for i in index_list:
dt[:,i] = (dt[:,i]-np.mean(dt[:,i]))/np.std(dt[:,i])
dt
idx = dt[:,15]
bkg_idx = np.where(idx==0)[0]
signal_idx = np.where(idx==1)[0]
print(bkg_idx)
dt[bkg_idx].shape
total_PureBkg = torch.tensor(dt[bkg_idx])
total_PureBkg_train_x_1 = total_PureBkg.t()[0:6].t()
total_PureBkg_train_x_3 = total_PureBkg.t()[7:13].t()
total_PureBkg_selection = torch.cat((total_PureBkg_train_x_1,total_PureBkg_train_x_3),dim=1)
total_PureBkg_selection.shape
train_set, val_set = torch.utils.data.random_split(total_PureBkg_selection, [800000, 200000])
len(train_set)
bs = 256
bkgAE_train_iterator = utils.DataLoader(total_PureBkg_selection, batch_size=bs, shuffle=True)
bkgAE_test_iterator = utils.DataLoader(total_PureBkg_selection, batch_size=bs)
class Encoder(nn.Module):
    ''' This is the encoder part of the VAE
    '''
    def __init__(self, z_dim):
        '''
        Args:
            z_dim: An integer indicating the latent dimension.
        '''
super().__init__()
self.linear1 = nn.Linear(12, 48)
self.linear2 = nn.Linear(48, 30)
self.linear3 = nn.Linear(30, 20)
self.linear4 = nn.Linear(20, 10)
self.linear5 = nn.Linear(10, 6)
hidden_dim = 6
self.mu = nn.Linear(hidden_dim, z_dim)
self.var = nn.Linear(hidden_dim, z_dim)
def forward(self, x):
# x is of shape [batch_size, input_dim]
x = F.leaky_relu(self.linear1(x))
x = F.leaky_relu(self.linear2(x))
x = F.leaky_relu(self.linear3(x))
x = F.leaky_relu(self.linear4(x))
x = F.leaky_relu(self.linear5(x))
#hidden = F.relu(self.linear(x))
# hidden is of shape [batch_size, hidden_dim]
z_mu = self.mu(x)
# z_mu is of shape [batch_size, latent_dim]
z_var = self.var(x)
# z_var is of shape [batch_size, latent_dim]
return z_mu, z_var
class Decoder(nn.Module):
    ''' This is the decoder part of the VAE
    '''
    def __init__(self, z_dim):
        '''
        Args:
            z_dim: An integer indicating the latent dimension.
        '''
super().__init__()
self.linear1 = nn.Linear(z_dim, 6)
self.linear2 = nn.Linear(6, 10)
self.linear3 = nn.Linear(10, 20)
self.linear4 = nn.Linear(20, 30)
self.linear5 = nn.Linear(30, 48)
self.out = nn.Linear(48, 12)
def forward(self, x):
# x is of shape [batch_size, latent_dim]
x = F.leaky_relu(self.linear1(x))
x = F.leaky_relu(self.linear2(x))
x = F.leaky_relu(self.linear3(x))
x = F.leaky_relu(self.linear4(x))
x = F.leaky_relu(self.linear5(x))
#hidden = F.relu(self.linear(x))
# hidden is of shape [batch_size, hidden_dim]
predicted = torch.sigmoid(self.out(x))
# predicted is of shape [batch_size, output_dim]
return predicted
class VAE(nn.Module):
    ''' This is the VAE, which takes an encoder and a decoder.
'''
def __init__(self, enc, dec):
super().__init__()
self.enc = enc
self.dec = dec
def forward(self, x):
# encode
z_mu, z_var = self.enc(x)
# sample from the distribution having latent parameters z_mu, z_var
# reparameterize
std = torch.exp(z_var / 2)
eps = torch.randn_like(std)
x_sample = eps.mul(std).add_(z_mu)
# decode
predicted = self.dec(x_sample)
return predicted, z_mu, z_var
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
torch.cuda.get_device_name(0)
# +
# encoder
encoder = Encoder(3)
# decoder
decoder = Decoder(3)
# vae
model = VAE(encoder, decoder).to(device)
# optimizer
lr = 1e-4
optimizer = optim.Adam(model.parameters(), lr=lr)
# -
print(model)
print(device)
def train():
# set the train mode
model.train()
# loss of the epoch
train_loss = 0
for i, x in enumerate(bkgAE_train_iterator):
        # move the batch to the GPU (no reshape needed; inputs are already [batch_size, 12])
        x = x.float().cuda()
# update the gradients to zero
optimizer.zero_grad()
# forward pass
x_sample, z_mu, z_var = model(x)
# reconstruction loss
        recon_loss = F.binary_cross_entropy(x_sample, x, reduction='sum')  # size_average=False is deprecated
# kl divergence loss
kl_loss = 0.5 * torch.sum(torch.exp(z_var) + z_mu**2 - 1.0 - z_var)
# total loss
loss = recon_loss + kl_loss
# backward pass
loss.backward()
train_loss += loss.item()
# update the weights
optimizer.step()
return train_loss
def test():
# set the evaluation mode
model.eval()
# test loss for the data
test_loss = 0
# we don't need to track the gradients, since we are not updating the parameters during evaluation / testing
with torch.no_grad():
for i, x in enumerate(bkgAE_test_iterator):
# reshape the data
#x = x.view(-1, 28 * 28)
x = x.float().cuda()
# forward pass
x_sample, z_mu, z_var = model(x)
# reconstruction loss
            recon_loss = F.binary_cross_entropy(x_sample, x, reduction='sum')  # size_average=False is deprecated
# kl divergence loss
kl_loss = 0.5 * torch.sum(torch.exp(z_var) + z_mu**2 - 1.0 - z_var)
# total loss
loss = recon_loss + kl_loss
test_loss += loss.item()
return test_loss
# +
best_test_loss = float('inf')
for e in range(10):
train_loss = train()
test_loss = test()
train_loss /= len(total_PureBkg_selection)
test_loss /= len(total_PureBkg_selection)
print(f'Epoch {e}, Train Loss: {train_loss:.2f}, Test Loss: {test_loss:.2f}')
if best_test_loss > test_loss:
best_test_loss = test_loss
patience_counter = 1
print("Saving model!")
if mode == 'ROC':
torch.save(model.state_dict(),"/data/t3home000/spark/QUASAR/weights/bkg_vae_Vanilla_RND.h5")
else:
torch.save(model.state_dict(), "/data/t3home000/spark/QUASAR/weights/bkg_vae_Vanilla_PureBkg.h5")
else:
patience_counter += 1
print("Not saving model!")
if patience_counter > 3:
print("Patience Limit Reached")
break
# -
print(mode)
model.load_state_dict(torch.load("/data/t3home000/spark/QUASAR/weights/bkg_vae_Vanilla_RND.h5"))
def get_loss(dt, index_list):
print(dt.shape)
#for i in index_list:
# print(i)
# dt[:,i] = (dt[:,i]-np.mean(dt[:,i]))/np.std(dt[:,i])
total_in = torch.tensor(dt)
total_in_train_x_1 = total_in.t()[0:6].t()
total_in_train_x_3 = total_in.t()[7:13].t()
total_in_selection = torch.cat((total_in_train_x_1,total_in_train_x_3),dim=1)
z_mu, z_var = model.enc(total_in_selection.float().cuda())
std = torch.exp(z_var / 2)
eps = torch.randn_like(std)
x_sample = eps.mul(std).add_(z_mu)
decoded_bkg = model.dec(x_sample)
loss_bkg = torch.mean((decoded_bkg-total_in_selection.float().cuda())**2,dim=1).data.cpu().numpy()
#with torch.no_grad():
# reconstruction loss
#x_sample, z_mu, z_var = model(total_in_selection.float().cuda())
#recon_loss = F.binary_cross_entropy(x_sample, total_in_selection.float().cuda(), size_average=False, reduce=None)
# kl divergence loss
#kl_loss = 0.5 * torch.sum(torch.exp(z_var) + z_mu**2 - 1.0 - z_var)
# total loss
#loss = recon_loss + kl_loss
#loss = torch.mean((model(total_in_selection.float().cuda())[0]- total_in_selection.float().cuda())**2,dim=1).data.cpu().numpy()
return loss_bkg
data_bkg = torch.tensor(dt[bkg_idx])
data_signal = torch.tensor(dt[signal_idx])
data_bkg.shape
data_train_x_1 = data_bkg.t()[0:6].t()
data_train_x_2 = data_bkg.t()[7:13].t()
data_test_bkg = torch.cat((data_train_x_1,data_train_x_2),dim=1)
data_train_x_1 = data_signal.t()[0:6].t()
data_train_x_2 = data_signal.t()[7:13].t()
data_test_signal = torch.cat((data_train_x_1,data_train_x_2),dim=1)
z_mu, z_var = model.enc(data_test_bkg.float().cuda())
std = torch.exp(z_var / 2)
eps = torch.randn_like(std)
x_sample = eps.mul(std).add_(z_mu)
decoded_bkg = model.dec(x_sample)
loss_bkg = torch.mean((decoded_bkg-data_test_bkg.float().cuda())**2,dim=1).data.cpu().numpy()
z_mu2, z_var2 = model.enc(data_test_signal.float().cuda())
std2 = torch.exp(z_var2 / 2)
eps2 = torch.randn_like(std2)
x_sample2 = eps2.mul(std2).add_(z_mu2)
decoded_sig = model.dec(x_sample2)
loss_sig = torch.mean((decoded_sig-data_test_signal.float().cuda())**2,dim=1).data.cpu().numpy()
loss_bkg
bins = np.linspace(0,5,100)
plt.hist(loss_bkg,bins=bins,alpha=0.3,color='b',label='bkg')
plt.hist(loss_sig,bins=bins,alpha=0.3,color='r',label='sig')
f = pd.read_hdf("/data/t3home000/spark/LHCOlympics_previous/LHC-Olympics/Code/Nsubjettiness_mjj.h5")
dt = f.values
dt.shape
idx = dt[:,15]
bkg_idx = np.where(idx==0)[0]
signal_idx = np.where(idx==1)[0]
loss_bkg = get_loss(dt[bkg_idx],[0,1,2,3,4,5,7,8,9,10,11,12])
loss_sig = get_loss(dt[signal_idx],[0,1,2,3,4,5,7,8,9,10,11,12])
plt.rcParams["figure.figsize"] = (10,10)
bins = np.linspace(0,5,100)
plt.hist(loss_bkg,bins=bins,alpha=0.3,color='b',label='bkg')
plt.hist(loss_sig,bins=bins,alpha=0.3,color='r',label='sig')
plt.xlabel(r'Autoencoder Loss')
plt.ylabel('Count')
plt.legend(loc='upper right')
plt.show()
# ### ROC
#
def get_tpr_fpr(sigloss,bkgloss,aetype='bkg'):
bins = np.linspace(0,50,1001)
tpr = []
fpr = []
for cut in bins:
if aetype == 'sig':
tpr.append(np.where(sigloss<cut)[0].shape[0]/len(sigloss))
fpr.append(np.where(bkgloss<cut)[0].shape[0]/len(bkgloss))
if aetype == 'bkg':
tpr.append(np.where(sigloss>cut)[0].shape[0]/len(sigloss))
fpr.append(np.where(bkgloss>cut)[0].shape[0]/len(bkgloss))
return tpr,fpr
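A standalone sanity check of the threshold sweep above, on synthetic losses (for a background-trained autoencoder, signal should sit at higher loss); the loss distributions are made up for illustration:

```python
import numpy as np

def get_tpr_fpr_bkg(sigloss, bkgloss, cuts):
    # background-AE convention: flag events with loss ABOVE the cut as signal
    tpr = [np.mean(sigloss > c) for c in cuts]
    fpr = [np.mean(bkgloss > c) for c in cuts]
    return tpr, fpr

rng = np.random.default_rng(0)
bkgloss = rng.exponential(scale=1.0, size=5000)        # background reconstructs well
sigloss = 2.0 + rng.exponential(scale=1.0, size=5000)  # signal reconstructs poorly

cuts = np.linspace(0, 50, 1001)
tpr, fpr = get_tpr_fpr_bkg(sigloss, bkgloss, cuts)

# AUC via the trapezoid rule (reverse so fpr is increasing);
# well-separated losses like these should land comfortably above 0.5
auc = np.trapz(tpr[::-1], fpr[::-1])
print(round(auc, 3))
```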
# ### PRECISION - RECALL
def get_precision_recall(sigloss,bkgloss,aetype='bkg'):
bins = np.linspace(0,100,1001)
tpr = []
fpr = []
precision = []
for cut in bins:
if aetype == 'sig':
tpr.append(np.where(sigloss<cut)[0].shape[0]/len(sigloss))
precision.append((np.where(sigloss<cut)[0].shape[0])/(np.where(bkgloss<cut)[0].shape[0]+np.where(sigloss<cut)[0].shape[0]))
if aetype == 'bkg':
tpr.append(np.where(sigloss>cut)[0].shape[0]/len(sigloss))
precision.append((np.where(sigloss>cut)[0].shape[0])/(np.where(bkgloss>cut)[0].shape[0]+np.where(sigloss>cut)[0].shape[0]))
return precision,tpr
| new_flows/VAE_without_Flow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from datascience import *
import numpy as np
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
# %matplotlib inline
# load the table of movies, studios, gross, and year.
top1 = Table.read_table('/Users/mt2245/Dropbox/INF_Syllabi/INF110/Spring_22/coding snippets/top_movies_2017.csv')
top1
# +
# Here we'll format the table to include an index column
# and label it 'Row Index'
# and add commas to the numbers for easier reading
top2 = top1.with_column('Row Index', np.arange(top1.num_rows))
top = top2.move_to_start('Row Index')
top.set_format(make_array(3, 4), NumberFormatter)
# -
# Deterministic Samples
# Use .take to choose elements of the set
top.take(make_array(3, 18, 100))
# +
# Also use .where
# For instance, let's sample all the Harry Potter movies
top.where('Title', are.containing('Harry Potter'))
# +
# Systematic Sampling
"""Choose a random start among rows 0 through 9;
then take every 10th row."""
start = np.random.choice(np.arange(10))
top.take(np.arange(start, top.num_rows, 10))
# do this multiple times to demonstrate how it is random.
# +
# Empirical Distributions
# roll a die multiple times and keep track of the results.
# The table 'die' contains the numbers of spots.
# This is a fair die - what does that mean?
# - every face is equally likely; the table lists each of the six faces once.
die = Table().with_column('Face', np.arange(1,7,1))
die
# -
# Let's make a histogram to show the distribution of probabilities
# over all the possible faces.
die_bins = np.arange(0.5, 6.6, 1)
die.hist(bins = die_bins)
# +
# the bars all represent the same chance, so this is
# a UNIFORM DISTRIBUTION.
# +
# Let’s visualize some empirical distributions with empirical histograms
# Roll a die 10 times, using the table 'die' we created above.
die.sample(10)
# +
# Let's roll this many times and draw an empirical histogram.
# Since we want to do this a lot of times, let's make a function.
def empirical_hist_die(n):
die.sample(n).hist(bins = die_bins)
# +
# now let's roll 10 times and make a histogram.
empirical_hist_die(10)
# -
# increase the sample size
empirical_hist_die(100)
# increase the sample size
empirical_hist_die(10000)
# SAMPLING FROM A POPULATION
# Load the table of flight delay times.
united = Table.read_table('/Users/mt2245/Dropbox/INF_Syllabi/INF110/Spring_22/coding snippets/united_summer2015.csv')
united
# Some flights left early!
# Also, some were REALLY LATE :-(
print(united.column('Delay').min())
print(united.column('Delay').max())
# +
# we will look at the distribution of delay times
# using a histogram.
#
# let's create some bins for the histogram.
# we'll start at -20 and use bins of width 10 up to 300, plus one wide bin up to 600
delay_bins = np.append(np.arange(-20, 301, 10),600)
united.hist('Delay', bins = delay_bins, unit = 'minute')
# -
# let's zoom in because most times were <200min.
# To get an idea of how much were above 200min,
# use .where to collect all the delays >200
# count the number of rows,
# and divide by the total number of rows.
united.where('Delay', are.above(200)).num_rows/united.num_rows
delay_bins = np.arange(-20, 201, 10)
united.hist('Delay', bins = delay_bins, unit = 'minute')
# +
# How many delays were between 0 and 10 minutes?
# look at the histogram (second bin)
# but also let's measure:
united.where('Delay', are.between(0, 10)).num_rows/united.num_rows
# +
# most delays are not THAT bad...
# +
# EMPIRICAL DISTRIBUTION OF THE POPULATION SAMPLE
# Here we're going to draw random samples
# from 13,825 flights with replacement.
# Like before, let's make a function.
# it takes sample size as an argument
# and draws an empirical histogram
def empirical_hist_delay(n):
united.sample(n).hist('Delay', bins = delay_bins, unit = 'minute')
# -
empirical_hist_delay(1)
empirical_hist_delay(10)
empirical_hist_delay(1000)
# +
# As sample size increases, the empirical histogram
# more closely resembles the histogram taken from
# the population.
# -
# let's create an empirical histogram
# of 1000 random samples from the data
sample_1000 = united.sample(1000)
sample_1000.hist('Delay', bins = delay_bins, unit = 'minute')
plots.title('Sample of Size 1000');
# parameters
# what is the median delay?
np.median(united.column('Delay'))
# what proportion of flights were
# at or below the median delay (2 minutes)?
united.where('Delay', are.below_or_equal_to(2)).num_rows / united.num_rows
# not bad!
# how many were exactly the median delay?
united.where('Delay', are.equal_to(2)).num_rows
# here, let's use the 1000 random samples
# to estimate the parameter "median delay"
np.median(sample_1000.column('Delay'))
# it MIGHT be different. Why?
# run this many times and see
np.median(united.sample(1000).column('Delay'))
# +
# Step 1: we want the MEDIAN from a random sample
# of 1000 flight delays.
# Step 2: put the sampling code into a function
def random_sample_median():
return np.median(united.sample(1000).column('Delay'))
# +
# Step 3: we'll do 5000 simulations.
# Step 4: Use the for loop.
# first, create an empty array to collect the results:
medians = make_array()
# then, generate the values with a for loop
# and append each iteration to the array.
for i in np.arange(5000):
medians = np.append(medians, random_sample_median())
# -
# Let's display all this in a table based on the medians array.
simulated_medians = Table().with_column('Sample Median', medians)
simulated_medians
# +
# let's visualize it with a histogram.
simulated_medians.hist(bins=np.arange(0.5, 5, 1))
# this is an EMPIRICAL HISTOGRAM OF THE STATISTIC.
# it displays the EMPIRICAL DISTRIBUTION of the statistic.
# -
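# As with the histograms above, the spread of the simulated medians shrinks
# as the sample size grows. A self-contained numpy sketch of that effect
# (it uses a synthetic right-skewed "delay" population of 13,825 values,
# not the united data):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic right-skewed delays, same size as the united table
population = rng.exponential(scale=30.0, size=13825) - 5.0

for n in [100, 1000]:
    medians = np.array([np.median(rng.choice(population, size=n))
                        for _ in range(2000)])
    print(n, "SD of sample medians:", round(float(medians.std()), 2))
```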
| coding snippets/sampling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python(torchenv)
# language: python
# name: torchenv
# ---
# # Loss Functions
#
# > Code for Section 2.2.5
# ## Expressing the Output as a Probability
# +
# Code 2-7
import torch
import torch.nn as nn
torch.manual_seed(70)
class Network(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(Network, self).__init__()
        # Build the layers
# input layer > hidden layer
self.linear_ih = nn.Linear(in_features=input_size,
out_features=hidden_size)
# hidden layer > output layer
self.linear_ho = nn.Linear(in_features=hidden_size,
out_features=output_size)
# activation layer
self.activation_layer = nn.Sigmoid()
def forward(self, x):
z1 = self.linear_ih(x)
a1 = self.activation_layer(z1)
z2 = self.linear_ho(a1)
y = self.activation_layer(z2)
return y
# Create the input tensor
x = torch.Tensor([[0, 1]])
# Instantiate the custom module
net = Network(input_size=2, hidden_size=2, output_size=1)
y = net(x)
print(y.item())
# -
# ## A Probabilistic Approach
#
# ### Entropy
# +
# Code 2-8
P = torch.Tensor([0.4, 0.6])
Q = torch.Tensor([0.0, 1.0])
def self_information(x):
return -torch.log(x)
def entropy(x):
    # log(0) yields NaN here, so add a very small number to prevent it.
e = 1e-30
return torch.sum((x+e)*self_information(x+e))
# A coin with a 40% chance of heads and 60% tails
print(entropy(P).numpy().round(4))
# A deterministic coin that always lands tails (100%)
print(entropy(Q).numpy().round(4))
# -
# ### KL-divergence
# +
# Code 2-9
def KL_divergence(q, p):
"""
q: predict prob
p: target prob
"""
    # log(0) yields NaN here, so add a very small number (e) to prevent it.
e = 1e-30
return torch.sum((p+e)*torch.log(p+e) - (p+e)*torch.log(q+e))
U = torch.Tensor([0.5, 0.5])
# KL-divergence between distributions P and U
print(KL_divergence(P, U))
# KL-divergence between distributions Q and U
print(KL_divergence(Q, U))
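# A sanity check tying Code 2-9 back to the entropy function: when the
# reference U is uniform over k outcomes, KL(P || U) = log k - H(P).
# A numpy-only sketch (note that KL_divergence(q, p) above computes KL(p || q)):

```python
import numpy as np

P = np.array([0.4, 0.6])
eps = 1e-30

H = float(-np.sum((P + eps) * np.log(P + eps)))   # entropy of P
U = np.full(len(P), 1 / len(P))                   # uniform reference
KL = float(np.sum((P + eps) * (np.log(P + eps) - np.log(U + eps))))

print(round(KL, 4))                   # ~0.0201
print(round(np.log(len(P)) - H, 4))   # same value: log k - H(P)
```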
# +
# Code 2-10
loss_function = nn.KLDivLoss(reduction="sum")
e = 1e-30
## pytorch: y*(log(y) - x) = log(y)*y - x*y
# KL-divergence between distributions P and U
print(loss_function(torch.log(P+e), U+e))
# KL-divergence between distributions Q and U
print(loss_function(torch.log(Q+e), U+e))
# +
# Code 2-11
torch.manual_seed(70)
# Create the input and target tensors
x = torch.Tensor([[0, 1]])
t = torch.Tensor([1])
# Instantiate the XOR network defined earlier
net = Network(input_size=2, hidden_size=2, output_size=1)
y = net(x)
# One-hot encode the target value
one_hot = torch.eye(2)
prob_t = one_hot.index_select(dim=0, index=t.long())
# Turn the prediction into a probability distribution as well (the distribution over y=1)
prob_y = torch.cat([1-y, y], dim=1)
# The target distribution for t=1 and the predicted distribution of y
print(prob_t)
print(prob_y)
# Compute the KL-divergence
loss_function = nn.KLDivLoss(reduction="sum")
print(loss_function(torch.log(prob_y), prob_t))
# -
# ### BCE Loss
# +
# Code 2-12
torch.manual_seed(70)
# Create the input and target tensors
x = torch.Tensor([[0, 1]])
t = torch.Tensor([1])
# Instantiate the XOR network defined earlier
net = Network(input_size=2, hidden_size=2, output_size=1)
y = net(x)
# Compute the BCE
loss_function = nn.BCELoss(reduction="sum")
# Prediction: the probability that y=1 / Target: the probability of t=1 is 1
print(loss_function(y.squeeze(1), t))
# -
# ### Softmax
# +
# Code 2-13
torch.manual_seed(70)
# Linear-combination values: create a random tensor of shape (1, 10)
z = torch.rand(1, 10)
# Softmax
y = torch.softmax(z, dim=1)
print(y)
# -
# ### Cross Entropy Loss
# +
# Code 2-14
loss_function = nn.CrossEntropyLoss(reduction="sum")
# The target must also be converted to a torch.LongTensor.
print(loss_function(z, t.long()))
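# nn.CrossEntropyLoss combines log-softmax with the negative log-likelihood.
# A hedged numpy sketch of that composition (not PyTorch's exact implementation):

```python
import numpy as np

def cross_entropy(z, target):
    # subtract the max for numerical stability, then take log-softmax
    z = z - z.max()
    log_probs = z - np.log(np.exp(z).sum())
    # negative log-likelihood of the target class
    return float(-log_probs[target])

z = np.array([2.0, 1.0, 0.1])
print(cross_entropy(z, 0))  # small: class 0 already has the largest logit
print(cross_entropy(z, 2))  # larger loss for the least likely class
```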
| src/Chapter-2/04-loss_function.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/CaQtiml/Data-Structure-and-Graph-Algorithm/blob/master/DecisionTree_fromScratch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BzlbTWnXZ1CV"
# ## Dataset
# + id="GDkxmHYeaPJQ"
# Format: each row is an example.
# The last column is the label.
# The first two columns are features.
# Feel free to play with it by adding more features & examples.
# Interesting note: I've written this so the 2nd and 5th examples
# have the same features, but different labels - so we can see how the
# tree handles this case.
training_data = [
['Green', 3, 'Apple'],
['Yellow', 3, 'Apple'],
['Red', 1, 'Grape'],
['Red', 1, 'Grape'],
['Yellow', 3, 'Lemon'],
]
# + id="eo0GX6joaVTD"
# Column labels.
# These are used only to print the tree.
header = ["color", "diameter", "label"]
# + id="lEh-Sr_naXGo"
def unique_vals(rows, col):
"""Find the unique values for a column in a dataset."""
return set([row[col] for row in rows])
# + colab={"base_uri": "https://localhost:8080/"} id="6VbPDs1jaZeo" outputId="a61d0ccb-0e2f-4e2d-ac10-b12f83962f73"
#######
# Demo:
print(unique_vals(training_data, 0))
print(unique_vals(training_data, 1))
#######
# + id="2XVhtaoNae44"
def class_counts(rows):
"""Counts the number of each type of example in a dataset."""
counts = {} # a dictionary of label -> count.
for row in rows:
label = row[-1]
if label not in counts:
counts[label]=0
counts[label]+=1
return counts
# + colab={"base_uri": "https://localhost:8080/"} id="WtxWsnxEap4x" outputId="846e13e5-8137-4510-d8ec-245a1adcee89"
#######
# Demo:
x = class_counts(training_data)
print(type(x))
#######
# + id="WstAvzTgbBrr"
def is_numeric(value):
"""Test if a value is numeric."""
return isinstance(value, int) or isinstance(value, float)
# + id="ojekJiHmbF-w"
class Question:
"""A Question is used to partition a dataset.
This class just records a 'column number' (e.g., 0 for Color) and a
'column value' (e.g., Green). The 'match' method is used to compare
the feature value in an example to the feature value stored in the
question. See the demo below.
"""
def __init__(self,column,value):
self.column = column
self.value = value
    def match(self,example): # an example looks like ['Yellow', 3, 'Apple']
val = example[self.column]
if is_numeric(val):
return val >= self.value
else:
return val == self.value
def __repr__(self):
# This is just a helper method to print
# the question in a readable format.
condition = "=="
if is_numeric(self.value):
condition = ">="
return "Is %s %s %s?" % (
header[self.column], condition, str(self.value))
# + colab={"base_uri": "https://localhost:8080/"} id="l6dTGw9vcTLM" outputId="f98ee8cd-f0aa-4e1d-c8fe-d26a3a06451a"
#######
# Demo:
# Let's write a question for a categorical and for a numeric attribute
print(Question(0, "Yellow"))
print(Question(1, 3))
# + colab={"base_uri": "https://localhost:8080/"} id="8WPmpDZ5ceNB" outputId="60b09af5-1caf-401f-8cfe-ce4369374087"
q = Question(0, 'Green')
example = training_data[0]
print(example)
q.match(example)
# + id="NoSYh6GRc1Sk"
def partition(rows, question):
"""Partitions a dataset.
For each row in the dataset, check if it matches the question. If
so, add it to 'true rows', otherwise, add it to 'false rows'.
rows - training set
question - Question(0, 'Red')
OUTPUT : true_rows ([['Red', 1, 'Grape'], ['Red', 1, 'Grape']]) , false_rows ([['Green', 3, 'Apple'], ['Yellow', 3, 'Apple'], ['Yellow', 3, 'Lemon']])
"""
true_rows = []
false_rows = []
for row in rows:
if question.match(row):
true_rows.append(row)
else:
false_rows.append(row)
return true_rows,false_rows
# + colab={"base_uri": "https://localhost:8080/"} id="xn52z8f8eyQK" outputId="8f9e1cef-0aae-43ae-9073-4cb8fa84aa5f"
#######
# Demo:
# Let's partition the training data based on whether rows are Red.
true_rows, false_rows = partition(training_data, Question(0, 'Red'))
# This will contain all the 'Red' rows.
print(true_rows)
print(false_rows)
# + id="e1YFmn_ne5mo"
def gini(rows):
counts = class_counts(rows)
impurity = 1
for lbl in counts:
prob_of_lbl = counts[lbl]/float(len(rows))
impurity -= prob_of_lbl**2
return impurity
# + colab={"base_uri": "https://localhost:8080/"} id="gH-CtmX7gO6g" outputId="5a4d8c8e-6c14-4ad5-d6d6-609215dc05ea"
#######
# Demo:
# Let's look at some example to understand how Gini Impurity works.
#
# First, we'll look at a dataset with no mixing.
no_mixing = [['Apple'],
['Apple']]
# this will return 0
gini(no_mixing)
# + colab={"base_uri": "https://localhost:8080/"} id="r71I2H8vgNKJ" outputId="c303d74b-0546-4515-8c2b-98c78830e2aa"
# Now, we'll look at dataset with a 50:50 apples:oranges ratio
some_mixing = [['Apple'],
['Orange']]
# this will return 0.5 - meaning, there's a 50% chance of misclassifying
# a random example we draw from the dataset.
gini(some_mixing)
# + colab={"base_uri": "https://localhost:8080/"} id="KXVqCKxOiz4-" outputId="77b5d291-778c-4bc6-aa63-26418b886ed1"
# Now, we'll look at a dataset with many different labels
lots_of_mixing = [['Apple'],
['Orange'],
['Grape'],
['Grapefruit'],
['Blueberry']]
# This will return 0.8
gini(lots_of_mixing)
#######
# + id="nMXWnXeJkfRF"
def info_gain(left, right, current_uncertainty):
"""Information Gain.
The uncertainty of the starting node, minus the weighted impurity of
two child nodes.
"""
p = float(len(left)) / (len(left) + len(right))
return current_uncertainty - p * gini(left) - (1 - p) * gini(right)
# + id="W5Cx3t8Ii1yY"
## another implementation of info_gain that takes all the rows instead of a
## precomputed current_uncertainty (same algorithm)
def info_gain_2(left, right, rows):
    p = float(len(left)) / (len(left) + len(right))
    return gini(rows) - p * gini(left) - (1 - p) * gini(right)
# + colab={"base_uri": "https://localhost:8080/"} id="K_Z4_RNPkQQy" outputId="61467556-b06b-4672-835f-dcf95f3527a1"
true_rows, false_rows = partition(training_data, Question(0, 'Green'))
info_gain_2(true_rows, false_rows, training_data)
# + colab={"base_uri": "https://localhost:8080/"} id="kgzkIPYkkoyh" outputId="1baf8248-1d5f-4681-b7c9-6a6aecff7f08"
current_uncertainty = gini(training_data)
current_uncertainty
# + colab={"base_uri": "https://localhost:8080/"} id="jww09o-3krHa" outputId="dec8ecd8-a6b0-4eaa-806e-41023d1afcd9"
true_rows, false_rows = partition(training_data, Question(0, 'Green'))
print(info_gain(true_rows, false_rows, current_uncertainty))
true_rows, false_rows = partition(training_data, Question(0,'Red'))
print(info_gain(true_rows, false_rows, current_uncertainty))
# + [markdown] id="jMqI5svxlAEB"
# We choose the split with the highest info_gain, because it leaves the lowest impurity after splitting.
# + [markdown] id="4SID3i73lxBD"
# It looks like we learned more using 'Red' (0.37), than 'Green' (0.14).
# Why? Look at the different splits that result, and see which one looks more 'unmixed' to you.
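# Those two numbers can be verified by hand; a quick sketch redoing the
# arithmetic with the same formulas as gini / info_gain above:

```python
def gini_from_counts(counts):
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

start = gini_from_counts([2, 2, 1])  # 2 Apples, 2 Grapes, 1 Lemon -> 0.64

# 'Red' split: true side = 2 Grapes; false side = 2 Apples + 1 Lemon
gain_red = start - (2/5) * gini_from_counts([2]) - (3/5) * gini_from_counts([2, 1])

# 'Green' split: true side = 1 Apple; false side = 1 Apple + 2 Grapes + 1 Lemon
gain_green = start - (1/5) * gini_from_counts([1]) - (4/5) * gini_from_counts([1, 2, 1])

print(round(gain_red, 2), round(gain_green, 2))  # 0.37 0.14
```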
# + colab={"base_uri": "https://localhost:8080/"} id="lFIPZCKtl1po" outputId="46da7d9b-1d67-4777-a361-3658cb03b09d"
true_rows, false_rows = partition(training_data, Question(0,'Red'))
print(true_rows)
print(false_rows)
# at least it can distinguish "Grape".
# + colab={"base_uri": "https://localhost:8080/"} id="EqsbmUZjps9D" outputId="03bde368-6e6f-4ef9-caf9-f260569ea212"
# On the other hand, partitioning by Green doesn't help so much.
true_rows, false_rows = partition(training_data, Question(0,'Green'))
# We've isolated one apple in the true rows.
print(true_rows)
print(false_rows)
# + id="22q1g1Qll__j"
def find_best_split(rows):
"""Find the best question to ask by iterating over every feature / value
and calculating the information gain."""
best_gain = 0 # keep track of the best information gain
    best_question = None  # keep track of the feature / value that produced it
current_uncertainty = gini(rows)
n_features = len(rows[0]) - 1 # number of columns
for col in range(n_features):
values = set([row[col] for row in rows])
for val in values:
question = Question(col,val)
true_rows,false_rows = partition(rows,question)
if len(true_rows)==0 or len(false_rows)==0 :
continue
gain = info_gain(true_rows,false_rows,current_uncertainty)
# print(f"{val}:{gain}")
if gain >= best_gain:
best_gain,best_question = gain,question
return best_gain,best_question
# + colab={"base_uri": "https://localhost:8080/"} id="2i48u9K1noRW" outputId="8c7a673c-3333-4c35-acb1-ee47417635aa"
training_data = [
['Green', 3, 'Apple'],
['Yellow', 3, 'Apple'],
['Red', 1, 'Grape'],
['Red', 1, 'Grape'],
['Yellow', 3, 'Lemon'],
]
#######
# Demo:
# Find the best question to ask first for our toy dataset.
best_gain, best_question = find_best_split(training_data)
best_question
# FYI: is color == Red is just as good. See the note in the code above
# where I used '>='.
#######
# + id="rRyFCFeTott0"
class Leaf:
"""A Leaf node classifies data.
This holds a dictionary of class (e.g., "Apple") -> number of times
it appears in the rows from the training data that reach this leaf.
"""
def __init__(self, rows):
self.predictions = class_counts(rows)
# + id="x7-YmaX1s-bf"
class Decision_Node:
"""A Decision Node asks a question.
This holds a reference to the question, and to the two child nodes.
"""
def __init__(self,
question,
true_branch,
false_branch):
self.question = question
self.true_branch = true_branch
self.false_branch = false_branch
# + id="kl4jT2ZwtBmP"
def build_tree(rows):
best_gain,best_question = find_best_split(rows)
if best_gain == 0 :
return Leaf(rows)
true_rows,false_rows = partition(rows,best_question)
true_node = build_tree(true_rows)
false_node = build_tree(false_rows)
return Decision_Node(best_question,true_node,false_node)
# + id="83XmNtSMvOyr"
def print_tree(node, spacing=""):
"""World's most elegant tree printing function."""
# Base case: we've reached a leaf
if isinstance(node, Leaf):
print (spacing + "Predict", node.predictions)
return
# Print the question at this node
print (spacing + str(node.question))
# Call this function recursively on the true branch
print (spacing + '--> True:')
print_tree(node.true_branch, spacing + " ")
# Call this function recursively on the false branch
print (spacing + '--> False:')
print_tree(node.false_branch, spacing + " ")
# + id="hV3zOs4zvPVS"
my_tree = build_tree(training_data)
# + colab={"base_uri": "https://localhost:8080/"} id="wLZycaG1vQtb" outputId="2e3a12f2-f282-4adf-f422-5820d2051910"
print_tree(my_tree)
# + id="ebm7JqGIvSMZ"
def classify(row,node):
if isinstance(node,Leaf):
return node.predictions
if node.question.match(row):
return classify(row,node.true_branch)
else:
return classify(row,node.false_branch)
# + colab={"base_uri": "https://localhost:8080/"} id="sWJ6BiXXy8SU" outputId="0b6af0d5-6e13-4fbf-e4d8-c44ee47cca4f"
classify(["Green",3], my_tree)
# + id="6gxtVKkFzQ03"
def print_leaf(counts):
"""A nicer way to print the predictions at a leaf."""
total = sum(counts.values()) * 1.0
probs = {}
for lbl in counts.keys():
probs[lbl] = str(int(counts[lbl] / total * 100)) + "%"
return probs
# + colab={"base_uri": "https://localhost:8080/"} id="QyzSwKqZzWtf" outputId="8dbe74a2-59b5-40a2-ec9c-ae47e4e53614"
print_leaf(classify(["Green",3], my_tree))
print_leaf(classify(["Yellow",3], my_tree))
# + id="3_bSMKquzCqt"
# Evaluate
testing_data = [
['Green', 3, 'Apple'],
['Yellow', 4, 'Apple'],
['Red', 2, 'Grape'],
['Red', 1, 'Grape'],
['Yellow', 3, 'Lemon'],
]
# + colab={"base_uri": "https://localhost:8080/"} id="oaq_ceDlzhgJ" outputId="cf0dc1a7-dcfa-4135-ea19-f0142cd24871"
for row in testing_data:
print ("Actual: %s. Predicted: %s" %
(row[-1], print_leaf(classify(row, my_tree))))
# + id="_iwwL2iazi43"
| DecisionTree_fromScratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inspect Faster R-CNN trained model
# +
import os
import sys
import random
import math
import re
import time
import numpy as np
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from faster_rcnn import utils
from faster_rcnn import visualize
from faster_rcnn.visualize import display_images
from faster_rcnn import model as modellib
from faster_rcnn.model import log
import mammo_baseline_faster_rcnn
# %matplotlib inline
# Directory to save logs and trained model
LOGS_DIR = os.path.join(ROOT_DIR, "mammography", "checkpoints")
# -
# Reload imported modules automatically if they change
# %load_ext autoreload
# %autoreload 2
# ## Configurations
# +
# Dataset directory
DATASET_DIR = os.path.join(ROOT_DIR, "datasets/mammo")
# Inference Configuration
config = mammo_baseline_faster_rcnn.MammoInferenceConfig()
config.display()
# -
# ## Notebook Preferences
DEVICE = "/gpu:0" # /cpu:0 or /gpu:0
TEST_MODE = "inference"
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
fig, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
fig.tight_layout()
return ax
# ## Load Validation Dataset
# +
dataset = mammo_baseline_faster_rcnn.MammoDataset()
dataset.load_mammo(DATASET_DIR, "mass_test", augmented=False)
dataset.prepare()
print("Images: {}\nClasses: {}".format(len(dataset.image_ids), dataset.class_names))
# -
# ## Load Model
# Create model in inference mode
with tf.device(DEVICE):
model = modellib.MaskRCNN(mode="inference",
model_dir=LOGS_DIR,
config=config)
# ## Compute AP results
# +
################################################
## Using trained network on higher res image ##
## on normal resolution image (3x dataset) ##
## Inference = 2000 instead of 1000 ##
################################################
# Path to a specific weights file
_4sep_1024_reso_mass_train_3x_fastrcnn = "mammo20180904T0233"
latest = _4sep_1024_reso_mass_train_3x_fastrcnn
epochs_trained = 11
limit = 361
def compute_batch_ap(dataset, image_ids, verbose=1):
APs = np.zeros(10)
mAPs = []
for image_id in image_ids:
# Load image
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset, config,
image_id, use_mini_mask=False)
# Run object detection
results = model.detect_molded(image[np.newaxis], image_meta[np.newaxis], verbose=0)
# Compute AP over range 0.5 to 0.95
r = results[0]
mAP, AP = utils.compute_ap_range(
gt_bbox, gt_class_id, gt_mask,
r['rois'], r['class_ids'], r['scores'],
verbose=0)
mAPs.append(mAP)
APs = np.add(APs, AP)
if verbose:
info = dataset.image_info[image_id]
meta = modellib.parse_image_meta(image_meta[np.newaxis,...])
print("{:3} {} mAP: {:.2f}".format(
meta["image_id"][0], meta["original_image_shape"][0], mAP))
return mAPs,APs
iou_range = np.arange(0.5, 1.0, 0.05)
for i in range(1, epochs_trained):
    # zero-pad the epoch number instead of branching on i < 10 vs i >= 10
    n_epochs = "mask_rcnn_mammo_%04d.h5" % i
    weights_path = os.path.join(LOGS_DIR, latest, n_epochs)
    print("\n", i, ": Loading weights ", weights_path)
    time_now = time.time()
    model.load_weights(weights_path, by_name=True)
    mAPs, APs = compute_batch_ap(dataset, dataset.image_ids[:limit], verbose=0)
    # use a separate loop variable so the epoch index i is not shadowed
    for j in range(len(iou_range)):
        print("AP @{:.2f}:\t {:.3f}".format(iou_range[j], (np.sum(APs[j]) / limit)))
    print("Mean AP over {} images: {:.4f}".format(len(mAPs), np.mean(mAPs)))
    print("Time taken:", time.time() - time_now)
# Run on validation set
# -
# ## Visualize predictions
# +
image_id = random.choice(dataset.image_ids)
print("Image ID:", image_id)
_4sep_1024_reso_mass_train_3x_fastrcnn = "mammo20180904T0233"
latest = _4sep_1024_reso_mass_train_3x_fastrcnn
n_epochs = "mask_rcnn_mammo_000" + str(6) + ".h5"
weights_path = os.path.join(LOGS_DIR, latest, n_epochs)
# print("\n", i, ": Loading weights ", weights_path)
# time_now = time.time()
model.load_weights(weights_path, by_name=True)
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset, config, image_id, use_mini_mask=False)
info = dataset.image_info[image_id]
print("image ID: {}.{} ({}) {}".format(info["source"], info["id"], image_id,
dataset.image_reference(image_id)))
print("Original image shape: ", modellib.parse_image_meta(image_meta[np.newaxis,...])["original_image_shape"][0])
# Run object detection
results = model.detect_molded(np.expand_dims(image, 0), np.expand_dims(image_meta, 0), verbose=1)
# Display results
r = results[0]
# log("gt_class_id", gt_class_id)
# log("gt_bbox", gt_bbox)
# log("gt_mask", gt_mask)
# # Compute AP over range 0.5 to 0.95 and print it
mAP, AP = utils.compute_ap_range(gt_bbox, gt_class_id, gt_mask,
r['rois'], r['class_ids'], r['scores'],
verbose=1)
visualize.display_bbox_differences(
image,
gt_bbox, gt_class_id, gt_mask,
r['rois'], r['class_ids'], r['scores'],
dataset.class_names, ax=get_ax(),
show_box=True,
iou_threshold=0.5, score_threshold=0.9)
# +
image_id = random.choice(dataset.image_ids)
print("Image ID:", image_id)
_4sep_1024_reso_mass_train_3x_fastrcnn = "mammo20180904T0233"
latest = _4sep_1024_reso_mass_train_3x_fastrcnn
n_epochs = "mask_rcnn_mammo_000" + str(6) + ".h5"
weights_path = os.path.join(LOGS_DIR, latest, n_epochs)
# print("\n", i, ": Loading weights ", weights_path)
# time_now = time.time()
model.load_weights(weights_path, by_name=True)
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset, config, image_id, use_mini_mask=False)
info = dataset.image_info[image_id]
print("image ID: {}.{} ({}) {}".format(info["source"], info["id"], image_id,
dataset.image_reference(image_id)))
print("Original image shape: ", modellib.parse_image_meta(image_meta[np.newaxis,...])["original_image_shape"][0])
# Run object detection
results = model.detect_molded(np.expand_dims(image, 0), np.expand_dims(image_meta, 0), verbose=1)
# Display results
r = results[0]
# log("gt_class_id", gt_class_id)
# log("gt_bbox", gt_bbox)
# log("gt_mask", gt_mask)
# # Compute AP over range 0.5 to 0.95 and print it
mAP, AP = utils.compute_ap_range(gt_bbox, gt_class_id, gt_mask,
r['rois'], r['class_ids'], r['scores'],
verbose=1)
visualize.display_bbox_differences(
image,
gt_bbox, gt_class_id, gt_mask,
r['rois'], r['class_ids'], r['scores'],
dataset.class_names, ax=get_ax(),
show_box=True,
iou_threshold=0.5, score_threshold=0.9)
| mammography/inspect_faster_rcnn_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from PIL import Image
# Some standard imports
import onnxruntime
import numpy as np
import torch  # needed below for torch.from_numpy
from matplotlib import pyplot as plt
# +
# helper function for data visualization
def visualize(**images):
"""PLot images in one row."""
n = len(images)
plt.figure(figsize=(16, 5))
for i, (name, image) in enumerate(images.items()):
plt.subplot(1, n, i + 1)
plt.xticks([])
plt.yticks([])
plt.title(' '.join(name.split('_')).title())
plt.imshow(image)
plt.show()
def scale(x, input_space="RGB", input_range=[0,1]):
"from https://github.com/qubvel/segmentation_models.pytorch.git"
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
if input_space == "BGR":
x = x[..., ::-1].copy()
if input_range is not None:
if x.max() > 1 and input_range[1] == 1:
x = x / 255.0
if mean is not None:
mean = np.array(mean)
x = x - mean
if std is not None:
std = np.array(std)
x = x / std
return x
def to_numpy(tensor):
"from pytorch.org"
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
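# A quick standalone check of what scale() does to a uint8 image: divide by
# 255 when the values exceed 1, then standardize with the ImageNet mean/std.
# The constants are re-derived here so this sketch runs on its own:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

img = np.full((2, 2, 3), 255, dtype=np.uint8)  # pure-white test image
x = img / 255.0                                # max > 1, so scale() divides first
x = (x - mean) / std                           # then standardizes per channel

# every white pixel maps to (1 - mean) / std
print(np.round(x[0, 0], 3))
```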
# +
# visual performance of model
IMG_PATH = 'data/input/demo_1.png'
image = np.array(Image.open(IMG_PATH).convert(mode='RGB')).astype('uint8')
image_processed = scale(image).transpose(2,0,1).astype('float32')
DEVICE = 'cpu'
x_tensor = torch.from_numpy(image_processed).to(DEVICE).unsqueeze(0)
# -
# compute ONNX Runtime output prediction
ort_session = onnxruntime.InferenceSession("model/geofpn.onnx")
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x_tensor)}
ort_outs = ort_session.run(None, ort_inputs)
visualize(
image=image,
predicted_mask=ort_outs[0][0,0,:,:]
)
| blog_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="1l_iWI8kmqfD"
# # Few shot learning NLP
# + id="vuREe4mCnkNj" executionInfo={"status": "ok", "timestamp": 1611899462849, "user_tz": 480, "elapsed": 4251, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
# %%capture
# !wget https://github.com/martin-fabbri/colab-notebooks/raw/master/nlp/few-shot-learning/datasets/final_fewshot_test.csv
# !wget https://github.com/martin-fabbri/colab-notebooks/raw/master/nlp/few-shot-learning/datasets/final_fewshot_train.csv
# + colab={"base_uri": "https://localhost:8080/"} id="LWBh3IJfmcjX" executionInfo={"status": "ok", "timestamp": 1611899464428, "user_tz": 480, "elapsed": 5817, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="df45f5e4-6b8f-4912-fce1-287b0111f16c"
import keras.backend as K
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (
BatchNormalization,
Dense,
Dropout,
Input,
Lambda,
Layer,
)
from tensorflow.keras.regularizers import l2
print("tensorflow", tf.__version__)
print("tensorflow_hub", hub.__version__)
# + colab={"base_uri": "https://localhost:8080/"} id="J7FzcDyFni0_" executionInfo={"status": "ok", "timestamp": 1611899464429, "user_tz": 480, "elapsed": 5809, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="0bcc782a-ff6e-4078-b354-3914d6e94a81"
test = pd.read_csv("final_fewshot_test.csv")
train = pd.read_csv("final_fewshot_train.csv")
train.shape, test.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 196} id="1R42KNb6loiP" executionInfo={"status": "ok", "timestamp": 1611899464430, "user_tz": 480, "elapsed": 5801, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="f9040cd7-4b0c-4119-c71a-771967ec2c2f"
train.head()
# + id="noyxDToJwbMR" executionInfo={"status": "ok", "timestamp": 1611899495484, "user_tz": 480, "elapsed": 36842, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5"
embed = hub.load(module_url)
# + colab={"base_uri": "https://localhost:8080/"} id="LIMDPv0tzfAN" executionInfo={"status": "ok", "timestamp": 1611899495486, "user_tz": 480, "elapsed": 36833, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="29b936a4-e7da-4fff-c191-6e501ea33669"
model = Sequential([
Input(shape=(512,)),
Dense(256, activation="relu"),
Dropout(0.4),
BatchNormalization(),
Dense(64, activation="relu", kernel_regularizer=l2(0.001)),
Dropout(0.4),
Dense(128, name="dense_layer"),
Lambda(lambda x: K.l2_normalize(x, axis=1), name="norm_layer")
])
model.summary()
# + id="ctkcx5ej183A" executionInfo={"status": "ok", "timestamp": 1611899495487, "user_tz": 480, "elapsed": 36821, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
class TripletLossLayer(Layer):
def __init__(self, alpha, **kwargs):
self.alpha = alpha
super(TripletLossLayer, self).__init__(**kwargs)
def triplet_loss(self, inputs):
a, p, n = inputs
p_dist = K.sum(K.square(a - p), axis=-1)
n_dist = K.sum(K.square(a - n), axis=-1)
return K.sum(K.maximum(p_dist - n_dist + self.alpha, 0), axis=0)
def call(self, inputs):
loss = self.triplet_loss(inputs)
self.add_loss(loss)
return loss
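# The layer's math in isolation, as a numpy sketch:
# loss = sum over the batch of max(||a - p||^2 - ||a - n||^2 + alpha, 0).

```python
import numpy as np

def triplet_loss_np(a, p, n, alpha=0.4):
    p_dist = np.sum((a - p) ** 2, axis=-1)   # squared anchor-positive distance
    n_dist = np.sum((a - n) ** 2, axis=-1)   # squared anchor-negative distance
    return float(np.sum(np.maximum(p_dist - n_dist + alpha, 0.0), axis=0))

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])  # close positive
n = np.array([[1.0, 0.0]])  # far negative: margin satisfied, so loss is 0
print(triplet_loss_np(a, p, n))
```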
# + id="swcpL47sXm67" executionInfo={"status": "ok", "timestamp": 1611899495488, "user_tz": 480, "elapsed": 36815, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
in_a = Input(shape=(512,))
in_p = Input(shape=(512,))
in_n = Input(shape=(512,))
emb_a = model(in_a)
emb_p = model(in_p)
emb_n = model(in_n)
triplet_loss_layer = TripletLossLayer(alpha=0.4, name="triplet_loss_layer")([emb_a, emb_p, emb_n])
nn4_small2_train = Model([in_a, in_p, in_n], triplet_loss_layer)
# + colab={"base_uri": "https://localhost:8080/"} id="upjD1VwUXrw6" executionInfo={"status": "ok", "timestamp": 1611899495829, "user_tz": 480, "elapsed": 37149, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="382098dc-11ad-4d41-8c04-1fdd6220529e"
nn4_small2_train.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="T2s1kNDRYPb_" executionInfo={"status": "ok", "timestamp": 1611899495830, "user_tz": 480, "elapsed": 37141, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="21c6587c-4e89-497b-9220-a5a466a3d836"
unique_train_label = np.array(train["class"].unique().tolist())
labels_train = np.array(train["class"].tolist())
map_train_label_indices = {
label: np.flatnonzero(labels_train == label) for label in unique_train_label
}
map_train_label_indices
# + id="1LHsWlhYYtES" executionInfo={"status": "ok", "timestamp": 1611899495830, "user_tz": 480, "elapsed": 37134, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
def get_triplets(unique_train_label, map_train_label_indices):
label_l, label_r = np.random.choice(unique_train_label, 2, replace=False)
a, p = np.random.choice(map_train_label_indices[label_l], 2, replace=False)
n = np.random.choice(map_train_label_indices[label_r])
return a, p, n
# + colab={"base_uri": "https://localhost:8080/"} id="o5cHIvb4fWI6" executionInfo={"status": "ok", "timestamp": 1611899495831, "user_tz": 480, "elapsed": 37129, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="a2ff156d-9b0d-4afc-a83c-40608fda110a"
a, p, n = get_triplets(unique_train_label, map_train_label_indices)
a, p, n
# + colab={"base_uri": "https://localhost:8080/", "height": 137} id="igSEHnJxgeLu" executionInfo={"status": "ok", "timestamp": 1611899495832, "user_tz": 480, "elapsed": 37121, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="7bf20541-bed9-41dd-98fa-039005b5c77a"
train.iloc[[a, p, n]]
# + id="Gq3xX3b4iMEB" executionInfo={"status": "ok", "timestamp": 1611899495833, "user_tz": 480, "elapsed": 37112, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
def get_triplets_batch(k, train_set, unique_train_label, map_train_label_indices, embed):
while True:
idxs_a, idxs_p, idxs_n = [], [], []
for _ in range(k):
a, p, n = get_triplets(unique_train_label, map_train_label_indices)
idxs_a.append(a)
idxs_p.append(p)
idxs_n.append(n)
a = train_set.iloc[idxs_a].values.tolist()
p = train_set.iloc[idxs_p].values.tolist()
n = train_set.iloc[idxs_n].values.tolist()
a = embed(a)
p = embed(p)
n = embed(n)
yield [a, p, n], []
# + id="3mwZSQQvlcjC" executionInfo={"status": "ok", "timestamp": 1611899500736, "user_tz": 480, "elapsed": 42006, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
batch = next(get_triplets_batch(128, train["text"], unique_train_label, map_train_label_indices, embed))
#batch
# + id="vKG5aKGNdD0G" executionInfo={"status": "ok", "timestamp": 1611899500738, "user_tz": 480, "elapsed": 42001, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
nn4_small2_train.compile(loss=None, optimizer="adam")
# + colab={"base_uri": "https://localhost:8080/"} id="j9KQMp9HdHxx" executionInfo={"status": "ok", "timestamp": 1611899672991, "user_tz": 480, "elapsed": 214246, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="bee1c8a9-6ba2-41be-81df-e05a48d16747"
nn4_small2_train.fit(get_triplets_batch(128, train["text"], unique_train_label, map_train_label_indices, embed), epochs=100, steps_per_epoch=10)
# + id="qHYztZe_sHIT" executionInfo={"status": "ok", "timestamp": 1611899734918, "user_tz": 480, "elapsed": 1887, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
X_train = model.predict(embed(np.array(train['text'].values.tolist())))
X_test = model.predict(embed(np.array(test['text'].values.tolist())))
y_train = np.array(train['class'].values.tolist())
y_test = np.array(test['class'].values.tolist())
| nlp/few-shot-learning/few_shot_siamese_nlp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Labeling
# Labeling is the process of assigning numbers to objects in a binary image. Remember, a binary image has intensity 0 where there is no object and 1 where there is an object. We now want to generate a label image where individual objects have different numbers. To get it, we need to label connected components.
#
# See also
# * [Connected-component labeling](https://en.wikipedia.org/wiki/Connected-component_labeling)
#
# We start with an artificial image
# +
import numpy as np
binary_image = np.asarray([
[1, 1, 0, 0, 0, 0 ,0],
[0, 0, 1, 0, 0, 0 ,0],
[0, 0, 0, 1, 1, 1 ,0],
[0, 0, 0, 1, 1, 1 ,0],
[1, 1, 0, 0, 0, 0 ,0],
[1, 1, 0, 0, 1, 1 ,1],
[1, 1, 0, 0, 1, 1 ,1],
])
# +
from skimage.io import imshow
imshow(binary_image, cmap='Greys_r')
# -
# This binary image can be interpreted in two ways: either there are five rectangles with sizes ranging between 1 and 6 pixels, or there are two rectangles of size 6 and one non-rectangular structure of 9 pixels.
#
# We can technically use both alternatives for connected-component labeling, depending on the connectivity used for connecting pixels in the [label function](https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label).
#
# Connectivity
# 1. [von Neumann, 4-connected](https://en.wikipedia.org/wiki/Von_Neumann_neighborhood)
# 2. [Moore, 8-connected](https://en.wikipedia.org/wiki/Moore_neighborhood)
#
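# The effect of the two neighborhoods can also be reproduced with a small hand-rolled labeling routine (a minimal breadth-first flood-fill sketch, not the scikit-image implementation):

```python
import numpy as np
from collections import deque

def label_cc(img, connectivity=1):
    """Label connected components; connectivity=1 -> 4-connected, 2 -> 8-connected."""
    if connectivity == 1:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # von Neumann neighborhood
    else:
        nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]                       # Moore neighborhood
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if img[r, c] and labels[r, c] == 0:
                current += 1                                 # start a new component
                queue = deque([(r, c)])
                labels[r, c] = current
                while queue:                                 # breadth-first flood fill
                    rr, cc = queue.popleft()
                    for dr, dc in nbrs:
                        nr, nc = rr + dr, cc + dc
                        if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                                and img[nr, nc] and labels[nr, nc] == 0):
                            labels[nr, nc] = current
                            queue.append((nr, nc))
    return labels

diag = np.array([[1, 0],
                 [0, 1]])
print(label_cc(diag, connectivity=1).max())  # 2 (diagonal pixels are separate objects)
print(label_cc(diag, connectivity=2).max())  # 1 (diagonal pixels touch)
```

# The two diagonal pixels are separate objects under 4-connectivity but a single object under 8-connectivity, which is exactly the ambiguity discussed above.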
# ## 4-connected component labeling
# +
from skimage.measure import label
labeled_4_connected = label(binary_image, connectivity=1)
# make a custom lookup table / color map
import matplotlib
lut = np.random.rand(256, 3)
lut[0, :] = 0
label_cmap = matplotlib.colors.ListedColormap(lut)
imshow(labeled_4_connected, cmap=label_cmap)
# -
# ## 8-connected component labeling
# +
from skimage.measure import label
labeled_8_connected = label(binary_image, connectivity=2)
imshow(labeled_8_connected, cmap=label_cmap)
# -
# In practice, for counting cells, the connectivity is not so important. This is why the connectivity parameter is often not provided.
#
# ## Labeling in practice
# To demonstrate labeling in a practical use case, we label the blobs.tif image.
# +
# Load data
from skimage.io import imread
blobs = imread("blobs.tif")
# Thresholding
from skimage.filters import threshold_otsu
threshold = threshold_otsu(blobs)
binary_blobs = blobs > threshold
# Labeling
from skimage.measure import label
labeled_blobs = label(binary_blobs)
# Visualization
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 3, figsize=(15,15))
axs[0].imshow(blobs)
axs[1].imshow(binary_blobs)
axs[2].imshow(labeled_blobs, cmap=label_cmap)
# -
# For visualizing and potentially manually curating blobs, we can load these images into napari.
# %gui qt
# +
import napari
# start napari
viewer = napari.Viewer()
# add image
viewer.add_image(blobs)
# add binary image as labels
viewer.add_labels(binary_blobs, visible=False)
# add labels
viewer.add_labels(labeled_blobs)
# -
napari.utils.nbscreenshot(viewer)
# ## Exercise
# Find out experimentally what the default setting of the connectivity parameter of the label function is.
| image_processing/09_Labeling.ipynb |
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ %sql
select * from nycdatah_csv;
select sum(passenger_count) as pc,PULocationID from nycdatah_csv group by PULocationID order by pc asc
/ +
file_location = "/FileStore/tables/nycdata.csv"
file_type = "csv"
# CSV options
infer_schema = "true"
first_row_is_header = "true"
delimiter = ","
# The applied options are for CSV files. For other file types, these will be ignored.
df1 = spark.read.format(file_type) \
.option("inferSchema", infer_schema) \
.option("header", first_row_is_header) \
.option("sep", delimiter) \
.load(file_location)
#display(df1)
/ -
df1.printSchema()
#agr=sum('passenger_count').alias('as')
import pyspark.sql.functions as func
df1.groupBy('PULocationID').sum('passenger_count').orderBy('sum(passenger_count)').select('PULocationID',func.col("sum(passenger_count)").alias("as")).show()
/ %sql
select count(distinct(tpep_pickup_datetime)) from nycdatah_csv;
select tpep_pickup_datetime,sum(total_amount),VendorID from nycdatah_csv group by tpep_pickup_datetime, VendorID order by VendorID,tpep_pickup_datetime
/ %sql
select count(distinct(tpep_pickup_datetime)) from nycdatah_csv;
/ +
import pyspark.sql.functions as func
df1.groupBy("tpep_pickup_datetime", "VendorID").sum('total_amount').orderBy("VendorID","tpep_pickup_datetime").select("VendorID","tpep_pickup_datetime",func.col("sum(total_amount)").alias("avg_fare")).show()
/ -
/ %sql
select sum(total_amount) as mcount,payment_type, tpep_pickup_datetime from nycdatah_csv group by payment_type,tpep_pickup_datetime order by payment_type
import pyspark.sql.functions as func
df1.groupBy("payment_type","tpep_pickup_datetime").sum('total_amount').orderBy("payment_type","tpep_pickup_datetime").select(func.col("sum(total_amount)").alias("mcount"),"payment_type","tpep_pickup_datetime").show()
/ %sql
select VendorID, sum(total_amount) as amt, sum(passenger_count), sum(trip_distance) from nycdatah_csv where tpep_pickup_datetime like '01-01-2018%' group by VendorID order by amt desc limit 2
import pyspark.sql.functions as func
# Filter to 01-01-2018 pickups to match the SQL query above, then aggregate
df2 = df1.filter(df1.tpep_pickup_datetime.like('01-01-2018%'))
df2.groupBy("VendorID").sum("total_amount","passenger_count","trip_distance").orderBy(func.col("sum(total_amount)").desc()).select("VendorID",func.col("sum(total_amount)").alias("amt"),func.col("sum(passenger_count)").alias("pcount"),func.col("sum(trip_distance)").alias("distance")).limit(2).show()
| dt3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms
from tqdm.notebook import tqdm
import torch.nn as nn
train_data = datasets.MNIST(root="./dataset", train=True, transform=transforms.ToTensor(), download=True)
test_data = datasets.MNIST(root="./dataset", train=False, transform=transforms.ToTensor(), download=True)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=10, shuffle=False)
class MLP_Model(nn.Module):
def __init__(self):
super().__init__()
self.submodel_1 = nn.Linear(784,500)
self.submodel_2 = nn.Linear(500,10)
def forward(self,x):
z = F.relu(self.submodel_1(x))
        y = self.submodel_2(z)  # no activation here: F.cross_entropy expects raw logits
return y
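# As a quick sanity check, the two `nn.Linear` layers above imply the following parameter count (weights plus biases):

```python
# parameters of nn.Linear(784, 500) and nn.Linear(500, 10)
n_params = (784 * 500 + 500) + (500 * 10 + 10)
print(n_params)  # 397510
```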
model = MLP_Model()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for images, labels in tqdm(train_loader):
optimizer.zero_grad()
x = images.view(-1,784)
y = model.forward(x)
loss = F.cross_entropy(y,labels)
loss.backward()
optimizer.step()
correct = 0
n = len(test_data)
with torch.no_grad():  # gradients are not needed during evaluation
    for images, labels in tqdm(test_loader):
        x = images.view(-1, 784)
        y = model(x)
        predictions = torch.argmax(y, dim=1)
        correct += (predictions == labels).sum().item()
print(f"Accuracy= {(correct/n)*100}%")
# +
# 2 cells below this cell are for checking visually how our model performs
# add index to load image and label in MNIST test_data
# you will get the image and true label corresponding to the image
# after running the next cell you will get the label predicted by our model
# +
import matplotlib.pyplot as plt
im,lb = test_data[3]
im = im.reshape([28,28])
plt.imshow(im, cmap='gray')
print("true label: {}".format(lb))
# -
x = im.view(-1,784)
y = model(x)
prediction = torch.argmax(y,dim=1)
print("predicted label: {}".format(prediction))
| .ipynb_checkpoints/mnist MLP-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Pandas-style slicing of Triangle
#
#
# This example demonstrates the familiarity of the pandas API applied to a
# :class:`Triangle` instance.
#
#
#
# +
import chainladder as cl
import seaborn as sns
sns.set_style('whitegrid')
# The base Triangle Class:
cl.Triangle
# Load data
clrd = cl.load_dataset('clrd')
# pandas-style Aggregations
clrd = clrd.groupby('LOB').sum()
# pandas-style value/column slicing
clrd = clrd['CumPaidLoss']
# pandas loc-style index slicing
clrd = clrd.loc['medmal']
# Plot
g = clrd.link_ratio.plot(marker='o') \
.set(title='Medical Malpractice Link Ratios',
ylabel='Link Ratio', xlabel='Accident Year')
| docs/auto_examples/plot_triangle_slicing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <p style="text-align: center;">How To</p>
#
# ## <p style="text-align: center;">Measuring positioning repeatability</p>
# ---
# ## I. Software and hardware configuration
# ---
# * Set the **Sampling cycle** value to 1 ms
#
# *Note: send the new configuration data via the "Send settings to controler" button*
# 
# * Set the **Averaging Times** value to 1
#
# *Note: send the new configuration data via the "Send settings to controler" button*
# 
# ## II. Statistical tools for interpreting the data
# ---
# >The laser telemeter acquires one measurement each time the trigger signal fires, so each point is a position measurement. A minimum sample of 11 points is recommended to evaluate repeatability. In practice, however, at least 100 points should be acquired so that descriptive statistics can be used with a satisfactory confidence level.
#
# The graphical tools:
#
# 1. The [**scatter plot**](https://fr.wikipedia.org/wiki/Nuage_de_points_(statistique)) illustrates the measured data
# 2. The [**box plot**](https://fr.wikipedia.org/wiki/Bo%C3%AEte_%C3%A0_moustaches) gives a visual estimate of the interquartile range and the median of the data series
# 3. The [**histogram**](https://fr.wikipedia.org/wiki/Histogramme) shows how the measurement points are distributed across classes
# 4. The [**QQ-plot**](https://fr.wikipedia.org/wiki/Diagramme_quantile-quantile) checks whether the data series follows a normal distribution
#
# The statistical indicators used to confirm (or not) a repeatability are:
#
# 1. The [**p-value**](https://fr.wikipedia.org/wiki/Valeur_p)
# 2. The [**capability**](https://fr.wikipedia.org/wiki/Capabilit%C3%A9_machine), denoted Cp
#
# Together, these statistical tools make it possible to verify positioning repeatability from the measurements acquired with the Keyence laser telemeter.
#
# All of these indicators are generated automatically by the `Keyence1D` library presented below.
# ## III. Post-processing of repeatability measurements from the Keyence laser telemeter
# ---
# ### 1. Prerequisites
# 1. Install the Anaconda distribution: https://www.anaconda.com/products/individual
# 2. Install the Keyence1D package: **git link to add**
# 3. Use Jupyter Notebook or any other Python IDE
# ### 2. First example: processing a raw data file and interpreting the results
# The `repetabilityMeasure` function from the `Keyence1D` library displays the measurements from the csv file generated by the Keyence software.
#
# To import the function (the Keyence1D.py file must be present in your project):
# ```python
# from keyence1D import repetabilityMeasure
# ```
#
# Run the function with the raw file name as its parameter:
# ```python
# repetabilityMeasure("rawData.csv")
# ```
#
# Finally, the `repetabilityMeasure()` function provides 5 pieces of information to analyze the measured data:
# * Scatter plot: shows the measurement points
# * Tukey box plot: shows the median, the Min and Max values and the interquartile range
# * Position histogram: shows the distribution of the measurement points
# * QQ-plot (Henry line): tests whether the distribution follows a normal distribution
# * p-value: statistical test to **validate (or not) that the distribution follows a normal distribution**
#
# An example is shown below:
# +
# First example: repeatability analysis on a single file
from keyence1D import repetabilityMeasure  # Import the repeatability function
import os  # Import the os module
os.chdir(r"C:\Users\Home\Documents\Git\KeyenceDataProcessing")  # Set the raw-data directory
repetabilityMeasure("./datas/Results_Position_Y_180326.csv")  # Run the function with the raw file name
# -
# **Comments**: *the scatter plot, the Tukey box plot and the histogram seem to indicate homogeneous values following a normal distribution. However, the QQ-plot shows a discontinuity for classes above 0.01 and below -0.02. Finally, the p-value is < 5%, so this measurement is not repeatable.*
# ### 3. Second example: processing several raw measurement files with a tolerance interval
# >In general, a distribution is considered normal if the statistical test gives a p-value > 5%. When the measurements follow a normal distribution, **and only in that case**, it is much more practical to use the capability indicator Cp:
#
# $$Cp = \frac{IT}{6\sigma}$$
# with:
# * Cp = capability --> **a process is considered capable if Cp > 1.33**
# * IT = tolerance interval
# * 𝜎 = standard deviation
#
# The `repetabilityMeasure()` function also takes the IT to test as a parameter, as well as an option to save the results as a ".png" image file. When the p-value is > 5%, **and only in that case**, Cp is also computed. Below is an example of parameters for IT = 1 mm:
#
#
# ```python
# repetabilityMeasure('rawData.csv', # Raw data file
#                     pngSave=True,  # Option to save the results as .png (True or False)
#                     IT = 1)        # Tolerance interval IT = 1 mm for the Cp computation
# ```
#
#
# It is therefore entirely possible to test several raw files from the Keyence software, since in some cases several data sets need to be compared.
#
# An example is shown below:
# +
# Second example: repeatability analysis on several files
from keyence1D import repetabilityMeasure  # Import the repeatability function
import os  # Import the os module
import glob  # Import the glob module
os.chdir(r"C:\Users\Home\Documents\Git\KeyenceDataProcessing\datas")  # Set the raw-data directory
for file in glob.glob("*.csv"):  # Iterate over the raw csv files in the directory
    repetabilityMeasure(file, pngSave=True, IT = 1)  # Run the function on every file and save the results as png
# -
# **Comments**: *only the 'Distrib_Norm.csv' file shows a normal distribution, but its capability is 0.167, far below 1.33 for a tolerance interval of 1 mm.*
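# As a quick numeric check of the capability formula, Cp = IT/(6σ) can be computed directly; the sample below is synthetic (seed, spread and IT are assumptions, not Keyence data):

```python
import numpy as np

# Synthetic repeatability sample: 100 position readings in mm (illustrative only)
rng = np.random.default_rng(0)
positions = rng.normal(loc=0.0, scale=0.05, size=100)

IT = 1.0                        # tolerance interval in mm (assumed)
sigma = positions.std(ddof=1)   # sample standard deviation
cp = IT / (6 * sigma)           # Cp = IT / (6 * sigma)
print(round(cp, 2))
```

# With a true spread of 0.05 mm this lands well above the 1.33 capability threshold.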
# # Conclusion
# >The laser telemeter can acquire several measurement points to evaluate repeatability. The trigger function should be preferred in order to obtain a usable raw data set with the data-processing tools of the `Keyence1D` package.
#
# >The Min and Max values alone are not sufficient indicators to guarantee that the positioning of an axis is repeatable. The p-value and Cp are more relevant statistical estimators. The goal is to ensure that the successive positions taken by the axis always stay within the desired tolerance interval.
| RepetabilityKeyence1D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Instagram Follow Automation to Particular Accounts
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
import time
import random
# ## Step 1 - Chrome Driver
driver = webdriver.Chrome(r'D:\chromedriver_win32\chromedriver.exe')  # raw string so backslashes are not treated as escapes
driver.get('https://www.instagram.com')
# ## Step 2 - Log in
# +
username = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//input[@name='username']"))).send_keys("MY_USERNAME")
password = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='password']"))).send_keys("<PASSWORD>")
try:
    submit = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[@type='submit']"))).click()
    print('login successful')
except Exception:
    print('login failed')
# -
# ## Step 3 - Handle Alerts
notNow_alert = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(text(),'Not Now')]")))
notNow_alert.click()
# ## Step 4 - Search Account
time.sleep(1)
searchBox = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//input[@placeholder='Search']")))
searchBox.send_keys('<PASSWORD>wahh')
time.sleep(1)
searchBox.send_keys(Keys.ENTER)
time.sleep(1)
searchBox.send_keys(Keys.ENTER)
# ## Step 5 - Button Follow Click
time.sleep(4)
followBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(text(),'Follow')]")))
followBtn.click()
| src/Follow_Any_Accounts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
from sklearn.model_selection import train_test_split  # cross_validation was removed from scikit-learn
from sklearn.preprocessing import scale
from theano import shared
import theano.tensor as T
from pymc3 import *
import warnings
warnings.filterwarnings('ignore')
# +
#Importing dataset
df = pd.read_csv('breast-cancer-wisconsin.csv')
df.drop(['id'], axis=1, inplace=True)
# Convert '?' to NaN
df[df == '?'] = np.nan
# Drop missing values and print shape of new DataFrame
df = df.dropna()
X = scale(np.array(df.drop(['class'], axis=1)))
y = np.array(df['class'])/2-1
#Split Data
X_tr, X_te, y_tr, y_te = train_test_split(X,y,test_size=0.2, random_state=42)
#Preprocess data for Modeling
ann_input = shared(X_tr)
ann_output = shared(y_tr)
n_hidden = 5
# +
init_1 = np.random.randn(X.shape[1], n_hidden)
init_2 = np.random.randn(n_hidden, n_hidden)
init_out = np.random.randn(n_hidden)
with pm.Model() as neural_network:
# Weights from input to hidden layer
weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
shape=(X.shape[1], n_hidden),
testval=init_1)
# Weights from 1st to 2nd layer
weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
shape=(n_hidden, n_hidden),
testval=init_2)
# Weights from hidden layer to output
weights_2_out = pm.Normal('w_2_out', 0, sd=1,
shape=(n_hidden,),
testval=init_out)
# Build neural-network
act_1 = T.tanh(T.dot(ann_input, weights_in_1))
act_2 = T.tanh(T.dot(act_1, weights_1_2))
act_out = T.nnet.sigmoid(T.dot(act_2, weights_2_out))
out = pm.Bernoulli('data',
act_out,
observed=ann_output)
# -
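# The same tanh-tanh-sigmoid forward pass can be sketched in plain NumPy with random weights (shapes only, no inference; the 9 input features match the Wisconsin dataset after dropping 'id' and 'class'):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 9, 5                      # 9 features, 5 hidden units as above
X = rng.normal(size=(4, n_in))             # 4 fake samples
w1 = rng.normal(size=(n_in, n_hidden))
w2 = rng.normal(size=(n_hidden, n_hidden))
w_out = rng.normal(size=n_hidden)

act1 = np.tanh(X @ w1)                     # first hidden layer
act2 = np.tanh(act1 @ w2)                  # second hidden layer
prob = 1 / (1 + np.exp(-(act2 @ w_out)))   # sigmoid output = P(class 1)
print(prob.shape)  # (4,)
```

# PyMC3 places Normal priors over w1, w2 and w_out and infers their posterior; this sketch just shows what one draw of that network computes.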
#infering parameters
with neural_network:
advi=pm.ADVI()
approx = advi.fit(n=5000,more_replacements={
ann_input:pm.Minibatch(X_tr),
ann_output:pm.Minibatch(y_tr)
}
)
#Replace shared variables with testing set
#(note that using this trick we could be streaming ADVI for big data)
ann_input.set_value(X_te)
ann_output.set_value(y_te)
# +
#Creater posterior predictive samples
trace= approx.sample(draws=5000)
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
pred = ppc['data'].mean(axis=0) > 0.5
print('Accuracy = {}%'.format((y_te == pred).mean() * 100))
# +
import seaborn as sns
grid = np.mgrid[-3:3:100j,-3:3:100j]
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid.shape[1], dtype=np.int8)
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['data'].std(axis=0).reshape(11,11), cmap=cmap)
ax.scatter(X_te[pred==0, 0], X_te[pred==0, 1])
ax.scatter(X_te[pred==1, 0], X_te[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
# -
| Classification/Breastcancer/PyMC3-NN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import jinja2
with open("template.inx", "r") as fid:
inx_template_str = fid.read()
inx_template = jinja2.Template(inx_template_str)
rendering= {
"_name": "Hello World",
}
print(inx_template.render(**rendering))
class objectview(object):
def __init__(self, d):
self.__dict__ = d
extension = objectview(
{
"name": "<NAME>",
"id": "world.hello.example.com"
}
)
print(inx_template.render(**rendering, extension=extension))
# Whee.
print(inx_template.render(**rendering, extension=extension))
inkex_dependency = objectview(
{
"type": "executable",
"path": "inkex.py"
}
)
with open("template.inx", "r") as fid:
inx_template_str = fid.read()
inx_template = jinja2.Template(inx_template_str)
print(inx_template.render(**rendering, extension=extension, dependencies=[inkex_dependency]))
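# The same pattern works without an external template file; a minimal inline sketch (the template string and variable name below are made up for illustration):

```python
import jinja2

# Inline template: {{ _name }} is substituted at render time
tmpl = jinja2.Template("<name>{{ _name }}</name>")
print(tmpl.render(_name="Hello World"))  # <name>Hello World</name>
```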
| inx_generator-Copy2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PythonData]
# language: python
# name: conda-env-PythonData-py
# ---
# Import Dependencies
import pandas as pd
# +
# Create a path to the csv and read it into a Pandas DataFrame
csv_path = "Resources/ted_talks.csv"
ted_df = pd.read_csv(csv_path)
ted_df.head()
# +
# Figure out the minimum and maximum views for a TED Talk
# +
# Create bins in which to place values based upon TED Talk views
# Create labels for these bins
# +
# Slice the data and place it into bins
# +
# Place the data series into a new column inside of the DataFrame
# +
# Create a GroupBy object based upon "View Group"
# Find how many rows fall into each bin
# Get the average of each column within the GroupBy object
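# A minimal sketch of the same binning workflow on synthetic data — the view counts, bin edges and labels below are assumptions for illustration, not this activity's solution:

```python
import pandas as pd

# Synthetic view counts standing in for the TED Talk data (values are made up)
demo = pd.DataFrame({"views": [500, 1500, 25000, 300000, 2000000]})

bins = [0, 1000, 100000, 1000000, 10000000]   # bin edges (assumed)
labels = ["Low", "Medium", "High", "Viral"]   # one label per bin
demo["View Group"] = pd.cut(demo["views"], bins=bins, labels=labels)
print(demo["View Group"].value_counts().sort_index())
```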
| 04-Pandas/3/Activities/04-Stu_TedTalks/Unsolved/BinningTed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="S3yFco-vH_VQ" outputId="dd03839d-2f6d-409e-9c88-6a313704d2d0"
4*5
# + colab={"base_uri": "https://localhost:8080/"} id="kMsXQr1VIIyj" outputId="27f1bfce-bfe3-45bf-a874-204a08b3dd32"
7-9
# + colab={"base_uri": "https://localhost:8080/"} id="eNspur1hIMhi" outputId="851a7249-0cb7-423c-e38a-bf4357149e34"
5/6
# + id="Aka7jwvzJDqu"
x=10
# + colab={"base_uri": "https://localhost:8080/"} id="NvH-3DRMIPLL" outputId="e06c4c19-4de2-4171-f724-f12af2dbb03a"
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="kiy8rCsRIjD2" outputId="9888f625-ee54-4cfd-c591-56b57f6a1b68"
x=10
y=12
x*y
# + colab={"base_uri": "https://localhost:8080/"} id="-gsZWt3CKBHe" outputId="3b908187-6a7c-47e2-bc38-d269d2ad52db"
x/y
# + colab={"base_uri": "https://localhost:8080/"} id="MUoJYI41KIJt" outputId="9dc9cd0d-0f55-4b80-8bb3-c9d99f19b13b"
x="Jovin"
print(x)
# + id="8h4MS__JKx1l"
x=10
y=20.5
z="Jovin"
# + colab={"base_uri": "https://localhost:8080/"} id="QaAp3qiMLzt0" outputId="a30bd9a8-0992-4d90-8ac7-9bc39a06b501"
type(x)
# + colab={"base_uri": "https://localhost:8080/"} id="BGzlYkMrMFgG" outputId="d8d2c620-1800-4849-914a-948e407a587c"
type(y)
# + colab={"base_uri": "https://localhost:8080/"} id="wWob7w1DMHjG" outputId="598c0125-2613-4555-d78e-cc37d6155582"
type(z)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="xQAteza4MKN-" outputId="fbc6894f-13d9-43da-bcb2-2941f0f52f19"
x="Jovin"
y="Rusail"
x+y
# + colab={"base_uri": "https://localhost:8080/"} id="DXDyJpo6NuVV" outputId="1938cf55-e7c1-461a-c020-dbcc79b286b7"
x=input("Enter a number")
# + colab={"base_uri": "https://localhost:8080/"} id="vTlinMUsOcDm" outputId="9740ce50-f72a-4d5b-e8d2-53596977233c"
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="BFtVw7_hOyiN" outputId="5cdc147f-a2ff-4ed8-b836-6cc4682ab672"
x=input("Enter the first number")
y=input("Enter the second number")
# + colab={"base_uri": "https://localhost:8080/", "height": 87} id="hq4otXvXRBZZ" outputId="441d9e89-b776-4d40-ed33-fd7c8cdd7bde"
print(x)
print(y)
print(x,y)
x+y
# + colab={"base_uri": "https://localhost:8080/"} id="DRslbDlWRdOi" outputId="37b1f5da-36c7-4c67-f5c6-e245c5b21420"
int(x)+int(y)
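# `input()` always returns strings, so `+` concatenates them; converting with `int()` gives numeric addition:

```python
x, y = "10", "20"         # what input() returns for these entries
print(x + y)              # 1020 (string concatenation)
print(int(x) + int(y))    # 30 (integer addition)
```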
# + id="pRnW5Cg8SsBA"
| Python_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# #!pip install chesslab
# -
import numpy as np
from chesslab.utils import load_pkl
from chesslab.training_tf import fitting
from sklearn.model_selection import train_test_split
import tensorflow as tf
# +
lr = 0.1
epochs=20
batch_size = 128
test_percent=0.1
path = 'D:/database/ccrl/'
name_data='ccrl_states_elo3.pkl'
name_labels='ccrl_results_elo3.pkl'
save_name='./tmp/tf_weights-relu-elo3.3'
optim = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=lr)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# -
download=False
if download:
from chesslab.utils import download_7z
path='./'
file_id = '1MFHFz_rxNziYSeN-9ruwnRiYskd0_9ss'
download_7z(file_id,path)
encoding_4={
'.':np.array([0],dtype=np.float32),
'p':np.array([1/12],dtype=np.float32),
'P':np.array([2/12],dtype=np.float32),
'b':np.array([3/12],dtype=np.float32),
'B':np.array([4/12],dtype=np.float32),
'n':np.array([5/12],dtype=np.float32),
'N':np.array([6/12],dtype=np.float32),
'r':np.array([7/12],dtype=np.float32),
'R':np.array([8/12],dtype=np.float32),
'q':np.array([9/12],dtype=np.float32),
'Q':np.array([10/12],dtype=np.float32),
'k':np.array([11/12],dtype=np.float32),
'K':np.array([12/12],dtype=np.float32)
}
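# The table above maps each piece character to index/12; the same mapping can be rebuilt programmatically and applied to a hypothetical 64-character board string (the board-string input format is an assumption for illustration):

```python
import numpy as np

# Rebuild the same piece-to-float table as encoding_4 above: index/12 per character
pieces = ".pPbBnNrRqQkK"
encoding = {c: np.array([i / 12], dtype=np.float32) for i, c in enumerate(pieces)}

# Hypothetical 64-character board string (starting position, '.' = empty square)
board = "rnbqkbnrpppppppp" + "." * 32 + "PPPPPPPPRNBQKBNR"
state = np.concatenate([encoding[c] for c in board]).reshape(8, 8)
print(state.shape)  # (8, 8)
```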
class Model_4():
def __init__(self,
n_classes=2):
initializer = tf.keras.initializers.GlorotNormal()
self.hw=[]
self.hb=[]
        self.hw.append( tf.Variable(initializer(shape=(7,7,1,32),dtype=np.float32),name="hl1weights",dtype="float32") )
        self.hb.append( tf.Variable(np.zeros(32,dtype=np.float32),name="hl1bias",dtype="float32") )
        #8x8x32
        self.hw.append( tf.Variable(initializer(shape=(5,5,32,64),dtype=np.float32),name="hl2weights",dtype="float32"))
        self.hb.append( tf.Variable(np.zeros(64,dtype=np.float32),name="hl2bias",dtype="float32"))
        #8x8x64
        self.hw.append( tf.Variable(initializer(shape=(3,3,64,128),dtype=np.float32),name="hl3weights",dtype="float32"))
        self.hb.append( tf.Variable(np.zeros(128,dtype=np.float32),name="hl3bias",dtype="float32"))
        #8x8x128
        self.hw.append( tf.Variable(initializer(shape=(8*8*128,256),dtype=np.float32),name="hl4weights",dtype="float32"))
        self.hb.append( tf.Variable(np.zeros(256,dtype=np.float32),name="hl4bias",dtype="float32"))
        self.hw.append( tf.Variable(initializer(shape=(256, n_classes),dtype=np.float32),name="outweights",dtype="float32"))
        self.hb.append( tf.Variable(np.zeros(n_classes,dtype=np.float32),name="outbias",dtype="float32"))
self.trainable_variables = []
for i in range(len(self.hw)):
self.trainable_variables.append(self.hw[i])
self.trainable_variables.append(self.hb[i])
def __call__(self,x):
out = tf.cast(x, tf.float32)
out = tf.reshape(out, shape=[-1, 8, 8, 1])
layer=0
out = tf.nn.conv2d(out,self.hw[layer], strides=[1,1,1,1], padding='SAME')
out = tf.add(out, self.hb[layer])
out = tf.nn.relu(out)
#8*8*32
layer+=1
out = tf.nn.conv2d(out,self.hw[layer], strides=[1,1,1,1], padding='SAME')
out = tf.add(out, self.hb[layer])
out = tf.nn.relu(out)
#8*8*64
layer+=1
out = tf.nn.conv2d(out,self.hw[layer], strides=[1,1,1,1], padding='SAME')
out = tf.add(out, self.hb[layer])
out = tf.nn.elu(out)
#8*8*128
layer+=1
out = tf.reshape(out,[-1, 8*8*128])
out = tf.matmul(out,self.hw[layer])
out = tf.add(out, self.hb[layer])
out = tf.nn.relu(out)
layer+=1
out = tf.matmul(out,self.hw[layer])
out = tf.add(out, self.hb[layer])
return out
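# The 8*8*128 flatten size in the dense layer above follows from 'SAME' padding: with stride 1 each convolution keeps the 8x8 spatial grid while only the channel count grows (1 -> 32 -> 64 -> 128). A quick sanity check of that arithmetic, plain Python only:

```python
import math

def same_out(size, stride):
    # output spatial size under 'SAME' padding: ceil(size / stride)
    return math.ceil(size / stride)

h = w = 8
channels = 1
for out_ch in (32, 64, 128):
    h, w = same_out(h, 1), same_out(w, 1)  # stride 1 keeps 8x8
    channels = out_ch
flat = h * w * channels  # size fed into the first dense layer
```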
# +
np.random.seed(0)
tf.random.set_seed(0)
x_data = load_pkl(path+name_data)
y_data = load_pkl(path+name_labels)[:,1]  # note: converts the one-hot labels to sparse class indices
print(x_data.shape)
print(y_data.shape)
x_train, x_test, y_train, y_test = train_test_split(
x_data, y_data, test_size = test_percent, random_state = 0, shuffle = True)
del x_data
del y_data
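# Taking column 1 of a two-class one-hot matrix is equivalent to taking the argmax over classes, which yields the sparse integer labels that SparseCategoricalCrossentropy expects. A quick illustration:

```python
import numpy as np

onehot = np.array([[1, 0],
                   [0, 1],
                   [1, 0],
                   [0, 1]], dtype=np.float32)

sparse_labels = onehot[:, 1]  # 0. where class 0 wins, 1. where class 1 wins
same = np.array_equal(sparse_labels, onehot.argmax(axis=1))
```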
# +
model = Model_4()
encoding=encoding_4
# + tags=[]
fitting(epochs=epochs,
x_train=x_train,
y_train=y_train,
x_test=x_test,
y_test=y_test,
model=model,
optimizer=optim,
batch_size=batch_size,
lr=lr,
loss_fn=loss_fn,
save_name=save_name,
encoding=encoding)
# -
fitting(epochs=20,
x_train=x_train,
y_train=y_train,
x_test=x_test,
y_test=y_test,
model= model,
load_name=save_name+'.10.h5',
save_name=save_name,)
fitting(epochs=10,
x_train=x_train,
y_train=y_train,
x_test=x_test,
y_test=y_test,
model= model,
load_name=save_name+'.30.h5',
save_name=save_name,)
| examples/training/training_tf-relu-elo3-single.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Horse collar data exploration
#
# <img align="right" src="https://anitagraser.github.io/movingpandas/pics/movingpandas.png">
#
# [](https://mybinder.org/v2/gh/anitagraser/movingpandas-examples/main?filepath=2-analysis-examples/3-horse-collar.ipynb)
#
# This notebook presents a systematic movement data exploration workflow.
# The proposed workflow consists of five main steps:
#
# 1. **Establishing an overview** by visualizing raw input data records
# 2. **Putting records in context** by exploring information from consecutive movement data records (such as: time between records, speed, and direction)
# 3. **Extracting trajectories, locations & events** by dividing the raw continuous tracks into individual trajectories, locations, and events
# 4. **Exploring patterns** in trajectory and event data by looking at groups of the trajectories or events
# 5. **Analyzing outliers** by looking at potential outliers and how they may challenge preconceived assumptions about the dataset characteristics
#
# The workflow is demonstrated using horse collar tracking data provided by Prof. <NAME> (University of Copenhagen) and the Center for Technology & Environment of Guldborgsund Municipality in Denmark but should be generic enough to be applied to other tracking datasets.
#
# The workflow is implemented in Python using Pandas, GeoPandas, and MovingPandas (http://movingpandas.org).
#
# For an interactive version of this notebook visit https://mybinder.org/v2/gh/anitagraser/movingpandas/master.
# +
import numpy as np
import pandas as pd
import geopandas as gpd
from geopandas import GeoDataFrame, read_file
from datetime import datetime, timedelta
from pyproj import CRS
import movingpandas as mpd
import warnings
warnings.simplefilter("ignore")
import hvplot.pandas # seems to be necessary for the following import to work
from holoviews import opts
opts.defaults(opts.Overlay(active_tools=['wheel_zoom']))
# -
mpd.__version__
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# ## Raw data import
df = read_file('../data/horse_collar.gpkg')
df['t'] = pd.to_datetime(df['timestamp'])
df = df.set_index('t').tz_localize(None)
print("This dataset contains {} records.\nThe first lines are:".format(len(df)))
df.head()
df.columns
df = df.drop(columns=['LMT_Date', 'LMT_Time',
'Origin', 'SCTS_Date', 'SCTS_Time', 'Latitude [?]', 'Longitude [?]',
'FixType', 'Main [V]', 'Beacon [V]', 'Sats', 'Sat',
'C/N', 'Sat_1', 'C/N_1', 'Sat_2', 'C/N_2', 'Sat_3', 'C/N_3', 'Sat_4',
'C/N_4', 'Sat_5', 'C/N_5', 'Sat_6', 'C/N_6', 'Sat_7', 'C/N_7', 'Sat_8',
'C/N_8', 'Sat_9', 'C/N_9', 'Sat_10', 'C/N_10', 'Sat_11', 'C/N_11',
'Easting', 'Northing',], axis=1)
df.head()
collar_id = df['CollarID'].unique()[0]
print("There is only one collar with ID {}.".format(collar_id))
df['Activity'].unique()
original_crs = df.crs
original_crs
# ## 1. Establishing an overview
#
# The first step in our proposed EDA workflow can be performed directly on raw
# input data since it does not require temporally ordered data. It is therefore suitable
# as a first exploratory step when dealing with new data.
#
# ### Q1.1 Geographic extent: Is the geographical extent as expected and are there holes in the spatial coverage?
df.to_crs({'init': 'epsg:4326'}).hvplot(title='Geographic extent of the dataset', geo=True, tiles='OSM', width=500, height=500)
# The main area (the horse's pasture?) is located south of Nykobing Strandhuse.
#
# However, we also find two records on the road northwest of the main area. Both points were recorded on 2018-11-14, the first day of the dataset.
pd.DataFrame(df).sort_values('lat').tail(2)
# A potential hypothesis for the origin of these two records is that the horse (or the collar) was transported on 2018-11-14, taking the road from Nykobing Falster south to the pasture.
#
# If we remove these first two records from the dataset, the remainder of the records are located in a small area:
df = df[2:].to_crs({'init': 'epsg:4326'})
( df.hvplot(title='OSM showing paths and fences', size=2, geo=True, tiles='OSM', width=500, height=500) +
df.hvplot(title='Imagery showing land cover details', size=2, color='red', geo=True, tiles='EsriImagery', width=500, height=500) )
# It looks like the horse generally avoids areas without green vegetation, since point patterns in these areas appear sparser than in other areas.
temp = df.to_crs(CRS(25832))
temp['geometry'] = temp['geometry'].buffer(5)
total_area = temp.dissolve(by='CollarID').area
total_area = total_area[collar_id]/10000
print('The total area covered by the data is: {:,.2f} ha'.format(total_area))
# ### Q1.2 Temporal extent: Is the temporal extent as expected and are there holes in the temporal coverage?
print("The dataset covers the time between {} and {}.".format(df.index.min(), df.index.max()))
print("That's {}".format(df.index.max() - df.index.min()))
df['No'].resample('1d').count().hvplot(title='Number of records per day')
# On most days there are 48 (+/- 1) records per day. However, there are some days with more records (in Nov 2018 and later between May and August 2019).
#
# There is one gap: On 2019-10-18 there are no records in the dataset and the previous day only contains 37 and the following day 27 records.
# ### Q1.3 Spatio-temporal gaps: Does the geographic extent vary over time or do holes appear during certain times?
#
# Considering that the dataset covers a whole year, it may be worthwhile to look at the individual months using small multiples map plots, for example:
df['Y-M'] = df.index.to_period('M')
a = None
for i in df['Y-M'].unique():
plot = df[df['Y-M']==i].hvplot(title=str(i), size=2, geo=True, tiles='OSM', width=300, height=300)
if a: a = a + plot
else: a = plot
a
# The largest change between months seems to be that the southernmost part of the pasture wasn't used in August and September 2019.
# ## 2. Putting records in context
#
# The second exploration step puts movement records in their temporal and geographic
# context. The exploration includes information based on consecutive movement data
# records, such as time between records (sampling intervals), speed, and direction.
# Therefore, this step requires temporally ordered data.
# ### Q2.1 Sampling intervals: Is the data sampled at regular or irregular intervals?
#
# For example, tracking data of migratory animals is expected to exhibit seasonal changes. Such changes in vehicle tracking systems, however, may indicate issues with data collection.
t = df.reset_index().t
df = df.assign(delta_t=t.diff().values)
df['delta_t'] = df['delta_t'].dt.total_seconds()/60
pd.DataFrame(df).hvplot.hist('delta_t', title='Histogram of intervals between consecutive records (in minutes)', bins=60, bin_range=(0, 60))
# The time delta between consecutive records is usually around 30 minutes.
#
# However, it seems that sometimes the interval has been decreased to around 15 minutes. This would explain why some days have more than the usual 48 records.
# ### Q2.2 Speed values: Are there any unrealistic movements?
#
# For example: Does the data contain unattainable speeds?
tc = mpd.TrajectoryCollection(df, 'CollarID')
traj = tc.trajectories[0]
traj.add_speed()
max_speed = traj.df.speed.max()
print("The highest computed speed is {:,.2f} m/s ({:,.2f} km/h)".format(max_speed, max_speed*3600/1000))
# ### Q2.3 Movement patterns: Are there any patterns in movement direction or speed?
pd.DataFrame(traj.df).hvplot.hist('speed', title='Histogram of speeds (in meters per second)', bins=90)
# The speed distribution shows no surprising patterns.
traj.add_direction(overwrite=True)
pd.DataFrame(traj.df).hvplot.hist('direction', title='Histogram of directions', bins=90)
# There is some variation in movement directions but no directions stand out in the histogram.
#
# Let's look at spatial patterns of direction and speed!
#
# ### Q2.4 Temporal context: Does the movement make sense in its temporal context?
#
# For example: Do nocturnal animal tracks show movement at night?
#
pd.DataFrame(traj.df).hvplot.heatmap(title='Mean speed by hour of day and month of year',
x='t.hour', y='t.month', C='speed', reduce_function=np.mean)
# The movement speed by hour of day shows a clear pattern throughout the year with earlier and longer fast movements during the summer months and later and slower movements during the winter months.
#
# #### Temperature context
#
# In addition to time, the dataset also contains temperature information for each record:
traj.df['n'] = 1
pd.DataFrame(traj.df).hvplot.heatmap(title='Record count by temperature and month of year',
x='Temp [?C]', y='t.month', C='n', reduce_function=np.sum)
pd.DataFrame(traj.df).hvplot.heatmap(title='Mean speed by temperature and month of year',
x='Temp [?C]', y='t.month', C='speed', reduce_function=np.mean)
# ### Q2.5 Geographic context: Does the movement make sense in its geographic context?
#
# For example: Do vessels follow traffic separation schemes defined in maritime maps? Are there any ship trajectories crossing land?
# +
traj.df['dir_class'] = ((traj.df['direction']-22.5)/45).round(0)
a = None
temp = traj.df
for i in sorted(temp['dir_class'].unique()):
plot = temp[temp['dir_class']==i].hvplot(geo=True, tiles='OSM', size=2, width=300, height=300, title=str(int(i*45))+"°")
if a: a = a + plot
else: a = plot
a
# -
# There are no obvious spatial movement direction patterns.
# +
traj.df['speed_class'] = (traj.df['speed']*2).round(1)
a = None
temp = traj.df
for i in sorted(temp['speed_class'].unique()):
filtered = temp[temp['speed_class']==i]
if len(filtered) < 10:
continue
plot = filtered.hvplot(geo=True, tiles='EsriImagery', color='red', size=2, width=300, height=300, title=str(i/2)) # alpha=max(0.05, 50/len(filtered)),
if a: a = a + plot
else: a = plot
a
# -
# Low speed records (classes 0.0 and 0.05 m/s) are distributed over the whole area with many points on the outline (fence?) of the area.
#
# Medium speed records (classes 0.1 and 0.15 m/s) seem to be more common along paths and channels.
# ## 3. Extracting trajectories & locations / events
#
# The third exploration step looks at individual trajectories. It therefore requires that
# the continuous tracks are split into individual trajectories. Analysis results depend on
# how the continuous streams are divided into trajectories, locations, and events.
# ### 3.1 Trajectory lines: Do the trajectory lines look plausible or are there indications of out of sequence positions or other unrealistic location jumps?
tc.hvplot()
# Due to the 30 minute reporting interval, the trajectories are rather sparse.
#
# The trajectories mostly stay within the (fenced?) area. However, there are a few cases of positions outside the area.
# #### Movement during week #1
daily = mpd.TemporalSplitter(tc).split(mode='day')
a = None
for i in range(0,7):
if a: a = a + daily.trajectories[i].hvplot(title=daily.trajectories[i].id, c='speed', line_width=2, cmap='RdYlBu', width=300, height=300)
else: a = daily.trajectories[i].hvplot(title=daily.trajectories[i].id, c='speed', line_width=2, cmap='RdYlBu', width=300, height=300)
a
# ### 3.2 Home/depot locations: Do day trajectories start and end at the same home (for human and animal movement) or depot (for logistics applications) location?
daily_starts = daily.get_start_locations()
daily_starts['month'] = daily_starts.index.month
daily_starts.hvplot(c='month', geo=True, tiles='EsriImagery', cmap='autumn', width=500, height=500)
# There is no clear preference for a certain home location where the horse would tend to spend the night.
# Instead of splitting by date, we can also specify a minimum movement speed and then split the continuous observation when this minimum speed is not reached for a certain time:
moving = mpd.TrajectoryCollection(traj.df[traj.df['speed'] > 0.05], 'CollarID')
moving = mpd.ObservationGapSplitter(moving).split(gap=timedelta(minutes=70))
moving.get_start_locations().hvplot(c='month', geo=True, tiles='EsriImagery', color='red', width=500, height=500)
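# The ObservationGapSplitter logic used above can be sketched without MovingPandas: walk the ordered timestamps and start a new segment whenever two consecutive records are further apart than the allowed gap (a simplified illustration, not the library's implementation):

```python
from datetime import datetime, timedelta

def split_on_gaps(timestamps, gap):
    # timestamps must be sorted; returns lists of consecutive observations
    segments, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > gap:
            segments.append(current)
            current = []
        current.append(t)
    segments.append(current)
    return segments

times = [datetime(2018, 11, 14, 12, 0) + timedelta(minutes=m)
         for m in (0, 30, 60, 250, 280)]
segments = split_on_gaps(times, gap=timedelta(minutes=70))
```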
# ### 3.3 Trajectory length
daily_lengths = [traj.get_length() for traj in daily]
daily_t = [traj.get_start_time() for traj in daily]
daily_lengths = pd.DataFrame(daily_lengths, index=daily_t, columns=['length'])
daily_lengths.hvplot(title='Daily trajectory length')
# The length of the daily trajectories varies between 1.6 and 6.2 km. (It is worth noting that this has to be considered a lower bound of the movement due to the sparseness of the tracking data.)
#
# The seasonal trends agree well with the previously discovered seasonal movement speed patterns: winter trajectories tend to be shorter than summer trajectories.
# ### 3.4 Covered area
# #### Method 1: Convex hulls around trajectory
daily_areas = [(traj.id, traj.to_crs(CRS(25832)).to_linestring().convex_hull.area/10000) for traj in daily]
daily_areas = pd.DataFrame(daily_areas, index=daily_t, columns=['id', 'area'])
daily_areas.hvplot(title='Daily covered area [ha]', y='area')
# #### Method 2: Buffered trajectory
daily_areas = [(traj.id, traj.to_crs(CRS(25832)).to_linestring().buffer(15).area/10000) for traj in daily]
daily_areas = pd.DataFrame(daily_areas, index=daily_t, columns=['id', 'area'])
daily_areas.hvplot(title='Daily covered area [ha]', y='area')
# The ten smallest areas are:
daily_areas.sort_values(by='area')[:10]
# The days with the smallest covered areas include the first and the last observation day (since they are only partially recorded). We can remove those:
daily_areas = daily_areas.drop(datetime(2018,11,14,12,30,8))
daily_areas = daily_areas.drop(datetime(2019,11,7,0,0,9))
# The smallest area for a complete day was observed on 2018-11-19 with only 1.2 ha:
a = None
for i in daily_areas.sort_values(by='area')[:3].id:
traj = daily.get_trajectory(i)
if a: a = a + traj.hvplot(title=i, c='speed', line_width=2, cmap='RdYlBu', width=300, height=300)
else: a = traj.hvplot(title=i, c='speed', line_width=2, cmap='RdYlBu', width=300, height=300)
a
# ## 3.5 Stop detection
#
# Instead of splitting the continuous track into daily trajectories, an alternative approach is to split it at stops. Stops can be defined as parts of the track where the moving object stays within a small area for a certain duration.
#
# Let's have a look at movement of one day and how stop detection parameter settings affect the results:
# +
MAX_DIAMETER = 100
MIN_DURATION = timedelta(hours=3)
one_day = daily.get_trajectory('30788_2018-11-17 00:00:00')
one_day_stops = mpd.TrajectoryStopDetector(one_day).get_stop_segments(
min_duration=MIN_DURATION, max_diameter=MAX_DIAMETER)
( one_day.hvplot(title='Stops in Trajectory {}'.format(one_day.id), line_width=7.0, color='slategray', width=500) *
one_day_stops.hvplot(size=200, line_width=7, tiles=None, color='deeppink') *
one_day_stops.get_start_locations().hvplot(geo=True, size=200, color='deeppink') )
# -
# Let's apply stop detection to the whole dataset:
# %%time
stops = mpd.TrajectoryStopDetector(tc).get_stop_points(min_duration=MIN_DURATION, max_diameter=MAX_DIAMETER)
len(stops)
# The spatial distribution reveals preferred stop locations:
stops.hvplot(geo=True, tiles='OSM', color='deeppink', size=MAX_DIAMETER, alpha=0.2, width=500)
stops['duration_h'] = (stops['end_time']-stops['start_time']).dt.total_seconds() / 3600
pd.DataFrame(stops)['duration_h'].hvplot.hist(title='Stop duration histogram', xlabel='Duration [hours]', ylabel='n', bins=30)
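# The stop definition used above (stay within max_diameter for at least min_duration) can be sketched for plain (t, x, y) points. This is a simplified greedy stand-in for TrajectoryStopDetector, assuming projected coordinates in meters:

```python
from datetime import datetime, timedelta
from math import hypot

def find_stops(points, max_diameter, min_duration):
    # points: sorted list of (t, x, y); greedily grow windows whose
    # pairwise spatial extent stays under max_diameter
    stops, i = [], 0
    while i < len(points):
        j = i
        while j + 1 < len(points) and all(
            hypot(points[k][1] - points[m][1], points[k][2] - points[m][2]) <= max_diameter
            for k in range(i, j + 2) for m in range(i, j + 2)
        ):
            j += 1
        if points[j][0] - points[i][0] >= min_duration:
            stops.append((points[i][0], points[j][0]))
            i = j + 1
        else:
            i += 1
    return stops

t0 = datetime(2018, 11, 17)
track = [(t0 + timedelta(hours=h), x, y) for h, (x, y) in enumerate(
    [(0, 0), (10, 5), (5, 10), (8, 2), (500, 500), (900, 900)])]
stops = find_stops(track, max_diameter=100, min_duration=timedelta(hours=3))
```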
# ## Continue exploring MovingPandas
#
#
# 1. [Bird migration analysis](1-bird-migration.ipynb)
# 1. [Ship data analysis](2-ship-data.ipynb)
# 1. [Horse collar data exploration](3-horse-collar.ipynb)
# 1. [Stop hotspot detection](4-stop-hotspots.ipynb)
# 1. [OSM traces](5-osm-traces.ipynb)
| 2-analysis-examples/3-horse-collar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Tg8gGY0VnKSG"
# # Recommendation System - Collaborative Filtering
# + [markdown] id="c2cVJy4_nKSG"
# In this case, filtering is based on items rather than users, using a similarity matrix of user ratings. If a user gives similar ratings to the same items that another user has rated, it stands to reason that the two users are similar, at least in terms of preferences, and would have similar preferences for other items. Here we reason the other way around: a set of items rated similarly by the same users is also similar. We use the K-Nearest Neighbors algorithm with cosine similarity to identify items that are in close proximity to each other. While the users in a dataset don't always give explicit ratings to the items in question, we can estimate implicit ratings from user behavior, such as how many times they have interacted with one item compared to another.
# [Source](https://medium.com/nerd-for-tech/building-a-reddit-recommendation-system-44ab6734d9d9)
# + [markdown] id="7X2eocc_nKSH"
# ## Collecting and Exploring the Data
# + id="tqKPm2H8nKSH"
# Importing libraries
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import seaborn as sns
# + id="HInHIyyHnKSI"
# Reading the dataframe
df = pd.read_csv('../datasets/reddit_user_data_count.csv')
# + id="tHfRbVInnKSI" outputId="606b228f-e3c2-4e89-e1e2-f7ae639c6393"
df.info()
df = df.iloc[:600000] # limit the size of the dataframe
df.head()
# + id="lqQ3KI5GnKSI" outputId="59ef9281-e086-450b-d12e-e654ea42e561"
# Rename columns
df.columns = ['user', 'subreddit', 'num_comments']
# Finding number of usernames and subreddits
users = df.user.unique().tolist()
subs = df.subreddit.unique().tolist()
print('Number of Usernames: {}'.format(len(users)))
print('Number of Subreddits: {}'.format(len(subs)))
# + [markdown] id="S8Dd4Z3UnKSJ"
# ## Finding the Implicit Ratings
#
# By calculating the total number of comments each user has made across all subreddits and then dividing the number of comments they made in a given subreddit by that total (scaled by 10), we can generate an implicit rating that represents a user's interest in one subreddit relative to all the other subreddits they have commented on.
#
# rating = comments in subreddit / user's total comments * 10
# + id="1IEeM5NunKSJ"
# Finding each user's total number of comments for all subreddits
dftotal = df.groupby('user')['num_comments'].sum().reset_index(name="total_comments")
# Merging each subreddit comments and total comments onto new dataframe
dfnew = pd.merge(df, dftotal, on='user', how='left')
# Calculate a user's subreddit rating based on total and max comments
dfnew['rating'] = dfnew['num_comments']/dfnew['total_comments']*10
# + id="oJxQNUVvnKSJ" outputId="e327191f-fc8f-4aec-884c-b191df4cfbdf"
dfnew
# + [markdown] id="1vg00oIWnKSK"
# We need numerical values for every field to create a similarity matrix. These lines of code show how we can make a set of separate dataframes with only the dataset’s unique usernames and subreddits, assign a fixed numerical id to each based on its index number, and then add those ids back into the dataset into convenient positions.
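# The id assignment described above (unique value -> fixed integer) is built by hand in the next cells; pandas also offers it directly via pd.factorize, e.g.:

```python
import pandas as pd

users = pd.Series(['carol', 'alice', 'bob', 'alice', 'carol'])
codes, uniques = pd.factorize(users, sort=True)  # codes index into the sorted uniques
user_ids = codes + 1  # 1-based ids, matching the index+1 convention used below
```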
# + id="OsFB4_vVnKSK" outputId="84c82f90-6063-492e-cfbc-ba4c71bdcd30"
# Create new dataframe and drop duplicate users
dfusers = df.drop_duplicates(subset='user')
# Drop subs
dfusers.drop(['subreddit'], inplace=True, axis=1)
# Sort by users
dfusers = dfusers.sort_values(['user'], ascending=True)
# Reset index
dfusers.reset_index(drop=True, inplace=True)
# Create user id from index
dfusers['user_id'] = dfusers.index+1
# Create new dataframe and drop duplicate subs
dfsubs = df.drop_duplicates(subset='subreddit')
# Drop users
dfsubs.drop(['user'], inplace=True, axis=1)
# Sort by subs
dfsubs = dfsubs.sort_values(['subreddit'], ascending=True)
# Reset index
dfsubs.reset_index(drop=True, inplace=True)
# Create user id from index
dfsubs['sub_id'] = dfsubs.index+1
# Merging user id onto dataframe, moving position
dfnew = pd.merge(dfnew, dfusers, on='user', how='left')
move_pos = dfnew.pop('user_id')
dfnew.insert(1, 'user_id', move_pos)
# Merging sub id onto dataframe, moving position
dfnew = pd.merge(dfnew, dfsubs, on='subreddit', how='left')
move_pos = dfnew.pop('sub_id')
dfnew.insert(3, 'sub_id', move_pos)
# + id="4N5jTLv9nKSK" outputId="0255aa8d-5f0f-4a61-c53a-8cac797784ac"
dfnew.drop(['num_comments_x', 'num_comments_y'], inplace=True, axis=1)
dfnew
# + [markdown] id="uqz_KiF5nKSL"
# ## Visualizing the Data
# + id="CbjgKo62nKSL" outputId="71f7aebc-f591-4e02-adb8-780f47873c10"
# %matplotlib inline
import matplotlib.pyplot as plt
# Counting number of users in each subreddit
dfcounts = dfnew['subreddit'].value_counts().rename_axis('subreddit').reset_index(name='tot_users').head(10)
# Plotting the Top 10 Subreddits with the Most Users
plt.rcParams["figure.figsize"] = (16,9)
dfcounts.plot.bar(x='subreddit', y='tot_users', rot=0, legend=None, color=['blue'])
plt.gcf().axes[0].yaxis.get_major_formatter().set_scientific(False)
plt.title('Top 10 Subreddits with the Most Users')
plt.xlabel('Subreddit')
plt.ylabel('Users')
plt.show()
# + id="xweg_CHRnKSL" outputId="9dee20ce-a543-4773-8b2d-2b89be521143"
# %matplotlib inline
import matplotlib.pyplot as plt
# Grouping by subreddit, summing by top 10 total comments
dfsum = dfnew.groupby(['subreddit']).sum()
dfsum = dfsum[['total_comments']].sort_values(by='total_comments', ascending=False).head(10)
# Plotting the Top 10 Subreddits with the Most Comments
plt.rcParams["figure.figsize"] = (16,9)
dfsum.plot.bar(y='total_comments', rot=0, legend=None, color=['blue'])
plt.gcf().axes[0].yaxis.get_major_formatter().set_scientific(False)
plt.title('Top 10 Subreddits with the Most Comments')
plt.xlabel('Subreddit')
plt.ylabel('Comments')
plt.show()
# + id="v0_2Sg0hnKSM" outputId="510d39e5-7cb9-4981-8df0-cbea4c16c9b3"
# %matplotlib inline
import matplotlib.pyplot as plt
# Counting number of subreddits each user follows
dfcounts = dfnew['user'].value_counts().rename_axis('user').reset_index(name='tot_subs').head(10)
# Plotting the Top 10 Users following the most subreddits
plt.rcParams["figure.figsize"] = (16,9)
dfcounts.plot.bar(x='user', y='tot_subs', rot=0, legend=None, color=['orange'])
plt.gcf().axes[0].yaxis.get_major_formatter().set_scientific(False)
plt.title('Top 10 Users Commenting on the Most Subreddits')
plt.xlabel('Users')
plt.ylabel('Subreddits')
plt.show()
# + id="ivLpORcCnKSM" outputId="3505cfc0-fed7-4fe6-9bd6-7bc608fe83db"
# %matplotlib inline
import matplotlib.pyplot as plt
# Grouping by subreddit, summing by top 10 total comments
dfsum = dfnew.groupby(['user']).sum()
dfsum = dfsum[['total_comments']].sort_values(by='total_comments', ascending=False).head(10)
# Plotting the Top 10 Users with the Most Comments
plt.rcParams["figure.figsize"] = (16,9)
dfsum.plot.bar(y='total_comments', rot=0, legend=None, color=['orange'])
plt.gcf().axes[0].yaxis.get_major_formatter().set_scientific(False)
plt.title('Top 10 Users with the Most Comments')
plt.xlabel('Users')
plt.ylabel('Comments')
plt.show()
# + [markdown] id="CtYNbCWLnKSM"
# ## Similarity Matrix and Data Reduction
#
# By eliminating non-numerical values, pivoting the dataset into a grid that compares all users to all subreddits in the dataset, and replacing the values between the users and subreddits with no existing connection from null to zero, we have created a vast matrix of relationships — although it is mostly empty. This is known as the problem of sparsity, which is that most users have not commented on the majority of subreddits, and most subreddits do not have comments from the majority of users.
# + id="a7g_fCZJnKSN"
# Create new dataframe
dfnum = dfnew
# Drop non-numerical columns
dfnew.drop(['user','subreddit','total_comments','num_comments'], inplace=True, axis=1)
# + id="8fNApM3ZnKSN" outputId="13392f56-7051-4ebe-bda0-e6daeb53d634"
# !pip install pandas==0.21
import pandas as pd
# Pivot dataframe into a matrix of total ratings for users and subs
dfrat = dfnum.pivot(index='sub_id', columns='user_id', values='rating')
# Replace all null values with 0
dfrat.fillna(0,inplace=True)
# !pip install pandas
# + id="g7F54GnLnKSN" outputId="89961ea2-4677-4440-b7f8-abd2c2406b55"
dfrat
# + [markdown] id="L6sfP0fnnKSO"
# We aggregate the number of users who commented on different subreddits, and the number of subreddits that were commented on by different users, and project those numbers onto a scatter plot to see all of the dataset represented as points.
# + id="OLdOueIHnKSO"
# Calculating number of users commenting per sub
num_users = dfnum.groupby('sub_id')['rating'].agg('count')
# Calculating number of subs per user
num_subs = dfnum.groupby('user_id')['rating'].agg('count')
# + id="JyE6FY5GnKSO" outputId="3ab24f76-349b-4f04-fdf8-4812ceaecfa8"
# %matplotlib inline
import matplotlib.pyplot as plt
# Plotting number of users commenting per sub
f,ax = plt.subplots(1,1,figsize=(16,9))
plt.scatter(num_users.index,num_users,color='blue')
plt.title('Number of Users Commenting per Sub')
plt.xlabel('Sub ID')
plt.ylabel('Number of Users')
plt.show()
# + id="L9YLR2w2nKSO" outputId="f140f0c1-4a22-4fc9-a583-17a4469a4ffc"
# %matplotlib inline
import matplotlib.pyplot as plt
# Plotting number of subs commented on per user
f,ax = plt.subplots(1,1,figsize=(16,9))
plt.scatter(num_subs.index,num_subs,color='orange')
plt.title('Number of Subs Commented on per User')
plt.xlabel('User ID')
plt.ylabel('Number of Subs')
plt.show()
# + [markdown] id="EPPUoRG4nKSO"
# We can also make use of the Compressed Sparse Row (CSR) format, a data structure for sparse matrices, to help us handle the system. Even a sparse matrix with many zeroes such as ours is rather sizable and demands a great deal of computational power, but because the zeros contain no useful information they only increase the time and complexity of the matrix operations. The solution is to use an alternate data structure to represent the sparse data, which ultimately amounts to ignoring the zero values and focusing only on the sections of the matrix that are dense.
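# CSR is a data layout rather than a learning algorithm: it stores only the nonzero values plus two index arrays. A stdlib-only sketch of the same (data, indices, indptr) triplet that scipy's csr_matrix builds internally:

```python
def to_csr(dense):
    # data: nonzero values in row-major order; indices: their column ids;
    # indptr[r]:indptr[r+1] slices out row r of (data, indices)
    data, indices, indptr = [], [], [0]
    for row in dense:
        for col, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(col)
        indptr.append(len(data))
    return data, indices, indptr

dense = [[0, 0, 3],
         [0, 0, 0],
         [4, 0, 5]]
data, indices, indptr = to_csr(dense)
```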
# + id="TPNe-kqunKSO"
# Limiting dataframe to only subreddits with 3 or more commenting users
dflast = dfrat.loc[num_users[num_users > 3].index,:]
# Limiting dataframe to only users following 1 or more subs
dflast = dflast.loc[:,num_subs[num_subs > 1].index]
# Removing sparsity from the ratings dataset
csr_data = csr_matrix(dflast.values)
dflast.reset_index(inplace=True)
# + id="8cNPoksSnKSP" outputId="e8319d46-0214-4e11-d938-3bf238601d86"
dflast
# + [markdown] id="Xnc8j-CinKSP"
# ## Subreddit Recommender
#
# We fit the CSR data into the KNN algorithm with cosine distance as the metric in order to compute similarity. Then we define the subreddit recommender function, set the number of recommended subreddits to 10, search for the inputted subreddit in the database, query its nearest neighbors (one extra, since the closest match is the subreddit itself), sort them by distance, and output the 10 most similar subreddits.
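# The similarity search itself boils down to cosine distances between rating vectors. A minimal stdlib sketch of what the brute-force NearestNeighbors query computes, on hypothetical toy ratings:

```python
from math import sqrt

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return 1 - dot / (nu * nv)

ratings = {              # sub -> rating vector over the same users (toy data)
    'gaming':   [5, 4, 0, 1],
    'pcgaming': [4, 5, 0, 0],
    'knitting': [0, 0, 5, 4],
}

def nearest(sub, k=1):
    # brute-force scan, smallest cosine distance first
    others = [(cosine_distance(ratings[sub], v), name)
              for name, v in ratings.items() if name != sub]
    return [name for _, name in sorted(others)[:k]]

best = nearest('gaming', k=1)
```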
# + id="DitSK-5lnKSP"
# Using K-Nearest Neighbors as a similarity search with cosine similarity
knn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=20, n_jobs=-1)
knn.fit(csr_data)
# Defining the subreddit recommender function
def subreddit_recommender(sub_name):
    num_subs_to_recommend = 10
    sub_list = dfsubs[dfsubs['subreddit'].str.contains(sub_name)]
    if len(sub_list):
        # map the subreddit name to its row in the ratings matrix
        sub_idx = sub_list.iloc[0]['sub_id']
        sub_idx = dflast[dflast['sub_id'] == sub_idx].index[0]
        distances, indices = knn.kneighbors(csr_data[sub_idx], n_neighbors=num_subs_to_recommend+1)
        # sort neighbours by ascending distance and skip the first one, which is the query itself
        rec_sub_indices = sorted(zip(indices.squeeze().tolist(), distances.squeeze().tolist()), key=lambda x: x[1])[1:]
        recommend_frame = []
        for val in rec_sub_indices:
            sub_idx = dflast.iloc[val[0]]['sub_id']
            idx = dfsubs[dfsubs['sub_id'] == sub_idx].index
            recommend_frame.append({'Subreddit': dfsubs.iloc[idx]['subreddit'].values[0], 'Distance': val[1]})
        return pd.DataFrame(recommend_frame, index=range(1, num_subs_to_recommend+1))
    return "No subreddits found. Please check your input"
# + [markdown] id="iqc2URMJnKSP"
# ### Some Examples
# + id="qH39RhE7nKSQ" outputId="13f8a9fa-b70a-448e-f0d0-e380bb163ac3"
subreddit_recommender("CryptoCurrencies")
# + id="-A8DYOa7nKSQ" outputId="8d475f17-4b38-4481-a2b3-3d3ae1f886b0"
subreddit_recommender("ApplyingToCollege")
# + id="h5_ZnViTnKSQ" outputId="6d5359af-8f3c-4724-fb02-b49a3eb942c0"
subreddit_recommender("gaming")
# + id="Oxp5WdsgnKSQ" outputId="1fc9286a-c220-40e7-b03c-a10e9d777c57"
subreddit_recommender("ProgrammingLanguages")
# + [markdown] id="zUQsRa_VnKSQ"
# As the results above show, this type of recommendation system, built on an item similarity matrix, works well. Future work could build on it to incorporate diversity into the recommendations.
| diversity/embeddings/user-matrix-recsys.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from core import *
from core_mps import *
from quantum_plots import *
from mps.mpo import MPO, MPOList
# ## Time evolution
# ### a) Ladder operators
# We want to construct an operator that maps a binary number $s=s_1s_2\ldots s_m$ to $s+1$ or $s-1$. Let us begin with the operator $S^+$ which will increase the value of the register by one. The table of additions is
#
# | a | b | a+b | c |
# |---|---|-----|---|
# | 0 | 0 | 0 | 0 |
# | 0 | 1 | 1 | 0 |
# | 1 | 0 | 1 | 0 |
# | 1 | 1 | 0 | 1 |
# We can implement this with a tensor $A_{cb}^{a',a}$ that is 1 only on the values of that table.
def mpoSup(n, **kwdargs):
A = np.zeros((2,2,2,2))
A[0,0,0,0] = 1.
A[0,1,1,0] = 1.
A[1,0,1,1] = 1.
A[0,1,0,1] = 1.
R = A[:,:,:,[1]]
L = A[[0],:,:,:] # + A[[1],:,:,:]
return MPO([L] + [A] * (n-2) + [R], **kwdargs)
# Similarly, we would have another tensor for the -1 subtraction
#
# | a | b | a-b | c |
# |---|---|-----|---|
# | 0 | 0 | 0 | 0 |
# | 0 | 1 | 1 | 1 |
# | 1 | 0 | 1 | 0 |
# | 1 | 1 | 0 | 0 |
def mpoSdown(n, **kwdargs):
A = np.zeros((2,2,2,2))
A[0,0,0,0] = 1.
A[0,1,1,0] = 1.
A[0,0,1,1] = 1.
A[1,1,0,1] = 1.
R = A[:,:,:,[1]]
L = A[[0],:,:,:] # + A[[1],:,:,:]
return MPO([L] + [A] * (n-2) + [R], **kwdargs)
# And finally, if we want to make a superposition of both changes
# $$O = \epsilon_0 + \epsilon_1 S^+ + \epsilon_2 S^-,$$
# we can do it easily with bond dimension 3.
def mpo_combined(n,a,b,c, **kwdargs):
A = np.zeros((3,2,2,3))
# Internal bond dimension 0 is nothing, 1 is add 1, 2 is subtract 1
A[0,0,0,0] = 1.
A[0,1,1,0] = 1.
# Increase
A[0,1,0,1] = 1.
A[1,0,1,1] = 1.
# Decrease
A[2,1,0,2] = 1.
A[0,0,1,2] = 1.
R = a*A[:,:,:,[0]] + b*A[:,:,:,[1]] + c*A[:,:,:,[2]]
L = A[[0],:,:,:] # + A[[1],:,:,:] + A[[2],:,:,:]
return MPO([L] + [A] * (n-2) + [R], **kwdargs)
# We can reconstruct the full operators from the MPO representation. The result is the tridiagonal matrices we expect
mpoSup(3).tomatrix()
mpoSdown(3).tomatrix()
mpo_combined(3, 1, 2, 3).tomatrix()
# +
from mps.truncate import simplify
import mps.expectation  # used by the 'trunc' debug branch in apply_all
def apply_all(mpos, ψmps, canonical=True, tolerance=DEFAULT_TOLERANCE, normalize=True, debug=[]):
def multiply(A, B):
C = np.einsum('aijb,cjd->acibd',A,B)
s = C.shape
return C.reshape(s[0]*s[1],s[2],s[3]*s[4])
err = 0.
for (i,mpo) in enumerate(reversed(mpos)):
ψmps = MPS([multiply(A,B) for A,B in zip(mpo,ψmps)])
if canonical:
newψmps, theerr, _ = simplify(ψmps, tolerance=tolerance, normalize=normalize)
theerr = np.sqrt(theerr)
if 'norm' in debug:
print(f'Initial state norm {ψmps.norm2()}, final {newψmps.norm2()}')
elif 'trunc' in debug:
n1 = ψmps.norm2()
sc = abs(mps.expectation.scprod(ψmps, newψmps))
n2 = newψmps.norm2()
real_err = np.sqrt(2*np.abs(1.0 - sc/np.sqrt(n1*n2)))
D = max(A.shape[-1] for A in ψmps)
print(f'error={theerr:5g}, estimate={np.sqrt(real_err):5g}, norm={n1:5f}, after={n2:3f}, D={D}')
err += theerr
ψmps = newψmps
newψmps = None
return ψmps, err
# -
# ### b) Fokker-Planck equation
# Let us assume a variable that follows a Wiener process $W$ in the Ito representation
# $$dX = \mu(X,t)dt + \sigma(X,t) dW.$$
#
# The probability distribution for the random variable $X$ evolves as
# $$\frac{\partial}{\partial t}p(x,t) = -\frac{\partial}{\partial x} \left[\mu(x,t)p(x,t)\right] + \frac{\partial^2}{\partial x^2}[D(x,t)p(x,t)].$$
# We are going to use a finite-difference solver for this equation, with the following approximations
# $$\frac{\partial}{\partial x}f(x) = \frac{f(x+\delta)-f(x-\delta)}{2\delta} + \mathcal{O}(\delta^2),$$
# $$\frac{\partial^2}{\partial x^2}f(x) = \frac{f(x+\delta)+f(x-\delta)-2f(x)}{\delta^2} + \mathcal{O}(\delta^2).$$
# Assuming constant drift and diffusion and labelling $p(x_s,t) = p_s(t),$ we have
# $$p_s(t+\delta t) = p_s(t) + \delta t \left[\mu\frac{p_{s-1}(t)-p_{s+1}(t)}{2\delta{x}}
# + D \frac{p_{s+1}(t)+p_{s-1}(t)-2p_s(t)}{\delta{x}^2}\right].$$
# In terms of our ladder operators,
# $$\vec{p}(t+\delta t) = \left(1-2\delta{t}\frac{D}{\delta{x}^2}\right)\vec{p}
# +\delta t\left(-\frac{\mu}{2\delta{x}}+\frac{D}{\delta{x}^2}\right)S^+\vec{p}
# +\delta t\left(+\frac{\mu}{2\delta{x}}+\frac{D}{\delta{x}^2}\right)S^-\vec{p}.$$
# But this equation blows up exponentially unless $\delta tD/\delta x^2 \ll 1.$
# An alternative is to write
# $$\frac{p(t+\delta t)-p(t)}{\delta t} = \frac{1}{2}\hat{G}\left[p(t+\delta t)+p(t)\right],$$
# leading to
# $$\left(1-\frac{\delta t}{2}\hat{G}\right) p(t+\delta t) = \left(1 + \frac{\delta t}{2}\hat{G}\right)p(t),$$
# and the numerically stable solution
# $$ p(t+\delta t) = \left(1-\frac{\delta t}{2}\hat{G}\right)^{-1}\left(1 + \frac{\delta t}{2}\hat{G}\right)p(t).$$
# The following operator implements the MPO $(1+\delta{t}\hat{G}).$ The sign and factors $\tfrac{1}{2}$ can be changed by simply changing $\delta{t}.$
def mpo_drift(n, δt, δx, μ, D, **kwdargs):
Dx = D/δx**2
μx = μ/(2*δx)
a = 1 - 2*δt*Dx
b = δt*(Dx-μx)
c = δt*(Dx+μx)
print(f'δx={δx}, δt={δt}, D={D}, μ={μ}')
print(f'Coefficients: a={a}, b={b}, c={c}')
return mpo_combined(n, a, b, c, **kwdargs)
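# As a dense-matrix sanity check of the Crank-Nicolson update above (our own addition, not part of the MPO solver), the step $p(t+\delta t) = (1-\tfrac{\delta t}{2}\hat{G})^{-1}(1+\tfrac{\delta t}{2}\hat{G})p(t)$ can be written with an explicit tridiagonal generator; `crank_nicolson_step` is our own name.

```python
import numpy as np

def crank_nicolson_step(p, dt, dx, mu, D):
    """One implicit step of dp/dt = -mu dp/dx + D d2p/dx2 on a uniform grid."""
    n = p.size
    # Tridiagonal generator G from the finite-difference formulas above:
    # G[s, s+1] = D/dx^2 - mu/(2 dx), G[s, s-1] = D/dx^2 + mu/(2 dx)
    main = -2 * D / dx**2 * np.ones(n)
    up = (D / dx**2 - mu / (2 * dx)) * np.ones(n - 1)
    down = (D / dx**2 + mu / (2 * dx)) * np.ones(n - 1)
    G = np.diag(main) + np.diag(up, 1) + np.diag(down, -1)
    # (1 - dt/2 G) p(t+dt) = (1 + dt/2 G) p(t)
    A = np.eye(n) - 0.5 * dt * G
    B = np.eye(n) + 0.5 * dt * G
    return np.linalg.solve(A, B @ p)
```

# Away from the grid boundaries this update conserves total probability, which makes it a convenient cross-check for the MPO-based evolution at small system sizes.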
# We test the method with a Gaussian probability distribution as initial state
# +
import scipy.sparse as sp
import os.path
from mps.truncate import cgs
def FokkerPlanck(N, σ, T, steps, b=None, a=None, μ=0.0, D=1.0, filename=None):
if b is None:
b = 7*σ
if a is None:
a = -b
δx = (b-a)/2**N
times = np.linspace(0, T, steps)
δt = times[1]
ψmps0 = GaussianMPS(N, σ, a=a, b=b, GR=False, simplify=True, normalize=True)
ψ0 = ψmps0.tovector()
x = np.linspace(a, b, 2**N)
mpo1 = mpo_drift(N, 0.5*δt, δx, μ, D, simplify=False)
mpo2 = mpo_drift(N, -0.5*δt, δx, μ, D, simplify=False)
op1 = sp.csr_matrix(mpo1.tomatrix())
op2 = sp.csr_matrix(mpo2.tomatrix())
ψ = [ψ0]
print(f'int(ψ)={np.sum(ψ0)}, |ψ|={np.linalg.norm(ψ0)}')
for t in times[1:]:
ψ0 = sp.linalg.spsolve(op2, op1 @ ψ0)
n0 = np.linalg.norm(ψ0)
ψmps0, err = mps.truncate.cgs(mpo2, mpo1.apply(ψmps0))
ψ1 = ψmps0.tovector()
n1 = np.linalg.norm(ψ1)
sc = 1 - np.vdot(ψ1, ψ0)/(n1*n0)
print(f'int(ψ)={np.sum(ψ0):5f}, |ψ|={n0:5f}, |ψmps|={n1:5f}, sc={sc:5g}, err={err:5g}')
ψ.append(ψ1)
if filename is not None:
with open(filename,'wb') as f:
pickle.dump((ψ, x, times, D, μ, b), f)
return ψ, x, times
import mps.tools
if not os.path.exists('data/fokker-planck-2d-a.pkl'):
FokkerPlanck(10, 1.0, 10, 100, μ=-0.2, D=0.1, b=10, filename='data/fokker-planck-2d-a.pkl');
| 03 MPS Finite differences.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="jxlNgEnbUpqe"
# # How to Train YOLO-Net for License Plate Detection
#
# + [markdown] colab_type="text" id="Z-5LoXV-HeuP"
# ## Introduction
# + [markdown] colab_type="text" id="GZAbJHV6HdYm"
# In this notebook, we will train YOLO-Net for License Plate (LP) detection using
# - [DarkNet framework](https://github.com/AlexeyAB/darknet.git) by Alexey - an open source neural network framework written in C and CUDA
# - [darknet53.conv](https://pjreddie.com/media/files/darknet53.conv.74) - YOLO v3 architecture based model trained on Imagenet dataset as pretrained model
# - [OpenALPR benchmark](https://platerecognizer.com/number-plate-datasets/) - as training dataset
#
# The process of training YOLO-Net for LP detection involves,
# 1. Environment Setup
# ```
# 1.1 Clone DarkNet framework
# 1.2 Compile DarkNet framework
# ```
# 2. Data Preparation
# ```
# 2.1 Download dataset
# 2.2 Generate required text files for training
# 2.3 Download pretrained model
# 2.4 Download config file
# ```
# 3. Train the Model
#
# 4. Inference
# ```
# 4.1 Download test image
# 4.2 Utility functions
# 4.3 Load the trained model
# 4.4 Infer on the test image
# 4.5 Display inference
# ```
# + [markdown] colab_type="text" id="nI_EkapSIwN2"
# ## 1. Environment Setup
# + [markdown] colab_type="text" id="uBuQ4P8OXEkF"
# ### 1.1 Clone DarkNet framework
#
#
# + [markdown] colab_type="text" id="gCl2liylI30s"
# We will use the [DarkNet framework](https://github.com/AlexeyAB/darknet.git) by AlexeyAB, which includes many modifications and improvements and is actively maintained.
# + colab={} colab_type="code" id="EvmVYyTkUT47"
# !git clone https://github.com/AlexeyAB/darknet.git
# + [markdown] colab_type="text" id="HhHP4xO0ZHxC"
# ### 1.2 Compile DarkNet framework
#
# + [markdown] colab_type="text" id="8HJags-yI9cE"
# Before we compile, we have to make some changes to the `Makefile` to enable the following:
# 1. Build darknet with OpenCV
# 1. Build with CUDA enabled
# 1. Build with cuDNN enabled
# + colab={} colab_type="code" id="3nWPWUSOZG95"
# %cd darknet
# !sed -i 's/OPENCV=0/OPENCV=1/' Makefile # Build darknet with OpenCV
# !sed -i 's/GPU=0/GPU=1/' Makefile # Build with CUDA enabled
# !sed -i 's/CUDNN=0/CUDNN=1/' Makefile # Build with cuDNN enabled
# !make |& tee build_log.txt
print('Compilation of DarkNet finished...')
# + [markdown] colab_type="text" id="mYOUy3nmJEHu"
# ## 2. Data Preparation
# + [markdown] colab_type="text" id="Fhe0Fy8rgF9w"
# ### 2.1 Download dataset
# + [markdown] colab_type="text" id="N70Top5klhyg"
# The [OpenALPR benchmark](https://github.com/openalpr/benchmarks/tree/master/endtoend/) is a collection of labeled images of vehicles in Europe, Brazil and the US. Each image has a bounding box around the plate and the value of the license plate.
#
# To train on a custom dataset, we need to create a `.txt` file for each `.jpg` image file - in the same directory and with the same name, but with the `.txt` extension - containing the object number and object coordinates for each object on the image, one object per line:
#
# `<object-class> <x_center> <y_center> <width> <height>`
#
# Where:
# * `<object-class>` - integer object number from `0` to `(classes-1)`
# * `<x_center> <y_center> <width> <height>` - float values **relative** to the width and height of the image, in the range `(0.0, 1.0]`
#   * for example: `<x> = <absolute_x> / <image_width>` or `<height> = <absolute_height> / <image_height>`
# * Note: `<x_center> <y_center>` are the center of the rectangle (not the top-left corner)
#
# We have shared a prepared dataset which can be downloaded from [here](https://www.dropbox.com/s/x38sqxxmem335gv/images.zip?dl=0).
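# As an illustration of the label format above, a small helper (our own sketch, not part of the prepared dataset) that turns an absolute pixel box `(left, top, width, height)` into one darknet label line could look like this:

```python
def to_yolo_line(object_class, left, top, width, height, image_width, image_height):
    # Convert the top-left corner to the box center, then normalize
    # everything by the image dimensions, as the darknet format expects.
    x_center = (left + width / 2) / image_width
    y_center = (top + height / 2) / image_height
    return (f"{object_class} {x_center:.6f} {y_center:.6f} "
            f"{width / image_width:.6f} {height / image_height:.6f}")
```

# For example, a 200x100 box whose top-left corner is at (100, 50) in a 400x200 image is centered in the image, so every normalized value comes out as 0.5.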
# + colab={} colab_type="code" id="SXh7Q1grgZfX"
# !wget "https://www.dropbox.com/s/x38sqxxmem335gv/images.zip?dl=0" -O dataset.zip
# !unzip -q dataset.zip
# + [markdown] colab_type="text" id="0kuAWr6KiN7W"
# ### 2.2 Generate required text files for training
# + [markdown] colab_type="text" id="hIOpeu_BJdLE"
# We will create two text files, 1) `data_train.txt` and 2) `data_test.txt`, which contain the absolute paths to the train and test images respectively. The dataset is split into train and test in the ratio of 90:10. So, we will use 90% of the total images for training and the rest for testing after a few iterations of training.
# + colab={} colab_type="code" id="edOxTAduih9b"
import random
import os
import subprocess
import sys
image_dir = "./images"
f_val = open("data_test.txt", 'w')
f_train = open("data_train.txt", 'w')
path, dirs, files = next(os.walk(image_dir))
data_size = len(files)
ind = 0
data_test_size = int(0.1 * data_size)  # hold out 10%, matching the 90:10 split described above
test_array = random.sample(range(data_size), k=data_test_size)
for f in os.listdir(image_dir):
if(f.split(".")[-1] == "jpg"):
ind += 1
if ind in test_array:
f_val.write(image_dir+'/'+f+'\n')
else:
f_train.write(image_dir+'/'+f+'\n')
f_train.close()
f_val.close()
# + [markdown] colab_type="text" id="9T0JTwf3e7ed"
# ### 2.3 Download pretrained model
#
# + [markdown] colab_type="text" id="5grSqQL2J90K"
# When you train an object detector on custom dataset, it is a good idea to leverage existing models trained on very large datasets even though the large dataset may not contain the object you are trying to detect. This process is called *transfer learning*.
#
# We will use YOLO v3 architecture based model trained on Imagenet dataset as pretrained model.
# + colab={} colab_type="code" id="5nXDXWlBYOMs"
# !wget "https://www.dropbox.com/s/18dwbfth7prbf0h/darknet53.conv.74?dl=1" -O darknet53.conv.74
# + [markdown] colab_type="text" id="gVv7uhclj6lR"
# ### 2.4 Download Config file
#
#
#
# + [markdown] colab_type="text" id="s8ftz3RYJ3q9"
# We have provided config files to specify the various training parameters. A gist of all the parameters is as follows,
#
# 1. `class.names`: contains the names of all the classes to be trained. In our case it contains `License plate` as class name.
# 1. `yolov3-LP-train.cfg`: contains model training related parameters
# 1. `yolov3-LP-test.cfg`: contains parameters related to model testing
# 1. `yolov3-LP-setup.data`: The content of the file is as follow
# >
# ```shell
# classes = 1 # number of object classes. It's 1 in our case.
# train = data_train.txt # path to text file containing absolute path to train images
# valid = data_test.txt # path to text file containing absolute path to test images
# names = class.names # path to file containing the names of all classes
# backup = backup/ # path to an existing directory where intermediate weights files will be stored as the training progresses
# ```
# + colab={} colab_type="code" id="aixPEsE3kjMX"
# !wget "https://www.dropbox.com/sh/5y72h8ul8654y9i/AAAFwOwOl7bsQ4BmuxraKBRta?dl=0" -O yolov3_LP.zip
# !unzip -q yolov3_LP.zip
# + [markdown] colab_type="text" id="jxoXQSwOBPG_"
# ## 3. Train the Model
#
# + [markdown] colab_type="text" id="SN6ETfQlKEKw"
# Execute following command to start model training, by specifying
# 1. path to the setup file,
# 1. path to config file,
# 1. path to pretrained convolutional weights file
#
# We are passing some flags such as,
# - `dont_show` to prevent displaying the graphs.
# Since `Colab` does not support displaying them, this flag avoids the code crashing. However, you can display the graphs if you run this notebook on your local system.
# - `map` - to calculate mAP - mean average precision for the test data specified by `data_test.txt` file.
# + colab={} colab_type="code" id="cUwdaUV0BRjy"
# !./darknet detector train yolov3-LP-setup.data yolov3-LP-train.cfg darknet53.conv.74 -dont_show -map 2> train_log.txt
# + [markdown] colab_type="text" id="yzTdXNeHu1iE"
# ### Caveat
# + [markdown] colab_type="text" id="FvZd5BfuPABV"
# Ideally, as the training process progresses, the training loss should decrease. However, we observed that occasionally (not always) the training loss suddenly shoots up at around ~100 iterations. This may also lead to a code crash.
#
# In such cases, we recommend restarting the training process and keeping a watch on the training loss. It should decrease as training progresses.
# + [markdown] colab_type="text" id="g-TchvJiDSwt"
# ### Download Link
#
# + [markdown] colab_type="text" id="8UY5etbWTpjJ"
# After 500 iterations of training, the model achieved an mAP@0.5 of around 94%. The same model can be downloaded from [here](https://www.dropbox.com/s/vw9omi6tjntp6vr/yolov3-LP-train_best.weights?dl=0), and we will use it for LP detection with YOLO-Net in the ALPR system in the next notebook.
# + [markdown] colab_type="text" id="02Hl-s5DF6Mu"
# ## 4. Inference
#
#
# + [markdown] colab_type="text" id="v2q6BMfiKdkD"
# ### 4.1 Download test image
# + colab={} colab_type="code" id="hvUSHzxQK0gY"
# !wget "https://raw.githubusercontent.com/openalpr/benchmarks/master/endtoend/br/JOG9221.jpg" -O test_img.jpg
# + [markdown] colab_type="text" id="VY0XwKXYLnMQ"
# ### 4.2 Utility functions
# + [markdown] colab_type="text" id="br274ZMdqgVu"
# Here, we define a few utility functions:
# - `getOutputsNames`: gets the names of the output layers of the given network.
# - `postprocess`: uses `cv.dnn.NMSBoxes` internally to perform non-maximum suppression, eliminating redundant overlapping boxes with lower confidences
# - `drawPred`: draws the predicted bounding box
#
# + cellView="form" colab={} colab_type="code" id="LDamrM56L7Of"
#@title
# Import necessary modules
import cv2 as cv
# Get the names of the output layers
def getOutputsNames(net):
""" Get the names of the output layers.
Generally in a sequential CNN network there will be
only one output layer at the end. In the YOLOv3
architecture, there are multiple output layers giving
out predictions. This function gives the names of the
output layers. An output layer is not connected to
any next layer.
Args
net : YOLOv3 architecture based neural network.
"""
# Get the names of all the layers in the network
layersNames = net.getLayerNames()
# Get the names of the output layers, i.e. the layers with unconnected outputs
return [layersNames[i[0] - 1] for i in net.getUnconnectedOutLayers()]
# Remove the bounding boxes with low confidence using non-maxima suppression
def postprocess(frame, outs, confThreshold, nmsThreshold=0.4):
frameHeight = frame.shape[0]
frameWidth = frame.shape[1]
    # Scan through all the bounding boxes output from the network and keep only the
    # ones with high confidence scores. Assign the box's class label as the class with the highest score.
    classIds = []
    confidences = []
    boxes = []
    predictions = []
for out in outs:
# print("out.shape : ", out.shape)
for detection in out:
scores = detection[5:]
classId = np.argmax(scores)
confidence = scores[classId]
if confidence > confThreshold:
center_x = int(detection[0] * frameWidth)
center_y = int(detection[1] * frameHeight)
width = int(detection[2] * frameWidth)
height = int(detection[3] * frameHeight)
left = int(center_x - width / 2)
top = int(center_y - height / 2)
classIds.append(classId)
confidences.append(float(confidence))
boxes.append([left, top, width, height])
# Perform non maximum suppression to eliminate redundant overlapping boxes with
# lower confidences.
if nmsThreshold:
indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
else:
indices = [[x] for x in range(len(boxes))]
for i in indices:
i = i[0]
box = boxes[i]
left = box[0]
top = box[1]
width = box[2]
height = box[3]
predictions.append([classIds[i], confidences[i], [left, top, left + width, top + height]])
return predictions
# Draw the predicted bounding box
def drawPred(frame, pred):
classId = pred[0]
conf = pred[1]
box = pred[2]
left, top, right, bottom = box[0], box[1], box[2], box[3]
# draw bounding box
cv.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 3)
# + [markdown] colab_type="text" id="yXhhsSbAKjRE"
# ### 4.3 Load the trained model
#
# + colab={} colab_type="code" id="tIlbK3POLNWH"
# Import necessary modules
import cv2 as cv
import numpy as np
import os
# Initialize the parameters
confThreshold = 0.2 # Confidence threshold
nmsThreshold = 0.4 # Non-maximum suppression threshold
inpWidth = 416 # Width of network's input image
inpHeight = 416 # Height of network's input image
yolo_lp_confi_path = "yolov3-LP-test.cfg"
yolo_lp_weights_path = "backup/yolov3-LP-train_best.weights"
# Download the pre-trained model if your training was not complete
if not os.path.exists(yolo_lp_weights_path):
print("Downloading Pretrained model...")
# !wget https://www.dropbox.com/s/vw9omi6tjntp6vr/yolov3-LP-train_best.weights?dl=0 -O "backup/yolov3-LP-train_best.weights" --quiet
# load the network
lp_net = cv.dnn.readNetFromDarknet(yolo_lp_confi_path, yolo_lp_weights_path)
lp_net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
lp_net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)
# + [markdown] colab_type="text" id="xUkzyU2xKkoz"
# ### 4.4 Infer on the test image
#
# + colab={} colab_type="code" id="eyTjeCcELXiz"
# read test image
test_img = cv.imread('test_img.jpg')
# Create a 4D blob from a frame.
blob = cv.dnn.blobFromImage(test_img, 1/255, (inpWidth, inpHeight), [0,0,0], 1, crop=False)
# Sets the input to the network
lp_net.setInput(blob)
# Runs the forward pass to get output of the output layers
outs = lp_net.forward(getOutputsNames(lp_net))
# Remove the bounding boxes with low confidence
predictions = postprocess(test_img, outs, confThreshold, nmsThreshold)
# + [markdown] colab_type="text" id="lZlqpfXIKl-v"
# ### 4.5 Display inference
# + colab={"base_uri": "https://localhost:8080/", "height": 612} colab_type="code" id="yDZSF24xJmDz" outputId="a0f41c98-f174-44e8-bcb1-31eb38b0608f"
# Import necessary modules
import matplotlib
import matplotlib.pyplot as plt
for pred in predictions:
drawPred(test_img, pred)
# Display inference
fig=plt.figure(figsize=(10, 10))
plt.imshow(test_img[:,:,::-1])
# + [markdown] colab_type="text" id="1hh0rkK-mSRf"
# Here, we have trained a license plate detector using YOLO. It is observed that performance improves if detection is applied after a vehicle detection step. We will see more about this in the implementation of the ALPR system.
#
| ALPR/2_How_to_Train_YOLO_Net_for_LP_Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sample calculations with stokes-numerics
# Note: This notebook needs to be able to import modules from the stokes-numerics repository. If the notebook file is moved to a location other than the root directory of the repository, it will be necessary to adjust `sys.path` so that the imports can find stokes-numerics modules.
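# For instance, if the notebook were moved, something like the following (with a placeholder path, not the real repository location) would make the imports below work:

```python
import sys
# Placeholder path - replace with the actual location of the
# stokes-numerics repository checkout.
sys.path.insert(0, "/path/to/stokes-numerics")
```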
# # Setup
# First we set up logging.
import logging
import logconfig
logconfig.logconfig(filename=None)
logconfig.loglevel(logging.WARN)
# Next we import the main modules.
import integralequations
import framedata
import comparisons
import hkmetric
# # Some quick IEQ computations
# Turn on more verbose output in this section for reassurance, since some of the commands take a little while to run.
logconfig.loglevel(logging.INFO)
# Compute the X-functions in the (A1,A2) example, for the Hitchin section, at R=1.
xar = integralequations.computeXar("A1A2", R = 1)
# Use the X-functions to get the spectral coordinates X1 and X2, at zeta=1.
xar.computeX(charge = [1,0], zeta=1)
xar.computeX(charge = [0,1], zeta=1)
# Compute the X-functions at other values of zeta.
xar.computeX(charge = [1,0], zeta = 3 + 5j)
# Compute the cluster: this is a packaging of the spectral coordinates at zeta=e^(i theta), convenient for comparing with the PDE results.
xar.getCluster(theta = 0)
# A tricky point is that `getCluster()` is trying to compute some specific determinantal invariants, and this requires that it should return different combinations of spectral coordinates depending on the value of theta. The particular combinations of spectral coordinates which we compute are specified in the `theorydata` module. If we pick a `theta` too far from zero, then `getCluster()` usually doesn't know what combination of spectral coordinates it should compute; in that case it will return an error.
# In the (A1,A2) example we have 6 nonzero BPS invariants and correspondingly 6 distinguished rays in the zeta-plane, along which the integrals in the IEQ method run.
xar.theory.activerays
xar.theory.activerays[0].phase
xar.theory.activerays[0].id
# The code stores information about X-functions on only 3 of these rays, since the other 3 are related by a symmetry.
xar.theory.rayequivclasses
# Each of these 3 rays has a list of charges, which contains just 1 element, namely the charge gamma associated with this ray, and also has a "rayid" used to identify the ray.
xar.raydata[0].charges
xar.raydata[0].rayid
# Choosing one of the rays, we can plot the associated X-function along the ray.
import integralequationsplotting
integralequationsplotting.xarrayrayplots(xar, part="real", rayid=0, tcutoff=15)
# To make similar computations for opers we just add the keyword `oper=True` in various places. Here is a computation in the (A1,A2) example again. In this case we do not have to specify a parameter R, nor do we have to specify h at the time we run the iterative computation of the X-functions; we only have to specify h later, when we want to evaluate.
xaroper = integralequations.computeXar("A1A2", oper = True)
xaroper.computeX(charge = [1,0], h=1)
xaroper.computeX(charge = [0,1], h=1)
integralequationsplotting.xarrayrayplots(xaroper, part="real", rayid=0, tcutoff=15)
# A few computations in the (A1,A3) example. For fun we choose a smaller R this time; one consequence is that we will see a plateau-like structure in the plotted X function.
xar2 = integralequations.computeXar("A1A3", R = 1e-5)
# The plateau value in this case for X1 is just 3, as opposed to the golden ratio which we get in the (A1,A2) case (see the arXiv paper for discussion of that case).
xar2.computeX(charge = [1,0,0], zeta = 1)
xar2.raydata[0].charges
integralequationsplotting.xarrayrayplots(xar2, part="real", rayid=0, tcutoff=25)
# # Some quick ODE/PDE computations
logconfig.loglevel(logging.INFO)
# DE computation of the subdominant sections for the (A1,A2) example at R=1.
fd = framedata.computeFrames("A1A2", R = 1)
# Compute determinantal invariants from the subdominant sections.
fd.getCluster()
# DE computation of the subdominant sections for the (A1,A2) example in the oper case, with hbar = 1.
fdoper = framedata.computeFrames("A1A2", oper = True, absh = 1)
# Again compute determinantal invariants from the subdominant sections.
fdoper.getCluster()
# # Comparing IEQ and DE computations
# Perform both the IEQ and DE computations in the (A1,A2) example, with the parameter Lambda = 0.2, at R = 0.07, and compare the results. (The parameter `scratch=True` tells `compareClusters()` not to look for saved data files but rather to compute both sides directly.)
comparisons.compareClusters("A1A2_Lambda=0.2", R = 0.07, scratch = True, pde_nmesh = 511)
# In the output above, `xarcluster` is the IEQ result, `fdcluster` is the DE result, `reldiff` is the relative difference. The relative error in X1 is about 7e-6, which we believe is mostly due to PDE discretization error; in particular, if we pass a larger `pde_nmesh` to compareClusters, the error would likely decrease.
# For sufficiently large Lambda the agreement will break down; in particular this would be expected to happen if one of the periods crosses the real axis (or more generally if the phase of one of the periods crosses theta), because then the formulas for the spectral coordinates as determinantal invariants would change.
# The IEQ side of the above computation uses period integrals which are computed numerically. The commands below compute just the periods.
import theory
t = theory.theory("A1A2_Lambda=0.2")
t.Z(charge = [1,0])
t.Z(charge = [0,1])
# Perform both the IEQ and DE computations in the (A2,A1) example, oper case, with the parameter c = 0.3 + 0.2i, at hbar = 1. (This example is interesting because it involves both the quadratic and cubic differentials nonzero. Again, we expect it to work only for small enough c.)
comparisons.compareClusters("A2A1_c=0.3+0.2j", oper = True, scratch = True)
comparisons.compareClusters("A1A3", scratch = True)
# # Some quick metric computations
hkmetric.comparemetrics(c = 0.5, Lambda = 0, pde_nmesh = 500)
hkmetric.comparemetrics(c = 0.5, Lambda = 0.5, pde_nmesh = 500)
# # Looking at stored data
logconfig.loglevel(logging.WARN)
# The commands in this section use the stored data files; these files are available as the dataset at [https://doi.org/10.7910/DVN/W0V4D9](https://doi.org/10.7910/DVN/W0V4D9). The code expects them to be installed at paths specified in `paths.conf`. The command below interrogates the code about what paths it is looking in.
(integralequations.fileinfo.XARPATH, integralequations.fileinfo.FRAMEPATH, integralequations.fileinfo.HKMETRICPATH, integralequations.fileinfo.CACHEPATH)
# A comparison between the two stored clusters in the (A1,A2) example, at R=1 and theta=0.1.
# `xarcluster` is the IEQ result, `fdcluster` is the DE result, `reldiff` is the relative difference.
comparisons.compareClusters("A1A2", oper=False, R=1, pde_nmesh=8191, theta = 0.1)
# All the clusters computed by IEQ method in the (A1,A2) example. The list of R values is drawn from the `theorydata` module; if the data file for some R were missing, this command would generate an error.
comparisons.xarclustertable("A1A2", theta = 0.1)
# All the clusters computed by DE method in the (A1,A2) example. Since data is stored for several values of `pde_nmesh` we have to specify which we want. (Otherwise the default will be used, which is stored in framedata.PDE_NMESH; when the framedata module is loaded, this is set to 511; we actually don't have data stored for that value, so it would generate an error here.)
comparisons.fdclustertable("A1A2", theta = 0.1, pde_nmesh = 8191)
# A quick and dirty comparison table.
print(comparisons.plaintables("A1A2", theta = 0.1, pde_nmesh = 8191, precision=5)["data"])
| sample-computations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
from pathlib import Path
from statistics import variance
path = Path(r'datadump')
bikes = {}
# Choose either "helbiz_*.json", "bird_*.json" or "lime_*.json"
for e in path.glob('helbiz_*.json'):
json_data = json.loads(e.read_text())
for i in json_data["data"]["bikes"]:
bikes[i["bike_id"]] = bikes.get(i["bike_id"], []) + [i]
# -
for bike in bikes:
try:
#print(*bikes[bike],sep="\n")
#positions = [float(i["lon"]) for i in bikes[bike]]
#print(bike,":",variance(positions),":",positions)
print(bikes[bike][0:10])
break
    except Exception:
        pass
positions = []
for bike in bikes:
positions.append({"uuid":bike,"lon":float(bikes[bike][0]["lon"]),"lat":float(bikes[bike][0]["lat"])})
print(positions[:5])
import pandas as pd
positions_df = pd.DataFrame.from_dict(positions)
positions_df = positions_df.drop_duplicates()
positions_df.count()
positions_df.describe()
positions_df.dtypes
# +
import pandas as pd
import plotly.express as px
class Plotter:
def plot(self, df):
fig = px.scatter_mapbox(df, lat="lat", lon="lon",hover_data=["uuid"])
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
return fig
    def save_plot(self, fig, out_dir=Path('.')):
        # The original referenced an undefined `html_render_location`;
        # take the output directory as an argument instead.
        file_path = out_dir / "plot.html"
        fig.write_html(str(file_path))
    def show_plot(self, fig):
        fig.show()
    def plot_and_save(self, df):
        fig = self.plot(df)
        self.save_plot(fig)
# +
import plotly.io as pio
pio.renderers.default = 'browser'
plotter = Plotter()
fig = plotter.plot(positions_df)
plotter.show_plot(fig)
# -
| data_analysis_tools_for_gfbs_data/Analyse scraped data from e scooter apis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/fabiobasson/Bi-Master/blob/main/geological_comparative_inference19102021.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="vAEEn_XMWXwe"
# # **Project:** Posgraduate Business Intelligence Master PUC-RJ 2021
#
# # **Classification and Prediction of Rock Images in Drilled Wells via Supervised Learning Methods, Deep Learning and Pre-trained Models**
#
# This work will focus on the analysis of the application of supervised learning methods to the prediction of rock images in drilling wells and represents a new study to be developed by the technical team.
#
# The use of deep learning for the budgeting process has a number of advantages, including a reduction in the man-hours (HH) involved, a higher degree of assertiveness, faster responses, and the possibility of testing different project scenarios in less time.
#
# This work proposes to classify rocks from images acquired during geological activities (drilling), using deep learning techniques and pre-trained models.
# + id="BhMpCgi7O64x"
# Import from libraries
import warnings
warnings.filterwarnings('always')
warnings.filterwarnings('ignore')
import os
from os import getcwd
import tensorflow as tf
import zipfile
import sys
import shutil
import numpy as np
import glob
import random
import pandas as pd
import seaborn as sns
import json
import matplotlib.pyplot as plt
from keras.utils import np_utils
from PIL import Image
import plotly.express as px
import cv2 as cv
from imutils import paths
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.optimizers import RMSprop, Adam
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import confusion_matrix,accuracy_score,classification_report
from sklearn.metrics import hamming_loss
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
# + colab={"base_uri": "https://localhost:8080/"} id="ye09bw6nVWgH" outputId="9a048f52-247f-40b1-e86a-356699fa48a2"
# Checking Tensorflow and Keras Versions
print(tf.__version__)
# Install Tensorflow
# #!pip install tensorflow==2.6.0
# Install Keras
# !pip install keras --upgrade
# + id="qB35LCwK1F9Y"
# If necessary, remove the directories and unmount the drive
# #!rm -rf geological_similarity andesite gneiss/ marble/ quartzite/ rhyolite/ schist/
# #!rm -rf geological_similarity
# #!umount -f /content/drive
# + [markdown] id="ZdPH63KM75nK"
#
# # **2. Extraction, Transformation and Loading of the Data**
# + [markdown] id="WsBJvou07fbF"
# ## **2.1. Kaggle Data Collection**
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7CgpmdW5jdGlvbiBfdXBsb2FkRmlsZXMoaW5wdXRJZCwgb3V0cHV0SWQpIHsKICBjb25zdCBzdGVwcyA9IHVwbG9hZEZpbGVzU3RlcChpbnB1dElkLCBvdXRwdXRJZCk7CiAgY29uc3Qgb3V0cHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKG91dHB1dElkKTsKICAvLyBDYWNoZSBzdGVwcyBvbiB0aGUgb3V0cHV0RWxlbWVudCB0byBtYWtlIGl0IGF2YWlsYWJsZSBmb3IgdGhlIG5leHQgY2FsbAogIC8vIHRvIHVwbG9hZEZpbGVzQ29udGludWUgZnJvbSBQeXRob24uCiAgb3V0cHV0RWxlbWVudC5zdGVwcyA9IHN0ZXBzOwoKICByZXR1cm4gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpOwp9CgovLyBUaGlzIGlzIHJvdWdobHkgYW4gYXN5bmMgZ2VuZXJhdG9yIChub3Qgc3VwcG9ydGVkIGluIHRoZSBicm93c
2VyIHlldCksCi8vIHdoZXJlIHRoZXJlIGFyZSBtdWx0aXBsZSBhc3luY2hyb25vdXMgc3RlcHMgYW5kIHRoZSBQeXRob24gc2lkZSBpcyBnb2luZwovLyB0byBwb2xsIGZvciBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcC4KLy8gVGhpcyB1c2VzIGEgUHJvbWlzZSB0byBibG9jayB0aGUgcHl0aG9uIHNpZGUgb24gY29tcGxldGlvbiBvZiBlYWNoIHN0ZXAsCi8vIHRoZW4gcGFzc2VzIHRoZSByZXN1bHQgb2YgdGhlIHByZXZpb3VzIHN0ZXAgYXMgdGhlIGlucHV0IHRvIHRoZSBuZXh0IHN0ZXAuCmZ1bmN0aW9uIF91cGxvYWRGaWxlc0NvbnRpbnVlKG91dHB1dElkKSB7CiAgY29uc3Qgb3V0cHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKG91dHB1dElkKTsKICBjb25zdCBzdGVwcyA9IG91dHB1dEVsZW1lbnQuc3RlcHM7CgogIGNvbnN0IG5leHQgPSBzdGVwcy5uZXh0KG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSk7CiAgcmV0dXJuIFByb21pc2UucmVzb2x2ZShuZXh0LnZhbHVlLnByb21pc2UpLnRoZW4oKHZhbHVlKSA9PiB7CiAgICAvLyBDYWNoZSB0aGUgbGFzdCBwcm9taXNlIHZhbHVlIHRvIG1ha2UgaXQgYXZhaWxhYmxlIHRvIHRoZSBuZXh0CiAgICAvLyBzdGVwIG9mIHRoZSBnZW5lcmF0b3IuCiAgICBvdXRwdXRFbGVtZW50Lmxhc3RQcm9taXNlVmFsdWUgPSB2YWx1ZTsKICAgIHJldHVybiBuZXh0LnZhbHVlLnJlc3BvbnNlOwogIH0pOwp9CgovKioKICogR2VuZXJhdG9yIGZ1bmN0aW9uIHdoaWNoIGlzIGNhbGxlZCBiZXR3ZWVuIGVhY2ggYXN5bmMgc3RlcCBvZiB0aGUgdXBsb2FkCiAqIHByb2Nlc3MuCiAqIEBwYXJhbSB7c3RyaW5nfSBpbnB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIGlucHV0IGZpbGUgcGlja2VyIGVsZW1lbnQuCiAqIEBwYXJhbSB7c3RyaW5nfSBvdXRwdXRJZCBFbGVtZW50IElEIG9mIHRoZSBvdXRwdXQgZGlzcGxheS4KICogQHJldHVybiB7IUl0ZXJhYmxlPCFPYmplY3Q+fSBJdGVyYWJsZSBvZiBuZXh0IHN0ZXBzLgogKi8KZnVuY3Rpb24qIHVwbG9hZEZpbGVzU3RlcChpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IGlucHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKGlucHV0SWQpOwogIGlucHV0RWxlbWVudC5kaXNhYmxlZCA9IGZhbHNlOwoKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIG91dHB1dEVsZW1lbnQuaW5uZXJIVE1MID0gJyc7CgogIGNvbnN0IHBpY2tlZFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgaW5wdXRFbGVtZW50LmFkZEV2ZW50TGlzdGVuZXIoJ2NoYW5nZScsIChlKSA9PiB7CiAgICAgIHJlc29sdmUoZS50YXJnZXQuZmlsZXMpOwogICAgfSk7CiAgfSk7CgogIGNvbnN0IGNhbmNlbCA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2J1dHRvbicpOwogIGlucHV0RWxlbWVudC5wYXJlbnRFbGVtZW50LmFwcGVuZENoaWxkKGNhbmNlbCk7CiAgY2FuY2VsL
nRleHRDb250ZW50ID0gJ0NhbmNlbCB1cGxvYWQnOwogIGNvbnN0IGNhbmNlbFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgY2FuY2VsLm9uY2xpY2sgPSAoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9OwogIH0pOwoKICAvLyBXYWl0IGZvciB0aGUgdXNlciB0byBwaWNrIHRoZSBmaWxlcy4KICBjb25zdCBmaWxlcyA9IHlpZWxkIHsKICAgIHByb21pc2U6IFByb21pc2UucmFjZShbcGlja2VkUHJvbWlzZSwgY2FuY2VsUHJvbWlzZV0pLAogICAgcmVzcG9uc2U6IHsKICAgICAgYWN0aW9uOiAnc3RhcnRpbmcnLAogICAgfQogIH07CgogIGNhbmNlbC5yZW1vdmUoKTsKCiAgLy8gRGlzYWJsZSB0aGUgaW5wdXQgZWxlbWVudCBzaW5jZSBmdXJ0aGVyIHBpY2tzIGFyZSBub3QgYWxsb3dlZC4KICBpbnB1dEVsZW1lbnQuZGlzYWJsZWQgPSB0cnVlOwoKICBpZiAoIWZpbGVzKSB7CiAgICByZXR1cm4gewogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgICAgfQogICAgfTsKICB9CgogIGZvciAoY29uc3QgZmlsZSBvZiBmaWxlcykgewogICAgY29uc3QgbGkgPSBkb2N1bWVudC5jcmVhdGVFbGVtZW50KCdsaScpOwogICAgbGkuYXBwZW5kKHNwYW4oZmlsZS5uYW1lLCB7Zm9udFdlaWdodDogJ2JvbGQnfSkpOwogICAgbGkuYXBwZW5kKHNwYW4oCiAgICAgICAgYCgke2ZpbGUudHlwZSB8fCAnbi9hJ30pIC0gJHtmaWxlLnNpemV9IGJ5dGVzLCBgICsKICAgICAgICBgbGFzdCBtb2RpZmllZDogJHsKICAgICAgICAgICAgZmlsZS5sYXN0TW9kaWZpZWREYXRlID8gZmlsZS5sYXN0TW9kaWZpZWREYXRlLnRvTG9jYWxlRGF0ZVN0cmluZygpIDoKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgJ24vYSd9IC0gYCkpOwogICAgY29uc3QgcGVyY2VudCA9IHNwYW4oJzAlIGRvbmUnKTsKICAgIGxpLmFwcGVuZENoaWxkKHBlcmNlbnQpOwoKICAgIG91dHB1dEVsZW1lbnQuYXBwZW5kQ2hpbGQobGkpOwoKICAgIGNvbnN0IGZpbGVEYXRhUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICAgIGNvbnN0IHJlYWRlciA9IG5ldyBGaWxlUmVhZGVyKCk7CiAgICAgIHJlYWRlci5vbmxvYWQgPSAoZSkgPT4gewogICAgICAgIHJlc29sdmUoZS50YXJnZXQucmVzdWx0KTsKICAgICAgfTsKICAgICAgcmVhZGVyLnJlYWRBc0FycmF5QnVmZmVyKGZpbGUpOwogICAgfSk7CiAgICAvLyBXYWl0IGZvciB0aGUgZGF0YSB0byBiZSByZWFkeS4KICAgIGxldCBmaWxlRGF0YSA9IHlpZWxkIHsKICAgICAgcHJvbWlzZTogZmlsZURhdGFQcm9taXNlLAogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbnRpbnVlJywKICAgICAgfQogICAgfTsKCiAgICAvLyBVc2UgYSBjaHVua2VkIHNlbmRpbmcgdG8gYXZvaWQgbWVzc2FnZSBzaXplIGxpbWl0cy4gU2VlIGIvNjIxMTU2NjAuCiAgICBsZXQgcG9zaXRpb24gPSAwOwogICAgZG8gewogICAgICBjb25zd
CBsZW5ndGggPSBNYXRoLm1pbihmaWxlRGF0YS5ieXRlTGVuZ3RoIC0gcG9zaXRpb24sIE1BWF9QQVlMT0FEX1NJWkUpOwogICAgICBjb25zdCBjaHVuayA9IG5ldyBVaW50OEFycmF5KGZpbGVEYXRhLCBwb3NpdGlvbiwgbGVuZ3RoKTsKICAgICAgcG9zaXRpb24gKz0gbGVuZ3RoOwoKICAgICAgY29uc3QgYmFzZTY0ID0gYnRvYShTdHJpbmcuZnJvbUNoYXJDb2RlLmFwcGx5KG51bGwsIGNodW5rKSk7CiAgICAgIHlpZWxkIHsKICAgICAgICByZXNwb25zZTogewogICAgICAgICAgYWN0aW9uOiAnYXBwZW5kJywKICAgICAgICAgIGZpbGU6IGZpbGUubmFtZSwKICAgICAgICAgIGRhdGE6IGJhc2U2NCwKICAgICAgICB9LAogICAgICB9OwoKICAgICAgbGV0IHBlcmNlbnREb25lID0gZmlsZURhdGEuYnl0ZUxlbmd0aCA9PT0gMCA/CiAgICAgICAgICAxMDAgOgogICAgICAgICAgTWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCk7CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPSBgJHtwZXJjZW50RG9uZX0lIGRvbmVgOwoKICAgIH0gd2hpbGUgKHBvc2l0aW9uIDwgZmlsZURhdGEuYnl0ZUxlbmd0aCk7CiAgfQoKICAvLyBBbGwgZG9uZS4KICB5aWVsZCB7CiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICB9CiAgfTsKfQoKc2NvcGUuZ29vZ2xlID0gc2NvcGUuZ29vZ2xlIHx8IHt9OwpzY29wZS5nb29nbGUuY29sYWIgPSBzY29wZS5nb29nbGUuY29sYWIgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYi5fZmlsZXMgPSB7CiAgX3VwbG9hZEZpbGVzLAogIF91cGxvYWRGaWxlc0NvbnRpbnVlLAp9Owp9KShzZWxmKTsK", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 156} id="qPW1Uceu4EkK" outputId="b2b20786-f548-4583-9fdf-88625ba8e07a"
from google.colab import files
files.upload()
# !mkdir ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# #!kaggle datasets download fabiobasson/geologicalsimilarity
# #! unzip -qq geologicalsimilarity
# !kaggle datasets download tanyadayanand/geological-image-similarity
# ! unzip -qq geological-image-similarity
# + [markdown] id="owY2uo8A7q5j"
# # **Creating the directory structure**
# + id="QXSrUDeU4L7J" colab={"base_uri": "https://localhost:8080/"} outputId="8899e245-3131-4d56-cbed-0adfa7756782"
andesite_dir = glob.glob('geological_similarity/andesite/*.jpg')
gneiss_dir = glob.glob('geological_similarity/gneiss/*.jpg')
marble_dir = glob.glob('geological_similarity/marble/*.jpg')
quartzite_dir = glob.glob('geological_similarity/quartzite/*.jpg')
rhyolite_dir = glob.glob('geological_similarity/rhyolite/*.jpg')
schist_dir = glob.glob('geological_similarity/schist/*.jpg')
print(len(andesite_dir)); print(len(gneiss_dir)); print(len(marble_dir)); print(len(quartzite_dir)); print(len(rhyolite_dir)); print(len(schist_dir))
andesite_df=[];gneiss_df=[];marble_df=[];quartzite_df=[];rhyolite_df=[];schist_df=[]
label1=['andesite','gneiss','marble','quartzite','rhyolite','schist']
for i in andesite_dir:
andesite_df.append([i,label1[0]])
for j in gneiss_dir:
gneiss_df.append([j,label1[1]])
for l in marble_dir:
marble_df.append([l,label1[2]])
for m in quartzite_dir:
quartzite_df.append([m,label1[3]])
for n in rhyolite_dir:
rhyolite_df.append([n,label1[4]])
for o in schist_dir:
schist_df.append([o,label1[5]])
df = andesite_df + gneiss_df + marble_df + quartzite_df + rhyolite_df + schist_df
random.shuffle(df)
len(df)
# + [markdown] id="MZc34XkIcgA-"
# # **Creation of Parameters**
# + id="e2ubYEBGcjJK"
INIT_LR = 1e-3
EPOCHS = 200
BS=24
# + [markdown] id="jbRhRfvzt_5T"
# # **Dataframe Creation**
# + colab={"base_uri": "https://localhost:8080/", "height": 415} id="87UeiF--wzh8" outputId="441f8f20-e49d-4301-ab93-8dc3f0e9b56e"
# Creating the DataFrame
data_df = pd.DataFrame(df,columns=['path','label'])
data_df
# + [markdown] id="uVvzkibCuN5L"
# # **Convert the images to 32 by 32**
# + id="TZ7zohxAka0z" colab={"base_uri": "https://localhost:8080/"} outputId="12db7912-7868-4189-a43b-83dc1dbfa2e8"
dados=[]
labels=[]
for imagePath in data_df['path']:
label = imagePath.split(os.path.sep)[-2]
image = cv.imread(imagePath)
image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
image = cv.resize(image, (32, 32))
dados.append(image)
labels.append(label)
print("labels: ", np.unique(labels))
# Convert data and labels to NumPy arrays while scaling the pixel
dados = np.array(dados)
labels = np.array(labels)
labels
# + [markdown] id="O7UwXUQruqV4"
# # **Renaming labels from string format to int**
# + id="DQ7ltOFxyjrW" colab={"base_uri": "https://localhost:8080/"} outputId="c435adb5-0ff0-4877-e50e-480fcf5b8216"
lb={'andesite':[1, 0, 0, 0, 0, 0] ,'gneiss':[0, 1, 0, 0, 0, 0] ,'marble':[0, 0, 1, 0, 0, 0], 'quartzite':[0, 0, 0, 1, 0, 0], 'rhyolite':[0, 0, 0, 0, 1, 0], 'schist':[0, 0, 0, 0, 0, 1]}
len(labels)
labels1 = [lb[label] for label in labels]
labels1 = np.array(labels1)
print(labels1)
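# The manual one-hot dictionary above can be generated directly from the class list itself. A minimal numpy sketch, using a hypothetical subset of labels rather than the full dataset:

```python
import numpy as np

classes = ['andesite', 'gneiss', 'marble', 'quartzite', 'rhyolite', 'schist']
sample_labels = np.array(['marble', 'andesite', 'schist'])  # hypothetical subset
# Compare each label against the class list to build the one-hot rows
one_hot = (sample_labels[:, None] == np.array(classes)).astype(int)
print(one_hot.tolist())  # [[0, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1]]
```

# This produces the same encoding as the `lb` dictionary while staying in sync with the class list.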
# + [markdown] id="UIPXX6y48mV-"
# # **Creating the Dataframe on Training, Validation and Testing Data**
# + id="mwIj5M_l9t9-" colab={"base_uri": "https://localhost:8080/"} outputId="e13d7948-d96b-458e-a1c9-87d3c329f3c9"
X_train, X_test, y_train, y_test = train_test_split(dados, labels1, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,test_size=0.25, random_state=42) # 0.25 x 0.8 = 0.2
print(X_train.shape)
print(y_train.shape)
print(X_val.shape)
print(y_val.shape)
print(X_test.shape)
print(y_test.shape)
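# The two-stage split above yields a 60/20/20 train/validation/test partition. A quick arithmetic sketch, assuming a hypothetical n = 1000 samples:

```python
n = 1000
n_test = int(n * 0.2)             # 20% held out for test
n_val = int((n - n_test) * 0.25)  # 0.25 of the remaining 80% -> 20% overall
n_train = n - n_test - n_val      # 60% left for training
print(n_train, n_val, n_test)  # 600 200 200
```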
# + [markdown] id="9azU6pZpscIw"
# # **Normalization**
# + id="--Cpp-sHOqRy"
X_train = X_train / 255.
X_val = X_val / 255.
X_test = X_test / 255.
# + [markdown] id="xukNtP2KjKVY"
# # **Choosing the Model to Apply**
# + [markdown] id="zyHI-deiI_hz"
# # **Important**
# To perform the inference, we will not handle missing values; columns containing missing values will be discarded. Therefore, the selected input must contain all the variables required by the model.
# + id="1QrhLWXHIcBa" colab={"base_uri": "https://localhost:8080/"} outputId="eacffeac-0a8d-4316-f067-39383e80a42a"
from google.colab import drive
drive.mount('/content/drive')
#import os
#workdir_path = '/content/drive/My Drive' # Set this to the folder containing the input files (train and test)
#os.chdir(workdir_path)
# + id="4HpTTzCGGntc" colab={"base_uri": "https://localhost:8080/"} outputId="74139c8d-4a9d-4d13-a9f9-1a2412401233"
opcao = str(input('Which model should be applied? (Modelo Criado, Modelo VGG16 or Modelo C) ')).strip().upper()
# + id="V0nkCG3EGQyU"
from keras.models import model_from_json
def load_model():
# loading model
model = model_from_json(open('/content/drive/My Drive/models/model_create.json').read())
model.load_weights('/content/drive/My Drive/models/create_weights.h5')
model.compile(optimizer = Adam(learning_rate=0.0001),
loss = 'categorical_crossentropy',
metrics =['accuracy'])
return model
def load_modelvgg16():
# loading model
modelvgg16 = model_from_json(open('/content/drive/My Drive/models/model_vgg16.json').read())
modelvgg16.load_weights('/content/drive/My Drive/models/vgg16_weights.h5')
modelvgg16.compile(optimizer = Adam(learning_rate=0.0001),
loss = 'categorical_crossentropy',
metrics =['accuracy'])
    return modelvgg16
# + id="z4zXUTMuGq3k" colab={"base_uri": "https://localhost:8080/"} outputId="94d73ac3-9f27-400d-aeff-087088a738a9"
if opcao == 'MODELO CRIADO':
model = tf.keras.models.load_model('/content/drive/My Drive/models/best_model.h5')
if opcao == 'MODELO VGG16':
model = tf.keras.models.load_model('/content/drive/My Drive/models/feature_extraction.vgg16')
print (opcao)
# + id="l0DgCQ94aGOk" outputId="c8970475-81f3-4136-876a-7854206f16f7" colab={"base_uri": "https://localhost:8080/"}
if opcao == 'MODELO CRIADO':
model = load_model()
if opcao == 'MODELO VGG16':
modelvgg16 = load_modelvgg16()
print (opcao)
# + [markdown] id="CvYwZyoSyZkT"
# # **Performing predictions and confirmations**
# + id="fxTWIK4HWHHp"
# (ref http://stackoverflow.com/q/32239577/395857)
def hamming_score(y_true, y_pred):
acc_list = []
for i in range(y_true.shape[0]):
set_true = set( np.where(y_true[i])[0] )
set_pred = set( np.where(y_pred[i])[0] )
tmp_a = None
if len(set_true) == 0 and len(set_pred) == 0:
tmp_a = 1
else:
tmp_a = len(set_true.intersection(set_pred))/\
float( len(set_true.union(set_pred)) )
acc_list.append(tmp_a)
return np.mean(acc_list)
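# A quick check of the Hamming score on tiny, hypothetical multilabel arrays; the function is restated compactly so the sketch runs standalone:

```python
import numpy as np

def hamming_score(y_true, y_pred):
    # Per-row |intersection| / |union| of the active label sets, averaged
    scores = []
    for t, p in zip(y_true, y_pred):
        st, sp = set(np.where(t)[0]), set(np.where(p)[0])
        scores.append(1.0 if not st and not sp else len(st & sp) / len(st | sp))
    return np.mean(scores)

y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0]])
# Row 1: |{0}| / |{0, 2}| = 0.5; row 2: |{1}| / |{1}| = 1.0; mean = 0.75
print(hamming_score(y_true, y_pred))  # 0.75
```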
# + id="jHX9PR5gTQZA" colab={"base_uri": "https://localhost:8080/"} outputId="15732999-1cc5-488c-914e-f09949a69b7d"
if opcao == 'MODELO CRIADO':
    print("Conv2D - Predictions")
    print("-------\n")
else:
    print("VGG16 - Predictions")
    print("-------\n")
y_pred = model.predict(X_test,batch_size=BS)
print("Prediction_accuracy: " + str(y_pred))
print("-------\n")
# + id="pYKNqTvYIy56" colab={"base_uri": "https://localhost:8080/"} outputId="317a0a5d-e1d4-4795-8e9f-0b45f05d610a"
y_pred[400] # prediction at position 400
# + id="c81gyqxSpT8o" colab={"base_uri": "https://localhost:8080/"} outputId="e79fbedb-dcfa-4160-c79d-582253e472fc"
print(np.argmax(y_pred[400])) # In this case, the prediction is pointed out - Class 4
# + id="buGHzD6QudJy" colab={"base_uri": "https://localhost:8080/", "height": 280} outputId="09579b08-0e8f-4982-e25b-c3283af0aff7"
# In this case, the prediction is correct - Class 4, as shown below:
plt.figure()
plt.imshow(X_test[400])
plt.xlabel(y_test[400])
plt.colorbar()
plt.grid(False)
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="r9FzFwTkcEc7" outputId="418596e4-a377-4201-8efa-9ebc202dcde0"
y_pred[105] # prediction at position 105
# + colab={"base_uri": "https://localhost:8080/"} id="jgjixCwDcLfK" outputId="d46e3943-49a7-4989-dc12-f407eb5d7e8a"
np.argmax(y_pred[105]) # In this case, the prediction is correct - Class 05
# + id="HIFhaEXOlmwM" colab={"base_uri": "https://localhost:8080/", "height": 280} outputId="a5b08e81-d5c3-481b-9587-04803603eadf"
# In this case, the prediction is correct - Class 05, as shown in the figure below:
plt.figure()
plt.imshow(X_test[105])
plt.xlabel(y_test[105])
plt.colorbar()
plt.grid(False)
plt.show()
# + [markdown] id="ovn2yNyjtV7n"
# # **Prediction for a single image**
# + id="N9DtFPuGk60j" colab={"base_uri": "https://localhost:8080/"} outputId="12b9eb82-a7f5-47e6-9336-57e60115b259"
img = X_test [300]
test_labels_single = y_test [300]
print(img.shape)
test_labels_single
# Add the image to a batch that has a single member.
img = (np.expand_dims(img,0))
print(img.shape)
# + id="hRr4iTHEmLeW" colab={"base_uri": "https://localhost:8080/"} outputId="0c8547df-8b8b-44ae-c0f2-f64fcdc88c5d"
predictions_single = model.predict(img)
print(predictions_single)
predictions_single = np.argmax(predictions_single, axis=1)
predictions_single
# + id="eM5SuzN2DTW8" colab={"base_uri": "https://localhost:8080/", "height": 694} outputId="8861d29d-c65a-4b55-b18b-d39cf3dffd5a"
#Get the predictions for the test data
predicted_classes = model.predict(X_test,batch_size=BS)
predicted_classes = np.argmax(predicted_classes, axis=1)
L = 4
W = 4
fig, axes = plt.subplots(L, W, figsize = (12,12))
axes = axes.ravel()
for i in np.arange(0, L * W):
axes[i].imshow(X_test[i])
axes[i].set_title(f"Prediction Class = {predicted_classes[i]}\n Original Class = {y_test[i]}")
axes[i].axis('off')
plt.subplots_adjust(wspace=0.5)
| geological_comparative_inference19102021.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Who are the *Karen* of Sweden?
from os import listdir
from os.path import isfile, join
import csv
import matplotlib.pyplot as plt
import random
random.seed(1957)
# +
us_data_path = r'original_data\ssa_names'
us_file_names = [f for f in listdir(us_data_path) if isfile(join(us_data_path, f)) and f.endswith('.txt')]
# Swedish data is from 1920 (including) to 2019 (including)
# US data is from 1880 (including) to 2018 (including)
def find_year_of_us_file(file_name: str) -> int:
return int(file_name.replace('yob','').replace('.txt',''))
us_file_names = [file for file in us_file_names if 1920 <= find_year_of_us_file(file) <= 2018]
# +
def extract_number_of_karens_from_file(file_name: str):
year = find_year_of_us_file(file_name)
with open(f'{us_data_path}\\{file_name}') as f:
karens = [int(name.replace('Karen,F,','')) for name in f.readlines() if name.startswith('Karen,F,')][0]
return (year, karens)
karen_stats = [(year, karens) for year, karens in map(extract_number_of_karens_from_file, us_file_names)]
karen_peak_year, karen_peak_number = max(karen_stats, key=lambda x: x[1])
print(f'Popularity of Karen peaked in {karen_peak_year} with {karen_peak_number} girls born.')
karen_stats_normalized = [(year, karens/karen_peak_number) for year, karens in karen_stats]
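# Each SSA file line has the form `Name,Sex,Count`. A minimal parsing sketch with a hypothetical record of the same shape as the `yobYYYY.txt` rows:

```python
line = 'Karen,F,1234'  # hypothetical record
name, sex, count = line.strip().split(',')
print(name, sex, int(count))  # Karen F 1234
```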
# +
year, karens = zip(*karen_stats)
plt.plot(year, karens)
plt.title('Number of girls named Karen born per year')
plt.xlabel('Year')
plt.show()
year, karens_normalized = zip(*karen_stats_normalized)
# -
se_data_path = r'original_data\BE0001AN.csv'
with open(se_data_path, newline='') as csvfile:
reader = csv.DictReader(csvfile, delimiter=';')
years = [int(fn) for fn in reader.fieldnames[1:]]
swedish_names = []
for name in reader:
name_dict = {'name': name['tilltalsnamn']}
name_dict['data'] = [(int(year), int(number) if number != '..' else None) for year, number in name.items() if year != 'tilltalsnamn']
_, peak_number = max(name_dict['data'], key=lambda x: x[1] or 0)
name_dict['normalized_data'] = [(year, (number_born or 0)/peak_number) for year, number_born in name_dict['data']]
swedish_names.append(name_dict)
# +
r_idx = random.sample(range(len(swedish_names)), 5)
for i in r_idx:
name = swedish_names[i]
years, numbers = zip(*name['data'])
_, normalized_numbers = zip(*name['normalized_data'])
plt.figure(1)
plt.plot(years, numbers, label=f"{name['name']} ({i}:th)")
plt.figure(2)
plt.plot(years, [100*n for n in normalized_numbers], label=f"{name['name']} ({i}:th)")
plt.figure(1)
plt.title('Birth year distribution of some Swedish names')
plt.xlabel('Year')
plt.ylabel('Number born')
plt.figure(2)
plt.title('Relative birth year of some Swedish names')
plt.xlabel('Year')
plt.ylabel('%')
plt.legend()
plt.show()
# +
def karen_likeness(swedish_name):
years, numbers = zip(*swedish_name['normalized_data'])
return 1/sum([abs(karen_normalized - swedish_name_normalized)
for (karen_normalized, swedish_name_normalized)
in zip(karens_normalized, numbers[:-1])
if swedish_name_normalized is not None])
swedish_names.sort(key=karen_likeness, reverse=True)
print('Top five Karen-like Swedish names:')
print(*[f"{i+1}:\t{name['name']}\t({karen_likeness(name):.4f})" for i, name in enumerate(swedish_names[:5])], sep='\n')
print('\nTop five Karen-unlike Swedish names:')
print(*[f"{i+1}:\t{name['name']}\t({karen_likeness(name):.4f})" for i, name in enumerate(swedish_names[:-6:-1])], sep='\n')
# -
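# The `karen_likeness` metric is the inverse of the total absolute difference between two normalized curves. A toy sketch with hypothetical normalized series (not the real data):

```python
karen = [0.1, 0.5, 1.0, 0.4]  # hypothetical normalized Karen curve
close = [0.2, 0.5, 0.9, 0.4]  # differs by 0.2 in total
far = [1.0, 0.1, 0.0, 0.9]    # differs by 2.8 in total
similarity = lambda name: 1 / sum(abs(k - n) for k, n in zip(karen, name))
print(round(similarity(close), 3))  # 5.0
print(round(similarity(far), 3))    # 0.357
```

# A smaller total difference yields a larger similarity, which is why the list is sorted in descending order above.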
| sveriges_karen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
df = pd.read_csv("./../../data/raw/listings.csv")
df.head()
project_fuctions = df[['name', 'calculated_host_listings_count', 'calculated_host_listings_count_entire_homes', 'calculated_host_listings_count_private_rooms', 'calculated_host_listings_count_shared_rooms', 'review_per_month']]
project_fuctions = project_fuctions.dropna()
# +
sns.pairplot(df[['calculated_host_listings_count', 'calculated_host_listings_count_entire_homes', 'calculated_host_listings_count_private_rooms', 'calculated_host_listings_count_shared_rooms', 'review_per_month']])
# +
###I wanted to see the distribution of different types of accommodation offered by each host.
# -
sns.displot(data=project_fuctions, x="calculated_host_listings_count", y="review_per_month")
# +
###I wanted to find out whether there is a relation between the total count of accommodation types offered by each host and the reviews per month. I would like to narrow this down to the relation between Airbnbs that offer at most 5 accommodations and the reviews per month.
# -
small_airbnb = project_fuctions[project_fuctions['calculated_host_listings_count'] < 6]
sns.scatterplot(data=small_airbnb, x='calculated_host_listings_count', y='review_per_month')
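# A small end-to-end sketch of the intended filter-then-relate step on a synthetic frame (hypothetical values, same column names as above):

```python
import pandas as pd

toy = pd.DataFrame({
    'calculated_host_listings_count': [1, 3, 8, 2, 10],
    'review_per_month': [2.0, 1.5, 0.3, 4.0, 0.1],
})
# Keep hosts offering fewer than 6 accommodation listings
small = toy[toy['calculated_host_listings_count'] < 6]
print(len(small))                        # 3
print(small['review_per_month'].mean())  # 2.5
```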
| analysis/Motheo/.ipynb_checkpoints/analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# This notebook is part of the `clifford` documentation: https://clifford.readthedocs.io/.
# -
# # Space Time Algebra
# ## Intro
# This notebook demonstrates how to use `clifford` to work with Space Time Algebra.
# The Pauli algebra of space $\mathbb{P}$, and Dirac algebra of space-time $\mathbb{D}$, are related using the *spacetime split*.
# The split is implemented by using a `BladeMap` ([docs](../api/clifford.BladeMap.rst)), which maps a subset of blades in $\mathbb{D}$ to the blades in $\mathbb{P}$.
# This *split* allows a spacetime bivector $F$ to be broken up into relative electric and magnetic fields in space.
# Lorentz transformations are implemented as rotations in $\mathbb{D}$, and the effects on the relative fields are computed with the split.
# ## Setup
#
# First we import `clifford`, instantiate the two algebras, and populate the namespace with the blades of each algebra.
# The elements of $\mathbb{D}$ are prefixed with $d$, while the elements of $\mathbb{P}$ are prefixed with $p$.
# Although unconventional, it is easier to read and to translate into code.
# +
from clifford import Cl, pretty
pretty(precision=1)
# Dirac Algebra `D`
D, D_blades = Cl(1,3, firstIdx=0, names='d')
# Pauli Algebra `P`
P, P_blades = Cl(3, names='p')
# put elements of each in namespace
locals().update(D_blades)
locals().update(P_blades)
# -
# ## The Space Time Split
# The two algebras can be related by the spacetime split.
# First, we create a `BladeMap` which relates the bivectors in $\mathbb{D}$ to the vectors/bivectors in $\mathbb{P}$.
# The scalars and pseudo-scalars in each algebra are equated.
#
# 
# +
from clifford import BladeMap
bm = BladeMap([(d01,p1),
(d02,p2),
(d03,p3),
(d12,p12),
(d23,p23),
(d13,p13),
(d0123, p123)])
# -
# ## Splitting a space-time vector (an event)
# A vector in $\mathbb{D}$, represents a unique place in space and time, i.e. an event.
# To illustrate the split, create a random event $X$.
X = D.randomV()*10
X
# This can be *split* into time and space components by multiplying with the time-vector $d_0$,
X*d0
# and applying the `BladeMap`, which results in a scalar+vector in $\mathbb{P}$
#
bm(X*d0)
# The space and time components can be separated by grade projection,
x = bm(X*d0)
x(0) # the time component
x(1) # the space component
# We therefore define a `split()` function, which has a simple condition allowing it to act on a vector or a multivector in $\mathbb{D}$.
# Splitting a spacetime bivector will be treated in the next section.
def split(X):
return bm(X.odd*d0+X.even)
split(X)
# The split can be inverted by applying the `BladeMap` again, and multiplying by $d_0$
x = split(X)
bm(x)*d0
# ## Splitting a Bivector
# Given a random bivector $F$ in $\mathbb{D}$,
F = D.randomMV()(2)
F
# $F$ *splits* into a vector/bivector in $\mathbb{P}$
split(F)
# If $F$ is interpreted as the electromagnetic bivector, the Electric and Magnetic fields can be separated by grade
# +
E = split(F)(1)
iB = split(F)(2)
E
# -
iB
# ## Lorentz Transformations
# Lorentz Transformations are rotations in $\mathbb{D}$, which are implemented with Rotors.
# A rotor in G4 will, in general, have scalar, bivector, and quadvector components.
R = D.randomRotor()
R
# In this way, the effect of a Lorentz transformation on the electric and magnetic fields can be computed by rotating the bivector with $F \rightarrow RF\tilde{R}$
F_ = R*F*~R
F_
# Then splitting into $E$ and $B$ fields
E_ = split(F_)(1)
E_
iB_ = split(F_)(2)
iB_
# ## Lorentz Invariants
# Since Lorentz transformations are rotations in $\mathbb{D}$, the magnitudes of elements of $\mathbb{D}$ are invariant under the Lorentz transformation.
# For example, the magnitude of electromagnetic bivector $F$ is invariant, and it can be related to $E$ and $B$ fields in $\mathbb{P}$ through the split,
i = p123
E = split(F)(1)
B = -i*split(F)(2)
F**2
split(F**2) == E**2 - B**2 + (2*E|B)*i
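# As a plain-numpy cross-check of the invariant $E^2 - B^2$, independent of `clifford`: apply the standard boost transformation along $x$ to hypothetical field values (units with $c = 1$) and compare the invariant before and after.

```python
import numpy as np

E = np.array([1.0, 2.0, -0.5])  # hypothetical electric field
B = np.array([0.3, -1.0, 0.7])  # hypothetical magnetic field
beta = 0.6                      # boost speed along x
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Standard transformation of E and B under a boost along x
Ep = np.array([E[0], gamma * (E[1] - beta * B[2]), gamma * (E[2] + beta * B[1])])
Bp = np.array([B[0], gamma * (B[1] + beta * E[2]), gamma * (B[2] - beta * E[1])])

print(np.isclose(E @ E - B @ B, Ep @ Ep - Bp @ Bp))  # True
```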
| docs/tutorials/space-time-algebra.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !pip install html5lib beautifulsoup4
# # !pip install selenium
# -
from selenium import webdriver
from bs4 import BeautifulSoup
from time import sleep
import requests
import json
import re
from urllib import request
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
path=r"C:\Users\Kavin\Downloads\chromedriver_win32\chromedriver"
driver=webdriver.Chrome(path)
driver.get("https://www.melon.com/artist/song.htm?artistId=108316")
title = []
album = []
# release_date = []
# song_genre = []
# like = []
# lyrics = []
# creator = []
# artist = []
# artist_m = []
# debut_y = []
# debut_t = []
# agency = []
def list_clear():
title.clear()
album.clear()
# release_date.clear()
# song_genre.clear()
# like.clear()
# lyrics.clear()
# creator.clear()
# artist.clear()
# artist_m.clear()
# debut_y.clear()
# debut_t.clear()
# agency.clear()
def click_artist():
driver.find_element_by_xpath('//*[@id="conts"]/div[3]/div[1]/div[1]/div/a/strong').click()
artist.append(driver.find_element_by_xpath('//*[@id="conts"]/div[1]/div/div[2]/p').text)
artist_m.append(driver.find_element_by_xpath('//*[@id="conts"]/div[1]/div/div[2]/div').text)
debut_y.append(driver.find_element_by_xpath('//*[@id="conts"]/div[1]/div/div[2]/dl[1]/dd[1]').text)
debut_t.append(driver.find_element_by_xpath('//*[@id="conts"]/div[1]/div/div[2]/dl[1]/dd[1]/a/span[2]').text)
agency.append(driver.find_element_by_xpath('//*[@id="conts"]/div[1]/div/div[2]/dl[1]/dd[3]').text)
def append_list():
title.append(driver.find_element_by_xpath('//*[@id="downloadfrm"]/div/div/div[2]/div[1]/div[1]').text)
album.append(driver.find_element_by_xpath('//*[@id="downloadfrm"]/div/div/div[2]/div[2]/dl/dd[1]/a').text)
# release_date.append(driver.find_element_by_xpath('//*[@id="downloadfrm"]/div/div/div[2]/div[2]/dl/dd[2]').text)
# song_genre.append(driver.find_element_by_xpath('//*[@id="downloadfrm"]/div/div/div[2]/div[2]/dl/dd[3]').text)
# like.append(driver.find_element_by_xpath('//*[@id="d_like_count"]').text)
# try:
# lyrics.append(driver.find_element_by_xpath('//*[@id="d_video_summary"]').text)
# except NoSuchElementException as e:
# lyrics.append('')
    # # Handle songs that have no composer/lyricist info
# try:
# creator.append(driver.find_element_by_xpath('//*[@id="conts"]/div[3]/ul').text)
# except NoSuchElementException as e:
# creator.append('')
def get_page_num():
source = driver.page_source
bs = BeautifulSoup(source,'lxml')
page_length = []
page_length = bs.select('#pageObjNavgation > div > span > a')
    len(page_length) # if this is 9, there are 10 pages
return len(page_length)
source = driver.page_source
bs = BeautifulSoup(source,'lxml')
page_length = []
page_length = bs.select('#pageObjNavgation > div > span > a')
len(page_length) # if this is 9, there are 10 pages
# +
# search = driver.find_element_by_xpath("//*[@id='top_search']")
# search.send_keys(Keys.CONTROL + "a");
# search.clear()
# WebDriverWait(driver, timeout)
# search.send_keys("레드벨벳")
list_clear()
# # Search for the artist name
# search.send_keys(Keys.RETURN)
# click_artist()
# # Click the song
# driver.find_element_by_xpath('//*[@id="conts"]/div[3]/ul/li[3]/a').click()
index = 1
reset_time = 0
while True:
for i in range(1, len(page_length)+2):
print("==for문의 시작으로 왔습니다==")
while True:
link1 = '//*[@id="frm"]/div/table/tbody/tr['
link2 = ']/td[3]/div/div/a[1]/span'
            # Click the song-detail link
sleep(2)
driver.find_element_by_xpath(link1 +str(index)+ link2).click()
append_list()
print(len(title), title[len(title)-1])
            # Go back to the previous page
sleep(2)
driver.execute_script("window.history.go(-1)")
index += 1
if (len(title) % 50 == 0):
index = 1
print("50까지 왔습니다")
break
page_path1 = '//*[@id="pageObjNavgation"]/div/span/a['
page_path2 = ']'
        # when i is 9, the xpath element cannot be found
        if (i == 10) and (len(title) % 50 == 0) : # once the last (10th) page is reached
driver.find_element_by_xpath('//*[@id="pageObjNavgation"]/div/a[3]').click() # '다음' 버튼을 클릭하도록
continue
elif (len(page_length) == 1) and (len(title) % 50 == 0) :
driver.find_element_by_xpath('//*[@id="pageObjNavgation"]/div/a[4]').click() # 페이지개수가 2개이면'맨끝' 버튼 누르
driver.find_element_by_xpath(page_path1 + str(i) + page_path2).click()
print('변경전:', i)
i = i + 1
print('변경후', i)
continue
# # Exit the while loop when on the last page
# next_link_end = driver.find_element_by_xpath('//*[@id="pageObjNavgation"]/div/a')
# hi = next_link_end.get_attribute('class')
# if hi == 'btn_next disabled':
#     print('This is the last page')
#     break
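# The nested loops above are easier to follow as a plain-Python sketch of the visiting
# order they try to implement (50 songs per page, pages grouped in blocks of 10). The
# helper and its parameter names are hypothetical — it models only the control flow,
# not the Selenium calls.

```python
def crawl_order(num_pages, songs_per_page=50, pages_per_block=10):
    """Yield (page, row) pairs in the order the crawler visits them.

    Hypothetical helper: pages and rows are 1-based; after every
    `pages_per_block` pages the real crawler would press 'Next'.
    """
    for page in range(1, num_pages + 1):
        for row in range(1, songs_per_page + 1):
            yield page, row

visits = list(crawl_order(num_pages=2))
# 2 pages x 50 rows = 100 visits; the row index resets on each new page.
```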
| Crawling/melon_crawling/Melon_crawling_kavin_ver03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # GPUs
# :label:`sec_use_gpu`
#
# In :numref:`tab_intro_decade`, we discussed the rapid growth
# of computation over the past two decades.
# In a nutshell, GPU performance has increased
# by a factor of 1000 every decade since 2000.
# This offers great opportunities, but it also suggests
# that there is significant demand for such performance.
#
#
# In this section, we begin to discuss how to harness
# this computational performance for your research.
# We start with single GPUs; at a later point we will discuss
# how to use multiple GPUs and multiple servers (with multiple GPUs).
#
# Specifically, we will discuss how
# to use a single NVIDIA GPU for calculations.
# First, make sure you have at least one NVIDIA GPU installed.
# Then, download the [NVIDIA driver and CUDA](https://developer.nvidia.com/cuda-downloads)
# and follow the prompts to set the appropriate path.
# Once these preparations are complete,
# the `nvidia-smi` command can be used
# to view the graphics card information.
#
# + origin_pos=1 tab=["tensorflow"]
# !nvidia-smi
# + [markdown] origin_pos=4
# To run the programs in this section,
# you need at least two GPUs.
# Note that this might be extravagant for most desktop computers
# but it is easily available in the cloud, e.g.,
# by using the AWS EC2 multi-GPU instances.
# Almost all other sections do *not* require multiple GPUs.
# Instead, this is simply to illustrate
# how data flow between different devices.
#
# ## Computing Devices
#
# We can specify devices, such as CPUs and GPUs,
# for storage and calculation.
# By default, tensors are created in the main memory
# and then the CPU is used for calculations.
#
# + origin_pos=9 tab=["tensorflow"]
import tensorflow as tf
tf.device('/CPU:0'), tf.device('/GPU:0'), tf.device('/GPU:1')
# + [markdown] origin_pos=10
# We can query the number of available GPUs.
#
# + origin_pos=13 tab=["tensorflow"]
len(tf.config.experimental.list_physical_devices('GPU'))
# + [markdown] origin_pos=14
# Now we define two convenient functions that allow us
# to run code even if the requested GPUs do not exist.
#
# + origin_pos=17 tab=["tensorflow"]
def try_gpu(i=0):  #@save
    """Return gpu(i) if exists, otherwise return cpu()."""
    if len(tf.config.experimental.list_physical_devices('GPU')) >= i + 1:
        return tf.device(f'/GPU:{i}')
    return tf.device('/CPU:0')

def try_all_gpus():  #@save
    """Return all available GPUs, or [cpu(),] if no GPU exists."""
    num_gpus = len(tf.config.experimental.list_physical_devices('GPU'))
    devices = [tf.device(f'/GPU:{i}') for i in range(num_gpus)]
    return devices if devices else [tf.device('/CPU:0')]

try_gpu(), try_gpu(10), try_all_gpus()
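# The fallback rule in `try_gpu` can be stated framework-free. The tiny helper below is
# an illustrative assumption (it is not part of d2l or TensorFlow): it mirrors the
# "requested GPU if present, otherwise CPU" logic using only device-name strings.

```python
def pick_device(requested, available):
    """Return the device-name string try_gpu() would select.

    `requested` is the GPU index asked for; `available` is the number
    of physical GPUs. Falls back to the CPU when the index is out of range.
    """
    return f'/GPU:{requested}' if requested < available else '/CPU:0'

pick_device(0, 2)   # selects the first GPU
pick_device(10, 2)  # falls back to the CPU
```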
# + [markdown] origin_pos=18
# ## Tensors and GPUs
#
# By default, tensors are created on the CPU.
# We can query the device where the tensor is located.
#
# + origin_pos=21 tab=["tensorflow"]
x = tf.constant([1, 2, 3])
x.device
# + [markdown] origin_pos=22
# It is important to note that whenever we want
# to operate on multiple terms,
# they need to be on the same device.
# For instance, if we sum two tensors,
# we need to make sure that both arguments
# live on the same device---otherwise the framework
# would not know where to store the result
# or even how to decide where to perform the computation.
#
# ### Storage on the GPU
#
# There are several ways to store a tensor on the GPU.
# For example, we can specify a storage device when creating a tensor.
# Next, we create the tensor variable `X` on the first `gpu`.
# The tensor created on a GPU only consumes the memory of this GPU.
# We can use the `nvidia-smi` command to view GPU memory usage.
# In general, we need to make sure that we do not create data that exceed the GPU memory limit.
#
# + origin_pos=25 tab=["tensorflow"]
with try_gpu():
    X = tf.ones((2, 3))
X
# + [markdown] origin_pos=26
# Assuming that you have at least two GPUs, the following code will create a random tensor on the second GPU.
#
# + origin_pos=29 tab=["tensorflow"]
with try_gpu(1):
    Y = tf.random.uniform((2, 3))
Y
# + [markdown] origin_pos=30
# ### Copying
#
# If we want to compute `X + Y`,
# we need to decide where to perform this operation.
# For instance, as shown in :numref:`fig_copyto`,
# we can transfer `X` to the second GPU
# and perform the operation there.
# *Do not* simply add `X` and `Y`,
# since this will result in an exception.
# The runtime engine would not know what to do:
# it cannot find data on the same device and it fails.
# Since `Y` lives on the second GPU,
# we need to move `X` there before we can add the two.
#
# 
# :label:`fig_copyto`
#
# + origin_pos=33 tab=["tensorflow"]
with try_gpu(1):
    Z = X
print(X)
print(Z)
# + [markdown] origin_pos=34
# Now that the data are on the same GPU
# (both `Z` and `Y` are),
# we can add them up.
#
# + origin_pos=35 tab=["tensorflow"]
Y + Z
# + [markdown] origin_pos=38 tab=["tensorflow"]
# Imagine that your variable `Z` already lives on your second GPU.
# What happens if we still call `Z2 = Z` under the same device scope?
# It will return `Z` instead of making a copy and allocating new memory.
#
# + origin_pos=41 tab=["tensorflow"]
with try_gpu(1):
    Z2 = Z
Z2 is Z
# + [markdown] origin_pos=42
# ### Side Notes
#
# People use GPUs to do machine learning
# because they expect them to be fast.
# But transferring variables between devices is slow.
# So we want you to be 100% certain
# that you want to do something slow before we let you do it.
# If the deep learning framework just did the copy automatically
# without crashing then you might not realize
# that you had written some slow code.
#
# Also, transferring data between devices (CPU, GPUs, and other machines)
# is something that is much slower than computation.
# It also makes parallelization a lot more difficult,
# since we have to wait for data to be sent (or rather to be received)
# before we can proceed with more operations.
# This is why copy operations should be taken with great care.
# As a rule of thumb, many small operations
# are much worse than one big operation.
# Moreover, several operations at a time
# are much better than many single operations interspersed in the code
# unless you know what you are doing.
# This is the case since such operations can block if one device
# has to wait for the other before it can do something else.
# It is a bit like ordering your coffee in a queue
# rather than pre-ordering it by phone
# and finding out that it is ready when you are.
#
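# The claim that many small transfers lose to one big one can be made concrete with a
# simple cost model: each transfer pays a fixed per-call latency plus size divided by
# bandwidth. The latency and bandwidth figures below are illustrative assumptions,
# not measurements of any particular GPU or bus.
#
# ```python
# def transfer_seconds(num_transfers, bytes_each,
#                      latency_s=1e-5, bandwidth_bps=16e9):
#     # Every transfer pays a fixed latency plus size / bandwidth.
#     return num_transfers * (latency_s + bytes_each / bandwidth_bps)
#
# payload = 4 * 100 * 1000              # 400 KB of float32 values in total
# many_small = transfer_seconds(1000, payload // 1000)
# one_big = transfer_seconds(1, payload)
# # The batched transfer avoids 999 latency charges, so one_big < many_small.
# ```
#
# Under these assumed numbers the 1000 tiny transfers are dominated entirely by
# latency, which is exactly why batching wins.
#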
# Last, when we print tensors or convert tensors to the NumPy format,
# if the data is not in the main memory,
# the framework will copy it to the main memory first,
# resulting in additional transmission overhead.
# Even worse, it is now subject to the dreaded global interpreter lock
# that makes everything wait for Python to complete.
#
#
# ## Neural Networks and GPUs
#
# Similarly, a neural network model can specify devices.
# The following code puts the model parameters on the GPU.
#
# + origin_pos=45 tab=["tensorflow"]
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    net = tf.keras.models.Sequential([
        tf.keras.layers.Dense(1)])
# + [markdown] origin_pos=46
# We will see many more examples of
# how to run models on GPUs in the following chapters,
# simply since they will become somewhat more computationally intensive.
#
# When the input is a tensor on the GPU, the model will calculate the result on the same GPU.
#
# + origin_pos=47 tab=["tensorflow"]
net(X)
# + [markdown] origin_pos=48
# Let us confirm that the model parameters are stored on the same GPU.
#
# + origin_pos=51 tab=["tensorflow"]
net.layers[0].weights[0].device, net.layers[0].weights[1].device
# + [markdown] origin_pos=52
# In short, as long as all data and parameters are on the same device, we can learn models efficiently. In the following chapters we will see several such examples.
#
# ## Summary
#
# * We can specify devices for storage and calculation, such as the CPU or GPU.
# By default, data are created in the main memory
# and then use the CPU for calculations.
# * The deep learning framework requires all input data for calculation
# to be on the same device,
# be it CPU or the same GPU.
# * You can lose significant performance by moving data without care.
# A typical mistake is as follows: computing the loss
# for every minibatch on the GPU and reporting it back
# to the user on the command line (or logging it in a NumPy `ndarray`)
# will trigger a global interpreter lock which stalls all GPUs.
# It is much better to allocate memory
# for logging inside the GPU and only move larger logs.
#
# ## Exercises
#
# 1. Try a larger computation task, such as the multiplication of large matrices,
# and see the difference in speed between the CPU and GPU.
# What about a task with a small amount of calculations?
# 1. How should we read and write model parameters on the GPU?
# 1. Measure the time it takes to compute 1000
# matrix-matrix multiplications of $100 \times 100$ matrices
# and log the Frobenius norm of the output matrix one result at a time
# vs. keeping a log on the GPU and transferring only the final result.
# 1. Measure how much time it takes to perform two matrix-matrix multiplications
# on two GPUs at the same time vs. in sequence
# on one GPU. Hint: you should see almost linear scaling.
#
# + [markdown] origin_pos=55 tab=["tensorflow"]
# [Discussions](https://discuss.d2l.ai/t/270)
#
| d2l-en/tensorflow/chapter_deep-learning-computation/use-gpu.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Pandas
#
# [Pandas](http://pandas.pydata.org) is a Python Data Analysis library. It is an open source library that brings data structures and data analysis tools to the Python programming language. It adds a set of labelled array data structures to Python:
#
# 1. **Series** - A one-dimensional, homogeneously typed labelled array
# 2. **TimeSeries** - A Series whose index contains dates
# 3. **DataFrame** - A general two-dimensional labelled array with potentially heterogeneously typed columns. It is a fast and efficient data structure for data manipulation with integrated indexing.
# 4. **Panel** - A general three-dimensional labelled array
#
# In addition, it adds several tools for reading and writing data. It is frequently used to read, clean, align, and filter data.
#
# ## Series
#
# **Series** is a one-dimensional labelled array capable of holding any data type.
# +
# %matplotlib inline
import numpy as np
from pandas import Series
a = np.array([0.1, 1.2, 2.3, 3.4, 4.5])
print 'a =', a
print
s = Series([0.1, 1.2, 2.3, 3.4, 4.5])
print s
print
s1 = Series(a)
print s1
print
s2 = Series(a, index=['a', 'b', 'c', 'd', 'e'])
print s2
s1.plot()
# -
print "s2['a'] =", s2['a']
print 's2[0] =', s2[0]
print "s2[['a', 'c']] =", s2[['a', 'c']]
print "s2[[0, 2]] =", s2[[0, 2]]
print "s2[s2 > 2] =", s2[s2 > 2]
s3 = Series([0.1, 1.2, 2.3, 3.4, 4.5])
print s3
s3.index = ['a', 'b', 'c', 'd', 'e']
print s3
s3.head()
s3.describe()
summary = s3.describe()
print summary
print "Mean =", summary['mean']
# ## DataFrame
# +
from pandas import DataFrame
a = np.array([[1.0, 2], [3, 4]])
df = DataFrame(a)
print df
# -
df = DataFrame(np.array([[1, 2], [3, 4]]), columns=['a', 'b'])
print(df)
df = DataFrame(np.array([[1, 2], [3, 4]]))
df.columns = ['dogs', 'cats']
print df
s1 = Series(np.arange(0.0, 5))
s2 = Series(np.arange(1.0, 6))
print s1
print s2
DataFrame({'a': s1, 'b': s2})
s3 = Series(np.arange(0.0, 3))
print s3
DataFrame({'a': s1, 'b':s2, 'c': s3})
df1 = DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
print df1
df2 = DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])
print df2
df3 = df1 + df2
print df3
print df1 - df1.iloc[0]
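# The NaN pattern in `df1 + df2` above comes from label alignment: the result is indexed
# by the union of the labels, and any label missing on either side yields a missing
# value. A stdlib-only sketch of that rule, using `None` where pandas would use NaN
# (the dict-based helper is purely illustrative):

```python
def aligned_add(a, b):
    """Label-aligned addition over dicts; labels missing on either
    side yield None (pandas would produce NaN)."""
    keys = sorted(set(a) | set(b))
    return {k: (a[k] + b[k]) if k in a and k in b else None for k in keys}

aligned_add({'A': 1, 'B': 2, 'D': 4}, {'A': 10, 'B': 20, 'C': 30})
# {'A': 11, 'B': 22, 'C': None, 'D': None}
```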
# ## Reading and Writing Data in Text Format
#
# There are several functions available in Pandas to read and write data in different text formats:
#
# * **`read_csv`**: Load delimited data from a file, URL or file like object. Use comma as default delimiter
# * **`read_table`**: Load delimited data from a file, URL or file like object. Use **`\t`** as default delimiter
# * **`read_fwf`**: Read data in fixed-width column format (that is, no delimiters)
# * **`read_clipboard`**: Version of **`read_table`** that reads data from the clipboard. Useful for converting tables from web pages
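# The only difference between `read_csv` and `read_table` is the default delimiter.
# Since no sample file ships with this notebook, here is a stdlib-only sketch of the
# same idea using `csv.reader` on in-memory data; the country rows are made up for
# illustration.

```python
import csv
from io import StringIO

comma_data = StringIO("name,pop\nIndia,1380\nBrazil,213\n")
tab_data = StringIO("name\tpop\nIndia\t1380\nBrazil\t213\n")

comma_rows = list(csv.reader(comma_data))                # comma is what read_csv assumes
tab_rows = list(csv.reader(tab_data, delimiter='\t'))    # tab is what read_table assumes

# Both parse to the same table once the right delimiter is chosen.
```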
# +
from pandas import read_csv
df = read_csv('world.csv')
print df.columns
print df
# -
# ## Reading and Writing Data in Binary Format
# ## Reading Microsoft Excel Files
#
#
# +
from pandas import ExcelFile
xlFile = ExcelFile('india_steel.xls')
print xlFile.sheet_names
df = xlFile.parse("Sheet1")
print df
# -
# # PandaSql
#
# **PandaSql** performs SQL operations on data stored in memory as a **DataFrame**. This makes it a versatile tool to perform a variety of search, sort, add, delete or modify operations on data stored in a DataFrame.
# ### PandaSql Example - Design of Connections in Steel Structures
#
# Properties of Indian Standard steel sections from SP:6(1) have been typed into a Microsoft Excel 2003 file named **india_steel.xls**. We will read the data from this file into program memory, and perform SQL queries on it.
# from pandasql import sqldf
# +
from pandasql import sqldf
pysqldf = lambda q: sqldf(q, globals())
q = """SELECT Name, w, AX FROM df WHERE Name LIKE '%s%%' AND Ax > 15 ORDER BY AX LIMIT 3""" % ('ISMB')
print q
sec = pysqldf(q)
print "Type = ", type(sec), "\n", sec
# -
print sec.Name
s1 = sec.loc[0]
print s1
print "Type =", type(s1)
print s1['Name'], s1['w'], s1['AX']
print s1.Name, s1.w, s1.AX
# +
from pandas import read_csv
df = read_csv('parasite_data.csv', na_values=[" "])
print df
# -
# Retrieve the CSV file from this URL (Your computer must be connected to the Internet)
#
# https://raw.githubusercontent.com/rasbt/python_reference/master/Data/some_soccer_data.csv
# +
from __future__ import division, print_function
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/rasbt/python_reference/master/Data/some_soccer_data.csv')
df
# -
for c in df.columns:
    print(c, c.lower())
df.columns = [c.lower() for c in df.columns]
df.tail(3)
print(type(df.columns), df.columns)
print(df.columns[0])
print(df.columns[1])
df['salary'] = df['salary'].apply(lambda x: x.strip('$m'))
df.tail()
| Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/JimKing100/DS-Unit-3-Sprint-2-SQL-and-Databases/blob/master/module4-acid-and-database-scalability-tradeoffs/assignment_titanic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="PBRnYFAuOD24" colab_type="code" outputId="53ed3f73-3484-4c8c-9ddb-a23ca20c5b3f" colab={"base_uri": "https://localhost:8080/", "height": 122}
# !pip install psycopg2-binary
# + id="zVbYVnn8O20p" colab_type="code" colab={}
import psycopg2
# + id="_5Bl-i0rO741" colab_type="code" colab={}
host = 'salt.db.elephantsql.com'
user = 'wsatpgnz'
database = 'wsatpgnz'
password = '<PASSWORD>'
# + id="5n4urI4EO_tn" colab_type="code" colab={}
pg_conn = psycopg2.connect(database=database, user=user, password=password, host=host)
pg_cur = pg_conn.cursor()
# + id="6P9Yq1kWPKFO" colab_type="code" colab={}
def execute(query, conn):
    with conn:
        with conn.cursor() as cur:
            cur.execute(query)
            result = cur.fetchall()
    return list(result)
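# The fetch-all pattern in `execute` can be exercised without a live ElephantSQL
# instance. The sketch below uses the stdlib `sqlite3` module instead of psycopg2
# (sqlite3 cursors are *not* context managers, so the inner `with` is dropped); the
# table and rows are made up for illustration.

```python
import sqlite3

def execute_sqlite(query, conn):
    # Same idea as execute() above, adapted for sqlite3.
    with conn:                 # commits or rolls back the transaction
        cur = conn.cursor()
        cur.execute(query)
        result = cur.fetchall()
    return list(result)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE titanic (survived INTEGER)')
conn.executemany('INSERT INTO titanic VALUES (?)', [(1,), (0,), (1,)])
rows = execute_sqlite('SELECT COUNT(*) FROM titanic WHERE survived = 1', conn)
# rows == [(2,)]
```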
# + id="vPbUmuKZPx9N" colab_type="code" colab={}
query1 = 'SELECT COUNT(survived) \
FROM "public"."titanic" \
WHERE survived = 1'
# + id="tnRWIcpsRRcF" colab_type="code" colab={}
query2 = 'SELECT COUNT(survived) \
FROM "public"."titanic" \
WHERE survived = 0'
# + id="KbvJQyxNUiE8" colab_type="code" colab={}
query3 = 'SELECT pclass, COUNT(survived) \
FROM "public"."titanic" \
WHERE survived = 1 \
GROUP BY pclass \
ORDER BY pclass'
# + id="ztsNolbhW0yT" colab_type="code" colab={}
query4 = 'SELECT pclass, COUNT(survived) \
FROM "public"."titanic" \
WHERE survived = 0 \
GROUP BY pclass \
ORDER BY pclass'
# + id="lVvqf78_XPCl" colab_type="code" colab={}
query5 = 'SELECT survived, AVG(age) \
FROM "public"."titanic" \
GROUP BY survived \
ORDER BY survived'
# + id="PIWfWu2tZeam" colab_type="code" colab={}
query6 = 'SELECT pclass, AVG(age) \
FROM "public"."titanic" \
GROUP BY pclass \
ORDER BY pclass'
# + id="dh_l6AnUaTtE" colab_type="code" colab={}
query7 = 'SELECT pclass, AVG(fare) \
FROM "public"."titanic" \
GROUP BY pclass \
ORDER BY pclass'
# + id="HDh5hRbxa0IV" colab_type="code" colab={}
query8 = 'SELECT survived, AVG(fare) \
FROM "public"."titanic" \
GROUP BY survived \
ORDER BY survived'
# + id="9BGgFAYeb-gr" colab_type="code" colab={}
query9 = 'SELECT pclass, AVG(siblings) \
FROM "public"."titanic" \
GROUP BY pclass \
ORDER BY pclass'
# + id="UmSB7k6hc43q" colab_type="code" colab={}
query10 = 'SELECT survived, AVG(siblings) \
FROM "public"."titanic" \
GROUP BY survived \
ORDER BY survived'
# + id="3JJ2WjAfdbhr" colab_type="code" colab={}
query11 = 'SELECT pclass, AVG(parents) \
FROM "public"."titanic" \
GROUP BY pclass \
ORDER BY pclass'
# + id="qCK7ztJqd5Eb" colab_type="code" colab={}
query12 = 'SELECT survived, AVG(parents) \
FROM "public"."titanic" \
GROUP BY survived \
ORDER BY survived'
# + id="Lz3jg-daeWS5" colab_type="code" colab={}
query13 = 'SELECT name, COUNT(*) \
FROM "public"."titanic" \
GROUP BY name \
HAVING COUNT(*) > 1'
# + id="ovlqSA9bie2g" colab_type="code" outputId="c57acf87-3102-45ee-fdae-31f90f312b2c" colab={"base_uri": "https://localhost:8080/", "height": 54}
sec1 = 'SELECT name \
FROM "public"."titanic" \
WHERE (name LIKE '
sec2 = "'Mr.%'"
sec3 = ' OR name LIKE '
sec4 = "'Mrs.%'"
sec5 = ') AND siblings > 0'
query14 = sec1 + sec2 + sec3 + sec4 + sec5
print(query14)
# + id="5pgwAFmIPt6y" colab_type="code" outputId="f92c452f-ac58-4261-e53d-d5487206a94a" colab={"base_uri": "https://localhost:8080/", "height": 34}
answer = execute(query1, pg_conn)
print(answer[0][0], 'survivors on the Titanic')
# + id="uyRho9uyRe6G" colab_type="code" outputId="e89e1f5a-f6df-4dad-f9d1-938408e813ec" colab={"base_uri": "https://localhost:8080/", "height": 34}
answer = execute(query2, pg_conn)
print(answer[0][0], 'died on the Titanic')
# + id="dMEa1fNKU4EE" colab_type="code" outputId="5d5d350b-e4a2-4b3e-e2d8-cba4fbd6cf25" colab={"base_uri": "https://localhost:8080/", "height": 68}
answer = execute(query3, pg_conn)
print(answer[0][1], 'survivors from class', answer[0][0])
print(answer[1][1], 'survivors from class', answer[1][0])
print(answer[2][1], 'survivors from class', answer[2][0])
# + id="j5n5WBRoXAyc" colab_type="code" outputId="acb0d33d-db5e-4474-8319-a7a4fac98cca" colab={"base_uri": "https://localhost:8080/", "height": 68}
answer = execute(query4, pg_conn)
print(answer[0][1], 'died from class', answer[0][0])
print(answer[1][1], 'died from class', answer[1][0])
print(answer[2][1], 'died from class', answer[2][0])
# + id="fwdun7yvXxwc" colab_type="code" outputId="a6e6fbe7-9a43-4395-e607-5927deb15a89" colab={"base_uri": "https://localhost:8080/", "height": 51}
answer = execute(query5, pg_conn)
print('average age of those who died was', answer[0][1])
print('average age of those who survived was', answer[1][1])
# + id="bU_ZLvMCZqgW" colab_type="code" outputId="714632da-1a51-4e48-8572-ecd7d246129b" colab={"base_uri": "https://localhost:8080/", "height": 68}
answer = execute(query6, pg_conn)
print('class', answer[0][0], 'average age is', answer[0][1])
print('class', answer[1][0], 'average age is', answer[1][1])
print('class', answer[2][0], 'average age is', answer[2][1])
# + id="e_WIfJTIafVV" colab_type="code" outputId="cbd883a3-6fa2-46e1-c121-e171adc02f1f" colab={"base_uri": "https://localhost:8080/", "height": 68}
answer = execute(query7, pg_conn)
print('class', answer[0][0], 'average fare is', answer[0][1])
print('class', answer[1][0], 'average fare is', answer[1][1])
print('class', answer[2][0], 'average fare is', answer[2][1])
# + id="h8y6jncTbBz7" colab_type="code" outputId="2948cfd7-e2e0-44c3-ec50-faf6dac9858c" colab={"base_uri": "https://localhost:8080/", "height": 51}
answer = execute(query8, pg_conn)
print('average fare of those who died is', answer[0][1])
print('average fare of those who survived is', answer[1][1])
# + id="-vSXDndScbNQ" colab_type="code" outputId="1d739373-106e-44bc-fd90-a48085cc0bad" colab={"base_uri": "https://localhost:8080/", "height": 68}
answer = execute(query9, pg_conn)
print('class', answer[0][0], 'average # of siblings is', answer[0][1])
print('class', answer[1][0], 'average # of siblings is', answer[1][1])
print('class', answer[2][0], 'average # of siblings is', answer[2][1])
# + id="XgiV-lKMdF5k" colab_type="code" outputId="a1c2550a-4289-425f-8888-df78f30b1e08" colab={"base_uri": "https://localhost:8080/", "height": 51}
answer = execute(query10, pg_conn)
print('average # siblings of those who died is', answer[0][1])
print('average # siblings of those who survived is', answer[1][1])
# + id="4wVqh1Fqdr8s" colab_type="code" outputId="8b802cc2-ba31-460c-ddd4-6894a80489dd" colab={"base_uri": "https://localhost:8080/", "height": 68}
answer = execute(query11, pg_conn)
print('class', answer[0][0], 'average # of parents is', answer[0][1])
print('class', answer[1][0], 'average # of parents is', answer[1][1])
print('class', answer[2][0], 'average # of parents is', answer[2][1])
# + id="MX_oYjiAeDsL" colab_type="code" outputId="88097a70-10ba-4a51-e6d6-89799fbb219d" colab={"base_uri": "https://localhost:8080/", "height": 51}
answer = execute(query12, pg_conn)
print('average # parents of those who died is', answer[0][1])
print('average # parents of those who survived is', answer[1][1])
# + id="DWwUzpDQevF3" colab_type="code" outputId="5027098c-ef52-44f7-f141-e8707d0a883e" colab={"base_uri": "https://localhost:8080/", "height": 51}
answer = execute(query13, pg_conn)
print(answer)
print('No duplicate names')
# + id="1VTNPamnhUAQ" colab_type="code" outputId="dd5feb0e-8746-4f58-f4ed-d642dc6f8538" colab={"base_uri": "https://localhost:8080/", "height": 54}
answer = execute(query14, pg_conn)
print(answer)
| module4-acid-and-database-scalability-tradeoffs/assignment_titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="4_s8h-ilzHQc"
# # StyleGAN2
#
# This notebook demonstrates how to run NVIDIA's StyleGAN2 on Google Colab.
# Make sure to specify a GPU runtime.
#
# This notebook mainly adds a few convenience functions.
#
# For information on StyleGAN2, see:
#
# Paper: https://arxiv.org/abs/1812.04948
#
# Video: https://youtu.be/kSLJriaOumA
#
# Code: https://github.com/NVlabs/stylegan
#
# FFHQ: https://github.com/NVlabs/ffhq-dataset
#
# /<NAME>, 2019.
#
# + id="PzDuIoMcqfBT"
# %tensorflow_version 1.x
import tensorflow as tf
# Download the code
# !git clone https://github.com/NVlabs/stylegan2.git
# %cd stylegan2
# !nvcc test_nvcc.cu -o test_nvcc -run
print('Tensorflow version: {}'.format(tf.__version__) )
# !nvidia-smi -L
print('GPU Identified at: {}'.format(tf.test.gpu_device_name()))
# + id="cwVXBFaSuoIU"
# Download the model of choice
import argparse
import numpy as np
import PIL.Image
import dnnlib
import dnnlib.tflib as tflib
import re
import sys
from io import BytesIO
import IPython.display
import numpy as np
from math import ceil
from PIL import Image, ImageDraw
import imageio
import pretrained_networks
# Choose between these pretrained models - I think 'f' is the best choice:
# 1024×1024 faces
# stylegan2-ffhq-config-a.pkl
# stylegan2-ffhq-config-b.pkl
# stylegan2-ffhq-config-c.pkl
# stylegan2-ffhq-config-d.pkl
# stylegan2-ffhq-config-e.pkl
# stylegan2-ffhq-config-f.pkl
# 512×384 cars
# stylegan2-car-config-a.pkl
# stylegan2-car-config-b.pkl
# stylegan2-car-config-c.pkl
# stylegan2-car-config-d.pkl
# stylegan2-car-config-e.pkl
# stylegan2-car-config-f.pkl
# 256x256 horses
# stylegan2-horse-config-a.pkl
# stylegan2-horse-config-f.pkl
# 256x256 churches
# stylegan2-church-config-a.pkl
# stylegan2-church-config-f.pkl
# 256x256 cats
# stylegan2-cat-config-f.pkl
# stylegan2-cat-config-a.pkl
network_pkl = "gdrive:networks/stylegan2-ffhq-config-f.pkl"
# If downloads fails, due to 'Google Drive download quota exceeded' you can try downloading manually from your own Google Drive account
# network_pkl = "/content/drive/My Drive/GAN/stylegan2-ffhq-config-f.pkl"
print('Loading networks from "%s"...' % network_pkl)
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)
noise_vars = [var for name, var in Gs.components.synthesis.vars.items() if name.startswith('noise')]
# + id="Zxbhe4uLvF_a"
# Useful utility functions...
# Generates a list of images, based on a list of latent vectors (Z), and a list (or a single constant) of truncation_psi's.
def generate_images_in_w_space(dlatents, truncation_psi):
    Gs_kwargs = dnnlib.EasyDict()
    Gs_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
    Gs_kwargs.randomize_noise = False
    Gs_kwargs.truncation_psi = truncation_psi
    dlatent_avg = Gs.get_var('dlatent_avg')  # [component]

    imgs = []
    for row, dlatent in log_progress(enumerate(dlatents), name="Generating images"):
        # Truncate towards the average dlatent, then synthesize from the
        # truncated vector (the original computed `dl` but passed `dlatent`).
        dl = (dlatent - dlatent_avg) * truncation_psi + dlatent_avg
        row_images = Gs.components.synthesis.run(dl, **Gs_kwargs)
        imgs.append(PIL.Image.fromarray(row_images[0], 'RGB'))
    return imgs
def generate_images(zs, truncation_psi):
    Gs_kwargs = dnnlib.EasyDict()
    Gs_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
    Gs_kwargs.randomize_noise = False
    if not isinstance(truncation_psi, list):
        truncation_psi = [truncation_psi] * len(zs)

    imgs = []
    for z_idx, z in log_progress(enumerate(zs), size=len(zs), name="Generating images"):
        Gs_kwargs.truncation_psi = truncation_psi[z_idx]
        noise_rnd = np.random.RandomState(1)  # fix noise
        tflib.set_vars({var: noise_rnd.randn(*var.shape.as_list()) for var in noise_vars})  # [height, width]
        images = Gs.run(z, None, **Gs_kwargs)  # [minibatch, height, width, channel]
        imgs.append(PIL.Image.fromarray(images[0], 'RGB'))
    return imgs
def generate_zs_from_seeds(seeds):
    zs = []
    for seed_idx, seed in enumerate(seeds):
        rnd = np.random.RandomState(seed)
        z = rnd.randn(1, *Gs.input_shape[1:])  # [minibatch, component]
        zs.append(z)
    return zs

# Generates a list of images, based on a list of seeds for latent vectors (Z), and a list (or a single constant) of truncation_psi's.
def generate_images_from_seeds(seeds, truncation_psi):
    return generate_images(generate_zs_from_seeds(seeds), truncation_psi)

def saveImgs(imgs, location):
    for idx, img in log_progress(enumerate(imgs), size=len(imgs), name="Saving images"):
        file = location + str(idx) + ".png"
        img.save(file)
def imshow(a, format='png', jpeg_fallback=True):
    a = np.asarray(a, dtype=np.uint8)
    str_file = BytesIO()
    PIL.Image.fromarray(a).save(str_file, format)
    im_data = str_file.getvalue()
    try:
        disp = IPython.display.display(IPython.display.Image(im_data))
    except IOError:
        if jpeg_fallback and format != 'jpeg':
            print('Warning: image was too large to display in format "{}"; '
                  'trying jpeg instead.'.format(format))
            return imshow(a, format='jpeg')
        else:
            raise
    return disp

def showarray(a, fmt='png'):
    a = np.uint8(a)
    f = BytesIO()  # the original used StringIO, which is never imported
    PIL.Image.fromarray(a).save(f, fmt)
    IPython.display.display(IPython.display.Image(data=f.getvalue()))
def clamp(x, minimum, maximum):
    return max(minimum, min(x, maximum))

def drawLatent(image, latents, x, y, x2, y2, color=(255, 0, 0, 100)):
    buffer = PIL.Image.new('RGBA', image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(buffer)
    cy = (y + y2) / 2
    draw.rectangle([x, y, x2, y2], fill=(255, 255, 255, 180), outline=(0, 0, 0, 180))
    for i in range(len(latents)):
        mx = x + (x2 - x) * (float(i) / len(latents))
        h = (y2 - y) * latents[i] * 0.1
        h = clamp(h, cy - y2, y2 - cy)
        draw.line((mx, cy, mx, cy + h), fill=color)
    return PIL.Image.alpha_composite(image, buffer)
def createImageGrid(images, scale=0.25, rows=1):
    w, h = images[0].size
    w = int(w * scale)
    h = int(h * scale)
    height = rows * h
    cols = ceil(len(images) / rows)
    width = cols * w
    canvas = PIL.Image.new('RGBA', (width, height), 'white')
    for i, img in enumerate(images):
        img = img.resize((w, h), PIL.Image.ANTIALIAS)
        canvas.paste(img, (w * (i % cols), h * (i // cols)))
    return canvas
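# The placement arithmetic in createImageGrid is easy to check in isolation. This
# hypothetical helper repeats the same `%` / `//` math without touching PIL:

```python
from math import ceil

def grid_layout(n_images, rows, w, h):
    """Return (canvas_size, paste_positions) using the same arithmetic
    as createImageGrid: images fill each row left to right."""
    cols = ceil(n_images / rows)
    positions = [(w * (i % cols), h * (i // cols)) for i in range(n_images)]
    return (cols * w, rows * h), positions

size, pos = grid_layout(n_images=6, rows=2, w=10, h=10)
# canvas is 30x20: three columns of two rows, each cell 10x10
```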
def convertZtoW(latent, truncation_psi=0.7, truncation_cutoff=9):
    dlatent = Gs.components.mapping.run(latent, None)  # [seed, layer, component]
    dlatent_avg = Gs.get_var('dlatent_avg')  # [component]
    for i in range(truncation_cutoff):
        dlatent[0][i] = (dlatent[0][i] - dlatent_avg) * truncation_psi + dlatent_avg
    return dlatent

def interpolate(zs, steps):
    out = []
    for i in range(len(zs) - 1):
        for index in range(steps):
            fraction = index / float(steps)
            out.append(zs[i + 1] * fraction + zs[i] * (1 - fraction))
    return out
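# interpolate() is plain linear interpolation, so it can be sanity-checked on scalars
# instead of latent vectors. The copy below repeats the same logic; note that the end
# point of each segment is deliberately omitted, since it becomes the first frame of
# the next segment.

```python
def lerp(zs, steps):
    # Same linear interpolation as interpolate() above, on plain numbers.
    out = []
    for i in range(len(zs) - 1):
        for index in range(steps):
            fraction = index / float(steps)
            out.append(zs[i + 1] * fraction + zs[i] * (1 - fraction))
    return out

frames = lerp([0.0, 1.0], 4)
# frames == [0.0, 0.25, 0.5, 0.75]; the end point 1.0 would start the next segment
```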
# Taken from https://github.com/alexanderkuk/log-progress
def log_progress(sequence, every=1, size=None, name='Items'):
    from ipywidgets import IntProgress, HTML, VBox
    from IPython.display import display

    is_iterator = False
    if size is None:
        try:
            size = len(sequence)
        except TypeError:
            is_iterator = True
    if size is not None:
        if every is None:
            if size <= 200:
                every = 1
            else:
                every = int(size / 200)  # every 0.5%
    else:
        assert every is not None, 'sequence is iterator, set every'

    if is_iterator:
        progress = IntProgress(min=0, max=1, value=1)
        progress.bar_style = 'info'
    else:
        progress = IntProgress(min=0, max=size, value=0)
    label = HTML()
    box = VBox(children=[label, progress])
    display(box)

    index = 0
    try:
        for index, record in enumerate(sequence, 1):
            if index == 1 or index % every == 0:
                if is_iterator:
                    label.value = '{name}: {index} / ?'.format(
                        name=name,
                        index=index
                    )
                else:
                    progress.value = index
                    label.value = u'{name}: {index} / {size}'.format(
                        name=name,
                        index=index,
                        size=size
                    )
            yield record
    except:
        progress.bar_style = 'danger'
        raise
    else:
        progress.bar_style = 'success'
        progress.value = index
        label.value = "{name}: {index}".format(
            name=name,
            index=str(index or '?')
        )
# + id="q8VnyjDhiBQY"
from google.colab import drive
drive.mount('/content/drive')
# + id="BQIhdSRcXC-Q"
# generate some random seeds
seeds = np.random.randint(10000000, size=9)
print(seeds)
# show the seeds
imshow(createImageGrid(generate_images_from_seeds(seeds, 0.7), 0.7 , 3))
# + id="_aZvophLZQOw"
# Simple (Z) interpolation
zs = generate_zs_from_seeds([5015289 , 9148088 ])
latent1 = zs[0]
latent2 = zs[1]
number_of_steps = 9
imgs = generate_images(interpolate([latent1,latent2],number_of_steps), 1.0)
number_of_images = len(imgs)
imshow(createImageGrid(imgs, 0.4 , 3))
# + id="TwXUbkVJXckp"
# generating a MP4 movie
zs = generate_zs_from_seeds([421645,6149575,3487643,3766864 ,3857159,5360657,3720613])
number_of_steps = 10
imgs = generate_images(interpolate(zs,number_of_steps), 1.0)
# Example of reading a generated set of images, and storing as MP4.
# %mkdir out
movieName = 'out/mov.mp4'
with imageio.get_writer(movieName, mode='I') as writer:
    for image in log_progress(list(imgs), name="Creating animation"):
        writer.append_data(np.array(image))
# + id="Po7eQSxav8qj"
# In order to download files, you can use the snippet below - this often fails for me, though, so I prefer the 'Files' browser in the sidepanel.
from google.colab import files
files.download(movieName)
# + id="F252sUipCOgO"
# If you want to store files to your Google drive, run this cell...
from google.colab import drive
drive.mount('/content/gdrive')
import os
import time
print( os.getcwd() )
location = "/content/gdrive/My Drive/PythonTests"
print( os.listdir(location) )
# + id="GofpNwi5aLl9"
# more complex example, interpolating in W instead of Z space.
zs = generate_zs_from_seeds([421645,6149575,3487643,3766864 ,3857159,5360657,3720613 ])
# It seems my truncation_psi is slightly less efficient in W space - I probably introduced an error somewhere...
dls = []
for z in zs:
dls.append(convertZtoW(z ,truncation_psi=1.0))
number_of_steps = 100
imgs = generate_images_in_w_space(interpolate(dls,number_of_steps), 1.0)
# %mkdir out
movieName = 'out/mov.mp4'
with imageio.get_writer(movieName, mode='I') as writer:
for image in log_progress(list(imgs), name = "Creating animation"):
writer.append_data(np.array(image))
# + [markdown] id="rYdsgv4i6YPl"
# # Projecting images onto the generatable manifold
#
# StyleGAN2 comes with a projector that finds the closest generatable image based on any input image. This allows you to get a feeling for the diversity of the portrait manifold.
# + id="urzy8lw76j_r"
# !mkdir projection
# !mkdir projection/imgs
# !mkdir projection/out
# Now upload a single image to 'stylegan2/projection/imgs' (use the Files side panel). Image should be color PNG, with a size of 1024x1024.
# + id="IDLJBbpz6n4k"
# Convert uploaded images to TFRecords
import dataset_tool
dataset_tool.create_from_images("./projection/records/", "./projection/imgs/", True)
# Run the projector
import run_projector
import projector
import training.dataset
import training.misc
import os
def project_real_images(dataset_name, data_dir, num_images, num_snapshots):
proj = projector.Projector()
proj.set_network(Gs)
print('Loading images from "%s"...' % dataset_name)
dataset_obj = training.dataset.load_dataset(data_dir=data_dir, tfrecord_dir=dataset_name, max_label_size=0, verbose=True, repeat=False, shuffle_mb=0)
assert dataset_obj.shape == Gs.output_shape[1:]
for image_idx in range(num_images):
print('Projecting image %d/%d ...' % (image_idx, num_images))
images, _labels = dataset_obj.get_minibatch_np(1)
images = training.misc.adjust_dynamic_range(images, [0, 255], [-1, 1])
run_projector.project_image(proj, targets=images, png_prefix=dnnlib.make_run_dir_path('projection/out/image%04d-' % image_idx), num_snapshots=num_snapshots)
project_real_images("records","./projection",1,100)
# + id="OmjPpjFU6yq3"
# Create video
import glob
imgs = sorted(glob.glob("projection/out/*step*.png"))
target_imgs = sorted(glob.glob("projection/out/*target*.png"))
assert len(target_imgs) == 1, "More than one target found?"
target_img = imageio.imread(target_imgs[0])
movieName = "projection/movie.mp4"
with imageio.get_writer(movieName, mode='I') as writer:
for filename in log_progress(imgs, name = "Creating animation"):
image = imageio.imread(filename)
# Concatenate images with original target image
w,h = image.shape[0:2]
canvas = PIL.Image.new('RGBA', (w*2,h), 'white')
canvas.paste(Image.fromarray(target_img), (0, 0))
canvas.paste(Image.fromarray(image), (w, 0))
writer.append_data(np.array(canvas))
# + id="XGVarLre63dL"
# Now you can download the video (find it in the Files side panel under 'stylegan2/projection')
# To cleanup
# !rm projection/out/*.*
# !rm projection/records/*.*
# !rm projection/imgs/*.*
# StyleGAN2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
basedir=r'./ADE/images/training'
import os
from os import listdir
from os.path import isfile, join, isdir
#onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
from pathlib import Path
result = list(Path(basedir).rglob("*.[tT][xX][tT]"))
print(len(result),result[0:5])
def is_include_floor(txtfilepath):
    # True/False depending on whether the annotation file mentions 'floor'
    with open(txtfilepath) as f:
        return 'floor' in f.read()
i=0
floorfiles=[]
for txtp in result :
if is_include_floor(txtp):
i+=1
print(txtp)
floorfiles.append(txtp)
if i>10:
break
def get_img_seg_path(txtfilepath):
head,tail=os.path.split(txtfilepath)
img=tail.replace('_atr.txt','.jpg')
seg=tail.replace('_atr.txt','_seg.png')
img=os.path.join(head,img)
seg=os.path.join(head,seg)
return img,seg
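# The mapping above can be spot-checked on a sample path (the function is re-stated
# here, with `import os`, so the snippet runs standalone; the ADE filename is
# illustrative):

```python
import os

def get_img_seg_path(txtfilepath):
    head, tail = os.path.split(txtfilepath)
    img = os.path.join(head, tail.replace('_atr.txt', '.jpg'))
    seg = os.path.join(head, tail.replace('_atr.txt', '_seg.png'))
    return img, seg

img, seg = get_img_seg_path('ADE/images/training/a/ADE_train_00000001_atr.txt')
print(img)  # .../ADE_train_00000001.jpg
print(seg)  # .../ADE_train_00000001_seg.png
```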
f=floorfiles[3]
print(f)
img,seg=get_img_seg_path(f)
print(img,seg)
fileObject = open(f, "r")
data = fileObject.read()
print(data)
img=Image.open(img)
img
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
segdata=np.array(Image.open(seg))
plt.imshow(segdata)
classes=np.unique(segdata)
classes
segone=segdata.copy()
segone[segone!=classes[5]]=0
plt.imshow(segone)
segone=segdata.copy()
segone[segone!=classes[4]]=0
plt.imshow(segone)
segone=segdata.copy()
segone[segone!=classes[7]]=0
plt.imshow(segone)
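# The three cells above repeat the same pattern: zero out every pixel that does not
# belong to the chosen class. A boolean mask does this in one line — a sketch on a
# tiny synthetic segmentation map:

```python
import numpy as np

segdata = np.array([[0, 3, 3],
                    [5, 3, 0],
                    [5, 5, 3]])
class_id = 3
segone = np.where(segdata == class_id, segdata, 0)  # keep only class 3 pixels
print(segone)
print((segone == 3).sum())  # 4 pixels of class 3
```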
dstimg=r'./FloorData/Images'
dstmsk=r'./FloorData/Masks'
# +
from shutil import copyfile

def copy_file(srcpath, dstfilename):
    dstimg = r'./FloorData/Images'
    dstfile = os.path.join(dstimg, dstfilename)
    copyfile(srcpath, dstfile)  # copy the given source file, not the notebook-global `img`
# -
dstfile=os.path.join(dstimg,'1.jpg')
copyfile(img,dstfile)
np.array(segdata)
np.unique(segdata)
# prepare_training_data.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: deep-rl-notebooks-poetry
# language: python
# name: deep-rl-notebooks-poetry
# ---
# # Ch 5 - Actor-Critic Models
# ### Deep Reinforcement Learning in Action
# ##### Listing 5.1
import multiprocessing as mp
import numpy as np
def square(x): #A
return np.square(x)
x = np.arange(64) #B
print(x)
print(mp.cpu_count())
pool = mp.Pool(8) #C
squared = pool.map(square, [x[8*i:8*i+8] for i in range(8)])
print(squared)
# ##### Listing 5.2
# +
def square(i, x, queue):
print("In process {}".format(i,))
queue.put(np.square(x))
processes = [] #A
queue = mp.Queue() #B
x = np.arange(64) #C
for i in range(8): #D
start_index = 8*i
proc = mp.Process(target=square,args=(i,x[start_index:start_index+8], queue))
proc.start()
processes.append(proc)
for proc in processes: #E
proc.join()
for proc in processes: #F
proc.terminate()
results = []
while not queue.empty(): #G
results.append(queue.get())
print(results)
# -
# ##### Listing 5.4
# +
import torch
from torch import nn
from torch import optim
import numpy as np
from torch.nn import functional as F
import gym
import torch.multiprocessing as mp #A
class ActorCritic(nn.Module): #B
def __init__(self):
super(ActorCritic, self).__init__()
self.l1 = nn.Linear(4,25)
self.l2 = nn.Linear(25,50)
self.actor_lin1 = nn.Linear(50,2)
self.l3 = nn.Linear(50,25)
self.critic_lin1 = nn.Linear(25,1)
def forward(self,x):
x = F.normalize(x,dim=0)
y = F.relu(self.l1(x))
y = F.relu(self.l2(y))
actor = F.log_softmax(self.actor_lin1(y),dim=0) #C
c = F.relu(self.l3(y.detach()))
critic = torch.tanh(self.critic_lin1(c)) #D
return actor, critic #E
# -
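# The actor head returns log-probabilities via `F.log_softmax`. The transform itself is
# easy to sanity-check in NumPy (a sketch assumed equivalent, up to numerical detail,
# to the PyTorch call): exponentiating the output must give a distribution summing to 1.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()  # subtract max for numerical stability
    return x - np.log(np.exp(x).sum())

logits = np.array([2.0, 1.0, -1.0])
logp = log_softmax(logits)
probs = np.exp(logp)
print(round(probs.sum(), 6))  # 1.0
```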
# ##### Listing 5.6
def worker(t, worker_model, counter, params):
worker_env = gym.make("CartPole-v1")
worker_env.reset()
worker_opt = optim.Adam(lr=1e-4,params=worker_model.parameters()) #A
worker_opt.zero_grad()
for i in range(params['epochs']):
worker_opt.zero_grad()
values, logprobs, rewards = run_episode(worker_env,worker_model) #B
actor_loss,critic_loss,eplen = update_params(worker_opt,values,logprobs,rewards) #C
counter.value = counter.value + 1 #D
# ##### Listing 5.7
def run_episode(worker_env, worker_model):
state = torch.from_numpy(worker_env.env.state).float() #A
values, logprobs, rewards = [],[],[] #B
done = False
j=0
while (done == False): #C
j+=1
policy, value = worker_model(state) #D
values.append(value)
logits = policy.view(-1)
action_dist = torch.distributions.Categorical(logits=logits)
action = action_dist.sample() #E
logprob_ = policy.view(-1)[action]
logprobs.append(logprob_)
state_, _, done, info = worker_env.step(action.detach().numpy())
state = torch.from_numpy(state_).float()
if done: #F
reward = -10
worker_env.reset()
else:
reward = 1.0
rewards.append(reward)
return values, logprobs, rewards
# ##### Listing 5.8
def update_params(worker_opt,values,logprobs,rewards,clc=0.1,gamma=0.95):
rewards = torch.Tensor(rewards).flip(dims=(0,)).view(-1) #A
logprobs = torch.stack(logprobs).flip(dims=(0,)).view(-1)
values = torch.stack(values).flip(dims=(0,)).view(-1)
Returns = []
ret_ = torch.Tensor([0])
for r in range(rewards.shape[0]): #B
ret_ = rewards[r] + gamma * ret_
Returns.append(ret_)
Returns = torch.stack(Returns).view(-1)
Returns = F.normalize(Returns,dim=0)
actor_loss = -1*logprobs * (Returns - values.detach()) #C
critic_loss = torch.pow(values - Returns,2) #D
loss = actor_loss.sum() + clc*critic_loss.sum() #E
loss.backward()
worker_opt.step()
return actor_loss, critic_loss, len(rewards)
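# The return computation in `update_params` flips the reward sequence and accumulates
# $R_t = r_t + \gamma R_{t+1}$. The same loop in plain Python (a sketch without the
# tensor machinery; like the listing, it keeps the returns last-step-first):

```python
def discounted_returns(rewards, gamma=0.95):
    """Accumulate discounted returns from the last step backwards."""
    ret, out = 0.0, []
    for r in rewards[::-1]:  # reverse: start from the final reward
        ret = r + gamma * ret
        out.append(ret)
    return out  # last step first, as in the listing

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))
# [1.0, 1.5, 1.75]
```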
# ##### Listing 5.5
# +
MasterNode = ActorCritic() #A
MasterNode.share_memory() #B
processes = [] #C
params = {
'epochs':1000,
'n_workers':7,
}
counter = mp.Value('i',0) #D
for i in range(params['n_workers']):
p = mp.Process(target=worker, args=(i,MasterNode,counter,params)) #E
p.start()
processes.append(p)
for p in processes: #F
p.join()
for p in processes: #G
p.terminate()
print(counter.value,processes[1].exitcode) #H
# -
# ##### Test the trained agent
# +
env = gym.make("CartPole-v1")
env.reset()
for i in range(100):
state_ = np.array(env.env.state)
state = torch.from_numpy(state_).float()
logits,value = MasterNode(state)
action_dist = torch.distributions.Categorical(logits=logits)
action = action_dist.sample()
state2, reward, done, info = env.step(action.detach().numpy())
if done:
print("Lost")
env.reset()
state_ = np.array(env.env.state)
state = torch.from_numpy(state_).float()
env.render()
# -
# ##### Listing 5.9
def run_episode(worker_env, worker_model, N_steps=10):
raw_state = np.array(worker_env.env.state)
state = torch.from_numpy(raw_state).float()
values, logprobs, rewards = [],[],[]
done = False
j=0
G=torch.Tensor([0]) #A
while (j < N_steps and done == False): #B
j+=1
policy, value = worker_model(state)
values.append(value)
logits = policy.view(-1)
action_dist = torch.distributions.Categorical(logits=logits)
action = action_dist.sample()
logprob_ = policy.view(-1)[action]
logprobs.append(logprob_)
state_, _, done, info = worker_env.step(action.detach().numpy())
state = torch.from_numpy(state_).float()
if done:
reward = -10
worker_env.reset()
else: #C
reward = 1.0
G = value.detach()
rewards.append(reward)
return values, logprobs, rewards, G
# ##### Listing 5.10
#Simulated rewards for 3 steps
r1 = [1,1,-1]
r2 = [1,1,1]
R1,R2 = 0.0,0.0
#No bootstrapping
for i in range(len(r1)-1, -1, -1):
    R1 = r1[i] + 0.99*R1
for i in range(len(r2)-1, -1, -1):
    R2 = r2[i] + 0.99*R2
print("No bootstrapping")
print(R1,R2)
#With bootstrapping
R1,R2 = 1.0,1.0
for i in range(len(r1)-1, -1, -1):
    R1 = r1[i] + 0.99*R1
for i in range(len(r2)-1, -1, -1):
    R2 = r2[i] + 0.99*R2
print("With bootstrapping")
print(R1,R2)
# notebooks/05_book.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Directional detection in WIMpy_NREFT
#
# First we'll load `WIMpy` and a bunch of other libraries:
# +
#This is a fudge so that the notebook reads the version of WIMpy
#in my local folder, rather than the pip-installed one...
import sys
sys.path.append("../WIMpy/")
#from WIMpy import DMUtils as DMU
import DMUtils as DMU
#We'll also import some useful libraries
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams.update({'font.size': 18,'font.family':'serif'})
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1
mpl.rcParams['xtick.minor.size'] = 3
mpl.rcParams['xtick.minor.width'] = 1
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1
mpl.rcParams['ytick.minor.size'] = 3
mpl.rcParams['ytick.minor.width'] = 1
mpl.rc('text', usetex=True)
mpl.rcParams['xtick.direction'] = 'in'
mpl.rcParams['ytick.direction'] = 'in'
mpl.rcParams['xtick.top'] = True
mpl.rcParams['ytick.right'] = True
from tqdm import tqdm
from scipy.interpolate import interp1d
# -
# -----------
# ### The Radon Transform
#
# The Radon transform $\hat{f}(v_\mathrm{min}, \cos\theta, \phi)$ is the equivalent of the velocity integral in non-directional detection. More information can be found at [hep-ph/0209110](https://arxiv.org/abs/hep-ph/0209110).
#
# Here, $\cos\theta, \phi$ is the direction of the recoiling nucleus.
#
# In the function `calcRT`, the direction of the recoiling nucleus is measured from the mean recoil direction (essentially anti-parallel to the Earth's motion). So $\theta = 0$ is along the mean recoil direction. This means that the Radon Transform only depends on $\theta$, not $\phi$.
# +
v_list = np.linspace(0, 801,1000)
theta_list = np.linspace(0, np.pi, 101)
v_grid, theta_grid = np.meshgrid(v_list, theta_list)
RT_grid = DMU.calcRT(v_grid, theta_grid)
# -
# Now let's plot it:
# +
plt.figure()
plt.contourf(v_grid, theta_grid/np.pi, 1e3*RT_grid)
plt.xlabel(r'$v_\mathrm{min}$ [km/s]')
plt.ylabel(r'$\theta/\pi$')
plt.colorbar(label=r'$\hat{f}(v,\theta,\phi)$ [$10^{-3}$ s/km]')
plt.show()
# -
# And again, in polar coordinates:
# +
plt.figure()
ax = plt.subplot(111, projection='polar')
c = ax.contourf( theta_grid,v_grid, 1e3*RT_grid)
ax.contourf(np.pi+theta_grid[::-1],v_grid, 1e3*RT_grid) #Plot twice to get both sides of the polar plot...
#plt.xlabel(r'$v_\mathrm{min}$ [km/s]')
#plt.ylabel(r'$\theta/\pi$')
plt.colorbar(c,label=r'$\hat{f}(v,\theta,\phi)$ [$10^{-3}$ s/km]')
plt.show()
# -
# Now let's check that the Radon Transform is correctly normalised. By definition, if we integrate over all recoil directions, we should get the velocity integral:
#
# $$ \eta(v_\mathrm{min}) = \int_{v_\mathrm{min}}^{\infty} \frac{f(\mathbf{v})}{v}\,\mathrm{d}^3\mathbf{v} = \frac{1}{2\pi}\oint \hat{f}(v_\mathrm{min}, \hat{\mathbf{q}})\,\mathrm{d}\Omega_q $$
#
# Note that the integral over $\phi$ in this case contributes the factor of $2\pi$.
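# The angular part of this check is plain quadrature over the sphere. As a standalone
# sanity test of the same pattern (no `WIMpy` needed), integrating 1 over solid angle
# with `np.trapz` should give $4\pi$:

```python
import numpy as np

theta = np.linspace(0, np.pi, 1001)
# dOmega = sin(theta) dtheta dphi; for an azimuthally symmetric integrand
# the phi integral contributes a factor of 2*pi
solid_angle = 2 * np.pi * np.trapz(np.sin(theta), theta)
print(abs(solid_angle - 4 * np.pi) < 1e-4)  # True
```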
# +
integral = 2*np.pi*np.trapz(np.sin(theta_grid)*RT_grid, theta_grid, axis=0)
plt.figure()
plt.plot(v_list, integral/(2*np.pi), label='Integral over RT')
plt.plot(v_list, DMU.calcEta(v_list), linestyle='--', label='$\eta(v)$')
plt.xlabel('v [km/s]')
plt.ylabel('Velocity integral [s/km]')
plt.legend()
plt.show()
# -
# --------
# ### The Modified Radon Transform
#
# When the cross section depends on $v^2$, we need to calculate what I'm calling the *modified* Radon Transform. See e.g. [arXiv:1505.07406](https://arxiv.org/abs/1505.07406) or [arXiv:1505.06441](https://arxiv.org/abs/1505.06441).
# +
v_list = np.linspace(0, 801,1000)
theta_list = np.linspace(0, np.pi,1001)
v_grid, theta_grid = np.meshgrid(v_list, theta_list)
MRT_grid = DMU.calcMRT(v_grid, theta_grid)
# -
# Now let's plot it:
# +
plt.figure()
plt.contourf(v_grid, theta_grid/np.pi, 1e6*MRT_grid)
plt.xlabel(r'$v_\mathrm{min}$ [km/s]')
plt.ylabel(r'$\theta/\pi$')
plt.colorbar(label=r'$\hat{f}^{T}(v,\theta,\phi)$ [$10^{-6}$ s/km]')
plt.show()
# -
# And again, in polar coordinates:
# +
plt.figure()
ax = plt.subplot(111, projection='polar')
c = ax.contourf( theta_grid,v_grid, 1e6*MRT_grid)
ax.contourf(np.pi+theta_grid[::-1],v_grid, 1e6*MRT_grid) #Plot twice to get both sides of the polar plot...
#plt.xlabel(r'$v_\mathrm{min}$ [km/s]')
#plt.ylabel(r'$\theta/\pi$')
plt.colorbar(c,label=r'$\hat{f}^{T}(v,\theta,\phi)$ [$10^{-6}$ s/km]')
plt.show()
# +
Mintegral = 2*np.pi*np.trapz(np.sin(theta_grid)*MRT_grid, theta_grid, axis=0)
plt.figure()
plt.plot(v_list, Mintegral/(2*np.pi), label='Integral over MRT')
plt.plot(v_list, DMU.calcMEta(v_list), linestyle='--', label='Modified $\eta(v)$')
plt.xlabel('v [km/s]')
plt.ylabel('Velocity integral [s/km]')
plt.legend()
plt.show()
# -
# ### Directional rates
#
# Here, we'll calculate some recoil distributions, as a function of $E_R$ and the angle between the recoil and the mean DM flux direction $\theta$. We'll consider a Xenon detector for concreteness.
# +
N_p_Xe = 54
N_n_Xe = 77
m_x = 10 #GeV
sig = 1e-40 #cm^2
# +
E_list = np.logspace(-1,1)
theta_list = np.linspace(0, np.pi)
E_grid, theta_grid = np.meshgrid(E_list, theta_list)
Rate_standard = DMU.dRdEdOmega_standard(E_grid, theta_grid, N_p_Xe, N_n_Xe, m_x, sig)
# +
plt.figure()
plt.contourf(E_grid, theta_grid/np.pi, Rate_standard)
plt.xlabel(r'$E_R$ [keV]')
plt.ylabel(r'$\theta/\pi$')
plt.colorbar(label=r'$\mathrm{d}R/\mathrm{d}\cos\theta\mathrm{d}\phi$ [arb. units]')
plt.title("Standard SI Interactions",fontsize=14.0)
plt.show()
# -
# And now, some non-standard interaction. Let's try $\mathcal{O}_7$:
# +
cp = np.zeros(11)
cn = np.zeros(11)
cp[6] = 1.0
cn[6] = 1.0
Rate_O7 = DMU.dRdEdOmega_NREFT(E_grid, theta_grid, m_x, cp, cn, "Xe131")
# +
plt.figure()
plt.contourf(E_grid, theta_grid/np.pi, Rate_O7)
plt.xlabel(r'$E_R$ [keV]')
plt.ylabel(r'$\theta/\pi$')
plt.colorbar(label=r'$\mathrm{d}R/\mathrm{d}\cos\theta\mathrm{d}\phi$ [arb. units]')
plt.title("NREFT: $\mathcal{O}_7$",fontsize=12.0)
plt.show()
# -
# Now let's integrate over energies and calculate directional spectra (we'll also normalise 'per recoil'):
# +
Dir_SI = np.trapz(Rate_standard, E_grid, axis=1)
Dir_SI /= np.trapz(np.sin(theta_list)*Dir_SI, theta_list)
Dir_O7 = np.trapz(Rate_O7, E_grid, axis=1)
Dir_O7 /= np.trapz(np.sin(theta_list)*Dir_O7, theta_list)
# -
# *Note that we're being careful about the distinction between $P(\cos\theta)$ and $P(\theta) = \sin\theta\, P(\cos\theta)$...*
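# For an isotropic distribution, $P(\cos\theta) = 1/2$ on $\cos\theta \in [-1, 1]$, and
# the corresponding $P(\theta) = \sin\theta\,P(\cos\theta)$ must still integrate to 1
# over $\theta \in [0, \pi]$ — a quick numerical check of that distinction:

```python
import numpy as np

theta = np.linspace(0, np.pi, 1001)
p_costheta = 0.5                      # isotropic: uniform in cos(theta) on [-1, 1]
p_theta = np.sin(theta) * p_costheta  # change of variables to theta
norm = np.trapz(p_theta, theta)
print(abs(norm - 1.0) < 1e-5)  # True
```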
# +
plt.figure()
plt.plot(theta_list/np.pi, Dir_SI, label="Standard SI")
plt.plot(theta_list/np.pi, Dir_O7, label="NREFT $\mathcal{O}_7$")
plt.legend()
plt.xlabel(r'$\theta/\pi$')
plt.ylabel(r'$P(\mathrm{cos}\theta)$')
plt.title("Xenon Recoils, $m_\chi = " + str(m_x) + " \,\,\mathrm{GeV}$",fontsize=14)
plt.show()
# -
# ### Directional rates in lab coordinates
#
# We can also calculate the rate as a function of $(\theta_l, \phi_l)$, the angles measured in a lab-fixed reference frame with $(N, W, Z)$ axes. $\theta_l = 0$ corresponds to recoils going directly upwards in the lab frame. $\phi_l = 0$ corresponds to recoils pointing North (or possibly South, depending on the signs of things...)
#
# To do this calculation, we have to specify the location of the detector (we'll choose Amsterdam) and the time, we'll choose my birthday:
# +
lat = 52 #degrees N
lon = 5 #degrees E
JD = DMU.JulianDay(3, 15, 1921, 6)
# +
theta_lab_list = np.linspace(0, np.pi)
phi_lab_list = np.linspace(0,2*np.pi)
tlab_grid, plab_grid = np.meshgrid(theta_lab_list, phi_lab_list)
theta_vals = DMU.calcAngleFromMean(tlab_grid, plab_grid, lat=lat, lon=lon, JD=JD)
# +
plt.figure()
plt.contourf(tlab_grid/np.pi, plab_grid/np.pi, theta_vals/np.pi)
plt.colorbar( label=r'$\theta/\pi$', ticks=np.linspace(0, 1,11))
plt.xlabel(r'$\theta_l/\pi$')
plt.ylabel(r'$\phi_l/\pi$')
plt.show()
# -
# Remember, $\theta$ measures the angle between the lab-fixed direction $(\theta_l, \phi_l)$ and the mean direction of the DM recoils (parallel to the mean DM flux).
#
# At this particular time of day, the DM flux is roughly parallel to the direction $\theta_l \sim \pi$, so the mean recoil direction is pointing slightly off the vertical downwards direction.
# By using $(\theta_l, \phi_l)$ we can specify the directions which would be measured by a detector (or, say, a rock) which is fixed in position on the Earth over a long period of time.
JD_list = DMU.JulianDay(3, 15, 1921, 6) + np.linspace(0, 1) #List of times over 1 day
# +
#Directional spectrum (per recoil) in Xenon for SI interactions
Dir_SI_interp = interp1d(theta_list, Dir_SI)
Dir_grid_times = np.zeros((len(theta_lab_list), len(theta_lab_list), len(JD_list)))
#Calculate the directional spectrum at every time step
for i in range(len(JD_list)):
Dir_grid_times[:,:,i] = Dir_SI_interp(DMU.calcAngleFromMean(tlab_grid, plab_grid, lat=lat, lon=lon, JD=JD_list[i]))
#Now integrate over times
Dir_grid = np.trapz(Dir_grid_times, JD_list, axis=-1)
# -
# So now, the distribution of recoils over the course of one day looks like this:
# +
plt.figure()
plt.contourf(tlab_grid/np.pi, plab_grid/np.pi, Dir_grid)
plt.colorbar( label=r'$P(\cos\theta)$')
#The direction of the North-South rotation axis of the Earth,
# in (N,W,Z) lab-fixed coordinates.
v_NSaxis = [np.cos(lat*np.pi/180.0), 0, np.sin(lat*np.pi/180.0)]
dotprod = (np.sin(tlab_grid)*np.cos(plab_grid)*v_NSaxis[0]+ \
np.sin(tlab_grid)*np.sin(plab_grid)*v_NSaxis[1] + \
np.cos(tlab_grid)*v_NSaxis[2])
plt.contour(tlab_grid/np.pi, plab_grid/np.pi, np.arccos(dotprod), colors='grey')
plt.xlabel(r'$\theta_l/\pi$')
plt.ylabel(r'$\phi_l/\pi$')
plt.show()
# -
# The recoils appear mostly to be pointing downwards ($\theta_l = \pi$), because the DM flux comes mostly from overhead in the Northern hemisphere.
#
# We've also added grey contours, which correspond to contours of constant angle, as measured (in the lab-fixed frame) from the direction of the North-South rotation axis of the Earth. As you can see, the recoil rate now depends *only* on this angle (because we've washed out any anisotropy along the perpendicular direction).
# Examples/Directional.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # _Narrative Tweet Exploration_
#
# **NOTE**: When this notebook was originally produced it was not in the `playground_nbs` folder. It was located in the `experiments` folder, so if any further experimentation is to be done, you should consider moving it to that location first.
import pandas as pd
from pathlib import Path
import glob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.preprocessing import MinMaxScaler
import umap
import pickle
import matplotlib.pyplot as plt
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import HoverTool, ColumnDataSource, CategoricalColorMapper
from bokeh.palettes import Spectral10
# %config InlineBackend.figure_format = 'retina'
df = pd.read_json('playground_data/narratives_june112020.json', orient='columns')
df.info()
print(
f'There are {len(df.narrative.unique())} narratives, including:\n\n{df.narrative.unique()}'
)
# +
# twint command to gather tweets for given search term
# #! twint -s 'search term(s)' --lang en --min-retweets 10 --popular-tweets --filter-retweets --limit 25 search_terms.json --json
# -
def path_and_files():
folder_name = input('Enter name of data sub-folder where JSONs are stored:\n')
datapath = Path.cwd().parent / 'data' / folder_name
narrative_files = glob.glob(f'{datapath}/*/*.json')
return datapath, narrative_files
# +
#def load_data(datapath, filename):
# df = pd.read_json(
# datapath / filename,
# lines=True
# )
# return df.sample(frac=0.5, random_state=1)
def glob_load(filename):
df = pd.read_json(
filename,
lines=True
)
df['narrative'] = str(filename.split('/')[8])
return df
def sentimentscore(df):
analyzer = SentimentIntensityAnalyzer()
df['sentiment'] = df['tweet'].apply(
lambda tweet: analyzer.polarity_scores(tweet)['compound']
)
return df
def cleandata(df):
return df[['created_at', 'id', 'tweet', 'sentiment', 'narrative']]
def data_wrapper():
datapath = path_to_data()
filename = input('What is the filename?\n')
df = load_data(datapath, filename)
df = sentimentscore(df)
df = cleandata(df)
return df
#dfclean = data_wrapper().sort_values(by='sentiment')
# -
def load_all_narratives(narrative_files):
    # one concat over all files (the comprehension already iterates them)
    df = pd.concat([glob_load(file) for file in narrative_files])
    df = sentimentscore(df)
    df = cleandata(df).reset_index(drop=True)
    return df
datapath, narrative_files = path_and_files()
df = load_all_narratives(narrative_files)
df.info()
print(
f'There are {len(df.narrative.unique())} narratives, including:\n\n{df.narrative.unique()}'
)
# ### _Clean Text_
def emoji_replace(text):
# first demojize text
text = emoji.demojize(text) # use_aliases=True)
regex = re.compile(r'(?<=:)(\S+)(?=:)', re.I)
#text = punctuation_replace(new_text)
#regex = re.compile(r"(:\S+:)", re.I)
for item in regex.finditer(text):
emojistr = str(item.group()) #.replace(r'_',''))
#print(emojistr)
#itemstring = item.group()
text = re.sub(f'(?::)({emojistr})(?::)', str(' ' + emojistr.replace(r'_', '') + ' '), text)
return text
# +
# %%writefile tweet_clean.py
import re
import string
import emoji
from nltk.tokenize import RegexpTokenizer, regexp_tokenize
from spacy.lang.en.stop_words import STOP_WORDS
def newline_remove(text):
regex = re.compile(r'\n+', re.I)
text = regex.sub(' ', text)
return text
def replace_coronavirus(text):
regex = re.compile(r'(corona[\s]?virus)', re.I)
return regex.sub('coronavirus', text)
def coronavirus_hashtags(text):
regex = re.compile(r'#(coronavirus)\b', re.I)
return regex.sub('xxhash coronavirus', text)
def replace_covid(text):
regex = re.compile(r'(covid[-\s_]?19)|covid', re.I)
return regex.sub('covid19', text)
def covid_hashtags(text):
regex = re.compile(r'#(covid[_-]?(19))', re.I)
return regex.sub('xxhash covid19', text)
def sarscov2_replace(text):
regex = re.compile(r'(sars[-]?cov[-]?2)', re.I)
return regex.sub(r'sarscov2', text)
def emoji_replace(text):
# first demojize text
text = emoji.demojize(text) # use_aliases=True)
regex = re.compile(r'(?<=:)(\S+)(?=:)', re.I)
for item in regex.finditer(text):
emojistr = str(item.group())
replacestr = str(' xxemoji ' + emojistr.replace(r'_', '') + ' ')
pattern = r"(?::)" + re.escape(emojistr) + r"(?::)"
# r'(?::)({emojistr})(?::)'
text = re.sub(pattern, replacestr, text)
return text
def twitterpic_replace(text):
regex = re.compile(r"pic.twitter.com/\w+", re.I)
return regex.sub(" xxpictwit ", text)
def youtube_replace(text):
regex = re.compile(r"(https://youtu.be/(\S+))|(https://www.youtube.(\S+))", re.I)
return regex.sub(" xxyoutubeurl ", text)
def url_replace(text):
regex1 = re.compile(r'(?:http|ftp|https)://(\S+)|(?:www)\.\S+\b', re.I)
regex2 = re.compile(r'((\b\S+)\.(?:[a-z+]{2,3}))', re.I)
text = regex1.sub(' xxurl ', text)
text = regex2.sub(' xxurl ', text)
return text
def punctuation_replace(text):
# put spaces between punctuation
PUNC = '!"$%&\'()*+,-./:;<=>?[\\]^_`{|}~…–”“’'
punct = r"[" + re.escape(PUNC) + r"]"
text = re.sub("(?<! )(?=" + punct + ")|(?<=" + punct + ")(?! )", r" ", text)
text = re.sub(r"[^\w\s]", 'xxpunc', text) # could replace with xxpunc
# remove any extra whitespace
text = re.sub(r'[ ]{2,}',' ',text)
return text
def clean_tweet_wrapper(text, nltk_tokenize=False, punc_replace=False):
PUNC = '!"$%&\'()*+,-./:;<=>?[\\]^_`{|}~…–”“’'
# removes newline characters from text
text = newline_remove(text)
# standardizes all instances of coronavirus in text
text = replace_coronavirus(text)
# replaces instances of #coronavirus with special token, xxhashcoronavirus
text = coronavirus_hashtags(text)
# standardizes all instances of covid19
text = replace_covid(text)
# replaces instances of #covid19 with special token, xxhashcovid19
text = covid_hashtags(text)
# standardizes SARS-Cov-2 to sarscov2
text = sarscov2_replace(text)
# removes hashtag characters
text = text.replace(r'#', 'xxhash ')
# removes @ character
text = text.replace(r'@', 'xxmention ')
# replace emojies with special token xxemoji
text = emoji_replace(text)
# replace pic.twitter.com links with special token, xxpictwit
text = twitterpic_replace(text)
# replace YouTube links with special token, xxyoutubeurl
text = youtube_replace(text)
# replace other URLs with special token, xxurl
text = url_replace(text)
# if nltk_tokenize parameter True, then use regexp_tokenize from nltk library
if nltk_tokenize == True:
tokens = RegexpTokenizer('\s+', gaps=True).tokenize(text)
text = ' '.join(
[''.join([char for char in word if char not in PUNC]) for word in tokens if word not in STOP_WORDS]
)
# if punc_replace set to True, replace all punctuations
if punc_replace == True:
text = punctuation_replace(text)
# remove any unnecessary whitespace
text = re.sub(r'[ ]{2,}',' ',text)
return text.strip()
# -
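# A couple of the normalizers written to `tweet_clean.py` above can be spot-checked in
# isolation (re-stated verbatim here so the snippet runs standalone):

```python
import re

def replace_covid(text):
    regex = re.compile(r'(covid[-\s_]?19)|covid', re.I)
    return regex.sub('covid19', text)

def replace_coronavirus(text):
    regex = re.compile(r'(corona[\s]?virus)', re.I)
    return regex.sub('coronavirus', text)

print(replace_covid('Covid-19 and COVID updates'))  # covid19 and covid19 updates
print(replace_coronavirus('Corona virus news'))     # coronavirus news
```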
df['processed_tweet'] = df['tweet'].apply(clean_tweet_wrapper)
df.to_json('playground_data/narratives_june10.json', orient='columns')
df.info()
# ### _Embeddings --> Generated via Google Colab_
#
# ### _UMAP_
Path.cwd().parent
def load_embeddings():
path = Path().cwd() / 'playground_data'
filename = input('What is the file name?\n')
pkl_file = open(
path / filename, 'rb'
)
X = pickle.load(pkl_file)
return X
X = load_embeddings()
def apply_umap(X):
dr = umap.UMAP(n_components=2, metric='cosine')
dr.fit(X)
X_dr = dr.transform(X)
return X_dr
def plot_reduced_2D(X_dr, dr_type):
_, ax = plt.subplots(figsize=(10,10))
ax.scatter(X_dr[:,0],X_dr[:,1], alpha=0.8)
ax.set_title(dr_type+' with 2 components')
plt.show()
X_dr = apply_umap(X)
plot_reduced_2D(X_dr, 'UMAP')
# ## _Bokeh_
tweet_embed_df = pd.DataFrame(X_dr, columns=('x', 'y'))
tweet_embed_df.head()
tweet_embed_df['tweet'] = [str(x) for x in df['tweet']]
tweet_embed_df['narrative'] = [str(x) for x in df['narrative']]
tweet_embed_df.head()
# experiments/playground_nbs/narrative_tweet_exploration.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#coding:utf-8
#dnn_train.py
import os
import sys
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Load the constants and forward-pass function defined in dnn_inference.py
import dnn_inference
# Neural-network hyperparameters
BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.5
LEARNING_RATE_DECAY = 0.99
REGULARAZTION_RATE = 0.0001
TRAINING_STEPS = 9001
MOVING_AVERAGE_DECAY = 0.99
PATH = os.path.abspath(os.path.dirname(sys.argv[0]))
# Path and filename for saving the model
MODEL_SAVE_PATH = PATH+'/output/'
MODEL_NAME = "model.ckpt"
INPUT_NODE,LAYER1_NODE,LAYER2_NODE,OUTPUT_NODE = dnn_inference.get_node_dims()
def train(data_set):
x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y_input')
# Returns a regularizer function that computes the L2 regularization term
regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)
# Use the forward pass defined in dnn_inference.py
y=dnn_inference.inference(x,regularizer)
# Global training step, initialized to 0
global_step = tf.Variable(0, trainable=False)
# Exponential moving average, controlled by the decay rate and the step count
variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
# Apply the moving average to all trainable variables
variables_averages_op = variable_averages.apply(tf.trainable_variables())
# Cross-entropy loss
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
# Mean cross-entropy over the batch
cross_entropy_mean = tf.reduce_mean(cross_entropy)
# Total loss = cross-entropy + regularization
loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
# Exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, data_set.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY)
# Backpropagation optimizer: each sess.run(train_step) updates every variable in GraphKeys.TRAINABLE_VARIABLES to reduce the loss on the current batch
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Update parameters and moving averages in one op
with tf.control_dependencies([train_step, variables_averages_op]):
train_op = tf.no_op(name='train')
saver = tf.train.Saver()
#Initialize the session and run the training loop
with tf.Session() as sess:
tf.global_variables_initializer().run()
for i in range(TRAINING_STEPS):
xs, ys = data_set.train.next_batch(BATCH_SIZE)
op, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
if i % 1000 == 0:
print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step)
def main(argv=None):
mnist = input_data.read_data_sets("/User/dawei/AI/DNN/data", one_hot=True)
#train(mnist)
print(mnist.train)
#if __name__ == '__main__':
# tf.app.run()
# -
main()
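The learning-rate schedule set up with `tf.train.exponential_decay` above can be sketched in pure Python (a hedged illustration of the formula, not the TF implementation; the numeric arguments below are made up):

```python
# Sketch: tf.train.exponential_decay computes
#   lr = base_lr * decay_rate ** (global_step / decay_steps)
# and, with staircase=True, truncates the exponent to an integer.
def exponential_decay(base_lr, global_step, decay_steps, decay_rate, staircase=False):
    exponent = global_step / decay_steps
    if staircase:
        exponent = int(exponent)  # decay only at whole multiples of decay_steps
    return base_lr * decay_rate ** exponent

print(exponential_decay(0.5, 550, 550, 0.99))  # after one full decay period: 0.5 * 0.99 = 0.495
```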
| test/xears/models/tf/tf_dnn_note.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 392, "status": "ok", "timestamp": 1623163667042, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="qkvB7jVUIhYE" outputId="195976e8-db81-4724-de5a-7e36f8923a94"
from google.colab import drive
drive.mount('/content/drive')
# + executionInfo={"elapsed": 820, "status": "ok", "timestamp": 1623163668221, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="wI2nMf2ZIl8V"
googleDrivePathPrefix = 'drive/My Drive/Colab Notebooks'
# + executionInfo={"elapsed": 89, "status": "ok", "timestamp": 1623163668222, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="UVAj1fb2JnIQ"
from os import path
import re
import pandas as pd
import numpy as np
# + executionInfo={"elapsed": 87, "status": "ok", "timestamp": 1623163668223, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="UQpuAdPWWF_D"
data_examples = []
with open(path.join(googleDrivePathPrefix,'data/cmn.txt'), 'r',encoding='utf8') as f:
for line in f.readlines():
data_examples.append(line)
# + executionInfo={"elapsed": 85, "status": "ok", "timestamp": 1623163668224, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="dOx72P4zWbvu"
english=[]
chinese=[]
for data in data_examples:
splits=re.split('\t',data)
english.append(splits[0])
chinese.append(splits[1])
# + executionInfo={"elapsed": 83, "status": "ok", "timestamp": 1623163668224, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="2TA38GEZWxz_"
df = pd.DataFrame(data={'english':english,'chinese':chinese})
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 81, "status": "ok", "timestamp": 1623163668225, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="yn7RLCNYXLPI" outputId="56ce117f-9289-43ff-98d1-44807d08d4f0"
df.head()
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 74, "status": "ok", "timestamp": 1623163668225, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="DDORjog6Zx_Y" outputId="a631b777-5993-4c2b-db03-7a982622aaed"
df.count()
# + [markdown] id="UIsbzErxXqZg"
# Filter texts with quotation marks `"`
#
# - The intention is to remove cases where the sentence has a conversational nature, e.g.
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 61, "status": "ok", "timestamp": 1623163668226, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="fJxWpWkubC8y" outputId="73dfd18e-80f3-431d-f8fe-03356d57204c"
df.loc[24690]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 58, "status": "ok", "timestamp": 1623163668227, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="LztS85v8Xpqz" outputId="995d2d92-0522-4f1b-9fe7-28ad7510c3f7"
conversational_sample_mask = df['english'].str.contains('" "')
print(f'Conversational example counts: {conversational_sample_mask.sum()}')
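A tiny standalone illustration of the mask built above: `str.contains('" "')` flags rows whose English text carries back-to-back quoted utterances (the two demo sentences here are made up):

```python
import pandas as pd

# Two synthetic rows: a plain sentence and a conversational one with '" "'.
df_demo = pd.DataFrame({'english': ['Hi there.', '"Stop!" "Why?" he asked.']})
mask = df_demo['english'].str.contains('" "')
print(mask.tolist())  # [False, True]
```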
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 54, "status": "ok", "timestamp": 1623163668228, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="pNYwuN3FYnyz" outputId="c447db04-f677-4d4c-814f-4be73c1e7af2"
df_conversational_samples = df[conversational_sample_mask]
df_conversational_samples.head()
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 53, "status": "ok", "timestamp": 1623163668230, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="YXibc2VAafO8" outputId="cb67dc5e-000a-4eb9-81b2-5d083947f5b7"
df_filtered_conversation = df[~conversational_sample_mask]
print(f'Data sample counts (filtered conversations):\n{df_filtered_conversation.count()}')
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 50, "status": "ok", "timestamp": 1623163668231, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="aUNivw7OcYgi" outputId="5e4fd704-f239-4d49-bad4-3dc9755fa2fd"
df_filtered_conversation.sample(5)
# + [markdown] id="Y-EYXQ9BiSQc"
# Filter Chinese text containing English characters, numbers, and non-essential punctuation marks (i.e. anything beyond `[。,?!]`).
#
# - To simplify separating individual characters in Chinese texts, we filter out samples with English characters, numbers, and non-essential punctuation marks.
# - Thus, we can simply split each Chinese text character by character and treat every element as an individual Chinese character or punctuation mark.
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 49, "status": "ok", "timestamp": 1623163668232, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="NJjSqJHVi7dl" outputId="60fb246b-a96a-4047-f75c-3a510ee6a8df"
chinese_with_eng_char_sample_mask = df['chinese'].str.contains("[A-Za-z0-9\-/•()《》「」“”\[\]\"]")
chinese_with_eng_char_sample_mask.sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 242} executionInfo={"elapsed": 46, "status": "ok", "timestamp": 1623163668233, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="0TiSVgSYjnfG" outputId="28c698f1-f79b-4f69-e94b-8920563dabfc"
df_chinese_with_eng_char_sample = df_filtered_conversation[chinese_with_eng_char_sample_mask]
df_chinese_with_eng_char_sample.sample(5)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 42, "status": "ok", "timestamp": 1623163668233, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="kVlYLFdNkQ9Z" outputId="0e7c40dd-2df6-400b-ebf9-9384756c8ba0"
df_filtered_final = df_filtered_conversation[~chinese_with_eng_char_sample_mask]
print(f'Data sample counts (filtered chinese text with english characters):\n{df_filtered_final.count()}')
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 39, "status": "ok", "timestamp": 1623163668233, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="N0JzenTpknRa" outputId="4973984f-38d9-4f3b-9fea-d6cd9d44d93c"
df_filtered_final.sample(5)
# + [markdown] id="2LupHefp7dYs"
# Replace ASCII punctuation marks in the Chinese text, e.g. `[.,?!]`, with their Chinese unicode counterparts `[。,?!]`.
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 37, "status": "ok", "timestamp": 1623163668234, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="2UXV6624rcvO" outputId="731a0826-886f-4ccb-86dd-f33bc8f66462"
chinese_with_ascii_punctuation_mask = df_filtered_final['chinese'].str.contains("[?.!,]")
chinese_with_ascii_punctuation_mask.sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 36, "status": "ok", "timestamp": 1623163668235, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="Om9x1OH_rxal" outputId="f4ffb10e-8cbf-4aca-8d35-b3170eb458e8"
df_chinese_with_ascii_punct = df_filtered_final[chinese_with_ascii_punctuation_mask]
ind = np.random.choice(df_chinese_with_ascii_punct.index,5)
df_chinese_with_ascii_punct.loc[ind[:]]
# + executionInfo={"elapsed": 34, "status": "ok", "timestamp": 1623163668236, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="mRtXv7BRw9r2"
def replace_punct(sentence):
sentence = re.sub(r"[?]", r"?", sentence)
sentence = re.sub(r"[.]", r"。", sentence)
sentence = re.sub(r"[!]", r"!", sentence)
sentence = re.sub(r"[,]", r",", sentence)
return sentence
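A quick self-contained check of `replace_punct`'s behavior (the function is redefined here so the snippet runs on its own; the sample sentence is made up):

```python
import re

# Same logic as the cell above: map each ASCII mark to its Chinese counterpart.
def replace_punct(sentence):
    sentence = re.sub(r"[?]", r"?", sentence)
    sentence = re.sub(r"[.]", r"。", sentence)
    sentence = re.sub(r"[!]", r"!", sentence)
    sentence = re.sub(r"[,]", r",", sentence)
    return sentence

print(replace_punct("你好吗?我很好."))  # -> 你好吗?我很好。
```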
# + colab={"base_uri": "https://localhost:8080/", "height": 315} executionInfo={"elapsed": 392, "status": "ok", "timestamp": 1623163668593, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="9dTkE8h8sGdA" outputId="e351e917-26e9-44e9-b3d7-2c6ec78835c4"
df_replace_punctuation = df_filtered_final
df_replace_punctuation['chinese'] = df_replace_punctuation['chinese'].apply(lambda x: replace_punct(x))
df_replace_punctuation[chinese_with_ascii_punctuation_mask].loc[ind[:]]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 20, "status": "ok", "timestamp": 1623163668595, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="W2sfPg7Uy9zJ" outputId="df1b1e50-7cd8-4806-dfe6-e28d73d20364"
df_replace_punctuation['chinese'].str.contains("[?.!,]").sum()
# + [markdown] id="pWR7BHrilhS1"
# Separate punctuation marks with additional space for the purpose of string splitting.
# + executionInfo={"elapsed": 17, "status": "ok", "timestamp": 1623163668595, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="_tw4i5vcc4du"
df_separate_punctuation = df_replace_punctuation.applymap(lambda x: re.sub(r"([.,?!。,?!])", r" \1 ", x))
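The substitution in the `applymap` above pads every punctuation mark with spaces so that a later whitespace split isolates each mark as its own token; for example:

```python
import re

# Capture any punctuation mark and re-emit it surrounded by spaces.
spaced = re.sub(r"([.,?!。,?!])", r" \1 ", "Hello, world!")
print(spaced)  # 'Hello ,  world ! '
```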
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 16, "status": "ok", "timestamp": 1623163668595, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="b7Xuh01feTwg" outputId="b59d4402-4efe-4a66-f759-38886d418d67"
df_separate_punctuation.sample(5)
# + [markdown] id="jQ7VwhVk0Egb"
# Mapping functions to do the text splitting.
#
# - Split English text on spaces.
# - Split Chinese text at every character boundary; the slice `[1:-2]` drops the leading empty string along with the trailing newline and empty string.
# + executionInfo={"elapsed": 15, "status": "ok", "timestamp": 1623163668596, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="ZKEIXKQ8e_t9"
df_individual_char = df_separate_punctuation
df_individual_char['english_split']=df_individual_char['english'].map(lambda x: re.split(' ',x))
df_individual_char['chinese_split']=df_individual_char['chinese'].map(lambda x: re.split('',x)[1:-2])
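Note that `re.split` with an empty pattern (supported since Python 3.7) splits at every character boundary, yielding empty strings at both ends, which is why the `[1:-2]` slice is applied above (the sample string here is made up):

```python
import re

# A short Chinese string as read from the file, newline included.
parts = re.split('', "嗨!\n")
print(parts)        # ['', '嗨', '!', '\n', '']
print(parts[1:-2])  # ['嗨', '!']  -- drops the empties and the trailing newline
```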
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 16, "status": "ok", "timestamp": 1623163668597, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="aekyBhRQmJFs" outputId="2bd41efa-2fce-4a46-928b-5b3418cc8ee7"
#ind = np.random.choice(df_individual_char['english'].count(),5)
ind = np.array([23215, 7732, 22636, 6434, 20126]) # fixed sample indices for reproducibility
ind
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 15, "status": "ok", "timestamp": 1623163668597, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="qRFjU06wfgvX" outputId="34c6903e-1641-4069-f7b5-8412c60d5ddb"
df_individual_char.loc[ind]
# + [markdown] id="GL_OETW00m33"
# Remove undesired characters in the text arrays.
#
# - Remove `''` (empty strings) from the English token arrays.
# - Remove `' '` (spaces) from the Chinese token arrays.
# + executionInfo={"elapsed": 15, "status": "ok", "timestamp": 1623163668598, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="c2bUcifzpCcN"
def remove_char(x, char):
x = np.asarray(x)
mask = x == np.repeat(char, len(x))
return x[~mask]
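A standalone check of `remove_char` (same boolean-masking logic as the cell above, redefined so the snippet runs on its own):

```python
import numpy as np

# Mask out every occurrence of a given token from an array of strings.
def remove_char(x, char):
    x = np.asarray(x)
    mask = x == np.repeat(char, len(x))
    return x[~mask]

print(remove_char(['Hi', '', 'there', ''], '').tolist())  # ['Hi', 'there']
```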
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 11, "status": "ok", "timestamp": 1623163668903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="JXiBOG8tm2md" outputId="9d0a98db-321f-4605-b2cb-e7430159dc92"
tmp=df_individual_char['english_split'].loc[ind[0]]
print(f'Raw input: {tmp}\n')
result = remove_char(tmp, '')
print(f'Removed char \'\': {result}')
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 7, "status": "ok", "timestamp": 1623163668903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="pfsjBP0uqLP3" outputId="c43c5748-e40e-4647-dcfc-34243b8d9618"
tmp=df_individual_char['chinese_split'].loc[ind[2]]
print(f'Raw input: {tmp}\n')
result = remove_char(tmp, ' ')
print(f'Removed char \' \': {result}')
# + executionInfo={"elapsed": 419, "status": "ok", "timestamp": 1623163669318, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="vpETGkmfqdPv"
df_remove_array_char = df_individual_char
df_remove_array_char['english_split']=df_remove_array_char['english_split'].map(lambda x: remove_char(x,''))
df_remove_array_char['chinese_split']=df_remove_array_char['chinese_split'].map(lambda x: remove_char(x,' '))
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 16, "status": "ok", "timestamp": 1623163669319, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="Dcdb8U47qvfl" outputId="f0d536f3-082c-4546-8567-a101397b5f07"
df_remove_array_char.loc[ind]
# + [markdown] id="v1hyWg8z1FRp"
# Append `<start>` and `<end>` token to the chinese text.
# + executionInfo={"elapsed": 16, "status": "ok", "timestamp": 1623163669322, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="hlI44d8Lr5Fa"
def append_token(x):
x=np.array(x,dtype='U7')
start_token_added = np.insert(x,0,"<start>",axis=0)
end_token_added = np.concatenate((start_token_added,["<end>"]),axis=0)
return end_token_added
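A standalone check of `append_token` (redefined from the cell above so the snippet runs on its own; the two-character input is made up):

```python
import numpy as np

def append_token(x):
    # dtype 'U7' fits the 7-character '<start>' token
    x = np.array(x, dtype='U7')
    start_token_added = np.insert(x, 0, "<start>", axis=0)
    return np.concatenate((start_token_added, ["<end>"]), axis=0)

print(append_token(['你', '好']).tolist())  # ['<start>', '你', '好', '<end>']
```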
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 15, "status": "ok", "timestamp": 1623163669323, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="EcZ7ZFrlsmoL" outputId="eed9a50d-d1e9-4a7f-d7f7-0a13c66f666a"
append_token(df_remove_array_char['chinese_split'].loc[ind[0]])
# + executionInfo={"elapsed": 744, "status": "ok", "timestamp": 1623163670055, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="8awrtaMhoYxf"
df_final = df_remove_array_char
df_final['chinese_split']=df_final['chinese_split'].map(lambda x: append_token(x))
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 24, "status": "ok", "timestamp": 1623163670057, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="XwHiM4a3vzza" outputId="33e1a8dd-6194-48f4-efc2-76adf2eaeadf"
for i in ind:
print(f'{i}-th sample:\n')
eng = df_final['english_split'].loc[i]
chn = df_final['chinese_split'].loc[i]
print(f'english: {eng}\n')
print(f'chinese: {chn}\n\n')
# + [markdown] id="MJCM9ILy1ZqP"
# Save the processed dataset.
# + executionInfo={"elapsed": 565, "status": "ok", "timestamp": 1623163670611, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="3w_5gkie1Yj0"
df_final.to_json(path.join(googleDrivePathPrefix,'data/cmn-processed.json'))
# + colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"elapsed": 30, "status": "ok", "timestamp": 1623163670612, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjyeB8nVwl4aahdaHXzAfTJ5tfTEuviw8xXhOsl=s64", "userId": "11132753723286663537"}, "user_tz": -480} id="4yD0N_2X2bfI" outputId="9089d07c-9ea2-4d1f-f0e2-ba8f4cebcb3c"
df_reproduced = pd.read_json(path.join(googleDrivePathPrefix,'data/cmn-processed.json'))
df_reproduced.sample(5)
| 01-DataPreprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install -q --upgrade git+https://github.com/mlss-skoltech/tutorials_week2.git#subdirectory=graph_neural_networks
# +
import pkg_resources
ZIP_PATH = pkg_resources.resource_filename('gnnutils', 'data/data.zip')
DATA_PATH = './data'
# !unzip -u {ZIP_PATH} -d ./
# +
from gnnutils import graph, coarsening, utils
import tensorflow as tf
import time, shutil
import numpy as np
import os, collections, sklearn
import scipy.sparse as sp
import matplotlib.pyplot as plt
import networkx as nx
# %matplotlib inline
# +
#Definition of some flags useful later in the code
flags = tf.app.flags
FLAGS = flags.FLAGS
# Graphs.
flags.DEFINE_integer('number_edges', 8, 'Graph: minimum number of edges per vertex.')
flags.DEFINE_string('metric', 'euclidean', 'Graph: similarity measure (between features).')
flags.DEFINE_bool('normalized_laplacian', True, 'Graph Laplacian: normalized.')
flags.DEFINE_integer('coarsening_levels', 4, 'Number of coarsened graphs.')
# Directories.
flags.DEFINE_string('dir_data', 'data_mnist', 'Directory to store data.')
# -
tf.app.flags.DEFINE_string('f', '', 'kernel')
# +
#Here we compute the original grid graph on which the images live, together with the
#coarsened versions used at each pooling level
def grid_graph(m):
z = graph.grid(m) # normalized nodes coordinates
dist, idx = graph.distance_sklearn_metrics(z, k=FLAGS.number_edges, metric=FLAGS.metric)
#dist contains the distance of the 8 nearest neighbors for each node indicated in z sorted in ascending order
#idx contains the indices of the 8 nearest neighbors for each node, sorted in ascending order by distance
A = graph.adjacency(dist, idx) # graph.adjacency() builds a sparse matrix out of the identified edges computing similarities as: A_{ij} = e^(-dist_{ij}^2/sigma^2)
return A, z
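The Gaussian edge weighting described in the comment above (A_ij = exp(-dist_ij^2 / sigma^2)) can be sketched in plain numpy; this is a hedged illustration, not the `gnnutils` implementation, and the choice of sigma as the mean kNN distance is an assumption:

```python
import numpy as np

def gaussian_adjacency(coords, k=2):
    # pairwise Euclidean distances between node coordinates
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    idx = np.argsort(dist, axis=1)[:, 1:k + 1]        # k nearest neighbors, skipping self
    knn_dist = np.take_along_axis(dist, idx, axis=1)
    sigma2 = knn_dist.mean() ** 2                      # assumption: sigma = mean kNN distance
    n = coords.shape[0]
    A = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = np.exp(-knn_dist.ravel() ** 2 / sigma2)
    return np.maximum(A, A.T)                          # symmetrize

A_demo = gaussian_adjacency(np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]]), k=2)
print(A_demo.shape)  # (4, 4), symmetric, zero diagonal
```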
def coarsen(A, nodes_coordinates, levels):
graphs, parents = coarsening.metis(A, levels) #Coarsen a graph multiple times using Graclus variation of the METIS algorithm.
#Basically, we randomly sort the nodes, we iterate on them and we decided to group each node
#with the neighbor having highest w_ij * 1/(\sum_k w_ik) + w_ij * 1/(\sum_k w_kj)
#i.e. highest sum of probabilities to randomly walk from i to j and from j to i.
#We thus favour strong connections (i.e. the ones with high weight wrt all the others for both nodes)
#in the choice of the neighbor of each node.
#Construction is done a priori, so we have one graph for all the samples!
#graphs = list of sparse adjacency matrices (it contains in position
# 0 the original graph)
#parents = list of numpy arrays (every array in position i contains
# the mapping from graph i to graph i+1, i.e. the idx of
# node i in the coarsened graph -> that is, the idx of its cluster)
perms = coarsening.compute_perm(parents) #Return a list of indices to reorder the adjacency and data matrices so
#that two consecutive nodes correspond to neighbors that should be collapsed
#to produce the coarsened version of the graph.
#Fake nodes are appended for each node which is not grouped with anybody else
list_A = []
coordinates = np.copy(nodes_coordinates)
idx_rows, idx_cols, edge_feat = [], [], []
for i,A in enumerate(graphs):
M, M = A.shape
# We remove self-connections created by metis.
A = A.tocoo()
A.setdiag(0)
if i < levels: #if we have to pool the graph
A = coarsening.perm_adjacency(A, perms[i]) #matrix A is here extended with the fakes nodes
#in order to do an efficient pooling operation
#in tensorflow as it was a 1D pooling
A = A.tocsr()
A.eliminate_zeros()
Mnew, Mnew = A.shape
if i == 0:
# Add coordinates for the fake nodes at the beginning, then simulate a max-pooling operation to coarsen them at each layer
no_fake_node = Mnew-M
coordinates = [coordinates, np.ones([no_fake_node, 2])*np.inf]
coordinates = np.concatenate(coordinates, 0)
coordinates = coordinates[perms[i]]
assert coordinates.shape[0] == Mnew
list_A.append(A)
print('Layer {0}: M_{0} = |V| = {1} nodes ({2} added), |E| = {3} edges'.format(i, Mnew, Mnew-M, A.nnz//2))
c_idx_rows, c_idx_cols = A.nonzero()
c_edge_feat = coordinates[c_idx_rows] - coordinates[c_idx_cols]
assert np.sum(np.isfinite(c_edge_feat.flatten())) == c_edge_feat.shape[0]*c_edge_feat.shape[1] # check no fake node is an endpoint of an edge
idx_rows.append(c_idx_rows)
idx_cols.append(c_idx_cols)
edge_feat.append(c_edge_feat)
# update coordinates for next coarser graph
new_coordinates = []
for k in range(A.shape[0]//2):
idx_first_el = k * 2
if not np.isfinite(coordinates[idx_first_el][0]):
#assert np.isfinite(perm_nodes_coordinates[idx_first_el+1][0])
new_coordinates.append(coordinates[idx_first_el+1])
elif not np.isfinite(coordinates[idx_first_el+1][0]):
#assert np.isfinite(perm_nodes_coordinates[idx_first_el][0])
new_coordinates.append(coordinates[idx_first_el])
else:
new_coordinates.append(np.mean(coordinates[idx_first_el:idx_first_el+2], axis=0))
coordinates = np.asarray(new_coordinates)
return list_A, perms[0] if len(perms) > 0 else None, idx_rows, idx_cols, edge_feat
t_start = time.time()
np.random.seed(0)
n_rows_cols = 28
A, nodes_coordinates = grid_graph(n_rows_cols)
list_A, perm, idx_rows, idx_cols, edge_feat = coarsen(A, nodes_coordinates, FLAGS.coarsening_levels)
print('Execution time: {:.2f}s'.format(time.time() - t_start))
graph.plot_spectrum(list_A)
# !!! the plateau appears at 1 because we are plotting the spectrum of the normalized adjacency matrix with self-loops (not the Laplacian) !!!
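The fake nodes appended during coarsening are what let graph pooling later run as an ordinary 1-D max-pool; a minimal sketch of the idea (assuming fake-node slots are padded with a neutral value such as -inf, so they never win the max):

```python
import numpy as np

# Pool a reordered node signal over consecutive groups of p entries.
def fake_node_maxpool(x, p=2):
    return x.reshape(-1, p).max(axis=1)

# [real node, fake node, real node, real node] after reordering
signal = np.array([3.0, -np.inf, 1.0, 2.0])
print(fake_node_maxpool(signal))  # [3. 2.]
```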
# +
# plot the constructed graph
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
def newline(p1, p2):
# draw a line between p1 and p2
ax = plt.gca()
l = mlines.Line2D([p1[0],p2[0]], [p1[1],p2[1]], color='r', linewidth=0.1)
ax.add_line(l)
return l
plt.figure(dpi=200)
plt.imshow(A.todense())
plt.title('Original adjacency matrix')
plt.figure(dpi=200)
plt.scatter(nodes_coordinates[:, 0]*n_rows_cols, nodes_coordinates[:, 1]*n_rows_cols)
A_row, A_col = A.nonzero()
for idx_e in range(len(A_row)):
newline(nodes_coordinates[A_row[idx_e]]*n_rows_cols, nodes_coordinates[A_col[idx_e]]*n_rows_cols)
# +
#loading of MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(FLAGS.dir_data, one_hot=False)
train_data = mnist.train.images.astype(np.float32)
val_data = mnist.validation.images.astype(np.float32) #the first 5K samples of the training dataset
#are used for validation
test_data = mnist.test.images.astype(np.float32)
train_labels = mnist.train.labels
val_labels = mnist.validation.labels
test_labels = mnist.test.labels
t_start = time.time()
train_data = coarsening.perm_data(train_data, perm)
val_data = coarsening.perm_data(val_data, perm)
test_data = coarsening.perm_data(test_data, perm)
print('Execution time: {:.2f}s'.format(time.time() - t_start))
del perm
# -
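The edge weighting inside the model below combines a Gaussian kernel on edge features with a per-row sparse softmax; a minimal numpy sketch of that idea (this uses the standard negated squared distance as the kernel log-weight, which is an assumption about sign convention, and the edge features are made up):

```python
import numpy as np

# Log of an (unnormalized) Gaussian kernel evaluated on each edge feature vector.
def monet_kernel_weights(edge_feat, mu, sigma):
    return -((edge_feat - mu) ** 2 / sigma ** 2).sum(axis=1)

# Normalize edge weights so each node's outgoing weights sum to 1 (row softmax).
def row_softmax(rows, w, n_nodes):
    out = np.zeros_like(w)
    for r in range(n_nodes):
        m = rows == r
        e = np.exp(w[m] - w[m].max())
        out[m] = e / e.sum()
    return out

rows = np.array([0, 0, 1])                                  # source node of each edge
edge_feat = np.array([[0.1, 0.0], [0.3, 0.2], [0.0, 0.1]])
w = monet_kernel_weights(edge_feat, np.zeros(2), np.ones(2))
weights = row_softmax(rows, w, 2)
print(weights)  # node 0's two edge weights sum to 1; node 1's single edge gets 1.0
```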
class MoNet:
"""
The neural network model.
"""
#Helper functions used for constructing the model
def _weight_variable(self, shape, std=0.1, regularization=True):
"""Initializer for the weights"""
initial = tf.truncated_normal_initializer(0, std)
var = tf.get_variable('weights', shape, tf.float32, initializer=initial)
if regularization: #append the loss of the current variable to the regularization term
self.regularizers.append(tf.nn.l2_loss(var))
return var
def _bias_variable(self, shape, regularization=True):
"""Initializer for the bias"""
initial = tf.constant_initializer(0.1)
var = tf.get_variable('bias', shape, tf.float32, initializer=initial)
if regularization:
self.regularizers.append(tf.nn.l2_loss(var))
return var
def frobenius_norm(self, tensor):
"""Computes the frobenius norm for a given tensor"""
square_tensor = tf.square(tensor)
tensor_sum = tf.reduce_sum(square_tensor)
frobenius_norm = tf.sqrt(tensor_sum)
return frobenius_norm
def count_no_weights(self):
total_parameters = 0
for variable in tf.trainable_variables():
# shape is an array of tf.Dimension
shape = variable.get_shape()
variable_parameters = 1
for dim in shape:
variable_parameters *= dim.value
total_parameters += variable_parameters
print('#weights in the model: %d' % (total_parameters,))
#Modules used by the graph convolutional network
def MoNet(self, x, idx_rows, idx_cols, edge_feat, A_shape, kernel_std, Fout, K):
"""Applies a MoNet (Gaussian-mixture) convolution over the graph (a spatial convolution with learned kernels)"""
N, M, Fin = x.get_shape() # N is the number of images
# M the number of vertices in the images
# Fin the number of features
N, M, Fin = int(N), int(M), int(Fin)
list_x = [x]
for k in range(K-1):
with tf.variable_scope('kernel{}'.format(k+1)):
mu = tf.get_variable('mean', [1, edge_feat.shape[1]], tf.float32, initializer=tf.random_uniform_initializer(minval=-kernel_std, maxval=kernel_std))
sigma = tf.get_variable('sigma', [1, edge_feat.shape[1]], tf.float32, initializer=tf.ones_initializer())*kernel_std
kernel_weight = tf.reduce_sum(tf.square(edge_feat - mu)/tf.square(sigma), axis=1)
A_ker = tf.SparseTensor(indices=np.vstack([idx_rows, idx_cols]).T,
values=kernel_weight,
dense_shape=A_shape)
A_ker = tf.sparse_reorder(A_ker)
A_ker = tf.sparse_softmax(A_ker)
x0 = tf.transpose(x, [1,2,0]) # shape = M x Fin x N
x0 = tf.reshape(x0, [M, Fin*N])
x = tf.sparse_tensor_dense_matmul(A_ker, x0) # shape = M x Fin*N
x = tf.reshape(x, [M, Fin, N]) # shape = M x Fin x N
x = tf.transpose(x, [2,0,1]) # shape = N x M x Fin
list_x.append(x)
x = tf.stack(list_x) # shape = K x N x M x Fin
x = tf.transpose(x, [1,2,3,0]) # shape = N x M x Fin x K
x = tf.reshape(x, [N*M, Fin*K]) # shape = N*M x Fin*K
# Filter: Fout filters of order K applied over all the Fin features
W = self._weight_variable([Fin*K, Fout], regularization=False)
x = tf.matmul(x, W) # N*M x Fout
return tf.reshape(x, [N, M, Fout]) # N x M x Fout
def b1relu(self, x):
"""Applies bias and ReLU. One bias per filter."""
N, M, F = x.get_shape()
b = self._bias_variable([1, 1, int(F)], regularization=False)
return tf.nn.relu(x + b) #add the bias to the convolutive layer
def mpool1(self, x, p):
"""Max pooling of size p. Should be a power of 2 (this is possible thanks to the reordering we previously did)."""
if p > 1:
x = tf.expand_dims(x, 3) # shape = N x M x F x 1
x = tf.nn.max_pool(x, ksize=[1,p,1,1], strides=[1,p,1,1], padding='SAME')
return tf.squeeze(x, [3]) # shape = N x M/p x F
else:
return x
def fc(self, x, Mout, relu=True):
"""Fully connected layer with Mout features."""
N, Min = x.get_shape()
W = self._weight_variable([int(Min), Mout], regularization=True)
b = self._bias_variable([Mout], regularization=True)
x = tf.matmul(x, W) + b
return tf.nn.relu(x) if relu else x
#function used for extracting the result of our model
def _inference(self, x, dropout): #definition of the model
# Graph convolutional layers.
x = tf.expand_dims(x, 2) # N x M x F=1
for i in range(len(self.p)):
with tf.variable_scope('cgconv{}'.format(i+1)):
with tf.name_scope('filter'):
x = self.MoNet(x, self.idx_rows[i*2], self.idx_cols[i*2], self.edge_feat[i*2], self.list_A[i*2].shape, self.list_kernel_std[i*2], self.F[i], self.K[i])
with tf.name_scope('bias_relu'):
x = self.b1relu(x)
with tf.name_scope('pooling'):
x = self.mpool1(x, self.p[i])
# Fully connected hidden layers.
N, M, F = x.get_shape()
x = tf.reshape(x, [int(N), int(M*F)]) # N x M
for i,M in enumerate(self.M[:-1]): #apply a fully connected layer for each layer defined in M
#(we discard the last value in M since it contains the number of classes we have
#to predict)
with tf.variable_scope('fc{}'.format(i+1)):
x = self.fc(x, M)
x = tf.nn.dropout(x, dropout)
# Logits linear layer, i.e. softmax without normalization.
with tf.variable_scope('logits'):
x = self.fc(x, self.M[-1], relu=False)
return x
def convert_coo_to_sparse_tensor(self, L):
indices = np.column_stack((L.row, L.col))
L = tf.SparseTensor(indices, L.data.astype('float32'), L.shape)
L = tf.sparse_reorder(L)
return L
def __init__(self, p, K, F, M, M_0, batch_size, idx_rows, idx_cols, list_A, edge_feat,
decay_steps, decay_rate, learning_rate=1e-4, momentum=0.9, regularization=5e-4,
idx_gpu = '/gpu:1'):
self.regularizers = list() #list of regularization l2 loss for multiple variables
self.p = p #dimensions of the pooling layers
self.K = K #List of polynomial orders, i.e. filter sizes or number of hops
self.F = F #Number of features of convolutional layers
self.M = M #Number of neurons in fully connected layers
self.M_0 = M_0 #number of elements in the first graph
self.batch_size = batch_size
#definition of some learning parameters
self.decay_steps = decay_steps
self.decay_rate = decay_rate
self.learning_rate = learning_rate
self.regularization = regularization
with tf.Graph().as_default() as g:
self.graph = g
tf.set_random_seed(0)
with tf.device(idx_gpu):
#definition of placeholders
self.idx_rows = idx_rows
self.idx_cols = idx_cols
self.edge_feat = edge_feat
self.list_A = list_A
self.list_kernel_std = [np.mean(np.abs(c_edge_feat.flatten())) for c_edge_feat in edge_feat]
self.ph_data = tf.placeholder(tf.float32, (self.batch_size, M_0), 'data')
self.ph_labels = tf.placeholder(tf.int32, (self.batch_size), 'labels')
self.ph_dropout = tf.placeholder(tf.float32, (), 'dropout')
#Model construction
self.logits = self._inference(self.ph_data, self.ph_dropout)
#Definition of the loss function
with tf.name_scope('loss'):
self.cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits,
labels=self.ph_labels)
self.cross_entropy = tf.reduce_mean(self.cross_entropy)
with tf.name_scope('regularization'):
self.regularization *= tf.add_n(self.regularizers)
self.loss = self.cross_entropy + self.regularization
#Solver Definition
with tf.name_scope('training'):
# Learning rate.
global_step = tf.Variable(0, name='global_step', trainable=False) #used for counting how many iterations we have done
if decay_rate != 1: #applies an exponential decay of the lr wrt the number of iterations done
learning_rate = tf.train.exponential_decay(
learning_rate, global_step, decay_steps, decay_rate, staircase=True)
# Optimizer.
if momentum == 0:
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
else: #applies momentum for increasing the robustness of the gradient
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)
grads = optimizer.compute_gradients(self.loss)
self.op_gradients = optimizer.apply_gradients(grads, global_step=global_step)
#Computation of the norm gradients (useful for debugging)
self.var_grad = tf.gradients(self.loss, tf.trainable_variables())
self.norm_grad = self.frobenius_norm(tf.concat([tf.reshape(g, [-1]) for g in self.var_grad], 0))
#Extraction of the predictions and computation of accuracy
self.predictions = tf.cast(tf.argmax(self.logits, dimension=1), tf.int32)
self.accuracy = 100 * tf.contrib.metrics.accuracy(self.predictions, self.ph_labels)
# Create a session for running Ops on the Graph.
config = tf.ConfigProto(allow_soft_placement = True)
config.gpu_options.allow_growth = True
self.session = tf.Session(config=config)
# Run the Op to initialize the variables.
init = tf.global_variables_initializer()
self.session.run(init)
self.count_no_weights()
# +
#Convolutional parameters
p = [4, 4] #Dimensions of the pooling layers
K = [25, 25] #List of polynomial orders, i.e. filter sizes or number of hops
F = [32, 64] #Number of features of convolutional layers
#FC parameters
C = max(mnist.train.labels) + 1 #Number of classes we have
M = [512, C] #Number of neurons in fully connected layers
#Solver parameters
batch_size = 100
decay_steps = mnist.train.num_examples / batch_size #number of steps to do before decreasing the learning rate
decay_rate = 0.95 #how much decreasing the learning rate
learning_rate = 0.02
momentum = 0.9
regularization = 5e-4
#Definition of keep probabilities for dropout layers
dropout_training = 0.5
dropout_val_test = 1.0
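# The staircase schedule configured above (`tf.train.exponential_decay` with `staircase=True`) multiplies the learning rate by `decay_rate` once every `decay_steps` steps. A plain-Python sketch (550 is only an illustrative `decay_steps`, i.e. 55000 training examples / batch size 100):

```python
def decayed_lr(learning_rate, global_step, decay_steps, decay_rate):
    # Staircase exponential decay, mirroring
    # tf.train.exponential_decay(..., staircase=True)
    return learning_rate * decay_rate ** (global_step // decay_steps)

print(decayed_lr(0.02, 0, 550, 0.95))     # initial rate, unchanged
print(decayed_lr(0.02, 1100, 550, 0.95))  # decayed twice: 0.02 * 0.95**2
```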
# +
#Construction of the learning obj
M_0 = list_A[0].shape[0] #number of elements in the first graph
learning_obj = MoNet(p, K, F, M, M_0, batch_size, idx_rows, idx_cols, list_A, edge_feat,
decay_steps, decay_rate,
learning_rate=learning_rate, regularization=regularization,
momentum=momentum)
#definition of overall number of training iterations and validation frequency
num_iter_val = 600
num_total_iter_training = 21000
num_iter = 0
list_training_loss = list()
list_training_norm_grad = list()
list_val_accuracy = list()
# -
#training and validation
indices = collections.deque() #queue that will contain a permutation of the training indexes
for k in range(num_iter, num_total_iter_training):
#Construction of the training batch
if len(indices) < batch_size: # Be sure to have used all the samples before using one a second time.
indices.extend(np.random.permutation(train_data.shape[0])) #reinitialize the queue of indices
idx = [indices.popleft() for i in range(batch_size)] #extract the current batch of samples
#data extraction
batch_data, batch_labels = train_data[idx,:], train_labels[idx]
feed_dict = {learning_obj.ph_data: batch_data,
learning_obj.ph_labels: batch_labels,
learning_obj.ph_dropout: dropout_training}
#Training
tic = time.time()
_, current_training_loss, norm_grad = learning_obj.session.run([learning_obj.op_gradients,
learning_obj.loss,
learning_obj.norm_grad], feed_dict = feed_dict)
training_time = time.time() - tic
list_training_loss.append(current_training_loss)
list_training_norm_grad.append(norm_grad)
if (np.mod(num_iter, num_iter_val)==0): #validation
msg = "[TRN] iter = %03i, cost = %3.2e, |grad| = %.2e (%3.2es)" \
% (num_iter, list_training_loss[-1], list_training_norm_grad[-1], training_time)
print(msg)
#Validation Code
tic = time.time()
val_accuracy = 0
for begin in range(0, val_data.shape[0], batch_size):
end = begin + batch_size
end = min([end, val_data.shape[0]])
#data extraction
            batch_data = np.zeros((batch_size, val_data.shape[1]))
            batch_data[:end-begin] = val_data[begin:end,:]
batch_labels = np.zeros(batch_size)
batch_labels[:end-begin] = val_labels[begin:end]
feed_dict = {learning_obj.ph_data: batch_data,
learning_obj.ph_labels: batch_labels,
learning_obj.ph_dropout: dropout_val_test}
batch_accuracy = learning_obj.session.run(learning_obj.accuracy, feed_dict)
            val_accuracy += batch_accuracy*(end-begin)
val_accuracy = val_accuracy/val_data.shape[0]
val_time = time.time() - tic
msg = "[VAL] iter = %03i, acc = %4.2f (%3.2es)" % (num_iter, val_accuracy, val_time)
print(msg)
num_iter += 1
#Test code
tic = time.time()
test_accuracy = 0
for begin in range(0, test_data.shape[0], batch_size):
end = begin + batch_size
end = min([end, test_data.shape[0]])
    batch_data = np.zeros((batch_size, test_data.shape[1]))
    batch_data[:end-begin] = test_data[begin:end,:]
feed_dict = {learning_obj.ph_data: batch_data, learning_obj.ph_dropout: 1}
batch_labels = np.zeros(batch_size)
batch_labels[:end-begin] = test_labels[begin:end]
feed_dict[learning_obj.ph_labels] = batch_labels
batch_accuracy = learning_obj.session.run(learning_obj.accuracy, feed_dict)
    test_accuracy += batch_accuracy*(end-begin)
test_accuracy = test_accuracy/test_data.shape[0]
test_time = time.time() - tic
msg = "[TST] iter = %03i, acc = %4.2f (%3.2es)" % (num_iter, test_accuracy, test_time)
print(msg)
# +
# 98.24 std=1 _weight_variable
# File: Machine Learning Summer School 2019 (Moscow, Russia)/tutorials/graph_neural_networks/MNIST/MoNet.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import mysql.connector
from mysql.connector import Error
import pandas as pd
def create_server_connection(host_name, user_name, user_password):
connection = None
try:
connection = mysql.connector.connect(
host=host_name,
user=user_name,
passwd=user_password
)
print("MySQL Database connection successful")
except Error as err:
print(f"Error: '{err}'")
return connection
pw = "<PASSWORD>"
db = "python_db" # This is the name of the database
connection = create_server_connection("localhost", "root", pw)
def create_database(connection, query):
cursor = connection.cursor()
try:
cursor.execute(query)
print("Database created successfully")
except Error as err:
print(f"Error: '{err}'")
create_database_query = "CREATE DATABASE python_db"
create_database(connection, create_database_query)
cursor = connection.cursor()
cursor.execute("SELECT VERSION()")
data = cursor.fetchone()
print ("Database version : %s " % data)
def get_connection():
connection = mysql.connector.connect(host='localhost',
database='python_db',
user='root',
password='<PASSWORD>')
return connection
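# The `close_connection` helper used by the query functions below is never defined in this notebook. A minimal reconstruction, assuming a `mysql.connector` connection object (which exposes `is_connected()` and `close()`):

```python
def close_connection(connection):
    # Close the connection once a query has finished, guarding against
    # a None or already-closed connection.
    if connection is not None and connection.is_connected():
        connection.close()
        print("MySQL connection is closed")
```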
def get_hospital(hospital_id):
try:
connection = get_connection()
cursor = connection.cursor()
select_query = """select * from Hospital where Hospital_Id = %s"""
cursor.execute(select_query, (hospital_id,))
records = cursor.fetchall()
print("Printing Hospital record")
for row in records:
print("Hospital Id:", row[0], )
print("Hospital Name:", row[1])
print("Bed Count:", row[2])
close_connection(connection)
except (Exception, mysql.connector.Error) as error:
print("Error while getting data", error)
def get_doctor(doctor_id):
try:
connection = get_connection()
cursor = connection.cursor()
select_query = """select * from Doctor where Doctor_Id = %s"""
cursor.execute(select_query, (doctor_id,))
records = cursor.fetchall()
print("Printing Doctor record")
for row in records:
print("Doctor Id:", row[0])
print("Doctor Name:", row[1])
print("Hospital Id:", row[2])
print("Joining Date:", row[3])
print("Specialty:", row[4])
print("Salary:", row[5])
print("Experience:", row[6])
close_connection(connection)
except (Exception, mysql.connector.Error) as error:
print("Error while getting data", error)
#printing Output
print("Question 2: Read given hospital and doctor details \n")
get_hospital(2)
print("\n")
get_doctor(105)
def get_specialist_doctors_list(speciality, salary):
try:
connection = get_connection()
cursor = connection.cursor()
sql_select_query = """select * from Doctor where Speciality=%s and Salary > %s"""
cursor.execute(sql_select_query, (speciality, salary))
records = cursor.fetchall()
print("Printing doctors whose specialty is", speciality, "and salary greater than", salary, "\n")
for row in records:
print("Doctor Id: ", row[0])
print("Doctor Name:", row[1])
print("Hospital Id:", row[2])
print("Joining Date:", row[3])
print("Specialty:", row[4])
print("Salary:", row[5])
print("Experience:", row[6], "\n")
close_connection(connection)
except (Exception, mysql.connector.Error) as error:
print("Error while getting data", error)
print("Question 3: Get Doctors as per given Speciality\n")
get_specialist_doctors_list("Garnacologist", 30000)
# +
def get_hospital_name(hospital_id):
# Fetch Hospital Name using Hospital id
try:
connection = get_connection()
cursor = connection.cursor()
select_query = """select * from Hospital where Hospital_Id = %s"""
cursor.execute(select_query, (hospital_id,))
record = cursor.fetchone()
close_connection(connection)
return record[1]
except (Exception, mysql.connector.Error) as error:
print("Error while getting data", error)
# -
def get_doctors(hospital_id):
# Fetch Hospital Name using Hospital id
try:
hospital_name = get_hospital_name(hospital_id)
connection = get_connection()
cursor = connection.cursor()
sql_select_query = """select * from Doctor where Hospital_Id = %s"""
cursor.execute(sql_select_query, (hospital_id,))
records = cursor.fetchall()
print("Printing Doctors of ", hospital_name, "Hospital")
for row in records:
print("Doctor Id:", row[0])
print("Doctor Name:", row[1])
print("Hospital Id:", row[2])
print("Hospital Name:", hospital_name)
print("Joining Date:", row[3])
print("Specialty:", row[4])
print("Salary:", row[5])
print("Experience:", row[6], "\n")
close_connection(connection)
except (Exception, mysql.connector.Error) as error:
print("Error while getting doctor's data", error)
print("Question 4: Get List of doctors of a given Hospital Id\n")
get_doctors(2)
# File: Assignment - MYSQLPYTHON.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Q.1
# #### (a) Draw n = 300 real numbers uniformly at random on [0, 1], call them x1, . . . , xn.
# #### (b) Draw n real numbers uniformly at random on [−0.1, 0.1], call them ν1,...,νn.
# +
import os
import struct
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
np.random.seed(42)
n=300
X = np.random.uniform(0,1,size = n).reshape((n,1))
MU = np.random.uniform(-.1,.1,size = n).reshape((n,1))
Y = np.zeros(n).reshape((n,1))
# -
# #### (c) Let di = sin(20xi) + 3xi + νi, i = 1, . . . , n. Plot the points (xi, di), i = 1, . . . , n.
D = np.sin(20*X) + 3*X + MU
plt.plot(X,D,'bo')
plt.show()
# We will consider a 1 × N × 1 neural network with one input, N = 24 hidden neurons, and 1 output neuron. The network will thus have 3N + 1 weights including biases. Let w denote the vector of all these 3N + 1 weights. The output neuron will use the activation function φ(v) = v; all other neurons will use the activation function φ(v) = tanh(v). Given input x, we use the notation f(x,w) to represent the network output.
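# The network output $f(x,w)$ described above can be written out directly. A minimal NumPy sketch, using the same names A (hidden weights), B (hidden biases), C (output weights), and e (output bias) as the code below:

```python
import numpy as np

def f(x, A, B, C, e):
    # 1 x N x 1 network: tanh hidden layer, linear output neuron (phi(v) = v)
    Z = np.tanh(x * A + B)   # hidden activations, shape (N,)
    return float(Z @ C + e)  # output neuron with identity activation

# With all-zero weights the hidden layer outputs zeros, so f is just the bias e:
N = 24
print(f(0.5, np.zeros(N), np.zeros(N), np.zeros(N), 1.0))  # 1.0
```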
# +
### Step 1 to 4
# Number of hidden layer's neurons
N = 24
W = np.random.uniform(-.1,.1,size = (3*N+1)).reshape((3*N+1,1))
eta = 4
epsilon = 1e-8
mse = list()
mse.append(1e2)
mse.append(1e1)
l=0
m = 0
epoch = 2
U = np.zeros(N)
Z = U
A = W[0:N]
B = W[N:2*N]
C = W[2*N:3*N]
e = W[-1]
# +
### Step 5
while (np.abs(mse[-1]-mse[-2])>=epsilon):
m += 1
index = np.random.choice(np.arange(n), size= n, replace=False)
# Step 5.A and C
for i in index:
U = X[i]*A + B
Z = np.tanh(U)
Y[i] = np.transpose(Z).dot(C) + e
# update A:
A += 2 / n * eta * X[i] * (D[i]-Y[i]) * (1-Z) * (1+Z) * C
# update B:
B += 2 / n * eta * (D[i]-Y[i]) * (1-Z) * (1+Z) * C
# update C:
C += 2 / n * eta * (D[i]-Y[i]) * Z
# update e:
e += 2 / n * eta * (D[i]-Y[i])
# Step 5.B
mse.append(np.sum((D-Y)**2)/n)
epoch +=1
if (epoch-2) % 2000 == 0:
print('The current epoch is {}, MSE is {}, eta is {}.'.format((epoch-2) ,round(mse[epoch-2],7) , eta))
if (epoch-2) % 6000 == 0:
eta *= 0.5
# delete the 2 dummy points
epoch -= 2
del(mse[0:2])
# -
# #### (d) Use the backpropagation algorithm with online learning to find the optimal weights/network that minimize the mean-squared error (MSE) $\frac{1}{n}\sum_{i=1}^{n}(d_i - f(x_i,w))^2$. Use some $\eta$ of your choice. Plot the number of epochs vs the MSE in the backpropagation algorithm.
fig, ax = plt.subplots()
ny = np.arange(epoch)
m = ny[ny%3000 == 0]
m = np.append(m,ny[-1])
mse = pd.DataFrame(mse, columns = ['MSE'])
A = np.array(mse.iloc[m])
plt.plot(m,np.log(A),'y->')
for i, txt in enumerate(A):
ax.annotate(np.round(txt.tolist(),5), (m[i], np.log(A[i])+0.2*(-1)**i),fontsize = 8)
plt.xlabel('Epoch')
plt.title('log(MSE) at each 3,000 epochs in BP')
plt.ylabel('log(MSE)')
plt.show()
# #### (e) Plot the curve f(x, w0) as x ranges from 0 to 1 on top of the plot of points in (c). The fit should be a “good” fit.
W0 = W
Desired_curve = D
Fitted_curve = Y
for i in range(n):
Y[i] = np.transpose(np.tanh(X[i] * W0[0:N] +W0[N:2*N])).dot(W0[2*N:3*N]) + W0[-1]
plot1, = plt.plot(X,Desired_curve,'bo',markersize = 3)
plot2, = plt.plot(X,Fitted_curve,'ro',markersize = 3)
plt.legend([plot1,plot2],["Desired curve", "Fitted curve"])
plt.show()
# #### (f) Pseudocode of implementation.
# 1. Given n, N, and $\epsilon$.
#
# 2. Initialize $\eta \in \mathbb{R}, W \in \mathbb{R}^{(3N+1) \times 1}$ randomly. Note: $W = \{A,B,C,e\}$, where $A \in \mathbb{R}^N, B \in \mathbb{R}^N, C \in \mathbb{R}^N, e \in \mathbb{R}$.
#
# 3. Initialize epoch = 0.
#
# 4. Initialize $MSE_{epoch}$ = 0 for epoch = 0, 1, ... .
#
# 5. Do SGD, so sample n numbers out range(n) without replacement as the index:
# 1. for i in index, do (this loop is where we compute the mse of each epoch):
# 1. Compute the induced local fields as a vector with the current training sample and weights by:
# $$
# U = x_i \cdot A + B \in \mathbb{R}^N
# $$
# 2. Then, get the N outputs from the N hidden neurons as a vector by:
# $$
# Z = tanh(U) \in \mathbb{R}^N
# $$
# 3. So, the output of the last neuron will be:
# $$
# y_i = Z^T \cdot C+ e
# $$
# 2. Compute the MSE for current epoch:
# $$
# MSE_{epoch} = \frac{1}{n}\sum_{i=1}^n(d_i-y_i)^2
# $$
# Then, update the epoch:
# $$
# epoch \leftarrow epoch +1.
# $$
# 3. for i = 1 to n, do (this loop is where we update the weights):
# Note:
# $$
# \frac{\partial tanh(U)}{\partial U} = \begin{bmatrix} \frac{\partial tanh(u_1)}{\partial u_1} & \frac{\partial tanh(u_1)}{\partial u_2} & \cdots & \frac{\partial tanh(u_1)}{\partial u_{24}} \\ \frac{\partial tanh(u_2)}{\partial u_1} & \frac{\partial tanh(u_2)}{\partial u_2} & & \vdots \\ \vdots & & \ddots & \vdots \\ \frac{\partial tanh(u_{24})}{\partial u_1} & \cdots & \cdots & \frac{\partial tanh(u_{24})}{\partial u_{24}} \end{bmatrix} = \begin{bmatrix} \frac{\partial tanh(u_1)}{\partial u_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{\partial tanh(u_{24})}{\partial u_{24}} \end{bmatrix} = \begin{bmatrix} 1- tanh^2(u_1) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1- tanh^2(u_{24}) \end{bmatrix}
# $$
# 3. update the A here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial MSE}{\partial A} = -x_i \cdot \frac{2}{n}(d_i-y_i) \cdot \frac{\partial tanh(U)}{\partial U} \cdot C = -x_i \cdot \frac{2}{n}(d_i-y_i) \cdot (1-tanh(U)) \bigodot (1+tanh(U)) \bigodot C
# $$
# 2. The update would be:
# $$
# A \leftarrow A - \eta \frac{\partial MSE}{\partial A} =A + \eta x_i \cdot \frac{2}{n}(d_i-y_i) \cdot (1-tanh(U)) \bigodot (1+tanh(U)) \bigodot C
# $$
# 4. update the B here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial MSE}{\partial B} = -1 \cdot \frac{2}{n}(d_i-y_i) \cdot \frac{\partial tanh(U)}{\partial U} \cdot C = -1 \cdot \frac{2}{n}(d_i-y_i) \cdot (1-tanh(U)) \bigodot (1+tanh(U)) \bigodot C
# $$
# 2. The update would be:
# $$
# B \leftarrow B - \eta \frac{\partial MSE}{\partial B} =B + \eta \frac{2}{n}(d_i-y_i) \cdot (1-tanh(U)) \bigodot (1+tanh(U)) \bigodot C
# $$
# 5. update the C here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial MSE}{\partial C} = -tanh(U) \cdot \frac{2}{n}(d_i - y_i)
# $$
# 2. The update would be:
# $$
# C \leftarrow C - \eta \frac{\partial MSE}{\partial C} =C + \eta \frac{2}{n}(d_i-y_i) \cdot tanh(U)
# $$
# 6. update the e here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial MSE}{\partial e} = -\frac{2}{n}(d_i - y_i)
# $$
# 2. The update would be:
# $$
# e \leftarrow e - \eta \frac{\partial MSE}{\partial e} =e + \eta \frac{2}{n}(d_i-y_i)
# $$
# 7. for every kth (k = 6000) epoch:
# $$\eta = 0.5 * \eta$$
# 2. Loop to A, if $abs(MSE_{epoch} - MSE_{epoch-1}) >\epsilon$.
#
#
# Remark: one could consider reducing $\eta$ systematically after a certain number of epochs.
#
#
#
#
#
#
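# The $\frac{\partial MSE}{\partial A}$ expression derived above can be sanity-checked against a finite-difference gradient on a single sample. This check is not part of the assignment; the names mirror the pseudocode:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 24, 300
A, B, C = rng.normal(size=N), rng.normal(size=N), rng.normal(size=N)
e, x, d = 0.3, 0.7, 1.5

def sample_mse(A_):
    # contribution of one sample (x, d) to the MSE
    y = np.tanh(x * A_ + B) @ C + e
    return (d - y) ** 2 / n

# analytic gradient from the formula above
Z = np.tanh(x * A + B)
y = Z @ C + e
grad_A = -x * (2 / n) * (d - y) * (1 - Z) * (1 + Z) * C

# central finite differences
eps = 1e-6
num = np.zeros(N)
for j in range(N):
    Ap, Am = A.copy(), A.copy()
    Ap[j] += eps
    Am[j] -= eps
    num[j] = (sample_mse(Ap) - sample_mse(Am)) / (2 * eps)

print(np.max(np.abs(grad_A - num)))  # should be tiny
```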
# ### 2.
# (150pts) In this computer project, we will design a neural network for digit classification using the backpropagation algorithm (see the notes above). You should use the MNIST data set (see Homework 2 for details) that consists of 60000 training images and 10000 test images. The training set should only be used for training, and the test set should only be used for testing. Your final report should include the following:
# #### 0. Input the data files
# +
# save the original binary MNIST data files in 0-255
def read(dataset = "training", path = "."):
    if dataset == "training":
fname_img = os.path.join(path, 'train-images-idx3-ubyte')
fname_lbl = os.path.join(path, 'train-labels-idx1-ubyte')
    elif dataset == "testing":
fname_img = os.path.join(path, 't10k-images-idx3-ubyte')
fname_lbl = os.path.join(path, 't10k-labels-idx1-ubyte')
else:
        raise ValueError("dataset must be either 'training' or 'testing'")
with open(fname_lbl, 'rb') as flbl:
magic, num = struct.unpack(">II", flbl.read(8))
lbl = np.fromfile(flbl, dtype=np.int8)
# print(len(lbl))
with open(fname_img, 'rb') as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
image = np.fromfile(fimg, dtype=np.uint8)
image = image.reshape(len(lbl), rows, cols)
get_image = lambda idx: (lbl[idx], image[idx])
for i in range(len(lbl)):
yield get_image(i)
training_data = list(read(dataset = "training",path = r'C:\Users\Han\Desktop\Box Sync\CS 559\hwk2'))
testing_data = list(read(dataset = "testing",path = r'C:\Users\Han\Desktop\Box Sync\CS 559\hwk2'))
training_label = np.zeros((len(training_data),1))
training_desiredout =np.zeros((len(training_data),10))
training_image = np.zeros((len(training_data),28*28))
testing_label = np.zeros((len(testing_data),1))
testing_desiredout = np.zeros((len(testing_data),10))
testing_image = np.zeros((len(testing_data),28*28))
# split the training and testing data to labels and images
for i in range(len(training_data)):
temp = training_data[i]
training_label[i] = temp[0]
training_desiredout[i,temp[0]] = 1
training_image[i,] = temp[1].reshape(1,28*28)
#training_label = training_label.reshape((1,60000))
for i in range(len(testing_data)):
temp = testing_data[i]
testing_label[i] = temp[0]
testing_desiredout[i,temp[0]] = 1
testing_image[i,] = temp[1].reshape(1,28*28)
#testing_label = testing_label.reshape((1,10000))
# Rename data
X_train = training_image
X_test = testing_image
Y_train = training_desiredout
Y_test = testing_desiredout
# Standardize X (images from 0-255 to 0-1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# -
# ### 1. First try
# ##### 1. Architecture of NN: 784 input neurons, 1 hidden layer with $N_1$ = 100 neurons, 10 output neurons with 0 or 1 as their outputs to be compatible with the use of 0-1 vector outputs.
# ##### 2. Output: With [1 0 · · · 0] representing a 0, [0 1 0 · · · 0] representing a 1 and so on, it is denoted as $f(x_i,w)$.
# ##### 3.1 Activation functions: tanh() from input to 1st hidden layer and softmax from 1st hidden layer to output.
# ##### 3.2 Learning rate: 0.1.
# ##### 3.3 Weight initialization: all weights are initialized randomly from Uniform(-0.5, 0.5).
# ##### 4. Energy function: MSE in the form of: $\frac{1}{n}\sum_{i=1}^{n}||d_i-f(x_i,w)||^2$.
# ##### 5. Other tricks: reduce the eta by 50% when at plateau.
# +
## initialization
np.random.seed(2)
# use first n samples from training data to train the NN
n = 60000
# number of neurons in 1st hidden layer, N1
N1 = 100
# learning rate
eta = 1e-2
# convergence threshold
epsilon = 5e-8
# epoch number
epoch = 2
# batch size
bs = 1
# initialize errors
mse = list()
mse.append(1e3)
mse.append(5e2)
# initialize real outputs given the current w
out = np.zeros((n,10))
# initialize weights from input to 1st hidden layer
u = np.random.uniform(-0.05,0.05,size = (784*N1)).reshape(784,N1)
# initialize weights from 1st hidden layer to output
v = np.random.uniform(-0.2,0.2,size = (10*N1)).reshape(N1,10)
# initialize the bias
b = np.random.normal(-0.1,0.1,size = (bs*N1)).reshape(bs,N1)
# define the softmax activation function
def softmax(w):
e = np.exp(np.array(w) - np.max(w))
dist = e / np.sum(e)
return dist
# define a continuous vector to 0-1 vector function
def vec_to_01(x):
s1 = np.zeros(x.shape[0])
q = np.argmax(x)
s1[q] =1
return s1
# define softmax's Jacobian matrix function
def softmax_jacobian(s):
return np.diagflat(s) - np.outer(s, s)
# define tanh's Jacobian matrix function
def tanh_jacobian(t):
return np.diagflat(1-t**2)
def cross_S(out, Label):
    return - np.sum(np.multiply(Label, np.log(out)) + np.multiply((1-Label), np.log(1-out)))
# +
while (np.abs(mse[-1]-mse[-2])>=epsilon):
index = np.random.choice(np.arange(n), size= n, replace=False)
X_train_rndm = X_train[index]
# this loop is where we update the weights
for i in range(0,n, bs):
lf1 = np.dot(X_train_rndm[i:(i+bs)],u) + b #compute the input layers' induced local field
z0 = np.tanh(lf1) #compute the input layer's out
lf2 = np.dot(z0,v) #compute the 1st H layers' induced local field
out[i:(i+bs)] = softmax(lf2) #compute the 1st H layer's out
if (True):
# update u:
            temp1 = 2 / n * eta * (Y_train[i:(i+bs)] - out[i:(i+bs)]).dot(softmax_jacobian(out[i:(i+bs)]))
            temp2 = np.dot(temp1,v.T)
            temp2 = temp2.dot(tanh_jacobian(z0))
            u += np.outer(X_train_rndm[i:(i+bs)],temp2)
            # update b:
            b += temp2
            # update v:
            v += np.outer(z0, temp1)
# calculate mse for the current epoch
mse.append(np.sum((out-Y_train)**2)/n)
epoch +=1
if mse[epoch-2] <= mse[epoch-1]:
eta *= 0.5
#if (epoch-2) % 2 == 0:
#print('The current epoch is {}, MSE is {}, eta is {}.'.format((epoch-2) ,round(mse[epoch-2],7) , eta))
# delete the 2 dummy points
epoch -= 2
del(mse[0:2])
# -
u0 = u
b0 = b
v0 = v
# Initialize errors = 0.
error_test = 0
# loop on all testing samples
for i in range(10000):
    lf1 = np.dot(X_test[i:(i+bs)],u) + b #compute the input layers' induced local field
z0 = np.tanh(lf1) #compute the input layer's out
lf2 = np.dot(z0,v) #compute the 1st H layers' induced local field
out[i:(i+bs)] = softmax(lf2) #compute the 1st H layer's out
# Find the largest component of v0 = [v0', v1', ...v9']^T^
predic_out = np.argmax(out[i,:])
# If the prediceted output is different to the testing label, error +=1
diff = predic_out - np.argmax(Y_test[i,:])
if diff != 0:
error_test += 1
error_test/10000
fig, ax = plt.subplots()
ny = np.arange(epoch)
m = ny[ny%30 == 0]
m = np.append(m,ny[-1])
mse = pd.DataFrame(mse, columns = ['MSE'])
A = np.array(mse.iloc[m])
plt.plot(m,np.log(A),'y->')
for i, txt in enumerate(A):
ax.annotate(np.round(txt.tolist(),5), (m[i], np.log(A[i])),fontsize = 8)
plt.xlabel('Epoch')
plt.title('log(MSE) at each 30 epochs in BP')
plt.ylabel('log(MSE)')
plt.show()
# #### Clearly, we are not satisfied with a 12.2% error rate on the test data. I concluded that it was due to too few neurons in the hidden layer, careless initialization of parameters and/or hyperparameters, and the lack of other tricks. Thus, I had my 2nd try:
# ### 2. Second try
# ##### 1. Architecture of NN: 784 input neurons, 1 hidden layer with $N_1$ = 150 neurons, 10 output neurons with 0 or 1 as their outputs to be compatible with the use of 0-1 vector outputs.
# ##### 2. Output: With [1 0 · · · 0] representing a 0, [0 1 0 · · · 0] representing a 1 and so on, it is denoted as $f(x_i,w)$.
# ##### 3.1 Activation functions: tanh() from input to 1st hidden layer and softmax from 1st hidden layer to output.
# ##### 3.2 Learning rate: Set to 0.03 due to the empirical principle $\eta \sim O(1/ \sqrt{m})$, m is 784 here.
# ##### 3.3 Weight initialization: For hyperbolic tangent units: sample a Uniform(-r, r) with $r = \sqrt{\frac{6}{fan_{in} + fan_{out}}}$; For softmax units: sample a Uniform(-r, r) with $r = 4\sqrt{\frac{6}{fan_{in} + fan_{out}}}$ (fan-in is the number of inputs of the unit, fan-out is the number of outputs of the unit); For bias: sample the same way as the associated weight.
# ##### 4. Energy function: MSE in the form of: $\frac{1}{n}\sum_{i=1}^{n}||d_i-f(x_i,w)||^2$.
# ##### 5. Other tricks: SGD optimizer; reduce the eta by 50% when at plateau; carry 90% of the last epoch's gradient over to the current epoch when updating; early termination (done manually).
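# The fan-based initialization described in 3.3 can be sketched as follows (`glorot_uniform` is a hypothetical helper name; the layer sizes match this try: 784 inputs, 150 hidden units, 10 outputs):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, gain=1.0, rng=None):
    # Sample from Uniform(-r, r) with r = gain * sqrt(6 / (fan_in + fan_out)).
    # Per the rule above: gain = 1 for tanh units, gain = 4 for softmax units.
    if rng is None:
        rng = np.random.default_rng(0)
    r = gain * np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-r, r, size=(fan_in, fan_out))

u = glorot_uniform(784, 150)           # input -> tanh hidden layer
v = glorot_uniform(150, 10, gain=4.0)  # hidden -> softmax output layer
print(u.shape, v.shape)  # (784, 150) (150, 10)
```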
## initialization
np.random.seed(2)
# use first n samples from training data to train the NN
n = 60000
# number of neurons in 1st hidden layer, N1
N1 =150
# learning rate
eta = 0.03
# convergence threshold
epsilon = 5e-7
# epoch number
epoch = 2
# batch size
bs = 1
# initialize errors
mse = list()
mse.append(1e3)
mse.append(5e2)
# initialize real outputs given the current w
out = np.zeros((n,10))
# initialize weights from input to 1st hidden layer
u = np.random.uniform(-0.05,0.05,size = (784*N1)).reshape(784,N1)
# initialize weights from 1st hidden layer to output
v = np.random.uniform(-0.2,0.2,size = (10*N1)).reshape(N1,10)
# initialize the bias
b = np.random.uniform(-0.05,0.05,size = (bs*N1)).reshape(bs,N1)
# initialize counters
m = 0
l = 0
# initialize momentum
momentum = 0.9
while (np.abs(mse[-1]-mse[-2])>=epsilon):
m += 1
index = np.random.choice(np.arange(n), size= n, replace=False)
X_train_rndm = X_train[index]
    # re-initialize last epoch's gradients
grad_uold = 0
grad_bold = 0
grad_vold = 0
# this loop is where we update the weights
for i in range(0,n, bs):
l +=1
lf1 = np.dot(X_train_rndm[i:(i+bs)],u) + b #compute the input layers' induced local field
z0 = np.tanh(lf1) #compute the input layer's out
lf2 = np.dot(z0,v) #compute the 1st H layers' induced local field
out[i:(i+bs)] = softmax(lf2) #compute the 1st H layer's out
if (True):
# update u:
            temp1 = 2 / n * eta * (Y_train[i:(i+bs)]-out[i:(i+bs)]).dot(softmax_jacobian(out[i:(i+bs)]))
            temp2 = np.dot(temp1,v.T)
            temp2 = temp2.dot(tanh_jacobian(z0))
            grad_u = np.outer(X_train_rndm[i:(i+bs)],temp2)
            grad_u = grad_uold*momentum+grad_u
            u += grad_u
            grad_uold = grad_u
            # update b:
            grad_b = temp2
            grad_b = grad_bold*momentum+grad_b
            b += grad_b
            grad_bold = grad_b
            # update v:
            grad_v = np.outer(z0, temp1)
            grad_v = grad_vold*momentum+grad_v
            v += grad_v
            grad_vold = grad_v
# calculate mse for the current epoch
mse.append(np.sum((out-Y_train)**2)/n)
epoch +=1
if mse[epoch-2] <= mse[epoch-1]:
eta *= 0.5
# delete the 2 dummy points
epoch -= 2
del(mse[0:2])
u0 = u
b0 = b
v0 = v
# Initialize errors = 0.
error_test = 0
# loop on all testing samples
for i in range(10000):
    lf1 = np.dot(X_test[i:(i+bs)],u) + b #compute the input layers' induced local field
z0 = np.tanh(lf1) #compute the input layer's out
lf2 = np.dot(z0,v) #compute the 1st H layers' induced local field
out[i:(i+bs)] = softmax(lf2) #compute the 1st H layer's out
# Find the largest component of v0 = [v0', v1', ...v9']^T^
predic_out = np.argmax(out[i,:])
# If the prediceted output is different to the testing label, error +=1
diff = predic_out - np.argmax(Y_test[i,:])
if diff != 0:
error_test += 1
error_test/10000
fig, ax = plt.subplots()
ny = np.arange(epoch)
m = ny[ny%30 == 0]
m = np.append(m,ny[-1])
mse = pd.DataFrame(mse, columns = ['MSE'])
A = np.array(mse.iloc[m])
plt.plot(m,np.log(A),'y->')
for i, txt in enumerate(A):
ax.annotate(np.round(txt.tolist(),5), (m[i], np.log(A[i])),fontsize = 8)
plt.xlabel('Epoch')
plt.title('log(MSE) at each 30 epochs in BP')
plt.ylabel('log(MSE)')
plt.show()
# #### Dropping from a 12.2% to a 6.72% error rate on the test data was a big boost. But I concluded that cross-entropy would be a better choice of energy function than good old MSE. Meanwhile, adding more neurons will definitely help me get into the 5% error rate club. Thus, I had my 3rd try:
# ### 3. Third try
# ##### 1. Architecture of NN: 784 input neurons, 1 hidden layer with $N_1$ = 250 neurons, 10 output neurons with 0 or 1 as their outputs to be compatible with the use of 0-1 vector outputs.
# ##### 2. Output: With [1 0 · · · 0] representing a 0, [0 1 0 · · · 0] representing a 1 and so on, it is denoted as $f(x_i,w)$.
# ##### 3.1 Activation functions: tanh() from input to 1st hidden layer and softmax from 1st hidden layer to output.
# ##### 3.2 Learning rate: Set to 0.03 due to the empirical principle $\eta \sim O(1/ \sqrt{m})$, m is 784 here.
# ##### 3.3 Weight initialization: For hyperbolic tangent units: sample a Uniform(-r, r) with $r = \sqrt{\frac{6}{fan_{in} + fan_{out}}}$; For softmax units: sample a Uniform(-r, r) with $r = 4\sqrt{\frac{6}{fan_{in} + fan_{out}}}$ (fan-in is the number of inputs of the unit, fan-out is the number of outputs of the unit); For bias: sample the same way as the associated weight.
# ##### 4. Energy function: Cross-entropy in the form of: $-\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{10}d_{ij} \log(f(x_i,w)_j)$, where j represents the 10 output classes.
# ##### 5. Other tricks: SGD optimizer; reduce the eta by 50% when at plateau; carry 82% of the last epoch's gradient over to the current epoch when updating; early termination (done manually).
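# For one-hot targets, the cross-entropy energy in 4 reduces to averaging $-\log$ of the probability assigned to the correct class. A small sketch (the `eps` guard against $\log 0$ is an addition, not part of the formula):

```python
import numpy as np

def cross_entropy(D, F, eps=1e-12):
    # -1/n * sum_i sum_j d_ij * log(f_ij) for one-hot rows of D
    # and softmax-probability rows of F
    return -np.sum(D * np.log(F + eps)) / D.shape[0]

D = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # one-hot targets
F = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])  # predicted probabilities
print(cross_entropy(D, F))  # -(log 0.7 + log 0.8) / 2
```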
## initialization
np.random.seed(2)
# use first n samples from training data to train the NN
n = 60000
# number of neurons in 1st hidden layer, N1
N1 = 250
# learning rate
eta = 0.03
# convergence threshold
epsilon = 5e-8
# epoch number
epoch = 2
# batch size
bs = 1
# initialize errors
entropy = list()
entropy.append(1e1)
entropy.append(5e0)
# initialize real outputs given the current w
out = np.zeros((n,10))
# initialize weights from input to 1st hidden layer
u = np.random.uniform(-0.05,0.05,size = (784*N1)).reshape(784,N1)
# initialize weights from 1st hidden layer to output
v = np.random.uniform(-0.2,0.2,size = (10*N1)).reshape(N1,10)
# initialize the bias
b = np.random.normal(-0.1,0.1,size = (bs*N1)).reshape(bs,N1)
# initialize counters
m = 0
l = 0
# initialize momentum
momentum = 0.82
# +
while (np.abs(entropy[-1]-entropy[-2])>=epsilon):
m += 1
index = np.random.choice(np.arange(n), size= n, replace=False)
X_train_rndm = X_train[index]
    # re-initialize last epoch's gradients
grad_uold = 0
grad_bold = 0
grad_vold = 0
# this loop is where we update the weights
for i in range(0,n, bs):
        lf1 = np.dot(X_train_rndm[i:(i+bs)],u) + b # compute the hidden layer's induced local field
        z0 = np.tanh(lf1) # compute the hidden layer's output
        lf2 = np.dot(z0,v) # compute the output layer's induced local field
        out[i:(i+bs)] = softmax(lf2) # compute the network's output
if (True):
# update u:
            temp1 = 1 / n * eta * np.sum(Y_train[index][i:(i+bs)] / out[i:(i+bs)], axis = 0).dot(softmax_jacobian(lf2)) # labels must follow the same shuffle as the inputs
temp2 = np.dot(temp1,v.T)
temp2 = temp2.dot(tanh_jacobian(lf1))
grad_u= np.outer(X_train_rndm[i:(i+bs)],temp2)
grad_u = grad_uold*momentum+grad_u
u -= grad_u
grad_uold = grad_u
# update b:
grad_b = (temp2)
grad_b = grad_bold*momentum+grad_b
b -=grad_b
grad_bold = grad_b
# update v:
grad_v = np.outer(z0, temp1)
grad_v = grad_vold*momentum+grad_v
v -=grad_v
grad_vold = grad_v
#if (i%1000 == 0):
#print(np.mean(u**2),np.mean(b**2),np.mean(v**2),i)
        if np.isnan(v).sum() > 0:
break
    # calculate the cross-entropy for the current epoch (labels in shuffled order)
    entropy.append(-np.sum(np.log(out) * Y_train[index])/n)
epoch +=1
if (epoch-2) % 2 == 0:
print('The current epoch is {}, entropy is {}, eta is {}.'.format((epoch-2) ,round(entropy[epoch-2],7) , eta))
if entropy[epoch-2] <= entropy[epoch-1]:
eta *= 0.5
# delete the 2 dummy points
epoch -= 2
del(entropy[0:2])
# -
u0 = u
b0 = b
v0 = v
# Initialize errors = 0.
error_test = 0
# loop on all testing samples
for i in range(10000):
    lf1 = np.dot(X_test[i:(i+bs)],u) + b # compute the hidden layer's induced local field
    z0 = np.tanh(lf1) # compute the hidden layer's output
    lf2 = np.dot(z0,v) # compute the output layer's induced local field
    out[i:(i+bs)] = softmax(lf2) # compute the network's output
    # Find the largest component of the output vector [y_0, y_1, ..., y_9]^T
    predic_out = np.argmax(out[i,:])
    # If the predicted output differs from the testing label, error += 1
    diff = predic_out - np.argmax(Y_test[i,:])
    if diff != 0:
        error_test += 1
error_test/10000
fig, ax = plt.subplots()
ny = np.arange(epoch)
m = ny[ny%30 == 0]
m = np.append(m,ny[-1])
entropy_df = pd.DataFrame(entropy, columns = ['Cross-entropy'])
A = np.array(entropy_df.iloc[m])
plt.plot(m,np.log(A),'y->')
for i, txt in enumerate(A):
    ax.annotate(np.round(txt.tolist(),5), (m[i], np.log(A[i])),fontsize = 8)
plt.xlabel('Epoch')
plt.title('log(cross-entropy) at every 30 epochs in BP')
plt.ylabel('log(cross-entropy)')
plt.show()
# #### Now, with an error rate of 4.63%, we have accomplished our goal on the 3rd try. The pseudocode is listed below:
# 1. Given n, $N_1$, momentum and $\epsilon$.
#
# 2. Initialize $\eta \sim O(1/ \sqrt{m}) \in \mathbb{R}$, $u \in \mathbb{R}^{784 \times N_1}$ and $b \in \mathbb{R}^{1 \times N_1}$ by Uniform(-r, r) with $r = \sqrt{\frac{6}{\text{fan-in} + \text{fan-out}}}$, then $v \in \mathbb{R}^{N_1 \times 10}$ by Uniform(-r, r) with $r = 4\sqrt{\frac{6}{\text{fan-in} + \text{fan-out}}}$. (fan-in is the number of inputs of the unit, fan-out is the number of outputs of the unit)
#
# 3. Initialize epoch = 0.
#
# 4. Initialize $loss_{epoch}$ = 0 for epoch = 0, 1, ... .
#
# 5. Do SGD, so sample n numbers out range(n) without replacement as the index:
# 1. for i in index, do (this loop is where we compute the mse of each epoch):
# 1. Compute the 1st induced local fields as a vector with the current training sample and weights by:
# $$
# K= x_i \cdot u + b \in \mathbb{R}^{1 \times N_1}
# $$
#         2. Then, get the outputs from the $N_1$ hidden neurons as a vector by:
# $$
# Z = tanh(K) \in \mathbb{R}^{1 \times N_1}
# $$
# 3. Do the same for the rest, the outputs of the 10 output neurons will be:
# $$
# L = Z \cdot v \in \mathbb{R}^{1 \times 10}
# $$
# $$
# y_i = softmax(L)
# $$
# 2. Compute the Cross-entropy for current epoch:
# $$
# loss_{epoch} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{10}d_{ij} log(f(x_i,w)_j)
# $$
# Then, update the epoch:
# $$
# epoch \leftarrow epoch +1.
# $$
# 3. for i = 1 to n, do (this loop is where we update the weights):
# Note:
# $$
# \frac{\partial tanh(U)}{\partial U} = \begin{bmatrix} \frac{\partial tanh(u_1)}{\partial u_1} & \frac{\partial tanh(u_1)}{\partial u_2} & \cdots & \frac{\partial tanh(u_1)}{\partial u_{q}} \\ \frac{\partial tanh(u_2)}{\partial u_1} & \frac{\partial tanh(u_2)}{\partial u_2} & & \vdots \\ \vdots & & \ddots & \vdots \\ \frac{\partial tanh(u_{q})}{\partial u_1} & \cdots & \cdots & \frac{\partial tanh(u_{q})}{\partial u_{q}} \end{bmatrix} = \begin{bmatrix} 1- tanh^2(u_1) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1- tanh^2(u_{q}) \end{bmatrix} = (1-tanh(U)) \bigodot (1+tanh(U))
# $$
# $$
# \frac{\partial softmax(U)}{\partial U} = \begin{bmatrix} \frac{\partial softmax(u_1)}{\partial u_1} & \frac{\partial softmax(u_1)}{\partial u_2} & \cdots & \frac{\partial softmax(u_1)}{\partial u_{q}} \\ \frac{\partial softmax(u_2)}{\partial u_1} & \frac{\partial softmax(u_2)}{\partial u_2} & & \vdots \\ \vdots & & \ddots & \vdots \\ \frac{\partial softmax(u_{q})}{\partial u_1} & \cdots & \cdots & \frac{\partial softmax(u_{q})}{\partial u_{q}} \end{bmatrix} = \begin{bmatrix} u_1(\delta_{11}-u_1) & \cdots & u_1(\delta_{1q}-u_q) \\ \vdots & \ddots & \vdots \\ u_q(\delta_{q1}-u_1) & \cdots & u_q(\delta_{qq}-u_q) \end{bmatrix} = U \cdot I - U \otimes U^T
# $$
# 3. update the u here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial loss}{\partial u} = -x_i \cdot \frac{1}{n}(d_i-y_i) \cdot \frac{\partial softmax(L)}{\partial L} \cdot v \cdot \frac{\partial tanh(K)}{\partial K} = -x_i \cdot \frac{1}{n}(d_i-y_i) \cdot (L \cdot I - L \otimes L^T) \cdot v \cdot (1-tanh(K)) \bigodot (1+tanh(K))
# $$
# 2. The update would be:
# $$
# u_{(t)} \leftarrow u_{(t-1)} \cdot momentum - \eta \frac{\partial loss}{\partial u} = u_{(t-1)} \cdot momentum - x_i \cdot \frac{1}{n}(d_i-y_i) \cdot (L \cdot I - L \otimes L^T) \cdot v \cdot (1-tanh(K)) \bigodot (1+tanh(K))
# $$
# 4. update the b here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial loss}{\partial b} = - \frac{1}{n}(d_i-y_i) \cdot \frac{\partial softmax(L)}{\partial L} \cdot v \cdot \frac{\partial tanh(K)}{\partial K} = - \frac{1}{n}(d_i-y_i) \cdot (L \cdot I - L \otimes L^T) \cdot v \cdot (1-tanh(K)) \bigodot (1+tanh(K))
# $$
# 2. The update would be:
# $$
# b_{(t)} \leftarrow b_{(t-1)} \cdot momentum - \eta \frac{\partial loss}{\partial b} =b_{(t-1)} \cdot momentum - \frac{1}{n}(d_i-y_i) \cdot (L \cdot I - L \otimes L^T) \cdot v \cdot (1-tanh(K)) \bigodot (1+tanh(K))
# $$
# 5. update the v here:
# 1. According to the backpropogation algorithm, we have:
# $$
# \frac{\partial loss}{\partial v} = -tanh(K) \cdot \frac{1}{n}(d_i - y_i) \cdot \frac{\partial softmax(L)}{\partial L}
# $$
# 2. The update would be:
# $$
# v_{(t)} \leftarrow v_{(t-1)} \cdot momentum - \eta \frac{\partial loss}{\partial v} =v_{(t-1)} \cdot momentum - \frac{1}{n}(d_i-y_i) \cdot Z \cdot (L \cdot I - L \otimes L^T)
# $$
#
# 6. Every few epochs, halve the learning rate if the loss has increased:
# $$
# \eta \leftarrow 0.5\,\eta \quad \text{if} \quad loss_{epoch} - loss_{epoch-1} > 0
# $$
# 7. Loop back to step 5 while $abs(loss_{epoch} - loss_{epoch-1}) > \epsilon$.
#
#
# Remark: the initialization needs some trial and error.
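# As a quick numerical sanity check on the Jacobians used in the update rules above, $\frac{\partial softmax(U)}{\partial U} = diag(s) - s s^T$ can be compared against central finite differences (a standalone sketch; `softmax` here is a local stand-in, not the notebook's own helper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

u = np.array([0.3, -1.2, 0.7, 0.1])
s = softmax(u)
analytic = np.diag(s) - np.outer(s, s)      # diag(s) - s s^T

# central finite differences: column j approximates d softmax / d u_j
h = 1e-6
numeric = np.zeros((4, 4))
for j in range(4):
    e_j = np.zeros(4); e_j[j] = h
    numeric[:, j] = (softmax(u + e_j) - softmax(u - e_j)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))   # should be tiny (~1e-10 or smaller)
```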
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MNIST Image Classification with TensorFlow on Cloud AI Platform
#
# This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras).
#
# ## Learning Objectives
# 1. Understand how to build a Dense Neural Network (DNN) for image classification
# 2. Understand how to use dropout (DNN) for image classification
# 3. Understand how to use Convolutional Neural Networks (CNN)
# 4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)
#
# Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/2_mnist_models.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
#
# First things first. Configure the parameters below to match your own Google Cloud project details.
# + id="Nny3m465gKkY" colab_type="code" colab={}
# !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# -
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
# +
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
# -
# ## Building a dynamic model
#
# In the previous notebook, <a href="1_mnist_linear.ipynb">1_mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.
#
# The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.
#
# Let's start with the trainer file first. This file parses command line arguments to feed into the model.
# +
# %%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else os.path.join(
        args.job_dir, trial_id)
    model_layers = model.get_layers(args.model_type)
    image_model = model.build_model(model_layers, output_path)
    model_history = model.train_and_evaluate(
        image_model, args.epochs, args.steps_per_epoch, output_path)
if __name__ == '__main__':
main()
# -
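# `task.py` above uses `parser.parse_known_args` rather than `parse_args`: unknown flags (such as extras the training service may pass) are collected instead of raising an error, and only the known ones land in the namespace. A small illustration (`--some_platform_flag` is a made-up flag for the demo):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_type', type=str, default='linear')

# Known flags are parsed; unrecognized ones are returned in a second list
known, unknown = parser.parse_known_args(
    ['--model_type=cnn', '--some_platform_flag=1'])
print(known.model_type)   # cnn
print(unknown)            # ['--some_platform_flag=1']
```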
# Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
# +
# %%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
# -
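# `load_dataset` one-hot encodes the labels with `tf.keras.utils.to_categorical`. For reference, here is the same encoding written out in plain NumPy (a sketch of what that call produces):

```python
import numpy as np

def to_one_hot(y, nclasses=10):
    # row i is all zeros except a 1 at column y[i]
    out = np.zeros((len(y), nclasses))
    out[np.arange(len(y)), y] = 1.0
    return out

print(to_one_hot(np.array([3, 0, 9]), nclasses=10)[0])
```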
# Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions: `get_layers` and `create_and_train_model`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.
#
# **TODO 1**: Define the Keras layers for a DNN model
# **TODO 2**: Define the Keras layers for a dropout model
# **TODO 3**: Define the Keras layers for a CNN model
#
# Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
# +
# %%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dense(nclasses),
Softmax()
],
'dnn_dropout': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
],
'cnn': [
Conv2D(num_filters_1, kernel_size=kernel_size_1,
activation='relu', input_shape=(WIDTH, HEIGHT, 1)),
MaxPooling2D(pooling_size_1),
Conv2D(num_filters_2, kernel_size=kernel_size_2,
activation='relu'),
MaxPooling2D(pooling_size_2),
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
# -
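# As a sanity check on the `cnn` layer stack above, the spatial dimensions can be traced by hand, assuming the Keras defaults of stride-1 `'valid'` convolutions and pooling strides equal to the pool size:

```python
def conv_out(size, kernel):          # 'valid' convolution, stride 1
    return size - kernel + 1

def pool_out(size, pool):            # non-overlapping max pooling
    return size // pool

s = 28                 # input height/width
s = conv_out(s, 3)     # Conv2D(64, 3)   -> 26
s = pool_out(s, 2)     # MaxPooling2D(2) -> 13
s = conv_out(s, 3)     # Conv2D(32, 3)   -> 11
s = pool_out(s, 2)     # MaxPooling2D(2) -> 5
print(s, s * s * 32)   # 5 features per side, 800 units after Flatten
```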
# ## Local Training
#
# With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script, `mnist_models/trainer/test.py`, to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.
#
# Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
# !python3 -m mnist_models.trainer.test
# Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a python module locally first using the command line.
#
# The cell below transfers some of our variables to the command line and creates a job directory name that includes a timestamp.
# +
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
# -
# The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
# + language="bash"
# python3 -m mnist_models.trainer.task \
# --job-dir=$JOB_DIR \
# --epochs=5 \
# --steps_per_epoch=50 \
# --model_type=$MODEL_TYPE
# -
# ## Training on the cloud
#
# Since we're using a version of TensorFlow not yet released on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
# %%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
# The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
# !docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
# !docker push $IMAGE_URI
# Finally, we can kickoff the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our docker image using the `master-image-uri` flag.
# +
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
# + language="bash"
# echo $JOB_DIR $REGION $JOB_NAME
# gcloud ai-platform jobs submit training $JOB_NAME \
# --staging-bucket=gs://$BUCKET \
# --region=$REGION \
# --master-image-uri=$IMAGE_URI \
# --scale-tier=BASIC_GPU \
# --job-dir=$JOB_DIR \
# -- \
# --model_type=$MODEL_TYPE
# -
# Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow.
# ## Deploying and predicting with model
#
# Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The commands below use the Keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.
#
# Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
# + language="bash"
# MODEL_NAME="mnist"
# MODEL_VERSION=${MODEL_TYPE}
# MODEL_LOCATION=${JOB_DIR}keras_export/
# echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
# #yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# #yes | gcloud ai-platform models delete ${MODEL_NAME}
# gcloud config set ai_platform/region global
# gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
# gcloud ai-platform versions create ${MODEL_VERSION} \
# --model ${MODEL_NAME} \
# --origin ${MODEL_LOCATION} \
# --framework tensorflow \
# --runtime-version=2.6
# -
# To predict with the model, let's take one of the example images.
#
# **TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
# +
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
# -
# Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
# + language="bash"
# gcloud ai-platform predict \
# --model=mnist \
# --version=${MODEL_TYPE} \
# --json-instances=./test.json
# -
# Copyright 2021 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Homework 7
# ### <NAME>
# ### February 2020
# ***
# +
### Imports
# -
import scipy.io as sio
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.mixture import GaussianMixture
from sklearn.metrics import confusion_matrix
from scipy.spatial import distance
from math import pi
# ### Exercise 3
# ***
# +
### Let's plot the three distributions
### Assuming that the classes are equiprobable we have:
### P(ω1) = P(ω2) = P(ω3) = 1/3
### P(ω1)p(x|ω1) = 1/15
### P(ω2)p(x|ω2) = 1/27
### P(ω3)p(x|ω3) = 1/3
# +
plt.figure(figsize=(10,8))
plt.title("Equiprobable Prior Plot")
plt.grid()
### P(ω1)
plt.plot([0,1,2], [(1/15)]*3,'y',label='$p(x|\omega_1)P(\omega_1)$')
plt.plot([5,6,7,8],[(1/15)]*4,'y')
plt.fill_between([0,1,2], 0, (1/27), color='y', alpha=0.5, label='$R_1$', hatch='/', edgecolor="c")
plt.fill_between([0,1,2], (1/27), (1/15), color='y', alpha=0.4)
plt.fill_between([5,6,7,8], 0, (1/27), color='y', alpha=0.5, hatch='/', edgecolor="c")
plt.fill_between([5,6,7,8], (1/27), (1/15), color='y', alpha=0.4)
### P(ω2)
plt.plot([0,1,2,3,4,5,6,7,8,9],[(1/27)]*10,'c',label='$p(x|\omega_2)P(\omega_2)$')
plt.fill_between([2,3], 0, (1/27), color='c', alpha=0.6,label='$R_2$')
plt.fill_between([4,5], 0, (1/27), color='c', alpha=0.6)
plt.fill_between([8,9], 0, (1/27), color='c', alpha=0.6)
### P(ω3)
plt.plot([3,4],[(1/3)]*2,'r',label='$p(x|\omega_3)P(\omega_3)$')
plt.fill_between([3,4], 0, (1/27), color='r', hatch='/',edgecolor="c",alpha=0.6)
plt.fill_between([3,4], (1/27), 1/3, color='r',label='$R_3$')
plt.legend(loc=1)
plt.show()
# -
# ### Exercise 4
# ***
# +
training_set = sio.loadmat('Training_set.mat')
train_x = training_set['train_x']
train_y= training_set['train_y']
test_set = sio.loadmat('Test_set.mat')
test_x = test_set['test_x']
test_y = test_set['test_y']
# +
### Bayes Classifier
### In order to adopt such solution we need to calculate:
### 1) The prior probabilities of the Classes in the Train Set
### And estimate:
### 2) The pdf's the p(x|class_id) of each class.
# -
### Total Training Samples
total_n = len(train_y)
# +
### Let's estimate the priors as the number of
### assigned points in each class divided by the total number of points
idx_1 = (train_y==1).reshape(total_n)
idx_2 = (train_y==2).reshape(total_n)
idx_3 = (train_y==3).reshape(total_n)
Prior_class_1 = np.count_nonzero(idx_1) / total_n
Prior_class_2 = np.count_nonzero(idx_2) / total_n
Prior_class_3 = np.count_nonzero(idx_3) / total_n
print("The Prior of Class 1 is: {}".format(Prior_class_1))
print("The Prior of Class 2 is: {}".format(Prior_class_2))
print("The Prior of Class 3 is: {}".format(Prior_class_3))
# +
### In order to estimate the p(x|class_id) of each class, we need to have
### an idea of how the classes are distributed. Because the data are 4-D,
### we will plot all the possible feature combinations.
# -
def perform_3d_ploting(dataset, dimension_set=0):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
    ax.scatter(dataset[idx_1,dimension_set],dataset[idx_1,dimension_set+1],dataset[idx_1,dimension_set+2], c='c', label='class 1')
    ax.scatter(dataset[idx_2,dimension_set],dataset[idx_2,dimension_set+1],dataset[idx_2,dimension_set+2], c='r', label='class 2')
    ax.scatter(dataset[idx_3,dimension_set],dataset[idx_3,dimension_set+1],dataset[idx_3,dimension_set+2], c='y', label='class 3')
plt.legend(loc=3)
plt.show()
return
# +
### Let's plot the first 3 dimensions
# -
perform_3d_ploting(train_x, dimension_set=0)
# +
### Let's plot the 2nd 3rd and 4th dimensions
# -
perform_3d_ploting(train_x, dimension_set=1)
# +
### We can see that the data of class 1 come from 2 clusters,
### the data from class 2 and class 3 from 1 cluster each.
# +
### For the parametric approach we will use the Gaussian Mixture Model for each class.
### We will adopt a 2 component Gaussian for Class 1
### We will adopt a single component Gaussian for Class 2 and Class 3
# -
class_1_estimator = GaussianMixture(n_components=2, covariance_type='full')
class_1_estimator.fit(train_x[idx_1,:])
class_1_scores = np.exp(class_1_estimator.score_samples(test_x))*Prior_class_1
class_1_scores
class_2_estimator = GaussianMixture(n_components=1, covariance_type='full')
class_2_estimator.fit(train_x[idx_2,:])
class_2_scores = np.exp(class_2_estimator.score_samples(test_x))*Prior_class_2
class_2_scores
class_3_estimator = GaussianMixture(n_components=1, covariance_type='full')
class_3_estimator.fit(train_x[idx_3,:])
class_3_scores = np.exp(class_3_estimator.score_samples(test_x))*Prior_class_3
class_3_scores
# +
### Let's aggregate the per-point class score into a single matrix
# -
total_scores = np.array([class_1_scores,class_2_scores,class_3_scores]).T
# +
### Now let's test with respect to the 'real' class
# -
parametric_method_results = np.argmax(total_scores,axis=1).reshape(len(total_scores),1) + 1
parametric_method_error = 1 - np.mean(parametric_method_results == test_y)
print("The parametric method error is: {}".format(round(parametric_method_error, 5)))
# +
### The confusion Matrix
# -
confusion_matrix(test_y.reshape(-1), parametric_method_results.reshape(-1))
# +
### For the non- parametric approach we will use kNN density estimation
# -
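# The idea in one dimension, before the 4-D version below: estimate $p(x) \approx k / (N \cdot V(r_k))$, where $r_k$ is the distance from x to its k-th nearest training sample (an illustrative sketch only, with made-up uniform data):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=2000)      # true density is 1 on [0, 1]

def knn_density_1d(x, samples, k=8):
    r_k = np.sort(np.abs(samples - x))[k - 1]   # distance to the k-th neighbour
    volume = 2.0 * r_k                          # a 1-D "ball" is an interval
    return k / (len(samples) * volume)

print(knn_density_1d(0.5, samples))             # should land near 1
```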
### First we need to estimate pairwise distances between test and training samples
pairwise_dist = distance.cdist(test_x,train_x,'euclidean')
### Next we need to define the 4d-hypersphere volume to include the neighbours
def hyper4dvolume(distance):
return 0.5*(pi**2)*(distance**4)
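# `hyper4dvolume` is the n = 4 case of the general n-ball volume $V_n(r) = \pi^{n/2} r^n / \Gamma(n/2 + 1)$; with $\Gamma(3) = 2$ this gives $\pi^2 r^4 / 2$. A quick cross-check:

```python
from math import gamma, pi

def ball_volume(n, r):
    # general formula for the volume of an n-dimensional ball
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

r = 1.7
print(ball_volume(4, r), 0.5 * pi ** 2 * r ** 4)   # the two agree
```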
# +
### We will choose the number of neighbours arbitrarily at k = 8
k=8
N1 = Prior_class_1 * total_n
N2 = Prior_class_2 * total_n
N3 = Prior_class_3 * total_n
# +
# the distance to the k-th nearest neighbour sits at sorted index k-1
class_1_scores = k/(N1*hyper4dvolume(np.sort(pairwise_dist[:,idx_1])[:,k-1]))*Prior_class_1
class_2_scores = k/(N2*hyper4dvolume(np.sort(pairwise_dist[:,idx_2])[:,k-1]))*Prior_class_2
class_3_scores = k/(N3*hyper4dvolume(np.sort(pairwise_dist[:,idx_3])[:,k-1]))*Prior_class_3
total_scores = np.array([class_1_scores,class_2_scores,class_3_scores]).T
non_parametric_results = np.argmax(total_scores,axis=1).reshape(len(total_scores),1) + 1
non_parametric_error = 1 - np.mean(non_parametric_results == test_y)
# -
print("The Non-parametric method error is: {}".format(round(non_parametric_error, 5)))
# +
### The confusion Matrix
# -
confusion_matrix(test_y.reshape(-1), non_parametric_results.reshape(-1))