# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AMPEL intro I: Introduction to Tier 0 of AMPEL
#
# This notebook provides a basic introduction to the filtering of alerts that takes place in the first tier of **AMPEL**.
# ## T0 Unit Example
#
# Here is an example of how to implement and run a T0 unit. This diagram summarizes the whole process:
#
# 
#
#
# The internal structure of **AMPEL** is designed to interact with classes that inherit from the `AbsAlertFilter` class. At least two methods need to be implemented: the constructor and `apply`. The constructor stores the run parameters to be used by this instance of the filter - see the `t0_SampleFilter` notebook for more information regarding these. The `apply` method is called once for each alert received by **AMPEL** and receives an `AmpelAlert` object as a parameter. If an alert is to be rejected (not retained for further processing), `apply` should return `None`. If an alert should be accepted, the method should instead return a list of the T2 processing units that should be run on each transient state. Note that an empty list, `[]`, can be returned when a transient should be accepted but no T2 units run.
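#
# The return-value contract of `apply` can be illustrated with a plain function and a dict standing in for an `AmpelAlert` (the `magpsf` key and the 18.5 threshold here are hypothetical illustration values, not part of the real `AmpelAlert` API):

```python
# +
# A toy stand-in for an AbsAlertFilter's apply method, showing the
# three possible outcomes. The 'magpsf' key and the 18.5 magnitude
# cut are made-up values for this sketch.
def toy_apply(alert):
    mag = alert.get("magpsf")
    if mag is None:
        return None                      # reject: no usable photometry
    if mag < 18.5:
        return ["T2ExamplePolyFit_5D"]   # accept and queue a T2 unit
    return []                            # accept, but run no T2 units

print(toy_apply({"magpsf": 17.0}))   # bright alert: T2 unit queued
print(toy_apply({"magpsf": 20.0}))   # faint alert: accepted, no T2s
print(toy_apply({}))                 # no photometry: rejected
# -
```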
#
#
# ## AMPEL logging
#
# Each filter will also receive a `logger` instance when initiated. We recommend using this consistently, as in a live **AMPEL** instance each log entry is stored in a dedicated, searchable database and automatically tagged with time and channel ID. Here we will use a simple logger for demonstration.
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# ## A random filter sample class.
#
# We here create a filter that randomly selects whether an alert is accepted or not. In this case, accepted alerts are queued for processing by a T2 unit, `T2ExamplePolyFit_5D`, that fits a fifth-degree polynomial. This is further discussed in the `t2_unit_example` notebook.
#
# +
from random import choice
from ampel.base.abstract.AbsAlertFilter import AbsAlertFilter
class RandFilter(AbsAlertFilter):
    version = 1.0

    def __init__(self, on_match_t2_units=["T2ExamplePolyFit_5D"], base_config=None, run_config=None, logger=logger):
        self.on_match_t2_units = on_match_t2_units

    def apply(self, ampel_alert):
        return choice([self.on_match_t2_units, None])
# -
# ## Input data
#
# For this offline use we will make use of the daily tar releases of ZTF alerts. Alerts for any date can be downloaded from `https://ztf.uw.edu/alerts/public/`. Alert information is identical for both the live processing and these alert collections. More information regarding the alert content can be found at `https://zwickytransientfacility.github.io/ztf-avro-alert/schema.html`.
#
# The specific file used in this example notebook was chosen because its small size allows for fast download and demonstration. It is thus not representative of a normal ZTF night.
#
# These notebooks, together with the alert archive, can thus be used to fully replicate an alert broker. A critical step for most users will be catalog matching. In a live **AMPEL** instance this is done using either `catsHTM` or `extcats`. As many catalogs are too large for routine use in a laptop development environment, we also provide a version of one of the ZTF alert files where all matches to Gaia stars have been purged. This alert collection is thus more representative of the kind of data an **AMPEL** channel with Gaia star rejection turned on will face.
#
# +
import os
import urllib.request
small_test_tar_url = 'https://ztf.uw.edu/alerts/public/ztf_public_20181129.tar.gz'
small_test_tar_path = 'ztf_public_20181129.tar.gz'
if not os.path.isfile(small_test_tar_path):
    print('Downloading tar')
    urllib.request.urlretrieve(small_test_tar_url, small_test_tar_path)
# -
# ## The _AlertProcessor_
#
# We next need to create an _AlertProcessor_ to process alerts using a specified filter. We here use the `DevAlertProcessor` which is designed to process a (tar) archive of alerts. A live **AMPEL** instance will work identically, but use an alert processor designed for a particular live stream (in the case of ZTF this is currently done through `Kafka Streams`).
#
# We here initialize an object from our T0 unit class and give it as an initialization parameter to a `DevAlertProcessor` object:
# +
from ampel.ztf.pipeline.t0.DevAlertProcessor import DevAlertProcessor
my_filter = RandFilter()
dap = DevAlertProcessor(my_filter, use_dev_alerts=True)
# -
# ## Processing alerts ##
#
# Then, we just proceed to execute the `process_tar` method of the `DevAlertProcessor` object, in order to process the compressed TAR file we downloaded previously. We can measure the execution time with the help of the `time` Python standard library module.
#
# Even the minimal alert collection chosen for this example contains thousands of alerts. For clarity, we here limit the number of alerts processed each time the command is run to 200 alerts (`iter_max=200`).
#
# +
import time
print ("processing alerts from %s" % small_test_tar_path)
start = time.time()
nproc = dap.process_tar(small_test_tar_path, iter_max=200)
end = time.time()
print ("processed %d alerts in %.2e sec"%(nproc, end-start))
# -
# After the dataset has been processed, we can see which alerts were accepted and which were rejected with the help of the `get_accepted_alerts` and the `get_rejected_alerts` methods, respectively:
n_good, n_bad = len(dap.get_accepted_alerts()), len(dap.get_rejected_alerts())
print ("%d alerts accepted by the filter (%.2f perc)"%(n_good, 100*n_good/nproc))
print ("%d alerts rejected by the filter (%.2f perc)"%(n_bad, 100*n_bad/nproc))
# We can also visualize any of these alerts with the help of the `summary_plot` method of the `AmpelAlertPlotter` class. This method receives an `AmpelAlert` object and plots a summary for it. In this case we will plot a random alert from the set of accepted alerts:
# +
from ampel.ztf.view.AmpelAlertPlotter import AmpelAlertPlotter
accepted = dap.get_accepted_alerts()
accepted_plot = AmpelAlertPlotter(interactive=True)
# -
accepted_plot.summary_plot(choice(accepted))
# Of course, we can also plot any alert from the set of rejected alerts:
rejected = dap.get_rejected_alerts()
rejected_plot = AmpelAlertPlotter(interactive=True)
rejected_plot.summary_plot(choice(rejected))
# ## Re-running _DevAlertProcessor_ on previously accepted alerts
# One of the good aspects of the T0 module is that it allows re-running units over a set of alerts multiple times. In this case, we will run the same random filter we implemented previously, this time only over the set of accepted alerts. In a real case it is of course also possible to run a different unit over the processed set of alerts.
# +
new_filter = RandFilter()
recursive_dap = DevAlertProcessor(new_filter)
print ("Reprocessing alerts from %s" % small_test_tar_path)
start = time.time()
recursive_dap.process_loaded_alerts(accepted)
end = time.time()
print ("Reprocessed %d alerts in %.2e sec"%(len(accepted), end-start))
new_accepted = recursive_dap.get_accepted_alerts()
# -
# Same as before, we can inspect the set of rejected and accepted alerts and plot any alert from these sets:
n_good, n_bad = len(recursive_dap.get_accepted_alerts()), len(recursive_dap.get_rejected_alerts())
print ("%d alerts accepted by the filter (%.2f perc)"%(n_good, 100*n_good/len(accepted)))
print ("%d alerts rejected by the filter (%.2f perc)"%(n_bad, 100*n_bad/len(accepted)))
new_accepted_plot = AmpelAlertPlotter(interactive=True)
new_accepted_plot.summary_plot(choice(new_accepted))
# Finally, we may remove the compressed TAR file we previously downloaded.
import os
os.remove(small_test_tar_path)
# notebooks/t0_unit_example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + [markdown] state="normal"
# # Python Language Intro (Part 3)
# + [markdown] state="normal"
# ## Agenda
#
# 1. Language overview
# 2. White space sensitivity
# 3. Basic Types and Operations
# 4. Statements & Control Structures
# 5. Functions
# 6. OOP (Classes, Methods, etc.)
# 7. Immutable Sequence Types (Strings, Ranges, Tuples)
# 8. Mutable data structures: Lists, Sets, Dictionaries
# + state="normal"
# by default, only the result of the last expression in a cell is displayed after evaluation.
# the following forces display of *all* self-standing expressions in a cell.
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# + [markdown] state="normal"
# ## 7. Immutable Sequence Types: Strings, Ranges, Tuples
# + [markdown] state="normal"
# Recall: All immutable sequences support the [common sequence operations](https://docs.python.org/3/library/stdtypes.html#common-sequence-operations). For many sequence types, there are also constructors that let us create them from other sequence types.
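# A handful of those common operations, tried on a string, a range, and a tuple:

```python
# +
# The same common sequence operations work uniformly across the
# immutable sequence types:
s, r, t = 'hello', range(10), (1, 2, 3, 2)
print('ell' in s)     # substring membership test
print(7 in r)         # membership in a range
print(t.count(2))     # number of occurrences in a tuple
print(t.index(3))     # position of the first occurrence
print(s[1:4])         # slicing a string gives a string
print(list(r[::2]))   # slicing a range gives another range
# -
```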
# + [markdown] state="normal"
# ### Strings
# + state="normal"
'hello'
# -
str('hello')
# + [markdown] state="normal"
# ### Ranges
# + state="normal"
range(10)
# + state="normal"
range(10, 20)
# + state="normal"
range(20, 50, 5)
# + state="normal"
range(10, 0, -1)
# + [markdown] state="normal"
# ### Tuples
# + state="normal"
()
# + state="normal"
(1, 2, 3)
# + state="normal"
('a', 10, False, 'hello')
# + state="normal"
tuple(range(10))
# + state="normal"
tuple('hello')
# + [markdown] state="normal"
# ## 8. Mutable data structures: Lists, Sets, Dicts
# + [markdown] state="normal"
# ### Lists
# + [markdown] state="normal"
# This list supports the [mutable sequence operations](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) in addition to the [common sequence operations](https://docs.python.org/3/library/stdtypes.html#common-sequence-operations).
# + state="normal"
l = [1, 2, 1, 1, 2, 3, 3, 1]
# + state="normal"
len(l)
# + state="normal"
l[5]
# + state="normal"
l[1:-1]
# + state="normal"
l + ['hello', 'world']
# + state="normal"
l # `+` does *not* mutate the list!
# + state="normal"
l * 3
# + state="normal"
sum = 0
for x in l:
    sum += x
sum  # note: this assignment shadows the builtin `sum` function
# + [markdown] state="normal"
# #### Mutable list operations
# + state="normal"
l = list('hell')
# + state="normal"
l.append('o')
# + state="normal"
l
# + state="normal"
l.append(' there')
# + state="normal"
l
# + state="normal"
del l[-1]
# + state="normal"
l.extend(' there')
# + state="normal"
l
# + state="normal"
l[2:7]
# + state="normal"
del l[2:7]
# + state="normal"
l
# + state="normal"
l[:]  # slicing with no bounds yields a (shallow) copy of the whole list
# + [markdown] state="normal"
# #### List comprehensions
# + state="normal"
[x for x in range(10)]
# + state="normal"
[2*x+1 for x in range(10)] # odd numbers
# + state="normal"
# pythagorean triples
n = 50
[(a, b, c) for a in range(1, n)
           for b in range(a, n)
           for c in range(b, n)
           if a**2 + b**2 == c**2]
# + state="normal"
adjs = ('hot', 'blue', 'quick')
nouns = ('table', 'fox', 'sky')
phrases = []
for adj in adjs:
    for noun in nouns:
        phrases.append(adj + ' ' + noun)
# + state="normal"
phrases
# + state="normal"
[adj + ' ' + noun for adj in adjs for noun in nouns]
# + [markdown] state="normal"
# ### Sets
# + [markdown] state="normal"
# A [set](https://docs.python.org/3.7/library/stdtypes.html#set-types-set-frozenset) is a data structure that represents an *unordered* collection of unique objects (like the mathematical set).
# + state="normal"
s = {1, 2, 1, 1, 2, 3, 3, 1}
# + state="normal"
s
# + state="normal"
t = {2, 3, 4, 5}
# + state="normal"
s.union(t)
# + state="normal"
s.difference(t)
# + state="normal"
s.intersection(t)
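# The method calls above also have operator equivalents; the operators require both operands to be sets, while the methods accept any iterable:

```python
# +
# Operator forms of the set operations demonstrated above, plus the
# symmetric difference:
s, t = {1, 2, 1, 1, 2, 3, 3, 1}, {2, 3, 4, 5}
print(s | t)   # union
print(s - t)   # difference
print(s & t)   # intersection
print(s ^ t)   # symmetric difference: in either set, but not both
# -
```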
# + [markdown] state="normal"
# ### Dicts
# + [markdown] state="normal"
# A [dictionary](https://docs.python.org/3/library/stdtypes.html#mapping-types-dict) is a data structure that contains a set of unique key → value mappings.
# + state="normal"
d = {
'Superman': '<NAME>',
'Batman': '<NAME>',
'Spiderman': '<NAME>',
'Ironman': '<NAME>'
}
# + state="normal"
d['Ironman']
# + state="normal"
d['Ironman'] = '<NAME>'
# + state="normal"
d
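# A few other everyday dict operations, shown on a small stand-in dict (the keys and values here are placeholders):

```python
# +
# Membership, safe lookup, and the three dict views:
d = {'a': 1, 'b': 2}
print('a' in d)        # membership tests keys, not values
print(d.get('z'))      # .get returns None instead of raising KeyError
print(d.get('z', 0))   # ...or a default you supply
print(list(d.keys()), list(d.values()), list(d.items()))
# -
```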
# + [markdown] state="normal"
# #### Dictionary comprehensions
# + state="normal"
{e:2**e for e in range(0,100,10)}
# + state="normal"
{x:y for x in range(3) for y in range(10)}
# + state="normal"
sentence = 'a man a plan a canal panama'
{w:w[::-1] for w in sentence.split()}
# + [markdown] state="normal"
# ## Demo
# CS331/Lect 03 Python Fundamentals.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Can Atoti Predict Baseball? How will the COVID Season Impact Baseball Stats?
#
# ## Introduction
#
# Credit: The original version of this notebook was created by <NAME> and is available at https://github.com/ncosgrov/atoti_baseball_analysis.
#
# Read more: https://medium.com/@neilcosgrove/can-atoti-predict-baseball-how-will-the-covid-season-impact-baseball-stats-ad9322e3b722
# Baseball is a game built on statistics. There seems to be a statistic for every possible combination and permutation of events that can happen in a game or a season. Fans enjoy comparing stats in a perpetual game of "who was the greatest (<i>fill in the blank</i>)?", creating matchups that span generations. The availability of data analysis tools like atoti brings a whole new dimension (no pun intended) to these discussions, allowing for endless analysis which <b><u>WARNING</u></b> can become addictive :-).
#
# For a game that loves its statistics, the 2020 COVID-shortened season presents a nightmare. Instead of playing 162 games, the season will only be 60 games, no doubt setting up endless "what if" debates in future years regarding players active in this season who just miss key milestones.
#
# In this notebook, we get a jump on the debate by looking at veterans who have few seasons left in which to make up for those lost 102 games, focusing in on a specific example.
# <div style="text-align: center;" ><a href="https://www.atoti.io/?utm_source=gallery&utm_content=baseball" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover.png" alt="Try atoti"></a></div>
# ## Getting the atoti library
#
# We will first start by importing support libraries and [atoti](https://docs.atoti.io/latest/installation.html).
import atoti as tt
import pandas as pd
# ## Importing The Data and Wrangling
#
# The dataset used comes from <NAME>'s website at http://www.seanlahman.com/baseball-archive/statistics/. The work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. This is a great site and has more types of baseball stats than you can imagine. For the purpose of this exercise, we will confine ourselves to batting statistics. Those more interested in the analysis can skip ahead to the <i>Create the atoti Cube</i> section.
#
# ### Player Information
#
# We begin by importing the master table which has individual player information.
pd.set_option("display.max_columns", None)
player_df = pd.read_csv(
"http://data.atoti.io/notebooks/baseball/People.csv",
usecols=[
"playerID",
"birthYear",
"nameGiven",
"nameLast",
"nameFirst",
"bats",
"throws",
"debut",
"finalGame",
],
parse_dates=["debut", "finalGame"],
dtype={"birthYear": "Int64"},
)
player_df.head()
player_df.shape
player_df.dtypes
# We will do some data cleanup. The dataset has a lot of late 19th century data with missing values (not the dataset's fault; no one recorded the values at the time).
# Let's clean up names:
# get rid of blank first and given names, and records where we don't know when the player started
player_df.dropna(subset=["nameFirst", "debut"], inplace=True)
player_df["nameGiven"].fillna("Unknown", inplace=True)
FullName = player_df["nameFirst"] + " " + player_df["nameLast"]
player_df.insert(4, "fullName", FullName)
player_df.head()
# Anticipating that in future analysis we may want to compare players on "career years" rather than "calendar years", we record the year each player debuted; a dictionary built from it will be used with the next table.
player_df["debutYear"] = pd.DatetimeIndex(player_df["debut"]).year
player_df.head()
# ### Batting Information
#
# OK, the master table of player info is done; let's import the batting table and clean it up
battling_df = pd.read_csv(
"http://data.atoti.io/notebooks/baseball/Batting.csv",
dtype={
"RBI": "Int64",
"SB": "Int64",
"CS": "Int64",
"SO": "Int64",
"IBB": "Int64",
"HBP": "Int64",
"SH": "Int64",
"SF": "Int64",
"GIDP": "Int64",
},
)
battling_df.head()
battling_df.shape
battling_df.rename(
columns={
"lgID": "League",
"G": "Games",
"AB": "At Bats",
"R": "Runs",
"H": "Hits",
"2B": "Doubles",
"3B": "Triples",
"HR": "Homeruns",
"RBI": "Runs Batted In",
"SB": "Stolen Bases",
"CS": "Caught Stealing",
"BB": "Base on Balls",
"SO": "Strikeouts",
"IBB": "Intentional walks",
"HBP": "Hit by pitch",
"SH": "Sacrifice hits",
"SF": "Sacrifice flies",
"GIDP": "Grounded into double plays",
},
inplace=True,
)
battling_df.League.unique()
# Cull down to just the NL and AL leagues
modern_league_names = ["NL", "AL"]
battling_df = battling_df.loc[battling_df["League"].isin(modern_league_names)].copy()
battling_df.League.unique()
# #### Add Career Season
debutDict = dict(zip(player_df.playerID, player_df.debutYear))
battling_df["careerYear"] = battling_df["playerID"].map(debutDict)
battling_df["careerYear"] = battling_df["yearID"] - battling_df["careerYear"] + 1
battling_df["careerYear"] = battling_df["careerYear"].astype(pd.Int32Dtype())
battling_df.head()
# ### Team Information
#
# Team ID is deceptive: it is not as "human readable" as you would think, so let's replace it with franchise names (but note that these will be the current franchise names, i.e. <NAME>'s Boston Braves will show the team as the current Atlanta Braves)
# Get Franchises and create a dictionary of Franchise ID to name
franchises_df = pd.read_csv(
"http://data.atoti.io/notebooks/baseball/TeamsFranchises.csv"
)
franchises_df.shape
team_df = pd.read_csv(
"http://data.atoti.io/notebooks/baseball/Teams.csv",
usecols=["yearID", "lgID", "teamID", "franchID"],
)
team_df.shape
team_df = pd.merge(team_df, franchises_df, on=["franchID"])
team_df.shape
team_df.rename(columns={"franchName": "teamName"}, inplace=True)
team_df.head()
# ## Create the atoti Cube
session = tt.create_session()
player_table = session.read_pandas(
player_df,
keys=["playerID"],
table_name="players",
)
player_table.head()
# +
batting_table = session.read_pandas(
battling_df,
keys=["playerID", "yearID", "teamID", "League"],
table_name="batting",
)
batting_table.head()
# +
team_table = session.read_pandas(
team_df,
keys=["yearID", "lgID", "teamID"],
table_name="teams",
)
team_table.head()
# -
# atoti automatically maps columns with the same names between two tables
batting_table.join(player_table)
batting_table.join(
team_table, mapping={"League": "lgID", "yearID": "yearID", "teamID": "teamID"}
)
print(
f"Number of results: {len(player_table)} rows, {len(player_table.columns)} columns"
)
# ### Let's create the cube and see what the tables look like
cube = session.create_cube(batting_table, name="Stats", mode="manual")
cube.schema
# ### Now Create Hierarchies and Measures
h, l, m = cube.hierarchies, cube.levels, cube.measures
h["League"] = [batting_table["League"]]
h["Year"] = [batting_table["yearID"]]
h["Career Year"] = [batting_table["careerYear"]]
# players may have the same name but they will be unique by the playerId
h["Player"] = [player_table["playerID"], player_table["fullName"]]
h["Team"] = [team_table["teamName"]]
h["Debut"] = [player_table["debut"]]
h["Final Game"] = [player_table["finalGame"]]
# Let's review the hierarchies
h
# Let us create some measures. In the 2020 Major League Baseball season, there are 60 games. We shall use this to project the players' performances.
# We shall see later on how an adjustment in the season length for 2020 would impact the players' performances.
m["2020 Season Length"] = 60
m["500 Home Runs"] = 500
m["Latest Season"] = tt.total(tt.agg.max(batting_table["yearID"]), h["Year"])
m["Latest Season"].formatter = "DOUBLE[####]"
cube.query(m["2020 Season Length"], m["500 Home Runs"], m["Latest Season"])
# #### Player's stats
# +
m["birthYear"] = tt.value(player_table["birthYear"])
m["PlayerAge"] = tt.where(
l["playerID"] != None,
m["Latest Season"] - m["birthYear"],
None,
)
# -
m["Career Length"] = tt.where(
l["playerID"] != None,
tt.agg.sum(
tt.date_diff(l["debut"], l["finalGame"], unit="years"),
scope=tt.scope.origin(l["playerID"]),
),
None,
)
m["Games played"] = tt.where(
l["playerID"] != None,
tt.agg.sum(batting_table["Games"]),
None,
)
m["Player Season At Bats"] = tt.where(
l["playerID"] != None,
tt.agg.sum(batting_table["At Bats"]),
None,
)
# + atoti={"height": 469, "widget": {"columnWidths": {"[Measures].[Player Season At Bats]": 142.33334350585938}, "filters": ["{[batting].[Year].[AllMember].[2016], [batting].[Year].[AllMember].[2017], [batting].[Year].[AllMember].[2018], [batting].[Year].[AllMember].[2019], [batting].[Year].[AllMember].[2015]}", "{[players].[Player].[AllMember].[encared01], [players].[Player].[AllMember].[cabremi01], [players].[Player].[AllMember].[cruzne02], [players].[Player].[AllMember].[pujollu01], [players].[Player].[AllMember].[pujolal01], [players].[Player].[AllMember].[cruzne01]}"], "mapping": {"columns": ["ALL_MEASURES"], "measures": ["[Measures].[PlayerAge]", "[Measures].[Career Length]", "[Measures].[Games played]", "[Measures].[Player Season At Bats]"], "rows": ["[players].[Player].[playerID] => [players].[Player].[fullName] => [batting].[Year].[yearID]"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[PlayerAge], [Measures].[Career Length], [Measures].[Games played], [Measures].[Player Season At Bats]} ON COLUMNS, NON EMPTY Hierarchize(Union(Crossjoin(Hierarchize(Union(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE), Descendants({[players].[Player].[AllMember].[abadfe01]}, [players].[Player].[fullName]), Descendants({[players].[Player].[AllMember].[jacksry02]}, [players].[Player].[fullName]), Descendants({[players].[Player].[AllMember].[cabremi01]}, [players].[Player].[fullName]), Descendants({[players].[Player].[AllMember].[cruzne02]}, [players].[Player].[fullName]), Descendants({[players].[Player].[AllMember].[encared01]}, [players].[Player].[fullName]), Descendants({[players].[Player].[AllMember].[pujolal01]}, [players].[Player].[fullName]))), [batting].[Year].DefaultMember), Crossjoin([players].[Player].[AllMember].[cabremi01].[<NAME>], Hierarchize(Descendants({[batting].[Year].[AllMember]}, 1, SELF_AND_BEFORE))))) ON ROWS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}}
session.visualize("Player's stats")
# -
# #### Player's homerun stats
# +
m["Homeruns"] = tt.agg.sum(batting_table["Homeruns"])
m["Homeruns Per Game"] = m["Homeruns"] / m["Games played"]
m["Player Total HR (Year)"] = tt.agg.sum(
m["Homeruns"], scope=tt.scope.cumulative(l["yearID"])
)
m["Career HR"] = tt.agg.sum(m["Homeruns"], scope=tt.scope.cumulative(l["careerYear"]))
# projecting player's homerun
m["Median HR"] = tt.agg.mean(
m["Homeruns Per Game"], scope=tt.scope.origin(l["playerID"])
)
m["Projected HR"] = tt.math.round(m["2020 Season Length"] * m["Median HR"])
m["Projected End Of Season Career HRs"] = (
m["Player Total HR (Year)"] + m["Projected HR"]
)
# + atoti={"widget": {"filters": ["{[batting].[Year].[AllMember].[2016], [batting].[Year].[AllMember].[2017], [batting].[Year].[AllMember].[2018], [batting].[Year].[AllMember].[2019], [batting].[Year].[AllMember].[2015]}", "{[players].[Player].[AllMember].[encared01], [players].[Player].[AllMember].[cabremi01], [players].[Player].[AllMember].[cruzne02], [players].[Player].[AllMember].[pujollu01], [players].[Player].[AllMember].[pujolal01], [players].[Player].[AllMember].[cruzne01]}"], "mapping": {"columns": ["ALL_MEASURES"], "measures": ["[Measures].[Homeruns]", "[Measures].[Games played]", "[Measures].[Homeruns Per Game]", "[Measures].[Career HR]", "[Measures].[Median HR]", "[Measures].[Projected HR]", "[Measures].[Projected End Of Season Career HRs]"], "rows": ["[players].[Player].[playerID] => [players].[Player].[fullName]", "[batting].[Year].[yearID]"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[Homeruns], [Measures].[Games played], [Measures].[Homeruns Per Game], [Measures].[Career HR], [Measures].[Median HR], [Measures].[Projected HR], [Measures].[Projected End Of Season Career HRs]} ON COLUMNS, NON EMPTY Crossjoin(Hierarchize(Union(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE), Descendants({[players].[Player].[AllMember].[aaronha01]}, [players].[Player].[fullName]))), Hierarchize(Descendants({[batting].[Year].[AllMember]}, 1, SELF_AND_BEFORE))) ON ROWS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}}
session.visualize("Player's homerun stats")
# -
# #### Player's hits stats
# +
m["Hits"] = tt.agg.sum(batting_table["Hits"])
m["Hits Per Game"] = m["Hits"] / m["Games played"]
m["Player Total Hits (Year)"] = tt.agg.sum(
m["Hits"], scope=tt.scope.cumulative(l["yearID"])
)
m["Career Hits"] = tt.agg.sum(m["Hits"], scope=tt.scope.cumulative(l["careerYear"]))
# projecting player's hits
m["Median Hits"] = tt.agg.mean(m["Hits Per Game"], scope=tt.scope.origin(l["playerID"]))
m["Projected Hits"] = tt.math.round(
tt.math.round(m["2020 Season Length"] * m["Median Hits"])
)
m["Projected End Of Season Career Hits"] = (
m["Player Total Hits (Year)"] + m["Projected Hits"]
)
# + atoti={"widget": {"columnWidths": {"[Measures].[Projected End Of Season Career HRs]": 217, "[Measures].[Projected End Of Season Career Hits]": 210}, "filters": ["{[batting].[Year].[AllMember].[2016], [batting].[Year].[AllMember].[2017], [batting].[Year].[AllMember].[2018], [batting].[Year].[AllMember].[2019], [batting].[Year].[AllMember].[2015]}", "{[players].[Player].[AllMember].[encared01], [players].[Player].[AllMember].[cabremi01], [players].[Player].[AllMember].[cruzne02], [players].[Player].[AllMember].[pujollu01], [players].[Player].[AllMember].[pujolal01], [players].[Player].[AllMember].[cruzne01]}"], "mapping": {"columns": ["ALL_MEASURES"], "measures": ["[Measures].[Hits]", "[Measures].[Hits Per Game]", "[Measures].[Career Hits]", "[Measures].[Median Hits]", "[Measures].[Projected Hits]", "[Measures].[Projected End Of Season Career Hits]"], "rows": ["[players].[Player].[playerID] => [players].[Player].[fullName] => [batting].[Career Year].[careerYear]"]}, "query": {"mdx": "SELECT NON EMPTY Hierarchize(Union(Crossjoin(Union(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE), Descendants({[players].[Player].[AllMember].[cabremi01]}, [players].[Player].[fullName])), [batting].[Career Year].DefaultMember), Crossjoin([players].[Player].[AllMember].[cabremi01].[<NAME>], Hierarchize(Descendants({[batting].[Career Year].[AllMember]}, 1, SELF_AND_BEFORE))))) ON ROWS, NON EMPTY {[Measures].[Hits], [Measures].[Hits Per Game], [Measures].[Career Hits], [Measures].[Median Hits], [Measures].[Projected Hits], [Measures].[Projected End Of Season Career Hits]} ON COLUMNS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}}
session.visualize()
# -
# ## The Fun Stuff, the Analysis
#
# So let's pose a basic question: who is impacted by the COVID-shortened baseball season of 60 rather than 162 games? Certainly everyone; a missed game is a lost opportunity to put something in the book for the future, and even a non-Hall of Famer would like to get one more hit or one more home run in their career. However, veteran players, which we will define as those with 10 years in the league, have less of their careers remaining in which to make up for lost 2020 opportunities.
#
# So let us look at players who have played over 10 years and are closing in on the respected 500 home run mark.
# + atoti={"widget": {"filters": ["Filter([players].[Player].[playerID].Members, [Measures].[Career Length] > 10)", "Filter([players].[Player].[playerID].Members, [Measures].[Homeruns] > 200)", "[batting].[Year].[AllMember].[2019]"], "mapping": {"columnValues": ["[Measures].[Player Total HR (Year)]"], "horizontalSubplots": [], "lineValues": ["[Measures].[500 Home Runs]"], "verticalSubplots": [], "xAxis": ["[players].[Player].[playerID]"]}, "plotly": {"data": {"commonTraceOverride": {"line": {"color": "red"}, "mode": "lines"}}}, "query": {"mdx": "SELECT NON EMPTY Union(Hierarchize(Descendants({[players].[Player].[AllMember]}, 2, SELF_AND_BEFORE)), Hierarchize(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE))) ON ROWS, NON EMPTY {[Measures].[Player Total HR (Year)], [Measures].[500 Home Runs]} ON COLUMNS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "plotly-clustered-column-and-line-chart"}} tags=[]
session.visualize("Players over 10 Years in League with over 200 Home Runs")
# -
# This puts into perspective how tough getting 500 home runs is. Pujols is already well past the post, and <NAME> is within striking distance at 477. <NAME> (age 40) and <NAME> (age 37) have a steep climb to get there, probably needing four excellent seasons; losing 60% of their potential at bats this year may put it out of reach, especially for Cruz, who would likely have to stay healthy and maintain his production until about age 44.
#
# + atoti={"widget": {"filters": ["{[players].[Player].[AllMember].[encared01], [players].[Player].[AllMember].[cruzne02]}"], "mapping": {"horizontalSubplots": [], "splitBy": ["[players].[Player].[playerID]", "ALL_MEASURES"], "values": ["[Measures].[Homeruns]"], "verticalSubplots": [], "xAxis": ["[batting].[Career Year].[careerYear]"]}, "query": {"mdx": "SELECT NON EMPTY Crossjoin(Hierarchize(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE)), {[Measures].[Homeruns]}) ON COLUMNS, NON EMPTY Hierarchize(Descendants({[batting].[Career Year].[AllMember]}, 1, SELF_AND_BEFORE)) ON ROWS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "plotly-line-chart"}}
session.visualize("<NAME> (age 40) and <NAME>")
# -
# Let us now look at hits, focusing on the mythical 3000 mark
# + atoti={"widget": {"filters": ["[batting].[Year].[AllMember].[2019]", "Filter([players].[Player].[playerID].Members, [Measures].[Career Length] > 10)", "TopCount(Filter([players].[Player].Levels(1).Members, NOT IsEmpty([Measures].[Player Total Hits (Year)])), 25, [Measures].[Player Total Hits (Year)])"], "isTextVisible": true, "mapping": {"horizontalSubplots": [], "splitBy": ["ALL_MEASURES"], "values": ["[Measures].[Player Total Hits (Year)]"], "verticalSubplots": [], "xAxis": ["[players].[Player].[fullName]"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[Player Total Hits (Year)]} ON COLUMNS, NON EMPTY Order(Union(Hierarchize(Descendants({[players].[Player].[AllMember]}, 2, SELF_AND_BEFORE)), Hierarchize(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE))), [Measures].[Player Total Hits (Year)], BDESC) ON ROWS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "plotly-clustered-column-chart"}} tags=[]
session.visualize("Players over 10 Years in League with in top 25% of Career Hits")
# -
# Suzuki has retired (his last game shows as last year), and Pujols is in. Cano, at age 38 and 430 hits short, was facing a tough challenge. Looking at his career hit production, which is declining, the shortened 2020 season may have put it out of reach. The fact that his hit production and games played crossed last season is not a good sign.
# + atoti={"widget": {"filters": ["[players].[Player].[canoro01]"], "mapping": {"horizontalSubplots": [], "splitBy": ["ALL_MEASURES"], "values": ["[Measures].[Hits]", "[Measures].[Games played]"], "verticalSubplots": [], "xAxis": ["[batting].[Career Year].[careerYear]"]}, "query": {"mdx": "SELECT NON EMPTY Hierarchize(Descendants({[batting].[Career Year].[AllMember]}, 1, SELF_AND_BEFORE)) ON ROWS, NON EMPTY {[Measures].[Hits], [Measures].[Games played]} ON COLUMNS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "plotly-line-chart"}}
session.visualize("Robinson Cano Hit Production VS Career Years")
# -
# However, the most interesting case is <NAME>; let's look at his stats at the close of 2019
# + atoti={"height": 145, "widget": {"columnWidths": {"[Measures].[Player Total Hits (Year)]": 155}, "filters": ["[batting].[Year].[AllMember].[2019]", "[players].[Player].[cabremi01]"], "mapping": {"columns": ["ALL_MEASURES"], "measures": ["[Measures].[Career Length]", "[Measures].[PlayerAge]", "[Measures].[Homeruns]", "[Measures].[Player Total HR (Year)]", "[Measures].[Hits]", "[Measures].[Player Total Hits (Year)]"], "rows": []}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[Career Length], [Measures].[PlayerAge], [Measures].[Homeruns], [Measures].[Player Total HR (Year)], [Measures].[Hits], [Measures].[Player Total Hits (Year)]} ON COLUMNS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "switchedTo": "kpi", "widgetKey": "pivot-table"}} tags=[]
session.visualize("Cabrera End of 2019 Stats")
# -
# Cabrera is well within striking distance of joining some exclusive company, the "500 HR/3000 hits club", a feat attained by only 6 players in the history of baseball.
# + atoti={"height": 278, "widget": {"filters": ["Filter([players].[Player].[fullName].Members, [Measures].[Career Hits] >= 3000)", "Filter([players].[Player].[fullName].Members, [Measures].[Career HR] >= 500)"], "mapping": {"columns": ["ALL_MEASURES"], "measures": ["[Measures].[Hits]", "[Measures].[Homeruns]"], "rows": ["[players].[Player].[fullName]"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[Hits], [Measures].[Homeruns]} ON COLUMNS, NON EMPTY Hierarchize(Descendants({[players].[Player].[AllMember]}, 1, SELF_AND_BEFORE)) ON ROWS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}}
session.visualize("500 HR/3000 hits club")
# -
# ### What-if 2020 becomes a standard 162 game season?
#
# Let's run a simulation to see the impact of the shortened 2020 season, assuming that Cabrera "plays to the back of his baseball card", defined as attaining his median hits and home runs per game for the season. The base scenario will be the now-accepted 60-game season, compared against a standard 162-game season.
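# The projection logic itself is simple arithmetic: a player's median per-game rate multiplied by the number of games in the season. A minimal plain-Python sketch of that calculation (the per-game rates below are made-up illustration values, not Cabrera's actual numbers):

```python
import statistics

# Hypothetical per-game hit rates for a handful of past seasons (illustration only)
hits_per_game_by_season = [1.12, 1.05, 0.98, 0.91, 0.88]

# "Plays to the back of his baseball card" = median per-game rate
median_rate = statistics.median(hits_per_game_by_season)

# Project season totals for the two what-if season lengths
projected_60 = round(median_rate * 60)    # shortened 2020 season
projected_162 = round(median_rate * 162)  # standard full-length season

print(projected_60, projected_162)  # 59 159
```

# The simulation below does the same thing, but inside the cube so the comparison can be sliced by player and scenario.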
# The default value is the current 60 games season
season_simulation = cube.create_parameter_simulation(
"Impact 2020 Season Length Simulation",
measure_name="Season length parameter",
default_value=60,
base_scenario_name="2020 60 Game Season",
)
# Use the original measure when the simulated measure is equal to the provided default value
m["Simulated 2020 Season Length"] = tt.where(
m["Season length parameter"] != 60,
m["Season length parameter"],
m["2020 Season Length"],
)
season_simulation += ("Full Length 2020", 162)
# Instead of using the measure `2020 Season Length` to project the number of homeruns, we can update the formula to use the simulation-parameter-related measure, `Simulated 2020 Season Length`.
m["Projected HR"] = tt.math.round(m["Simulated 2020 Season Length"] * m["Median HR"])
# + atoti={"widget": {"filters": ["Filter([players].[Player].[playerID].Members, [Measures].[Career Length] > 10)", "Filter([players].[Player].[playerID].Members, [Measures].[Homeruns] > 200)", "[batting].[Year].[AllMember].[2019]"], "isTextVisible": true, "mapping": {"columnValues": ["[Measures].[Projected End Of Season Career HRs]"], "horizontalSubplots": ["[Impact 2020 Season Length Simulation].[Impact 2020 Season Length Simulation].[Impact 2020 Season Length Simulation]"], "lineValues": ["[Measures].[500 Home Runs]"], "verticalSubplots": [], "xAxis": ["[players].[Player].[fullName]"]}, "plotly": {"data": {"commonTraceOverride": {"mode": "lines"}}}, "query": {"mdx": "SELECT NON EMPTY Order(Hierarchize(Descendants({[players].[Player].[AllMember]}, 2, SELF_AND_BEFORE)), [Measures].[Projected End Of Season Career HRs], BDESC) ON ROWS, NON EMPTY Crossjoin([Impact 2020 Season Length Simulation].[Impact 2020 Season Length Simulation].[Impact 2020 Season Length Simulation].Members, {[Measures].[Projected End Of Season Career HRs], [Measures].[500 Home Runs]}) ON COLUMNS FROM [Stats]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "plotly-clustered-column-and-line-chart"}} tags=[]
session.visualize(
"Full length 2020 - Impact on player's end of season career hit projection"
)
# -
# We can see <NAME> and <NAME> move closer to 500 homeruns. The closest, however, is <NAME>.
# ### What do Cabrera's projected numbers look like in the 60-game COVID season versus a normal 162-game season?
# + atoti={"height": 126, "widget": {"comparison": {"comparedMemberNamePath": ["Full Length 2020"], "dimensionName": "Impact 2020 Season Length Simulation", "hierarchyName": "Impact 2020 Season Length Simulation", "referenceMemberNamePath": ["2020 60 Game Season"]}, "filters": ["[batting].[Year].[AllMember].[2019]", "[players].[Player].[AllMember].[cabremi01].[Miguel Cabrera]"], "mapping": {"columns": [], "measures": ["[Measures].[Projected Hits]", "[Measures].[Projected End Of Season Career Hits]", "[Measures].[Projected HR]", "[Measures].[Projected End Of Season Career HRs]"], "rows": []}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[Projected Hits], [Measures].[Projected End Of Season Career Hits], [Measures].[Projected HR], [Measures].[Projected End Of Season Career HRs]} ON COLUMNS FROM [Stats] CELL PROPERTIES VALUE, FORMATTED_VALUE, BACK_COLOR, FORE_COLOR, FONT_FLAGS", "updateMode": "once"}, "serverKey": "default", "widgetKey": "kpi"}} tags=[]
session.visualize("Cabrera's projected performance - full length 2020")
# -
# ### What happens when Covid season is followed by full length 2021?
#
# So, per the assumptions of this analysis, even if 2020 were a full season, a "median year" would not get Cabrera into the "500 HR/3000 hits club", but he would be <u>very close</u>. It would certainly not be outside the realm of possibility that, with a better-than-average year, he could find an extra 19 hits and 9 homeruns over the course of 162 games. Even if he missed in 2020, reaching those landmarks would seem very likely even if Cabrera had an off year in 2021.
#
# The concern coming out of the shortened season is: what would Cabrera's numbers look like if a "median" COVID-shortened season were followed by a "median" full 162-game season? Let's run another simulation, where we project a 60-game season followed by a 162-game season.
season_simulation += ("Covid Season followed by Full Length 2021", 60 + 162)
# + atoti={"height": 150, "widget": {"filters": ["[batting].[Year].[AllMember].[2019]", "[players].[Player].[AllMember].[cabremi01].[Miguel Cabrera]"], "mapping": {"columns": [], "measures": ["[Measures].[Projected Hits]", "[Measures].[Projected End Of Season Career Hits]", "[Measures].[Projected HR]", "[Measures].[Projected End Of Season Career HRs]"], "rows": []}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[Projected Hits], [Measures].[Projected End Of Season Career Hits], [Measures].[Projected HR], [Measures].[Projected End Of Season Career HRs]} ON COLUMNS FROM [Stats] CELL PROPERTIES VALUE, FORMATTED_VALUE, BACK_COLOR, FORE_COLOR, FONT_FLAGS", "updateMode": "once"}, "serverKey": "default", "widgetKey": "kpi"}} tags=[]
session.visualize(
"Cabrera's projected performance - Covid season followed by full length 2021"
)
# -
# So the COVID-shortened season turns <NAME>'s highly likely induction into the "500 HR/3000 hits club" by the 2021 season under normal circumstances into a far riskier proposition, changing it from an optimistic "one season away" to a potential "three seasons", with all the risks that playing an additional three late-career seasons entails.
#
# We will have to see the results and if atoti disproves that "<i>You can’t predict baseball Suzyn</i>".
# <div style="text-align: center;" ><a href="https://www.atoti.io/?utm_source=gallery&utm_content=baseball" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover-try.png" alt="Try atoti"></a></div>
| notebooks/baseball/main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from collections import defaultdict
import json
import numpy as np
import codecs
import re
from matplotlib import pyplot as plt
pos = np.loadtxt("pos_info_psych.csv", delimiter=",",usecols=(2,6,7,8));
text = np.loadtxt("pos_info_psych.csv", dtype=str, delimiter=",",usecols=(9,));
text = np.asarray([t[4:-2] for t in text]) #Hack: np is not parsing text cols right
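# The slice hack above compensates for `np.loadtxt` mishandling quoted text columns; the standard-library `csv` module parses them correctly. A hedged sketch on an in-memory sample (the column layout below is an assumption for illustration, not the real `pos_info_psych.csv`):

```python
import csv
import io

# In-memory stand-in for a few rows of the file (column layout assumed)
sample = io.StringIO(
    '0,0,12.0,1,0,0,110.0,40.0,3,"Dr Jane Doe"\n'
    '0,0,10.0,1,0,0,300.0,55.0,3,"9123 4567, Fax"\n'
)

pos_rows, text_rows = [], []
for row in csv.reader(sample):
    pos_rows.append([float(row[i]) for i in (2, 6, 7, 8)])  # numeric columns
    text_rows.append(row[9])  # quoted text arrives unescaped, no slicing needed

print(text_rows)
```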
# +
def def_column(row):
h, tx, ty, p = row
if tx<130:
return "address"
elif tx<250:
return "names"
elif tx<480:
return "contact"
else:
return "service"
# -
def isheader(row):
h, tx, ty, p = row
return h>15
header_pages = np.unique(pos[(np.apply_along_axis(isheader, 1, pos)), 3])
cols = np.apply_along_axis(def_column, 1, pos)
#list(map(print,cols))
cols
def isValidRow(row, header_pages):
h,tx,ty,p = row
if ty<30:
return False
elif p in header_pages and ty>500:
return False
else:
return True
print(len(text[cols == "address"]))
print(len(text[cols == "names"]))
print(len(text[cols == "contact"]))
print(len(text[cols == "service"]))
def get_rows(pos, text):
header_pages = np.unique(pos[(np.apply_along_axis(isheader, 1, pos)), 3])
valid_rows = np.asarray([isValidRow(row, header_pages) for row in pos])
vpos = pos[valid_rows,:]
vtext = text[valid_rows]
entries = defaultdict(lambda : {'address': '', 'names': '', 'contact': '', 'service':''} )
for page in np.unique(vpos[:,-1]):
print(page)
page_row_inds = vpos[:,-1]==page
page_pos = vpos[page_row_inds, :]
page_text = vtext[page_row_inds]
cell_bottom_inds = np.logical_or(page_text == "Fax", page_text == "Fax:")
cell_bottoms_ty = page_pos[cell_bottom_inds, -2]
cell_bottoms_ty.sort()
row_assignments = []
for (ii,((h,tx,ty,p), tt)) in enumerate(zip(page_pos, page_text)):
for (row_ii, bottom_ty) in enumerate(cell_bottoms_ty[::-1]):
if ty>= bottom_ty:
row_assignments.append(row_ii)
#print("good ", tt, " at ", row_ii)
break
else:
pass
#print("warn: ", tt)
#row_assignments.append(length(cell_bottoms_ty))
col_assignments = np.apply_along_axis(def_column, 1, page_pos)
for (tt, col_ass, row_ass) in zip(page_text, col_assignments, row_assignments):
entries[(page, row_ass)][col_ass]+=tt
return entries
entries = get_rows(pos,text)
entries
# +
def varients(alt_name):
yield alt_name
yield alt_name.lower()
yield "".join(alt_name.split())
yield "".join(alt_name.lower().split())
yield re.sub(r'[^\w]', '', alt_name)
yield re.sub(r'[^\w]', '', alt_name.lower())
#Maybe we miss things with spaces and symbols?
def load_service_aliases():
    service_names = json.load(open("./spec_types.json", "r"))
    aliases = {}
    for canon_name, altnames in service_names.items():
        for altname in altnames:
            for varient in varients(altname):
                if len(varient) == 1:
                    continue
                aliases[varient] = canon_name
        for varient in varients(canon_name):
            if len(varient) == 1:
                continue
            aliases[varient] = canon_name
    return aliases
SERVICE_ALIASES = load_service_aliases()
# -
re.findall(r"[A-Z|a-z]+(?=\sWA)", "sdsd Kenwick WA sdsd")
# +
def NameCase(s):
return s[0].upper() + s[1:].lower()
def parse_address(tt):
loc_parts = re.findall(r"[A-Z][A-Z]+", tt)
if loc_parts==['WA']:
#Only WA found
loc_parts = re.findall(r"[A-Z|a-z]+(?=\sWA)", tt)
loc_parts=filter(lambda p: p!="WA", loc_parts)
loc_parts = map(NameCase, loc_parts)
return "area", " ".join(loc_parts)
def parse_names(tt):
# uc = [t==t.upper]
names = re.findall(r"[A-Z][a-z]+", tt)
#surname_and_first = re.findall*()
#parts = re.findall(r"[A-Z][A-Z]+[a-z]+", tt)
#names = []
#for part in parts:
# if all([p==p.upper() for p in part]):
# names+=part
# else:
# surnames = re.findall(r"[A-Z]+*", tt)
if len(names)>2:
names = names[:2]
return "name"," ".join(names)
def parse_services(tt):
mad_skills = set()
for skill_varname, canon_name in SERVICE_ALIASES.items():
if skill_varname in tt:
mad_skills.add(canon_name)
return "expertise", list(mad_skills)
def parse_contact(tt):
numbers = re.findall(r"[0-9][0-9][0-9][0-9] [0-9][0-9][0-9][0-9]", tt)
if len(numbers) == 0:
return "number",""
else:
return "number", numbers[-1]
parsers = {'address': parse_address, 'names': parse_names, 'contact': parse_contact, 'service': parse_services}
def make_data_for_jvb(entry):
jvb_format = {'specialistType': "Psychologist"}
for key, tt in entry.items():
jvb_key, jvb_value = parsers[key](tt)
jvb_format[jvb_key] = jvb_value
return jvb_format
def make_all_data_for_jon(entries):
data=[]
for entry in entries.values():
datum = make_data_for_jvb(entry)
if len(datum['name'].strip()) > 0:
data.append(datum)
return data
# -
make_data_for_jvb(list(entries.values())[2])
with open("../../www/default-data/auto_psych.json","w") as fh:
json.dump(make_all_data_for_jon(entries), fh, sort_keys=True, indent=4, separators=(',', ': '))
ss=set()
ss.add(2)
ss
| protoscripts/pmh_scraping/ana_psych.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ozgurdemirhan/hu-bby162-2021/blob/main/final.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="EHlzxHb73Ycb" outputId="f0dbcd50-f601-4601-96cc-66993fe6eacd"
dosya = "/content/drive/MyDrive/final.txt"
def eserListesi():
dosya = "/content/drive/MyDrive/final.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
def eserKaydet():
    ad = input(" Enter the title of the work: ")
    yazar = input(" Enter the author's name: ")
    yayın = input(" Enter the publisher: ")
    basım = input(" Enter the publication date: ")
    Isbn = input(" Enter the ISBN number: ")
    dosya = "/content/drive/MyDrive/final.txt"
    f = open(dosya, 'a')
    f.write(ad + "," + yazar + "," + yayın + "," + basım + "," + Isbn + "\n")
    f.close()
def menu():
    print(" *** Library Information System ***")
    while True:
        print(" 1- Exit")
        print(" 2- Existing Works")
        print(" 3- Save New Work")
        islem = input(" Select the operation to perform (1/3): ")
        if islem == "1":
            print(" Exited")
            break
        elif islem == "2":
            eserListesi()
        elif islem == "3":
            eserKaydet()
menu()
| final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_9vap74c"
# ### Problem Statement
#
# You have been given an array containing numbers. Find and return the largest sum of a contiguous subarray within the input array.
#
# **Example 1:**
# * `arr= [1, 2, 3, -4, 6]`
# * The largest sum is `8`, which is the sum of all elements of the array.
#
# **Example 2:**
# * `arr = [1, 2, -5, -4, 1, 6]`
# * The largest sum is `7`, which is the sum of the last two elements of the array.
#
#
#
#
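# A useful baseline before the linear-time approach: checking every contiguous subarray directly is O(n²) with a running sum, and makes a handy cross-check for the fast version. A sketch:

```python
def max_sum_subarray_bruteforce(arr):
    """O(n^2) reference: try every start index, extend the running sum."""
    best = arr[0]
    for start in range(len(arr)):
        running = 0
        for end in range(start, len(arr)):
            running += arr[end]
            best = max(best, running)
    return best

print(max_sum_subarray_bruteforce([1, 2, 3, -4, 6]))      # 8
print(max_sum_subarray_bruteforce([1, 2, -5, -4, 1, 6]))  # 7
```

# Comparing this against the fast implementation on random arrays is a quick sanity check.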
# + graffitiCellId="id_ok5cosl"
def max_sum_subarray(arr):
    """
    :param - arr - input array
    return - number - largest sum in a contiguous subarray within arr
    """
    current_sum = arr[0]
    max_sum = arr[0]
    for element in arr[1:]:
        current_sum = max(current_sum + element, element)
        max_sum = max(current_sum, max_sum)
    return max_sum
# + [markdown] graffitiCellId="id_qy59phn"
# <span class="graffiti-highlight graffiti-id_qy59phn-id_3hqoizc"><i></i><button>Show Solution</button></span>
# + graffitiCellId="id_x2c4yaf"
def test_function(test_case):
arr = test_case[0]
solution = test_case[1]
output = max_sum_subarray(arr)
if output == solution:
print("Pass")
else:
print("Fail")
# + graffitiCellId="id_ocg7bal"
arr= [1, 2, 3, -4, 6]
solution= 8 # sum of array
test_case = [arr, solution]
test_function(test_case)
# + graffitiCellId="id_vepfpek"
arr = [1, 2, -5, -4, 1, 6]
solution = 7 # sum of last two elements
test_case = [arr, solution]
test_function(test_case)
# + graffitiCellId="id_78na1mt"
arr = [-12, 15, -13, 14, -1, 2, 1, -5, 4]
solution = 18 # sum of subarray = [15, -13, 14, -1, 2, 1]
test_case = [arr, solution]
test_function(test_case)
# + graffitiCellId="id_fwrdstr"
| Data Structures/Arrays and Linked Lists/Max-Sum-Subarray.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib import patches
import numpy as np
import pandas as pd
import matplotlib
# ## einsum to calculate closest
def einsum_closest(source_xy, targets_xy, method='euclidean'):
source = np.asarray(source_xy)
deltas = targets_xy - source
if method.lower() == 'euclidean':
dist = np.sqrt(np.einsum('ij,ij->i', deltas, deltas))
return np.min(dist), np.argmin(dist)
elif method.lower() == 'manhattan':
dist = (
np.sqrt(np.einsum('i,i->i', deltas[:, 0], deltas[:, 0])) +
np.sqrt(np.einsum('i,i->i', deltas[:, 1], deltas[:, 1]))
)
return np.min(dist), np.argmin(dist)
else:
raise ValueError('a valid method is either "Euclidean", or "Manhattan".')
def nearest_neighbor(sources_xy, targets_xy):
f = lambda source: einsum_closest(source, targets_xy)[1]
return np.array(list(map(f, sources_xy)))
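# The `'ij,ij->i'` subscripts above compute the row-wise dot product of the delta matrix with itself, i.e. squared Euclidean distances. A self-contained check against `np.linalg.norm` (the points below are arbitrary):

```python
import numpy as np

targets = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
source = np.array([0.0, 1.0])

deltas = targets - source
# row-wise dot product of deltas with itself gives squared distances
dist_einsum = np.sqrt(np.einsum('ij,ij->i', deltas, deltas))
dist_norm = np.linalg.norm(deltas, axis=1)

assert np.allclose(dist_einsum, dist_norm)
print(np.argmin(dist_einsum))  # index of the nearest target
```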
def plotArrow(A, B, ax):
arrow = ax.arrow(A[0], A[1], B[0]-A[0], B[1]-A[1], head_width=0.15,
color='black', linestyle=(1,(5,5)), head_length=0.2,
length_includes_head=True, overhang=0.5, label="distance")
return arrow
# +
np.random.seed(5)
x1 = np.random.rand(9) * 10
y1 = np.random.rand(9) * 10
x2 = np.random.rand(20) * 10
y2 = np.random.rand(20) * 10
nearest_array = nearest_neighbor(np.column_stack((x1, y1)), np.column_stack((x2, y2)))
fig, ax = plt.subplots(figsize=(5,5))
ax.set_aspect('equal')
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.grid(True, 'both', linestyle="--")
source = ax.plot(x1, y1, 'ro', label="Sources")
target = ax.plot(x2, y2, 'bo', label="Targets")
ax.set_aspect('equal')
for source, target in enumerate(nearest_array):
plotArrow((x1[source], y1[source]), (x2[target], y2[target]), ax)
# arrow1.set_label = "Distance to NN"
patches = [ax.plot([],[], marker="o", ls="", color='red', label="Sources")[0],
ax.plot([],[], marker="o", ls="", color='blue', label="Targets")[0],
ax.plot([],[], marker="$--->$", ms=25, ls="", color='black', label="Distance to NN")[0]]
ax.legend(handles=patches, loc="best", fontsize="medium")
plt.tight_layout()
fig.savefig("./nearest_neighbor_illustration.png", dpi=300)
# -
| plots/nearest_neighbor_illustration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Age-structured SEkIkIkR model, compared to COVID-19 data in UK
#
# In this example, we fit the parameter $\beta$ of an SEkIkIkR model to epidemiological data from the UK; **all other parameters for the SEkIkIkR model are chosen ad-hoc.**
# %%capture
## compile PyRoss for this notebook
import os
owd = os.getcwd()
os.chdir('../..')
# %run setup.py install
os.chdir(owd)
# %matplotlib inline
import numpy as np
import pyross
import matplotlib.pyplot as plt
from scipy.io import loadmat
from scipy import optimize
plt.rcParams.update({'font.size': 22})
# ### Load age structure and contact matrices for UK
# +
M=16 # number of age groups
# load age structure data
my_data = np.genfromtxt('../data/age_structures/UK.csv', delimiter=',', skip_header=1)
aM, aF = my_data[:, 1], my_data[:, 2]
# set age groups
Ni=aM+aF; Ni=Ni[0:M]; N=np.sum(Ni)
# +
# contact matrices
CH, CW, CS, CO = pyross.contactMatrix.UK()
# matrix of total contacts
C=CH+CW+CS+CO
# -
# ### Load and visualise epidemiological data for UK
# +
# Load data
my_data = np.genfromtxt('../data/covid-cases/uk.txt', delimiter='', skip_header=7)
cases = my_data[:,1]
# data starts on 2020-03-03
# The lockdown in the UK started on 2020-03-23, which corresponds to the 20th datapoint
# (which has index 19)
lockdown_start = 19
fig,ax = plt.subplots(1,1, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
ax.plot(cases,marker='o',ls='',color='crimson',
label='Number of cases')
ax.axvline(lockdown_start,
lw=3,ls='--',color='black',
label='Beginning of lockdown')
ax.set_xlabel('Days')
ax.set_ylabel('Total number of cases')
ax.set_yscale('log')
ax.legend(loc='best')
plt.show()
plt.close()
# -
# Note that
# * the data is shown in a semilogarithmic plot, and that
# * this is the total number of cases, not the active cases.
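# Because the series is cumulative, daily new cases can be recovered with a first difference; a small sketch on synthetic numbers (not the actual UK data):

```python
import numpy as np

# synthetic cumulative case counts (illustration only)
total_cases = np.array([10, 15, 27, 50, 90])

# np.diff with prepend=0 keeps the output the same length as the input
daily_new = np.diff(total_cases, prepend=0)

print(daily_new)  # [10  5 12 23 40]
```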
# ### Define parameters and initial condition of SEkIkIkR model
# +
alpha= 0.3 # fraction of asymptomatics
gE = 1/2.72 # recovery rate of exposeds
kI = 10; # # of stages of I class
kE = 10; # # of stages of E class
gIa = 1./7 # recovery rate of asymptomatic infectives
gIs = 1./17.76 # recovery rate of symptomatic infectives
fsa = 0.5 # fraction of symptomatics who self-isolate
# We start with one symptomatic infective in each of the age groups 6-13,
# let the model "grow the number of infectives" by itself,
# and will later define a reference point to relate the
# time of the simulation to the real time of the UK data.
S0 = np.zeros(M)
I0 = np.zeros((kI,M));
E0 = np.zeros((kE,M));
for i in range(kI):
I0[i, 6:13]= 1
for i in range(M) :
S0[i] = Ni[i] - np.sum(I0[:,i]) - np.sum(E0[:,i])
I0 = np.reshape(I0, kI*M)/kI;
E0 = np.reshape(E0, kE*M)/kE;
# the contact structure is independent of time
def contactMatrix(t):
return C
# duration of simulation and data file
Tf=200; Nf = Tf+1
# We use the first 20 days (= pre lockdown data) of the
# UK dataset for the fit
# note that day 20 has index 19
Tf_fit = 19; Nf_fit = Tf_fit+1;
cases_fit = cases[:Tf_fit+1]
def findBetaIs(x,
reference_index=0):
# reference_index = index of the UK time series which we use as "anchor"
# for relating simulation time and time of UK time series.
#
# Define model and run simulation
parameters = {'beta':x, 'gE':gE, 'gIa':gIa, 'gIs':gIs,
'kI':kI, 'kE' : kE, 'fsa':fsa, 'alpha':alpha}
model = pyross.deterministic.SEkIkIkR(parameters, M, Ni)
data=model.simulate(S0, E0, 0*I0, I0.copy(),
contactMatrix, Tf, Tf+1)
#
# The UK time series gives all known cases (NOT just the currently active ones)
# To get these from the simulation, we use
    # (All known cases) = (Total population) - (# of susceptibles) + (# of asymptomatics),
# which assumes that the asymptomatics do not count as known cases,
# and that all symptomatics are registered as "known cases".
Ia = (model.Ia(data))
summedAgesIa = Ia.sum(axis=1)
S = (model.S(data))
summedAgesS = S.sum(axis=1)
trajectory = N - summedAgesS + summedAgesIa
#
# We shift the simulated trajectory such that reference_index-th datapoint
# of the UK trajectory agrees well with a datapoint on the simulated trajectory:
index = np.argmin( np.fabs( trajectory - cases[reference_index]) )
# index = "which point of simulated trajectory agrees well with UK data at reference_index?"
numerical_arr = trajectory[index-reference_index:index-reference_index+Nf_fit]
#
# this if-clause rules out unrealistic parameters that lead to an "index" too
# far to the end of the trajectory:
if np.shape(cases_fit) != np.shape(numerical_arr):
return np.inf
#
# calculate mean-squared deviation between simulated trajectory and given dataset
diff = (cases_fit-numerical_arr)
error = np.sum( diff**2 )
return error
'''
# implement same fitting procedure also for assumption
# "both asymptomatic and symptomatic cases count as known cases"?
def findBetaIsandIa(x):
parameters = {'beta':x, 'gE':gE, 'gIa':gIa, 'gIs':gIs,
'kI':kI, 'kE' : kE, 'fsa':fsa, 'alpha':alpha}
model = pyross.deterministic.SEkIkIkR(parameters, M, Ni)
data=model.simulate(S0, E0, 0*I0, I0.copy(),
contactMatrix, Tf_fit, Nf_fit)
Is = (model.Is(data))
summedAgesIs = Is.sum(axis=1)
Ia = (model.Ia(data))
summedAgesIa = Ia.sum(axis=1)
summedAgesI = summedAgesIs + summedAgesIa
index = np.argmin( np.fabs( summedAgesI - cases[0]) )
numerical_arr = summedAgesIs[index:index+Nf_fit]
if np.shape(cases_fit) != np.shape(numerical_arr):
return np.inf
error = np.sum(( cases_fit-numerical_arr)**2)
return error
''';
# -
# ### Find optimal value of $\beta$
# +
# scan parameter space to find good initial value for minimiser
beta_arr = np.logspace(-2,-0.3,num=41)
values = np.zeros_like(beta_arr)
for i,beta in enumerate(beta_arr):
values[i] = findBetaIs(beta)
# visualise
fig,ax = plt.subplots(1,1, figsize=(7, 4), dpi=80, facecolor='w', edgecolor='k')
ax.plot(beta_arr,np.sqrt(values),marker='o')
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel(r'$\beta$')
ax.set_ylabel(r'Root of squared error of fit')
plt.show()
plt.close()
min_beta = beta_arr [ np.argmin( values) ]
print('starting guess for minimiser:',min_beta)
# +
beta0 = min_beta
# we use the datapoint at the beginning of the lockdown as reference
reference_index = 19
# define function for minimiser and run minimisation
minimising_func = lambda x: findBetaIs(x,reference_index)
sol1 = optimize.root(minimising_func,beta0)
print('Is only best fit: ', sol1.x)
# +
x=sol1.x[0]
parameters = {'beta':x, 'gE':gE, 'gIa':gIa, 'gIs':gIs,
'kI':kI, 'kE' : kE, 'fsa':fsa, 'alpha':alpha}
model = pyross.deterministic.SEkIkIkR(parameters, M, Ni)
data=model.simulate(S0, E0, 0*I0, I0.copy(), contactMatrix, Tf, Nf)
plt.rcParams.update({'font.size': 22})
# Compare total number of cases to dataset used for fitting
# As in the function used for fitting, we use
# (All known cases) = (Total population) - (# of susceptibles) + (# of asymptomatics),
Ia = (model.Ia(data))
summedAgesIa = Ia.sum(axis=1)
S = (model.S(data))
summedAgesS = S.sum(axis=1)
trajectory = N - summedAgesS + summedAgesIa
# Relate time of simulation to time of dataset used for fitting
# (as in function "findBetaIs" used for fitting)
index = np.argmin( np.fabs( trajectory - cases[reference_index]) )
print('Array index for simulated trajectory on 2020-03-03:',index-reference_index)
fig,ax = plt.subplots(1,1,figsize=(10,7))
ax.axvline(Tf_fit,lw=3,ls='--',
color='black',label='beginning of lockdown')
ax.plot(cases,marker='o',ls='',markersize=8,
label='UK data')
ax.plot(cases_fit,marker='d',ls='',markersize=10,
label='UK data used for fit')
ax.plot(trajectory[index-reference_index:],lw=3,
label='fitted SEkIkIkR model')
ax.set_xlabel('Days')
ax.set_ylabel('Total known cases')
ax.set_ylim(0,7e4)
ax.set_xlim(0,30)
ax.legend(loc='best')
#ax.set_yscale('log')
fig.savefig('fitParamBeta_UK.pdf',bbox_inches='tight')
plt.show()
plt.close(fig)
fig,ax = plt.subplots(1,1,figsize=(10,7))
ax.axvline(Tf_fit,lw=3,ls='--',
color='black',label='beginning of lockdown')
ax.plot(cases,marker='o',ls='',markersize=8,
label='UK data')
ax.plot(cases_fit,marker='d',ls='',markersize=10,
label='UK data used for fit')
ax.plot(trajectory[index-reference_index:],lw=3,
label='fitted SEkIkIkR model')
ax.set_xlabel('Days')
ax.set_ylabel('Total known cases')
ax.set_xlim(0,50)
ax.legend(loc='best')
ax.set_yscale('log')
plt.show()
plt.close(fig)
# -
| examples/deterministic/fitParamBeta - UK.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Scala
// language: scala
// name: scala
// ---
// # Scala Intro
// ---
//
// - It's a hosted language, and its _primary_ host is the JVM.
// - statically typed general purpose language.
// - statically typed: it type checks the source "statically", without running the source.
// - general purpose: it's designed for writing software for a wide variety of application domains
//
// ### expressions, types and objects
// ---
//
// - these are the entities of three universes
// - an expression lives at `source level`, and a valid Scala program is made up of expressions.
// - every expression has a type, which is attached to the expression at `compile-time`.
// - an expression _evaluates_ to a value at `run-time`, and because every value is an _object_ in Scala, we will refer to it as an object from now on.
// - evaluation strategy<sup>1</sup>
//   - Scala has a `strict` evaluation strategy, which means it evaluates expressions as soon as they are available. Also, the arguments of a method call get evaluated before being passed down.
//
// #### examples
// ---
//
// - `"Hello World"` is an expression of type `String`, and it evaluates to a String object "Hello World". This kind of expression is called a `literal expression`. An expression is a literal if its source-level representation and object representation are the same. Other examples would be `1`, `1.2`, `false`, etc...
"Hello World"
// ### strict evaluation in action
// ---
"Hello World".take(5 + 2).drop(1 + 2)
// Let's go through an evaluation steps, one at a time, for above expression.
//
// ```scala
// "Hello World".take(5 + 2).drop(1 + 2)
// "Hello World" // evaluate first expression, result: "Hello World"
// 5 + 2 // evaluate argument before passing it to method call, result: 7
// "Hello World".take(7) // evaluate method call, result: "Hello W"
// 1 + 2 // evaluate argument before passing it to method call, result: 3
// "Hello W".drop(3) // evaluate method call, result: "lo W"
// "lo W" // final result
// ```
// ### abstracting over expressions
// ---
//
// - Scala provides three constructs to abstract over expressions:
// - `var` - variable, allow to change its value over the time (mutable)
// - `val` - value, does not allow to change its value (immutable)
//   - `def` - definition, evaluates its expression each time at the call-site
// - Syntax: `<construct> <identifier>: <type> = <expression>`<sup>2</sup>
// +
// annotating a type upfront is optional (see the `pi` declaration)
var count: Int = 1
val pi = 3.14 // see the result below, it inferred the type `Double`
def userId: Long = 123L
// -
// - Notice that the `def` declaration didn't evaluate to a value but printed `defined function userId`. As mentioned, it evaluates its body at the call-site.
userId
// - Difference between `val` and `def`:
// - `val` evaluates its expression right away, only once.
// - `def` evaluates its expression every time.
//
// let's add `println` statement in expression.
// +
val pi: Double = {
println("evaluating pi")
3.14
}
def userId = {
println("evaluating userId")
123L
}
pi // calling pi
pi // calling pi
pi // calling pi
userId // calling userId
userId // calling userId
userId // calling userId
// -
// Notice that `evaluating pi` gets printed only once (at the time of declaration) even though we are "accessing" `pi` three times, on the other hand, `def` doesn't print anything at the time of declaration but it did print `evaluating userId` each time we "called" `userId`.
//
// ### statements and compound expressions
// ---
//
// Everything is an expression in Scala, which means everything evaluates to a value of some type.
// - a `statement` is an expression which returns a value of type `Unit`.
// - `Unit` has only one value and literal expression of it is `()`.
//   - Types which have a single value are called `singleton types`.
// - example, `println("hello world")` will print "hello world" on console and returns `()`.
// - a `compound expression` is group of expressions, enclosed in `{` and `}`, usually called `block`.
// - its type and return value is determined by the last expression of block.
// - example,
// - `{println("hello world"); 1.2}` inline it by separating expressions with `;`
// - type of above expression will be `Double` and value of whole expression will be `1.2`.
// - ```scala
// {
// println("hello world")
// 1.2
// }
// ```
// it's the same expression as above, but without semicolons because here we are separating expression with newline.
//
// ### objects (fields and methods)
// ---
//
// As mentioned, in Scala every value is an object. The following is the way to interact with an object.
// - Syntax: `<object>.<method>(<params>...)`
// - Example: `"Hello World".take(5)`
//
// The above expression is equivalent to `"Hello World" take 5`; the compiler will translate the latter into the above expression with `.`, `(` and `)`.
// Which means, when we write `1 + 2`, what we are actually doing is calling the `+` method on the integer object `1` with the argument object `2`.
//
// Let's see couple of examples,
1 + 2
// 1.+(2)
"Hello World" take 7 drop 3
"Hello World".take(7).drop(3)
// #### define object
//
// > an object is a combination of data (fields) and operations (methods) on that data.
//
// - Syntax: `object <name> { <body> }`
// - `<body>` is a collection of expressions and declarations.
object Person {
println("evaluating body")
val name = {
println("evaluating field")
"John"
}
def say(greet: String) = {
println("evaluating method")
name + " greets " + greet
}
}
// Similar to `def`, declaring an `object` doesn't evaluate its body right away. The body is evaluated, once, the first time the object is referred to by its name, in this case `Person`.
Person
Person
// Notes:
// - the type of the object will be `<name of object>.type`, and the object is the only value of that type.
// - similar to `Unit`, objects declared using the `object` syntax are `Singleton`s.
// - we called `Person` twice, but it printed those messages only once.
// - fields can be defined with `var` or `val`.
// - methods are defined with `def`.
// - methods can take arguments; in this case, `greet` is of type `String`.
// - annotating the argument type is compulsory, but the return type is optional.
// - we can access fields or methods with `.` notation, as in `Person.name` or `Person.say("hello")`.
Person.name // accessing field
Person.say("hello") // calling method
// ---
//
// [1] it also has limited support for `lazy` (non-strict) evaluation.
//
// [2] the type of each expression is inferable, so we can omit the `: <type>` part.
//
// ---
//
// ### Exercise
//
// 1) Define a method called `personGreets` which takes two arguments, a greeting message and a Person object, and prints the output that the `say` method would return given the greeting message. [hint: `personGreets("hi there", Person)` would print `John greets hi there`]
//
// 2) Identify the definition of statement. (choose one)
// - declaration of `var`, `val` or `def`
// - an expression that returns a Unit value
// - an expression that returns a value of any type except `Unit`
// - an expression that doesn't return any value
//
// 3) Explain the difference between `val` and `var`, and use cases.
//
// 4) Explore the Scala docs to implement the following expressions. [hints: (1) see the method types (2) https://www.scala-lang.org/api/current/scala/collection/StringOps.html]
// - concatenate two strings
// - find the last character of a string
// - length of a string
// - convert a string to double
// - repeat a string 5 times
//
// 5) Write down one literal expression for all the types you can think of. [hint: `2` for `Int`]
| 01-basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import pandas as pd
import plotly.graph_objects as go
import json
# + pycharm={"is_executing": false}
df = pd.read_excel('../Data/Extracted_data.xlsx', sheet_name='Sheet1')
df
# + pycharm={"is_executing": false}
fig1 = go.Figure()
fig1.add_trace(go.Scatter(
x=[2018],
y=[600],
mode='lines',
legendgroup="Shivalik-Gangetic",
name="Shivalik-Gangetic",
marker=dict(size=0 , color = "White"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[2, 2:6],
mode='lines+markers',
legendgroup="Shivalik-Gangetic",
name="Bihar",
marker=dict(size=10 , color = "#F58518"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[0, 2:6],
legendgroup="Shivalik-Gangetic",
mode='lines+markers',
name="Uttarakhand",
marker=dict(size=10 , color = "rgb(253,180,98)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[1, 2:6],
mode='lines+markers',
legendgroup="Shivalik-Gangetic",
name="Uttar Pradesh",
marker=dict(size=10 , color = "#FD3216"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2018],
y=[600],
mode='lines',
legendgroup="Central Indian",
name="Central Indian",
marker=dict(size=0 , color = "White"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[3, 2:6],
mode='lines+markers',
name="<NAME>",
legendgroup="Central Indian",
marker=dict(size=10 , color = "rgb(55,126,184)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[4, 2:6],
mode='lines+markers',
    name="Chhattisgarh",
legendgroup="Central Indian",
marker=dict(size=10 ,color = "rgb(179,205,227)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[9, 2:6],
mode='lines+markers',
name="Jharkhand",
legendgroup="Central Indian",
marker=dict(size=10 ,color = "rgb(128,177,211)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[5, 2:6],
mode='lines+markers',
name="<NAME>",
legendgroup="Central Indian",
marker=dict(size=10 ,color = "#19D3F3"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[6, 2:6],
mode='lines+markers',
name="Maharashtra",
legendgroup="Central Indian" ,
marker=dict(size=10,color = "#2E91E5"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[7, 2:6],
mode='lines+markers',
name="Odisha",
legendgroup="Central Indian",
marker=dict(size=10 , color = "#3366CC"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[8, 2:6],
mode='lines+markers',
name="Rajasthan",
legendgroup="Central Indian",
marker=dict(size=10 , color = "#17BEFC"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2018],
y=[600],
mode='lines',
legendgroup="Western Ghats",
name="Western Ghats",
marker=dict(size=0 , color = "White"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[13, 2:6],
mode='lines+markers',
name="Goa",
legendgroup="Western Ghats",
marker=dict(size=10 , color = "rgb(27,158,119)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[10, 2:6],
mode='lines+markers',
name="Karnataka",
legendgroup="Western Ghats",
marker=dict(size=10 , color = "rgb(179,222,105)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[11, 2:6],
mode='lines+markers',
name="Kerala",
legendgroup="Western Ghats",
marker=dict(size=10 , color = "rgb(102,166,30)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[12, 2:6],
mode='lines+markers',
name="<NAME>",
legendgroup="Western Ghats",
marker=dict(size=10 , color = "#00CC96"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2018],
y=[600],
mode='lines',
legendgroup="North East Hills and Brahmaputra",
name="North East Hills and Brahmaputra",
marker=dict(size=0 , color = "White"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[15, 2:6],
mode='lines+markers',
legendgroup="North East Hills and Brahmaputra",
name="<NAME>",
marker=dict(size=10, color = "rgb(231,41,138)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[14, 2:6],
mode='lines+markers',
name="Assam",
legendgroup="North East Hills and Brahmaputra",
marker=dict(size=10 ,color = "rgb(231,138,195)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[16, 2:6],
mode='lines+markers',
legendgroup="North East Hills and Brahmaputra",
name="Mizoram",
marker=dict(size=10 ,color = "rgb(254,136,177)"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[17, 2:6],
mode='lines+markers',
legendgroup="North East Hills and Brahmaputra",
name="Northern West Bengal",
marker=dict(size=10 ,color = "#FF97FF"),
visible='legendonly'
))
fig1.add_trace(go.Scatter(
x=[2006,2010,2014,2018],
y=df.iloc[18, 2:6],
mode='lines+markers',
legendgroup="Sunderbans",
name="Sunderbans",
marker=dict(size=10 ,color = "rgb(102,17,0)"),
visible='legendonly'
))
fig1.update_traces(marker=dict(size=20))
fig1.update_layout( paper_bgcolor='rgba(0,0,0,0)')
#fig1.update_layout( plot_bgcolor='rgba(255,255,255,1)')
fig1.update_layout(yaxis=dict(range=[-10,600]))
fig1.update_layout(xaxis=dict(range=[2005,2019]))
fig1.update_layout(legend_title_text='States:', showlegend=True)
fig1.update_xaxes(title_text='Year')
fig1.update_yaxes(title_text='Number of Tigers')
fig1.show()
# + pycharm={"is_executing": false}
import plotly.io as pio
pio.write_html(fig1, file ='../HTMLs/idiom4.html', auto_open=False)
| JupyterCodes/VIZ_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# [Index](Index.ipynb) - [Back](Widget Basics.ipynb) - [Next](Widget Events.ipynb)
# -
# # Widget List
# ## Complete list
# + [markdown] slideshow={"slide_type": "slide"}
# For a complete list of the GUI widgets available to you, you can list the registered widget types. `Widget` and `DOMWidget`, not listed below, are base classes.
# -
import ipywidgets as widgets
widgets.Widget.widget_types
# + [markdown] slideshow={"slide_type": "slide"}
# ## Numeric widgets
# -
# There are 8 widgets distributed with IPython that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing `Float` with `Int` in the widget name, you can find the Integer equivalent.
# ### IntSlider
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='i',
slider_color='white'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### FloatSlider
# -
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
slider_color='white'
)
# Sliders can also be **displayed vertically**.
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.1f',
slider_color='white'
)
# ### IntRangeSlider
widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='i',
slider_color='white',
color='black'
)
# ### FloatRangeSlider
widgets.FloatRangeSlider(
value=[5, 7.5],
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
    readout_format='.1f',
slider_color='white',
color='black'
)
# ### IntProgress
widgets.IntProgress(
value=7,
min=0,
max=10,
step=1,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### FloatProgress
# -
widgets.FloatProgress(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Loading:',
bar_style='info',
orientation='horizontal'
)
# ### BoundedIntText
widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### BoundedFloatText
# -
widgets.BoundedFloatText(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Text:',
disabled=False,
color='black'
)
# ### IntText
widgets.IntText(
value=7,
description='Any:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### FloatText
# -
widgets.FloatText(
value=7.5,
description='Any:',
disabled=False,
color='black'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Boolean widgets
# -
# There are three widgets that are designed to display a boolean value.
# ### ToggleButton
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Checkbox
# -
widgets.Checkbox(
value=False,
description='Check me',
disabled=False
)
# ### Valid
#
# The valid widget provides a read-only indicator.
widgets.Valid(
value=False,
description='Valid!',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Selection widgets
# -
# There are four widgets that can be used to display single selection lists, and one that can be used to display multiple selection lists. All inherit from the same base class. You can specify the **enumeration of selectable options by passing a list**. You can **also specify the enumeration as a dictionary**, in which case the **keys will be used as the item displayed** in the list and the corresponding **value will be returned** when an item is selected.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Dropdown
# -
widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
disabled=False,
button_style='' # 'success', 'info', 'warning', 'danger' or ''
)
# The following is also valid:
widgets.Dropdown(
options={'One': 1, 'Two': 2, 'Three': 3},
value=2,
description='Number:',
)
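# With a mapping, the widget's `value` trait holds the mapped value rather than the displayed key. A small sketch (assumes `ipywidgets` is installed; the `label` trait exposes the displayed key):

```python
import ipywidgets as widgets

# Options given as a mapping: keys are displayed, values are returned
w = widgets.Dropdown(
    options={'One': 1, 'Two': 2, 'Three': 3},
    value=2,
    description='Number:',
)
print(w.value)  # the mapped value, not the label
print(w.label)  # the displayed key for the current value
```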
# + [markdown] slideshow={"slide_type": "slide"}
# ### RadioButtons
# -
widgets.RadioButtons(
options=['pepperoni', 'pineapple', 'anchovies'],
# value='pineapple',
description='Pizza topping:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Select
# -
widgets.Select(
options=['Linux', 'Windows', 'OSX'],
# value='OSX',
description='OS:',
disabled=False
)
# ### SelectionSlider
widgets.SelectionSlider(
options=['scrambled', 'sunny side up', 'poached', 'over easy'],
value='sunny side up',
description='I like my eggs ...',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
# readout_format='i',
# slider_color='black'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### ToggleButtons
# -
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
# icon='check'
)
# ### SelectMultiple
# Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
widgets.SelectMultiple(
options=['Apples', 'Oranges', 'Pears'],
value=['Oranges'],
description='Fruits',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## String widgets
# -
# There are 4 widgets that can be used to display a string value. Of those, the `Text` and `Textarea` widgets accept input. The `Label` and `HTML` widgets display the string as either Label or HTML respectively, but do not accept input.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Text
# -
widgets.Text(
value='<NAME>',
placeholder='Type something',
description='String:',
disabled=False
)
# ### Textarea
widgets.Textarea(
value='<NAME>',
placeholder='Type something',
description='String:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Label
# -
widgets.Label(
value="$$\\frac{n!}{k!(n-k)!} = \\binom{n}{k}$$",
placeholder='Some LaTeX',
description='Some LaTeX',
disabled=False
)
# ## HTML
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
disabled=False
)
# ## Image
file = open("images/WidgetArch.png", "rb")
image = file.read()
widgets.Image(
value=image,
format='png',
width=300,
height=400
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Button
# -
widgets.Button(
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check'
)
# ## Play (Animation) widget
# The `Play` widget is useful to perform animations by iterating on a sequence of integers with a certain speed.
play = widgets.Play(
# interval=10,
value=50,
min=0,
max=100,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
# ## Color picker
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue'
)
# ## Controller
widgets.Controller(
index=0,
)
# ## Layout widgets
# ### Box
# ### HBox
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
# ### VBox
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox([widgets.VBox([items[0], items[1]]), widgets.VBox([items[2], items[3]])])
# ### Accordion
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
# ### Tabs
tab_names = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_names]
tab = widgets.Tab(children=children)
tab
# + [markdown] nbsphinx="hidden"
# [Index](Index.ipynb) - [Back](Widget Basics.ipynb) - [Next](Widget Events.ipynb)
| notebooks/shared/nbconvert/tests/files/Widget_List.ipynb |
# ## Template for performing event-based ingestion and merging from Attunity change files
# Scalable CDC from DBMS to Azure Databricks
#
# 1. Summary of algorithm
#
# - The CDC program sends change data from various tables into an ADLS Gen2 folder; each table has its own folder.
# - Each change data file comes with a schema file (dfm) that describes the schema of the data file
# - Azure Event Grid listens for new files landing in the subscribed folder and creates messages detailing the location and type of operation for each file
# - Our main program reads messages from the message queue, sorts them by table, then processes the messages in batches of a predefined size. By sorting we keep the number of distinct tables in each batch as small as possible
# - Within each batch, the process_files program groups by table and retrieves a unique schema file and the data files for each table in the group. From the schema file, it builds the schema and uses it to read the data
# - For insert data, use a regular insert. For updates and deletes, use MERGE to merge the data into the target table
# 2. Libraries: Please install the following PyPI libraries on the Databricks cluster
#
# - applicationinsights
# - azure
# - azure-storage-queue
#
#
# 3. Data: We use sample CDC files generated by Attunity. Please upload the sample data to a folder where each subfolder represents the landing location for a table.
#
# 4. Configuring Azure Eventgrid and Storage Queue
#
# https://docs.microsoft.com/en-us/azure/event-grid/custom-event-to-queue-storage
#
# 5. Setup Event subscription in your storage account
#
# Route the event to a Storage Queue
# 6. TODOs:
# - Error handling of failures & bad data
# - Data type change of schema
# - Deduplication at target table
# - Optimization of target delta tables
# 7. Monitoring:
#
# Dashboard: https://portal.azure.com/#@providence4.onmicrosoft.com/dashboard/arm/subscriptions/b7e624c9-c1fd-45df-a83a-d80710465b56/resourcegroups/dashboards/providers/microsoft.portal/dashboards/770efcd0-83a2-4d8e-9fa1-a00204844093
#
# To customize and create new monitoring report:https://portal.azure.com/#@2e319086-9a26-46a3-865f-615bed576786/resource/subscriptions/b7e624c9-c1fd-45df-a83a-d80710465b56/resourceGroups/rg-attunityPOC-storage/providers/microsoft.insights/components/databricksCDCPOCTelemetry/overview
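# The sort-and-group step described above can be sketched in plain Python: given (table, file) pairs pulled off the queue, group the files by table and cap the number of tables per batch (the table and file names here are illustrative):

```python
# Illustrative (table, dfm-file) pairs as they might come off the queue
messages = [
    ("orders__ct", "f1.dfm"),
    ("customers__ct", "f2.dfm"),
    ("orders__ct", "f3.dfm"),
    ("customers__ct", "f4.dfm"),
]

# Sort, then group files by table (same setdefault pattern as the main procedure)
table_files = {}
for tbl, fname in sorted(messages):
    table_files.setdefault(tbl, []).append(fname)

# Limit the number of tables handled in one batch
max_num_tbl = 1
batch = dict(list(table_files.items())[:max_num_tbl])
```

# Sorting first means each batch touches as few distinct tables as possible, so fewer MERGE targets per pass.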
# +
# Load data from Azure
# Reset the widgets
dbutils.widgets.removeAll()
dbutils.widgets.text("STORAGE_ACCOUNT", "")
dbutils.widgets.text("SAS_KEY", "")
dbutils.widgets.text("ACCOUNT_KEY", "")
dbutils.widgets.text("QUEUE_NAME", "")
dbutils.widgets.text("ARCHIVE_QUEUE_NAME", "")
dbutils.widgets.text("INSTRUMENT_KEY", "")
dbutils.widgets.text("ROOT_PATH", "")
account_name = dbutils.widgets.get("STORAGE_ACCOUNT").strip()
sas=dbutils.widgets.get("SAS_KEY").strip()
account_key = dbutils.widgets.get("ACCOUNT_KEY").strip()
queue_name= dbutils.widgets.get("QUEUE_NAME").strip()
archive_queue_name= dbutils.widgets.get("ARCHIVE_QUEUE_NAME").strip()
instrument_key=dbutils.widgets.get("INSTRUMENT_KEY").strip()
conf_key = "fs.azure.account.key.{storage_acct}.dfs.core.windows.net".format(storage_acct=account_name)
spark.conf.set(conf_key, account_key)
root_path =dbutils.widgets.get("ROOT_PATH").strip()
#Dictionary of table name and partition columns. Assumption is partition value does not change in update
table_partition_mapping = {"TABLE_NAME": "PARTITION_NAME"}
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")
use_partition_in_match =True
# -
# ## Utility functions to parse schema and load data
# +
from delta.tables import *
from pyspark.sql.types import StructField, StructType , LongType, StringType, DoubleType ,DecimalType, DateType,FloatType
from pyspark.sql.functions import col
from pyspark.sql.utils import AnalysisException
def get_file_info(metadata_file_paths, datafile_basepath="", datafile_extension="json"):
"""
Function to parse dfm schema files and return the schema for loading each data file.
The assumption is that each table has its own directory. Inside each directory there are data files, and each data file has a metadata file
describing the schema and other information.
Normally in each load, each table will have only one schema, but should a table have more than one schema due to changes at the source,
the function has logic to deal with it.
Parameters
--------
metadata_file_paths: a list, required
a list of full paths to meta data files that contain schema and other information
datafile_basepath: a string
show the base path down to the last folder containing datafiles
datafile_extension: a string
extension showing type of data file e.g. json or csv..
--------
"""
schemas =spark.read.option("multiLine", True).option("mode", "PERMISSIVE").json(metadata_file_paths)
#This map shows the mapping between Oracle data types and Spark SQL types. Needs updates to cover more types
ora_pyspark_map = {'STRING':StringType(), 'DATETIME':DateType(),'NUMERIC':DecimalType(), 'REAL8':FloatType()}
# ora_pyspark_map = {'STRING':StringType(), 'DATETIME':DateType(),'NUMERIC':FloatType()}
schemas = schemas.select(["dataInfo", "fileInfo"]).collect()
#this is a list to contain list of primary keys
primary_keys=[]
file_schema_mapping={}
for num, schema in enumerate(schemas):
targetSchema = StructType()
for item in schema['dataInfo']['columns']:
#default to String type if no mapping is found
target_type = ora_pyspark_map.get(item['type'], StringType())
if target_type ==DecimalType():
scale =int(item['scale'])
precision=int(item['precision'])
target_type =DecimalType(precision,scale)
# if scale!=0 or precision!=0:
# target_type =DecimalType(precision,scale)
# else:
# target_type =DecimalType(38,10)
targetSchema.add(item['name'],target_type)
#only need to check first schema assuming primary keys won't change
if(num==0):
if (item['primaryKeyPos']>0):
primary_keys.append(item['name'])
#build a dict of data file full path to target schema. This can be done because the assumption that data file is
#in the same folder as metadata file
file_schema_mapping[datafile_basepath+schema['fileInfo']['name']+"."+datafile_extension] = targetSchema
#This algorithm is to build dict of schema:data files (one to many relationship)
schema_datafiles_map= {}
for key, value in sorted(file_schema_mapping.items()):
schema_datafiles_map.setdefault(value, []).append(key)
return schema_datafiles_map,primary_keys
def merge(updatesDF, target_tbl,schema,primary_keys, field_names):
"""
Function to insert, delete or update data into target tables
Parameters
--------
updatesDF: spark dataframe that contains change data to merge
target_tbl: a string
the base path to the target table to merge the updates to
schema: a StrucType, optional
the schema of target table, may not be needed
primary_keys: a list
contains a list of primary key(s) needed to delete or update table
field_names: a list
contains a list of fields of the target table
--------
"""
partition_key=""
#Check if the data is from full table copy
if ("__ct" in target_tbl):
target_tbl=target_tbl[:-4]
partition_key = table_partition_mapping.get(target_tbl.split(".")[1],"")
else:
print("perform full table copy")
partition_key = table_partition_mapping.get(target_tbl.split(".")[1],"")
if len(partition_key)>0:
print("partition key detected, ", partition_key)
updatesDF.write.format("delta").option("mergeSchema", "true").partitionBy(partition_key).mode("append").saveAsTable(target_tbl)
else:
updatesDF.write.format("delta").option("mergeSchema", "true").mode("append").saveAsTable(target_tbl)
return
#processing the insert
print("processing insert")
insert_df = updatesDF.filter("header__change_oper='I'")
#automatic add new columns
if len(partition_key)>0:
insert_df.write.format("delta").option("mergeSchema", "true").partitionBy(partition_key).mode("append").saveAsTable(target_tbl)
else:
insert_df.write.format("delta").option("mergeSchema", "true").mode("append").saveAsTable(target_tbl)
#processing the update/delete
print("done processing insert for ", target_tbl)
upsert_df = updatesDF.filter("header__change_oper='U' or header__change_oper='D'")
#build dynamic match condition query
update_alias ="updates"
target_tbl_alias ='tgt_tbl'
match_condition=""
for num, key in enumerate(primary_keys):
if num==0:
if (use_partition_in_match and len(partition_key)>0): #use partition in match condition to limit scanning scope for updates/deletes
match_condition= "{0}.{1} ={2}.{1} and {0}.{3} ={2}.{3} ".format(target_tbl_alias, key,update_alias,partition_key )
else:
match_condition= "{0}.{1} ={2}.{1}".format(target_tbl_alias, key,update_alias)
else:
match_condition = match_condition+ " and {0}.{1} ={2}.{1}".format(target_tbl_alias, key,update_alias)
update_dict ={fieldname: update_alias+"."+fieldname for fieldname in set(field_names)-set(primary_keys)}
# print(update_dict)
print("target table for merge is ", target_tbl)
targetTable = DeltaTable.forName(spark, target_tbl)
targetTable.alias(target_tbl_alias).merge(
upsert_df.alias(update_alias),match_condition
) \
.whenMatchedUpdate(update_alias+".header__change_oper='U'",set = update_dict ) \
.whenMatchedDelete(update_alias+".header__change_oper='D'") \
.execute()
updatesDF.unpersist()
def replace_schema(new_schema, tbl):
tbl_schema = spark.table(tbl).schema
partition_key = table_partition_mapping.get(tbl.split(".")[1],"")
# print("partition key : ", partition_key)
# print("new schema:", new_schema)
field_pairs ={}
for field1 in tbl_schema:
for field2 in new_schema:
if field1.name == field2.name:
if field1.dataType!=field2.dataType:
print("field: ",field1.name)
print(" with current dtype:", field1.dataType)
print(" but dtype in the update is ",field2.dataType)
field_pairs[field1.name] = field2.dataType
if len(field_pairs)==0:
print("no difference detected")
# return
tbl_df = spark.read.table(tbl)
for item in field_pairs.items():
tbl_df = tbl_df.withColumn(item[0], col(item[0]).cast(item[1]))
spark.sql("drop table IF EXISTS {}_tmp".format(tbl))
if len(partition_key)>0:
tbl_df.write \
.partitionBy(partition_key) \
.mode("overwrite") \
.option("overwriteSchema", "true") \
.saveAsTable(tbl+"_tmp")
else:
tbl_df.write \
.mode("overwrite") \
.option("overwriteSchema", "true") \
.saveAsTable(tbl+"_tmp")
tbl_df = spark.read.table(tbl+"_tmp")
#clean up existing table to replace schema and partition
spark.sql("drop table IF EXISTS {}".format(tbl))
if len(partition_key)>0:
print("rewrite with partition ", partition_key)
tbl_df.write \
.mode("overwrite") \
.partitionBy(partition_key) \
.format("delta") \
.option("overwriteSchema", "true") \
.saveAsTable(tbl)
else:
print("rewrite without partition ")
tbl_df.write \
.mode("overwrite") \
.format("delta") \
.option("overwriteSchema", "true") \
.saveAsTable(tbl)
#drop temporary table
spark.sql("drop table {}_tmp".format(tbl))
print("complete replacing table schema",field_pairs, tbl)
def process_files(metadata_files,datafile_basepath,target_tbl,tc):
"""
Function to process multiple data/metadata files and deliver updates/inserts to target tables
Parameters
--------
metadata_file_paths: a list, required
a list of full paths to meta data files that contain schema and other information
datafile_basepath: a string
show the base path down to the last folder containing datafiles
target_tbl: a string
name of target delta table.
--------
"""
datafile_extension='csv'
zip_extension="gz"
chg_tbl_name ="chg_tbl"
metadata_file_paths=[datafile_basepath+metafile for metafile in metadata_files]
schema_datafiles_map,primary_keys = get_file_info(metadata_file_paths,datafile_basepath,zip_extension)
for (schema,datafile_paths) in schema_datafiles_map.items():
print("number of files for the table: ", len(datafile_paths))
tc.track_metric("Number of files", len(datafile_paths), properties = {"table":target_tbl})
# print("schema before read is: ",schema)
field_names =[fieldname for fieldname in schema.fieldNames()]
data = spark.read.format(datafile_extension).schema(schema).load(datafile_paths)
# print("schema after read is : ",data.schema)
data.createOrReplaceTempView(chg_tbl_name)
#generate below query dynamically based on target schema. Check if the data is of change nature or full load (without __ct in table name)
updatesDF=data
if "__ct" in target_tbl:
sql_query = "select {0} from (select {0}, RANK() OVER (PARTITION BY {1} ORDER BY header__change_seq DESC) AS RNK from {2} ) A where RNK=1".format(",".join(field_names),",".join(primary_keys),chg_tbl_name )
updatesDF = spark.sql(sql_query)
updatesDF.cache()
print("Files in the update: ", datafile_paths)
tc.track_event("Files in the update", {'files': datafile_paths, "table":target_tbl })
updatesize=updatesDF.count()
print("Size of the updates: ",updatesize )
tc.track_metric("Size of update", updatesize, properties = {"table":target_tbl})
try:
merge(updatesDF,target_tbl, schema,primary_keys, field_names)
except AnalysisException as e:
print(e)
print("replace schema")
tbl_name=None
if ("__ct" in target_tbl):
tbl_name=target_tbl[:-4]
else:
tbl_name =target_tbl
replace_schema(schema, tbl_name)
# replacing schema (e.g. change data type) reprocess the update
merge(updatesDF,target_tbl, schema,primary_keys, field_names)
tc.track_event('Table processing', { 'table': target_tbl })
tc.flush()
# -
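# The dynamic match condition that `merge` builds can be exercised in isolation. A pure-Python sketch of the same string construction (the aliases, key and partition names are illustrative):

```python
def build_match_condition(primary_keys, partition_key="", use_partition_in_match=True,
                          target_tbl_alias="tgt_tbl", update_alias="updates"):
    """Build the ON clause for a Delta MERGE from primary keys and an optional partition key."""
    match_condition = ""
    for num, key in enumerate(primary_keys):
        if num == 0:
            if use_partition_in_match and len(partition_key) > 0:
                # include the partition column to limit the scan for updates/deletes
                match_condition = "{0}.{1} ={2}.{1} and {0}.{3} ={2}.{3} ".format(
                    target_tbl_alias, key, update_alias, partition_key)
            else:
                match_condition = "{0}.{1} ={2}.{1}".format(target_tbl_alias, key, update_alias)
        else:
            match_condition += " and {0}.{1} ={2}.{1}".format(target_tbl_alias, key, update_alias)
    return match_condition

print(build_match_condition(["id"], partition_key="region"))
print(build_match_condition(["id", "seq"], partition_key=""))
```

# Including the partition column in the match condition lets Delta prune files during the MERGE scan instead of reading the whole table.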
# ### Main procedure to process incoming data from Eventgrid
# +
from azure.storage.queue import QueueService,QueueMessageFormat
import ast
import time
from applicationinsights import TelemetryClient
tc = TelemetryClient(instrument_key)
#Authenticate to Datalake where the files are landed. You can use ABFS or WASB depending on authentication method
#Dictionary contain table name and table path maps
def main_proc(max_num_tbl=1):
wait_time_count=0
start_run = time.time()
#authenticate to the Storage Queue Service using a shared SAS key
queue_service = QueueService(account_name=account_name,sas_token=sas)
#Set visibility_timeout, the estimated time your processing will last, so that other parallel clusters do not see the same messages. The messages go back to the queue
#unless you explicitly delete them, which should be done after a successful operation. 32 is the max number of messages in one read. If you need more than that, call get_messages
#multiple times.
#Do this in a while loop so that it keeps processing new files
batch_size=32
batch=0
max_messages_explored = 2000
#initial visibility timeout to filter out messages
init_visibility_timeout =60*120
#timeout for processing
visibility_timeout =2
#wait time if current queue is empty before retry
wait_time =10
while True:
dfm_filelist=[]
table_list=[]
#Get estimate of queue length
metadata = queue_service.get_queue_metadata(queue_name)
count = metadata.approximate_message_count
print("Begin processing, entire queue length is ", count)
start_queue = time.time()
tc.track_event('Queue ingesting', { 'queue_length': count })
tc.flush()
messages=None
messages_ids=[]
#This is to get more messages than the default limit of 32
while True:
batch_messages = queue_service.get_messages(
queue_name, num_messages=batch_size, visibility_timeout=init_visibility_timeout)
if messages is None:
messages = batch_messages
else:
messages = messages+batch_messages
if len(batch_messages)==0 or len(messages)>=max_messages_explored:
break
#This is the path to append with new files extracted from the queue
for message in messages:
content =QueueMessageFormat.binary_base64decode(message.content).decode('utf8')
json_content = ast.literal_eval(content)
data_content = json_content['data']
file_name =data_content['url'].split("/")[-1]
tbl_name =data_content['url'].split("/")[-2]
if ((data_content['api']=="PutBlockList" or data_content['api']=="CopyBlob" or data_content['api']=="CreateFile") and ".dfm" in file_name and "Attunity" not in tbl_name):
dfm_filelist.append(file_name)
table_list.append(tbl_name)
messages_ids.append({"file_name":file_name,"tbl":tbl_name, "id":message.id,"pop_receipt":message.pop_receipt,"content":message.content})
#here is the main processing logic (Data transformation)
#1. Reading files, the reader can read multiple files
table_dfm_dict = {file:tbl for tbl,file in zip(table_list,dfm_filelist)}
dfm_filelist,table_list=[],[]
reduced_tbl_file = {}
#create a grouping of table:file_list
for key, value in sorted(table_dfm_dict.items()):
reduced_tbl_file.setdefault(value, []).append(key)
#only get certain number of tables to process
reduced_tbl_file = dict(list(reduced_tbl_file.items())[0:max_num_tbl])
# print("reduced table file list is ", reduced_tbl_file)
#reset the dfm file list to the files associated with the selected table(s)
dfm_filelist = [file for key in reduced_tbl_file.keys() for file in reduced_tbl_file[key]]
#now release messages for files not in scope by resetting their visibility timeout
tmp_messages_ids=[]
for message in messages_ids:
if message["file_name"] in dfm_filelist:
tmp_messages_ids.append({"id":message['id'],"pop_receipt":message['pop_receipt'],"content":message['content'],"tbl":message['tbl']})
else:
queue_service.update_message(queue_name, message['id'], message['pop_receipt'], visibility_timeout)
messages_ids =tmp_messages_ids
# print("done updating message timeout, new file list is ",dfm_filelist)
filelistlen =len(dfm_filelist)
if filelistlen>0:
print("Start processing ", filelistlen, " files in this batch")
start_batch = time.time()
tc.track_event('start batch processing', { 'number of files': filelistlen })
tc.flush()
for tbl in reduced_tbl_file.keys():
tbl_messages_ids=[{"id":message['id'],"pop_receipt":message['pop_receipt'],"content":message['content']} for message in messages_ids if tbl in message['tbl']]
print("start processing: "+tbl)
start_tbl = time.time()
tc.track_event('Table processing', { 'table': tbl })
tc.flush()
try:
process_files(reduced_tbl_file[tbl],root_path+tbl+"/", tbl,tc)
elapsed_tbl_time = time.time() - start_tbl
elapsed_tbl_time=time.strftime("%H:%M:%S", time.gmtime(elapsed_tbl_time))
print("Finished processing table {0} in {1}".format(tbl,elapsed_tbl_time))
for message in tbl_messages_ids:
queue_service.delete_message(queue_name, message['id'], message['pop_receipt'])
queue_service.put_message(archive_queue_name, message['content'])
except Exception as e:
template = "An exception of type {0} occurred. Arguments:\n{1!r}"
message = template.format(type(e).__name__, e.args)
print(message)
print(e)
failed_messages= [message['id'] for message in tbl_messages_ids]
tc.track_event('exception', { 'error message': str(e) },{ 'failed messageid': str(failed_messages) })
tc.flush()
#ddl update here!
elapsed_batch_time = time.time() - start_batch
elapsed_batch_time=time.strftime("%H:%M:%S", time.gmtime(elapsed_batch_time))
print("finish batch {0}, processed {1} files in {2}".format(batch, filelistlen,elapsed_batch_time))
tc.track_event('finish batch processing', { 'number of files': filelistlen,'duration': elapsed_batch_time })
tc.flush()
batch=batch+1
else:
#Wait for messages to arrive
elapsed_queue_time = time.time() - start_queue
elapsed_queue_time=time.strftime("%H:%M:%S", time.gmtime(elapsed_queue_time))
print("Finished queue in ",elapsed_queue_time)
print("Nothing in queue, wait {} seconds for next batch".format(wait_time))
tc.track_event('waiting for new messages', { 'wait_time': wait_time })
tc.flush()
time.sleep(wait_time)
wait_time_count=wait_time_count+1
if (wait_time_count>100):
elapsed_total_time = time.time() - start_run - wait_time_count*wait_time
elapsed_total_time=time.strftime("%H:%M:%S", time.gmtime(elapsed_total_time))
print("Finished total run in ",elapsed_total_time)
break
continue
# -
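# The grouping step inside `main_proc` inverts a `{file: table}` map into `{table: [files]}` with `dict.setdefault` and then keeps only `max_num_tbl` tables; a standalone sketch with made-up file and table names:

```python
# Invert {file: table} into {table: [files]}, then cap the number of tables,
# mirroring the logic inside main_proc (names here are illustrative).
table_dfm_dict = {"f1.dfm": "orders", "f2.dfm": "orders", "f3.dfm": "customers"}
reduced_tbl_file = {}
for file_name, tbl in sorted(table_dfm_dict.items()):
    reduced_tbl_file.setdefault(tbl, []).append(file_name)
max_num_tbl = 1
# only keep the first max_num_tbl tables for this batch
reduced_tbl_file = dict(list(reduced_tbl_file.items())[0:max_num_tbl])
print(reduced_tbl_file)  # {'orders': ['f1.dfm', 'f2.dfm']}
```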
#calling function a single time
main_proc(2)
# +
import threading
#start 5 jobs simultaneously. You won't be able to see jobs running in this notebook. You'll need to go
for i in range(5):
t = threading.Thread(target=main_proc)
t.start()
# -
| code/loading_merging_template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import pandas as pd
# import os
# import cv2
# df = pd.read_csv('training_labels.csv')
# for index, row in df.iterrows():
# img = cv2.imread(f"training_data/{row['id']:06}.jpg")
# if not os.path.exists(f"tmp/{row['label']}"):
# os.makedirs(f"tmp/{row['label']}")
# print(f"{row['label']}")
# cv2.imwrite(f"tmp/{row['label']}/{row['id']:06}.jpg", img)
# #os.rmdir(row['label'])
# #print(f"{row['id']:06}")
# +
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import tensorflow as tf
import random
import PIL
from PIL import Image, ImageOps
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(device=gpu, enable=True)
len(os.listdir(f'tmp')) # Directory where training data folders are
# -
tf.config.experimental.list_physical_devices(device_type='GPU')
# ### Hyperparameters
# batch size 256: 0.798
#
# batch size 128: 0.833
#
# batch size 64: 0.858
#
# batch size 32: 0.881
#
# batch size 16 0.891
batch_size = 16 # Training batch size
num_classes = 196 # Classes in dataset
num_epochs = 100 # Epochs for training
lr = 1e-3 # Learning rate
lr_weight_decay = 1e-3 # Learning weight decay
# ### Data Processing
# +
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# validation_datagen=ImageDataGenerator(rescale=1/255)
# validation_generator = validation_datagen.flow_from_directory('hw5_data/test/',target_size=(299,299),
# batch_size=batch_size,
# class_mode='sparse')
#from tensorflow.keras.preprocessing.image import ImageDataGenerator
"""
from albumentations.augmentations.transforms import Rotate
from albumentations import Compose
def augment_and_show(image):
#image = Equalize(mode='pil', always_apply=True, p=1)(image=np.array(image).astype('uint8'))['image']
aug = Compose([Rotate(limit=5, interpolation=2, p=1)])
image = aug(image=image)['image']
return image.astype('float')
"""
# Preprocess images and split off the validation set
train_datagen = ImageDataGenerator(horizontal_flip=True, validation_split=0)
val_datagen = ImageDataGenerator(horizontal_flip=False, validation_split=0)
train_generator = train_datagen.flow_from_directory('tmp/',
target_size=(256,256),
batch_size=batch_size,
class_mode='categorical',
shuffle=True,
subset='training')
# validation_datagen = ImageDataGenerator()
# validation_generator = validation_datagen.flow_from_directory('hw5_data/test/',target_size=(256,256),
# batch_size=batch_size,
# class_mode='sparse')
validation_generator = val_datagen.flow_from_directory('tmp_test/',
target_size=(256,256),
batch_size=batch_size,
class_mode='categorical')
# for i in range(25):
# plt.figure(figsize=(30,20))
# plt.subplot(5,5,i+1)
# plt.imshow(train_generator.next()[0][i].astype('uint8'))
# plt.tight_layout()
# +
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.models import Model
from classification_models.tfkeras import Classifiers
ResNet50, preprocess_input = Classifiers.get('resnet50')
base_model = ResNet50(input_shape=(256, 256, 3), weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)
#output = Dense(num_classes, kernel_initializer='zeros')(x)
output = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=[base_model.input], outputs=[output])
# Training with the final custom-made layers
# ResNet 50 (48,51)
for layer in model.layers[:48]:
layer.trainable =False
for layer in model.layers[48:]:
layer.trainable=True
model.summary()
# -
# ### Callbacks
# +
import math
from tensorflow.keras.callbacks import Callback, ModelCheckpoint
from tensorflow.keras import backend as K
class CosineAnnealingScheduler(Callback):
"""Cosine annealing scheduler.
"""
def __init__(self, T_max, eta_max, eta_min=0, verbose=1):
super(CosineAnnealingScheduler, self).__init__()
self.T_max = T_max
self.eta_max = eta_max
self.eta_min = eta_min
self.verbose = verbose
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model.optimizer, 'lr'):
raise ValueError('Optimizer must have a "lr" attribute.')
lr = self.eta_min + (self.eta_max - self.eta_min) * (1 + math.cos(math.pi * epoch / self.T_max)) / 2
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nEpoch %05d: CosineAnnealingScheduler setting learning '
'rate to %s.' % (epoch + 1, lr))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
cosineanneling = CosineAnnealingScheduler(T_max=20, eta_max=1e-2, eta_min=1e-5)
checkpoint = ModelCheckpoint('logs/ep{epoch:03}-val_loss{val_loss:.3f}-val_auc{val_accuracy}.h5', save_best_only=False, save_weights_only=True, monitor='val_loss', mode='min', verbose=1)
#checkpoint = ModelCheckpoint('logs/ep{epoch:03}-loss{loss:.3f}-auc{accuracy}.h5', save_best_only=False, save_weights_only=True, monitor='loss', mode='min', verbose=1)
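# A standalone check (plain math, no Keras) that the formula in `on_epoch_begin` above starts at `eta_max` and anneals down to `eta_min` after `T_max` epochs:

```python
# lr(t) = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2
# Same formula as the CosineAnnealingScheduler callback, evaluated directly.
import math

def cosine_lr(epoch, T_max=20, eta_max=1e-2, eta_min=1e-5):
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * epoch / T_max)) / 2

print(cosine_lr(0))   # eta_max at the start
print(cosine_lr(20))  # eta_min after T_max epochs
```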
# +
from tensorflow.keras import optimizers
# Use SGD optimizer (learning rate=1e-3, momentum=0.9, decay=1e-3)
# Loss function: categorical cross entropy
# Evaluation: Accuracy
sgd = optimizers.SGD(lr=1e-3, momentum=0.9, decay=1e-3)
#loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
step_size_train = train_generator.n // train_generator.batch_size
step_size_val = validation_generator.n // validation_generator.batch_size
history = model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train,
validation_data=validation_generator, validation_steps=step_size_val,
epochs=num_epochs, callbacks=[cosineanneling, checkpoint])
# -
# ### Multi-GPU
# +
# from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
# from tensorflow.keras.models import Model
# import efficientnet.tfkeras as efn
# import tensorflow as tf
# from tensorflow.keras import optimizers
# def get_compiled_model():
# base_model = efn.EfficientNetB0(input_shape=(256, 256, 3), weights='imagenet', include_top=False) # or weights='noisy-student'
# x = GlobalAveragePooling2D()(base_model.output)
# output = Dense(num_classes, activation='softmax')(x)
# model = Model(inputs=[base_model.input], outputs=[output])
# # for layer in model.layers[:22]:
# # layer.trainable =False
# # for layer in model.layers[22:]:
# # layer.trainable=True
# sgd = optimizers.SGD(lr=1e-3, momentum=0.9, decay=1e-3)
# model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
# return model
# strategy = tf.distribute.MirroredStrategy()
# print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
# step_size_train = train_generator.n // train_generator.batch_size
# step_size_val = validation_generator.n // validation_generator.batch_size
# with strategy.scope():
# # Everything that creates variables should be under the strategy scope.
# # In general this is only model construction & `compile()`.
# model = get_compiled_model()
# history = model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train,
# validation_data=validation_generator, validation_steps=step_size_val,
# epochs=num_epochs , callbacks=[cosineanneling, checkpoint])
# -
# ### Visualization
# +
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('ResNet50 Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('ResNet50 Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# -
# ### Testing
model.load_weights(f'logs/ResNet_batch16_0.87_val_auc0.958.h5')
#model.predict(test_img)
# +
import pandas as pd
class_label = os.listdir(f'testing_data/')
tmp_dic = train_generator.class_indices
dic = dict((v,k) for k,v in tmp_dic.items())
dic[173] = '<NAME>/V <NAME> 2012'
def DataLoader(standardize=False):
img_list = []
label_list = []
filenames = []
num_img = 0
for img in os.listdir(f'testing_data/'):
if img.endswith('.jpg'):
original = cv2.imread(f'testing_data/{img}')
filenames.append(img.split('.')[0])
#original = augment_and_show(original)
resized_img = cv2.resize(original, (256, 256), interpolation=cv2.INTER_CUBIC)
img_list.append(resized_img)
num_img += 1
#img_list = img_list.T[1:,:]
return np.array(img_list), filenames, num_img
test_img, name, num_test = DataLoader(standardize=False)
predict = (model.predict(test_img)).argmax(-1)
df = pd.DataFrame({
'id': name,
'label': [dic[num] for num in predict]
})
# dic = train_generator.class_indices
# label_num = []
# for label in test_label:
# label_num.append(dic[label])
# -
df.head()
df.to_csv("hw1_resnet50.csv", header=["id", "label"], index=False)
# +
class_label = os.listdir(f'tmp_test/')
def DataLoader(path, standardize=False):
img_list = []
label_list = []
num_img = 0
for classes in class_label:
for img in os.listdir(f'tmp_test/{classes}'):
if img.endswith('.jpg'):
label_list.append(classes)
original = cv2.imread(f'tmp_test/{classes}/{img}')
#original = augment_and_show(original)
resized_img = cv2.resize(original, (256, 256), interpolation=cv2.INTER_CUBIC)
img_list.append(resized_img)
num_img += 1
#img_list = img_list.T[1:,:]
return np.array(img_list), label_list, num_img
test_img, test_label, num_test = DataLoader('test', standardize=False)
dic = train_generator.class_indices
label_num = []
for label in test_label:
label_num.append(dic[label])
for log in sorted(os.listdir(f'./logs/')):
if log.startswith('batch16_0.87_ep021-'):
model.load_weights(f'logs/{log}')
print(log)
print(np.count_nonzero(np.argmax(model.predict(test_img), axis=1) == label_num) / num_test)
| ResNet50.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import matplotlib.pyplot as plt
import numpy as np
import time
# ## OpenCV's Interpolation:
#
#
#
#
# ## OpenCV's Video Writer:
# Steps to use video Writer:
# 1. Create VideoWriter Object
# 2. Write Frame to VideoWriter
# ```python
# cv2.VideoWriter(filename, fourcc, fps, frameSize)
# ```
# **Parameters:**
#
# 1. **filename:** Output video file
# 2. **fourcc:** 4-character code of codec used to compress the frames
# 3. **fps:** framerate of videostream
# 4. **framesize:** Height and width of frame
#
# **Write Frame** (all frames must share the same size)
# ``` python
# for frame in list_of_frames:
# frame = cv2.resize(frame, dim, interpolation = cv2.INTER_AREA)
# out.write(frame)
# ```
# ### Load Images:
# In this block we load all images and store them in lists by class; to automate this, the per-class lists are kept in a list of lists. Additionally, we interpolate between adjacent images so that instead of 6 images per class we get 11 in total
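# The in-between frames are produced by averaging two adjacent images pixel-wise; a small numpy-only sketch of that blend on synthetic data:

```python
# 50/50 pixel-wise blend of two uint8 frames, as done for the in-between images.
import numpy as np

a = np.full((4, 4, 3), 100, dtype="uint8")
b = np.full((4, 4, 3), 200, dtype="uint8")
blend = (0.5 * a + 0.5 * b).astype("uint8")  # cast back to uint8 after the float mix
print(blend[0, 0])  # [150 150 150]
```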
# +
verbose= True
interpolate_image = True
root = r"../images/" # Root directory for original images
images = ["atypical","typical","mid"] # List of the different classes of images
TYPICAL = list() # List to store typical images
ATYPICAL = list() # List to store atypical images
MID = list() # List to store mid images
lists = [TYPICAL,ATYPICAL,MID] # List of lists: stores the per-class image lists
dim = (int(130),int(197)) # Interpolation dimensions
for i,lista in enumerate(lists): # Iterate over lists(enumerating it)
for j in range(1,7): # Iterate from 1 to 6 (number of original images for each class)
path = root + images[i] + "0"+str(j)+".PNG" # Concatenate path of image
image_loaded = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB) # Load image
image_loaded_interpolated = cv2.resize(image_loaded, dim, interpolation = cv2.INTER_LANCZOS4)
lista.append(image_loaded_interpolated) # Append loaded image to correspondend list
if verbose:
print("Image " + path + " Loaded") # Tell user that the image has been loaded
if interpolate_image:
if j !=6: # If current iteration is not the last iteration (6)
path2 = root + images[i] + "0"+str(j+1)+".PNG" # Get next image
image_loaded2 = cv2.cvtColor(cv2.imread(path2), cv2.COLOR_BGR2RGB) # Load next image
image_loaded2_interpolated = cv2.resize(image_loaded2, dim, interpolation = cv2.INTER_LANCZOS4)
interpolated_image = (0.5*image_loaded_interpolated + 0.5*image_loaded2_interpolated).astype("uint8") # Interpolate between the two images
lista.append(interpolated_image)
if verbose:
print("Interpolated images " + path + " and " + path2 + " Loaded")
# -
dim = (int(197),int(130))
out = cv2.VideoWriter('project2.avi',cv2.VideoWriter_fourcc(*'DIVX'), 2, dim)
for i in range(0,lists[0].__len__()):
frame = lists[0][i]
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
frame = cv2.resize(frame, dim, interpolation = cv2.INTER_LANCZOS4)
out.write(frame)
time.sleep(1)
out.release() # Finalize and close the video file
# ## Segmentation of Data
# We will follow the white contours in order to segment the image into 6 parts
n=7
figure,ax = plt.subplots(1,1,figsize=(10,9))
ax.imshow(lists[0][n])
ax.grid("on")
plt.show()
### Experimental (Get rid of white contours) ###
A = lists[0][7]
R,G,B = (16,16,144)
Th = 100
M = (A[:,:,0] > Th)*(A[:,:,1] > Th)*(A[:,:,2] > Th)*1
M0 = -1*M + 1
F = np.copy(A)
F[:,:,0],F[:,:,1],F[:,:,2] = A[:,:,0]*M0 + M*R , A[:,:,1]*M0 + M*G,A[:,:,2]*M0 + M*B
#F[:,:,0],F[:,:,1],F[:,:,2] = A[:,:,0]*M0 , A[:,:,1]*M0 ,A[:,:,2]*M0
# +
def get_energy(segment):
''' Get the average energy of a segment'''
N,M= segment.shape
energy = 0
for ni in range(0,N):
for mj in range(0,M):
energy += segment[ni,mj]
energy = energy/(N*M)
return round(energy)
def segment(COLOR_IMAGE, plot=True):
''' Segment the image into 6 segments, following the white contours'''
rgb_weights = [0.2989, 0.5870, 0.1140] # Weights for Grayscale Transformation
IMAGE = (np.dot(COLOR_IMAGE,rgb_weights)).astype(np.uint8) # Transform RGB to Grayscale
## SEGMENT PHOTO ##
TL = IMAGE[40:80,40:60] # TOP LEFT
TR = IMAGE[50:86,65:90] # TOP RIGHT
ML = IMAGE[90:140,38:62] # MIDDLE LEFT
MR = IMAGE[90:140,70:85] # MIDDLE RIGHT
BL = IMAGE[150:190,40:62] # BOTTOM LEFT
BR = IMAGE[150:190,70:94] # BOTTOM RIGHT
## GET ENERGY ##
ETL = get_energy(TL) # TOP LEFT ENERGY
ETR = get_energy(TR) # TOP RIGHT ENERGY
EML = get_energy(ML) # MIDDLE LEFT ENERGY
EMR = get_energy(MR) # MIDDLE RIGHT ENERGY
EBL = get_energy(BL) # BOTTOM LEFT ENERGY
EBR = get_energy(BR) # BOTTOM RIGHT ENERGY
if plot:
figure, ax = plt.subplots(3,2,figsize=(6,10))
## Show segments ##
ax[0,0].imshow(TL)
ax[0,0].axis("off")
ax[0,0].set_title(f"Energy {ETL}")
ax[0,1].imshow(TR)
ax[0,1].set_title(f"Energy {ETR}")
ax[0,1].axis("off")
ax[1,0].imshow(ML)
ax[1,0].set_title(f"Energy {EML}")
ax[1,0].axis("off")
ax[1,1].imshow(MR)
ax[1,1].set_title(f"Energy {EMR}")
ax[1,1].axis("off")
ax[2,0].imshow(BL)
ax[2,0].set_title(f"Energy {EBL}")
ax[2,0].axis("off")
ax[2,1].imshow(BR)
ax[2,1].set_title(f"Energy {EBR}")
ax[2,1].axis("off")
plt.show()
return (ETL,ETR,EML,EMR,EBL,EBR)
def analyze_energy_through_time(Class,Th= 80):
ETLv = list()
ETRv = list()
EMLv = list()
EMRv = list()
EBLv = list()
EBRv = list()
Vectors = [ETLv,ETRv,EMLv,EMRv,EBLv,EBRv]
Labels = ["Top Left","Top Right","Middle Left","Middle Right","Bottom Left","Bottom Right"]
for i,image in enumerate(Class):
Energy = segment(image, plot=False)
for energy,vector in zip(Energy ,Vectors):
vector.append(energy)
figure, ax = plt.subplots(3,2,figsize=(12,10))
axes = ax.flatten()
i=0
for vector,label in zip(Vectors,Labels):
axes[i].plot(vector,label=label,)
axes[i].hlines(Th,label="Threshold",xmin=0,xmax=10,linestyles='--',color="r")
axes[i].set_xlabel("Time [s]")
axes[i].set_ylabel("Energy")
axes[i].set_title(f"{label} energy through time")
axes[i].legend(loc = "upper left")
plt.tight_layout()
i += 1
return Vectors
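# The double loop in `get_energy` above just computes a rounded mean; a quick numpy check that a vectorized mean agrees on a toy patch:

```python
# Compare the loop-based average energy with a vectorized mean.
import numpy as np

patch = np.array([[10, 20], [30, 40]])
energy_loop = 0
N, M = patch.shape
for ni in range(N):
    for mj in range(M):
        energy_loop += patch[ni, mj]
energy_loop = round(energy_loop / (N * M))
energy_vec = round(float(patch.mean()))  # same result, one call
print(energy_loop, energy_vec)  # 25 25
```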
analyze_energy_through_time(TYPICAL,Th= 80)
for i,image in enumerate(ATYPICAL):
print(f"Image number {i}")
Energy = segment(image, plot=True)
| 4_Pie/Python/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import pyreadr
import os
import seaborn as sns
import matplotlib.pyplot as plt
BCFTOOLS = "/home/kele/programs/bcftools/bcftools-1.11/bcftools"
def get_ancestry_dosage(arr, n_anc):
anc_dosage = np.zeros((arr.shape[0], int(arr.shape[1]/2)), dtype=np.half)
if n_anc==3:
a0 = arr[:, 0::3] # should be views
a1 = arr[:, 1::3]
a2 = arr[:, 2::3]
anc_dosage[:, 0::3] = a0[:, ::2] + a0[:, 1::2]
anc_dosage[:, 1::3] = a1[:, ::2] + a1[:, 1::2]
anc_dosage[:, 2::3] = a2[:, ::2] + a2[:, 1::2]
elif n_anc==4:
a0 = arr[:, 0::4] # should be views
a1 = arr[:, 1::4]
a2 = arr[:, 2::4]
a3 = arr[:, 3::4]
anc_dosage[:, 0::4] = a0[:, ::2] + a0[:, 1::2]
anc_dosage[:, 1::4] = a1[:, ::2] + a1[:, 1::2]
anc_dosage[:, 2::4] = a2[:, ::2] + a2[:, 1::2]
anc_dosage[:, 3::4] = a3[:, ::2] + a3[:, 1::2]
return anc_dosage
# True ancestry
def load_true_la(path):
return np.load(path)['arr']
def get_true_anc_dosage(true_la, n_anc):
hap1 = np.zeros((true_la.shape[0], int(true_la.shape[1]/2*n_anc)), dtype = 'int8')
hap2 = np.zeros((true_la.shape[0], int(true_la.shape[1]/2*n_anc)), dtype = 'int8')
aa = np.arange(true_la[:, ::2].shape[1])*n_anc+true_la[:, ::2]
bb = np.arange(true_la[:, 1::2].shape[1])*n_anc+true_la[:, 1::2]
np.put_along_axis(hap1, aa, 1, axis=1)
np.put_along_axis(hap2, bb, 1, axis=1)
return hap1+hap2
## Load in the probabilistic output of each method
# RFMix2 fb output: data frame with one row per site.
# Only every 5th site is represented in this file - not sure if it will always be intervals of 5 sites.
# After the index columns, each (individual) x (haplotype) x (population) has an entry.
def load_rfmix_fb(path):
rfmix_res = pd.read_csv(path, sep='\t', comment='#')
# expand out to each site
rfmix_res = np.repeat(rfmix_res.iloc[:, 4:].values, [5], axis = 0)
return rfmix_res
def load_bmix(path):
csv_path = path.replace('.vcf.gz', '.csv')
# !{BCFTOOLS} query -f '%CHROM, %POS, [%ANP1, %ANP2,]\n' {path} > {csv_path}
bmix = pd.read_csv(csv_path, header=None)
bmix = bmix.dropna(axis=1)
return(bmix.iloc[:,2:].values)
def load_mosaic(path):
mr = pyreadr.read_r(path)['arr'].astype(np.half)
mr = mr.to_numpy().T.reshape((mr.shape[2],-1), order='C')
return mr
def plot_ancestry_dosage(pred_dosage, start_index, n_anc, reference_dosage=None, title = None):
"""
only works for 3 ancestries
"""
colors = ['blue', 'orange', 'green', 'grey']
fig, ax = plt.subplots(figsize = (12, n_anc*1.5), nrows=n_anc, sharex=True, sharey=True)
f = []
for i in range(n_anc):
l, = ax[i].plot(pred_dosage[:, start_index+i], c=colors[i])
ax[i].set_ylim([-.05, 2.05])
f.append(l)
plt.legend(f, [f'pop{p}' for p in range(n_anc)])
if reference_dosage is not None:
for i in range(n_anc):
l, = ax[i].plot(reference_dosage[:, start_index+i], c=colors[i], #alpha=.5,
ls='dotted')
fig.tight_layout()
sns.despine(bottom=True)
if title:
ax[0].set_title(title)
else:
ax[0].set_title('Ancestry dosage')
ax[-1].set_xlabel('Site number ')
## 3 population paths
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_b/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_b/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_b/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_b/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_b/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
# sum over sites and ancestries?
def get_Q(arr, n_anc):
nsites = arr.shape[0]
# avoid overflow and sum over sites
arr = arr.astype(float).sum(0)
if n_anc == 3:
a0 = arr[0::3] # should be views
a1 = arr[1::3]
a2 = arr[2::3]
q0 = (a0[0::2] + a0[1::2])/(nsites*4)
q1 = (a1[0::2] + a1[1::2])/(nsites*4)
q2 = (a2[0::2] + a2[1::2])/(nsites*4)
Q = pd.DataFrame([q0, q1, q2]).T
Q.columns = ['pop_0', 'pop_1', 'pop_2']
elif n_anc == 4:
a0 = arr[0::4] # should be views
a1 = arr[1::4]
a2 = arr[2::4]
a3 = arr[3::4]
q0 = (a0[0::2] + a0[1::2])/(nsites*4)
q1 = (a1[0::2] + a1[1::2])/(nsites*4)
q2 = (a2[0::2] + a2[1::2])/(nsites*4)
q3 = (a3[0::2] + a3[1::2])/(nsites*4)
Q = pd.DataFrame([q0, q1, q2, q3]).T
Q.columns = ['pop_0', 'pop_1', 'pop_2', 'pop_3']
return(Q)
def get_RMSD_Q(Q1, Q2):
assert(Q1.shape == Q2.shape)
D = Q1-Q2
SD = D*D
MSD = SD.mean().mean()
RMSD = np.sqrt(MSD)
return(RMSD)
Q_true = get_Q(true_anc_dosage, n_anc=3)
Q_bmix = get_Q(bmix_anc_dosage, n_anc=3)
Q_mosaic = get_Q(mosaic_anc_dosage, n_anc=3)
Q_rfmix = get_Q(rfmix_anc_dosage, n_anc=3)
rmsd_bmix = get_RMSD_Q(Q_bmix, Q_true)
rmsd_mosaic = get_RMSD_Q(Q_mosaic, Q_true)
rmsd_rfmix = get_RMSD_Q(Q_rfmix, Q_true)
## Write Q results tables
with open('/home/kele/Documents/test.rmsd.tsv', 'w') as OUTFILE:
OUTFILE.write('\t'.join(['bmix', 'MOSAIC', 'RFMix2' ]) + '\n')
OUTFILE.write('\t'.join([f'{x:0.4f}' for x in [rmsd_bmix, rmsd_mosaic, rmsd_rfmix]]) + '\n')
Q_true.to_csv('/home/kele/Documents/test.Q.tsv', index = None, sep = '\t', float_format='%0.4f')
# -
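# A toy check of the interleaved-column bookkeeping used in `get_ancestry_dosage` above, for one site, one diploid individual, and three ancestries (the layout below is the one the slicing assumes):

```python
# Columns: [hap1_pop0, hap1_pop1, hap1_pop2, hap2_pop0, hap2_pop1, hap2_pop2]
import numpy as np

arr = np.array([[0.9, 0.1, 0.0, 0.2, 0.0, 0.8]])
n_anc = 3
anc_dosage = np.zeros((arr.shape[0], arr.shape[1] // 2))
for p in range(n_anc):
    ap = arr[:, p::n_anc]                               # per-haplotype prob of ancestry p
    anc_dosage[:, p::n_anc] = ap[:, ::2] + ap[:, 1::2]  # sum the two haplotypes
print(anc_dosage)  # [[1.1 0.1 0.8]]
```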
f'{rmsd_mosaic:0.4f}'
rmsd_rfmix
Q1 = get_Q(mosaic_anc_dosage, n_anc=3)
Q2 = get_Q(true_anc_dosage, n_anc=3)
get_RMSD_Q(Q1, Q2)
Q1 = get_Q(bmix_anc_dosage, n_anc=3)
Q2 = get_Q(true_anc_dosage, n_anc=3)
get_RMSD_Q(Q1, Q2)
Q1 = get_Q(rfmix_anc_dosage, n_anc=3)
Q2 = get_Q(true_anc_dosage, n_anc=3)
get_RMSD_Q(Q1, Q2)
D = Q1-Q2
np.sqrt((D*D).mean().mean())
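# The RMSD between two Q matrices, as computed by `get_RMSD_Q` above, spelled out on a toy pair of ancestry-fraction tables (values are made up):

```python
# Root mean squared deviation between two small Q (ancestry fraction) tables.
import numpy as np
import pandas as pd

Qa = pd.DataFrame({"pop_0": [0.5, 0.2], "pop_1": [0.5, 0.8]})
Qb = pd.DataFrame({"pop_0": [0.4, 0.2], "pop_1": [0.6, 0.8]})
D = Qa - Qb
rmsd = float(np.sqrt((D * D).mean().mean()))  # mean over columns, then over rows
print(round(rmsd, 4))  # 0.0707
```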
get_Q(mosaic_anc_dosage, n_anc=3) - get_Q(true_anc_dosage, n_anc=3)
nsites = mosaic_anc_dosage.shape[0]
arr = mosaic_anc_dosage.astype(float).sum(0)
a0 = arr[0::3] # should be views
a1 = arr[1::3]
a2 = arr[2::3]
# sum over adjacent haploids to make inds
q0 = (a0[0::2] + a0[1::2])/(nsites*4)
q1 = (a1[0::2] + a1[1::2])/(nsites*4)
q2 = (a2[0::2] + a2[1::2])/(nsites*4)
# make a file for output
Q = pd.DataFrame([q0, q1, q2]).T
Q.columns = ['pop_0', 'pop_1', 'pop_2']
Q
plt.subplots(figsize = (12, 8))
order = np.argsort(q2)
plt.bar(np.arange(len(q0)), q0[order], width=1, label='pop_0')
plt.bar(np.arange(len(q1)), q1[order], width=1, bottom = q0[order], label='pop_1')
plt.bar(np.arange(len(q2)), q2[order], width=1, bottom = q0[order]+q1[order], label='pop_2')
sns.despine(bottom=True, left=True)
plt.xlabel("Individuals")
plt.ylabel("Ancestry fraction")
plt.legend()
sns.histplot(q0)
sns.histplot(q0), sns.histplot(q1), sns.histplot(q2)
q0 = a0[:, 0::2] + a0[:, 1::2]
q1 = a1[:, 0::2] + a1[:, 1::2]
q2 = a2[:, 0::2] + a2[:, 1::2]
a2
mosaic_anc_dosage.astype(float).sum(0)
np.sum(a2.astype(float), axis =0)
sns.histplot(np.sum(tt.astype(float), axis = 0))
# +
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
print('RFMix2')
print(np.mean(rfmix_anc_r2), rfmix_anc_r2)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
print('MOSAIC')
print(np.mean(mosaic_anc_r2), mosaic_anc_r2)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
print('bmix')
print(np.mean(bmix_anc_r2), bmix_anc_r2)
# -
plot_ancestry_dosage(mosaic_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage, title = 'MOSAIC')
plot_ancestry_dosage(rfmix_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage, title = 'RFMix2')
# +
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_d/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_d/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_d/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_d/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_d/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
print('RFMix2')
print(np.mean(rfmix_anc_r2), rfmix_anc_r2)
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
print('MOSAIC')
print(np.mean(mosaic_anc_r2), mosaic_anc_r2)
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
print('bmix')
print(np.mean(bmix_anc_r2), bmix_anc_r2)
# +
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_e/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_e/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_e/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_e/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_e/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
print('RFMix2')
print(np.mean(rfmix_anc_r2), rfmix_anc_r2)
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
print('MOSAIC')
print(np.mean(mosaic_anc_r2), mosaic_anc_r2)
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
print('bmix')
print(np.mean(bmix_anc_r2), bmix_anc_r2)
# -
plot_ancestry_dosage(mosaic_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage, title = 'MOSAIC')
plot_ancestry_dosage(rfmix_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage, title = 'RFMix2')
# +
print('3pop_b, PEARSON')
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_b/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_b/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_b/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_b/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_b/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
print('RFMix2')
print(np.mean(rfmix_anc_r2), rfmix_anc_r2)
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
print('MOSAIC')
print(np.mean(mosaic_anc_r2), mosaic_anc_r2)
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
print('bmix')
print(np.mean(bmix_anc_r2), bmix_anc_r2)
# +
print('3pop_d, PEARSON')
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_d/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_d/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_d/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_d/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_d/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
print('RFMix2')
print(np.mean(rfmix_anc_r2), rfmix_anc_r2)
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
print('MOSAIC')
print(np.mean(mosaic_anc_r2), mosaic_anc_r2)
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
print('bmix')
print(np.mean(bmix_anc_r2), bmix_anc_r2)
# +
print('3pop_e, PEARSON')
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_e/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_e/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_e/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_e/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_e/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
print('RFMix2')
print(np.mean(rfmix_anc_r2), rfmix_anc_r2)
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
print('MOSAIC')
print(np.mean(mosaic_anc_r2), mosaic_anc_r2)
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage_PEARSON(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
print('bmix')
print(np.mean(bmix_anc_r2), bmix_anc_r2)
# -
# ## 4 population paths
n_anc=4
base_path = "/home/kele/Documents/lai/lai-sim/results/OutOfAfrica_4J17/4pop_79/4pop_test/SUMMARY"
true_path = "/home/kele/Documents/lai/lai-sim/results/OutOfAfrica_4J17/4pop_79/4pop_test/true_local_ancestry.site_matrix.npz"
rf_fb_path = "/home/kele/Documents/lai/lai-sim/results/OutOfAfrica_4J17/4pop_79/4pop_test/RFMix2/rfmix2.fb.tsv"
mosaic_path = '/home/kele/Documents/lai/lai-sim/results/OutOfAfrica_4J17/4pop_79/4pop_test/MOSAIC/la_probs.RData'
bmixpath = '/home/kele/Documents/lai/lai-sim/results/OutOfAfrica_4J17/4pop_79/4pop_test/bmix/bmix.anc.vcf.gz'
# # TODO
# - move to Snakemake
# - write out the diploid ancestry dosage matrices
# - write out the accuracy for each in a file
n_anc=3
base_path = "/home/kele/Documents/lai/lai-sim/results/AmericanAdmixture_4B11/AA_42/3pop_2/SUMMARY"
true_path = "/home/kele/Documents/lai/lai-sim/results/AmericanAdmixture_4B11/AA_42/3pop_2/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/lai-sim/results/AmericanAdmixture_4B11/AA_42/3pop_2/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/lai-sim/results/AmericanAdmixture_4B11/AA_42/3pop_2/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/lai-sim/results/AmericanAdmixture_4B11/AA_42/3pop_2/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
np.mean(rfmix_anc_r2), rfmix_anc_r2
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage,
n_anc=n_anc
)
np.mean(rfmix_anc_r2), rfmix_anc_r2
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
np.mean(mosaic_anc_r2), mosaic_anc_r2
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
np.mean(bmix_anc_r2), bmix_anc_r2
# +
#plot_ancestry_dosage(true_anc_dosage, start_index=0, n_anc=n_anc, reference_dosage=None)
# -
plot_ancestry_dosage(bmix_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage)
plot_ancestry_dosage(mosaic_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage)
plot_ancestry_dosage(rfmix_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage)
n_anc=3
base_path = "/home/kele/Documents/lai/report/results/3pop_d/SUMMARY"
true_path = "/home/kele/Documents/lai/report/results/3pop_d/true_local_ancestry.site_matrix.npz"
rf_fb_path = '/home/kele/Documents/lai/report/results/3pop_d/RFMix2/rfmix2.fb.tsv'
mosaic_path = "/home/kele/Documents/lai/report/results/3pop_d/MOSAIC/la_probs.RData"
bmixpath = '/home/kele/Documents/lai/report/results/3pop_d/bmix/bmix.anc.vcf.gz'
true_anc_dosage = get_true_anc_dosage(load_true_la(true_path), n_anc=n_anc)
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage[:len(true_anc_dosage)],
n_anc=n_anc
)
np.mean(rfmix_anc_r2), rfmix_anc_r2
rfmix_anc_dosage = get_ancestry_dosage(load_rfmix_fb(rf_fb_path), n_anc=n_anc)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage,
n_anc=n_anc
)
np.mean(rfmix_anc_r2), rfmix_anc_r2
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
n_anc=n_anc
)
np.mean(mosaic_anc_r2), mosaic_anc_r2
bmix_anc_dosage = get_ancestry_dosage(load_bmix(bmixpath), n_anc)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
n_anc=n_anc
)
np.mean(bmix_anc_r2), bmix_anc_r2
plot_ancestry_dosage(bmix_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage)
plot_ancestry_dosage(mosaic_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage)
plot_ancestry_dosage(rfmix_anc_dosage, start_index=3, n_anc=n_anc, reference_dosage=true_anc_dosage)
plot_ancestry_dosage(true_anc_dosage, start_index=0, n_anc=n_anc, reference_dosage=None)
plot_ancestry_dosage(bmix_anc_dosage, start_index=4, n_anc=n_anc, reference_dosage=true_anc_dosage)
plot_ancestry_dosage(mosaic_anc_dosage, start_index=4, n_anc=n_anc, reference_dosage=true_anc_dosage)
# ## Why is MOSAIC failing?
mosaic_anc_dosage = get_ancestry_dosage(load_mosaic(mosaic_path), n_anc=n_anc)
import glob
mosaic_path = glob.glob('/home/kele/Documents/lai/lai-sim/results/*/*/*/MOSAIC/*.RData')[1]
#pyreadr.read_r(mosaic_path)
# +
#load_mosaic(mosaic_path)
# -
# !ls /home/kele/3pop_c/MOSAIC/*.RData
# +
# run in R
library('MOSAIC')
model_results = '/home/kele/3pop_c/MOSAIC/admixed.RData'
la_results = '/home/kele/3pop_c/MOSAIC/localanc_admixed.RData'
mosaic_input_dir = '/home/kele/3pop_c/MOSAIC/input/'
load(model_results)
load(la_results)
# localanc gives the local ancestry at each grid point
# get local ancestry probabilities at each SNP
local_pos=grid_to_pos(localanc, mosaic_input_dir, g.loc, chrnos)
dims = dim(local_pos[[1]])
# convert to array and then export
arr = array(unlist(local_pos, use.names=FALSE), dims)
# -
arr2 = array(unlist(local_pos, use.names=FALSE), dims)
output_path = '/home/kele/3pop_c/MOSAIC/la_probs2.RData'
save(arr, file = output_path)
#save(arr, file = simple_output)
#pyreadr.read_r('/home/kele/3pop_c/MOSAIC/la_probs2.RData', use_objects=['arr'])
pyreadr.read_r('/home/kele/3pop_c/MOSAIC/la_probs.rds')
# + active=""
# !cat ~/Documents/lai/lai-sim/env/lib/python3.9/site-packages/pyreadr/librdata.pyx
# -
pyreadr.read_r('/home/kele/Documents/lai/lai-sim/results/AmericanAdmixture_4B11/AA_42/3pop_5/MOSAIC/la_probs.RData')
pyreadr.read_r('/home/kele/3pop_c/MOSAIC/la_probs2.RData', use_objects=['arr'])
true_anc_dosage
bb = load_true_la(true_path)
bb.shape
# !wc -l /home/kele/3pop_c/site.positions
# mean squared difference between the true and bmix ancestry dosages
sqdif = ((true_anc_dosage - bmix_anc_dosage).astype('float')**2).sum().sum()
sqdif / (true_anc_dosage.shape[0]*true_anc_dosage.shape[1])
# root mean squared error per entry
np.sqrt(sqdif/(true_anc_dosage.shape[0]*true_anc_dosage.shape[1]))
sns.histplot(bmix_ind_r2)
plt.show()
plt.scatter(np.arange(len(bmix_ind_r2)), sorted(bmix_ind_r2))
plt.show()
np.mean(bmix_anc_r2), np.mean(bmix_ind_r2)
sns.histplot(rfmix_ind_r2)
plt.show()
plt.scatter(np.arange(len(rfmix_ind_r2)), sorted(rfmix_ind_r2))
plt.show()
np.mean(rfmix_anc_r2), np.mean(rfmix_ind_r2)
# +
## Write R2 tables
with open(os.path.join(base_path, 'R2_score.ancestry.tsv'), 'w') as OUTFILE:
    OUTFILE.write('\t'.join(['method'] + [f'anc_{x}' for x in range(n_anc)]) + '\n')
OUTFILE.write('\t'.join(['rfmix2'] + [str(x) for x in rfmix_anc_r2]) + '\n')
OUTFILE.write('\t'.join(['mosaic'] + [str(x) for x in mosaic_anc_r2]) + '\n')
OUTFILE.write('\t'.join(['bmix'] + [str(x) for x in bmix_anc_r2]) + '\n')
with open(os.path.join(base_path, 'R2_score.individuals.tsv'), 'w') as OUTFILE:
OUTFILE.write('\t'.join(['method'] + [f'ind_{x}' for x in range(len(bmix_ind_r2))]) + '\n')
OUTFILE.write('\t'.join(['rfmix2'] + [str(x) for x in rfmix_ind_r2]) + '\n')
OUTFILE.write('\t'.join(['mosaic'] + [str(x) for x in mosaic_ind_r2]) + '\n')
OUTFILE.write('\t'.join(['bmix'] + [str(x) for x in bmix_ind_r2]) + '\n')
# -
assert False
plot_ancestry_dosage(true_anc_dosage, start_index=3, reference_dosage=None)
plot_ancestry_dosage(rfmix_anc_dosage, start_index=3, reference_dosage=true_anc_dosage)
rfmix_anc_r2, rfmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=rfmix_anc_dosage,
    n_anc=3
)
sns.histplot(rfmix_ind_r2)
plt.show()
plt.scatter(np.arange(len(rfmix_ind_r2)), sorted(rfmix_ind_r2))
plt.show()
np.mean(rfmix_anc_r2), np.mean(rfmix_ind_r2)
# # Mosaic
# Data frame with one row per site.
# After the index columns, there is one entry per (individual) × (haplotype) × (ancestry) combination.
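# As a rough illustration of the layout just described, the per-haplotype probabilities can be collapsed into diploid ancestry dosages. This is only a sketch of what a `get_ancestry_dosage`-style helper has to do; the exact column ordering assumed here, (individual, haplotype, ancestry) with the two haplotypes of an individual adjacent, is an assumption, and `hap_probs_to_dosage` is an illustrative name.

```python
# Sketch (with an assumed column layout) of collapsing per-haplotype ancestry
# probabilities into diploid ancestry dosages.
import numpy as np

def hap_probs_to_dosage(hap_probs, n_anc):
    """(n_sites, n_ind*2*n_anc) probabilities -> (n_sites, n_ind*n_anc) dosages."""
    n_sites, n_cols = hap_probs.shape
    n_ind = n_cols // (2 * n_anc)
    p = hap_probs.reshape(n_sites, n_ind, 2, n_anc)
    # summing over the two haplotypes gives a diploid dosage in [0, 2]
    return p.sum(axis=2).reshape(n_sites, n_ind * n_anc)

# one site, one individual, two haplotypes, three ancestries
probs = np.array([[1.0, 0.0, 0.0, 0.2, 0.8, 0.0]])
print(hap_probs_to_dosage(probs, n_anc=3))  # dosages [1.2, 0.8, 0.0]
```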
plot_ancestry_dosage(mosaic_anc_dosage, start_index=3, reference_dosage=true_anc_dosage)
mosaic_anc_r2, mosaic_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=mosaic_anc_dosage,
    n_anc=3
)
sns.histplot(mosaic_ind_r2)
plt.show()
plt.scatter(np.arange(len(mosaic_ind_r2)), sorted(mosaic_ind_r2))
np.mean(mosaic_anc_r2), np.mean(mosaic_ind_r2)
plot_ancestry_dosage(mosaic_anc_dosage, start_index=62*3, reference_dosage=true_anc_dosage)
# # bmix
plot_ancestry_dosage(bmix_anc_dosage, start_index=62*3, reference_dosage=true_anc_dosage)
# !{BCFTOOLS} query -f '%CHROM, %POS, [%ANP1, %ANP2,]\n' {bmixpath} > {bmixpath.replace('.vcf.gz', '.csv')}
bmix = pd.read_csv(bmixpath.replace('.vcf.gz', '.csv'), header = None)
bmix = bmix.dropna(axis=1)
bmix = bmix.iloc[:,2:]
bmix_anc_dosage = get_ancestry_dosage(bmix.values, n_anc=3)
bmix_anc_r2, bmix_ind_r2 = r2_ancestry_dosage(
true_dosage=true_anc_dosage,
pred_dosage=bmix_anc_dosage,
    n_anc=3
)
np.mean(bmix_anc_r2), np.mean(bmix_ind_r2)
sns.histplot(bmix_ind_r2)
plt.show()
plt.scatter(np.arange(len(bmix_ind_r2)), sorted(bmix_ind_r2))
np.where(bmix_ind_r2 == np.min(bmix_ind_r2))
plt.show()
plot_ancestry_dosage(bmix_anc_dosage, start_index=62*3, reference_dosage=true_anc_dosage)
# ## There is not a strong correlation between the individual-level accuracies of the various methods here
r2_df = pd.DataFrame(data = {'bmix':bmix_ind_r2, 'rfmix':rfmix_ind_r2, 'mosaic':mosaic_ind_r2})
pearsonr(r2_df['bmix'], r2_df['mosaic'])[0]**2, pearsonr(r2_df['bmix'], r2_df['rfmix'])[0]**2, pearsonr(r2_df['mosaic'], r2_df['rfmix'])[0]**2
sns.pairplot(r2_df,
plot_kws = {'alpha': 0.6, 's': 20, 'edgecolor': 'k'})
sns.jointplot(data = r2_df, x='bmix', y='rfmix', color="#4CB391", kind="reg")
def plot_ancestry_dosage(pred_dosage, start_index, n_anc, reference_dosage=None):
"""
only works for 3 ancestries
"""
fig, ax = plt.subplots(figsize = (12, n_anc*1.5), nrows=n_anc, sharex=True, sharey=True)
l0, = ax[0].plot(pred_dosage[:, start_index+0], c='b')
l1, = ax[1].plot(pred_dosage[:, start_index+1], c='orange')
l2, = ax[2].plot(pred_dosage[:, start_index+2], c='green')
plt.legend([l0, l1, l2], ['pop0', 'pop1', 'pop2'])
if reference_dosage is not None:
l0, = ax[0].plot(reference_dosage[:, start_index+0], c='b', alpha=.5, ls='--')
l1, = ax[1].plot(reference_dosage[:, start_index+1], c='orange', alpha=.5, ls='--')
l2, = ax[2].plot(reference_dosage[:, start_index+2], c='green', alpha=.5, ls='--')
fig.tight_layout()
sns.despine(bottom=True)
ax[0].set_title('Ancestry dosage')
    ax[-1].set_xlabel('Site number')
# source notebook: workflow/notebooks/debug ancestry dosage Qscore.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
# language: python
# name: python3
# ---
# +
# x = (2-1/x)**0.5
x = (-1+5**0.5)/2
for _ in range(100):
x = (2-1/x)**0.5
print(x)
# -
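# The cell above iterates $x \leftarrow \sqrt{2 - 1/x}$ starting from $x_0 = \frac{\sqrt{5}-1}{2}$. A short fixed-point analysis (added here as a side note) shows which limits are possible:

```latex
x = \sqrt{2 - \tfrac{1}{x}}
\;\Longrightarrow\; x^{3} - 2x + 1 = 0
\;\Longrightarrow\; (x - 1)\left(x^{2} + x - 1\right) = 0 .
```

# The starting value $(\sqrt{5}-1)/2$ is itself a root of $x^2 + x - 1$, but with $g(x) = \sqrt{2 - 1/x}$ one has $g'(x) = 1/(2x^2 g(x))$, which exceeds $1$ at that root, so it is repelling; rounding error pushes the iterates toward the attracting fixed point $x = 1$, where $g'(1) = 1/2$.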
# source notebook: new_folder/math_prb.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <div class="jumbotron text-left"><b>
#
# This tutorial describes how to perform a mixed optimization using the SMT toolbox. The idea is to use a Bayesian Optimization (EGO method) to solve an unconstrained optimization problem with mixed variables.
# </b></div>
#
# October 2020
#
# <NAME> and <NAME> (ONERA/DTIS/M2CI)
# <p class="alert alert-success" style="padding:1em">
# To use SMT models, please follow this link : https://github.com/SMTorg/SMT/blob/master/README.md. The documentation is available here: http://smt.readthedocs.io/en/latest/
# </p>
#
# The reference paper is available
# here https://www.sciencedirect.com/science/article/pii/S0965997818309360?via%3Dihub
#
# or as a preprint: http://mdolab.engin.umich.edu/content/python-surrogate-modeling-framework-derivatives
# For mixed integer with continuous relaxation, the reference paper is available here https://www.sciencedirect.com/science/article/pii/S0925231219315619
# ### Mixed Integer EGO
# For mixed integer EGO, the surrogate model is the continuous one: the discrete variables are relaxed continuously during the optimization.
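# As a minimal sketch of what continuous relaxation means here (not the SMT internals): the optimizer proposes a continuous point, and the relaxed integer coordinates are projected (rounded) back before the objective is evaluated. `relax_and_project` and the toy `objective` are illustrative names.

```python
# Minimal sketch of continuous relaxation of integer variables.
import numpy as np

def relax_and_project(x_cont, is_int):
    """Project a continuously relaxed point back onto the mixed design space."""
    x = np.asarray(x_cont, dtype=float).copy()
    x[is_int] = np.round(x[is_int])  # integer variables: round to nearest value
    return x

def objective(x):
    # hypothetical objective over one integer and one float variable
    return (x[0] - 3) ** 2 + (x[1] - 0.5) ** 2

x_relaxed = np.array([2.7, 0.4])  # what a continuous optimizer might propose
x_mixed = relax_and_project(x_relaxed, np.array([True, False]))
print(x_mixed)  # the objective is evaluated at the projected point [3.0, 0.4]
```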
# +
# %matplotlib inline
from math import exp
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import norm
from scipy.optimize import minimize
import scipy
import six
from smt.applications import EGO
from smt.surrogate_models import KRG
from smt.sampling_methods import FullFactorial
from smt.sampling_methods import LHS
from sklearn import gaussian_process
from sklearn.gaussian_process.kernels import Matern, WhiteKernel, ConstantKernel
import matplotlib.font_manager
from smt.applications.mixed_integer import MixedIntegerSurrogateModel
import warnings
warnings.filterwarnings("ignore")
from smt.applications.mixed_integer import (
FLOAT,
ORD,
ENUM,
MixedIntegerSamplingMethod,
cast_to_mixed_integer, unfold_with_enum_mask
)
# -
# Definition of the plot function
def PlotEgo(criterion, xdoe, bounds,npt,n_iter=12,xtypes=None, sm=KRG(print_global=False)) :
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe,xtypes=xtypes, xlimits=bounds,n_start=20,n_max_optim=35,enable_tunneling=False, surrogate=sm)
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=f)
print("Minimum in x={:.0f} with f(x)={:.10f}".format(int(x_opt), float(y_opt)))
x_plot = np.atleast_2d(np.linspace(bounds[0][0], bounds[0][1], 9*(npt-1)+1)).T
fig = plt.figure(figsize=[15, 15])
for i in range(n_iter):
k = n_doe + i
x_data_k = x_data[0:k]
y_data_k = y_data[0:k]
        # evaluate the true function at the newly proposed point
y_data[k]=f(x_data[k][:, np.newaxis])
ego.gpr.set_training_values(x_data_k, y_data_k)
ego.gpr.train()
y_gp_plot = ego.gpr.predict_values(x_plot)
y_gp_plot_var = ego.gpr.predict_variances(x_plot)
y_ei_plot = ego.EI(x_plot,False)
ax = fig.add_subplot((n_iter + 1) // 2, 2, i + 1)
ax1 = ax.twinx()
ei, = ax1.plot(x_plot, y_ei_plot, color="red")
true_fun = ax.scatter(Xsol, Ysol,color='k',marker='d')
data, = ax.plot(
x_data_k, y_data_k, linestyle="", marker="o", color="orange"
)
if i < n_iter - 1:
opt, = ax.plot(
x_data[k], y_data[k], linestyle="", marker="*", color="r"
)
print(x_data[k], y_data[k])
gp, = ax.plot(x_plot, y_gp_plot, linestyle="--", color="g")
sig_plus = y_gp_plot + 3 * np.sqrt(y_gp_plot_var)
sig_moins = y_gp_plot - 3 * np.sqrt(y_gp_plot_var)
un_gp = ax.fill_between(
x_plot.T[0], sig_plus.T[0], sig_moins.T[0], alpha=0.3, color="g"
)
lines = [true_fun, data, gp, un_gp, opt, ei]
fig.suptitle("EGO optimization of a set of points")
fig.subplots_adjust(hspace=0.4, wspace=0.4, top=0.8)
ax.set_title("iteration {}".format(i + 1))
fig.legend(
lines,
[
"set of points",
"Given data points",
"Kriging prediction",
"Kriging 99% confidence interval",
"Next point to evaluate",
"Expected improvment function",
],
)
plt.show()
# ## Local minimum trap: 1D function
# The 1D function to optimize is described by:
# - 1 discrete variable $\in [0, 25]$
#definition of the 1D function
def f(X) :
x= X[:, 0]
if (np.abs(np.linalg.norm(np.floor(x))-np.linalg.norm(x))< 0.000001):
y = (x - 3.5) * np.sin((x - 3.5) / (np.pi))
else :
print("error")
return y
# +
#to plot the function
bounds = np.array([[0, 25]])
npt=26
Xsol = np.linspace(bounds[0][0],bounds[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("Min of the DOE: ",np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
# -
#to run the optimization process
n_iter = 10
xdoe = np.atleast_2d([0,10]).T
n_doe = xdoe.size
xtypes=[ORD]
criterion = "EI" #'EI' or 'SBO' or 'LCB'
PlotEgo(criterion,xdoe,bounds,npt,n_iter,xtypes=xtypes)
# On this 1D test case, 4 iterations are required to find the global minimum, evaluated at iteration 5.
# ## 1D function with noisy values
# The 1D function to optimize is described by:
# - 1 discrete variable $\in [0, 60]$
def f(X) :
x= X[:, 0]
y = -np.square(x-25)/220+0.25*(np.sin((x - 3.5) * np.sin((x - 3.5) / (np.pi)))+np.cos(x**2))
np.random.seed(10)
y2 = y+3*np.random.uniform(size=y.shape)
return -y2
# +
#to plot the function
xlimits = np.array([[0, 60]])
npt=61
Xsol = np.linspace(xlimits[0][0],xlimits[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("min of the DOE: ", np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
# -
#to run the optimization process
n_iter = 10
n_doe = 2
xtypes = [ORD]
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
criterion = "EI" #'EI' or 'SBO' or 'LCB'
sm=KRG(print_global=False,eval_noise= True)
PlotEgo(criterion,xdoe,xlimits,npt,n_iter,xtypes,sm=sm)
# - On this noisy case, it took 7 iterations to learn the shape of the curve, but then time was spent exploring the random noise around the minimum.
# ## 2D mixed branin function
# The 2D function to optimize is described by:
# - 1 discrete variable $\in [-5, 10]$
# - 1 continuous variable $\in [0., 15.]$
#definition of the 2D function
#the first variable is a integer one and the second one is a continuous one
import math
def f(X) :
x1 = X[:,0]
x2 = X[:,1]
PI = math.pi #3.14159265358979323846
a = 1
b = 5.1/(4*np.power(PI,2))
c = 5/PI
r = 6
s = 10
t = 1/(8*PI)
y= a*(x2 - b*x1**2 + c*x1 -r)**2 + s*(1-t)*np.cos(x1) + s
return y
#to define and compute the doe
xtypes = [ORD, FLOAT]
xlimits = np.array([[-5.0, 10.0],[0.0,15.0]])
n_doe=20
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xt = sampling(n_doe)
yt = f(xt)
# +
#to build the mixed surrogate model
sm = MixedIntegerSurrogateModel(xtypes=xtypes, xlimits=xlimits, surrogate=KRG())
sm.set_training_values(xt, yt)
sm.train()
num = 100
x = np.linspace(-5.0,10., 100)
y = np.linspace(0,15., 100)
xv, yv = np.meshgrid(x, y)
x_plot= np.array([np.ravel(xv), np.ravel(yv)]).T
y_plot = f(np.floor(x_plot))
fig = plt.figure(figsize=[14, 7])
y_gp_plot = sm.predict_values(x_plot)
y_gp_plot_sd = np.sqrt(sm.predict_variances(x_plot))
l=y_gp_plot-3*y_gp_plot_sd
h=y_gp_plot+3*y_gp_plot_sd
ax = fig.add_subplot(1, 3, 1, projection='3d')
ax1 = fig.add_subplot(1, 3, 2, projection='3d')
ax2 = fig.add_subplot(1, 3,3)
ii=-100
ax.view_init(elev=15., azim=ii)
ax1.view_init(elev=15., azim=ii)
true_fun = ax.plot_surface(xv, yv, y_plot.reshape((100, 100)), label ='true_function',color='g')
data3 = ax2.scatter(xt.T[0],xt.T[1],s=60,marker="o",color="orange")
gp1 = ax1.plot_surface(xv, yv, l.reshape((100, 100)), color="b")
gp2 = ax1.plot_surface(xv, yv, h.reshape((100, 100)), color="r")
gp3 = ax2.contour(xv, yv, y_gp_plot.reshape((100, 100)), color="k", levels=[0,1,2,5,10,20,30,40,50,60])
fig.suptitle("Mixed Branin function surrogate")
ax.set_title("True model")
ax1.set_title("surrogate model, DOE of size {}".format(n_doe))
ax2.set_title("surrogate mean response")
# -
# - On the left, we have the real model in green.
# - In the middle we have the mean surrogate $+3\times \mbox{ standard deviation}$ (red) and the mean surrogate $-3\times \mbox{ standard deviation}$ (blue) in order to represent an approximation of the $99\%$ confidence interval.
#
# - On the right, the contour plot of the mean surrogate are given where yellow points are the values at the evaluated points (DOE).
# ## 4D mixed test case
# The 4D function to optimize is described by:
# - 1 continuous variable $\in [-5, 5]$
# - 1 categorical variable with 3 labels $["blue", "red", "green"]$
# - 1 categorical variable with 2 labels $ ["large", "small"]$
# - 1 discrete variable $\in [0, 2]$
# +
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 0]
# enum 1
c1 = X[:, 1]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 2]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 3]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [FLOAT, (ENUM, 3), (ENUM, 2), ORD]
xlimits = np.array([[-5, 5], ["blue", "red", "green"], ["large", "small"], [0, 2]])
criterion = "EI" #'EI' or 'SBO' or 'LCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 3
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
# -
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
# ## Manipulate the DOE
# +
#to give the initial doe in the initial space
print('Initial DOE in the initial space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (cast_to_mixed_integer(xtypes, xlimits, xdoe[i]))),'\n')
#to give the initial doe in the relaxed space
print('Initial DOE in the unfold space (or relaxed space): ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (unfold_with_enum_mask(xtypes, xdoe[i]))),'\n')
#to print the used DOE
print('Initial DOE in the fold space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), xdoe[i]),'\n')
# -
# # Gower-based mixed surrogate model for a 2D function
# The function is described by:
# - 1 continuous variable $\in [0, 4]$
# - 1 categorical variable with 2 labels $["Blue", "Red"]$
# For mixed integer with Gower distance, the reference thesis is available here https://eldorado.tu-dortmund.de/bitstream/2003/35773/1/Dissertation_%20Halstrup.pdf
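# As a toy illustration of the Gower distance underlying this approach (a sketch, not the SMT implementation): categorical variables contribute a 0/1 mismatch, continuous variables a range-normalized absolute difference, and the per-variable terms are averaged.

```python
# Toy Gower distance between two mixed (categorical + continuous) points.
import numpy as np

def gower_distance(u, v, is_cat, ranges):
    """Mean of per-variable Gower dissimilarities between points u and v."""
    cont = np.abs(u - v) / ranges  # continuous term (range-normalized)
    cat = (u != v).astype(float)   # categorical term (0/1 mismatch)
    return np.where(is_cat, cat, cont).mean()

# one categorical (encoded 0="Blue", 1="Red") and one continuous variable on [0, 4]
is_cat = np.array([True, False])
ranges = np.array([1.0, 4.0])  # the categorical entry of `ranges` is unused
a = np.array([0.0, 1.0])
b = np.array([1.0, 3.0])
print(gower_distance(a, b, is_cat, ranges))  # (1 + 0.5) / 2 = 0.75
```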
# +
import numpy as np
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
from smt.applications.mixed_integer import MixedIntegerSurrogateModel, ENUM,ORD,FLOAT,GOWER, HOMO_GAUSSIAN
xt1 = np.array([[0,0.0],
[0,1.0],
[0,4.0]])
xt2 = np.array([[1,0.0],
[1,1.0],
[1,2.0],
[1,3.0]])
xt = np.concatenate((xt1, xt2), axis=0)
xt[:,1] = xt[:,1].astype(float)  # np.float was removed in NumPy 1.24
yt1 = np.array([0.0, 9.0, 16.0])
yt2 = np.array([ 0.0, 1.0,8.0,27.0])
yt = np.concatenate((yt1, yt2), axis=0)
xlimits = [["Blue","Red"],[0.0,4.0]]
xtypes=[(ENUM, 2),FLOAT]
# Surrogate
sm = MixedIntegerSurrogateModel(categorical_kernel = HOMO_GAUSSIAN, xtypes=xtypes, xlimits=xlimits, surrogate=KRG(theta0=[1e-2]))
sm.set_training_values(xt, yt)
sm.train()
# DOE for validation
n = 100
x_cat1 = []
x_cat2 = []
for i in range(n):
x_cat1.append(0)
x_cat2.append(1)
x_cont = np.linspace(0.0, 4.0, n)
x1 = np.concatenate((np.asarray(x_cat1).reshape(-1,1), x_cont.reshape(-1,1)), axis=1)
x2 = np.concatenate((np.asarray(x_cat2).reshape(-1,1), x_cont.reshape(-1,1)), axis=1)
y1 = sm.predict_values(x1)
y2 = sm.predict_values(x2)
# estimated variance
s2_1 = sm.predict_variances(x1)
s2_2 = sm.predict_variances(x2)
fig, axs = plt.subplots(2)
axs[0].plot(xt1[:,1].astype(float), yt1,'o',linestyle="None")
axs[0].plot(x_cont, y1,color ='Blue')
axs[0].fill_between(
np.ravel(x_cont),
np.ravel(y1 - 3 * np.sqrt(s2_1)),
np.ravel(y1 + 3 * np.sqrt(s2_1)),
color="lightgrey",
)
axs[0].set_xlabel("x")
axs[0].set_ylabel("y")
axs[0].legend(
["Training data", "Prediction", "Confidence Interval 99%"],
loc="upper left",
)
axs[1].plot(xt2[:,1].astype(float), yt2, marker='o', color='r',linestyle="None")
axs[1].plot(x_cont, y2,color ='Red')
axs[1].fill_between(
np.ravel(x_cont),
np.ravel(y2 - 3 * np.sqrt(s2_2)),
np.ravel(y2 + 3 * np.sqrt(s2_2)),
color="lightgrey",
)
axs[1].set_xlabel("x")
axs[1].set_ylabel("y")
axs[1].legend(
["Training data", "Prediction", "Confidence Interval 99%"],
loc="upper left",
)
plt.show()
# -
# ## Gower-based mixed optimization of a 4D function
# +
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 3]
# enum 1
c1 = X[:, 0]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 1]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 2]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [(ENUM, 3), (ENUM, 2), ORD,FLOAT]
xlimits = np.array([["blue", "red", "green"], ["large", "small"], ["0","1","2"],[-5, 5]])
criterion = "EI" #'EI' or 'SBO' or 'LCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
categorical_kernel= GOWER,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
# -
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
# # Group-kernel-based mixed optimization of a 4D function
# There are two distinct models: the homoscedastic one (HOMO_GAUSSIAN), which does not consider different variances between the variables, and the heteroscedastic one (HETERO_GAUSSIAN).
# For mixed integer with Group kernels, the reference thesis is available here https://hal.inria.fr/tel-03113542/document
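# As a toy illustration of the homoscedastic assumption (a sketch, not SMT's group-kernel implementation): every pair of categorical levels shares a single cross-correlation c, giving an exchangeable correlation matrix.

```python
# Exchangeable correlation matrix over the levels of one categorical variable.
import numpy as np

def exchangeable_corr(n_levels, c):
    """Correlation matrix with unit diagonal and a single shared off-diagonal c."""
    return (1.0 - c) * np.eye(n_levels) + c * np.ones((n_levels, n_levels))

R = exchangeable_corr(3, 0.4)
print(R)
print(np.all(np.linalg.eigvalsh(R) > 0))  # True: PSD whenever -1/(n-1) < c < 1
```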
# +
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 3]
# enum 1
c1 = X[:, 0]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 1]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 2]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [(ENUM, 3), (ENUM, 2), ORD,FLOAT]
xlimits = np.array([["blue", "red", "green"], ["large", "small"], ["0","1","2"],[-5, 5]])
criterion = "EI" #'EI' or 'SBO' or 'LCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
categorical_kernel= HOMO_GAUSSIAN,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
# -
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
| tutorial/SMT_MixedInteger_application.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.2
# language: julia
# name: julia-1.4
# ---
#
# <a id='tools-and-techniques'></a>
# <div id="qe-notebook-header" style="text-align:right;">
# <a href="https://quantecon.org/" title="quantecon.org">
# <img style="width:250px;display:inline;" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
# </a>
# </div>
# # Tools and Techniques
#
# This section of the course contains foundational mathematical and statistical
# tools and techniques.
# ## Lectures
#
# - [Linear Algebra](linear_algebra.html)
# - [Overview](linear_algebra.html#overview)
# - [Vectors](linear_algebra.html#vectors)
# - [Matrices](linear_algebra.html#matrices)
# - [Solving Systems of Equations](linear_algebra.html#solving-systems-of-equations)
# - [Eigenvalues and Eigenvectors](linear_algebra.html#eigenvalues-and-eigenvectors)
# - [Further Topics](linear_algebra.html#further-topics)
# - [Exercises](linear_algebra.html#exercises)
# - [Solutions](linear_algebra.html#solutions)
# - [Orthogonal Projections and Their Applications](orth_proj.html)
# - [Overview](orth_proj.html#overview)
# - [Key Definitions](orth_proj.html#key-definitions)
# - [The Orthogonal Projection Theorem](orth_proj.html#the-orthogonal-projection-theorem)
# - [Orthonormal Basis](orth_proj.html#orthonormal-basis)
# - [Projection Using Matrix Algebra](orth_proj.html#projection-using-matrix-algebra)
# - [Least Squares Regression](orth_proj.html#least-squares-regression)
# - [Orthogonalization and Decomposition](orth_proj.html#orthogonalization-and-decomposition)
# - [Exercises](orth_proj.html#exercises)
# - [Solutions](orth_proj.html#solutions)
# - [LLN and CLT](lln_clt.html)
# - [Overview](lln_clt.html#overview)
# - [Relationships](lln_clt.html#relationships)
# - [LLN](lln_clt.html#lln)
# - [CLT](lln_clt.html#clt)
# - [Exercises](lln_clt.html#exercises)
# - [Solutions](lln_clt.html#solutions)
# - [Linear State Space Models](linear_models.html)
# - [Overview](linear_models.html#overview)
# - [The Linear State Space Model](linear_models.html#the-linear-state-space-model)
# - [Distributions and Moments](linear_models.html#distributions-and-moments)
# - [Stationarity and Ergodicity](linear_models.html#stationarity-and-ergodicity)
# - [Noisy Observations](linear_models.html#noisy-observations)
# - [Prediction](linear_models.html#prediction)
# - [Code](linear_models.html#code)
# - [Exercises](linear_models.html#exercises)
# - [Solutions](linear_models.html#solutions)
# - [Finite Markov Chains](finite_markov.html)
# - [Overview](finite_markov.html#overview)
# - [Definitions](finite_markov.html#definitions)
# - [Simulation](finite_markov.html#simulation)
# - [Marginal Distributions](finite_markov.html#marginal-distributions)
# - [Irreducibility and Aperiodicity](finite_markov.html#irreducibility-and-aperiodicity)
# - [Stationary Distributions](finite_markov.html#stationary-distributions)
# - [Ergodicity](finite_markov.html#ergodicity)
# - [Computing Expectations](finite_markov.html#computing-expectations)
# - [Exercises](finite_markov.html#exercises)
# - [Solutions](finite_markov.html#solutions)
# - [Continuous State Markov Chains](stationary_densities.html)
# - [Overview](stationary_densities.html#overview)
# - [The Density Case](stationary_densities.html#the-density-case)
# - [Beyond Densities](stationary_densities.html#beyond-densities)
# - [Stability](stationary_densities.html#stability)
# - [Exercises](stationary_densities.html#exercises)
# - [Solutions](stationary_densities.html#solutions)
# - [Appendix](stationary_densities.html#appendix)
# - [A First Look at the Kalman Filter](kalman.html)
# - [Overview](kalman.html#overview)
# - [The Basic Idea](kalman.html#the-basic-idea)
# - [Convergence](kalman.html#convergence)
# - [Implementation](kalman.html#implementation)
# - [Exercises](kalman.html#exercises)
# - [Solutions](kalman.html#solutions)
# - [Numerical Linear Algebra and Factorizations](numerical_linear_algebra.html)
# - [Overview](numerical_linear_algebra.html#overview)
# - [Factorizations](numerical_linear_algebra.html#factorizations)
# - [Continuous-Time Markov Chains (CTMCs)](numerical_linear_algebra.html#continuous-time-markov-chains-ctmcs)
# - [Banded Matrices](numerical_linear_algebra.html#banded-matrices)
# - [Implementation Details and Performance](numerical_linear_algebra.html#implementation-details-and-performance)
# - [Exercises](numerical_linear_algebra.html#exercises)
# - [Krylov Methods and Matrix Conditioning](iterative_methods_sparsity.html)
# - [Overview](iterative_methods_sparsity.html#overview)
# - [Ill-Conditioned Matrices](iterative_methods_sparsity.html#ill-conditioned-matrices)
# - [Stationary Iterative Algorithms for Linear Systems](iterative_methods_sparsity.html#stationary-iterative-algorithms-for-linear-systems)
# - [Krylov Methods](iterative_methods_sparsity.html#krylov-methods)
# - [Iterative Methods for Linear Least Squares](iterative_methods_sparsity.html#iterative-methods-for-linear-least-squares)
# - [Iterative Methods for Eigensystems](iterative_methods_sparsity.html#iterative-methods-for-eigensystems)
# - [Krylov Methods for Markov-Chain Dynamics](iterative_methods_sparsity.html#krylov-methods-for-markov-chain-dynamics)
| tools_and_techniques/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import xarray as xr
import matplotlib.pyplot as plt
import numpy as np
from glob import glob
plt.rcParams['figure.figsize'] = (15.0, 8.0)
adir_data = 'F:/data/cruise_data/saildrone/sss/sss_collocations_8day_nearest_norepeat/'
saildrone_filenames_rss = glob(adir_data+'*arctic_misst*rss*.nc')
#print(saildrone_filenames_rss)
saildrone_filenames_jpl = glob(adir_data+'*arctic_misst*jpl*.nc')
ds=[]
for iusv in range(2):
fname=saildrone_filenames_rss[iusv]
ds_usv=xr.open_dataset(fname)#.swap_dims({'ob':'time'})#.isel(trajectory=0).swap_dims({'obs':'time'})
ds_usv.close()
ds_usv=ds_usv.where(ds_usv.sat_sss_smap>10)
ds.append(ds_usv)
ds_saildrone_rss = xr.concat(ds,dim='trajectory')
ds=[]
for iusv in range(2):
fname=saildrone_filenames_jpl[iusv]
ds_usv=xr.open_dataset(fname)#.swap_dims({'ob':'time'})#.isel(trajectory=0).swap_dims({'obs':'time'})
ds_usv.close()
ds_usv=ds_usv.where(ds_usv.sat_smap_sss>10)
ds.append(ds_usv)
ds_saildrone_jpl = xr.concat(ds,dim='trajectory')
plt.scatter(ds_saildrone_rss.time.dt.dayofyear,ds_saildrone_rss.sat_sss_smap,label='RSS') #sat_smap_sss)
plt.scatter(ds_saildrone_jpl.time.dt.dayofyear,ds_saildrone_jpl.sat_smap_sss,c='r',label='JPL') #sat_smap_sss)
plt.grid()
plt.legend()
plt.savefig('f:/sss_fig4_norepeat.png')
adir_data = 'F:/data/cruise_data/saildrone/sss/sss_collocations_8day_nearest/'
saildrone_filenames_rss = glob(adir_data+'*arctic_misst*rss*.nc')
#print(saildrone_filenames_rss)
saildrone_filenames_jpl = glob(adir_data+'*arctic_misst*jpl*.nc')
ds=[]
for iusv in range(2):
fname=saildrone_filenames_rss[iusv]
ds_usv=xr.open_dataset(fname)#.swap_dims({'ob':'time'})#.isel(trajectory=0).swap_dims({'obs':'time'})
ds_usv.close()
# ds_usv=ds_usv.where(ds_usv.sat_sss_smap>10)
ds.append(ds_usv)
ds_saildrone_rss = xr.concat(ds,dim='trajectory')
ds=[]
for iusv in range(2):
fname=saildrone_filenames_jpl[iusv]
ds_usv=xr.open_dataset(fname)#.swap_dims({'ob':'time'})#.isel(trajectory=0).swap_dims({'obs':'time'})
ds_usv.close()
# ds_usv=ds_usv.where(ds_usv.sat_smap_sss>10)
ds.append(ds_usv)
ds_saildrone_jpl = xr.concat(ds,dim='trajectory')
for iusv in range(1):
plt.plot(ds_saildrone_rss.time.dt.dayofyear,ds_saildrone_rss.sat_sss_smap[iusv,:],label='RSS') #sat_smap_sss)
plt.plot(ds_saildrone_jpl.time.dt.dayofyear,ds_saildrone_jpl.sat_smap_sss[iusv,:],c='r',label='JPL') #sat_smap_sss)
plt.plot(ds_saildrone_jpl.time.dt.dayofyear,ds_saildrone_jpl.SAL_CTD_MEAN[iusv,:],c='k',label='Saildrone CTD')
plt.grid()
plt.legend()
plt.savefig('f:/sss_fig4.png')
ds_saildrone_rss
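# The `ds.where(...)` masking used above keeps the arrays aligned by replacing out-of-range satellite salinities with NaN instead of dropping points; a minimal NumPy analogue of that step (the values below are hypothetical):

```python
import numpy as np

# Hypothetical SMAP salinities, including fill/bad values
sal = np.array([32.1, -999.0, 31.8, 5.0, 33.0])

# Mimic ds.where(ds.sat_sss_smap > 10): keep values above 10, NaN elsewhere
masked = np.where(sal > 10, sal, np.nan)
```

# Masking rather than dropping is what lets the RSS and JPL collocations stay the same length as the original trajectories, so they can still be concatenated along the `trajectory` dimension.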
| test_figure3_gap_rss_jpl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
# + id="a42M43IPu1cD"
import pandas as pd
import lxml.etree
tree = lxml.etree.parse('Batchablauf.xml')
root = tree.getroot()
df_batch = pd.DataFrame(columns = ['batchId', 'batchName', 'batchKurzname', 'note'])
predecessor_successor_array = [[]]
batches=root.findall('.//Heading[@Level="2"]')
index = 0
for i, batch in enumerate(batches, start=0):
batchId = batch.attrib['UUID']
batchName = batch.find('HeadingText[1]').text
batchKurzname_result = batch.xpath('.//HeadingText[text()="Kurzname"]/following-sibling::Paragraph/run')
batchKurzname = batchKurzname_result[0].text if len(batchKurzname_result) > 0 else batchName
df_batch.loc[index] = [batchId, batchName, batchKurzname, ""]
index += 1
successor = batch.xpath('.//HeadingText[text()="Ziel"]/following-sibling::Heading')[0]
#successorId = batch.xpath('.//HeadingText[text()="Ziel"]/following-sibling::Heading/attribute::UUID')[0]
successorId = successor.attrib['UUID']
if len(df_batch[df_batch['batchId'] == successorId]) == 0:
successorName = successor.find('HeadingText[1]').text
df_batch.loc[index] = [successorId, successorName, successorName, ""]
index += 1
predecessor_successor_array.append([batchId,successorId])
df_graph = pd.DataFrame(columns = ['predecessorId', 'successorId'], data=predecessor_successor_array)
df_graph.dropna(axis=0, how='any', thresh=None, subset=None, inplace=True)
df_graph.to_csv('batchgraph.csv', index=False)
df_batch.to_csv('batch.csv', index=False)
# -
#print(df_batch.loc[df_batch['batchId'] == '9210d55d-e720-071a-2ab0-b916396cc556'])
print(df_batch.loc[df_batch['batchId'] == 'fc1db4a1-dbbc-db62-a574-eb92da4a606b'])
df_batch.head()
# +
# Anaconda Prompt:
# conda install -c alubbock pygraphviz
# dot -c
import networkx as nx
import matplotlib.pyplot as plt
from networkx.drawing.nx_agraph import graphviz_layout
G = nx.from_pandas_edgelist(df_graph, 'predecessorId', 'successorId', create_using=nx.DiGraph)
label_dict = df_batch.set_index('batchId')['batchKurzname'].to_dict()
nx.relabel_nodes(G,label_dict,False)
pos = graphviz_layout(G, prog="dot", root=1, args='-Gsplines=true -Gsep=1 -Goverlap=false -Gorientation=10')
plt.figure(figsize=(100, 8))
#plt.gca().invert_yaxis()
nx.draw(G, pos=pos, with_labels=True, node_shape="s", node_size=400, linewidths=1)
# -
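# The extraction above relies on lxml's full XPath support (`following-sibling`, `text()=...` predicates). The simpler attribute lookups can be sketched with the standard library alone, on a hypothetical two-batch document:

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature version of the Batchablauf.xml structure
xml_snippet = """
<Document>
  <Heading Level="2" UUID="aaa-111"><HeadingText>Batch A</HeadingText></Heading>
  <Heading Level="2" UUID="bbb-222"><HeadingText>Batch B</HeadingText></Heading>
  <Heading Level="3" UUID="ccc-333"><HeadingText>Sub-step</HeadingText></Heading>
</Document>
"""
root = ET.fromstring(xml_snippet)

# Same attribute predicate as the lxml version: level-2 headings only
batches = root.findall('.//Heading[@Level="2"]')
rows = [(b.attrib["UUID"], b.find("HeadingText").text) for b in batches]
```

# ElementTree only supports a subset of XPath, which is why the notebook uses lxml for the `following-sibling` navigation between a batch and its successor.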
| python/notebooks/xml2csv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Relative Permeability Estimation
# This is an example of a relative permeability calculation using the metrics algorithm in OpenPNM. First, we create a network and assign geometry, phases and physics in the same way as in the other examples.
import numpy as np
import pandas as pd
import openpnm as op
# %config InlineBackend.figure_formats = ['svg']
import openpnm.models.physics as pmods
import matplotlib.pyplot as plt
pn = op.network.Cubic(shape=[15,15,15], spacing=6e-5)
geom = op.geometry.StickAndBall(network=pn, pores=pn['pore.all'],
throats=pn['throat.all'])
air = op.phases.Air(network=pn,name='air')
water = op.phases.Water(network=pn,name='water')
air.add_model(propname='throat.hydraulic_conductance',
model=pmods.hydraulic_conductance.hagen_poiseuille)
air.add_model(propname='throat.entry_pressure',
model=pmods.capillary_pressure.washburn)
water.add_model(propname='throat.hydraulic_conductance',
model=pmods.hydraulic_conductance.hagen_poiseuille)
water.add_model(propname='throat.entry_pressure',
model=pmods.capillary_pressure.washburn)
# The only other argument that needs to be passed to the metrics relative permeability algorithm is the invasion sequence (we made it a user-defined sequence so that the user has the option to implement the drainage process in any direction using any algorithm). The invasion sequence can be obtained by running invasion percolation on the network. Assuming a drainage process, the air (invading/non-wetting phase) will be invading the medium.
#
# In the following code, we find the invasion sequence by applying invasion percolation through a user-defined inlet face (here the top surface pores). By updating the air phase, the invasion sequence can then be found using the phase occupancy, which is a property of the phase. This calculation is all done inside the metrics relative permeability algorithm without any user contribution. We encourage users to consult the source code of this algorithm for more information.
ip = op.algorithms.InvasionPercolation(network=pn, phase=air)
Finlets= pn.pores('top')
ip.set_inlets(pores=Finlets)
ip.run()
air.update(ip.results())
# Having the network and invasion sequence, we can now use the metrics relative permeability algorithm. These are the minimum required arguments for the algorithm to run. If we do not pass the defending phase to the algorithm, it does not report anything about the defending phase relative permeability. If we do not define the flow direction, it automatically calculates the relative permeability in all three directions.
rp = op.algorithms.metrics.RelativePermeability(network=pn)
rp.settings.update({'nwp': 'air',
'invasion_sequence': 'invasion_sequence'})
rp.run(Snwp_num=10)
# Once the algorithm is run, the output can either be a table of values or a graph showing the relative permeability curves of the phase(s). Here we call both of those methods to see the outputs.
results=rp.get_Kr_data()
pd.DataFrame(results['kr_nwp'])
fig = rp.plot_Kr_curves()
# In order to get the relative permeability curves of both phases, we need to pass the defending phase as an argument to the algorithm.
rp = op.algorithms.metrics.RelativePermeability(network=pn)
rp.settings.update({'nwp': 'air',
'wp': 'water',
'invasion_sequence': 'invasion_sequence'})
rp.run(Snwp_num=10)
fig = rp.plot_Kr_curves()
# The algorithm can also find the relative permeabilities of the phase(s) in the user-defined flow direction(s). The algorithm overwrites the flow inlets/outlets for the user-defined direction. Then calculates the relative permeability through the other directions from the default settings. This is illustrated as following.
rp = op.algorithms.metrics.RelativePermeability(network=pn)
inlets = {'x': 'top'}
outlets = {'x': 'bottom'}
rp.settings.update({'nwp': 'air',
'wp': 'water',
'invasion_sequence': 'invasion_sequence'
})
rp.settings['flow_inlets'].update(inlets)
rp.settings['flow_outlets'].update(outlets)
rp.run(Snwp_num=10)
fig = rp.plot_Kr_curves()
# As we can see, the Kr values for the x and z directions are the same, because we changed the inlets/outlets in the x direction in such a way that it is equivalent to the algorithm's default z direction. If we pass the flow inlets/outlets to the algorithm, it overwrites its default pores with the passed arguments corresponding to that direction. The same procedure can be applied to a 2D model.
# Note: The direction of the flow is found from the Cartesian coordinate direction corresponding to the boundaries. This rule also allows users to define their boundary pores easily. For example, if the user defines the 'x' direction boundary pores, the algorithm overwrites the default boundary pores related to the 'x' direction, while any other boundaries (which depend on the shape of the network) are automatically assigned the default pores in the other directions. Note that the algorithm distinguishes whether a network is 2D by looking at the number of labeled boundary faces; this determines the shape of the network and hence the corresponding default boundary pores. As mentioned before, the algorithm will not plot the defending phase curves if that phase is not passed as an (optional) argument.
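# As a conceptual aside (a toy Corey-type sketch, not OpenPNM's conductance-based calculation), relative permeability curves are bounded by [0, 1] and the two phases move in opposite directions as the non-wetting saturation grows:

```python
import numpy as np

# Non-wetting phase saturation from fully drained to fully invaded
snwp = np.linspace(0.0, 1.0, 10)

# Toy Corey exponents, purely illustrative of the expected curve shapes
kr_nwp = snwp ** 2          # non-wetting kr rises with its own saturation
kr_wp = (1.0 - snwp) ** 2   # wetting kr falls as it is displaced
```

# The curves produced by `plot_Kr_curves` follow this qualitative pattern, although their exact shape comes from the pore-network conductances rather than an analytical model.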
| examples/notebooks/algorithms/multiphase/relative_permeability_metrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": true, "name": "#%%\n"}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
# %matplotlib inline
# -
# **Gather Data for Seattle and Boston**
# +
# read calendar seattle
df_cal_s = pd.read_csv('./calendar_seattle.csv')
# read calendar boston
df_cal_b = pd.read_csv('./calendar_boston.csv')
# -
# **Assess Data**
df_cal_s.head() # seattle
df_cal_b.head() # boston
# Business Question 1
# How are the home prices distributed, and are there differences between the two cities?
#
# Data Preparation
# - dropping NA price values because they don't provide information for this analysis
# - formatting datetime and using date as index
# - categorizing the price values into ranges to make the plot more expressive
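# The categorization step can be sketched with `pd.cut` on a handful of hypothetical nightly prices:

```python
import pandas as pd

# Hypothetical nightly prices in dollars
prices = pd.Series([19.0, 60.0, 99.0, 140.0, 450.0])

# Same bin edges as used for the distribution plot below
bins = [0, 25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 300, 500, 1000, 3000]
cats = pd.cut(prices, bins=bins, include_lowest=True)

# Share of listings falling into each price range
shares = cats.value_counts(sort=False) / len(prices)
```

# `include_lowest=True` makes the first interval closed on the left, so a price of exactly 0 would still be binned rather than dropped as NaN.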
# **Clean Data and analyze**
# +
# clean price values
df_cal_s['price'] = df_cal_s['price'].replace({r'\$':''}, regex = True).dropna().squeeze()
df_cal_s['price'] = pd.to_numeric(df_cal_s['price'], errors='coerce')
# format datetime
df_cal_s['date'] = pd.to_datetime(df_cal_s[['date']].squeeze())
df_cal_s = df_cal_s.set_index('date')
# calc avg price and occupancy rate
print('Seattle')
print("average price: ",df_cal_s['price'].mean())
print("occupancy rate (False: sold): ", df_cal_s['price'].isna().value_counts() / len(df_cal_s['price']) * 100)
# clean price values
df_cal_b['price'] = df_cal_b['price'].replace({r'\$':''}, regex = True).dropna().squeeze()
df_cal_b['price'] = pd.to_numeric(df_cal_b['price'], errors='coerce')
# format datetime
df_cal_b['date'] = pd.to_datetime(df_cal_b[['date']].squeeze())
df_cal_b = df_cal_b.set_index('date')
# calc avg price and occupancy rate
print('Boston')
print("average price: ",df_cal_b['price'].mean())
print("occupancy rate (False: sold): ", df_cal_b['price'].isna().value_counts() / len(df_cal_b['price']) * 100)
# -
# **Data understanding:**
# - The average price differs considerably between the two cities. It should be analysed whether that changes over time.
# - The utilization of the homes seems to differ as well. The percentage of _False_ means the homes are sold, so in Boston many more homes are free. The supply seems to be much higher than the demand.
# **Visualise data**
# +
# making a figure
fig = plt.figure()
# getting categories
cat_s = pd.cut(df_cal_s['price'], bins=[0, 25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 300, 500, 1000, 3000], include_lowest=True)
cat_b = pd.cut(df_cal_b['price'], bins=[0, 25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 300, 500, 1000, 3000], include_lowest=True)
# count categories
cat_counts_s = cat_s.value_counts(sort=False)/len(df_cal_s)
cat_counts_b = cat_b.value_counts(sort=False)/len(df_cal_b)
# plot categories
cat_counts_s.plot(kind='bar', color='c', width = 0.4, position=1, label='Seattle', legend=True)
cat_counts_b.plot(kind='bar', color='b', width = 0.4, position=0, label='Boston', legend=True)
# plot layout
plt.ylabel("price distribution")
plt.tight_layout()
plt.show()
# save fig
fig.savefig('./occupany.png')
# -
# Are there price changes during the year, and are they caused by any events?
# - groupby date and using the mean value of all listings at the same time
# - rotating the date to make the axis ticks readable
# **Visualise data**
# + pycharm={"name": "#%%\n"}
# Start and end of the date range to extract
start, end = '2014-01', '2023-01'
# Plot daily and weekly resampled time series together
fig, ax = plt.subplots()
ax.plot(df_cal_s.loc[start:end, 'price'].groupby('date').mean(), marker='.', linestyle='-', color='c', markersize=1, label='Seattle')
ax.plot(df_cal_b.loc[start:end, 'price'].groupby('date').mean(), marker='.', linestyle='-', color='b', markersize=1, label='Boston')
ax.set_ylabel('avg price [$]')
ax.legend()
plt.xticks(rotation=90)
plt.tight_layout()
fig = ax.get_figure()
fig.savefig('./avg_price.png')
# -
# **Data understanding:**
# - in Jan/Feb/March the prices are low, but they rise until midsummer and go down again towards the winter months
# - sharp price drop in Boston - maybe the prices were too high and the demand for the homes was too low
# - in April there must have been a local event in Boston causing this price peak
# Unused analysis - I wanted to make a plot of the utilization (Auslastung) of the homes.
#print(df_cal_s)
df_s = df_cal_s[df_cal_s.available != 'f']
#df_b = df_cal_b[df_cal_s.available != 'f']
auslastung_s = df_s.groupby('date')['available'].value_counts()
#auslastung_b = df_b.groupby('date')['available'].value_counts()
print(auslastung_s)
auslastung_s.plot()
#auslastung_b.plot()
# Are there different weighted influences for the total review score value?
# Making a linear model to predict the **review_scores_rating** to see which coefs are most
# significant. Comparing the output of seattle and boston.
# including some functions of the lesson
def create_dummy_df(df, cat_cols, dummy_na):
'''
INPUT:
df - pandas dataframe with categorical variables you want to dummy
cat_cols - list of strings that are associated with names of the categorical columns
dummy_na - Bool holding whether you want to dummy NA vals of categorical columns or not
OUTPUT:
df - a new dataframe that has the following characteristics:
1. contains all columns that were not specified as categorical
2. removes all the original columns in cat_cols
3. dummy columns for each of the categorical columns in cat_cols
4. if dummy_na is True - it also contains dummy columns for the NaN values
5. Use a prefix of the column name with an underscore (_) for separating
'''
for col in cat_cols:
try:
# for each cat add dummy var, drop original column
df = pd.concat([df.drop(col, axis=1), pd.get_dummies(df[col], prefix=col, prefix_sep='_', drop_first=True, dummy_na=dummy_na)], axis=1)
except:
continue
return df
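# The dummy-encoding performed by `create_dummy_df` reduces to `pd.get_dummies` with `drop_first=True`; for a hypothetical two-level column, a single indicator remains:

```python
import pandas as pd

# Hypothetical frame with one categorical and one numeric column
df = pd.DataFrame({"city": ["seattle", "boston", "seattle"],
                   "price": [120, 95, 150]})

# Same concat/get_dummies pattern as inside create_dummy_df
encoded = pd.concat(
    [df.drop("city", axis=1),
     pd.get_dummies(df["city"], prefix="city", prefix_sep="_", drop_first=True)],
    axis=1,
)
```

# Dropping the first level avoids perfectly collinear indicator columns, which would otherwise destabilize the linear regression coefficients analysed below.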
# + pycharm={"name": "#%%\n"}
def clean_fit_linear_mod(df, response_col, cat_cols, dummy_na, test_size=.3, rand_state=42):
'''
INPUT:
df - a dataframe holding all the variables of interest
response_col - a string holding the name of the column
cat_cols - list of strings that are associated with names of the categorical columns
dummy_na - Bool holding whether you want to dummy NA vals of categorical columns or not
test_size - a float between [0,1] about what proportion of data should be in the test dataset
rand_state - an int that is provided as the random state for splitting the data into training and test
OUTPUT:
test_score - float - r2 score on the test data
train_score - float - r2 score on the test data
lm_model - model object from sklearn
X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
Your function should:
1. Drop the rows with missing response values
2. Drop columns with NaN for all the values
3. Use create_dummy_df to dummy categorical columns
4. Fill the mean of the column for any missing values
5. Split your data into an X matrix and a response vector y
6. Create training and test sets of data
7. Instantiate a LinearRegression model with normalized data
8. Fit your model to the training data
9. Predict the response for the training data and the test data
10. Obtain an rsquared value for both the training and test data
'''
# Drop the rows with missing response values
df = df.dropna(subset=[response_col], axis=0)
# Drop columns with all NaN values
df = df.dropna(how='all', axis=1)
# Dummy categorical variables
df = create_dummy_df(df, cat_cols, dummy_na)
# Mean function
fill_mean = lambda col: col.fillna(col.mean())
# Fill the mean
df = df.apply(fill_mean, axis=0)
# Split into explanatory and response variables
X = df.drop(columns=[response_col], axis=1)
y = df[response_col]
# Split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=rand_state)
lm_model = LinearRegression() # Instantiate (the normalize argument was removed in scikit-learn 1.2)
lm_model.fit(X_train, y_train) # Fit
# Predict using your model
y_test_preds = lm_model.predict(X_test)
y_train_preds = lm_model.predict(X_train)
# Score using your model
test_score = r2_score(y_test, y_test_preds)
train_score = r2_score(y_train, y_train_preds)
return test_score, train_score, lm_model, X_train, X_test, y_train, y_test
# -
def coef_weights(coefficients, X_train):
'''
INPUT:
coefficients - the coefficients of the linear model
X_train - the training data, so the column names can be used
OUTPUT:
coefs_df - a dataframe holding the coefficient, estimate, and abs(estimate)
Provides a dataframe that can be used to understand the most influential coefficients
in a linear model by providing the coefficient estimates along with the name of the
variable attached to the coefficient.
'''
coefs_df = pd.DataFrame()
coefs_df['est_int'] = X_train.columns
coefs_df['coefs'] = coefficients
coefs_df['abs_coefs'] = np.abs(coefficients)
coefs_df = coefs_df.sort_values('abs_coefs', ascending=False)
return coefs_df
# **Gather & Assess Data**
# Read in listings data and store in a list
df_lis_s = pd.read_csv('./listings_seattle.csv')
df_lis_b = pd.read_csv('./listings_boston.csv')
df_lists = [df_lis_s, df_lis_b]
# **Clean data - analyze and model**
# Loop for seattle and boston
for df_list in df_lists:
# Keep only the numeric variables; the list of object (categorical) columns is then empty
df_cat = df_list.select_dtypes(include=[np.number])
cat_cols_lst = df_cat.select_dtypes(include=['object']).columns
# Value of interest:
response_col = 'review_scores_rating'
# Clean and fit linear model
test_score, train_score, lm_model, X_train, X_test, y_train, y_test = clean_fit_linear_mod(df_cat, 'review_scores_rating', cat_cols_lst, dummy_na=False)
print("test_score, train_score: ", test_score, train_score)
# Calc the coef weights
coef_df = coef_weights(lm_model.coef_, X_train)
# **Visualize coefs with weights**
# +
# relevant for my analysis are just the review scores
review_scores = ['review_scores_location','review_scores_value','review_scores_cleanliness','review_scores_checkin', 'review_scores_accuracy','review_scores_communication']
# Show the 20 most significant influencing variables
print(coef_df.head(20))
# -
# **Data understanding:**
# - value and cleanliness are most important for customers in both cities
#
| analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Cuq9-5YMxZPQ" colab_type="text"
# # **3a. Forming vectors with the covariance matrix**
#
# **Responsible:**
#
# <NAME>
#
# **Infrastructure used:**
# Google Colab, for testing
#
# + [markdown] id="ZwiYf-3Ox3Bp" colab_type="text"
# ## 0. Installing CuPy in Colab
# + id="467T8zJzxZ4N" colab_type="code" outputId="a16f1079-ca1d-4f9b-91b7-cb73a7514fa2" colab={"base_uri": "https://localhost:8080/", "height": 214}
# !curl https://colab.chainer.org/install | sh -
# + [markdown] id="gYi6APThyLPd" colab_type="text"
# ## 1. Implementation
#
# **Considerations:** This stage assumes that the $\mu$ and $\Sigma$ associated with the assets are known. The purpose of this step is to obtain vectors that will be relevant for computing the portfolio weights for the investor.
#
# Concretely, this is carried out through the expressions:
#
# $$ u := \Sigma^{-1} \cdot \mu $$
#
# $$ v := \Sigma^{-1} \cdot 1 $$
#
#
# The corresponding code is presented below:
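# The same two solves can be checked with plain NumPy (CuPy mirrors this API) on a small symmetric positive definite covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)   # symmetric positive definite covariance
mu = rng.standard_normal(n)
ones = np.ones(n)

# Solve Sigma u = mu and Sigma v = 1 instead of forming Sigma^{-1} explicitly
u = np.linalg.solve(Sigma, mu)
v = np.linalg.solve(Sigma, ones)
```

# Solving the linear systems is cheaper and numerically safer than inverting Sigma and multiplying, which is exactly what `formar_vectores` does with `cp.linalg.solve`.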
# + id="fiWayxHhewhy" colab_type="code" colab={}
import cupy as cp
def formar_vectores(mu, Sigma):
'''
Computes the quantities u = \Sigma^{-1} \mu and v = \Sigma^{-1} \cdot 1 of the Markowitz problem
Args:
    mu (cupy array, vector): expected mean values of the assets (dimension n)
    Sigma (cupy array, matrix): covariance matrix associated with the assets (dimension n x n)
Return:
    u (cupy array, vector): vector given by \Sigma^{-1} \cdot \mu (dimension n)
    v (cupy array, vector): vector given by \Sigma^{-1} \cdot 1 (dimension n)
'''
# Auxiliary vector with all entries equal to 1
n = Sigma.shape[0]
ones_vector = cp.ones(n)
# Form the vectors \Sigma^{-1} \cdot \mu and \Sigma^{-1} \cdot 1
# Note:
# 1) u = \Sigma^{-1} \cdot \mu is obtained by solving Sigma u = mu
# 2) v = \Sigma^{-1} \cdot 1 is obtained by solving Sigma v = 1
# Obtain the vectors of interest
u = cp.linalg.solve(Sigma, mu)
v = cp.linalg.solve(Sigma, ones_vector)
return u, v
# + [markdown] id="FidlJG2Ozhse" colab_type="text"
# ## 1.1 Test matrices
# + id="54p9YVhfewqJ" colab_type="code" colab={}
n= 10
Sigma=cp.random.rand(n, n)
mu=cp.random.rand(n, 1).transpose()[0]
# + id="Qa2fOGuZews9" colab_type="code" outputId="3c4e724c-efca-4a33-a9e5-eaa9973b6578" colab={"base_uri": "https://localhost:8080/", "height": 82}
formar_vectores(mu, Sigma)
| notebooks/Programacion/3a_formacion_matrices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h2> Importing Necessary Modules</h2>
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
from sklearn.model_selection import train_test_split
#from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, StratifiedKFold
from lightgbm import LGBMClassifier
from sklearn.metrics import log_loss, classification_report, accuracy_score, f1_score
def file_reader(data):
return pd.read_csv(data)
df = file_reader('new_dataset.csv')
#Checking out the Categorical Column and Numerical in the Dataset
def cat_num(data):
cat_col = []
num_col = []
for feat in data.columns:
if data[feat].dtypes =='O':
cat_col.append(feat)
else:
num_col.append(feat)
return cat_col, num_col
# +
test = df[400000:]
train = df[:400000]
# -
cat_col, num_col = cat_num(train)
# ## Preparing the dataset for our ML Model
X = train[num_col].drop(['CHURN'], axis =1)
y = train.CHURN
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = y, test_size = 0.25, random_state = 43)
lgb=LGBMClassifier(learning_rate = 0.05,n_estimators=500)
lgb.fit(X_train,y_train)
pred = lgb.predict_proba(X_test)[:,1]
pred_x = lgb.predict(X_test)
f1_score(y_test,pred_x,average='micro')
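# For single-label classification, micro-averaged F1 aggregates true positives, false positives and false negatives over all classes; every misclassification counts once as a false positive and once as a false negative, so micro-F1 equals plain accuracy. A dependency-free sketch:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 computed from pooled TP/FP/FN counts."""
    labels = set(y_true) | set(y_pred)
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = sum(p == c and t != c for c in labels for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for c in labels for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical binary labels: 4 of 6 predictions are correct
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
```

# This is why `average='micro'` above gives the same number as `accuracy_score`; for an imbalanced churn problem, the macro average or log loss is often more informative.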
# +
log_loss(y_test, pred) #Logloss of 0.2519 on Public Lb and 0.24742
#This was my Baseline Model
# -
import matplotlib.pyplot as plt
_ = plt.figure(figsize = (10,8))
fi = pd.Series(index = X.columns, data = lgb.feature_importances_)
_ = fi.sort_values()[-50:].plot(kind = 'barh')
sub = file_reader('sample_submission.csv')
sample = sub.copy()
lgb_test = lgb.predict_proba(test[num_col].drop(['CHURN'], axis = 1)) # predict on the held-out rows
sample.CHURN = lgb_test[:,1]
sample.to_csv('Sub.csv', index = False)
| Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing Time Series Data
#
# Let's go through a few key points of creating nice time visualizations!
# +
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# Optional for interactive
# # %matplotlib notebook (watch video for full details)
# -
mcdon = pd.read_csv('mcdonalds.csv',
index_col = 'Date',
parse_dates = True)
mcdon.head()
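# `parse_dates=True` together with `index_col='Date'` is what produces a `DatetimeIndex`, which the date-aware plotting below relies on. A minimal sketch with an in-memory CSV (since `mcdonalds.csv` itself is not shown here):

```python
import io
import pandas as pd

# Two hypothetical rows mimicking the mcdonalds.csv layout
csv_text = "Date,Adj. Close\n2017-01-03,119.5\n2017-01-04,120.1\n"
demo = pd.read_csv(io.StringIO(csv_text), index_col="Date", parse_dates=True)
```

# Without `parse_dates=True` the index would be plain strings, and slicing like `demo.loc['2017-01']` or date-formatted x-axes would not work.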
# Not Good!
mcdon.plot()
mcdon['Adj. Close'].plot()
mcdon['Adj. Volume'].plot()
mcdon['Adj. Close'].plot(figsize = (12, 8))
mcdon['Adj. Close'].plot(figsize = (12, 8))
plt.ylabel('Close Price')
plt.xlabel('Overwrite Date Index')
plt.title('Mcdonalds')
mcdon['Adj. Close'].plot(figsize = (12,8),
title = 'Pandas Title')
# # Plot Formatting
# ## X Limits
mcdon['Adj. Close'].plot(xlim = ['2007-01-01', '2009-01-01'])
mcdon['Adj. Close'].plot(xlim = ['2007-01-01','2009-01-01'],
ylim = [0,50])
# ## Color and Style
mcdon['Adj. Close'].plot(xlim = ['2007-01-01','2007-05-01'],
ylim = [0,40],
ls = '--',
c = 'r')
# ## X Ticks
#
# This is where you will need the power of matplotlib to do heavy lifting if you want some serious customization!
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as dates
mcdon['Adj. Close'].plot(xlim = ['2007-01-01','2007-05-01'],
ylim = [0,40])
idx = mcdon.loc['2007-01-01':'2007-05-01'].index
stock = mcdon.loc['2007-01-01':'2007-05-01']['Adj. Close']
idx
stock
# ## Basic matplotlib plot
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
plt.tight_layout()
plt.show()
# ## Fix the overlap!
# +
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
# -
# ## Customize grid
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
ax.yaxis.grid(True)
ax.xaxis.grid(True)
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
# ## Format dates on Major Axis
# +
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# Major Axis
ax.xaxis.set_major_locator(dates.MonthLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('%b\n%Y'))
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
# +
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# Major Axis
ax.xaxis.set_major_locator(dates.MonthLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n\n\n\n%Y--%B'))
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
# -
# ## Minor Axis
# +
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
# Major Axis
ax.xaxis.set_major_locator(dates.MonthLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n\n%Y--%B'))
# Minor Axis
ax.xaxis.set_minor_locator(dates.WeekdayLocator())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%d'))
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
# +
fig, ax = plt.subplots(figsize=(10,8))
ax.plot_date(idx, stock,'-')
# Major Axis
ax.xaxis.set_major_locator(dates.WeekdayLocator(byweekday=1))
ax.xaxis.set_major_formatter(dates.DateFormatter('%B-%d-%a'))
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
# -
# ## Great job!
| 04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/02 - Visualizing Time Series Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # Invisible Canon
# > A project with a bit of hubris
# - toc: true
# - badges: true
# - categories: [writing, canon]
# - sticky_rank: 5
# - image: images/popularity-prestige.png
# -
# **I'm building a canon!**
# # Rationale
# ## Background
#
# ### What is this project, and who does it benefit?
#
# This is a personal project by internet user **@deepfates** to make a "canon" of about 5,000 books, selected from all possible books using data available on the internet.
#
# Though I work at a bookstore, this project is not a commercial endeavor. I want to explore the multidimensional space of all existing books: a latent space that connects both the interior concepts and the exterior context of all books, and which implies the existence of all possible books that might yet be imagined.
#
# This is not a new concept, of course. Writers and readers have studied this space forever, and changed it by their observation. Scholars have developed a discourse about which authors should be propagated and a meta-discourse about how that decision should be made. Poets and novelists have gestured to it through metaphor, or explicated it through genre tropes.
#
# Personally I was inspired by <NAME>'s **L-space** formulation:
# > twitter: https://twitter.com/deepfates/status/1402006194278092808?s=20
#
#
# Books, everywhere, affect all other books -- including books in the future, and books never written. Pratchett calls these **invisible writings**.
#
# My goal is to explore L-space, looking for hidden information that distinguishes some books -- the Canon -- from others -- the Archive -- and use that to make a corpus of books across a broad cross-section of genres and topics. Once selected, these books can work as training data to predict canonicity in the future.
#
# This will be my **Invisible Canon**. In these notes I will provide prose, code and data to justify and document my process.
#
# ## Qualifications
#
# ### Who are you to choose the Canon?
#
# As Odysseus would say, I'm nobody! I'm a person living in the third decade of the twenty-first century. I have no college degree, no institutional backing, no special knowledge. I just love books.
#
#
# I sell books for a living, at a small bookstore in the desert southwest USA. My store has room for about 5,000 curated books, since we must leave space for the chaotic flow of used books as well. I do have a little insider information: my inventory and sales data (which I will show some peeks of, but which is not itself open-source). The marketplace provides some information about which books are desirable simply by the number of times people buy them.
#
# I want to act as an arbiter between the implicit preferences of this market and the explicit preferences contained in sales data and book reviews around the globe, and choose books that my customers will buy, recommend and seek out. It's my duty, as a curator, to supply what people want to read. I don't think any of the available canons represent that, so I will make one.
# ## What is a "canon", anyway?
#
# It's a group of books selected from the totality of all books, with the intention to elevate and/or recognize their status. This is distinct from the "published", which includes all books ever made public in some sense, and the "archive", which is the subset of the published that has been preserved and is available to study.
#
# Nor is a canon simply a 'corpus'. A corpus is a portion of the archive selected for a specific research purpose; a canon is a portion that is meant to represent some societal value.
#
# Here I must defer to the Stanford Literary Lab, specifically these papers in their Pamphlets series:
#
# - 8 Between Canon and Corpus [pdf](https://litlab.stanford.edu/LiteraryLabPamphlet8.pdf)
#
# - 11 Canon/Archive [pdf](https://litlab.stanford.edu/LiteraryLabPamphlet11.pdf)
#
# - 17 Popularity/Prestige [pdf](https://litlab.stanford.edu/LiteraryLabPamphlet17.pdf)
#
# In these articles, the digital humanities scholars of the Literary Lab have empirically measured the canon and archive (of literature) and found trends that correlate with their deep domain knowledge. I intend to follow recklessly in the direction they headed, and explore new terrain.
#
# The thesis developed in these papers is, first, that the canon can be taken as a superset of all canons put forth by individuals or groups, and secondly that you can project features of these books into a "space" that correlates to their canonicity.
#
# The authors test several different features, but prefer a framing of Popularity vs Prestige: market success vs academic success, roughly. This diagram, from Pamphlet 17, shows the use:
#
# 
#
# Popularity here is measured by number of reviews on Goodreads, and prestige by number of "Primary Subject Author" citations in the MLA database. To the right, something is more marketable; upward, more well-regarded by academia. Up and to the right, therefore, is the direction of canon.
#
# This tracks with my own anecdotal evidence: the four most popular categories, Fiction, Sci-Fi, Kids, and Mystery/Thriller, are among the bestselling categories in my store.
#
# This brings up a good point: my audience is specific, and so my curation must be as well.
# Next episode
| _notebooks/2021-06-07-building-a-canon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Determining matrix for servo mix from position and orientation of rotors
# ##### <NAME> 19/12/2016
#
# The goal is to determine the servo-mix matrix $B$, used to compute motor commands from 3D thrust and 3D torque commands:
# $$
# \vec{u} = B \cdot \begin{bmatrix} \vec{m} \\ \vec{t} \end{bmatrix}
# $$
#
# Where
# - $\vec{t}$ is the $(3 \times 1)$ thrust command vector
# - $\vec{m}$ is the $(3 \times 1)$ torque command vector
# - $\vec{u}$ is the $(n \times 1)$ motor command vector
#
#
# > Reference for simpler case (motors on 2D plane, all pointing vertically) from paparazzi : https://wiki.paparazziuav.org/wiki/RotorcraftMixing
# ## Thrust generated by rotor
# $$\vec{t}_i = k_t . \rho . D^4 . \omega_i^2 .\vec{v}_i \\
# \vec{t}_i \approx C_t . u_i . \vec{v}_i$$
#
# Where
# - $\vec{t}_i$ is the thrust
# - $\vec{v}_i$ is the main axis of the rotor (unit vector)
# - $k_t$ is the thrust coefficient of the propeller
# - $C_t$ is the approximated thrust coefficient of the motor/propeller unit
# - $\omega_i$ is the rotation speed
# - $\rho$ is the fluid density
# - $D$ is the diameter of the propeller
# - $u_i$ is the motor command
# ## Thrust generated by a set of motors
#
# Can be computed using a matrix $A_t$ defined as:
# $$
# \vec{t} = A_t \cdot \vec{u}
# $$
# Where
# - $\vec{t}$ is the $(3 \times 1)$ thrust vector
# $$
# \vec{t} = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}
# $$
# - $A_t$ is a $(3 \times n)$ matrix with the thrust generated by the i-th rotor on the i-th column:
# $$
# A_t = \begin{bmatrix}
# & & & & \\
# \vec{t}_0 & \dots & \vec{t}_i & \dots & \vec{t}_{n-1} \\
# & & & & \\
# \end{bmatrix}
# $$
# - $\vec{u}$ is the $(n \times 1)$ command vector
# $$
# \vec{u} = \begin{bmatrix} u_0 \\ \vdots \\ u_i \\ \vdots \\ u_{n-1} \end{bmatrix}
# $$
# ## Torque generated by rotor
# $$
# \vec{m}_i = (\vec{p}_i - \vec{p}_{cg}) \times (k_t . \rho . D^4 . \omega_i^2 .\vec{v}_i) - d_i . k_m . \rho . D^5 . \omega^2_i . \vec{v}_i \\
# $$
# $$
# \vec{m}_i \approx \left( C_t . (\vec{p}_i - \vec{p}_{cg}) \times \vec{v}_i \right) . u_i
# - \left( d_i . C_m . D . \vec{v}_i \right) . u_i
# $$
# $$
# \vec{m}_i \approx \left( C_t . (\vec{p}_i - \vec{p}_{cg}) \times \vec{v}_i \right) . u_i
# - \left( d_i . \frac{C_t}{10} . D . \vec{v}_i \right) . u_i
# $$
#
# Where
# - $\vec{m}_i$ is the torque
# - $\vec{v}_i$ is the main axis of the rotor
# - $\vec{p}_i$ is the position of the center of the rotor
# - $\vec{p}_{cg}$ is the position of the center of mass
# - $k_t$ is the thrust coefficient of the propeller
# - $k_m$ is the moment coefficient of the propeller (usually $k_m \approx \frac{k_t}{10}$)
# - $C_t$ is the approximated thrust coefficient of the motor/propeller unit
# - $C_m$ is the approximated moment coefficient of the motor/propeller unit ($C_m \approx D.\frac{C_t}{10}$)
# - $\rho$ is the fluid density
# - $D$ is the diameter of the propeller
# - $d_i$ is the rotation direction (-1 for CW or +1 for CCW)
# - $\omega_i$ is the rotation speed
# - $u_i$ is the motor command
# ## Torque generated by a set of motors
#
# Can be computed using a matrix $A_m$ defined as:
# $$
# \vec{m} = A_m \cdot \vec{u}
# $$
# Where
# - $\vec{m}$ is the $(3 \times 1)$ torque vector
# $$
# \vec{m} = \begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix}
# $$
# - $A_m$ is a $(3 \times n)$ matrix with the torque generated by the i-th rotor on the i-th column:
# $$
# A_m = \begin{bmatrix}
# & & & & \\
# \vec{m}_0 & \dots & \vec{m}_i & \dots & \vec{m}_{n-1} \\
# & & & & \\
# \end{bmatrix}
# $$
# - $\vec{u}$ is the $(n \times 1)$ command vector
# $$
# \vec{u} = \begin{bmatrix} u_0 \\ \vdots \\ u_i \\ \vdots \\ u_{n-1} \end{bmatrix}
# $$
# ## Combined torque and thrust matrix
#
# We define the $(6 \times n)$ matrix $A$ as
# $$
# A = \begin{bmatrix} A_m \\ A_t \end{bmatrix}
# $$
#
# The matrix $A$ allows computing the thrust and torque generated by a set of $n$ motors as a function of the throttle command of each motor:
# $$
# \begin{bmatrix} \vec{m} \\ \vec{t} \end{bmatrix} = A \cdot \vec{u}
# $$
# or
# $$
# \begin{bmatrix} m_x \\ m_y \\ m_z \\ t_x \\ t_y \\ t_z \end{bmatrix} =
# \begin{bmatrix}
# m^0_x && \dots && m^i_x && \dots && m^{n-1}_x \\
# m^0_y && \dots && m^i_y && \dots && m^{n-1}_y \\
# m^0_z && \dots && m^i_z && \dots && m^{n-1}_z \\
# t^0_x && \dots && t^i_x && \dots && t^{n-1}_x \\
# t^0_y && \dots && t^i_y && \dots && t^{n-1}_y \\
# t^0_z && \dots && t^i_z && \dots && t^{n-1}_z \\
# \end{bmatrix}
# \cdot \begin{bmatrix} u_0 \\ \vdots \\ u_i \\ \vdots \\ u_{n-1} \end{bmatrix}
# $$
# ## Servo mixing matrix
#
# In order to compute the command to apply to each motor for a desired thrust and torque, we need the $(n \times 6)$ servo-mix matrix $B$:
#
# $$
# \vec{u} = B \cdot \begin{bmatrix} \vec{m} \\ \vec{t} \end{bmatrix}
# $$
#
# The matrix $B$ can be computed as the Moore-Penrose pseudo-inverse of matrix $A$. The singular value decomposition (SVD) of $A$ gives $A = U \cdot \sigma \cdot V^T$, where $\sigma$ is a diagonal matrix. If $A$ has rank $r$, then the first $r$ elements of $\sigma$ are non-null. $B$ can be computed as
# $B = V \cdot \sigma^{+} \cdot U^T$, where $\sigma^{+}$ is a diagonal matrix that contains the inverse of the non-null terms of the diagonal of $\sigma$.
#
#
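# As a sanity check of the SVD route above, the following sketch builds $B = V \cdot \sigma^{+} \cdot U^T$
# for an arbitrary $(6 \times 4)$ matrix (the values are random placeholders, an assumption)
# and verifies that it matches `numpy.linalg.pinv`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))  # stands in for a (6 x n) torque/thrust matrix

# B = V . sigma+ . U^T, inverting only the non-null singular values
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_plus = np.where(s > 1e-12, 1.0 / s, 0.0)
B = Vt.T @ np.diag(s_plus) @ U.T

print(np.allclose(B, np.linalg.pinv(A)))
```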
# ## Taking Mass and Inertia into account
#
# The formulas above allow adapting the terms of the servo mixing matrix $B$ to the geometrical and aerodynamic characteristics of the drone. Thus it is possible to apply the correct motor commands for desired thrust and torque commands.
#
# However, in order to fully abstract the dynamics of the system, we may want to give angular and linear acceleration commands instead of torque and thrust commands. Torque and thrust are given by:
# $$
# \vec{m} = J \cdot \vec{\alpha} \\
# \vec{t} = M \cdot \vec{a}
# $$
#
# Where
# - $\vec{m}$ is the torque vector
# - $\vec{t}$ is the thrust vector
# - $J$ is the inertia matrix
# - $M$ is the mass of the system
# - $\vec{\alpha}$ is the angular acceleration vector
# - $\vec{a}$ is the acceleration vector
#
# Thus the motor commands can be computed from angular and linear acceleration commands as:
# $$
# \vec{u} = B \cdot H \cdot \begin{bmatrix} \vec{\alpha} \\ \vec{a} \end{bmatrix}
# $$
#
# Where $H$ is built from the inertia matrix $J$ and the mass of the system $M$ multiplied by the identity matrix $I_3$:
# $$
# H = \begin{bmatrix} J & 0_3 \\ 0_3 & M \cdot I_3 \end{bmatrix}
# $$
# ## Practical usage
# For robots where the geometry, motors and propellers are known, $B$ can be pre-computed offline (see the examples below using the numpy implementation of the pseudo-inverse `numpy.linalg.pinv`).
#
# Also, if the robot has planar symmetries relative to the planes $XY$, $XZ$ and $YZ$, then the matrix of inertia is diagonal and motor commands are computed as
# $$
# \vec{u} = B \cdot
# \begin{bmatrix}
# J_{xx} & 0 & 0 & 0 & 0 & 0 \\
# 0 & J_{yy} & 0 & 0 & 0 & 0 \\
# 0 & 0 & J_{zz} & 0 & 0 & 0 \\
# 0 & 0 & 0 & M & 0 & 0 \\
# 0 & 0 & 0 & 0 & M & 0 \\
# 0 & 0 & 0 & 0 & 0 & M \\
# \end{bmatrix}
# \cdot
# \begin{bmatrix} \vec{\alpha} \\ \vec{a} \end{bmatrix}
# $$
#
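# For example, with a diagonal inertia matrix, motor commands for acceleration commands can be
# computed as below. The inertia, mass and $A$ matrix values are illustrative placeholders
# (assumptions), not measurements of a real airframe:

```python
import numpy as np

# Illustrative values (assumptions): diagonal inertia and mass
J = np.diag([0.01, 0.01, 0.02])   # kg.m^2
M = 1.2                           # kg
H = np.block([[J, np.zeros((3, 3))],
              [np.zeros((3, 3)), M * np.eye(3)]])

A = np.random.default_rng(1).normal(size=(6, 4))  # placeholder (6 x n) torque/thrust matrix
B = np.linalg.pinv(A)

alpha = np.array([0.1, 0.0, 0.0])   # desired angular acceleration (rad/s^2)
a = np.array([0.0, 0.0, -9.81])     # desired linear acceleration (m/s^2)
u = B @ H @ np.concatenate([alpha, a])
print(u.shape)  # one command per motor
```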
# ---
# # Implementation
# ---
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
# %matplotlib inline
# ### Torque matrix $A_m$ from geometry
# +
def compute_torque(center, axis, dirs, Ct, Cm):
#normalize rotor axis
ax = axis / np.linalg.norm(axis, axis=1)[:,np.newaxis]
torque = Ct * np.cross(center, ax) - Cm * ax * dirs
return torque
def compute_torque_matrices(geom):
# Torque matrix, each column is the torque generated by one rotor
Am = compute_torque(center=geom[['x', 'y', 'z']].values,
axis=geom[['ax', 'ay', 'az']].values,
dirs=geom[['dir']].values,
Ct=geom[['ct']].values,
Cm=geom[['cm']].values).T
# Torque servo mix computed as pseudoinverse of At
# Each column is the command to apply to the servos to get torques on roll, pitch and yaw torques
Bm = np.linalg.pinv(Am)
return Am, Bm
# -
#
# ### Thrust matrix $A_t$ from geometry
# +
def compute_thrust(axis, Ct):
# Normalize rotor axis
ax = axis / np.linalg.norm(axis, axis=1)[:,np.newaxis]
thrust = Ct * ax
return thrust
def compute_thrust_matrices(geom):
# Thrust matrix, each column is the thrust generated by one rotor
At = compute_thrust(axis=geom[['ax', 'ay', 'az']].values,
Ct=geom[['ct']].values).T
# Thrust servo mix computed as pseudoinverse of At
# Each column is the command to apply to the servos to get thrust on x, y, and z
Bt = np.linalg.pinv(At)
return At, Bt
# -
# ### Combined torque/thrust matrices $A$ and $B$
# +
def compute_torque_thrust_matrices(geom):
# Torque matrices
Am, Bm = compute_torque_matrices(geom)
# Thrust matrices
At, Bt = compute_thrust_matrices(geom)
# Combined matrices
A = np.vstack([Am, At])
B = np.linalg.pinv(A)
A = pd.DataFrame(A, index= ['Roll', 'Pitch', 'Yaw', 'X', 'Y', 'Z'])
B = pd.DataFrame(B, columns=['Roll', 'Pitch', 'Yaw', 'X', 'Y', 'Z'])
return A, B
def print_matrices(A, B):
    # A and B are DataFrames; take the number of motors from A itself rather than the global geom
    Bv = B.values
    print('\nA|' + ''.join(['| ' + str(i) + ' ' for i in range(A.shape[1])]) + '|\n',
          A.round(2))
    print('\nB| R | P | Y | X | Y | Z |\n',
          B.round(2))
    print('\nActuation effort for unit commands')
    print('Torque: norm ', np.linalg.norm(Bv[:, :3], axis=0).round(2), '/ std ', np.abs(Bv[:, :3]).std(axis=0).round(2))
    print('Thrust: norm ', np.linalg.norm(Bv[:, 3:], axis=0).round(2), '/ std ', np.abs(Bv[:, 3:]).std(axis=0).round(2))
def print_actuation_effort(Bdf):
B = Bdf.values
print('\nActuation effort for unit commands')
print('Torque: norm ', np.linalg.norm(B[:, :3], axis=0).round(2), '/ std ', np.abs(B[:, :3]).std(axis=0).round(2))
print('Thrust: norm ', np.linalg.norm(B[:, 3:], axis=0).round(2), '/ std ', np.abs(B[:, 3:]).std(axis=0).round(2))
# -
# Plotting
def plot(geom):
plt.figure(figsize=[6,6])
l = 0.05
for i, g in geom.iterrows():
color = plt.cm.rainbow(i / geom.shape[0])
style='-'
if g.dir == 1:
marker='o'
else:
marker='s'
# top view
plt.subplot(221)
plt.plot([0.0, 0.1], [0.0, 0.0], '--k', alpha=0.3)
plt.plot([0.0, 0.0], [0.0, -0.1], '--k', alpha=0.1)
plt.plot([0.0, g.x], [0.0, -g.y], '-k', alpha=0.5)
plt.plot([g.x, g.x + l*g.ax], [-g.y,-(g.y + l*g.ay)],
linestyle=style, marker=marker, color=color, markevery=2, linewidth=4)
plt.xlabel('x')
plt.ylabel('y')
plt.xlim([-0.2, 0.2])
plt.ylim([-0.2, 0.2])
plt.xticks([])
plt.yticks([])
# side view
plt.subplot(222)
plt.plot([0.0, 0.1], [0.0, 0.0], '--k', alpha=0.3)
plt.plot([0.0, 0.0], [0.0, -0.1], '--k', alpha=0.1)
plt.plot([0.0, -g.y], [0.0, -g.z], '-k', alpha=0.5)
plt.plot([-g.y,-(g.y + l*g.ay)], [-g.z,-(g.z + l*g.az)],
linestyle=style, marker=marker, color=color, markevery=2, linewidth=4)
plt.xlabel('y')
plt.ylabel('z')
plt.xlim([-0.2, 0.2])
plt.ylim([-0.2, 0.2])
plt.xticks([])
plt.yticks([])
# front view
plt.subplot(223)
plt.plot([0.0, 0.1], [0.0, 0.0], '--k', alpha=0.3)
plt.plot([0.0, 0.0], [0.0, -0.1], '--k', alpha=0.1)
plt.plot([0.0, g.x], [0.0, -g.z], '-k', alpha=0.5)
plt.plot([g.x, g.x + l*g.ax], [-g.z, -(g.z + l*g.az)],
linestyle=style, marker=marker, color=color, markevery=2, linewidth=4)
plt.xlabel('x')
plt.ylabel('z')
plt.xlim([-0.2, 0.2])
plt.ylim([-0.2, 0.2])
plt.xticks([])
plt.yticks([])
# perspective view
view = np.array([-1.0, -0.3, 0.5])
ax_x = np.cross(np.array([0, 0, 1]), view)
ax_x = ax_x / np.linalg.norm(ax_x)
ax_y = np.cross(view, ax_x)
ax_y = ax_y / np.linalg.norm(ax_y)
pos = [np.dot(np.array([g.x, -g.y, -g.z]), ax_x),
np.dot(np.array([g.x, -g.y, -g.z]), ax_y)]
axis = [np.dot(np.array([g.ax, -g.ay, -g.az]), ax_x),
np.dot(np.array([g.ax, -g.ay, -g.az]), ax_y)]
plt.subplot(224)
plt.plot([0.0, np.dot([0.1, 0, 0], ax_x)], [0.0, np.dot([0.1, 0, 0], ax_y)], '--k', alpha=0.3)
plt.plot([0.0, np.dot([0, -0.1, 0], ax_x)], [0.0, np.dot([0, -0.1, 0], ax_y)], '--k', alpha=0.1)
plt.plot([0.0, np.dot([0, 0, -0.1], ax_x)], [0.0, np.dot([0, 0, -0.1], ax_y)], '--k', alpha=0.1)
plt.plot([0.0, pos[0]], [0.0, pos[1]], '-k', alpha=0.5)
plt.plot([pos[0], pos[0] + l*axis[0]], [pos[1], pos[1] + l*axis[1]],
linestyle=style, marker=marker, color=color, markevery=2, linewidth=4)
plt.xlabel('')
plt.ylabel('')
plt.xlim([-0.2, 0.2])
plt.ylim([-0.2, 0.2])
plt.xticks([])
plt.yticks([])
plt.tight_layout()
# ---
# # Examples
# ---
# ## Example for quadrotor
# +
# Geometry
width = 0.23
length = 0.23
geom = pd.DataFrame({ 'x':[-0.5*width, 0.5*width, 0.5*width, -0.5*width ],
'y':[-0.5*length, -0.5*length, 0.5*length, 0.5*length],
'z':[0.0, 0.0, 0.0, 0.0 ],
'ax':[0.0, 0.0, 0.0, 0.0 ],
'ay':[0.0, 0.0, 0.0, 0.0 ],
'az':[-1.0, -1.0, -1.0, -1.0 ],
'dir':[1.0, -1.0, 1.0, -1.0 ],
'ct':[1.0, 1.0, 1.0, 1.0 ],
'cm':[0.015, 0.015, 0.015, 0.015 ]}, # prop diameter=0.15 -> cm = 0.1*0.15*ct
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print("A")
print(A.round(2))
print('\nMix:')
B.round(2)
# -
# # Example for quadrotor with tilted motors
# +
# Geometry
s45 = np.sin(np.deg2rad(45))
c45 = np.cos(np.deg2rad(45))
s25 = np.sin(np.deg2rad(25))
c25 = np.cos(np.deg2rad(25))
geom = pd.DataFrame({ 'x':np.array([ 0.1, 0.1, -0.1, -0.1]),
'y':np.array([ 0.1, -0.1, -0.1, 0.1]),
'z':[0.0 for _ in range(4) ],
'ax':np.array([ 1, 1,-1,-1]) * s25 * c45,
'ay':np.array([-1, 1, 1,-1]) * s25 * c45,
'az':[-1.0, -1.0, -c25, -c25 ],
'dir':[(-1.0)**(i+1) for i in range(4) ],
'ct':[1.0 for _ in range(4) ],
'cm':[0.015 for _ in range(4) ] # prop diameter=0.15 -> cm = 0.1*0.15*ct
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print('\nNormalized Mix (as in paparazzi):')
B_norm = B.abs().max(axis=0)
B_norm[np.abs(B_norm)<1e-3] = 1
B_papa = (255 * B / B_norm).round()
print(B_papa)
print('\nNormalized Mix (as in PX4):')
B_norm = B.abs().max(axis=0)
# Same scale on roll and pitch
B_norm.Roll = max(B_norm.Roll, B_norm.Pitch)
B_norm.Pitch = B_norm.Roll
# Same scale on x, y and z thrust
B_norm.X = max(B_norm.X, B_norm.Y, B_norm.Z)
B_norm.Y = B_norm.X
B_norm.Z = B_norm.X
B_norm[np.abs(B_norm)<1e-3] = 1
B_px4 = (1.0 * B / B_norm).round(2)
print(B_px4)
print('\nMix:')
B.round(2)
# -
# # Example for hexacopter
# +
# Geometry
l = 0.16
thetas = np.arange(0, 2*np.pi, np.pi/3) + np.pi/6.0
geom = pd.DataFrame({ 'x':[l * np.cos(theta) for theta in thetas ],
'y':[l * np.sin(theta) for theta in thetas ],
'z':[0.0 for _ in thetas ],
'ax':[0.0 for _ in thetas ],
'ay':[0.0 for _ in thetas ],
'az':[-1.0 for _ in thetas ],
'dir':[-1+2*(i%2) for i,_ in enumerate(thetas)],
'ct':[1.0 for _ in thetas ],
'cm':[0.015 for _ in thetas ], # prop diameter=0.15 -> cm = 0.1*0.15*ct
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print("A")
print(A.round(2))
print('\nMix:')
B.round(2)
# -
# ## Example for hexacopter in V shape (same example as paparazzi)
# +
# Geometry
geom = pd.DataFrame({ 'x':np.array([-0.35, -0.35, 0.0, 0.0, 0.35, 0.35])*0.4,
'y':np.array([ 0.17, -0.17, 0.25, -0.25, 0.33, -0.33])*0.4,
'z':[0.0 for _ in range(6) ],
'ax':[0.0 for _ in range(6) ],
'ay':[0.0 for _ in range(6) ],
'az':[-1.0 for _ in range(6) ],
'dir':[-1+2*(((i+1)//2)%2) for i in range(6)],
'ct':[1.0 for _ in range(6) ],
'cm':[0.015 for _ in range(6) ] # prop diameter=0.15 -> cm = 0.1*0.15*ct
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print('\nNormalized Mix (as in paparazzi):')
B_norm = B.abs().max(axis=0)
B_norm[np.abs(B_norm)<1e-3] = 1
B_papa = (256 * B / B_norm).round()
print(B_papa)
print('\nMix:')
B.round(2)
# -
# ## Example for hexacopter in H shape
# +
# Geometry
geom = pd.DataFrame({ 'x':np.array([ 0.000, -0.000, 0.050, -0.050, 0.050, -0.050]),
'y':np.array([ 0.066, -0.066, -0.066, 0.066, 0.066, -0.066]),
'z':[0.0 for _ in range(6) ],
'ax':[0.0 for _ in range(6) ],
'ay':[0.0 for _ in range(6) ],
'az':[-1.0 for _ in range(6) ],
'dir':np.array([1, -1, 1, -1, -1, 1]),
'ct':[1.0 for _ in range(6) ],
'cm':[0.015 for _ in range(6) ] # prop diameter=0.15 -> cm = 0.1*0.15*ct
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print('\nNormalized Mix (as in paparazzi):')
B_norm = B.abs().max(axis=0)
B_norm[np.abs(B_norm)<1e-3] = 1
B_papa = (255 * B / B_norm).round()
# B_papa = (1.0 * B / B_norm).round(2)
print(B_papa)
print('\nNormalized Mix (as in PX4):')
B_norm = B.abs().max(axis=0)
# Same scale on roll and pitch
B_norm.Roll = max(B_norm.Roll, B_norm.Pitch)
B_norm.Pitch = B_norm.Roll
# Same scale on x, y and z thrust
B_norm.X = max(B_norm.X, B_norm.Y, B_norm.Z)
B_norm.Y = B_norm.X
B_norm.Z = B_norm.X
B_norm[np.abs(B_norm)<1e-3] = 1
B_px4 = (1.0 * B / B_norm).round(2)
print(B_px4)
print('\nMix:')
B.round(2)
# -
# ## Example for holonomic hexacopter
# +
# Geometry
t30 = np.deg2rad(30)
t60 = np.deg2rad(60)
thetas = np.arange(-np.pi, np.pi, t60)
l = 0.16
h = 0.5*l * np.sin(t30)
geom = pd.DataFrame({ 'x':[l * np.cos(t30) * np.cos(theta) for theta in thetas ],
'y':[l * np.cos(t30) * np.sin(theta) for theta in thetas ],
'z':[h * (-1+2*((i+0)%2)) for i,_ in enumerate(thetas) ],
'ax':[-np.sin(t30)*np.cos(theta)*(-1+2*((i+1)%2)) for i,theta in enumerate(thetas) ],
'ay':[-np.sin(t30)*np.sin(theta)*(-1+2*((i+1)%2)) for i,theta in enumerate(thetas) ],
'az':[-np.cos(t30) for _ in thetas ],
'dir':[-1+2*((i+1)%2) for i,_ in enumerate(thetas)],
'ct':[1.0 for _ in thetas ],
'cm':[0.015 for _ in thetas ]
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print('\nMix:')
B.round(2)
# -
# ---
# ### Example for holonomic +4 octo with rotors tilted towards center
# +
# Geometry
t30 = np.deg2rad(30)
t45 = np.deg2rad(45)
t90 = np.deg2rad(90)
t135 = np.deg2rad(135)
l = 0.16
# h = 0.0
h = 1.0 * l * np.sin(t30)
thetas = np.array([t45, t45, t135, t135, -t135, -t135, -t45, -t45])
# thetas = np.array([0.0, 0.0, t90, t90, 2*t90, 2*t90, -t90, -t90])
# thetas = np.array([-t45/4, t45/4, t90-t45/4, t90+t45/4, 2*t90-t45/4, 2*t90+t45/4, -t90-t45/4, -t90+t45/4])
# thetas = np.array([-t45/2, t45/2, t90-t45/2, t90+t45/2, 2*t90-t45/2, 2*t90+t45/2, -t90-t45/2, -t90+t45/2])
geom = pd.DataFrame({ 'x':[l * np.cos(theta) for theta in thetas ],
'y':[l * np.sin(theta) for theta in thetas ],
'z':[h * (-1+2*(i%2)) for i,_ in enumerate(thetas) ],
'ax':[np.sin(t30)*np.cos(theta)*(-1+2*((1*i+0)%2)) for i,theta in enumerate(thetas) ],
'ay':[np.sin(t30)*np.sin(theta)*(-1+2*((1*i+0)%2)) for i,theta in enumerate(thetas) ],
'az':[-np.cos(t30) for _ in thetas ],
'dir':[-1+2*(((2*(i+0))//2)%2) for i,_ in enumerate(thetas)],
'ct':[1.0 for _ in thetas ],
'cm':[0.015 for _ in thetas ]
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print('\nMix:')
B.round(2)
# -
# ---
# ### Example for holonomic x4 octo with rotors tilted sideways
# +
# Thrust and moment coefficients
Ct = 1.0
Cm = Ct / 10.0
# Geometry
t30 = np.deg2rad(30)
t45 = np.deg2rad(45)
t90 = np.deg2rad(90)
t135 = np.deg2rad(135)
l = 0.16
h = 0.0#l * np.sin(t30)
# h = 0.5*l * np.sin(t30)
thetas = np.array([t45, t45, t135, t135, -t135, -t135, -t45, -t45])
# thetas = np.array([-t45/2, t45/2, t90-t45/2, t90+t45/2, 2*t90-t45/2, 2*t90+t45/2, -t90-t45/2, -t90+t45/2])
geom = pd.DataFrame({ 'x':[l * np.cos(theta) for theta in thetas ],
'y':[l * np.sin(theta) for theta in thetas ],
'z':[h * (-1+2*((i+0)%2)) for i,_ in enumerate(thetas) ],
'ax':[np.sin(t30)*np.cos(theta + t90*(-1+2*(((2*i+1)//2)%2))) for i,theta in enumerate(thetas) ],
'ay':[np.sin(t30)*np.sin(theta + t90*(-1+2*(((2*i+1)//2)%2))) for i,theta in enumerate(thetas) ],
'az':[-np.cos(t30) for _ in thetas ],
'dir':[(-1+2*(((2*i)//2)%2)) for i,_ in enumerate(thetas)],
'ct':[1.0 for _ in thetas ],
'cm':[0.015 for _ in thetas ]
},
columns = ['x', 'y', 'z', 'ax', 'ay', 'az', 'dir', 'ct', 'cm'])
# Matrices
A, B = compute_torque_thrust_matrices(geom)
plot(geom)
print_actuation_effort(B)
print('\nMix:')
B.round(2)
# -
| servo_mix_matrix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('base')
# language: python
# name: python3
# ---
# +
import numpy as np
from random import randint
matrix1 = np.empty((12,12), int)
matrix2 = np.empty((12,12), int)
for i in range(0, 12):
for j in range(0,12):
matrix1[i][j] = randint(0,100)
matrix2[i][j] = randint(0,100)
print(matrix1)
optional_row = matrix1[1]
optional_col = matrix1[:, 4]
print(optional_row)
print(optional_col)
print(matrix1[matrix1 > 5])
# -
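# As an aside, the nested loops above can be replaced by a single vectorized call.
# `numpy`'s `integers` with an exclusive upper bound of 101 matches `randint(0, 100)`,
# which is inclusive on both ends:

```python
import numpy as np

rng = np.random.default_rng()
# randint(0, 100) is inclusive on both ends, so the exclusive upper bound is 101
matrix1 = rng.integers(0, 101, size=(12, 12))
matrix2 = rng.integers(0, 101, size=(12, 12))
print(matrix1[1])        # one row
print(matrix1[:, 4])     # one column
print(matrix1[matrix1 > 5])
```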
| hw.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/johnpharmd/DS-Unit-2-Sprint-3-Advanced-Regression/blob/master/DS_Unit_2_Sprint_Challenge_3(1).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ayDccRP01GJD" colab_type="text"
# # Data Science Unit 2 Sprint Challenge 3
#
# ## Logistic Regression and Beyond
#
# In this sprint challenge you will fit a logistic regression modeling the probability of an adult having an income above 50K. The dataset is available at UCI:
#
# https://archive.ics.uci.edu/ml/datasets/adult
#
# Your goal is to:
#
# 1. Load, validate, and clean/prepare the data.
# 2. Fit a logistic regression model
# 3. Answer questions based on the results (as well as a few extra questions about the other modules)
#
# Don't let the perfect be the enemy of the good! Manage your time, and make sure to get to all parts. If you get stuck wrestling with the data, simplify it (if necessary, drop features or rows) so you're able to move on. If you have time at the end, you can go back and try to fix/improve.
#
# ### Hints
#
# It has a variety of features - some are continuous, but many are categorical. You may find [pandas.get_dummies](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) (a method to one-hot encode) helpful!
#
# The features have dramatically different ranges. You may find [sklearn.preprocessing.minmax_scale](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.minmax_scale.html#sklearn.preprocessing.minmax_scale) helpful!
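# A minimal sketch combining the two hints on a toy frame (the columns below are stand-ins, not the actual adult-dataset columns):

```python
import pandas as pd
from sklearn.preprocessing import minmax_scale

# Toy stand-in for the adult data (assumption: the real columns differ)
toy = pd.DataFrame({'age': [25, 38, 52],
                    'workclass': ['Private', 'State-gov', 'Private']})

encoded = pd.get_dummies(toy, columns=['workclass'])  # one-hot encode the categorical column
encoded['age'] = minmax_scale(encoded['age'])         # scale the numeric column to [0, 1]
print(encoded.columns.tolist())
```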
# + [markdown] id="U22R1Ud51hxb" colab_type="text"
# ## Part 1 - Load, validate, and prepare data
#
# The data is available at: https://archive.ics.uci.edu/ml/datasets/adult
#
# Load it, name the columns, and make sure that you've loaded the data successfully. Note that missing values for categorical variables can essentially be considered another category ("unknown"), and may not need to be dropped.
#
# You should also prepare the data for logistic regression - one-hot encode categorical features as appropriate.
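# One hedged sketch of the "missing values as another category" idea - note it is an assumption worth verifying against your load that the raw adult CSV marks missing categoricals with `"?"` (sometimes with a leading space) rather than NaN:

```python
import pandas as pd

# Treat "?" markers and genuine NaNs as their own "Unknown" category.
s = pd.Series(["Private", "?", "State-gov", None])
cleaned = s.replace("?", "Unknown").fillna("Unknown")
print(cleaned.tolist())  # → ['Private', 'Unknown', 'State-gov', 'Unknown']
```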
# + id="SeOByIkht-NS" colab_type="code" colab={}
# TODO - your work!
# + [markdown] id="RT1LFnFO1lo6" colab_type="text"
# ## Part 2 - Fit and present a Logistic Regression
#
# Your data should now be in a state to fit a logistic regression. Use scikit-learn, define your `X` (independent variable) and `y`, and fit a model.
#
# Then, present results - display coefficients in as interpretable a way as you can (hint - scaling the numeric features will help, as it will at least make coefficients more comparable to each other). If you find it helpful for interpretation, you can also generate predictions for cases (like our 5-year-old rich kid on the Titanic) or make visualizations - but the goal is for your exploration to let you answer the questions, not any particular plot (i.e. don't worry about polishing it).
#
# It is *optional* to use `train_test_split` or validate your model more generally - that is not the core focus for this week. So, it is suggested you focus on fitting a model first, and if you have time at the end you can do further validation.
# + id="s7fTRDXguD7N" colab_type="code" colab={}
# TODO - your work!
# + [markdown] id="BkIa-Sa21qdC" colab_type="text"
# ## Part 3 - Analysis, Interpretation, and Questions
#
# ### Based on your above model, answer the following questions
#
# 1. What are 3 features positively correlated with income above 50k?
# 2. What are 3 features negatively correlated with income above 50k?
# 3. Overall, how well does the model explain the data and what insights do you derive from it?
#
# *These answers count* - that is, make sure to spend some time on them, connecting to your analysis above. There is no single right answer, but as long as you support your reasoning with evidence you are on the right track.
#
# Note - scikit-learn logistic regression does *not* automatically perform a hypothesis test on coefficients. That is OK - if you scale the data they are more comparable in weight.
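# As a hedged sketch (the names and the tiny synthetic data below are illustrative, not the census features), pairing the fitted coefficients with column names makes them much easier to read:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["f1", "f2", "f3"])
y = (X["f1"] - X["f3"] > 0).astype(int)  # f1 pushes the odds up, f3 down

model = LogisticRegression().fit(X, y)
coefs = pd.Series(model.coef_[0], index=X.columns).sort_values()
print(coefs)  # most negative at the top, most positive at the bottom
```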
#
# ### Match the following situation descriptions with the model most appropriate to addressing them
#
# In addition to logistic regression, a number of other approaches were covered this week. Pair them with the situations they are most appropriate for, and briefly explain why.
#
# Situations:
# 1. You are given data on academic performance of primary school students, and asked to fit a model to help predict "at-risk" students who are likely to receive the bottom tier of grades.
# 2. You are studying tech companies and their patterns in releasing new products, and would like to be able to model and predict when a new product is likely to be launched.
# 3. You are working on modeling expected plant size and yield with a laboratory that is able to capture fantastically detailed physical data about plants, but only of a few dozen plants at a time.
#
# Approaches:
# 1. Ridge Regression
# 2. Quantile Regression
# 3. Survival Analysis
# + [markdown] id="Yjj0sseiuHib" colab_type="text"
# **TODO - your answers!**
| DS_Unit_2_Sprint_Challenge_3(1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# !python3 -m pip install pillow
from PIL import Image, ImageOps
from IPython.display import display
import random
import json
# +
### NECESSARY INFO
backgrounds = ["Red", "Orange", "Yellow", "Lime", "Green", "Sky", "Indigo", "Purple", "Pink", "Slate", "Stone", "Gradient1", "Gradient2", "Gradient3", "Gradient4"] #15 types
backgroundweights = [18, 16, 14, 14, 14, 7.5, 6.5, 3, 2.5, 2.25, 2.25] # unused: random.choice below ignores weights (and this list has 11 entries for 15 options)
heads = ["Red", "Orange", "Yellow", "Lime", "Green", "Sky", "Indigo", "Purple", "Pink", "Slate", "Stone", "Sun", "Pandora", "Gunmetal", "Lavendar"] #15 types
headweights = [20.5, 19, 17.5, 16.75, 9.5, 6.5, 0.5, 1.75, 0.75, 0.25, 1.5, 1.65, 0.75, 1, 0.5, 0.35, 1.25] # unused (17 entries for 15 options)
faces = ["Default", "Content", "Happy", "Grumpy", "Happy Note", "Cat Note"] #6 types
Faceweights = [8, 6, 6, 4, 3, 4, 3, 6, 9, 9, 9, 9, 1, 2, 3, 5, 2, 1, 5, 0.5, 0.5, 4] # unused (22 entries for 6 options)
bodies = [
"White Sweater", "Pink Sweater", "Purple Sweater", "Indigo Sweater",
"Stone Hoodie", "Green Hoodie", "Pink Hoodie",
"Green Suit and Tie", "Blue Suit and Tie", "Military Uniform",
"Red Military Uniform", "Green Military Uniform",
"Blue Polo", "Polo Combo1", "Polo Combo2", "Black Purple Hoodie"] #16 types
Bodyweights = [7.5, 5.5, 1.5, 6, 4.5, 4, 4, 3.5, 2, 5.5, 5, 4.25, 3.5, 0.5, 4, 4.5, 2, 1, 5.5, 2.5, 0.5, 3.5, 0.5, 0.25, 6.5, 4.5, 2, 2.5, 3] # unused (29 entries for 16 options)
headgears = ["Small Daisy", "Rice Hat", "Default", "Cap", "Pirate Bandana", "Prison Hat",
"Soviet Hat", "Russian Hat", "Princess",
"Witchs Hat", "Wizards Hat", "Bear Ears"] # 12 types
headgearweights = [4.2, 3.8, 3, 3.6, 2.5, 3.7, 3.2, 2.1, 2.1, 4.3, 4.2, 3.8, 3.7, 4.2, 1.9, 3.2, 2.1, 3.1, 3.5, 1.4, 4.2, 3, 2.6, 4, 0.2, 4.2, 1.7, 1, 3.8, 0.6, 0.4, 0.3, 0.75, 0.75, 3.5, 2.5, 2.9] # unused (37 entries for 12 options)
backgroundfiles = {
"Red": "bg1",
"Orange": "bg2",
"Yellow": "bg3",
"Lime": "bg4",
"Green": "bg5",
"Sky": "bg6",
"Indigo": "bg7",
"Purple": "bg8",
"Pink": "bg9",
"Slate": "bg10",
"Stone": "bg11",
"Gradient1": "bg12",
"Gradient2": "bg13",
"Gradient3": "bg14",
"Gradient4": "bg15"
}
headfiles = {
"Red": "hd1",
"Orange": "hd2",
"Yellow": "hd3",
"Lime": "hd4",
"Green": "hd5",
"Sky": "hd6",
"Indigo": "hd7",
"Purple": "hd8",
"Pink": "hd9",
"Slate": "hd10",
"Stone": "hd11",
"Sun":"hd12",
"Pandora":"hd13",
"Gunmetal":"hd14",
"Lavendar":"hd15",
}
facefiles = {
"Default" : "fc1",
"Content" : "fc2",
"Happy" : "fc3",
"Grumpy" : "fc4",
"Happy Note" : "fc5",
"Cat Note" : "fc6"
}
bodyfiles = {
"White Sweater": "bd1",
"Pink Sweater": "bd2",
"Purple Sweater": "bd3",
"Indigo Sweater": "bd4",
"Stone Hoodie": "bd5",
"Green Hoodie": "bd6",
"Pink Hoodie": "bd7",
"Green Suit and Tie": "bd8",
"Blue Suit and Tie": "bd9",
"Military Uniform": "bd10",
"Red Military Uniform": "bd11",
"Green Military Uniform": "bd12",
"Blue Polo": "bd13",
"Polo Combo1": "bd14",
"Polo Combo2": "bd15",
"Black Purple Hoodie": "bd16",
}
headgearfiles = {
"Small Daisy" : "he1",
"Rice Hat" : "he2",
"Default" : "he3",
"Cap": "he4",
"Pirate Bandana": "he5",
"Prison Hat": "he6",
"Monarchs Crown": "he7",
"Soviet Hat": "he8",
"Russian Hat": "he9",
"Princess": "he10",
"Witchs Hat": "he11",
"Wizards Hat": "he12",
"Bear Ears": "he13"
}
# +
## TRAIT GENERATION
TOTAL_BANANAS = 1000 ## 2222
traits = []
def createCombo():
    trait = {}
    trait["Background"] = random.choice(backgrounds)
    trait["Head"] = random.choice(heads)
    trait["Face"] = random.choice(faces)
    trait["Body"] = random.choice(bodies)
    trait["Head Gear"] = random.choice(headgears)
    if trait in traits:
        return createCombo()  # re-roll on a duplicate combination
    else:
        return trait
for i in range(TOTAL_BANANAS):
    newtraitcombo = createCombo()
    traits.append(newtraitcombo)
# +
## ARE ALL BANANAS UNIQUE? seen.append(i) returns None (falsy), so the 'or' records each new item as a side effect; a repeat is caught by 'i in seen' and makes any() True.
def allUnique(x):
    seen = list()
    return not any(i in seen or seen.append(i) for i in x)
print(allUnique(traits))
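# An equivalent but more transparent check: since dicts are unhashable, serialize each trait to a canonical JSON string and let a set detect duplicates (O(n) instead of an O(n²) list scan):

```python
import json

def all_unique(items):
    """True if no two dicts in `items` are equal.

    Dicts are not hashable, so serialize each one to a canonical
    JSON string first and compare set size to list size.
    """
    serialized = [json.dumps(d, sort_keys=True) for d in items]
    return len(set(serialized)) == len(serialized)

print(all_unique([{"a": 1}, {"a": 2}]))  # → True
print(all_unique([{"a": 1}, {"a": 1}]))  # → False
```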
# +
# ADD TOKEN IDS TO JSON
for i, item in enumerate(traits):
    item["tokenId"] = i
# +
# PRINT THE TRAIT METADATA
print(json.dumps(traits, indent=2))
# +
# GET TRAIT COUNTS
backgroundcounts = dict.fromkeys(backgrounds, 0)
headcounts = dict.fromkeys(heads, 0)
facecounts = dict.fromkeys(faces, 0)
bodycounts = dict.fromkeys(bodies, 0)
headgearcounts = dict.fromkeys(headgears, 0)
oneofonecounts = 0
signatures = [137,883,1327,1781,2528,2763,3833,5568,5858,6585,6812,7154,8412]
for banana in traits:
    if banana["tokenId"] in signatures:
        oneofonecounts += 1
    else:
        # print(banana)
        backgroundcounts[banana["Background"]] += 1
        headcounts[banana["Head"]] += 1
        facecounts[banana["Face"]] += 1
        bodycounts[banana["Body"]] += 1
        headgearcounts[banana["Head Gear"]] += 1
print("background:", backgroundcounts)
print("head:", headcounts)
print("face:", facecounts)
print("body:", bodycounts)
print("headgear:", headgearcounts)
print("1/1:",oneofonecounts)
# +
# WRITE METADATA TO JSON FILE
with open('traits2.json', 'w') as outfile:
    json.dump(traits, outfile, indent=4)
# -
backgroundfiles
# +
#### IMAGE GENERATION
for item in traits:
    im1 = Image.open(f'./Backgrounds/{backgroundfiles[item["Background"]]}.png').convert('RGBA')
    im2 = Image.open(f'./Bodies/{bodyfiles[item["Body"]]}.png').convert('RGBA')
    im3 = Image.open(f'./Heads/{headfiles[item["Head"]]}.png').convert('RGBA')
    im4 = Image.open(f'./Faces/{facefiles[item["Face"]]}.png').convert('RGBA')
    im5 = Image.open(f'./Headgear/{headgearfiles[item["Head Gear"]]}.png').convert('RGBA')
    # Create each composite
    com1 = Image.alpha_composite(im1.resize((797, 797)), im2)
    com2 = Image.alpha_composite(com1, im3)
    com3 = Image.alpha_composite(com2, im4)
    com4 = Image.alpha_composite(com3, im5)
    # Convert to RGB
    rgb_im = com4.convert('RGB')
    # display(rgb_im.resize((400,400), Image.NEAREST))
    file_name = str(item["tokenId"]) + ".jpg"
    rgb_im.save("./output/" + file_name)
    print(f'{str(item["tokenId"])} done')
# +
import os
frames = list()
randomImagesIdx = random.sample(range(0,100), 6)
print(randomImagesIdx)
# creating a list of file locations to pass to imagemagick
for x in range(81, 86):
    frames.append(f'./output/{x}.jpg')
os.system(f"convert -delay 40 -loop 0 {' '.join(frames)} ../../../animateddoods.gif")
os.system(f"montage -geometry +0+0 {' '.join(frames[:3])} ../../../doodsbanner.jpg")
# -
# +
# READ METADATA IF YOU ALREADY HAVE A JSON FILE
with open("traitsfinal.json", 'r') as f:
    traits = json.load(f)
# -
traits[2050]
flip_ids = [8057, 8835, 8880, 6612, 5788, 8333, 2173, 8704, 7351, 6671, 4847, 5575, 4864, 2418, 2944, 5845, 552, 1161, 2390, 4383, 8126, 439, 1055, 3435, 7957, 1209, 6438, 6578, 6244, 3490, 4149, 8510, 113, 7193, 5728, 4731, 810, 2949, 3158, 1475, 1242, 4137, 7112, 5852, 7845, 3493, 377, 4272, 5494, 2919, 6818, 2828, 1089, 4914, 5054, 160, 3991, 7625, 6265, 8160, 7331, 4802, 1442, 3850, 171, 3469, 193, 7171, 6328, 5852, 6504, 6639, 2637, 1918, 8607, 4460, 5257, 926, 6114, 8428, 8173, 4565, 5857, 2021, 430, 7708, 4799, 8065, 1609, 4807, 5802, 3371, 8722, 5594, 3034, 2087, 3684, 7878, 7908]
for m in flip_ids:
    img = Image.open(f'./output/{m}.jpg')
    im_mirror = ImageOps.mirror(img)
    im_mirror.save(f'./flipped/{m}.jpg')
# +
balds = [4378, 7459, 6945, 4644, 5999, 6337, 6675, 4179, 8467, 5482, 4531, 2016, 3790, 1305, 355]
for bald in balds:
    traits[bald]["Head Gear"] = "None"
#### BALDS
for bald in balds:
    item = traits[bald]
    im1 = Image.open(f'./Backgrounds/{backgroundfiles[item["Background"]]}.png').convert('RGBA')
    im2 = Image.open(f'./Bodies/{bodyfiles[item["Body"]]}.png').convert('RGBA')
    im3 = Image.open(f'./Heads/{headfiles[item["Head"]]}.png').convert('RGBA')
    im4 = Image.open(f'./Faces/{facefiles[item["Face"]]}.png').convert('RGBA')
    # im5 = Image.open(f'./Headgear/{headgearfiles[item["Head Gear"]]}.png').convert('RGBA')
    # Create each composite (no headgear layer for balds)
    com1 = Image.alpha_composite(im1.resize((797, 797)), im2)
    com2 = Image.alpha_composite(com1, im3)
    com3 = Image.alpha_composite(com2, im4)
    # com4 = Image.alpha_composite(com3, im5)
    # Convert to RGB
    rgb_im = com3.convert('RGB')
    # display(rgb_im.resize((400,400), Image.NEAREST))
    file_name = str(item["tokenId"]) + ".jpg"
    rgb_im.save("./balds/" + file_name)
    print(f'{str(item["tokenId"])} done')
# -
madboogielogos = [ 1334, 8120, 3430, 3439, 4175, 7710, 1842, 2428, 3553, 4764, 3439 ]
for logo in madboogielogos:
    im1 = Image.open(f'./output2/{logo}.jpg').convert('RGBA')
    im2 = Image.open('./Logo_static.png').convert('RGBA')
    com1 = Image.alpha_composite(im1, im2).convert('RGB')
    com1.save(f'./boogielogo/{logo}.jpg')
# +
## ADD IPFS HASH TO JSON, ASSUMING YOU HAVE A JSON FILE LIKE THIS:
'''
{
"<KEY>": "0",
"<KEY>": "1",
"<KEY>": "10",
"<KEY>": "11",
"<KEY>": "12",
"<KEY>": "13",
"<KEY>": "14",
"<KEY>": "15",
"<KEY>": "16",
"Qmd98fff2dWLxuUbt2sCMmtcGMGHh93Cfc8Qi3nrieG1FC": "17",
"<KEY>": "18",
"<KEY>": "19",
"<KEY>": "2",
"<KEY>": "20",
"<KEY>": "3",
"<KEY>": "4",
"<KEY>": "5",
"<KEY>": "6",
"<KEY>": "7",
"<KEY>": "8",
"<KEY>": "9"
...
}
'''
with open("jsonlocation", 'r') as f:
    hashes = json.load(f)
for k, v in hashes.items():
    print(k, v)
    traits[int(v)]["imageIPFS"] = k  # token ids are stored as strings in the hash map
# -
traits
with open('traitsfinal.json', 'w') as outfile:
json.dump(traits, outfile, indent=4)
| image_generation/BANANA GENERATION CODE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
import pandas as pd
# Read test data
df = pd.read_csv('../data/sst/sst_test.txt', sep='\t', header=None,
names=['truth', 'text'],
)
df['truth'] = df['truth'].str.replace('__label__', '')
df['truth'] = df['truth'].astype(int).astype('category')
print(df.dtypes)
df.head()
# ### Find the length of each text in the data
df['len'] = df['text'].str.len()
df.head()
# ### Find shortest length text with rating of 1
df = df.sort_values(['len'], ascending=True)
df[df['truth'] == 1].head(50)
| notebooks/explore_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nlpbook
# language: python
# name: nlpbook
# ---
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
seed = 1337
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# -
# ### Example 4-1
class MultilayerPerceptron(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        """
        Args:
            input_dim (int): the size of the input vectors
            hidden_dim (int): the output size of the first Linear layer
            output_dim (int): the output size of the second Linear layer
        """
        super(MultilayerPerceptron, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x_in, apply_softmax=False):
        """The forward pass of the MLP

        Args:
            x_in (torch.Tensor): an input data tensor.
                x_in.shape should be (batch, input_dim)
            apply_softmax (bool): a flag for the softmax activation
                should be false if used with the Cross Entropy losses
        Returns:
            the resulting tensor. tensor.shape should be (batch, output_dim)
        """
        intermediate = F.relu(self.fc1(x_in))
        output = self.fc2(intermediate)

        if apply_softmax:
            output = F.softmax(output, dim=1)
        return output
# ### Example 4-2
# +
batch_size = 2 # number of samples input at once
input_dim = 3
hidden_dim = 100
output_dim = 4
# Initialize model
mlp = MultilayerPerceptron(input_dim, hidden_dim, output_dim)
print(mlp)
# -
# ### Example 4-3
def describe(x):
    print("Type: {}".format(x.type()))
    print("Shape/size: {}".format(x.shape))
    print("Values: \n{}".format(x))
# Inputs
x_input = torch.rand(batch_size, input_dim)
describe(x_input)
y_output = mlp(x_input, apply_softmax=False)
describe(y_output)
# ### Example 4-4
y_output = mlp(x_input, apply_softmax=True)
describe(y_output)
# ### Example 4-13
# +
class MultilayerPerceptron(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        """
        Args:
            input_dim (int): the size of the input vectors
            hidden_dim (int): the output size of the first Linear layer
            output_dim (int): the output size of the second Linear layer
        """
        super(MultilayerPerceptron, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x_in, apply_softmax=False):
        """The forward pass of the MLP

        Args:
            x_in (torch.Tensor): an input data tensor.
                x_in.shape should be (batch, input_dim)
            apply_softmax (bool): a flag for the softmax activation
                should be false if used with the Cross Entropy losses
        Returns:
            the resulting tensor. tensor.shape should be (batch, output_dim)
        """
        intermediate = F.relu(self.fc1(x_in))
        output = self.fc2(F.dropout(intermediate, p=0.5))

        if apply_softmax:
            output = F.softmax(output, dim=1)
        return output
batch_size = 2 # number of samples input at once
input_dim = 3
hidden_dim = 100
output_dim = 4
# Initialize model
mlp = MultilayerPerceptron(input_dim, hidden_dim, output_dim)
print(mlp)
y_output = mlp(x_input, apply_softmax=False)
describe(y_output)
# -
# ### Example 4-14
batch_size = 2
one_hot_size = 10
sequence_width = 7
data = torch.randn(batch_size, one_hot_size, sequence_width)
conv1 = nn.Conv1d(in_channels=one_hot_size, out_channels=16, kernel_size=3)
intermediate1 = conv1(data)
print(data.size())
print(intermediate1.size())
# ### Example 4-15
# +
conv2 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
conv3 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
intermediate2 = conv2(intermediate1)
intermediate3 = conv3(intermediate2)
print(intermediate2.size())
print(intermediate3.size())
# -
y_output = intermediate3.squeeze()
print(y_output.size())
intermediate2.mean(dim=0).mean(dim=1).sum()
# ### Example 4-16
# +
# Method 2 of reducing to feature vectors
print(intermediate1.view(batch_size, -1).size())
# Method 3 of reducing to feature vectors
print(torch.mean(intermediate1, dim=2).size())
# print(torch.max(intermediate1, dim=2).values.size())  # torch.max with dim= returns a (values, indices) namedtuple
# print(torch.sum(intermediate1, dim=2).size())
# -
# ### Example 4-22
#
# The full model will not be reproduced here. Instead, we will just show batch norm being used.
# +
conv1 = nn.Conv1d(in_channels=one_hot_size, out_channels=16, kernel_size=3)
conv2 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
conv3 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
conv1_bn = nn.BatchNorm1d(num_features=16)
conv2_bn = nn.BatchNorm1d(num_features=32)
intermediate1 = conv1_bn(F.relu(conv1(data)))
intermediate2 = conv2_bn(F.relu(conv2(intermediate1)))
intermediate3 = conv3(intermediate2)
print(intermediate1.size())
print(intermediate2.size())
print(intermediate3.size())
# -
# Note: BatchNorm computes its statistics over the batch and sequence dimensions. In other words, the input to each batchnorm1d is a tensor of size `(B, C, L)` (where b=batch, c=channels, and l=length). Each `(B, L)` slice should have 0-mean. This reduces covariate shift.
intermediate2.mean(dim=(0, 2))
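# A minimal self-contained check of that claim (a fresh module in its default train mode; the shapes and seed below are arbitrary demo choices):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(num_features=4)   # 4 channels; modules default to train mode
x = torch.randn(8, 4, 10) * 3 + 5     # shifted/scaled input of shape (B=8, C=4, L=10)
out = bn(x)

# Each channel is normalized over its (B, L) slice: mean ~0, std ~1.
print(out.mean(dim=(0, 2)))
print(out.std(dim=(0, 2)))
```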
#
# ## Bonus Examples
#
# In chapter 4, we cover convolutions. Below are code examples which instantiate the convolutions with various hyper parameter settings.
# +
x = torch.randn(1, 2, 3, 3)
describe(x)
conv1 = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=2)
describe(conv1.weight)
describe(conv1(x))
# +
x = torch.randn(1, 1, 3, 3)
describe(x)
conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=2)
describe(conv1.weight)
describe(conv1(x))
| chapters/chapter_4/Chapter-4-In-Text-Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: kloppy-venv
# language: python
# name: kloppy-venv
# ---
# # State
#
# While working with event data it can be convenient to also use the current score. This can be used to determine if a team is winning, losing or drawing. A more generic name for the score is 'state'.
#
# In this quickstart we'll look at how to use the `add_state` method of kloppy to add state to the events for later use.
#
# ## Loading some statsbomb data
#
# First we'll load Barcelona - Deportivo Alaves from the statsbomb open-data project.
# +
from kloppy import statsbomb
from kloppy.domain import EventType
dataset = statsbomb.load_open_data()
print([team.name for team in dataset.metadata.teams])
print(dataset.events[0].state)
# -
# ## Add state - score
#
# kloppy contains some default state builders: score, lineup and sequence. Let's have a look at the `score` state builder.
# +
dataset = dataset.add_state('score')
dataset.events[0].state
# -
# As you can see `state` is now filled with a score object. The object contains two attributes: `home` and `away`. Every event contains a score object which is automatically updated when a goal is scored.
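# Conceptually (an illustrative stand-in, not kloppy's actual `Score` class), an immutable score object that is replaced on each goal can be sketched as:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Score:
    home: int = 0
    away: int = 0

score = Score()
score = replace(score, home=score.home + 1)  # home team scores
print(score)  # → Score(home=1, away=0)
```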
#
# Now let's have a look at how we can use the score state. First we filter on only shots.
dataset = dataset.filter(lambda event: event.event_type == EventType.SHOT)
shots = dataset.events
len(shots)
for shot in shots:
    print(shot.state['score'], shot.player.team, '-', shot.player, '-', shot.result)
dataframe = dataset.to_pandas(additional_columns={
'home_score': lambda event: event.state['score'].home,
'away_score': lambda event: event.state['score'].away
})
dataframe
# Now filter the dataframe. We only want to see shots when we are winning by at least two goals difference.
dataframe[dataframe['home_score'] - dataframe['away_score'] >= 2]
# ## Add state - lineup
#
# We are able to add more state. In this example we'll look at adding lineup state.
# +
from kloppy import statsbomb
from kloppy.domain import EventType
dataset = statsbomb.load_open_data()
home_team, away_team = dataset.metadata.teams
# -
# Arturo Vidal is a substitute on the side of Barcelona. We add lineup to all events so we are able to filter out events where Vidal is on the pitch.
arturo_vidal = home_team.get_player_by_id(8206)
dataframe = (
dataset
.add_state('lineup')
.filter(lambda event: arturo_vidal in event.state['lineup'].players)
.to_pandas()
)
print(f"time on pitch: {dataframe['timestamp'].max() - dataframe['timestamp'].min()} seconds")
dataframe = (
dataset
.add_state('lineup')
.filter(lambda event: event.event_type == EventType.PASS and event.team == home_team)
.to_pandas(additional_columns={
'vidal_on_pitch': lambda event: arturo_vidal in event.state['lineup'].players
})
)
dataframe = dataframe.groupby(['vidal_on_pitch'])['success'].agg(['sum', 'count'])
dataframe['percentage'] = dataframe['sum'] / dataframe['count'] * 100
dataframe
# ## Add state - formation
#
# In this example we'll look at adding formation state to all shots.
# +
from kloppy import statsbomb
from kloppy.domain import EventType
dataset = statsbomb.load_open_data()
dataframe = (
dataset
.add_state('formation')
.filter(
lambda event: event.event_type == EventType.SHOT
)
.to_pandas(additional_columns={
'Team': lambda event: str(event.team),
'Formation': lambda event: str(
event.state['formation'].home
if event.team == dataset.metadata.teams[0]
else event.state['formation'].away
)
})
)
# -
dataframe_stats = (
dataframe
.groupby(['Team', 'Formation'])['success']
.agg(['sum', 'count'])
)
dataframe_stats['Percentage'] = (
dataframe_stats['sum']
/ dataframe_stats['count']
* 100
)
dataframe_stats.rename(
columns={
'sum': 'Goals',
'count': 'Shots'
}
)
| docs/examples/state.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cogdl
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn.functional as F
import sys
sys.path.append('..')
# # Load data
import dgl
import networkx
from grb.dataset.dataloader import DataLoader
dataset = DataLoader('cora')
dataset.adj
dataset.adj_tensor
# # Load model
from grb.model.gcn import GCN
model = GCN(3, [1433, 64, 64, 7], activation=F.elu)
model.load_state_dict(torch.load('../grb/model/saved_models/model_gcn_cora.pt'))
# # Model training
from grb.model.trainer import Trainer
adam = torch.optim.Adam(model.parameters(), lr=0.01)
nll_loss = F.nll_loss
device = 'cpu'
trainer = Trainer(dataset=dataset, optimizer=adam, loss=nll_loss, device=device)
trainer.set_config(n_epoch=100, eval_every=50, save_path='../grb/model/saved_models/model_gcn_cora.pt')
trainer.train(model)
# # Evaluation
adj = dataset.adj
adj_tensor = dataset.adj_tensor
features = dataset.features
labels = dataset.labels
pred = model.forward(features, adj_tensor)
pred_label = torch.argmax(pred, dim=1)
pred_label.shape
from grb.utils import evaluator
acc = evaluator.eval_acc(pred, labels, mask=dataset.test_mask)
acc
# # Attack
from grb.attack.speit import Speit
target_node = np.random.choice(np.arange(1000), 100)
config = {
    'n_inject': 100,
    'n_target_total': 1000,
    'target_node': target_node,
    'mode': 'random-inter',
    'lr': 0.01,
    'feat_lim_min': 0,
    'feat_lim_max': 1,
}
speit = Speit(dataset, n_epoch=100, n_inject=100, n_edge_max=100)
speit.set_config(**config)
adj_attack = speit.injection(target_node, config['mode'])
adj_attack
features_attack = speit.attack(model, features, adj, target_node)
speit.save_features(features_attack, './', 'features.npy')
speit.save_adj(adj_attack, './', 'adj.pkl')
| examples/speit_attack.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/buganart/unagan/blob/master/unagan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="l5hTlhspTbjX" colab={"base_uri": "https://localhost:8080/", "height": 579} cellView="form" outputId="a9db900d-e858-4f49-beae-bc6df7f8b0a7"
# @title Setup
# @markdown 1. Before starting please save the notebook in your drive by clicking on `File -> Save a copy in drive`
# @markdown 2. Check GPU, should be a Tesla V100 if you want to train it as fast as possible.
# @markdown 3. Mount google drive.
# @markdown 4. Log in to wandb.
# !nvidia-smi -L
import os
print(f"We have {os.cpu_count()} CPU cores.")
print()
try:
    from google.colab import drive, output
    IN_COLAB = True
except ImportError:
    from IPython.display import clear_output
    IN_COLAB = False
from pathlib import Path
if IN_COLAB:
    drive.mount("/content/drive/")
    if not Path("/content/drive/My Drive/IRCMS_GAN_collaborative_database").exists():
        raise RuntimeError(
            "Shortcut to our shared drive folder doesn't exist.\n\n"
            "\t1. Go to the google drive web UI\n"
            '\t2. Right click shared folder IRCMS_GAN_collaborative_database and click "Add shortcut to Drive"'
        )
clear = output.clear if IN_COLAB else clear_output
def clear_on_success(msg="Ok!"):
    if _exit_code == 0:
        clear()
        print(msg)
print()
print("Wandb installation and login ...")
# %pip install -q wandb
wandb_drive_netrc_path = Path("drive/My Drive/colab/.netrc")
wandb_local_netrc_path = Path("/root/.netrc")
if wandb_drive_netrc_path.exists():
    import shutil
    print("Wandb .netrc file found, will use that to log in.")
    shutil.copy(wandb_drive_netrc_path, wandb_local_netrc_path)
else:
    print(
        f"Wandb config not found at {wandb_drive_netrc_path}.\n"
        f"Using manual login.\n\n"
        f"To use auto login in the future, finish the manual login first and then run:\n\n"
        f"\t!mkdir -p '{wandb_drive_netrc_path.parent}'\n"
        f"\t!cp {wandb_local_netrc_path} '{wandb_drive_netrc_path}'\n\n"
        f"Then that file will be used to login next time.\n"
    )
# !wandb login
# + [markdown] id="_H9p0zp1PRHA"
# # **Description and training**
#
# This notebook serves to train UnaGAN, logging the results to the wandb project "demiurge/unagan". The [buganart/unagan](https://github.com/buganart/unagan) code is a modification of the [ciaua/unagan repository](https://github.com/ciaua/unagan). To start training UnaGAN the user will need to specify the path for **audio_db**, a sound file (.wav) folder in the mounted Google Drive. All of the folder's data will be used for training and training-process evaluation. If the run stops and the user wants to resume it, please specify the `wandb run id` in **resume_run_id**. For all of the training arguments, please see the [ciaua/unagan repository](https://github.com/ciaua/unagan).
#
#
# + id="b2nGSKlz8xJU" colab={"base_uri": "https://localhost:8080/", "height": 249} cellView="form" outputId="c5d31894-3055-4ac2-ad74-25ee1309c5d2"
#@title CONFIGURATION
#Fill in the configuration then Then, select `Runtime` and `Run all` then let it ride!
#@markdown ###Training
drive = Path('/content/drive/MyDrive')
print(f"Google drive at {drive}")
drive_audio_db_root = drive
collaborative_database = drive / "IRCMS_GAN_collaborative_database"
violingan_experiment_dir = collaborative_database / "Experiments" / "colab-violingan"
experiment_dir = violingan_experiment_dir / "unagan"
#@markdown The path to the audio database containing the `.wav` files that you would like to use for training
audio_db = "/content/drive/MyDrive/AUDIO DATABASE/TESTING/" #@param {type:"string"}
audio_db_dir = Path(audio_db)
if not audio_db_dir.exists():
    raise RuntimeError(f"The audio_db_dir {audio_db_dir} does not exist.")
#@markdown Use [wandb](https://wandb.ai/) ID to resume previous run or leave empty to start from scratch
resume_run_id = "" #@param {type: "string"}
#@markdown ###Training arguments
feat_dim = 80#@param {type: "integer"}
z_dim = 20 #@param {type: "integer"}
# z_scale_factors = 2 #@param {type: "integer"}
num_va = 200 #@param {type: "integer"}
gamma = 1.0 #@param {type: "number"}
lambda_k = 0.01 #@param {type: "number"}
init_k = 0.0 #@param {type: "number"}
init_lr = 0.001 #@param {type: "number"}
num_epochs = 500 #@param {type: "integer"}
lambda_cycle = 1 #@param {type: "integer"}
max_grad_norm = 3 #@param {type: "integer"}
save_rate = 20 #@param {type: "integer"}
batch_size = 10#@param {type: "integer"}
def check_wandb_id(run_id):
    import re
    if run_id and not re.match(r"^[\da-z]{8}$", run_id):
        raise RuntimeError(
            "Run ID needs to be 8 characters long and contain only letters a-z and digits.\n"
            f"Got \"{run_id}\""
        )
check_wandb_id(resume_run_id)
# z_scale_factors = [z_scale_factor, z_scale_factor, z_scale_factor, z_scale_factor]
config = dict(
audio_db_dir=audio_db_dir,
resume_run_id=resume_run_id,
feat_dim=feat_dim,
z_dim=z_dim,
num_va=num_va,
gamma=gamma,
lambda_k=lambda_k,
init_k=init_k,
init_lr=init_lr,
num_epochs=num_epochs,
lambda_cycle=lambda_cycle,
max_grad_norm=max_grad_norm,
save_rate=save_rate,
batch_size=batch_size,
)
for k, v in config.items():
    print(f"=> {k:30}: {v}")
# + id="eu_-fP8pCHft"
# + id="NjxLUZlFBKyP" colab={"base_uri": "https://localhost:8080/", "height": 1000} cellView="form" outputId="760a68cc-3f47-4c42-db53-1bfc34fdf873"
#@title CLONE UNAGAN REPO AND INSTALL DEPENDENCIES
# os.environ["WANDB_MODE"] = "dryrun"
if IN_COLAB:
    # !git clone https://github.com/buganart/unagan
    # %cd "/content/unagan/"
    # # !git checkout dev
    # %pip install -r requirements.txt
    clear_on_success("Repo cloned! Dependencies installed!")
# + id="zgjJlo5QF8I4" cellView="form"
#@title COPY FILES TO LOCAL RUNTIME
local_wav_dir = Path("data")
local_wav_dir.mkdir(exist_ok=True)
# !find "{audio_db_dir}"/ -maxdepth 1 -type f | xargs -t -d "\n" -I'%%' -P 10 -n 1 rsync -a '%%' "$local_wav_dir"/
clear_on_success("All files copied to this runtime.")
audio_paths = sorted(list(local_wav_dir.glob("*")))
num_files = len(audio_paths)
print(f"Database has {num_files} files in total.")
# + id="HE5EmRGgHBe5" cellView="form"
#@title COLLECT AUDIO CLIPS
# !python scripts/collect_audio_clips.py --audio-dir "$local_wav_dir" --extension wav
clear_on_success(f"Done.")
# + id="QJBVbFVbUqSM" cellView="form"
#@title EXTRACT MEL SPECTROGRAMS
# !python scripts/extract_mel.py --n_mel_channels "$feat_dim"
clear_on_success("Done!")
# + id="R0qj2FG4U17Y" cellView="form"
#@title GENERATE DATASET
# !python scripts/make_dataset.py
clear_on_success("Done!")
# + id="ac2KepbFoOrg" cellView="form"
#@title COMPUTE MEAN AND STANDARD DEVIATION
# !python scripts/compute_mean_std.mel.py
clear_on_success("Done!")
# + id="NOQSMDxHU4UC" cellView="form"
#@title TRAIN
# !env PYTHONPATH="." python scripts/train.hierarchical_with_cycle.py \
# --model-id "$resume_run_id" \
# --audio_db_dir "$audio_db_dir" \
# --wandb-dir "$experiment_dir" \
# --feat_dim "$feat_dim" \
# --z_dim "$z_dim" \
# --num_va "$num_va" \
# --gamma "$gamma" \
# --lambda_k "$lambda_k" \
# --init_k "$init_k" \
# --init_lr "$init_lr" \
# --num_epochs "$num_epochs" \
# --lambda_cycle "$lambda_cycle" \
# --max_grad_norm "$max_grad_norm" \
# --save_rate "$save_rate" \
# --batch_size "$batch_size"
| unagan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/joanby/python-ml-course/blob/master/notebooks/T11%20-%201%20-%20TensorFlow101-Colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# # We clone the repository to obtain the datasets
# !git clone https://github.com/joanby/python-ml-course.git
# # We grant access to our Drive
from google.colab import drive
drive.mount('/content/drive')
# Test it
# !ls '/content/drive/My Drive'
from google.colab import files # To handle files and, for example, download them through the browser
import glob # To list files matching a pattern
from google.colab import drive # To mount your Google Drive
# %tensorflow_version 1.x
# # Introduction to TensorFlow
import tensorflow as tf
print(tf.__version__)
x1 = tf.constant([1,2,3,4,5])
x2 = tf.constant([6,7,8,9,10])
res = tf.multiply(x1,x2)
print(res)
sess = tf.Session()
print(sess.run(res))
sess.close()
with tf.Session() as sess:
output = sess.run(res)
print(output)
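As a sanity check outside the TensorFlow graph, the same elementwise product can be computed directly in NumPy. This is an illustrative aside, not part of the original notebook:

```python
import numpy as np

# The same elementwise product that tf.multiply builds into the graph above.
a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])
prod = a * b
print(prod)  # → [ 6 14 24 36 50]
```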
# Combine both options in a single ConfigProto (two separate assignments would
# discard the first one).
config = tf.ConfigProto(log_device_placement = True, allow_soft_placement = True)
| notebooks/T11 - 1 - TensorFlow101-Colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] pycharm={"metadata": false}
# # MLTSA vs Feature Permutations
#
# ###### *Note: this Jupyter notebook requires the MLTSA package to be installed.*
# -
# As usual with MLTSA experiments we first create the 1D analytical model dataset.
# +
"""First we import our dataset examples"""
from MLTSA_datasets.OneD_pot.OneD_pot_data import potentials #We import the potentials class which will define them.
from MLTSA_datasets.OneD_pot.OneD_pot_data import dataset #We import the dataset class which will hold our potentials.
import matplotlib.pyplot as plt
import numpy as np
#This cell sets the potentials, don't re-run
total_n_pots = 25
n_DW = 5
relevant_DW_n = 2
#After defining the desired parameters we define the potentials accordingly
pots = potentials(total_n_pots, n_DW, relevant_DW_n)
# This creates the first dataset of data.
n_features = 180
degree_of_mixing = 2
#We specify the desired number of features and how strongly they will mix
oneD_dataset = dataset(pots, n_features, degree_of_mixing)
# + [markdown] pycharm={}
# Once the dataset has been created we generate the data we will use throughout the comparison
# + pycharm={"name": "#%%\n"}
"""Now we generate the trajectories we will use for the whole experiment"""
#Generate the trajectories
n_simulations = 100
n_steps = 250
data, ans = oneD_dataset.generate_linear(n_simulations, n_steps)
data_val, ans_val = oneD_dataset.generate_linear(int(n_simulations/2), n_steps)
#Prepare it for training
time_frame = [30, 60] #Same time frame as the sklearn one
X, Y = oneD_dataset.PrepareData(data, ans, time_frame, mode="Normal")
X_val, Y_val = oneD_dataset.PrepareData(data_val, ans_val, time_frame, mode="Normal")
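`PrepareData` above restricts training to frames 30–60 of each trajectory. A minimal NumPy sketch of that kind of time-windowing (illustrative only, not the MLTSA implementation):

```python
import numpy as np

# Toy trajectories shaped (n_simulations, n_features, n_steps),
# mirroring the layout of the `data` array generated above.
n_simulations, n_features, n_steps = 4, 6, 250
traj = np.random.rand(n_simulations, n_features, n_steps)

# Keep only the time window [30, 60) along the last (time) axis.
time_frame = [30, 60]
window = traj[:, :, time_frame[0]:time_frame[1]]
print(window.shape)  # → (4, 6, 30)
```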
# + [markdown] pycharm={}
#
# + pycharm={"name": "#%%\n"}
from MLTSA_sklearn.models import SKL_Train
from sklearn.neural_network import MLPClassifier
from MLTSA_sklearn.MLTSA_sk import MLTSA
from sklearn.inspection import permutation_importance
#For loop for MLTSA and Permutation on MLP
replicas = 100
results = {}
results["MLTSA"] = []
results["Perm"] = []
results["NN"] = []
results["acc"] = []
for R in range(replicas):
NN = MLPClassifier(random_state=0, verbose=False, max_iter=500)
trained_NN, train_acc, test_acc = SKL_Train(NN, X, Y)
feat_perm = permutation_importance(trained_NN, X, Y, n_repeats=10, random_state=0)
ADrop_train_avg = MLTSA(data[:,:,time_frame[0]:time_frame[1]], ans, trained_NN, drop_mode="Average")
results["MLTSA"].append(ADrop_train_avg)
results["Perm"].append(feat_perm)
results["NN"].append(trained_NN)
results["acc"].append([train_acc, test_acc])
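`permutation_importance` scores each feature by how much shuffling it degrades the model's score. A self-contained sketch on synthetic data (a toy dataset and classifier chosen for illustration, not the 1D-potential data above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X_toy = rng.normal(size=(200, 5))
# Only feature 0 carries the label signal.
y_toy = (X_toy[:, 0] > 0).astype(int)

clf = LogisticRegression().fit(X_toy, y_toy)
perm = permutation_importance(clf, X_toy, y_toy, n_repeats=10, random_state=0)

# The informative feature dominates the mean importances.
print(perm.importances_mean.argmax())  # → 0
```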
# + pycharm={"name": "#%%\n"}
acc = np.array(results["acc"])
adrop = np.array(results["MLTSA"])
permut = np.array(results["Perm"])
permut = [ x.importances_mean for x in permut]
plt.figure()
plt.title("Accuracy through replicas (MLP)")
plt.plot(acc.T[0]*100, label="Training")
plt.plot(acc.T[1]*100, label="Test")
plt.xlabel("Replica")
plt.ylabel("Accuracy")
plt.legend()
std = np.std(adrop, axis=0)*100
mean = np.mean(adrop, axis=0)*100
plt.figure()
plt.title("MLTSA - MLP")
plt.plot(mean)
plt.fill_between(np.arange(len(std)), y1=(mean+std), y2=(mean-std))
plt.ylabel("Accuracy")
plt.xlabel("Feature (CV)")
plt.figure()
plt.title("Feature Permutation Importances - MLP")
plt.plot(np.mean(permut, axis=0))
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
from MLTSA_sklearn.MLTSA_sk import MLTSA_Plot
#We simply get the plot with this
MLTSA_Plot(adrop, oneD_dataset, pots, errorbar=False)
MLTSA_Plot(np.array(permut)/100, oneD_dataset, pots, errorbar=False)
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# + pycharm={"name": "#%%\n"}
from MLTSA_sklearn.models import SKL_Train
from sklearn.ensemble import GradientBoostingClassifier
from MLTSA_sklearn.MLTSA_sk import MLTSA
from sklearn.inspection import permutation_importance
#For loop for MLTSA and Permutation on GBDT
replicas = 100
results_GB = {}
results_GB["MLTSA"] = []
results_GB["Imp"] = []
results_GB["Perm"] = []
results_GB["NN"] = []
results_GB["acc"] = []
for R in range(replicas):
GBDT = GradientBoostingClassifier(random_state=0, verbose=False, n_estimators=500)
trained_GBDT, train_acc, test_acc = SKL_Train(GBDT, X, Y)
feat_perm = permutation_importance(trained_GBDT, X, Y, n_repeats=10, random_state=0)
ADrop_train_avg = MLTSA(data[:,:,time_frame[0]:time_frame[1]], ans, trained_GBDT, drop_mode="Average")
results_GB["MLTSA"].append(ADrop_train_avg)
results_GB["Perm"].append(feat_perm)
results_GB["NN"].append(trained_GBDT)
results_GB["Imp"].append(trained_GBDT.feature_importances_)
results_GB["acc"].append([train_acc, test_acc])
# + [markdown] pycharm={"name": "#%% md\n"}
#
# + pycharm={"name": "#%%\n"}
acc = np.array(results_GB["acc"])
adrop = np.array(results_GB["MLTSA"])
permut = np.array(results_GB["Perm"])
permut = [x.importances_mean for x in permut]
imp = np.array(results_GB["Imp"])
plt.figure()
plt.title("Accuracy through replicas (GBDT)")
plt.plot(acc.T[0]*100, label="Training")
plt.plot(acc.T[1]*100, label="Test")
plt.xlabel("Replica")
plt.ylabel("Accuracy")
plt.legend()
std = np.std(adrop, axis=0)*100
mean = np.mean(adrop, axis=0)*100
plt.figure()
plt.title("MLTSA - GBDT")
plt.plot(mean)
plt.fill_between(np.arange(len(std)), y1=(mean+std), y2=(mean-std))
plt.ylabel("Accuracy")
plt.xlabel("Feature (CV)")
plt.figure()
plt.title("Feature Permutation Importances - GBDT")
plt.plot(np.mean(permut, axis=0))
plt.ylabel("Mean Importance")
plt.xlabel("Feature (CV)")
plt.figure()
plt.title("Feature Importances - GBDT")
plt.plot(np.mean(imp, axis=0))
plt.ylabel("Gini Importance")
plt.xlabel("Feature (CV)")
# + pycharm={"name": "#%%\n"}
MLTSA_Plot(adrop, oneD_dataset, pots, errorbar=False)
MLTSA_Plot(np.array(permut)/100, oneD_dataset, pots, errorbar=False)
MLTSA_Plot(imp/100, oneD_dataset, pots, errorbar=False)
# + pycharm={"name": "#%%\n"}
from MLTSA_sklearn.MLTSA_sk import MLTSA
#We call the method on the data, labels and the trained NN.
ADrop_train_avg = MLTSA(data[:,:,time_frame[0]:time_frame[1]], ans, trained_NN, drop_mode="Average")
# + [markdown] pycharm={}
#
# + pycharm={"name": "#%%\n"}
plt.figure(figsize=(10,4))
plt.plot(ADrop_train_avg*100, label="Training Accuracy Drop", color="C0")
plt.legend()
plt.xlabel("# Feature")
plt.ylabel("Accuracy (%)")
plt.ylim()
# + [markdown] pycharm={}
#
# + pycharm={"name": "#%%\n"}
from MLTSA_sklearn.MLTSA_sk import MLTSA_Plot
#We simply get the plot with this
MLTSA_Plot([ADrop_train_avg], oneD_dataset, pots, errorbar=False)
# + [markdown] pycharm={"name": "#%% md\n"}
# This plot shows the accuracy drop for every single feature after swapping it with its global mean across simulations. The features highlighted with a coloured cross are the correlated features, with red marking the most correlated (100%) and blue the least (0%).
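The mean-swap idea can be sketched independently: replace one feature with its global mean, re-score the trained model, and record the accuracy drop. This is an illustrative re-implementation on toy data, not the MLTSA package's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 2] > 0).astype(int)  # only feature 2 carries the label signal

clf = LogisticRegression().fit(X, y)
base_acc = clf.score(X, y)

drops = []
for j in range(X.shape[1]):
    X_swap = X.copy()
    X_swap[:, j] = X[:, j].mean()  # flatten feature j to its global mean
    drops.append(base_acc - clf.score(X_swap, y))

# The informative feature shows by far the largest accuracy drop.
print(int(np.argmax(drops)))  # → 2
```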
| notebooks/MLTSA_vs_Feature_Permutation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Articles of Association
#
# ## Version of 29.11.21
#
# -----
# ## 1. Corporate name and registered office
# ### Art. 1 Corporate name
#
# Under the corporate name *Re-Co société coopérative d'habitation* (hereinafter: Re-Co), a cooperative society within the meaning of Art. 828 ff. CO exists for an indefinite duration, with its registered office in Lausanne.
# ### Art. 2 Registered office
# The registered office of the cooperative is in Lausanne.
# ## 2. Purpose, means and principles
#
# ### Art. 3 Purpose and means
#
# 1. The purpose of the cooperative is to create and maintain for its members, through joint action and shared responsibility, good housing at affordable prices. It strives to offer living space for all sections of the population, in particular for single people, families, the elderly and people with special needs. It promotes communal living in a spirit of responsibility towards the general interest and of mutual solidarity. The cooperative may offer premises for commercial services in its buildings.
#
# 2. It seeks to achieve this purpose by:
#
# * acquiring building land and building rights (droits de superficie);
# * constructing and acquiring single-family and apartment houses that meet the needs of modern cooperative housing;
# * careful and continuous maintenance as well as periodic renovation of existing buildings;
# * constructing replacement buildings where existing buildings can no longer be renovated in an economically justifiable manner;
# * making use of promotion instruments under the federal housing act or the corresponding cantonal and communal laws;
# * administering and letting the dwellings on the basis of cost-based rents;
# * constructing dwellings and single-family houses for sale as condominium property or under building rights;
# * promoting cooperative-minded activities within the housing estates;
# * supporting, conceptually and materially, initiatives aimed at valuable, good-quality housing.
#
# 3. The activity of the cooperative is of public benefit and is not profit-oriented.
#
# 4. The cooperative may hold interests in undertakings and organisations pursuing the same or similar objectives. It may be a member of coopératives d'habitation Suisse, the federation of public-benefit housing developers.
#
# ### Art. 4 Principles applicable to letting
#
# 1. Letting is, within the limits of the following provisions, the task of the board, which issues letting regulations for this purpose. The board also ensures that tenants are informed of any conditions arising from state housing assistance and that they undertake to comply with those conditions.
#
# 2. The letting of dwellings or single-family houses of the cooperative presupposes, as a rule, membership of the cooperative.
#
# 3. The rents of state-subsidised dwellings are governed by the relevant regulations. Otherwise, the cooperative lets its dwellings, as a rule, at cost. It refrains from making a genuine profit and from making excessive payments to third parties. Rents must in particular cover borrowed and equity capital, any building-right ground rents, the amortisation customary in the sector, the provisions and contributions to funds prescribed by law or by the subsidising authorities as well as to funds decided by the general assembly, the ongoing maintenance of the buildings and outdoor facilities, the payment of fees, taxes and insurance premiums, and the costs of an administration and management of the cooperative meeting modern standards.
#
# 4. Members are required to live themselves in the dwellings let to them and to have their civil-law domicile there.
#
# 5. Subletting all or part of a dwelling, or individual rooms, is permitted only with the prior consent of the board. The board may refuse to approve a subletting request on the grounds set out in Art. 262 para. 2 CO. In the case of subletting an entire dwelling, major disadvantages are deemed to include, in particular: subletting lasting more than one year, subletting occurring more than twice during the tenancy, subletting to persons who do not meet the letting guidelines under the relevant regulations, and the member's inability to state clearly that they will themselves reoccupy the dwelling at the end of the sublet. In the case of subletting individual rooms, the cooperative likewise suffers a major disadvantage where the letting guidelines under the relevant regulations can thereby be circumvented. The board may authorise daily or weekly subletting to third parties. It settles the details in the letting regulations.
#
# 6. A reasonable balance must be maintained between the size of the dwelling and the number of occupants. Three-room dwellings may be let to one person, and four-room dwellings to two persons. In new tenancies for dwellings of more than four rooms, the number of rooms may exceed the number of occupants by one. Under an existing tenancy agreement, the number of rooms may exceed the number of occupants by two. A dwelling is deemed under-occupied if the number of rooms exceeds the number of occupants by more than two. For the duration of the under-occupancy, members are required to pay into the cooperative's fund the monthly charges fixed for this purpose in the letting regulations (at most the amount of one month's rent divided by the [number of rooms plus 1]) and to move to a smaller dwelling. The board settles the details in the letting regulations.
# ### Art. 5 Principles applicable to the construction and maintenance of buildings
#
# 1. With regard to the construction and conversion of its buildings, a few points are of utmost importance to the cooperative: great flexibility in the use of dwellings and commercial premises, taking future needs into account; construction adapted to persons with disabilities; high-quality outdoor spaces; safe access that fosters communication; low subsequent maintenance; the use of ecologically sound materials; and energy savings in construction and operation.
#
# 2. Through continuous, sustainable, cost- and quality-conscious maintenance, the cooperative adapts its buildings to the state of the art and to the needs of modern cooperative housing, thereby ensuring that the buildings retain their value. This approach also includes the regular examination of measures to enhance the residential value of the buildings and their surroundings.
#
# 3. In the case of full renovations and the construction of replacement buildings, the cooperative is committed to acting in a socially responsible manner. It announces such projects at least two years in advance and, where possible, offers those affected at least one rehousing option. When the converted buildings and the replacement buildings are relet, the existing tenants are to be considered first, provided they meet the letting criteria.
#
# ### Art. 6 Inalienability of land, buildings and dwellings
#
# 1. The land, buildings and dwellings of the cooperative are in principle inalienable.
#
# 2. For compelling reasons, the general assembly decides by a two-thirds majority on a sale and its terms, as well as on the granting of distinct and permanent building rights.
#
# 3. In the case of dwellings benefiting from state assistance, the board ensures that purchasers are informed of any constraints linked to housing promotion and that they undertake to comply with them.
#
# ## 3. Membership: acquisition, loss and obligations
#
# ### Art. 7 Acquisition of membership
#
# 1. Any adult natural person or any legal entity may become a member of the cooperative, provided that they subscribe to at least one share of the cooperative (membership share).
#
# 2. The number of members is unlimited.
#
# 3. Admission takes the form of a decision of the board, on the basis of a written membership application and after full payment of the required cooperative shares. The board decides definitively on admission. The board's decision determines when membership begins.
#
# 4. The board keeps a register of members.
# ### Art. 8 Termination of membership
#
# 1. Membership terminates:
# * a) for natural persons, by withdrawal, exclusion or death;
# * b) for legal entities, by withdrawal, exclusion or dissolution.
#
# 2. The reimbursement of shares upon termination of membership is governed by Art. 18 of these articles.
# ### Art. 9 Withdrawal
#
# 1. If the member is a tenant of premises of the cooperative, withdrawal presupposes termination of the tenancy agreement.
#
# 2. Withdrawal from the cooperative is possible only by written notice, taking effect at the end of the financial year and subject to a notice period of six months. In justified cases, the board may also allow withdrawal on shorter notice or at another date, in particular where the tenancy agreement is terminated as of the end of the notice period under tenancy law.
#
# 3. Once the decision to dissolve the cooperative has been taken, notice of withdrawal may no longer be given.
# ### Art. 10 Death
#
# 1. If a member who was the tenant of a dwelling of the cooperative dies, the spouse, registered partner or partner (cohabitee) living in the same household may, insofar as they are not already a member of the cooperative, take over the deceased's membership and, where applicable, their tenancy agreement. The partner must prove that they are an heir of the deceased.
#
# 2. Other persons living in the same household may, with the consent of the board, become members of the cooperative and conclude a tenancy agreement.
# ### Art. 11 Exclusion
#
# 1. A member may at any time be excluded from the cooperative by the board where there is good cause or one of the following grounds:
# * a) breach of the member's general obligations, in particular the duty of good faith towards the cooperative, failure to comply with decisions of the general assembly or the board, or intentional damage to the image or economic interests of the cooperative;
# * b) failure to comply with the obligation on members to live themselves in the dwellings let to them and to have their civil-law domicile [or officially registered weekly residence] there;
# * c) use of the dwelling contrary to its purpose, in particular where it – together with its ancillary rooms – is used mainly for commercial purposes;
# * d) in the event of divorce or separation, insofar as exclusion is provided for under Art. 12, or where only a person living in a dwelling of the cooperative may be a member;
# * e) failure to comply with the provisions of the articles and the letting regulations on subletting;
# * f) refusal of a rehousing offer in the event of under-occupancy of the dwelling;
# * g) a decision of the competent body on a full renovation or the demolition of the building concerned; however, if the cooperative has suitable dwellings available, only after refusal of a rehousing offer;
# * h) existence of an extraordinary ground for termination under tenancy law, in particular under Art. 257d CO, 257f CO, 266g CO and 266h CO, as well as other breaches of the tenancy agreement;
# * i) breach of the provisions governing housing assistance on the basis of which the cooperative is obliged to terminate the tenancy agreement, insofar as no rehousing offer can be made or such an offer has been refused.
#
# 2. Exclusion must be preceded by an appropriate warning, unless this is pointless or the tenancy is terminated under Art. 257f para. 4 CO or in accordance with Art. 12 of these articles.
#
# 3. The exclusion decision must be notified to the member concerned by registered letter stating the reasons and referring to the possibility of appealing to the general assembly. The excluded member has the right to appeal to the general assembly within 30 days of receiving notification of the exclusion. The appeal has no suspensive effect, but the excluded member has the right to present their point of view to the general assembly, in person or through a representative.
#
# 4. The excluded member may apply to the courts within three months, in accordance with Art. 846 para. 3 CO. An application to the courts likewise has no suspensive effect.
#
# 5. Termination of the tenancy agreement is governed by the provisions of tenancy law; it presupposes the existence of a ground that would also justify exclusion from the cooperative.
# ### Art. 12 Dissolution of the joint household of married couples and registered partners
#
# 1. If, in a decision on measures to protect the marital union or in a divorce judgment, the court allocates the use of the dwelling to the member's spouse, the board may, with the latter's consent, transfer the tenancy agreement to the other spouse. Such a transfer presupposes that the person remaining in the dwelling is or becomes a member, and that they take over all the shares (Art. 15 para. 2). The board may exclude from the cooperative, without notice, the member to whom the dwelling was not allocated, insofar as it cannot or does not wish to provide them with another dwelling. The same rule applies in the event of a decision on the dissolution of the joint household of a registered partnership.
#
# 2. If, in the event of divorce or a judgment on the liquidation of the matrimonial property regime, the court allocates the dwelling and the tenancy agreement to the member's spouse or registered partner, the board may, if it cannot or does not wish to provide the member with another dwelling, exclude them from the cooperative without notice. The spouse or registered partner to whom the tenancy agreement has been transferred must be or become a member of the cooperative and take over all the shares attached to the dwelling. The same rule applies in the event of a judgment on the dissolution of a registered partnership.
#
# 3. The occupancy rules set out in Art. 4 para. 6 remain reserved.
#
# 4. The property-law consequences concerning the cooperative shares are governed by the relevant court decision or by the agreement reached on the matter, with reimbursement of the share capital taking place only after the spouse or registered partner remaining in the dwelling has paid a corresponding amount to the cooperative.
# ### Art. 13 Pledging and assignment of cooperative shares
#
# 1. Any pledging of or other encumbrance on cooperative shares, as well as their assignment to non-members, is excluded.
#
# 2. The assignment of cooperative shares is permitted only between members and requires the approval of the board. An assignment agreement in written form is required for this purpose.
# ### Art. 14 Personal obligations of members
#
# Members are required:
# * a) to safeguard the interests of the cooperative in good faith (duty of loyalty);
# * b) to comply with the articles and the decisions of the cooperative's bodies;
# * c) where possible, to take part in the activities of the cooperative and to collaborate within its bodies.
# ## 4. Financial provisions
#
# ### Share capital
#
# ### Art. 15 Shares
#
# 1. The share capital of the cooperative consists of the sum of the subscribed shares. The shares each have a nominal value of 100 francs and must be fully paid up. Exceptionally, the board may authorise payment in instalments for the shares attached to the dwelling. The board may issue new cooperative shares for new members at any time.
#
# 2. Members who rent premises of the cooperative must, in addition to the membership share (cf. Art. 7 para. 1), subscribe additional shares (dwelling-related shares). The board settles the details in regulations; the amount to be subscribed is graduated according to the investment costs of the rented dwelling and must comply with the housing-assistance regulations while being sufficient to finance the buildings. The maximum amount is 20% of the investment costs of the rented premises.
#
# 3. If several members jointly rent premises of the cooperative, the shares to be subscribed for those premises may be divided among those members in a proportion of their own choosing.
#
# 4. No certificates are issued for subscribed shares. However, each year the member receives a statement of the amount of their holding, together with a statement of any interest.
# ### Art. 16 Financing of cooperative shares
#
# 1. Shares may be paid for using occupational pension assets. The board settles the implementation in regulations.
#
# 2. With the board's consent, cooperative shares may also be financed by third parties. In the absence of an agreement to the contrary, any interest payable is borne by the member.
# ### Art. 17 Interest paid on shares
#
# 1. Interest may be paid on shares only on condition that appropriate contributions to the statutory and regulatory funds, as well as amortisation, have been made.
#
# 2. The general assembly fixes the interest rate each year, it being understood that it may not exceed the usual domestic rate for long-term loans granted without special security, nor the 6% rate admissible for exemption from federal stamp duty, nor any limits laid down in the housing-promotion regulations.
#
# 3. Shares bear interest from the first day of the month following payment until termination of membership. Unpaid amounts do not bear interest.
# ### Art. 18 Reimbursement of shares
#
# 1. Withdrawing members or their heirs have no claim to the cooperative's assets, other than the right to reimbursement of the shares they have paid up.
#
# 2. There is no right to reimbursement of membership shares and dwelling-related shares taken over by the partner under Articles 10 and 12 of these articles. Reimbursement of shares acquired using occupational pension assets must be made, as instructed by the current member, in their favour or to another housing cooperative in which they themselves permanently occupy a dwelling, or to an occupational pension institution – or, at retirement age, to the current member directly.
#
# 3. Reimbursement is made at the balance-sheet value of the year of withdrawal, excluding reserves and allocations to funds, but at most at the nominal value.
#
# 4. Payment, plus any interest, is made within one month of the approval of the annual accounts and the fixing of the interest rate by the next ordinary general assembly. If the financial situation of the cooperative so requires, the board is entitled to defer reimbursement for up to three years, with interest paid at the same rate as on non-terminated cooperative shares.
#
# 5. In special cases, the board may decide that cooperative shares will be reimbursed early, but never before the dwelling has been returned, in particular where the amount is needed to pay up shares in another housing cooperative.
#
# 6. The cooperative is entitled to set off claims it holds against the withdrawing member against the latter's credit balance in respect of shares.
# ### Liability
#
# ### Art. 19 Liability
#
# Only the cooperative's assets are liable for its obligations. Any obligation on a member to make additional payments, and any personal liability of a member, is excluded.
# ### Accounting
#
# ### Art. 20 Annual accounts and financial year
#
# 1. The annual accounts consist of the balance sheet, the operating account and the notes, and are drawn up in accordance with the principles of proper financial reporting, so that the cooperative's assets, financing and earnings can be reliably assessed. They also contain the previous year's figures. The relevant articles of the Code of Obligations are authoritative in this respect, as are other statutory provisions, in particular those on housing promotion, and the principles customary in the sector.
#
# 2. The annual accounts are to be submitted to the auditors or to the licensed expert for a limited audit.
#
# 3. The financial year corresponds to the calendar year.
# ### Art. 21 Reserves from profits
#
# 1. The annual profit established on the basis of the annual accounts serves first to build up the profit reserves.
#
# 2. The general assembly decides, in compliance with Art. 860 para. 1 CO, on the amount allocated to the statutory and voluntary profit reserves.
#
# 3. The board decides on the use of the profit reserves, in compliance with Art. 860 para. 3 CO.
# ### Art. 22 Reserves and value adjustments
#
# 1. The operating account must be charged each year with appropriate contributions to the renovation fund, in line with the cooperative's renovation strategy.
#
# 2. The depreciation of the buildings is to be taken into account through adequate and regular amortisation. This is generally based on the tax administration's guidelines and is shown in the balance sheet using the indirect method. If the cooperative holds a building right, the operating account is charged each year with a contribution to the value adjustments for the reversion of the buildings free of charge. If the amount of these adjustments can be determined in advance under the terms of the building-right contracts, that amount is taken into account on a reasonable basis; otherwise, the amortisation admissible for tax purposes applies.
#
# 3. For state-subsidised dwellings, the reserves and value adjustments must comply with the housing-assistance regulations.
#
# 4. Within the limits of Art. 862 and 863 CO, the general assembly may decide to endow other funds.
#
# 5. The funds' resources are managed and used by the board in accordance with their decided purpose; they are audited within the framework of the general accounting, i.e. by the auditors or the licensed expert for a limited audit.
# ### Art. 23 Indemnité des organes
#
# 1. Les membres du comité ont droit à une indemnité raisonnable qui se fonde sur les tâches et la charge de travail des membres respectifs, et qui est fixée par le comité lui-même.
#
# 2. L'indemnité de l'organe de révision, resp. de l'expert agréé pour un contrôle limité est fixée en fonction des tarifs usuels de la branche.
#
# 3. Les membres de commissions et de délégations ont droit à un jeton de présence d'un montant raisonnable.
#
# 4. Le versement de tantièmes est exclu. Tantièmes exclus
#
# 5. Le montant total des indemnités versées aux membres du comité - ventilées entre indemnités de séance, indemnités pour activité de construction et autres travaux effectués pour la coopérative - ainsi que pour d'autres commissions instituées par l'assemblée générale doit figurer dans le compte d'exploitation.
#
# 6. De plus, seront remboursés les frais engagés dans l'intérêt de la coopérative par des membres du comité, de l'organe de révision ou par l'expert agréé pour un contrôle limité ainsi que par des commissions.
# ## 5. Organization
#
# ## Governing bodies
#
# ### Art. 24 Overview
#
# The governing bodies of the cooperative are:
# * a) the general assembly,
# * b) the board,
# * c) the auditors.
# ### General assembly
#
# ### Art. 25 Powers
#
# 1. The general assembly has the following powers:
# * a) Adoption and amendment of the articles of association;
# * b) Election and removal of the president and the co-president as well as the other members of the board and the auditors;
# * c) Approval of the board's annual report;
# * d) Approval of the annual accounts and decision on the appropriation of the profit shown in the balance sheet;
# * e) Discharge of the members of the board;
# * f) Decision on appeals against exclusion decisions issued by the board;
# * g) Decision on the sale of land, buildings and dwellings and on the granting of distinct and permanent building rights;
# * h) Decision on the purchase of land and/or the construction of new buildings whose costs exceed 20% of the investment value of all properties (before amortization);
# * i) Decision on the demolition of residential buildings and the construction of replacement buildings;
# * j) Decision on the dissolution or merger of the cooperative;
# * k) Approval of regulations, insofar as these do not expressly fall within the competence of the board;
# * l) Decision on items placed on the agenda at the proposal of members, provided that these items are subject to decision by the general assembly (Art. 25 para. 2);
# * m) Decision on any other matter reserved to the general assembly by law or by the articles of association, or submitted to it by the board.
#
# 2. Proposals by members to place an item on the agenda under letter l) must reach the board in writing at least 60 days before the ordinary general assembly. The date of the general assembly must be announced at least three months in advance.
#
# 3. Only items that have been placed on the agenda may be put to the vote. No prior notice is required to submit motions within the framework of the agenda.
#
# ### Art. 26 Convening and chairing of the assembly
#
# 1. The ordinary general assembly takes place once a year, during the first half of the calendar year.
#
# 2. Extraordinary general assemblies are convened whenever a previous general assembly, the board and the auditors or the liquidators so decide, or if one tenth of the members so demand. If the cooperative has fewer than 30 members, the convening must be requested by at least three members. The assembly must be convened within eight weeks of receipt of the request.
#
# 3. The general assembly must be convened by the board at least 20 days before the day of the assembly. The notice must state the agenda and, where amendments to the articles of association are proposed, the text of the proposed amendments. The annual report (Art. 30 para. 2), including the report of the auditors or of the licensed audit expert for a limited audit, must be enclosed with the notice convening ordinary general assemblies; these documents must also be made available for inspection at the cooperative's registered office 20 days before the date of the assembly.
#
# 4. The general assembly is chaired by the president, the co-president or a member of the board. On the board's proposal, it may elect a chairperson for the day.
# ### Art. 27 Voting rights
#
# 1. Each member has one vote at the general assembly.
#
# 2. A member may be represented there by another member holding a written proxy. No one may represent more than one other member.
#
# 3. When voting on the discharge of the members of the board, the latter are not entitled to vote.
# ### Art. 28 Decisions and elections
#
# 1. The general assembly may validly pass resolutions whenever it has been convened in accordance with the articles of association.
#
# 2. Elections and votes are held by show of hands, unless one third of the votes cast demand a secret ballot.
#
# 3. The general assembly passes its resolutions by a simple majority of the votes cast. For elections, an absolute majority applies in the first round and a relative majority in the second. Abstentions and invalid votes are not counted in determining the majority.
#
# 4. The purchase of properties and the granting of distinct and permanent building rights, amendments to the articles of association, and the dissolution or merger of the cooperative require the approval of two thirds of the votes cast.
#
# 5. Art. 889 CO and Art. 18 para. 1 let. d of the Merger Act (LFus) remain reserved.
#
# 6. Minutes recording the resolutions and election results are drawn up and must be signed by the president and by their author.
# ### Board
#
# ### Art. 29 Election and eligibility
#
# 1. The board consists of five persons. A majority of them must be members of the cooperative. The president or the co-president is elected by the general assembly; for the rest, the board constitutes itself. It appoints a minute-taker, who need not be a member of the board.
#
# 2. The members of the board are elected for three years and may be re-elected. Elections held during a term of office are valid until the end of that term. The maximum term of office is twelve years.
#
# 3. All members of the board must recuse themselves when business is dealt with that affects their own interests or the interests of natural or legal persons close to them. The board members taking the decision must conclude the transaction in question at no better than arm's-length conditions (market value). In such cases, the contract must be made in writing. This requirement does not apply to routine transactions in which the cooperative's consideration does not exceed CHF 1,000. If the entire board must recuse itself, the approval of the general assembly must be obtained for the transaction.
# ### Art. 30 Tasks
#
# 1. Within the limits of the legal and statutory provisions, the board is responsible for the administration and for all affairs of the cooperative that are not expressly reserved to another body.
#
# 2. For each financial year, the board draws up a management report consisting of the annual accounts (Art. 20) and the annual report. The annual report presents the development of the business and the economic situation of the cooperative and includes the audit certificate of the auditors or of the licensed audit expert for a limited audit.
#
# 3. It designates the persons authorized to sign and the manner of signing, whereby only joint signature by two may be granted.
# ### Art. 31 Delegation of powers
#
# 1. The board is authorized to entrust the management of the business, or certain areas of it, to one or more of its members (delegations), to standing or ad hoc committees, and/or to one or more persons who need not be members of the cooperative for this purpose (management). Committee members need not be members of the cooperative.
#
# 2. The board issues organizational regulations that define the tasks of the board, the delegations, the committees and the management, and that regulate in particular the reporting requirements.
# ### Art. 32 Board meetings
#
# 1. Board meetings are convened by the presidency or co-presidency as often as business requires and, in addition, whenever two members of the board request that a meeting be convened.
#
# 2. The board may validly pass resolutions when a majority of its members are present. It decides by a simple majority of the votes cast. In the event of a tie, the president has the casting vote.
#
# 3. Provided that no member of the board demands oral deliberation and that a majority of the board members consent, resolutions passed by written circular without a dissenting vote, including those communicated by e-mail or fax, are deemed valid resolutions of the board. They must be recorded in the minutes of the next meeting.
#
# 4. The deliberations and resolutions of the board are recorded in minutes, which must be signed by the president and by their author.
# ### Auditors
#
# ### Art. 33 Election and constitution
#
# 1. As auditors, the general assembly elects an auditor or an audit firm licensed under the Auditor Oversight Act (Art. 5 f. LSR and Art. 727c CO), in each case for one financial year, until the approval of the annual accounts in question.
#
# 2. The general assembly may waive the election of such a body (opting out) if:
# * a) the cooperative is not subject to an ordinary audit;
# * b) all members of the cooperative approve the waiver;
# * c) the cooperative does not have more than ten full-time positions on annual average;
# * d) no other legal or contractual ground obliges the cooperative to undergo an audit.
#
# 3. If the general assembly waives the election of auditors, the board instead mandates an auditor licensed by the Federal Office for Housing to carry out an audit in accordance with its directives.
# ### Art. 34 Tasks
#
# 1. If the general assembly elects auditors, they carry out a limited audit pursuant to Art. 729 ff. CO. The tasks and liability of the auditors are governed by the legal provisions.
#
# 2. If the cooperative has instead opted out, the tasks and liability of the licensed auditor are governed by the relevant directives of the Federal Office for Housing (OFL).
#
# 3. The auditors or the OFL-licensed auditor submit a written report to the ordinary general assembly.
# ## 6. Final provisions
#
# ### Dissolution by liquidation and merger
#
# ### Art. 35 Liquidation
#
# 1. A general assembly specially convened for this purpose may at any time decide to dissolve the cooperative by way of liquidation.
#
# 2. The board carries out the liquidation in accordance with the provisions of the law and the articles of association, unless the general assembly appoints a special liquidator for this purpose.
# ### Art. 36 Liquidation surplus
#
# 1. The assets of the cooperative remaining after settlement of all debts and repayment of all cooperative shares at nominal value shall be transferred in full to the Fondation fonds de solidarité of Coopératives d'habitation Suisse, the federation of non-profit housing developers.
#
# 2. Contrary provisions of the Confederation, the canton, municipalities or other institutions concerning housing assistance remain reserved.
# ### Art. 37 Merger
#
# 1. The general assembly may at any time decide on the merger of the cooperative with another non-profit housing developer.
#
# 2. Preparing the merger is the responsibility of the board. It may, however, first request an advisory vote of the general assembly.
# ### Notices
#
# ### Art. 38 Notices and publication organ
#
# 1. Internal notices of the cooperative to the members and convening notices are made in writing, by e-mail or by circular, unless mandatory legal provisions provide otherwise.
#
# 2. The cooperative's publication organ is the Swiss Official Gazette of Commerce (Feuille officielle suisse du commerce).
# File: _build/jupyter_execute/statuts.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# %%writefile ptdataset.py
import numpy as np
import pandas as pd
from pandas.core.groupby.generic import DataFrameGroupBy, SeriesGroupBy
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
import matplotlib.pyplot as plt
from sklearn.utils import resample
import copy
import os
def to_numpy(arr):
try:
return arr.data.cpu().numpy()
except: pass
try:
return arr.to_numpy()
except: pass
return arr
class PTDS:
_metadata = ['_df', '_dfindices', '_pt_categoryx', '_pt_categoryy', '_pt_dummiesx', '_pt_dummiesy', '_pt_columny', '_pt_columnx', '_pt_transposey', '_pt_bias', '_pt_polynomials', '_pt_dtype', '_pt_sequence_window', '_pt_sequence_shift_y', '_pt_is_test']
_internal_names = pd.DataFrame._internal_names + ["_pt__indices", "_pt__x_sequence"]
_internal_names_set = set(_internal_names)
def to_ptdataframe(self):
cls = self._df.__class__
r = cls(self)
r._pt_columnx = self._pt_columnx
r._pt_columny = self._pt_columny
r._pt_transposey = self._pt_transposey
r._pt_bias = self._pt_bias
r._pt_polynomials = self._pt_polynomials
r._pt_sequence_window = self._pt_sequence_window
r._pt_sequence_shift_y = self._pt_sequence_shift_y
r._pt__train = self
r._pt__full = self
r._pt__valid = None
r._pt__test = None
r._pt_indices = list(range(len(self)))
r._pt__train_indices = r._pt_indices
r._pt__valid_indices = []
r._pt__test_indices = []
r._pt_split = None
r._pt_random_state = None
r._pt_balance = None
r._pt_shuffle = False
return r
def _copy_meta(self, r):
r._df = self._df
r._dfindices = self._dfindices
r._pt_categoryx = self._pt_categoryx
r._pt_categoryy = self._pt_categoryy
r._pt_dummiesx = self._pt_dummiesx
r._pt_dummiesy = self._pt_dummiesy
r._pt_columny = self._pt_columny
r._pt_columnx = self._pt_columnx
r._pt_is_test = self._pt_is_test
r._pt_transposey = self._pt_transposey
r._pt_polynomials = self._pt_polynomials
r._pt_bias = self._pt_bias
r._pt_dtype = self._pt_dtype
r._pt_sequence_window = self._pt_sequence_window
r._pt_sequence_shift_y = self._pt_sequence_shift_y
return r
def _ptdataset(self, data):
return self._copy_meta( PTDataSet(data) )
def _not_nan(self, a):
a = np.isnan(a)
while len(a.shape) > 1:
a = np.any(a, -1)
return np.where(~a)[0]
@property
def _dtype(self):
return self._pt_dtype
@property
def indices(self):
try:
return self._pt__indices
except:
if self._pt_is_test:
self._pt__indices = self._not_nan(self._x_sequence)
else:
s = set(self._not_nan(self._y_transposed))
self._pt__indices = [ i for i in self._not_nan(self._x_sequence) if i in s]
return self._pt__indices
@property
def _scalerx(self):
return self._df._scalerx
@property
def _scalery(self):
return self._df._scalery
@property
def _categoryx(self):
return self._pt_categoryx()
@property
def _categoryy(self):
return self._pt_categoryy()
@property
def _dummiesx(self):
return self._pt_dummiesx()
@property
def _dummiesy(self):
return self._pt_dummiesy()
@property
def _shift_y(self):
if self._pt_sequence_shift_y is not None:
return self._pt_sequence_shift_y
else:
return 0
@property
def _sequence_window(self):
try:
if self._is_sequence:
return self._pt_sequence_window
except:pass
return 1
@property
def _sequence_index_y(self):
return self._pt_sequence_window+self._shift_y-1
@property
def _columny(self):
return [ self.columns[-1] ] if self._pt_columny is None else self._pt_columny
@property
def _transposey(self):
return True if self._pt_transposey is None else self._pt_transposey
@property
def _columnx(self):
if self._pt_columnx is None:
return [ c for c in self.columns if c not in self._columny ]
return self._pt_columnx
@property
def _polynomials(self):
return self._pt_polynomials
@property
def _bias(self):
return self._pt_bias
def _transform(self, scalers, array):
out = []
for i, scaler in enumerate(scalers):
if scaler is not None:
out.append(scaler.transform(array[:, i:i+1]))
else:
out.append(array[:, i:i+1])
return np.concatenate(out, axis=1)
def resample_rows(self, n=True):
        r = self._ptdataset(self)
        if n is True:
            n = len(r)
if n < 1:
n = n * len(r)
return r.iloc[resample(list(range(len(r))), n_samples = int(n))]
def interpolate_factor(self, factor=2, sortcolumn=None):
if not sortcolumn:
sortcolumn = self.columns[0]
df = self.sort_values(by=sortcolumn)
for i in range(factor):
i = df.rolling(2).sum()[1:] / 2.0
df = pd.concat([df, i], axis=0)
df = df.sort_values(by=sortcolumn)
return self._df._ptdataset(df).reset_index(drop=True)
@property
def _x_category(self):
if self._is_sequence:
self = self.iloc[:-self._shift_y]
if self._categoryx is None:
return self[self._columnx]
r = copy.copy(self[self._columnx])
for c, cat in zip(r._columnx, r._categoryx):
if cat is not None:
r[c] = cat.transform(r[c])
return r
@property
def _x_dummies(self):
if self._dummiesx is None:
return self._x_category
r = copy.copy(self._x_category)
r1 = []
for d, onehot in zip(r._columnx, r._dummiesx):
if onehot is not None:
a = onehot.transform(r[[d]])
r1.append( pd.DataFrame(a.toarray(), columns=onehot.get_feature_names_out([d])) )
r = r.drop(columns = d)
r1.insert(0, r.reset_index(drop=True))
r = pd.concat(r1, axis=1)
return r
@property
def _x_numpy(self):
return self._x_dummies.to_numpy()
@property
def _x_polynomials(self):
try:
return self._polynomials.fit_transform(self._x_numpy)
except:
return self._x_numpy
@property
def _x_scaled(self):
if len(self) > 0:
return self._transform(self._scalerx, self._x_polynomials)
return self._x_polynomials
@property
def _x_biased(self):
a = self._x_scaled
if self._bias:
return np.concatenate([np.ones((len(a),1)), a], axis=1)
return a
@property
def _x_sequence(self):
try:
return self._pt__x_sequence
except:
if not self._is_sequence:
self._pt__x_sequence = self._x_biased
else:
X = self._x_biased
window = self._sequence_window
len_seq_mode = max(0, len(X) - window + 1)
self._pt__x_sequence = np.concatenate([np.expand_dims(X[ii:ii+window], axis=0) for ii in range(len_seq_mode)], axis=0)
return self._pt__x_sequence
@property
def X(self):
return self._x_sequence[self.indices]
@property
def X_tensor(self):
import torch
if self._dtype is None:
return torch.tensor(self.X).type(torch.FloatTensor)
else:
return torch.tensor(self.X)
@property
def y_tensor(self):
import torch
if self._dtype is None:
return torch.tensor(self.y).type(torch.FloatTensor)
else:
return torch.tensor(self.y)
@property
def _is_sequence(self):
return self._pt_sequence_window is not None
@property
def tensors(self):
return self.X_tensor, self.y_tensor
@property
def _range_y(self):
stop = len(self) if self._shift_y >= 0 else len(self) + self._shift_y
start = min(stop, self._sequence_window + self._shift_y - 1)
return slice(start, stop)
@property
def _y_category(self):
if self._is_sequence:
self = self.iloc[self._range_y]
if self._categoryy is None:
return self[self._columny]
r = copy.copy(self[self._columny])
        for c, cat in zip(r._columny, r._categoryy):
            if cat is not None:
                r[c] = cat.transform(r[c])
return r
@property
def _y_dummies(self):
if self._dummiesy is None:
return self._y_category
r = copy.copy(self._y_category)
r1 = []
for d, onehot in zip(r._columny, r._dummiesy):
if onehot is not None:
a = onehot.transform(r[[d]])
r1.append( pd.DataFrame(a.toarray(), columns=onehot.get_feature_names_out([d])) )
r = r.drop(columns = d)
r1.insert(0, r.reset_index(drop=True))
r = pd.concat(r1, axis=1)
return r
@property
def _y_numpy(self):
return self._y_dummies.to_numpy()
@property
def _y_scaled(self):
if len(self) > 0:
return self._transform(self._scalery, self._y_numpy)
return self._y_numpy
@property
def _y_transposed(self):
return self._y_scaled.squeeze() if self._transposey else self._y_scaled
@property
def y(self):
return self._y_transposed[self.indices]
def replace_y(self, new_y):
y_pred = self._predict(new_y)
offset = self._range_y.start
indices = [ i + offset for i in self.indices ]
assert len(y_pred) == len(indices), f'The number of predictions ({len(y_pred)}) does not match the number of samples ({len(indices)})'
r = copy.deepcopy(self)
r[self._columny] = np.NaN
columns = [r.columns.get_loc(c) for c in self._columny]
r.iloc[indices, columns] = y_pred.values
return r
def to_dataset(self):
"""
returns: a list with a train, valid and test DataSet. Every DataSet contains an X and y, where the
input data matrix X contains all columns but the last, and the target y contains the last column
columns: list of columns to convert, the last column is always the target. default=None means all columns.
"""
from torch.utils.data import TensorDataset
return TensorDataset(*self.tensors)
def _predict_y(self, predict):
if not callable(predict):
return predict
try:
from torch import nn
import torch
with torch.set_grad_enabled(False):
return to_numpy(predict(self.X_tensor)).reshape(len(self))
except:
raise
try:
return predict(self.X).reshape(len(self))
except:
raise
        raise ValueError('predict must be a function that works on NumPy arrays or PyTorch tensors')
def _predict(self, predict):
return self.inverse_transform_y(self._predict_y(predict))
def predict(self, predict, drop=True):
y_pred = self._predict_y(predict)
if drop:
return self._df.inverse_transform(self.X, y_pred)
return self._df.inverse_transform(self.X, self.y, y_pred)
def add_column(self, y_pred, *columns):
y_pred = to_numpy(y_pred)
offset = self._range_y.start
indices = [ i + offset for i in self.indices ]
assert len(y_pred) == len(indices), f'The number of predictions ({len(y_pred)}) does not match the number of samples ({len(indices)})'
r = copy.deepcopy(self)
y_pred = self.inverse_transform_y(y_pred)
if len(columns) == 0:
columns = [ c + '_pred' for c in self._columny ]
for c in columns:
r[c] = np.NaN
columns = [r.columns.get_loc(c) for c in columns]
r.iloc[indices, columns] = y_pred.values
return r
def inverse_transform_y(self, y_pred):
return self._df.inverse_transform_y(y_pred)
def line(self, x=None, y=None, xlabel = None, ylabel = None, title = None, **kwargs ):
self._df.evaluate().line(x=x, y=y, xlabel=xlabel, ylabel=ylabel, title=title, df=self, **kwargs)
def scatter(self, x=None, y=None, xlabel = None, ylabel = None, title = None, **kwargs ):
self._df.evaluate().scatter(x=x, y=y, xlabel=xlabel, ylabel=ylabel, title=title, df=self, **kwargs)
def scatter2d_class(self, x1=None, x2=None, y=None, xlabel=None, ylabel=None, title=None, loc='upper right', noise=0, **kwargs):
self._df.evaluate().scatter2d_class(x1=x1, x2=x2, y=y, xlabel=xlabel, ylabel=ylabel, title=title, loc=loc, noise=noise, df=self, **kwargs)
def scatter2d_color(self, x1=None, x2=None, c=None, xlabel=None, ylabel=None, title=None, noise=0, **kwargs):
self._df.evaluate().scatter2d_color(x1=x1, x2=x2, c=c, xlabel=xlabel, ylabel=ylabel, title=title, noise=noise, df=self, **kwargs)
def scatter2d_size(self, x1=None, x2=None, s=None, xlabel=None, ylabel=None, title=None, noise=0, **kwargs):
self._df.evaluate().scatter2d_size(x1=x1, x2=x2, s=s, xlabel=xlabel, ylabel=ylabel, title=title, noise=noise, df=self, **kwargs)
def plot_boundary(self, predict):
self._df.evaluate().plot_boundary(predict)
def plot_contour(self, predict):
self._df.evaluate().plot_contour(predict)
class PTDataSet(pd.DataFrame, PTDS):
_metadata = PTDS._metadata
_internal_names = PTDS._internal_names
_internal_names_set = PTDS._internal_names_set
@property
def _constructor(self):
return PTDataSet
@classmethod
def from_ptdataframe(cls, data, df, dfindices):
r = cls(data)
r._df = df
r._dfindices = dfindices
r._pt_categoryx = df._categoryx
r._pt_categoryy = df._categoryy
r._pt_dummiesx = df._dummiesx
r._pt_dummiesy = df._dummiesy
r._pt_columny = df._columny
r._pt_columnx = df._columnx
r._pt_transposey = df._transposey
r._pt_polynomials = df._pt_polynomials
r._pt_bias = df._pt_bias
r._pt_dtype = df._pt_dtype
r._pt_is_test = False
r._pt_sequence_window = df._pt_sequence_window
r._pt_sequence_shift_y = df._pt_sequence_shift_y
return r
@classmethod
def df_to_testset(cls, data, df, dfindices):
r = cls.from_ptdataframe(data, df, dfindices)
r._pt_is_test = True
return r
def groupby(self, by, axis=0, level=None, as_index=True, sort=True, group_keys=True, observed=False, dropna=True):
r = super().groupby(by, axis=axis, level=level, as_index=as_index, sort=sort, group_keys=group_keys, observed=observed, dropna=dropna)
return self._copy_meta( PTGroupedDataSet(r) )
class PTGroupedDataSetSeries(SeriesGroupBy, PTDS):
_metadata = PTDS._metadata
#_internal_names = PTDS._internal_names
#_internal_names_set = PTDS._internal_names_set
@property
def _constructor(self):
return PTGroupedDataSetSeries
@property
def _constructor_expanddim(self):
return PTGroupedDataFrame
class PTGroupedDataSet(DataFrameGroupBy, PTDS):
_metadata = PTDS._metadata
#_internal_names = PTDS._internal_names
#_internal_names_set = PTDS._internal_names_set
def __init__(self, data=None):
super().__init__(obj=data.obj, keys=data.keys, axis=data.axis, level=data.level, grouper=data.grouper, exclusions=data.exclusions,
selection=data._selection, as_index=data.as_index, sort=data.sort, group_keys=data.group_keys,
observed=data.observed, mutated=data.mutated, dropna=data.dropna)
@property
def _constructor(self):
return PTGroupedDataSet
@property
def _constructor_sliced(self):
return PTGroupedDataSetSeries
def __iter__(self):
for group, subset in super().__iter__():
yield group, self._copy_meta(subset)
def astype(self, dtype, copy=True, errors='raise'):
PTDataSet.astype(self, dtype, copy=copy, errors=errors)
def get_group(self, name, obj=None):
return self._ptdataset( super().get_group(name, obj=obj) )
def to_dataset(self):
from torch.utils.data import ConcatDataset
dss = []
for key, group in self:
dss.append( group.to_dataset())
return ConcatDataset(dss)
# -
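# As a quick sanity check, the `to_numpy` helper defined at the top of `ptdataset.py` can be
# exercised on its own: it converts PyTorch tensors and pandas objects to NumPy arrays and passes
# anything else through unchanged. The sketch below restates the helper so it runs standalone
# (using `except Exception` instead of the bare `except` above).

```python
import numpy as np
import pandas as pd

# standalone copy of the to_numpy helper from ptdataset.py
def to_numpy(arr):
    try:
        return arr.data.cpu().numpy()   # torch tensor -> ndarray
    except Exception:
        pass
    try:
        return arr.to_numpy()           # pandas Series/DataFrame -> ndarray
    except Exception:
        pass
    return arr                          # already an ndarray, or a plain sequence

print(to_numpy(pd.Series([1, 2, 3])))   # [1 2 3]
print(to_numpy(np.array([4.0, 5.0])))   # [4. 5.]
```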
# File: pipetorch/data/ptdataset.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extract Metafeatures
# + jupyter={"outputs_hidden": true}
from MetafeatureExtraction.metafeatures import *
path_to_datasets = 'Datasets/new/'
ExtractMetaFeatures(path_to_datasets)
# -
# To start collecting data for a given classifier over all datasets
# + active=""
# 'AdaBoost': params_adaboost,
# 'RandomForest': params_random_forest,
# 'SVM': params_svm,
# 'ExtraTrees': params_random_forest,
# 'GradientBoosting': params_gboosting,
# 'DecisionTree': params_decision_tree,
# 'LogisticRegression': param_lr,
# 'PassiveAggressiveClassifier': param_pass_agg,  # to be changed
# 'SGDClassifier': param_sgd,
# 'LinearSVC': LinearSVC  # to be changed as well
# -
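# The raw cell above sketches a mapping from classifier names to hyperparameter grids. A minimal
# runnable version might look like the following; the grid names mirror the sketch, but the value
# ranges are illustrative placeholders, not the grids actually used in this project.

```python
# Hypothetical hyperparameter grids (illustrative values only)
params_adaboost = {'n_estimators': [50, 100, 200], 'learning_rate': [0.01, 0.1, 1.0]}
params_random_forest = {'n_estimators': [100, 300], 'max_depth': [None, 10, 30]}
params_svm = {'C': [0.1, 1.0, 10.0], 'kernel': ['rbf', 'linear']}

classifier_params = {
    'AdaBoost': params_adaboost,
    'RandomForest': params_random_forest,
    'ExtraTrees': params_random_forest,   # shares the forest grid, as in the sketch above
    'SVM': params_svm,
}

for name, grid in classifier_params.items():
    print(name, sorted(grid))
```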
# # Classification per algorithm
from ModelsCode.classification import *
from pandas import DataFrame
path_to_datasets = 'Datasets/10/'
classification_per_algorithm(path=path_to_datasets, algorithm='SVM')
# +
from DataCollection.functions import *
path_to_datasets = 'Datasets/DS/'
classification_per_algorithm(path=path_to_datasets, algorithm='DecisionTree')
# -
# Conduct fANOVA on the data
from fANOVA.fanova_functions import *
do_fanova(dataset_name='PerformanceData/AB_results_total.csv', algorithm='AdaBoost')
# Create the Database object to import the collected data in desired formats
from Tools.database import Database
db = Database()
per_dataset_acc = db.get_per_dataset_accuracies()
per_dataset_acc.head()
# File: AMLBID/Recommender/.ipynb_checkpoints/Implementation-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_pytorch_p36)
# language: python
# name: conda_pytorch_p36
# ---
# # PyTorch Reference Layers
#
# A personal reference on common layers in PyTorch. Not meant to be comprehensive.
#
# Currently:
#
# 1. Linear
# 2. Embeddings
# 3. Dropout
# 4. Transformers
#
# Created by <NAME>.
import platform; print("Platform", platform.platform())
import sys; print("Python", sys.version)
import torch; print("PyTorch", torch.__version__)
import torch.nn as nn
import torch.nn.functional as F
# ## 1. Linear
#
# Handles the core matrix multiply of the form $y = xA^T + b$, where $x$ is the data, $A$ is the learned weight parameter and $b$ is the bias term.
#
# ### [`nn.Linear(in_features, out_features)`](https://pytorch.org/docs/stable/nn.html#linear)
#
# - **Input**: 2D,3D,4D of the form [observations [, n_heads, seq_len,] in_features] (e.g., [10, 8]).
# - **Arguments**: First arg is number of columns in the input called `in_features` (e.g., 8), second arg is number of columns in output called `out_features` (e.g., 16).
# - **Output**: 2D,3D,4D of the form [observations [, n_heads, seq_len,] out_features] (e.g., [10, 16])
# - **Stores**: The layer stores two parameters: the bias with shape [`out_features`] and the weight with shape [`out_features`, `in_features`]. The weight matrix is transposed before being multiplied by the input matrix.
# 2D example
x = torch.rand(10, 8) # e.g., [observations, hidden]
lin = nn.Linear(8, 16)
y = lin(x)
y.shape
# 3D Example, e.g., language modelling
x = torch.rand(10, 3, 8) # e.g., [observations, seq_len, hidden]
y = lin(x)
y.shape
# 4D example, e.g., transformer self-attention
x = torch.rand(10, 2, 3, 8) # e.g., [observations, n_heads, seq_len, hidden_per_head]
y = lin(x)
y.shape
lin.bias.shape # [out_features]
lin.weight.shape # [out_features, in_features]
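# Since the layer computes $y = xA^T + b$, the shapes above can be verified by hand with a plain
# NumPy reproduction of the same matrix multiply (illustrative; `nn.Linear` performs this internally):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((10, 8))    # [observations, in_features]
A = rng.random((16, 8))    # weight, [out_features, in_features]
b = rng.random(16)         # bias, [out_features]

y = x @ A.T + b            # y = x A^T + b, as in nn.Linear
print(y.shape)             # (10, 16)
```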
# By default, values are initialized from a uniform distribution bounded by $\pm\sqrt{\frac{1}{in\_features}}$ ([source code](https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear)).
#
# For example, we had 8 input features.
import numpy as np
print(np.sqrt(1/8))
vals = lin.weight.detach().numpy()
counts, cutoffs = np.histogram(vals)
print(cutoffs)
# The initialization can be changed as desired. For example:
class CustomLin(nn.Module):
def __init__(self):
super(CustomLin, self).__init__()
self.lin1 = nn.Linear(8, 16)
self.lin2 = nn.Linear(16, 6)
self._init_layers()
def _init_layers(self):
init_1 = np.sqrt(6) / np.sqrt(8 + 16)
self.lin1.weight.data.uniform_(-init_1, init_1)
self.lin1.bias.data.zero_()
self.lin2.weight.data.normal_(mean=0, std=np.sqrt(6 / 16))
self.lin2.bias.data.zero_()
def forward(self, x):
return self.lin2(self.lin1(x))
cl = CustomLin()
print(np.sqrt(6) / np.sqrt(8 + 16))
counts, cutoffs = np.histogram(cl.lin1.weight.detach().numpy())
print(cutoffs)
print(np.sqrt(6 / 16))
wts = cl.lin2.weight.detach().numpy()
wts.mean(), wts.std()
# For learning about some of the history and development of weight initialization, consult:
# - [Understanding the difficulty of training deep feedforward neural networks (2010)](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) by Xavier Glorot and Yoshua Bengio
# - [Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (2015)](https://arxiv.org/pdf/1502.01852.pdf) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun
# ## 2. Embeddings
#
# ### [`nn.Embedding(num_embeddings, embedding_dim)`](https://pytorch.org/docs/stable/nn.html#embedding)
#
# Simple lookup of any categorical variable (often NLP vocabulary) mapping to dense floating point representations.
#
# #### Shape
#
# - Input: Could be 1D or 2D depending on the application.
# - Output: If the input is 1D, the output will be 2D; if the input is 2D, the output will be 3D (an `embedding_dim` axis is appended).
#
# #### Parameters
#
# - **num_embeddings** (int) – how large is the vocab in dictionary or how many categories are there? (e.g., 25)
# - **embedding_dim** (int) – the number of features (e.g., 5)
# 1D input example (e.g., a single column in tabular data)
x = torch.tensor([15, 20, 7])
print(f'input shape: {x.shape}')
emb = nn.Embedding(num_embeddings=25, embedding_dim=5)
y = emb(x)
print(f'output shape: {y.shape}')
# 2D input example (e.g., text for language modelling with sequence on the rows and "text chunk" on the columns)
x = torch.tensor([[15, 20, 7],[23, 10, 6]])
print(f'input shape: {x.shape}')
emb = nn.Embedding(num_embeddings=25, embedding_dim=5)
y = emb(x)
print(f'output shape: {y.shape}')
emb.weight
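Under the hood an embedding is just row indexing into the weight matrix; a NumPy sketch with a random stand-in weight:

```python
import numpy as np

rng = np.random.default_rng(0)
weight = rng.standard_normal((25, 5))  # [num_embeddings, embedding_dim]

idx = np.array([15, 20, 7])            # 1D input of category ids
out = weight[idx]                      # row lookup, shape (3, 5)
print(out.shape)
```

The same indexing generalizes to 2D input: a (2, 3) index array returns a (2, 3, 5) output.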
# ## 3. Dropout
#
# ### [`nn.Dropout(p)`](https://pytorch.org/docs/master/nn.html#dropout)
#
# Randomly zeroes elements of the input matrix for regularization. Dropout is disabled when `model.eval()` is set. The surviving (non-zeroed) values are re-scaled by $\frac{1}{1-p}$ so the expected activation is unchanged.
#
# **Input:** Can be any shape
#
# **Output:** Same shape as input
#
# **See Also:** `nn.Dropout2d` and `nn.Dropout3d` for zero-ing entire channels at a time.
#
# **Paper**: Improving neural networks by preventing co-adaptation of feature detectors (2012) ([arxiv](https://arxiv.org/pdf/1207.0580.pdf))
x = torch.randn(2, 4)
print(f'input shape: {x.shape}')
do = nn.Dropout(p=0.6)
y = do(x)
print(f'output shape: {y.shape}')
print(f'output:\n{y}')
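The scaling by $\frac{1}{1-p}$ (so-called inverted dropout) keeps the expected value of each activation unchanged; a NumPy sketch of the same operation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
p = 0.6

mask = rng.random(x.shape) >= p       # keep each element with probability 1 - p
y = np.where(mask, x / (1 - p), 0.0)  # survivors are scaled up by 1 / (1 - p)
print(y)
```

Every output element is either exactly zero or the corresponding input divided by `1 - p`, which matches the values printed by `nn.Dropout` above.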
# Dropout is only activated once the training mode is turned on.
class DOModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.do = nn.Dropout(0.6)
    def forward(self, x):
        return self.do(x)
x = torch.randn(2, 4)
x
# Running with `model.train()`. This will give you a different output each time you call it (unless you set a seed).
#
# Incidentally, this is the key to Monte Carlo dropout, a technique for uncertainty estimation.
model = DOModel()
model.train()
y = model(x)
print(f'output:\n{y}')
# Running again with `model.eval()`. This will give you the same output no matter how many times you call it.
model.eval()
y = model(x)
print(f'output:\n{y}')
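A minimal sketch of the Monte Carlo dropout idea mentioned above: keep dropout active at prediction time, run several stochastic forward passes, and use the spread of the outputs as an uncertainty estimate. (Pure NumPy here; with the `DOModel` above you would keep `model.train()` set and call the model repeatedly.)

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, p=0.6):
    # one stochastic pass through a "network" that is just inverted dropout here
    mask = rng.random(x.shape) >= p
    return np.where(mask, x / (1 - p), 0.0)

x = rng.standard_normal(4)
samples = np.stack([stochastic_forward(x) for _ in range(1000)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
print(mean.shape, std.shape)  # per-element predictive mean and spread
```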
# ## 4. Transformers
#
# ### [`nn.TransformerEncoderLayer(d_model, nhead)`](https://pytorch.org/docs/master/nn.html?highlight=transformerencoderlayer#torch.nn.TransformerEncoderLayer)
#
# Transformer operations are defined in "Attention Is All You Need (2017)." The difference between this and the decoder layer is that the encoder only attends to itself (key/value/query are all the source language), whereas the decoder layer has an attention over the memory (i.e., the encoding of the input sequence) as well as self-attention over the target sequence.
#
# **Input**: A 3D input of the structure [sequence length, batch size, hidden features].
#
# **Output**: Same structure as input.
#
# **Operation**: Key, Value, and Query are all the source sentence (only self-attention). See [source code](https://pytorch.org/docs/master/_modules/torch/nn/modules/transformer.html#TransformerEncoderLayer)
#
# ```
# src2 = self.self_attn(src, src, src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
# src = src + self.dropout1(src2)
# src = self.norm1(src)
# src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
# src = src + self.dropout2(src2)
# src = self.norm2(src)
# return src
# ```
#
# **Parameters**
#
# - **d_model** – the number of expected features in the input (required).
# - **nhead** – the number of heads in the multiheadattention models (required).
# - **dim_feedforward** – the dimension of the feedforward network model (default=2048).
# - **dropout** – the dropout value (default=0.1).
# - **activation** – the activation function of intermediate layer, relu or gelu (default=relu).
x = torch.rand(10, 32, 512)
print(f'input shape: {x.shape}')
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
y = encoder_layer(x)
print(f'output shape: {y.shape}')
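The source-code sequence shown above (self-attention, residual, norm, feed-forward, residual, norm) can be sketched in NumPy with a single attention head. This is a simplification: the real layer uses multi-head attention with learned Q/K/V projections, while this version reuses `src` directly and uses random stand-in feed-forward weights.

```python
import numpy as np

def layer_norm(a, eps=1e-5):
    return (a - a.mean(-1, keepdims=True)) / np.sqrt(a.var(-1, keepdims=True) + eps)

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
src = rng.standard_normal((10, 8))                   # [seq_len, d_model], batch omitted

# self-attention: key = value = query = src
scores = softmax(src @ src.T / np.sqrt(src.shape[-1]))
src = layer_norm(src + scores @ src)                 # attention + residual + norm

# position-wise feed-forward (random weights stand in for linear1/linear2)
W1, W2 = rng.standard_normal((8, 32)), rng.standard_normal((32, 8))
src = layer_norm(src + np.maximum(src @ W1, 0) @ W2) # feed-forward + residual + norm
print(src.shape)  # (10, 8)
```

Note how the output keeps the input shape, exactly as the `nn.TransformerEncoderLayer` example above shows.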
# ## 5. Normalization
#
# ### [`nn.LayerNorm`](https://pytorch.org/docs/stable/nn.html#layernorm)
#
# Layer Normalization is described in the 2016 paper [Layer Normalization](https://arxiv.org/pdf/1607.06450.pdf). It consists of subtracting the mean and dividing by the standard deviation of the data coming into the layer. You get to choose which dimensions to use when computing the mean and variance. Unlike batch normalization, this layer performs normalization during both training and inference.
x = torch.randn(3, 5)
# Without learnable parameters (elementwise_affine=False)
m = nn.LayerNorm(5, elementwise_affine=False)
x
E_x = np.mean(x[0,:].detach().numpy())
E_x
V_x = np.var(x[0,:].detach().numpy())
V_x
(x[0,0] - E_x) / np.sqrt(V_x + m.eps)
m(x)
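The single-element check above can be vectorized: normalizing every row with NumPy reproduces what `nn.LayerNorm(5)` computes (a sketch with random data; with `elementwise_affine=False` there is no learned scale or shift):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5))
eps = 1e-5

# normalize over the last dimension, which is what nn.LayerNorm(5) does
y = (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)
print(y.mean(-1), y.var(-1))  # each row: mean ~0, variance ~1
```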
| notebooks/layers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: python
# ---
# # Layout and Styling of Jupyter widgets
#
# This notebook presents how to layout and style Jupyter interactive widgets to build rich and *reactive* widget-based applications.
# ## The `layout` attribute.
#
# Every Jupyter interactive widget has a `layout` attribute exposing a number of CSS properties that impact how widgets are laid out.
#
# ### Exposed CSS properties
#
# <div class="alert alert-info" style="margin: 20px">
# The following properties map to the values of the CSS properties of the same name (underscores being replaced with dashes), applied to the top DOM elements of the corresponding widget.
# </div>
#
#
# **Sizes**
# - `height`
# - `width`
# - `max_height`
# - `max_width`
# - `min_height`
# - `min_width`
#
# **Display**
#
# - `visibility`
# - `display`
# - `overflow`
# - `overflow_x`
# - `overflow_y`
#
# **Box model**
# - `border`
# - `margin`
# - `padding`
#
# **Positioning**
# - `top`
# - `left`
# - `bottom`
# - `right`
#
# **Flexbox**
# - `order`
# - `flex_flow`
# - `align_items`
# - `flex`
# - `align_self`
# - `align_content`
# - `justify_content`
#
#
# ### Shorthand CSS properties
#
# You may have noticed that certain CSS properties such as `margin-[top/right/bottom/left]` seem to be missing. The same holds for `padding-[top/right/bottom/left]` etc.
#
# In fact, you can atomically specify `[top/right/bottom/left]` margins via the `margin` attribute alone by passing the string
#
# ```
# margin: 100px 150px 100px 80px;
# ```
#
# for `top`, `right`, `bottom`, and `left` margins of `100`, `150`, `100`, and `80` pixels, respectively.
#
# Similarly, the `flex` attribute can hold values for `flex-grow`, `flex-shrink` and `flex-basis`. The `border` attribute is a shorthand property for `border-width`, `border-style (required)`, and `border-color`.
# ### Simple examples
# The following example shows how to resize a `Button` so that its views have a height of `80px` and a width of `50%` of the available space:
# +
from ipywidgets import Button, Layout
b = Button(description='(50% width, 80px height) button',
layout=Layout(width='50%', height='80px'))
b
# -
# The `layout` property can be shared between multiple widgets and assigned directly.
Button(description='Another button with the same layout', layout=b.layout)
# ### Description
# You may have noticed that the widget's length is shorter in the presence of a description. This is because the description is rendered *inside* of the widget's total length. You **cannot** change the width of the internal description field. If you need more flexibility to lay out widgets and captions, you should use a combination of `Label` widgets arranged in a layout.
# +
from ipywidgets import HBox, Label, IntSlider
HBox([Label('A too long description'), IntSlider()])
# -
# ### More Styling (colors, inner-widget details)
#
# The `layout` attribute only exposes layout-related CSS properties for the top-level DOM element of widgets. Individual widgets may expose more styling-related properties, or none. For example, the `Button` widget has a `button_style` attribute that may take 5 different values:
#
# - `'primary'`
# - `'success'`
# - `'info'`
# - `'warning'`
# - `'danger'`
#
# besides the default empty string ''.
# +
from ipywidgets import Button
Button(description='Danger Button', button_style='danger')
# -
# ### Natural sizes, and arrangements using HBox and VBox
#
# Most of the core-widgets have
# - a natural width that is a multiple of `148` pixels
# - a natural height of `32` pixels or a multiple of that number.
# - a default margin of `2` pixels
#
# which are the values used when they are not specified in the `layout` attribute.
#
# This allows simple layouts based on the `HBox` and `VBox` helper functions to align naturally:
# +
from ipywidgets import Button, HBox, VBox
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w) for w in words]
HBox([VBox([items[0], items[1]]), VBox([items[2], items[3]])])
# -
# ### Latex
# Widgets such as sliders and text inputs have a description attribute that can render Latex Equations. The `Label` widget also renders Latex equations.
from ipywidgets import IntSlider, Label
IntSlider(description='$\int_0^t f$')
Label(value='$e=mc^2$')
# ### Number formatting
#
# Sliders have a readout field which can be formatted using Python's *[Format Specification Mini-Language](https://docs.python.org/3/library/string.html#format-specification-mini-language)*. If the space available for the readout is too narrow for the string representation of the slider value, a different styling is applied to show that not all digits are visible.
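The readout follows Python's standard format specifiers; for instance, assuming a slider value of 7.12345, a readout format of `'.2f'` would display the value as follows:

```python
value = 7.12345
print(format(value, '.2f'))  # two decimal places
print(format(value, '.2e'))  # scientific notation
print(format(1234, ','))     # thousands separator
```

The same specifiers work anywhere the Format Specification Mini-Language is accepted, e.g. in f-strings and `str.format`.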
# ## The Flexbox layout
#
# In fact, the `HBox` and `VBox` helpers used above are functions returning instances of the `Box` widget with specific options.
#
# The `Box` widget enables the entire CSS Flexbox spec, enabling rich reactive layouts in the Jupyter notebook. It aims at providing an efficient way to lay out, align and distribute space among items in a container.
#
# Again, the whole Flexbox spec is exposed via the `layout` attribute of the container widget (`Box`) and the contained items. One may share the same `layout` attribute among all the contained items.
#
# ### Acknowledgement
#
# The following tutorial on the Flexbox layout follows the lines of the article *[A Complete Guide to Flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/)* by Chris Coyier.
#
# ### Basics and terminology
#
# Since flexbox is a whole module and not a single property, it involves a lot of things including its whole set of properties. Some of them are meant to be set on the container (parent element, known as "flex container") whereas the others are meant to be set on the children (said "flex items").
#
# If regular layout is based on both block and inline flow directions, the flex layout is based on "flex-flow directions". Please have a look at this figure from the specification, explaining the main idea behind the flex layout.
#
# 
#
# Basically, items will be laid out following either the `main axis` (from `main-start` to `main-end`) or the `cross axis` (from `cross-start` to `cross-end`).
#
# - `main axis` - The main axis of a flex container is the primary axis along which flex items are laid out. Beware, it is not necessarily horizontal; it depends on the flex-direction property (see below).
# - `main-start | main-end` - The flex items are placed within the container starting from main-start and going to main-end.
# - `main size` - A flex item's width or height, whichever is in the main dimension, is the item's main size. The flex item's main size property is either the ‘width’ or ‘height’ property, whichever is in the main dimension.
# - `cross axis` - The axis perpendicular to the main axis is called the cross axis. Its direction depends on the main axis direction.
# - `cross-start | cross-end` - Flex lines are filled with items and placed into the container starting on the cross-start side of the flex container and going toward the cross-end side.
# - `cross size` - The width or height of a flex item, whichever is in the cross dimension, is the item's cross size. The cross size property is whichever of ‘width’ or ‘height’ that is in the cross dimension.
#
# ### Properties of the parent
#
# 
#
# - `display` (must be equal to 'flex' or 'inline-flex')
#
# This defines a flex container (inline or block).
# - `flex-flow` **(shorthand for two properties)**
#
# This is a shorthand for the `flex-direction` and `flex-wrap` properties, which together define the flex container's main and cross axes. Default is `row nowrap`.
#
# - `flex-direction` (row | row-reverse | column | column-reverse)
#
# This establishes the main-axis, thus defining the direction flex items are placed in the flex container. Flexbox is (aside from optional wrapping) a single-direction layout concept. Think of flex items as primarily laying out either in horizontal rows or vertical columns.
# 
#
# - `flex-wrap` (nowrap | wrap | wrap-reverse)
#
# By default, flex items will all try to fit onto one line. You can change that and allow the items to wrap as needed with this property. Direction also plays a role here, determining the direction new lines are stacked in.
# 
#
# - `justify-content` (flex-start | flex-end | center | space-between | space-around)
#
# This defines the alignment along the main axis. It helps distribute extra free space left over when either all the flex items on a line are inflexible, or are flexible but have reached their maximum size. It also exerts some control over the alignment of items when they overflow the line.
# 
#
# - `align-items` (flex-start | flex-end | center | baseline | stretch)
#
# This defines the default behaviour for how flex items are laid out along the cross axis on the current line. Think of it as the justify-content version for the cross-axis (perpendicular to the main-axis).
# 
#
# - `align-content` (flex-start | flex-end | center | baseline | stretch)
#
# This aligns a flex container's lines within when there is extra space in the cross-axis, similar to how justify-content aligns individual items within the main-axis.
# 
#
# **Note**: this property has no effect when there is only one line of flex items.
#
# ### Properties of the items
#
# 
#
# The flexbox-related CSS properties of the items have no impact if the parent element is not a flexbox container (i.e. has a `display` attribute equal to `flex` or `inline-flex`).
#
#
# - `order`
#
# By default, flex items are laid out in the source order. However, the order property controls the order in which they appear in the flex container.
# <img src="./images/order-2.svg" alt="Order" style="width: 500px;"/>
#
# - `flex` **(shorthand for three properties)**
# This is the shorthand for flex-grow, flex-shrink and flex-basis combined. The second and third parameters (flex-shrink and flex-basis) are optional. Default is `0 1 auto`.
#
# - `flex-grow`
#
# This defines the ability for a flex item to grow if necessary. It accepts a unitless value that serves as a proportion. It dictates what amount of the available space inside the flex container the item should take up.
#
# If all items have flex-grow set to 1, the remaining space in the container will be distributed equally to all children. If one of the children has a value of 2, that child would take up twice as much of the remaining space as the others (or it will try to, at least).
# 
#
# - `flex-shrink`
#
# This defines the ability for a flex item to shrink if necessary.
#
# - `flex-basis`
#
# This defines the default size of an element before the remaining space is distributed. It can be a length (e.g. `20%`, `5rem`, etc.) or a keyword. The `auto` keyword means *"look at my width or height property"*.
#
# - `align-self`
#
# This allows the default alignment (or the one specified by align-items) to be overridden for individual flex items.
#
# 
#
# ### The VBox and HBox helpers
#
# The `VBox` and `HBox` helper provide simple defaults to arrange child widgets in Vertical and Horizontal boxes.
#
# ```Python
# def VBox(*pargs, **kwargs):
# """Displays multiple widgets vertically using the flexible box model."""
# box = Box(*pargs, **kwargs)
# box.layout.display = 'flex'
# box.layout.flex_flow = 'column'
# box.layout.align_items = 'stretch'
# return box
#
# def HBox(*pargs, **kwargs):
# """Displays multiple widgets horizontally using the flexible box model."""
# box = Box(*pargs, **kwargs)
# box.layout.display = 'flex'
# box.layout.align_items = 'stretch'
# return box
# ```
#
#
# ### Examples
# **Four buttons in a `VBox`. Items stretch to the maximum width, in a vertical box taking `50%` of the available space.**
# +
from ipywidgets import Layout, Button, Box
items_layout = Layout(flex='1 1 auto',
width='auto') # override the default width of the button to 'auto' to let the button grow
box_layout = Layout(display='flex',
flex_flow='column',
align_items='stretch',
border='solid',
width='50%')
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w, layout=items_layout, button_style='danger') for w in words]
box = Box(children=items, layout=box_layout)
box
# -
# **Three buttons in an HBox. Items flex proportionally to their weight.**
# +
from ipywidgets import Layout, Button, Box
items_layout = Layout(flex='1 1 auto', width='auto') # override the default width of the button to 'auto' to let the button grow
items = [
Button(description='weight=1'),
Button(description='weight=2', layout=Layout(flex='2 1 auto', width='auto')),
Button(description='weight=1'),
]
box_layout = Layout(display='flex',
flex_flow='row',
align_items='stretch',
border='solid',
width='50%')
box = Box(children=items, layout=box_layout)
box
# -
# **A more advanced example: a reactive form.**
#
# The form is a `VBox` of width '50%'. Each row in the VBox is an HBox that justifies the content with space between.
# +
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider
label_layout = Layout()
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between'
)
form_items = [
Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),
Box([Label(value='Egg style'),
Dropdown(options=['Scrambled', 'Sunny side up', 'Over easy'])], layout=form_item_layout),
Box([Label(value='Ship size'),
FloatText()], layout=form_item_layout),
Box([Label(value='Information'),
Textarea()], layout=form_item_layout)
]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='50%'
))
form
# -
# **A more advanced example: a carousel.**
# +
from ipywidgets import Layout, Button, Box
item_layout = Layout(height='100px', min_width='40px')
items = [Button(layout=item_layout, button_style='warning') for i in range(40)]
box_layout = Layout(overflow_x='scroll',
border='3px solid black',
width='500px',
height='',
flex_direction='row',
display='flex')
carousel = Box(children=items, layout=box_layout)
carousel
| docs/source/examples/Widget Styling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Data Analysis of Hblpower share
# ***Specifically, we are going to visualize the variation in the open and close price of the Hblpower share in this notebook***
# ### Importing Necessary Libraries
# - ***findspark to locate and initialize Spark***
# - ***kafka to receive the data via a KafkaConsumer***
# - ***Matplotlib to visualize the data***
# - ***json to parse the data*** **because we are getting the data in JSON format**
# - ***Threading because threads provide a way to improve application performance through parallelism***
# - ***Warnings so that we can ignore warnings that are not that important***
import findspark
findspark.init()
from kafka import KafkaConsumer
import matplotlib.pyplot as plt
import json
import threading
import warnings
import seaborn as sns
import numpy as np
warnings.filterwarnings('ignore')
# ### Starting Consumer
# - ***Consumer is used to get the data***
# - ***Topic Name is Hblpower1***
# - ***Port Number 9092***
consumer = KafkaConsumer('hblpower1',
group_id='hblpower1',
bootstrap_servers=['localhost:9092'],
)
x, y={}, {}
# ### Creating Plot function
# ***This function continuously reads messages in the form of key-value pairs and stores them in the dictionaries x and y***
def plot():
    for message in consumer:
        record = json.loads(message.value.decode("utf-8"))  # decode and parse once per message
        x[record["date"]] = float(record["open"])
        y[record["date"]] = float(record["close"])
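The parsing step can be checked without a running broker by feeding it a hypothetical message payload (the field names `date`, `open`, and `close` match the producer format assumed above):

```python
import json

# hypothetical raw Kafka message value, as bytes
raw = b'{"date": "2021-06-01", "open": "42.5", "close": "43.1"}'

record = json.loads(raw.decode("utf-8"))  # decode once, reuse for every field
date = record["date"]
open_price = float(record["open"])
close_price = float(record["close"])
print(date, open_price, close_price)
```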
# **Starting Thread**
plot_thread = threading.Thread(target=plot)
plot_thread.start()
# ### Let's Visualize the Data in Real Time
# - ***Importing Seaborn library (used for data visualization)***
# - ***Set figure Size (20,6)***
# - ***Sorted Dictionary x and y by the date***
# - ***Two line Plot we have used one for Open Price and One for Close price of the Day***
# - ***Set XLabel***
# - ***Set YLabel***
# - ***Set The Title for the plot***
# - ***Set XTicks and rotate those by 90 Degree and Keeping fontSize as 10***
# - ***setting Ylim so that we can visualize lines in better manner***
# - ***Setting legend so that we can understand which line represent which data***
# - ***plt.show() is used to show the Plot***
try:
    fig = plt.figure(figsize=(20, 8))
    x = dict(sorted(x.items(), key=lambda item: item[0]))
    y = dict(sorted(y.items(), key=lambda item: item[0]))
    array = np.linspace(x[list(x.keys())[0]], x[list(x.keys())[-1]], len(x))
    sns.lineplot([*y.keys()], [*x.values()], color='red', linewidth=2.5, label='Open Price')
    sns.lineplot([*y.keys()], [*y.values()], color='green', linewidth=2.5, label='Close Price')
    sns.lineplot([*y.keys()], array, linewidth=2.5, label='Flow of the Stock')
    plt.xlabel("Date", fontsize=20)
    plt.ylabel("Open and Close Price", fontsize=20)
    plt.title("Open and Close Price Comparison of the Stock with Date (Hblpower)", fontsize=20)
    plt.xticks(rotation=90, fontsize=10)
    plt.ylim((30, 60))
    plt.legend(loc='lower right', frameon=True, fontsize=20)
    plt.show()
except Exception as e:
    print(f'Something went wrong, please try again: {e}')
| EV/Open and Close Share/Battery/HBL Power/openCloseWithDate_Hblpower.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Multi-qubit quantum circuit
# In this exercise we create a two-qubit circuit with both qubits in superposition, and then measure the individual qubits, resulting in two coin-toss results with the following possible outcomes with equal probability: $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$. This is like tossing two coins.
#
# + [markdown] slideshow={"slide_type": "slide"}
# Import the required libraries, including the IBM Q library for working with IBM Q hardware.
# + slideshow={"slide_type": "fragment"}
import numpy as np
from qiskit import QuantumCircuit, execute, Aer
from qiskit.tools.monitor import job_monitor
# Import visualization
from qiskit.visualization import plot_histogram, plot_bloch_multivector, iplot_bloch_multivector, plot_state_qsphere, iplot_state_qsphere
# Add the state vector calculation function
def get_psi(circuit, vis):
    global psi
    backend = Aer.get_backend('statevector_simulator')
    psi = execute(circuit, backend).result().get_statevector(circuit)
    if vis == "IQ":
        display(iplot_state_qsphere(psi))
    elif vis == "Q":
        display(plot_state_qsphere(psi))
    elif vis == "M":
        print(psi)
    elif vis == "B":
        display(plot_bloch_multivector(psi))
    else:  # vis == "IB"
        display(iplot_bloch_multivector(psi))
# + [markdown] slideshow={"slide_type": "slide"}
# How many qubits do we want to use? The notebook lets you set up multi-qubit circuits of various sizes. Keep in mind that the biggest publicly available IBM quantum computer is 14 qubits in size.
# + slideshow={"slide_type": "fragment"}
#n_qubits=int(input("Enter number of qubits:"))
n_qubits=2
# + [markdown] slideshow={"slide_type": "slide"}
# Create quantum circuit that includes the quantum register and the classic register. Then add a Hadamard (super position) gate to all the qubits. Add measurement gates.
# + slideshow={"slide_type": "fragment"}
qc1 = QuantumCircuit(n_qubits, n_qubits)
qc_measure = QuantumCircuit(n_qubits, n_qubits)
for qubit in range(0, n_qubits):
    qc1.h(qubit)  # a Hadamard gate that creates a superposition
for qubit in range(0, n_qubits):
    qc_measure.measure(qubit, qubit)
display(qc1.draw(output="mpl"))
# + [markdown] slideshow={"slide_type": "slide"}
# Now that we have more than one qubit it is starting to become a bit difficult to visualize the outcomes when running the circuit. To alleviate this we can instead have `get_psi` print the statevector itself by calling it with the vis parameter set to `"M"`. We can also have it display a Qiskit-unique visualization called a Q sphere by passing the parameter `"Q"` (static) or `"IQ"` (interactive).
# + slideshow={"slide_type": "fragment"}
get_psi(qc1,"M")
print (abs(np.square(psi)))
get_psi(qc1,"B")
# + [markdown] slideshow={"slide_type": "slide"}
# Now we see the statevector for multiple qubits, and can calculate the probabilities for the different outcomes by squaring the complex parameters in the vector.
#
# The Q sphere visualization provides the same information in a visual form, with $|0..0\rangle$ at the north pole, $|1..1\rangle$ at the south pole, and other combinations on latitude circles. In the interactive version, you can hover over the tips of the vectors to see the state, probability, and phase data. In the static version, the size of the vector tip represents the relative probability of getting that specific result, and the color represents the phase angle for that specific output. More on that later!
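The probability calculation described here — squaring the magnitude of each amplitude — can be checked directly for the two-qubit uniform superposition (a NumPy sketch, independent of Qiskit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1, 0])                       # |0>

# two qubits, each put into superposition: (H|0>) ⊗ (H|0>)
psi = np.kron(H @ zero, H @ zero)
probs = np.abs(psi) ** 2
print(probs)  # [0.25 0.25 0.25 0.25]
```

All four outcomes are equally likely, matching the histogram from the simulator run below.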
# + [markdown] slideshow={"slide_type": "slide"}
# Now combine your circuit with the measurement circuit and run 1,000 shots to get statistics on the possible outcomes.
#
# + slideshow={"slide_type": "fragment"}
backend = Aer.get_backend('qasm_simulator')
qc_final=qc1+qc_measure
job = execute(qc_final, backend, shots=1000)
counts1 = job.result().get_counts(qc_final)
print(counts1)
plot_histogram(counts1)
# + [markdown] slideshow={"slide_type": "slide"}
# As you might expect, with two independent qubits each in a superposition, the resulting outcomes should be spread evenly across the possible outcomes: all the combinations of 0 and 1.
#
# **Time for you to do some work!** To get an understanding of the probable outcomes and how these are displayed on the interactive (or static) Q Sphere, change the `n_qubits=2` value in the cell above, and run the cells again for a different number of qubits.
#
# When you are done, set the value back to 2, and continue on.
#
# + slideshow={"slide_type": "fragment"}
n_qubits=2
# + [markdown] slideshow={"slide_type": "slide"}
# # Entangled-qubit quantum circuit - The Bell state
#
# Now we are going to do something different. We will entangle the qubits.
#
# Create quantum circuit that includes the quantum register and the classic register. Then add a Hadamard (super position) gate to the first qubit. Then add a controlled-NOT gate (cx) between the first and second qubit, entangling them. Add measurement gates.
#
# We then take a look at using the CX (Controlled-NOT) gate to entangle the two qubits in a so called Bell state. This surprisingly results in the following possible outcomes with equal probability: $|00\rangle$ and $|11\rangle$. Two entangled qubits do not at all behave like two tossed coins.
#
# We then run the circuit a large number of times to see what the statistical behavior of the qubits are.
# Finally, we run the circuit on real IBM Q hardware to see how real physical qubits behave.
#
# In this exercise we introduce the CX gate, which creates entanglement between two qubits, by flipping the controlled qubit (q_1) if the controlling qubit (q_0) is 1.
#
# 
qc2 = QuantumCircuit(n_qubits, n_qubits)
qc2_measure = QuantumCircuit(n_qubits, n_qubits)
for qubit in range(0, n_qubits):
    qc2_measure.measure(qubit, qubit)
qc2.h(0)  # A Hadamard gate that puts the first qubit in superposition
display(qc2.draw(output="mpl"))
get_psi(qc2,"M")
get_psi(qc2,"B")
# + slideshow={"slide_type": "subslide"}
for qubit in range(1, n_qubits):
    qc2.cx(0, qubit)  # a controlled-NOT gate that entangles the qubits
display(qc2.draw(output="mpl"))
get_psi(qc2, "B")
# + [markdown] slideshow={"slide_type": "slide"}
# Now we notice something peculiar: after we add the CX gate, entangling the qubits, the Bloch spheres display nonsense. Why is that? It turns out that once your qubits are entangled they can no longer be described individually, but only as a combined object. Let's take a look at the state vector and Q sphere.
# + slideshow={"slide_type": "fragment"}
get_psi(qc2,"M")
print (abs(np.square(psi)))
get_psi(qc2,"Q")
# + [markdown] slideshow={"slide_type": "slide"}
# Set the backend to a local simulator. Then create a quantum job for the circuit on the selected backend that runs just one shot, simulating a toss of two coins at once, and run the job. Display the result: either 0 for up (base) or 1 for down (excited) for each qubit. Display the result as a histogram: either |00> or |11> with 100% probability.
# + slideshow={"slide_type": "fragment"}
backend = Aer.get_backend('qasm_simulator')
qc2_final=qc2+qc2_measure
job = execute(qc2_final, backend, shots=1)
counts2 = job.result().get_counts(qc2_final)
print(counts2)
plot_histogram(counts2)
# + [markdown] slideshow={"slide_type": "fragment"}
# Note how the qubits completely agree. They are entangled.
#
# **Do some work..** Run the cell above a few times to verify that you only get the results 00 or 11.
# + [markdown] slideshow={"slide_type": "slide"}
# **Your turn again!** Now, let's run quite a few more shots and display the statistics for the two results. This time we are no longer just looking at two qubits, but at the amassed results of thousands of runs on them.
# + slideshow={"slide_type": "fragment"}
# Add your code here
# + [markdown] slideshow={"slide_type": "fragment"}
# And look at that, we are back at our coin toss results, fifty-fifty. Every time one of the coins comes up heads (|0>) the other one follows suit. Tossing one coin we immediately know what the other one will come up as; the coins (qubits) are entangled.
# + [markdown] slideshow={"slide_type": "slide"}
# # Run your entangled circuit on an IBM quantum computer
# **Important:** With the simulator we get perfect results, only |00> or |11>. On a real NISQ (Noisy Intermediate Scale Quantum computer) we do not expect perfect results like this. Let's run the Bell state once more, but on an actual IBM Q quantum computer.
# -
# **Time for some work!** Before you can run your program on IBM Q you must load your API key. If you are running this notebook in an IBM Qx environment, your API key is already stored in the system, but if you are running on your own machine you [must first store the key](https://qiskit.org/documentation/install.html#access-ibm-q-systems).
# +
#Save and store API key locally.
from qiskit import IBMQ
#IBMQ.save_account('MY_API_TOKEN') <- Uncomment this line if you need to store your API key
#Load account information
IBMQ.load_account()
provider = IBMQ.get_provider()
# + [markdown] slideshow={"slide_type": "fragment"}
# Grab the least busy IBM Q backend.
# + slideshow={"slide_type": "fragment"}
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(operational=True, simulator=False))
#backend = provider.get_backend('ibmqx2')
print("Selected backend:",backend.status().backend_name)
print("Number of qubits(n_qubits):", backend.configuration().n_qubits)
print("Pending jobs:", backend.status().pending_jobs)
# + [markdown] slideshow={"slide_type": "slide"}
# Let's run a large number of shots and display the statistics for the two results, $|00\rangle$ and $|11\rangle$, on the real hardware. Monitor the job and display our place in the queue.
# + slideshow={"slide_type": "fragment"}
from qiskit.tools.monitor import job_monitor  # needed for job_monitor below

if n_qubits > backend.configuration().n_qubits:
    print("Your circuit contains too many qubits (",n_qubits,"). Start over!")
else:
    job = execute(qc2_final, backend, shots=1000)
    job_monitor(job)
# + [markdown] slideshow={"slide_type": "slide"}
# Get the results, and display them in a histogram. Notice how we no longer get only the perfect entangled results, but also a few runs where the two qubits disagree. At this stage, quantum computers are not perfect calculating machines, but pretty noisy.
# + slideshow={"slide_type": "fragment"}
result = job.result()
counts = result.get_counts(qc2_final)
print(counts)
plot_histogram(counts)
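A rough sketch of where those imperfect counts come from: start from perfectly correlated '00'/'11' outcomes and flip each readout bit independently with a small probability (the 3% flip rate is an illustrative guess, not a calibrated device error rate):

```python
import random
from collections import Counter

def noisy_bell_shots(shots, p_flip=0.03, seed=1):
    """Ideal correlated outcomes with each bit flipped independently with probability p_flip."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(shots):
        bits = [rng.choice('01')] * 2  # perfectly correlated pair: '00' or '11'
        bits = [b if rng.random() > p_flip else '10'[int(b)] for b in bits]
        counts[''.join(bits)] += 1
    return counts

counts_noisy = noisy_bell_shots(1000)
print(counts_noisy)  # mostly 00/11, with a small tail of 01/10
```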
# + [markdown] slideshow={"slide_type": "slide"}
# That was the simple readout. Let's take a look at the full returned result object:
# + slideshow={"slide_type": "fragment"}
print(result)
# -
| qiskit_advocates/meetups/11_04_19-Hassi_Norlen-McLean/exercise_2_multiple_qubits_and_entanglement_STUDENT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro to Modulus
# ## Families of objects
# The goal of this book is to introduce a collection of Python routines to explore the concept of the modulus of a family of objects $\Gamma$ on a discrete graph $G=(V,E)$. Roughly speaking, modulus is a framework for quantifying the "richness" of the family $\Gamma$. Before we define modulus, therefore, it makes sense to consider what is meant by a family of objects. As will become apparent in later chapters, there aren't many obvious constraints on the type of graph considered. For example, the graph may be finite or infinite, it may be directed or undirected, it may even contain parallel edges or self-loops. Modulus is a flexible framework that can be adapted to many settings. However, for developing an intuition about modulus, it helps to consider something concrete; for now, consider a finite, simple, undirected graph $G$.
#
# Similarly, there is a great deal of flexibility in what is considered to be an "object". A good starting point is to consider $\Gamma$ to be some collection of subsets of edges. That is, $\Gamma\subseteq 2^E$. While it is certainly possible to define more complicated notions of "object," families of edge subsets are already sufficiently rich to demonstrate the modulus framework. For now, then, we shall think of $\Gamma$ as a collection of subsets of edges. Some possible choices to keep in mind are:
# - the family of all paths connecting two specified distinct nodes
# - the family of all cuts separating two specified distinct nodes
# - the family of all spanning trees of $G$
# - the family of all cycles in $G$
# - the family of all triangles in $G$
# - the family of all stars in $G$
#
# In each of these cases, we may identify each object $\gamma$ in the family $\Gamma$ with the set of edges used by the object. For example, a path can be identified with the collection of edges it crosses. A convenient way to keep track of this information is via the $\Gamma\times E$ *usage matrix* $\mathcal{N}$, whose entries are defined as
#
# $$
# \mathcal{N}(\gamma,e) :=
# \begin{cases}
# 1 & \text{if }e\in\gamma,\\
# 0 & \text{otherwise}.
# \end{cases}
# $$
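As a concrete illustration of the definition, here is a tiny made-up family on four edges and the usage matrix it induces (no graph is needed at this level; objects are just edge sets):

```python
import numpy as np

# A made-up setting with 4 edges (indexed 0..3) and a family of two objects,
# each given as the set of edges it uses.
family = [{0, 1}, {1, 2, 3}]
n_edges = 4

# Usage matrix: N[gamma, e] = 1 iff object gamma uses edge e.
N = np.zeros((len(family), n_edges))
for g, obj in enumerate(family):
    for e in obj:
        N[g, e] = 1

print(N)
```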
# ### Example: the family of triangles
#
# The following code cell produces the $\mathcal{N}$ matrix for the family of triangles on a particular graph. Notice that, in order to write $\mathcal{N}$ as a two-dimensional array of numbers, we need to choose some ordering for the edges and objects. These orderings can be chosen arbitrarily, as long as they are used consistently.
#
# The code also draws the three triangles of the graph. The labels on the edges show the edge enumeration. The figure on the left corresponds to the first row of $\mathcal{N}$, the middle figure to the second row, and the right figure to the third row.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
from modulus_tools import demo_graphs
# the demo graph
G, pos = demo_graphs.slashed_house_graph()
# enumerate the edges
for i, (u,v) in enumerate(G.edges()):
    G[u][v]['enum'] = i
# find all triangles
# NOTE: this would be a silly thing to do on a big graph
triangles = []
n = 5
for i in range(n-2):
    for j in range(i+1, n-1):
        for k in range(j+1, n):
            if j in G[i] and k in G[j] and i in G[k]:
                triangles.append(((i, j), (j, k), (k, i)))
plt.figure(figsize=(10,4))
# draw the triangles
for i, T in enumerate(triangles):
    plt.subplot(1,3,i+1)
    labels = {(u, v): d['enum'] for u, v, d in G.edges(data=True)}
    nx.draw(G, pos, node_size=100, node_color='black', edge_color='gray')
    nx.draw_networkx_edges(G, pos, edgelist=T, width=3)
    nx.draw_networkx_edge_labels(G, pos, edge_labels=labels, font_size=12)
plt.tight_layout()
# build the N matrix
n_edges = len(G.edges())
n_triangles = len(triangles)
N_tri = np.zeros((n_triangles, n_edges))
for i, T in enumerate(triangles):
    for u,v in T:
        j = G[u][v]['enum']
        N_tri[i, j] = 1
print('N_tri = ')
print(N_tri)
# -
# ### Example: a family of simple paths
#
# Consider, instead, the family of simple paths that connect the top vertex of the previous graph to the bottom-right vertex. The following code generates and prints the $\mathcal{N}$ matrix for this family, along with a corresponding series of pictures.
# +
# find all simple paths
# NOTE: again, not a good idea for large graphs
paths = list(nx.all_simple_paths(G, 1, 3))
plt.figure(figsize=(10,8))
# draw the paths
for i, path in enumerate(paths):
    plt.subplot(2,3,i+1)
    edges = [(path[k], path[k+1]) for k in range(len(path)-1)]
    labels = {(u, v): d['enum'] for u, v, d in G.edges(data=True)}
    nx.draw(G, pos, node_size=100, node_color='black', edge_color='gray')
    nx.draw_networkx_edges(G, pos, edgelist=edges, width=3)
    nx.draw_networkx_edge_labels(G, pos, edge_labels=labels, font_size=12)
plt.tight_layout()
# build the N matrix
n_edges = len(G.edges())
n_paths = len(paths)
N_path = np.zeros((n_paths, n_edges))
for i, path in enumerate(paths):
    for u,v in [(path[k], path[k+1]) for k in range(len(path)-1)]:
        j = G[u][v]['enum']
        N_path[i, j] = 1
print('N_path = ')
print(N_path)
# -
# ## Admissible densities: "Everybody pays a dollar"
#
# The next step in the definition of modulus is to develop the concept of admissible density. Suppose we are given a graph $G$ and the usage matrix $\mathcal{N}$ for a family of objects $\Gamma$. In later chapters, we shall see more exotic families of objects whose usage matrices take values other than $0$ and $1$ so, in order to keep the discussion general, we assume that $\mathcal{N}$ is a $\Gamma\times E$ real matrix with non-negative entries.
#
# A *density* on $G$ is a non-negative vector $\rho\in\mathbb{R}^E_{\ge 0}$ that gives some non-negative value $\rho(e)$ to every edge $e\in E$. It can be helpful to think of $\rho(e)$ as a cost per unit usage incurred by using edge $e$. This induces on each $\gamma\in\Gamma$ a *total usage cost*, often referred to as the *$\rho$-length* of $\gamma$ for historical reasons, defined as
#
# $$
# \ell_\rho(\gamma) := \sum_{e\in E}\mathcal{N}(\gamma,e)\rho(e).
# $$
#
# In words, the $\rho$-length of $\gamma$ is the sum over all edges of the extent to which $\gamma$ uses $e$ multiplied by the cost per unit usage assessed by $\rho$. In the case that all entries of $\mathcal{N}$ are either 0 or 1, the $\rho$-length can be rewritten as
#
# $$
# \ell_\rho(\gamma) = \sum_{e\in\gamma}\rho(e).
# $$
#
# Now we are ready to define admissibility. A density $\rho\in\mathbb{R}^E_{\ge 0}$ is said to be *admissible for $\Gamma$* if $\ell_{\rho}(\gamma)\ge 1$ for every $\gamma\in\Gamma$. That is: everybody pays a dollar.
#
# Since $\mathcal{N}$ is a $\Gamma\times E$ matrix and $\rho$ is an $E$-vector, the product $\mathcal{N}\rho$ is a $\Gamma$-vector whose $\gamma$ entry is $\ell_\rho(\gamma)$. Thus, it is often convenient to use the shorthand notation $\mathcal{N}\rho\ge\mathbf{1}$ to indicate that $\rho$ is admissible. In this context, the inequality is interpreted element-wise.
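The admissibility check is a single matrix-vector product; a small sketch with made-up numbers:

```python
import numpy as np

# Usage matrix for two hypothetical objects on three edges.
N = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

def is_admissible(N, rho):
    """True iff every object has rho-length at least 1 ("everybody pays a dollar")."""
    return bool(np.all(N @ rho >= 1))

print(is_admissible(N, np.array([0.5, 0.5, 0.5])))  # True: both rho-lengths equal 1
print(is_admissible(N, np.array([0.5, 0.2, 0.5])))  # False: both objects pay only 0.7
```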
# ### Example: densities and triangles
#
# Returning to the example of the family of triangles, the following code generates a set of densities and checks for admissibility. Each row in the figure produced represents a density. In each row, the triangles are colored either green or red depending on whether or not the $\rho$-length of that triangle is at least 1. A density is admissible for the triangles if all triangles in that row are colored green.
# +
# seed the random number generator
np.random.seed(88198283)
n_rho = 3
for i in range(n_rho):
    plt.figure(figsize=(10, 4))
    # generate a random density
    rho = np.random.rand(n_edges)
    labels = {(u, v): round(rho[d['enum']], 3) for u, v, d in G.edges(data=True)}
    # compute the rho-lengths
    l = N_tri.dot(rho)
    # plot the results
    for j, T in enumerate(triangles):
        plt.subplot(1,3,j+1)
        if l[j] >= 1:
            color = 'green'
        else:
            color = 'red'
        nx.draw(G, pos, node_size=100, node_color='black', edge_color='gray')
        nx.draw_networkx_edges(G, pos, edgelist=T, width=3, edge_color=color)
        nx.draw_networkx_edge_labels(G, pos, edge_labels=labels, font_size=12)
        plt.title('rho-length = {:.3f}'.format(l[j]))
    title = 'Example {}'.format(i+1)
    if np.all(l>=1):
        title += ' (admissible)'
    else:
        title += ' (not admissible)'
    plt.suptitle(title, fontsize='x-large')
    plt.tight_layout()
    plt.subplots_adjust(top=0.75)
# -
# ### Example: densities and paths
#
# The code below does the same thing, but for the path family we considered earlier.
# +
# seed the random number generator
np.random.seed(88198283)
n_rho = 3
for i in range(n_rho):
    plt.figure(figsize=(10, 8))
    # generate a random density
    rho = np.random.rand(n_edges)
    labels = {(u, v): round(rho[d['enum']], 3) for u, v, d in G.edges(data=True)}
    # compute the rho-lengths
    l = N_path.dot(rho)
    # plot the results
    for j, path in enumerate(paths):
        plt.subplot(2,3,j+1)
        edges = [(path[k], path[k+1]) for k in range(len(path)-1)]
        if l[j] >= 1:
            color = 'green'
        else:
            color = 'red'
        nx.draw(G, pos, node_size=100, node_color='black', edge_color='gray')
        nx.draw_networkx_edges(G, pos, edgelist=edges, width=3, edge_color=color)
        nx.draw_networkx_edge_labels(G, pos, edge_labels=labels, font_size=12)
        plt.title('rho-length = {:.3f}'.format(l[j]))
    title = 'Example {}'.format(i+1)
    if np.all(l>=1):
        title += ' (admissible)'
    else:
        title += ' (not admissible)'
    plt.suptitle(title, fontsize='x-large')
    plt.tight_layout()
    plt.subplots_adjust(top=0.9)
# -
# ## Modulus of a family
#
# Once the graph $G$, the family of objects $\Gamma$ and the usage matrix $\mathcal{N}$ are defined, the definition of modulus is easy. (Interpretations of modulus are more complex and are discussed in later chapters of this book.) In optimization notation, the *modulus of $\Gamma$* is defined to be the value of the problem
#
# $$
# \begin{split}
# \text{minimize}\quad&\mathcal{E}_p(\rho)\\
# \text{subject to}\quad&\rho\in\mathbb{R}^E_{\ge 0}\\
# &\mathcal{N}\rho\ge\mathbf{1}.
# \end{split}
# $$
#
# The symbol $\mathcal{E}_p$ is called the *energy functional* or simply the *energy* for modulus, and is parameterized by a number $p\in[1,\infty]$. For this reason, modulus is often referred to as *$p$-modulus* or, when a specific $p$ has been chosen, as $1$-modulus, $2$-modulus, $\infty$-modulus, etc. In the context of modulus, a non-negative vector $\rho\in\mathbb{R}^E_{\ge 0}$ is called a *density* and the *$p$-energy* of a density is defined to be
#
# $$
# \mathcal{E}_p(\rho) :=
# \begin{cases}
# \sum\limits_{e\in E}\rho(e)^p &\text{ if }1\le p<\infty,\\
# \max\limits_{e\in E}\rho(e) &\text{ if }p=\infty.
# \end{cases}
# $$
#
# That is, the $p$-energy of a density is the sum of the $p$th powers of its values over all edges of the graph if $p<\infty$, and the maximum value of $\rho$ over all edges if $p=\infty$.
#
# The value of the optimization problem above is referred to as the *$p$-modulus of the family $\Gamma$* and is denoted $\text{Mod}_p(\Gamma)$. If $\rho=\rho^*$ is a minimizer, then $\rho^*$ is called an *optimal density* or *extremal density*.
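The two branches of the energy definition translate directly into code; a short sketch:

```python
import numpy as np

def p_energy(rho, p):
    """p-energy of a density: sum of p-th powers for finite p, max entry for p = inf."""
    rho = np.asarray(rho, dtype=float)
    if np.isinf(p):
        return float(rho.max())
    return float(np.sum(rho ** p))

rho = [0.5, 0.5, 1.0]
print(p_energy(rho, 1))       # 2.0
print(p_energy(rho, 2))       # 1.5
print(p_energy(rho, np.inf))  # 1.0
```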
# ### Example: triangle modulus
#
# Modulus, as defined above, is an example of a convex optimization problem. Therefore, at least for small graphs, modulus can be computed using any software tool that can solve convex optimization problems. An example of a function for computing modulus in this way is `matrix_modulus` found in `modulus_tools/basic_algorithm.py`. Given the $\mathcal{N}$ matrix along with the exponent $p$, this function finds a numerical approximation to the modulus along with an optimal $\rho^*$. The code below computes the modulus of triangles on our example graph for a few different values of $p$. The value $\text{Mod}_p(\Gamma)$ is printed above each plot, while the values of the optimal density $\rho^*$ are shown on the edges.
# +
from modulus_tools.basic_algorithm import matrix_modulus
plt.rcParams['font.size'] = 12
plt.figure(figsize=(10,8))
# loop over a few p values
for i, p in enumerate((1, 1.5, 2, 2.5, 3, np.inf)):
    # compute the modulus
    mod, rho, _ = matrix_modulus(N_tri, p)
    # draw rho values on graph
    plt.subplot(2,3,i+1)
    labels = {(u, v): round(rho[d['enum']], 3) for u, v, d in G.edges(data=True)}
    nx.draw(G, pos, node_size=100, node_color='black', edge_color='gray')
    nx.draw_networkx_edge_labels(G, pos, edge_labels=labels, font_size=12)
    plt.title('p = {}, Mod = {:.3f}'.format(p, mod))
plt.tight_layout()
# -
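`matrix_modulus` itself is not reproduced here, but for $p=2$ the problem is a small quadratic program that any general-purpose solver can handle. A hedged sketch with `scipy` on a made-up two-object family (on this toy example the optimum works out by hand to $2/3$, attained at $\rho^*=(1/3, 2/3, 1/3)$):

```python
import numpy as np
from scipy.optimize import minimize

# Toy usage matrix: two objects on three edges.
N = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# 2-modulus: minimize sum(rho^2) subject to N rho >= 1 and rho >= 0.
res = minimize(
    lambda rho: np.sum(rho ** 2),
    x0=np.ones(N.shape[1]),
    bounds=[(0, None)] * N.shape[1],
    constraints=[{'type': 'ineq', 'fun': lambda rho: N @ rho - 1}],
    method='SLSQP',
)
print(res.fun)  # approximately 2/3
```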
# ### Example: path modulus
#
# The code above requires only a small modification if we wish to compute the modulus of the family of paths from top to lower right.
# +
plt.figure(figsize=(10,8))
# loop over a few p values
for i, p in enumerate((1, 1.5, 2, 2.5, 3, np.inf)):
    # compute the modulus
    mod, rho, _ = matrix_modulus(N_path, p)
    # draw rho values on graph
    plt.subplot(2,3,i+1)
    labels = {(u, v): round(rho[d['enum']], 3) for u, v, d in G.edges(data=True)}
    nx.draw(G, pos, node_size=100, node_color='black', edge_color='gray')
    nx.draw_networkx_edge_labels(G, pos, edge_labels=labels, font_size=12)
    plt.title('p = {}, Mod = {:.3f}'.format(p, mod))
plt.tight_layout()
# -
# ## Where to from here?
#
# Modulus is better thought of as a general framework for understanding families of objects than as a single rigid theory. The ideas presented above can be generalized in many different and interesting ways, only some of which are covered in the chapters that follow. Before moving on, however, we will introduce two useful generalizations. First, if $G$ is a weighted graph with positive weights $\sigma\in\mathbb{R}^E_{>0}$, it is common to define a *weighted $p$-modulus*, $\text{Mod}_{p,\sigma}(\Gamma)$, wherein the energy $\mathcal{E}_p$ is replaced by
#
# $$
# \mathcal{E}_{p,\sigma}(\rho) :=
# \begin{cases}
# \sum\limits_{e\in E}\sigma(e)\rho(e)^p &\text{ if }1\le p<\infty,\\
# \max\limits_{e\in E}\sigma(e)\rho(e) &\text{ if }p=\infty.
# \end{cases}
# $$
#
# Second, as described before, it is also useful to consider more general usage matrices $\mathcal{N}$ that may take nonnegative values other than $0$ and $1$. This allows the consideration of objects that may have multiplicity on certain edges, or may spread fractional usage among several edges. In any case, the modulus problem always looks the same:
#
# $$
# \begin{split}
# \text{minimize}\quad&\mathcal{E}_{p,\sigma}(\rho)\\
# \text{subject to}\quad&\rho\in\mathbb{R}^E_{\ge 0}\\
# &\mathcal{N}\rho\ge\mathbf{1}.
# \end{split}
# $$
#
# The energy is determined by the parameter $p$ and weights $\sigma$ while the constraints are determined by the usage matrix $\mathcal{N}$. The `matrix_modulus` function can compute this more general concept of modulus.
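The weighted energy is a one-line change from the unweighted one; a self-contained sketch:

```python
import numpy as np

def weighted_p_energy(rho, sigma, p):
    """Weighted p-energy: sum of sigma*rho^p for finite p, max of sigma*rho for p = inf."""
    rho, sigma = np.asarray(rho, float), np.asarray(sigma, float)
    if np.isinf(p):
        return float(np.max(sigma * rho))
    return float(np.sum(sigma * rho ** p))

rho = [0.5, 1.0]
sigma = [2.0, 1.0]
print(weighted_p_energy(rho, sigma, 2))       # 2*0.25 + 1*1 = 1.5
print(weighted_p_energy(rho, sigma, np.inf))  # max(1.0, 1.0) = 1.0
```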
#
# The remainder of this book delves more deeply into modulus, its various interpretations, and algorithms for computing it.
#
# Other resources for understanding modulus and its applications can be found in the following papers.
# - **General theory:** {cite}`albin2015modulusgraphsas,albin2017modulusfamilieswalks`
# - **Interpretations of modulus:** {cite}`albin2016minimalsubfamiliesprobabilistic,albin2019blockingdualityp,albin2018fairestedgeusage`
# - **Applications of modulus:** {cite}`albin2018modulusmetricsnetworks,shakeri2016generalizednetworkmeasures,shakeri2017networkclusteringcommunity,shakeri2018generalizationeffectiveconductance`
| Intro_to_Modulus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
link = "https://www.zoopla.co.uk/to-rent/property/edinburgh-city-centre/?beds_min=2&price_frequency=per_month&price_max=1000&q=Edinburgh%20City%20Centre%2C%20Edinburgh&results_sort=newest_listings&search_source=home"
page = requests.get(link)
from bs4 import BeautifulSoup
soup = BeautifulSoup(page.content, 'html.parser')
# +
results_id = "l-searchResults"
results = soup.find_all("li", {"class": "clearfix"})
base_link = "https://www.zoopla.co.uk/to-rent/details/"
# results_link = [base_link + result["data-listing-id"] for result in results]
results_links = []
for result in results:
    try:
        results_links.append(base_link + result["data-listing-id"])
    except KeyError:  # not every <li> carries a listing id
        pass
results_links
# +
individual_link = results_links[0]
individual_page = requests.get(individual_link)
individual_soup = BeautifulSoup(individual_page.content, "html.parser")
# -
individual_soup.find("h3", string="Property features").next_sibling.next_sibling
import re
re.sub(r"\s", "", individual_soup.find("div", {"class":"listing-details-price"}).getText()).split("pcm")[0].split("£")[1]
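The chained `split` calls above are brittle if the markup changes. A hedged alternative that pulls the monthly price with a single regular expression (the sample string mimics, but is not, real Zoopla markup):

```python
import re

def extract_pcm_price(text):
    """Return the monthly price digits from a string like '£850 pcm', or None if absent."""
    match = re.search(r"£([\d,]+)\s*pcm", text)
    return match.group(1).replace(",", "") if match else None

print(extract_pcm_price("  £1,050 pcm  (£242 pw) "))  # '1050'
```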
individual_soup.find("h2", {"class": "listing-details-h1"}).getText()
for link in results_links:
    individual_page = requests.get(link)
    individual_soup = BeautifulSoup(individual_page.content, "html.parser")
    print("ADDRESS DETAILS")
    print(link)
    print(individual_soup.find("h2", {"itemprop": "streetAddress"}).getText())
    print(individual_soup.find("meta", {"itemprop": "addressLocality"})["content"])
    print(individual_soup.find("meta", {"itemprop": "latitude"})["content"])
    print(individual_soup.find("meta", {"itemprop": "longitude"})["content"])
    print("~~~~~~~~~~~~~~~~~~~~~~")
re.sub(r"\s", " ", individual_soup.find("div", {"itemprop": "description"}).getText())
# +
from dateutil.parser import parse
import re
for link in results_links:
    individual_page = requests.get(link)
    individual_soup = BeautifulSoup(individual_page.content, "html.parser")
    info = individual_soup.find("h3", string="Property info").next_sibling.next_sibling.find_all("li")  # .getText()
    # for element in info:
    #     element = element.getText()
    #     if "Available from" in element:
    #         date_available = re.sub("Available from ", "", element)
    #         print(parse(date_available))
    print(link)
    print(info)
# +
monthly_costs = ["Rent", "Insurance", "Energy", "Water", "Council Tax"]
for cost in monthly_costs:
    datum = individual_soup.find("button", {"data-rc-name": cost}).find("span", {"class": "rc-option-btn-price"}).getText()
    print(cost + ": " + datum)
# -
for link in results_links:
    print(link)
    individual_page = requests.get(link)
    individual_soup = BeautifulSoup(individual_page.content, "html.parser")
    try:
        print(individual_soup.find("h3", string="Property features").next_sibling.next_sibling)
    except AttributeError:  # some listings have no "Property features" section
        pass
# +
from propertyscraper.address import Address
addr = Address.from_zoopla(individual_soup)
addr.to_json()
# +
from propertyscraper.key_info import KeyInfo
info = KeyInfo.from_zoopla(individual_soup)
info.to_json()
# +
from propertyscraper.accomodation import rentedAccomodation
id = "46425422"
accom = rentedAccomodation.from_zoopla(id)
accom.to_json()
# -
# %load_ext autoreload
# %autoreload 2
from propertyscraper.search import RentedSearch
link ="https://www.zoopla.co.uk/to-rent/property/edinburgh-city-centre/?beds_min=2&price_frequency=per_month&price_max=1000&q=Edinburgh%20City%20Centre%2C%20Edinburgh&results_sort=newest_listings&search_source=home"
search = RentedSearch.from_zoopla(link)
[item.to_json() for item in search.data]
# +
from propertyscraper.search import RentedSearch
link = "http://www.rightmove.co.uk/property-to-rent/find.html?minBedrooms=2&sortType=6&viewType=LIST&channel=RENT&index=0&maxPrice=1000&radius=0.0&numberOfPropertiesPerPage=499&locationIdentifier=USERDEFINEDAREA%5E%7B%22polylines%22%3A%22kcmtIfpjRhZ%7ELzXnmBxFlqCmb%40hw%40qgAs%5Euu%40qaA_N_uB%7ESmq%40d%5Ds%7B%40n%60A%3F%22%7D"
search = RentedSearch.from_rightmove(link)
[item.to_json() for item in search.data]
# +
from propertyscraper.search import RentedSearch
link = "https://www.gumtree.com/search?search_category=property-to-rent&search_location=edinburgh&q=&min_price=&max_price=400&min_property_number_beds=2&max_property_number_beds="
search = RentedSearch.from_gumtree(link)
[item.to_json() for item in search.data]
| Zoopla Scraper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="/assets/tutorial05_code.ipynb" class="link-button">Download</a>
# <a href="https://colab.research.google.com/github/technion046195/technion046195/blob/master/content/tutorial05/code.ipynb" target="_blank">
# <img src="../assets/colab-badge.svg" style="display:inline"/>
# </a>
#
# <center><h1>
# Tutorial 5 - Non-Parametric Distribution Estimation
# </h1></center>
# ## Setup
# +
## Importing packages
import os # A build in package for interacting with the OS. For example to create a folder.
import numpy as np # Numerical package (mainly multi-dimensional arrays and linear algebra)
import pandas as pd # A package for working with data frames
import matplotlib.pyplot as plt # A plotting package
import imageio # A package to read and write image (is used here to save gif images)
import tabulate # A package for pretty printing tables
from graphviz import Digraph # A package for plotting graphs (of nodes and edges)
## Setup matplotlib to output figures into the notebook
## - To make the figures interactive (zoomable, tooltip, etc.) use "%matplotlib notebook" instead
# %matplotlib inline
## Setting some nice matplotlib defaults
plt.rcParams['figure.figsize'] = (4.5, 4.5) # Set default plot's sizes
plt.rcParams['figure.dpi'] = 120 # Set default plot's dpi (increase fonts' size)
plt.rcParams['axes.grid'] = True # Show grid by default in figures
## Auxiliary function for printing equations, pandas tables and images in cells output
from IPython.core.display import display, HTML, Latex, Markdown
## Create output folder
if not os.path.isdir('./output'):
    os.mkdir('./output')
# -
# ## Ex 5.3
# +
dataset = pd.DataFrame([
[1, 0],
[7, 0],
[9, 0],
[12, 0],
[4, 1],
[4, 1],
[7, 1],
], columns=['x', 'y'])
display(HTML(dataset.T.to_html()))
# -
# ### Section 2
# +
fig, ax = plt.subplots(figsize=(5, 4))
ax.set_title(r'Histogram for $\hat{p}_{x|y}(x|0)$')
ax.hist(dataset.query('y==0')['x'].values, bins=[0,5,10,15], density=True)
ax.set_xlabel('x')
ax.set_ylabel('PDF')
fig.savefig('./output/ex_5_3_1_hist_y_0.png')
fig, ax = plt.subplots(figsize=(5, 4))
ax.set_title(r'Histogram for $\hat{p}_{x|y}(x|1)$')
ax.hist(dataset.query('y==1')['x'].values, bins=[0,5,10,15], density=True)
ax.set_xlabel('x')
ax.set_ylabel('PDF')
fig.savefig('./output/ex_5_3_1_hist_y_1.png')
# -
# ### Section 3
# +
fig, ax = plt.subplots(figsize=(5, 4))
ax.set_title(r'Histogram for $\hat{p}_{x|y}(x|0)$')
ax.hist(dataset.query('y==0')['x'].values, bins=np.arange(16), density=True)
ax.set_xlabel('x')
ax.set_ylabel('PDF')
fig.savefig('./output/ex_5_3_3_hist_y_0.png')
fig, ax = plt.subplots(figsize=(5, 4))
ax.set_title(r'Histogram for $\hat{p}_{x|y}(x|1)$')
ax.hist(dataset.query('y==1')['x'].values, bins=np.arange(16), density=True)
ax.set_xlabel('x')
ax.set_ylabel('PDF')
fig.savefig('./output/ex_5_3_3_hist_y_1.png')
# +
def gen_kde_square(x, x_ref, h):
    pdf = np.zeros(len(x))
    for center in x_ref:
        pdf += (np.abs(x - center) <= (h / 2)).astype(float) / h / x_ref.shape[0]
    return pdf
h = 5
x_grid = np.arange(0, 15.1, 0.1)
for i, x in enumerate(dataset['x']):
    fig, ax = plt.subplots(figsize=(3, 2))
    ax.set_title(r'$\phi_h(x-x^{(' + f'{i + 1}' + r')})$')
    ax.plot(x_grid, gen_kde_square(x_grid, np.array([x]), h), linewidth=4)
    ax.set_xlabel('x')
    ax.set_ylabel('PDF')
    fig.savefig(f'./output/ex_6_3_4_kernel_{i + 1}.png')
fig, ax = plt.subplots(figsize=(5, 4))
ax.set_title(r'KDE for $\hat{p}_{x|y}(x|0)$')
ax.plot(x_grid, gen_kde_square(x_grid, dataset.query('y==0')['x'].values, h), linewidth=4)
ax.set_xlabel('x')
ax.set_ylabel('PDF')
fig.savefig('./output/ex_6_3_4_kde_y_0.png')
fig, ax = plt.subplots(figsize=(5, 4))
ax.set_title(r'KDE for $\hat{p}_{x|y}(x|1)$')
ax.plot(x_grid, gen_kde_square(x_grid, dataset.query('y==1')['x'].values, h), linewidth=4)
ax.set_xlabel('x')
ax.set_ylabel('PDF')
fig.savefig('./output/ex_6_3_4_kde_y_1.png')
# -
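The rectangular kernel used in `gen_kde_square` is only one choice; the same estimator with a Gaussian kernel is a small variation (the bandwidth below is illustrative):

```python
import numpy as np

def gen_kde_gaussian(x, x_ref, h):
    """KDE with a Gaussian kernel of bandwidth h, evaluated on the grid x."""
    pdf = np.zeros(len(x))
    for center in x_ref:
        pdf += np.exp(-0.5 * ((x - center) / h) ** 2) / (h * np.sqrt(2 * np.pi)) / len(x_ref)
    return pdf

x_grid = np.arange(0, 15.1, 0.1)
pdf = gen_kde_gaussian(x_grid, np.array([4.0, 4.0, 7.0]), h=1.0)
print(pdf.max())  # the highest peak sits near x = 4, where two sample points coincide
```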
| content/tutorial05/code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Defining a Milky Way potential model
# +
# Third-party dependencies
from astropy.io import ascii
import astropy.units as u
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq
# Gala
from gala.mpl_style import mpl_style
plt.style.use(mpl_style)
import gala.dynamics as gd
import gala.integrate as gi
import gala.potential as gp
from gala.units import galactic
# %matplotlib inline
# -
# ## Introduction
#
# `gala` provides a simple and easy way to access and integrate orbits in an
# approximate mass model for the Milky Way. The parameters of the mass model are
# determined by least-squares fitting the enclosed mass profile of a pre-defined
# potential form to recent measurements compiled from the literature. These
# measurements are provided with the documentation of `gala` and are shown below.
# The radius units are kpc, and mass units are solar masses:
tbl = ascii.read('data/MW_mass_enclosed.csv')
tbl
# Let's now plot the above data and uncertainties:
# +
fig, ax = plt.subplots(1, 1, figsize=(4,4))
ax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']),
marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa',
capthick=0, linestyle='none', elinewidth=1.)
ax.set_xlim(1E-3, 10**2.6)
ax.set_ylim(7E6, 10**12.25)
ax.set_xlabel('$r$ [kpc]')
ax.set_ylabel('$M(<r)$ [M$_\odot$]')
ax.set_xscale('log')
ax.set_yscale('log')
fig.tight_layout()
# -
# We now need to assume some form for the potential. For simplicity and within reason, we'll use a four component potential model consisting of a Hernquist ([1990](https://ui.adsabs.harvard.edu/#abs/1990ApJ...356..359H/abstract)) bulge and nucleus, a Miyamoto-Nagai ([1975](https://ui.adsabs.harvard.edu/#abs/1975PASJ...27..533M/abstract)) disk, and an NFW ([1997](https://ui.adsabs.harvard.edu/#abs/1997ApJ...490..493N/abstract)) halo. We'll fix the parameters of the disk and bulge to be consistent with previous work ([Bovy 2015](https://ui.adsabs.harvard.edu/#abs/2015ApJS..216...29B/abstract) - please cite that paper if you use this potential model) and vary the scale mass and scale radius of the nucleus and halo, respectively. We'll fit for these parameters in log-space, so we'll first define a function that returns a `gala.potential.CCompositePotential` object given these four parameters:
def get_potential(log_M_h, log_r_s, log_M_n, log_a):
    mw_potential = gp.CCompositePotential()
    mw_potential['bulge'] = gp.HernquistPotential(m=5E9, c=1., units=galactic)
    mw_potential['disk'] = gp.MiyamotoNagaiPotential(m=6.8E10*u.Msun, a=3*u.kpc, b=280*u.pc,
                                                     units=galactic)
    mw_potential['nucl'] = gp.HernquistPotential(m=np.exp(log_M_n), c=np.exp(log_a)*u.pc,
                                                 units=galactic)
    mw_potential['halo'] = gp.NFWPotential(m=np.exp(log_M_h), r_s=np.exp(log_r_s), units=galactic)
    return mw_potential
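For intuition about the components being fit: the Hernquist profile has a closed-form enclosed mass, $M(<r) = M r^2/(r+a)^2$, so the bulge contribution can be checked without `gala`. A small sketch (the parameter values repeat the bulge settings above):

```python
def hernquist_menc(r, m, a):
    """Enclosed mass of a Hernquist (1990) profile: M(<r) = m * r**2 / (r + a)**2."""
    return m * r ** 2 / (r + a) ** 2

# Bulge parameters from the cell above: m = 5e9 Msun, scale radius a = 1 kpc.
print(hernquist_menc(1.0, 5e9, 1.0))    # a quarter of the total mass lies inside r = a
print(hernquist_menc(100.0, 5e9, 1.0))  # approaches the total mass at large r
```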
# We now need to specify an initial guess for the parameters - let's do that (by making them up), and then plot the initial guess potential over the data:
# Initial guess for the parameters- units are:
# [Msun, kpc, Msun, pc]
x0 = [np.log(6E11), np.log(20.), np.log(2E9), np.log(100.)]
init_potential = get_potential(*x0)
# +
xyz = np.zeros((3, 256))
xyz[0] = np.logspace(-3, 3, 256)
fig, ax = plt.subplots(1, 1, figsize=(4,4))
ax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']),
marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa',
capthick=0, linestyle='none', elinewidth=1.)
fit_menc = init_potential.mass_enclosed(xyz*u.kpc)
ax.loglog(xyz[0], fit_menc.value, marker='', color="#3182bd",
linewidth=2, alpha=0.7)
ax.set_xlim(1E-3, 10**2.6)
ax.set_ylim(7E6, 10**12.25)
ax.set_xlabel('$r$ [kpc]')
ax.set_ylabel('$M(<r)$ [M$_\odot$]')
ax.set_xscale('log')
ax.set_yscale('log')
fig.tight_layout()
# -
# It looks pretty good already! But let's now use least-squares fitting to optimize our nucleus and halo parameters. We first need to define an error function:
def err_func(p, r, Menc, Menc_err):
    pot = get_potential(*p)
    xyz = np.zeros((3,len(r)))
    xyz[0] = r
    model_menc = pot.mass_enclosed(xyz).to(u.Msun).value
    return (model_menc - Menc) / Menc_err
# Because the uncertainties are all approximately but not exactly symmetric, we'll take the maximum of the upper and lower uncertainty values and assume that the uncertainties in the mass measurements are Gaussian (a bad but simple assumption):
err = np.max([tbl['Menc_err_pos'], tbl['Menc_err_neg']], axis=0)
p_opt, ier = leastsq(err_func, x0=x0, args=(tbl['r'], tbl['Menc'], err))
assert ier in range(1,4+1), "least-squares fit failed!"
fit_potential = get_potential(*p_opt)
# Now we have a best-fit potential! Let's plot the enclosed mass of the fit potential over the data:
# +
xyz = np.zeros((3, 256))
xyz[0] = np.logspace(-3, 3, 256)
fig, ax = plt.subplots(1, 1, figsize=(4,4))
ax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']),
marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa',
capthick=0, linestyle='none', elinewidth=1.)
fit_menc = fit_potential.mass_enclosed(xyz*u.kpc)
ax.loglog(xyz[0], fit_menc.value, marker='', color="#3182bd",
linewidth=2, alpha=0.7)
ax.set_xlim(1E-3, 10**2.6)
ax.set_ylim(7E6, 10**12.25)
ax.set_xlabel('$r$ [kpc]')
ax.set_ylabel('$M(<r)$ [M$_\odot$]')
ax.set_xscale('log')
ax.set_yscale('log')
fig.tight_layout()
# -
# This potential is already implemented in `gala` in `gala.potential.special`, and we can import it with:
from gala.potential import MilkyWayPotential
potential = MilkyWayPotential()
potential
| docs/potential/define-milky-way-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In order to successfully complete this assignment you must do the required reading, watch the provided videos, and complete all instructions. The embedded Google form must be entirely filled out and submitted on or before **11:59pm on Sunday March 15**. Students must come to class the next day prepared to discuss the material covered in this assignment.
# # Pre-Class Assignment: Ordinary Differential Equations
#
# In this pre-class assignment we will review Ordinary Differential Equations (ODEs), an extremely common (and relatively simple) class of scientific models used in a wide range of problems.
# + [markdown] slideshow={"slide_type": "-"}
# ## Goals for today's pre-class assignment
#
#
# 1. [Simulation](#Simulation)
# 2. [Ordinary Differential Equations](#ODE)
# 3. [Wave Equation](#Wave_Equation)
# 4. [Find your own example](#Find_your_own_example)
# 5. [Assignment wrap-up](#Assignment_wrap-up)
#
# -
# ----
# <a name="Simulation"></a>
# # 1. Simulation
# In solving many complex scientific problems we want to simulate the real world inside of a computer. Simulations often involve starting with a known state and then predicting the future based on a model (ex. weather forecasting).
#
# Consider the following simulation that uses an Ordinary Differential Equation as its model (note: this model is often called the wave equation).
#
from IPython.display import YouTubeVideo
YouTubeVideo("ib1vmRDs8Vw",width=640,height=360)
# ✅ **<font color=red>QUESTION:</font>** Describe in words the starting state in the above model.
# Put your answer to the above question here
# ----
# <a name="ODE"></a>
# # 2. Ordinary Differential Equations
# Here is a quick video that introduces the concepts of Differential Equations.
from IPython.display import YouTubeVideo
YouTubeVideo("6o7b9yyhH7k",width=640,height=360)
from IPython.display import YouTubeVideo
YouTubeVideo("8QeCQn7uxnE",width=640,height=360)
# ✅ **<font color=red>QUESTION:</font>** What is the difference between a Differential Equation and an Algebraic equation?
# Put your answer to the above question here.
#
# In the above video all of the examples are Ordinary Differential Equations. But what makes them "Ordinary" vs "extraordinary"?
#
# > In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and its derivatives. The term ordinary is used in contrast with the term partial differential equation (PDE) which may be with respect to more than one independent variable. - From https://en.wikipedia.org/wiki/Ordinary_differential_equation
#
#
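# As a small, self-contained illustration (not part of the required videos), here is how a simple ODE, exponential decay $dy/dt = -ky$, can be solved numerically with `scipy`'s `odeint` and checked against the known analytic solution $y(t) = y_0 e^{-kt}$. The constants are made-up values chosen only for this example:

```python
import numpy as np
from scipy.integrate import odeint

def decay(y, t, k):
    """Right-hand side of the ODE dy/dt = -k*y."""
    return -k * y

k = 0.5           # decay constant (illustrative value)
y0 = 10.0         # initial condition
t = np.linspace(0.0, 10.0, 101)

# odeint integrates the ODE forward in time starting from y0
numeric = odeint(decay, y0, t, args=(k,)).ravel()
analytic = y0 * np.exp(-k * t)

# The numeric solution should track the analytic one very closely
print(np.max(np.abs(numeric - analytic)))
```

# Note that the solver returns the solution at every requested time point in `t`.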
# ✅ **<font color=red>QUESTION:</font>** As a reminder from calculus. What is the dependent variable and independent variable in the following polynomial equation?
#
# $$ f(x) = C_1 + C_2x^2 + C_3x^3$$
# Put your answer to the above question here.
# ----
#
# <a name="Wave_Equation"></a>
#
# # 3. Wave Equation Review
#
# We have used the 1D and 2D wave equations as examples a couple of times in this course.
#
# ✅ **<font color=red>DO THIS:</font>** Review the following notebooks and see if you can identify and understand the math that makes these differential equations:
#
# - [0108-Models-in-class-assignment](0108-Models-in-class-assignment.ipynb)
# - [0122-Code_Optimization_in-class-assignment](0122-Code_Optimization_in-class-assignment.ipynb)
# ---
# <a name="Find_your_own_example"></a>
#
# # 4. Find your own example
#
# There are a number of ODE solvers available. The most common is the ```odeint``` solver that is included in ```scipy```.
#
# ✅ **<font color=red>DO THIS:</font>** See if you can find some example code that solves ODEs. Some keywords to use are "ODE" and "Python". It may also help to include library names such as ```scipy``` and the ```odeint``` function as search terms. Add a link to the model you find and see if you can get it working:
#
# Put a link to your model here.
# +
#Copy and paste the example code here.
# -
# ----
# <a name="Assignment_wrap-up"></a>
# # 5. Assignment wrap-up
#
# Please fill out the form that appears when you run the code below. **You must completely fill this out in order to receive credit for the assignment!**
#
# [Direct Link](https://cmse.msu.edu/cmse802-pc-survey)
# ✅ **<font color=red>Assignment-Specific QUESTION:</font>** What ODE example did you find (Include the link)? Did you get it to work, if not, where did you get stuck?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** Summarize what you did in this assignment.
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** What questions do you have, if any, about any of the topics discussed in this assignment after working through the jupyter notebook?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** How well do you feel this assignment helped you to achieve a better understanding of the above mentioned topic(s)?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** What was the **most** challenging part of this assignment for you?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** What was the **least** challenging part of this assignment for you?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** What kind of additional questions or support, if any, do you feel you need to have a better understanding of the content in this assignment?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** Do you have any further questions or comments about this material, or anything else that's going on in class?
# Put your answer to the above question here
# ✅ **<font color=red>QUESTION:</font>** Approximately how long did this pre-class assignment take?
# Put your answer to the above question here
from IPython.display import HTML
HTML(
"""
<iframe
src="https://docs.google.com/forms/d/e/1FAIpQLSfBiRDC7m_nmscrPZdAc_8YqG6OQA-QXVLbmzg6AQS-4rSz8Q/viewform?embedded=true"
width="100%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
# ---------
# ### Congratulations, we're done!
#
# To get credit for this assignment you must fill out and submit the above Google Form on or before the assignment due date.
# ### Course Resources:
#
# - [Syllabus](https://docs.google.com/document/d/<KEY>)
# - [Preliminary Schedule](https://docs.google.com/spreadsheets/d/e/2PACX-1vRsQcyH1nlbSD4x7zvHWAbAcLrGWRo_RqeFyt2loQPgt3MxirrI5ADVFW9IoeLGSBSu_Uo6e8BE4IQc/pubhtml?gid=2142090757&single=true)
# - [Course D2L Page](https://d2l.msu.edu/d2l/home/912152)
# © Copyright 2020, Michigan State University Board of Trustees
| cmse802-s20/0315--ODE-pre-class-assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Validation
# In this example we will use `timekeep` to validate the shape and quality of data.
# +
# %load_ext watermark
# %watermark -i -v -p timekeep
# -
import numpy as np
import timekeep.decorators as tkd
import timekeep.exceptions as tke
# #### Confirming data shape
# We can use the `is_shape` decorator to confirm that data we are loading has the shape we assume it does.
@tkd.is_shape((10, 100, 1))
def load_data():
return np.random.random((10, 100, 1))
# Calling this function will load our data as normal, because the shapes match.
data = load_data()
data.shape
# But if our data doesn't match our assumptions, `timekeep` will let us know.
@tkd.is_shape((10, 100, 1))
def load_more_data():
return np.random.random((10, 100, 4))
try:
data = load_more_data()
except tke.TimekeepCheckError as e:
print("TimekeepCheckError was raised")
# In many cases, the number of data points we load will change depending on other conditions. `is_shape` accepts **-1** as a placeholder for any number of data points.
@tkd.is_shape((-1, 100, 1))
def load_varying_data(n):
return np.random.random((n, 100, 1))
data = load_varying_data(10)
more_data = load_varying_data(123)
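# The same idea can be sketched without `timekeep`: below is a minimal, homemade version of such a shape check, written as a hypothetical decorator purely for illustration (the names `check_shape` and `load` are made up here, not part of the `timekeep` API):

```python
import functools
import numpy as np

def check_shape(expected):
    """Decorator verifying the returned array's shape; -1 matches any size."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if len(result.shape) != len(expected) or any(
                e != -1 and e != s for e, s in zip(expected, result.shape)
            ):
                raise ValueError(f"expected shape {expected}, got {result.shape}")
            return result
        return wrapper
    return decorator

@check_shape((-1, 100, 1))
def load(n):
    return np.random.random((n, 100, 1))

print(load(7).shape)  # prints (7, 100, 1)
```

# `timekeep` adds richer checks on top of this idea, but the core pattern is the same: validate data at the boundary where it is produced.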
| examples/data_validation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="4xvgavwf4r5H"
# # Introduction to Clustering
#
#
# ## Supervised vs Unsupervised learning
#
# Up until now we have been focusing on supervised learning. In supervised learning our training set consists of labeled data. For example, we have images and each image has an associated label: dog, cat, elephant. And from this labeled data our model is trying to learn how to predict the label from the features.
#
# Unsupervised learning is trying to learn patterns from unlabeled data, and one set of models has to do with segmenting a dataset into clusters or groups of related data.
#
# 
#
# We will cover two main clustering techniques.
#
# Let's explore this with a small dog breed dataset.
#
# First, we will load the dataset:
# + id="QAQv0rnv4r5N"
import pandas as pd
dog_data = pd.read_csv('https://raw.githubusercontent.com/zacharski/machine-learning-notebooks/master/data/dogbreeds.csv')
dog_data = dog_data.set_index('breed')
# + colab={"base_uri": "https://localhost:8080/", "height": 131} id="mPnF_WW44r5N" outputId="2524efbf-ea66-4f4b-c4c1-060062732ebd"
dog_data
# + [markdown] id="-JI_WDmv4r5O"
# Looking at the values in the height and weight columns it looks like we should normalize the data.
#
# <img src="http://animalfair.com/wp-content/uploads/2014/08/chihuahua-and-great-dane.jpg" width="700"/>
#
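# One common approach is min-max normalization, sketched here on made-up numbers rather than the dog data (one possible way to approach the TODO below, not the only one):

```python
import pandas as pd

# Toy heights/weights standing in for the dog data
toy = pd.DataFrame({'height': [6.0, 9.0, 30.0], 'weight': [4.0, 7.0, 150.0]})

# Min-max normalization rescales each column to the range [0, 1]
normalized = (toy - toy.min()) / (toy.max() - toy.min())
print(normalized)
```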
# + id="kBV9waEe4r5P"
## TODO
# + [markdown] id="eVe08rKB4r5P"
# And let's visualize that data:
# + id="9eNnwbZA4r5P" outputId="e32a34a6-8e6d-4002-9778-9416bb004a5e"
# NOTE: the original `bokeh.charts.Scatter` interface was removed from bokeh;
# `bokeh.plotting` produces the same scatter plot in current versions.
from bokeh.plotting import figure, output_file, show
from bokeh.io import output_notebook
output_notebook()
x = figure(title="Plot of Dog Breeds",
           x_axis_label="Normalized Weight", y_axis_label="Normalized Height")
x.circle(dog_data['weight (pounds)'], dog_data['height (inches)'])
output_file("dogbreed.html")
show(x)
# + [markdown] id="wNd0rh6B4r5Q"
# Gazing at the scatter plot, it looks like we could group the data into three clusters. There are the two data points on the bottom left (*Chihuahua* and *Yorkshire Terrier*), the top right group of two (*Bull Mastiff* and *Great Dane*), and the middle group with all the other breeds.
#
#
# ## My book
# 
#
# Years ago I wrote a book on machine learning. The English version is free, and [the Chinese version is available on Amazon](https://www.amazon.com/%E5%86%99%E7%BB%99%E7%A8%8B%E5%BA%8F%E5%91%98%E7%9A%84%E6%95%B0%E6%8D%AE%E6%8C%96%E6%8E%98%E5%AE%9E%E8%B7%B5%E6%8C%87%E5%8D%97%EF%BC%88%E5%BC%82%E6%AD%A5%E5%9B%BE%E4%B9%A6%EF%BC%89-Chinese-Ron-Zacharski-%E6%89%8E%E5%93%88%E5%B0%94%E6%96%AF%E5%9F%BA-ebook/dp/B01ASYWU2I/ref=sr_1_6?dchild=1&keywords=zacharski&qid=1617035941&sr=8-6). The field has changed so much that most of the book is out of date. However [the general explanation of clustering](http://guidetodatamining.com/assets/guideChapters/DataMining-ch8.pdf) in the related chapter is still current, and instead of repeating that here just read the material at the link. The book has the algorithms implemented from scratch, but now these algorithms are available in sklearn:
#
#
# ## k means clustering
#
# Let's divide our dog dataset into 3 clusters:
#
# + id="2skLMP5X4r5Q"
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3).fit(dog_data)
labels = kmeans.labels_
# + [markdown] id="0qTILIqf4r5R"
# The variable `labels` is an array that specifies which group each dog belongs to:
# + id="77rM5cBM4r5R" outputId="593c9062-0aca-48ae-a080-082af8fe9d13"
labels
# + [markdown] id="WSlFYCBr4r5R"
# My results were:
#
# array([0, 0, 0, 1, 2, 0, 0, 1, 0, 0, 2], dtype=int32)
#
# which indicates that the first, second, and third dogs are in group 0, the next one in group 1, and so on. That may be helpful for future computational tasks but is not that helpful if we are trying to visualize the data. Let me munge that a bit into a slightly more useful form:
# + id="8DHfIQVP4r5R" outputId="26906bfa-2a79-4772-8809-ea8033504942"
groups = {0: [], 1: [], 2: []}
i = 0
for index, row in dog_data.iterrows():
groups[labels[i]].append(index)
i += 1
## Now I will print it in a nice way:
for key, value in groups.items():
print ('CLUSTER %i' % key)
for breed in value:
print(" %s" % breed)
print('\n')
# + [markdown] id="n_OMZ5HL4r5S"
# Keep in mind that since the initial centroids are selected somewhat randomly, it is possible that you get a different answer than I do. The answer I got was:
#
# CLUSTER 0
# Border Collie
# Boston Terrier
# <NAME>iel
# German Shepherd
# Golden Retreiver
# Portuguese Water Dog
# Standard Poodle
#
#
# CLUSTER 1
# Bullmastiff
# Great Dane
#
#
# CLUSTER 2
# Chihuahua
# Yorkshire Terrier
#
#
# ## Hierarchical Clustering
#
# For the basics on hierarchical clustering consult [the fine chapter I mentioned](http://guidetodatamining.com/assets/guideChapters/DataMining-ch8.pdf). Here is how to do hierarchical clustering in sklearn.
# + id="jTNxpY1J4r5S" outputId="001f3e27-6184-4b6c-ecec-57558ae2e1e8"
from sklearn.cluster import AgglomerativeClustering
clusterer = AgglomerativeClustering(affinity='euclidean', linkage='ward')
clusterer.fit(dog_data)
# + [markdown] id="F2bA4cgu4r5T"
# We can get the highest level division by viewing the `.labels_`:
#
# + id="FYTJpb6-4r5T" outputId="1e3fed98-9d68-4507-b865-590b1d46a6c4"
clusterer.labels_
# + [markdown] id="i_J85yFr4r5T"
# So here the first dog breed, Border Collie, belongs to cluster 0. Keep in mind that in k-means there is a random element--the selection of the initial centroids--but in hierarchical clustering there is no randomness, so you should get the exact same answer I do. That is the high level division, but the hierarchical clustering algorithm constructs a tree - specifically, a dendrogram. To view that requires some imagination. I can print a representation of the tree by:
# + id="rGdrv0Kq4r5T" outputId="0cabe066-0513-4f6b-8da1-441d821f23da"
import itertools
ii = itertools.count(dog_data.shape[0])
[{'node_id': next(ii), 'left': x[0], 'right':x[1]} for x in clusterer.children_]
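# The same kind of tree is easier to see drawn as a dendrogram. `sklearn` does not plot one directly, but `scipy` can; here is a quick sketch on made-up 2-D points standing in for the (normalized) dog data, assuming `scipy` and `matplotlib` are available:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy 2-D points standing in for the normalized dog data
points = np.array([[0.10, 0.10], [0.15, 0.12],
                   [0.50, 0.55], [0.52, 0.50],
                   [0.90, 0.95]])

# Ward linkage builds the same style of tree AgglomerativeClustering does;
# each row of Z records one merge: (left id, right id, distance, cluster size)
Z = linkage(points, method='ward')
print(Z.shape)  # prints (4, 4) -- n-1 merges for n=5 points

fig, ax = plt.subplots(figsize=(6, 3))
dendrogram(Z, ax=ax)
plt.tight_layout()
```

# Each row of `Z` corresponds to one of the `node_id` entries printed above from `clusterer.children_`.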
# + [markdown] id="336xIjIv4r5U"
# The first line `{'left': 0, 'node_id': 11, 'right': 8}` reads that we combine cluster 0 *Border Collie* with cluster 8 *Portuguese Water Dog* to create Cluster 11. The next line says we combine 4 *Chihuahua* with 10 *Yorkshire Terrier* to create cluster 12. Let's associate index numbers with the dog breed names so that structure is easier to parse:
#
# + id="5o2v0-cT4r5U" outputId="4aa75247-6b01-4271-dd38-d1ca51b6d6de"
dog_names = pd.DataFrame({'breeds': dog_data.index.values})
dog_names
# + [markdown] id="jVSLZcCN4r5V"
# That makes it easier to interpret lines like:
#
# {'left': 1, 'node_id': 14, 'right': 2},
#
# We are combining `1` *Boston Terrier* and `2` *Brittany Spaniel*
#
# So when we draw this out we get:
#
# <img src="http://zacharski.org/files/courses/cs419/dendro.png" width="700"/>
#
#
# <h1 style="color:red">Tasks</h1>
#
# <h2 style="color:red">Task 1: Breakfast Cereals</h2>
# I would like you to create 4 clusters of the data in:
#
# https://raw.githubusercontent.com/zacharski/pg2dm-python/master/data/ch8/cereal.csv
#
# For clustering use the features calories, sugar, protein, and fiber.
#
# Print out the results as we did for the dog breed data:
#
#
# CLUSTER 0
# Bullmastiff
# Great Dane
#
#
# CLUSTER 1
# Chihuahua
# Yorkshire Terrier
#
#
# CLUSTER 2
# Border Collie
# Boston Terrier
# Brittany Spaniel
# German Shepherd
# Golden Retreiver
# Portuguese Water Dog
# Standard Poodle
#
# Because the initial centroids are random, by default the sklearn kmeans algorithm runs the algorithm 10 times and picks the best result (based on sum of squares error). I would like you to change that parameter so it runs the algorithm 100 times. Just google `sklearn kmeans` to get documentation on the parameters.
# + id="K2RR8kwx4r5V"
# + [markdown] id="HkQoS43x4r5W"
#
# <h2 style="color:red">Task 2: Hierarchical</h2>
# I would like you to use the hierarchical clustering algorithm on the cereal data.
#
# + id="H9Ye84l-4r5W"
# + [markdown] id="zRKqUn9p4r5W"
# And here is a question. What clusters with *Fruity Pebbles*?
# + id="BTxVp-rX4r5W"
# + [markdown] id="NVZNNl3X8_hQ"
# ## A non-programming question
#
# The following is a table of the weights of some world-class female athletes.
#
#
# name | weight in pounds
# :--- | ---:
# <NAME> | 66
# <NAME> | 68
# <NAME> | 76
# <NAME> | 99
# <NAME> | 123
# Brittainey Raven | 162
# <NAME> | 175
# <NAME> | 200
# <NAME> | 204
#
#
# In Single Linkage Hierarchical Clustering, what is the first cluster made?
#
#
# Note: You can indicate hierarchies with sub-lists:
#
# ```
# [['acoustic guitar', 'electric guitar'], 'dobro']
# ```
#
#
#
# + id="JT13DrJbEGUL"
athletes = ['<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', 'Brittainey Raven', '<NAME>', '<NAME>', '<NAME>']
# TO DO
first_cluster = []
# + [markdown] id="wtHU06uzGscP"
# What is the second cluster made?
# + id="lnvJ32j5FCxf"
# TO DO
# + [markdown] id="aIgk1cE5Gwnj"
# What is the third?
# + id="K0Cyj93eGx9M"
| labs/clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from missingpy import MissForest
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.constants import golden
import altair as alt
dataT = pd.read_csv('Between_variances_Dec_True.csv', sep=';',decimal='.')
dataF = pd.read_csv('Between_variances_Dec_False.csv', sep=';',decimal='.')
df = pd.concat([dataF, dataT], axis=1)
df=df.drop(['index2'], axis=1)
df = df.rename(columns={'index': 'Imputations'})
df
alt.Chart(df).mark_line().transform_fold(
fold=['Between_Variance_dec_False', 'Between_Variance_dec_True'], #fold transform essentially stacks all the values from the specified columns into a single new field named "value", with the associated names in a field named "key"
as_=['key', 'Between - imputation variance']
).encode(
x='Imputations',
y='Between - imputation variance:Q',#The shorthand form, x="name:Q", is useful for its lack of boilerplate when doing quick data explorations. The long-form, alt.X('name', type='quantitative')
color='key:N'
)+alt.Chart(df).mark_circle().transform_fold(
fold=['Between_Variance_dec_False', 'Between_Variance_dec_True'],
as_=['key', 'Between - imputation variance']
).encode(
x='Imputations',
y='Between - imputation variance:Q',
color='key:N'
).properties(
width=750,
height=350
)
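# The `transform_fold` used above is the Vega-Lite analogue of `pandas.melt`: it stacks the named columns into key/value pairs so a single encoding can color by series. A quick illustration on a toy frame (made-up numbers, same column names as above):

```python
import pandas as pd

toy = pd.DataFrame({
    'Imputations': [1, 2],
    'Between_Variance_dec_False': [0.10, 0.12],
    'Between_Variance_dec_True': [0.20, 0.18],
})

# melt produces one row per (Imputations, key) pair, mirroring the fold transform
long = toy.melt(id_vars='Imputations', var_name='key',
                value_name='Between - imputation variance')
print(long)
```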
| ALTAIR_VARIANCES (2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow] *
# language: python
# name: conda-env-tensorflow-py
# ---
# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="rMtCUtXxZ3ts"
# # T81-558: Applications of Deep Neural Networks
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
#
# **Module 1 Assignment: How to Submit an Assignment**
#
# **Student Name: <NAME>**
# + [markdown] colab_type="text" id="s_t8eej3Z3tu"
# # Assignment Instructions
#
# Assignments are submitted using the **submit** function that is defined earlier in this document. When you submit an assignment, you are sending both your source code and data. A computer program will automatically check your data, and it will inform you
# of how closely your data matches up with my solution file. You are allowed to submit as many times as you like, so if you see some issues with your first submission, you can make changes and resubmit; I will only count your final submission towards your grade.
#
# When you first signed up for the course, I emailed you a student key. You can see a sample key below. If you use this key, you will get an error. It is not the current key. **Use YOUR student key, that I provided in the email.**
#
# You must also provide a filename and assignment number. The filename is the source file that you wish to submit. Your data is a Pandas dataframe.
#
# **Assignment 1 is easy!** To complete assignment one, all you have to do is add your student key and make sure that the **file** variable contains the path to your source file. Your source file will most likely end in **.pynb** if you are using a Jupyter notebook; however, it might also end in **.py** if you are using a Python script.
#
# Run the code below, and if you are successful, you should see something similar to:
#
# ```
# Success: Submitted assignment 1 for jheaton:
# You have submitted this assignment 2 times. (this is fine)
# No warnings on your data. You will probably do well, but no guarantee. :-)
# ```
#
# If there is an issue with your data, you will get a warning.
#
#
# **Common Problem #1: Bad student key**
#
# If you use an invalid student key, you will see:
#
# ```
# Failure: {"message":"Forbidden"}
# ```
#
# You should also make sure that **_class#** appears somewhere in your filename. For example, for assignment 1, you should have **_class1** somewhere in your filename. If not, you will get an error. This check ensures that you do not submit the wrong assignment with the incorrect file. If you do have a mismatch, you will get an error such as:
#
#
# **Common Problem #2: Must have class1 (or another number) as part of the filename**
# ```
# Exception: _class1 must be part of the filename.
# ```
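# Under the hood this check is just a substring test. A simplified sketch of the validation (illustrative only -- the real `submit` function defined later in this notebook does the same thing with `'_class{}'.format(no)`; the helper name `check_filename` is made up here):

```python
def check_filename(source_file, no):
    """Raise if '_class<no>' is not embedded in the filename."""
    suffix = '_class{}'.format(no)
    if suffix not in source_file:
        raise Exception('{} must be part of the filename.'.format(suffix))

check_filename('assignment_jheaton_class1.ipynb', 1)  # passes silently
print('filename ok')
```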
#
# The following video covers assignment submission: [assignment submission video](http://www.yahoo.com).
#
# **Common Problem #3: Can't find source file**
#
# You might get an error similar to this:
#
# ```
# FileNotFoundError: [Errno 2] No such file or directory: '/Users/jeffh/projects/t81_558_deep_learning/t81_558_class1_intro_python.ipynb'
# ```
#
# This error means your **file** path is wrong. Make sure the path matches where your file is stored. See my hints below in the comments for paths in different environments.
#
# **Common Problem #4: ??? **
#
# If you run into a problem not listed here, just let me know. If you cannot solve the file not found error, try running the code provided later in this file to list every file in your notebooks folder.
# + [markdown] colab_type="text" id="vT-h8-JIbVg0"
# # Google CoLab Instructions
#
# If you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
# + colab={"base_uri": "https://localhost:8080/", "height": 124} colab_type="code" id="VDWQvbmRaLLD" outputId="ff34aff5-e89b-4a7b-de16-4bfdbd3e9e5e"
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
# %tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
# -
# If you are running this notebook with CoLab, the following command will show you your notebooks. You will need to know the location of your notebook when you submit your assignment.
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="fgb13NXeaif2" outputId="c56f6d13-d698-4fab-9bdf-d553c6d56831"
# !ls /content/drive/My\ Drive/Colab\ Notebooks
# + [markdown] colab_type="text" id="6vA8pJdpZ3tu"
# # Assignment Submit Function
#
# You will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems.
#
# **It is unlikely that you should need to modify this function.**
# + colab={} colab_type="code" id="IhpWWqjYZ3tv"
import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The paramaters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 1.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext))
payload = []
for item in data:
if type(item) is PIL.Image.Image:
buffered = io.BytesIO()
item.save(buffered, format="PNG")
payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')})
elif type(item) is pd.core.frame.DataFrame:
payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
r= requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code==200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
# + [markdown] colab_type="text" id="cwHjhzzgZ3ty"
# # Assignment #1 Sample Code
#
# For assignment #1, you only must change two things. The key must be modified to be your key, and the file path must be modified to be the path to your local file. Once you have that, just run it, and your assignment is submitted.
# + colab={"base_uri": "https://localhost:8080/", "height": 87} colab_type="code" id="k7sKpvkYZ3tz" outputId="06cf2a6a-c42c-48d0-d662-2e0d443543a0"
# This is your student key that I emailed to you at the beginnning of the semester.
key = "<KEY>" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class1.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class1.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class1.ipynb' # Mac/Linux
df = pd.DataFrame({'a' : [0, 0, 1, 1], 'b' : [0, 1, 0, 1], 'c' : [0, 1, 1, 0]})
#submit(source_file=file,data=[df],key=key,no=1)
df
# + [markdown] colab_type="text" id="-uIU8SxVZ3t2"
# # Checking Your Submission
#
# You can always double-check to make sure my program correctly received your submission. The following utility code will help with that.
# + colab={} colab_type="code" id="YtQXDseiZ3t3"
import requests
import pandas as pd
import base64
import os
def list_submits(key):
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key': key},
json={})
if r.status_code == 200:
print("Success: \n{}".format(r.text))
else:
print("Failure: {}".format(r.text))
def display_submit(key,no):
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key': key},
json={'assignment':no})
if r.status_code == 200:
print("Success: \n{}".format(r.text))
else:
print("Failure: {}".format(r.text))
# + colab={} colab_type="code" id="YrtN_yZnZ3t5" outputId="f11b2136-6d4b-47d2-faab-7f7bb04c9f4c"
# Show a listing of all submitted assignments.
key = "<KEY>"
list_submits(key)
# + colab={} colab_type="code" id="CPjhGyemZ3t7" outputId="86e05f16-1212-4dfc-8daf-5ae96bb13bf8"
# Show one assignment, by number.
display_submit(key,1)
# -
# # Can't Find Your File in CoLab?
#
# The following code will list every file in your Google Colab Notebooks directory along with a True/False indicator of whether the file could be opened. You SHOULD see all True's. Copy/paste the filename (along with the quotes) and use it as the name of your submitted notebook.
# +
from os import listdir
from os.path import isfile, join
import base64
if COLAB:
PATH = "/content/drive/My Drive/Colab Notebooks/"
for f in listdir(PATH):
full_path = join(PATH, f)
if isfile(full_path):
try:
with open(full_path, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
success = True
except FileNotFoundError:
success = False
report = full_path.replace('\\','\\\\')
report = report.replace('\"','\\\"')
report = report.replace('\'','\\\'')
print(f'[{success}] - "{report}"')
# + colab={} colab_type="code" id="-OvkF33bZ3t9"
| assignments/assignment_yourname_class1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="oR4N4thZX_Ty" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="24c3811d-5f94-49ce-bdc3-ffd7ca2f2d8e"
# !nvidia-smi
# + id="9DVoGlsbV5oA" colab_type="code" colab={}
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
# + id="BQIB-EcUV-bP" colab_type="code" colab={}
def generateCauchyData(N, mean):
return (np.random.standard_cauchy(N) * 100. + mean)
# + id="SOT9QmdoWXMs" colab_type="code" colab={}
data = generateCauchyData(10000, 42.)
# + id="I215-j7hXRXu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="2b8bb61d-b737-474f-83c1-3f7f743b67d3"
plt.plot(data)
plt.show()
# + id="bLge7EOmWClC" colab_type="code" colab={}
def frequentistCenter(data):
return np.mean(data)
# + id="Novh783eWFOV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4c153ba3-b83a-4269-b864-f0850f847748"
X = frequentistCenter(data)
print("Sample mean: ", X)
# + id="QFeCfAzkWHE3" colab_type="code" colab={}
def bayesianCenter(data):
with pm.Model():
loc = pm.Uniform('location', lower=-1000., upper=1000.)
scale = pm.Uniform('scale', lower=0.01, upper=1000.)
pm.Cauchy('y', alpha=loc, beta=scale, observed=data)
trace = pm.sample(3000, tune=3000, target_accept=0.92)
# pm.traceplot(trace)
# plt.show()
return np.mean(trace['location'])
# + id="rmOB6JDPXrtY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="86ccc723-4aac-4ab4-a10c-9cba2f355e96"
X2 = bayesianCenter(data)
print("Bayesian mode (median, location): ", X2)
| pymc3/02-cauchy-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_latest_p36
# language: python
# name: conda_pytorch_latest_p36
# ---
# +
import numpy as np
import pandas as pd
df = pd.read_csv('tsv.csv')
df['date_of_infraction']= pd.to_datetime(df['date_of_infraction'])
# '*' is a regex metacharacter, so pass regex=False to strip the literal characters
df['Count Date'] = df['Count Date'].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['Latitude'] = df['Latitude'].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['Longitude'] = df['Longitude'].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['8 Peak Hr Vehicle Volume'] = df['8 Peak Hr Vehicle Volume'].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['8 Peak Hr Pedestrian Volume'] = df['8 Peak Hr Pedestrian Volume'].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['Activation Date'] = df['Activation Date'].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['TCS '] = df['TCS '].str.replace('*', '', regex=False).str.replace(',', '', regex=False)
df['Count Date']= pd.to_datetime(df['Count Date'])
df['Activation Date']= pd.to_datetime(df['Activation Date'])
df
df1 = df[['date_of_infraction','set_fine_amount','time_of_infraction','location2','Latitude','Longitude','Count Date','8 Peak Hr Vehicle Volume','8 Peak Hr Pedestrian Volume']].copy()  # copy to avoid SettingWithCopyWarning
df1
df1['date_of_infraction_year'] = df1['date_of_infraction'].dt.year
df1['date_of_infraction_month'] = df1['date_of_infraction'].dt.month
df1['date_of_infraction_week'] = df1['date_of_infraction'].dt.isocalendar().week  # .dt.week is deprecated
df1['date_of_infraction_day'] = df1['date_of_infraction'].dt.day
df1['date_of_infraction_dayofweek'] = df1['date_of_infraction'].dt.dayofweek
df1['Count Date_year'] = df1['Count Date'].dt.year
df1['Count Date_month'] = df1['Count Date'].dt.month
df1['Count Date_week'] = df1['Count Date'].dt.isocalendar().week  # .dt.week is deprecated
df1['Count Date_day'] = df1['Count Date'].dt.day
df1['Count Date_dayofweek'] = df1['Count Date'].dt.dayofweek
# -
one_hot=pd.get_dummies(df1['location2'])
# Drop column as it is now encoded
df1 = df1.drop('location2',axis = 1)
# Join the encoded df
df1 = df1.join(one_hot)
df1
df1 = df1.drop('date_of_infraction',axis = 1)
df1 = df1.drop('Count Date',axis = 1)
df1 = df1.dropna(how='all')
df1["8 Peak Hr Vehicle Volume"] = df1["8 Peak Hr Vehicle Volume"].astype(str).astype(int)
df1["8 Peak Hr Pedestrian Volume"] = df1["8 Peak Hr Pedestrian Volume"].astype(str).astype(int)
df1['hour_sin'] = np.sin(2 * np.pi * df1['time_of_infraction']/23.0)
df1['hour_cos'] = np.cos(2 * np.pi * df1['time_of_infraction']/23.0)
df1["Latitude"] = df1.Latitude.astype(float)
df1["Longitude"] = df1.Longitude.astype(float)
np.where(df1.values >= np.finfo(np.float64).max)
df1 = df1.dropna()
pd.isnull(df1).sum() > 0
X=df1.drop(columns=['8 Peak Hr Vehicle Volume','8 Peak Hr Pedestrian Volume'])
y= df1[['8 Peak Hr Vehicle Volume','8 Peak Hr Pedestrian Volume']]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.60)
# !pip install tensorflow
# !pip install keras
# +
# k-nearest neighbors for multioutput regression
from sklearn.datasets import make_regression
from sklearn.neighbors import KNeighborsRegressor
model = KNeighborsRegressor()
# fit model on the training split only (fitting on all of X leaks the test rows)
model.fit(X_train, y_train)
# make a prediction
y_pred=model.predict(X_test)
# summarize prediction
print(y_pred)
# -
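The prediction above is printed but never scored. A hedged sketch of evaluating a multioutput KNN regressor with mean absolute error, using a synthetic problem from `make_regression` as a stand-in since the parking dataset is not reproduced here:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic two-target regression problem standing in for the traffic data.
X_demo, y_demo = make_regression(n_samples=1000, n_features=10, n_targets=2,
                                 noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.3, random_state=0)

model = KNeighborsRegressor()
model.fit(X_tr, y_tr)  # fit on the training split only
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"test MAE: {mae:.2f}")
```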
| Models/Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Importing libraries and datasets
import numpy as np
import pandas as pd
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, accuracy_score, classification_report, roc_curve, auc
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
from imblearn.over_sampling import SMOTE
from xgboost import plot_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
df_response = pd.read_csv('./Retail_Data/Retail_Data_Response.csv')
df_transactions = pd.read_csv('./Retail_Data/Retail_Data_Transactions.csv', parse_dates=['trans_date'])
df_response.head()
df_transactions.head()
print(df_transactions['trans_date'].min())
print(df_transactions['trans_date'].max())
# ## Data Preparation
# +
## since the last date of the data is 16 March 2015, the campaign date is assumed to be 17 March 2015
## RFM model will be used to predict campaign response. Recency is calculated first
campaign_date = dt.datetime(2015,3,17)
df_transactions['recent'] = campaign_date - df_transactions['trans_date']
df_transactions['recent'] = df_transactions['recent'] / np.timedelta64(1, 'D')  # convert timedelta to days (float)
df_transactions.head()
# +
## create data set with RFM variables
df_rfm = df_transactions.groupby('customer_id').agg({'recent': lambda x: x.min(),        # Recency
                                                     'customer_id': lambda x: len(x),   # Frequency
                                                     'tran_amount': lambda x: x.sum()}) # Monetary Value
df_rfm.rename(columns={'recent': 'recency',
                       'customer_id': 'frequency',
                       'tran_amount': 'monetary_value'}, inplace=True)
# -
df_rfm = df_rfm.reset_index()
df_rfm.head()
# +
## create data set with CLV variables
df_clv = df_transactions.groupby('customer_id').agg({'recent': lambda x: x.min(),                       # Recency
                                                     'customer_id': lambda x: len(x),                   # Frequency
                                                     'tran_amount': lambda x: x.sum(),                  # Monetary Value
                                                     'trans_date': lambda x: (x.max() - x.min()).days}) # AOU
df_clv.rename(columns={'recent': 'recency',
                       'customer_id': 'frequency',
                       'tran_amount': 'monetary_value',
                       'trans_date': 'AOU'}, inplace=True)
df_clv['ticket_size'] = df_clv['monetary_value'] / df_clv['frequency']
# -
df_clv = df_clv.reset_index()
df_clv.head()
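A common next step after building an RFM table is to bin each variable into quartile scores and combine them into a single segment code. A minimal sketch on a toy frame (the column names follow `df_rfm` above; the 1-4 score direction and the combined-code format are modelling choices, not part of this notebook):

```python
import pandas as pd

df = pd.DataFrame({
    "recency": [5, 40, 200, 365],
    "frequency": [30, 12, 4, 1],
    "monetary_value": [900, 400, 120, 30],
})

# Lower recency is better (reversed labels); higher frequency/monetary is better.
df["R"] = pd.qcut(df["recency"], 4, labels=[4, 3, 2, 1]).astype(int)
df["F"] = pd.qcut(df["frequency"], 4, labels=[1, 2, 3, 4]).astype(int)
df["M"] = pd.qcut(df["monetary_value"], 4, labels=[1, 2, 3, 4]).astype(int)
df["RFM"] = df["R"] * 100 + df["F"] * 10 + df["M"]
print(df)
```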
# ## Calculating response rate
response_rate = df_response.groupby('response').agg({'customer_id': lambda x: len(x)}).reset_index()
response_rate.head()
plt.figure(figsize=(5,5))
x=range(2)
plt.bar(x,response_rate['customer_id'])
plt.xticks(response_rate.index)
plt.title('Response Distribution')
plt.xlabel('Convert or Not')
plt.ylabel('no. of users')
plt.show()
# data is imbalanced
# +
## merging two data sets - RFM
df_modeling_rfm = pd.merge(df_response,df_rfm)
df_modeling_rfm.head()
# +
## merging two data sets
df_modeling = pd.merge(df_response,df_rfm)
df_modeling.head()
# -
# ## Creating train and test dataset
# +
## spliting dataframe into X and y
X = df_modeling.drop(columns=['response','customer_id'])
y = df_modeling['response']
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print("Number transactions X_train dataset: ", X_train.shape)
print("Number transactions y_train dataset: ", y_train.shape)
print("Number transactions X_test dataset: ", X_test.shape)
print("Number transactions y_test dataset: ", y_test.shape)
# -
sns.scatterplot(data=df_modeling, x='recency', y='monetary_value', hue='response')
sns.despine()
plt.title("Imbalanced Data")
# ## Fixing imbalanced with SMOTE
# +
sm = SMOTE(random_state=0)
X_SMOTE, y_SMOTE = sm.fit_resample(X_train, y_train)  # fit_sample was renamed fit_resample in newer imbalanced-learn
df_SMOTE = pd.concat([pd.DataFrame(data=X_SMOTE),pd.DataFrame(data=y_SMOTE)], axis=1, sort=False)
df_SMOTE.columns= ['recency', 'frequency', 'monetary_value', 'response']
sns.scatterplot(data=df_SMOTE, x='recency', y='monetary_value', hue='response')
sns.despine()
plt.title("SMOTE Data")
# -
# ## Logistic Regression
# +
print('logistic regression model - SMOTE')
logreg = LogisticRegression(solver='liblinear', class_weight='balanced')
predicted_y = []
expected_y = []
logreg_model_SMOTE = logreg.fit(X_SMOTE, y_SMOTE)
predictions = logreg_model_SMOTE.predict(X_SMOTE)
predicted_y.extend(predictions)
expected_y.extend(y_SMOTE)
report_train = classification_report(expected_y, predicted_y)
print('training set')
print(report_train)
predicted_y = []
expected_y = []
predictions = logreg_model_SMOTE.predict(X_test)
predicted_y.extend(predictions)
expected_y.extend(y_test)
report_test = classification_report(expected_y, predicted_y)
print('test set')
print(report_test)
# +
y_score_train = logreg_model_SMOTE.decision_function(X_SMOTE)
fpr_train, tpr_train, _ = roc_curve(y_SMOTE, y_score_train)
auc_train = roc_auc_score(y_SMOTE, y_score_train)
plt.plot(fpr_train,tpr_train, color='red', label='SMOTE - train , auc='+str(auc_train))
y_score_test = logreg_model_SMOTE.decision_function(X_test)
fpr_test, tpr_test, _ = roc_curve(y_test, y_score_test)
auc_test = roc_auc_score(y_test, y_score_test)
plt.plot(fpr_test,tpr_test, color='Blue', label='SMOTE - test , auc='+str(auc_test))
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.legend(loc=4)
plt.show()
| 08 - Campaign Response Model/Campaign Response Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Multi-label classification
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# %pwd
from fastai.vision import *
from fastai import *
import pandas as pd
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
os.listdir("./data/kaggle/humanProtein")
PATH = 'data/kaggle/humanProtein'
# ls {PATH}
# +
# # !cp -r {PATH}/test {PATH}/valid
# # !cp {PATH}/train.csv {PATH}/valid.csv
# -
# !ls {PATH}
os.listdir(f'{PATH}/train')[:3]
imgpath = os.path.join(f'{PATH}/train', os.listdir(f'{PATH}/train')[0])
print({imgpath})
img = Image.open(f'{imgpath}')
img = np.array(img)
plt.imshow(img)
plt.show()
# +
# os.listdir(f'{PATH}/valid')[:3]
# + [markdown] heading_collapsed=true
# ## Multi-label versus single-label classification
# + [markdown] hidden=true
# In single-label classification each sample belongs to one class. In the previous example, each image is either a *dog* or a *cat*.
# + hidden=true
df = pd.read_csv(f'{PATH}/train.csv')
# -
df.head()
df.Target
tfms = get_transforms(do_flip=True)
data = ImageDataBunch.from_csv(PATH, folder='train', csv_labels='train.csv', valid_pct=1e-1, size=24,
                               suffix='_green.png', ds_tfms=get_transforms(flip_vert=True))
data.normalize(vision.data.imagenet_stats)
data.batch_stats()
data.show_batch(rows=3, figsize=(5,5))
??ConvLearner
learn = ConvLearner(data, models.resnet18)
learn.lr_find()
learn.lr_range(slice(1e-1, 1e-3))
learn.lr_find()
learn.recorder.plot(20, 7)
learn.fit_one_cycle(cyc_len=10, max_lr=1e-2)
# + [markdown] hidden=true
# In multi-label classification each sample can belong to one or more classes. In the previous example, the first images belongs to two classes: *haze* and *primary*. The second image belongs to four classes: *agriculture*, *clear*, *primary* and *water*.
# -
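In practice, multi-label targets are encoded as multi-hot vectors: one binary column per class, with several columns allowed to be 1 at once. A minimal sketch with scikit-learn, using the tags named in the paragraph above:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Each sample may carry several tags at once.
samples = [
    {"haze", "primary"},
    {"agriculture", "clear", "primary", "water"},
]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(samples)
print(mlb.classes_)  # classes in alphabetical order
print(y)             # one row per sample, one column per class
```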
# ## Multi-label models for Planet dataset
# +
from planet import f2
metrics=[f2]
f_model = resnet34
# -
label_csv = f'{PATH}/train_v2.csv'
n = len(list(open(label_csv)))-1
val_idxs = get_cv_idxs(n)
# We use a different set of data augmentations for this dataset - we also allow vertical flips, since we don't expect vertical orientation of satellite images to change our classifications.
def get_data(sz):
    tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
    return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms,
                                        suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')
data = get_data(256)
x,y = next(iter(data.val_dl))
y
list(zip(data.classes, y[0]))
plt.imshow(data.val_ds.denorm(to_np(x))[0]*1.4);
sz=64
data = get_data(sz)
data = data.resize(int(sz*1.3), 'tmp')
learn = ConvLearner.pretrained(f_model, data, metrics=metrics)
lrf=learn.lr_find()
learn.sched.plot()
lr = 0.2
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
lrs = np.array([lr/9,lr/3,lr])
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
learn.sched.plot_loss()
sz=128
learn.set_data(get_data(sz))
learn.freeze()
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
sz=256
learn.set_data(get_data(sz))
learn.freeze()
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
multi_preds, y = learn.TTA()
preds = np.mean(multi_preds, 0)
f2(preds,y)
# ### End
| kaggle-human-protein-comp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] cell_style="split"
# # Daily User Plots (Bokeh Static)
# -
# ### Imports
# +
import os
import pandas as pd
from bokeh.io import output_notebook, export_png
from bokeh.plotting import show
from IPython.display import clear_output
import sys
sys.path.append(os.path.abspath(os.path.join(os.getcwd(), "..", "src", "visualization")))
import daily_user_bokeh_fxns as dub
output_notebook()
# -
# ### Load raw data
# +
file_path = os.path.abspath(os.path.join(os.getcwd(), "..", "data", "processed", "daily_user_averages.csv"))
adu_df = pd.read_csv(file_path)
# Convert month to categorical
adu_df['month'] = pd.Categorical(adu_df['month'], dub.MONTHS)
# Init figure dirs
os.makedirs(os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages")), exist_ok=True)
# -
# ## LCC Plots
season = '2017-2018'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if ("LCC" in x and "Gate" not in x)]
dayOfWeek = 'All'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "LCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2017-2018'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if ("LCC" in x and "Gate" not in x)]
dayOfWeek = 'Saturday'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "LCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2018-2019'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if ("LCC" in x and "Gate" not in x)]
dayOfWeek = 'All'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "LCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2018-2019'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if ("LCC" in x and "Gate" not in x)]
dayOfWeek = 'Saturday'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "LCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
# ## BCC Plots
season = '2017-2018'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if "BCC" in x]
dayOfWeek = 'All'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "BCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2017-2018'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if "BCC" in x]
dayOfWeek = 'Saturday'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "BCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2018-2019'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if "BCC" in x]
dayOfWeek = 'All'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "BCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2018-2019'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if "BCC" in x]
dayOfWeek = 'Saturday'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "BCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
# ## MCC Plots
season = '2018-2019'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if "MCC" in x]
dayOfWeek = 'All'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "MCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
season = '2018-2019'
sites = [x for x in adu_df.query("season == @season")['site'].unique() if "MCC" in x]
dayOfWeek = 'Saturday'
src_df = dub.make_data_set(season, sites, dayOfWeek, adu_df)
p = dub.make_plot(src_df, show_sensor_n=True)
fig_path = os.path.abspath(os.path.join(os.getcwd(), "..","reports", "figures", "daily_user_averages",
"daily_users_{}_{}_{}.png".format(season.replace("-", ""), "MCC", dayOfWeek)))
export_png(p, fig_path)
show(p)
| wba-trailcounting/notebooks/3.0-maf-daily-user-averages-bokeh-static.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dl_CE7454
# language: python
# name: dl_ce7454
# ---
# # Lab 07: Vanilla neural networks -- demo
#
# # Creating a two-layer network
import torch
import torch.nn as nn
import torch.nn.functional as F
# ### In Pytorch, networks are defined as classes
class two_layer_net(nn.Module):

    def __init__(self, input_size, hidden_size, output_size):
        super(two_layer_net, self).__init__()
        self.layer1 = nn.Linear(input_size, hidden_size, bias=True)
        self.layer2 = nn.Linear(hidden_size, output_size, bias=True)

    def forward(self, x):
        x = self.layer1(x)
        x = F.relu(x)
        x = self.layer2(x)
        p = F.softmax(x, dim=0)
        return p
# ### Create an instance that takes an input of size 2, transforms it into something of size 5, then into something of size 3
# $$
# \begin{bmatrix}
# \times \\ \times
# \end{bmatrix}
# \longrightarrow
# \begin{bmatrix}
# \times \\ \times \\ \times \\ \times \\ \times
# \end{bmatrix}
# \longrightarrow
# \begin{bmatrix}
# \times \\ \times \\ \times
# \end{bmatrix}
# $$
net= two_layer_net(2,5,3)
print(net)
# ### Now we are going to make an input vector and feed it to the network:
x=torch.Tensor([1,1])
print(x)
p=net.forward(x)
print(p)
# ### Syntactic sugar for the forward method
p=net(x)
print(p)
# ### Let's check that the probability vector indeed sums to 1:
print( p.sum() )
# ### This network is composed of two Linear modules that we have called layer1 and layer2. We can see this when we type:
print(net)
# ### We can access the first module as follows:
print(net.layer1)
# ### To get the weights and bias of the first layer we do:
print(net.layer1.weight)
print(net.layer1.bias)
# ### So to change the first row of the weights from layer 1 you would do:
# +
with torch.no_grad():  # in-place edits to parameters must bypass autograd
    net.layer1.weight[0,0] = 10
    net.layer1.weight[0,1] = 20
print(net.layer1.weight)
# -
# ### Now we are going to feed $x=\begin{bmatrix}1\\1 \end{bmatrix}$ to this modified network:
p=net(x)
print(p)
# ### Alternatively, all the parameters of the network can be accessed by net.parameters().
list_of_param = list( net.parameters() )
print(list_of_param)
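`net.parameters()` is also the standard way to count trainable weights. A small sketch for a network with the same 2 → 5 → 3 shape as above (built here with `nn.Sequential` so the snippet stands alone):

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 5), nn.ReLU(), nn.Linear(5, 3))

# layer1: 2*5 weights + 5 biases; layer2: 5*3 weights + 3 biases -> 33 total
n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(n_params)  # 33
```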
| codes/labs_lecture04/lab03_vanilla_nn/.ipynb_checkpoints/vanilla_nn_demo-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="dkgce5cdOcW7"
# # Part 2: Process the item embedding data in BigQuery and export it to Cloud Storage
#
# This notebook is the second of five notebooks that guide you through running the [Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN](https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/tree/master/retail/recommendation-system/bqml-scann) solution.
#
# Use this notebook to complete the following tasks:
#
# 1. Process the song embeddings data in BigQuery to generate a single embedding vector for each song.
# 1. Use a Dataflow pipeline to write the embedding vector data to CSV files and export the files to a Cloud Storage bucket.
#
# Before starting this notebook, you must run the [01_train_bqml_mf_pmi](01_train_bqml_mf_pmi.ipynb) notebook to calculate item PMI data and then train a matrix factorization model with it.
#
# After completing this notebook, run the [03_create_embedding_lookup_model](03_create_embedding_lookup_model.ipynb) notebook to create a model to serve the item embedding data.
#
# + [markdown] id="SW1RHsqGPNzE"
# ## Setup
#
# Import the required libraries, configure the environment variables, and authenticate your GCP account.
#
#
# + id="Mp6ETYF2R-0q"
# !pip install -U -q apache-beam[gcp]
# + [markdown] id="zdSKSzqvR_qY"
# ### Import libraries
# + id="OcUKzLnuR_wa"
import os
from datetime import datetime
import apache_beam as beam
import numpy as np
import tensorflow.io as tf_io
# + [markdown] id="22rDpO3JPcy9"
# ### Configure GCP environment settings
#
# Update the following variables to reflect the values for your GCP environment:
#
# - `PROJECT_ID`: The ID of the Google Cloud project you are using to implement this solution.
# - `BUCKET`: The name of the Cloud Storage bucket you created to use with this solution. The `BUCKET` value should be just the bucket name, so `myBucket` rather than `gs://myBucket`.
# - `REGION`: The region to use for the Dataflow job.
# + id="Nyx4vEd7Oa9I"
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucketName" # Change to the bucket you created.
REGION = "yourDataflowRegion" # Change to your Dataflow region.
BQ_DATASET_NAME = "recommendations"
# !gcloud config set project $PROJECT_ID
# + [markdown] id="3d89ZwydPhQX"
# ### Authenticate your GCP account
# This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
# + id="6ICvdRicPhl8"
try:
    from google.colab import auth
    auth.authenticate_user()
    print("Colab user is authenticated.")
except:
    pass
# + [markdown] id="R1gmEmHbSaQD"
# ## Process the item embeddings data
#
# You run the [sp_ExractEmbeddings](sql_scripts/sp_ExractEmbeddings.sql) stored procedure to process the item embeddings data and write the results to the `item_embeddings` table.
#
# This stored procedure works as follows:
#
# 1. Uses the [ML.WEIGHTS](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-weights) function to extract the item embedding matrices from the `item_matching_model` model.
# 1. Aggregates these matrices to generate a single embedding vector for each item.
#
# Because BigQuery ML matrix factorization models are designed for user-item recommendation use cases, they generate two embedding matrices, one for users and the other for items. However, in this use case, both embedding matrices represent items, but in different axes of the feedback matrix. For more information about how the feedback matrix is calculated, see [Understanding item embeddings](https://cloud.google.com/solutions/real-time-item-matching#understanding_item_embeddings).
#
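The aggregation step can be pictured in NumPy: given the two factor matrices from matrix factorization (both indexed by item in this PMI setup), one embedding per item can be formed by combining the two rows. This is a conceptual sketch only, not the stored procedure's actual SQL, and averaging is one of several possible aggregation choices:

```python
import numpy as np

n_items, dim = 4, 3
rng = np.random.default_rng(0)
factor_a = rng.normal(size=(n_items, dim))  # "user"-side factor matrix (items on rows here)
factor_b = rng.normal(size=(n_items, dim))  # item-side factor matrix

# One aggregation choice: average the two rows for each item.
item_embeddings = (factor_a + factor_b) / 2.0
print(item_embeddings.shape)  # (4, 3)
```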
# + [markdown] id="utkyuwJUyTlb"
# ### Run the `sp_ExractEmbeddings` stored procedure
# + id="DK0olptba8qi"
# %%bigquery --project $PROJECT_ID
CALL recommendations.sp_ExractEmbeddings()
# + [markdown] id="0UvHD7BJ8Gk0"
# Get a count of the records in the `item_embeddings` table:
# + id="pQsJenNFzVJ7"
# %%bigquery --project $PROJECT_ID
SELECT COUNT(*) embedding_count
FROM recommendations.item_embeddings;
# + [markdown] id="sx8JNJbA8PxC"
# See a sample of the data in the `item_embeddings` table:
# + id="Y4kTGcaRzVJ7"
# %%bigquery --project $PROJECT_ID
SELECT *
FROM recommendations.item_embeddings
LIMIT 5;
# + [markdown] id="i3LKaxlNSkrv"
# ## Export the item embedding vector data
#
# Export the item embedding data to Cloud Storage by using a Dataflow pipeline. This pipeline does the following:
#
# 1. Reads the item embedding records from the `item_embeddings` table in BigQuery.
# 1. Writes each item embedding record to a CSV file.
# 1. Writes the item embedding CSV files to a Cloud Storage bucket.
#
# The pipeline is implemented in the [embeddings_exporter/pipeline.py](embeddings_exporter/pipeline.py) module.
# + [markdown] id="G8HLFGGl5oac"
# ### Configure the pipeline variables
#
# Configure the variables needed by the pipeline:
# + id="2ZKaoBwnSk6U"
runner = "DataflowRunner"
timestamp = datetime.utcnow().strftime("%y%m%d%H%M%S")
job_name = f"ks-bqml-export-embeddings-{timestamp}"
bq_dataset_name = BQ_DATASET_NAME
embeddings_table_name = "item_embeddings"
output_dir = f"gs://{BUCKET}/bqml/item_embeddings"
project = PROJECT_ID
temp_location = os.path.join(output_dir, "tmp")
region = REGION
print(f"runner: {runner}")
print(f"job_name: {job_name}")
print(f"bq_dataset_name: {bq_dataset_name}")
print(f"embeddings_table_name: {embeddings_table_name}")
print(f"output_dir: {output_dir}")
print(f"project: {project}")
print(f"temp_location: {temp_location}")
print(f"region: {region}")
# + id="OyiIh-ATzVJ8"
try:
    os.chdir(os.path.join(os.getcwd(), "embeddings_exporter"))
except:
    pass
# + [markdown] id="AHxODaoFzVJ8"
# ### Run the pipeline
# + [markdown] id="OBarPrE_-LJr"
# It takes about 5 minutes to run the pipeline. You can see the graph for the running pipeline in the [Dataflow Console](https://console.cloud.google.com/dataflow/jobs).
# + id="WngoWnt2zVJ9"
if tf_io.gfile.exists(output_dir):
    print("Removing {} contents...".format(output_dir))
    tf_io.gfile.rmtree(output_dir)
print("Creating output: {}".format(output_dir))
tf_io.gfile.makedirs(output_dir)
# !python runner.py \
# --runner={runner} \
# --job_name={job_name} \
# --bq_dataset_name={bq_dataset_name} \
# --embeddings_table_name={embeddings_table_name} \
# --output_dir={output_dir} \
# --project={project} \
# --temp_location={temp_location} \
# --region={region}
# + [markdown] id="PLXGq4CA_Oz0"
# ### List the CSV files that were written to Cloud Storage
# + id="Ee89jHK5zVJ9"
# !gsutil ls {output_dir}/embeddings-*.csv
# + [markdown] id="Fp1bOyVCgBnH"
# ## License
#
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
# See the License for the specific language governing permissions and limitations under the License.
#
# **This is not an official Google product but sample code provided for an educational purpose**
| notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/02_export_bqml_mf_embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/harpreet-kaur-mahant/kaggle-house-prediction/blob/master/House_Price.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="kb9w4uIIJMbi" colab_type="code" outputId="691297ba-1223-491c-badf-0554ed4033e0" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="sV4UiLRRiftl" colab_type="text"
# **Problem Description:** Predict the final price of a house based on a few features. For example, some people want a two-bedroom house while others want a three-bedroom one, so the sale price depends on the living area of the house and on how many people want a small or a big one. We can predict which living areas are in high demand and what a house costs.
# + id="G2G3kwEJijc2" colab_type="code" colab={}
#importing the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
# + id="_8C1KQj6fF4i" colab_type="code" colab={}
train = pd.read_csv('/content/train.csv')
test = pd.read_csv('/content/test.csv')
# + [markdown] id="qfHzrFNsDhHD" colab_type="text"
# The very first step in preparing a good model is to perform EDA (Exploratory Data Analysis). Let's do it.
# 1. EDA - Steps of EDA
#
# + [markdown] id="_SnfunnvFEcT" colab_type="text"
# #Variable Identification
# ### 1.1 First, identify Predictor (Input) and Target (output) variables.
# + id="tN8nHYssj-XJ" colab_type="code" outputId="7228a225-319b-4ba9-8b2f-ea8b2af9aa97" colab={"base_uri": "https://localhost:8080/", "height": 408}
#display the top 10 rows
train.head(10)
# + id="pNQxNMlVk99L" colab_type="code" outputId="34023ae1-4f2b-4992-e098-e522c6b030f1" colab={"base_uri": "https://localhost:8080/", "height": 34}
#shape of train data
train.shape
# + [markdown] id="qIpqiJo4Fr5W" colab_type="text"
# Here we come to know the predictor variables, the target variable, their types, and the type of data set.
#
# + [markdown] id="wN8AP3t8F6a9" colab_type="text"
# In this data set, the target variable is SalePrice and the rest are predictors.
#
# + id="Ww1sQ7gYlvjh" colab_type="code" outputId="f9500727-5dad-4e97-a02d-84aad49bf430" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#you can also check the data set information using the info() command.
train.info()
# + id="3tlUl9UbAJKO" colab_type="code" outputId="70cccda7-d96e-4b6f-be60-07c5e0238c8a" colab={"base_uri": "https://localhost:8080/", "height": 170}
train['SalePrice'].describe()
# + id="o5P0z8A3AVLp" colab_type="code" outputId="d3376fec-1665-4d5a-fce9-f0e8c1ad32c4" colab={"base_uri": "https://localhost:8080/", "height": 339}
plt.figure(figsize = (9, 5))
train['SalePrice'].plot(kind ="hist")
# + [markdown] id="okYRGDOmHtZr" colab_type="text"
# At this stage, we explore variables one by one. Method to perform uni-variate analysis will depend on whether the variable type is categorical or continuous. Let’s look at these methods and statistical measures for categorical and continuous variables individually:
#
# **Continuous Variables:** In the case of continuous variables, we need to understand the central tendency and spread of the variable. Note: univariate analysis is also used to highlight missing and outlier values. In an upcoming part of this series, we will look at methods to handle missing and outlier values. To know more about these methods, you can refer to https://www.analyticsvidhya.com/blog/2016/01/guide-data-exploration/#one
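The central tendency and spread measures mentioned above can be computed directly with pandas. A minimal sketch on toy sale-price-like values (the numbers are illustrative, not from the Kaggle data):

```python
import pandas as pd

s = pd.Series([34900, 125000, 163000, 214000, 755000])  # toy SalePrice-like values

# Central tendency and spread of a continuous variable
print("mean:  ", s.mean())
print("median:", s.median())
print("std:   ", s.std())
print("IQR:   ", s.quantile(0.75) - s.quantile(0.25))
```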
# + id="oCZqg0VpIxDS" colab_type="code" outputId="b2e6d81c-7448-4801-f20f-b70732fb5e21" colab={"base_uri": "https://localhost:8080/", "height": 319}
#Analysis for numerical variable
train['SalePrice'].describe()
sns.distplot(train['SalePrice']);
#skewness and kurtosis
print("Skewness: %f" % train['SalePrice'].skew())
print("Kurtosis: %f" % train['SalePrice'].kurt())
# + [markdown] id="Dqe8wY8obTWj" colab_type="text"
# **The graph above shows positive skewness. Next we remove the skewness with a log transformation.**
# + [markdown] id="S1RXTt4ZbyCZ" colab_type="text"
# ### Transforming
# + id="s5KJQL9-b50d" colab_type="code" outputId="52eb9bdb-ea1e-4b8d-ad8b-25cd911e94fd" colab={"base_uri": "https://localhost:8080/", "height": 852}
from scipy import stats
from scipy.stats import norm, skew #for some statistics
# Plot histogram and probability
fig = plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train['SalePrice'] , fit=norm);
(mu, sigma) = norm.fit(train['SalePrice'])
print( '\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
plt.legend([r'Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)],
loc='best')
plt.ylabel('Frequency')
plt.title('SalePrice distribution')
plt.subplot(1,2,2)
res = stats.probplot(train['SalePrice'], plot=plt)
plt.suptitle('Before transformation')
# Apply transformation
train['SalePrice'] = np.log1p(train['SalePrice'])
# New prediction
y_train = train.SalePrice.values
y_train_orig = train.SalePrice
# Plot histogram and probability after transformation
fig = plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train['SalePrice'] , fit=norm);
(mu, sigma) = norm.fit(train['SalePrice'])
print( '\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
plt.legend([r'Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)],
loc='best')
plt.ylabel('Frequency')
plt.title('SalePrice distribution')
plt.subplot(1,2,2)
res = stats.probplot(train['SalePrice'], plot=plt)
plt.suptitle('After transformation')
# + [markdown] id="HLgDGsqSccn5" colab_type="text"
# # Concatenate train and test
# It is good practice to concatenate the train and test data before starting to work on missing values.
# + id="IcHFl8V3cywi" colab_type="code" outputId="fbd97275-b9bc-4a43-8a22-93869983a937" colab={"base_uri": "https://localhost:8080/", "height": 34}
# y_train_orig = train.SalePrice
# train.drop("SalePrice", axis = 1, inplace = True)
data_features = pd.concat((train, test), sort=False).reset_index(drop=True)
print(data_features.shape)
# print(train.SalePrice)
# + [markdown] id="sJrSTV6PdQAg" colab_type="text"
# ## Missing data
# + id="SEtVy2dpdREb" colab_type="code" outputId="a5ac2ce0-056e-4371-a286-ef193e9e877d" colab={"base_uri": "https://localhost:8080/", "height": 439}
#Let's check whether the data set has any missing values.
data_features.columns[data_features.isnull().any()]
data_features
# + id="FbhMkMondZxJ" colab_type="code" outputId="ea899593-f021-4a9e-847a-89f69c2db24d" colab={"base_uri": "https://localhost:8080/", "height": 445}
#plot of missing value attributes
plt.figure(figsize=(12, 6))
sns.heatmap(train.isnull())
plt.show()
# + id="SJEa1V8ydeUJ" colab_type="code" outputId="1e55d1fe-561b-4f25-a31e-67bdbe440a0a" colab={"base_uri": "https://localhost:8080/", "height": 357}
#missing value counts in each of these columns
Isnull = train.isnull().sum()
Isnull = Isnull[Isnull>0]
Isnull.sort_values(inplace=True, ascending=False)
Isnull
# + [markdown] id="6mzK6tE4dpqY" colab_type="text"
# ## Visualising missing values
# + id="7Oz-CTCHdrer" colab_type="code" outputId="d07a5f3d-763c-4c7a-aa3f-7fdd1a1abb07" colab={"base_uri": "https://localhost:8080/", "height": 409}
#Convert into dataframe
Isnull = Isnull.to_frame()
Isnull.columns = ['count']
Isnull.index.names = ['Name']
Isnull['Name'] = Isnull.index
#plot Missing values
plt.figure(figsize=(13, 5))
sns.set(style='whitegrid')
sns.barplot(x='Name', y='count', data=Isnull)
plt.xticks(rotation = 90)
plt.show()
# + id="J38tpHOud47V" colab_type="code" outputId="ea8029a3-5d90-4556-de9f-1d208a1273ae" colab={"base_uri": "https://localhost:8080/", "height": 669}
#Missing data table: total count and percentage of missing values per column
total = data_features.isnull().sum().sort_values(ascending=False)
percent = (data_features.isnull().sum()/data_features.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
# + id="eJSwlLa2grhF" colab_type="code" colab={}
#Separate the numerical columns into a new dataframe
#there are 38 numerical attributes out of the 81 attributes
train_corr = data_features.select_dtypes(include=[np.number])
# + id="7hkg1N9ag1Ai" colab_type="code" outputId="66332166-fcca-4f01-f564-0db95623d7af" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_corr.shape
# + id="5QAli9ZKg4O6" colab_type="code" outputId="ad2cc245-5bcf-473f-d729-d8cb29d47047" colab={"base_uri": "https://localhost:8080/", "height": 439}
#Drop Id because it is not needed for the correlation plot
train_corr = train_corr.drop(['Id'], axis=1)
# + [markdown] id="Is1IVi6Hh4wx" colab_type="text"
# # Attributes with correlation above 0.5 with SalePrice
#
# + id="XdDdGL6hh_z-" colab_type="code" outputId="05a93366-e0b3-4c9f-bdaa-58a0dc5368c6" colab={"base_uri": "https://localhost:8080/", "height": 561}
#0.5 is simply a threshold value; features with correlation above it tend to be informative.
corr = train_corr.corr()
top_feature = corr.index[abs(corr['SalePrice']) > 0.5]
plt.subplots(figsize=(12, 8))
top_corr = data_features[top_feature].corr()
sns.heatmap(top_corr, annot=True)
plt.show()
# + id="RW4GkjrhOXYS" colab_type="code" outputId="9d3f745e-36b2-4aa0-b608-2eecfacf13f0" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#scatterplot
sns.set()
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
sns.pairplot(train_corr[cols], height=2.5)  # 'size' was renamed to 'height' in seaborn 0.9
plt.show();
# + [markdown] id="LqSv_VZzik78" colab_type="text"
# ## OverallQual is highly correlated with the target SalePrice (r ≈ 0.82)
# + id="w4UB03VmimSQ" colab_type="code" outputId="bd7ae485-66a2-407e-8abf-be65739d4242" colab={"base_uri": "https://localhost:8080/", "height": 34}
#unique values of OverallQual
data_features.OverallQual.unique()
# + id="4HbM771wi8E3" colab_type="code" outputId="47c44b58-fa4a-43b1-904e-dec78bef57fb" colab={"base_uri": "https://localhost:8080/", "height": 302}
sns.barplot(x=data_features.OverallQual, y=data_features.SalePrice)
# + id="k9rJR-tYkEts" colab_type="code" outputId="57c476e5-f20e-430c-9bab-93f73936d459" colab={"base_uri": "https://localhost:8080/", "height": 697}
print("Find most important features relative to target")
# Restrict to numeric columns so .corr() does not fail on the remaining object columns
corr = data_features.select_dtypes(include=[np.number]).corr()
corr.sort_values(['SalePrice'], ascending=False, inplace=True)
corr.SalePrice
# + [markdown] id="hnucYyeelXlE" colab_type="text"
# # Imputing missing values
# + id="yXYAtcnflT0U" colab_type="code" colab={}
# More than 99% of PoolQC values are missing, so fill with 'None'
data_features['PoolQC'] = data_features['PoolQC'].fillna('None')
# + id="v2s1r55Mlf1d" colab_type="code" colab={}
#Attributes with around 50% missing values are filled with 'None'
data_features['MiscFeature'] = data_features['MiscFeature'].fillna('None')
data_features['Alley'] = data_features['Alley'].fillna('None')
data_features['Fence'] = data_features['Fence'].fillna('None')
data_features['FireplaceQu'] = data_features['FireplaceQu'].fillna('None')
data_features['SaleCondition'] = data_features['SaleCondition'].fillna('None')
# + id="hhx1LRuQlphH" colab_type="code" colab={}
#Group by neighborhood and fill missing values with the median LotFrontage of the neighborhood
data_features['LotFrontage'] = data_features.groupby("Neighborhood")["LotFrontage"].transform(
lambda x: x.fillna(x.median()))
# + id="OCXRI5KhlwJ6" colab_type="code" colab={}
#GarageType, GarageFinish, GarageQual and GarageCond: replace missing values with 'None'
for col in ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond']:
data_features[col] = data_features[col].fillna('None')
# + id="1lQRUVynl1H-" colab_type="code" colab={}
#GarageYrBlt, GarageArea and GarageCars: replace missing values with zero
for col in ['GarageYrBlt', 'GarageArea', 'GarageCars']:
data_features[col] = data_features[col].fillna(int(0))
# + id="r2um_oNWl5hb" colab_type="code" colab={}
#BsmtFinType2, BsmtExposure, BsmtFinType1, BsmtCond, BsmtQual: replace missing values with 'None'
for col in ('BsmtFinType2', 'BsmtExposure', 'BsmtFinType1', 'BsmtCond', 'BsmtQual'):
data_features[col] = data_features[col].fillna('None')
# + id="K0oaFy2Sl-dy" colab_type="code" colab={}
#MasVnrArea : replace with zero
data_features['MasVnrArea'] = data_features['MasVnrArea'].fillna(int(0))
# + id="pkqnUHKimIMg" colab_type="code" colab={}
#MasVnrType : replace with None
data_features['MasVnrType'] = data_features['MasVnrType'].fillna('None')
# + id="--V1k-bcmVG-" colab_type="code" colab={}
#Fill missing Electrical values with the mode
data_features['Electrical'] = data_features['Electrical'].fillna(data_features['Electrical'].mode()[0])
# + id="Wc85JwyWmcNa" colab_type="code" colab={}
#Utilities is nearly constant and could be dropped; left commented out here
#data_features = data_features.drop(['Utilities'], axis=1)
# + [markdown] id="wMu2FR6Xm3LF" colab_type="text"
# ## Encoding str to int
# + id="Takwwe3Dm4So" colab_type="code" colab={}
cols = ('FireplaceQu', 'BsmtQual', 'BsmtCond', 'GarageQual', 'GarageCond',
'ExterQual', 'ExterCond','HeatingQC', 'PoolQC', 'KitchenQual', 'BsmtFinType1',
'BsmtFinType2', 'Functional', 'Fence', 'BsmtExposure', 'GarageFinish', 'LandSlope',
'LotShape', 'PavedDrive', 'Street', 'Alley', 'CentralAir', 'MSSubClass', 'OverallCond',
'YrSold', 'MoSold', 'MSZoning', 'LandContour', 'LotConfig', 'Neighborhood',
'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st',
'Exterior2nd', 'MasVnrType', 'MasVnrArea', 'Foundation', 'GarageType', 'MiscFeature',
'SaleType', 'SaleCondition', 'Electrical', 'Heating')
# + id="3kY3h9cSm7uU" colab_type="code" colab={}
from sklearn.preprocessing import LabelEncoder
for c in cols:
lbl = LabelEncoder()
lbl.fit(list(data_features[c].values))
data_features[c] = lbl.transform(list(data_features[c].values))
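# To see what this encoding does, here is a toy example with hypothetical quality labels. One caveat worth knowing: `LabelEncoder` assigns codes in alphabetical order of the classes, so the integers do not respect the natural quality ordering (Ex > Gd > TA):

```python
from sklearn.preprocessing import LabelEncoder

# Toy column standing in for a quality attribute such as ExterQual
le = LabelEncoder()
encoded = le.fit_transform(['Gd', 'TA', 'Ex', 'Gd'])

# classes_ is sorted alphabetically; each code is the position in that list
print(list(le.classes_))   # ['Ex', 'Gd', 'TA']
print(list(encoded))       # [1, 2, 0, 1]
```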
# + [markdown] id="DsN2_NiRnFPL" colab_type="text"
# # Preparing data for prediction
# + [markdown] id="VbrFXd6mnugU" colab_type="text"
# ### Splitting the data back to train and test
# + id="X0pJY-FunCiW" colab_type="code" outputId="e93a6c75-46a8-4981-cd5d-923ea4678b98" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_data = data_features.iloc[:len(y_train), :]
test_data = data_features.iloc[len(y_train):, :]
print('Train data shape:', train_data.shape, '| Target (SalePrice) shape:', y_train.shape, '| Test shape:', test_data.shape)
# + id="MAX5ef6tn3VG" colab_type="code" colab={}
#Put the target variable into y
y = train_data['SalePrice']
# + id="BAq5iQRUn-ED" colab_type="code" colab={}
#Drop SalePrice from the training features
del train_data['SalePrice']
# + id="PY3SxU_DoASE" colab_type="code" colab={}
#Take their values in X and y
X = train_data.values
y = y.values
# + id="2XptG3GGoCdt" colab_type="code" colab={}
# Split the data into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
# + [markdown] id="hVu0MvjSoLD0" colab_type="text"
# # Linear Regression
# + id="d6DMtIEKoHeH" colab_type="code" colab={}
#Train the model
from sklearn import linear_model
model = linear_model.LinearRegression()
# + id="nyV2qGM5oQ3L" colab_type="code" outputId="6bd8f211-cb4c-49e3-f66c-761827a31469" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Fit the model
model.fit(X_train, y_train)
# + id="eVmNjBZSoTQl" colab_type="code" outputId="1724eb1b-87b1-4bcf-8c49-585675767a4b" colab={"base_uri": "https://localhost:8080/", "height": 51}
#Prediction
print("Predict value " + str(model.predict([X_test[142]])))
print("Real value " + str(y_test[142]))
# + id="CnBbkO9UoW_B" colab_type="code" outputId="4bc6950d-bf97-4327-b531-faddceb4a8a1" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Score: for a regressor, .score() returns R^2, not classification accuracy
print("R^2 score --> ", model.score(X_test, y_test)*100)
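# Since `.score` on a scikit-learn regressor is the coefficient of determination R², it can help to report RMSE alongside it. A self-contained sketch on synthetic data (all names and values below are illustrative, not the notebook's variables):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic regression problem standing in for the housing data
rng = np.random.RandomState(7)
X_demo = rng.rand(200, 3)
y_demo = X_demo @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

reg = LinearRegression().fit(X_demo, y_demo)
pred = reg.predict(X_demo)

# .score() is exactly r2_score of the predictions
r2 = reg.score(X_demo, y_demo)

# RMSE is in the same units as the (log-transformed) target
rmse = np.sqrt(mean_squared_error(y_demo, pred))
print(r2, rmse)
```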
# + [markdown] id="E7jXeTaroeE6" colab_type="text"
# # RandomForestRegression
# + id="kYgwaHCNoauX" colab_type="code" colab={}
#Train the model
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=1000)
# + id="H7n0MdsqosYZ" colab_type="code" outputId="17743384-5f96-413a-8734-9e71aeff944d" colab={"base_uri": "https://localhost:8080/", "height": 136}
#Fit
model.fit(X_train, y_train)
# + id="zjH1o9Ulou1L" colab_type="code" outputId="dfe642a6-f3e6-44d1-edf9-ff755098270c" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Score: for a regressor, .score() returns R^2, not classification accuracy
print("R^2 score --> ", model.score(X_test, y_test)*100)
# + [markdown] id="HOiA0MT1o5M8" colab_type="text"
# # GradientBoostingRegressor
# + id="GZuvPsvro1LB" colab_type="code" colab={}
#Train the model
from sklearn.ensemble import GradientBoostingRegressor
GBR = GradientBoostingRegressor(n_estimators=100, max_depth=4)
# + id="CENlSyBro9M_" colab_type="code" outputId="a0e95895-1181-42e3-ca02-08ee5a2a67ef" colab={"base_uri": "https://localhost:8080/", "height": 170}
#Fit
GBR.fit(X_train, y_train)
# + id="9Ns7XQdNpAhT" colab_type="code" outputId="76225d0b-299e-494e-90de-5863927ed304" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("R^2 score --> ", GBR.score(X_test, y_test)*100)
# + id="nThYdXqWpDtN" colab_type="code" colab={}
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Day 1. Chapter 6. R Programming
#
# ## Lesson 27. IF-ELSE Statements
#
# It is finally time to program at least a little logic with R! Our first step on this learning curve is to understand if, else and else if statements.
#
# The syntax of an `if` statement in R is:
#
#     if (condition){
#         # execute code
#     }
#
# What does that even mean, one might ask, having never seen an if statement before. It means we can add simple logic to our code: we say if a certain condition is met (i.e. TRUE), then the code inside the curly braces is executed.
#
# For example, imagine the two variables **heiss** (hot) and **temp** (temperature). Initially **heiss** is defined as FALSE. If **temp** rises above 35°C, then **heiss** becomes TRUE.
#
# Let's look at the code:
heiss <- FALSE
temp <- 28
# +
if (temp > 35){
heiss <- TRUE
}
heiss
# +
# redefine temp
temp <- 37
if (temp > 35){
heiss <- TRUE
}
heiss
# -
# We should remember to format the code so that we can easily open and read it again later. Usually we align the closing brace with the if statement it belongs to. Nevertheless, we could also be messy and, thanks to the braces, the code would still work:
if( 1 == 1){ print('Hi')}
# +
if(1 == 1)
{
print('Hi')
}
# Works... but is hard to read!
# -
# A good editor such as R Studio will help us align everything properly.
#
# ## else
#
# If we want to execute a different code block when the if condition is not met, we can use the **else** statement. The resulting syntax is:
#
#     if (condition) {
#         # code if the condition evaluates to TRUE
#     } else {
#         # code if the condition evaluates to FALSE
#     }
#
# Note the placement of the braces and the *else* keyword here.
# +
temp <- 40
if (temp > 35){
    print("Hot outside!")
} else{
    print("It's not too hot today!")
}
# -
# ## else if
#
# What if we want to specify and check several options? Then we add each further check via an **else if** statement. At the end we use **else** to define what happens when none of the checks evaluated to TRUE.
# +
temp <- 27
if (temp > 35){
    print("Hot outside!")
} else if(temp < 35 & temp > 20){
    print('Nice outside!')
} else if(temp < 20 & temp > 10){
    print("It's a bit cooler!")
} else{
    print("It's cold outside!")
}
# -
# ## Final example
#
# Let's look at a slightly more elaborate example of if, else if and else. Imagine we have a shop with two products: ham and cheese. We would like to create an automated report that can be sent to management, reporting how much we sell:
# +
# Products sold today
schinken <- 10
kaese <- 5
# Report for management
bericht <- 'empty'
if(schinken >= 10 & kaese >= 10){
    bericht <- "Both products sold well."
}else if(schinken == 0 & kaese == 0){
    bericht <- "Nothing sold at all!"
}else{
    bericht <- 'We had a few sales.'
}
print(bericht)
# -
# Congratulations! You have finished Lesson 27!
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kpedrok/python-data-science-biology-experiments/blob/main/Aula_02.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="UIsjjtsF74vf"
# # Lesson 01
# + colab={"base_uri": "https://localhost:8080/", "height": 439} id="jw4JmFUsNPwq" outputId="2a006b40-642a-49b9-d95b-8d52245e573d"
import pandas as pd
url_dados = 'https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true'
dados = pd.read_csv(url_dados, compression = 'zip')
dados
# + colab={"base_uri": "https://localhost:8080/", "height": 253} id="qBvT4sBNNPws" outputId="10ae6b6a-ad08-424d-86fc-989f06f356c9"
dados.head()
# + colab={"base_uri": "https://localhost:8080/"} id="fYdow_XdNPws" outputId="0a2da992-d08e-4a3e-a984-970399f5df9f"
dados.shape
# + colab={"base_uri": "https://localhost:8080/"} id="zOmQRiUTNPwt" outputId="8f146eae-2e1d-4a4c-d30b-49d453470e5f"
dados['tratamento']
# + colab={"base_uri": "https://localhost:8080/"} id="BrCJ2TzSNPwt" outputId="304725c2-8e29-436f-ba5d-9fcd132eef6a"
dados['tratamento'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="d-eF7mPaNPwt" outputId="64d61713-4665-48dd-f43c-b270168c6328"
dados['tempo'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="Zgm1XiOeNPwu" outputId="709138ca-c115-4566-9184-93edb44b401c"
dados['dose'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="dByGDqcGNPwu" outputId="5067870c-d7f2-4adf-d3e9-3790fabc42a9"
dados['droga'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="hvdM3kYSNPwv" outputId="1cfcdf22-75a3-49c4-8c52-789a6f0ca9dc"
dados['g-0'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="ZwjZj1F6NPwv" outputId="b7c3916c-9b9c-40be-e244-bd890885a116"
dados['tratamento'].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="8buFt2cKNPwv" outputId="37d81524-b5fd-405d-c081-80cfe1454a9e"
dados['dose'].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="jmkevYGTNPww" outputId="b14f4072-e25e-4186-a670-5ddd01e0d0aa"
dados['tratamento'].value_counts(normalize = True)
# + colab={"base_uri": "https://localhost:8080/"} id="owgn4RkDNPww" outputId="0d55360f-e1cd-45e8-a17f-f7a2745b74c7"
dados['dose'].value_counts(normalize = True)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="OzMt7-vdNPwx" outputId="6dcf67c5-6aff-456f-e791-378d616ca50d"
dados['tratamento'].value_counts().plot.pie()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="wTkLLtMnNPwx" outputId="9379ac2a-aaeb-46b2-c404-3b3cc934be6f"
dados['tempo'].value_counts().plot.pie()
# + colab={"base_uri": "https://localhost:8080/", "height": 289} id="ArVvC7PRNPwx" outputId="f851c259-8e19-4cd5-a88d-9f8217d45837"
dados['tempo'].value_counts().plot.bar()
# + colab={"base_uri": "https://localhost:8080/", "height": 253} id="ngAH-Ir6NPwy" outputId="77e9e4e0-1c38-40c2-f14c-0ee9d24972bd"
dados_filtrados = dados[dados['g-0'] > 0]
dados_filtrados.head()
# + [markdown] id="87yE0HvTNY6u"
# # Lesson 02
# + [markdown] id="xFXfW7vFvfK-"
# Hello!
#
# Welcome to Lesson 02!
#
# In Lesson 01 we started exploring a dataset related to the pharmaceutical industry. For that we used a library very well known in the data science world: Pandas.
#
# We used Pandas to open the dataset, which was in CSV format, and to generate a dataframe, a table, so we could start to analyse and understand what each column meant. With the help of Vanessa, a specialist in the field, we identified columns referring to data under treatment (drug) and under control. In addition, there are also several columns referring to genes.
#
# We started our analysis by looking at each column separately and also produced charts to support the analysis.
#
# And you? How did you get on with the challenges?
#
# Now, let's dive into Lesson 02 together!
# + colab={"base_uri": "https://localhost:8080/", "height": 439} id="hB3O-zZW9ZYw" outputId="26f4c60e-8e5e-4194-9dad-<KEY>"
dados
# + [markdown] id="zGB4Kq8JvmZ5"
# The dataset we have used so far has a variable called ```droga``` but we learned, with Vanessa's help, that this is not the best name for it.
# So we will use the Pandas ```rename``` function to rename this column to ```composto```.
# It is important to note that we pass the parameter ```inplace = True```, which modifies the data in place and updates the dataframe.
# If this parameter is not set, the default is ```inplace = False``` and the return value is a copy of the object, which you must save under another name if you want to keep it.
# + id="CFse2h0Vcrp4"
mapa = {'droga': 'composto'}
dados.rename(columns=mapa, inplace=True)
# + [markdown] id="lq5EDpC4w_rS"
# Here we use the ```head``` function to display the first 5 rows of the dataset, so we can check that the renaming happened the way we expected.
# + colab={"base_uri": "https://localhost:8080/", "height": 253} id="VX73K_tLcoQg" outputId="e22d6c59-cb21-439a-fc85-1e08562d1961"
dados.head()
# + [markdown] id="i7mgnrykiMNb"
# We want to improve the visualisation of our compound histogram and, as there are more than 3,000 distinct compounds in our dataset, we decided to pick out the 5 compounds that appear most often.
# For that we use the ```value_counts``` function (a Pandas function that counts the occurrences of the different values) and, since we only want the 5 most frequent elements, we also take ```index[0:5]```. This final part restricts ```value_counts``` (which sorts by count in descending order) to the index of the interval [0, 5[, i.e. the names of the 5 most frequent compounds.
# + id="D5hK5mcVej_G"
cod_compostos = dados['composto'].value_counts().index[0:5]
# + [markdown] id="hrOGPnbP4cE_"
# In the cell above we declared the variable ```cod_compostos``` and defined the expression assigned to it.
# Now we evaluate our new variable to check the result.
# + colab={"base_uri": "https://localhost:8080/"} id="LgrRw3eSfLYu" outputId="180f6b87-13fb-4b4c-e352-8043d7b5b192"
cod_compostos
# + [markdown] id="1JANbNGx4wQO"
# There are a few ways to filter a dataset, and we chose the Pandas ```query``` function which, just as a curiosity, is quite analogous to SQL (a programming language for databases).
# Its structure is very simple: we just take the dataframe, call the function, and pass as a parameter the condition by which our data should be filtered.
#
# In this part of the project we want to filter our data, selecting only the rows whose compound is in our list ```cod_compostos``` (the list of the 5 most tested compounds in the experiment), and we will use the ```query``` method to solve this problem.
#
# As the function's parameter we pass a string containing the logic for selecting the data. What we want is the following: ```query``` must return all rows containing the 5 most used compounds. So the string we need is: ```composto in @cod_compostos```.
#
# We use ```composto``` because that is the dataframe column to check, and ```cod_compostos``` because it is the list with the top 5 compounds; the detail here is that the ```@``` is needed to tell ```query``` that ```cod_compostos``` is a variable defined outside the function.
# + colab={"base_uri": "https://localhost:8080/", "height": 439} id="_GlSwTMGfpFs" outputId="3b0b5333-4fc1-4237-8285-6ece5f5d463c"
dados.query('composto in @cod_compostos')
# + [markdown] id="p9FQo1EuFF8M"
# Now that we have seen that our filtering worked and returns a dataset with 3,235 rows, we can use ```query``` as a parameter for ```countplot```, our bar chart.
# ```countplot``` is a ready-made chart from the ```Seaborn``` library, so we need its standard import (```import seaborn as sns```). Additionally, here in Google Colaboratory, to see the chart with the library's default styling we need to run ```sns.set()```.
# Furthermore, to refine the presentation of the chart we can use some features of the ```Matplotlib``` library (after first importing it: ```import matplotlib.pyplot as plt```).
# We also set the chart size via the ```figure``` function and its ```figsize=(x, y)``` parameter, and the title via ```set_title('Title')```.
# As mentioned in the lesson, we usually store our chart in a variable ```ax``` and then set the remaining options (for example, ```ax.set_title('Title')```).
# And finally, to display the bar chart, we call ```plt.show()```.
# + colab={"base_uri": "https://localhost:8080/", "height": 410} id="IePUr98kdgED" outputId="fbc8c43c-9a20-4349-9560-d741fd8ab2f7"
import seaborn as sns
import matplotlib.pyplot as plt
sns.set()
plt.figure(figsize=(8, 6))
order_desc = dados.query('composto in @cod_compostos')['composto'].value_counts().index
ax = sns.countplot(x = 'composto', data=dados.query('composto in @cod_compostos'), order = order_desc )
ax.set_title('Top 5 compostos')
plt.show()
# + [markdown] id="bA_RUClHM6I8"
# So far we have analysed time, dose, compounds and so on. However, we have not yet analysed the gene expression data (the g columns) or cell viability (the c columns). Could we build a bar chart for these data?
# Consider that our dataset has more than 3,000 compounds. But how many distinct values appear in the ```g-0``` column?
# To answer this question we use the Pandas ```unique()``` function, which returns the unique values present in the column in question. By default the return is an array of values, but our goal is to know the size of that array, so we wrap it in ```len``` to count its elements.
# + colab={"base_uri": "https://localhost:8080/"} id="DkQ2wH9Gj-w7" outputId="78509ddc-4c74-45f9-ffba-d2aa01ff0993"
len(dados['g-0'].unique())
# + [markdown] id="wM4yn28tRflX"
# Since there are many distinct values in the ```g-0``` column, it is not feasible to make the same chart we used before.
# So we need a new strategy to visualise our data, and here we will use a histogram.
# The first step is to identify the minimum (```min()```) and maximum (```max()```) values, to understand the numeric range we are working with.
# + colab={"base_uri": "https://localhost:8080/"} id="mCC5DpK1kmiX" outputId="2e4eca8b-205b-475c-8fce-ec778af41fa3"
dados['g-0'].min()
# + colab={"base_uri": "https://localhost:8080/"} id="qzH4Jl0OksuL" outputId="92d6a389-fdde-457a-8616-1d3bca0049e7"
dados['g-0'].max()
# + [markdown] id="-Tp7M_wWUCU8"
# Once we recognise that our range runs from roughly -5.5 to 10.0, we can move on to the histogram, plotted with the Pandas function ```dataframe['variable'].hist()```.
# As soon as we run this function, we notice the visualisation of these data is still not good: the default bar division of the histogram produces intervals that are too wide and get in the way of understanding the data.
# So we add a parameter to the function (```bins = number of breaks```) to divide, and consequently visualise, the data better.
# When we set bins to 100, we can see the shape gets quite close to a very well-known curve: the normal curve.
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="RnjRPpubk1z0" outputId="e84b0102-8751-4cf7-f1b5-fd59ddee86fa"
dados['g-0'].hist(bins = 100)
# + [markdown] id="ILTZmvd2fczV"
# Here we try the same histogram for another gene, ```g-19```, so we can compare the two charts and make some observations about them.
# We notice, for example, that the image follows the same curve tendency, but shifted to the right.
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="1_aedkvToYlB" outputId="4cb96108-7578-4ce1-847b-1dab29b191a1"
dados['g-19'].hist(bins = 100)
# + [markdown] id="i8HZHcicEJ0F"
# Since we concluded that plotting charts for every gene is not feasible, let's briefly analyse some summary statistics about them instead.
# For that we use the Pandas ```describe``` function, which computes and reports several statistics important for understanding the data (count, mean, standard deviation, minimum, some quartiles and maximum).
#
# + colab={"base_uri": "https://localhost:8080/", "height": 346} id="Kv-2zPLBprgD" outputId="3a9e1134-ed95-4b76-a8a0-9ee0fa6ccaee"
dados.describe()
# + [markdown] id="G4mCKUkUFYhb"
# At this point we could separate the variables we want to analyse (for example ```g-0``` and ```g-1```) by passing a list of column names.
# However, although this is a fine strategy for the selection, we have 771 genes, and writing them out one by one would be very laborious; we can do it another way.
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="OAkjbZdkqPlR" outputId="22ee247a-7a37-4e88-bd33-bafc71fe8a70"
dados[['g-0', 'g-1']]
# + [markdown] id="0LHi_9AUGFmS"
# A more direct strategy, in which we do not need to write out the list gene by gene, is to use the Pandas ```loc[]``` function.
# As arguments we first pass ```:```; the colon makes ```loc[]``` return all rows of the selected columns, which is important when we do not know how many rows a dataframe has. As the second element we pass the columns of interest; in this case we want the function to return all rows of the columns ```g-0``` through ```g-771```.
# And finally, we can chain our function of interest onto this filtered data: ```describe()```.
# + colab={"base_uri": "https://localhost:8080/", "height": 346} id="dQX0KOhUqm66" outputId="8e9c2e0f-09d6-4b95-a9e3-9eb5f52b8f0b"
dados.loc[:,'g-0':'g-771'].describe()
# + [markdown] id="8mp610qzHQGY"
# Although describe gathers the statistics we are interested in, the resulting dataframe is quite hard to analyse. To make it easier to understand, let's plot histograms that help us visualise each statistic across all the selected columns.
# Looking at the original dataframe, we previously made the histogram of a single column. But now our data of interest is the ```describe()``` built from ```loc[]```, and from this point of view we no longer want a histogram per column (gene); we want one per row (statistic). So we transpose the rows and columns (turning rows into columns and vice versa).
# For that we append ```.T``` to the previous code we wrote around ```describe()```.
# That is, we keep all the code up to ```describe()``` and add ```.T``` at the end. Running this line returns the same dataframe, but transposed. And since our interest here is producing histograms, we index the target statistic (```.T['statistic']```) and, last of all, append ```.hist(bins = number of breaks)``` so the histogram can be observed.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="ij9QVxGArZsd" outputId="6c12fbca-a6d3-4017-aa1b-d5beaae39ce8"
dados.loc[:,'g-0':'g-771'].describe().T['mean'].hist(bins=30)
# + [markdown] id="qbINMiTKKNqK"
# Below, we reproduce the code above, changing only the statistic to be analyzed (the minimum and the maximum, for example).
# This way we can notice the nuances of each metric.
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="u31uRxYWr2PG" outputId="0e280307-ffc2-4c19-b844-930f43053198"
dados.loc[:,'g-0':'g-771'].describe().T['min'].hist(bins=30)
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="bYnsG81Sr8mc" outputId="a50455d8-03ca-4add-dc34-50d3ff36e9d8"
dados.loc[:,'g-0':'g-771'].describe().T['max'].hist(bins=30)
# + [markdown] id="g8FvuJapKhco"
# It is well worth replicating the analysis developed for the ```genes (g)``` columns on the ```cell types (c)``` columns.
# To do so, we copy the line of code that produces the histograms, changing the selection ```loc[:,'g-0':'g-771'] -> loc[:,'c-0':'c-99']``` and the number of bins ```hist(bins=30) -> hist(bins=50)```.
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="hu2jZ2snuJ09" outputId="beeaf357-0770-4526-b302-fa1611cb4db7"
dados.loc[:,'c-0':'c-99'].describe().T['mean'].hist(bins=50)
# + [markdown] id="kUCr17j6xpu5"
# Another very interesting and useful kind of plot is the boxplot.
# To draw one, we use Seaborn's ```boxplot``` function, passing as arguments an ```x``` (```x = column to be plotted on this axis```) and the dataset (```data = dataset```).
# The boxplot shows a box in the middle from which we can identify the median (the line inside the box, the point with half of the data on either side), the outliers (points beyond the whiskers, representing unusually high or low values), where most of the data is concentrated (the main box, spanning the first quartile (25%) to the third quartile (75%)), and the minimum and maximum excluding outliers (the whiskers on the sides of the box).
# The boxplot is an important data visualization tool because a single plot lets us identify several statistical metrics.
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="R9LHaY0yv29K" outputId="3f207868-be5d-4955-a685-e61b038cfbc5"
sns.boxplot(x='g-0' , data=dados)
# + [markdown] id="-4kvTdIE3d0W"
# Besides defining the data that go on the x axis, we can also define the data for the other axis by assigning a value to the ```y``` parameter (```y = variable to be plotted on this axis```).
# As we can see, the boxplot for ```tratamento = com_droga``` shows many outliers, which raises a very interesting discussion: from a biological point of view, investigating those points is important, but depending on the field we are working in they may call for other treatments.
# That said, it is important for a data scientist not only to understand and manipulate the data, but also to know the business domain being studied.
# + colab={"base_uri": "https://localhost:8080/", "height": 520} id="h87JI4a_yNa1" outputId="e30c7d61-bb7f-49a7-802c-b7e971352b4a"
plt.figure(figsize=(10,8))
sns.boxplot(y='g-0', x='tratamento' , data=dados)
# + [markdown] id="KbXSuyvbqdTE"
# #Challenges
# + [markdown] id="X_Lg2XTu20ND"
# ###Challenge 01: Investigate why the tratamento class is so imbalanced.
#
# ###Challenge 02: Display the last 5 rows of the table.
#
# ###Challenge 03: Compute the proportion of the tratamento classes.
#
# ###Challenge 04: How many types of drugs were investigated?
#
# ###Challenge 05: Look up the query method in the pandas documentation.
#
# ###Challenge 06: Rename the columns, removing the hyphen.
#
# ###Challenge 07: Make the plots look great. (Matplotlib.pyplot)
#
# ###Challenge 08: Summarize what you learned from the data.
#
#
# ##Class 2
#
# ###Challenge 01: Sort the countplot.
#
# ###Challenge 02: Improve the visualization by changing the font size, etc.
#
# ###Challenge 03: Plot the histograms with seaborn.
#
# ###Challenge 04: Study the statistics returned by .describe().
#
# ###Challenge 05: Reflect on adjusting the size of the visualizations.
#
# ###Challenge 06: Do other analyses with the boxplot and even the histogram.
#
# ###Challenge 07: Summarize what you learned from the data.
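# As a starting point for the pandas ```query()``` challenge above, here is a minimal sketch; the toy dataframe below is hypothetical, and note that hyphenated column names (like the gene columns) must be wrapped in backticks inside the query string:

```python
import pandas as pd

# Hypothetical toy frame mimicking the dataset's layout
df = pd.DataFrame({
    'tratamento': ['com_droga', 'com_controle', 'com_droga'],
    'g-0': [0.5, -1.2, 2.0],
})

# Filter rows by a column value
com_droga = df.query('tratamento == "com_droga"')

# Hyphenated column names need backticks inside the query string
positive_g0 = df.query('`g-0` > 0')

print(len(com_droga), len(positive_g0))  # 2 2
```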
| Aula_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8
# language: python
# name: python3
# ---
# +
import random
import numpy as np
from scipy.spatial import Voronoi, voronoi_plot_2d
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
# +
# Setup polygon plotting area
x_bounds = np.array([0,13])
x_buffer = np.array([1,-1])
y_bounds = np.array([0,16])
y_buffer = np.array([1,-1])
# Setup printing area
x_plot = x_bounds + x_buffer
y_plot = y_bounds + y_buffer
# +
# Create random points for Voronoi diagram
num_points = 200
x = np.random.uniform(low=0, high=1, size=num_points).reshape(num_points, 1)*x_bounds[1]
y = np.random.uniform(low=0, high=1, size=num_points).reshape(num_points, 1)*y_bounds[1]
plt.scatter(x, y)
# +
# Prep Voronoi input
pts = np.hstack([x, y])
vor = Voronoi(pts)
verts = vor.vertices
shapes_ind = vor.regions
# Close the shapes by adding the first point to the end of each shape ([1,4,2]->[1,4,2,1])
# Remove empty shapes and shapes out of bounds (contains -1)
shapes_ind = [x + x[0:1] for i,x in enumerate(shapes_ind) if len(x) != 0 and -1 not in x]
shapes = [verts[x] for i,x in enumerate(shapes_ind)]
# Plot Voronoi diagram
fig, ax = plt.subplots(figsize=(5,5))
ax.set_xlim(x_plot)
ax.set_ylim(y_plot)
lc = LineCollection(shapes)
ax.add_collection(lc)
# +
perc_fill = 0.3
total_polys = len(shapes)
filled_polys = int(perc_fill*total_polys)
polygon_ind = random.sample(range(total_polys), filled_polys)
for i in range(filled_polys):
polygon = shapes[polygon_ind[i]]
center = np.mean(polygon, axis=0)
poly_center = polygon - center
min_scale = 0.1
n_fill_lines = 5
for i in np.linspace(min_scale, 1, num=n_fill_lines):
scaled_poly = i*(poly_center)+center
shapes.append(scaled_poly)
fig, ax = plt.subplots(figsize=(10,10))
ax.set_xlim(x_plot)
ax.set_ylim(y_plot)
lc = LineCollection(shapes)
ax.add_collection(lc)
x = np.random.uniform(low=0, high=1, size=num_points)
# -
| set_up.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p>Solutions to the <a href=https://ocw.mit.edu/courses/materials-science-and-engineering/3-11-mechanics-of-materials-fall-1999/modules/MIT3_11F99_composites.pdf>Introduction to Composite Materials</a> module of MIT's Open Course <b>Mechanics of Materials</b>.</br>
# Other material properties listed <a href=https://ocw.mit.edu/courses/materials-science-and-engineering/3-11-mechanics-of-materials-fall-1999/modules/MIT3_11F99_props.pdf>here</a>.</br>
# </br>
# <NAME>. 3.11 Mechanics of Materials. Fall 1999. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.</p>
import numpy as np
import sympy as sp
from sympy import init_printing
init_printing(use_latex=True)
import matplotlib.pyplot as plt
# <p>For problem #1, the longitudinal and transverse stiffnesses are calculated using the equations from the slab model.
# <div align=center>$E_1 = V_f*E_f + V_m*E_m$</div>
# <div align=center>$\frac{1}{E_2} = \frac{V_f}{E_f} + \frac{V_m}{E_m}$</div></p>
# +
# Problem 1
# for S-glass fibers
Ef = 85.5 # GPa
Vf = 0.7
# for the epoxy
Em = 3.5 # GPa
Vm = 1 - Vf
# from the slab model, the composite stiffnesses are
E1 = Vf*Ef + Vm*Em
E2 = 1/(Vf/Ef + Vm/Em)
print(f"The longitudinal stiffness: E1 = {E1:.1f} GPa")
print(f"The transverse stiffness: E2 = {E2:.1f} GPa")
# -
# <p>In problem #2, the longitudinal stiffness of an E-glass nylon composite is plotted over a range of fiber volumes, $V_f$.</p>
# +
# Problem 2
# for E-glass fibers
Ef = 72.4 # GPa
# for the nylon
Em = 3.0 # GPa
Vf = np.linspace(0, 1, 100, endpoint=True)
Vm = 1 - Vf
E1 = Vf*Ef + Vm*Em
plt.plot(Vf, E1)
plt.xlabel(r"$V_f$")
plt.ylabel(r"$E_1, GPa$")
plt.grid(True)
plt.show()
# -
# <p>In problem #3, the longitudinal breaking tensile stress of an E-glass epoxy composite is plotted over a range of fiber volumes, $V_f$. Breaking stress is determined mostly by the fiber strength. Using the fiber breaking strain and composite stiffness we have:</br>
# <div align=center>$\sigma_b = \epsilon_{fb}*E_1 = \epsilon_{fb}*(V_f*E_f + V_m*E_m)$</div></br>
# At low fiber volumes it's possible for the fibers to break and the matrix to carry the entire load, so the breaking stress in this region is described as:</br>
# <div align=center>$\sigma_b = V_m*\sigma_{mb}$</div></p>
# +
# Problem 3
# for E-glass fibers
Ef = 72.4 # GPa
sigma_fb = 2.4 # Gpa, fiber breaking stress
epsilon_fb = 0.026 # breaking strain of the fiber
# for the epoxy
Em = 3.5 # GPa
sigma_mb = 0.045 # Gpa, matrix breaking stress
epsilon_mb = 0.04 # breaking strain of the matrix
Vf = np.linspace(0, 1, 100, endpoint=True)
Vm = 1 - Vf
E1 = Vf*Ef + Vm*Em
sigma1 = epsilon_fb*E1
sigma2 = Vm*sigma_mb
sigma = [max(s1, s2) for s1, s2 in zip(sigma1, sigma2)]
plt.plot(Vf, sigma1)
plt.plot(Vf, sigma2)
plt.xlabel(r"$V_f$")
plt.ylabel(r"$\sigma_b, GPa$")
plt.grid(True)
plt.show()
# -
# <p>After plotting both breaking stress equations, it is clear that the breaking stress is determined only by the first equation.</p>
# <p>Problem #4 asks for the greatest fiber volume fraction given optimal fiber packing. Assuming that the optimal packing is <a href=https://en.wikipedia.org/wiki/Fiber_volume_ratio#Common_Fiber_Packing_Arrangements>hexagonal packing</a>, the fiber volume fraction is determined with the following equation:</br>
# <div align=center>$V_f = \left(\frac{\pi}{2\sqrt{3}}\right)*\left(\frac{r}{R}\right)^2$</div>
# Where $r$ is the fiber radius and $2R$ is the spacing between fiber centers; in an optimal pattern $R = r$, so the last term drops out of the equation.</p>
#Problem 4
Vf = (np.pi/(2*np.sqrt(3)))
print(f"The max fiber volume fraction = {Vf:.3f}")
# <p>Problem #5 asks to show how the slab model is used to calculate the transverse stiffness of the composite: $\frac{1}{E_2} = \frac{V_f}{E_f} + \frac{V_m}{E_m}$</br>
# Two assumptions are needed to reach this equation: first, the stresses in the fiber and the matrix are the same; and second, the deformation of the slab in the transverse direction is the sum of the fiber and matrix deformations:</br>
# <div align=center>$\epsilon_2*1 = \epsilon_f*V_f + \epsilon_m*V_m$</div></br>
# Deformation is $strain*length$; the length of a unit slab in the transverse direction is 1, and the lengths of the fiber and matrix portions equal their volume fractions. See Figure 3 from the composites module, shown below, for how the volume fractions add up to the unit length, $V_f + V_m = 1$.</br>
# <img align=center src="figure3_TransverseLoadingSlab.png" width=300 height=300 /></br>
# The stress-strain relationship $\epsilon = \frac{\sigma}{E}$ is substituted into the equation, resulting in:
# <div align=center>$\frac{\sigma_2}{E_2}*1 = \frac{\sigma_f}{E_f}*V_f + \frac{\sigma_m}{E_m}*V_m$</div></br>
# The first assumption, that the fiber and matrix stresses equal the composite transverse stress, $\sigma_2 = \sigma_f = \sigma_m$, allows all the $\sigma$ terms to cancel out, yielding the transverse stiffness equation.
# <div align=center>$\frac{1}{E_2} = \frac{V_f}{E_f} + \frac{V_m}{E_m}$</div></br>
# </p>
#
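# The cancellation argument in problem #5 can be verified symbolically with sympy (already imported in this notebook); this is an illustrative sketch, not part of the original problem set:

```python
import sympy as sp

E2, Ef, Em, Vf, sigma = sp.symbols('E_2 E_f E_m V_f sigma', positive=True)
Vm = 1 - Vf

# Strain compatibility with equal stresses: sigma/E2 = (sigma/Ef)*Vf + (sigma/Em)*Vm
eq = sp.Eq(sigma / E2, (sigma / Ef) * Vf + (sigma / Em) * Vm)

# Solving for E2 shows every sigma cancels, leaving the slab-model result
E2_expr = sp.solve(eq, E2)[0]
assert sp.simplify(1 / E2_expr - (Vf / Ef + Vm / Em)) == 0
print(E2_expr)
```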
| MechanicsOfMaterials/compositesModuleProblems.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tensors in Pytorch
#
# This set of exercises shows you how to create and manipulate tensors in Pytorch.
#
# Let's start by importing Pytorch.
import torch
# ## Construction of Tensors
#
# First, let's create a 1-dimensional tensor (just a fancy name for a vector) with 2 elements!
#
# The constructor takes one parameter for each dimension of the tensor, specifying the tensor's length in that dimension.
#
# This being a 1-dimensional tensor, its constructor needs just one parameter, specifying the length of the vector.
a1DTensor = torch.Tensor(2)
a1DTensor
# The vector, as you can see, is uninitialized (well, initialized with garbage). If you'd rather have it filled with zeros, you can do the following instead.
a1DTensor = torch.zeros(2)
a1DTensor
# If you want to initialize a tensor with more than one dimension, pass more parameters to the constructor or factory method.
#
# Other common factory methods are 'ones' and 'rand' and 'randn'.
#
# We create a 2-dimensional tensor filled with ones, a 3-dimensional tensor filled with random numbers between 0 and 1, and a 4-dimensional tensor filled with normally distributed random numbers with mean zero and variance 1.
a2DTensor = torch.ones(2, 3)
a2DTensor
a3DTensor = torch.rand(2, 3, 4)
a3DTensor
a4DTensor = torch.randn(2, 3, 4, 5)
a4DTensor
# So you can create tensors of arbitrary dimensions.
#
# The 2-dimensional tensors, as you know, are called matrices.
#
# Tensors are merely generalizations of vectors and matrices to higher dimensions.
#
# You can also create tensors from matrices. This is particularly useful when you're converting input data and output data into tensors.
a2DTensor = torch.Tensor([[1, 2, 3],[4, 5, 6]])
a2DTensor
# ## Editing Tensors
#
# You can slice and dice tensors.
#
# The following code assigns the value 4 to all elements in the second column of the 2D tensor.
a2DTensor[:,1] = 4
a2DTensor
# You can also edit individual elements in the tensor.
a2DTensor[0,1] = 2
a2DTensor[1,1] = 5
a2DTensor
# You can fill the tensor with the value 2.2 (any member function ending in an underscore modifies the object it was called on).
a2DTensor.fill_(2.2)
a2DTensor
# You can slice the matrix and return the second row as follows:
a2DTensor[1]
# Or the first column (the indices start at 0):
a2DTensor[:,0]
# You can also get back the dimensions of a tensor using the size() method.
a2DTensor.size()
# Or a specific dimension (and this returns an integer):
a2DTensor.size(0)
# There you go. If you have a 1D tensor and look up an element in it, you get a float or an int. Similarly, if you have a 2D tensor and look up an element at a row and column, you get a float or an int.
a1DTensor[0]
a2DTensor[0,0]
# By default, tensors are assumed to hold real numbers.
#
# You can also explicitly specify in the constructor whether the tensor should hold real numbers or be limited to integers.
#
# The following will hold only integers.
anIntTensor = torch.IntTensor(1,2)
anIntTensor
# It appears to hold floating point values because it hasn't been initialized.
#
# Fill it with something.
#
anIntTensor.fill_(5)
anIntTensor
# The following tensor is intended to hold real numbers.
aFloatTensor = torch.FloatTensor(1,2)
aFloatTensor
# Unlike an int tensor, you can fill a float tensor with real numbers.
aFloatTensor.fill_(5.5)
aFloatTensor
# ## Exercise: Operations on Tensors
#
# Create and add two tensors. This tutorial on Pytorch explains how: http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#operations
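# One possible starting point for this exercise (a sketch; the linked tutorial covers many more operations):

```python
import torch

a = torch.ones(2, 3)
b = torch.full((2, 3), 2.0)

# Three equivalent ways to add tensors
c1 = a + b            # operator form
c2 = torch.add(a, b)  # functional form
c3 = a.clone()
c3.add_(b)            # in-place form (note the trailing underscore)

print(c1)
```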
| .ipynb_checkpoints/exercise_110-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib as mpl
from utils import plot, softmax
import matplotlib.pyplot as plt
import numpy as np
import _pickle as pkl
import scipy.stats as stats
import tensorflow as tf
import time
import scipy
from ig_attack import IntegratedGradientsAttack
from utils import dataReader, get_session, integrated_gradients
from model import Model
tf.logging.set_verbosity(tf.logging.ERROR)
# -
X, y = dataReader()
classes = ['Daffodil','Snowdrop', 'Lily Valley', 'Bluebell',
'Crocus', 'Iris', 'Tigerlily', 'Tulip',
'Fritillary', 'Sunflower', 'Daisy', 'Colts Foot',
           'Dandelion', 'Cowslip', 'Buttercup', 'Windflower',
'Pansy']
n = 28
original_label = y[n]
test_image = X[n]
plt.rcParams["figure.figsize"]=8,8
print("Image ID: {}, Image Label : {}".format(n, classes[y[n]]))
# %matplotlib inline
plt.imshow(X[n])
# +
tf.reset_default_graph()
sess = get_session()
model = Model(create_saliency_op = 'ig')
# restore models
model_dir = 'models/nat_trained'
saver = tf.train.Saver()
checkpoint = tf.train.latest_checkpoint(model_dir)
saver.restore(sess, checkpoint)
# +
k_top = 1000 #Recommended for ImageNet
eval_k_top = 1000
num_steps = 100 #Number of steps in Integrated Gradients Algorithm (refer to the original paper)
attack_method = 'topK'
epsilon = 8.0 #Maximum allowed perturbation for each pixel
attack_steps = 100
attack_times = 1
alpha = 1.0
attack_measure = "kendall"
reference_image = np.zeros((128, 128, 3)) #Our chosen reference(the mean image)
module = IntegratedGradientsAttack(sess = sess, test_image = test_image,
original_label = original_label, NET = model,
attack_method = attack_method, epsilon = epsilon,
k_top = k_top, eval_k_top = eval_k_top, num_steps = num_steps,
attack_iters = attack_steps,
attack_times = attack_times,
alpha = alpha,
attack_measure = attack_measure,
reference_image = reference_image,
same_label = True)
# +
output = module.iterative_attack_once()
print('''For maximum allowed perturbation size equal to {}, the resulting perturbation size was equal to {}'''.format(epsilon, np.max(np.abs(test_image - module.perturbed_image))))
print('''{} % of the {} most salient pixels in the original image are among {} most salient pixels of the
perturbed image'''.format(output[0]*100, eval_k_top, eval_k_top))
print("The spearman rank correlation between saliency maps is equal to {}".format(output[1]))
print("The kendall rank correlation between saliency maps is equal to {}".format(output[2]))
nat_prediction = sess.run(model.prediction, feed_dict={model.input: [test_image], model.label: [original_label]})
adv_prediction = sess.run(model.prediction, feed_dict={model.input: [module.perturbed_image], model.label: [original_label]})
print('nat_prediction: %s, adv_prediction: %s'%(int(nat_prediction), int(adv_prediction)))
# -
nat_output = sess.run(model.output_with_relu, feed_dict={model.input: [test_image]})
nat_pred = softmax(nat_output)
adv_output = sess.run(model.output_with_relu, feed_dict={model.input: [module.perturbed_image]})
adv_pred = softmax(adv_output)
print('original prediction: {}, confidence: {}'.format(classes[np.argmax(nat_pred)], np.max(nat_pred)))
print('perturbed prediction: {}, confidence: {}'.format(classes[np.argmax(adv_pred)], np.max(adv_pred)))
# +
original_IG = integrated_gradients(sess, reference_image, test_image, original_label, model, gradient_func='output_input_gradient', steps=num_steps)
mpl.rcParams["figure.figsize"]=8,8
plt.rc("text",usetex=False)
plt.rc("font",family="sans-serif",size=12)
saliency = np.sum(np.abs(original_IG),-1)
original_saliency = 128*128*saliency/np.sum(saliency)
plt.subplot(2,2,1)
plt.title("Original Image")
image = X[n].astype(np.uint8)
plt.imshow(image)
plt.subplot(2,2,2)
plt.title("Original Image Saliency Map")
plt.imshow(original_saliency, cmap="hot")
perturbed_IG = integrated_gradients(sess, reference_image, module.perturbed_image, original_label, model, gradient_func='output_input_gradient', steps=num_steps)
saliency = np.sum(np.abs(perturbed_IG),-1)
perturbed_saliency = 128*128*saliency/np.sum(saliency)
plt.subplot(2,2,3)
plt.title("Perturbed Image")
perturbed_image = (module.perturbed_image).astype(np.uint8)
plt.imshow(perturbed_image)
plt.subplot(2,2,4)
plt.title("Perturbed Image Saliency Map")
plt.imshow(perturbed_saliency, cmap="hot")
| Flower/test_ig_attack.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import re
import cv2
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import text
from sklearn.metrics import mean_squared_error, confusion_matrix
from sklearn.model_selection import train_test_split, GridSearchCV, KFold
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import classification_report
from sklearn.linear_model import SGDRegressor
import matplotlib.pyplot as plt
from sklearn.naive_bayes import ComplementNB
# import tensorflow as tf
import imblearn
from imblearn.over_sampling import SMOTE
stopwords = text.ENGLISH_STOP_WORDS
data = pd.read_csv("data_race_pred.csv")
data[:5]
def preprocess(data):
t = data
t = t.lower() # lower letters
t = re.sub(r'http\S+', '', t) # remove HTTP links
t = re.sub(r'@[^\s]+', '', t) # remove usernames
t = re.sub(r'#[^\s]+', '', t) # remove hashtags
t = re.sub(r'([\'\"\.\(\)\!\?\\\/\,])', r' \1 ', t)
t = re.sub(r'[^\w\s\?]', '', t) # remove punctuations except ?
t = ' '.join([word for word in t.split() if word not in stopwords]) # remove stopwords
return t
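# A quick sanity check of the cleaning steps above, on a made-up tweet (the example string is hypothetical; the function is restated here so the snippet runs on its own):

```python
import re
from sklearn.feature_extraction import text

stopwords = text.ENGLISH_STOP_WORDS

def preprocess(t):
    # mirrors the notebook's cleaning pipeline
    t = t.lower()
    t = re.sub(r'http\S+', '', t)    # remove HTTP links
    t = re.sub(r'@[^\s]+', '', t)    # remove usernames
    t = re.sub(r'#[^\s]+', '', t)    # remove hashtags
    t = re.sub(r'([\'\"\.\(\)\!\?\\\/\,])', r' \1 ', t)
    t = re.sub(r'[^\w\s\?]', '', t)  # remove punctuation except ?
    return ' '.join(w for w in t.split() if w not in stopwords)

cleaned = preprocess("Loving this! Details at https://t.co/abc #hype @someone right?")
print(cleaned)  # links, mentions, hashtags, stopwords and most punctuation are gone
```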
data.replace([np.inf, -np.inf], np.nan, inplace=True)
data = data.dropna()
data["race"] = data["race"].astype(int)
data = data.drop(data[data["race"] == 5].index)
data["tweets"] = data["tweets"].apply(preprocess)
data[:5]
data["race"].value_counts()
# # Try Multiple Models
tfidf = TfidfVectorizer(max_features = 7500, ngram_range=(1,2))
X = tfidf.fit_transform(data['tweets'])
y = data['race']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4.0, random_state=0)
# +
reg = LinearRegression().fit(X_train, y_train)
y_pred = reg.predict(X_test)
y_pred = [round(x) for x in y_pred]
for index in range(len(y_pred)):
if y_pred[index] < 1:
y_pred[index] = 1
if y_pred[index] > 4:
y_pred[index] = 4
print(set(y_pred))
# +
print(classification_report(y_test, y_pred))
print("MSE on testing set = ", mean_squared_error(y_test, y_pred))
# Plot a confusion matrix
cm = confusion_matrix(y_test, y_pred, normalize='true')
sns.heatmap(cm, annot=True)
# +
log = LogisticRegression(multi_class='multinomial', solver='lbfgs').fit(X_train, y_train)
y_pred = log.predict(X_test)
y_pred = [round(x) for x in y_pred]
for index in range(len(y_pred)):
if y_pred[index] < 1:
y_pred[index] = 1
if y_pred[index] > 4:
y_pred[index] = 4
print(set(y_pred))
# +
print(classification_report(y_test, y_pred))
print("MSE on testing set = ", mean_squared_error(y_test, y_pred))
# Plot a confusion matrix
cm = confusion_matrix(y_test, y_pred, normalize='true')
sns.heatmap(cm, annot=True)
# -
set(y_test) - set(y_pred)
# +
# naive Bayes
clf = ComplementNB().fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_pred = [round(x) for x in y_pred]
for index in range(len(y_pred)):
if y_pred[index] < 1:
y_pred[index] = 1
if y_pred[index] > 4:
y_pred[index] = 4
# +
print(classification_report(y_test, y_pred))
print("MSE on testing set = ", mean_squared_error(y_test, y_pred))
# Plot a confusion matrix
cm = confusion_matrix(y_test, y_pred, normalize='true')
sns.heatmap(cm, annot=True)
# -
# # Apply Oversampling
X = tfidf.fit_transform(data['tweets'])
y = data['race']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4.0, random_state=0)
oversample = SMOTE()
X_train, y_train = oversample.fit_resample(X_train, y_train)
print(y_train.value_counts())
log = LogisticRegression(multi_class='multinomial', solver='lbfgs').fit(X_train, y_train)
y_pred = log.predict(X_test)
y_pred = [round(x) for x in y_pred]
for index in range(len(y_pred)):
if y_pred[index] < 1:
y_pred[index] = 1
if y_pred[index] > 4:
y_pred[index] = 4
# +
print(classification_report(y_test, y_pred))
print("MSE on testing set = ", mean_squared_error(y_test, y_pred))
# Plot a confusion matrix
cm = confusion_matrix(y_test, y_pred, normalize='true')
sns.heatmap(cm, annot=True)
# +
# naive Bayes
clf = ComplementNB().fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_pred = [round(x) for x in y_pred]
for index in range(len(y_pred)):
if y_pred[index] < 1:
y_pred[index] = 1
if y_pred[index] > 4:
y_pred[index] = 4
print(classification_report(y_test, y_pred))
print("MSE on testing set = ", mean_squared_error(y_test, y_pred))
# Plot a confusion matrix
cm = confusion_matrix(y_test, y_pred, normalize='true')
sns.heatmap(cm, annot=True)
| race_tweets_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import csv
import random
import cv2
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.layers import Conv2D, Lambda, MaxPooling2D, Dropout, Activation, Convolution2D, Cropping2D
# ## Read all the training data
data = []
measurements = []
with open('/home/mengling/Desktop/train/driving_log.csv') as csvfile:
reader = csv.reader(csvfile)
for line in reader:
data.append(line)
measurements.append(line[3])
measurements = [float(i) for i in measurements if float(i) != 0]  # csv values are strings; convert before comparing to 0
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
plt.hist(measurements, bins=100, alpha=0.5)
# data[0]
def train_generator(samples, batch_size=32):
num_samples = len(samples)
while 1: # Loop forever so the generator never terminates
from sklearn.utils import shuffle
shuffle(samples)
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
images = []
angles = []
# Read center, left and right images from a folder containing Udacity data and my data
for batch_sample in batch_samples:
center_name = '/home/animesh/Documents/CarND/CarND-Behavioral-Cloning-P3/data2/IMG/'+batch_sample[0].split('/')[-1]
center_image = cv2.imread(center_name)
center_image = cv2.cvtColor(center_image, cv2.COLOR_BGR2RGB)
left_name = '/home/animesh/Documents/CarND/CarND-Behavioral-Cloning-P3/data2/IMG/'+batch_sample[1].split('/')[-1]
left_image = cv2.imread(left_name)
left_image = cv2.cvtColor(left_image, cv2.COLOR_BGR2RGB)
right_name = '/home/animesh/Documents/CarND/CarND-Behavioral-Cloning-P3/data2/IMG/'+batch_sample[2].split('/')[-1]
right_image = cv2.imread(right_name)
right_image = cv2.cvtColor(right_image, cv2.COLOR_BGR2RGB)
center_angle = float(batch_sample[3])
# Apply correction for left and right steering
correction = 0.20
left_angle = center_angle + correction
right_angle = center_angle - correction
# Randomly include either center, left or right image
num = random.random()
if num <= 0.33:
select_image = center_image
select_angle = center_angle
images.append(select_image)
angles.append(select_angle)
elif num>0.33 and num<=0.66:
select_image = left_image
select_angle = left_angle
images.append(select_image)
angles.append(select_angle)
else:
select_image = right_image
select_angle = right_angle
images.append(select_image)
angles.append(select_angle)
# Randomly horizontally flip selected images with 80% probability
keep_prob = random.random()
if keep_prob >0.20:
flip_image = np.fliplr(select_image)
flip_angle = -1*select_angle
images.append(flip_image)
angles.append(flip_angle)
# Augment with images of different brightness
# Randomly select a percent change
change_pct = random.uniform(0.4, 1.2)
# Change to HSV to change the brightness V
hsv = cv2.cvtColor(select_image, cv2.COLOR_RGB2HSV)
hsv[:, :, 2] = hsv[:, :, 2] * change_pct
# Convert back to RGB and append
bright_img = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
images.append(bright_img)
angles.append(select_angle)
## Randomly shear image with 80% probability
shear_prob = random.random()
if shear_prob >=0.20:
shear_range = 40
rows, cols, ch = select_image.shape
dx = np.random.randint(-shear_range, shear_range + 1)
# print('dx',dx)
random_point = [cols / 2 + dx, rows / 2]
pts1 = np.float32([[0, rows], [cols, rows], [cols / 2, rows / 2]])
pts2 = np.float32([[0, rows], [cols, rows], random_point])
dsteering = dx / (rows / 2) * 360 / (2 * np.pi * 25.0) / 10.0
M = cv2.getAffineTransform(pts1, pts2)
                    shear_image = cv2.warpAffine(select_image, M, (cols, rows), borderMode=1)  # shear the selected image, matching the adjusted angle
shear_angle = select_angle + dsteering
images.append(shear_image)
angles.append(shear_angle)
            # Convert the accumulated batch to arrays and yield a shuffled batch
X_train = np.array(images)
y_train = np.array(angles)
yield shuffle(X_train, y_train)
# +
def data_generator(log_path, img_path, batch_size=32):
samples, images, measurements = [], [], []
with open(log_path) as csvfile:
reader = csv.reader(csvfile)
for line in reader:
samples.append(line)
num_samples = len(samples)
while 1: # Loop forever so the generator never terminates
from sklearn.utils import shuffle
shuffle(samples)
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples[offset:offset+batch_size]
            images, measurements = [], []  # reset the batch buffers
            # Read center, left and right images from a folder containing Udacity data and my data
            for batch_sample in batch_samples:
                for i in range(3):
                    source_path = batch_sample[i]
                    filename = source_path.split("/")[-1]
                    current_path = img_path + filename
                    image = cv2.imread(current_path)
                    images.append(image)
                    # center image
                    if i == 0:
                        measurement = float(batch_sample[3])
                    # left image
                    elif i == 1:
                        measurement = float(batch_sample[3]) + 0.1
                    # right image
                    elif i == 2:
                        measurement = float(batch_sample[3]) - 0.1
                    measurements.append(measurement)
            yield shuffle(np.array(images), np.array(measurements))
# -
def read_data(log_path, img_path, augment=True):
lines, images, measurements = [], [], []
with open(log_path) as csvfile:
reader = csv.reader(csvfile)
for line in reader:
lines.append(line)
for line in lines:
source_path = line[0]
filename = source_path.split("/")[-1]
current_path = img_path + filename
image = cv2.imread(current_path)
images.append(image)
measurement = float(line[3])
measurements.append(measurement)
if augment:
augmented_images, augmented_measurements = [], []
for image, measurement in zip(images, measurements):
augmented_images.append(image)
augmented_measurements.append(measurement)
augmented_images.append(cv2.flip(image, 1))
augmented_measurements.append(measurement * -1.0)
return augmented_images, augmented_measurements
else:
return images, measurements
def read_data(log_path, img_path, augment=True):
lines, images, measurements = [], [], []
with open(log_path) as csvfile:
reader = csv.reader(csvfile)
for line in reader:
lines.append(line)
for line in lines:
for i in range(3):
source_path = line[i]
filename = source_path.split("/")[-1]
current_path = img_path + filename
image = cv2.imread(current_path)
images.append(image)
# center image
if i == 0:
measurement = float(line[3])
# left image
elif i == 1:
measurement = float(line[3]) + 0.1
elif i == 2:
measurement = float(line[3]) - 0.1
measurements.append(measurement)
if augment:
augmented_images, augmented_measurements = [], []
for image, measurement in zip(images, measurements):
augmented_images.append(image)
augmented_measurements.append(measurement)
augmented_images.append(cv2.flip(image, 1))
augmented_measurements.append(measurement * -1.0)
return augmented_images, augmented_measurements
else:
return images, measurements
images1, measurements1 = read_data('/home/mengling/Desktop/train01/driving_log.csv',
'/home/mengling/Desktop/train01/IMG/', augment=True)
images2, measurements2 = read_data('/home/mengling/Desktop/train01b/driving_log.csv',
'/home/mengling/Desktop/train01b/IMG/', augment=True)
images3, measurements3 = read_data('/home/mengling/Desktop/train01c/driving_log.csv',
'/home/mengling/Desktop/train01c/IMG/', augment=True)
images4, measurements4 = read_data('/home/mengling/Desktop/train01d/driving_log.csv',
'/home/mengling/Desktop/train01d/IMG/', augment=True)
images5, measurements5 = read_data('/home/mengling/Desktop/train02/driving_log.csv',
'/home/mengling/Desktop/train02/IMG/', augment=True)
images = np.concatenate([images1, images2, images3, images4])
measurements = np.concatenate([measurements1, measurements2, measurements3, measurements4])
X_train = np.array(images)
y_train = np.array(measurements)
X_train.shape
# ## Build a Keras Model
model = Sequential()
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160,320,3)))
model.add(Cropping2D(cropping=((70, 25), (0,0))))
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), activation='relu', dim_ordering="tf"))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu', dim_ordering="tf"))
model.add(Dropout(p=0.7))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu', dim_ordering="tf"))
model.add(Convolution2D(64, 3, 3, activation='relu', dim_ordering="tf"))
model.add(Convolution2D(64, 3, 3, activation='relu', dim_ordering="tf"))
model.add(Dropout(p=0.7))
model.add(Flatten())
model.add(Dense(100))
model.add(Dropout(p=0.7))
model.add(Dense(50))
model.add(Dense(10))
model.add(Dense(1))
# ## Compile and Train the model on training data, and save it as model.h5
# +
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, y_train, validation_split=0.2, shuffle=True)
model.save('/home/mengling/projects/carnd/Term1/CarND-Behavioral-Cloning-P3/model.h5')
| model_explore.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import seaborn as sns
# %matplotlib inline
# Since there are well-meaning people who are nonetheless radical, we have to clean up the disparate formats of government documents
# +
# <NAME> 2015
data2015 = pd.read_csv('../data/raw/escuelas-elecciones-2015-cordoba.csv')
data2015.head()
# -
# We split `Escuela` into `Escuela`, `direccion`, and `barrio`. Which is a shame, because the 2017 sample has no `barrio`. Or else there is some confusion about what each sample names a circuit, which would also be a shame.
data_copy = data2015.copy()
data_copy[['Escuela', 'direccion', 'barrio']] = data_copy.Escuela.str.split(' - ', expand=True)
data_copy = data_copy.rename(columns={
'Escuela': 'escuela',
'direccion': 'direccion',
'Seccion Nro': 'seccion_nro',
'Seccion Nombre': 'seccion_nombre',
'Circuito Nro': 'circuito_nro',
'Circuito Nombre': 'circuito_nombre',
'Mesas': 'mesas',
'Desde': 'desde',
'Hasta': 'hasta',
'Electores': 'electores',
'barrio': 'barrio'
})
data_copy[['escuela', 'direccion', 'seccion_nro', 'seccion_nombre', 'circuito_nro', 'circuito_nombre', 'mesas', 'desde', 'hasta', 'electores']].to_csv('../data/raw/escuelas-elecciones-2015-cordoba-CLEAN.csv', index=False)
data2015 = pd.read_csv('../data/raw/escuelas-elecciones-2015-cordoba-CLEAN.csv')
data2015.sort_values(by='desde').head()
# There we go... Much better. Now let's look at 2017.
data2017 = pd.read_csv('../data/raw/escuelas-elecciones-2017-cordoba.csv')
print(len(data2017))
data2017.sort_values(by='desde').head()
# Those column names are not normal; Spanglish is really bad.
data_copy = data2017.copy()
data_copy.columns = ['escuela', 'direccion', 'seccion_nro', 'seccion_nombre', 'circuito_nro', 'circuito_nombre', 'mesas', 'desde', 'hasta', 'electores',]
data_copy.to_csv('../data/raw/escuelas-elecciones-2017-cordoba-CLEAN.csv', index=False)
data2017 = pd.read_csv('../data/raw/escuelas-elecciones-2017-cordoba-CLEAN.csv')
data2017.head()
results = pd.read_csv('../data/processed/resultados-secciones-2015.csv', index_col=0)
print(len(results))
results.head()
# Let's try to integrate the carta marina (polling place) data with the results.
# We will iterate over the per-mesa (ballot box) results, check whether each mesa falls within the range of mesas assigned to each school, and add up the results.
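The per-mesa aggregation described in the comment above can be sketched on toy data. The column names follow the notebook, but the frames here are made up purely for illustration:

```python
import pandas as pd

# toy stand-ins for the notebook's tables (illustrative values only)
schools = pd.DataFrame({"circuito_nro": [1, 1], "desde": [1, 4],
                        "hasta": [3, 6], "total": [0, 0]})
mesas = pd.DataFrame({"nombre_circuito": [1, 1, 1],
                      "numero_mesa": [2, 5, 6], "total": [10, 20, 30]})

for x in mesas.itertuples():
    # rows in the same circuit whose mesa range contains this mesa
    mask = ((schools.circuito_nro == x.nombre_circuito)
            & (schools.desde <= x.numero_mesa)
            & (schools.hasta >= x.numero_mesa))
    schools.loc[mask, "total"] += x.total

print(schools["total"].tolist())  # [10, 50]
```

Using `.loc[mask, columns] += …` keeps the update on the original frame; chained indexing like `df[mask][columns] += …` would silently modify a copy.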
completo = data2015.copy()
store = {}
columns = ['votos_fpv', 'votos_cambiemos', 'votos_blancos', 'votos_nulos', 'votos_recurridos', 'total']
def merge_results(df):
    for x in df.itertuples():
        value = x.numero_mesa
        circuito = x.nombre_circuito
        # rows of `completo` in the same circuit whose mesa range contains this mesa
        mask = ((completo.circuito_nro == circuito)
                & (completo.desde <= value) & (completo.hasta >= value))
        subset = [getattr(x, c) for c in columns]
        completo.loc[mask, columns] += subset
    return completo
mask = results.nombre_circuito.isin(data2015[data2015.seccion_nro==1].circuito_nro.unique())
results[mask].groupby('nombre_circuito')[columns].plot.bar()
# +
| notebooks/0.1-normalizar-cartas-marinas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SQL
#
# Accessing data stored in databases is a routine exercise. I demonstrate a few helpful methods in the Jupyter Notebook.
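Before the `ipython-sql` magics below, the same round trip can be done with nothing but the standard library's `sqlite3` module (the table and values mirror the presidents example that follows):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database
con.execute("CREATE TABLE presidents (first_name, last_name, year_of_birth)")
con.execute("INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809)")
con.execute("INSERT INTO presidents VALUES ('Barack', 'Obama', 1961)")
rows = con.execute("SELECT last_name FROM presidents "
                   "WHERE year_of_birth > 1825").fetchall()
con.close()
print(rows)  # [('Obama',)]
```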
# !conda install ipython-sql -y
# %load_ext sql
# %config SqlMagic.autopandas=True
import pandas as pd
import sqlite3
# ```SQL
# CREATE TABLE presidents (first_name, last_name, year_of_birth);
# INSERT INTO presidents VALUES ('George', 'Washington', 1732);
# INSERT INTO presidents VALUES ('John', 'Adams', 1735);
# INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);
# INSERT INTO presidents VALUES ('James', 'Madison', 1751);
# INSERT INTO presidents VALUES ('James', 'Monroe', 1758);
# INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);
# INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);
# INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);
# INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);
# INSERT INTO presidents VALUES ('Barack', 'Obama', 1961);
# ```
# + magic_args="sqlite://" language="sql"
# CREATE TABLE presidents (first_name, last_name, year_of_birth);
# INSERT INTO presidents VALUES ('George', 'Washington', 1732);
# INSERT INTO presidents VALUES ('John', 'Adams', 1735);
# INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);
# INSERT INTO presidents VALUES ('James', 'Madison', 1751);
# INSERT INTO presidents VALUES ('James', 'Monroe', 1758);
# INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);
# INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);
# INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);
# INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);
# INSERT INTO presidents VALUES ('Barack', 'Obama', 1961);
# -
# later_presidents = %sql SELECT * FROM presidents WHERE year_of_birth > 1825
later_presidents
type(later_presidents)
later_presidents
con = sqlite3.connect("presidents.sqlite")
later_presidents.to_sql("presidents", con, if_exists='replace')
# ## Through pandas directly
# +
con = sqlite3.connect("presidents.sqlite")
cur = con.cursor()
new_dataframe = pd.read_sql("""SELECT *
FROM presidents""",
con=con)
con.close()
# -
new_dataframe
type(new_dataframe)
| notebook-tutorial/notebooks/04-SQL-Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.8 ('base')
# language: python
# name: python3
# ---
# +
# Cell loads the data
from dataset_loader import data_loader, get_descriptors, one_filter, data_scaler
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
from sklearn import preprocessing
# file name and data path
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
base_path = os.getcwd()
file_name = 'data/CrystGrowthDesign_SI.csv'
"""
Data description.
Descriptors:
'void fraction', 'Vol. S.A.', 'Grav. S.A.', 'Pore diameter Limiting', 'Pore diameter Largest'
Source task:
'H2@100 bar/243K (wt%)'
Target tasks:
'H2@100 bar/130K (wt%)' 'CH4@100 bar/298 K (mg/g)' '5 bar Xe mol/kg' '5 bar Kr mol/kg'
"""
descriptor_columns = ['void fraction', 'Vol. S.A.', 'Grav. S.A.', 'Pore diameter Limiting', 'Pore diameter Largest']
one_filter_columns = ['H2@100 bar/243K (wt%)']
another_filter_columns = ['H2@100 bar/130K (wt%)']
# load data
data = data_loader(base_path, file_name)
data = data.reset_index(drop=True)
# extract descriptors and gas adsorptions
one_property = one_filter(data, one_filter_columns)
descriptors = get_descriptors(data, descriptor_columns)
# prepare training inputs and outputs
X = np.array(descriptors.values, dtype=np.float32)
y = np.array(one_property.values, dtype=np.float32).reshape(len(X), )
X = data_scaler(X)
y = data_scaler(y.reshape(-1, 1)).reshape(len(X),)
# remove categorical variables
test=data.drop(["MOF ID","topology","First nodular character","Second nodular character"],axis=1)
#g_comp=5
# all features vs. just the ones used: comment out the column selection below for the all-features analysis
g_comp=6
test=test[['void fraction', 'Vol. S.A.', 'Grav. S.A.', 'Pore diameter Limiting', 'Pore diameter Largest']]
g=preprocessing.StandardScaler().fit_transform(test)
g=pd.DataFrame(g)
g.columns=test.columns
test=g
# +
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from Statistics_helper import make_pca_agg_fit
from sklearn.cluster import AgglomerativeClustering
from sklearn.cluster import KMeans
from scipy.spatial import ConvexHull, convex_hull_plot_2d
var=.9
Out=PCA(n_components=2)
g=Out.fit(test)
data2=data.copy()
g_comp=6
holder=['void fraction', 'Vol. S.A.', 'Grav. S.A.', 'Pore diameter Limiting', 'Pore diameter Largest']
temp=data2[holder]
g=preprocessing.StandardScaler().fit_transform(temp)
g=pd.DataFrame(g)
pc1,pc2,color=make_pca_agg_fit(1,g,var,g_comp,func_give=KMeans,array_out=True)
dic={
"Pc1" : pc1,
"Pc2" : pc2,
"Cluster" : color,
}
holder=pd.DataFrame(dic)
data2=pd.concat([data2,holder],axis=1)
def manual_swap(x):
#swaps clusters to order from left to right on pca
x=int(x)
y=0
if x == 5:
y=0
elif x == 2:
y= 1
elif x== 1:
y=2
elif x == 4:
y=3
elif x == 0:
y=4
else:
y=5
return y
data2["Cluster"]=data2["Cluster"].apply(manual_swap)
plt.scatter(data2["Pc1"],data2["Pc2"],c=data2["Cluster"])
plt.ylabel("Pc2")
plt.xlabel("Pc1")
plt.title("PC Based Clustering")
abridge=data2[['MOF ID', 'void fraction', 'Vol. S.A.', 'Grav. S.A.','Pore diameter Limiting', 'Pore diameter Largest','H2@100 bar/243K (wt%)','topology',
'First nodular symmetry code', 'First nodular character',
'First nodular ID', 'Second nodular symmetry code',
'Second nodular character', 'Second nodular ID',
'Connecting building block ID', 'Pc1', 'Pc2', 'Cluster']]
new=data2[['MOF ID', 'void fraction', 'Vol. S.A.', 'Grav. S.A.','H2@100 bar/243K (wt%)','Pore diameter Limiting', 'Pore diameter Largest','topology',
'First nodular symmetry code', 'First nodular character',
'First nodular ID', 'Second nodular symmetry code',
'Second nodular character', 'Second nodular ID',
'Connecting building block ID', 'Pc1', 'Pc2', 'Cluster']].groupby("Cluster").mean()
#plt.scatter(new["Pc1"],new["Pc2"],c="r")
annotations=["C0","C1","C2","C3","C4","C5"]
plt.show()
from scipy.spatial import distance_matrix
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering
a=abridge.groupby("topology").median()[["Pc1","Pc2"]]
plt.scatter(a["Pc1"],a["Pc2"])
color = AgglomerativeClustering(n_clusters=4).fit_predict(a)
#color=gm.predict(a)
plt.scatter(a["Pc1"],a["Pc2"],c=color)
plt.legend()
distances=pd.DataFrame(distance_matrix(a,a),index=a.index,columns=a.index)
alpha_tuples=[[a,b] for a,b in zip(data2["Pc1"].to_numpy(),data2["Pc2"].to_numpy())]
alpha_tuples=np.array(alpha_tuples)
hull=ConvexHull(alpha_tuples)
for z,i in enumerate(abridge["topology"].unique()):
interest=i
x=np.linspace(-4.2,-1.6,1001)
y= lambda x: -.5 - 1.5*x
plt.plot(x,y(x),'r--', lw=2)
x=np.linspace(-5,-4.2,1001)
y= lambda x: 2.6- .75*x
plt.plot(x,y(x),'r--', lw=2)
x=np.linspace(-1.6,5.5,1001)
y= lambda x: 2 + .1*x
plt.plot(x,y(x),'r--', lw=2)
x=np.linspace(-1.6,5,1001)
y= lambda x: 2 + .1*x
plt.plot(x,y(x),'r--', lw=2)
plt.plot(alpha_tuples[hull.vertices,0][:10], alpha_tuples[hull.vertices,1][:10], 'r--', lw=2)
plt.show()
from scipy.cluster.hierarchy import dendrogram
def plot_dendrogram(model, **kwargs):
# Create linkage matrix and then plot the dendrogram
# create the counts of samples under each node
counts = np.zeros(model.children_.shape[0])
n_samples = len(model.labels_)
for i, merge in enumerate(model.children_):
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
current_count += 1 # leaf node
else:
current_count += counts[child_idx - n_samples]
counts[i] = current_count
linkage_matrix = np.column_stack(
[model.children_, model.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
dendrogram(linkage_matrix,orientation='left',labels=a.index,**kwargs)
# setting distance_threshold=0 ensures we compute the full tree.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)
model = model.fit(a)
f = plt.figure()
f.set_figwidth(20)
f.set_figheight(20)
plt.title("Hierarchical Clustering Dendrogram")
# plot the top three levels of the dendrogram
plot_dendrogram(model, truncate_mode="level", p=11)
# +
def plot_dendrogram(model, **kwargs):
# Create linkage matrix and then plot the dendrogram
# create the counts of samples under each node
counts = np.zeros(model.children_.shape[0])
n_samples = len(model.labels_)
for i, merge in enumerate(model.children_):
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
current_count += 1 # leaf node
else:
current_count += counts[child_idx - n_samples]
counts[i] = current_count
linkage_matrix = np.column_stack(
[model.children_, model.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
g=dendrogram(linkage_matrix,orientation='left',labels=a.index,**kwargs)
return g
# setting distance_threshold=0 ensures we compute the full tree.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)
model = model.fit(a)
f = plt.figure()
f.set_figwidth(20)
f.set_figheight(20)
plt.title("Hierarchical Clustering Dendrogram")
# plot the top three levels of the dendrogram
g=plot_dendrogram(model, truncate_mode="level", p=2,color_threshold=4)
# -
groups=4
model = AgglomerativeClustering(n_clusters=groups)
model = model.fit(a)
model.labels_
dic={}
for topo, label in zip(a.index, model.labels_):
    dic[topo] = label
abridge["t_cluster"]=abridge["topology"].map(dic)
M_Cluster=[]
for i in sorted(abridge["t_cluster"].unique()):
Temp=abridge[abridge["t_cluster"]==i]
M_Cluster.append(Temp)
M_Cluster[3]
| prototypes/Cluster_topology.ipynb |
# ---
# title: "Trimmed Mean"
# author: "<NAME>"
# date: 2017-12-20T11:53:49-07:00
# description: "Trimmed Mean Using Python."
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Trimmed means are averaging techniques that do not count (i.e. trim off) extreme values. The goal is to make mean calculations more robust to extreme values by not considering those values when calculating the mean.
#
# [SciPy](https://docs.scipy.org/) offers great methods for calculating trimmed means.
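As a sanity check on what `trim_mean` does under the hood, here is a minimal hand-rolled version (a sketch, not SciPy's exact implementation):

```python
import numpy as np

def trimmed_mean(values, proportiontocut):
    # sort, drop the same number of points from each tail, average the rest
    x = np.sort(np.asarray(values, dtype=float))
    k = int(proportiontocut * len(x))  # points trimmed from each end
    return x[k:len(x) - k].mean()

scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100, 100]
print(trimmed_mean(scores, 0.2))  # 6.5, matching stats.trim_mean(scores, 0.2)
```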
# ## Preliminaries
# Import libraries
import pandas as pd
from scipy import stats
# ## Create DataFrame
# Create dataframe with two extreme values
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy', 'Bob', 'Jack', 'Jill', 'Kelly', 'Mark', 'Kao', 'Dillon'],
'score': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100, 100]
}
df = pd.DataFrame(data)
df
# ## Calculate Non-Trimmed Mean
# Calculate non-trimmed mean
df['score'].mean()
# ## Calculate Mean After Trimming Off Highest And Lowest
# Trim off the 20% most extreme scores (lowest and highest)
stats.trim_mean(df['score'], proportiontocut=0.2)
# We can use `trimboth` to see which values are used to calculate the trimmed mean:
# Trim off the 20% most extreme scores and view the non-trimmed values
stats.trimboth(df['score'], proportiontocut=0.2)
# ## Calculate Mean After Trimming Only Highest Extremes
#
# The `right` tail refers to the highest values in the array and `left` refers to the lowest values in the array.
# Trim off the highest 20% of values and view trimmed mean
stats.trim1(df['score'], proportiontocut=0.2, tail='right').mean()
# Trim off the highest 20% of values and view non-trimmed values
stats.trim1(df['score'], proportiontocut=0.2, tail='right')
| docs/statistics/basics/trimmed_mean.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# # APR
# The figure shows the measured water temperature. A table of temperature $T$ as a function of time $t$ is given. We want to determine an approximation polynomial of degree $s$ through the measured points.
# <img width='600px' src='APR.jpeg'>
s = 3 #
t = 62 # s
t_1 = 51.9 # s
t_2 = 61.2 # s
t_3 = 71.2 # s
t_4 = 80.4 # s
t_5 = 91.5 # s
t_6 = 98.2 # s
T_1 = -29.1 # °C
T_2 = -19.1 # °C
T_3 = -8.3 # °C
T_4 = 8.4 # °C
T_5 = 18.9 # °C
T_6 = 29.3 # °C
# ### 1. vprašanje
#
# We determine the approximation curve based on the maximum of the squared residuals! (1: Yes, 2: No)
#
odgovor1 = # #?
# ### 2. vprašanje
#
# In approximation we typically use polynomials of high degree! (1: Yes, 2: No)
#
odgovor2 = # #?
# ### 3. vprašanje
#
# Determine the approximation polynomial of degree $s$. Enter the numeric array of polynomial coefficients (start with the coefficient of the highest-power term).
odgovor3 = # #?
# ### 4. vprašanje
#
# Determine the approximated water temperature at time $t$.
#
# Units: °C
odgovor4 = # #?
# ### 5. vprašanje
#
# Determine the time of crossing the temperature 0°C.
#
# Units: s
odgovor5 = # #?
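A possible way to compute the answers with `numpy` (a sketch, not the official solution; it collects the measured values $t_1..t_6$ and $T_1..T_6$ from the cells above into arrays):

```python
import numpy as np

t_data = np.array([51.9, 61.2, 71.2, 80.4, 91.5, 98.2])   # s
T_data = np.array([-29.1, -19.1, -8.3, 8.4, 18.9, 29.3])  # °C

coeffs = np.polyfit(t_data, T_data, 3)  # degree-s polynomial, highest power first
T_at_62 = np.polyval(coeffs, 62.0)      # approximated temperature at t = 62 s
# a real root of the fitted polynomial inside the measured interval is the 0 °C crossing
roots = np.roots(coeffs)
t_zero = [r.real for r in roots
          if abs(r.imag) < 1e-9 and t_data.min() < r.real < t_data.max()]
```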
| pypinm-master/primeri nalog in izpita/08-APR/APR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SSH imgpu02 torchpy36 24 CPUs
# language: ''
# name: rik_ssh_imgpu02_torchpy36_24
# ---
# # Homework part I
#
# The first problem set contains basic tasks in pytorch.
#
# __Note:__ Instead of doing this part of homework, you can prove your skills otherwise:
# * A commit to pytorch or pytorch-based repos will do;
# * Fully implemented seminar assignment in tensorflow or theano will do;
# * Your own project in pytorch that is developed to a state in which a normal human can understand and appreciate what it does.
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import torch, torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
print(torch.__version__)
# ### Task I - tensormancy
#
# 
#
# When dealing with more complex stuff like neural network, it's best if you use tensors the way samurai uses his sword.
#
#
# __1.1 the cannabola__
# [_disclaimer_](https://gist.githubusercontent.com/justheuristic/e2c1fa28ca02670cabc42cacf3902796/raw/fd3d935cef63a01b85ed2790b5c11c370245cbd7/stddisclaimer.h)
#
# Let's write another function, this time in polar coordinates:
# $$\rho(\theta) = (1 + 0.9 \cdot cos (8 \cdot \theta) ) \cdot (1 + 0.1 \cdot cos(24 \cdot \theta)) \cdot (0.9 + 0.05 \cdot cos(200 \cdot \theta)) \cdot (1 + sin(\theta))$$
#
#
# Then convert it into cartesian coordinates ([howto](http://www.mathsisfun.com/polar-cartesian-coordinates.html)) and plot the results.
#
# Use torch tensors only: no lists, loops, numpy arrays, etc.
# +
theta = torch.linspace(- np.pi, np.pi, steps=1000)
# compute rho(theta) as per formula above
rho = (1+0.9*torch.cos(8*theta))*(1+0.1*torch.cos(24*theta))*(0.9+0.05*torch.cos(200*theta))*(1+torch.sin(theta))
# Now convert polar (rho, theta) pairs into cartesian (x,y) to plot them.
x = rho*torch.cos(theta)
y = rho*torch.sin(theta)
plt.figure(figsize=[6,6])
plt.fill(x.numpy(), y.numpy(), color='green')
plt.grid()
# -
# ### Task II: the game of life
#
# Now it's time for you to make something more challenging. We'll implement Conway's [Game of Life](http://web.stanford.edu/~cdebs/GameOfLife/) in _pure pytorch_.
#
# While this is still a toy task, implementing game of life this way has one cool benefit: __you'll be able to run it on GPU!__ Indeed, what could be a better use of your GPU than simulating game of life on 1M/1M grids?
#
# 
# If you've skipped the url above out of sloth, here's the game of life:
# * You have a 2D grid of cells, where each cell is "alive"(1) or "dead"(0)
# * Any living cell that has 2 or 3 neighbors survives, else it dies [0,1 or 4+ neighbors]
# * Any cell with exactly 3 neighbors becomes alive (if it was dead)
#
# For this task, you are given a reference numpy implementation that you must convert to pytorch.
# _[numpy code inspired by: https://github.com/rougier/numpy-100]_
#
#
# __Note:__ You can find convolution in `torch.nn.functional.conv2d(Z,filters)`. Note that it has a different input format.
#
# +
from scipy.signal import convolve2d
def np_update(Z):
# Count neighbours with convolution
filters = np.array([[1,1,1],
[1,0,1],
[1,1,1]])
N = convolve2d(Z,filters,mode='same')
# Apply rules
birth = (N==3) & (Z==0)
survive = ((N==2) | (N==3)) & (Z==1)
Z[:] = birth | survive
return Z
# +
import torch, torch.nn as nn
import torch.nn.functional as F
def torch_update(Z):
"""
Implement an update function that does to Z exactly the same as np_update.
:param Z: torch.FloatTensor of shape [height,width] containing 0s (dead) and 1s (alive)
:returns: torch.FloatTensor Z after updates.
You can opt to create new tensor or change Z inplace.
"""
filters = np.array([[1,1,1],
[1,0,1],
[1,1,1]])
filters = torch.FloatTensor(filters)[None,None,:,:]
# count neighbours with a 3x3 convolution, then apply the birth/survival rules
N = F.conv2d(Variable(Z[None, None, :, :]), Variable(filters), padding=1)
birth = N.eq(3) * Z.eq(0)
survive = (N.eq(2) + N.eq(3)) * Z.eq(1)
Z[:, :] = (birth + survive).squeeze().type_as(Z)
return Z
# +
#initial frame
Z_numpy = np.random.choice([0,1],p=(0.5,0.5),size=(100,100))
Z = torch.from_numpy(Z_numpy).type(torch.FloatTensor)
#your debug polygon :)
Z_new = torch_update(Z.clone())
#tests
Z_reference = np_update(Z_numpy.copy())
assert np.all(Z_new.numpy() == Z_reference), "your pytorch implementation doesn't match np_update. Look into Z and np_update(ZZ) to investigate."
print("Well done!")
# +
from IPython.core.debugger import set_trace
# %matplotlib notebook
plt.ion()
#initialize game field
Z = np.random.choice([0,1],size=(100,100))
Z = torch.from_numpy(Z).type(torch.FloatTensor)
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for _ in range(100):
#update
Z = torch_update(Z)
#re-draw image
ax.clear()
ax.imshow(Z.squeeze().numpy(),cmap='gray')
fig.canvas.draw()
# +
#Some fun setups for your amusement
#parallel stripes
Z = np.arange(100)%2 + np.zeros([100,100])
#with a small imperfection
Z[48:52,50]=1
Z = torch.from_numpy(Z).type(torch.FloatTensor)
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for _ in range(100):
Z = torch_update(Z)
ax.clear()
ax.imshow(Z.numpy(),cmap='gray')
fig.canvas.draw()
# -
# More fun with Game of Life: [video](https://www.youtube.com/watch?v=C2vgICfQawE)
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
#
#
# ### Task III: Going deeper
# <img src="http://download.gamezone.com/uploads/image/data/1190338/article_post_width_a88.jpg" width=360>
# Your third trial is to build your first neural network [almost] from scratch and pure torch.
#
# This time you will solve yet another digit recognition problem, but at a greater scale
# * 10 different letters
# * 20k samples
#
# We want you to build a network that reaches at least 80% accuracy and has at least 2 linear layers in it. Naturally, it should be nonlinear to beat logistic regression. You can implement it with either
#
#
# With 10 classes you will need to use __Softmax__ at the top instead of sigmoid and train for __categorical crossentropy__ (see [here](https://www.kaggle.com/wiki/LogLoss)). Write your own loss or use `torch.nn.functional.nll_loss`. Just make sure you understand what it accepts as an input.
#
# Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) neural network should already give you an edge over logistic regression.
#
#
# __[bonus kudos]__
# If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! It should be possible to reach 90% without convnets.
#
# __SPOILERS!__
# At the end of the notebook you will find a few tips and frequent errors.
# If you feel confident enough, just start coding right away and get there ~~if~~ once you need to untangle yourself.
#
#
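A minimal numpy sketch of the loss the hint above refers to; it computes the same quantity that `F.log_softmax` + `F.nll_loss` (or `F.cross_entropy`) produce in PyTorch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def categorical_crossentropy(z, targets):
    # mean negative log-probability assigned to the true class
    p = softmax(z)
    return -np.log(p[np.arange(len(targets)), targets]).mean()

z = np.array([[0.0, 0.0]])  # two classes with equal scores
print(categorical_crossentropy(z, np.array([0])))  # ln(2) ≈ 0.6931
```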
# +
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import torch, torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
print(torch.__version__)
import os
import numpy as np
from scipy.misc import imread,imresize
from sklearn.model_selection import train_test_split
from glob import glob
os.chdir("/tmp/erda")
# !pwd
def load_notmnist(path='./notMNIST_small',letters='ABCDEFGHIJ',
img_shape=(28,28),test_size=0.25,one_hot=False):
# download data if it's missing. If you have any problems, go to the urls and load it manually.
if not os.path.exists(path):
print("Downloading data...")
assert os.system('curl http://yaroslavvb.com/upload/notMNIST/notMNIST_small.tar.gz > notMNIST_small.tar.gz') == 0
print("Extracting ...")
assert os.system('tar -zxvf notMNIST_small.tar.gz > untar_notmnist.log') == 0
data,labels = [],[]
print("Parsing...")
for img_path in glob(os.path.join(path,'*/*')):
class_i = img_path.split(os.sep)[-2]
if class_i not in letters: continue
try:
data.append(imresize(imread(img_path), img_shape))
labels.append(class_i,)
except:
print("found broken img: %s [it's ok if <10 images are broken]" % img_path)
data = np.stack(data)[:,None].astype('float32')
data = (data - np.mean(data)) / np.std(data)
#convert classes to ints
letter_to_i = {l:i for i,l in enumerate(letters)}
labels = np.array(list(map(letter_to_i.get, labels)))
if one_hot:
labels = (np.arange(np.max(labels) + 1)[None,:] == labels[:, None]).astype('float32')
print("Done")
return data, labels
X, y = load_notmnist(letters='ABCDEFGHIJ')
X = X.reshape([-1, 784])
# -
#< a whole lot of your code >
# the splits (train_test_split is already imported from sklearn.model_selection above)
X_train, X_test, y_train, y_test, train_idx, test_idx = train_test_split(X, y, range(X.shape[0]), test_size=0.2)
X_train, X_val, y_train, y_val, train_idx, val_idx = train_test_split(X_train, y_train, train_idx,test_size=0.25)
# %matplotlib inline
plt.figure(figsize=[12,4])
for i in range(20):
plt.subplot(2,10,i+1)
plt.imshow(X_train[i].reshape([28,28]))
plt.title(str(y_train[i]))
def one_hot_embedding(labels, num_classes):
"""Embedding labels to one-hot form.
Args:
labels: (LongTensor) class labels, sized [N,].
num_classes: (int) number of classes.
Returns:
(tensor) encoded labels, sized [N, #classes].
"""
y = torch.eye(num_classes)
return y[labels]
# +
model = nn.Sequential()
model.add_module('first', nn.Linear(784, 10))
model.add_module('second', nn.Softmax())
# weight init with gaussian noise
# for p in model.parameters():
# torch.nn.init.normal(p)
opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
history = []
model.cuda(2)
# +
from IPython.display import clear_output
from IPython.core.debugger import set_trace
for i in range(40000):
# sample 256 random images
ix = np.random.randint(0, len(X_train), 256)
x_batch = torch.FloatTensor(X_train[ix]).cuda(2)
y_batch = y_train[ix]
y_onehot = one_hot_embedding(y_batch,10).cuda(2)
# predict probabilities
y_predicted = model(x_batch)
# compute loss, just like before
#print(y_predicted)
# set_trace()
crossentropy = - y_onehot * torch.log(y_predicted) - (1. - y_onehot) * torch.log(1. - y_predicted)
loss = crossentropy.mean()
# compute gradients
loss.backward()
# optimizer step
opt.step()
# clear gradients
opt.zero_grad()
history.append(loss.item())
if i % 1000 == 0:
clear_output(True)
plt.plot(history)
plt.show()
print("step #%i | mean loss = %.3f" % (i, np.mean(history[-10:])))
# +
model2 = nn.Sequential()
model2.add_module('first', nn.Linear(784, 100))
model2.add_module('first_activate', nn.LeakyReLU())
model2.add_module('fc', nn.Linear(100, 10))
model2.add_module('soft', nn.Softmax())
# weight init with gaussian noise
# for p in model2.parameters():
# torch.nn.init.normal(p)
opt2 = torch.optim.SGD(model2.parameters(), lr=1e-4, momentum=0.9)
history2 = []
model2.cuda(2)
# +
from IPython.display import clear_output
from IPython.core.debugger import set_trace
for i in range(40000):
# sample 256 random images
ix = np.random.randint(0, len(X_train), 256)
x_batch = torch.FloatTensor(X_train[ix]).cuda(2)
y_batch = y_train[ix]
y_onehot = one_hot_embedding(y_batch,10).cuda(2)
# predict probabilities
y_predicted = model2(x_batch)
# compute loss, just like before
#print(y_predicted)
# set_trace()
crossentropy = - y_onehot * torch.log(y_predicted) - (1. - y_onehot) * torch.log(1. - y_predicted)
loss = crossentropy.mean()
# compute gradients
loss.backward()
# optimizer step
opt2.step()
# clear gradients
opt2.zero_grad()
history2.append(loss.item())
if i % 1000 == 0:
clear_output(True)
plt.plot(history2)
plt.show()
print("step #%i | mean loss = %.3f" % (i, np.mean(history2[-10:])))
# +
# use your model to predict classes (0 or 1) for all test samples
predicted_y_test1 = model.forward(torch.FloatTensor(X_test).cuda(2))
predicted_y_test1 = np.argmax(predicted_y_test1.cpu().data.numpy(),axis=1)
accuracy1 = np.mean(predicted_y_test1 == y_test)
predicted_y_test2 = model2.forward(torch.FloatTensor(X_test).cuda(2))
predicted_y_test2 = np.argmax(predicted_y_test2.cpu().data.numpy(),axis=1)
accuracy2 = np.mean(predicted_y_test2 == y_test)
print("Test accuracy model 1: ",accuracy1 , " Test accuracy model 2: ", accuracy2)
# -
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
# ```
#
#
# # SPOILERS!
#
# Recommended pipeline
#
# * Adapt logistic regression from week2 seminar assignment to classify one letter against others (e.g. A vs the rest)
# * Generalize it to multiclass logistic regression.
# - Either try to remember lecture 0 or google it.
# - Instead of weight vector you'll have to use matrix (feature_id x class_id)
# - softmax (exp over sum of exps) can be implemented manually, or as nn.Softmax (layer) / F.softmax (function)
# - it is probably better to use STOCHASTIC gradient descent (minibatch) for greater speed
# - you can also try momentum/rmsprop/adawhatever
# - in which case the sample should probably be shuffled (or use random subsamples on each iteration)
# * Add a hidden layer. Now your logistic regression uses hidden neurons instead of inputs.
# - Hidden layer uses the same math as output layer (ex-logistic regression), but uses some nonlinearity (e.g. sigmoid) instead of softmax
# - You need to train both layers, not just output layer :)
# - __Do not initialize weights with zeros__ (due to symmetry effects). Gaussian noise with small variance will do.
# - 50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve.
# - In the ideal case this totals 2 `.dot`s, 1 softmax and 1 sigmoid
# - __make sure this neural network works better than logistic regression__
#
# * Now's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now.
#
# * If anything seems wrong, try going through one step of training and printing everything you compute.
# * If you see NaNs midway through optimization, you can estimate log P(y|x) directly via F.log_softmax(layer_before_softmax)
#
#
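On the NaN point above: `F.log_softmax` is stable because it shifts the logits by their maximum before exponentiating. A minimal NumPy sketch of that trick (illustration only, not part of the assignment code):

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the last axis."""
    # subtracting the row max changes nothing mathematically but prevents exp overflow
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

# logits this large would overflow a naive exp(); the shifted version is fine
big = np.array([[1000.0, 1001.0, 1002.0]])
print(log_softmax(big))
```

Exponentiating the result recovers softmax probabilities that sum to 1 even for very large logits.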
| homework02/homework_part1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
def file_reader(data):
    return pd.read_csv(data)
sub = file_reader('sub.csv')
clus = file_reader('sub_cluster.csv')
log = file_reader('sub_log_transform.csv')
def array(data):
    return np.array(data['CHURN'])
sub_array = array(sub)   #Logloss of 0.2519 on Public Lb and 0.24742
clus_array = array(clus) #Logloss of 0.2525 on Public Lb and 0.24718
log_array = array(log)   #Logloss of 0.2517 on Public Lb and 0.24722
# +
#blending the weight based on public lb
#Giving Preference to the ones higher on Public Lb
val = (log_array*0.6 +sub_array*0.4)*0.4+(log_array*0.7 +clus_array*0.3)*.3 +(sub_array*0.55+clus_array*0.45)*0.2 +log_array*0.1
# -
val #Logloss of 0.25137 on Public Lb and 0.24713
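The nested blend above is easier to trust once the effective per-model weights are expanded; they should sum to 1 so the blend stays on the same probability scale. A quick check in plain Python (weights copied from the blend cell):

```python
# effective weight of each submission after expanding the nested blend
w_log = 0.6 * 0.4 + 0.7 * 0.3 + 0.1   # log-transform submission
w_sub = 0.4 * 0.4 + 0.55 * 0.2        # plain submission
w_clus = 0.3 * 0.3 + 0.45 * 0.2       # cluster submission

print(w_log, w_sub, w_clus)           # ~0.55, ~0.27, ~0.18
print(w_log + w_sub + w_clus)         # sums to 1.0
```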
var = file_reader('sample_submission.csv').copy()
var['CHURN'] = val
var.to_csv('blend.csv', index=False)
| Blend.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1A.1 - Variables, loops, tests (correction)
#
# Loops, tests, correction.
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# ### Partie 3 : boucles (exercice)
l = [4, 3, 0, 2, 1]
i = 0
while l[i] != 0:
    i = l[i]
print(i)  # what is l[i] at the end?
# This exercise shows a curious way of moving through an array: we start at the first position, then jump to the position given by the element found there, and so on. We stop when we land on the value zero.
from IPython.display import Image
Image("td2_1.png")
# ### Part 5: non-dichotomic search (exercise)
#
# There is no other way than to go through all the elements and keep the position of the element in the list that matches the search criterion.
# +
l = [3, 6, 2, 7, 9]
x = 7
for i, v in enumerate(l):
    if v == x:
        position = i
print(position)
# -
# ### Part 6: dichotomic search (exercise)
# Dichotomic (binary) search only applies to a sorted array. At each iteration, we look at the middle of the array to decide in which half to search.
# +
l = [2, 3, 6, 7, 9]
# if the list is not sorted, we must write:
l.sort()
x = 7
a = 0
b = len(l) - 1
while a <= b:
    m = (a + b) // 2  # don't forget //, otherwise the division is a float division
    if l[m] == x:
        position = m  # line A
        break  # no need to continue, we leave the loop
    elif l[m] < x:
        a = m + 1
    else:
        b = m - 1
print(position)
# -
# ### Part 7: going further (exercise)
# When the element to search for is not in the list, the previous version would raise an error since ``position`` is never assigned; here we initialize it to ``-1`` first:
# +
l = [2, 3, 6, 7, 9]
l.sort()
x = 5
position = -1
a = 0
b = len(l) - 1
while a <= b:
    m = (a + b) // 2
    if l[m] == x:
        position = m
        break
    elif l[m] < x:
        a = m + 1
    else:
        b = m - 1
print(position)
# -
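The two cells above can be packaged as a reusable function that returns -1 when the element is absent; this is only a repackaging of the same algorithm:

```python
def recherche_dichotomique(l, x):
    """Binary search in a sorted list; returns the index of x, or -1 if absent."""
    a, b = 0, len(l) - 1
    while a <= b:
        m = (a + b) // 2  # integer division, as in the cells above
        if l[m] == x:
            return m
        elif l[m] < x:
            a = m + 1
        else:
            b = m - 1
    return -1

print(recherche_dichotomique([2, 3, 6, 7, 9], 7))  # 3
print(recherche_dichotomique([2, 3, 6, 7, 9], 5))  # -1
```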
# The program prints ``-1``, the default value given to the variable ``position``: the loop did not change the content of the variable, so ``position == -1`` means the element was not found.
#
# **Cost**
#
# Since each iteration divides the size of the problem by two, the algorithm is guaranteed to have found the answer once $\frac{n}{2^k} < 1$, where $n$ is the number of elements in the array and $k$ the number of iterations. Consequently $k \sim \log_2 n$, written $O(\log_2 n)$. The following program checks this:
# +
import random, math
l = list(range(0,1000000))
for k in range(0,10):
x = random.randint(0,l[-1])
iter = 0
a = 0
b = len(l)-1
while a <= b :
iter += 1
m = (a+b)//2
if l[m] == x :
position = m
break
elif l[m] < x :
a = m+1
else :
b = m-1
print ("k=",k, "x=", x, "itération=", iter, " log2(len(l))=", math.log(len(l))/math.log(2))
# -
| _doc/notebooks/td1a/td1a_correction_session2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="rpp-bf6My8Bx" outputId="66901582-d998-4f13-ac97-9210a0ff5d81"
# !pip install swifter
# + id="PghiYhVItcZd"
from urllib.error import HTTPError
from urllib.request import urlopen
from bs4 import BeautifulSoup
import os
import pandas as pd
# + [markdown] id="BybNKXyJs8WG"
# # Create archive links
# + id="WF79zdzrs6Mx"
def create_archive_links(year_start, year_end, month_start, month_end, day_start, day_end):
    archive_links = {}
    for y in range(year_start, year_end + 1):
        dates = [str(d).zfill(2) + "-" + str(m).zfill(2) + "-" + str(y)
                 for m in range(month_start, month_end + 1)
                 for d in range(day_start, day_end + 1)]
        archive_links[y] = [
            "https://www.lemonde.fr/archives-du-monde/" + date + "/" for date in dates]
    return archive_links
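`create_archive_links` builds every day/month combination, so it also emits impossible dates such as `30-02-2006`; those URLs will simply fail later. A variant (a sketch, assuming the same URL scheme; `create_archive_links_valid` is not part of the original notebook) that only yields real calendar dates:

```python
from datetime import date, timedelta

def create_archive_links_valid(year_start, year_end):
    """One Le Monde archive URL per real calendar day, per year."""
    links = {}
    for y in range(year_start, year_end + 1):
        d = date(y, 1, 1)
        urls = []
        while d.year == y:  # walk day by day through the year
            urls.append("https://www.lemonde.fr/archives-du-monde/"
                        + d.strftime("%d-%m-%Y") + "/")
            d += timedelta(days=1)
        links[y] = urls
    return links

links_2006 = create_archive_links_valid(2006, 2006)
print(len(links_2006[2006]))  # 365 days in 2006
```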
# + id="OsIuDObHtBPj"
#create_archive_links(2006,2020,1, 12, 1, 31)
archive_links = create_archive_links(2006,2006,1, 2, 1, 31)
# + [markdown] id="njRlf3NUtenK"
# # Scrap
# + id="VfNFxGecth7m"
def get_articles_links(archive_links):
    '''Each article is in a <section> with a class named "teaser"; non-free
    articles (those containing a span of class icon__premium) are filtered
    out, as are links containing "en-direct", which are videos.'''
    links_non_abonne = []
    for link in archive_links:
        try:
            html = urlopen(link)
        except HTTPError:
            print("url not valid", link)
        else:
            soup = BeautifulSoup(html, "html.parser")
            news = soup.find_all(class_="teaser")
            # condition here: no span icon__premium (abonnes)
            for item in news:
                if not item.find('span', {'class': 'icon__premium'}):
                    l_article = item.find('a')['href']
                    # en-direct = video
                    if 'en-direct' not in l_article:
                        links_non_abonne.append(l_article)
    return links_non_abonne
# + id="d-xe-0AQvWwm"
def get_single_page(url):
    try:
        html = urlopen(url)
    except HTTPError:
        print("url not valid", url)
    else:
        soup = BeautifulSoup(html, "html.parser")
        try:
            text_title = soup.find('h1')
        except:
            text_title = 'empty'
        try:
            text_body = soup.article.find_all(["p", "h2"], recursive=False)
        except:
            text_body = 'empty'
        try:
            tag = soup.findAll('li', attrs={'class': 'old__nav-content-list-item'})
        except:
            tag = 'empty'
        return [text_title, text_body, tag]
# + id="rLEMt9Bq0aFc"
df = pd.DataFrame(columns=['Year', 'Html'])
# + colab={"base_uri": "https://localhost:8080/"} id="JKMp-7Iqt2iM" outputId="6158a2c5-36d5-4bb9-905e-596c6a35d176"
for year,links in archive_links.items():
print("processing: ",year)
article_links_list = get_articles_links(links)
temp = pd.DataFrame({'Year': [year]*len(article_links_list), 'Html': article_links_list})
    df = pd.concat([df, temp], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
# + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["fa0aa2ae8842417ba280105eccb66721", "78f8513b55bc4c59820e8577d52eded0", "0ccd9728579043149d7341010d8ca6ed", "089af218c70542d481b46e0fc81a2417", "ea30cfd68ec54e17b081a01f2126adf5", "752bb6be7ac44076998bacd2acea5449", "d68b3ae5252e4c62a1d728b0dc4e689b", "ffbf4730bb10462f86a090ba6e3cb0ff"]} id="z7k4B9RTyxZH" outputId="9000fbf8-47bc-4222-84b2-912854563453"
import swifter
df['out'] = df['Html'].swifter.apply(get_single_page)
# + colab={"base_uri": "https://localhost:8080/"} id="7XE5IZyQ9o8I" outputId="439c436b-059f-4078-b787-5210a5690949"
html = urlopen('https://www.lemonde.fr/ameriques/article/2006/01/01/les-zapatistes-lancent-une-autre-campagne-a-six-mois-de-la-presidentielle-mexicaine_726244_3222.html')
soup = BeautifulSoup(html, "html.parser")
soup.findAll('li',attrs={'class':'old__nav-content-list-item'})
# + id="7usIbbgP_tpX"
def keep(x,nbr):
if len(x)>nbr:
retour=1
else:
retour=0
return retour
# + id="w7Qj568_ApIv"
# split the [title, body, tag] lists returned by get_single_page into columns first
df[['Titre', 'Body', 'Tag']] = pd.DataFrame(df['out'].tolist(), index=df.index)
df['Titre_OK'] = df['Titre'].apply(lambda x: keep(x, 10))
df['Body_OK'] = df['Body'].apply(lambda x: keep(x, 10))
# + colab={"base_uri": "https://localhost:8080/"} id="d5XZuVBXDEMt" outputId="dc0395ae-b69d-41b7-c04d-5b455727ce29"
df['Body_OK'].sum()/len(df['Body'])*100
| .ipynb_checkpoints/Scrapper_le_monde-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: disasterresponse-env
# language: python
# name: disasterresponse-env
# ---
# # ML Pipeline Preparation
# Follow the instructions below to help you create your ML pipeline.
# ### 1. Import libraries and load data from database.
# - Import Python libraries
# - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
# - Define feature and target variables X and Y
# +
# import libraries
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', 200)
import sys
import os
import re
import nltk
from sqlalchemy import create_engine
import pickle
import warnings
warnings.filterwarnings('ignore')
from scipy.stats import gmean
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.base import BaseEstimator,TransformerMixin
# -
# load data from database
database_filepath = "../data/disaster_response.db"
engine = create_engine('sqlite:///' + database_filepath)
table_name = os.path.basename(database_filepath).replace(".db","") + "_table"
df = pd.read_sql_table(table_name,engine)
# ### 2. Write a tokenization function to process your text data
df.describe()
# Remove the child_alone field because it contains only zeros
df = df.drop(['child_alone'],axis=1)
# check the number of 2's in the related field
df['related'].eq(2).sum()
# Replace 2 with 1 to treat it as a valid binary response.
df['related']=df['related'].map(lambda x: 1 if x == 2 else x)
# Extract X and y variables from the data for the modelling
X = df['message']
#select from columns with categorical values 0 or 1
y = df.iloc[:,4:]
def tokenize(text, url_place_holder_string="urlplaceholder"):
    """
    Tokenize and normalize text data
    Arguments:
        text -> message to be tokenized and normalized
    Output:
        normalized -> list of tokens extracted and normalized from the message
    """
    # Replace all urls with a url placeholder string
    url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
    # Extract all the urls from the provided text
    detected_urls = re.findall(url_regex, text)
    # Replace each url with the placeholder string
    for detected_url in detected_urls:
        text = text.replace(detected_url, url_place_holder_string)
    # Extract the word tokens from the provided text
    tokens = nltk.word_tokenize(text)
    # Lemmatizer to reduce inflectional and derivationally related forms of a word
    lemmatizer = nltk.WordNetLemmatizer()
    # List of clean tokens
    normalized = [lemmatizer.lemmatize(w).lower().strip() for w in tokens]
    return normalized
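The URL-replacement step of `tokenize` can be exercised on its own with just `re` (no NLTK downloads needed); the regex below is the one from the function above:

```python
import re

url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'

text = "Flooding reported, see https://example.org/report for details"
# replace each detected URL with the placeholder used by tokenize
for url in re.findall(url_regex, text):
    text = text.replace(url, "urlplaceholder")
print(text)  # "Flooding reported, see urlplaceholder for details"
```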
# Build a custom transformer which will extract the starting verb of a sentence
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
    """
    Custom transformer that extracts the starting verb of a sentence,
    creating a new feature for the ML classifier
    """

    def starting_verb(self, text):
        sentence_list = nltk.sent_tokenize(text)
        for sentence in sentence_list:
            pos_tags = nltk.pos_tag(tokenize(sentence))
            first_word, first_tag = pos_tags[0]
            if first_tag in ['VB', 'VBP'] or first_word == 'RT':
                return True
        return False

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X_tagged = pd.Series(X).apply(self.starting_verb)
        return pd.DataFrame(X_tagged)
# ### 3. Build a machine learning pipeline
# This machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
# +
pipeline_one = Pipeline([
    ('features', FeatureUnion([
        ('text_pipeline', Pipeline([
            ('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
            ('tfidf_transformer', TfidfTransformer())
        ]))
    ])),
    ('classifier', MultiOutputClassifier(AdaBoostClassifier()))
])

pipeline_two = Pipeline([
    ('features', FeatureUnion([
        ('text_pipeline', Pipeline([
            ('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
            ('tfidf_transformer', TfidfTransformer())
        ])),
        ('starting_verb_transformer', StartingVerbExtractor())
    ])),
    ('classifier', MultiOutputClassifier(AdaBoostClassifier()))
])
# -
# ### 4. Train pipeline
# - Split data into train and test sets
# - Train pipeline
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline_fitted = pipeline_one.fit(X_train, y_train)
# ### 5. Test your model
# Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
# +
y_prediction_train = pipeline_fitted.predict(X_train)
y_prediction_test = pipeline_fitted.predict(X_test)
# Print classification report on test data
print(classification_report(y_test.values, y_prediction_test, target_names=y.columns.values))
# -
# Print classification report on training data
print('\n',classification_report(y_train.values, y_prediction_train, target_names=y.columns.values))
# ### 6. Improve your model
# Use grid search to find better parameters.
# +
# pipeline_one.get_params().keys()
parameters_grid = {'classifier__estimator__learning_rate': [0.01, 0.02, 0.05],
'classifier__estimator__n_estimators': [10, 20, 40]}
#parameters_grid = {'classifier__estimator__n_estimators': [10, 20, 40]}
cv = GridSearchCV(pipeline_one, param_grid=parameters_grid, scoring='f1_micro', n_jobs=-1)
cv.fit(X_train, y_train)
# -
# Get the prediction values from the grid search cross validator
y_prediction_test = cv.predict(X_test)
y_prediction_train = cv.predict(X_train)
# ### 7. Test your model
# Show the accuracy, precision, and recall of the tuned model.
#
# Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
# Print classification report on test data
print(classification_report(y_test.values, y_prediction_test, target_names=y.columns.values))
# Print classification report on training data
print('\n',classification_report(y_train.values, y_prediction_train, target_names=y.columns.values))
# ### 8. Try improving your model further. Here are a few ideas:
# * try other machine learning algorithms
# * add other features besides the TF-IDF
# +
#Use pipeline_two which includes StartingVerbEstimator
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline_fitted = pipeline_two.fit(X_train, y_train)
y_prediction_train = pipeline_fitted.predict(X_train)
y_prediction_test = pipeline_fitted.predict(X_test)
# Print classification report on test data
print(classification_report(y_test.values, y_prediction_test, target_names=y.columns.values))
# -
# Print classification report on training data
print('\n',classification_report(y_train.values, y_prediction_train, target_names=y.columns.values))
# ### 9. Export your model as a pickle file
# pickle.dumps('path') would only pickle the path string; dump the fitted model itself
with open('../models/classifier.pkl', 'wb') as f:
    pickle.dump(pipeline_fitted, f)
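A pickled model should round-trip: loading the file back must give an equivalent object. A minimal sketch with a stand-in dictionary (the real notebook would dump the fitted pipeline instead; the filename here is illustrative):

```python
import pickle

# stand-in for the fitted model; any picklable object round-trips the same way
model_stub = {"name": "classifier", "classes": [0, 1]}

with open("classifier_demo.pkl", "wb") as f:
    pickle.dump(model_stub, f)

with open("classifier_demo.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == model_stub)  # True
```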
| models/ML Pipeline Preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/HMUNACHI/Face-Generator/blob/master/Face_Generator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="k2VpiQilmXxZ"
# comment out after executing
# !unzip processed_celeba_small.zip
# + id="0Id3hnTYmXxb"
data_dir = 'processed_celeba_small/'
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
# %matplotlib inline
# + id="XywEYuuvmXxc"
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
# + id="er0RviNTmXxd"
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    :param image_size: The square size of the image data (x, y)
    :param data_dir: Directory where image data is located
    :return: DataLoader with batched data
    """
    transform = transforms.Compose([transforms.Resize(image_size),
                                    transforms.ToTensor()])
    image_dataset = datasets.ImageFolder(data_dir, transform)
    return torch.utils.data.DataLoader(image_dataset, batch_size=batch_size, shuffle=True)
# + id="F2_4WyLBmXxe"
# Define function hyperparameters
batch_size = 128
img_size = 32
# Call function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
# + id="wNJsxkgQmXxf" outputId="d8d77da2-495c-4f9d-a3fa-129c9589e08b"
# helper display function
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = next(dataiter)  # _ for no labels; the .next() method was removed in newer PyTorch

# plot the images in the batch
fig = plt.figure(figsize=(20, 4))
plot_size = 20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size // 2, idx + 1, xticks=[], yticks=[])
    imshow(images[idx])
# + id="Et3j06jFmXxi"
def scale(x, feature_range=(-1, 1)):
    ''' Scale takes in an image x and returns that image, scaled
    with a feature_range of pixel values from -1 to 1.
    This function assumes that the input x is already scaled from 0-1.'''
    lo, hi = feature_range  # avoid shadowing the built-ins min/max
    return x * (hi - lo) + lo
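`scale` maps [0, 1] linearly onto `feature_range`; the inverse mapping is what the display code effectively does later with `(img + 1) * 255 / 2`. A self-contained sketch of the pair (`unscale` is an illustrative helper, not part of the original notebook):

```python
def scale(x, feature_range=(-1, 1)):
    """Map x in [0, 1] onto feature_range."""
    lo, hi = feature_range
    return x * (hi - lo) + lo

def unscale(x, feature_range=(-1, 1)):
    """Inverse of scale: map feature_range back onto [0, 1]."""
    lo, hi = feature_range
    return (x - lo) / (hi - lo)

print(scale(0.0), scale(1.0), unscale(scale(0.25)))  # -1.0 1.0 0.25
```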
# + id="BVVFt34KmXxj" outputId="9e71b1ef-ca24-4ceb-dc28-8eadce91de42"
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
# + id="pNWSWA15mXxk"
import torch.nn as nn
import torch.nn.functional as F
# + id="Nn1uGvQMmXxk"
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    layers = []
    conv_layer = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                           kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
    layers.append(conv_layer)
    if batch_norm:
        layers.append(nn.BatchNorm2d(out_channels))
    return nn.Sequential(*layers)
# + id="CrbcMbBAmXxk" outputId="301aa03b-9dd1-43b2-abd0-e2a4dbbd89be"
class Discriminator(nn.Module):

    def __init__(self, conv_dim):
        """
        Initialize the Discriminator Module
        :param conv_dim: The depth of the first convolutional layer
        """
        super(Discriminator, self).__init__()
        self.conv_dim = conv_dim
        self.conv1 = conv(3, conv_dim, 4, batch_norm=False)  # (16, 16, conv_dim)
        self.conv2 = conv(conv_dim, conv_dim*2, 4)           # (8, 8, conv_dim*2)
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)         # (4, 4, conv_dim*4)
        self.conv4 = conv(conv_dim*4, conv_dim*8, 4)         # (2, 2, conv_dim*8)
        self.classifier = nn.Linear(conv_dim*8*2*2, 1)

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network
        :return: Discriminator logits; the output of the neural network
        """
        x = F.leaky_relu(self.conv1(x), 0.2)
        x = F.leaky_relu(self.conv2(x), 0.2)
        x = F.leaky_relu(self.conv3(x), 0.2)
        x = F.leaky_relu(self.conv4(x), 0.2)
        out = x.view(-1, self.conv_dim*8*2*2)
        out = self.classifier(out)
        return out

tests.test_discriminator(Discriminator)
# + id="luLFqiY3mXxm"
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    layers = []
    layers.append(nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding, bias=False))
    if batch_norm:
        layers.append(nn.BatchNorm2d(out_channels))
    return nn.Sequential(*layers)
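The spatial-size comments in `Discriminator` (e.g. "(16, 16, conv_dim)") follow the standard size formulas: a stride-2, kernel-4, padding-1 convolution halves the spatial size, and the matching transpose convolution doubles it. A quick check in plain Python:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Output spatial size of a Conv2d layer."""
    return (size + 2 * padding - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, padding=1):
    """Output spatial size of a ConvTranspose2d layer."""
    return (size - 1) * stride - 2 * padding + kernel

print([conv_out(s) for s in (32, 16, 8, 4)])   # [16, 8, 4, 2]
print([deconv_out(s) for s in (2, 4, 8, 16)])  # [4, 8, 16, 32]
```

With these defaults the two operations are exact inverses of each other on even sizes, which is why the generator can mirror the discriminator's shapes.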
# + id="WjF_1jYVmXxm" outputId="298abb5a-a32f-438f-80e8-eea762884630"
class Generator(nn.Module):

    def __init__(self, z_size, conv_dim):
        """
        Initialize the Generator Module
        :param z_size: The length of the input latent vector, z
        :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
        """
        super(Generator, self).__init__()
        self.conv_dim = conv_dim
        self.fc = nn.Linear(z_size, conv_dim*8*2*2)
        self.t_conv1 = deconv(conv_dim*8, conv_dim*4, 4)
        self.t_conv2 = deconv(conv_dim*4, conv_dim*2, 4)
        self.t_conv3 = deconv(conv_dim*2, conv_dim, 4)
        self.t_conv4 = deconv(conv_dim, 3, 4, batch_norm=False)

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network
        :return: A 32x32x3 Tensor image as output
        """
        x = self.fc(x)
        x = x.view(-1, self.conv_dim*8, 2, 2)  # (batch_size, conv_dim*8, 2, 2)
        x = F.relu(self.t_conv1(x))
        x = F.relu(self.t_conv2(x))
        x = F.relu(self.t_conv3(x))
        # last layer with tanh activation function (F.tanh is deprecated)
        out = torch.tanh(self.t_conv4(x))
        return out

tests.test_generator(Generator)
# + id="97yWAmyamXxn"
from torch.nn import init
def weights_init_normal(m):
    """
    Applies initial weights to certain layers in a model.
    The weights are taken from a normal distribution
    with mean = 0, std dev = 0.02.
    :param m: A module or layer in a network
    """
    # classname will be something like:
    # `Conv`, `BatchNorm2d`, `Linear`, etc.
    classname = m.__class__.__name__
    # Apply initial weights to convolutional and linear layers;
    # the parentheses matter: `and` binds more tightly than `or`
    is_convolution = classname.find('Conv') != -1
    is_linear = classname.find('Linear') != -1
    if hasattr(m, 'weight') and (is_convolution or is_linear):
        init.normal_(m.weight.data, 0.0, 0.02)
# + id="as9ZVvi_mXxp"
def build_network(d_conv_dim, g_conv_dim, z_size):
    # define discriminator and generator
    D = Discriminator(d_conv_dim)
    G = Generator(z_size=z_size, conv_dim=g_conv_dim)
    # initialize model weights
    D.apply(weights_init_normal)
    G.apply(weights_init_normal)
    print(D)
    print()
    print(G)
    return D, G
# + id="_wPgrnwNmXxp" outputId="6312c615-5b4e-4e1b-b937-21408af9ea0c"
# Define model hyperparams
d_conv_dim = 64
g_conv_dim = 64
z_size = 100
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
# + id="m7VksUitmXxq" outputId="0876a1df-7516-4328-aeb8-72956ab2342b"
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
# + id="FN_w63b4mXxr"
def real_loss(D_out):
    '''Calculates how close discriminator outputs are to being real.
    param, D_out: discriminator logits
    return: real loss'''
    batch_size = D_out.size(0)
    labels = torch.ones(batch_size)  # real labels = 1
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    return criterion(D_out.squeeze(), labels)

def fake_loss(D_out):
    '''Calculates how close discriminator outputs are to being fake.
    param, D_out: discriminator logits
    return: fake loss'''
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)  # fake labels = 0
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    return criterion(D_out.squeeze(), labels)
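`BCEWithLogitsLoss` fuses the sigmoid and binary cross-entropy into one numerically stable step; one standard stable form for a single logit x and target y is max(x, 0) − x·y + log(1 + exp(−|x|)). A pure-Python check of the real/fake cases (a sketch for intuition, independent of PyTorch):

```python
import math

def bce_with_logits(x, y):
    """Stable binary cross-entropy on a raw logit x with target y in {0, 1}."""
    return max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x)))

# a confident "real" logit scored against the real label -> small loss
print(bce_with_logits(4.0, 1.0))
# the same logit scored against the fake label -> large loss
print(bce_with_logits(4.0, 0.0))
```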
# + id="n5QiAUiemXxr"
import torch.optim as optim
# Create optimizers for the discriminator D and generator G
lr = 0.0002
beta1=0.5
beta2=0.999 # default value
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
# + id="hGdXCqRjmXxs"
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
d_optimizer.zero_grad()
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
z = np.random.uniform(-1, 1, size = (batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# 2. Train the generator with an adversarial loss
g_optimizer.zero_grad()
z = np.random.uniform(-1, 1, size = (batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
g_loss = real_loss(D_fake)
g_loss.backward()
g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
# + id="ea-YCNJJmXxt" outputId="121366bf-d1d7-4936-a553-b573b1e6a9f8"
# set number of epochs
n_epochs = 20
# call training function
losses = train(D, G, n_epochs=n_epochs)
# + id="JRwReZBimXxu" outputId="ebb69ed9-42a4-4d93-9dc0-22eb55d57136"
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
# + id="ygJ9P7dzmXxu"
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(16, 4), nrows=2, ncols=8, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        img = img.detach().cpu().numpy()
        img = np.transpose(img, (1, 2, 0))
        img = ((img + 1) * 255 / 2).astype(np.uint8)  # map [-1, 1] back to [0, 255]
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((32, 32, 3)))
# + id="OXy-bhpXmXxv"
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
# + id="j36V5KusmXxv" outputId="57e44ba2-60e5-4a1f-da38-5beda030c0ff"
_ = view_samples(-1, samples)
| Face_Generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.metrics import average_precision_score
# +
#example from Sci-Kit
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
import numpy as np
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Add noisy features
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# Limit to the two first classes, and split into training and test
X_train, X_test, y_train, y_test = train_test_split(X[y < 2], y[y < 2],
test_size=.5,
random_state=random_state)
# Create a simple classifier
classifier = svm.LinearSVC(random_state=random_state)
classifier.fit(X_train, y_train)
y_score = classifier.decision_function(X_test)
from sklearn.metrics import average_precision_score
average_precision = average_precision_score(y_test, y_score)
print('Average precision-recall score: {0:0.2f}'.format(
average_precision))
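`average_precision_score` computes AP = Σₙ (Rₙ − Rₙ₋₁)·Pₙ over the ranking induced by the scores. A tiny pure-Python version (a sketch; ties are simply broken by sort order) makes the metric concrete:

```python
def average_precision_manual(y_true, y_score):
    """AP = sum over ranks of (delta recall) * precision, scores descending."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    n_pos = sum(y_true)
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:          # recall only changes at true positives
            tp += 1
            recall = tp / n_pos
            ap += (recall - prev_recall) * (tp / rank)
            prev_recall = recall
    return ap

print(average_precision_manual([1, 0, 1], [0.9, 0.8, 0.7]))  # 0.8333...
```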
# +
###results for Tara Virfinder model 1 on 5% vir set
import numpy as np
# declarations
directory = "my_directory"
nb_vir = 100    # number of viral examples
nb_nvir = 1900  # number of non-viral examples
labels = [1] * nb_vir + [0] * nb_nvir
y_test = np.asarray(labels)
nb_file = 12
for x in range(nb_file):
    score_file = "/mod" + str(x + 1) + ".txt"
    path = directory + score_file
    scores = np.loadtxt(path, dtype=np.float32)  # one score per line (default whitespace split)
    y_score = np.asarray(scores)
    average_precision = average_precision_score(y_test, y_score)
    print('{0:0.2f}'.format(average_precision))
# -
| scripts/measure_AUPRC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/praveentn/hgwxx7/blob/master/transferlearning/obj_detection_frcnn_resnet_detecto_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="KmniHoAduRwO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="de69f9c4-1bb6-4bde-b140-b04994d3dced"
# !pip3 install detecto
# + id="D90W5cxJuVxb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="e91107da-7f3d-466d-fd4b-a92268bcc741"
from detecto import core, utils, visualize
image = utils.read_image('https://www.hellocig.com/media/catalog/product/cache/1/image/600x/9df78eab33525d08d6e5fb8d27136e95/f/r/fruits_1.jpg')
model = core.Model()
labels, boxes, scores = model.predict_top(image)
visualize.show_labeled_image(image, boxes, labels)
# + id="3MBUwXuRutkb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6ba0f78f-d438-4269-d493-3bd32ecd4be7"
import torch
print(torch.cuda.is_available())
# + id="irRkuFG-yrsk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="bf456242-ee8f-4f0f-d361-e45969ee7841"
import os
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
os.chdir('/content/drive/My Drive/Detecto Tutorial')
# #!pip install detecto
# + id="WRcCe4NHyySB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="0395e469-0436-4347-b33e-55a59a944f36"
# #!git clone https://github.com/praveentn/images.git
# + id="LPcLsb34zEQc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="10d7bcb4-a4b4-4683-a7eb-bf361c178370"
# !ls
# + id="bMrlsQHVzK5g" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="87a5b8c4-da27-405d-fcde-2aca37b7fb7f"
pwd
# + id="DptS6upo4epH" colab_type="code" colab={}
# + id="u0uw4JCjzMH7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4b62df6d-9553-47eb-a666-c226240debe1"
# !ls
# + id="0YNhGrCxy9Iq" colab_type="code" colab={}
from detecto import core, utils, visualize
dataset = core.Dataset('images/')
model = core.Model(['issuer', 'shares', 'signature', 'stamp'])
# + id="OWVSDWSH3QaL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="636313f2-0baa-4c9b-a96b-ee87d0d16116"
# cd images
# + id="le1EJJZy3ZLO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5cea7fd0-a05b-45b5-b464-16761066f499"
# !pwd
# + id="NOOkCyaL3a84" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f96e091c-7be8-45b9-e243-005d9b8f0cba"
# ls
# + id="pvHuiS4j6B1Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="15a4a582-219f-4937-e201-67fb58b6912f"
pwd
# + id="v1WQAVUm6DOu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="880e02a5-fd2e-4358-8262-03a63f1a0bb4"
import matplotlib.pyplot as plt
from detecto.utils import read_image
image = read_image('/content/drive/My Drive/Detecto Tutorial/images/images/001.jpg')
plt.imshow(image)
plt.show()
# + id="SYcZOELi69Aj" colab_type="code" colab={}
import os
# TODO: Change this to your Drive folder location
WORKING_DIRECTORY = '/content/drive/My Drive/Detecto Tutorial/images'
os.chdir(WORKING_DIRECTORY)
# + id="-XYWxoYi7IZA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="44c2de73-8563-42bc-bd5a-e428c68d0398"
# List the contents of your working directory
# It should contain at least three folders: images, train_labels, and val_labels
# !ls
# + id="7ly3S_aH7LIB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="ec7a2e60-5f18-4a73-a22c-2ebbe77cba3d"
# Note: if it states you must restart the runtime in order to use a
# newly installed version of a package, you do NOT need to do this.
# !pip install detecto
# + id="eFVm1JGN7OPB" colab_type="code" colab={}
import torch
import torchvision
import matplotlib.pyplot as plt
from torchvision import transforms
from detecto import core, utils, visualize
# + id="-KIpcup27Rrb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="68bd755a-2513-4001-962e-f29a595a8a32"
image = utils.read_image('images/023.jpg')
plt.imshow(image)
plt.show()
# + id="rWfGHWLR7Ua_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="80a1555e-a62f-4658-f171-b4b685bf5e08"
# Do this twice: once for our training labels and once for our validation labels
utils.xml_to_csv('train_labels', 'train.csv')
utils.xml_to_csv('val_labels', 'val.csv')
# + id="bgakutbb7ahk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="744a8cc2-8a72-4e2c-dd3b-04bc6326b5bd"
# Specify a list of transformations for our dataset to apply on our images
transform_img = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize(800),
transforms.RandomHorizontalFlip(0.5),
transforms.ToTensor(),
utils.normalize_transform(),
])
dataset = core.Dataset('train.csv', 'images/', transform=transform_img)
# dataset[i] returns a tuple containing our transformed image
# and a dictionary containing label and box data
image, target = dataset[0]
# Show our image along with the box. Note: it may
# be colored oddly due to being normalized by the
# dataset and then reverse-normalized for plotting
visualize.show_labeled_image(image, target['boxes'], target['labels'])
# + id="ap8LfrcH7iby" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1be67b6f-a617-4cce-a882-754182e51b74"
# Create our validation dataset
val_dataset = core.Dataset('val.csv', 'images/')
# Create the loader for our training dataset
loader = core.DataLoader(dataset, batch_size=2, shuffle=True)
# Create our model, passing in all unique classes we're predicting
# Note: make sure these match exactly with the labels in the XML/CSV files!
model = core.Model(['issuer', 'shares', 'signature', 'stamp'])
# Train the model! This step can take a while, so make sure
# the GPU is turned on in Edit -> Notebook settings
losses = model.fit(loader, val_dataset, epochs=10, verbose=True)
# Plot the loss over time
plt.plot(losses)
plt.show()
# + id="x7ZJTtJn7u00" colab_type="code" colab={}
| transferlearning/obj_detection_frcnn_resnet_detecto_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:metis] *
# language: python
# name: conda-env-metis-py
# ---
import pandas as pd
import datetime as dt
from scipy.stats import zscore
from sqlalchemy import create_engine
import warnings
warnings.filterwarnings('ignore')
# ## Import
engine = create_engine('sqlite:///Data/raw/mta_data.db')
mta = pd.read_sql('SELECT * FROM mta_data WHERE (TIME <"08" OR TIME >="20") AND (substr(DATE,1,2) =="06" OR substr(DATE,1,2) =="07" OR substr(DATE,1,2) =="08") AND (substr(DATE,9,2) =="19");', engine)
mta.head()
#mta = pd.read_csv('Data/raw/2021.csv')
zip_boro_station = pd.read_csv('Data/Processed/zip_boro_geo.csv',dtype={'ZIP':'object'})
# Merge to filter for stations in Brooklyn and Manhattan only
mta['STATION'] = (mta.STATION.str.strip().str.replace('AVE','AV')
                  .str.replace('STREET','ST').str.replace('COLLEGE','CO')
                  .str.replace('SQUARE','SQ').str.replace('STS','ST').str.replace('/','-'))
df = (mta.merge(zip_boro_station.loc[:,['STATION','BOROUGH']], on='STATION'))
df = df[(df.BOROUGH=='Manhattan')|(df.BOROUGH=='Brooklyn')]
# Convert to datetime
df["DATE_TIME"] = pd.to_datetime(df.DATE + " " + df.TIME, format="%m/%d/%Y %H:%M:%S")
df["DATE"] = pd.to_datetime(df.DATE, format="%m/%d/%Y")
df["TIME"] = pd.to_datetime(df.TIME)
# #### Drop Duplicates
# The RECOVR AUD entries seem to be irregular, so we drop them whenever they have a REGULAR duplicate.
# +
# Check for duplicates
duplicates_count = (df.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"])
.ENTRIES.count()
.reset_index()
.sort_values("ENTRIES", ascending=False))
print(duplicates_count.value_counts('ENTRIES'))
# Drop duplicates
df.sort_values(["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"],
inplace=True, ascending=False)
df.drop_duplicates(subset=["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"], inplace=True)
df.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"]).ENTRIES.count().value_counts()
# Drop the DESC and EXITS columns. To prevent errors when the cell is run multiple times, errors on drop are ignored
df = df.drop(["DESC","EXITS"], axis=1, errors="ignore")
# -
# #### Get late-night entries only
# Looking at the timestamps, we want the late-night entries rather than the hourly cumulative totals.
# Compare the first stamp of the evening against the last stamp of the following early morning, dropping the last day, for which we have no next-morning comparison.
evening = df[df.TIME.dt.time > dt.time(19,59)]
morning = df[df.TIME.dt.time < dt.time(4,1)]
first_stamp = (evening.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE"])
.ENTRIES.first())
last_stamp = (morning.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE"])
.ENTRIES.last())
timestamps = pd.merge(first_stamp, last_stamp, on=["C/A", "UNIT", "SCP", "STATION", "DATE"], suffixes=['_CUM_AM','_CUM_PM'])
timestamps.reset_index(inplace=True)
timestamps['ENTRIES_CUM_AM'] = (timestamps.groupby(["C/A", "UNIT", "SCP", "STATION"])
.ENTRIES_CUM_AM.shift(-1))
# Drop Sundays, where we don't have data from the next morning.
timestamps.dropna(subset=['ENTRIES_CUM_AM'], inplace=True)
timestamps.head()
# Get evening entries instead of cumulative totals. Take the absolute value, since some of the turnstiles count backwards.
timestamps['ENTRIES'] = abs(timestamps.ENTRIES_CUM_AM - timestamps.ENTRIES_CUM_PM)
timestamps.head()
# #### Get weekend data only
# We are only interested in the weekends, so let's filter for those. I am doing this now so that when we filter for outliers the mean will be more accurate (weekday entries skew the data).
timestamps['DAY_WEEK'] = timestamps.DATE.dt.dayofweek
weekend = timestamps[timestamps.DAY_WEEK.isin([3,4,5])]
weekend.head()
weekend.sort_values('ENTRIES', ascending=False).head()
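# As a quick sanity check of the day numbering: pandas' `dt.dayofweek` follows
# the same Monday=0 convention as the standard library's `date.weekday()`, so
# the filter above keeps Thursday (3) through Saturday (5) evenings, whose
# paired next-morning counts fall on Friday through Sunday. The dates below are
# arbitrary examples from the studied summer:

```python
from datetime import date

# June 6-9, 2019 fall on Thursday through Sunday
days = [date(2019, 6, d) for d in (6, 7, 8, 9)]
print([d.weekday() for d in days])  # [3, 4, 5, 6]
```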
# #### Cleaning
# +
# Cleaning Functions
def max_counter(row, threshold):
    counter = row['ENTRIES']
    if counter < 0:
        counter = -counter
    if counter > threshold:
        counter = row['MEDIAN']
    if counter > threshold:
        counter = 0
    return counter

def outl_to_med(x):
    res = (x['ENTRIES']*x['~OUTLIER'])+(x['MEDIAN']*x['OUTLIER'])
    return res
# -
# Replace outliers with the turnstile median
weekend['MEDIAN'] = (weekend.groupby(['C/A','UNIT','SCP','STATION'])
.ENTRIES.transform(lambda x: x.median()))
weekend['OUTLIER'] = (weekend.groupby(['C/A','UNIT','SCP','STATION'])
.ENTRIES.transform(lambda x: zscore(x)>3))
weekend['~OUTLIER'] = weekend.OUTLIER.apply(lambda x: not x)
weekend['ENTRIES'] = weekend.apply(outl_to_med, axis=1)
# There are still irregular values, set them to the updated median.
# If the median is still too high, replace with 0.
weekend['MEDIAN'] = (weekend.groupby(['C/A','UNIT','SCP','STATION'])
.ENTRIES.transform(lambda x: x.median()))
weekend['ENTRIES'] = weekend.apply(max_counter, axis=1, threshold=3500)
print(weekend.MEDIAN.max())
weekend[weekend.ENTRIES>3000].shape
weekend.sort_values('ENTRIES', ascending=False).head()
# Drop unnecessary columns
weekend.drop(['MEDIAN','OUTLIER','~OUTLIER', 'ENTRIES_CUM_AM', 'ENTRIES_CUM_PM'], axis=1, inplace=True, errors='ignore')
# Sanity Check: visualize to check for irregularities
import matplotlib.pyplot as plt
import seaborn as sns
weekend.info()
weekend['WEEK'] = weekend.DATE.dt.week
# +
per_week_station = weekend.groupby(['STATION','WEEK'])['ENTRIES'].sum().reset_index()
per_week_station.rename(columns={'ENTRIES':"WEEKEND_ENTRIES"}, inplace=True)
sns.relplot(x='WEEK', y='WEEKEND_ENTRIES', data=per_week_station, kind='line', hue='STATION')
plt.show()
# -
# Something is happening on week 26
# Upon closer inspection we can see that it corresponds to the 4th of July weekend.
# Many New Yorkers leave the city for that date, so it makes sense.
weekend[weekend.WEEK==26].head()
# ### Export
weekend.to_csv('Data/Processed/weekend_19.csv', index=False)
weekend_geo = weekend.merge(zip_boro_station, on='STATION')
weekend_geo.to_csv('Data/Processed/weekend_geo_19.csv', index=False)
# Export the total by station with its corresponding coordinates.
station_totals = weekend.groupby('STATION').ENTRIES.sum()\
.reset_index().merge(zip_boro_station, on='STATION')
station_totals.rename(columns={'ENTRIES':'TOTAL_ENTRIES'}, inplace=True)
station_totals.to_csv('Data/Processed/totals_geo_19.csv', index=False)
| mta_2019_cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#pip install splinter
# #!pip install webdriver_manager
# -
# Dependencies and Setup
from bs4 import BeautifulSoup
from splinter import Browser
import pandas as pd
import time
from webdriver_manager.chrome import ChromeDriverManager
# MAC: Set Executable Path & Initialize Chrome Browser
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
mars={}
#NASA Mars News Site
url = "https://redplanetscience.com/"
browser.visit(url)
# +
# Parse Results HTML with BeautifulSoup
html = browser.html
news_soup = BeautifulSoup(html, "html.parser")
# -
#News
news_title=news_soup.find("div", class_="content_title").text
news_p=news_soup.find("div", class_="article_teaser_body").text
mars["news_titles"]=news_title
mars["news_p"]=news_p
#JPL Mars Space Image
url = "https://spaceimages-mars.com/"
browser.visit(url)
browser.find_by_tag("button")[1].click()
html = browser.html
jplimage = BeautifulSoup(html, "html.parser")
image=jplimage.find('img',class_="fancybox-image").get('src')
image
featimgurl="https://spaceimages-mars.com/"+image
featimgurl
mars["featured_image_url"]=featimgurl
#Facts
url = "https://galaxyfacts-mars.com/"
df=pd.read_html(url)
df=df[0]
df
df.columns=["Description","Mars","Earth"]
df.set_index("Description",inplace=True)
df
df_html=df.to_html()
df_html
mars["facts"]=df_html
# +
#hemispheres
url = "https://marshemispheres.com/"
browser.visit(url)
result=browser.find_by_css("a.product-item img")
hemisphere_image_url=[]
for i in range(len(result)):
    hemisphere={}
    browser.find_by_css("a.product-item img")[i].click()
    element = browser.links.find_by_text('Sample').first
    img_url= element["href"]
    hemisphere["img_url"]=img_url
    hemisphere["title"]=browser.find_by_css("h2.title").text
    hemisphere_image_url.append(hemisphere)
    browser.back()
# -
hemisphere_image_url
mars["hemispheres"]=hemisphere_image_url
mars
# +
#convert notebook to a python script
#delete some lines (cells, ie run blah)
#grab to app.py (Day 3)
#index.html use same variables
# -
| Mission_to_Mars/Mission_to_Mars.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
myPic = cv2.imread('nature2')
gray = cv2.cvtColor(myPic, cv2.COLOR_BGR2GRAY)
img = cv2.GaussianBlur(gray,(3,3),0)
laplacian = cv2.Laplacian(img,cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F,1,0,ksize=5)
sobely = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=5)
plt.subplot(2,2,1),plt.imshow(img,cmap = 'gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,2),plt.imshow(laplacian,cmap = 'gray')
plt.title('Laplacian'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,3),plt.imshow(sobelx,cmap = 'gray')
plt.title('Sobel X'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,4),plt.imshow(sobely,cmap = 'gray')
plt.title('Sobel Y'), plt.xticks([]), plt.yticks([])
plt.show()
# -
| Sobel Filter/Sobel(1stOrder)&Laplachian(2ndOrder).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Compare decay calculation results of radioactivedecay high precision mode and PyNE
# This notebook compares decay calculation results between the Python package [radioactivedecay](https://pypi.org/project/radioactivedecay/) `v0.1.0`, using its high precision mode, and [PyNE](https://pyne.io/) `v0.7.3`. The PyNE decay data is read in by radioactivedecay, so both codes are using the same underlying decay data for the calculations.
#
# Note the following radionuclides were removed from the decay dataset fed to radioactivedecay, as these radionuclides are part of chains where two radionuclides have degenerate half-lives: Po-191, Pt-183, Bi-191, Pb-187, Tl-187, Rn-195, At-195, Hg-183, Au-183, Pb-180, Hg-176, Tl-177, Pt-172, Ir-172, Lu-153 and Ta-157. radioactivedecay cannot calculate decays for chains containing radionuclides with identical half-lives, and the PyNE treatment for these chains currently suffers from a [bug](https://github.com/pyne/pyne/issues/1342).
#
# First import the necessary modules for the comparison.
# +
import radioactivedecay as rd
import pyne
from pyne import nucname, data
from pyne.material import Material, from_activity
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
print("Package versions used: radioactivedecay", rd.__version__, "PyNE", pyne.__version__)
# -
# ### Load PyNE decay dataset into radioactivedecay, check a simple case
#
# Load the decay dataset created using the PyNE data into radioactivedecay.
pynedata = rd.DecayData(dataset="pyne_truncated", dir_path=".", load_sympy=True)
# Define some functions to perform radioactive decay calculations for a single radionuclide with radioactivedecay and PyNE. To compare the results we have to remove the stable radionuclides from the inventory returned by PyNE. We also have to convert the canonical radionuclide ids into string format (e.g. `10030000` to `H-3`), and sort the inventory alphabetically.
# +
def rd_decay(nuclide, time):
    """Perform a decay calculation for one radionuclide with radioactivedecay."""
    return rd.Inventory({nuclide: 1.0}, data=pynedata).decay_high_precision(time).contents

def add_hyphen(name):
    """Add a hyphen to a radionuclide name string, e.g. H3 to H-3."""
    for i in range(1, len(name)):
        if not name[i].isdigit(): continue
        name = name[:i] + "-" + name[i:]
        break
    return name

def strip_stable(inv):
    """Remove stable nuclides from a PyNE inventory."""
    new_inv = dict()
    for id in inv:
        if data.decay_const(id) <= 0.0: continue
        new_inv[add_hyphen(nucname.name(id))] = inv[id]
    return new_inv

def pyne_decay(nuclide, time):
    """Perform a decay calculation for one radionuclide with PyNE."""
    inv = strip_stable(from_activity({nucname.id(nuclide): 1.0}).decay(time).activity())
    return dict(sorted(inv.items(), key=lambda x: x[0]))
# -
# First let's compare the decay results for a single case - the decay of 1 unit (Bq or Ci) of Pb-212 for 0.0 seconds:
inv_rd = rd_decay("Pb-212", 0.0)
inv_pyne = pyne_decay("Pb-212", 0.0)
print(inv_rd)
print(inv_pyne)
# radioactivedecay returns activities for all the radionuclides in the decay chain below Pb-212, even though they have zero activity. PyNE does not return activities for two of the radionuclides (Bi-212 and Po-212), presumably because it evaluated their activities to be exactly zero. A small non-zero activity is returned for Tl-208, and the activity of Pb-212 deviates slightly from unity. The likely explanation for these results is round-off errors.
#
# For the below comparisons, it makes sense to compare the activities of the radionuclides returned by both radioactivedecay and PyNE, whilst checking the activities of the radionuclides missing from the inventories returned by either radioactivedecay or PyNE are negligible. This function cuts down the inventories returned by radioactivedecay and PyNE to just the radionuclides present in both inventories. It also reports how many radionuclides were removed from each inventory to do this.
def match_inventories(inv1, inv2):
    """Cuts down the two inventories so they only contain the radionuclides present in both.
    Also returns inventories of the radionuclides unique to each inventory."""
    s1 = set(inv1.keys())
    s2 = set(inv2.keys())
    s = s1.intersection(s2)
    inv1_intersection = {nuclide: inv1[nuclide] for nuclide in s}
    inv2_intersection = {nuclide: inv2[nuclide] for nuclide in s}
    inv1_difference = {nuclide: inv1[nuclide] for nuclide in s1.difference(s2)}
    inv2_difference = {nuclide: inv2[nuclide] for nuclide in s2.difference(s1)}
    return inv1_intersection, inv1_difference, inv2_intersection, inv2_difference
# ### Generate decay calculation comparisons between radioactivedecay and PyNE
#
# We now systematically compare the results of decay calculations performed using radioactivedecay and PyNE. The strategy is to set initial inventories containing 1 Bq of each radionuclide in the decay dataset, and then decay for various time periods that are factor multiples of that radionuclide's half-life. The factor multiples used are zero and each order of magnitude between 10<sup>-6</sup> and 10<sup>6</sup>, inclusive.
#
# We calculate the absolute activity error for each radionuclide returned by both radioactivedecay and PyNE in the decayed inventories, as well as the relative activity error to the PyNE activity. We store the results in a Pandas DataFrame. We also store the results for the radionuclides that are not returned in either the radioactivedecay or PyNE inventories for examination.
# +
hl_factors = [0.0, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6]
rows = []
rows_pyne_missing = []
rows_rd_missing = []
for hl_factor in hl_factors:
    for nuclide in pynedata.radionuclides:
        decay_time = hl_factor*rd.Radionuclide(nuclide, data=pynedata).half_life()
        rd_inv = rd_decay(nuclide, decay_time)
        pyne_inv = pyne_decay(nuclide, decay_time)
        rd_inv, rd_unique, pyne_inv, pyne_unique = match_inventories(rd_inv, pyne_inv)
        if len(rd_unique) != 0:
            for nuc, act in rd_unique.items():
                rows_pyne_missing.append({
                    "parent": nuclide,
                    "hl_factor": hl_factor,
                    "decay_time": decay_time,
                    "pyne_missing_nuclide": nuc,
                    "rd_activity": act
                })
        if len(pyne_unique) != 0:
            for nuc, act in pyne_unique.items():
                rows_rd_missing.append({
                    "parent": nuclide,
                    "hl_factor": hl_factor,
                    "decay_time": decay_time,
                    "rd_missing_nuclide": nuc,
                    "pyne_activity": act
                })
        for nuc, pyne_act in pyne_inv.items():
            rd_act = rd_inv[nuc]
            if pyne_act == 0.0:
                if rd_act == 0.0:
                    abs_err = 0.0
                    rel_err = 0.0
                else:
                    abs_err = abs(rd_act)
                    rel_err = np.inf
            else:
                abs_err = abs(pyne_act-rd_act)
                rel_err = abs_err/abs(pyne_act)
            rows.append({
                "parent": nuclide,
                "hl_factor": hl_factor,
                "decay_time": decay_time,
                "nuclide": nuc,
                "pyne_activity": pyne_act,
                "rd_activity": rd_act,
                "abs_err": abs_err,
                "rel_err": rel_err,
            })
    print(hl_factor, "complete")
df = pd.DataFrame(rows)
df_pyne_missing = pd.DataFrame(rows_pyne_missing)
df_rd_missing = pd.DataFrame(rows_rd_missing)
# -
# ### Examine cases where radionuclides not returned by radioactivedecay and PyNE
#
# First we check the cases where radionuclides returned by PyNE are not present in decayed inventory from radioactivedecay.
print("Radionuclides not returned by radioactivedecay:", df_rd_missing.rd_missing_nuclide.unique())
print("These cases arise from the decay chains of:", df_rd_missing.parent.unique())
print("Total number of missing cases:", len(df_rd_missing))
# radioactivedecay does not return activities for these radionuclides as the chains all pass through radionuclides which PyNE reports as having an undefined half-life:
print(data.half_life("He-9"), data.half_life("W-192"), data.half_life("Dy-170"), data.half_life("H-5"))
# PyNE clearly has some assumption to infer the half-lives and simulate these decay chains.
#
# Now we check the radionuclides not returned by PyNE:
print("Total number of missing cases:", len(df_pyne_missing))
df_pyne_missing.sort_values(by=["rd_activity"], ascending=False).head(n=10)
# The maximum activity of any radionuclide returned by radioactivedecay but not returned by PyNE is 2.4E-14 Bq. The activities in all other cases are lower than this. This could be related to PyNE filtering out any results it considers negligible.
# ### Comparing decayed activities between radioactivedecay and PyNE
#
# We now compare the decayed activities of radionuclides returned both by radioactivedecay and by PyNE. For the 237095 comparisons, the mean and maximum absolute errors are 1.9E-17 Bq and 5.8E-14 Bq, respectively:
df.describe()
df.sort_values(by=["abs_err", "rel_err", "pyne_activity"], ascending=False).head(n=10)
# For around 8% of the activities compared, the activities calculated by radioactivedecay and PyNE are identical:
len(df[df.abs_err == 0.0])
# Now plot the errors. For cases where the PyNE activity is 0.0, we set these activity values to 10<sup>-70</sup> and the relative errors to 10<sup>17</sup> to force the points to show on the panel (a) log-log graph.
# +
df.loc[(df["pyne_activity"] == 0.0) & (df["rd_activity"] != 0.0), "pyne_activity"] = 1e-70
df.loc[df["rel_err"] == np.inf, "rel_err"] = 1e17
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12.8,4.8))
cmap_colors = plt.get_cmap("plasma").colors
colors = ["black"]
n = len(hl_factors)-1
colors.extend([""] * n)
for i in range(0, n):
    colors[i+1] = cmap_colors[int(round((i*(len(cmap_colors)-1)/(n-1))))]
for i in range(0, len(hl_factors)):
    ax[0].plot(df[df.hl_factor == hl_factors[i]].pyne_activity, df[df.hl_factor == hl_factors[i]].rel_err,
               marker=".", linestyle="", label=hl_factors[i], color=colors[i])
ax[0].set(xlabel="PyNE activity (Bq)", ylabel="relative error", xscale="log", yscale="log")
ax[0].legend(loc="upper left", title="half-life factor")
ax[0].text(-0.15, 1.0, "(a)", transform=ax[0].transAxes)
ax[0].set_xlim(1e-104, 1e4)
ax[0].set_ylim(1e-18, 1e18)
for i in range(0, len(hl_factors)):
    ax[1].plot(df[df.hl_factor == hl_factors[i]].abs_err, df[df.hl_factor == hl_factors[i]].rel_err,
               marker=".", linestyle="", label=hl_factors[i], color=colors[i])
ax[1].set(xlabel="absolute error", ylabel="relative error", xscale="log", yscale="log")
ax[1].legend(loc="upper right", title="half-life factor")
ax[1].set_xlim(1e-25, 1e-7)
_ = ax[1].text(-0.15, 1.0, "(b)", transform=ax[1].transAxes)
# -
# In all cases the differences in activities reported by radioactivedecay and by PyNE are small (< 1E-13 Bq). Relative errors tend to increase as the radioactivity reported by PyNE decreases from 1 Bq. Relative errors greater than 1E-4 only occur when the PyNE activity is smaller than 2.5E-11.
df[df.rel_err > 1E-4].pyne_activity.max()
# Export the DataFrames to CSV files:
df.to_csv('radioactivedecay_high_precision_pyne.csv')
df_pyne_missing.to_csv('radioactivedecay_high_precision_pyne_pyne_missing.csv')
df_rd_missing.to_csv('radioactivedecay_high_precision_pyne_rd_missing.csv')
# ### Summary
#
# The activity results reported by radioactivedecay and PyNE differ by less than 1E-13 Bq, given an initial inventory of 1 Bq of the parent radionuclide. Both radioactivedecay and PyNE use double precision floating point arithmetic. PyNE also uses some slightly different treatments than radioactivedecay for calculating decay chains passing through almost stable radionuclides, and for filtering out radionuclides with negligible activities.
#
# In summary, we conclude that the results calculated by radioactivedecay and PyNE using the PyNE decay dataset are identical to within reasonable expectations for numerical precision.
| notebooks/comparisons/pyne/rd_high_precision_pyne_truncated_compare.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Symbolic Aggregate Approximation
#
# ### 1. [reference](http://dl.acm.org/citation.cfm?id=1285965)
# ### 2. main usage for time series data:
# 1. indexing and query
# 2. calculating distance between time-series and thus performing clustering/classification
# 3. symbolic representation for time series - inspiring text-mining related tasks such as association mining
# 4. vector representation of time-series
#
# ### 3. algorithm steps
#
# 1. Segment time-series data into gapless pieces (e.g., gaps introduced by missing values or changes of sampling frequency)
#
# 2. Each piece will be SAXed into a sequence of "words" (e.g., "abcdd" "aabcd", ...). This is done by rolling a sliding window of length $window$ with a stride of length $stride$. If $stride$ < $window$, there will be overlapping of different windows. Later each window will be converted into one word
#
# 3. for each sliding window:
#
# 3.1 whiten/normalize across the window (this step is key to many problems)
#
# 3.2 discretize on time axis (index) by grouping points into equal-sized bins (bin sizes could be fractional) - controlled by $nbins$. For each bin, use the mean of bin as local approximation.
#
# 3.3 discretize on value axis by dividing values into $nlevels$ quantiles (equiprobability), for each level, calculate the "letter" by $cutpoint$ table
#
# 3.4 at the end, each bin in a sliding window will be mapped to a letter, each window in the piece of time-series will be mapped to a word, and the whole piece of series will be a sentence
#
# 3.5 calculate the distance between two symbolic representations by their corresponding levels
#
# 3.6 if a vector representation is necessary, each letter can be mapped to a scalar value, such as the mean of the corresponding level.
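# The per-window pipeline in steps 3.1-3.3 can be sketched as follows. This is
# a deliberately simplified pure-Python version, written in Python 3 (unlike
# this Python 2 notebook), and it assumes the window length divides evenly into
# `nbins`; pysax itself also handles fractional bin sizes. It is not the pysax
# implementation:

```python
from bisect import bisect
from statistics import mean, pstdev

def sax_word(window, nbins, cutpoints, alphabet):
    # 3.1 whiten: z-normalize the window (constant windows map to all zeros)
    mu, sigma = mean(window), pstdev(window)
    zs = [(x - mu) / sigma for x in window] if sigma > 0 else [0.0] * len(window)
    # 3.2 PAA: equal-sized bins, each approximated by its mean
    size = len(zs) // nbins
    paa = [mean(zs[i * size:(i + 1) * size]) for i in range(nbins)]
    # 3.3 map each bin mean to a letter via the cutpoint table
    return "".join(alphabet[bisect(cutpoints, v)] for v in paa)

# breakpoints splitting a standard normal into 3 equiprobable levels
print(sax_word([1, 2, 3, 4, 5, 6], nbins=3, cutpoints=[-0.43, 0.43], alphabet="ABC"))  # ABC
```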
# ## sax module test
import matplotlib.pyplot as plt
# %matplotlib inline
import pysax
import numpy as np
reload(pysax)
sax = pysax.SAXModel(window=3, stride=2)
sax.sym2vec
## test normalization
sax = pysax.SAXModel(window=3, stride=2)
list(sax.sliding_window_index(10))
ws = np.random.random(10)
print ws.mean(), ws.std()
ss = sax.whiten(ws)
print ss.mean(), ss.std()
# +
## explore binning
from fractions import Fraction
def binpack(xs, nbins):
    xs = np.asarray(xs)
    binsize = Fraction(len(xs), nbins)
    wts = [1 for _ in xrange(int(binsize))] + [binsize-int(binsize)]
    pos = 0
    while pos < len(xs):
        if wts[-1] == 0:
            n = len(wts) - 1
        else:
            n = len(wts)
        yield zip(xs[pos:(pos+n)], wts[:n])
        pos += len(wts) - 1
        rest_wts = binsize-(1-wts[-1])
        wts = [1-wts[-1]] + [1 for _ in xrange(int(rest_wts))] + [rest_wts-int(rest_wts)]
xs = range(0, 16)
print list(binpack(xs, 5))
xs = range(0, 16)
print list(binpack(xs, 4))
xs = range(0, 5)
print list(binpack(xs, 3))
# -
## test binning
sax = pysax.SAXModel(nbins=3)
print list(sax.binpack(np.ones(5)))
print
print list(sax.binpack(np.ones(9)))
## explore symbolization
import pandas as pd
cutpoints = [-np.inf, -0.43, 0.43, np.inf]
xs = np.random.random(10)
v = pd.cut(xs, bins = cutpoints, labels=["A", "B", "C"])
v
xs = np.random.randn(10)
print xs
sax = pysax.SAXModel(window=3, stride=2)
sax.symbolize(xs)
sax = pysax.SAXModel(nbins = 5, alphabet="ABCD")
xs = np.random.randn(20) * 2 + 1.
print xs
sax.symbolize_window(xs)
sax = pysax.SAXModel(window=20, stride = 5, nbins = 5, alphabet="ABCD")
xs = np.random.randn(103) * 2 + np.arange(103) * 0.03
plt.plot(xs)
print sax.symbolize_signal(xs)
reload(pysax)
sax = pysax.SAXModel(window=20, stride = 20, nbins = 5, alphabet="ABCD")
xs = np.random.randn(103) * 2 + np.arange(103) * 0.03
words = sax.symbolize_signal(xs)
ts_indices = sax.convert_index(word_indices=range(len(words)))
word_indices = sax.convert_index(ts_indices = range(len(xs)))
print words
print ts_indices
print word_indices
import pysax
import numpy as np
reload(pysax)
sax = pysax.SAXModel(window=20, stride = 5, nbins = 5, alphabet="ABCD")
xs = np.random.randn(1000000) * 2 + np.arange(1000000) * 0.03
#plt.plot(xs)
# %time psymbols = sax.symbolize_signal(xs, parallel="joblib", n_jobs=30)
sax = pysax.SAXModel(window=20, stride = 5, nbins = 5, alphabet="ABCD")
#xs = np.random.randn(1000000) * 2 + np.arange(1000000) * 0.03
#plt.plot(xs)
# %time symbols = sax.symbolize_signal(xs)
print np.all(psymbols==symbols)
## test symbol to vector
# %time vecs = sax.symbol_to_vector(psymbols)
vecs.shape
## test symbol distance
reload(pysax)
sax = pysax.SAXModel(window=20, stride = 5, nbins = 5, alphabet="ABCD")
sax.symbol_distance(psymbols[0], psymbols[1]), sax.symbol_distance(psymbols[1], psymbols[2])
v1, v2, v3 = sax.symbol_to_vector(psymbols[:3])
np.sqrt(np.sum( (v1-v2)**2 )), np.sqrt(np.sum( (v2-v3)**2 ))
psymbols[:3]
## test paa vectors
import pysax
import numpy as np
reload(pysax)
sax = pysax.SAXModel(window=20, stride = 5, nbins = 5, alphabet="ABCD")
#xs = np.random.randn(1000000) * 2 + np.arange(1000000) * 0.03
#plt.plot(xs)
# %time vecs = sax.signal_to_paa_vector(xs, n_jobs=30)
vecs[:10, :]
psymbols[:10]
| MSfingerprinter/MSfingerprinter/pysaxmaster/Tutorial-SAX (Symbolic Aggregate Approximation).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/m0fauzi/BEP2073_S21/blob/main/L10_2a.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="aMpLvWNyweQk"
import pandas as pd
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/"} id="049A9Dsbw1Xr" outputId="73f9c569-99a0-4131-9af2-5522298a9c3e"
data = pd.read_csv('toyota.csv')
print(data)
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="R6Z0YgzUxRyZ" outputId="25219a37-0699-40e2-a8a2-5b74bb002e70"
plt.plot(data.year,data.price,'g.')
plt.xlabel('Year')
plt.ylabel('Price')
# + colab={"base_uri": "https://localhost:8080/"} id="q-2EQ_kzx5hS" outputId="136b4218-ffc0-4155-9225-9489fd503787"
# !pip install -U tensorflow-addons
# + id="TXQnU5YYxw7L"
# AI & ML
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# + id="qN9JmMWj8fke"
trn_dataset = data.sample(frac=0.8,random_state=0)
trn_in = trn_dataset.copy()
trn_out = trn_in.pop('price')
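# A held-out test set is not built in the cell above; one common way to get it is to drop the sampled training indices from the frame. A sketch, with a toy DataFrame standing in for `toyota.csv`:

```python
import pandas as pd

# Toy stand-in for the toyota.csv data.
data = pd.DataFrame({"year": [2015, 2016, 2017, 2018, 2019],
                     "price": [8000, 9500, 11000, 12500, 14000]})

trn_dataset = data.sample(frac=0.8, random_state=0)   # 80% for training
tst_dataset = data.drop(trn_dataset.index)            # remaining 20% for testing

print(len(trn_dataset), len(tst_dataset))  # 4 1
```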
# + id="YivfoTZPzJrK"
# Create ANN structure
model = keras.Sequential([
layers.Dense(10,activation='relu'),
layers.Dense(10,activation='relu'),
layers.Dense(20,activation='relu'),
layers.Dense(1,activation='relu')])
# Train Model
model.compile(loss="mean_absolute_error",optimizer=keras.optimizers.Adam(learning_rate=0.01))
trn_Hist = model.fit(trn_in,trn_out,validation_split=0.2,epochs=500)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="vkinkrxv12Q4" outputId="ab1f7ef9-98ce-4f68-d09d-d04418e82023"
# Train performance
plt.plot(trn_Hist.history['loss'],label='loss')
# + id="8JY-XdBx3nX5"
#training variables, weight
print(model.weights)
# + id="ZzmXwa3S4TPJ"
# Evaluate
# Evaluate Multi Model
price_hat = model.predict(trn_in).flatten()
# + colab={"base_uri": "https://localhost:8080/", "height": 352} id="lSUbjK-P48jI" outputId="e7f8c369-6ac5-4404-9db5-bf6bb6905d95"
plt.plot(trn_in,trn_out,'g.') # data from experiment
plt.plot(trn_in,price_hat,'r.') # data from ANN
# + id="nJc0vjkg5Na4"
print(price_hat)
| L10_2a.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import qcodes as qc
# # QCoDeS config
#
# The QCoDeS config module uses JSON files to store QCoDeS configuration.
#
# The config file controls various options to QCoDeS such as the default path and name of the database in which your data is stored and logging level of the debug output. QCoDeS is shipped with a default configuration. As we shall see, you can overwrite these default values in several different ways to customize the configuration. In particular, you may want to change the path of your database which by default is `~/experiments.db` (here, `~` stands for the path of the user's home directory). In the following example, I have changed the default path of my database, represented by the key `db_location`,
# in such a way that my data will be stored inside a sub-folder within my home folder.
#
# QCoDeS loads both the defaults and the active configuration at the module import so that you can directly inspect them
qc.config.current_config
qc.config.defaults
# One can inspect what the configuration options mean at runtime
print(qc.config.describe('core'))
# ## Configuring QCoDeS
# Defaults are the settings that are shipped with the package, which you can overwrite programmatically.
# A way to customize QCoDeS is to write your own JSON files; they are expected to be in the directories printed below.
# One of them will be empty until you define the corresponding environment variable in your OS.
#
# They are ordered by "weight", meaning that the last file always wins if it's overwriting any preconfigured defaults or values in the other files.
#
# Simply copy the file to the directories and you are good to go.
print("\n".join([qc.config.home_file_name, qc.config.env_file_name, qc.config.cwd_file_name]))
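# The "last file wins" merge order can be illustrated with plain Python (no qcodes needed); the file names and values below are made up for the demonstration.

```python
import json

# Three config layers, ordered by increasing weight.
defaults = {"core": {"loglevel": "WARNING", "db_location": "~/experiments.db"}}
home_cfg = {"core": {"loglevel": "INFO"}}
cwd_cfg = {"core": {"db_location": "./local.db"}}

def merge(base, override):
    # Recursively overlay `override` onto `base`; later layers win.
    out = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge(out[k], v)
        else:
            out[k] = v
    return out

active = defaults
for layer in (home_cfg, cwd_cfg):  # applied in order of increasing weight
    active = merge(active, layer)

print(json.dumps(active, indent=2))
```

The heaviest layer's `db_location` and the home layer's `loglevel` both survive, since merging is per-key rather than per-file.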
# The easiest way to add something to the configuration is to use the provided helper:
qc.config.add("base_location", "/dev/random", value_type="string", description="Location of data", default="/dev/random")
# This will add a `base_location` entry with the value `/dev/random` to the current configuration, validate its value to be of type string, and set its description and default.
# The new entry is saved in the 'user' part of the configuration.
print(qc.config.describe('user.base_location'))
# You can also manually update the configuration from a specific file by supplying the path to the directory directly to `qc.config.update_config` as shown below.
qc.config.update_config(path="C:\\Users\\jenielse\\")
# ## Saving changes
# All the changes made to the defaults are stored, and one can then decide to save them to the expected place.
help(qc.config.save_to_cwd)
help(qc.config.save_to_env)
help(qc.config.save_to_home)
# ### Using a custom configured variable in your experiment:
#
# Simply get the value you have set before with dot notation.
# For example:
loc_provider = qc.data.location.FormatLocation(fmt=qc.config.user.base_location)
qc.data.data_set.DataSet.location_provider=loc_provider
# ## Changing core
# One can change the core values at runtime, but there is no guarantee that they are going to be valid.
# Since user configuration shadows the default one that comes with QCoDeS, apply care when changing the values under `core` section. This section is, primarily, meant for the settings that are determined by QCoDeS core developers.
qc.config.current_config.core.loglevel = 'INFO'
# But one can manually validate via
qc.config.validate()
# Which will raise an exception in case of bad inputs
qc.config.current_config.core.loglevel = 'YOLO'
qc.config.validate()
# Note that you now have a broken config!
| docs/examples/Configuring_QCoDeS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [Introduction to Bayesian Optimization (ベイズ最適化入門)](https://qiita.com/masasora/items/cc2f10cb79f8c0a6bbaa)
# https://github.com/Ma-sa-ue/practice/blob/master/machine%20learning(python)/bayeisan_optimization.ipynb
# The original code was written for Python 2;
# a few modifications were needed to make it run under Python 3.
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import sys
np.random.seed(seed=123)
# %matplotlib inline
# +
#### kernel
def my_kernel(xn,xm,a1=200.0,a2=0.1,a3=1.0,a4=10.0):
return a1*np.exp(-a2*0.5*np.dot(xn - xm, xn - xm))
def my_kernel2(xn,xm,a1=200.0,a2=0.1,a3=1.0,a4=10.0):
return a1*np.exp(-a2*0.5*(xn - xm)**2)
### Gaussian process
def pred(_x,_y,newpoint):
### gram matrix
# aaa=np.array([my_kernel(i,j) for i in _x for j in _x])
# print(aaa.shape,aaa)
# K = aaa.reshape([np.shape(_x)[0],np.shape(_x)[0]])
K = np.zeros([len(_x),len(_x)])
for i in range(len(_x)):
for j in range(len(_x)):
K[i,j] = my_kernel2(_x[i], _x[j])
# aux = np.array([my_kernel(i,newpoint) for i in _x])
aux = 0.0*_x
for i in range(len(_x)):
aux[i] = my_kernel2(_x[i], newpoint)
mu = np.dot(aux,np.dot(np.linalg.inv(K),_y))
vari = my_kernel(newpoint,newpoint)-np.dot(aux,np.dot(np.linalg.inv(K+np.identity(len(_x))),aux))
vari = my_kernel2(newpoint,newpoint)-np.dot(aux,np.dot(np.linalg.inv(K+np.identity(len(_x))),aux))
return (mu,vari)
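# The double loop that fills the Gram matrix above can be vectorized with NumPy broadcasting. A sketch with the same RBF kernel (illustrative values; not a change to the tutorial's code):

```python
import numpy as np

def rbf_gram(x, a1=200.0, a2=0.1):
    # Pairwise squared distances via broadcasting, then the RBF kernel.
    d2 = (x[:, None] - x[None, :]) ** 2
    return a1 * np.exp(-0.5 * a2 * d2)

x = np.array([0.0, 1.0, 2.0])
K = rbf_gram(x)
print(K.shape)  # (3, 3)
```

The result is symmetric with `a1` on the diagonal, matching `my_kernel2(x, x)` evaluated pointwise.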
# +
def generate_sample(x):
return 40.0*np.sin(x/1.0)-np.power(0.3*(x+6.0),2)-np.power(0.2*(x-4.0),2)-1.0*np.abs(x+2.0)+np.random.normal(0,1,1)
x_ziku = np.linspace(-20,20,1000)
#z_ziku = map(generate_sample,x_ziku)
z_ziku = list(map(generate_sample,x_ziku)) #for python3
plt.plot(x_ziku, z_ziku) #### plot true data
plt.show()
#sys.exit()
def maximum(x):
# return max(xrange(np.shape(x)[0]), key=lambda i:x[i])
return max(range(np.shape(x)[0]), key=lambda i:x[i])
#### EI
def aqui1(mean,vari,qqq):
lamb = (mean - qqq)/(vari*1.0)
z = np.array([(mean[i] - qqq)*norm.cdf(lamb[i]) + vari[i]*norm.pdf(lamb[i]) for i in range(len(lamb))])
return z
#### PI
def aqui2(mean,vari,qqq):
lamb = (mean - qqq-0.01)/(vari*1.0)
z = np.array([norm.cdf(lamb[i]) for i in range(len(lamb))])
return z
#### UCB
def aqui3(mean,vari,qqq):
return mean+1.0*vari
# -
x_array = np.array([])
y_array = np.array([])
x_point = np.random.uniform(-20,20)
epoch=15
plt.figure(figsize=(20, 50))
for i in range(epoch):
if x_point not in x_array:
x_array = np.append(x_array,x_point)
# print "x_point"+str(x_point)
print ("x_point"+str(x_point))
y_point = generate_sample(x_point)
y_array = np.append(y_array,y_point)
#y_array = np.unique(y_array)
mean_point = np.array([ pred(x_array,y_array,j)[0] for j in x_ziku])
variance_point = np.array([ pred(x_array,y_array,j)[1] for j in x_ziku])
qqq = max(y_array)
accui = aqui3(mean_point,variance_point,qqq) ###change this function
x_point = x_ziku[maximum(accui)]+np.random.normal(0,0.01,1)
if(i%1==0):
plt.subplot(epoch*2,2,i*2+1)
plt.plot(x_ziku,np.array(mean_point),color="red",label="mean")
plt.plot(x_ziku,z_ziku,color="yellow")
high_bound = mean_point+ 1.0*variance_point
lower_bound = mean_point- 1.0*variance_point
plt.fill_between(x_ziku,high_bound,lower_bound,color="green",label="confidence")
plt.xlim(-20,20)
plt.ylim(-100,100)
plt.scatter(x_array,y_array)
plt.subplot(epoch*2,2,i*2+2)
plt.plot(x_ziku,accui)
plt.savefig("bayes_UCB.png")### change the name
plt.show()
#print "finish"
print ("finish")
| sandbox1/Trial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bayesian Survival Analysis
#
# Author: <NAME>
#
# [Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3.
#
# We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
# %matplotlib inline
# +
import arviz as az
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from matplotlib import pyplot as plt
from pymc3.distributions.timeseries import GaussianRandomWalk
from theano import tensor as T
# -
df = pd.read_csv(pm.get_data("mastectomy.csv"))
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == "yes").astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
# Each row represents observations from a woman diagnosed with breast cancer that underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery.
#
# This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized.
# #### A crash course in survival analysis
#
# First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function
#
# $$S(t) = P(T > t) = 1 - F(t),$$
#
# where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is,
#
# $$\begin{align*}
# \lambda(t)
# & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\
# & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\
# & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t + \Delta t) - S(t)}{\Delta t}
# = -\frac{S'(t)}{S(t)}.
# \end{align*}$$
#
# Solving this differential equation for the survival function shows that
#
# $$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$
#
# This representation of the survival function shows that the cumulative hazard function
#
# $$\Lambda(t) = \int_0^t \lambda(s)\ ds$$
#
# is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$
#
# An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
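# The identity $S(t) = \exp(-\Lambda(t))$ can be checked numerically for a constant hazard, where $\Lambda(t) = \lambda t$. A small sanity-check sketch (not part of the analysis):

```python
import numpy as np

# Constant hazard rate: Lambda(t) = lam * t, so S(t) = exp(-lam * t).
lam = 0.5
t = np.linspace(0.0, 10.0, 1001)

# Integrate the (constant) hazard numerically with a cumulative sum...
Lambda = np.concatenate([[0.0], np.cumsum(np.diff(t) * lam)])
S = np.exp(-Lambda)

# ...and compare with the closed-form exponential survival function.
print(np.allclose(S, np.exp(-lam * t)))  # True
```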
df.event.mean()
# Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
# +
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(
patients[df.event.values == 0], 0, df[df.event.values == 0].time, color=blue, label="Censored"
)
ax.hlines(
patients[df.event.values == 1], 0, df[df.event.values == 1].time, color=red, label="Uncensored"
)
ax.scatter(
df[df.metastized.values == 1].time,
patients[df.metastized.values == 1],
color="k",
zorder=10,
label="Metastized",
)
ax.set_xlim(left=0)
ax.set_xlabel("Months since mastectomy")
ax.set_yticks([])
ax.set_ylabel("Subject")
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc="center right");
# -
# When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`.
#
# This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[<NAME>, <NAME>, and <NAME>. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.]
# #### Bayesian proportional hazards model
#
# The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as
#
# $$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$
#
# Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`.
#
# Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that
#
# $$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$
#
# If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable.
#
# In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$.
#
# A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
# We see how deaths and censored observations are distributed in these intervals.
# +
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(
df[df.event == 1].time.values,
bins=interval_bounds,
color=red,
alpha=0.5,
lw=0,
label="Uncensored",
)
ax.hist(
df[df.event == 0].time.values,
bins=interval_bounds,
color=blue,
alpha=0.5,
lw=0,
label="Censored",
)
ax.set_xlim(0, interval_bounds[-1])
ax.set_xlabel("Months since mastectomy")
ax.set_yticks([0, 1, 2, 3])
ax.set_ylabel("Number of observations")
ax.legend();
# -
# With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see <NAME>'s WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).)
#
# We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval,
#
# $$d_{i, j} = \begin{cases}
# 1 & \textrm{if subject } i \textrm{ died in interval } j \\
# 0 & \textrm{otherwise}
# \end{cases}.$$
# +
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
# -
# We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
# Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$.
#
# We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
SEED = 644567 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma("lambda0", 0.01, 0.01, shape=n_intervals)
beta = pm.Normal("beta", 0, sigma=1000)
lambda_ = pm.Deterministic("lambda_", T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
# We now sample from the model.
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
# We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
np.exp(trace["beta"].mean())
az.plot_posterior(trace, var_names=["beta"], color="#87ceeb");
az.plot_autocorr(trace, var_names=["beta"]);
# We now examine the effect of metastization on both the cumulative hazard and on the survival function.
base_hazard = trace["lambda0"]
met_hazard = trace["lambda0"] * np.exp(np.atleast_2d(trace["beta"]).T)
# +
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
# -
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2.0, 1.0 - alpha / 2.0])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
# +
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(
interval_bounds[:-1], base_hazard, cum_hazard, hazard_ax, color=blue, label="Had not metastized"
)
plot_with_hpd(
interval_bounds[:-1], met_hazard, cum_hazard, hazard_ax, color=red, label="Metastized"
)
hazard_ax.set_xlim(0, df.time.max())
hazard_ax.set_xlabel("Months since mastectomy")
hazard_ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
hazard_ax.legend(loc=2)
plot_with_hpd(interval_bounds[:-1], base_hazard, survival, surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival, surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max())
surv_ax.set_xlabel("Months since mastectomy")
surv_ax.set_ylabel("Survival function $S(t)$")
fig.suptitle("Bayesian survival model");
# -
# We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard.
#
# These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates.
# ##### Time varying effects
#
# Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$.
#
# We implement this model in `pymc3` as follows.
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma("lambda0", 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk("beta", tau=1.0, shape=n_intervals)
lambda_ = pm.Deterministic("h", lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
# We proceed to sample from this model.
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
az.plot_forest(time_varying_trace, var_names=["beta"]);
# We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines over time, with $\beta_j < 0$ eventually.
# +
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace["beta"], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high, color=blue, alpha=0.25)
beta_hat = time_varying_trace["beta"].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue)
ax.scatter(
interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red,
zorder=10,
label="Died, cancer metastized",
)
ax.scatter(
interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue,
zorder=10,
label="Censored, cancer metastized",
)
ax.set_xlim(0, df.time.max())
ax.set_xlabel("Months since mastectomy")
ax.set_ylabel(r"$\beta_j$")
ax.legend();
# -
# The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized survived past this point during the study.
#
# The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
tv_base_hazard = time_varying_trace["lambda0"]
tv_met_hazard = time_varying_trace["lambda0"] * np.exp(np.atleast_2d(time_varying_trace["beta"]))
# +
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(
interval_bounds[:-1],
cum_hazard(base_hazard.mean(axis=0)),
color=blue,
label="Had not metastized",
)
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)), color=red, label="Metastized")
ax.step(
interval_bounds[:-1],
cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue,
linestyle="--",
label="Had not metastized (time varying effect)",
)
ax.step(
interval_bounds[:-1],
cum_hazard(tv_met_hazard.mean(axis=0)),
color=red,
linestyle="--",
label="Metastized (time varying effect)",
)
ax.set_xlim(0, df.time.max() - 4)
ax.set_xlabel("Months since mastectomy")
ax.set_ylim(0, 2)
ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
ax.legend(loc=2);
# +
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(
interval_bounds[:-1],
tv_base_hazard,
cum_hazard,
hazard_ax,
color=blue,
label="Had not metastized",
)
plot_with_hpd(
interval_bounds[:-1], tv_met_hazard, cum_hazard, hazard_ax, color=red, label="Metastized"
)
hazard_ax.set_xlim(0, df.time.max())
hazard_ax.set_xlabel("Months since mastectomy")
hazard_ax.set_ylim(0, 2)
hazard_ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
hazard_ax.legend(loc=2)
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival, surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival, surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max())
surv_ax.set_xlabel("Months since mastectomy")
surv_ax.set_ylabel("Survival function $S(t)$")
fig.suptitle("Bayesian survival model with time varying effects");
# -
# We have really only scratched the surface of both survival analysis and the Bayesian approach to survival analysis. More information on Bayesian survival analysis is available in Ibrahim et al. (2005). (For example, we may want to account for individual frailty in either our original or time-varying models.)
#
# This tutorial is available as an [IPython](http://ipython.org/) notebook [here](https://gist.github.com/AustinRochford/4c6b07e51a2247d678d6). It is adapted from a blog post that first appeared [here](http://austinrochford.com/posts/2015-10-05-bayes-survival.html).
# %load_ext watermark
# %watermark -n -u -v -iv -w
| examples/survival_analysis/survival_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gas sensors for home activity monitoring Data Set
# Abstract: 100 recordings of a sensor array under different conditions in a home setting: background, wine and banana presentations. The array includes 8 MOX gas sensors, and humidity and temperature sensors.
import pandas as pd #import pandas libray
with open("data1.dat") as f: # Conversion of data1.dat file into data1.csv file
with open("data1.csv","w") as f1:
for line in f :
f1.write(line)
data=pd.read_csv("data1.csv",header=None) # Read file data1.csv and remove header of features
data.head() # Display of top 5 rows without headers.
# Complete data lies in a single column.
data[["id","time","R1","R2","R3","R4","R5","R6","R7","R8","Temperature","Humidity"]]= data[0].str.split(expand=True)
# Split that single column into 12 features but that 0th column stays.
data_new=data.drop([0],axis=1) # Dropping of 0th column and makes a new file named data_new.
data_new=data_new.drop(["time"],axis=1) # Dropping of "time" column.
data_new.head() # Display of top 5 rows in data_new.
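# The copy-to-.csv and split-one-column steps above can also be done in a single `read_csv` call with a whitespace separator. A sketch on an in-memory sample standing in for `data1.dat` (shortened column list for the demo):

```python
import io

import pandas as pd

cols = ["id", "time", "R1", "R2"]        # shortened column list for the demo
sample = "1 0.0 10 20\n2 0.5 11 21\n"    # stands in for a few fields of data1.dat
# sep=r"\s+" splits on any run of whitespace, so no intermediate .csv is needed.
df = pd.read_csv(io.StringIO(sample), sep=r"\s+", header=None, names=cols)
df = df.drop(columns=["time"])
print(df.shape)  # (2, 3)
```

With the full twelve-name column list and the path to `data1.dat`, this replaces the file-copy and `str.split` cells in one step.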
with open("data2.dat") as f: #Conversion of data2.dat into data2.csv.
with open("data2.csv","w") as f1:
for line in f :
f1.write(line)
mtdata=pd.read_csv("data2.csv",header=None)# Reading data2.csv without headers and named it as mtdata.
mtdata.head() # Display of top 5 rows of mtdata file.
mtdata[["id","date","class","t0","td"]]= mtdata[0].str.split(expand=True)
#Splitting of 0th column into 5 more columns.
mtdata_new=mtdata.drop(["date","t0","td",0],axis=1)
#Dropping of "date","initial time" (t0),"final time"(td) features from mt_data.
mtdata_new.head() # Top 5 rows in mtdata_new.
Data=pd.merge(data_new,mtdata_new,on="id") # Merging of data_new and mtdata_new on column "id" and give Data file.
Data.head()
Data1=Data.drop([0]) # Drop the first row (index 0) from the Data file and name the result Data1.
Data1.head()
x=Data1.drop(["class"],axis=1) # Assign independent features to x.
y=Data1["class"] # Assign dependent features to y.
from sklearn.linear_model import LogisticRegression
# Import Logistic regression from sklear library.
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=.20,random_state=123)
# Split of data into train and test in the ratio 4:1.
lg=LogisticRegression() # Application of Logistic Regression
lg.fit(x_train,y_train)
pred_lg=lg.predict(x_test) # Do prediction for x_test data.
accuracy_score(y_test,pred_lg) # Calculate accuracy score.
confusion_matrix(y_test,pred_lg) # Calculation of confusion matrix in logistic regression model.
# # We first fix the single-column split problem, merge the two data sheets, and apply logistic regression after an 80:20 train/test split, obtaining an accuracy of 0.7336
# # Applying decision tree to get more accurate results
from sklearn.tree import DecisionTreeClassifier
# Calling Decision tree.
dt=DecisionTreeClassifier()
dt.fit(x_train,y_train)
pred=dt.predict(x_test) # Make prediction on x_test data.
accuracy_score(pred,y_test) # Calculating accuracy score.
confusion_matrix(pred,y_test) # Confusion matrix for decision tree
# # The decision tree model attains an accuracy of 0.9999085.
| project .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
'''
PaddlePaddle implementation of a feed-forward neural network.
'''
import paddle
from paddle import nn, optimizer, metric
# Set the hyperparameters.
INPUT_SIZE = 784
HIDDEN_SIZE = 256
NUM_CLASSES = 10
EPOCHS = 5
BATCH_SIZE = 64
LEARNING_RATE = 1e-3
# Build the feed-forward neural network.
paddle_model = nn.Sequential(
    # Hidden layer with 256 neurons.
    nn.Linear(INPUT_SIZE, HIDDEN_SIZE),
    # ReLU activation.
    nn.ReLU(),
    # Output layer with 10 neurons.
    nn.Linear(HIDDEN_SIZE, NUM_CLASSES)
)
# Initialize the feed-forward network model.
model = paddle.Model(paddle_model)
# Prepare the model for training: set the optimizer, the loss function, and the evaluation metric.
model.prepare(optimizer=optimizer.Adam(learning_rate=LEARNING_RATE, parameters=model.parameters()),
              loss=nn.CrossEntropyLoss(),
              metrics=metric.Accuracy())
# +
import pandas as pd
#使用pandas,读取fashion_mnist的训练和测试数据文件。
train_data = pd.read_csv('../datasets/fashion_mnist/fashion_mnist_train.csv')
test_data = pd.read_csv('../datasets/fashion_mnist/fashion_mnist_test.csv')
# Separate the training features and class labels from the training data.
X_train = train_data[train_data.columns[1:]]
y_train = train_data['label']
# Separate the test features and class labels from the test data.
X_test = test_data[test_data.columns[1:]]
y_test = test_data['label']
# +
from sklearn.preprocessing import StandardScaler
# Initialize the standard scaler.
ss = StandardScaler()
# Standardize the training features.
X_train = ss.fit_transform(X_train)
# Standardize the test features.
X_test = ss.transform(X_test)
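The scaler is fit on the training features only and then reused for the test features, so the test data is transformed with the training statistics. A tiny sketch of the pattern (toy numbers, not the fashion_mnist data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[1.0], [2.0], [3.0]])
test = np.array([[2.0]])

ss = StandardScaler()
train_std = ss.fit_transform(train)  # learns mean and std from the training data only
test_std = ss.transform(test)        # reuses the training statistics, no refitting
```

Calling `fit_transform` on the test set instead would leak test statistics into the preprocessing and make the two splits incomparable.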
# +
from paddle.io import TensorDataset
X_train = paddle.to_tensor(X_train.astype('float32'))
y_train = y_train.values
# Build a dataset suitable for training the PaddlePaddle model.
train_dataset = TensorDataset([X_train, y_train])
# Start training: specify the training dataset, the number of epochs, and the batch size.
model.fit(train_dataset, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=1)
# +
X_test = paddle.to_tensor(X_test.astype('float32'))
y_test = y_test.values
# Build a dataset suitable for evaluating the PaddlePaddle model.
test_dataset = TensorDataset([X_test, y_test])
# Start evaluation on the test dataset.
result = model.evaluate(test_dataset, verbose=0)
print('Accuracy of the feed-forward network (PaddlePaddle) on the fashion_mnist test set: %.2f%%.' % (result['acc'] * 100))
| Chapter_6/.ipynb_checkpoints/Section_6.2.3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# ##### R Sort a Data Frame using Order()
#
#
# In data analysis you often need to sort your data according to a certain variable in the dataset. In R, we can do this with the function order(), which easily sorts a vector of continuous or factor variables. The data can be arranged in ascending or descending order.
#
# **Syntax:**
# + active=""
# order(x, decreasing = FALSE, na.last = TRUE)
# -
# **Argument:**
#
# * x: A vector containing continuous or factor variable
# * decreasing: Control for the order of the sort method. By default, decreasing is set to `FALSE`.
# * na.last: Indicates whether `NA` values should be put last or not
#
# **Example 1**
#
# For instance, we can create a tibble data frame and sort by one or multiple variables. A tibble data frame is a modern take on the data frame: it improves the syntax of data frames and avoids frustrating data type conversions, especially character to factor. It is also a convenient way to create a data frame by hand, which is our purpose here. To learn more about tibble, please refer to the tibble vignette.
library(dplyr)
set.seed(2)
data_frame <- tibble(
c1 = rnorm(50, 5, 1.5),
c2 = rnorm(50, 5, 1.5),
c3 = rnorm(50, 5, 1.5),
c4 = rnorm(50, 5, 1.5),
c5 = rnorm(50, 5, 1.5)
)
# Sort by c1
df <-data_frame[order(data_frame$c1),]
head(df)
# Sort by c3 and c4
df <-data_frame[order(data_frame$c3, data_frame$c4),]
head(df)
# Sort by c3 (descending) and c4 (ascending)
df <-data_frame[order(-data_frame$c3, data_frame$c4),]
head(df)
| 10.Data Frame using Order().ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <NAME> - Assignment
# +
from PIL import Image
import requests
from io import BytesIO
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import warnings
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import nltk
import math
import time
import re
import os
import seaborn as sns
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import pairwise_distances
from matplotlib import gridspec
from scipy.sparse import hstack
import plotly
import plotly.figure_factory as ff
from plotly.graph_objs import Scatter, Layout
plotly.offline.init_notebook_mode(connected=True)
warnings.filterwarnings("ignore")
# -
data = pd.read_pickle('pickels/16k_apperal_data_preprocessed')
# +
# Utility functions which we will use throughout the rest of the workshop.
#Display an image
def display_img(url,ax,fig):
# we get the url of the apparel and download it
response = requests.get(url)
img = Image.open(BytesIO(response.content))
# we will display it in notebook
plt.imshow(img)
#plotting code to understand the algorithm's decision.
def plot_heatmap(keys, values, labels, url, text):
# keys: list of words of recommended title
# values: len(values) == len(keys), values(i) represents the occurrence of the word keys(i)
# labels: len(labels) == len(keys), the values of labels depends on the model we are using
# if model == 'bag of words': labels(i) = values(i)
# if model == 'tfidf weighted bag of words':labels(i) = tfidf(keys(i))
# if model == 'idf weighted bag of words':labels(i) = idf(keys(i))
# url : apparel's url
# we will divide the whole figure into two parts
gs = gridspec.GridSpec(2, 2, width_ratios=[4,1], height_ratios=[4,1])
fig = plt.figure(figsize=(25,3))
# 1st, plotting a heat map that represents the count of commonly occurring words in title2
ax = plt.subplot(gs[0])
# it displays a cell in white if the word is in the intersection (list of words of title1 and list of words of title2), in black if not
ax = sns.heatmap(np.array([values]), annot=np.array([labels]))
ax.set_xticklabels(keys) # set that axis labels as the words of title
ax.set_title(text) # apparel title
# 2nd, plotting the image of the apparel
ax = plt.subplot(gs[1])
# we don't want any grid lines for image and no labels on x-axis and y-axis
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
# we call display_img with parameter url
display_img(url, ax, fig)
# displays combine figure ( heat map and image together)
plt.show()
def plot_heatmap_image(doc_id, vec1, vec2, url, text, model):
# doc_id : index of the title1
# vec1 : input apparel's vector, a dict of type {word:count}
# vec2 : recommended apparel's vector, a dict of type {word:count}
# url : apparels image url
# text: title of recommended apparel (used to keep the title of the image)
# model, it can be any of the models,
# 1. bag_of_words
# 2. tfidf
# 3. idf
# we find the common words in both titles, because only these words contribute to the distance between the two title vectors
intersection = set(vec1.keys()) & set(vec2.keys())
# we set the values of non intersecting words to zero, this is just to show the difference in heatmap
for i in vec2:
if i not in intersection:
vec2[i]=0
# for labeling heatmap, keys contains list of all words in title2
keys = list(vec2.keys())
# if the ith word is in the intersection (list of words of title1 and list of words of title2): values(i) = count of that word in title2, else values(i) = 0
values = [vec2[x] for x in vec2.keys()]
# labels: len(labels) == len(keys), the values of labels depends on the model we are using
# if model == 'bag of words': labels(i) = values(i)
# if model == 'tfidf weighted bag of words':labels(i) = tfidf(keys(i))
# if model == 'idf weighted bag of words':labels(i) = idf(keys(i))
if model == 'bag_of_words':
labels = values
elif model == 'tfidf':
labels = []
for x in vec2.keys():
# tfidf_title_vectorizer.vocabulary_ it contains all the words in the corpus
# tfidf_title_features[doc_id, index_of_word_in_corpus] will give the tfidf value of word in given document (doc_id)
if x in tfidf_title_vectorizer.vocabulary_:
labels.append(tfidf_title_features[doc_id, tfidf_title_vectorizer.vocabulary_[x]])
else:
labels.append(0)
elif model == 'idf':
labels = []
for x in vec2.keys():
# idf_title_vectorizer.vocabulary_ it contains all the words in the corpus
# idf_title_features[doc_id, index_of_word_in_corpus] will give the idf value of word in given document (doc_id)
if x in idf_title_vectorizer.vocabulary_:
labels.append(idf_title_features[doc_id, idf_title_vectorizer.vocabulary_[x]])
else:
labels.append(0)
plot_heatmap(keys, values, labels, url, text)
# this function gets a list of words along with the frequency of each
# word in the given "text"
def text_to_vector(text):
word = re.compile(r'\w+')
words = word.findall(text)
# words stores a list of all words in the given string; 'words = text.split()' would give the same result
return Counter(words) # Counter counts the occurrence of each word in the list and returns a dict-like object {word1:count}
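A quick sketch of what `text_to_vector` returns for a toy title, using the same regex and `Counter` as the helper above:

```python
import re
from collections import Counter

def text_to_vector(text):
    # \w+ keeps alphanumeric runs, exactly like the helper in the notebook
    words = re.compile(r'\w+').findall(text)
    return Counter(words)

vec = text_to_vector("blue summer dress blue")
# vec counts each token: {'blue': 2, 'summer': 1, 'dress': 1}
```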
def get_result(doc_id, content_a, content_b, url, model):
text1 = content_a
text2 = content_b
# vector1 = dict{word11:#count, word12:#count, etc.}
vector1 = text_to_vector(text1)
# vector1 = dict{word21:#count, word22:#count, etc.}
vector2 = text_to_vector(text2)
plot_heatmap_image(doc_id, vector1, vector2, url, text2, model)
# +
import pickle
with open('word2vec_model', 'rb') as handle:
model = pickle.load(handle)
# -
idf_title_vectorizer = CountVectorizer()
idf_title_features = idf_title_vectorizer.fit_transform(data['title'])
# +
def n_containing(word):
# return the number of documents which had the given word
return sum(1 for blob in data['title'] if word in blob.split())
def idf(word):
# idf = log(#number of docs / #number of docs which had the given word)
return math.log(data.shape[0] / (n_containing(word)))
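The idf computation above can be checked on a toy corpus (hypothetical titles, not the apparel data): words that appear in fewer documents get a higher idf.

```python
import math

titles = ["red dress", "blue dress", "red shoes"]

def n_containing(word):
    # number of documents that contain the given word
    return sum(1 for t in titles if word in t.split())

def idf(word):
    # log(#docs / #docs containing the word): rarer words score higher
    return math.log(len(titles) / n_containing(word))
```

Here `idf("dress")` is log(3/2) ≈ 0.405 while `idf("shoes")` is log(3/1) ≈ 1.099, so the rarer word dominates a weighted representation.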
# +
idf_title_features = idf_title_features.astype(np.float64)  # np.float was removed in NumPy 1.24
for i in idf_title_vectorizer.vocabulary_.keys():
# for every word in whole corpus we will find its idf value
idf_val = idf(i)
# to calculate idf_title_features we need to replace the count values with the idf values of the word
# idf_title_features[:, idf_title_vectorizer.vocabulary_[i]].nonzero()[0] will return all documents in which word i is present
for j in idf_title_features[:, idf_title_vectorizer.vocabulary_[i]].nonzero()[0]:
# we replace the count values of word i in document j with idf_value of word i
# idf_title_features[doc_id, index_of_word_in_corpus] = idf value of word
idf_title_features[j,idf_title_vectorizer.vocabulary_[i]] = idf_val
# +
# Utility functions
def get_word_vec(sentence, doc_id, m_name):
# sentence : title of the apparel
# doc_id: document id in our corpus
# m_name: model information it will take two values
# if m_name == 'avg', we will append the model[i], w2v representation of word i
# if m_name == 'weighted', we will multiply each w2v[word] with the idf(word)
vec = []
for i in sentence.split():
if i in vocab:
if m_name == 'weighted' and i in idf_title_vectorizer.vocabulary_:
vec.append(idf_title_features[doc_id, idf_title_vectorizer.vocabulary_[i]] * model[i])
elif m_name == 'avg':
vec.append(model[i])
else:
# if a word in our corpus is not in the word2vec vocabulary, we represent it with a zero vector
vec.append(np.zeros(shape=(300,)))
# we will return a numpy array of shape (#number of words in title * 300 ) 300 = len(w2v_model[word])
# each row represents the word2vec representation of each word (weighted/avg) in given sentance
return np.array(vec)
def get_distance(vec1, vec2):
# vec1 = np.array(#number_of_words_title1 * 300), each row is a vector of length 300 corresponding to a word in the given title
# vec2 = np.array(#number_of_words_title2 * 300), each row is a vector of length 300 corresponding to a word in the given title
final_dist = []
# for each vector in vec1 we calculate the (euclidean) distance to all vectors in vec2
for i in vec1:
dist = []
for j in vec2:
# np.linalg.norm(i-j) will result the euclidean distance between vectors i, j
dist.append(np.linalg.norm(i-j))
final_dist.append(np.array(dist))
# final_dist = np.array(#number of words in title1 * #number of words in title2)
# final_dist[i,j] = euclidean distance between vectors i, j
return np.array(final_dist)
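The nested loop in `get_distance` can be verified against scipy's vectorized `cdist`, which computes the same Euclidean distance matrix in one call; a sketch on random vectors:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.RandomState(0)
vec1 = rng.rand(4, 300)
vec2 = rng.rand(6, 300)

# loop version, matching get_distance above
loop = np.array([[np.linalg.norm(i - j) for j in vec2] for i in vec1])
# vectorized equivalent
fast = cdist(vec1, vec2, metric='euclidean')
```

For large title sets `cdist` avoids the Python-level double loop entirely.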
def heat_map_w2v(sentence1, sentence2, url, doc_id1, doc_id2, model):
# sentence1 : title1, input apparel
# sentence2 : title2, recommended apparel
# url: apparel image url
# doc_id1: document id of input apparel
# doc_id2: document id of recommended apparel
# model: it can have two values, 1. avg 2. weighted
#s1_vec = np.array(#number_of_words_title1 * 300), each row is a vector (weighted/avg) of length 300 corresponding to a word in the given title
s1_vec = get_word_vec(sentence1, doc_id1, model)
#s2_vec = np.array(#number_of_words_title2 * 300), each row is a vector (weighted/avg) of length 300 corresponding to a word in the given title
s2_vec = get_word_vec(sentence2, doc_id2, model)
# s1_s2_dist = np.array(#number of words in title1 * #number of words in title2)
# s1_s2_dist[i,j] = euclidean distance between words i, j
s1_s2_dist = get_distance(s1_vec, s2_vec)
# divide the whole figure into 2 parts: 1st part displays the heatmap, 2nd part displays the image of the apparel
gs = gridspec.GridSpec(2, 2, width_ratios=[4,1],height_ratios=[2,1])
fig = plt.figure(figsize=(15,15))
ax = plt.subplot(gs[0])
# plotting the heat map based on the pairwise distances
ax = sns.heatmap(np.round(s1_s2_dist,4), annot=True)
# set the x axis labels as recommended apparels title
ax.set_xticklabels(sentence2.split())
# set the y axis labels as input apparels title
ax.set_yticklabels(sentence1.split())
# set title as recommended apparels title
ax.set_title(sentence2)
ax = plt.subplot(gs[1])
# we remove all grids and axis labels for image
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
display_img(url, ax, fig)
plt.show()
# +
# vocab stores all the words present in the word2vec model
# vocab = model.wv.vocab.keys() # if you are using Google word2vec (gensim < 4; use model.wv.key_to_index in gensim 4+)
vocab = model.keys()
# this function adds the vectors of each word and returns the avg vector of the given sentence
def build_avg_vec(sentence, num_features, doc_id, m_name):
# sentence: the title of the apparel
# num_features: the length of the word2vec vector, here 300
# m_name: model information, it takes two values
# if m_name == 'avg', we will append the model[i], w2v representation of word i
# if m_name == 'weighted', we will multiply each w2v[word] with the idf(word)
featureVec = np.zeros((num_features,), dtype="float32")
# we initialize a vector of size 300 with all zeros
# and add each word2vec(word) to this featureVec
nwords = 0
for word in sentence.split():
nwords += 1
if word in vocab:
if m_name == 'weighted' and word in idf_title_vectorizer.vocabulary_:
featureVec = np.add(featureVec, idf_title_features[doc_id, idf_title_vectorizer.vocabulary_[word]] * model[word])
elif m_name == 'avg':
featureVec = np.add(featureVec, model[word])
if(nwords>0):
featureVec = np.divide(featureVec, nwords)
# returns the avg vector of the given sentence, of shape (300,)
return featureVec
# -
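The averaging logic of `build_avg_vec` can be sketched with a hypothetical 3-dimensional toy model (the real model is 300-dimensional). Note this simplified version divides by the in-vocabulary word count only, whereas the notebook divides by the total word count:

```python
import numpy as np

# Hypothetical tiny "word vectors" standing in for the 300-d word2vec model.
toy_model = {"blue": np.array([1.0, 0.0, 0.0]),
             "dress": np.array([0.0, 1.0, 0.0])}

def avg_vec(sentence, num_features=3):
    # sum the vectors of the in-vocabulary words and divide by their count
    vec = np.zeros(num_features)
    words = [w for w in sentence.split() if w in toy_model]
    for w in words:
        vec += toy_model[w]
    return vec / max(len(words), 1)

v = avg_vec("blue dress")
# v == [0.5, 0.5, 0.0]
```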
doc_id = 0
w2v_title_weight = []
# for every title we build a weighted vector representation
for i in data['title']:
w2v_title_weight.append(build_avg_vec(i, 300, doc_id,'weighted'))
doc_id += 1
# w2v_title_weight = np.array(# number of docs in corpus * 300), each row corresponds to a doc
w2v_title_weight = np.array(w2v_title_weight)
# +
data['brand'].fillna(value="Not given", inplace=True )
#Replace spaces with hypen...
brands = [x.replace(" ", "-") for x in data['brand'].values]
colors = [x.replace(" ", "-") for x in data['color'].values]
brand_vectorizer = CountVectorizer()
brand_features = brand_vectorizer.fit_transform(brands)
color_vectorizer = CountVectorizer()
color_features = color_vectorizer.fit_transform(colors)
extra_features = hstack((brand_features, color_features)).tocsr()
# -
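The brand/color featurization above boils down to two CountVectorizers stacked side by side into one sparse matrix; a minimal sketch with hypothetical brand and color values:

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer

brands = ["nike", "adidas", "nike"]   # hypothetical values, not the real data
colors = ["red", "blue", "red"]

brand_feat = CountVectorizer().fit_transform(brands)  # one column per brand
color_feat = CountVectorizer().fit_transform(colors)  # one column per color
combined = hstack((brand_feat, color_feat)).tocsr()   # (3 items, 2 + 2 columns)
```

Each row ends up with exactly one nonzero entry per attribute, i.e. a one-hot encoding of brand and color side by side.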
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras import applications
from sklearn.metrics import pairwise_distances
import matplotlib.pyplot as plt
import requests
from PIL import Image
import pandas as pd
import pickle
# +
bottleneck_features_train = np.load('16k_data_cnn_features.npy')
asins = np.load('16k_data_cnn_feature_asins.npy')
asins = list(asins)
# load the original 16K dataset
data = pd.read_pickle('pickels/16k_apperal_data_preprocessed')
df_asins = list(data['asin'])
# +
from IPython.display import display, Image, SVG, Math, YouTubeVideo
def final_model(doc_id, w_title, w_brand, w_color, w_image, num_results):
#pairwise_dstances of title using IDF weighted Word2Vec...
idf_w2v_dist = pairwise_distances(w2v_title_weight, w2v_title_weight[doc_id].reshape(1,-1))
#pairwise_distances of brand using one hot encoding...
brand_feat_dist = pairwise_distances(brand_features, brand_features[doc_id])
#pairwise_distances of color using one hot encoding...
color_feat_dist = pairwise_distances(color_features, color_features[doc_id])
#pairwise_distances of images using VGG16...
doc_id = asins.index(df_asins[doc_id])
img_dist = pairwise_distances(bottleneck_features_train, bottleneck_features_train[doc_id].reshape(1,-1))
#Combining the Euclidean (pairwise) distances using weights in order to prefer some features over others...
pairwise_dist = ((w_title * idf_w2v_dist) + (w_brand * brand_feat_dist) + (w_color * color_feat_dist) + (w_image * img_dist))/float(w_title + w_brand + w_color + w_image)
indices = np.argsort(pairwise_dist.flatten())[0:num_results]
pdists = np.sort(pairwise_dist.flatten())[0:num_results]
#Printing the results...
for i in range(len(indices)):
rows = data[['medium_image_url','title']].loc[data['asin']==asins[indices[i]]]
for indx, row in rows.iterrows():
display(Image(url=row['medium_image_url'], embed=True))
print('Product Title: ', row['title'])
print('Euclidean Distance from input image:', pdists[i])
print('Amazon Url: www.amazon.com/dp/'+ asins[indices[i]])
# -
final_model(12565, 50, 20, 10, 200, 20)
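The combination in `final_model` is a weighted average of per-feature distance matrices: items are ranked by the normalized sum. A toy sketch with three candidate items and two of the four features:

```python
import numpy as np

# Column vectors of per-feature distances for three candidate items (toy numbers).
d_title = np.array([[0.1], [0.5], [0.9]])
d_brand = np.array([[0.0], [1.0], [1.0]])
w_title, w_brand = 50, 20

# Weighted average, as in final_model; dividing by the weight sum keeps the scale stable.
combined = (w_title * d_title + w_brand * d_brand) / float(w_title + w_brand)
ranking = np.argsort(combined.flatten())  # smallest combined distance first
```

Raising a weight (e.g. `w_image=200` in the call above) makes that feature dominate the ranking.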
| Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/iued-uni-heidelberg/DAAD-Training-2021/blob/main/ARG_WV_v03.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="4gT0OTdDEUUp"
from google.colab import drive
drive.mount('/content/drive')
# + id="DXiGMFqAEYWx"
# !cp ./drive/MyDrive/*.model /content/
# + id="3MKvG1TrEbFv"
import sys, re, os
import scipy.stats as ss
import matplotlib.pyplot as plt
import math
import csv
from statistics import mean
# + id="n1Gb8PbJEq0A"
# #!rm /content/*.txt
# #!mkdir /content/comp_covid/
# #!mv /content/drive/MyDrive/*.model /content/comp_covid/
# #!mv /content/comp_covid/*.model /content/drive/MyDrive/
# + id="l3GO9fFsG9_v"
from google.colab import files
import pandas as pd
# + id="x2NtdijAG_hj"
from gensim.models import Word2Vec # The word2vec model class
import gensim.downloader as api # Allows us to download some free training data
model_c05 = Word2Vec.load("/content/drive/MyDrive/covid_lem_s5_e5.model")
model_c10 = Word2Vec.load("/content/drive/MyDrive/covid_lem_s10_e5.model")
model_c20 = Word2Vec.load("/content/drive/MyDrive/covid_lem_s20_e5.model")
model_ep05 = Word2Vec.load("/content/drive/MyDrive/ep_en_s5_e5.model")
model_ep10 = Word2Vec.load("/content/drive/MyDrive/ep_en_s10_e5.model")
model_ep20 = Word2Vec.load("/content/drive/MyDrive/ep_en_s20_e5.model")
word_vectors_covid5 = model_c05.wv
word_vectors_covid10 = model_c10.wv
word_vectors_covid20 = model_c20.wv
word_vectors_ep5 = model_ep05.wv
word_vectors_ep10 = model_ep10.wv
word_vectors_ep20 = model_ep20.wv
# + id="BuXi4RXwWeTn"
from gensim.models import Word2Vec # The word2vec model class
# + id="odiRZmTJWo6E"
import gensim.downloader as api # Allows us to download some free training data
# + id="kdFzdb1jVIA_"
# !wget https://heibox.uni-heidelberg.de/f/53afe57ca6aa4b2cb614/?dl=1
# !mv index.html?dl=1 ep_en_s5_e5.model
model_ep05 = Word2Vec.load("ep_en_s5_e5.model")
word_vectors_ep5 = model_ep05.wv
# + id="dRXlOoVcEtnl"
#DEFINITION compare spans (5, 10, 20) COVID
def compaire(list_lex, list_name,wordnum):
import sys, re, os
import scipy.stats as ss
import matplotlib.pyplot as plt
import math
import csv
from statistics import mean
for lex in list_lex:
res5 = word_vectors_covid5.most_similar(lex, topn=wordnum)
fte_5 = [a_tuple[0] for a_tuple in res5]
weights_5= [a_tuple[1] for a_tuple in res5]
weight_5 = mean([a_tuple[1] for a_tuple in res5])
res10 = word_vectors_covid10.most_similar(lex, topn=wordnum)
fte_10 = [a_tuple[0] for a_tuple in res10]
weights_10 = [a_tuple[1] for a_tuple in res10]
weight_10 = mean([a_tuple[1] for a_tuple in res10])
res20 = word_vectors_covid20.most_similar(lex, topn=wordnum)
fte_20 = [a_tuple[0] for a_tuple in res20]
weights_20 = [a_tuple[1] for a_tuple in res20]
weight_20 = mean([a_tuple[1] for a_tuple in res20])
k = 0
for word in fte_5:
if word in fte_10: k += 1
common_5_10 = k
common5_10 = ""
for resX in res5:
for resY in res10:
if(resX[0] == resY[0]):
cos_comp5_10 = (resX[1]-resY[1])/min(resX[1],resY[1])
common5_10=common5_10+resX[0] + '\t' + str(cos_comp5_10) +'\n'
FN = '/content/comp_covid/' + list_name + '_' + lex + '_weight_5_10.txt'
FileOut = open(FN, 'w')
FileOut.write(common5_10)
FileOut.flush()
FileOut.close()
k = 0
for word in fte_5:
if word in fte_20: k += 1
common_5_20 = k
common5_20 = ""
for resX in res5:
for resY in res20:
if(resX[0] == resY[0]):
cos_comp5_20 = (resX[1]-resY[1])/min(resX[1],resY[1])
common5_20= common5_20+ resX[0] + '\t' + str(cos_comp5_20) +'\n'
FN = '/content/comp_covid/' +list_name + '_' +lex + '_weight_5_20.txt'
FileOut = open(FN, 'w')
FileOut.write(common5_20)
FileOut.flush()
FileOut.close()
common10_20 = ""
for resX in res10:
for resY in res20:
if(resX[0] == resY[0]):
cos_comp10_20 = (resX[1]-resY[1])/min(resX[1],resY[1])
common10_20= common10_20+ resX[0] + '\t' + str(cos_comp10_20) +'\n'
FN = '/content/comp_covid/' + list_name + '_'+lex + '_weight_10_20.txt'
FileOut = open(FN, 'w')
FileOut.write(common10_20)
FileOut.flush()
FileOut.close()
k = 0
for word in fte_20:
if word in fte_10: k += 1
common_10_20 = k
FN = '/content/comp_covid/' + list_name + '_' + lex + '5_10_20.txt'
FileOut = open(FN, 'w')
FileOut.write(lex + str(5)+'\t'+'intersect_5_10'+'_'+str(common_5_10) +'\t'+
lex + str(10)+'\t'+'intersect_10_20'+ '_'+ str(common_10_20) +'\t'+
lex + str(20)+'\t' +'intersect_5_20'+'_'+ str(common_5_20))
FileOut.write('\n')
FileOut.write('av_weight_5' +'\t'+str(weight_5) + '\t' +
'av_weight_10' +'\t'+str(weight_10) + '\t' +
'av_weight_20' +'\t'+str(weight_20))
FileOut.write('\n')
for i in range(0,wordnum):
#print(res5[i])
FileOut.write(res5[i][0]+'\t'+str(res5[i][1]) +'\t' +
res10[i][0]+'\t'+str(res10[i][1]) +'\t' +
res20[i][0]+'\t'+str(res20[i][1]) +'\t' +'\n')
FileOut.flush()
FileOut.close()
# + id="-SbOjAUqE4wa"
#EXECUTE compare spans (5, 10, 20) COVID
# !mkdir ./comp_covid/
# !rm ./comp_covid/*.txt
CONN = ["despite","because", "since", "therefore", "thus", "hence","although", "but", "nevertheless", "yet", "though", "furthemore", "indeed"]
MA = ["prove", "judgement", "reason", "logic", "resulting","conclusion"]
EV = ["safe", "efficient", "dangerous", "risk", "critical","help", "fortunately", "unfortunately"]
KN = ["covid", "vaccination", "mortality", "decease", "pandemic", "infodemic", "virus", "prevention", "intensive"]
compaire(KN,"KN",100)
# !zip -r /content/comp_covid/key_notions.zip /content/comp_covid/*.txt
# !rm /content/comp_covid/*.txt
compaire(CONN, "CONN",100)
# !zip -r /content/comp_covid/conn.zip /content/comp_covid/*.txt
# !rm /content/comp_covid/*.txt
compaire(EV,"EV",100)
# !zip -r /content/comp_covid/eval_words.zip /content/comp_covid/*.txt
# !rm /content/comp_covid/*.txt
compaire(MA,"MA",100)
# !zip -r /content/comp_covid/meta_arg.zip /content/comp_covid/*.txt
# !rm /content/comp_covid/*.txt
# + id="m4KR_naYE_dH"
#DEFINITION compare spans (5, 10, 20) EP
def compaire_EP(list_lex, list_name,wordnum):
import sys, re, os
import scipy.stats as ss
import matplotlib.pyplot as plt
import math
import csv
from statistics import mean
for lex in list_lex:
res5 = word_vectors_ep5.most_similar(lex, topn=wordnum)
fte_5 = [a_tuple[0] for a_tuple in res5]
weight_5 = mean([a_tuple[1] for a_tuple in res5])
res10 = word_vectors_ep10.most_similar(lex, topn=wordnum)
fte_10 = [a_tuple[0] for a_tuple in res10]
weight_10 = mean([a_tuple[1] for a_tuple in res10])
res20 = word_vectors_ep20.most_similar(lex, topn=wordnum)
fte_20 = [a_tuple[0] for a_tuple in res20]
weight_20 = mean([a_tuple[1] for a_tuple in res20])
k = 0
for word in fte_5:
if word in fte_10: k += 1
common_5_10 = k
k = 0
for word in fte_5:
if word in fte_20: k += 1
common_5_20 = k
k = 0
for word in fte_20:
if word in fte_10: k += 1
common_10_20 = k
FN = '/content/comp_ep/' +lex + '5_10_20.txt'
FileOut = open(FN, 'w')
FileOut.write(lex + str(5)+'\t'+'intersect_5_10'+' = '+str(common_5_10) +'\t'+
lex + str(10)+'\t'+'intersect_10_20'+' = '+str(common_10_20) +'\t'+
lex + str(20)+'\t' +'intersect_5_20'+' = '+str(common_5_20))
FileOut.write('\n')
FileOut.write('av_weight_5' +'\t'+str(weight_5) + '\t' +
'av_weight_10' +'\t'+str(weight_10) + '\t' +
'av_weight_20' +'\t'+str(weight_20))
FileOut.write('\n')
for i in range(0,wordnum):
#print(res5[i])
FileOut.write(res5[i][0]+'\t'+str(res5[i][1]) +'\t' +
res10[i][0]+'\t'+str(res10[i][1]) +'\t' +
res20[i][0]+'\t'+str(res20[i][1]) +'\t' +'\n')
"""
FileOut.write(str(Word[1]))
FileOut.write('\n')
FileOut.write('\n')
FileOut.write(lex + str(20))
FileOut.write('\n')
for Word in res20:
#print(Word[1])
FileOut.write(Word[0]+'\t')
FileOut.write(str(Word[1]))
FileOut.write('\n')
"""
FileOut.flush()
FileOut.close()
# + id="x-rOKbFQFDsI"
#EXECUTE compare spans (5, 10, 20) EP
# !mkdir ./comp_ep
# !rm ./comp_ep/*.*
conn_list = ["despite","because", "since", "therefore", "thus", "hence","although", "but", "nevertheless", "yet", "though", "indeed"]
meta_arg = ["prove", "judgement", "reason", "logic", "conclusion"]
eval_words = ["safe", "efficient", "dangerous", "risk", "critical","fortunately", "moral", "freedom","immoral","unfortunately", "human", "value", "democracy", "right", "principle", "liberty", "dignity", "oppression", "violation"]
key_notions = ["union", "commission", "community", "member", "politics", "policy", "defence", "citizen", "election"]
compaire_EP(key_notions,"KN",500)
# !zip -r /content/comp_ep/key_notions.zip /content/comp_ep/*.txt
# !rm /content/comp_ep/*.txt
compaire_EP(conn_list, "CONN",500)
# !zip -r /content/comp_ep/conn.zip /content/comp_ep/*.txt
# !rm /content/comp_ep/*.txt
compaire_EP(eval_words,"EV",500)
# !zip -r /content/comp_ep/eval_words.zip /content/comp_ep/*.txt
# !rm /content/comp_ep/*.txt
compaire_EP(meta_arg,"MA",500)
# !zip -r /content/comp_ep/meta_arg.zip /content/comp_ep/*.txt
# !rm /content/comp_ep/*.txt
# + id="60qFRq8qFLvx"
#CLUSTERING
import gensim
import numpy as np
import pandas as pd
import sklearn.cluster
import sklearn.metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.metrics import pairwise_distances_argmin_min
import re
#NUMBER of clusters
K=4
#Extract word vector from a model
#find "synonyms"
lex = 'constitution'
res5 = word_vectors_ep5.most_similar(lex, topn=200)
#build the list of synonyms
fte_5 = [a_tuple[0] for a_tuple in res5]
model = model_ep05
#list of words to cluster
words = fte_5
FIn = open('value_coll.txt', 'r')
words = []
for SWordNFrq in FIn:
try:
LWordNFrq = re.split('\t', SWordNFrq)
SWord = LWordNFrq[0]
words.append(SWord)
except:
pass
words = words[3:]
NumOfWords = len(words)
# construct the n-dimensional array for the input data, each row is a word vector
x = np.zeros((NumOfWords, model.vector_size))
for i in range(0, NumOfWords):
x[i,]=model.wv[words[i]] # index via .wv (direct indexing of the model was removed in gensim 4)
# train the k-means model
classifier = MiniBatchKMeans(n_clusters=K, random_state=1, max_iter=100)
classifier.fit(x)
# check whether the words are clustered correctly
# find the index and the distance of the closest points from x to each class centroid
close = pairwise_distances_argmin_min(classifier.cluster_centers_, x, metric='euclidean')
index_closest_points = close[0]
distance_closest_points = close[1]
#find the word nearest to the centroid for all clusters (apparently, it can be from another cluster)
for i in range(0, K):
print("The closest word to the centroid of class {0} is {1}, the distance is {2}".format(i, words[index_closest_points[i]], distance_closest_points[i]))
clusters = classifier.predict(x)
zip_iterator = zip(fte_5, classifier.predict(x))
a_dic ={}
for key, value in zip_iterator:
a_dic[key] = value
print("********************")
print(a_dic)
FileOut = open(lex+'.txt', 'w')
for Word, cluster in sorted(a_dic.items(), key=lambda x: x[1], reverse=False):
#save the clusters
FileOut.write(Word)
FileOut.write('\t')
FileOut.write(str(cluster))
FileOut.write('\n')
#find the word nearest to the centroid for all clusters (apparently, it can be from another cluster)
for i in range(0, K):
print("The closest word to the centroid of class {0} is {1}, the distance is {2}".format(i, words[index_closest_points[i]], distance_closest_points[i]), file = FileOut)
FileOut.flush()
FileOut.close()
cost =[]
for i in range(1, 5):
KM = KMeans(n_clusters = i, max_iter = 100)
KM.fit(x)
# calculates squared error for the clustered points
cost.append(KM.inertia_)
# plot the cost against K values
#plt.plot(range(1, 5), cost, color ='g', linewidth ='3')
#plt.xlabel("Value of K")
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
silhouette_avg = []
for num_clusters in range_n_clusters:
# initialise kmeans
kmeans = KMeans(n_clusters=num_clusters, max_iter = 100)
kmeans.fit(x)
cluster_labels = kmeans.labels_
# silhouette score
ss = sklearn.metrics.silhouette_score(x, cluster_labels)
silhouette_avg.append(ss)
plt.plot(range_n_clusters,silhouette_avg,'bx-')
plt.xlabel('Values of K')
plt.ylabel('Silhouette score')
plt.title('Silhouette analysis For Optimal k')
plt.show()
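The clustering cell relies on `MiniBatchKMeans` and `pairwise_distances_argmin_min` from scikit-learn; a self-contained toy sketch of that pattern on two well-separated blobs (stand-ins for the word vectors):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import pairwise_distances_argmin_min

rng = np.random.RandomState(1)
# Two well-separated 2-d blobs standing in for the high-dimensional word vectors.
x = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])

km = MiniBatchKMeans(n_clusters=2, random_state=1, n_init=3).fit(x)
# index and distance of the point nearest to each cluster centroid
closest_idx, closest_dist = pairwise_distances_argmin_min(km.cluster_centers_, x)
```

`pairwise_distances_argmin_min` returns one (index, distance) pair per centroid, which is exactly how the cell above finds the word nearest to each cluster center.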
| ARG_WV_v03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
sent = []
with open('test_data/simu_test_sim.out') as f:
for line in f:
segments = line.split(' ')
if len(segments) == 9:
device_id = int(segments[5])
tst = int(segments[8])
sent.append((tst, device_id))
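The parser above assumes a 9-field space-separated line with the device id at index 5 and the timestamp at index 8; a sketch on a hypothetical line of that shape (the real field names are not shown in the log excerpt):

```python
# Hypothetical 9-field line shaped like the simulator output the loop expects.
line = "INFO sim send pkt dev 1234 seq ts 1650000000"

segments = line.split(' ')
if len(segments) == 9:
    device_id = int(segments[5])  # 6th field: device id
    tst = int(segments[8])        # 9th field: timestamp
```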
# +
import sqlite3
connection = sqlite3.connect('test_data/simu_test_device')
cursor = connection.cursor()
cursor.execute("SELECT message, timestamp FROM log_table WHERE category = 'NetworkLocationReceiver' AND message LIKE 'Device ID%'")
data = [(t[1], int(t[0].split(' ')[2])) for t in list(cursor)]
data = [t for t in data if t[1] != 1242062628]
# +
connection = sqlite3.connect('test_data/simu_test_device')
cursor = connection.cursor()
cursor.execute("SELECT timestamp FROM log_table WHERE category = 'NetworkLocationReceiver' AND level = 300")
error = [t[0] for t in list(cursor)]
# +
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
_, ax = plt.subplots(1, figsize = (12, 4), dpi = 200)
ax.legend(handles=[
mpatches.Patch(color='blue', label='Sent (' + str(len(sent)) + ')'),
mpatches.Patch(color='red', label='Received / damaged (' + str(len(error)) + ')'),
mpatches.Patch(color='green', label='Received / ok (' + str(len(data) - len(error)) + ')')
])
minTs = min([t[0] for t in data])
maxTs = max([t[0] for t in data])
ax.set_yticklabels([])
ax.set_yticks([])
ax.set_xlabel('t in s')
ax.eventplot([t[0] - minTs for t in sent], lineoffsets = 0.3, linelengths = 0.2)
ax.eventplot([t[0] - minTs for t in data], lineoffsets = 0, linelengths = 0.2, color = 'green')
ax.eventplot([t - minTs for t in error], lineoffsets = 0, linelengths = 0.2, color = 'red')
# -
| jupyter/simu.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extracting a point from the Catalan Data Cube
# This notebook illustrates how to extract a point from our datacube.
#
# * import the datacube library
import datacube
import xarray as xr
import pandas as pd
# * import matplotlib library
import matplotlib.pyplot as plt
# %matplotlib inline
# * import write_cog
# +
#from utils.cog import write_cog
# -
# * import rasterio and rioxarray
import rasterio
#import rasterio as rio
import rioxarray
#import os
#data_path = os.getcwd()
# * Request a datacube object
dc = datacube.Datacube(app='jupyter-app')
# * Getting the names of the products in the datacube
p=dc.list_products()
print(type(p))
print(p.to_string())
p=dc.list_products(with_pandas=False)
p[1]
# * Load some data
#
# In this case we ask for the Sentinel 2 product of a single point in space (2, 42).
# This can take a few seconds. Please be patient.
"""ds = dc.load(product="s2_level2a_utm31_10",
x=(2, 2),
y=(42, 42),
time=("2020-01-01", "2020-12-31"))
"""
# In this case we ask for the Sentinel 2 product of a small area.
"""ds = dc.load(product="s2_level2a_utm31_10",
x=(2.15, 2.2),
y=(42.15, 42.2),
time=("2020-01-01", "2020-01-16")) #time=("2020-01-01", "2020-12-31"))
"""
ds = dc.load(product="s2_level2a_utm31_10",
x=(429770,433953),
y=(4666780,4672292),
crs='EPSG:32631',
time=("2020-01-01", "2020-01-16")) #2020-01-16
# ### Showing the results of the query on the screen.
ds
ds.data_vars #a dictionary of data variables
ds.data_vars['nir']
ds['red']
ds['scl'].values
ds.coords
ds.dims
ds.coords['time']
ds.attrs
print(type(ds.data_vars['blue']))
ds['nir']
ds['nir'].values[:,0:10,0:10]
ds.time
ds['red']
ds.isel(time=0) #ds['nir'].isel(time=5) #Returns a new dataset with each array indexed along the specified dimension(s).
ds['nir'].isel(time=0) #Return a new DataArray whose data is given by integer indexing along the specified dimension(s).
# +
#ds['nir']-ds['red']
# -
ds['red'].values[:,:,:]
ds['red'].isel(time=0).plot()
# +
#window10= rasterio.windows.Window(0, 0,1080,1080) #provisional
# +
#nir_ds2 = ds['nir'].rio.isel_window(window10) #provisional
#nir_ds2
# -
# This is the Xarray format: http://xarray.pydata.org/en/stable/data-structures.html
#
# Next I would like to learn how to present a graphical results of this:
# https://github.com/GeoscienceAustralia/dea-notebooks/blob/develop/Beginners_guide/04_Loading_data.ipynb
#
# ## Creating a plot
#
# The numpy array of values is two dimensional and I need to reduce the dimensions to one with squeeze
# https://stackoverflow.com/questions/41203137/how-do-you-reduce-the-dimension-of-a-numpy-array
ndvi=(ds['nir']-ds['red'])/(ds['nir']+ds['red'])
ndvi
#print(type(ndvi))
print(ndvi)
ndvi.values[0,0,0],ndvi.values[0,1,1]
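The same NDVI formula, (nir − red) / (nir + red), on plain NumPy arrays for reference (synthetic reflectances, not datacube output):

```python
import numpy as np

nir = np.array([0.5, 0.6])
red = np.array([0.1, 0.3])

# Normalized difference vegetation index, elementwise
ndvi = (nir - red) / (nir + red)
print(ndvi)  # [0.66666667 0.33333333]
```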
# Removing values considered nodata.
scl = ds['scl']
good_data = scl.where((scl == 4) | (scl == 5) | (scl == 6))
#good_data
ndvi_no_cloud = ndvi.where(good_data>=0)
ndvi_no_cloud
ndvi_no_cloud.values
ndvi_no_cloud.isel(time=0).plot()
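The SCL mask above keeps Sentinel-2 scene-classification classes 4 (vegetation), 5 (not vegetated) and 6 (water) and turns everything else into NaN. The same idea on plain NumPy arrays (synthetic values, not the real scene classification):

```python
import numpy as np

scl  = np.array([3, 4, 5, 6, 9])             # scene-classification codes
ndvi = np.array([0.2, 0.4, 0.1, -0.1, 0.3])

keep = np.isin(scl, [4, 5, 6])               # good-data classes
ndvi_no_cloud = np.where(keep, ndvi, np.nan)
# positions with SCL 3 (cloud shadow) and 9 (cloud) become NaN
print(ndvi_no_cloud)
```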
# Using a "Smoothing with rolling mean"
# http://xarray.pydata.org/en/stable/generated/xarray.Dataset.rolling.html
ndvi_sth=ndvi_no_cloud.rolling(time=5, min_periods=1, center=True).mean()
ndvi_sth
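The rolling call above uses a centered window of 5 time steps with `min_periods=1`, so edge positions are averaged over whatever neighbours exist. Its behaviour on a plain pandas Series:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# Centered window of 5; at the edges only the available values are averaged
smoothed = s.rolling(window=5, min_periods=1, center=True).mean()
print(list(smoothed))  # [2.0, 2.5, 3.0, 3.5, 4.0]
```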
#ndvi_sth.expand_dims({'time2': 'dates[0]'})  # exploratory; note 'dates[0]' here is a literal string, not a variable
# - Plot NDVI on specific date
# +
#ndvi.sel(time='2020-03-08T10:49:22.992599000').plot.imshow()
#ndvi_sth.sel(time='2020-03-08T10:49:22.992599000').plot.imshow()
# -
import numpy as np
y=np.squeeze(ndvi_sth.values)
y
ndvi_sth.coords
#ndvi_sth.attrs
#ndvi_sth.data #is an array
#ndvi_sth.name #nothing
ndvi_sth.dims
ndvi_sth.attrs
x=ndvi_sth.coords['time'].values
x
print (type(x))
print (type(y))
print (y[3])
# The following code is inspired by:
# https://stackoverflow.com/questions/19079143/how-to-plot-time-series-in-python
# https://stackoverflow.com/questions/19410042/how-to-make-ipython-notebook-matplotlib-plot-inline
# Test change-plot
#example
"""
import matplotlib.pyplot as plt
titles = dp.dates(faus) # product date
fig,ax = plt.subplots(1,1,figsize=(6,6))
re_frgb[0].plot.imshow(robust=True,ax=ax);
ax.set(title=titles[0])
plt.show()
"""
#ndvi.dim=['x', 'y'].plot(size=6)
ndvi_sth.mean(dim=['x', 'y']).plot(size=6)
# +
#ndvi.plot(col='time', vmin=-0.50, vmax=0.8, cmap='RdYlGn')
# -
#ndvi_sth.mean(dim=['time']).plot(size=6)
ndvi.mean(dim=['time']).plot(size=6)
# +
#sample = ndvi[:, 125:150, 100:125]
# +
#rgb_bands = ['B04', 'B03', 'B02']
#time_series_scenes.sel(band=rgb_bands).plot.imshow(col='time', robust=True)
#sample.groupby('time.month').mean(dim='time').sel(band='red').plot.imshow(col='month', robust=True)
# +
#ndvi.y.plot() ##change
# -
ndvi.isel(time=0)
ndvi.values[1,1,1]
# +
#in process
#ndvi_sth.sel(time='2020-01-01').plot.imshow()
#ndvi_sth.sel(time='2020-01-01')[y]
##da = ds.sel(time='2020-02-15T10:59:18.325276000')['nir']
##print (type(da))
#print('Image size (Gb): ', da.nbytes/1e9)
##da.plot.imshow()
#ndvi_sth.plot.imshow()
"""
da = DS.sel(time='2013-04-21')['B4']
print('Image size (Gb): ', da.nbytes/1e9)
da.plot.imshow()
"""
# -
ndvi.isel(time=1).plot()
# +
#ndvi.isel(time=0).rio.to_raster("ndvi_0.tif")
#ndvi.isel(time=0).rio.to_raster("ndvi_0.tif", dtype="float32")
# -
"""
import matplotlib.pyplot as plt
titles = ndvi_sth.coords['time'].values
#titles = x # product date # ds.dates(x) 'Dataset' object has no attribute 'dates'
fig,ax = plt.subplots(1,1,figsize=(6,6))
#plt.plot(y)
#plt.plot( ndvi_sth.coords['x'].values) #0k
plt.plot( ndvi_sth.coords['y'].values)
#y.plot.imshow(robust=True,ax=ax); #'numpy.ndarray' object has no attribute 'plot'
#ndvi_sth.coords['y'].values.plot.imshow(robust=True,ax=ax);
ax.set(title=titles[0])
plt.show()
"""
"""
%matplotlib notebook
import matplotlib.pyplot as plt
plt.plot(x,y)
plt.xticks(rotation=90)
plt.show()
"""
# Let's consider the phenology explained here: https://docs.dea.ga.gov.au/notebooks/Real_world_examples/Vegetation_phenology.html
# and implemented here:
# https://github.com/GeoscienceAustralia/dea-notebooks/blob/develop/Tools/dea_tools/temporal.py
mask = ndvi_sth.isnull()
# Replacing nodata by 0.
ndvi_cl = ndvi_sth.where(~mask, other=0)
ndvi_cl
# vPOS = Value at peak of season:
ndvi_cl.max("time").values
# POS = DOY of peak of season
ndvi_cl.isel(time=ndvi_cl.argmax("time")).time.dt.dayofyear.values
# Trough = Minimum value
ndvi_cl.min("time").values
# AOS = Amplitude of season
(ndvi_cl.max("time")-ndvi_cl.min("time")).values
# vSOS = Value at the start of season
# select timesteps before peak of season (AKA greening)
greenup = ndvi_cl.where(ndvi_cl.time < ndvi_cl.isel(time=ndvi_cl.argmax("time")).time)
# find the first order slopes
green_deriv = greenup.differentiate("time")
# find where the first order slope is postive
pos_green_deriv = green_deriv.where(green_deriv > 0)
# positive slopes on greening side
pos_greenup = greenup.where(pos_green_deriv)
pos_greenup
# find the median
median = pos_greenup.median("time")
median
# distance of values from median
distance = pos_greenup - median
distance
def allNaN_arg(da, dim, stat):
"""
Calculate da.argmax() or da.argmin() while handling
all-NaN slices: fills all-NaN locations with a
float and then masks the offending cells.
Params
------
da : xarray.DataArray
dim : str
Dimension over which to calculate argmax/argmin, e.g. 'time'
stat : str
The statistic to calculate, either 'min' for argmin()
or 'max' for .argmax()
Returns
------
xarray.DataArray
"""
# generate a mask where entire axis along dimension is NaN
mask = da.isnull().all(dim)
if stat == "max":
y = da.fillna(float(da.min() - 1))
y = y.argmax(dim=dim, skipna=True).where(~mask)
return y
if stat == "min":
y = da.fillna(float(da.max() + 1))
y = y.argmin(dim=dim, skipna=True).where(~mask)
return y
# find index (argmin) where distance is most negative
idx = allNaN_arg(distance, "time", "min").astype("int16")
idx
# find index (argmin) where distance is smallest absolute value
idx = allNaN_arg(np.fabs(distance), "time", "min").astype("int16")  # np.fabs works on DataArrays; xr.ufuncs was removed in recent xarray
idx.values
# SOS = DOY for start of season
ndvi_cl.coords['time'].values[idx.values[0][0]]
| src/odc/Creating_time_series_odc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pathlib import Path, PurePath
import numpy as np
import pandas as pd
import os
from analysis import read_behaviors, return_genome
from utility.util_functions import euclidean_distance
from deap import creator, base, tools
import pickle
# +
behaviors_file = sorted(Path('data/results/')
.glob('transferability_simulation_*/behavioral_features.dat'))
sim_file = 'data/results/transferability_simulation_13/deap_inds/200_genome.pkl'
thymio_file = 'data/results/transferability_simulation_13/deap_inds/1_genome_.pkl'
behaviors = read_behaviors(behaviors_file)
# +
behavior = behaviors[4]
columns = [
'avg_left', 'avg_right',
's1', 's2', 's3', 's4', 's5', 's6', 's7',
'area0_percentage',
'area1_percentage',
'area2_percentage',
]
# -
b = behavior.loc[:1, columns]
euclidean_distance(b.iloc[0], b.iloc[1])
creator.create("FitnessMax", base.Fitness, weights=(1.0, -1.0, 1.0))
creator.create("Individual", list,
fitness=creator.FitnessMax, model=None)
Individual = creator.Individual
with open(sim_file, 'rb') as f:
sim = pickle.load(f)
sim.position
len(sim.position)
with open(thymio_file, 'rb') as f:
thymio = pickle.load(f)
thymio.position
len(thymio.position)
euclidean_distance(sim.position, thymio.position)
np.tile(thymio.position[-1], (len(sim.position)-len(thymio.position),1))
(len(sim.position)-len(thymio.position))*thymio.position[-1]
np.append(thymio.position, np.tile(thymio.position[-1], (len(sim.position)-len(thymio.position), 1)), axis=0)
euclidean_distance(sim.features, thymio.features)
thymio.position[:len(sim.position)]
def calc_str_disparity(transfered, simulation):
if len(transfered) > len(simulation):
diff = (len(transfered)-len(simulation))
t = np.array(transfered)
s = np.append(simulation, np.tile(simulation[-1], (diff, 1)), axis=0)
elif len(simulation) > len(transfered):
diff = (len(simulation)-len(transfered))
t = np.append(transfered, np.tile(transfered[-1], (diff, 1)), axis=0)
s = np.array(simulation)
else:
t = np.array(transfered)
s = np.array(simulation)
t_mean = np.mean(t, axis=0)
s_mean = np.mean(s, axis=0)
x = np.sum((np.power(s.T[0]-t.T[0], 2) / (s_mean[0]*t_mean[0])))
y = np.sum((np.power(s.T[1]-t.T[1], 2) / (s_mean[1]*t_mean[1])))
return x + y
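A quick sanity check of the disparity measure: identical trajectories should give zero, and the padding handles unequal lengths. This is a self-contained copy of the padding logic above, assuming positions are (x, y) pairs with nonzero mean coordinates (the measure divides by the means):

```python
import numpy as np

def calc_str_disparity(transfered, simulation):
    # Pad the shorter trajectory by repeating its last position
    t, s = np.array(transfered, float), np.array(simulation, float)
    if len(t) > len(s):
        s = np.append(s, np.tile(s[-1], (len(t) - len(s), 1)), axis=0)
    elif len(s) > len(t):
        t = np.append(t, np.tile(t[-1], (len(s) - len(t), 1)), axis=0)
    s_mean, t_mean = s.mean(axis=0), t.mean(axis=0)
    x = np.sum(np.power(s.T[0] - t.T[0], 2) / (s_mean[0] * t_mean[0]))
    y = np.sum(np.power(s.T[1] - t.T[1], 2) / (s_mean[1] * t_mean[1]))
    return x + y

path = [(1.0, 1.0), (2.0, 2.0), (3.0, 1.0)]
print(calc_str_disparity(path, path))          # 0.0
print(calc_str_disparity(path, path[:2]) > 0)  # True
```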
sim.position
np.reshape(sim.position, (2, len(sim.position)))
np.array(sim.position).T
np.array(sim.position).T[0]
np.array(thymio.position).T
calc_str_disparity(thymio.position, sim.position[:1])
| str_disparity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Testing polycentric gyration
# This script tests the code for finding polycentric gyration from the paper [From centre to centres: polycentric structures in individual mobility](https://arxiv.org/abs/2108.08113). The code comes from https://github.com/rohit-sahasrabuddhe/polycentric-mobility ; the functions are taken from its main.py file. It would be better to source them in or run the script directly, but I don't know how.
# # Setup
# The packages and functions used by the script
# +
import numpy as np
import pandas as pd
import geopandas as gp
from sklearn.cluster import KMeans
from haversine import haversine_vector
from sklearn.metrics import auc
from scipy.spatial import distance_matrix as DM
from joblib import Parallel, delayed
# Functions for conversion from latlon to cartesian and back
def to_cartesian(lat, lon):
lat, lon = np.pi * lat / 180, np.pi * lon / 180
return np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)
def to_latlon(x,y,z):
lat, lon = np.arctan2(z, np.sqrt(x**2+y**2))*180/np.pi, np.arctan2(y, x)*180/np.pi
return lat, lon
class TrimmedKMeans:
def __init__(self, k, data, weights, cutoff):
self.k = k
self.data = data #A numpy array of size [N, 3]
self.weights = weights / np.sum(weights) #size [N,]
self.centers = self.data[np.random.choice(range(self.data.shape[0]), size=k, replace=False)]
self.distance_matrix = DM(self.data, self.centers)
self.cluster_assignment = np.argmin(self.distance_matrix, axis=1)
self.distance = np.min(self.distance_matrix, axis=1)
self.inertia = 0
self.cutoff=cutoff
def get_inertia_labels(self):
self.distance_matrix = DM(self.data, self.centers)
self.cluster_assignment = np.argmin(self.distance_matrix, axis=1)
self.distance = np.min(self.distance_matrix, axis=1)
self.inertia = 0
for i in range(self.k): # Loop through all the clusters
# get the coordinates, global weights and distance to center
coords, weights, dists = self.data[self.cluster_assignment == i], self.weights[self.cluster_assignment == i], self.distance[self.cluster_assignment == i]
if coords.shape[0] == 0:
continue
indices_asc = np.argsort(dists)
coords, weights, dists = coords[indices_asc], weights[indices_asc], dists[indices_asc] # sort everything by the distance
cluster_wt = np.sum(weights) # total weight of the cluster
weights = weights / cluster_wt # this gives the local weight (within the cluster)
weights_cumsum = np.cumsum(weights)
last_entry = np.sum(weights_cumsum <= self.cutoff) + 1 # the index of the last location that needs to be looked at
coords, weights, dists, weights_cumsum = coords[:last_entry].copy(), weights[:last_entry].copy(), dists[:last_entry].copy(), weights_cumsum[:last_entry].copy()
# Remove the extra weight
weights[-1] -= weights_cumsum[-1] - self.cutoff
# Add to the inertia
self.inertia += np.sum((weights * cluster_wt) * (dists**2))
return np.sqrt(self.inertia), self.cluster_assignment
def update(self):
self.distance_matrix = DM(self.data, self.centers)
self.cluster_assignment = np.argmin(self.distance_matrix, axis=1)
self.distance = np.min(self.distance_matrix, axis=1)
for i in range(self.k): # Loop through all the clusters
# get the coordinates, global weights and distance to center
coords, weights, dists = self.data[self.cluster_assignment == i], self.weights[self.cluster_assignment == i], self.distance[self.cluster_assignment == i]
if coords.shape[0] == 0:
continue
indices_asc = np.argsort(dists)
coords, weights, dists = coords[indices_asc], weights[indices_asc], dists[indices_asc] # sort everything by the distance
cluster_wt = np.sum(weights) # total weight of the cluster
weights = weights / cluster_wt # this gives the local weight (within the cluster)
weights_cumsum = np.cumsum(weights)
# last entry is the index of the last location that needs to be looked at
last_entry = np.sum(weights_cumsum <= self.cutoff) + 1
coords, weights, dists, weights_cumsum = coords[:last_entry].copy(), weights[:last_entry].copy(), dists[:last_entry].copy(), weights_cumsum[:last_entry].copy()
# Remove the extra weight
weights[-1] -= weights_cumsum[-1] - self.cutoff
# Update the center
weights = weights / np.sum(weights)
self.centers[i] = np.average(coords, axis=0, weights=weights)
def plot(self):
for i in range(self.k):
plt.scatter(self.data[self.cluster_assignment == i][:, 0], self.data[self.cluster_assignment == i][:, 1])
plt.scatter(self.centers[:, 0], self.centers[:, 1], marker='+', color='black', s=50)
def get_best_fit(self):
best_centers, best_inertia, best_labels = None , np.inf, None
for _ in range(50): #compare across 50 random initializations
c = np.inf
self.centers = self.data[np.random.choice(range(self.data.shape[0]), size=self.k, replace=False)]
for _ in range(50): #fixed number of iterations
old_c = np.copy(self.centers)
self.update()
c = np.sum((self.centers - old_c)**2)
if c == 0:
break
this_inertia, this_labels = self.get_inertia_labels()
if this_inertia < best_inertia:
best_inertia = this_inertia
best_labels = this_labels
best_centers = self.centers
if best_inertia == 0:
break
return best_centers, best_labels, best_inertia
def get_result(u, user_data, locs, max_k, trimming_coeff):
#print(f"User {u}, {to_print}")
result = {'user':u, 'com':None, 'tcom':None, 'rog':None, 'L1':None, 'L2':None, 'k':None, 'centers':None, 'auc_com':None, 'auc_1':None, 'auc_2':None, 'auc_k':None, 'auc_kmeans':None}
def get_area_auc(x, k, max_area, df):
centers = x
dists = np.min(haversine_vector(list(df.coords), centers, comb=True), axis=0)
df['distance'] = dists
df['area'] = k * df['distance']**2
df = df.sort_values('area')[['area', 'time_spent']]
df = df[df['area'] <= max_area]
if df.empty:
return 0
df.time_spent = df.time_spent.cumsum()
df['area'] = df['area'] / max_area
x = [0] + list(df['area']) + [1]
y = [0] + list(df.time_spent) + [list(df.time_spent)[-1]]
return auc(x, y)
user_data = user_data[['loc', 'time_spent']].groupby('loc').sum()
try:
user_data.time_spent = user_data.time_spent.dt.total_seconds()
except AttributeError:  # time_spent may already be numeric seconds
pass
user_data.time_spent = user_data.time_spent / user_data.time_spent.sum()
user_data['lat'] = locs.loc[user_data.index].lat
user_data['lon'] = locs.loc[user_data.index].lon
highest_gap = None
best_auc = None
best_gap = None
best_k = 1
best_centers = None
user_data['coords'] = list(zip(user_data.lat, user_data.lon))
user_data['x'], user_data['y'], user_data['z'] = to_cartesian(user_data['lat'], user_data['lon'])
com = to_latlon(np.sum(user_data['x']*user_data.time_spent), np.sum(user_data['y']*user_data.time_spent), np.sum(user_data['z']*user_data.time_spent))
dist = haversine_vector(list(user_data.coords), [com], comb=True)
rog = np.sqrt(np.sum(user_data.time_spent.to_numpy() * (dist**2)))
com_auc = get_area_auc(com, 1, rog**2, user_data.copy())
result['com'] = com
result['rog'] = rog
result['L1'], result['L2'] = list(user_data.sort_values('time_spent', ascending=False).coords[:2])
result['auc_com'] = com_auc
train_data_list = []
# find max min and shape outside loop
lat_min, lat_max = user_data.lat.min(), user_data.lat.max()
lon_min, lon_max = user_data.lon.min(), user_data.lon.max()
size = user_data.shape[0]
for i in range(50):
train_data = user_data.copy()
train_data['lat'] = np.random.uniform(low=lat_min, high=lat_max, size=size)
train_data['lon'] = np.random.uniform(low=lon_min, high=lon_max, size=size)
train_data['coords'] = list(zip(train_data.lat, train_data.lon))
train_data['x'], train_data['y'], train_data['z'] = to_cartesian(train_data['lat'], train_data['lon'])
#find rog of this data
com = to_latlon(np.sum(train_data['x']*train_data.time_spent), np.sum(train_data['y']*train_data.time_spent), np.sum(train_data['z']*train_data.time_spent))
dist = haversine_vector(list(train_data.coords), [com], comb=True)
train_rog = np.sqrt(np.sum(train_data.time_spent.to_numpy() * (dist**2)))
train_data_list.append((train_data, train_rog))
for k in range(1, max_k+1):
Trim = TrimmedKMeans(k, user_data[['x','y', 'z']].to_numpy(), weights = user_data.time_spent.to_numpy(), cutoff=trimming_coeff)
true_centers, _, _ = Trim.get_best_fit()
true_centers = np.array([np.array(to_latlon(*i)) for i in true_centers])
true_auc = get_area_auc(true_centers, k, rog**2, user_data.copy())
if k == 1:
result['tcom'] = tuple(true_centers[0])
result['auc_1'] = true_auc
if k== 2:
result['auc_2'] = true_auc
new_aucs = []
for train_data, train_rog in train_data_list:
Trim = TrimmedKMeans(k, train_data[['x','y', 'z']].to_numpy(), weights = train_data.time_spent.to_numpy(), cutoff=trimming_coeff)
centers, _, _ = Trim.get_best_fit()
centers = np.array([np.array(to_latlon(*i)) for i in centers])
new_aucs.append(get_area_auc(centers, k, train_rog**2, train_data.copy()))
new_mean = np.mean(new_aucs)
new_std = np.std(new_aucs)
gap = true_auc - new_mean
if k == 1:
highest_gap = gap
best_gap = gap
best_auc = true_auc
best_centers = true_centers
best_k = 1
continue
if gap - new_std > highest_gap:
best_auc = true_auc
best_gap = gap
best_centers = true_centers
best_k = k
highest_gap = max(highest_gap, gap)
result['k'] = best_k
result['auc_k'], result['centers'] = best_auc, list(best_centers)
kmeans = KMeans(result['k'])
kmeans.fit(user_data[['x','y', 'z']].to_numpy(), sample_weight = user_data.time_spent.to_numpy())
kmeans_centers = np.array([np.array(to_latlon(*i)) for i in kmeans.cluster_centers_])
result['auc_kmeans'] = get_area_auc(kmeans_centers, result['k'], rog**2, user_data.copy())
return result
def main(data_path, results_path="demo_results.pkl", max_k=6, trimming_coeff=0.9):
data = pd.read_pickle(data_path)
try:
data['time_spent'] = data['end_time'] - data['start_time']
except KeyError:  # time_spent may already be present in the data
pass
user_list = sorted(data.user.unique())
locs = data[['loc', 'lat', 'lon']].groupby('loc').mean().copy()
result = pd.DataFrame(Parallel(n_jobs=-1)(delayed(get_result)(u, data[data.user == u], locs, max_k, trimming_coeff) for u in user_list)).set_index('user')
result.to_pickle(results_path)
return result
# -
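A round-trip sanity check for the `to_cartesian` / `to_latlon` pair defined above (self-contained copies, plain floats in degrees):

```python
import numpy as np

def to_cartesian(lat, lon):
    lat, lon = np.pi * lat / 180, np.pi * lon / 180
    return np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)

def to_latlon(x, y, z):
    lat = np.arctan2(z, np.sqrt(x**2 + y**2)) * 180 / np.pi
    lon = np.arctan2(y, x) * 180 / np.pi
    return lat, lon

lat, lon = 41.4, 2.15          # roughly Barcelona
x, y, z = to_cartesian(lat, lon)
lat2, lon2 = to_latlon(x, y, z)
print(round(lat2, 6), round(lon2, 6))  # 41.4 2.15
```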
# # Load mobility CSV and save as a pandas dataframe
#
# These code chunks create the appropriately formatted dataframe from a csv of the mobility data produced in R
dis_pandas = pd.read_csv("/home/jonno/COVID_project/COVID_project_data/poly_df.csv").loc[:,['loc','lat','lon', 'time_spent', 'user']]
dis_pandas['time_spent'] = dis_pandas['time_spent'].astype('float')
dis_pandas.to_pickle("/home/jonno/COVID_project/COVID_project_data/poly_df.pkl")
#create a smaller file that will be easier to test (subset before deleting the dataframe)
user_list = list(range(10))
dis_pandas[dis_pandas["user"].isin(user_list)].to_pickle("/home/jonno/COVID_project/COVID_project_data/poly_df2.pkl")
del dis_pandas
# ## Set file paths
# +
script_path = "/home/jonno/polycentric-mobility/main.py"
target_file_path = "/home/jonno/COVID_project/COVID_project_data/poly_df.pkl"
demo_file_path = "/home/jonno/polycentric-mobility/demo_data.pkl"
result_save_path = "/home/jonno/COVID_project/COVID_project_data/multi_gyration.pkl"
# #!python /home/jonno/polycentric-mobility/main.py --data_path "{target_file_path}" --results_path "{result_save_path}"
# -
# ### Load test and demo data
test_data_df = pd.read_pickle("/home/jonno/COVID_project/COVID_project_data/poly_df.pkl")
demo_df = pd.read_pickle("/home/jonno/polycentric-mobility/demo_data.pkl")
print(demo_df)
# ### Comparing data types
# The data types and the column names of the two dataframes are identical
test_data_df.dtypes
demo_df.dtypes
#Demo data succeeds
import time
start_time = time.time()
main(data_path = demo_file_path, results_path = result_save_path)
print("--- %s seconds ---" % (time.time() - start_time))
#The real data fails
import time
start_time = time.time()
main(data_path = target_file_path, results_path = result_save_path, max_k=6)
print("--- %s seconds ---" % (time.time() - start_time))
pd.read_pickle(result_save_path).to_csv("/home/jonno/COVID_project/COVID_project_data/multi_gyration.csv")
multi_gyration = pd.read_pickle('/home/jonno/COVID_project/COVID_project_data/multi_gyration_test.pkl')
print(multi_gyration)
| testing_polycentric_gyration.ipynb |