# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# ## Part 2
#
# Describes how the aim of the STL was to define standard data structures that would work for any reasonable subset of types, meaning built-in types and containers of built-in types. So the challenge was to describe what properties would make any type work with a standard container.
#
# The STL therefore had to work for all built-in types and for types like them, which leaves open the question of what it means to be like a built-in type. The real key, however, is that the containers themselves must also fulfil this criterion and be "like" a built-in type, because we want to be able to use any container as an element type; after all, vector\<T\> is a different type from T. In short, the STL containers have to be able to be nested: vector\<T\> should work just as well when T is of type vector\<int\> as when T is of type int.
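#
# A minimal sketch of this nesting requirement (my own illustration, not code from the lecture): a vector of vectors supports copy and equality exactly as a vector of ints does.

```cpp
#include <vector>

// Returns true if copy and equality behave for vector<vector<int>>
// just as they do for vector<int>: the copy compares equal, and
// mutating the copy does not affect the original.
bool nested_vectors_are_regular() {
    std::vector<std::vector<int>> table = {{1, 2}, {3, 4, 5}};
    std::vector<std::vector<int>> copy = table;  // copy construction
    if (!(copy == table)) return false;          // copies compare equal
    copy[0].push_back(99);                       // mutate the copy only
    return copy != table;                        // original is unchanged
}
```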
#
# The algorithms therefore rely on the operations defined on regular types. The operations listed are:
# - copy
# - assign
# - destruct
# - default
# - equal
# - less
#
# The design Stepanov followed was that "whatever is a natural idiomatic expression in C should be... (the same) for regular types".
#
# ### Assign
#
# Assignment is relatively simple: we already have an allocated variable, so we should be able to write:
# ```cpp
# a = b;
# ```
#
# ### Default
#
# Following the design choice described above relating to expressions in C, a type has to allow construction in the following ways:
# ```cpp
# T a = b;
#
# //or
#
# T a;
# a = b;
# ```
#
# Types that provide just the operations above plus a destructor are called *semiregular* types.
#
# ### Copy
#
# A type that provides a copy constructor as well as the three operations needed to be *semiregular* is a *regular* type.
#
# For any regular type there should be a copy constructor. This means:
#
# ```cpp
# T a(b);
# T a = b;
# ```
#
# These two statements should be equivalent when b is of type T.
#
# What is needed to express the semantics of copy is an equality relation. An equivalence relation, as noted in the lecture, isn't strong enough. While we haven't yet defined this equality relation, we know the semantic meaning is:
# ```cpp
# // a == b; and &a != &b
# ```
#
# So in order for types to implement copy, they must have equality, and inequality, defined upon them.
#
# According to Bjarne, if a type implements <, >, <=, >= then it is a *TotallyOrdered* type. Stepanov would say these operators should be part of a regular type, but Bjarne's stance is that this is too strong a position to hold. **Preserve natural meaning**: since the language does not enforce sensible behavior for overloaded operators, Stepanov makes it clear that the developer must preserve their natural meaning.
# Now, getting to the code: there is a brief discussion of which operators should be member functions and which should not. Alex highlights that making equality a member function breaks the symmetry of equality, since it would call a member function on one of the objects but not the other.
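#
# A sketch of what such a type might look like (my own illustration, not code from the lecture): a small semiregular struct made regular by symmetric, non-member equality.

```cpp
// A semiregular type: default construction, copy construction,
// assignment, and destruction all come from the compiler-generated
// defaults.
struct Point {
    int x = 0;
    int y = 0;
};

// Equality as a non-member function preserves the symmetry of the
// relation: neither argument is privileged as "this".
bool operator==(const Point& a, const Point& b) {
    return a.x == b.x && a.y == b.y;
}

// Inequality is defined in terms of equality.
bool operator!=(const Point& a, const Point& b) {
    return !(a == b);
}
```

# With equality defined this way, the copy semantics can be checked directly: after `Point a = b;` we expect `a == b` while `&a != &b`.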
| lecture_1/notes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A visualization of astrophysical simulation
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
# ## Read in the density file
#
# The file is a 512x512 projection of a 512^3 dataset. We can read it in using numpy's fromfile() function and reshape it into 512x512 using numpy's reshape().
fname = "density_projection.dat"
d = np.fromfile(fname,dtype=np.float32)
d = np.reshape(d,(512,512))
# ## Repeat for the energy file
fname = "energy_projection.dat"
energy = np.fromfile(fname,dtype=np.float32)
energy = np.reshape(energy,(512,512))
# ## Let's plot them using the default color map
d_log = np.log10(d)
f = plt.figure(figsize=(7,7))
plt.imshow(d_log, origin='lower')
e_log = np.log10(energy)
f = plt.figure(figsize=(7,7))
plt.imshow(e_log, origin='lower')
# ## Making a 3-color image
#
# We can combine the density and energy maps into a three color image using the HSV color space. Here H=[0,1] corresponds to the color wheel going from red->red through yellow, green, blue, and purple. V=[0,1] is the intensity of the image. S=[0,1] is the saturation of the color, with 0 being white (for V=1) or black (for V=0) and 1 being a deep color.
# +
d_min = d_log.min()
d_max = d_log.max()
v = (d_log - d_min)/(d_max - d_min)
s = 1.0 - v
# +
e_min = e_log.min()
e_max = e_log.max()
h = 0.8 - 0.2*(e_log - e_min)/(e_max - e_min)
# -
# ## Now we have to make a HSV image, and then convert to RGB
# +
hsv_image = np.zeros((512,512,3))
hsv_image[:,:,0] = h
hsv_image[:,:,1] = s
hsv_image[:,:,2] = v
rgb_image = colors.hsv_to_rgb(hsv_image)
# -
# ## Now let's see the 3-color image
f = plt.figure(figsize=(7,7))
plt.imshow(rgb_image, origin='lower')
plt.imsave("test.png", rgb_image)
| simulation_visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.mode.chained_assignment = None
plt.style.use('ggplot')
# %matplotlib inline
# + _uuid="a7a26e343d3d6a8609b6b17169cdd235fb329788"
champs = pd.read_csv("../input/champs.csv")
champs.head()
# + _uuid="84f3bf034fdbeba6e07ecb3dd90e7cbee4415480"
matches = pd.read_csv("../input/matches.csv")
matches.head()
# + _uuid="97ad8e9eb83301bdc0a5c2e2507e8454a16aa18f"
participants = pd.read_csv('../input/participants.csv')
participants.tail()
# + _uuid="1b78244222b181a2044bb6a2300b57abe83d90a5"
stats1 = pd.read_csv('../input/stats1.csv')
stats1.head(2)
# + _uuid="c0f81ed9f95a9be4324251415082a7b55bbd7bc7"
stats2 = pd.read_csv('../input/stats2.csv')
stats2.head(2)
# + _uuid="62096affe9349671f4d96a10bb6b6a4bc75dff6d"
stats = pd.concat([stats1, stats2])  # DataFrame.append was removed in pandas 2.0
stats.shape
# + _uuid="545e9cff02948860fbe0fc44b580373b4e7d649d"
stats.head()
# + [markdown] _uuid="b6794e7852384da6328bb1d8abcdeaada51a11c9"
# ## Merging the Tables
# + [markdown] _uuid="3d37ef9155852327973a1c9f01a79d2247a158e8"
# #### putting all together in one DataFrame
# + _uuid="0f33a25fd32c79611fc8223e37601c1020c2b706"
df = pd.merge(participants, stats, how = 'left', on = ['id'], suffixes=('', '_y'))
df = pd.merge(df , champs, how = 'left', left_on= 'championid', right_on='id'
,suffixes=('', '_y') )
df = pd.merge(df, matches, how = 'left', left_on = 'matchid', right_on = 'id'
, suffixes=('', '_y'))
# + _uuid="ca8d3fc5a1d2282351042df7a3537e198d1dcb0e"
df.columns
# + [markdown] _uuid="bdca1a5bb9607b815845980ca4b08bbee2d9578e"
# ## Some Data Cleaning
# + _uuid="c773d6c89fdee03f74bd437a0bbd05c8f07f83f4"
def final_position(col):
if col['role'] in ('DUO_SUPPORT', 'DUO_CARRY'):
return col['role']
else:
return col['position']
# + _uuid="d781e561ab89343d6cfd466aef2c202562534da6"
df['adjposition'] = df.apply(final_position, axis = 1)
# + _uuid="260f8b3b2e282e38f4d294d917642980452b8796"
df.head()
# + _uuid="db2d731db5e8cbb443d8cf03b6712a33b6958807"
df['team'] = df['player'].apply(lambda x: '1' if x <= 5 else '2')
df['team_role'] = df['team'] + ' - ' + df['adjposition']
# + _uuid="265cccd8a1727e1537683e890cbe0d366cbebe7e"
df.head()
# + [markdown] _uuid="ebf2a7456f623ede96bde245cd1e1f18630576b8"
# ### removing matchid with duplicate roles
# + _uuid="a013aee0f8ca9666dd29d4699dff6f2651fc7415"
remove_index = []
for i in ('1 - MID', '1 - TOP', '1 - DUO_SUPPORT', '1 - DUO_CARRY', '1 - JUNGLE',
'2 - MID', '2 - TOP', '2 - DUO_SUPPORT', '2 - DUO_CARRY', '2 - JUNGLE'):
df_remove = df[df['team_role'] == i].groupby('matchid').agg({'team_role':'count'})
remove_index.extend(df_remove[df_remove['team_role'] != 1].index.values)
# + [markdown] _uuid="c8c44f868c9ca9febeb448085d60f980f581663a"
# ### remove unclassified BOT, as correct ones should be DUO_SUPPORT OR DUO_CARRY
# + _uuid="a11bb4e1cd3fc0c43a690a4add6386ed63e0c958"
remove_index.extend(df[df['adjposition'] == 'BOT']['matchid'].unique())
remove_index = list(set(remove_index))
# + [markdown] _uuid="db9a1685e078667fb726ba5d74700fef852bf1e7"
# ## Before & After Cleaning
# + _uuid="2776d329e3f9b00774844b6ed3f683a384000738"
print('# matches in dataset before cleaning: {}'.format(df['matchid'].nunique()))
df = df[~df['matchid'].isin(remove_index)]
print('# matches in dataset after cleaning: {}'.format(df['matchid'].nunique()))
# + _uuid="9922ead868a9c0ad882ef717fd218a8b2ca03729"
df.columns
# + [markdown] _uuid="92422843a3ef71a42ccbefbb5958936b0eaf4223"
# ### The Columns we need
# + _uuid="f0a5fbf7c2017ec1215ab99dfd18bcbd495051f6"
df = df[['id', 'matchid', 'player', 'name', 'adjposition', 'team_role',
'win', 'kills', 'deaths', 'assists', 'turretkills','totdmgtochamp',
'totheal', 'totminionskilled', 'goldspent', 'totdmgtaken', 'inhibkills',
'pinksbought', 'wardsplaced', 'duration', 'platformid',
'seasonid', 'version']]
df.head()
# + [markdown] _uuid="3e3f1b85c6deef25ed62b8a1593bd5d9496d2f90"
# ## EDA (Exploratory Data Analysis)
# + _uuid="9581c4a4f173d9c0ea2d60d6fb5d8b493c8a61bf"
df_v = df.copy()
# Putting ward limits
df_v['wardsplaced'] = df_v['wardsplaced'].clip(lower=0, upper=30)
df_v['wardsplaced'].head()
# + _uuid="4ab75b01983ba0d8f96a7362b11cc7f92334fff4"
plt.figure(figsize=(12,10))
sns.violinplot(x='seasonid', y= 'wardsplaced', hue='win', data= df_v, split = True
, inner= 'quartile')
plt.title('Wardsplaced by season : win & lose')
# + [markdown] _uuid="c6100d48da3bc7e40d43ca0852b0ebac7bf29c86"
# We can see that ward placement grows more popular every season, in both winning and losing games.
# + _uuid="38c7cd07ade26985899d8f607f6af374a26540c6"
df_corr = df._get_numeric_data()
df_corr = df_corr.drop(['id', 'matchid', 'player', 'seasonid'], axis = 1)
m = np.zeros_like(df_corr.corr(), dtype=bool)
m[np.triu_indices_from(m)] = True
plt.figure(figsize=(16,10))
sns.heatmap(df_corr.corr(), cmap = 'coolwarm', annot= True, fmt = '.2f',
linewidths=.5, mask = m)
plt.title('Correlations - win vs factors (all games)')
# + [markdown] _uuid="d84b5bf9d4f1af0178c4fbab8d894217472c0269"
# If you have never played the game, you may find this info interesting!
# * deaths correlate negatively with win rate
# * kills go hand in hand with goldspent & totdmgtochamp
# * deaths are proportional to duration & totdmgtaken
# * more gold is spent in late game (longer duration)
# * totminionskilled, aka farming, goes well with totdmgtochamp, aka damaging enemy champions, and of course with more goldspent.
# + [markdown] _uuid="9bd7f953a55df80e90cf0fbcf9bd4c733d80b64d"
# #### This is fairly generic, so we will split the heatmap into:
# games shorter than 25 mins
# & games longer than 40 mins
# + _uuid="73ecbf0ac9a822a60574b7fd230960bb5abe7cfb"
df_corr_2 = df._get_numeric_data()
# for games less than 25mins
df_corr_2 = df_corr_2[df_corr_2['duration'] <= 1500]
df_corr_2 = df_corr_2.drop(['id', 'matchid', 'player', 'seasonid'], axis = 1)
m = np.zeros_like(df_corr_2.corr(), dtype=bool)
m[np.triu_indices_from(m)] = True
plt.figure(figsize=(16,10))
sns.heatmap(df_corr_2.corr(), cmap = 'coolwarm', annot= True, fmt = '.2f',
linewidths=.5, mask = m)
plt.title('Correlations - win vs factors (for games lasting less than 25 mins)')
# + [markdown] _uuid="95a24ee2466dd12321e12c05a24eb6c765ae78cb"
# Correlations here are stronger and more obvious:
# * kills & deaths strongly affect winning
# * assists & turretkills also affect winning
# * kills have a strong relation with goldspent
# * more goldspent means more totdmgtochamp, which means more kills are likely
# + _uuid="6bb42793c92992a05781f9fd144391b393eb1a32"
df_corr_3 = df._get_numeric_data()
# for games more than 40mins
df_corr_3 = df_corr_3[df_corr_3['duration'] > 2400]
df_corr_3 = df_corr_3.drop(['id', 'matchid', 'player', 'seasonid'], axis = 1)
m = np.zeros_like(df_corr_3.corr(), dtype=bool)
m[np.triu_indices_from(m)] = True
plt.figure(figsize=(16,10))
sns.heatmap(df_corr_3.corr(), cmap = 'coolwarm', annot= True, fmt = '.2f',
linewidths=.5, mask = m)
plt.title('Correlations - win vs factors (for games longer than 40 mins)')
# + [markdown] _uuid="513002d2bccb3b7555b872ac19f6b619df7770e7"
# So in the late game, as gamers call it, i.e. after 40 mins of game time, we find that:
# * deaths & kills matter much less and have very weak correlation with winning.
# * inhibkills & turretkills have only about 25% correlation with winning (still not a big correlation).
# * kills have high correlation with goldspent & totdmgtochamp.
# * assists have 40% correlation with wardsplaced (as this is the support's job), -43% with totminionskilled (supports don't farm a lot) and -32% with kills.
#
# + [markdown] _uuid="a99313c41cb8eaefc08f36655772464f61a0fd83"
# ### Top win rate champions:
# + _uuid="97f2fae72a9f46cb74f47ce0c154d9634b664e38"
pd.options.display.float_format = '{:,.1f}'.format
df_win_rate = df.groupby('name').agg({'win': 'sum','name': 'count',
'kills':'mean','deaths':'mean',
'assists':'mean'})
df_win_rate.columns = ['win' , 'total matches', 'K', 'D', 'A']
df_win_rate['win rate'] = df_win_rate['win'] / df_win_rate['total matches'] * 100
df_win_rate['KDA'] = (df_win_rate['K'] + df_win_rate['A']) / df_win_rate['D']
df_win_rate = df_win_rate.sort_values('win rate',ascending= False)
df_win_rate = df_win_rate[['total matches', 'win rate' , 'K' , 'D', 'A', 'KDA']]
print('Top 10 win rate')
print(df_win_rate.head(10))
print('Least 10 win rate')
print(df_win_rate.tail(10))
# + _uuid="9372f59f14a7e68f1026f86d69107b6390fbfb3b"
df_win_rate.reset_index(inplace= True)
# + _uuid="baa9e50904bbfbae4e5e8e5296bdffd770b9ea3f"
# plotting the result visually
plt.figure(figsize=(16,30))
cmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)
ax = sns.scatterplot(x="win rate", y="name", hue='KDA',
palette=cmap, sizes=(10, 200),
data=df_win_rate)
# + _uuid="d71d8cae1de94e96d5a1aaafb79ca11b53c92712"
df_win_rate.head()
# + [markdown] _uuid="69b37f4dd486e37538266b3c6e873d0a272ad2b1"
# ## Counter pick advices !
# + _uuid="81d1b00ff2e52c85253515995e71ff3e2aa8d68d"
df_2 = df.sort_values(['matchid','adjposition'], ascending = [1,1])
df_2['shift 1'] = df_2['name'].shift()
df_2['shift -1'] = df_2['name'].shift(-1)
def matchup(x):
if x['player'] <= 5:
if x['name'] < x['shift -1']:
name_return = x['name'] + ' vs ' + x['shift -1']
else:
name_return = x['shift -1'] + ' vs ' + x['name']
else:
if x['name'] < x['shift 1']:
name_return = x['name'] + ' vs ' + x['shift 1']
else:
name_return = x['shift 1'] + ' vs ' + x['name']
return name_return
df_2['matchup'] = df_2.apply(matchup, axis = 1)
df_2['win_adj'] = df_2.apply(lambda x: x['win'] if x['name'] == x['matchup'].split(' vs')[0]
else 0, axis = 1)
df_2.head()
# + _uuid="0392302dcb2dd11eb668619803984e0b9c343b3a"
df_matchup = df_2.groupby(['adjposition', 'matchup']).agg({'win_adj': 'sum', 'matchup': 'count'})
df_matchup.columns = ['win matches', 'total matches']
df_matchup['total matches'] = df_matchup['total matches'] / 2
df_matchup['win rate'] = df_matchup['win matches'] / df_matchup['total matches'] * 100
df_matchup['dominant score'] = df_matchup['win rate'] - 50
df_matchup['dominant score (ND)'] = abs(df_matchup['dominant score'])
df_matchup = df_matchup[df_matchup['total matches'] > df_matchup['total matches'].sum()*0.0001]
df_matchup = df_matchup.sort_values('dominant score (ND)', ascending = False)
df_matchup = df_matchup[['total matches', 'dominant score']]
df_matchup = df_matchup.reset_index()
print('Dominant score +/- means first/second champion dominant:')
for i in df_matchup['adjposition'].unique():
print('\n{}:'.format(i))
print(df_matchup[df_matchup['adjposition'] == i].iloc[:,1:].head(5))
# + _uuid="5231c3e55428e813e624babf71656070093075c2"
df_matchup['adjposition'].unique()
df_matchup_TOP = df_matchup.loc[df_matchup['adjposition'] == 'TOP']
df_matchup_JUNGLE = df_matchup.loc[df_matchup['adjposition'] == 'JUNGLE']
df_matchup_MID = df_matchup.loc[df_matchup['adjposition'] == 'MID']
df_matchup_DUO_CARRY = df_matchup.loc[df_matchup['adjposition'] == 'DUO_CARRY']
df_matchup_DUO_SUPPORT = df_matchup.loc[df_matchup['adjposition'] == 'DUO_SUPPORT']
print(df_matchup_TOP.shape)
print(df_matchup_JUNGLE.shape)
print(df_matchup_MID.shape)
print(df_matchup_DUO_CARRY.shape)
print(df_matchup_DUO_SUPPORT.shape)
# + _uuid="f61eb90c573741864f2e06b7fc985594203f553e"
# plotting duo carry
plt.figure(figsize=(16,60))
sns.set_color_codes("dark")
sns.barplot(x="dominant score", y="matchup", data=df_matchup_DUO_CARRY,
label="Total", color="b")
# + [markdown] _uuid="f8e864fe9112eca2aa0763bd8a6216e7f407425b"
# If we plot the ADC (DUO_CARRY) matchups as an example, we notice:
# * negative values mean the SECOND (right) champion dominates (Kalista vs Kogmaw scoring -12.5 means Kogmaw dominates by far), following the '+/- means first/second champion dominant' convention printed earlier
# * positive values mean the FIRST (left) champion dominates (Graves vs Tristana scoring +5.5 means Graves dominates)
# * values near zero mean both champions are balanced (e.g. MissFortune vs Caitlyn), so it's totally up to your skills ;)
# + _uuid="563e7163301f61d8339f1e65374c3d5b1fd11139"
# plotting TOP
plt.figure(figsize=(16,200))
sns.set()
sns.set_color_codes("dark")
sns.barplot(x="dominant score", y="matchup", data=df_matchup_TOP,
label="Total", color="c")
# + _uuid="7748db0690a2707f8c7c019854ffe5c7b1d7db08"
# plotting jungle
plt.figure(figsize=(16,100))
sns.set()
sns.set_color_codes("dark")
sns.barplot(x="dominant score", y="matchup", data=df_matchup_JUNGLE,
label="Total", color="g")
# + _uuid="1a9abf58adf21d850ee6b6367d716af3027de927"
# plotting mid
plt.figure(figsize=(16,100))
sns.set()
sns.set_color_codes("dark")
sns.barplot(x="dominant score", y="matchup", data=df_matchup_MID,
label="Total", color="r")
# + _uuid="860248aea2ed7f04b8ec174fc3c3b57becf651db"
# plotting support
plt.figure(figsize=(16,100))
sns.set()
sns.set_color_codes("dark")
sns.barplot(x="dominant score", y="matchup", data=df_matchup_DUO_SUPPORT,
label="Total", color="m")
# + [markdown] _uuid="912457a6cafcd22fd1f63b730c96f3cad3e63672"
# ## Thanks, that's all for now
| League of Legends.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 id="Chapter-4----Understanding-Indexes">Chapter 4 -- Understanding Indexes</h1>
#
# <h2 id="Topics-Covered">Topics Covered</h2>
#
# <ul>
# <li><a href="#Indicies" target="_blank">Indices </a></li>
# <li><a href="#.iloc-Indexer" target="_blank">.iloc indexer </a></li>
# <li><a href="#Setting-and-resetting-Indicies" target="_blank">Setting and Resetting Indices</a></li>
# <li><a href="#.loc-Indexer" target="_blank">.loc indexer </a></li>
# <li><a href="#Mixing-.loc-Indexer-with-Boolean-Operators" target="_blank">Mixing .loc indexer with Boolean Operations </a></li>
# <li><a href="#Altering-DataFrame-values-using-the-.loc-indexer" target="_blank">Altering DataFrame values using the .loc indexer </a></li>
# <li><a href="#Conditionally-Apply-Values-Based-on-Another-Column-Value" target="_blank">Conditionally Apply Values Based on Another Column Value</a></li>
# <li><a href="#.ix-Indexer" target="_blank">.ix indexer </a></li>
# <li><a href="#Indexing-Issues" target="_blank">Indexing Issues </a></li>
# <li><a href="#Resources" target="_blank">Resources </a></li>
# </ul>
#
# <p>SAS users tend to think of indexing SAS data sets as a means to improve query performance. Another use case for SAS indices is performing non-sequential reads for table look-ups.</p>
#
# <p>Indexing for DataFrames is used to provide direct access to data. Many analytical techniques take advantage of indexes. Index values are also used as keys for selecting and subsetting data.</p>
#
# <p>It took me a bit of time to understand how indexing for Series and DataFrames actually works.</p>
#
# <p>Many of the working examples I found, while mostly useful, used synthetic data which is typically ordered neatly. Here we examine issues like overlapping date ranges or index values in non-alphabetical or in non-sequential order and so on.</p>
#
# <p>That is why you will find some errors in the examples below. By examining the pits I have fallen into, hopefully you can avoid them.</p>
#
# <p> </p>
#
#
import numpy as np
import pandas as pd
from numpy.random import randn
from pandas import Series, DataFrame, Index
# Consider the creation of the DataFrame df in the cell below.
#
df = pd.DataFrame([['a', 'cold','slow', np.nan, 2., 6., 3.],
['b', 'warm', 'medium', 4, 5, 7, 9],
['c', 'hot', 'fast', 9, 4, np.nan, 6],
['d', 'cool', None, np.nan, np.nan, 17, 89],
['e', 'cool', 'medium', 16, 44, 21, 13],
['f', 'cold', 'slow', np.nan, 29, 33, 17]])
#
#
# In the 'df' DataFrame created above we did not specify a row index or column names, so a RangeIndex object is used for the row labels. Default column labels are created as well, using another RangeIndex object.
df
#
# ## Indices
#
# The .index attribute returns the DataFrame's index structure. We did not explicitly set an index. As a result a default index object is created. The RangeIndex object has a start position at 0 and the end position set to len(df) - 1.
# Return the row index.
df.index
#
# Return the column labels. Since no labels were specified, a RangeIndex object is used to identify columns.
df.columns
#
# For observations SAS uses the automatic variable \_N\_ in the Data Step and FIRSTOBS and OBS in PROC step for its row indexing.
#
# SAS also has a similar construct (SAS variable list) allowing the use of column name 'ranges' (col1-colN) described <a href="http://support.sas.com/documentation/cdl/en/lrcon/69852/HTML/default/viewer.htm#p0wphcpsfgx6o7n1sjtqzizp1n39.htm"> here</a>.
# The SAS example below creates a data set with same data used to create the DataFrame df in cell #2 above.
#
#
# The Data Step with the SET options NOBS= is an example of implicit indexing used by SAS. The END= parameter on the SET statement is initialized to 0 and set to one when the last observation is read. The automatic variable \_N\_ can be used as the observation (row) index.
# ````
# /******************************************************/
# /* c05_sas_row_index1.sas */
# /******************************************************/
# 31 data _null_;
# 32 set df nobs=obs end=end_of_file;
# 33
# 34 put _n_ = ;
# 35
# 36 if end_of_file then
# 37 put 'Data set contains: ' obs ' observations' /;
#
# _N_=1
# _N_=2
# _N_=3
# _N_=4
# _N_=5
# _N_=6
# Data set contains: 6 observations
# ````
#
# Using the DataFrame indices we can select specific rows and columns. DataFrames provide indexers to accomplish these tasks. They are:
#
# 1. The .iloc indexer, which is mainly integer-based
# 2. The .loc indexer, used to select ranges by labels (either column or row)
# 3. The .ix indexer, which supports a combination of .loc and .iloc behavior (deprecated in later versions of pandas)
#
# We also illustrate altering values in a DataFrame using the .loc indexer. This is the equivalent of the SAS UPDATE method.
# ## .iloc Indexer
# The .iloc indexer uses an integer-based method for locating row and column positions. It is a very robust indexer; however, it is limited by the fact that humans are better at remembering labels than numbers. This is analogous to locating observations by \_n\_ or locating a SAS variable using 'colN' from a variable list.
#
# The syntax for the .iloc indexer is:
#
# df.iloc[row selection, column selection]
#
# A comma (,) is used to separate the request for rows from the request for columns. A colon (:) is used to request a range of cells.
#
# The absence of either a row or column selection is an implicit request for all rows or columns, respectively.
#
# Details for the .iloc indexer are located <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer"> here</a>.
# .iloc[0] is useful since it returns the first scalar from a Series or the first row from a DataFrame.
df.iloc[0]
#
# The POINT= option on the SET statement behaves similarly, returning the first row in the data set. Note the SET statement inside a DO loop and the STOP statement. STOP is needed because the POINT= option indicates a non-sequential access pattern and thus the end-of-data-set indicator is not available.
# ````
# /******************************************************/
# /* c05_sas_row_index2.sas */
# /******************************************************/
# 52 data _null_;
# 53
# 54 do iloc = 1 to 1;
# 55 set df point=iloc;
# 56
# 57 put _all_ ;
# 58 end;
# 59 stop;
#
# _N_=1 _ERROR_=0 iloc=1 id=a col1=cold col2=slow col3=. col4=2 col5=6 col6=3
# ````
#
# In the example below, you might expect three rows returned rather than two. A range request for .iloc[] includes the start of the range but does not include the last item in the range.
df.iloc[2:4]
#
# The SAS analog example for cell #6 is below.
# ````
# /******************************************************/
# /* c05_sas_row_index3.sas */
# /******************************************************/
# 94 data _null_ ;
# 95
# 96 do iloc = 3 to 4;
# 97 set df point=iloc;
# 98 put _all_ ;
# 99 end;
# 100 stop;
#
# _N_=1 _ERROR_=0 iloc=3 id=c col1=hot col2=fast col3=9 col4=4 col5=. col6=6
# _N_=1 _ERROR_=0 iloc=4 id=d col1=cool col2= col3=. col4=. col5=17 col6=89
# ````
#
# Similar to the indexer for string slicing, the index position
#
# iloc[0]
#
# returns the first row in a DataFrame and
#
# iloc[-1]
#
# returns the last row in the DataFrame. This is analogous to the END= option for the SET statement (assuming a sequential access pattern).
#
# The .iloc indexer is mainly used to locate the first or last row in a DataFrame.
df.iloc[-1]
#
# The .iloc indexer in cell #8 below returns rows 2 and 3 using (2:4) as the row selector and columns 0 through 5 using (0:6) as the column selector.
df.iloc[2:4, 0:6]
#
# The analog SAS program for returning the same sub-set is below. FIRSTOBS=3 OBS=4 is the equivalent row selector and keep = id -- col5 is the equivalent column selector.
# ````
# /******************************************************/
# /* c05_firstobs_3_obs_4.sas */
# /******************************************************/
# 60 data df;
# 61 set df(keep = id -- col5
# 62 firstobs=3 obs=4);
# 63 put _all_ ;
#
# _N_=1 _ERROR_=0 id=c col1=hot col2=fast col3=9 col4=4 col5=.
# _N_=2 _ERROR_=0 id=d col1=cool col2= col3=. col4=. col5=17
# ````
#
# The .iloc indexer also handles multi-row and multi-column requests, illustrated below. Note the double square bracket ([[ ]]) syntax.
df.iloc[[1,3,5], [2, 4, 6]]
#
# ## .loc Indexer
# The .loc indexer is similar to .iloc and allows access to rows and columns by labels. A good analogy is a cell reference in Excel, e.g. C31.
#
# The syntax for the .loc indexer is:
#
# df.loc[row selection, column selection]
#
# A comma (,) separates the row selection from the column selection. Within either selection, a list in square brackets requests multiple specific cells, and a colon (:) requests a range of cells.
#
# Similar to the .iloc indexer, you can select combinations of rows and columns. The doc details are located <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label"> here</a>.
#
# Consider the DataFrame df2 created below in cell #10. It contains the new columns 'id' and 'date'.
df2 = pd.DataFrame([['a', 'cold','slow', np.nan, 2., 6., 3., '08/01/16'],
['b', 'warm', 'medium', 4, 5, 7, 9, '03/15/16'],
['c', 'hot', 'fast', 9, 4, np.nan, 6, '04/30/16'],
['d', 'None', 'fast', np.nan, np.nan, 17, 89, '05/31/16'],
['e', 'cool', 'medium', 16, 44, 21, 13, '07/04/16'],
['f', 'cold', 'slow', np.nan, 29, 33, 17, '01/01/16']],
columns=['id', 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'date'])
#
# Executing just the name for the DataFrame is the equivalent of:
#
# print(df2)
#
# The print() method for a DataFrame returns the output without the cell outlines, however.
df2
#
#
# ## Setting and resetting Indicies
#
# In SAS the construction of an index for a data set creates an external file used in a deterministic fashion by SAS. In contrast, the construction of a pandas index physically alters either the DataFrame, or a copy of it, depending on argument values to the set_index() method. The doc is located <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing"> here </a>.
# Start by setting the index to 'id', to access rows by a row value or a range of rows values. By default, the column is dropped when it becomes the index. You may not want this behavior, in which case you set the drop= argument value to False.
#
# drop=False
#
# This is useful if you have multiple occasions setting and resetting the index. Otherwise, a re-read of the DataFrame is required. Below, use the set_index() method to set the index to the 'id' column.
df2.set_index('id', inplace=True, drop=False)
#
# The set_index() call is silent when the inplace=True argument is used. Validate the index using the .index attribute.
df2.index
#
# To reset the index, use the reset_index() method.
df2 = df2.reset_index(drop=True)
#
# In order to have the remainder of the examples for the .loc indexer to work, we set the index again.
df2.set_index('id', inplace=True, drop=False)
#
# Return the row labeled 'e'.
df2.loc['e', ]
#
# Return rows in the range of 'b' to 'f' inclusive. 'b':'f' denotes a row range. The absence of a column request is an implicit request for all of them.
df2.loc['b':'f' ,]
#
# Return the rows in the range 'd' to 'f' inclusive. ['col6','col2'] is a request for columns by label.
df2.loc['d':'f',['col6','col2']]
#
# Return the 'date' column by label.
df2.loc[: , 'date']
#
# Change the DataFrame df2 index from 'id' to 'date'. The inplace=True argument does not make a copy of the DataFrame.
df2.set_index('date', inplace=True)
#
# Validate the index.
df2.index
#
# Request a row by label.
df2.loc['05/31/16']
#
# Request arbitrary rows. Notice the double square brackets ([[ ]]).
df2.loc[['03/15/16', '07/04/16']]
#
# Request a range of rows.
df2.loc['04/30/16':'07/04/16',['col2','col1']]
#
# The SAS program below is equivalent to cell #25 above. It uses character values representing dates.
# ````
# /******************************************************/
# /* c05_select_dates_01Feb16_and_31Jul16.sas */
# /******************************************************/
# 25 data df2(keep = col2 col1);
# 26 set df(where=(date between '04/30/16' and '07/04/16'));
# 27 put _all_;
#
# _N_=1 _ERROR_=0 id=c col1=hot col2=fast col3=9 col4=4 col5=. col6=6 date=04/30/16
# _N_=2 _ERROR_=0 id=d col1=cool col2= col3=. col4=. col5=17 col6=89 date=05/31/16
# _N_=3 _ERROR_=0 id=e col1=cool col2=medium col3=16 col4=44 col5=21 col6=13 date=07/04/16
# ````
#
# In cell #26 below we hit a snag. The issue begins with cell #21 above using the set_index attribute for the 'df2' DataFrame. Examine cell #22 above to observe how the df2['date'] column dtype is 'object'.
#
# This indicates we are working with string literals and not datetime objects. Cells #24 and #25 work because these specific labels are values found in the df2['date'] index.
#
# Cell #26 below does not work, since the range request contains the value '07/31/16' as the range end-point, which is not a value in the index.
#
# The remedy, shown below in cell #29 is to use the pd.to_datetime() method to convert the df2['date'] string values into a datetime object. The obvious analogy for SAS users is converting a string variable to a numeric variable which has an associated date format.
#
# Converting string literals to datetime values is covered in the section, String Literal Mapped to datetime timestamp located<a href="http://nbviewer.jupyter.org/github/RandyBetancourt/PythonForSASUsers/blob/master/Chapter%2008%20--%20Date%2C%20Time%2C%20and%20%20Timedelta%20Objects.ipynb#String-Literal-Mapped-to-datetime-timestamp"> here</a>.
df2.loc['01/01/16':'07/31/16']
# Reset the index for the DataFrame 'df2' to the default RangeIndex object.
df2.reset_index(inplace=True)
#
# Validate the index.
df2.index
#
# Cast the df2['date'] column from dtype='object' (strings) to dtype=datetime.
df2['date'] = pd.to_datetime(df2.date)
#
# Set the df2['date'] column as the index.
df2.set_index('date', inplace=True)
#
# Validate the index. Observe the dtype is now datetime64--a datetime timestamp. See <a href="http://nbviewer.jupyter.org/github/RandyBetancourt/PythonForSASUsers/blob/master/Chapter%2008%20--%20Date%2C%20Time%2C%20and%20%20Timedelta%20Objects.ipynb"> Chapter 8--Date, Time, and Timedelta Objects</a> for more details on datetime arithmetic, shifting time intervals, and determining durations.
df2.index
#
# With the df2['date'] column values converted to datetime values, re-run the statement from cell #26 above.
df2.loc['02/01/16':'07/31/16']
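#
# As a self-contained sketch (a toy Series with hypothetical dates, not df2) of
# why the conversion matters: a string index orders dates lexicographically,
# while a DatetimeIndex orders them chronologically and accepts range
# end-points that are not in the index.

```python
import pandas as pd

# Chronological data, but the string labels sort '12/01/15' AFTER '03/15/16'.
s = pd.Series([1, 2], index=['12/01/15', '03/15/16'])
print(s.index.is_monotonic_increasing)   # False: lexicographic string order

d = s.copy()
d.index = pd.to_datetime(d.index)
print(d.index.is_monotonic_increasing)   # True: real chronological order
print(d.loc['01/01/16':'07/31/16'])      # end-points need not be in the index
```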
#
# The SAS example below illustrates a similar set of steps:
#
# 1. Read the original 'date' variable which is character and rename it to 'str_date'
# 2. Use the input function to 'read' the 'str_date' values and assign them to the 'date' variable using the mmddyy10. informat
# 3. Print the date value without any formatting showing it is now a SAS datetime value
# 4. Print the SAS datetime value using the mmddyy10. date format
#
# ````
# /******************************************************/
# /* c05_alter_string_to_datetime_variable.sas */
# /******************************************************/
# 4 data df2(drop = str_date);
# 5 set df(rename=(date=str_date));
# 6 date=input(str_date,mmddyy10.);
# 7
# 8 if _n_ = 1 then put date= /
# 9 date mmddyy10.;
#
# date=20667
# 08/01/2016
# ````
#
# ## Mixing .loc Indexer with Boolean Operators
#
# This approach works by creating either a Series or array of boolean values (True or False). This Series or array is then used by the .loc indexer to return all of the values that evaluate to True. Using the DataFrame df2 created in the cell above, consider the following.
#
# We want to return all rows where 'col2' is not equal to 'fast'. This is expressed as:
#
# df2['col2'] != 'fast'
#
# A Series of True/False values is returned, True where df2['col2'] is not equal to 'fast', as shown below. The df2['date'] column is returned since it remains the index for the DataFrame. The second print() method displays this object as being derived from the class: Series.
print(df2['col2'] != 'fast')
print(type(df2['col2'] != 'fast'))
#
# Passing the boolean Series:
#
# df2['col2'] != 'fast'
#
# to the .loc indexer retrieves those rows with a boolean value of True. We also request 'col1' and 'col2', which is a request by label.
df2.loc[df2['col2'] != 'fast', 'col1':'col2']
#
# You can combine any number of boolean operations together. Boolean value comparison operators are documented <a href="https://docs.python.org/3/reference/expressions.html#value-comparisons"> here</a>.
df2.loc[(df2.col3 >= 9) & (df2.col1 == 'cool'), ]
#
# The .isin() method returns a boolean vector similar to the behavior described in cell #34 above. In this example, a row evaluates True when its 'col6' value is found in the list passed to .isin().
df2.loc[df2.col6.isin([6, 9, 13])]
#
# So far, the .loc indexers have simply displayed their results. All of the indexers can also be used to subset a DataFrame using the assignment syntax shown in cell #33 below.
df3 = df2.loc[df2.col6.isin([6, 9, 13])]
df3
#
# Generally, in these types of sub-setting operations, the shape of the extracted DataFrame will be smaller than the input DataFrame.
#
# To return a DataFrame of the same shape as the original, use the where() method. Details for the pandas where() method are described <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.where.html"> here</a>.
#
print('Shape for df2 is', df2.shape)
print('Shape for df3 is', df3.shape)
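#
# As a quick toy illustration (hypothetical one-column DataFrame, not df2/df3)
# of the same-shape behavior: boolean .loc subsetting shrinks the result, while
# where() keeps the original shape and masks non-matching rows with NaN.

```python
import pandas as pd

df = pd.DataFrame({'col6': [6, 9, 13, 89]})

subset = df.loc[df['col6'].isin([6, 9, 13])]  # matching rows only
masked = df.where(df.isin([6, 9, 13]))        # same shape, NaN elsewhere

print(subset.shape)  # (3, 1)
print(masked.shape)  # (4, 1)
```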
#
# The example SAS program below uses the WHERE IN (list) syntax to subset a data set analogous to the example in cell #32 above.
# ````
# /******************************************************/
# /* c05_where_in.sas */
# /******************************************************/
# NOTE: Data set "WORK.df2" has 6 observation(s) and 8 variable(s)
#
# 27 data df3;
# 28 set df2(where=(col6 in (6 9 13)));
#
# NOTE: 3 observations were read from "WORK.df2"
# NOTE: Data set "WORK.df3" has 3 observation(s) and 8 variable(s)
# ````
#
# Notice how the SAS variable count and DataFrame column count differ by 1. That is because the DataFrame .shape attribute does
# not include the index as part of its column count. By resetting the index, the SAS variable count and DataFrame column count agree.
df3.reset_index(inplace=True)
print('Shape for df3 is', df3.shape)
#
# ## Altering DataFrame values using the .loc indexer
# The .loc indexer can also be used to do an in-place update of values.
# Find the values for df2['col2'] column before updating.
df2.loc[: , 'col2']
#
# ## Conditionally Apply Values Based on Another Column Value
df2.loc[df2['col6'] > 50, "col2"] = "VERY FAST"
#
# The values for df2['col2'] column after the update.
df2.loc[: , 'col2']
#
# ## .ix Indexer
# The .ix indexer combines characteristics of the .loc and .iloc indexers. This means you can select rows and columns by labels and integers.
#
# The syntax for the .ix indexer is:
#
# df.ix[row selection, column selection]
#
# For both the row and column selection, a comma (,) is used to request a list of multiple cells. A colon (:) is used to request a range of cells.
#
# Similar to the .loc indexer you can select combinations of rows and columns.
# The .ix indexer is sometimes tricky to use. A good rule of thumb: if you are indexing with labels, use .loc, and if indexing with integers, use .iloc, to avoid unexpected results. The documentation details are found <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html"> here</a>.
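#
# Note that .ix was deprecated in pandas 0.20 and later removed, so in current
# pandas a mixed label/position request must be translated explicitly. A sketch
# (toy DataFrame with hypothetical values) of the .loc-based replacement:

```python
import pandas as pd

df = pd.DataFrame({'x': [10, 20, 30], 'y': [1.5, 2.5, 3.5]},
                  index=['a', 'b', 'c'])

# Rows by label, columns by integer position: convert the positions to
# labels first, then hand everything to .loc.
cols = df.columns[0:1]          # positional column selection
print(df.loc['a':'b', cols])    # rows 'a' through 'b', first column only
```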
#
# Consider the creation of the DataFrame 'df4' constructed in the cell below. It is similar to DataFrame df2 created in cell #10 above. The differences are the addition of another column and columns being identified with labels as well as integers.
df4 = pd.DataFrame([['a', 'cold','slow', np.nan, 2., 6., 3., 17, '08/01/16'],
['b', 'warm', 'medium', 4, 5, 7, 9, 21, '03/15/16'],
['c', 'hot', 'fast', 9, 4, np.nan, 6, 10, '04/30/16'],
['d', 'None', 'fast', np.nan, np.nan, 17, 89, 44, '05/31/16'],
['e', 'cool', 'medium', 16, 44, 21, 13, 99, '07/04/16'],
['f', 'cold', 'slow', np.nan, 29, 33, 17, 11,'01/01/16']],
columns=['id', 'col1', 'col2', 'col3', 'col4', 4, 5, 6, 'date'])
#
#
df4
#
# Set the index to column df4['id'].
df4.set_index('id', inplace=True)
#
# The .ix indexer allows slicing by labels and integer positions along the index. Look closely at the results, since the request for columns is based on the integer position of the column. The column request of 6 is not a column label here, but a position.
df4.ix['b':'e', 6:8]
#
# #### A Review:
#
# - .iloc uses the integer position in the index and only accepts integers
# - .loc uses the labels in the index
# - .ix generally behaves like .loc
#
# Finally, to appreciate the differences, consider the following DataFrame. Also notice the un-Pythonic style of using a semi-colon at the end of the DataFrame definition.
df5 = pd.DataFrame([['a', 'cold','slow', np.nan, 2., 6., 3.],
['b', 'warm', 'medium', 4, 5, 7, 9],
['c', 'hot', 'fast', 9, 4, np.nan, 6],
['d', 'cool', None, np.nan, np.nan, 17, 89],
['e', 'cool', 'medium', 16, 44, 21, 13],
['f', 'cold', 'slow', np.nan, 29, 33, 17]],
index = [6, 8, 2, 3, 4, 5,]);
df5
#
# The .iloc indexer returns the first two rows since it looks at positions.
df5.iloc[:2]
#
# The .loc indexer returns 3 rows since it looks at the labels.
df5.loc[:2]
#
# The .ix indexer returns the same number of rows as the .loc indexer since its behavior is to first use labels before looking by position. Looking by position with an integer-based index can lead to unexpected results. This is illustrated in cell #49 below.
df5.ix[:2]
#
# For the next two examples, review the DataFrame df5 index structure in the cell below.
df5.index
#
# The .iloc example in the cell below returns the first row. That's because it is looking by position.
df5.iloc[:1]
#
# The .ix example in the cell below raises a KeyError since 1 is not found in the index.
df5.ix[:1]
# ## Indexing Issues
# So far, so good. We have a basic understanding of how indexes can be established, utilized, and reset. We can use the .iloc, .loc, and .ix indexers to retrieve subsets of columns and rows. But what about real-world scenarios where data is rarely, if ever, tidy?
#
# The synthetic examples above work (except the intentional errors of course) since they rely on constructing the DataFrames in an orderly manner, like having 'id' columns in alphabetical order or dates in chronological order.
#
# Consider the DataFrame 'df5' created below. It is similar to DataFrame 'df2' in cell #10 above with the exception of the df5['id'] column containing non-unique values.
df5 = pd.DataFrame([['b', 'cold','slow', np.nan, 2., 6., 3., '01/01/16'],
['c', 'warm', 'medium', 4, 5, 7, 9, '03/15/16'],
['a', 'hot', 'fast', 9, 4, np.nan, 6, '04/30/16'],
['d', 'cool', None, np.nan, np.nan, 17, 89, '05/31/16'],
['c', 'cool', 'medium', 16, 44, 21, 13, '07/04/16'],
['e', 'cold', 'slow', np.nan, 29, 33, 17, '08/30/16']],
columns=['id', 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'date']);
df5
#
# Set the index for DataFrame df5 to the df5['id'] column.
df5.set_index('id', inplace=True)
#
# Validate the index for DataFrame df5.
df5.index
#
# We can use the .loc indexer to request the rows in the range of 'b' through 'd'.
df5.loc['b':'d', :]
# If you look closely at the results from the example above, you will find the first occurrence of the row label 'c' was returned, but not the second row labeled 'c'. The 'id' label 'c' is obviously non-unique. And that is only part of the issue. Consider further the use of a non-unique label for the row range selection in the example below.
#
# This issue is described in sparse prose <a href="http://pandas.pydata.org/pandas-docs/stable/gotchas.html#non-monotonic-indexes-require-exact-matches"> here.</a>
#
# If we request the row label range 'b' through 'c' with all the columns, we raise the error:
#
# "Cannot get right slice bound for non-unique label: 'c'"
#
df5.loc['b':'c', :]
# What is going on here: the values in the 'id' column are non-unique and the index is not sorted. To detect this condition, the attributes .index.is_monotonic_increasing and .index.is_monotonic_decreasing return a boolean indicating whether the index is sorted (monotonic).
# Applied to the df5 DataFrame created in cell #54 above, .is_monotonic_increasing returns False.
df5.index.is_monotonic_increasing
#
# The .is_monotonic_increasing attribute applied to the first DataFrame created above, 'df', returns True.
df.index.is_monotonic_increasing
#
# While not spelled out in any documentation I found, the moral of the story is when using indices with non-unique values, be wary.
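#
# A toy, self-contained sketch (hypothetical Series, not the df5 above) of the
# rule: non-unique labels can still be sliced when the index is sorted
# (monotonic), but slicing to a non-unique label in an unsorted index raises
# the error shown above.

```python
import pandas as pd

# Sorted index with a duplicate label: slicing works and returns both 'b' rows.
s = pd.Series(range(4), index=['a', 'b', 'b', 'c'])
print(s.index.is_monotonic_increasing)  # True
print(s.loc['a':'b'])                   # rows 0, 1, and 2

# Same labels out of order: the right bound 'c' is non-unique, so .loc raises.
t = pd.Series(range(4), index=['b', 'c', 'a', 'c'])
print(t.index.is_monotonic_increasing)  # False
try:
    t.loc['b':'c']
except KeyError as err:
    print('KeyError:', err)
```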
#
# ## Resources
#
# <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html"> Indexing and Selecting Data </a> pandas 0.19.0 documentation
#
# <a href="http://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/"> Selecting DataFrame rows and columns using iloc, loc, and ix in Pandas </a> by <NAME>
#
# <a href="http://chris.friedline.net/2015-12-15-rutgers/lessons/python2/02-index-slice-subset.html"> Indexing, Slicing and Subsetting DataFrames in Python </a> by <NAME>.
#
# <a href="http://pandas.pydata.org/pandas-docs/stable/gotchas.html#non-monotonic-indexes-require-exact-matches"> Non-monotonic indexes require exact matches </a> pandas 0.19.0 documentation
#
# <a href="http://www.swegler.com/becky/blog/2014/08/06/useful-pandas-snippets/"> Useful pandas Snippets </a> by <NAME>, Computers are for People.
# ## Navigation
#
# <a href="http://nbviewer.jupyter.org/github/RandyBetancourt/PythonForSASUsers/tree/master/"> Return to Chapter List </a>
| Chapter04 - Understanding Indexes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Pyro Conditioning
# ### Basic Imports
# +
import numpy as np
import matplotlib.pyplot as plt
import torch
import seaborn as sns
import pandas as pd
import pyro
dist = pyro.distributions
sns.reset_defaults()
sns.set_context(context="talk", font_scale=1)
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
# -
X = dist.MultivariateNormal(loc = torch.tensor([0., 0.]), covariance_matrix=torch.eye(2))
# pyro.condition expects a model and a data dict mapping sample-site names to
# observed values; calling it with no arguments raises a TypeError, so the
# call is sketched here instead:
# conditioned_model = pyro.condition(model_fn, data={"obs": observed_tensor})
# +
data_dim = 2
latent_dim = 1
num_datapoints = 100
z = dist.Normal(
loc=torch.zeros([latent_dim, num_datapoints]),
scale=torch.ones([latent_dim, num_datapoints]),)
w = dist.Normal(
loc=torch.zeros([data_dim, latent_dim]),
scale=5.0 * torch.ones([data_dim, latent_dim]),
)
# +
w_sample= w.sample()
z_sample = z.sample()
x = dist.Normal(loc=w_sample @ z_sample, scale=1.0)
x_sample = x.sample([100])
plt.scatter(x_sample[:, 0], x_sample[:, 1], alpha=0.2, s=30)
# -
# ### Generative model for PPCA in Pyro
# +
import pyro.distributions as dist
import pyro.distributions.constraints as constraints
import pyro
pyro.clear_param_store()
def ppca_model(data, latent_dim):
N, data_dim = data.shape
W = pyro.sample(
"W",
dist.Normal(
loc=torch.zeros([latent_dim, data_dim]),
scale=5.0 * torch.ones([latent_dim, data_dim]),
),
)
Z = pyro.sample(
"Z",
dist.Normal(
loc=torch.zeros([N, latent_dim]),
scale=torch.ones([N, latent_dim]),
),
)
mean = Z @ W
return pyro.sample("obs", pyro.distributions.Normal(mean, 1.0), obs=data)
pyro.render_model(
ppca_model, model_args=(torch.randn(150, 2), 1), render_distributions=True
)
# -
ppca_model(x_sample[0], 3).shape
from pyro import poutine
with pyro.plate("samples", 10, dim=-3):
trace = poutine.trace(ppca_model).get_trace(x_sample[0], 1)
trace.nodes['W']['value'].squeeze()
# +
data_dim = 3
latent_dim = 2
W = pyro.sample(
"W",
dist.Normal(
loc=torch.zeros([latent_dim, data_dim]),
scale=5.0 * torch.ones([latent_dim, data_dim]),
),
)
# -
N = 150
Z = pyro.sample(
"Z",
dist.Normal(
loc=torch.zeros([N, latent_dim]),
scale=torch.ones([N, latent_dim]),
),
)
Z.shape, W.shape
(Z@W).shape
| notebooks/bayesian_ml_with_pyro/2022-02-20-condition-pyro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import division, print_function
from functools import partial
import gpflow
import tensorflow as tf
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from scipy import signal, linalg
# Nice progress bars
try:
from tqdm import tqdm
except ImportError:
tqdm = lambda x: x
import safe_learning
import plotting
from utilities import InvertedPendulum
# %matplotlib inline
# Open a new session (close old one if exists)
try:
session.close()
except NameError:
pass
session = tf.InteractiveSession()
session.run(tf.global_variables_initializer())
# -
# # Define underlying dynamic system and costs/rewards
# Define the dynamics of the true and false system
#
# +
import sys
import os
import importlib
import numpy as np
import tensorflow as tf
from scipy import signal
from safe_learning import DeterministicFunction
from safe_learning import config
from safe_learning.utilities import concatenate_inputs
if sys.version_info.major == 2:
import imp
# Corresponding dynamic systems
@safe_learning.utilities.with_scope('true_dynamics')
def true_dynamics(state_action, action=None):
"""Return future states of the car"""
if action is None:
states, actions = tf.split(state_action, [2, 1], axis=1)
else:
states = state_action
actions = action
x0 = states[:, 0] + states[:, 1]
x1 = states[:, 1] + 0.001 * actions[:, 0]
return tf.stack((x0, x1), axis=1)
@safe_learning.utilities.with_scope('wrong_dynamics')
@concatenate_inputs(start=1)
def wrong_dynamics(state_action):
    """Return future states of the car (with a deliberately wrong input gain)."""
    # concatenate_inputs has already merged states and actions into one tensor,
    # so split it directly rather than testing for a separate `action` argument.
    states, actions = tf.split(state_action, [2, 1], axis=1)
    x0 = states[:, 0] + states[:, 1]
    x1 = states[:, 1] + 0.005 * actions[:, 0]
    return tf.stack((x0, x1), axis=1)
# LQR cost matrices
q = 1 * np.diag([1., 2.])
r = 1.2 * np.array([[1]], dtype=safe_learning.config.np_dtype)
# Quadratic (LQR) reward function
reward_function = safe_learning.QuadraticFunction(linalg.block_diag(-q, -r))
# Discount factor
gamma = .98
# terminal_reward = 1 - gamma
# @safe_learning.utilities.with_scope('reward_function')
# @concatenate_inputs(start=1)
# def reward_function(states, actions):
# """Reward function for the mountain car"""
# zeros = tf.zeros((states.shape[0], 1), tf.float64)
# ones = tf.ones_like(zeros)
# # Reward is zero except at terminal states
# return tf.where(tf.greater(states[:, 0], 1), terminal_reward * ones, zeros)
# -
# # Set up a discretization for safety verification
# +
# Set up a discretization for safety verification
state_limits = np.array([[-1.5, 1.5], [-.1, .1]])
safety_num_states = [50, 50]
policy_num_states = [20, 20]
safety_disc = safe_learning.GridWorld(state_limits, safety_num_states)
policy_disc = safe_learning.GridWorld(state_limits, policy_num_states)
# Discretization constant
tau = np.min(safety_disc.unit_maxes)
print('Grid size: {0}'.format(safety_disc.nindex))
# -
# # Define the GP dynamics model
#
# We use a combination of kernels to model the errors in the dynamics
# +
A = np.array([[1, 1], [0, 1]])
B = np.array([[0], [0.005]])
# sys = signal.StateSpace(A, B, np.eye(2), np.zeros((2, 1)))
# sysd = sys.to_discrete(1)
# A = sysd.A
# B = sysd.B
a_true = np.array([[1, 1], [0, 1]])
b_true = np.array([[0], [0.001]])
# sys = signal.StateSpace(a_true, b_true, np.eye(2), np.zeros((2, 1)))
# sysd = sys.to_discrete(1)
# a_true = sysd.A
# b_true = sysd.B
lipschitz_dynamics = 1
noise_var = 0.001 ** 2
m_true = np.hstack((a_true, b_true))
m = np.hstack((A, B))
variances = (m_true - m) ** 2
# Make sure the prior variances remain strictly positive
np.clip(variances, 1e-5, None, out=variances)
# Kernels
kernel1 = (gpflow.kernels.Linear(3, variance=variances[0, :], ARD=True)
+ gpflow.kernels.Matern32(1, lengthscales=1, active_dims=[0])
* gpflow.kernels.Linear(1, variance=variances[0, 1]))
kernel2 = (gpflow.kernels.Linear(3, variance=variances[1, :], ARD=True)
+ gpflow.kernels.Matern32(1, lengthscales=1, active_dims=[0])
* gpflow.kernels.Linear(1, variance=variances[1, 1]))
# Mean dynamics
mean_dynamics = safe_learning.LinearSystem((A, B), name='mean_dynamics')
mean_function1 = safe_learning.LinearSystem((A[[0], :], B[[0], :]), name='mean_dynamics_1')
mean_function2 = safe_learning.LinearSystem((A[[1], :], B[[1], :]), name='mean_dynamics_2')
# Define a GP model over the dynamics
gp1 = gpflow.gpr.GPR(np.empty((0, 3), dtype=safe_learning.config.np_dtype),
np.empty((0, 1), dtype=safe_learning.config.np_dtype),
kernel1,
mean_function=mean_function1)
gp1.likelihood.variance = noise_var
gp2 = gpflow.gpr.GPR(np.empty((0, 3), dtype=safe_learning.config.np_dtype),
np.empty((0, 1), dtype=safe_learning.config.np_dtype),
kernel2,
mean_function=mean_function2)
gp2.likelihood.variance = noise_var
gp1_fun = safe_learning.GaussianProcess(gp1)
gp2_fun = safe_learning.GaussianProcess(gp2)
dynamics = safe_learning.FunctionStack((gp1_fun, gp2_fun))
# -
print(variances)
print(A)
print(type(A))
print(B)
print(m)
print(m_true)
print(A[[0], :], B[[0], :])
print(A[[1], :], B[[1], :])
# +
# Compute the optimal policy for the linear (and wrong) mean dynamics
k, s = safe_learning.utilities.dlqr(A, B, q, r)
init_policy = safe_learning.LinearSystem((-k), name='initial_policy')
init_policy = safe_learning.Saturation(init_policy, -1., 1.)
# Define the Lyapunov function corresponding to the initial policy
init_lyapunov = safe_learning.QuadraticFunction(s)
# -
import scipy
print(A)
print(B)
print(q)
print(r)
print(scipy.linalg.solve_discrete_are(A, B, q, r))
p = scipy.linalg.solve_discrete_are(A, B, q, r)
bp = B.T.dot(p)
tmp1 = bp.dot(B)
tmp1 += r
tmp2 = bp.dot(A)
k = np.linalg.solve(tmp1, tmp2)
print(k)
print(s)
print(k)
print(s)
print(policy_disc.all_points)
print((-init_lyapunov(policy_disc.all_points).eval()))
# # Set up the dynamic programming problem
# +
# Define a neural network policy
action_limits = np.array([[-1, 1]])
relu = tf.nn.relu
policy = safe_learning.NeuralNetwork(layers=[32, 32, 1],
nonlinearities=[relu, relu, tf.nn.tanh],
scaling=action_limits[0, 1])
# Define value function approximation
value_function = safe_learning.Triangulation(policy_disc,
init_lyapunov(policy_disc.all_points).eval(),
project=True)
# Define policy optimization problem
rl = safe_learning.PolicyIteration(
policy,
dynamics,
reward_function,
value_function,
gamma=gamma)
with tf.name_scope('rl_mean_optimization'):
rl_opt_value_function = rl.optimize_value_function()
# Placeholder for states
tf_states_mean = tf.placeholder(safe_learning.config.dtype, [1000, 2])
# Optimize for expected gain
values = rl.future_values(tf_states_mean)
policy_loss = -1 / (1-gamma) * tf.reduce_mean(values)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
adapt_policy_mean = optimizer.minimize(policy_loss, var_list=rl.policy.parameters)
# -
# Start the session
session.run(tf.global_variables_initializer())
# ### Run initial dynamic programming for the mean dynamics
# +
old_values = np.zeros_like(rl.value_function.parameters[0].eval())
old_actions = np.zeros_like(rl.policy.parameters[0].eval())
converged = False
action_space = np.array([[-1.], [1.]])
for i in range(30):
# Optimize value function
session.run(rl_opt_value_function, feed_dict=rl.feed_dict)
# Optimize policy (discrete over grid or gradient descent)
# rl.discrete_policy_optimization(action_space)
for _ in range(200):
# select random training batches
rl.feed_dict[tf_states_mean] = policy_disc.sample_continuous(1000)
session.run(adapt_policy_mean, feed_dict=rl.feed_dict)
# Get new parameters
values, actions = session.run([rl.value_function.parameters[0],
rl.policy.parameters[0]])
# Compute errors
value_change = np.max(np.abs(old_values - values))
actions_change = np.max(np.abs(old_actions - actions))
# Break if converged
if value_change <= 1e-1 and actions_change <= 1e-1:
converged = True
break
else:
old_values = values
old_actions = actions
if converged:
print('converged after {} iterations. \nerror: {}, \npolicy: {}'
.format(i + 1, value_change, actions_change))
else:
print('didnt converge, error: {} and policy: {}'
.format(value_change, actions_change))
# -
# # Define the Lyapunov function
#
# Here we use the fact that the optimal value function is a Lyapunov function for the optimal policy if the dynamics are deterministic. As uncertainty about the dynamics decreases, the value function for the mean dynamics will thus converge to a Lyapunov function.
# +
lyapunov_function = -rl.value_function
lipschitz_lyapunov = lambda x: tf.reduce_max(tf.abs(rl.value_function.gradient(x)),
axis=1, keep_dims=True)
lipschitz_policy = lambda x: policy.lipschitz()
lipschitz_dynamics = lambda x: np.max(np.abs(a_true)) + np.max(np.abs(b_true)) * lipschitz_policy(x)
# Lyapunov function definition
lyapunov = safe_learning.Lyapunov(safety_disc,
lyapunov_function,
dynamics,
lipschitz_dynamics,
lipschitz_lyapunov,
tau,
policy=rl.policy,
initial_set=None)
# Set initial safe set (level set) based on initial Lyapunov candidate
values = init_lyapunov(safety_disc.all_points).eval()
cutoff = np.max(values) * 0.005
lyapunov.initial_safe_set = np.squeeze(values, axis=1) <= cutoff
# -
np.sum(lyapunov.initial_safe_set)
print(a_true)
print(b_true)
print(np.max(np.abs(a_true)) + np.max(np.abs(b_true)))
print(lyapunov.initial_safe_set)
print(np.where(lyapunov.initial_safe_set)[0].shape)
print(values)
# +
def plot_safe_set(lyapunov, show=True):
"""Plot the safe set for a given Lyapunov function."""
plt.imshow(lyapunov.safe_set.reshape(safety_num_states).T,
origin='lower',
extent=lyapunov.discretization.limits.ravel(),
vmin=0,
vmax=1)
if isinstance(lyapunov.dynamics, safe_learning.UncertainFunction):
X = lyapunov.dynamics.functions[0].X
plt.plot(X[:, 0], X[:, 1], 'rx')
plt.title('safe set')
plt.colorbar()
if show:
plt.show()
lyapunov.update_safe_set()
plot_safe_set(lyapunov)
# -
# ## Safe policy update
#
# We do dynamic programming, but enforce the decrease condition on the Lyapunov function using a Lagrange multiplier
# +
with tf.name_scope('policy_optimization'):
# Placeholder for states
tf_states = tf.placeholder(safe_learning.config.dtype, [1000, 2])
# Add Lyapunov uncertainty (but only if safety-relevant)
values = rl.future_values(tf_states, lyapunov=lyapunov)
policy_loss = -tf.reduce_mean(values)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
adapt_policy = optimizer.minimize(policy_loss, var_list=rl.policy.parameters)
def rl_optimize_policy(num_iter):
# Optimize value function
session.run(rl_opt_value_function, feed_dict=rl.feed_dict)
# select random training batches
for i in tqdm(range(num_iter)):
rl.feed_dict[tf_states] = lyapunov.discretization.sample_continuous(1000)
session.run(adapt_policy, feed_dict=rl.feed_dict)
# -
# # Exploration
#
# We explore close to the current policy by sampling the most uncertain state that does not leave the current level set
# +
action_variation = np.array([[-0.02], [0.], [0.02]], dtype=safe_learning.config.np_dtype)
with tf.name_scope('add_new_measurement'):
action_dim = lyapunov.policy.output_dim
tf_max_state_action = tf.placeholder(safe_learning.config.dtype,
shape=[1, safety_disc.ndim + action_dim])
tf_measurement = true_dynamics(tf_max_state_action)
def update_gp():
"""Update the GP model based on an actively selected data point."""
# Get a new sample location
max_state_action, _ = safe_learning.get_safe_sample(lyapunov,
action_variation,
action_limits,
num_samples=1000)
# Obtain a measurement of the true dynamics
lyapunov.feed_dict[tf_max_state_action] = max_state_action
measurement = tf_measurement.eval(feed_dict=lyapunov.feed_dict)
# Add the measurement to our GP dynamics
lyapunov.dynamics.add_data_point(max_state_action, measurement)
# -
# Get a new sample location
print(lyapunov)
max_state_action, _ = safe_learning.get_safe_sample(lyapunov,
action_variation,
action_limits,
num_samples=1000)
# # Run the optimization
# +
# lyapunov.update_safe_set()
rl_optimize_policy(num_iter=200)
lyapunov.update_safe_set()
plot_safe_set(lyapunov)
lyapunov.feed_dict[lyapunov.c_max]
# -
for i in range(5):
print('iteration {} with c_max: {}'.format(i, lyapunov.feed_dict[lyapunov.c_max]))
for i in tqdm(range(10)):
update_gp()
rl_optimize_policy(num_iter=100)
lyapunov.update_values()
# Update safe set and plot
lyapunov.update_safe_set()
plot_safe_set(lyapunov)
# # Plot trajectories and analyse improvement
# +
x0 = np.array([[1., -.5]])
states_new, actions_new = safe_learning.utilities.compute_trajectory(true_dynamics, rl.policy, x0, 100)
states_old, actions_old = safe_learning.utilities.compute_trajectory(true_dynamics, init_policy, x0, 100)
t = np.arange(len(states_new)) * 1
# +
plt.plot(t, states_new[:, 0], label='new')
plt.plot(t, states_old[:, 0], label='old')
plt.xlabel('time [s]')
plt.ylabel('position [m]')
plt.legend()
plt.show()
plt.plot(t, states_new[:, 1], label='new')
plt.plot(t, states_old[:, 1], label='old')
plt.xlabel('time [s]')
plt.ylabel('velocity [m/s]')
plt.legend()
plt.show()
# -
plt.plot(t[:-1], actions_new, label='new')
plt.plot(t[:-1], actions_old, label='old')
plt.xlabel('time [s]')
plt.ylabel('actions')
plt.legend()
print('reward old:', tf.reduce_sum(rl.reward_function(states_old[:-1], actions_old)).eval(feed_dict=rl.feed_dict))
print('reward new:', tf.reduce_sum(rl.reward_function(states_new[:-1], actions_new)).eval(feed_dict=rl.feed_dict))
len(states_new)
| experiments/try_1d_car.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.023374, "end_time": "2021-08-23T03:02:24.873294", "exception": false, "start_time": "2021-08-23T03:02:24.849920", "status": "completed"} tags=[]
# **This notebook is an exercise in the [Natural Language Processing](https://www.kaggle.com/learn/natural-language-processing) course. You can reference the tutorial at [this link](https://www.kaggle.com/matleonard/text-classification).**
#
# ---
#
# + [markdown] papermill={"duration": 0.022191, "end_time": "2021-08-23T03:02:24.920717", "exception": false, "start_time": "2021-08-23T03:02:24.898526", "status": "completed"} tags=[]
# # Natural Language Classification
#
# You did such a great job for DeFalco's restaurant in the previous exercise that the chef has hired you for a new project.
#
# The restaurant's menu includes an email address where visitors can give feedback about their food.
#
# The manager wants you to create a tool that automatically sends him all the negative reviews so he can fix them, while automatically sending all the positive reviews to the owner, so the manager can ask for a raise.
#
# You will first build a model to distinguish positive reviews from negative reviews using Yelp reviews because these reviews include a rating with each review. Your data consists of the text body of each review along with the star rating. Ratings with 1-2 stars count as "negative", and ratings with 4-5 stars are "positive". Ratings with 3 stars are "neutral" and have been dropped from the data.
#
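#
# The star-to-sentiment mapping described above can be sketched as a small
# helper (hypothetical, for illustration only -- the course data already
# arrives with a 'sentiment' column):

```python
def star_to_sentiment(stars):
    """Map a 1-5 star rating to 'negative', 'positive', or None (dropped)."""
    if stars <= 2:
        return 'negative'
    if stars >= 4:
        return 'positive'
    return None  # 3-star reviews are neutral and dropped from the data

print([star_to_sentiment(s) for s in [1, 3, 5]])  # ['negative', None, 'positive']
```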
# Let's get started. First, run the next code cell.
# + papermill={"duration": 1.429308, "end_time": "2021-08-23T03:02:26.372228", "exception": false, "start_time": "2021-08-23T03:02:24.942920", "status": "completed"} tags=[]
import pandas as pd
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.nlp.ex2 import *
print("\nSetup complete")
# + [markdown] papermill={"duration": 0.022063, "end_time": "2021-08-23T03:02:26.417230", "exception": false, "start_time": "2021-08-23T03:02:26.395167", "status": "completed"} tags=[]
# # Step 1: Evaluate the Approach
#
# Is there anything about this approach that concerns you? After you've thought about it, run the function below to see one point of view.
# + papermill={"duration": 0.035125, "end_time": "2021-08-23T03:02:26.474724", "exception": false, "start_time": "2021-08-23T03:02:26.439599", "status": "completed"} tags=[]
# Check your answer (Run this code cell to receive credit!)
#step_1.solution()
step_1.check()
# + [markdown] papermill={"duration": 0.023049, "end_time": "2021-08-23T03:02:26.521275", "exception": false, "start_time": "2021-08-23T03:02:26.498226", "status": "completed"} tags=[]
# # Step 2: Review Data and Create the model
#
# Moving forward with your plan, you'll need to load the data. Here's some basic code to load data and split it into a training and validation set. Run this code.
# + papermill={"duration": 0.49128, "end_time": "2021-08-23T03:02:27.036081", "exception": false, "start_time": "2021-08-23T03:02:26.544801", "status": "completed"} tags=[]
def load_data(csv_file, split=0.9):
data = pd.read_csv(csv_file)
# Shuffle data
train_data = data.sample(frac=1, random_state=7)
texts = train_data.text.values
labels = [{"POSITIVE": bool(y), "NEGATIVE": not bool(y)}
for y in train_data.sentiment.values]
split = int(len(train_data) * split)
train_labels = [{"cats": labels} for labels in labels[:split]]
val_labels = [{"cats": labels} for labels in labels[split:]]
return texts[:split], train_labels, texts[split:], val_labels
train_texts, train_labels, val_texts, val_labels = load_data('../input/nlp-course/yelp_ratings.csv')
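# The shuffle-then-split logic inside load_data() can be sketched with a toy
# list (hypothetical data) to make the 90/10 split explicit:

```python
def split_data(texts, labels, frac=0.9):
    """Keep the first `frac` of the rows for training, the rest for validation."""
    cut = int(len(texts) * frac)
    return texts[:cut], labels[:cut], texts[cut:], labels[cut:]

tr_x, tr_y, va_x, va_y = split_data(list('abcdefghij'), list(range(10)))
print(len(tr_x), len(va_x))  # 9 1
```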
# + [markdown] papermill={"duration": 0.023331, "end_time": "2021-08-23T03:02:27.083286", "exception": false, "start_time": "2021-08-23T03:02:27.059955", "status": "completed"} tags=[]
# You will use this training data to build a model. The code to build the model is the same as what you saw in the tutorial. So that is copied below for you.
#
# But because your data is different, there are **two lines in the modeling code cell that you'll need to change.** Can you figure out what they are?
#
# First, run the cell below to look at a couple elements from your training data.
# + papermill={"duration": 0.032083, "end_time": "2021-08-23T03:02:27.139043", "exception": false, "start_time": "2021-08-23T03:02:27.106960", "status": "completed"} tags=[]
print('Texts from training data\n------')
print(train_texts[:2])
print('\nLabels from training data\n------')
print(train_labels[:2])
# + [markdown] papermill={"duration": 0.023507, "end_time": "2021-08-23T03:02:27.186284", "exception": false, "start_time": "2021-08-23T03:02:27.162777", "status": "completed"} tags=[]
# Now, having seen this data, find the two lines that need to be changed.
# + papermill={"duration": 0.510527, "end_time": "2021-08-23T03:02:27.721731", "exception": false, "start_time": "2021-08-23T03:02:27.211204", "status": "completed"} tags=[]
import spacy
# Create an empty model
nlp = spacy.blank("en")
# Create the TextCategorizer with exclusive classes and "bow" architecture
textcat = nlp.create_pipe(
"textcat",
config={
"exclusive_classes": True,
"architecture": "bow"})
# Add the TextCategorizer to the empty model
nlp.add_pipe(textcat)
# Add labels to text classifier
textcat.add_label("NEGATIVE")
textcat.add_label("POSITIVE")
# Check your answer
step_2.check()
# + papermill={"duration": 0.03159, "end_time": "2021-08-23T03:02:27.778082", "exception": false, "start_time": "2021-08-23T03:02:27.746492", "status": "completed"} tags=[]
# Lines below will give you a hint or solution code
#step_2.hint()
#step_2.solution()
# + [markdown] papermill={"duration": 0.025329, "end_time": "2021-08-23T03:02:27.828287", "exception": false, "start_time": "2021-08-23T03:02:27.802958", "status": "completed"} tags=[]
# # Step 3: Train Function
#
# Implement a function `train` that updates a model with training data. Most of this is general data munging, which we've filled in for you. Just add the one line of code necessary to update your model.
# + papermill={"duration": 7.184762, "end_time": "2021-08-23T03:02:35.038063", "exception": false, "start_time": "2021-08-23T03:02:27.853301", "status": "completed"} tags=[]
from spacy.util import minibatch
import random
def train(model, train_data, optimizer):
losses = {}
random.seed(1)
random.shuffle(train_data)
batches = minibatch(train_data, size=8)
for batch in batches:
# train_data is a list of tuples [(text0, label0), (text1, label1), ...]
# Split batch into texts and labels
texts, labels = zip(*batch)
# Update model with texts and labels
model.update(texts, labels, sgd=optimizer, losses=losses)
return losses
# Check your answer
step_3.check()
# + papermill={"duration": 0.032975, "end_time": "2021-08-23T03:02:35.096751", "exception": false, "start_time": "2021-08-23T03:02:35.063776", "status": "completed"} tags=[]
# Lines below will give you a hint or solution code
#step_3.hint()
#step_3.solution()
# + papermill={"duration": 116.855794, "end_time": "2021-08-23T03:04:31.978528", "exception": false, "start_time": "2021-08-23T03:02:35.122734", "status": "completed"} tags=[]
# Fix seed for reproducibility
spacy.util.fix_random_seed(1)
random.seed(1)
# This may take a while to run!
optimizer = nlp.begin_training()
train_data = list(zip(train_texts, train_labels))
losses = train(nlp, train_data, optimizer)
print(losses['textcat'])
# + [markdown] papermill={"duration": 0.025918, "end_time": "2021-08-23T03:04:32.030589", "exception": false, "start_time": "2021-08-23T03:04:32.004671", "status": "completed"} tags=[]
# We can try this slightly trained model on some example text and look at the probabilities assigned to each label.
# + papermill={"duration": 0.034897, "end_time": "2021-08-23T03:04:32.091664", "exception": false, "start_time": "2021-08-23T03:04:32.056767", "status": "completed"} tags=[]
text = "This tea cup was full of holes. Do not recommend."
doc = nlp(text)
print(doc.cats)
# + [markdown] papermill={"duration": 0.026209, "end_time": "2021-08-23T03:04:32.144559", "exception": false, "start_time": "2021-08-23T03:04:32.118350", "status": "completed"} tags=[]
# These probabilities look reasonable. Now you should turn them into an actual prediction.
#
# # Step 4: Making Predictions
#
# Implement a function `predict` that predicts the sentiment of text examples.
# - First, tokenize the texts using `nlp.tokenizer()`.
# - Then, pass those docs to the TextCategorizer which you can get from `nlp.get_pipe()`.
# - Use the `textcat.predict()` method to get scores for each document, then choose the class with the highest score (probability) as the predicted class.
# + papermill={"duration": 3.560514, "end_time": "2021-08-23T03:04:35.731797", "exception": false, "start_time": "2021-08-23T03:04:32.171283", "status": "completed"} tags=[]
def predict(nlp, texts):
# Use the model's tokenizer to tokenize each input text
docs = [nlp.tokenizer(text) for text in texts]
# Use textcat to get the scores for each doc
textcat = nlp.get_pipe('textcat')
scores, _ = textcat.predict(docs)
# From the scores, find the class with the highest score/probability
predicted_class = scores.argmax(axis=1)
return predicted_class
# Check your answer
step_4.check()
# + papermill={"duration": 0.034375, "end_time": "2021-08-23T03:04:35.793644", "exception": false, "start_time": "2021-08-23T03:04:35.759269", "status": "completed"} tags=[]
# Lines below will give you a hint or solution code
#step_4.hint()
#step_4.solution()
# + papermill={"duration": 0.040341, "end_time": "2021-08-23T03:04:35.861463", "exception": false, "start_time": "2021-08-23T03:04:35.821122", "status": "completed"} tags=[]
texts = val_texts[34:38]
predictions = predict(nlp, texts)
for p, t in zip(predictions, texts):
print(f"{textcat.labels[p]}: {t} \n")
# + [markdown] papermill={"duration": 0.027566, "end_time": "2021-08-23T03:04:35.917137", "exception": false, "start_time": "2021-08-23T03:04:35.889571", "status": "completed"} tags=[]
# It looks like your model is working well after going through the data just once. However, you need to calculate some metric for the model's performance on the hold-out validation data.
#
# # Step 5: Evaluate The Model
#
# Implement a function that evaluates a `TextCategorizer` model. This function `evaluate` takes a model along with texts and labels. It returns the accuracy of the model, which is the number of correct predictions divided by all predictions.
#
# First, use the `predict` method you wrote earlier to get the predicted class for each text in `texts`. Then, find where the predicted labels match the true "gold-standard" labels and calculate the accuracy.
# + papermill={"duration": 0.047789, "end_time": "2021-08-23T03:04:35.992954", "exception": false, "start_time": "2021-08-23T03:04:35.945165", "status": "completed"} tags=[]
predict(nlp, texts)
# + papermill={"duration": 3.514594, "end_time": "2021-08-23T03:04:39.545144", "exception": false, "start_time": "2021-08-23T03:04:36.030550", "status": "completed"} tags=[]
def evaluate(model, texts, labels):
""" Returns the accuracy of a TextCategorizer model.
Arguments
---------
    model: spaCy model with a TextCategorizer
texts: Text samples, from load_data function
labels: True labels, from load_data function
"""
# Get predictions from textcat model (using your predict method)
predicted_class = predict(model, texts)
# From labels, get the true class as a list of integers (POSITIVE -> 1, NEGATIVE -> 0)
true_class = [int(label['cats']['POSITIVE']) for label in labels]
# A boolean or int array indicating correct predictions
correct_predictions = true_class == predicted_class
# The accuracy, number of correct predictions divided by all predictions
accuracy = correct_predictions.mean()
return accuracy
step_5.check()
# + papermill={"duration": 0.036666, "end_time": "2021-08-23T03:04:39.610704", "exception": false, "start_time": "2021-08-23T03:04:39.574038", "status": "completed"} tags=[]
# Lines below will give you a hint or solution code
#step_5.hint()
#step_5.solution()
# + papermill={"duration": 2.48251, "end_time": "2021-08-23T03:04:42.122267", "exception": false, "start_time": "2021-08-23T03:04:39.639757", "status": "completed"} tags=[]
accuracy = evaluate(nlp, val_texts, val_labels)
print(f"Accuracy: {accuracy:.4f}")
# + [markdown] papermill={"duration": 0.029201, "end_time": "2021-08-23T03:04:42.180920", "exception": false, "start_time": "2021-08-23T03:04:42.151719", "status": "completed"} tags=[]
# With the functions implemented, you can train and evaluate in a loop.
# + papermill={"duration": 722.058472, "end_time": "2021-08-23T03:16:44.268582", "exception": false, "start_time": "2021-08-23T03:04:42.210110", "status": "completed"} tags=[]
# This may take a while to run!
n_iters = 5
for i in range(n_iters):
losses = train(nlp, train_data, optimizer)
accuracy = evaluate(nlp, val_texts, val_labels)
print(f"Loss: {losses['textcat']:.3f} \t Accuracy: {accuracy:.3f}")
# + [markdown] papermill={"duration": 0.030829, "end_time": "2021-08-23T03:16:44.330508", "exception": false, "start_time": "2021-08-23T03:16:44.299679", "status": "completed"} tags=[]
# # Step 6: Keep Improving
#
# You've built the necessary components to train a text classifier with spaCy. What could you do further to optimize the model?
#
# Run the next line to check your answer.
# + papermill={"duration": 0.042316, "end_time": "2021-08-23T03:16:44.405000", "exception": false, "start_time": "2021-08-23T03:16:44.362684", "status": "completed"} tags=[]
# Check your answer (Run this code cell to receive credit!)
#step_6.solution()
step_6.check()
# + [markdown] papermill={"duration": 0.031643, "end_time": "2021-08-23T03:16:44.468693", "exception": false, "start_time": "2021-08-23T03:16:44.437050", "status": "completed"} tags=[]
# ## Keep Going
#
# The next step is a big one. See how you can **[represent tokens as vectors that describe their meaning](https://www.kaggle.com/matleonard/word-vectors)**, and plug those into your machine learning models.
# + [markdown] papermill={"duration": 0.031554, "end_time": "2021-08-23T03:16:44.532099", "exception": false, "start_time": "2021-08-23T03:16:44.500545", "status": "completed"} tags=[]
# ---
#
#
#
#
# *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161466) to chat with other Learners.*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="oeWq7mGFZm6L"
# _Lambda School Data Science_
#
# # Reshape data
#
# Objectives
# - understand tidy data formatting
# - melt and pivot data with pandas
#
# Links
# - [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)
# - [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
# - Tidy Data
# - Reshaping Data
# - Python Data Science Handbook
# - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping
# - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables
#
# Reference
# - pandas documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)
# - Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
# + [markdown] colab_type="text" id="u2-7QkU3eR_e"
# ## Why reshape data?
#
# #### Some libraries prefer data in different formats
#
# For example, the Seaborn data visualization library prefers data in "Tidy" format often (but not always).
#
# > "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:
#
# > - Each variable is a column
# - Each observation is a row
#
# > A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
#
# #### Data science is often about putting square pegs in round holes
#
# Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling!
# + [markdown] colab_type="text" id="3av1dYbRZ4k2"
# ## Upgrade Seaborn
#
# Run the cell below which upgrades Seaborn and automatically restarts your Google Colab Runtime.
# + colab_type="code" id="AOLhnquFxao7" outputId="438dce8d-1007-42f5-8a8d-2ff98f4f2b2a" colab={"base_uri": "https://localhost:8080/", "height": 435}
# !pip install seaborn --upgrade ## terminal command
import os
os.kill(os.getpid(), 9) ## do this when upgrading a package onto colab..
# + [markdown] colab_type="text" id="tE_BXOAjaWB_"
# ## Hadley Wickham's Examples
#
# From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
# + colab_type="code" id="PL6hzS3yYsNt" colab={}
# magic command for a notebook environment.. in this case, render visualizations inline for matplotlib
# %matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
    index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
# + [markdown] colab_type="text" id="YvfghLi3bu6S"
# "Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild.
#
# The table has two columns and three rows, and both rows and columns are labelled."
# + colab_type="code" id="5ZidjYdNikwF" colab={}
table1
# + [markdown] colab_type="text" id="wIfPYP4rcDbO"
# "There are many ways to structure the same underlying data.
#
# Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
# + colab_type="code" id="mYBLbVTVKR2h" colab={}
table2
# + [markdown] colab_type="text" id="RaZuIwqNcRpr"
# "Table 3 reorganises Table 1 to make the values, variables and observations more clear.
#
# Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."
#
# | name | trt | result |
# |--------------|-----|--------|
# | John Smith   | a   | -      |
# | Jane Doe     | a   | 16     |
# | Mary Johnson | a   | 3      |
# | John Smith   | b   | 2      |
# | Jane Doe     | b   | 11     |
# | Mary Johnson | b   | 1      |
# + [markdown] colab_type="text" id="8P88YyUvaxAV"
# ## Table 1 --> Tidy
#
# We can use the pandas `melt` function to reshape Table 1 into Tidy format.
# + colab_type="code" id="vOUzvON0t8El" colab={}
table1
# + id="i7R3Nwgv9elc" colab_type="code" colab={}
table1.columns
# + id="encAAq-K9sRo" colab_type="code" colab={}
table1.index # names are part of the index.. not part of a column. Important to check this!
# + id="eIo6cbzP96xy" colab_type="code" colab={}
table1.index.tolist()
# + id="ExyO9gLQ90d2" colab_type="code" colab={}
table1.reset_index().melt() ## reset index first. then used melt function.. but not quite what we want yet.
# + id="VFAhoA1e-SRe" colab_type="code" colab={}
tidy = table1.reset_index().melt(id_vars='index')  # this is what we want: pass id_vars to melt
## note that 'index' just happens to be the name of the column; the parameter refers to the column by name
tidy
# + id="2IazXXqFASXV" colab_type="code" colab={}
## let's tidy this dataset up even more!
tidy = tidy.rename(columns={ # renaming the columns
'index': 'name',
'variable': 'trt',
'value': 'result'
})
tidy
# + id="sZ1uhnefAvTy" colab_type="code" colab={}
tidy['trt'] = tidy['trt'].str.replace('treatment', '')  # get rid of 'treatment' in the 'trt' column (.str.replace works on substrings; plain .replace only matches whole values)
tidy
# + id="PMSrdej9w_QB" colab_type="code" colab={}
tidy['result'] = tidy['result'].replace(np.nan, '-')  # np is already imported, so np.nan needs no extra import
tidy
# + [markdown] id="j-ouzdprCuFb" colab_type="text"
# ***extra challenge***
# change a and b into 0 and 1
# + id="-zuJLssMCte3" colab_type="code" colab={}
# tidy['trt'].replace('a', 0).replace('b', 1) .. replace!
## tidy['trt'].map({'a': 0, 'b': 1}) .. map!
## (tidy['trt'] == 'b').astype(int) ... returns booleans.. which are treated as 1's and 0's!
## tidy['trt'].apply(lambda x: ord(x) - ord('a')) .. get unicode and lambda fancy-like. wowowo
# + [markdown] colab_type="text" id="uYb2vG44az2m"
# ## Table 2 --> Tidy
# + colab_type="code" id="yP_oYbGsazdU" colab={}
## self exercise
table2
# + id="ZL-yHskSs7A9" colab_type="code" colab={}
table2_tran = table2.T
table2_tran
# + id="E98P72tdtSiC" colab_type="code" colab={}
tidy2 = table2_tran.reset_index().melt(id_vars='index')
tidy2
# + id="eAaGAsCot277" colab_type="code" colab={}
tidy2 = tidy2.rename(columns={
'index': 'name',
'variable': 'trt',
'value': 'result'
})
tidy2
# + id="PNFT3cAcvsfK" colab_type="code" colab={}
tidy2['trt'] = tidy2['trt'].str.replace('treatment', '')
tidy2
# + [markdown] colab_type="text" id="kRwnCeDYa27n"
# ## Tidy --> Table 1
#
# The `pivot_table` function is the inverse of `melt`.
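A minimal round-trip sketch on a toy frame (hypothetical column names, not the lesson data) shows that `pivot_table` undoes `melt` when each (index, column) pair holds a single value:

```python
import pandas as pd

# A small wide table: rows are names, columns are treatments
wide = pd.DataFrame({'a': [1, 2], 'b': [3, 4]}, index=['x', 'y'])
wide.index.name = 'name'

# melt to tidy (long) form: one row per (name, trt) observation
tidy = wide.reset_index().melt(id_vars='name', var_name='trt', value_name='result')

# pivot_table back to wide form; with unique (name, trt) pairs the default
# 'mean' aggregation just returns each single value unchanged
back = tidy.pivot_table(index='name', columns='trt', values='result')
print(back.loc['x', 'a'], back.loc['y', 'b'])
```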
# + colab_type="code" id="BxcwXHS9H7RB" colab={}
table1
# + id="XwfAtGruFFBL" colab_type="code" colab={}
tidy
# + id="SwsHI8ngFIFJ" colab_type="code" colab={}
tidy.pivot_table(index='name', columns='trt', values='result') # think of pivot and melt as opposite treatments of data
# + [markdown] colab_type="text" id="nR4dlpFQa5Pw"
# ## Tidy --> Table 2
# + colab_type="code" id="flcwLnVdJ-TD" colab={}
## self exercise
tidy2.pivot_table(index='name', columns='trt', values='result')
# + [markdown] colab_type="text" id="7OwdtbQqgG4j"
# ## Load Instacart data
#
# Let's return to the dataset of [3 Million Instacart Orders](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)
# + [markdown] colab_type="text" id="RdXhRmSbgbBc"
# If necessary, uncomment and run the cells below to re-download and extract the data
# + colab_type="code" id="SoX-00UugVZD" outputId="47285a18-75dc-4274-bd07-2f40e2f2464b" colab={"base_uri": "https://localhost:8080/", "height": 233}
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# + colab_type="code" id="tDGkv5vngXTw" outputId="59810217-d8ed-41f7-dd2d-c335759bf77a" colab={"base_uri": "https://localhost:8080/", "height": 267}
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# + [markdown] colab_type="text" id="covQKAHggl80"
# Run these cells to load the data
# + colab_type="code" id="dsbev9Gi0JYo" outputId="4e7d5971-7586-4fc5-9015-da9e72ed7b7e" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd instacart_2017_05_01
# + colab_type="code" id="1AHEpFPcMTn1" colab={}
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
# + [markdown] id="EtHrEEa1JVz5" colab_type="text"
# ***Seaborn example***
# - each variable is a column
# - each observation is a row
# + id="ouwFYc3dJdmx" colab_type="code" colab={}
sns.catplot(x='trt', y='result', col='name', kind='bar', data=tidy, height=2) #cool
# + [markdown] colab_type="text" id="bmgW_DxohBV5"
# ## Goal: Reproduce part of this example
#
# Instead of a plot with 50 products, we'll just do two — the first products from each list
# - Half And Half Ultra Pasteurized
# - Half Baked Frozen Yogurt
# + colab_type="code" id="p4CdH8hkg5RJ" outputId="33af5ac8-37fc-4aea-eeb8-80c83b5a7af7" colab={"base_uri": "https://localhost:8080/", "height": 383}
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
# + [markdown] colab_type="text" id="VgXHJM-mhvuo"
# So, given a `product_name` we need to calculate its `order_hour_of_day` pattern.
# + [markdown] colab_type="text" id="PZxgqPU7h8cj"
# ## Subset and Merge
# + colab_type="code" id="LUoNA7_UTNkp" outputId="f984a51f-f5e9-4663-9a2d-f80c5e611579" colab={"base_uri": "https://localhost:8080/", "height": 34}
products.columns.tolist()
# + id="Nrax5W7LLJ-K" colab_type="code" outputId="e53aa2bf-2f43-4fd8-ae44-f34647363e6c" colab={"base_uri": "https://localhost:8080/", "height": 34}
order_products.columns.tolist()
# + id="9USlPdndLOgN" colab_type="code" outputId="d762c0cc-41ce-419a-f574-f796f35711b5" colab={"base_uri": "https://localhost:8080/", "height": 136}
orders.columns.tolist()
# + id="B7SvUKAyLVFB" colab_type="code" colab={}
merged = (products[['product_id', 'product_name']]
.merge(order_products[['order_id', 'product_id']])
.merge(orders[['order_id', 'order_hour_of_day']]))
# + id="AnPL3QiRMvbF" colab_type="code" outputId="fc41e758-4bc6-4aed-ed96-0dd5bd98854f" colab={"base_uri": "https://localhost:8080/", "height": 34}
products.shape, order_products.shape, orders.shape, merged.shape
# + id="dMOJb48nM6q9" colab_type="code" colab={}
merged.head()
# + id="vvRgy6hENSvu" colab_type="code" colab={}
# Filter 'merged' to just the two products that we care about
condition = ((merged['product_name'] == 'Half Baked Frozen Yogurt') |
             (merged['product_name'] == 'Half And Half Ultra Pasteurized'))
subset = merged[condition]
times_and_sales = (merged
.groupby('product_name')
.order_hour_of_day.agg(['mean', 'count'])
.rename(columns={'mean': 'average time',
'count': 'total sales'}))
# + id="KIw76U7WyUxi" colab_type="code" colab={}
# + id="8aDhGtN9y-HG" colab_type="code" colab={}
# + id="5dr7vwVQzK-i" colab_type="code" colab={}
popular = times_and_sales[times_and_sales['total sales'] > 2900]
popular_evening = popular.sort_values(by='average time', ascending=False)[:25].index
# + id="m3KBxR3nzsEg" colab_type="code" colab={}
popular_morning = popular.sort_values(by='average time', ascending=True)[:25].index
# + id="0rlTc5c5yCa_" colab_type="code" colab={}
# + [markdown] colab_type="text" id="lOw6aZ3oiPLf"
# ## 4 ways to reshape and plot
# + [markdown] colab_type="text" id="5W-vHcWZiFKv"
# ### 1. value_counts
# + colab_type="code" id="QApT8TeRTsgh" colab={}
froyo = subset[subset['product_name'] == 'Half Baked Frozen Yogurt']
cream = subset[subset['product_name'] == 'Half And Half Ultra Pasteurized']
# + id="RTj0K54kO-XT" colab_type="code" colab={}
(cream['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot())
(froyo['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot())
# + [markdown] colab_type="text" id="CiB9xmZ4iIqt"
# ### 2. crosstab
# + colab_type="code" id="aCzF5spQWd_f" colab={}
pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize='columns').plot()
# + [markdown] colab_type="text" id="wCp-qjbriUze"
# ### 3. pivot_table
# + colab_type="code" id="O8d6_TDKNsxB" colab={}
subset.pivot_table(index='order_hour_of_day',
columns='product_name',
values='order_id',
aggfunc=len).plot()
# + [markdown] colab_type="text" id="48wCoJowigCf"
# ### 4. melt
# + colab_type="code" id="VnslvFfvYSIk" colab={}
table = pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize=True)
melted = table.reset_index().melt(id_vars='order_hour_of_day').rename(columns={'order_hour_of_day': 'Hour of Day Ordered',
'product_name': 'Product',
'value': 'Percent of Orders by Product'})
sns.relplot(x='Hour of Day Ordered', y='Percent of Orders by Product', hue="Product", data=melted, kind='line')
# + [markdown] id="YxisGz8p5UG4" colab_type="text"
# # ASSIGNMENT
# - Replicate the lesson code
# - Complete the code cells we skipped near the beginning of the notebook
# - Table 2 --> Tidy
# - Tidy --> Table 2
# + [markdown] id="5LalIVfq5UG6" colab_type="text"
# - Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
# + id="R09OP2n-5UG7" colab_type="code" colab={}
flights = sns.load_dataset('flights')
# + id="4Ko0LyXx5UG9" colab_type="code" colab={}
flights.head()
# + id="dBMJn34m2lqu" colab_type="code" colab={}
flights.shape
# + id="F81AHnm45Fwz" colab_type="code" colab={}
## tidy2.pivot_table(index='name', columns='trt', values='result')
flights.pivot_table(index='year', columns='month', values='passengers')
# + [markdown] id="z5nS6d165UHA" colab_type="text"
# # STRETCH OPTIONS
#
# _Try whatever sounds most interesting to you!_
#
# - Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"
# - Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"
# - Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)
# - Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
# + id="mbcfr0MUNtTx" colab_type="code" colab={}
subset_a = merged[merged['product_name'].isin(popular_morning)]
# + id="xUIzastI_M6A" colab_type="code" colab={}
# + id="ULMZfFaRwX2h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 956} outputId="d0ae194d-d04b-4635-fecf-add3a68fd99c"
ax = pd.crosstab(subset_a['order_hour_of_day'],
subset_a['product_name'],
normalize='columns')[popular_morning].plot(color='green', figsize=(20, 16))
# + id="_3t1S3jwAVzk" colab_type="code" colab={}
subset_b = merged[merged['product_name'].isin(popular_evening)]
# + id="o-GSgD9mAux3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 973} outputId="c682fb74-4c3a-4e34-c663-39aac4341a5a"
pd.crosstab(subset_b['order_hour_of_day'],
subset_b['product_name'],
normalize='columns')[popular_evening].plot(color='red', figsize=(20, 16))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <h1>CS4619: Artificial Intelligence II</h1>
# <h1>Word Embeddings</h1>
# <h2>
# <NAME><br>
# School of Computer Science and Information Technology<br>
# University College Cork
# </h2>
# <h1>Initialization</h1>
# $\newcommand{\Set}[1]{\{#1\}}$
# $\newcommand{\Tuple}[1]{\langle#1\rangle}$
# $\newcommand{\v}[1]{\pmb{#1}}$
# $\newcommand{\cv}[1]{\begin{bmatrix}#1\end{bmatrix}}$
# $\newcommand{\rv}[1]{[#1]}$
# $\DeclareMathOperator{\argmax}{arg\,max}$
# $\DeclareMathOperator{\argmin}{arg\,min}$
# $\DeclareMathOperator{\dist}{dist}$
# $\DeclareMathOperator{\abs}{abs}$
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.initializers import Constant
from tensorflow.keras import Input
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Embedding
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping
# +
# This OneHot layer comes from https://fdalvi.github.io/blog/2018-04-07-keras-sequential-onehot/
from tensorflow.keras.layers import Lambda
from tensorflow.keras import backend as K
def OneHot(input_dim=None, input_length=None):
# Check if inputs were supplied correctly
if input_dim is None or input_length is None:
raise TypeError("input_dim or input_length is not set")
# Helper method (not inlined for clarity)
def _one_hot(x, num_classes):
return K.one_hot(K.cast(x, 'uint8'),
num_classes=num_classes)
# Final layer representation as a Lambda layer
return Lambda(_one_hot,
arguments={'num_classes': input_dim},
input_shape=(input_length,))
# -
# <h1>Acknowledgements</h1>
# <ul>
# <li>Part of the code comes from chapter 6 of:
#         François Chollet: <i>Deep Learning with Python</i>,
# Manning Publications, 2018
# </li>
# </ul>
# <h1>Natural Language Processing</h1>
# <ul>
# <li>In the previous lecture, we represented each document as a single vector (bag-of-words).
# <ul>
# <li>This is OK for some applications, e.g. spam filtering.</li>
# </ul>
# </li>
# <li>But for many applications of natural language processing (NLP), we may need to treat documents as
# <em>sequences</em> (lists) of words (and maybe of punctuation symbols also):
# <ul>
# <li>Sentiment analysis, e.g. of movie reviews or tweets;</li>
# <li>Machine translation;</li>
# <li>Image captioning;
# </li>
# <li>Question-answering and chatbots.</li>
# </ul>
# </li>
# <li>There are other applications where each example is a sequence of features too, e.g.:
# <ul>
# <li>processing speech;</li>
# <li>processing genomic data;</li>
# <li>timeseries prediction;</li>
# <li>clickstream prediction</li>
# </ul>
# </li>
# </ul>
# <h2>Sequences of integers</h2>
# <ul>
# <li>Now, each word will be given a unique integer (an index, e.g. "a" might be word number 1, "the" might be
# word number 2, and so on). It is common to restrict to a certain vocabulary, e.g. in the code below, we
#         restrict to the most common 1000 words, so the indexes are from 1 to 1000. In real applications, this might be tens of thousands or even hundreds of thousands of the most common words. If someone uses words that are not within the vocabulary, then either these words are ignored or they are all treated as a special token UNK and hence are all assigned to the same unique integer (e.g. they are all word number 1000).
# </li>
# <li>A document will be a sequence of these integers.
# <ul>
# <li>We may add special symbols to the start and end of the document, also given an index.</li>
# </ul>
# </li>
# <li>If we have a batch of documents, we may prefer them all to be the same length (e.g. <i>maxlen</i> = 200 words).
# In which case, we will need to:
# <ul>
# <li>truncate documents that are longer than 200 words; and</li>
# <li>pad documents that have fewer than 200 words using a separate index, e.g. 0.</li>
# </ul>
# </li>
# </ul>
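The truncate/pad rules above can be sketched in plain Python (a hypothetical helper mirroring Keras's `pad_sequences` defaults, which pad and truncate at the front):

```python
def pad_or_truncate(seq, maxlen, pad_value=0):
    """Force a sequence of word indexes to exactly maxlen entries."""
    if len(seq) >= maxlen:
        # Keep the last maxlen indexes (pad_sequences truncates from the front by default)
        return seq[-maxlen:]
    # Pre-pad with the reserved index 0 so shorter documents line up
    return [pad_value] * (maxlen - len(seq)) + seq

print(pad_or_truncate([7, 42, 3], 5))          # [0, 0, 7, 42, 3]
print(pad_or_truncate([9, 8, 7, 6, 5, 4], 5))  # [8, 7, 6, 5, 4]
```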
# <h2>IMDB Reviews, Again</h2>
# <ul>
# <li>Let's read in our small IMDB reviews dataset again and turn it into sequences of integers.</li>
# </ul>
df = pd.read_csv("../datasets/dataset_5000_reviews.csv")
# +
# Dataset size
m = len(df)
# We'll keep only the 1000 most common words in the reviews.
vocab_size = 1000
# We'll truncate/pad so that each review has 200 words
maxlen = 200
# -
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(df["review"])
sequences = tokenizer.texts_to_sequences(df["review"])
padded_sequences = pad_sequences(sequences, maxlen=maxlen)
# Let's look at the first review
padded_sequences[0]
# Let's look at how the indexes relate to words
tokenizer.word_index
# +
# Train/test split
split_point = int(m * 0.8)
dev_X = padded_sequences[:split_point]
test_X = padded_sequences[split_point:]
# Target values, encoded and converted to a 1D numpy array
label_encoder = LabelEncoder()
label_encoder.fit(df["sentiment"])
dev_y = label_encoder.transform(df["sentiment"][:split_point])
test_y = label_encoder.transform(df["sentiment"][split_point:])
# -
# <h1>One-Hot Encoding</h1>
# <ul>
# <li>We probably should not use the indexes directly. Why not?</li>
# <li>So we could one-hot encode each word.</li>
#
# </ul>
# <h2>IMDB</h2>
# <ul>
# <li>In our IMDB example, each review will now be represented by a list (of length 200) of binary-valued
# vectors (where
# the dimension of each vector is 1000. Why?)
# </li>
# <li>Converting from integer indexes to binary vectors can be done in many ways. We will do it using a layer
# in our network using some code given earlier.
# </li>
# <li>Then we will flatten the input into a single vector (<code>maxlen * vocab_size</code>) and then use
# a few dense layers.
# </li>
# </ul>
# +
inputs = Input(shape=(maxlen,))
x = OneHot(input_dim=vocab_size, input_length=maxlen)(inputs)
x = Flatten()(x)
x = Dense(32, activation="relu")(x)
outputs = Dense(1, activation="sigmoid")(x)
one_hot_model = Model(inputs, outputs)
one_hot_model.compile(optimizer=SGD(lr=0.001), loss="binary_crossentropy", metrics=["acc"])
# -
one_hot_model.summary()
one_hot_history = one_hot_model.fit(dev_X, dev_y, epochs=10, batch_size=32, validation_split=0.25,
callbacks=[EarlyStopping(monitor="val_loss", patience=2)], verbose=0)
pd.DataFrame(one_hot_history.history).plot()
# <ul>
# <li>Not great results but not surprising:
# <ul>
# <li>Small dataset</li>
# <li>One-hot encoding is a poor choice.</li>
# </ul>
# </li>
# </ul>
# +
# An illustration of why one-hot encoding is not great.
def cosine_similarity(x, xprime):
    # Assumes x and xprime are already L2-normalized,
    # so their dot product equals their cosine similarity
    return x.dot(xprime.T)
# Word indexes
print("like: ", tokenizer.word_index["like"] )
print("love: ", tokenizer.word_index["love"] )
print("hate: ", tokenizer.word_index["hate"] )
# One hot encodings
one_hot_like = np.zeros(vocab_size)
one_hot_like[ tokenizer.word_index["like"] ] = 1
one_hot_love = np.zeros(vocab_size)
one_hot_love[ tokenizer.word_index["love"] ] = 1
one_hot_hate = np.zeros(vocab_size)
one_hot_hate[ tokenizer.word_index["hate"] ] = 1
# Similarities
print("like and love: ", one_hot_like.dot(one_hot_love) )
print("like and hate: ", one_hot_like.dot(one_hot_hate) )
# -
# <h1>Word Embeddings</h1>
# <ul>
# <li>One-hot encoding uses large, sparse vectors.</li>
# <li><b>Word embeddings</b> are small, non-sparse vectors, e.g. the dimension might be 100 or 200.</li>
# <li>To illustrate the ideas, we will use vectors of size 2 (so we can draw 2D diagrams).</li>
# <li>Perhaps we will have the following word embeddings:
# <img src="images/embeddings.png" style="float: right; margin-left: 5em" />
# <ul>
# <li>Dog: $\langle 0.4, 0.3\rangle$</li>
# <li>Hound: $\langle 0.38, 0.32\rangle$</li>
# <li>Wolf: $\langle 0.4, 0.8\rangle$</li>
# <li>Cat: $\langle 0.75, 0.2\rangle$</li>
# <li>Tiger: $\langle 0.75, 0.7\rangle$</li>
# </ul>
# </li>
# <li>The word embeddings we choose should reflect semantic relationships between the words:
# <ul>
# <li>Words with similar meanings should be close together (as with Dog and Hound) and in general
# the distance between embeddings should reflect how closely related the meanings are.
# </li>
# <li>Geometric transformations might encode semantic relationships, e.g.:
# <ul>
# <li>Adding $\langle 0, 0.5\rangle$ to the word embedding for Dog gives us the word embedding
# for Wolf; adding the same vector to the embedding for Cat gives the embedding for Tiger;
# $\langle 0, 0.5\rangle$ is the "from pet to wild animal" transformation.
# </li>
# <li>Similarly $\langle 0.35, -0.1\rangle$ is the "from canine to feline" transformation. Why?
# </li>
# </ul>
# </li>
# </ul>
# </li>
# </ul>
# <ul>
# <li>There is a Google visualization here: <a href="https://pair.withgoogle.com/explorables/fill-in-the-blank/">https://pair.withgoogle.com/explorables/fill-in-the-blank/</a></li>
# </ul>
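# The geometric relationships above can be checked with a few lines of NumPy, using the illustrative 2D embeddings from the list:

```python
import numpy as np

dog, wolf = np.array([0.4, 0.3]), np.array([0.4, 0.8])
cat, tiger = np.array([0.75, 0.2]), np.array([0.75, 0.7])

pet_to_wild = wolf - dog        # the <0, 0.5> "from pet to wild animal" vector
print(cat + pet_to_wild)        # lands on Tiger's embedding

canine_to_feline = cat - dog    # the <0.35, -0.1> "from canine to feline" vector
print(dog + canine_to_feline)   # lands on Cat's embedding
```
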
# <h2>Learning word embeddings</h2>
# <ul>
# <li>We learn the word embeddings from the dataset of documents.
# </li>
# <li>Conceptually,
# <ul>
# <li>The values in the vectors are initialized randomly;</li>
# <li>Then they are adjusted during learning.</li>
# </ul>
# </li>
# <li>But Keras does it using a network layer:
# <ul>
# <li>During learning, the weights of the layer are adjusted.</li>
# <li>The activations of the units in the layer are the word embeddings.</li>
# </ul>
# </li>
# </ul>
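# Conceptually, the embedding layer is just a trainable lookup table: a (vocab_size × dimension) weight matrix indexed by word number. A minimal NumPy sketch, with random weights standing in for learned ones and a tiny dimension for display:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embedding_dim = 1000, 4
weights = rng.normal(size=(vocab_size, embedding_dim))  # randomly initialized, adjusted during learning

sequence = np.array([12, 7, 12, 3])  # a document as word indexes
embedded = weights[sequence]         # look up one vector per word
print(embedded.shape)                # (4, 4): one embedding per word in the sequence
```

# Note that the two occurrences of word 12 map to the same vector: every occurrence of a word shares one embedding.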
# <h2>IMDB</h2>
# +
# Throughout the code, we will use 100-dimensional word embeddings (including the pretrained GloVe embeddings later)
embedding_dimension = 100
# +
inputs = Input(shape=(maxlen,))
embedding = Embedding(input_dim=vocab_size, input_length=maxlen, output_dim=embedding_dimension)(inputs)
x = Flatten()(embedding)
x = Dense(32, activation="relu")(x)
outputs = Dense(1, activation="sigmoid")(x)
embedding_model = Model(inputs, outputs)
embedding_model.compile(optimizer=SGD(lr=0.001), loss="binary_crossentropy", metrics=["acc"])
# -
embedding_model.summary()
embedding_history = embedding_model.fit(dev_X, dev_y, epochs=10, batch_size=32, validation_split=0.25,
callbacks=[EarlyStopping(monitor="val_loss", patience=2)], verbose=0)
pd.DataFrame(embedding_history.history).plot()
# <ul>
# <li>Possibly worse in this case! Perhaps because we have so little training data.</li>
# </ul>
# <h2>Pretrained Word Embeddings</h2>
# <ul>
# <li>Above, we got our neural network to learn the word embeddings.
# <ul>
# <li>Advantage: they are based on our IMDB data and therefore tailored to
# helping us to predict the sentiment of movie reviews.
# </li>
# <li>Disadvantage: the IMDB dataset (and especially the subset that we are using) is probably too
# small to learn really powerful
# word embeddings.
# </li>
# </ul>
# </li>
# <li>To some extent, word embeddings are fairly generic, so it can make sense to reuse
# pretrained embeddings from very large datasets, as we did with image data.
# <ul>
# <li><i>word2vec</i> (<a href="https://code.google.com/archive/p/word2vec/">https://code.google.com/archive/p/word2vec/</a>):
# This is Google's famous algorithm for learning word embeddings. The URL contains
# code and also pretrained embeddings learned from news articles.
# </li>
# <li><i>GloVe</i> (Global Vectors for Word Representation,
# <a href="https://nlp.stanford.edu/projects/glove/">https://nlp.stanford.edu/projects/glove/</a>):
# This is a Stanford University algorithm. The URL has code and pretrained
# embeddings learned from Wikipedia.
# </li>
# </ul>
# </li>
# <li>Although we'll use GloVe, let's briefly explain how Google learns its <i>word2vec</i> word embeddings:
# <ul>
# <li>It takes a large body of text (e.g. Wikipedia) and builds a model (a two-layer neural network
# classifier) that predicts words.
# </li>
# <li>E.g. in what is known as <i>CBOW (continuous bag-of-words)</i>, it predicts the current word from
# a window of surrounding words.
# </li>
# <li>Or e.g. in what is known as <i>continuous skip-gram</i>, it predicts the surrounding words from
# the current word.
# </li>
# </ul>
# </li>
# </ul>
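# A sketch of how continuous skip-gram training pairs could be generated (toy sentence, window of 1; an illustration of the idea, not Google's actual implementation):

```python
def skipgram_pairs(tokens, window=2):
    # For each center word, emit (center, context) pairs within the window
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat", "down"], window=1))
```

# The model is then trained to predict the context word from the center word; the weights it learns along the way are the word embeddings.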
# <h2>IMDB</h2>
# <ul>
# <li>To run the code that follows, you need to download and unzip the file called
# <code>glove.6B.zip</code> (>800MB) from the URL above; save space by deleting all files except
# <code>glove.6B.100d.txt</code> (it's still >300MB)
# </li>
# <li>The code comes from Chollet's book — details are not important in CS4619</li>
# </ul>
# +
# Parse the GloVe word embeddings file: produces a dictionary from words to their vectors
path = "../datasets/glove.6B.100d.txt" # Edit this to point to your copy of the file
embeddings_index = {}
with open(path, encoding="utf-8") as f:
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype="float32")
        embeddings_index[word] = coefs
# +
# Create a matrix that associates the words that we obtained from the IMDB reviews earlier
# (in the word_index) with their GloVe word embeddings
embedding_matrix = np.zeros((vocab_size, embedding_dimension))
for word, i in tokenizer.word_index.items():
if i < vocab_size:
word_embedding = embeddings_index.get(word)
if word_embedding is not None:
embedding_matrix[i] = word_embedding
# +
# Let's take a look at some of the embeddings
glove_like = embedding_matrix[ tokenizer.word_index["like"] ]
glove_love = embedding_matrix[ tokenizer.word_index["love"] ]
glove_hate = embedding_matrix[ tokenizer.word_index["hate"] ]
print("like: ", glove_like)
# Similarities
print("like and love: ", glove_like.dot(glove_love) )
print("like and hate: ", glove_like.dot(glove_hate) )
# +
# A similar neural network to earlier but this time the embedding layer's weights come from GloVe and are
# not adjusted during training
inputs = Input(shape=(maxlen,))
x = Embedding(input_dim=vocab_size, input_length=maxlen, output_dim=embedding_dimension,
embeddings_initializer=Constant(embedding_matrix), trainable=False)(inputs)
x = Flatten()(x)
x = Dense(32, activation="relu")(x)
outputs = Dense(1, activation="sigmoid")(x)
pretrained_embedding_model = Model(inputs, outputs)
pretrained_embedding_model.compile(optimizer=SGD(lr=0.001), loss="binary_crossentropy", metrics=["acc"])
# -
pretrained_history = pretrained_embedding_model.fit(dev_X, dev_y, epochs=10, batch_size=32, validation_split=0.25,
callbacks=[EarlyStopping(monitor="val_loss", patience=2)], verbose=0)
pd.DataFrame(pretrained_history.history).plot()
| ai2/lectures/AI2_02_word_embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mm4sight
# language: python
# name: mm4sight
# ---
# +
subglobal = ['SYR','COL','AFG','COG','SSD','SOM','VEN','ETH','SDN','NGA',
'IRQ','YEM','UKR','MMR','CAF','CMR','ERI','BDI','GEO','MLI',
'TCD','LBY','NER','BFA','COD']
import pandas as pd
from time import time
import os
import json
import pickle
import numpy as np
import statsmodels.api as sm
import seaborn as sns
import matplotlib.pyplot as plt
# +
start_time = time()
with open("../configuration.json", 'rt') as infile:
config = json.load(infile)
sources = [os.path.join("..", config['paths']['output'],
d['name'],
'data.csv') for d in config['sources'] if d['name']]
# Generate a data frame with all indicators
df = pd.concat((pd.read_csv(f) for f in sources), sort=False, ignore_index=True)
# Summary stats
print("Sources : {}".format(len(sources)))
print("Shape : {} (rows) {} (columns)".format(*df.shape))
print("Geographies : {}".format(len(df['Country Name'].unique())))
print("Indicators : {}".format(len(df['Indicator Code'].unique())))
print("Temporal coverage : {} -> {}".format(df.year.min(), df.year.max()))
print("Null values : {}".format(sum(df['value'].isnull())))
print("\nLoaded data in {:3.2f} sec.".format(time() - start_time))
# Now arrange data in wide form
data = pd.pivot_table(df, index=['Country Code', 'year'],
columns='Indicator Code', values='value')
# Consider country/year as features (and not an index)
data.reset_index(inplace=True)
print("Long form of size : {} (rows) {} (columns)".format(*data.shape))
# +
# Only look at features used in scenarios
groupings = json.load(open("../groupings.json", 'rt'))
featureset = [i['code'] for c in groupings['clusters'] for i in c['indicators']]
# Dimensions of interest
CLUSTERS = groupings['clusters']
COUNTRIES = ['subglobal', 'AFG', 'MMR']
LAGS = [0, 1]
LABELS = ['-10%', '-5%', '0%', '+5%', '+10%']
# -
INDICATORS = [i['code'] for C in CLUSTERS for i in C['indicators']]
# +
afg = 'AFG'
mmr = 'MMR'
f, axarr = plt.subplots(len(INDICATORS), figsize=(10, 50), sharex=False)
for i, idx in enumerate(INDICATORS):
c1 = data['Country Code'] == afg
c2 = data['Country Code'] == mmr
y2 = data.loc[c1, idx]
y3 = data.loc[c2, idx]
    # dropna on a copy rather than in place, to avoid modifying a slice of `data`
    y = data[idx].dropna()
sns.distplot(y, ax=axarr[i], label='all')
sns.distplot(y2, ax=axarr[i], label='AFG')
sns.distplot(y3, ax=axarr[i], label='MMR')
axarr[i].set_title("{} (n={})".format(idx, len(y)), loc='right')
# -
CLUSTERS
| server/exploratory/Indicator ranges.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Following a Kaggle competition (kaggle_Bike Sharing Demand)
#
# Reference:
#
# - GitHub: [corazzon/KaggleStruggle](https://github.com/corazzon/KaggleStruggle/blob/master/bike-sharing-demand/bike-sharing-demand-rf.ipynb)
#
# In this post we take Kaggle's Bike Sharing Demand data, where rental demand varies with weather, holidays, weekdays, season, and so on, and restructure and visualize it.
# The final goal of the project is to train a model on the training data and predict bike-share demand: that is, to predict the submission file's 'count' (the target, i.e. the dependent variable) from the test data's features (the independent variables).
#
# - Of the two supervised-learning settings, classification and regression, this is a regression problem => because we are predicting a rental count
# - Kaggle's Titanic survival task is a classification problem => because it has to decide survival or not
#
# - Brief domain knowledge
#
# Using data from Capital BikeShare, a bike-rental startup based in Washington, D.C., the task is to predict how many people rent bikes in a given time slot. We will use data analysis, visualization, and machine-learning algorithms to predict hourly rentals. Many factors drive rentals: daytime hours should see more rentals than the early morning; weather matters, so rainy days should see far fewer; leisure hours should see more rentals than working hours. Beyond programming, AI, and machine-learning knowledge, let's also bring domain knowledge of the bike-rental market, our own experience riding bikes, and common sense to make effective predictions.
#
#
# #### Libraries used for analysis and prediction
#
# - Jupyter Notebook: browser-based editing environment
# - Python: a clear, easy, general-purpose programming language
# - Pandas: data analysis and manipulation library for Python
# - Numpy: fast and easy scientific computing library for Python
# - Seaborn: visualization library for Python
# - Scikit-Learn: machine-learning library for Python
# - XGBoost: library implementing the Gradient Boosting algorithm used for training
# #### Columns
#
# - datetime: year-month-day hour timestamp
# - season: 1 = spring, 2 = summer, 3 = fall, 4 = winter
# - holiday: public holiday or weekend
# - workingday: weekdays, excluding holidays and weekends
# - weather
#     - 1: very clear (Clear, Few clouds, Partly cloudy, Partly cloudy)
#     - 2: fair (Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist)
#     - 3: bad (Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds)
#     - 4: very bad (Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog)
# - temp: temperature
# - atemp: "feels-like" temperature
# - humidity: relative humidity
# - windspeed: wind speed
# - casual: rentals by unregistered users
# - registered: rentals by registered users
# - count: total rentals
# +
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
# To draw plots inside the notebook
# %matplotlib inline
# Work around minus signs rendering as broken glyphs in plots
mpl.rcParams['axes.unicode_minus'] = False
import warnings
warnings.filterwarnings('ignore')
# +
# Korean font setup (needed when plot labels contain Hangul)
import platform
from matplotlib import font_manager, rc
plt.rcParams['axes.unicode_minus'] = False
if platform.system() == 'Darwin':
rc('font', family='AppleGothic')
print('Mac version')
elif platform.system() == 'Windows':
path = "c:/Windows/Fonts/malgun.ttf"
font_name = font_manager.FontProperties(fname=path).get_name()
rc('font', family=font_name)
print('Windows version')
elif platform.system() == 'Linux':
path = "/usr/share/fonts/NanumFont/NanumGothicBold.ttf"
font_name = font_manager.FontProperties(fname=path).get_name()
plt.rc('font', family=font_name)
print('Linux version')
else:
print('Unknown system... sorry~~~~')
# -
# #### Step 1. Data Load
# +
# Read train.csv with pandas read_csv
train = pd.read_csv('./kaggle_Bike Sharing Demand/train.csv',parse_dates=["datetime"]) # parse the datetime column via parse_dates
test = pd.read_csv('./kaggle_Bike Sharing Demand/test.csv',parse_dates=["datetime"])
# -
print(train.shape) # printed as (rows, columns)
train.head() # show the first 5 rows of train
train.info()
# #### Step 2. Exploratory Data Analysis (train data only)
# The columns are datetime, float, and int types. Let's look at them directly with train.head().
train.head(10)
# - With train.head(5) the windspeed column showed only 0s; extending to 10 or 20 rows suggests the 0 values are missing measurements filled in with 0.
# - How these 0 values are feature-engineered is likely to affect the score.
train.temp.describe()
# - The mean temperature is about 20.23 degrees.
# +
# Check the train data for missing values
train.isnull().sum()
# -
# - There are no null values in train.
# +
# A visualization tool for inspecting missing values
import missingno as msno
msno.matrix(train, figsize=(12,5))
# -
# Let's break the datetime column into finer-grained parts:
# year, month, day, hour, minute, and second.
train["year"] = train["datetime"].dt.year
train["month"] = train["datetime"].dt.month
train["day"] = train["datetime"].dt.day
train["hour"] = train["datetime"].dt.hour
train["minute"] = train["datetime"].dt.minute
train["second"] = train["datetime"].dt.second
train.shape
# - The number of columns has grown from 12 to 18. Let's confirm with train.head() too.
train.head()
# Let's visualize the year/month/day/hour/minute/second features we just split out of datetime with barplots.
# +
figure, ((ax1,ax2,ax3), (ax4,ax5,ax6)) = plt.subplots(nrows=2, ncols=3)
figure.set_size_inches(18,8)
sns.barplot(data=train, x="year", y="count", ax=ax1)
sns.barplot(data=train, x="month", y="count", ax=ax2)
sns.barplot(data=train, x="day", y="count", ax=ax3)
sns.barplot(data=train, x="hour", y="count", ax=ax4)
sns.barplot(data=train, x="minute", y="count", ax=ax5)
sns.barplot(data=train, x="second", y="count", ax=ax6)
ax1.set(ylabel='Count',title="Rentals by year")
ax2.set(xlabel='month',title="Rentals by month")
ax3.set(xlabel='day', title="Rentals by day")
ax4.set(xlabel='hour', title="Rentals by hour")
# -
# - Rentals are higher in 2012 than in 2011.
# - By month, June has the most rentals, July-October are also high, and January has the fewest.
# - The day feature only covers days 1-19; the remaining days are in test.csv, so this column must not be used as a feature.
# - By hour, rentals peak around commute times, but this should be split out by weekend vs weekday.
# - Minute and second are all 0, so they carry no information.
# Let's visualize rentals by season, hour, and working day with boxplots.
# +
fig, axes = plt.subplots(nrows=2,ncols=2)
fig.set_size_inches(12, 10)
sns.boxplot(data=train,y="count",orient="v",ax=axes[0][0])
sns.boxplot(data=train,y="count",x="season",orient="v",ax=axes[0][1])
sns.boxplot(data=train,y="count",x="hour",orient="v",ax=axes[1][0])
sns.boxplot(data=train,y="count",x="workingday",orient="v",ax=axes[1][1])
axes[0][0].set(ylabel='Count',title="Rental count")
axes[0][1].set(xlabel='Season', ylabel='Count',title="Rentals by season")
axes[1][0].set(xlabel='Hour Of The Day', ylabel='Count',title="Rentals by hour")
axes[1][1].set(xlabel='Working Day', ylabel='Count',title="Rentals by working day")
# -
# - Rental counts are concentrated in a particular range.
# - Seasons are spring(1), summer(2), fall(3), winter(4); rentals rank fall > summer > winter > spring.
# - Rentals by hour look similar to the barplot version above.
# - Rentals by working day are broadly similar, though non-working days show slightly more rentals.
# Let's also add dayofweek to the train data frame and take a look.
train["dayofweek"] = train["datetime"].dt.dayofweek
train.shape
# - The number of columns has grown from 18 to 19.
train["dayofweek"].value_counts()
# - 0-6 correspond to Monday through Sunday; there is no big difference between days of the week.
# Let's use pointplots to look at hourly rentals split by workingday, dayofweek, weather, and season.
# +
# Pointplots of hourly rentals split by workingday, dayofweek, weather, and season
fig,(ax1,ax2,ax3,ax4,ax5)= plt.subplots(nrows=5)
fig.set_size_inches(18,25)
sns.pointplot(data=train, x="hour", y="count", ax=ax1)
sns.pointplot(data=train, x="hour", y="count", hue="workingday", ax=ax2)
sns.pointplot(data=train, x="hour", y="count", hue="dayofweek", ax=ax3)
sns.pointplot(data=train, x="hour", y="count", hue="weather", ax=ax4)
sns.pointplot(data=train, x="hour", y="count", hue="season", ax=ax5)
# -
# - By hour, rentals peak at commute times.
# - On non-working days, rentals are high around midday, roughly 11:00-17:00.
# - In dayofweek, 5 and 6 are Saturday and Sunday and the rest are weekdays, showing a pattern similar to workingday.
# - By weather, rentals are high in good weather and drop as the weather worsens.
# - By season, as in the boxplots above (spring(1), summer(2), fall(3), winter(4)), rentals rank fall > summer > winter > spring;
# within each season, commute hours again show the most rentals.
# Let's visualize the relationships among temp, atemp, casual, registered, humidity, windspeed, and count with a heatmap.
# +
corrMatt = train[["temp", "atemp", "casual", "registered", "humidity", "windspeed", "count"]]
corrMatt = corrMatt.corr()
print(corrMatt)
mask = np.array(corrMatt)
mask[np.tril_indices_from(mask)] = False
# -
fig, ax = plt.subplots()
fig.set_size_inches(20,10)
sns.heatmap(corrMatt, mask=mask,vmax=.8, square=True,annot=True)
# - Temperature, humidity, and wind speed are barely correlated with one another.
# - The variable most correlated with count is registered, but the test data does not include it, so it is hard to use as a feature.
# - atemp and temp have a correlation of 0.98, but as temperature and feels-like temperature they may be unsuitable as separate features (they look like nearly identical data).
# Let's draw scatter plots of count against temperature (temp), wind speed (windspeed), and humidity (humidity).
fig,(ax1,ax2,ax3) = plt.subplots(ncols=3)
fig.set_size_inches(12, 5)
sns.regplot(x="temp", y="count", data=train,ax=ax1)
sns.regplot(x="windspeed", y="count", data=train,ax=ax2)
sns.regplot(x="humidity", y="count", data=train,ax=ax3)
# - For windspeed, a chunk of the data piles up at 0 (those rows need feature engineering);
# presumably values that were not measured were recorded as 0.
# - For humidity, some data piles up at 0 and 100.
# Let's aggregate the data by month.
# +
def concatenate_year_month(datetime):
    return "{0}-{1}".format(datetime.year, datetime.month) # concatenate year and month
train["year_month"] = train["datetime"].apply(concatenate_year_month)
print(train.shape)
train[["datetime", "year_month"]].head()
# +
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(18, 4)
sns.barplot(data=train, x="year", y="count", ax=ax1)
sns.barplot(data=train, x="month", y="count", ax=ax2)
fig, ax3 = plt.subplots(nrows=1, ncols=1)
fig.set_size_inches(18, 4)
sns.barplot(data=train, x="year_month", y="count", ax=ax3)
# -
# - 2012 has more rentals than 2011.
# - Summer has more rentals than winter.
# - Joining the 2011 and 2012 monthly data shows an overall upward trend.
# Let's remove outlier data: the values piled up at the extremes, such as 0 and 100.
# +
# trainWithoutOutliers
trainWithoutOutliers = train[np.abs(train["count"] - train["count"].mean()) <= (3*train["count"].std())]
print(train.shape)
print(trainWithoutOutliers.shape)
# -
# - The row count dropped from 10886 to 10739, removing roughly 150 rows.
# Let's compare before and after removal with distplot and probplot.
# +
# Examine the distribution of the count values
figure, axes = plt.subplots(ncols=2, nrows=2)
figure.set_size_inches(12, 10)
sns.distplot(train["count"], ax=axes[0][0])
stats.probplot(train["count"], dist='norm', fit=True, plot=axes[0][1])
sns.distplot(np.log(trainWithoutOutliers["count"]), ax=axes[1][0])
stats.probplot(np.log1p(trainWithoutOutliers["count"]), dist='norm', fit=True, plot=axes[1][1])
# -
# - train["count"] is heavily concentrated near 0.
#
# - Even after removing the data piled up at 0 and 100, trainWithoutOutliers["count"] is still right-skewed. Most machine-learning methods expect the target to be roughly normal, so a normal distribution is desirable. As an alternative, after removing the outliers we take the log of the "count" variable; it still does not follow a normal distribution, but it represents the data rather better than the earlier graph.
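# The effect of the log transform can be checked on synthetic data: applying np.log1p to a right-skewed (lognormal) sample brings its skewness close to 0. The distribution parameters below are arbitrary stand-ins for the real counts.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
counts = rng.lognormal(mean=4, sigma=1, size=10000)  # right-skewed, like "count"

print(skew(counts))            # strongly positive: a long right tail
print(skew(np.log1p(counts)))  # much closer to 0 after log1p
```
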
# #### Step3. Feature Engineering
# +
# Read train.csv with pandas read_csv
train = pd.read_csv('./kaggle_Bike Sharing Demand/train.csv',parse_dates=["datetime"]) # parse the datetime column via parse_dates
test = pd.read_csv('./kaggle_Bike Sharing Demand/test.csv',parse_dates=["datetime"])
# -
train.shape
# - 12 columns confirmed
test.shape
# - 9 columns confirmed
# Let's again split the datetime columns of train and test (loaded with parse_dates) into parts.
train["year"] = train["datetime"].dt.year # year
train["month"] = train["datetime"].dt.month # month
train["day"] = train["datetime"].dt.day # day
train["hour"] = train["datetime"].dt.hour # hour
train["minute"] = train["datetime"].dt.minute # minute
train["second"] = train["datetime"].dt.second # second
train["dayofweek"] = train["datetime"].dt.dayofweek # day of week
train.shape
# - 7 columns added (12 → 19).
test["year"] = test["datetime"].dt.year
test["month"] = test["datetime"].dt.month
test["day"] = test["datetime"].dt.day
test["hour"] = test["datetime"].dt.hour
test["minute"] = test["datetime"].dt.minute
test["second"] = test["datetime"].dt.second
test["dayofweek"] = test["datetime"].dt.dayofweek
test.shape
# - 7 columns added (9 → 16).
# Let's visualize the wind speed.
# +
# windspeed has 0 as its most frequent value => mis-recorded data that needs fixing
fig, axes = plt.subplots(nrows=2)
fig.set_size_inches(18,10)
plt.sca(axes[0])
plt.xticks(rotation=30, ha='right')
axes[0].set(ylabel='Count',title="train windspeed")
sns.countplot(data=train, x="windspeed", ax=axes[0])
plt.sca(axes[1])
plt.xticks(rotation=30, ha='right')
axes[1].set(ylabel='Count',title="test windspeed")
sns.countplot(data=test, x="windspeed", ax=axes[1])
# +
# Put a specific value into the 0 wind speeds.
# We could fill them all with the mean, but that seems unlikely to improve prediction accuracy.
# train.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean()
# test.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean()
# -
# Split the data into wind speed equal to 0 and not equal to 0.
trainWind0 = train.loc[train['windspeed'] == 0]
trainWindNot0 = train.loc[train['windspeed'] != 0]
print(trainWind0.shape)
print(trainWindNot0.shape)
# - 1313 rows have wind speed 0.
# Treat the 0 wind speeds as missing values and fill them in with machine-learning predictions.
# +
# Use machine learning to predict the wind speed and fill it in.
from sklearn.ensemble import RandomForestClassifier
def predict_windspeed(data):
    # Split into wind speed == 0 and != 0
    dataWind0 = data.loc[data['windspeed'] == 0]
    dataWindNot0 = data.loc[data['windspeed'] != 0]
    # Features used to predict the wind speed
    wCol = ["season", "weather", "humidity", "month", "temp", "year", "atemp"]
    # Cast the nonzero wind speeds to strings (class labels)
    dataWindNot0["windspeed"] = dataWindNot0["windspeed"].astype("str")
    # Use a random forest classifier
    rfModel_wind = RandomForestClassifier()
    # Learn the wind speed from the features in wCol
    rfModel_wind.fit(dataWindNot0[wCol], dataWindNot0["windspeed"])
    # Predict the wind speed for rows where it was recorded as 0
    wind0Values = rfModel_wind.predict(X = dataWind0[wCol])
    # To compare after predicting all values,
    # create new data frames to hold the predictions
    predictWind0 = dataWind0
    predictWindNot0 = dataWindNot0
    # Put the predicted values where the wind speed was recorded as 0
    predictWind0["windspeed"] = wind0Values
    # Append the frame of predicted values to the frame with nonzero wind speeds
    data = predictWindNot0.append(predictWind0)
    # Cast the wind speed back to float
    data["windspeed"] = data["windspeed"].astype("float")
    data.reset_index(inplace=True)
    data.drop('index', inplace=True, axis=1)
    return data
# +
# Fix the 0 values.
train = predict_windspeed(train)
# test = predict_windspeed(test)
# Visualize the data with the 0 wind speeds adjusted
fig, ax1 = plt.subplots()
fig.set_size_inches(18,6)
plt.sca(ax1)
plt.xticks(rotation=30, ha='right') # rotate the x labels 30 degrees so they do not overlap
ax1.set(ylabel='Count',title="train windspeed")
sns.countplot(data=train, x="windspeed", ax=ax1)
# -
# - The 0 values have been replaced with random-forest predictions; there is no longer any 0 wind-speed data.
# #### Step4. Feature Selection
#
# - Separate the signal from the noise.
# - More features do not automatically mean better performance (overfitting).
# - Add and adjust features one at a time, and drop those that do not help performance.
# For continuous features such as temperature, humidity, and wind speed, the magnitude of the number is meaningful (higher/lower, stronger/weaker), but categorical features such as day of week and season are just 0,1,2,3 labels, so they should be one-hot encoded. Here we instead mark the categorical features with the category dtype.
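# As an aside, one-hot encoding would expand the single season column into four binary columns; a small pandas sketch (get_dummies standing in for whatever encoder one might use):

```python
import pandas as pd

# A toy season column: 1=spring, 2=summer, 3=fall, 4=winter
demo = pd.DataFrame({"season": [1, 2, 3, 4, 2]})
print(pd.get_dummies(demo, columns=["season"]))  # four binary columns, season_1..season_4
```
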
# +
# Continuous features and categorical features
# Continuous features = ["temp","humidity","windspeed","atemp"]
# Change the type of the categorical features to category.
categorical_feature_names = ["season","holiday","workingday","weather",
"dayofweek","month","year","hour"]
for var in categorical_feature_names:
train[var] = train[var].astype("category")
test[var] = test[var].astype("category")
# +
feature_names = ["season", "weather", "temp", "atemp", "humidity", "windspeed",
"year", "hour", "dayofweek", "holiday", "workingday"]
feature_names
# +
X_train = train[feature_names]
print(X_train.shape)
X_train.head()
# +
X_test = test[feature_names]
print(X_test.shape)
X_test.head()
# +
label_name = "count"
y_train = train[label_name]
print(y_train.shape)
y_train.head()
# -
# ### Step5 . Score
# #### RMSLE
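# The function below implements Root Mean Squared Logarithmic Error:
#
# $$\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\log(p_i + 1) - \log(a_i + 1)\bigr)^2}$$
#
# where $p_i$ are the predicted values and $a_i$ the actual values; the $+1$ keeps the logarithm defined when a count is 0, and working in log space means the metric penalizes relative rather than absolute error.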
# +
from sklearn.metrics import make_scorer
def rmsle(predicted_values, actual_values):
    # Convert to numpy arrays
    predicted_values = np.array(predicted_values)
    actual_values = np.array(actual_values)
    # Add 1 to the predicted and actual values, then take the log
    log_predict = np.log(predicted_values + 1)
    log_actual = np.log(actual_values + 1)
    # Subtract the actual from the predicted values, then square
    difference = log_predict - log_actual
    # difference = (log_predict - log_actual) ** 2
    difference = np.square(difference)
    # Take the mean
    mean_difference = difference.mean()
    # And finally the square root
    score = np.sqrt(mean_difference)
    return score
rmsle_scorer = make_scorer(rmsle)
rmsle_scorer
# -
# ### Cross Validation
#
# To measure generalization performance, the data is split repeatedly and multiple models are trained.
#
# KFold cross-validation
# - Split the data into similarly sized subsets called folds (n_splits) and measure the accuracy on each fold.
# - Use the first fold as the test set and train on the remaining folds.
# - Evaluate the model trained on those remaining folds against the first fold.
# - Then the second fold becomes the test set, and the model trained on the other folds is scored on it.
# - Repeat this process up to the last fold.
# - The final accuracy is the average of the accuracies measured over the N train/test splits.
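# The folding procedure described above, shown on 10 toy samples (indices only):

```python
import numpy as np
from sklearn.model_selection import KFold

X_demo = np.arange(10).reshape(-1, 1)
# Each of the 5 folds serves as the test set exactly once
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=5).split(X_demo)):
    print(fold, train_idx, test_idx)
```
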
# +
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
# -
# ### RandomForest
# Raising n_estimators increases runtime, so we start with 100.
# +
from sklearn.ensemble import RandomForestRegressor
max_depth_list = []
model = RandomForestRegressor(n_estimators=100,
n_jobs=-1,
random_state=0)
model
# -
# %time score = cross_val_score(model, X_train, y_train, cv=k_fold, scoring=rmsle_scorer)
score = score.mean()
# The closer to 0 the better
print("Score= {0:.5f}".format(score))
# The score is 0.33057; closer to 0 is better.
# ### Train
# Fit the model (think of "fitting" as with fitting clothes): give it the features and labels and it learns on its own.
model.fit(X_train, y_train)
# +
# Predict
predictions = model.predict(X_test)
print(predictions.shape)
predictions[0:10]
# -
# Visualize the predicted data.
fig,(ax1,ax2)= plt.subplots(ncols=2)
fig.set_size_inches(12,5)
sns.distplot(y_train,ax=ax1,bins=50)
ax1.set(title="train")
sns.distplot(predictions,ax=ax2,bins=50)
ax2.set(title="test")
# #### Step6. Submit
#
# Let's submit to Kaggle.
# +
submission = pd.read_csv("./kaggle_Bike Sharing Demand/sampleSubmission.csv")
submission
submission["count"] = predictions
print(submission.shape)
submission.head()
# -
submission.to_csv("./kaggle_Bike Sharing Demand/Score_{0:.5f}_submission.csv".format(score), index=False)
# <center>
# <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdpSRbf%2Fbtq5uuptG93%2FnmfSQnT5ywwpvcnHPD8GL0%2Fimg.png"><br>
# </center>
#
# The score came out as 0.41848. Next I plan to try XGBoost, which is said to be superior. I'll write an overall review after using XGBoost.
| kaggle_Bike Sharing Demand.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import elevation.prediction_pipeline as pp
import elevation
import random
from sklearn.neighbors import NearestNeighbors
import numpy as np
import pandas
import azimuth
import joblib
import logging
from joblib import Memory
from elevation.model_comparison import *
import copy
import scipy.stats as ss
from sklearn.grid_search import ParameterGrid
import sklearn.linear_model
import scipy as sp
import scipy.stats
import elevation.models
import elevation.features
#import GPy
import socket
from elevation.stacker import *
import elevation.util as ut
from sklearn.metrics import auc, roc_curve
from elevation import settings
import sklearn.isotonic
from sklearn.cross_validation import StratifiedKFold
import sklearn.pipeline
import sklearn.preprocessing
import pandas as pd
from elevation.cmds.predict import Predict
from elevation import options
import os
import pickle
# %matplotlib inline
import matplotlib
from sklearn.metrics import roc_auc_score, roc_curve, precision_recall_curve,average_precision_score
# +
def filter_pam_out_of_muts(data, i):
tmp_muts = data['mut positions'].iloc[i]
# because Hsu-Zhang ignores alternate PAMs which we have encoded with '22'
pam_pos = 22
if pam_pos in tmp_muts:
tmp_muts.remove(pam_pos)
tmp_muts = np.array(tmp_muts)
num_m = len(tmp_muts)
return num_m, tmp_muts
def predict(model, data, learn_options, learn_options_override=None, verbose=False):
if learn_options_override is None:
learn_options_override = learn_options
predictions, model, learn_options, _tmpdata, feature_names, all_predictions_ind = predict_elevation(data=data, model=(model, learn_options), model_file=None, pam_audit=False, learn_options_override=learn_options_override,force_zero_intercept=False, naive_bayes_combine=True, verbose=verbose)
return predictions, all_predictions_ind
class Smote:
    """
    SMOTE (Synthetic Minority Over-sampling Technique)

    Parameters:
    -----------
    k: int
        the number of nearest neighbours to consider.
    sampling_rate: int
        the number of synthetic samples generated per original sample;
        note that sampling_rate < k.
    newindex: int
        write position for the next synthetic sample.
    """
def __init__(self, sampling_rate=5, k=5):
self.sampling_rate = sampling_rate
self.k = k
self.newindex = 0
#
def synthetic_samples(self, X, i, k_neighbors,y=None):
for j in range(self.sampling_rate):
#
neighbor = np.random.choice(k_neighbors)
#
diff = X[neighbor] - X[i]
#
self.synthetic_X[self.newindex] = X[i] + random.random() * diff
self.synthetic_y[self.newindex]=y[i]+random.random()*(y[neighbor]-y[i])
self.newindex += 1
def fit(self, X, y=None):
if y is not None:
negative_X = X[y == 0]
X = X[y != 0]
n_samples, n_features = X.shape
#
self.synthetic_X = np.zeros((n_samples * self.sampling_rate, n_features))
self.synthetic_y=np.zeros(n_samples*self.sampling_rate)
#
knn = NearestNeighbors(n_neighbors=self.k).fit(X)
for i in range(len(X)):
print(i)
k_neighbors = knn.kneighbors(X[i].reshape(1, -1),
return_distance=False)[0]
#
# sampling_rate
self.synthetic_samples(X, i, k_neighbors,y)
if y is not None:
return (np.concatenate((self.synthetic_X, X, negative_X), axis=0),
np.concatenate((self.synthetic_y, y[y!=0], y[y == 0]), axis=0))
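The interpolation step inside `synthetic_samples` can be seen in isolation on toy data (a hedged sketch; the array sizes and the k value here are arbitrary, not taken from the pipeline above):

```python
# Toy demo of the SMOTE interpolation step: a synthetic point is drawn on
# the segment between a sample and one of its k nearest neighbours.
import random
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(10, 2)                      # 10 points in 2-D

knn = NearestNeighbors(n_neighbors=3).fit(X)
i = 0
k_neighbors = knn.kneighbors(X[i].reshape(1, -1), return_distance=False)[0]
j = np.random.choice(k_neighbors)        # pick one neighbour at random
diff = X[j] - X[i]
synthetic = X[i] + random.random() * diff

# The synthetic point stays inside the bounding box spanned by X[i] and X[j].
assert synthetic.shape == (2,)
```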
def stacked_predictions(data, preds_base_model, models=['product', 'CFD', 'constant-power', 'linear-raw-stacker', 'linreg-stacker', 'RF-stacker', 'GP-stacker', 'raw GP'],
truth=None, guideseq_data=None, preds_guideseq=None, prob_calibration_model=None, learn_options=None, return_model=False, trained_model=None,
models_to_calibrate=None, return_residuals=False):#, dnase_train=None, dnase_test=None):
predictions = dict([(m, None) for m in models])
num_mismatches = np.array([len(t) for t in data["Annotation"].values])
# if ('use_mut_distances' in learn_options.keys() and learn_options['use_mut_distances']):
data = elevation.features.extract_mut_positions_stats(data)
if guideseq_data is not None:
y = guideseq_data['GUIDE-SEQ Reads'].values[:, None]
num_annot = np.array([len(t) for t in guideseq_data["Annotation"].values])
if 'logistic stacker' in models:
X = preds_guideseq.copy()
Xtest = preds_base_model.copy()
m = Stacker(y, X, warp_out=False)
m.maximize()
predictions['logistic stacker'] = m.predict(Xtest)
if 'CFD' in models:
# predicting
if 'cfd_table_file' not in learn_options.keys():
learn_options['cfd_table_file'] = settings.pj(settings.offtarget_data_dir, "STable 19 FractionActive_dlfc_lookup.xlsx")
cfd = elevation.models.CFDModel(cfd_table_file=learn_options['cfd_table_file'])
predictions['CFD'] = cfd.predict(data["Annotation"].values, learn_options["num_proc"])[:, None]
if 'product' in models:
predictions['product'] = np.nanprod(preds_base_model, axis=1)[:,None]
if 'constant-power' in models:
predictions['constant-power'] = np.power(0.5, num_mismatches)
if 'CCTOP' in models:
# predicting
term1 = np.zeros((data.shape[0], 1))
for i in range(len(term1)):
num_m, tmp_muts = filter_pam_out_of_muts(data, i)
term1[i] = np.sum(1.2**np.array(tmp_muts))
predictions['CCTOP'] = -term1.flatten()
if 'HsuZhang' in models:
# predicting
W = [0.0,0.0,0.014,0.0,0.0,0.395,0.317,0,0.389,0.079,0.445,0.508,0.613,0.851,0.732,0.828,0.615,0.804,0.685,0.583]
pred = np.zeros((data.shape[0], 1))
for i in range(len(pred)):
num_m, tmp_muts = filter_pam_out_of_muts(data, i)
if len(tmp_muts) == 0:
pred[i] = 1.0
else:
d = ut.get_pairwise_distance_mudra(tmp_muts)
term1 = np.prod(1. - np.array(W)[tmp_muts - 1])
if num_m > 1:
term2 = 1./(((19-d)/19)*4 + 1)
else:
term2 = 1
term3 = 1./(num_m)**2
pred[i] = term1*term2*term3
predictions['HsuZhang'] = pred.flatten()
if 'linear-raw-stacker' in models or 'GBRT-raw-stacker' in models:
if trained_model is None:
# put together the training data
X = preds_guideseq.copy()
X[np.isnan(X)] = 1.0
feature_names = ['pos%d' % (i+1) for i in range(X.shape[1])]
# adding product, num. annots and sum to log of itself
X = np.concatenate((np.log(X), np.prod(X, axis=1)[:, None], num_annot[:, None], np.sum(X, axis=1)[:, None]), axis=1)
feature_names.extend(['product', 'num. annotations', 'sum'])
# X = np.log(X)
# Only product
# X = np.prod(X, axis=1)[:, None]
# feature_names = ['product']
Xtest = preds_base_model.copy()
Xtest[np.isnan(Xtest)] = 1.0
Xtest = np.concatenate((np.log(Xtest), np.prod(Xtest, axis=1)[:, None], num_mismatches[:, None], np.sum(Xtest, axis=1)[:, None]), axis=1)
# Xtest = np.log(Xtest)
# Xtest = np.prod(Xtest, axis=1)[:, None]
if ('use_mut_distances' in learn_options.keys() and learn_options['use_mut_distances']):
guideseq_data = elevation.features.extract_mut_positions_stats(guideseq_data)
X_dist = guideseq_data[['mut mean abs distance', 'mut min abs distance', 'mut max abs distance', 'mut sum abs distance',
'mean consecutive mut distance', 'min consecutive mut distance', 'max consecutive mut distance',
'sum consecutive mut distance']].values
Xtest_dist = data[['mut mean abs distance', 'mut min abs distance', 'mut max abs distance', 'mut sum abs distance',
'mean consecutive mut distance', 'min consecutive mut distance', 'max consecutive mut distance',
'sum consecutive mut distance']].values
X = np.concatenate((X, X_dist), axis=1)
Xtest = np.concatenate((Xtest, Xtest_dist), axis=1)
if 'azimuth_score_in_stacker' in learn_options.keys() and learn_options['azimuth_score_in_stacker']:
azimuth_score = elevation.model_comparison.get_on_target_predictions(guideseq_data, ['WT'])[0]
X = np.concatenate((X, azimuth_score[:, None]), axis=1)
azimuth_score_test = elevation.model_comparison.get_on_target_predictions(data, ['WT'])[0]
Xtest = np.concatenate((Xtest, azimuth_score_test[:, None]), axis=1)
if 'linear-raw-stacker' in models:
dnase_type = [key for key in learn_options.keys() if 'dnase' in key]
assert len(dnase_type) <= 1
if len(dnase_type) == 1:
dnase_type = dnase_type[0]
use_dnase = learn_options[dnase_type]
else:
use_dnase = False
if use_dnase:
dnase_train = guideseq_data["dnase"].values
dnase_test = data["dnase"].values
assert dnase_train.shape[0] == X.shape[0]
assert dnase_test.shape[0] == Xtest.shape[0]
if dnase_type == 'dnase:default':
# simple appending (Melih)
X = np.concatenate((X, dnase_train[:, None]), axis=1)
Xtest = np.concatenate((Xtest, dnase_test[:, None]), axis=1)
elif dnase_type == 'dnase:interact':
# interaction with original features
X = np.concatenate((X, X*dnase_train[:, None]), axis=1)
Xtest = np.concatenate((Xtest, Xtest*dnase_test[:, None]), axis=1)
elif dnase_type == 'dnase:only':
# use only the dnase
X = dnase_train[:, None]
Xtest = dnase_test[:, None]
elif dnase_type == 'dnase:onlyperm':
# use only the dnase
pind = np.random.permutation(dnase_train.shape[0])
pind_test = np.random.permutation(dnase_test.shape[0])
X = dnase_train[pind, None]
Xtest = dnase_test[pind_test, None]
else:
raise NotImplementedError("no such dnase type: %s" % dnase_type)
normX = True
strength = 1.0
# train the model
if trained_model is None:
# subsample the data for more balanced training
ind_zero = np.where(y==0)[0]
ind_keep = (y!=0).flatten()
nn = ind_keep.sum()
# take every kth' zero
increment = int(ind_zero.shape[0]/float(nn))
sampling_rate = increment - 1  # choice of the sampling ratio
k = 20  # choice of the number of nearest neighbours
smote = Smote(sampling_rate=sampling_rate, k=k)
X, y = smote.fit(X, y.flatten())  # data obtained after the SMOTE transformation
print(X.shape)
print(y.shape)
y=y.reshape(len(y),1)
#----- debug
#ind_zero = np.where(y==0)[0]
#ind_keep2 = (y!=0).flatten()
#ind_keep2[np.random.permutation(ind_zero)[0:nn]] = True
#-----
# from IPython.core.debugger import Tracer; Tracer()()
# what was being used up until 9/12/2016
#clf = sklearn.linear_model.LassoCV(cv=10, fit_intercept=True, normalize=True)
# now using this:
num_fold = 10
kfold = StratifiedKFold(y.flatten()==0, num_fold, random_state=learn_options['seed'])
#kfold2 = StratifiedKFold(y[ind_keep2].flatten()==0, num_fold, random_state=learn_options['seed'])
clf = sklearn.linear_model.LassoCV(cv=kfold, fit_intercept=True, normalize=(not normX), n_jobs=num_fold, random_state=learn_options['seed'])  # `not`, not `~`: ~True == -2, which is truthy
#clf2 = sklearn.linear_model.LassoCV(cv=kfold2, fit_intercept=True, normalize=(~normX),n_jobs=num_fold, random_state=learn_options['seed'])
if normX:
clf = sklearn.pipeline.Pipeline([['scaling', sklearn.preprocessing.StandardScaler()], ['lasso', clf]])
#clf2 = sklearn.pipeline.Pipeline([['scaling', sklearn.preprocessing.StandardScaler()], ['lasso', clf2]])
#y_transf = st.boxcox(y[ind_keep] - y[ind_keep].min() + 0.001)[0]
# scale to be between 0 and 1 first
y_new = (y - np.min(y)) / (np.max(y) - np.min(y))
#plt.figure(); plt.plot(y_new[ind_keep], '.');
y_transf = st.boxcox(y_new - y_new.min() + 0.001)[0]
# when we do renormalize, we know that these values are mostly negative (see Teams on 6/27/2017),
# so lets just make them go entirely negative(?)
#y_transf = y_transf - np.max(y_transf)
#plt.figure(); plt.plot(y_transf, '.'); #plt.title("w out renorm, w box cox, then making all negative"); plt.show()
#import ipdb; ipdb.set_trace()
#y_transf = np.log(y[ind_keep] - y[ind_keep].min() + 0.001)
#y_transf = y[ind_keep]
# debugging
#y_transf2 = st.boxcox(y[ind_keep2] - y[ind_keep2].min() + 0.001)[0]
#y_transf2 = y[ind_keep2]
print "train data set size is N=%d" % len(y_transf)
clf.fit(X, y_transf)
#clf2.fit(X[ind_keep2], y_transf2)
#clf.fit(X_keep, tmpy)
#tmp = clf.predict(X)
#sp.stats.spearmanr(tmp[ind_keep],y_transf.flatten())[0]
#sp.stats.spearmanr(tmp[ind_keep], y[ind_keep])[0]
#sp.stats.spearmanr(tmp, y)[0]
#sp.stats.pearsonr(tmp[ind_keep],y_transf.flatten())[0]
# clf.fit(X, y.flatten())
# clf.fit(X, y, sample_weight=weights)
else:
clf = trained_model
# if normX:
# predictions['linear-raw-stacker'] = clf.predict(normalizeX(Xtest, strength, None))
# else:
predictions['linear-raw-stacker'] = clf.predict(Xtest)
# residuals = np.log(y[ind_keep].flatten()+0.001) - clf.predict(X[ind_keep])
if 'linreg-stacker' in models:
m_stacker = StackerFeat()
m_stacker.fit(preds_guideseq, y, model='linreg', normalize_feat=False)
predictions['linreg-stacker'] = m_stacker.predict(preds_base_model)
if 'RF-stacker' in models:
m_stacker = StackerFeat()
m_stacker.fit(preds_guideseq, y, model='RFR', normalize_feat=False)
predictions['RF-stacker'] = m_stacker.predict(preds_base_model)
if 'GP-stacker'in models:
m_stacker = StackerFeat()
m_stacker.fit(preds_guideseq, y, model='GP', normalize_feat=False)
predictions['GP-stacker'] = m_stacker.predict(preds_base_model)
if 'raw GP' in models:
X = preds_guideseq.copy()
X[np.isnan(X)] = 1.0
D_base_predictions = X.shape[1]
X = np.concatenate((np.prod(X, axis=1)[:, None],
num_annot[:, None],
np.sum(X, axis=1)[:, None],
X), axis=1)
Xtest = preds_base_model.copy()
Xtest[np.isnan(Xtest)] = 1.0
Xtest = np.concatenate((np.prod(Xtest, axis=1)[:, None],
num_mismatches[:, None],
np.sum(Xtest, axis=1)[:, None],
Xtest), axis=1)
K = GPy.kern.RBF(1, active_dims=[0]) + GPy.kern.RBF(1, active_dims=[1]) + GPy.kern.Linear(1, active_dims=[2]) + GPy.kern.RBF(D_base_predictions, active_dims=range(3, D_base_predictions+3))
m = GPy.models.GPRegression(X, np.log(y), kernel=K)
m.optimize_restarts(5, messages=0)
predictions['raw GP'] = m.predict(Xtest)[0]
if 'combine' in models:
predictions['combine'] = np.zeros_like(predictions[list(predictions.keys())[0]])  # start from zero so 'combine' is the mean of the other models
for c_model in models:
if c_model != 'combine':
predictions['combine'] += predictions[c_model].flatten()[:, None]
predictions['combine'] /= len(models)-1
if 'ensemble' in models:
predictions['ensemble'] = (predictions['product'].flatten() + predictions['linear-raw-stacker'].flatten())/2.
if prob_calibration_model is not None:
if models_to_calibrate is None:
models_to_calibrate = ['linear-raw-stacker']
for m in models:
if False:# m == 'linear-raw-stacker':
pred = np.exp(predictions[m].flatten()[:, None]) - 0.001 # undo log transformation
else:
pred = predictions[m].flatten()[:, None]
if m in models_to_calibrate:
cal_pred = prob_calibration_model[m].predict_proba(pred)[:, 1]
#cal_pred = prob_calibration_model[m].predict_proba(pred)[:, 0]
if len(pred) > 10:
assert np.allclose(sp.stats.spearmanr(pred, cal_pred)[0], 1.0)# or np.allclose(sp.stats.spearmanr(pred, cal_pred)[0], -1.0)
predictions[m] = cal_pred
if truth is not None:
res_str = "Spearman r: "
for m in models:
res_str += "%s=%.3f " % (m, sp.stats.spearmanr(truth, predictions[m])[0])
print(res_str)
res_str = "NDCG: "
for m in models:
res_str += "%s=%.3f " % (m, azimuth.metrics.ndcg_at_k_ties(truth.values.flatten(), predictions[m].flatten(), truth.shape[0]))
print(res_str)
if return_model:
if return_residuals:
return predictions, clf, feature_names, residuals
else:
return predictions, clf, feature_names
return predictions
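For reference, the Hsu-Zhang branch above can be written as a standalone function (a hedged sketch: `d`, which `stacked_predictions` obtains from `ut.get_pairwise_distance_mudra`, is taken here as a plain parameter):

```python
import numpy as np

# Position weights as hard-coded in the 'HsuZhang' branch of stacked_predictions.
W = [0.0, 0.0, 0.014, 0.0, 0.0, 0.395, 0.317, 0, 0.389, 0.079,
     0.445, 0.508, 0.613, 0.851, 0.732, 0.828, 0.615, 0.804, 0.685, 0.583]

def hsu_zhang_score(tmp_muts, d=0.0):
    """Score a guide/off-target pair from its 1-based mismatch positions."""
    tmp_muts = np.asarray(tmp_muts, dtype=int)
    num_m = tmp_muts.size
    if num_m == 0:
        return 1.0                                    # no mismatches: perfect score
    term1 = np.prod(1. - np.array(W)[tmp_muts - 1])   # per-position penalties
    term2 = 1. / (((19 - d) / 19) * 4 + 1) if num_m > 1 else 1.
    term3 = 1. / num_m ** 2
    return term1 * term2 * term3

assert hsu_zhang_score([]) == 1.0
assert abs(hsu_zhang_score([3]) - (1 - 0.014)) < 1e-12
```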
def train_prob_calibration_model(cd33_data, guideseq_data, preds_guideseq, base_model, learn_options, which_stacker_model='linear-raw-stacker', other_calibration_models=None):
assert which_stacker_model == 'linear-raw-stacker', "only LRS can be calibrated right now"
# import ipdb; ipdb.set_trace()
# if cd33_data is not None:
Y_bin = cd33_data['Day21-ETP-binarized'].values
Y = cd33_data['Day21-ETP'].values
# else:
# ind = np.zeros_like(guideseq_data['GUIDE-SEQ Reads'].values)
# ind[guideseq_data['GUIDE-SEQ Reads'].values > 0] = True
# ind_zero = np.where(guideseq_data['GUIDE-SEQ Reads'].values==0)[0]
# ind[ind_zero[::ind_zero.shape[0]/float(ind.sum())]] = True
# ind = ind==True
# Y = guideseq_data[ind]['GUIDE-SEQ Reads'].values
# cd33_data = guideseq_data[ind]
#X_guideseq = predict(base_model, cd33_data, learn_options)[0]
nb_pred, individual_mut_pred_cd33 = predict(base_model, cd33_data, learn_options)
# # The models in the ensemble have to be calibrated as well, so we rely on
# # having previously-calibrated models available in a dictionary
# if which_model == 'ensemble':
# models = ['CFD', 'HsuZhang', 'product', 'linear-raw-stacker', 'ensemble']
# models_to_calibrate = ['product', 'linear-raw-stacker']
# calibration_models = other_calibration_models
# else:
# models = [which_model]
# models_to_calibrate = None
# calibration_models = None
# get linear-raw-stacker (or other model==which_model) predictions, including training of that model if appropriate (e.g. linear-raw-stacker)
X_guideseq, clf_stacker_model, feature_names_stacker_model = stacked_predictions(cd33_data, individual_mut_pred_cd33,
models=[which_stacker_model],
guideseq_data=guideseq_data,
preds_guideseq=preds_guideseq,
learn_options=learn_options,
models_to_calibrate=None,
prob_calibration_model=None,
return_model=True)
X_guideseq = X_guideseq[which_stacker_model]
clf = sklearn.linear_model.LogisticRegression(fit_intercept=True, solver='lbfgs')
# fit the linear-raw-stacker (or whatever model is being calibrated) predictions on cd33 to the actual binary cd33 values
clf.fit(X_guideseq[:, None], Y_bin)
y_pred = clf.predict_proba(X_guideseq[:, None])[:, 1]
#y_pred = clf.predict_proba(X_guideseq[:, None])[:, 0]
#import ipdb; ipdb.set_trace()
expected_sign = np.sign(sp.stats.spearmanr(X_guideseq, Y_bin)[0])
assert np.allclose(sp.stats.spearmanr(y_pred, X_guideseq)[0], 1.0*expected_sign, atol=1e-2)
return clf
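The calibration idea above, fitting a one-feature logistic regression from stacker scores to binarized activity, can be sketched on synthetic data (toy scores and labels, not the real cd33 features):

```python
import numpy as np
import scipy.stats
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
scores = rng.randn(200)                                    # stand-in stacker predictions
labels = (scores + 0.5 * rng.randn(200) > 0).astype(int)   # noisy binarized "truth"

clf = LogisticRegression(fit_intercept=True, solver='lbfgs')
clf.fit(scores[:, None], labels)
probs = clf.predict_proba(scores[:, None])[:, 1]

# A monotone 1-D logistic map preserves the ranking of the raw scores,
# which is what the spearmanr assertion in train_prob_calibration_model checks.
assert np.allclose(scipy.stats.spearmanr(scores, probs)[0], 1.0)
```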
def excute(wildtype, offtarget,calibration_models,base_model,guideseq_data,preds_guideseq,learn_options): # helper that runs the models on a test set
start = time.time()
wt = wildtype
mut = offtarget
df = pd.DataFrame(columns=['30mer', '30mer_mut', 'Annotation'], index=range(len(wt)))
df['30mer'] = wt
df['30mer_mut'] = mut
annot = []
for i in range(len(wt)):
annot.append(elevation.load_data.annot_from_seqs(wt[i], mut[i]))
df['Annotation'] = annot
# print "Time spent parsing input: ", time.time() - start
base_model_time = time.time()
nb_pred, individual_mut_pred = elevation.prediction_pipeline.predict(base_model, df, learn_options)
#print "Time spent in base model predict(): ", time.time() - base_model_time
start = time.time()
pred = stacked_predictions(df, individual_mut_pred,
learn_options=learn_options,
guideseq_data=guideseq_data,
preds_guideseq=preds_guideseq,
prob_calibration_model=calibration_models,
models=['HsuZhang', 'CFD', 'CCTOP', 'linear-raw-stacker'])
return pred
# -
# plot the precision-recall curves:
def test_pr(predictions,truth,listmodel,listcolor,save_name):
plt.figure()
for i in range(len(listmodel)):
model=listmodel[i]
color=listcolor[i]
precision, recall, thresholds = precision_recall_curve(truth.flatten(), predictions[model].flatten())
model_ave_precision = average_precision_score(truth, predictions[model])
plt.plot(recall,precision,label=model+"(%.3f" % model_ave_precision+")",color=color,lw=2)
plt.legend(loc=0)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.savefig(save_name,dpi=300)
# plot the ROC curves
def test_roc(predictions, truth,listmodel,listcolor,name):
plt.figure()
for i in range(len(listmodel)):
model=listmodel[i]
color=listcolor[i]
fpr, tpr, thresholds = roc_curve(truth.flatten(), predictions[model].flatten())
model_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label=model+"(%.3f" % model_auc+")",color=color,lw=2)
plt.legend(loc=0)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='black',
label='Base Line', alpha=.8)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.savefig(name,dpi=300)
listmodel=['Elevation-score_with_smote','Elevation-score','HsuZhang','CFD','CCTOP']
listcolor=['blue','purple','green','red','orange']
#train_final_model
learn_options=options.learn_options
base_model, base_feature_names=pp.train_base_model(learn_options)
guideseq_data=pp.load_guideseq(learn_options, False, False)
preds_guideseq=pp.predict_guideseq(base_model, guideseq_data, learn_options, True)
cd33_data=elevation.load_data.load_cd33(learn_options)
cd33_data = cd33_data[0]
cd33_data['Annotation'] = cd33_data['Annotation'].apply(lambda x: [x])
to_be_calibrated = 'linear-raw-stacker'
calibration_models = {}
calibration_models[to_be_calibrated]=train_prob_calibration_model(cd33_data,guideseq_data,preds_guideseq,base_model,learn_options,which_stacker_model=to_be_calibrated,other_calibration_models=calibration_models)
guideseq_data
# run the evaluation on the test sets
# load the 5-sgRNA dataset
RNA_5g=pd.read_csv("5gRNA_final-contain-read_data.tab",header=None)
wildtype_5g=list(RNA_5g.iloc[:,0])
offtarget_5g=list(RNA_5g.iloc[:,1])
read_5g=np.array(RNA_5g.iloc[:,2])
class_5g=np.array(RNA_5g.iloc[:,3])
preds_smote=excute(wildtype_5g, offtarget_5g,calibration_models,base_model,guideseq_data,preds_guideseq,learn_options)
preds_smote
preds_smote.keys()
preds_smote.update(Elevation_with_smote=preds_smote.pop('elevation_w'))
p1=Predict()
preds_1=p1.execute(wildtype_5g,offtarget_5g)
# +
preds_smote["Elevation-score_with_smote"]=preds_smote["linear-raw-stacker"]
preds_smote.pop('linear-raw-stacker')
# -
preds_smote.keys()
preds_smote['Elevation-score']=preds_1["linear-raw-stacker"]
preds_with_smote_5g=preds_smote
preds_with_smote_5g.keys() #d.update(y=d.pop('a'))
name='5sgRNAs_pr_smote'
test_pr(preds_with_smote_5g,class_5g,listmodel,listcolor,name)
name='5sgRNAs_roc_smote'
test_roc(preds_with_smote_5g, class_5g,listmodel,listcolor,name)
# run the 12-gRNA test
RNA_12g=pd.read_csv("22gRNA_final-contain_data.tab",header=None)
wildtype_12g=list(RNA_12g.iloc[:,0])
offtarget_12g=list(RNA_12g.iloc[:,1])
read_12g=np.array(RNA_12g.iloc[:,2])
class_12g=np.array(RNA_12g.iloc[:,3])
preds_smote_12g=excute(wildtype_12g, offtarget_12g,calibration_models,base_model,guideseq_data,preds_guideseq,learn_options)
# +
#d.update(y=d.pop('a')) # how to rename a dict key
# -
preds_smote_12g.keys()
preds_12g=p1.execute(wildtype_12g ,offtarget_12g)
preds_smote_12g["Elevation-score_with_smote"]=preds_smote_12g['linear-raw-stacker']
preds_smote_12g.pop("linear-raw-stacker")
preds_smote_12g["Elevation-score"]=preds_12g["linear-raw-stacker"]
preds_smote_12g.keys()
name='12sgRNAs_pr_smote'
test_pr(preds_smote_12g,class_12g,listmodel,listcolor,name)
name='12sgRNAs_roc_smote'
test_roc(preds_smote_12g, class_12g,listmodel,listcolor,name)
# scripts_for_improve_Elevation/smote.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# STAT 453: Deep Learning (Spring 2021)
# Instructor: <NAME> (<EMAIL>)
#
# Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/
# GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21
#
# ---
# %load_ext watermark
# %watermark -a '<NAME>' -v -p torch
# # MLP with Custom Data Loader
# ## Imports
import torch
import numpy as np
import matplotlib.pyplot as plt
# From local helper files
from helper_evaluation import set_all_seeds, set_deterministic
from helper_train import train_model
from helper_plotting import plot_training_loss, plot_accuracy, show_examples
from helper_dataset import get_dataloaders
# ## Settings and Dataset
# +
##########################
### SETTINGS
##########################
RANDOM_SEED = 1
BATCH_SIZE = 64
NUM_EPOCHS = 100
DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# -
set_all_seeds(RANDOM_SEED)
set_deterministic()
# +
##########################
### MNIST DATASET
##########################
train_loader, valid_loader, test_loader = get_dataloaders(batch_size=BATCH_SIZE)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
print('Class labels of 10 examples:', labels[:10])
break
# -
# ## Model
# +
class MLP(torch.nn.Module):
def __init__(self, num_features, num_hidden, num_classes):
super().__init__()
self.num_classes = num_classes
self.model = torch.nn.Sequential(
torch.nn.Flatten(),
torch.nn.Linear(num_features, num_hidden),
torch.nn.Sigmoid(),
torch.nn.Linear(num_hidden, num_classes))
def forward(self, x):
return self.model(x)
#################################
### Model Initialization
#################################
torch.manual_seed(RANDOM_SEED)
model = MLP(num_features=28*28,
num_hidden=100,
num_classes=10)
model = model.to(DEVICE)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# -
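The forward pass of the MLP above (flatten → linear → sigmoid → linear) can be shape-checked in plain NumPy; the weights here are random stand-ins, not trained parameters:

```python
import numpy as np

rng = np.random.RandomState(1)
W1 = rng.randn(28 * 28, 100) * 0.01   # stand-in for the first Linear layer
b1 = np.zeros(100)
W2 = rng.randn(100, 10) * 0.01        # stand-in for the output Linear layer
b2 = np.zeros(10)

def mlp_forward(x):
    x = x.reshape(x.shape[0], -1)             # Flatten
    h = 1. / (1. + np.exp(-(x @ W1 + b1)))    # Linear + Sigmoid
    return h @ W2 + b2                        # Linear -> class logits

logits = mlp_forward(rng.randn(64, 1, 28, 28))  # one BATCH_SIZE-sized MNIST batch
assert logits.shape == (64, 10)
```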
# ## Training
minibatch_loss_list, train_acc_list, valid_acc_list = train_model(
model=model,
num_epochs=NUM_EPOCHS,
train_loader=train_loader,
valid_loader=valid_loader,
test_loader=test_loader,
optimizer=optimizer,
device=DEVICE)
# ## Evaluate
# +
plot_training_loss(minibatch_loss_list=minibatch_loss_list,
num_epochs=NUM_EPOCHS,
iter_per_epoch=len(train_loader),
results_dir=None,
averaging_iterations=20)
plt.show()
plot_accuracy(train_acc_list=train_acc_list,
valid_acc_list=valid_acc_list,
results_dir=None)
plt.show()
# -
show_examples(model=model, data_loader=test_loader)
# L09/code/custom-dataloader/train_model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.6 64-bit (''.env'': venv)'
# language: python
# name: python3
# ---
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
pd.set_option('display.max_columns', None)
# ### Reading CSV files
history = pd.read_csv('./dataset/csv_data/streaming_history_data.csv')
history.head(5)
# Converting release_date to datetime
history['release_date'] = pd.to_datetime(history['release_date'])
history['end_time'] = pd.to_datetime(history['end_time'])
history['minutes_played'] = history[' ms_played'].divide(60000)
history.drop(' ms_played', axis=1, inplace=True)
history.columns
# ## Most played artists name
most_played_artists_by_count = history.groupby(by='artist_name')['track_name'].count().sort_values(ascending=False)[:20]
fig = go.Figure(
data=[
go.Bar(
x = most_played_artists_by_count.index,
y = most_played_artists_by_count.values,
marker = {
'color' : most_played_artists_by_count.values
},
text = most_played_artists_by_count.index,
)
]
)
fig.update_layout(
title_text= 'Popularity Of Artists By Number Of Times Their Song Was Played',
barmode='group',
xaxis_tickangle=45,
xaxis_title = "Name",
yaxis_title = "Count",
hovermode = 'x',
height = 650,
width = 1210,
)
fig.show()
# ## Count of songs listened
most_played_songs_count = history.groupby(by='track_name')['track_name'].count().sort_values(ascending=False)[:25]
# plot for count of track listened
fig = go.Figure(
data=[
go.Bar(
x = most_played_songs_count.index,
y = most_played_songs_count.values,
marker = {
'color' : most_played_songs_count.values
},
text = most_played_songs_count.index,
textposition = 'inside'
)
]
)
fig.update_layout(
title_text='Count of tracks listened',
barmode='group',
xaxis_tickangle=45,
xaxis_title = "track_name",
yaxis_title = "track_count",
height = 800,
width = 1210,
hovermode = 'x',
)
fig.show()
# ## Songs listened per month
# +
# plot for count of track listened
month = history['month'].value_counts().sort_index(ascending=True)
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'July', 'Aug', 'Sept', 'Oct', 'Nov', 'Dec']
fig = go.Figure(
data=[
go.Bar(
x = months,
y = month.values,
marker = {
'color' : month.values
},
text = months,
textposition = 'inside'
)
]
)
fig.update_layout(
title_text='Count of songs listened as per month',
barmode='group',
xaxis_title = "month",
yaxis_title = "count",
height = 800,
width = 1210,
hovermode = 'x'
)
fig.show()
# -
# ## Popularity of artist based on time listened
amount_of_time = history.groupby(by='artist_name')['minutes_played'].sum().sort_values(ascending=False)[:15]
# plot for count of track listened
fig = go.Figure(
data=[
go.Bar(
x = amount_of_time.index,
y = amount_of_time.values,
marker = {
'color' : amount_of_time.values
},
text = amount_of_time.index,
textposition = 'inside'
)
]
)
fig.update_layout(
title_text='Popularity of artists by amount of time spent listening to their song',
barmode='group',
xaxis_tickangle=45,
xaxis_title = "track_name",
yaxis_title = "track_count",
height = 800,
width = 1210,
hovermode = 'x',
)
fig.show()
# ## Some plot
history['days'] = [d.date() for d in history['end_time']]
history['time'] = [d.time() for d in history['end_time']]
history.drop('end_time', axis=1, inplace=True)
history.head()
day = history.groupby(by=['days'], as_index=False).sum()
# +
fig = px.line(
day,
x="days",
y="minutes_played",
labels={
"days": "Date",
"minutes_played": "Minutes Played"
},
color_discrete_sequence = px.colors.sequential.Agsunset,
title = "Timeline Of My Streaming History"
)
fig.update_layout(
hovermode = 'x'
)
fig.show()
# -
history.head()
# ## Time spend listening on each day of week
history['week_day_name'] = pd.DatetimeIndex(history['days']).day_name()
week_day_date = history.groupby(by=['week_day_name'], as_index=False).sum()
week_day_date.head()
fig = px.pie(
history,
names="week_day_name",
values="minutes_played",
color_discrete_sequence = px.colors.sequential.Agsunset
)
fig.update_layout(
title = 'Time spend listening on each day of week'
)
fig.show()
# ## Spider Graph plot
top_5_df = history[['track_name', 'danceability', 'energy', 'loudness', 'speechiness', 'acousticness']]
top_5_df.head(6)
top_5 = top_5_df.iloc[1:6]  # rows 1-5: skip the first row, keep the next five tracks
top_5
# +
import plotly.graph_objects as go
categories = ['danceability','energy',
'loudness', 'speechiness', 'acousticness']
fig = go.Figure()
fig.add_trace(
go.Scatterpolar(
r = [0.375, 0.461, -6.202, 0.0279, 0.6270],
theta=categories,
fill='toself',
name='Selfish'
)
)
fig.add_trace(go.Scatterpolar(
r=[0.281, 0.462, -6.638, 0.0674, 0.4980],
theta=categories,
fill='toself',
name='Still Have Me'
)
)
fig.add_trace(go.Scatterpolar(
r=[0.836, 0.544, -5.975, 0.0943, 0.0403],
theta=categories,
fill='toself',
name='IDGAF'
)
)
fig.add_trace(
go.Scatterpolar(
r = [0.460, 0.800, -3.584, 0.0500, 0.2890],
theta=categories,
fill = 'toself',
name = 'What I Like About You (feat. Theresa Rex)'
)
)
fig.add_trace(
go.Scatterpolar(
r = [0.726, 0.554, -5.290, 0.0917, 0.0421],
theta = categories,
fill='toself',
name='break up with your girlfriend, i\'m bored'
)
)
fig.update_layout(
title = "Diversity in audio features of top 5 songs",
polar=dict(
radialaxis = dict(
visible=True,
range=[-10, 1]
)
),
showlegend=True
)
fig.show()
# -
# ## Venn chart for valence
# +
v = history['valence'].tolist()
less_count, more_count, middle_count = 0, 0, 0
# iterating each number in list
for num in v:
# checking condition
if num >= 0 and num <0.5:
less_count += 1
elif num >=0.5 and num < 0.6:
middle_count += 1
else:
more_count += 1
print("Less than 0.5: ", less_count)
print("More than 0.6: ", more_count)
print("Between 0.5 and 0.6: ", middle_count)
# +
fig = go.Figure()
# Create scatter trace of text labels
fig.add_trace(go.Scatter(
x=[1, 1.75, 2.5],
y=[1, 1, 1],
text=["Low Spirit: 1617", "Neutral: 552", "High Spirit: 1313"],
mode="text",
textfont=dict(
color="black",
size=18,
family="Arail",
)
))
# Update axes properties
fig.update_xaxes(
showticklabels=False,
showgrid=False,
zeroline=False,
)
fig.update_yaxes(
showticklabels=False,
showgrid=False,
zeroline=False,
)
# Add circles
fig.add_shape(type="circle",
line_color="skyblue", fillcolor="skyblue",
x0=0, y0=0, x1=2, y1=2
)
fig.add_shape(type="circle",
line_color="firebrick", fillcolor="firebrick",
x0=1.5, y0=0, x1=3.5, y1=2
)
fig.update_shapes(opacity=0.4, xref="x", yref="y")
fig.update_layout(
margin=dict(l=20, r=20, b=100),
height=600, width=800,
plot_bgcolor="white",
title = 'Happy or Sad Venn graph'
)
fig.show()
# -
# ## Pie chart for explicit content
# +
explicit_content = history['explicit'].value_counts().sort_index(ascending=True)
colors = ['Green', 'firebrick']
labels = explicit_content.index
values = explicit_content.values
fig = go.Figure(
data=[
go.Pie(
labels=labels,
values=values,
hole=.3
)
]
)
fig.update_layout(
title_text='Explicit songs distribution',
height = 600,
width = 900,
hovermode = 'x'
)
fig.update_traces(
hoverinfo='label+percent',
textinfo='value',
textfont_size=20,
marker=dict(
colors = colors,
line=dict(
color='#000000', width=2
)
)
)
fig.show()
# -
# Heatmap for songs attributes
correlation = history[['danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']]
correlation = correlation.corr(method='spearman')
correlation
fig = px.imshow(
correlation,
labels = dict(
color="Correlation"
)
)
fig.update_layout(
title_text='Heatmap Correlation between songs attributes',
height = 600,
width = 800,
)
fig.show()
popularity = history[['artist_name', 'popularity']]
most_popular = popularity.groupby('artist_name').agg('max').reset_index().sort_values(by='popularity', ascending=False)[:1]
least_popular = popularity.groupby('artist_name').agg('max').reset_index().sort_values(by='popularity', ascending=False)[::-1][:1]
print(f'Most popular: {" ".join(most_popular["artist_name"])}')
print(f'least popular: {" ".join(least_popular["artist_name"])}')
# Radar Plot
# radar plot
song_attributes = history[['danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']]
# +
fig = go.Figure(
data = go.Scatterpolar(
r = song_attributes.mean(),
theta = ['danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo'],
fill = 'tonext'
)
)
fig.update_layout(
polar = dict(
radialaxis = dict(
visible = True
),
),
showlegend=False,
title_text = 'Radar chart for songs features'
)
fig.show()
# -
# Correlation between song attributes
def get_correlation_song_attribute(x, y):
fig = px.scatter(
history,
x = x,
y = y,
color = "explicit",
facet_row = "year",
marginal_y = "box"
)
fig.update_layout(
title = f'Correlation between {x} and {y} attributes',
)
return fig
get_correlation_song_attribute('speechiness', 'danceability')
get_correlation_song_attribute('energy', 'liveness')
# Total time listened per artist (in minutes)
history.head(1)
minutes_listend = history[['artist_name', 'track_name', 'minutes_played']]
group_ms_played = minutes_listend.groupby(by=['artist_name']).agg('sum').reset_index().sort_values(by='minutes_played', ascending=False)
ms_played = group_ms_played.head(40)
# most artists listened
fig = go.Figure(
data=[
go.Bar(
x = ms_played['artist_name'],
y = ms_played['minutes_played'],
marker = {
'color' : ms_played['minutes_played']
},
text = ms_played['artist_name'],
)
]
)
fig.update_layout(
title_text= 'Total time listened per artist (in minutes)',
barmode = 'group',
xaxis_tickangle=45,
xaxis_title = "Artist Name",
yaxis_title = "Minutes",
hovermode = 'x',
height = 650,
width = 1210,
)
fig.show()
# streamHistory_analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ironhack
# language: python
# name: ironhack
# ---
import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup
import lxml.html as lh
from src.api_functions import *
from src.cleaning_functions import *
from src.visual_functions import *
import plotly.express as px
import seaborn as sns
import plotly.graph_objects as go
import matplotlib.pyplot as plt
from pylab import mpl
csv_1 = 'output/dfinal.csv' # Current population values
csv_2 = 'output/dfinal_coords.csv' # Countries coordinates
csv_3 = 'output/energy-var.csv' # Interannual variation of energy consumption
csv_4 = 'output/pop_avg_rate.csv' # Population interannual variation / average rate
csv_5 = 'output/table-1-elec.csv' # Electricity production at init
csv_6 = 'output/table-2-share.csv' # Energy production technologies share at init
csv_7 = 'output/table-3-capita.csv' # Energy consumption init
db1 = load_csv(csv_1)
db2 = load_csv(csv_2)
db3 = load_csv(csv_3)
db4 = load_csv(csv_4)
db5 = load_csv(csv_5)
db6 = load_csv(csv_6)
db7 = load_csv(csv_7)
db1.sample(2)
db2.sample(2)
db3.sample(2)
db4.sample(2)
db4.shape
db5=db5.drop(['Unnamed: 0'], axis=1, inplace=False)
db5.sample(2)
db6=db6.drop(['Unnamed: 0'], axis=1, inplace=False)
db6.sample(2)
db7=db7.drop(['Unnamed: 0'], axis=1, inplace=False)
db7.sample(2)
x = db1['population']
len(x)
db4.GrowthRate = db4.GrowthRate.astype(int)
y = np.multiply(db1['population'], db4['GrowthRate'])
len(y)
parameter = liner_fitting(x,y)
draw_data = calculate_lin(x,parameter[0],parameter[1])
draw_lin(x,draw_data,y)
data=polynomial_fitting(x,y)
parameters=calculate_parameter(data)
for w in parameters:
print(w)
newData=calculate_bin(x,parameters)
draw_bin(x,newData,y)
# +
animals=['giraffes', 'orangutans', 'monkeys']
fig = go.Figure(data=[
go.Bar(name='SF Zoo', x=animals, y=[20, 14, 23]),
go.Bar(name='LA Zoo', x=animals, y=[12, 18, 29])
])
# Change the bar mode
fig.update_layout(barmode='group')
fig.show()
# -
penguins = sns.load_dataset("penguins")  # load seaborn's bundled example dataset used below
agrupado = penguins.groupby(["species"])["sex"].value_counts().unstack()
agrupado
animals = penguins.species.unique() # this is a list with the species names
fig = go.Figure(data=[
go.Bar(name="Female", x=animals, y=agrupado.Female),
go.Bar(name="Male", x=animals, y=agrupado.Male)
])
fig.show()
| notebooks/Storytelling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# -
# # Inverse Transform sampling
#
#
# ## Rationale
#
#
# **Inverse transform sampling** allows us to transform samples from a uniform distribution $U$ into samples from any other distribution $D$, given the $CDF$ of $D$.
#
# How can we do it?
#
# Let's take
#
# $$\large T(U) = X$$
#
# where:
#
# * $U$ is a uniform random variable
# * $T$ is some kind of a transformation
# * $X$ is the target random variable (let's use **exponential** distribution as an example)
#
#
# Now, we said that to perform **inverse transformation sampling**, we need a $CDF$.
#
# By definition $CDF$ (we'll call it $F_X(x)$ here) is given by:
#
# $$\large F_X(x) \triangleq P(X \leq x)$$
#
# We said before that to get $X$, we'll apply certain transformation $T$ to a uniform random variable.
#
# We can then say, that:
#
# $$\large P(X \leq x) = P(T(U) \leq x)$$
#
# Now, let's apply the inverse of $T$ to both sides of the inequality:
#
# $$\large = P(U \leq T^{-1}(x))$$
#
# The uniform distribution has a nice property: its $CDF$ at any point $x \in [0, 1]$ is equal to the value of $x$ itself.
#
# Therefore, we can say that:
#
# $$\large = T^{-1}(x)$$
#
# and conclude that:
#
# $$\large F_X(x) = T^{-1}(x)$$
#
#
# ## Conclusion
#
# We demonstrated how to sample from any distribution $D$ using a sample from a uniform distribution and the inverse of the $CDF$ of $D$.
#
# Now, let's apply it in practice!
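# The identity $F_X(x) = T^{-1}(x)$ also means $T = F_X^{-1}$, the inverse $CDF$, which `scipy` exposes as `ppf`. A minimal sketch of the recipe (assuming `numpy` and `scipy` are available, and using $\lambda = 8$ as in the code below):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)           # samples from U(0, 1)
x = stats.expon(scale=1/8).ppf(u)      # T(U) = F_X^{-1}(U) -> Exp(8) samples
# Exp(lambda) has mean 1/lambda, so the sample mean should be close to 0.125
assert np.isclose(x.mean(), 1/8, atol=0.01)
```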
# ## Code
#
# Let's see how to apply this in Python.
#
# We'll use **exponential distribution** as an example.
# Define params
SAMPLE_SIZE = 100000
N_BINS = np.sqrt(SAMPLE_SIZE).astype('int') // 2
LAMBDA = 8
# Let's instantiate distributions.
#
# We will instantiate an exponential distribution explicitly for comparison purposes.
#
# ___________
#
# Note that **`scipy.stats`** has a slightly **different parametrization** of the exponential distribution than the popular $\lambda$ parametrization.
#
# In the [documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html), we read:
#
# *A common parameterization for expon is in terms of the rate parameter lambda, such that pdf = lambda * exp(-lambda * x). This parameterization corresponds to using scale = 1 / lambda.*
#
# ____________
#
# Therefore, we're going to use **`scale=1/LAMBDA`** to parametrize our test **exponential distribution**.
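# We can quickly verify this parametrization (a small sketch; for $Exp(\lambda)$ the mean is $1/\lambda$ and the variance is $1/\lambda^2$):

```python
from scipy import stats

LAMBDA = 8
exp = stats.expon(scale=1/LAMBDA)
assert abs(exp.mean() - 1/LAMBDA) < 1e-12     # mean = 1/lambda
assert abs(exp.var() - 1/LAMBDA**2) < 1e-12   # variance = 1/lambda^2
```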
# +
# Instantiate U(0, 1)
unif = stats.uniform(0, 1)
# Instantiate Exp(8) for comparison purposes
exp = stats.expon(loc=0, scale=1/LAMBDA)
# -
# Now, we need to define the inverse transformation $T^{-1}(x)$ that will allow us to translate between uniform and exponential samples.
#
# The $CDF$ of the exponential distribution is defined as:
#
# $$\large
# \begin{equation}
# F_X(x) \triangleq
# \begin{cases}
# 1 - e^{-\lambda x} \ \text{ for }\ x \geq 0\\
# 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{for }\ x<0 \\
# \end{cases}
# \end{equation}
# $$
#
# Let's take the inverse of this function (solve for $x$):
#
# $$\large y = 1 - e^{-\lambda x}$$
#
# * rearrange to isolate the exponential term:
#
# $$\large e^{-\lambda x} = 1 - y$$
#
# * take $ln$ of both sides:
#
# $$\large -\lambda x = ln(1 - y)$$
#
# * divide both sides by $-\lambda$:
#
# $$\large x = -\frac{ln(1 - y)}{\lambda}$$
#
# <br>
#
# **Et voilà!** 🎉🎉🎉
#
# We've got it! 💪🏼
#
# <br>
#
# Let's translate it to Python code:
# Define
def transform_to_exp(x, lmbd):
"""Transoforms a uniform sample into an exponential sample"""
return -np.log(1 - x) / lmbd
# Take samples:
# +
# Sample from uniform
sample_unif = unif.rvs(SAMPLE_SIZE)
# Sample from the true exponential
sample_exp = exp.rvs(SAMPLE_SIZE)
# Transform U -> Exp
sample_transform = transform_to_exp(sample_unif, LAMBDA)
# -
# A brief sanity check:
# Sanity check -> U(0, 1)
plt.hist(sample_unif, bins=N_BINS, density=True)
plt.title('Histogram of $U(0, 1)$')
plt.ylabel('$p(x)$')
plt.xlabel('$x$')
plt.show()
# ...and let's compare the results:
plt.hist(sample_exp, bins=N_BINS, density=True, alpha=.5, label='Exponential')
plt.hist(sample_transform, bins=N_BINS, density=True, alpha=.5, label='$T(U)$')
plt.legend()
plt.title('Histogram of exponential and transformed distributions', fontsize=12)
plt.ylabel('$p(x)$')
plt.xlabel('$x$')
plt.show()
# Beautiful! It worked as expected 🎉🎉🎉
| sampling/00 - Inverse transform sampling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # People Counter Sample Application
#
# This notebook shows how to create an object detection app for Panorama using a pretrained MXNet model.
#
# By completing this notebook, you will learn:
# * How to write a Python script for your app that takes in camera streams, performs inference, and outputs results
# * How to test your code using the Test Utility inside this Jupyter notebook, which saves you deployment time
# * How to use an MXNet object detection model with your app
# * How to programmatically package and deploy applications using the Panorama CLI
# * How to use an abstract camera node and override the camera programmatically
# ---
#
# 1. [Prerequisites](#Prerequisites)
# 1. [Set up](#Set-up)
# 1. [Import model](#Import-model)
# 1. [Write and test app code](#Write-and-test-app-code-in-notebook)
# 1. [Package app](#Package-app)
# 1. [Deploy app to device](#Deploy-app-to-device)
# # Prerequisites
# 1. In a terminal session on this Jupyter notebook server, run `aws configure`. This allows this notebook server to access Panorama resources and deploy applications on your behalf.
# # Set up
# Import libraries for use with this notebook environment; you do not need these libraries when you write your application code.
# +
import sys
import os
import time
import json
import boto3
import sagemaker
import matplotlib.pyplot as plt
from IPython.core.magic import register_cell_magic
sys.path.insert( 0, os.path.abspath( "../common/test_utility" ) )
import panorama_test_utility
# instantiate boto3 clients
s3_client = boto3.client('s3')
panorama_client = boto3.client('panorama')
# configure matplotlib
# %matplotlib inline
plt.rcParams["figure.figsize"] = (20,20)
# register custom magic command
@register_cell_magic
def save_cell(line, cell):
'Save python code block to a file'
with open(line, 'wt') as fd:
fd.write(cell)
# + [markdown] tags=[]
# ## Notebook parameters
# Global constants that help the notebook create Panorama resources on your behalf.
# +
# Device ID, should look like: device-oc66nax4cgzwhyuaeyifrqowue
DEVICE_ID = input( 'DEVICE_ID (format: device-*)' ).strip()
# Enter your S3 bucket info here
S3_BUCKET = input( 'S3_BUCKET' ).strip()
# Enter your desired AWS region
AWS_REGION = input( 'AWS_REGION (e.g. us-east-1)' ).strip()
ML_MODEL_FNAME = 'ssd_512_resnet50_v1_voc'
# + tags=[]
# application name
app_name = 'people_counter_app'
## package names and node names
code_package_name = 'PEOPLE_COUNTER_CODE'
model_package_name = 'SSD_MODEL'
camera_node_name = 'abstract_rtsp_media_source'
# model node name, raw model path (without platform dependent suffix), and input data shape
model_node_name = "model_node"
model_file_basename = "./models/" + ML_MODEL_FNAME
model_data_shape = '{"data":[1,3,512,512]}'
# video file path to simulate camera stream
videoname = '../common/test_utility/videos/TownCentreXVID.avi'
# AWS account ID
account_id = boto3.client("sts").get_caller_identity()["Account"]
# -
# ## Set up application
#
# Every application uses the creator's AWS account ID as a prefix to uniquely identify the application resources. Running `panorama-cli import-application` replaces the generic account ID with your account ID.
# !cd ./people_counter_app && panorama-cli import-application
# + [markdown] tags=[]
# # Import model
# -
# We need to compile and import the model twice. Once for testing with this notebook server and once for deploying to the Panorama device.
#
# While working with the Panorama sample code, we provide pretrained models for you to use. Locally, models are stored in `./models`. This step downloads the model artifacts from our Amazon S3 bucket to the local folder. If you want to use your own models, put your tar.gz file into the `./models` folder.
# + [markdown] tags=[]
# ### Prepare model for testing with notebook server
# + tags=[]
# Downloads pretrained model for this sample.
# This step takes some time, depending on your network environment.
panorama_test_utility.download_sample_model( ML_MODEL_FNAME, "./models" )
# -
# Compile the model to run with test-utility.
# This step takes 7 mins ~ 10 mins.
# %run ../common/test_utility/panorama_test_utility_compile.py \
# \
# --s3-model-location s3://{S3_BUCKET}/{app_name}/ \
# \
# --model-node-name model_node \
# --model-file-basename ./models/{ML_MODEL_FNAME} \
# --model-data-shape '{model_data_shape}' \
# --model-framework MXNET
# + [markdown] tags=[]
# ### Prepare model for deploying to Panorama device
# -
model_asset_name = 'model_asset'
model_package_path = f'packages/{account_id}-{model_package_name}-1.0'
model_descriptor_path = f'packages/{account_id}-{model_package_name}-1.0/descriptor.json'
# !cd ./people_counter_app && panorama-cli add-raw-model \
# --model-asset-name {model_asset_name} \
# --model-local-path ../models/{ML_MODEL_FNAME}.tar.gz \
# --descriptor-path {model_descriptor_path} \
# --packages-path {model_package_path}
# # Write and test app code in notebook
# Every app has an entry point script, written in Python, that pulls the frames from camera streams, performs inference, and sends the results to the desired location. This file can be found in `your_app/packages/code_node/src/app.py`. Below, you will iterate on the code from within the notebook environment. The entry point file will be updated every time you run the next notebook cell, thanks to `%%save_cell` - a magic command that updates the contents of the entry point script.
#
# After updating the entry point script, use the Test Utility Run script (panorama_test_utility_run.py) to simulate the application.
#
# ### Iterating on Code Changes
#
# To iterate on the code:
# 1. Interrupt the kernel if application is still running.
# 2. Make changes in the next cell, and run the cell to update the entry point script.
# 3. Run the panorama_test_utility_run.py again.
#
# **CHANGE VIDEO**: To change the video, set the file path in the --video-file argument of the panorama_test_utility_run.py command.
# +
# %%save_cell ./{app_name}/packages/{account_id}-{code_package_name}-1.0/src/app.py
import json
import logging
import time
from logging.handlers import RotatingFileHandler
import boto3
import cv2
import numpy as np
import panoramasdk
class Application(panoramasdk.node):
def __init__(self):
"""Initializes the application's attributes with parameters from the interface, and default values."""
self.MODEL_NODE = "model_node"
self.MODEL_DIM = 512
self.frame_num = 0
self.threshold = 50.
# Desired class
self.classids = [14.]
try:
# Parameters
logger.info('Getting parameters')
self.threshold = self.inputs.threshold.get()
except:
logger.exception('Error during initialization.')
finally:
        logger.info('Initialization complete.')
logger.info('Threshold: {}'.format(self.threshold))
def process_streams(self):
"""Processes one frame of video from one or more video streams."""
self.frame_num += 1
logger.debug(self.frame_num)
# Loop through attached video streams
streams = self.inputs.video_in.get()
for stream in streams:
self.process_media(stream)
self.outputs.video_out.put(streams)
def process_media(self, stream):
"""Runs inference on a frame of video."""
image_data = preprocess(stream.image, self.MODEL_DIM)
logger.debug(image_data.shape)
# Run inference
inference_results = self.call({"data":image_data}, self.MODEL_NODE)
        # Process results (object detection)
self.process_results(inference_results, stream)
def process_results(self, inference_results, stream):
"""Processes output tensors from a computer vision model and annotates a video frame."""
if inference_results is None:
logger.warning("Inference results are None.")
return
num_people = 0
class_data = None # Class Data
bbox_data = None # Bounding Box Data
conf_data = None # Confidence Data
# Pulls data from the class holding the results
# inference_results is a class, which can be iterated through
# but inference_results has no index accessors (cannot do inference_results[0])
k = 0
for det_data in inference_results:
if k == 0:
class_data = det_data[0]
if k == 1:
conf_data = det_data[0]
if k == 2:
bbox_data = det_data[0]
for a in range(len(conf_data)):
if conf_data[a][0] * 100 > self.threshold and class_data[a][0] in self.classids:
(left, top, right, bottom) = np.clip(det_data[0][a]/self.MODEL_DIM,0,1)
stream.add_rect(left, top, right, bottom)
num_people += 1
else:
continue
k += 1
logger.info('# people {}'.format(str(num_people)))
stream.add_label('# people {}'.format(str(num_people)), 0.1, 0.1)
def preprocess(img, size):
"""Resizes and normalizes a frame of video."""
resized = cv2.resize(img, (size, size))
mean = [0.485, 0.456, 0.406] # RGB
std = [0.229, 0.224, 0.225] # RGB
img = resized.astype(np.float32) / 255. # converting array of ints to floats
r, g, b = cv2.split(img)
# normalizing per channel data:
r = (r - mean[0]) / std[0]
g = (g - mean[1]) / std[1]
b = (b - mean[2]) / std[2]
# putting the 3 channels back together:
x1 = [[[], [], []]]
x1[0][0] = r
x1[0][1] = g
x1[0][2] = b
return np.asarray(x1)
def get_logger(name=__name__,level=logging.INFO):
logger = logging.getLogger(name)
logger.setLevel(level)
handler = RotatingFileHandler("/opt/aws/panorama/logs/app.log", maxBytes=100000000, backupCount=2)
formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
datefmt='%Y-%m-%d %H:%M:%S')
handler.setFormatter(formatter)
logger.addHandler(handler)
return logger
def main():
try:
logger.info("INITIALIZING APPLICATION")
app = Application()
logger.info("PROCESSING STREAMS")
while True:
app.process_streams()
except:
logger.exception('Exception during processing loop.')
logger = get_logger(level=logging.INFO)
main()
# + tags=[]
# Run the application with test-utility.
#
# As '--output-pyplot' option is specified, this command simulates HDMI output with pyplot rendering in the output cell.
# In order to see console output (stdout/stderr) from the application, please remove the --output-pyplot option.
#
# %run ../common/test_utility/panorama_test_utility_run.py \
# \
# --app-name {app_name} \
# --code-package-name {code_package_name} \
# --model-package-name {model_package_name} \
# --camera-node-name {camera_node_name} \
# --model-node-name {model_node_name} \
# --model-file-basename {model_file_basename} \
# --video-file {videoname} \
# --py-file ./{app_name}/packages/{account_id}-{code_package_name}-1.0/src/app.py \
# --output-pyplot
# -
# # Package app
# Updates the app to be deployed with the recent code
py_file_name = 'app.py'
panorama_test_utility.update_package_descriptor( app_name, account_id, code_package_name, py_file_name )
# + [markdown] tags=[]
# ## Update camera streams
#
# In the AWS Panorama console, you can select the camera streams, but programmatically, you need to define the camera stream info for the cameras you are using with the app.
#
# We used an ```abstract data source``` here; usually this lets you select a pre-created camera source from the console. But programmatically, we have to do the following steps:
#
#
# - Create Camera
# - Create Override json file
# - Include the Override json file while are deploying the application
# -
# ### Create New Camera
#
# Because we are using an ```abstract_rtsp_media_source```, we have to create a camera before we can use the ```abstract_rtsp_media_source```
#
# **NOTE**: Update your RTSP info in the next cell: Username, Password, and RTSP Stream URL
CAMERA_NAME = "test_rtsp_camera"
CAMERA_CREDS = '{"Username":"root","Password":"<PASSWORD>!","StreamUrl": "rtsp://192.168.0.201/onvif-media/media.amp?profile=profile_1_h264&sessiontimeout=60&streamtype=unicast"}'
# +
# res = !aws panorama create-node-from-template-job --template-type RTSP_CAMERA_STREAM \
# --output-package-name {CAMERA_NAME} \
# --output-package-version '2.0' \
# --node-name {CAMERA_NAME} \
# --template-parameters '{CAMERA_CREDS}'
# FIXME : camera node creation fails if it already exists.
# Should either ignore the already-exist error, or delete the node at the end of this notebook
res = ''.join(res)
print(res)
res_json = json.loads(res)
# -
# !aws panorama describe-node-from-template-job --job-id {res_json['JobId']}
# ## Overriding camera node
# If you want to override the camera configuration at deployment (for example, deploying to another site), you can provide a deployment-time override. Go to `people_counter_app/deployment_overrides/override_camera.json` file and replace YOUR_AWS_ACCOUNT_ID with your ACCOUNT_ID and YOUR_CAMERA_NAME with your camera name.
# +
# Update Account ID
with open( f"./{app_name}/deployment_overrides/override_camera.json", "r" ) as fd:
override_json = json.load(fd)
override_json['nodeGraphOverrides']['packages'][0]['name'] = '{}::{}'.format(account_id, CAMERA_NAME)
override_json['nodeGraphOverrides']['nodes'][0]['name'] = CAMERA_NAME
override_json['nodeGraphOverrides']['nodes'][0]['interface'] = '{}::{}.{}'.format(account_id, CAMERA_NAME, CAMERA_NAME)
override_json['nodeGraphOverrides']['nodeOverrides'][0]['with'][0]['name'] = CAMERA_NAME
with open( f"./{app_name}/deployment_overrides/override_camera.json", "w") as fd:
json.dump(override_json, fd)
# -
# ### Build app with container
container_asset_name = 'code_asset'
# +
# %%capture captured_output
# Building container image.This process takes time (5min ~ 10min)
# FIXME : without %%capture, browser tab crashes because of too much output from the command.
# !cd ./people_counter_app && panorama-cli build \
# --container-asset-name {container_asset_name} \
# --package-path packages/{account_id}-{code_package_name}-1.0
# -
stdout_lines = captured_output.stdout.splitlines()
stderr_lines = captured_output.stderr.splitlines()
print(" :")
print(" :")
for line in stdout_lines[-30:] + stderr_lines[-30:]:
print(line)
# ### Upload application to Panorama for deploying to devices
# This step takes some time, depending on your network environment.
# !cd ./people_counter_app && panorama-cli package-application
# ### Ready for deploying to a device
#
# Congrats! Your app is now ready to deploy to a device. Next, you can continue in this notebook to deploy the app programmatically, or you can go to the Panorama console and deploy using the AWS Console. The console makes it easier to select camera streams and the devices you want to deploy to. Programmatic deployment is faster to complete and easier to automate.
# # Deploy app to device
# Let's make sure the device we are deploying to is available.
# +
response = panorama_client.describe_device(
DeviceId= DEVICE_ID
)
print('You are deploying to Device: {}'.format(response['Name']))
# -
# ## Deploy app
#
# You are ready to deploy your app. Below, you can see an example of how to use the AWS CLI to deploy the app. Alternatively, you can use the boto3 SDK as you did above for getting the device information.
# +
with open(f"./{app_name}/graphs/{app_name}/graph.json") as fd:
manifest_payload = "'%s'" % json.dumps({"PayloadData":json.dumps(json.load(fd))})
with open(f"./{app_name}/deployment_overrides/override_camera.json") as fd:
override_payload = "'%s'" % json.dumps({"PayloadData":json.dumps(json.load(fd))})
# +
# res = !aws panorama create-application-instance \
# --name {app_name} \
# --default-runtime-context-device {DEVICE_ID} \
# --manifest-payload {manifest_payload} \
# --manifest-overrides-payload {override_payload}
res = ''.join(res)
print(res)
res_json = json.loads(res)
# -
# ### Check Application Status
# Instantiate panorama client
# FIXME : not using AWS_REGION here, because panorama-cli uses only default region currently.
panorama_client = boto3.client("panorama")
# +
app_id = res_json['ApplicationInstanceId']
print( "Application Instance Id :", app_id )
progress_dots = panorama_test_utility.ProgressDots()
while True:
response = panorama_client.describe_application_instance( ApplicationInstanceId = app_id )
status = response['Status']
progress_dots.update_status( f'{status} ({response["StatusDescription"]})' )
if status in ('DEPLOYMENT_SUCCEEDED','DEPLOYMENT_FAILED'):
break
time.sleep(60)
# -
# # Clean up
panorama_test_utility.remove_application( DEVICE_ID, app_id )
| samples/people_counter/people_counter_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How a Neural Network Learns
#
# https://bootcamp.codecentric.ai/
#
# In this notebook we train a classifier on the MNIST dataset. Many people now consider this dataset boring and "too easy" - but for this notebook it is exactly right. We need a small, simple dataset to show the basic steps of training a neural network. If we used a difficult dataset, our simplified steps would probably no longer work.
#
# What matters most in this notebook is what the training loop looks like. While the fastai library lets you simply call `learn.fit()`, here we will implement a (simple) training loop ourselves to understand what happens under the hood.
#
# If you want to look at the slides from the video again: https://codecentric.slides.com/omoser/neuronale-netze-und-deep-learning/#/25
# ### Example with PyTorch
# First, the required libraries and settings:
# +
# Display images inside the notebook
# %matplotlib inline
import torch
import torchvision
from torchvision import transforms
from matplotlib import pyplot
# -
# ## Load / prepare the data
DATA_PATH = '/data/'
# We use PyTorch's built-in tools and define:
#
# - a set of transforms (applied while loading the data, e.g. data augmentation or normalization)
# - a training set
# - a loader for the training set
#
# (For now we define only a training set, without a validation set)
# +
# Batch Size
bs = 128
tfms = transforms.Compose([
transforms.ToTensor(),
transforms.Lambda(lambda x: x.flatten())
])
train_set = torchvision.datasets.MNIST(DATA_PATH, download=True,
train=True, transform=tfms)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=bs,
shuffle=True)
# -
# Let's look at which classes are present in the dataset:
classes = train_set.classes
classes
# And now let's display a few examples so we know what kind of data we are dealing with:
def show_data_sample(data_loader, n=10, figsize=(20,10)):
# configure plot output
figure = pyplot.figure(figsize=figsize)
    batch = next(iter(data_loader))
images, labels = batch
for i, img in enumerate(images):
figure.add_subplot(n//5 + 1, 5, i+1)
pyplot.axis('off')
pyplot.title(classes[labels[i]])
pyplot.imshow(img.view(-1, 28))
if i == n-1:
break
show_data_sample(train_loader, 10)
# How big is our training set?
print("Training Set", len(train_set))
# ## Define a (simple) model
#
# We now define a neural network similar to the one in "What are neural networks" (https://codecentric.slides.com/omoser/neuronale-netze-und-deep-learning#/8).
#
# This time, however, we use a slightly different approach. In PyTorch, neural networks are derived from torch.nn.Module. In the constructor you define which layers you use, and the method `def forward(self, x)` is called on a forward pass of the model. Here, x is the input, e.g. the training data.
#
# Again we have an input layer of 28 x 28 = 784 pixels and an output layer of 10 classes (a "one-hot encoded vector" for the digits 0-9).
class SuperSimpleNet(torch.nn.Module):
def __init__(self):
super(SuperSimpleNet, self).__init__()
self.layer = torch.nn.Sequential(
            torch.nn.Linear(28 * 28, 20), # 28x28 is the input size
            torch.nn.ReLU(),
            torch.nn.Linear(20, 10) # 10 is the output size
)
def forward(self, x):
out = self.layer(x)
return out.sigmoid()
model = SuperSimpleNet()
model
# Now we define our loss function. Since we want to classify into multiple classes, we again use the existing `torch.nn.CrossEntropyLoss()`
loss_func = torch.nn.CrossEntropyLoss()
# And now we define a function to measure the accuracy during training:
def accuracy(output, labels):
    # output is a tensor of size 10; the position with the largest value encodes the predicted digit
    # example: output = [0, 0, 0.1, 0.9, ...] -> the index of the largest value (= torch.argmax) is 3, so the prediction here would be 3
    predictions = torch.argmax(output, dim=1)
    # now we check where the prediction matches the label (pred == label) and take the mean over the current batch
return (predictions == labels).float().mean()
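# A tiny worked example of what `accuracy` computes (the numbers are made up for illustration):

```python
import torch

# row 0 predicts class 1 (matches its label), row 1 predicts class 0 (label is 2)
output = torch.tensor([[0.1, 0.8, 0.1],
                       [0.7, 0.2, 0.1]])
labels = torch.tensor([1, 2])
predictions = torch.argmax(output, dim=1)  # tensor([1, 0])
assert (predictions == labels).float().mean().item() == 0.5  # 1 of 2 correct
```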
# And now we implement a (simple) training loop.
#
# - we define a (static) learning rate
# - one epoch is a full pass through the training set
# - the training set is split into batches
# - then prediction and error are computed
# - the parameters are optimized with backpropagation and SGD
#
# This is the "simplest possible" form of a training loop. You will not get a "modern" large/deep neural network trained with it.
#
# That requires a few more "tricks". These "tricks" influence, for example, the size of the learning rate dynamically.
#
# *In principle, though, this is what happens when a neural network learns.*
# +
lr = 0.01
def fit(epochs=1):
for epoch in range(epochs):
print("Epoch: ", epoch)
        # the train loader splits the entire dataset into batches
for i, batch in enumerate(train_loader):
            # a batch consists of training data and labels
x_train, label = batch
            # we reset all gradients to 0 (otherwise gradients left over from the previous training round would distort this one)
model.zero_grad()
            # we use our model and make a prediction with the training data of this batch
pred = model(x_train)
            # we compute the error with the loss function (comparing prediction to label)
loss = loss_func(pred, label)
            # based on the error we run backpropagation
            # the PyTorch framework computes the gradients that we can use
            # to adjust our parameters in "the right direction"
loss.backward()
with torch.no_grad():
                # for all parameters in the model (in our case 2 weight matrices)
for param in model.parameters():
                    # adjust the parameters in the direction of the gradient with step size "learning rate"
param -= param.grad.data * lr
        # print loss and accuracy of the last batch
print("Loss: ", loss.item(), "Accuracy: ", accuracy(pred, label).item())
# -
# Now we call our own training loop and train for 10 epochs (= we work through the entire training data 10 times):
fit(10)
# We can see that the neural network "learns" something. The loss tends to decrease and the accuracy rises.
#
# While the accuracy was below 0.5 at the start (i.e. only every 2nd prediction is correct), after 10 epochs it approaches 0.8 (i.e. 8 out of 10 predictions are correct).
#
# At this point, however, we only look at the training data (without a validation set). So we would not notice if "overfitting" happens.
#
#
# ### Interim conclusion
#
# Our self-written fit() function shows how a neural network learns in principle. As already hinted, though, this is not sufficient for more complex networks to learn reliably (in reasonable time).
#
# As we already saw in the example with the "fruits classifier" and fast.ai, there you only had to call `learn.fit()`. Many best practices were already implemented in that fastai training loop.
#
# Let's now briefly look at how you could implement a training loop for more complex applications with PyTorch's built-in tools:
import torch.optim as optim
# we use Adam as the optimizer here, but there is also SGD and many more
optimizer = optim.Adam(model.parameters(), lr=lr)
# +
def fit_better(n=1):
for epoch in range(n):
for i, data in enumerate(train_loader):
x_train, labels = data
optimizer.zero_grad()
# forward + backward + optimize
pred = model(x_train)
loss = loss_func(pred, labels)
loss.backward()
            # up to here everything is basically as before - but the parameter updates are wrapped up in the optimizer.step() call
            # under the hood, somewhat smarter updates happen here than in our simple example
optimizer.step()
print("Loss: ", loss.item(), "Accuracy: ", accuracy(pred, labels).item())
# -
fit_better()
# # Exercises
#
# In our own training loop we currently use only a training set (but no validation set). This way we cannot notice whether our network "overfits".
#
# 1. Adapt the training loop so that, in addition to the train_loss, a validation_loss is computed. You can either split the training set into a train/valid set or, in this example, use the MNIST test data for it:
# +
valid_set = torchvision.datasets.MNIST(DATA_PATH, download=True,
train=False, transform=tfms)
valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=bs,
shuffle=True)
# Tip: this is how you can load one batch of validation data:
# valid_batch = next(iter(valid_loader))
# x_valid, labels_valid = valid_batch
# -
# 2. Visualize both loss curves in one plot
# Tip: this is how you could collect / visualize the data:
data = [1, 5, 11]
data2 = [2, 8, 10]
epochs = [1, 2, 3]
pyplot.plot(epochs, data)
pyplot.plot(epochs, data2)
# 3. In the training loop, the loss and the accuracy are computed/printed for the last batch only. Change the logic so that these metrics are computed per epoch.
# +
# Tip: collect all values and then take the average over one epoch
# -
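# The averaging tip above can be sketched in plain Python, independent of PyTorch; `batch_losses` here is a hypothetical stand-in for the per-batch `loss.item()` values collected during one epoch:

```python
def epoch_average(batch_values):
    """Average per-batch metric values (e.g. loss.item() results) over one epoch."""
    if not batch_values:
        raise ValueError("no batches seen in this epoch")
    return sum(batch_values) / len(batch_values)

# inside the training loop one would append loss.item() after every batch,
# then call epoch_average once per epoch
batch_losses = [4.0, 2.0, 3.0]
print(epoch_average(batch_losses))  # 3.0
```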
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import math
import random
import numpy as np
# %load_ext cython
# + magic_args="-a" language="cython"
#
# import cython
# import numpy as np
# cimport numpy as np
# cimport cython
#
# @cython.boundscheck(False)
# @cython.wraparound(False)
#
# def _nn_search_cython(int q, np.ndarray[list, ndim = 1] d, int N):
# """
# find the nn set for ponit q
#     find the candidate nearest-neighbour set for point q
# cdef np.ndarray P
# cdef int m, i
#
# m = d.shape[0]
#     P = np.array([q])
#     for i in range(m):
# if q in d[i]:
# P = np.append(P, d[i])
#
# P = np.unique(P.astype(int))
# P = np.delete(P,np.where(P==q))
# P = np.delete(P,np.where(P<0))
# P = np.delete(P,np.where(P>N))
# return P
#
# def nn_search_cython(q, d,N):
# return _nn_search_cython(q, d, N)
# -
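# The commented-out Cython cell above is purely a speed optimization. A hypothetical pure-Python fallback with the same filtering steps (collect points sharing a bucket with q, then drop q itself, negative ids, and ids above N) might look like this:

```python
def nn_search_python(q, buckets, N):
    """Collect the candidate nearest neighbours of point q from LSH buckets."""
    P = set()
    for bucket in buckets:
        if q in bucket:
            P.update(bucket)
    # mirror the filters in the Cython version: remove q, negatives and ids > N
    return sorted(p for p in P if p != q and 0 <= p <= N)

print(nn_search_python(1, [[1, 2], [3, 4], [1, 5]], N=10))  # [2, 5]
```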
import numba
from numba import jit
from functools import reduce
from ipyparallel import Client
rc = Client()
dv = rc[:]
# +
def get_input(x):
temp = []
for i,v in enumerate(x):
temp.append([v,i])
return temp
def chainHash(InputList, Leafs):
res = {}
for tup in InputList:
if tup[0] not in res:
temp = []
temp.append(tup[1])
res["%s" % tup[0]] = temp
else:
parent = list(map(lambda s: find_parentid(Leafs[s]), res["%s" % tup[0]]))
if (find_parentid(Leafs[tup[1]]) not in parent) | (Leafs[tup[1]].parent is None):
res["%s" % tup[0]].append(tup[1])
return res
def find_parentid(Node):
temp = None
if Node.parent is not None:
temp = find_parentid(Node.parent)
else:
temp = Node.id
if temp>=0:
return None
else:
return temp
def find_parentNode(Node):
if Node.parent is not None:
return find_parentNode(Node.parent)
else:
return Node
def euler_distance(point1, point2):
"""
input: point1, point2: array-like
output: float
"""
return np.linalg.norm(point1 - point2)
def flatten(xs):
    """Flatten a list of lists into a single flat list."""
    return list(reduce(lambda x, y: x + y, xs))
@dv.parallel(block = True)
def change_unary2(x):
temp = ''
for num in x:
tem = int(11 - num)
temp += ("1"*(11-tem)+ "0"*tem)
return temp
# +
class Nodes(object):
def __init__(self,id):
"""
:param parent
:param children
:param id
"""
self.parent = None
self.children = []
self.id = id
def add_leaf(self, leaf):
if leaf not in self.children:
self.children.append(leaf)
def set_parent(self, node):
if self.parent is not None:
pass
else:
self.parent = node
def show_childrenid(self):
temp = []
for child in self.children:
temp.append(child.id)
return temp
def display(self,depth):
print ('-'*depth + " " +str(self.id))
for child in self.children:
child.display(depth+2)
class Leafs(Nodes):
def __init__(self,id, vec):
"""
:param vec
:param parent
:param children
:param id
"""
self.vec = vec
self.parent = None
self.children = []
self.id = id
def add_leaf(self,leaf):
raise Exception("Leaf nodes can't insert catalog")
def set_parent(self, node):
if self.parent is not None:
raise Exception("It has a parent already")
else:
self.parent = node
# -
class LSH(object):
def __init__(self, k, l, C, d):
"""
k: number of sampled bits
l: number of hash functions
C: a constant
d: number of attributes
"""
assert l > 0
assert k > 0
self.k = k
self.l = l
self.C = C
self.d = d
self.I = []
def creat_I(self):
"""
create l distinct hash functions
"""
while (len(self.I) < self.l):
temp = sorted(random.sample(range(self.C*self.d),self.k))
if temp not in self.I:
self.I.append(temp)
@dv.parallel(block = True)
def change_unary(self, x):
"""
change the list into unary expression
x: list[1*d]
"""
temp = ''
for num in x:
tem = int(self.C - num)
temp += ("1"*(self.C-tem)+ "0"*tem)
return temp
def get_h_value(self, v, fun_I):
temp = np.array(list(v))
return ''.join(temp[fun_I])
def hash_table(self,data):
"""
each row shows one hash function
"""
m,n = np.shape(data)
h_table = []
v_table = np.array(change_unary2.map(data))
self.creat_I()
for fun_I in self.I:
temp = list(map(lambda s: self.get_h_value(s, fun_I), v_table))
h_table.append(temp)
return np.array(h_table)
def get_buckets(self,Leafs,h_table):
r = list(map(lambda s: chainHash(get_input(s), Leafs),h_table))
return r
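# The hash tables above are built from a unary encoding of each attribute value (`change_unary`/`change_unary2`), from which each hash function then samples k bit positions. A minimal plain-Python sketch of that encoding, assuming integer attribute values in [0, C]:

```python
def unary_encode(values, C):
    """Encode each value v as C bits: v ones followed by (C - v) zeros."""
    bits = ""
    for v in values:
        zeros = int(C - v)
        bits += "1" * (C - zeros) + "0" * zeros
    return bits

print(unary_encode([3, 0, 5], C=5))  # 111000000011111
```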
class Hierarchical(object):
def __init__(self):
self.labels = None
self.Nodes = []
self.point_num = 0
def merge_nodes(self, node1, node2):
newid = -len(self.Nodes)-1
flag = 0
if (node1.parent is not None) & (node2.parent is not None):
if find_parentid(node1) == find_parentid(node2):
flag = 1
else:
NewNode = Nodes(id = newid)
NewNode.add_leaf(find_parentNode(node1))
NewNode.add_leaf(find_parentNode(node2))
find_parentNode(node1).set_parent(NewNode)
find_parentNode(node2).set_parent(NewNode)
self.Nodes.append(NewNode)
if (node1.parent is not None) & (node2.parent is None):
newid = find_parentid(node1)
self.Nodes[np.abs(newid)-1].add_leaf(node2)
node2.set_parent(self.Nodes[np.abs(newid)-1])
if (node1.parent is None) & (node2.parent is not None):
newid = find_parentid(node2)
self.Nodes[np.abs(newid)-1].add_leaf(node1)
node1.set_parent(self.Nodes[np.abs(newid)-1])
if (node1.parent is None) & (node2.parent is None):
NewNode = Nodes(id = newid)
NewNode.add_leaf(node1)
NewNode.add_leaf(node2)
node1.set_parent(NewNode)
node2.set_parent(NewNode)
self.Nodes.append(NewNode)
return flag
def fit(self, x, R, A, C,l):
"""
x:raw data, m*n
R: minimun distance
A: the ratio to increase R
C: the constant
l: the number of hash functions
"""
leafs = [Leafs(vec=v, id=i) for i,v in enumerate(x)]
distances = {}
self.point_num, future_num = np.shape(x)
self.labels = [ -1 ] * self.point_num
currentNo = self.point_num
i = 1
while (currentNo > 1) & (R < 20):
#k = int(future_num * C * np.sqrt(future_num)/(2 * R))+3
k = 10
ls = LSH(k,l ,C ,d = future_num)
h_table = ls.hash_table(x)
r = ls.get_buckets(leafs, h_table)
w = np.array(flatten(list(map(lambda s: list(s.values()),r))))
for p in range(self.point_num):
P = nn_search_cython(p, w, self.point_num+1).astype(int)
for q in P:
d_key = (p, q)
if d_key not in distances:
distances[d_key] = euler_distance(leafs[p].vec, leafs[q].vec)
d = distances[d_key]
if i <= 1:
if d <= R:
flag = self.merge_nodes(leafs[p], leafs[q])
if flag == 0:
currentNo -= 1
else:
if (d <= R) & (d > R/A):
flag = self.merge_nodes(leafs[p], leafs[q])
if flag == 0:
currentNo -= 1
i += 1
R = R*A
for i in range(self.point_num):
self.labels[i] = find_parentid(leafs[i])
def display_depth(self, depth):
self.Nodes[-1].display(depth)
# +
x0 = 3 + np.random.normal(loc=0, scale=1, size = (200,))
y0 = 15 + np.random.normal(loc=0, scale=1, size = (200,))
x1 = np.linspace(4, 8, num = 100)
x1_coord = np.linspace(4, 8, num = 100)
y1 = 8 + np.sqrt(4 - (x1_coord - 6)**2)
y1_coord = y1
y2_coord = 8 - np.sqrt(4 - (x1_coord - 6)**2)
x2 = np.r_[x1_coord, x1_coord].reshape(200,) + np.random.normal(loc=0, scale=0.2, size = (200,))
y2 = np.r_[y1_coord, y2_coord].reshape(200,) + np.random.normal(loc=0, scale=0.2, size = (200,))
x3 = 10 + np.random.normal(loc=0, scale=5, size = (200,))
y3 = 4 + np.random.normal(loc=0, scale=0.3, size = (200,))
x4 = 18 + np.random.normal(loc=0, scale=0.2, size = (200,))
y4 = 10 + np.random.normal(loc=0, scale=3, size = (200,))
x5 = 8 + np.random.normal(loc=0, scale=0.2, size = (200,))
y5 = 15 + np.random.normal(loc=0, scale=0.2, size = (200,))
x6 = 15 + np.random.normal(loc=0, scale=0.2, size = (200,))
y6 = 12 + np.random.normal(loc=0, scale=2.5, size = (200,))
x7 = 6 + np.random.normal(loc=0, scale=0.1, size = (200,))
y7 = 8 + np.random.normal(loc=0, scale=0.1, size = (200,))
xs = np.r_[x0, x1, x2, x3, x4, x5, x6, x7]
ys = np.r_[y0, y1, y2, y3, y4, y5, y6, y7]
# -
data = []
for i,v in zip(xs,ys):
data.append(np.array([i,v]))
import pandas as pd
data = pd.DataFrame(data)
data = (data - data.min())/(data.max() - data.min())
test2 = data.values
test2 = test2 * 10
# %%time
lsh = Hierarchical()
lsh.fit(test2, R =0.8, A =1.5, C = 11, l = 30)
lsh.display_depth(0)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Neural Network
# ### Forward propagation
# we just multiply the weights $W$ by the input, add $b$, pass this weighted sum through the activation function, and advance layer by layer
# ### Backward propagation
# consider the following composition of functions:
# $$C(a(z^{L}))$$
# where $C$ is the cost function defined as:
# $$C(a_{j}^{L})=\frac{1}{2}\sum_{j}^{}(y_{j}-a_{j}^{L})^{2}$$
# $a$ the activation function:
# $$a^{L}(z^{L})=\frac{1}{1+e^{-z^{L}}}$$
# and $z$ the weighted sum:
# $$z^{L}=\sum_{i}^{}a_{i}^{L-1}w_{i}^{L}+b^{L}$$
#
# #### How does the cost vary with a change in the parameter $W$?
# the parameters here are the weights $w$ and the bias $b$
# $$\frac{\partial C}{\partial w^{L}}=\frac{\partial C}{\partial a^{L}}*\frac{\partial a^{L}}{\partial z^{L}}*\frac{\partial z^{L}}{\partial w^{L}}$$
# $$\frac{\partial C}{\partial b^{L}}=\frac{\partial C}{\partial a^{L}}*\frac{\partial a^{L}}{\partial z^{L}}*\frac{\partial z^{L}}{\partial b^{L}}$$
# Now we solve those partial derivatives:
# derivative of the cost with respect to the activation function:
# $$\frac{\partial C}{\partial a^{L}}=(a_{j}^{L}-y_{j})$$
# derivative of the activation function with respect to the weighted sum:
# $$\frac{\partial a^{L}}{\partial z^{L}}=a^{L}(z^{L})*(1-a^{L}(z^{L}))$$
# derivative of the weighted sum with respect to $w$:
# $$\frac{\partial z^{L}}{\partial w^{L}}=a_{i}^{L-1}$$
# derivative of the weighted sum with respect to $b$:
# $$\frac{\partial z^{L}}{\partial b^{L}}=1$$
#
# ### Backpropagation algorithm
# 1. Compute the error of the last layer
# $$\delta^{L}=\frac{\partial C}{\partial a^{L}}*\frac{\partial a^{L}}{\partial z^{L}}$$
# 2. Propagate the error back to the previous layer
# $$\delta^{L-1}=W^{L}*\delta^{L}*\frac{\partial a^{L-1}}{\partial z^{L-1}}$$
# 3. compute the derivatives of the layer using the error
# $$\frac{\partial C}{\partial b^{L-1}}=\delta^{L-1}$$
# $$\frac{\partial C}{\partial w^{L-1}}=\delta^{L-1}*a^{L-2}$$
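# The sigmoid/derivative pair used by the network below can be sanity-checked with a small pure-Python sketch; note the derivative is written in terms of the activation $a$ rather than $z$, matching the lambda pair in the class:

```python
import math

def sigmoid(z):
    # activation function a = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_deriv(a):
    # derivative expressed in terms of the activation a = sigmoid(z)
    return a * (1.0 - a)

a = sigmoid(0.0)
print(a, sigmoid_deriv(a))  # 0.5 0.25
```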
import pandas as pd
import numpy as np
import plotly.plotly as py
import plotly.tools as tls
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import shuffle
import seaborn as sns
import time
from IPython.display import clear_output
# %matplotlib inline
class ReadData(object):
def __init__(self,datasetName='Iris.csv'):
self.datasetName=datasetName
def readData(self):
        df = pd.read_csv(self.datasetName)
df = df.drop(['Id'],axis=1)
#rows = list(range(100,150))
#df = df.drop(df.index[rows])
Y = []
target = df['Species']
for val in target:
if(val == 'Iris-setosa'):
Y.append(0)
elif(val=='Iris-versicolor'):
Y.append(1)
else:
Y.append(2)
df = df.drop(['Species'],axis=1)
X = df.values.tolist()
datafeatureSize=50
labels = np.array([0]*datafeatureSize + [1]*datafeatureSize + [2]*datafeatureSize)
Y = np.zeros((datafeatureSize*3, 3))
for i in range(datafeatureSize*3):
Y[i, labels[i]] = 1
X, Y = shuffle(X,Y)
X=np.array(X)
Y=np.array(Y)
forTestY=Y[105:]
forTestX=X[105:,:]
X=X[0:105,:]
Y=Y[0:105,:]
return X,Y,forTestX,forTestY
r = ReadData()
[X, Y, forTestX, forTestY] = r.readData()
# +
class NeuralLayer(object):  # neural layer class
def __init__(self,numberConections,numberNeurons,activationFunction):
self.numberConections=numberConections
self.numberNeurons=numberNeurons
self.activationFunction=activationFunction
        self.bayas = np.random.rand(1, numberNeurons)*2 - 1  # random initialization in [-1, 1)
        self.W = np.random.rand(numberConections, numberNeurons)*2 - 1  # random initialization in [-1, 1)
class NeuralNetwork:
def __init__(self,learningRatio=0.01,train=True,numIterations=1000,topology=[4,3,1]):
self.learningRatio=learningRatio
self.train=train
self.numIterations=numIterations
self.topology=topology
self.neuralNetwork=self.createNeuralNetwork()
def createNeuralNetwork(self):
nn=[]
        for i, layer in enumerate(self.topology[:-1]):  # iterate up to len(topology)-1
            nn.append(NeuralLayer(self.topology[i], self.topology[i+1], self.sigmoide))  # create a NeuralLayer object
return nn
    sigmoide = (lambda x: 1/(1+np.e**(-x)), lambda x: x*(1-x))  # activation function plus its derivative
    costFunction = (lambda yp, yr: np.mean((yp-yr)**2),
                    lambda yp, yr: (yp-yr))  # cost function plus its derivative
def forwardPropagation(self,X,Y):
        out = [(None, X)]  # tuple (None, X)
for i,layer in enumerate(self.neuralNetwork):
z=out[-1][1]@self.neuralNetwork[i].W+self.neuralNetwork[i].bayas
a=self.neuralNetwork[i].activationFunction[0](z)
            out.append((z, a))  # append a new tuple (z, a), where z is the weighted sum
            # and a is the result of passing z through the activation function
return out
def backPropagation(self,X,Y):
out=self.forwardPropagation(X,Y)
if self.train:
deltas=[]
for i in reversed(range(0, len(self.neuralNetwork))):
a=out[i+1][1]
z=out[i+1][0]
                if i == len(self.neuralNetwork)-1:  # for the last layer
deltas.insert(0,self.costFunction[1](a,Y)*self.neuralNetwork[i].activationFunction[1](a))
                else:  # for the remaining layers
deltas.insert(0, deltas[0] @ _W.T * self.neuralNetwork[i].activationFunction[1](a))
_W=self.neuralNetwork[i].W
                ## gradient descent
self.neuralNetwork[i].bayas=self.neuralNetwork[i].bayas-np.mean(deltas[0],axis=0,keepdims=True)*self.learningRatio
self.neuralNetwork[i].W=self.neuralNetwork[i].W-out[i][1].T@deltas[0]*self.learningRatio
return out[-1][1]
def fit(self,X,Y):
loss=[]
for i in range(self.numIterations):
out=self.backPropagation(X,Y)
loss.append(self.costFunction[0](out,Y))
clear_output(wait=True)
#plt.plot(range(len(loss)), loss)
#plt.show()
return loss
def predict(self,X,Y):
confusionMatrix=[[0,0,0],[0,0,0],[0,0,0]]
outPut=[]
for i in range(X.shape[0]):
out=self.forwardPropagation(X[i:i+1,:],Y[i])
#outPut.append(out[-1][1])
#outPut[i]=outPut[i].flatten()
#outPut[i]=np.asscalar(outPut[i])
            #print("output ", "i=", i, out[-1][1], "desired output", Y[i])
if np.argmax(out[-1][1])== np.argmax(Y[i]):
confusionMatrix[np.argmax(Y[i])][np.argmax(Y[i])]=confusionMatrix[np.argmax(Y[i])][np.argmax(Y[i])]+1
elif np.argmax(out[-1][1])!=np.argmax(Y[i]):
confusionMatrix[np.argmax(Y[i])][np.argmax(out[-1][1])]=confusionMatrix[np.argmax(Y[i])][np.argmax(out[-1][1])]+1
'''if outPut[i]>0.5 and Y[i]==1:
confusionMatrix[0][0]=confusionMatrix[0][0]+1
elif outPut[i]<=0.5 and Y[i]==1:
confusionMatrix[0][1]=confusionMatrix[0][1]+1
elif outPut[i]<=0.5 and Y[i]==0:
confusionMatrix[1][1]=confusionMatrix[1][1]+1
elif outPut[i]>0.5 and Y[i]==0:
confusionMatrix[1][0]=confusionMatrix[1][0]+1
print(confusionMatrix)
cm_df = pd.DataFrame(confusionMatrix,
index = ['setosa','versicolor'],
columns = ['setosa','versicolor'])
#sns.heatmap(cm_df, annot=True)
#plt.show()
N = len(Y)
x = range(N)
xx=np.array(x)
xx=xx+0.35
width = 1/1.5
plt.bar(x,Y,width=0.35, color="blue")
plt.bar(xx,outPut,width=0.35, color="red")
plt.legend(["Y","Y predicho"])'''
return confusionMatrix
def plotError(topology,lr,name):
losts=[]
for i in range(len(topology)):
nn=NeuralNetwork(learningRatio=lr,topology=topology[i],numIterations=5000)
losts.append(nn.fit(X,Y))
    labels = ['topology[4,4,3]', 'topology[4,6,3]', 'topology[4,8,3]', 'topology[4,10,3]', 'topology[4,12,3]']
for i in range(len(losts)):
plt.plot(range(len(losts[i])), losts[i], label=labels[i])
    plt.xlabel('Iterations')
plt.ylabel('Error')
plt.title("learning ratio="+str(lr))
plt.legend()
plt.show()
#plt.savefig(name+".png")
def plotConfusinMatrix(topology,lr,name):
    labels = ['topology[4,4,3]', 'topology[4,6,3]', 'topology[4,8,3]', 'topology[4,10,3]', 'topology[4,12,3]']
for i in range(len(topology)):
ax = plt.axes()
nn=NeuralNetwork(learningRatio=lr,topology=topology[i],numIterations=1000)
nn.fit(X,Y)
confusionMatrix=nn.predict(forTestX,forTestY)
cm_df = pd.DataFrame(confusionMatrix,
index = ['setosa','versicolor','Iris-virginica'],
columns = ['setosa','versicolor','Iris-virginica'])
sns.heatmap(cm_df, annot=True,cbar=False)
ax.set_title(labels[i]+"learningRatio="+str(lr))
plt.savefig(name+str(i)+".png")
plt.clf()
#plt.show()
if __name__=='__main__':
topologys=[[4,4,3],[4,6,3],[4,8,3],[4,10,3],[4,12,3]]
plotError(topologys,0.101,"errorlr04")
#plotConfusinMatrix(topologys,0.01,"CMmatrixForLr01-")
'''
nn1=NeuralNetwork(learningRatio=0.01,topology=[4,6,3],numIterations=1000)
loss=nn1.fit(X,Y)
confusionMatrix=nn1.predict(forTestX,forTestY)
cm_df = pd.DataFrame(confusionMatrix,
index = ['setosa','versicolor','Iris-virginica'],
columns = ['setosa','versicolor','Iris-virginica'])
sns.heatmap(cm_df, annot=True)
ax.set_title('topology=[4,6,3] and learning ratio=0')
plt.show()'''
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from sklearn.datasets import load_files
# The training data folder must be passed as first argument
try:
dataset = load_files('./wikidata/short_paragraphs')
except OSError as ex:
print(ex)
print("Couldn't import the data, did you unzip the wikidata.zip folder?")
exit(-1)
docs = dataset.data
y = dataset.target
# TASK: Split the dataset in training and test set
# (use 20% of the data for test):
from sklearn.model_selection import train_test_split
docs_train, docs_test, y_train, y_test = train_test_split(
docs, y, test_size=0.20, random_state=42)
# TASK: Build a vectorizer that splits
# strings into sequences of 1 to 3
# characters instead of word tokens
# using the class TfidfVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(ngram_range=(1, 4),
analyzer='char')
# TASK: Use the function make_pipeline to build a
# vectorizer / classifier pipeline
# using the previous analyzer
# and a classifier of choice.
# The pipeline instance should be
# stored in a variable named model
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
model = make_pipeline(vectorizer, clf)
# TASK: Fit the pipeline on the training set
model.fit(docs_train, y_train)
# TASK: Predict the outcome on the testing set.
# Store the result in a variable named y_predicted
y_predicted = model.predict(docs_test)
# TASK: Print the classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted))
# TASK: Print the confusion matrix. Bonus points if you make it pretty.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_predicted))
# -
# Try using a different ngram size-- 4 seems to perform well:
# +
docs_train, docs_test, y_train, y_test = train_test_split(
docs, y, test_size=0.20, random_state=42)
vectorizer = TfidfVectorizer(ngram_range=(1, 4),
analyzer='char')
clf = LogisticRegression()
model = make_pipeline(vectorizer, clf)
model.fit(docs_train, y_train)
y_predicted = model.predict(docs_test)
print(classification_report(y_test, y_predicted))
print(confusion_matrix(y_test, y_predicted))
# +
docs_train, docs_test, y_train, y_test = train_test_split(
docs, y, test_size=0.10, random_state=42)
vectorizer = TfidfVectorizer(ngram_range=(1, 4),
analyzer='char')
clf = LogisticRegression()
model = make_pipeline(vectorizer, clf)
model.fit(docs_train, y_train)
y_predicted = model.predict(docs_test)
print(classification_report(y_test, y_predicted))
print(confusion_matrix(y_test, y_predicted))
# -
dataset.target_names
# +
from sklearn.ensemble import RandomForestClassifier
docs_train, docs_test, y_train, y_test = train_test_split(
docs, y, test_size=0.2, random_state=42)
vectorizer = TfidfVectorizer(ngram_range=(1, 4),
analyzer='char')
clf = RandomForestClassifier()
model = make_pipeline(vectorizer, clf)
model.fit(docs_train, y_train)
y_predicted = model.predict(docs_test)
print(classification_report(y_test, y_predicted))
print(confusion_matrix(y_test, y_predicted))
# +
docs_train, docs_test, y_train, y_test = train_test_split(
docs, y, test_size=0.10, random_state=42)
vectorizer = TfidfVectorizer(ngram_range=(1, 4),
analyzer='char')
clf = LogisticRegression(C=10)
model = make_pipeline(vectorizer, clf)
model.fit(docs_train, y_train)
y_predicted = model.predict(docs_test)
print(classification_report(y_test, y_predicted))
print(confusion_matrix(y_test, y_predicted))
# -
import gzip
import dill
with gzip.open('my_model.dill.gz', 'wb') as f:
dill.dump([model, dataset.target_names], f)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# With TikZ, you can generate figures programmatically. Other solutions exist, but [I think TikZ is qool](https://texample.net/tikz/examples/all/).
# ### What is this?
#
# - [TeX](https://en.wikipedia.org/wiki/TeX) was designed with two main goals in mind: to allow anybody to produce high-quality books with minimal effort, and to provide a system that would give exactly the same results on all computers, at any point in time (together with the Metafont language for font description and the Computer Modern family of typefaces)
# - [Metafont](https://en.wikipedia.org/wiki/Metafont) is a description language used to define raster fonts.
# - [Metapost](https://en.wikipedia.org/wiki/MetaPost) is a programming language (derived from Metafont) that produces vector graphic diagrams from a geometric/algebraic description.
#
# #### Finally: "TikZ ist kein Zeichenprogramm", by <NAME> (2005)
#
# [PGF/TikZ](https://en.wikipedia.org/wiki/PGF/TikZ) is a pair of languages (resembling Metapost) for producing vector graphics from a geometric/algebraic description. PGF is a lower-level language, while TikZ is a set of higher-level macros that use PGF.
# You can find the `tikzmagic` [on github](https://github.com/mkrphys/ipython-tikzmagic)
# %load_ext tikzmagic
# The basic execution does not work for me; due to security concerns, this command does not work on Arch-based Linux distros (and possibly others).
# %tikz \draw (0,0) rectangle (1,1);
# The next one, however, does:
# %%tikz -f svg
\draw (0,0) rectangle (1,1);
\filldraw (0.5,0.5) circle (.1);
# Let's go for something [qooler](https://texample.net/tikz/examples/coffee-cup/):
# %%tikz --size 500,500 -f svg
\foreach \c [count=\i from 0] in {white,red!75!black,blue!25,orange}{
\tikzset{xshift={mod(\i,2)*3cm}, yshift=-floor(\i/2)*3cm}
\colorlet{cup}{\c}
% Saucer
\begin{scope}[shift={(0,-1-1/16)}]
\fill [cup, postaction={left color=black, right color=white, opacity=1/3}]
(0,0) ++(180:5/4) arc (180:360:5/4 and 5/8+1/16);
\fill [cup, postaction={left color=black!50, right color=white, opacity=1/3}]
(0,0) ellipse [x radius=5/4, y radius=5/8];
\fill [cup, postaction={left color=white, right color=black, opacity=1/3}]
(0,1/16) ellipse [x radius=5/4/2, y radius=5/8/2];
\fill [cup, postaction={left color=black, right color=white, opacity=1/3}]
(0,0) ellipse [x radius=5/4/2-1/16, y radius=5/8/2-1/16];
\end{scope}
% Handle
\begin{scope}[shift=(10:7/8), rotate=-30, yslant=1/2, xslant=-1/8]
\fill [cup, postaction={top color=black, bottom color=white, opacity=1/3}]
(0,0) arc (130:-100:3/8 and 1/2) -- ++(0,1/4) arc (-100:130:1/8 and 1/4)
-- cycle;
\fill [cup, postaction={top color=white, bottom color=black, opacity=1/3}]
(0,0) arc (130:-100:3/8 and 1/2) -- ++(0,1/32) arc (-100:130:1/4 and 1/3)
-- cycle;
\end{scope}
% Cup
\fill [cup, postaction={left color=black, right color=white, opacity=1/3/2},
postaction={bottom color=black, top color=white, opacity=1/3/2}]
(-1,0) arc (180:360:1 and 5/4);
\fill [cup, postaction={left color=white, right color=black, opacity=1/3}]
(0,0) ellipse [x radius=1, y radius=1/2];
\fill [cup, postaction={left color=black, right color=white, opacity=1/3/2},
postaction={bottom color=black, top color=white, opacity=1/3/2}]
(0,0) ellipse [x radius=1-1/16, y radius=1/2-1/16];
% Coffee
\begin{scope}
\clip ellipse [x radius=1-1/16, y radius=1/2-1/16];
\fill [brown!25!black]
(0,-1/4) ellipse [x radius=3/4, y radius=3/8];
\end{scope}
}
# Something that might actually be useful (from the tikzmagic docs):
# +
# %%tikz -s 600,600 -f svg
\draw [style=help lines, step=2] (-1,-1) grid (+7,+7);
\draw [line width=0.5mm, fill=blue!40!white] (+2,+2) rectangle (+4,+4);
\draw [blue!60!white] (2, 2) node[anchor=north east] {$(i ,j )$};
\draw [blue!60!white] (4, 2) node[anchor=north west] {$(i+1,j )$};
\draw [blue!60!white] (4, 4) node[anchor=south west] {$(i+1,j+1)$};
\draw [blue!60!white] (2, 4) node[anchor=south east] {$(i ,j+1)$};
\filldraw [color=gray] (0,0) circle (.1);
\filldraw [color=gray] (0,2) circle (.1);
\filldraw [color=gray] (0,4) circle (.1);
\filldraw [color=gray] (0,6) circle (.1);
\filldraw [color=gray] (2,0) circle (.1);
\filldraw [color=black] (2,2) circle (.1);
\filldraw [color=black] (2,4) circle (.1);
\filldraw [color=gray] (2,6) circle (.1);
\filldraw [color=gray] (4,0) circle (.1);
\filldraw [color=black] (4,2) circle (.1);
\filldraw [color=black] (4,4) circle (.1);
\filldraw [color=gray] (4,6) circle (.1);
\filldraw [color=gray] (6,0) circle (.1);
\filldraw [color=gray] (6,2) circle (.1);
\filldraw [color=gray] (6,4) circle (.1);
\filldraw [color=gray] (6,6) circle (.1);
# -
# Last example:
# +
# %%tikz -l decorations.text -s 600,600 -f svg
\node (One) at (-3,0) [shape=circle,draw] {$One$};
\node (Two) at (3,0) [shape=circle,draw] {$Two$};
\node (Three) at (2, 2) [shape=rectangle, draw] {$Thr\epsilon \epsilon$};
\def\myshift#1{\raisebox{-2.5ex}}
\draw [->,
thick,
postaction={decorate,
decoration={text along path,text align=center, text={Some bent text}}}
] (One) to [bend right=45] (Two);
\draw [->,
thick,
postaction={decorate,
decoration={text along path,text align=center, text={Some more bent text}}}
] (One) .. controls (-3,4) .. (Three);
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Collaborative filtering with side information
#
# ** *
# This IPython notebook illustrates the usage of the [cmfrec](https://github.com/david-cortes/cmfrec) Python package for building recommender systems through different matrix factorization models with or without using information about user and item attributes – for more details see the references at the bottom.
#
# The example uses the [MovieLens-1M data](https://grouplens.org/datasets/movielens/1m/) which consists of ratings from users about movies + user demographic information, plus the [movie tag genome](https://grouplens.org/datasets/movielens/latest/). Note however that, for implicit-feedback datasets (e.g. item purchases), it's recommended to use different models than the ones shown here (see [documentation](http://cmfrec.readthedocs.io/en/latest/) for details about models in the package aimed at implicit-feedback data).
#
# **Small note: if the TOC here is not clickable or the math symbols don't show properly, try visualizing this same notebook from nbviewer following [this link](http://nbviewer.jupyter.org/github/david-cortes/cmfrec/blob/master/example/cmfrec_movielens_sideinfo.ipynb).**
# ## Sections
#
#
# [1. Loading the data](#p1)
#
# [2. Fitting recommender models](#p2)
#
# [3. Examining top-N recommended lists](#p3)
#
# [4. Tuning model parameters](#p4)
#
# [5. Recommendations for new users](#p5)
#
# [6. Evaluating models](#p6)
#
# [7. Adding implicit features and dynamic regularization](#p7)
#
# [8. References](#p8)
# ** *
# <a id="p1"></a>
# ## 1. Loading the data
#
# This section uses pre-processed data from the MovieLens datasets joined with external zip codes databases. The script for processing and cleaning the data can be found in another notebook [here](http://nbviewer.jupyter.org/github/david-cortes/cmfrec/blob/master/example/load_data.ipynb).
# +
import numpy as np, pandas as pd, pickle
ratings = pickle.load(open("ratings.p", "rb"))
item_sideinfo_pca = pickle.load(open("item_sideinfo_pca.p", "rb"))
user_side_info = pickle.load(open("user_side_info.p", "rb"))
movie_id_to_title = pickle.load(open("movie_id_to_title.p", "rb"))
# -
# ### Ratings data
ratings.head()
# ### Item attributes (reduced through PCA)
item_sideinfo_pca.head()
# ### User attributes (one-hot encoded)
user_side_info.head()
# <a id="p2"></a>
# ## 2. Fitting recommender models
#
# This section fits different recommendation models and then compares the recommendations produced by them.
# ### 2.1 Classic model
#
# Usual low-rank matrix factorization model with no user/item attributes:
# $$
# \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B
# $$
# Where
# * $\mathbf{X}$ is the ratings matrix, in which users are rows, items are columns, and the entries denote the ratings.
# * $\mathbf{A}$ is the user-factors matrix.
# * $\mathbf{B}$ is the item-factors matrix.
# * $\mu$ is the average rating.
# * $\mathbf{b}_A$ are user-specific biases (row vector).
# * $\mathbf{b}_B$ are item-specific biases (column vector).
#
# (For more details see references at the bottom)
# +
# %%time
from cmfrec import CMF
model_no_sideinfo = CMF(method="als", k=40, lambda_=1e+1)
model_no_sideinfo.fit(ratings)
# -
# ### 2.2 Collective model
#
# The collective matrix factorization model extends the earlier model by making the user and item factor matrices also be able to make low-rank approximate factorizations of the user and item attributes:
# $$
# \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B
# ,\:\:\:\:
# \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U
# ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I
# $$
#
# Where
# * $\mathbf{U}$ is the user attributes matrix, in which users are rows and attributes are columns.
# * $\mathbf{I}$ is the item attributes matrix, in which items are rows and attributes are columns.
# * $\mathbf{\mu}_U$ are the column means for the user attributes (column vector).
# * $\mathbf{\mu}_I$ are the columns means for the item attributes (column vector).
# * $\mathbf{C}$ and $\mathbf{D}$ are attribute-factor matrices (also model parameters).
#
# **In addition**, this package can also apply sigmoid transformations on the attribute columns which are binary. Note that this requires a different optimization approach which is slower than the ALS (alternating least-squares) method used here.
# +
# %%time
model_with_sideinfo = CMF(method="als", k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
model_with_sideinfo.fit(X=ratings, U=user_side_info, I=item_sideinfo_pca)
### for the sigmoid transformations:
# model_with_sideinfo = CMF(method="lbfgs", maxiter=0, k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
# model_with_sideinfo.fit(X=ratings, U_bin=user_side_info, I=item_sideinfo_pca)
# -
# _(Note that, since the side info has variables in a different scale, even though the weights sum up to 1, it's still not the same as the earlier model w.r.t. the regularization parameter - this type of model requires more hyperparameter tuning too.)_
# ### 2.3 Content-based model
#
# This is a model in which the factorizing matrices are constrained to be linear combinations of the user and item attributes, thereby making the recommendations based entirely on side information, with no free parameters for specific users or items:
# $$
# \mathbf{X} \approx (\mathbf{U} \mathbf{C}) (\mathbf{I} \mathbf{D})^T + \mu
# $$
#
# _(Note that the movie attributes are not available for all the movies with ratings)_
# +
# %%time
from cmfrec import ContentBased
model_content_based = ContentBased(k=40, maxiter=0, user_bias=False, item_bias=False)
model_content_based.fit(X=ratings.loc[ratings.ItemId.isin(item_sideinfo_pca.ItemId)],
U=user_side_info,
I=item_sideinfo_pca.loc[item_sideinfo_pca.ItemId.isin(ratings.ItemId)])
# -
# ### 2.4 Non-personalized model
#
# This is an intercepts-only version of the classical model, which estimates one parameter per user and one parameter per item, and as such produces a single ranking of the items based on those parameters. It is intended for comparison purposes and can be helpful for checking that the recommendations for different users have some variability (e.g. setting too-large regularization values will tend to make all personalized recommended lists similar to each other).
# +
# %%time
from cmfrec import MostPopular
model_non_personalized = MostPopular(user_bias=True, implicit=False)
model_non_personalized.fit(ratings)
# -
# <a id="p3"></a>
# ## 3. Examining top-N recommended lists
#
# This section will examine what each model would recommend to the user with ID 948.
#
# This is the demographic information for the user:
user_side_info.loc[user_side_info.UserId == 948].T.where(lambda x: x > 0).dropna()
# These are the highest-rated movies from the user:
ratings\
.loc[ratings.UserId == 948]\
.sort_values("Rating", ascending=False)\
.assign(Movie=lambda x: x.ItemId.map(movie_id_to_title))\
.head(10)
# These are the lowest-rated movies from the user:
ratings\
.loc[ratings.UserId == 948]\
.sort_values("Rating", ascending=True)\
.assign(Movie=lambda x: x.ItemId.map(movie_id_to_title))\
.head(10)
# Now producing recommendations from each model:
# +
### Will exclude already-seen movies
exclude = ratings.ItemId.loc[ratings.UserId == 948]
exclude_cb = exclude.loc[exclude.isin(item_sideinfo_pca.ItemId)]
### Recommended lists with those excluded
recommended_non_personalized = model_non_personalized.topN(user=948, n=10, exclude=exclude)
recommended_no_side_info = model_no_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_with_side_info = model_with_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_content_based = model_content_based.topN(user=948, n=10, exclude=exclude_cb)
# -
recommended_non_personalized
# A handy function to print top-N recommended lists with associated information:
# +
from collections import defaultdict
# aggregate statistics
avg_movie_rating = defaultdict(lambda: 0)
num_ratings_per_movie = defaultdict(lambda: 0)
for i in ratings.groupby('ItemId')['Rating'].mean().to_frame().itertuples():
avg_movie_rating[i.Index] = i.Rating
for i in ratings.groupby('ItemId')['Rating'].count().to_frame().itertuples():
num_ratings_per_movie[i.Index] = i.Rating
# function to print recommended lists more nicely
def print_reclist(reclist):
list_w_info = [str(m + 1) + ") - " + movie_id_to_title[reclist[m]] +\
" - Average Rating: " + str(np.round(avg_movie_rating[reclist[m]], 2))+\
" - Number of ratings: " + str(num_ratings_per_movie[reclist[m]])\
for m in range(len(reclist))]
print("\n".join(list_w_info))
print("Recommended from non-personalized model")
print_reclist(recommended_non_personalized)
print("----------------")
print("Recommended from ratings-only model")
print_reclist(recommended_no_side_info)
print("----------------")
print("Recommended from attributes-only model")
print_reclist(recommended_content_based)
print("----------------")
print("Recommended from hybrid model")
print_reclist(recommended_with_side_info)
# -
# (As can be seen, the personalized recommendations tend to recommend very old movies, which is what this user seems to rate highly, with no overlap with the non-personalized recommendations).
# <a id="p4"></a>
# ## 4. Tuning model parameters
#
# The models here offer many tuneable parameters which can be tweaked in order to alter the recommended lists in some way. For example, setting a low regularization to the item biases will tend to favor movies with a high average rating regardless of the number of ratings, while setting a high regularization for the factorizing matrices will tend to produce the same recommendations for all users.
### Less personalized (underfitted)
reclist = \
CMF(lambda_=[1e+3, 1e+1, 1e+2, 1e+2, 1e+2, 1e+2])\
.fit(ratings)\
.topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
### More personalized (overfitted)
reclist = \
CMF(lambda_=[0., 1e+3, 1e-1, 1e-1, 1e-1, 1e-1])\
.fit(ratings)\
.topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
# The collective model can also have variations such as weighting each factorization differently, or setting components (factors) that are not to be shared between factorizations (not shown).
### More oriented towards content-based than towards collaborative-filtering
reclist = \
CMF(k=40, w_main=0.5, w_item=3., w_user=5., lambda_=1e+1)\
.fit(ratings, U=user_side_info, I=item_sideinfo_pca)\
.topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
# <a id="p5"></a>
# ## 5. Recommendations for new users
#
# Models can also be used to make recommendations for new users based on ratings and/or side information.
#
# _(Be aware that, due to the nature of computer floating point arithmetic, there might be some slight discrepancies between the results from `topN` and `topN_warm`)_
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings.ItemId.loc[ratings.UserId == 948],
X_val=ratings.Rating.loc[ratings.UserId == 948],
exclude=exclude))
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings.ItemId.loc[ratings.UserId == 948],
X_val=ratings.Rating.loc[ratings.UserId == 948],
U=user_side_info.loc[user_side_info.UserId == 948],
exclude=exclude))
print_reclist(model_with_sideinfo.topN_cold(U=user_side_info.loc[user_side_info.UserId == 948].drop("UserId", axis=1),
exclude=exclude))
# This last one is very similar to the non-personalized recommended list - that is, the user side information had very little leverage in the model, at least for that user - in this regard, the content-based model tends to be better at cold-start recommendations:
print_reclist(model_content_based.topN_cold(U=user_side_info.loc[user_side_info.UserId == 948].drop("UserId", axis=1),
exclude=exclude_cb))
# _(For this use-case, it would also be better to add item biases to the content-based model, though)_
# <a id="p6"></a>
# ## 6. Evaluating models
#
# This section shows usage of the `predict` family of functions for getting the predicted rating for a given user and item, in order to calculate evaluation metrics such as RMSE and tune model parameters.
#
# **Note that, while widely used in earlier literature, RMSE might not provide a good overview of the ranking of items (which is what matters for recommendations), and it's recommended to also evaluate other metrics such as NDCG@K, P@K, correlations, etc.**
#
# **Also be aware that there is a different class `CMF_implicit` which might perform better at implicit-feedback metrics such as P@K.**
#
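# Since NDCG@K is recommended above but not computed anywhere in this notebook, below is a minimal NumPy sketch of it (the function names `dcg_at_k`/`ndcg_at_k` and the toy relevance scores are illustrative assumptions, not part of `cmfrec`):

```python
import numpy as np

def dcg_at_k(relevances, k):
    # discounted cumulative gain over the top-k positions
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel / discounts))

def ndcg_at_k(relevances, k):
    # normalize by the DCG of the ideal (descending) ordering
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# toy example: ratings of the top-5 recommended items, in recommended order
print(ndcg_at_k([5, 3, 4, 0, 1], k=5))
```

# Here `relevances` would be the test-set ratings of the items in the order the model ranked them.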
# When making recommendations, there's quite a difference between making predictions based on ratings data or based on side information alone. In this regard, one can classify predictions into 4 types:
# 1. Predictions for users and items which were both in the training data.
# 2. Predictions for users which were in the training data and items which were not in the training data.
# 3. Predictions for users which were not in the training data and items which were in the training data.
# 4. Predictions for users and items, of which neither were in the training data.
#
# (One could sub-divide further according to users/items which were present in the training data with only ratings or with only side information, but this notebook will not go into that level of detail)
#
# The classic model is only able to make predictions for the first case, while the collective model can leverage the side information in order to make predictions for (2) and (3). In theory, it could also do (4), but this is not recommended and the API does not provide such functionality.
#
# The content-based model, on the other hand, is an ideal approach for case (4). The package also provides a different model (the "offsets" model - see references at the bottom) aimed at improving cases (2) and (3) when there is side information about only user or only about items at the expense of case (1), but such models are not shown in this notebook.
# ***
# Producing a training and test set split of the ratings and side information:
# +
from sklearn.model_selection import train_test_split
users_train, users_test = train_test_split(ratings.UserId.unique(), test_size=0.2, random_state=1)
items_train, items_test = train_test_split(ratings.ItemId.unique(), test_size=0.2, random_state=2)
ratings_train, ratings_test1 = train_test_split(ratings.loc[ratings.UserId.isin(users_train) &
ratings.ItemId.isin(items_train)],
test_size=0.2, random_state=123)
users_train = ratings_train.UserId.unique()
items_train = ratings_train.ItemId.unique()
ratings_test1 = ratings_test1.loc[ratings_test1.UserId.isin(users_train) &
ratings_test1.ItemId.isin(items_train)]
user_attr_train = user_side_info.loc[user_side_info.UserId.isin(users_train)]
item_attr_train = item_sideinfo_pca.loc[item_sideinfo_pca.ItemId.isin(items_train)]
ratings_test2 = ratings.loc[ratings.UserId.isin(users_train) &
~ratings.ItemId.isin(items_train) &
ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
ratings_test3 = ratings.loc[~ratings.UserId.isin(users_train) &
ratings.ItemId.isin(items_train) &
ratings.UserId.isin(user_side_info.UserId) &
ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
ratings_test4 = ratings.loc[~ratings.UserId.isin(users_train) &
~ratings.ItemId.isin(items_train) &
ratings.UserId.isin(user_side_info.UserId) &
ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
print("Number of ratings in training data: %d" % ratings_train.shape[0])
print("Number of ratings in test data type (1): %d" % ratings_test1.shape[0])
print("Number of ratings in test data type (2): %d" % ratings_test2.shape[0])
print("Number of ratings in test data type (3): %d" % ratings_test3.shape[0])
print("Number of ratings in test data type (4): %d" % ratings_test4.shape[0])
# -
### Handy usage of Pandas indexing
user_attr_test = user_side_info.set_index("UserId")
item_attr_test = item_sideinfo_pca.set_index("ItemId")
# Re-fitting earlier models to the training subset of the earlier data:
m_classic = CMF(k=40)\
.fit(ratings_train)
m_collective = CMF(k=40, w_main=0.5, w_user=0.5, w_item=0.5)\
.fit(X=ratings_train,
U=user_attr_train,
I=item_attr_train)
m_contentbased = ContentBased(k=40, user_bias=False, item_bias=False)\
.fit(X=ratings_train.loc[ratings_train.UserId.isin(user_attr_train.UserId) &
ratings_train.ItemId.isin(item_attr_train.ItemId)],
U=user_attr_train,
I=item_attr_train)
m_mostpopular = MostPopular(user_bias=True)\
.fit(X=ratings_train)
# RMSE for users and items which were both in the training data:
# +
from sklearn.metrics import mean_squared_error
pred_nonpersonalized = m_mostpopular.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 non-personalized model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_nonpersonalized,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_nonpersonalized)[0,1]))
pred_ratingsonly = m_classic.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1.Rating,
pred_ratingsonly,
squared=True)),
np.corrcoef(ratings_test1.Rating, pred_ratingsonly)[0,1]))
pred_hybrid = m_collective.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1.Rating,
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test1.Rating, pred_hybrid)[0,1]))
test_cb = ratings_test1.loc[ratings_test1.UserId.isin(user_attr_train.UserId) &
ratings_test1.ItemId.isin(item_attr_train.ItemId)]
pred_contentbased = m_contentbased.predict(test_cb.UserId, test_cb.ItemId)
print("RMSE type 1 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(test_cb.Rating,
pred_contentbased,
squared=True)),
np.corrcoef(test_cb.Rating, pred_contentbased)[0,1]))
# -
# RMSE for users which were in the training data but items which were not:
# +
pred_hybrid = m_collective.predict_new(ratings_test2.UserId,
item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2.Rating,
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test2.Rating, pred_hybrid)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2.UserId],
item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2.Rating,
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test2.Rating, pred_contentbased)[0,1]))
# -
# RMSE for items which were in the training data but users which were not:
# +
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3.ItemId,
U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3.Rating,
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test3.Rating, pred_hybrid)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3.UserId],
item_attr_test.loc[ratings_test3.ItemId])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3.Rating,
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test3.Rating, pred_contentbased)[0,1]))
# -
# RMSE for users and items which were not in the training data:
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test4.UserId],
item_attr_test.loc[ratings_test4.ItemId])
print("RMSE type 4 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test4.Rating,
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test4.Rating, pred_contentbased)[0,1]))
# <a id="p7"></a>
# ## 7. Adding implicit features and dynamic regularization
#
# In addition to external side information about the users and items, one can also generate features from the same $\mathbf{X}$ data by considering which movies a user rated and which ones they didn't - these are taken as binary features, with the zeros being counted towards the loss/objective function.
#
# The package offers an easy option for automatically generating these features on-the-fly, which can then be used in addition to the external features. The full model now becomes:
# $$
# \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B
# $$
# $$
# \mathbf{I}_x \approx \mathbf{A} \mathbf{B}_i^T, \:\: \mathbf{I}_x^T \approx \mathbf{B} \mathbf{A}_i^T
# $$
# $$
# \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U
# ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I
# $$
#
# Where:
# * $\mathbf{I}_x$ is a binary matrix having a 1 at position ${i,j}$ if $x_{ij}$ is not missing, and a zero otherwise.
# * $\mathbf{A}_i$ and $\mathbf{B}_i$ are the implicit feature matrices.
#
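# The binary indicator matrix $\mathbf{I}_x$ described above can be built by hand for a toy example (the `toy` data frame below is made up for illustration; `cmfrec` generates these features internally when `add_implicit_features=True`):

```python
import numpy as np
import pandas as pd

# hypothetical toy ratings frame with the same column names as `ratings`
toy = pd.DataFrame({"UserId": [0, 0, 1], "ItemId": [0, 2, 1], "Rating": [5, 3, 4]})

# dense binary indicator: 1 where a rating exists, 0 elsewhere
n_users, n_items = toy.UserId.max() + 1, toy.ItemId.max() + 1
Ix = np.zeros((n_users, n_items))
Ix[toy.UserId, toy.ItemId] = 1.0
print(Ix)
```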
# While in the earlier models, every user/item had the same regularization applied on its factors, it's also possible to make this regularization adjust itself according to the number of ratings for each user and item, which tends to produce better models at the expense of more hyperparameter tuning.
#
# As well, the package offers an ALS-Cholesky solver, which is slower but tends to give better end results. This section will now use the implicit features and the Cholesky solver, and compare the new models to the previous ones.
# +
m_implicit = CMF(k=40, add_implicit_features=True,
lambda_=0.05, scale_lam=True,
w_main=0.7, w_implicit=1., use_cg=False)\
.fit(X=ratings_train)
m_implicit_plus_collective = \
CMF(k=40, add_implicit_features=True, use_cg=False,
lambda_=0.03, scale_lam=True,
w_main=0.5, w_user=0.3, w_item=0.3, w_implicit=1.)\
.fit(X=ratings_train,
U=user_attr_train,
I=item_attr_train)
pred_ratingsonly = m_classic.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1.Rating,
pred_ratingsonly,
squared=True)),
np.corrcoef(ratings_test1.Rating, pred_ratingsonly)[0,1]))
pred_implicit = m_implicit.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings + implicit + dyn + Chol: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1.Rating,
pred_implicit,
squared=True)),
np.corrcoef(ratings_test1.Rating, pred_implicit)[0,1]))
pred_hybrid = m_collective.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1.Rating,
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test1.Rating, pred_hybrid)[0,1]))
pred_implicit_plus_collective = m_implicit_plus_collective.\
predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid + implicit + dyn + Chol: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1.Rating,
pred_implicit_plus_collective,
squared=True)),
np.corrcoef(ratings_test1.Rating, pred_implicit_plus_collective)[0,1]))
# -
# But note that, while the dynamic regularization and Cholesky method usually lead to improvements in general, the newly-added implicit features oftentimes result in worse cold-start predictions:
# +
pred_hybrid = m_collective.predict_new(ratings_test2.UserId,
item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2.Rating,
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test2.Rating, pred_hybrid)[0,1]))
pred_implicit_plus_collective = \
m_implicit_plus_collective\
.predict_new(ratings_test2.UserId,
item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (might get worse)" %
(np.sqrt(mean_squared_error(ratings_test2.Rating,
pred_implicit_plus_collective,
squared=True)),
np.corrcoef(ratings_test2.Rating, pred_implicit_plus_collective)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2.UserId],
item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2.Rating,
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test2.Rating, pred_contentbased)[0,1]))
# +
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3.ItemId,
U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3.Rating,
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test3.Rating, pred_hybrid)[0,1]))
pred_implicit_plus_collective = \
m_implicit_plus_collective\
.predict_cold_multiple(item=ratings_test3.ItemId,
U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (got worse)" %
(np.sqrt(mean_squared_error(ratings_test3.Rating,
pred_implicit_plus_collective,
squared=True)),
np.corrcoef(ratings_test3.Rating, pred_implicit_plus_collective)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3.UserId],
item_attr_test.loc[ratings_test3.ItemId])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3.Rating,
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test3.Rating, pred_contentbased)[0,1]))
# -
# <a id="p8"></a>
# ## 8. References
#
# * <NAME>. "Cold-start recommendations in Collective Matrix Factorization." arXiv preprint arXiv:1809.00366 (2018).
# * Singh, <NAME>., and <NAME>. "Relational learning via collective matrix factorization." Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2008.
# * Takacs, Gabor, <NAME>, and <NAME>. "Applications of the conjugate gradient method for implicit feedback collaborative filtering." Proceedings of the fifth ACM conference on Recommender systems. 2011.
# * <NAME>, <NAME>, and <NAME>. "On the difficulty of evaluating baselines: A study on recommender systems." arXiv preprint arXiv:1905.01395 (2019).
# * <NAME>, et al. "Large-scale parallel collaborative filtering for the netflix prize." International conference on algorithmic applications in management. Springer, Berlin, Heidelberg, 2008.
| example/cmfrec_movielens_sideinfo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("/home/arjunrampal/Documents/Research dataset/"))
# Any results you write to the current directory are saved as output.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
#load data in dataframe
df = pd.read_csv("/home/arjunrampal/Documents/Research dataset/svm.csv", encoding='latin-1')
df.head(5)
# + _uuid="e2cfc27bb21d80135b6433ed51b2cb5ce7577753"
#df.drop(columns=['Classification','Confidence'],axis=1, inplace=True)
#df.head(5)
# + _uuid="d4cbf5292c4a352d5003d1e5e081dcf34517e011"
df.rename(columns={'sentiment':'label','review':'text'},inplace=True)
df.head(-5)
# -
df['review_lower'] = df['text'].apply(lambda x: " ".join(x.lower() for x in x.split()))
df['review_nopunc'] = df['review_lower'].str.replace(r'[^\w\s]', '', regex=True)
freq=pd.Series(" ".join(df['review_nopunc']).split()).value_counts()[:30]
other_stopwords = ['br','would','even','characters','also','dont','one','much','get','people','first','made','make','could','way','think','watch']
#other_stopwords = ['the','and','a','of','to','is','in','it','i','this','that','br','was','as','for','with','movie','but','film','on','you','not','are','his','have','be','he','its','at']
df['review_nopunc_nostop_nocommon'] = df['review_nopunc'].apply(lambda x: " ".join(word for word in x.split() if word not in other_stopwords))
df.head()
# +
from textblob import Word
# Lemmatize final review format
df['cleaned_review'] = df['review_nopunc_nostop_nocommon'].apply(lambda x: " ".join([Word(word).lemmatize() for word in x.split()]))
# -
df = df.drop(['text','review_lower','review_nopunc','review_nopunc_nostop_nocommon'], axis=1)
# + _uuid="6c4a4fc3817a1d69ffa50e642c97047e6455b6e1"
df.label.value_counts()
#df.info()
# + _uuid="476f199668e9bef64b70c69acda8fc5a1bafed3c"
sns.countplot(x=df.label)
plt.xlabel("label")
plt.title("Number of positive or negative reviews")
# + _uuid="847fd5cc233b76033e7f87a2905531a83446b763"
df["label_as_num"] = df.label.map({"negative":0, "positive":1})
# + _uuid="0af6e5bf270c55b19fe8db8c092562742aa3e281"
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
y = df['label']
x = df['cleaned_review']
cv = CountVectorizer(strip_accents='ascii', token_pattern=u'(?ui)\\b\\w*[a-z]+\\w*\\b', lowercase=True, stop_words='english')
x = cv.fit_transform(x)
# -
import joblib
joblib.dump(cv.vocabulary_, 'vocab11.pkl')
# + _uuid="330d539a51674df49c555dd8ab712d5492c4a3e0"
X_train, X_test, y_train, y_test = train_test_split(x,y, test_size=0.3, random_state=1)
# + _uuid="f219a25f8e6d620f5e58453ccd24fc9f935c304a"
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
#Naive Bayes Classifier
clf = MultinomialNB()
clf.fit(X_train,y_train)
clf.score(X_test,y_test)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
# +
from sklearn.metrics import accuracy_score, precision_score, recall_score
print('Accuracy score: ', accuracy_score(y_test, y_pred))
print('Precision score: ', precision_score(y_test, y_pred , average = 'weighted'))
print('Recall score: ', recall_score(y_test, y_pred, average = 'weighted'))
# -
df1 = pd.read_csv("/home/arjunrampal/Documents/Research dataset/twitter.csv", encoding='latin-1')
df1.head(5)
our_list = df1['review']
results = []
for name in our_list:
    # vectorize each review with the fitted CountVectorizer and predict its label
    inp1 = cv.transform([name]).toarray()
    result = clf.predict(inp1)[0]
    results.append(result)
    print(name, result)
# save the reviews together with their predicted labels
df1.assign(prediction=results).to_csv('output_1.csv', index=False)
# +
from csv import writer
from csv import reader
# Open the input_file in read mode and output_file in write mode
with open('svm.csv', 'r') as read_obj, \
open('output_1.csv', 'w', newline='') as write_obj:
# Create a csv.reader object from the input file object
csv_reader = reader(read_obj)
# Create a csv.writer object from the output file object
csv_writer = writer(write_obj)
# Read each row of the input csv file as list
for row in csv_reader:
        # Append the predicted label (note: `result` holds only the last prediction from the loop above)
        row.append(result)
# Add the updated row / list to the output file
csv_writer.writerow(row)
# -
df_survey_data = pd.read_csv("svm.csv")
COLS = ['text', 'sentiment1','subjectivity','polarity']
import pandas as pd
from textblob import TextBlob
df_survey_data.review = df_survey_data.review.astype(str)
new_entries = []
for index, row in df_survey_data.iterrows():
    # score each review's subjectivity and polarity with TextBlob
    blob = TextBlob(row['review'].lower())
    sentiment = blob.sentiment
    new_entries.append([row['review'], sentiment, sentiment.subjectivity, sentiment.polarity])
survey_sentiment_df = pd.DataFrame(new_entries, columns=COLS)
df = pd.concat([df, survey_sentiment_df], ignore_index=True)
df.to_csv('Q7_Text_Sentiment_Values.csv', mode='w', columns=COLS, index=False, encoding="utf-8")
df.head()
# + _uuid="cdb7ea34e7d4574fb4efd3631295a11618385e7a"
inp = "this is better"#"we got password reset request from your id. click here to reset, if not you please ignore."#"you win a lottery. please click here to claim prize money."
inp1 = [inp]
inp1 = cv.transform(inp1).toarray()
clf.predict(inp1)
# + _uuid="095dc10fa38d71773ad75eb860ba2c248b7180dd"
import joblib
joblib.dump(clf, 'NB_spam_model11.pkl')
# -
import joblib
NB_spam_model = open('NB_spam_model11.pkl','rb')
clf = joblib.load(NB_spam_model)
| DataModelling&NaiveBayesModel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Classes
#
# You can find the most recent 3.x version documentation [here](https://docs.python.org/3/tutorial/classes.html)
#
# ## Intro to Object Oriented Programming (OOP)
#
# There are 2 types of [programming paradigms](http://en.wikipedia.org/wiki/Programming_paradigm) (or methods) for structuring your code.
#
# 1. The first is object-oriented programming - or OOP. It is a way to structure a program by creating an **object** - a grouping of related properties & behaviors. An object could represent a car with **properties** like make, model, and color. It could address **behaviors** such as driving forward, reversing, and using the wipers.
#
# _Object-oriented programming is an approach for modeling real world items and their interactions._
#
# 2. The 2nd is **procedural programming** where program structures are a set of steps, functions, and code blocks that are completed sequentially to complete a task.
# ## Intro to Classes
#
# Classes are like blueprints for **objects** - also known as **instances** of a class (which contains real data - not just the shell). They are user (programmer) defined data structures.
#
# Classes are what allow you to create new data types - which means new _instances_ of that type can be made. These data types can have things like:
# - attributes for maintaining the object's state (e.g.: eye color, name, etc)
# - methods (functions that can be run against the instance) for modifying its state
#
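# As a concrete sketch of attributes and methods (the `Dog` class below is made up for illustration):

```python
class Dog:
    def __init__(self, name: str, eye_color: str):
        # attributes maintain the object's state
        self.name = name
        self.eye_color = eye_color

    def rename(self, new_name: str) -> None:
        # a method modifies the instance's state
        self.name = new_name

d = Dog("Rex", "brown")
d.rename("Buddy")
print(d.name)  # → Buddy
```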
# But these are just the main key components. Python classes provide all the same standard features of Object Oriented Programming:
# - [class inheritance](https://docs.python.org/3/tutorial/classes.html#inheritance)
# - arbitrary amounts & kinds of data
# - created at runtime
# - can be modified after creation
#
# Built-in types (classes already built into python) can be used as base classes for extension by a user or developer.
#
# Special syntax (_e.g.: arithmetic operators_) can also be redefined for class instances.
#
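# The operator redefinition mentioned above can be sketched with a small, made-up `Vector` class (it also uses the function annotations discussed next):

```python
class Vector:
    """A tiny 2-D vector demonstrating redefined arithmetic operators."""
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def __add__(self, other: "Vector") -> "Vector":
        # redefines the `+` operator for Vector instances
        return Vector(self.x + other.x, self.y + other.y)

    def __eq__(self, other: object) -> bool:
        # redefines the `==` operator
        return isinstance(other, Vector) and (self.x, self.y) == (other.x, other.y)

v = Vector(1, 2) + Vector(3, 4)
print(v.x, v.y)  # → 4 6
```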
# Before we get started, you may not be aware that there is something called [PEP 3107 - function annotation](https://www.python.org/dev/peps/pep-3107/). This can come in very handy later on down the line & it is highly suggested you use this.
# ## Scopes & Namespaces
#
# ### namespaces
#
# When you hear the term _**namespace**_, it refers to a mapping from names to objects. Most are currently implemented as Python dictionaries, but that may change in the future.
#
# Things to note for **_namespaces_**:
# - there is no relation between names in different namespaces
# - when using dot notation, items after the `.` are generally referred to as attributes: `class_obj.attr1`
# - they are created at different moments & have different lifetimes (_e.g.: namespace for builtin names when interpreter starts up; global namespace when module is used; function namespace is created when called and lasts until function ends_)
#
# Class definitions (which create new class objects) essentially create additional namespaces in the local scope.
# ### scopes
#
# This is how your program accesses variables, objects, and other pieces of your program. A scope is the region where a namespace is directly accessible; if you try to access something the "local" namespace hasn't heard of, you will run into an error.
#
# When trying to access something, the scope with which the program looks starts from the inside out:
# 1. innermost scope: _what is in my local (immediate) namespace?_
# 2. enclosed functions scope: _what is surrounding the local namespace, starting with the nearest enclosing scope? **--- contains non-local and non-global names**_
# 3. current module's global names
# 4. namespace with built-in names
#
# If a name (label) is declared global, then all references (pointers) and assignments (changing of data) have their namespace starting in that 3rd layer - the layer containing a module's global names.
#
# In order to modify variables outside the local (level 1) scope, you need to indicate so with the `nonlocal` statement in front of the variable name. If you do not utilize either the [nonlocal](https://docs.python.org/3/reference/simple_stmts.html#nonlocal) or [global](https://docs.python.org/3/reference/simple_stmts.html#global) statement, the original variables become **read-only** and an attempt to overwrite the original data will simply create a _new_ local variable in the innermost scope, leaving the original unchanged.
#
# What's the difference between **global** and **nonlocal**?
# _**nonlocal** tells the interpreter to look at the enclosing scope, whereas **global** indicates the global scope._
#
# As a reminder, when you delete an object with `del var` (for example) you are removing the binding of the name from the namespace referenced by the local scope.
# Below is an example of python code that demonstrates these scope & namespace concepts from official documentation:
# ```python
# def scope_test():
#     # this is the main function that is originally called
#
#     def do_local():
#         # this represents local namespace & scope (nothing changed to original spam)
#         spam = "local spam"
#
#     def do_nonlocal():
#         nonlocal spam
#         spam = "nonlocal spam"
#
#     def do_global():
#         global spam
#         spam = "global spam"
#
#     spam = "test spam"
#     do_local()
#     print("After local assignment:", spam)
#     do_nonlocal()
#     print("After nonlocal assignment:", spam)
#     do_global()
#     print("After global assignment:", spam)
#
# scope_test()
# print("In global scope:", spam)
# ```
# You can see how the **nonlocal** and **global** options affect variable binding - and how it could potentially introduce bugs when accidentally changing data.
# # The New Language Of Classes
#
# If you've taken my [python basics bootcamp](https://prosperousheart.com/python-bootcamp) or have reviewed my free educational material on GitHub, you have the basics for programming in Python. But classes are an extension to create a wider realm of possibility.
# ## Class Syntax
#
# There is a specific structure in how the Python interpreter expects to see classes written. In its simplest pseudocode, a class definition looks like this:
#
# ```python
# class ClassName:
#     <statement-0>
#     .
#     .
#     .
#     <statement-N>
# ```
#
# Similar to functions, there is a declarative statement (**class**) that is required for execution to have effect. And although not required, the [class naming convention](https://www.python.org/dev/peps/pep-0008/#class-names) is camel casing, or CapWords.
#
# Within a new class, there are likely functions but other statements are allowed - sometimes even useful! These functions normally have a specific argument list.
#
# When a new object is instantiated from a class, a new namespace is created and then used as the local scope for that object. The _class object_ essentially acts as a wrapper around this namespace, and changes made to one instance do not necessarily affect the class or other instances.
# ## [Class Objects](https://docs.python.org/3/tutorial/classes.html#class-objects)
#
# When it comes to classes, there are 2 supported operations:
# 1. attribute references (accessing and/or manipulating data)
# 2. instantiation (creation of new class objects)
#
# **NOTE:** you may see two terms throughout this training:
# 1. `instance variable`
# 2. `class variable`
#
# **Instance variables** are the unique data related to a specific instance (creation) of a class.
#
# **Class variables** are attributes and methods shared by all instances (creations) of a class.
#
# ```python
# class Dog:
#
#     # class variable shared by all instances
#     kind = 'canine'
#
#     def __init__(self, name, eye_c, breed):
#         # instance variables unique to each instance
#         self.name = name
#         self.eye_color = eye_c
#         self.breed = breed
# ```
# ### Attribute References
#
# Every time you create a class object, you create an _instance_ of that object ... Meaning a piece of code with:
# - **class attributes** where all instances of that class type have the same value (e.g.: dog species)
# - **instance attributes** where the attribute values are specific to the instance, not the class (e.g.: eye color)
#
# And each instance points to a different location for the data.
#
# One of the biggest advantages of using classes to organize data is that instances are guaranteed to have the attributes you expect.
#
# #### Class Attributes
#
# **Class attributes** are defined directly beneath the class name and indented by 4 spaces with some initial value.
#
# These should be used to define properties that should have the same value for every class instance.
#
# #### Instance Attributes
#
# So when thinking of creating classes, think of all the base attributes. For example ...
#
# If you were to create a class for a cat, what attributes would it have?
# - number of legs
# - color of fur
# - sex
# - age
# - eye color
#
# And the list could go on. For each of these attributes, you would have the ability to update a particular instance rather than all cats.
#
# You access each attribute by using __dot notation__ like so:<br>
# `new_cup.staves = 10`
#
# This is the standard syntax for all attribute references (class or instance), whether built-in or user-defined: `obj.name`
#
# Valid attribute names are the ones in a class's namespace when the object was created and are either:
# - data attributes: do not need to be declared because they are created upon instantiation (creation)
# - methods (function that belongs to an object)
#
# <div class="alert alert-success">
#
# ```python
# class MyClass:
#     """A simple example class"""
#     i = 12345
#
#     def f(self):
#         return 'hello world'
# ```
# </div>
#
# In the above example from official documentation, attributes of the **MyClass** class are as follows:
# - `MyClass.i` returns an integer
# - `MyClass.f` returns a function or **method object** (meaning until you call the method, it simply stores the method object)
#
# ```python
# x = MyClass()
# xf = x.f # stores the function, but it's not called yet
# xf() # will call the actual function - would be the same as x.f()
# ```
#
# You can also update an attribute with an assignment operation such as: `MyClass.i = 42`
# #### Warning For Mutable Attributes
#
# What do you think would happen if you ran the below code?
#
# ```python
# class Dog:
#
#     tricks = []  # mistaken use of a class variable
#
#     def __init__(self, name):
#         self.name = name
#
#     def add_trick(self, trick):
#         self.tricks.append(trick)
#
# d = Dog('Fido')
# e = Dog('Buddy')
# d.add_trick('roll over')
# e.add_trick('play dead')
# d.tricks
# ```
# Is it what you would expect of 2 dogs if you only taught one trick per dog?
#
# _How would you fix the above code?_ Move the **tricks** class attribute into the `__init__` function so each "new dog" has its own set of tricks.
#
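# The fix described above might look like this (a quick sketch of the corrected class):

```python
class Dog:

    def __init__(self, name):
        self.name = name
        self.tricks = []  # each instance now gets its OWN list

    def add_trick(self, trick):
        self.tricks.append(trick)

d = Dog('Fido')
e = Dog('Buddy')
d.add_trick('roll over')
e.add_trick('play dead')
print(d.tricks)  # ['roll over'] - Buddy's trick no longer leaks into Fido's list
print(e.tricks)  # ['play dead']
```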
# Should you have an attribute name in both a class and its instance, the lookup prioritizes the instance.
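# For example (a small illustrative class, not from the official docs), assigning through an instance creates an instance attribute that shadows the class attribute of the same name:

```python
class Dog:
    kind = 'canine'  # class variable shared by all instances

    def __init__(self, name):
        self.name = name

a = Dog('Fido')
b = Dog('Buddy')
a.kind = 'house dog'    # creates an INSTANCE attribute on `a` only

print(a.kind)    # 'house dog' - the instance attribute wins the lookup
print(b.kind)    # 'canine'    - still falls back to the class attribute
print(Dog.kind)  # 'canine'    - the class attribute itself is unchanged
```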
# ### Class Object Instantiation
#
# When creating a new object from a class, you are **instantiating** an object. You are creating an instance of a class object, which means each new instantiation has its own memory address. And if you compared 2 instances of a class object with `==`, it would return False by default (identity comparison), unless the class defines `__eq__`.
#
# In order to create a new instance, you would type the name of the class followed by opening and closing parentheses: `ClassName()`
#
# If there are no required input parameters without a default value, then you would see something like `temp = MyClass()` - this creates a new instance of the **MyClass** class and assigns it to the **local** variable `temp`.
# If you [use proper docstrings](https://www.python.org/dev/peps/pep-0257/) you can always run the function (or call the function attribute on an object) `.__doc__` to learn more about the object - data, function, module, etc.
#
# <div class="alert alert-warning">
# What would be returned if you ran:
#
# ```python
# class MyClass:
#     """A simple example class"""
#     i = 12345
#
#     def f(self):
#         return 'hello world'
#
# temp = MyClass()
# temp.__doc__
# ```
# </div>
#
# `__doc__` is a **dunder** (double underscore) name - the same naming pattern used by the "magic methods" covered later.
# # Class Methods & Functions
#
# Often, the first argument of a method is called `self`. This is nothing more than a convention. However, depending on other automated pieces in your environment, it may cause an issue if you do not use it.
#
# Methods can call other methods by using method attributes of the `self` argument, such as:
#
# ```python
# class Bag:
#     def __init__(self):
#         self.data = []
#
#     def add(self, x):
#         self.data.append(x)
#
#     def addtwice(self, x):
#         self.add(x)
#         self.add(x)
# ```
#
# There are 3 different types of methods, which are explained [here](https://realpython.com/instance-class-and-static-methods-demystified).
# ## 3 Types Of Class Methods
#
# Below is an [example from RealPython](https://realpython.com/instance-class-and-static-methods-demystified/#instance-class-and-static-methods-an-overview) on what the three methods might look like in a python program.
#
# ```python
# class MyClass:
#     def method(self):
#         return 'instance method called', self
#
#     @classmethod
#     def classmethod(cls):
#         return 'class method called', cls
#
#     @staticmethod
#     def staticmethod():
#         return 'static method called'
# ```
#
# All three methods can take in any number of other parameters not mentioned here.
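# Calling them on an instance vs. on the class shows the difference (continuing the `MyClass` sketch above):

```python
class MyClass:
    def method(self):
        return 'instance method called', self

    @classmethod
    def classmethod(cls):
        return 'class method called', cls

    @staticmethod
    def staticmethod():
        return 'static method called'

obj = MyClass()
print(obj.method())            # ('instance method called', <MyClass object ...>)
print(MyClass.classmethod())   # ('class method called', <class MyClass>)
print(MyClass.staticmethod())  # 'static method called'
# calling MyClass.method() without an instance raises a TypeError - there is no `self` to pass
```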
#
# ### Instance Methods
#
# This is a basic method type most commonly used. It takes (at minimum) one parameter (always first input): `self`
#
# This single parameter points to an **instance** of a class - a particular instantiation or creation of the class. It is through this parameter that instances can freely access attributes & other methods on the same object.
#
# Instance methods can also access the class itself through the `self.__class__` attribute, which means they can modify the class state.
#
# Additional information can also be found [here](https://realpython.com/python3-object-oriented-programming/#instance-methods).
#
# ### Class Methods
#
# Instead of the `self` parameter, it takes the `cls` parameter (always first input). This points to the class as a whole. It can't modify the object itself, but it can modify the class state across all instances of the class.
#
# This means you do not need an instance of the class to call these class methods.
#
# A really great way to see this in action is with [this Pizza factory example](https://realpython.com/instance-class-and-static-methods-demystified/#delicious-pizza-factories-with-classmethod). It's a wonderful example of the [don't repeat yourself](https://en.wikipedia.org/wiki/Don't_repeat_yourself) principle.
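# A common use is the "alternative constructor" (factory) pattern - a minimal sketch, where the `Circle` and `from_diameter` names are illustrative (not taken from the RealPython example):

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius

    @classmethod
    def from_diameter(cls, diameter):
        # `cls` is Circle here, so this also works correctly for subclasses
        return cls(diameter / 2)

c = Circle.from_diameter(10)  # no existing instance needed to call it
print(c.radius)  # 5.0
```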
#
# ### Static Methods
#
# There are no required parameters for these.
#
# These methods neither modify the object nor class states. They are restricted in what data they can access & are primarily a way to [namespace](https://realpython.com/python-namespaces-scope/) your methods.
#
# These methods work like regular functions but belong to the class (and each instance's) namespace.
#
# You can learn more about when to use static methods [here](https://realpython.com/instance-class-and-static-methods-demystified/#when-to-use-static-methods).
# ## Magic Methods (Dunders)
#
# This is a term you will hear among many programmer groups. These **magic methods** are special methods with double underscores at the beginning and end of their names. (*Also known as __dunder methods__!*)
#
# These methods allow you to create functionality that can't be represented in a normal method.
#
# There are a lot of methods available (such as those listed in [this article on GitHub](https://rszalski.github.io/magicmethods)) but the one you'll see most? Is the one that allows you to create an instance (or instantiate) an object of said class type.
#
# Instantiating a class object (or "calling" it) creates a base object - sometimes referred to as empty. But in reality, it's just the defaults assigned to it. You can change the initial "empty" state with the special **dunder method** `__init__`.
# ### `__init__`
#
# The `__init__` dunder or magic method is the most important method within a class. This is the function of the class object that is called when an instance of the object is instantiated or created. It is also known as "the instance initializer" in Python - or the method (function) that sets the initial **state** of an object.
#
# All `__init__` methods __must__ start with __*self*__ as the first parameter. This is what is automatically invoked when a new class object is created.
#
# _NOTE: This parameter indicates to Python that when calling __self__ you are referring to the INSTANCE that is calling the method, vs making a change to all instances._
#
# <hr>
#
# ```python
# class Wood_Cup:
#     """
#     This class is to create a wooden cup object.
#     """
#
#     def __init__(self, wood_type_obj=None, size=None, art_class_obj=None, handle_loc="R", staves=0):
#         """
#         This is the __init__ method which allows someone to create an instance of the Wood_Cup class.
#
#         This function determines what attributes each instance of the class will have.
#         """
#
#         self.wood_type = wood_type_obj
#         self.size = size
#         self.art_class_obj = art_class_obj
#         self.handle_loc = handle_loc
#         self.staves = staves
# ```
#
# <hr>
#
# When creating or instantiating a new instance of a class, you do not need to add *__self__* as an input parameter.
#
# Such as: `new_cup = Wood_Cup("birch", "B", None, "L", 8)`
#
# There are lots of built-in methods for classes that can really help to clean up your code as well as make it more pliable.
# Another example from official documentation:
#
# ```python
# class Complex:
#     def __init__(self, realpart, imagpart):
#         self.r = realpart
#         self.i = imagpart
#
# x = Complex(3.0, -4.5)
# x.r, x.i
# ```
# <div class="alert alert-warning">
# How would you create a class object (instantiate a new object from the above MyClass) and then:<br>
# 1. call <b>i</b><br>
# 2. change <b>i</b><br>
# 3. call <b>i</b>
# </div>
# There may even be times where you want to test for ambiguous input, such as a string or [DateTime](https://docs.python.org/3/library/datetime.html) object when expecting a date. Check out [this](https://realpython.com/python-multiple-constructors/) for how you can do that. Or if you're interested in checking out a new option with python 3.10, check out [structural pattern matching](https://realpython.com/python310-new-features/#structural-pattern-matching).
# # [Class Inheritance](https://docs.python.org/3/tutorial/classes.html#inheritance)
#
# When we refer to inheritance, it is the ability to utilize another class's structure as a "base" for a new class - the process where one class takes on the attributes & methods of another. The syntax is as follows:
#
# ```python
# class DerivedClassName(BaseClassName):
#     <statement-0>
#     .
#     .
#     .
#     <statement-N>
# ```
#
# `BaseClassName` must be defined in a scope that contains the definition of `DerivedClassName`. If a base class is defined in another module, the syntax might look like this:
#
# ```python
# class DerivedClassName(modname.BaseClassName):
#     <statement-0>
#     .
#     .
#     .
#     <statement-N>
# ```
#
# Even if you don't see all of the attributes in your `DerivedClassName`, if you attempt to utilize an attribute or function your base class has, the interpreter will look to your base class(es) for this information.
#
# Additional insight can be found [here](https://realpython.com/python3-object-oriented-programming/#inherit-from-other-classes-in-python).
# ## Caution With Overriding Base Attributes
#
# It is possible to have a method in your `DerivedClassName` that replaces a base class method or attribute ... But you can also write your code so that the new class instead extends functionality.
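# A minimal sketch of extending (rather than fully replacing) a base class method via `super()` - the `Animal`/`Dog` names are illustrative:

```python
class Animal:
    def describe(self):
        return "I am an animal"

class Dog(Animal):
    def describe(self):
        # extend the base behavior instead of replacing it entirely
        base = super().describe()
        return base + " - specifically, a dog"

print(Dog().describe())  # "I am an animal - specifically, a dog"
```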
#
# ## Built-In Inheritance Functions
#
# 1. [`isinstance(object, classinfo)`](https://docs.python.org/3/library/functions.html#isinstance)
# - returns True (or 1) if the `object` argument is an instance of `classinfo` (or of a subclass thereof) -- otherwise False (or 0)
# - (as of ver 3.10) if `classinfo` is a tuple of objects (or recursive tuples) OR a [Union type](https://docs.python.org/3/library/stdtypes.html#types-union) of multiple items, this returns True (or 1) when `object` is an instance of ANY entry --- otherwise a [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) subclass of [exception](https://github.com/ProsperousHeart/Basics-Boot-Camp/blob/main/Week_3/Python_Basics_18_-_Exceptions_%26_Assertions.ipynb) is raised.
#
#
# 2. [`issubclass(class, classinfo)`](https://docs.python.org/3/library/functions.html#issubclass)
# - a class is considered a subclass of itself
# - returns True (or 1) if a `class` is a subclass (direct, indirect, or [virtual](https://docs.python.org/3/glossary.html#term-abstract-base-class)) of `classinfo`
# - (as of ver 3.10) if `classinfo` is a tuple of objects OR a [Union type](https://docs.python.org/3/library/stdtypes.html#types-union), this returns True (or 1) when `object` is a subclass of ANY of the types --- otherwise a [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) subclass of [exception](https://github.com/ProsperousHeart/Basics-Boot-Camp/blob/main/Week_3/Python_Basics_18_-_Exceptions_%26_Assertions.ipynb) is raised.
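# Both functions in action, with a small illustrative inheritance chain:

```python
class Animal:
    pass

class Dog(Animal):
    pass

d = Dog()
print(isinstance(d, Dog))            # True
print(isinstance(d, Animal))         # True - instances of a subclass count
print(isinstance(d, (int, Animal)))  # True - any entry in the tuple may match
print(issubclass(Dog, Animal))       # True
print(issubclass(Dog, Dog))          # True - a class is a subclass of itself
print(issubclass(Animal, Dog))       # False
```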
# ## [Multiple Inheritance](https://docs.python.org/3/tutorial/classes.html#multiple-inheritance)
#
# When a new class type is created, you can pull from multiple base classes. This class definition might look something like this:
#
# ```python
# class DerivedClassName(Base0, ..., BaseX):
# <statement-0>
# .
# .
# .
# <statement-N>
# ```
#
# **_How does the interpreter know where to look?_**
#
# 1. looks internally to itself (the class) - _does that attribute/method reside in `DerivedClassName`?_
# 2. looks at the first base class in the input parameter of the class definition
#     - in this example, looks in Base0
#     - if not found there, looks in its parent classes recursively
# 3. if not found in 1st base class, it will recursively check all of the rest
#
# It's a bit more complex than the above. If you want to get more into inheritance, be sure to check out the documentation for [`super()`](https://docs.python.org/3/library/functions.html#super) as well as [this guide to using `super()`](https://rhettinger.wordpress.com/2011/05/26/super-considered-super/). You can also use [this method resolution order](https://www.python.org/download/releases/2.3/mro/) doc.
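# You can inspect the exact lookup order the interpreter uses via the `__mro__` attribute (the class names below are illustrative):

```python
class Base:
    def who(self):
        return "Base"

class Left(Base):
    def who(self):
        return "Left"

class Right(Base):
    def who(self):
        return "Right"

class Child(Left, Right):
    pass

print(Child().who())  # "Left" - the first listed base wins
print([c.__name__ for c in Child.__mro__])
# ['Child', 'Left', 'Right', 'Base', 'object']
```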
# # [Iterators In Your Classes](https://docs.python.org/3/tutorial/classes.html#iterators)
#
# You have the ability to make a class be an iterable! Meaning you can put an instance of a class inside the [`iter(obj)`](https://docs.python.org/3/library/functions.html#iter) built-in function.
#
# They implement the dunder method [`__next__()`](https://docs.python.org/3/library/stdtypes.html#iterator.__next__) (along with `__iter__()`) so you can use the [`next()`](https://docs.python.org/3/library/functions.html#next) function.
#
# Here is a (tweaked) example from official documentation for reading in a string of characters backwards:
#
# ```python
# class Reverse:
#     """Iterator for looping over a sequence backwards."""
#     def __init__(self, data):
#         self.data = data
#         self.index = len(data)
#
#     def __iter__(self):
#         return self
#
#     def __next__(self):
#         if self.index == 0:
#             raise StopIteration
#         self.index = self.index - 1
#         return self.data[self.index]
#
# rev = Reverse(input("What phrase would you like repeated backwards?\t"))
# iter(rev)
# for char in rev:
#     print(char)
# ```
#
# Try it out for yourself!
# # [Generators In Your Classes](https://docs.python.org/3/tutorial/classes.html#generators)
#
# While we have already discussed [iterators & generators](https://github.com/ProsperousHeart/Basics-Boot-Camp/blob/main/Week_2/Python_Basics_13_-_Iterators_And_Generators.ipynb), you may find additional insight with official documentation. (This section's title links to it.)
#
# Long story short, generators are like Pez dispensers. Once a piece of data is handed out by the [`yield`](https://docs.python.org/3/reference/simple_stmts.html#yield) statement, the next piece is ready to be requested.
#
# You have the ability to make a class be a generator!
#
# While I tried to provide an example of what the official documentation outlines, it wouldn't work for me. It's also never come up, so not sure how often you would need the info.
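# One approach that should work is writing `__iter__` as a generator function (i.e. using `yield`), so you never have to implement `__next__` yourself - a minimal sketch (the `ReverseGen` name is mine, not from the docs):

```python
class ReverseGen:
    """Generator-based iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data

    def __iter__(self):
        # a function body containing `yield` automatically returns a generator
        for index in range(len(self.data) - 1, -1, -1):
            yield self.data[index]

print(list(ReverseGen("spam")))  # ['m', 'a', 'p', 's']
```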
#
# It's also possible to have [generator expressions](https://docs.python.org/3/tutorial/classes.html#generator-expressions) - similar to [list comprehensions](https://www.python.org/dev/peps/pep-0202/). Generally used when a generator is immediately consumed by an enclosing function. These tend to be more memory friendly than equivalent list comprehensions.
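# For example, a generator expression fed directly to an enclosing function:

```python
# sum of squares without building an intermediate list in memory
print(sum(i * i for i in range(10)))    # 285

# the equivalent list comprehension builds the whole list first
print(sum([i * i for i in range(10)]))  # 285
```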
#
# 
# # Special Class Notes
#
# If you were to run the `type()` command on a class instance, you will see that it returns the class it was created from - also accessible through the instance's `__class__` attribute.
#
# _**What about "private" variables?**_ More on that [here](https://docs.python.org/3/tutorial/classes.html#private-variables).
#
# There is also [this Odds & Ends section](https://docs.python.org/3/tutorial/classes.html#odds-and-ends) that may be of interest, but leaves your code open to ambiguity.
# # Class [Decorators](https://realpython.com/primer-on-python-decorators/)
#
# More on this to come, but definitely check out the link for this until then!
# # Additional Sites
# 1. PEP Style Guides:
# - [318: Decorators For Functions & Methods](https://www.python.org/dev/peps/pep-0318/)
# - [352: Exceptions as New-Style Classes](https://docs.python.org/3/whatsnew/2.5.html#pep-352-exceptions-as-new-style-classes)
# - [487: Simpler customization of class creation](https://docs.python.org/3/whatsnew/3.6.html#pep-487-simpler-customization-of-class-creation)
# - [520: Preserving Class Attribute Definition Order](https://docs.python.org/3/whatsnew/3.6.html#pep-520-preserving-class-attribute-definition-order)
# - [3119: Abstract Base Classes](https://docs.python.org/3/whatsnew/2.6.html#pep-3119-abstract-base-classes)
# - [3129: Class Decorators](https://docs.python.org/3/whatsnew/2.6.html#pep-3129-class-decorators)
# - [3155: Qualified name for classes and functions](https://docs.python.org/3/whatsnew/3.3.html#pep-3155-qualified-name-for-classes-and-functions)
#
#
# 2. Official Python Documentation
# - [Base Exception classes](https://docs.python.org/3/library/exceptions.html#base-classes)
# - [Collections](https://docs.python.org/3/library/collections.abc.html) - abstract base classes for containers
#
#
# 3. RealPython:
# - [Object Oriented Programming (OOP)](https://realpython.com/python3-object-oriented-programming/)
# - [Primer on Python Decorators](https://realpython.com/primer-on-python-decorators/)
# - [Instance, Class, & Static Methods](https://realpython.com/instance-class-and-static-methods-demystified/)
# - [Multiple Constructors](https://realpython.com/python-multiple-constructors/)
#
#
# 4. On decorators:
# - [Primer On Function Decorators](https://realpython.com/primer-on-python-decorators/)
# - [Python decorators](https://www.geeksforgeeks.org/decorators-in-python/)
| Python/Python-INTER/01 - Python Classes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Derivatives
#
# In this file, I will show how we can take the derivative and derivative-transpose of the `v0` tensor language.
# # !!! NOTE !!! OUT OF DATE
#
# I need to do this in terms of the updated AST / IR formulations.
import sys
sys.path.append('../')
sys.path.append('../src/')
#from adt import ADT
#from adt import memo as ADTmemo
from atlv0 import IR
from atlv0 import Func
from atlv0 import _Context
print(IR._defstr)
# Derivatives are defined on `expr`s with respect to some number of free variables, which constitute the "inputs" to the function being differentiated. This is expressed via the form
#
# $$ D[[ e | X ]] $$
#
# for which we need simply provide structurally recursive rules constituting a definition.
#
# $$
# \begin{array}{rcl}
# D[[ c | X ]] &\leadsto& 0 \\
# D[[ x |\ [x\mapsto dx]\in X ]] &\leadsto& dx \\
# D[[ x | x\not\in X ]] &\leadsto& 0 \\
# D[[ e_0 + e_1 | X ]] &\leadsto& D[[ e_0 | X ]] + D[[ e_1 | X ]] \\
# D[[ e_0 \cdot e_1 | X ]] &\leadsto& D[[ e_0 | X ]] \cdot e_1 + e_0 \cdot D[[ e_1 | X ]] \\
# D[[ \sum_i e | X ]] &\leadsto& \sum_i D[[ e | X ]] \\
# D[[ \boxplus_i e | X ]] &\leadsto& \boxplus_i D[[ e | X ]] \\
# D[[ e[i]\ | X ]] &\leadsto& (D[[ e | X ]])[i] \\
# D[[\ [p]\cdot e\ | X ]] &\leadsto& [p]\cdot D[[ e | X ]] \\
# D[[ (e_0,e_1) | X ]] &\leadsto& (D[[ e_0 | X ]], D[[ e_1 | X ]]) \\
# D[[ \pi_k e | X ]] &\leadsto& \pi_k(D[[ e | X ]]) \\
# D[[ \textrm{let } x = e_0 \textrm{ in } e_1 | X ]] &\leadsto&
# \left(\begin{array}{rcl}
# \textrm{let } x &=& e_0 \textrm{ in } \\
# \textrm{let } dx &=& D[[ e_0 | X ]] \textrm{ in } \\
# && D[[ e_1 | X[x \mapsto dx]\ ]] \\
# \end{array}\right) \\
# \end{array}
# $$
#
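# As a plain-Python illustration (not the atlv0 IR - just nested tuples standing in for the AST), the constant, variable, sum, and product rules above might be transcribed like this:

```python
def D(e, X):
    """Differentiate expression e w.r.t. the substitution map X.

    Expressions are: a number (constant), a string (variable), or a
    tuple ('+', e0, e1) / ('*', e0, e1).
    """
    if isinstance(e, (int, float)):   # D[[ c | X ]] ~> 0
        return 0
    if isinstance(e, str):            # D[[ x | X ]] ~> dx if mapped, else 0
        return X.get(e, 0)
    op, e0, e1 = e
    if op == '+':                     # sum rule
        return ('+', D(e0, X), D(e1, X))
    if op == '*':                     # product rule
        return ('+', ('*', D(e0, X), e1), ('*', e0, D(e1, X)))
    raise ValueError(f"unknown operator {op!r}")

# d(x*x) with dx as the tangent variable:
print(D(('*', 'x', 'x'), {'x': 'dx'}))
# ('+', ('*', 'dx', 'x'), ('*', 'x', 'dx'))
```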
| notebooks/Derivatives of Tensors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Optimization
#
# We're often interested in the best-fitting model to some data. On Day 3, we introduced the concept of a likelihood and least-squares fitting. For linear models, we can do that in one step because the problem is uniquely determined. Today we will introduce how to fit functions that have non-linear parameters.
#
#
# ## Gradients!
#
# We will assume that we know the likelihood $\mathcal{L}$ (it's often Gaussian), which means that we have a function that is maximized with the choice of good parameters. The function we normally work with is
#
# $$
# f(x) = - \log\mathcal{L}(x\mid\mathcal{D})
# $$
#
# which we then minimize. The log is there to remove the exponentials in many likelihoods. For example, for the ordinary least-squares solution, $f=\chi^2$.
#
# > Mind the sign!
# > It's *very* common to write down a model, optimize it, and then get some nonsense fit from the Minimum-Likelihood™ parameters.
#
# The variable $x$ stands for the parameter we want to find the optimal value for. Notice that we don't require it to have any specific relation (for instance linear). Instead, we will demand that $f(x)$ represents a well-behaved function: we can expect derivatives of $f$ to exist everywhere in the region of interest. We can thus write down the **Taylor series** expansion for $f$ about some point $x_0$:
#
# $$
# f (x) = f (x_0) + g(x_0) (x - x_0) + \frac{1}{2} H(x_0) (x - x_0)^2 + \mathcal{O}((x-x_0)^3)
# $$
#
# where $g$ is the gradient, i.e. $g \equiv df(x)/dx$, and the **Hessian** $H$ is $H \equiv d^2 f(x) / dx^2$.
#
# Although we don't know anything a priori about the convergence of this series, it is clear that as the distance $x - x_0$ becomes smaller, the higher-order terms become less important.
#
# The first term of the above series is constant, so it will not tell much about where to look for a minimum. The second term is proportional to the gradient, telling in which direction the function is decreasing fastest, but it doesn't tell us what step size to take.
#
# A first-order gradient descent method thus is typically a fixed-point iteration of the kind
#
# $$
# x_{t+1} = x_t - \lambda_t g(x_t)
# $$
#
# At iteration $t$, it goes downhill by a certain amount $\lambda_t$, which yet needs to be determined; setting it properly may require experience in the dark arts.
#
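# A minimal sketch of this first-order iteration with a fixed step size, for $f(x)=x^2$ (so $g(x)=2x$, minimum at $x=0$):

```python
def gradient_descent(g, x0, lam=0.1, steps=100):
    # fixed-point iteration x_{t+1} = x_t - lambda * g(x_t)
    x = x0
    for _ in range(steps):
        x = x - lam * g(x)
    return x

# f(x) = x**2 has gradient g(x) = 2x
x_min = gradient_descent(lambda x: 2 * x, x0=-0.75)
print(x_min)  # very close to 0
```

Note that the convergence here depends entirely on the hand-picked `lam`: too small and progress is slow, too large and the iteration diverges.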
#
# The third, or quadratic, term describes a parabolic behavior and is therefore the lowest-order term to predict a minimum. Unlike $g$, we can expect $H$ to be roughly constant over small regions because its variations are of higher order (and in the case of a true parabola: identically zero).
#
# Thus second-order gradient descent (also called **Newton methods**) have fixed-point iterations of the form
#
# $$
# x_{t+1} = x_t - H^{-1}(x_t) g(x_t)
# $$
#
# We'll see why in a minute. This means that the optimal step size for a quadratic approximation of the function $f$ is given by the inverse curvature of $f$. That sounds intuitive enough, but let's have a picture anyway.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# two parabolae
f = lambda x, c: c*x**2
c = 1, 0.5
# initial point
x_ = -0.75
y_ = [ f(x_, ci) for ci in c ]
# compute gradient and hessian
g = [ 2*c[i]*x_ for i in range(2) ]
H = [ 2*c[i] for i in range(2) ]
# Newton step
x__ = [ x_ - 1/H[i]*g[i] for i in range(2) ]
y__ = [ f(xi, ci) for xi, ci in zip(x__, c) ]
# + hide_input="true"
# plotting
x = np.linspace(-1,1,100)
fig, axes = plt.subplots(1, 2, figsize=(12,6))
axes[0].plot(x, f(x, c[0]))
axes[1].plot(x, f(x, c[1]))
axes[0].axis('off')
axes[1].axis('off')
axes[0].scatter([x_, x__[0]], [y_[0], y__[0]], c=['k', 'w'], ec='k', s=100, zorder=10)
axes[0].plot([x_, x_, x__[0]], [y_[0], y__[0], y__[0]], c='k')
axes[1].scatter([x_, x__[1]], [y_[1], y__[1]], c=['k', 'w'], ec='k', s=100, zorder=10)
axes[1].plot([x_, x_, x__[1]], [y_[1], y__[1], y__[1]], c='k')
axes[0].text(0,0.25,'$f(x)=x^2,\,H=2$', ha='center', size=16)
axes[1].text(0,0.25,'$f(x)=x^2/2,\,H=1$', ha='center', size=16)
axes[0].text(x_/2, x__[0]-0.01, '$\Delta x$', ha='center', va='top', size=16)
axes[1].text(x_/2, x__[1]-0.01, '$\Delta x$', ha='center', va='top', size=16)
axes[0].text(x_-0.01, y_[0]/2, '$\Delta y$', ha='right', va='center', size=16)
axes[1].text(x_-0.01, y_[1]/2, '$\Delta y$', ha='right', va='center', size=16)
axes[0].set_ylim(-0.2,1.1)
axes[1].set_ylim(-0.2,1.1)
fig.tight_layout()
# -
# Despite having different slopes at the starting position (filled circle), the Newton scheme performs only a single step (open circle) to move to the exact minimum, from any starting position, *if the function is quadratic*. This is even more useful because
#
# > Any smooth function close to its minimum looks like a quadratic function!
#
# That's a consequence of the Taylor expansion because the first-order term $g$ vanishes close to the minimum, so all deviations from the quadratic form are of order 3 or higher in $x-x_0$.
#
# So, why doesn't everyone compute the Hessian for optimization? Well, it's typically expensive to compute a second derivative. And in $d$ dimensions (one for each parameter), the Hessian is a matrix with $d(d+1)/2$ independent elements. This is why there are several **quasi-Newton methods** like [BFGS](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm), that accumulate information from previous iterations into an estimate of $H$.
# ## Newton's Method for finding a root
#
# [Newton's method](https://en.wikipedia.org/wiki/Newton's_method) was initially designed to find the root of a function, not its minimum. So, let's find out how these two are connected.
#
# The central idea is to approximate $f$ by its tangent at some initial position $x_0$:
#
# $$
# y = f(x_0) + g(x_0) (x-x_0)
# $$
#
# As we can see in this animation from Wikipedia, the $x$-intercept of this line is then closer to the root than the starting position $x_0$:
#
# 
#
# That is, we need to solve the linear relation
#
# $$
# f(x_0) + g(x_0) (x-x_0) = 0
# $$
#
# for $x$ to get the updated position. In 1D: $x_1 = x_0 - f(x_0)/g(x_0)$. Repeating this sequence
#
# $$
# x_{t+1} = x_t - \frac{f(x_t)}{g(x_t)}
# $$
#
# will yield a fixed point, which is the root of $f$ *if one exists in the vicinity of $x_0$*.
# + deletable=true editable=true
def newtons_method(f, df, x0, tol=1E-6):
x_n = x0
while abs(f(x_n)) > tol:
x_n = x_n - f(x_n)/df(x_n)
return x_n
# -
# ## Minimizing a function
#
# As the maximum and minimum of a function are defined by $f'(x) = 0$, we can use Newton's method to find extremal points by applying it to the first derivative. That's the origin for the Newton update formula above:
#
# $$
# x_{t+1} = x_t - H^{-1}(x_t) \ g(x_t)
# $$
#
# Let's try this with a simple function with known minimum:
# + deletable=true editable=true
# define a test function
def f(x):
return (x-3)**2 - 9
def df(x):
return 2*(x-3)
def df2(x):
return 2.
# + deletable=true editable=true jupyter={"outputs_hidden": false}
root = newtons_method(f, df, x0=0.1)
print ("root {0}, f(root) = {1}".format(root, f(root)))
# + deletable=true editable=true jupyter={"outputs_hidden": false}
minimum = newtons_method(df, df2, x0=0.1)
print ("minimum {0}, f'(minimum) = {1}".format(minimum, df(minimum)))
# -
# There is an important qualifier in the statement about fixed points: **a root needs to exist in the vicinity of $x_0$!** Let's see what happens if that's not the case:
# + jupyter={"outputs_hidden": false}
def g(x):
return (x-3)**2 + 1
dg = df # same derivatives for f and g
newtons_method(g, dg, x0=0.1)
# -
# Unless you interrupt the execution of this cell (Tip: click "Interrupt Kernel"), `newtons_method` will never terminate: `g` has no real root, so `abs(g(x_n))` never drops below the tolerance.
#
# With a little more defensive programming we can make sure that the function will terminate after a given number of iterations:
# + deletable=true editable=true
def newtons_method2(f, df, x0, tol=1E-6, maxiter=100000):
    x_n = x0
    for _ in range(maxiter):
        x_n = x_n - f(x_n)/df(x_n)
        if abs(f(x_n)) < tol:
            return x_n
    raise RuntimeError("Failed to find a root within {} iterations".format(maxiter))
# + jupyter={"outputs_hidden": false}
newtons_method2(g, dg, x0=0.1)
# -
# ## Using scipy.optimize
#
# scipy comes with a pretty feature-rich [optimization package](https://docs.scipy.org/doc/scipy/reference/optimize.html), for one- and multi-dimensional optimization. As so often, it's better (as in faster and more reliable) to leverage existing and battle-tested code than to try to implement it yourself.
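#
# As a quick illustration of the package (a sketch, separate from the exercises below; `method='brent'` is just one of the available options), here is what a `minimize_scalar` call looks like on the test function `f` from earlier:

```python
from scipy.optimize import minimize_scalar

# the same test function as above, with its minimum at x = 3
def f(x):
    return (x - 3)**2 - 9

res = minimize_scalar(f, method='brent')
print(res.x, res.fun)  # approximately 3 and -9
```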
#
# ### Exercise 1:
#
# Find the minimum of `f` with `scipy.optimize.minimize_scalar`. Look up the various arguments to the function in the documentation (either online or by typing `scipy.optimize.minimize_scalar?`) and choose appropriate inputs. When done, visualize your result to confirm its correctness.
# ### Exercise 2:
#
# To make this more interesting, we'll create a new multi-dimensional function that resembles `f`:
def h(x, p):
    return np.sum(np.abs(x-3)**p, axis=-1) - 9
# In 2D, find the minimum of `h` for `p=2` with `scipy.optimize.minimize`. Note that you have not been given a derivative of `h`. You can choose to compute it analytically, or see if `minimize` has options that allow you to work without one.
#
# When done, visualize your result to confirm its correctness.
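#
# One derivative-free way to sketch this (an illustrative approach, not necessarily the intended solution) is to pass `method='Nelder-Mead'` to `scipy.optimize.minimize`, which needs no gradient:

```python
import numpy as np
from scipy.optimize import minimize

# the test function from above, fixed at p=2; its 2D minimum is at (3, 3)
def h(x, p=2):
    return np.sum(np.abs(x - 3)**p, axis=-1) - 9

# Nelder-Mead is derivative-free, so no analytic gradient of h is required
res = minimize(h, x0=np.array([0.1, 0.1]), method='Nelder-Mead')
print(res.x, res.fun)  # close to [3, 3] and -9
```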
| day4/Newton-Method.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/krishnaaxo/H2O_AutoML_Classification/blob/main/H2O_Automl.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="wodp6J_y-qYg"
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
# + id="txXiMKcHrm3C" colab={"base_uri": "https://localhost:8080/"} outputId="a91c04cd-d1be-43ab-87cb-fda21ad5b4d0"
# !pip install missingno
# + colab={"base_uri": "https://localhost:8080/", "height": 361} id="8BLFnP9ZACXq" outputId="1002878e-7de9-449a-bd7a-ce80707d2f1b"
df= pd.read_csv('/content/Taiwan.csv')
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="g8EwGCSrFEWj" outputId="01fd7d59-362a-46a3-daaa-e8951296258b"
print("Column names before renaming","\n", df.columns[:5],"\n")
df.columns = df.columns.str.strip()
df.columns = df.columns.str.replace(" " ,"_")
df.rename(columns = {'Bankrupt?' :'Bankrupt' },inplace=True)
print("Column names after renaming","\n",df.columns[:5])
# + [markdown] id="lvWw2TyAFELh"
#
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="3Kf7Vz-LAF-p" outputId="9550d051-dc4e-4aef-ac33-6d9963231b8d"
df.head().T
# + colab={"base_uri": "https://localhost:8080/"} id="bg2arEs_AIT3" outputId="ac52fa04-76eb-4338-91ba-a6e35a551fe6"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 348} id="3981ddx1AI4Y" outputId="fee4133d-f3b8-4c75-fac7-79c6c116d924"
df.describe()
# + id="wGDY-E7ZEMZp"
# + colab={"base_uri": "https://localhost:8080/", "height": 944} id="d86UEUnwAPf3" outputId="8851fafd-d4b6-4514-e934-8f3404fbd980"
plt.figure(figsize=(16,12))
sns.heatmap(df.isnull(), cmap = 'magma')
# + colab={"base_uri": "https://localhost:8080/"} id="V9EbRu5MDC9Q" outputId="e43d989a-deac-4496-f089-b36bfb85ef73"
from sklearn.feature_selection import VarianceThreshold
var_thres=VarianceThreshold(threshold=0)
var_thres.fit(df)
# + colab={"base_uri": "https://localhost:8080/"} id="gYd3hcK4D0zR" outputId="9c3c1771-bde8-4d7c-91ca-266d29482ebd"
var_thres.get_support()
# + colab={"base_uri": "https://localhost:8080/"} id="jhEyYT-OD4CR" outputId="ff7a4361-4f0c-410c-f9e4-6f9a0f04feb3"
df.columns[var_thres.get_support()]
# + colab={"base_uri": "https://localhost:8080/"} id="Ko_XVnHUD_V7" outputId="70ba8269-54ca-42f3-c099-fae273511e68"
constant_columns = [column for column in df.columns
                    if column not in df.columns[var_thres.get_support()]]
print(len(constant_columns))
# + colab={"base_uri": "https://localhost:8080/"} id="oX1eUHjkED1D" outputId="f3726d75-5b03-4549-fc74-556b1f238b11"
for feature in constant_columns:
    print(feature)
# + id="FCavuzRzFR7r"
del df['Net_Income_Flag']
# + colab={"base_uri": "https://localhost:8080/"} id="rT1S-NTlFkfG" outputId="a3e7609f-9446-40cb-8ada-8df7a7874635"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="heZwIFCPF4Mr" outputId="5d1b8139-08b1-488e-ca8b-86ea098c6b28"
df.duplicated(keep=False).sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 889} id="g1wBghNqF7sq" outputId="fa52dbb3-aa13-4b4e-eb26-9af884d7c210"
import seaborn as sns
#Using Pearson Correlation
corrmat = df.corr()
fig, ax = plt.subplots()
fig.set_size_inches(11,11)
sns.heatmap(corrmat)
# + id="PGUQADfvGGzD"
def correlation(dataset, threshold):
    col_corr = set()  # Set of all the names of correlated columns
    corr_matrix = dataset.corr()
    for i in range(len(corr_matrix.columns)):
        for j in range(i):
            if abs(corr_matrix.iloc[i, j]) > threshold:  # we are interested in absolute coeff value
                colname = corr_matrix.columns[i]  # getting the name of column
                col_corr.add(colname)
    return col_corr
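# To make the threshold semantics concrete, here is a small self-contained toy check (hypothetical data; the helper is redefined so the snippet runs on its own). Column `b` is an exact copy of `a`, so with `threshold=0.9` it is the one flagged for removal, while the independent noise column `c` is kept:

```python
import numpy as np
import pandas as pd

def correlation(dataset, threshold):
    # same logic as above: flag a later column when its absolute
    # correlation with any earlier column exceeds the threshold
    col_corr = set()
    corr_matrix = dataset.corr()
    for i in range(len(corr_matrix.columns)):
        for j in range(i):
            if abs(corr_matrix.iloc[i, j]) > threshold:
                col_corr.add(corr_matrix.columns[i])
    return col_corr

rng = np.random.RandomState(0)
a = rng.randn(100)
toy = pd.DataFrame({'a': a, 'b': a.copy(), 'c': rng.randn(100)})
print(correlation(toy, 0.9))  # {'b'}
```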
# + colab={"base_uri": "https://localhost:8080/"} id="oOzTTRuQGMnU" outputId="d358bcab-7734-42bf-abbb-be79355c28ae"
corr_features = correlation(df, 0.9)
len(set(corr_features))
# + colab={"base_uri": "https://localhost:8080/"} id="vNLV7BnMGRah" outputId="736fdbd2-6928-4d13-c6e1-04cd9735e0dc"
corr_features
# + id="2UH08GngGT3i"
df=df.drop(corr_features,axis=1)
# + colab={"base_uri": "https://localhost:8080/"} id="dtOIHmh7GWEk" outputId="9000265a-15ed-4816-812a-acd7e238973f"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="t51Ohb_6GX-H" outputId="3e7d2ff5-5ac9-4d2b-92b2-c1321a58ff0f"
df['Bankrupt'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="kZDO7v5aGhKa" outputId="6e23d959-e5f4-4aa5-e4cc-65f9600a8ec3"
def get_fraction_valued_columns(df):
    my_columns = []
    for col in df.columns:
        if (df[col].max() <= 1) & (df[col].min() >= 0):
            my_columns.append(col)
    return my_columns
fractional_columns = get_fraction_valued_columns(df=df.drop(['Bankrupt'],axis=1))
non_fraction_columns = df.drop(['Bankrupt'],axis=1).columns.difference(fractional_columns)
print("# Fraction-only Columns",len(fractional_columns),"\t","# Other than Fraction-only Columns", len(non_fraction_columns))
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="u80eL08WG3zq" outputId="6c1c6a6e-6a2b-4ea1-9dcf-81ebfe2ada44"
df[non_fraction_columns].hist(figsize= (20,20),sharex=True,layout= (6,4))
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 605} id="ylTk_gfMG7bC" outputId="1a15e4cb-18e1-43f2-c837-bf684f99a1ad"
df[non_fraction_columns].boxplot(vert=False,figsize= (15,10))
plt.subplots_adjust(left=0.25)
plt.show()
# + id="NMthnhCHHGO9"
log_transformed_cols = []
for col in df[non_fraction_columns].columns:
    if (df[col].quantile(1) >= 100 * df[col].quantile(0.99)) | (sum(df[col] > df[col].quantile(0.99)) <= 10):
        df[col] = np.log1p(df[col])
        log_transformed_cols.append(col)
## Change names of log transformed column
log_names = "log_" + df[log_transformed_cols].columns
df.rename(columns={df[log_transformed_cols].columns[i]: log_names[i] for i in range(len(log_names))}, inplace = True)
# + colab={"base_uri": "https://localhost:8080/", "height": 680} id="j6bLfDJ1HUHw" outputId="5d70a274-3f9f-4969-c232-88ec5453f916"
print("The following features are log transformed after they fulfill outlier detection condition.","\n\n",log_transformed_cols)
df[log_names].boxplot(vert=False,figsize= (15,10))
plt.subplots_adjust(left=0.25)
plt.title("Boxplot of Outlier infected features after log transformation")
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="21XPptSKHcxb" outputId="d96c5edf-cdf3-4a98-a035-80233cfca783"
df1 = pd.DataFrame(df.Bankrupt.value_counts())
df2 = pd.DataFrame(100*df.Bankrupt.value_counts(normalize=True).astype(float))
tab = df1.merge(df2,left_index=True,right_index=True).rename(columns = {"Bankrupt_x" : "Count" , "Bankrupt_y" : "Percentage"})
print(tab)
# + colab={"base_uri": "https://localhost:8080/", "height": 319} id="qfpEQBPqHlEA" outputId="9732dbfd-181a-41ed-9ffe-589740af2750"
plt.pie(tab['Count'], labels= [0,1])
# + id="XCkWbnf6Le9N"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(labels=['Bankrupt'], axis=1),
    df['Bankrupt'],
    test_size=0.3,
    random_state=0)
# + colab={"base_uri": "https://localhost:8080/"} id="ofIvaMzCKm9D" outputId="76c84d13-dfa0-41a9-c9c8-898b12ba8924"
from sklearn.feature_selection import mutual_info_regression
# determine the mutual information
mutual_info = mutual_info_regression(X_train.fillna(0), y_train)
mutual_info
# + colab={"base_uri": "https://localhost:8080/"} id="uWasW-gJMPyc" outputId="5b1f5355-b55d-4b94-8eaf-12b97912f190"
mutual_info = pd.Series(mutual_info)
mutual_info.index = X_train.columns
mutual_info.sort_values(ascending=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 616} id="t9BFJEtKMXOY" outputId="6d7429cf-cfb1-4ed9-b174-0e5dcaf05cf5"
mutual_info.sort_values(ascending=False).plot.bar(figsize=(15,5))
# + colab={"base_uri": "https://localhost:8080/"} id="a8mKq0u1MlQ4" outputId="35ac4ca2-d353-4eb9-cfa4-61e68180cca8"
df.shape
# + id="eHJkP_U6MciE"
from sklearn.feature_selection import SelectPercentile
# + colab={"base_uri": "https://localhost:8080/"} id="gZdFwBVAMhxd" outputId="b02cef78-4a35-41a3-f0fc-f29eb2632a0c"
selected_top_columns = SelectPercentile(mutual_info_regression, percentile=50)
selected_top_columns.fit(X_train.fillna(0), y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="gK-Z3rGrMhpq" outputId="17f1e0a3-7cb5-4bf9-bbc6-ce51c6238e55"
selected_top_columns.get_support()
# + colab={"base_uri": "https://localhost:8080/"} id="rEhk_1tyNDLe" outputId="fd0f6695-4ab7-4ef9-cdcc-11d7315cdc89"
c=X_train.columns[selected_top_columns.get_support()]
c
# + colab={"base_uri": "https://localhost:8080/", "height": 436} id="q7xjcDN5NaBt" outputId="16ed7c74-93ab-4e89-82b7-22291564e487"
X_train
# + colab={"base_uri": "https://localhost:8080/", "height": 436} id="O3ujwWL6NpHw" outputId="090f2af0-7e70-4950-969c-990842cc4285"
X_test
# + id="pDBHJ34-NV2p"
X_train=X_train[c]
X_test=X_test[c]
# + colab={"base_uri": "https://localhost:8080/", "height": 436} id="fJlxHRZFOFZN" outputId="a4dbe376-82ff-4449-cb6c-9fc9567cfd64"
X_test
# + colab={"base_uri": "https://localhost:8080/", "height": 436} id="0xjdKSMsOJhH" outputId="8a56e06d-0ca1-4856-aca8-eaf030015669"
X_train
# + id="RtPhU2_3HpVr"
from imblearn.combine import SMOTETomek
from collections import Counter
# + colab={"base_uri": "https://localhost:8080/"} id="EDJSX0hZJo-I" outputId="36091132-caf4-4b9d-872d-4b257dc46351"
smt = SMOTETomek(sampling_strategy=1.0)
print("The number of classes before fit {}".format(Counter(y_train)))
X_train, y_train = smt.fit_resample(X_train, y_train)
print("The number of classes after fit {}".format(Counter(y_train)))
# + id="2DtlKx2M1fha"
df.to_csv('new.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="3QVRHhDMORLV" outputId="a1184ef5-c6d4-4ddc-ef66-3df1f7f54454"
# !pip install h2o
# + id="lC6k7eNbRNiT"
import h2o
from h2o.automl import H2OAutoML
# + colab={"base_uri": "https://localhost:8080/", "height": 543} id="za1Ep8ps2XL6" outputId="924d1ea2-5d7c-4300-c6ea-d21c4497c7b7"
h2o.init()
# + colab={"base_uri": "https://localhost:8080/"} id="RnEsrL580uEM" outputId="d75977a6-b3e4-42dc-8753-8b5e620156b3"
data = h2o.import_file('/content/new.csv')
# + id="Gs9tvp0p2jN2"
features = ['ROA(C)_before_interest_and_depreciation_before_interest',
            'Operating_Gross_Margin', 'Operating_Profit_Rate',
            'Non-industry_income_and_expenditure/revenue', 'Cash_flow_rate',
            'Tax_rate_(A)', 'Net_Value_Per_Share_(B)',
            'Persistent_EPS_in_the_Last_Four_Seasons',
            'Operating_Profit_Per_Share_(Yuan_¥)',
            'After-tax_Net_Profit_Growth_Rate', 'log_Net_Value_Growth_Rate',
            'Total_Asset_Return_Growth_Rate_Ratio', 'log_Current_Ratio',
            'log_Quick_Ratio', 'Interest_Expense_Ratio',
            'log_Total_debt/Total_net_worth', 'Debt_ratio_%',
            'Borrowing_dependency', 'Fixed_Assets_Turnover_Frequency',
            'Operating_profit_per_person', 'Working_Capital_to_Total_Assets',
            'Cash/Total_Assets', 'log_Quick_Assets/Current_Liability',
            'log_Cash/Current_Liability', 'Current_Liability_to_Assets',
            'Operating_Funds_to_Liability', 'Inventory/Working_Capital',
            'Working_Capital/Equity', 'Current_Liabilities/Equity',
            'Retained_Earnings_to_Total_Assets', 'Total_income/Total_expense',
            'Working_capitcal_Turnover_Rate', 'Current_Liability_to_Current_Assets',
            'Degree_of_Financial_Leverage_(DFL)',
            'Interest_Coverage_Ratio_(Interest_expense_to_EBIT)',
            'Equity_to_Liability']
output = 'Bankrupt'
# + id="0myztQXb2yLo"
train, test = data.split_frame(ratios=[0.8])
# + colab={"base_uri": "https://localhost:8080/"} id="U1A5WL0d21-V" outputId="1b555b61-cdbd-4faf-d4f3-bcbdb821c71f"
aml = H2OAutoML(max_models = 30, max_runtime_secs=300, seed = 1)
aml.train(x = features, y = output, training_frame = train)
# + colab={"base_uri": "https://localhost:8080/", "height": 693} id="3ha--mcI3RT1" outputId="e2c3579b-bb5b-438f-929e-c993480e147c"
lb = aml.leaderboard
lb.head(rows=lb.nrows)
# + colab={"base_uri": "https://localhost:8080/"} id="q_fksXW83WX4" outputId="e8fe93c8-086f-46e3-bc75-0bc3b9f61abf"
preds = aml.predict(test)
# + colab={"base_uri": "https://localhost:8080/", "height": 244} id="BDseJ9Wf3Zx_" outputId="a2ea1d5e-943b-4020-da5e-87149e575dd4"
preds
# + colab={"base_uri": "https://localhost:8080/", "height": 117} id="es49fIeW3cP-" outputId="65a31a8a-68e5-4c64-bf6a-46fd15fdd322"
df = test.cbind(preds)
df.head(2)
# + colab={"base_uri": "https://localhost:8080/"} id="lfjDRC9I3erM" outputId="6422c2a6-19df-4115-f17e-c2e2d8ae81e2"
h2o.export_file(df,'Predicted.csv')
| H2O_Automl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/numpyro_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="11FOMaCs74vK"
# [NumPyro](https://github.com/pyro-ppl/numpyro) is a probabilistic programming language built on top of JAX. It is very similar to [Pyro](https://pyro.ai/), which is built on top of PyTorch.
# However, the HMC algorithm in NumPyro
# [is much faster](https://stackoverflow.com/questions/61846620/numpyro-vs-pyro-why-is-former-100x-faster-and-when-should-i-use-the-latter).
#
# Both Pyro flavors are usually also [faster than PyMc3](https://www.kaggle.com/s903124/numpyro-speed-benchmark), and allow for more complex models, since Pyro is integrated into Python.
#
#
#
# + [markdown] id="1FfhOPQUHEdS"
# # Installation
# + id="Z5wEIBws1D6i"
import numpy as np
np.set_printoptions(precision=3)
import matplotlib.pyplot as plt
import math
# + id="2vlE1qOX-AjG"
# When running in colab pro (high RAM mode), you get 4 CPUs.
# But we need to force XLA to use all 4 CPUs
# This is generally faster than running in GPU mode
import os
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=4"
# + colab={"base_uri": "https://localhost:8080/"} id="Xo0ejB5-7M3-" outputId="00d84c56-565d-47d8-dbb7-ddf1ceb61b48"
# http://num.pyro.ai/en/stable/getting_started.html#installation
# CPU mode: often faster in colab!
# !pip install numpyro
# GPU mode: as of July 2021, this does not seem to work
# #!pip install numpyro[cuda111] -f https://storage.googleapis.com/jax-releases/jax_releases.html
# + colab={"base_uri": "https://localhost:8080/"} id="qB5V5upMOMkP" outputId="05113667-5f09-4b04-aa03-fb219e1168f1"
import jax
print("jax version {}".format(jax.__version__))
print("jax backend {}".format(jax.lib.xla_bridge.get_backend().platform))
print(jax.lib.xla_bridge.device_count())
print(jax.local_device_count())
import jax.numpy as jnp
from jax import random
# + id="lfOH0V2Knz_p"
import numpyro
# numpyro.set_platform('gpu')
import numpyro.distributions as dist
from numpyro.distributions import constraints
from numpyro.distributions.transforms import AffineTransform
from numpyro.infer import MCMC, NUTS, Predictive
from numpyro.infer import SVI, Trace_ELBO, init_to_value
from numpyro.diagnostics import hpdi, print_summary
from numpyro.infer.autoguide import AutoLaplaceApproximation
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# + [markdown] id="CSxk_HEeOOMn"
# # Example: 1d Gaussian with unknown mean.
#
# We use the simple example from the [Pyro intro](https://pyro.ai/examples/intro_part_ii.html#A-Simple-Example). The goal is to infer the weight $\theta$ of an object, given noisy measurements $y$. We assume the following model:
# $$
# \begin{align}
# \theta &\sim N(\mu=8.5, \tau^2=1.0)\\
# y &\sim N(\theta, \sigma^2=0.75^2)
# \end{align}
# $$
#
# Where $\mu=8.5$ is the initial guess.
#
# + [markdown] id="U2b74i9h02jf"
# ## Exact inference
#
#
# By Bayes rule for Gaussians, we know that the exact posterior,
# given a single observation $y=9.5$, is given by
#
#
# $$
# \begin{align}
# \theta|y &\sim N(m, s^2) \\
# m &=\frac{\sigma^2 \mu + \tau^2 y}{\sigma^2 + \tau^2}
# = \frac{0.75^2 \times 8.5 + 1 \times 9.5}{0.75^2 + 1^2}
# = 9.14 \\
# s^2 &= \frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}
# = \frac{0.75^2 \times 1^2}{0.75^2 + 1^2}= 0.6^2
# \end{align}
# $$
# + id="HwHoLkHhaTTe" colab={"base_uri": "https://localhost:8080/"} outputId="44f88097-6ba4-45e9-b853-b26f8453e015"
mu = 8.5
tau = 1.0
sigma = 0.75
hparams = (mu, tau, sigma)
y = 9.5
m = (sigma**2 * mu + tau**2 * y) / (sigma**2 + tau**2)
s2 = (sigma**2 * tau**2) / (sigma**2 + tau**2)
s = np.sqrt(s2)
print(m)
print(s)
# + id="S0cVreqiOLJh"
def model(hparams, y=None):
    prior_mean, prior_sd, obs_sd = hparams
    theta = numpyro.sample("theta", dist.Normal(prior_mean, prior_sd))
    y = numpyro.sample("y", dist.Normal(theta, obs_sd), obs=y)
    return y
# + [markdown] id="VEKpXZkLO9jb"
# ## Ancestral sampling
# + id="aOjKWT3Pk-f-"
def model2(hparams):
    prior_mean, prior_sd, obs_sd = hparams
    theta = numpyro.sample("theta", dist.Normal(prior_mean, prior_sd))
    yy = numpyro.sample("y", dist.Normal(theta, obs_sd))
    return theta, yy
# + colab={"base_uri": "https://localhost:8080/"} id="feTpLCESkiMN" outputId="c28459b3-b2a8-4e14-f83e-a152af364c33"
with numpyro.handlers.seed(rng_seed=0):
    for i in range(5):
        theta, yy = model2(hparams)
        print([theta, yy])
# + [markdown] id="mc2-_2hqN-vJ"
# ## MCMC
#
# See [the documentation](https://num.pyro.ai/en/stable/mcmc.html)
# + colab={"base_uri": "https://localhost:8080/", "height": 291, "referenced_widgets": ["e4bfd87390794a68aa49485dae909d76", "<KEY>", "5f23efd7fd0749fc92c471484184a507", "fb75111f76c24738bbece86df714531a", "<KEY>", "bf2034c981ad469ea3a7daf0d0bea907", "49c888edc8ab41c2816b00c1a0734df9", "<KEY>", "bc497be0524345d0bc515a7e33d6ba32", "<KEY>", "<KEY>", "cd61999d7518448e8b7be742a645bb23", "<KEY>", "<KEY>", "<KEY>", "b00ec3e6a2c746519ac614de35696f98", "<KEY>", "<KEY>", "6616ede5e1d7480b923ed78f76298ef7", "40c5d9bce5de4ab3b2ccf74d8f804bea", "903492aa1c0449a28f96f4b253b59e0b", "27267d560b344670be984b0804c8e687", "cf2661a5dc384160a809c6020a0161c2", "10f937d564e644caa8ebe9a72e437d48", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "617f3d2283ec44c79bce664f4e32e6f8", "4fa755ad43b742529b7f8f8a24fdbe9c", "54e2ed44a99246109df34d5d5de076c5", "04227c4d78e54777ac86c6892edf5ab6"]} id="reyW7c1mlmus" outputId="13b9c264-bdac-4962-b7ab-1ef3e7f6de21"
conditioned_model = numpyro.handlers.condition(model, {"y": y})
nuts_kernel = NUTS(conditioned_model)
mcmc = MCMC(nuts_kernel, num_warmup=200, num_samples=200, num_chains=4)
mcmc.run(rng_key_, hparams)
mcmc.print_summary()
samples = mcmc.get_samples()
# + colab={"base_uri": "https://localhost:8080/"} id="FLYpKTG1e-Rg" outputId="5548de61-ac2d-49e0-da53-bd9327aff024"
print(type(samples))
print(type(samples["theta"]))
print(samples["theta"].shape)
# + colab={"base_uri": "https://localhost:8080/"} id="lWPM9xxlnWca" outputId="076988be-bd7b-4ff4-eb05-b23dd716967d"
nuts_kernel = NUTS(model) # this is the unconditioned model
mcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=1000)
mcmc.run(rng_key_, hparams, y) # we need to specify the observations here
mcmc.print_summary()
samples = mcmc.get_samples()
# + [markdown] id="7AkoKKxCe1U1"
# ## Stochastic variational inference
#
# See [the documentation](https://num.pyro.ai/en/stable/svi.html)
# + colab={"base_uri": "https://localhost:8080/", "height": 347} id="2Y2-i127mpz7" outputId="bb240cd1-2a14-40d8-d303-ed453bf8fbef"
# the guide must have the same signature as the model
def guide(hparams, y):
    prior_mean, prior_sd, obs_sd = hparams
    m = numpyro.param("m", y)  # location
    s = numpyro.param("s", prior_sd, constraint=constraints.positive)  # scale
    return numpyro.sample("theta", dist.Normal(m, s))
# The numpyro optimizers wrap the JAX ones, so they have unusual keyword arguments
# https://jax.readthedocs.io/en/latest/jax.experimental.optimizers.html
# optimizer = numpyro.optim.Adam(step_size=0.001)
optimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1)
# svi = SVI(model, guide, optimizer, Trace_ELBO(), hparams=hparams, y=y) # specify static args to model/guide
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
nsteps = 2000
svi_result = svi.run(rng_key_, nsteps, hparams, y) # or specify arguments here
print(svi_result.params)
print(svi_result.losses.shape)
plt.plot(svi_result.losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
# + colab={"base_uri": "https://localhost:8080/"} id="ILJOv6Pi_8tu" outputId="3d7e30d4-0acd-4032-b699-dcbbcae3702a"
print([svi_result.params["m"], svi_result.params["s"]])
# + [markdown] id="POaURsS0SGHB"
# ## Laplace (quadratic) approximation
#
# See [the documentation](https://num.pyro.ai/en/stable/autoguide.html#autolaplaceapproximation)
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="adfMDI9USI_C" outputId="a45a3914-bd29-408d-fe68-c73a7890e99e"
guide_laplace = AutoLaplaceApproximation(model)
svi = SVI(model, guide_laplace, optimizer, Trace_ELBO(), hparams=hparams, y=y)
svi_run = svi.run(rng_key_, 2000)
params = svi_run.params
losses = svi_result.losses
plt.figure()
plt.plot(losses)
# + colab={"base_uri": "https://localhost:8080/"} id="_Kolw-R3Buuf" outputId="9dd38ff8-5bb2-4811-cee3-b64941eb2eef"
# Posterior is an MVN
# https://num.pyro.ai/en/stable/distributions.html#multivariatenormal
post = guide_laplace.get_posterior(params)
print(post)
m = post.mean
s = jnp.sqrt(post.covariance_matrix)
print([m, s])
# + colab={"base_uri": "https://localhost:8080/"} id="bcm4yD-vTXmE" outputId="bc8e59c2-6e3b-4c7b-aa00-fb6cdbe15c6c"
samples = guide_laplace.sample_posterior(rng_key_, params, (1000,))
print_summary(samples, 0.89, False)
# + [markdown] id="F0W7kpNcLyUm"
# # Example: Beta-Bernoulli model
#
#
# Example is from [SVI tutorial](https://pyro.ai/examples/svi_part_i.html)
#
# The model is
# $$
# \begin{align}
# \theta &\sim \text{Beta}(\alpha, \beta) \\
# x_i &\sim \text{Ber}(\theta)
# \end{align}
# $$
# where $\alpha=\beta=10$. In the code, $\theta$ is called
# `latent_fairness`.
# + id="9cUwrzZhE7Zj"
alpha0 = 10.0
beta0 = 10.0
def model(data):
    f = numpyro.sample("latent_fairness", dist.Beta(alpha0, beta0))
    # loop over the observed data
    for i in range(len(data)):
        numpyro.sample("obs_{}".format(i), dist.Bernoulli(f), obs=data[i])
# + colab={"base_uri": "https://localhost:8080/"} id="qIVA50sFFc7u" outputId="5480dab5-a136-4b67-88c6-67f238099e44"
# create some data with 6 observed heads and 4 observed tails
data = jnp.hstack((jnp.ones(6), jnp.zeros(4)))
print(data)
N1 = jnp.sum(data == 1)
N0 = jnp.sum(data == 0)
print([N1, N0])
# + [markdown] id="1t2K9MElIYK1"
# ## Exact inference
#
# The posterior is given by
# $$
# \begin{align}
# \theta &\sim \text{Beta}(\alpha + N_1, \beta + N_0) \\
# N_1 &= \sum_{i=1}^N [x_i=1] \\
# N_0 &= \sum_{i=1}^N [x_i=0]
# \end{align}
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="-UncLKyeIfsw" outputId="cb41d985-94d5-4b1e-e470-3fc41449a804"
alpha_q = alpha0 + N1
beta_q = beta0 + N0
print("exact posterior: alpha={:0.3f}, beta={:0.3f}".format(alpha_q, beta_q))
post_mean = alpha_q / (alpha_q + beta_q)
post_var = (post_mean * beta_q) / ((alpha_q + beta_q) * (alpha_q + beta_q + 1))
post_std = np.sqrt(post_var)
print([post_mean, post_std])
# + id="iyvQ-KWt9aaj" colab={"base_uri": "https://localhost:8080/"} outputId="7bb7f5cb-f01c-477d-bf7d-28f6470d1402"
inferred_mean = alpha_q / (alpha_q + beta_q)
# compute inferred standard deviation
factor = beta_q / (alpha_q * (1.0 + alpha_q + beta_q))
inferred_std = inferred_mean * math.sqrt(factor)
print([inferred_mean, inferred_std])
# + [markdown] id="EG1wPcplIn5w"
# ## Variational inference
# + id="VaBKO12GIp19"
def guide(data):
    alpha_q = numpyro.param("alpha_q", alpha0, constraint=constraints.positive)
    beta_q = numpyro.param("beta_q", beta0, constraint=constraints.positive)
    numpyro.sample("latent_fairness", dist.Beta(alpha_q, beta_q))
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="SOL8noBwJlP1" outputId="6af314df-ef08-40e5-b93a-e83629580b98"
# optimizer = numpyro.optim.Adam(step_size=0.001)
optimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1)
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
nsteps = 2000
svi_result = svi.run(rng_key_, nsteps, data)
print(svi_result.params)
print(svi_result.losses.shape)
plt.plot(svi_result.losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
# + colab={"base_uri": "https://localhost:8080/"} id="fXlNOXAgKHWl" outputId="ba9bbea7-f6c1-4aec-9090-d5e7377c42e1"
# grab the learned variational parameters
alpha_q = svi_result.params["alpha_q"]
beta_q = svi_result.params["beta_q"]
print("variational posterior: alpha={:0.3f}, beta={:0.3f}".format(alpha_q, beta_q))
post_mean = alpha_q / (alpha_q + beta_q)
post_var = (post_mean * beta_q) / ((alpha_q + beta_q) * (alpha_q + beta_q + 1))
post_std = np.sqrt(post_var)
print([post_mean, post_std])
# + [markdown] id="1E6Urp6yLNg3"
# ## MCMC
# + colab={"base_uri": "https://localhost:8080/"} id="rwP4k478LO_G" outputId="c12defec-1fd7-4842-c066-15354fbb0e82"
nuts_kernel = NUTS(model) # this is the unconditioned model
mcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=1000)
mcmc.run(rng_key_, data)
mcmc.print_summary()
samples = mcmc.get_samples()
# + [markdown] id="V_j08oUgHMC8"
# # Distributions
# + [markdown] id="2cu50EWRHmeL"
# ## 1d Gaussian
# + id="Wv6tBbo-BQBm" colab={"base_uri": "https://localhost:8080/"} outputId="e9aefd67-2bb2-4631-ab7b-c351dd35b3d1"
# a single 1d Gaussian
mu = 1.5
sigma = 2
d = dist.Normal(mu, sigma)
dir(d)
# + id="viQgRPMWFH-7" colab={"base_uri": "https://localhost:8080/"} outputId="2d339e24-f128-49c4-9952-a2f539d1a937"
rng_key, rng_key_ = random.split(rng_key)
nsamples = 1000
ys = d.sample(rng_key_, (nsamples,))
print(ys.shape)
mu_hat = np.mean(ys, 0)
print(mu_hat)
sigma_hat = np.std(ys, 0)
print(sigma_hat)
# + [markdown] id="Iir5QxsEHvie"
# ## Multivariate Gaussian
#
#
# + id="h6MKLVypCGZY"
mu = np.array([-1, 1])
sigma = np.array([1, 2])
Sigma = np.diag(sigma)
d2 = dist.MultivariateNormal(mu, Sigma)
# + id="d7JQGBXi_7el" colab={"base_uri": "https://localhost:8080/"} outputId="bc85d427-99cf-4d37-f7e1-b66f62ec6893"
# rng_key, rng_key_ = random.split(rng_key)
nsamples = 1000
ys = d2.sample(rng_key_, (nsamples,))
print(ys.shape)
mu_hat = np.mean(ys, 0)
print(mu_hat)
Sigma_hat = np.cov(ys, rowvar=False) # jax.np.cov not implemented
print(Sigma_hat)
# + [markdown] id="UPyDu5DgIT76"
# ## Shape semantics
#
# [Numpyro](http://num.pyro.ai/en/stable/distributions.html), [Pyro](https://pyro.ai/examples/tensor_shapes.html) and [TFP](https://www.tensorflow.org/probability/examples/Understanding_TensorFlow_Distributions_Shapes)
# and [Distrax](https://github.com/deepmind/distrax)
# all distinguish between 'event shape' and 'batch shape'.
# For a D-dimensional Gaussian, the event shape is (D,), and the batch shape
# will be (), meaning we have a single instance of this distribution.
# If the covariance is diagonal, we can view this as D independent
# 1d Gaussians, stored along the batch dimension; this will have event shape () but batch shape (D,).
#
# When we sample from a distribution, we also specify the sample_shape.
# Suppose we draw N samples from a single D-dim diagonal Gaussian,
# and N samples from D 1d Gaussians. These samples will have the same shape.
# However, the semantics of `log_prob` differ.
# We illustrate this below.
#
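#
# The same bookkeeping can be checked independently of numpyro with scipy.stats (used here purely as an illustrative stand-in): the joint log-density of a diagonal Gaussian equals the sum of the per-dimension log-densities.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

mu = np.array([-1.0, 1.0])
var = np.array([1.0, 2.0])   # diagonal of the covariance matrix
y = np.array([0.5, 2.0])     # a single 2d observation

# one 2d Gaussian: a single log-density for the whole event
lp_joint = multivariate_normal(mean=mu, cov=np.diag(var)).logpdf(y)

# two independent 1d Gaussians: one log-density per batch element
lp_batch = norm(loc=mu, scale=np.sqrt(var)).logpdf(y)

print(lp_joint, lp_batch, np.isclose(lp_joint, lp_batch.sum()))
```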
# + id="PUYN9T1GIbBb" colab={"base_uri": "https://localhost:8080/"} outputId="98f30838-7305-41fe-dade-f071dca2b52d"
mu = np.array([-1, 1])
sigma = np.array([1, 2])
Sigma = np.diag(sigma)
d2 = dist.MultivariateNormal(mu, Sigma)
print(f"event shape {d2.event_shape}, batch shape {d2.batch_shape}")
nsamples = 3
ys2 = d2.sample(rng_key_, (nsamples,))
print("samples, shape {}".format(ys2.shape))
print(ys2)
# 2 independent 1d gaussians (same as one 2d diagonal Gaussian)
d3 = dist.Normal(mu, scale=np.sqrt(np.diag(Sigma))) # scalar Gaussian needs std not variance
print(f"event shape {d3.event_shape}, batch shape {d3.batch_shape}")
ys3 = d3.sample(rng_key_, (nsamples,))
print("samples, shape {}".format(ys3.shape))
print(ys3)
print(np.allclose(ys2, ys3))
y = ys2[0, :] # 2 numbers
print(d2.log_prob(y)) # log prob of a single 2d distribution on 2d input
print(d3.log_prob(y)) # log prob of two 1d distributions on 2d input
# + [markdown] id="nsB0vIjYLa_6"
# We can turn a set of independent distributions into a single product
# distribution using the [Independent class](http://num.pyro.ai/en/stable/distributions.html#independent)
#
# + id="MXsP_SonLOpl" colab={"base_uri": "https://localhost:8080/"} outputId="dbc7831d-54d6-43ea-aa14-63a648adabf7"
d4 = dist.Independent(d3, 1) # treat the first batch dimension as an event dimensions
# now d4 is just like d2
print(f"event shape {d4.event_shape}, batch shape {d4.batch_shape}")
print(d4.log_prob(y))
# + id="TX1Q15kKy2od"
| notebooks/misc/numpyro_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Combined Test for Interface Overlay
#
# This notebook will show how to use the Boolean builder, the pattern builder, and the Finite State Machine (FSM) builder at the same time.
#
# ### Step 1: Overlay management
#
# Download the overlay and import all the required libraries.
# + deletable=true editable=true
from pynq import Overlay
from pynq.lib.intf import FSMBuilder
from pynq.lib.intf import PatternBuilder
from pynq.lib.intf import BooleanBuilder
from pynq.lib.intf import Intf
from pynq.lib.intf import ARDUINO
Overlay('interface.bit').download()
microblaze_intf = Intf(ARDUINO)
# + [markdown] deletable=true editable=true
# ### Step 2: Instantiate FSM builder
#
# With the Microblaze interface instance ready, we will first deploy an FSM builder.
# + deletable=true editable=true
fsm_spec = {'inputs': [('reset','D0'), ('direction','D1')],
'outputs': [('bit2','D3'), ('bit1','D4'), ('bit0','D5')],
'states': ['S0', 'S1', 'S2', 'S3', 'S4', 'S5'],
'transitions': [['00', 'S0', 'S1', '000'],
['01', 'S0', 'S5', '000'],
['00', 'S1', 'S2', '001'],
['01', 'S1', 'S0', '001'],
['00', 'S2', 'S3', '010'],
['01', 'S2', 'S1', '010'],
['00', 'S3', 'S4', '011'],
['01', 'S3', 'S2', '011'],
['00', 'S4', 'S5', '100'],
['01', 'S4', 'S3', '100'],
['00', 'S5', 'S0', '101'],
['01', 'S5', 'S4', '101'],
['1-', '*', 'S0', '']]}
fsm = FSMBuilder(microblaze_intf, fsm_spec, num_analyzer_samples=128)
fsm.show_state_diagram()
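# The transition table can be sanity-checked in plain Python before deploying
# it to hardware. `simulate_step` below is a hypothetical helper, not part of
# the pynq API; it reads the same `(input pattern, source, destination,
# output)` rows, treating `-` as a wildcard bit and `*` as any source state.

```python
transitions = [['00', 'S0', 'S1', '000'], ['01', 'S0', 'S5', '000'],
               ['00', 'S1', 'S2', '001'], ['01', 'S1', 'S0', '001'],
               ['00', 'S2', 'S3', '010'], ['01', 'S2', 'S1', '010'],
               ['00', 'S3', 'S4', '011'], ['01', 'S3', 'S2', '011'],
               ['00', 'S4', 'S5', '100'], ['01', 'S4', 'S3', '100'],
               ['00', 'S5', 'S0', '101'], ['01', 'S5', 'S4', '101'],
               ['1-', '*', 'S0', '']]

def simulate_step(transitions, state, input_bits):
    # return the destination of the first row whose input pattern
    # ('-' is a wildcard) and source state ('*' is any state) match
    for pattern, src, dst, _out in transitions:
        if src in (state, '*') and all(p in ('-', b) for p, b in zip(pattern, input_bits)):
            return dst
    raise ValueError('no matching transition')

# with reset low and direction low ('00'), the counter cycles S0 -> ... -> S5 -> S0
state = 'S0'
trace = []
for _ in range(6):
    state = simulate_step(transitions, state, '00')
    trace.append(state)
print(trace)  # ['S1', 'S2', 'S3', 'S4', 'S5', 'S0']
print(simulate_step(transitions, 'S3', '10'))  # S0 (reset from any state)
```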
# + [markdown] deletable=true editable=true
# ### Step 3: Instantiate pattern builder
# + deletable=true editable=true
loopback_test = {'signal': [
['stimulus',
{'name': 'clk6', 'pin': 'D6', 'wave': 'l...h...' * 16},
{'name': 'clk7', 'pin': 'D7', 'wave': 'l.......h.......' * 8},
{'name': 'clk8', 'pin': 'D8', 'wave': 'lh' * 16},
{'name': 'clk9', 'pin': 'D9', 'wave': 'l.h.' * 32},
{'name': 'clk10', 'pin': 'D10', 'wave': 'l...h...' * 16},
{'name': 'clk11', 'pin': 'D11', 'wave': 'l.......h.......' * 8},
{'name': 'clk12', 'pin': 'D12', 'wave': 'lh' * 8},
{'name': 'clk13', 'pin': 'D13', 'wave': 'l.h.' * 32}],
['analysis',
{'name': 'clk6', 'pin': 'D6'},
{'name': 'clk7', 'pin': 'D7'},
{'name': 'clk8', 'pin': 'D8'},
{'name': 'clk9', 'pin': 'D9'},
{'name': 'clk10', 'pin': 'D10'},
{'name': 'clk11', 'pin': 'D11'},
{'name': 'clk12', 'pin': 'D12'},
{'name': 'clk13', 'pin': 'D13'}]],
'foot': {'tock': 1, 'text': 'Loopback Test'},
'head': {'tick': 1, 'text': 'Loopback Test'}}
# + [markdown] deletable=true editable=true
# After instantiation of the pattern builder, users can immediately display the waveform. However, the `analysis` group of the waveform will be empty, since the pattern builder has not been run yet.
#
# If the wave lanes are not of the same length, they will get extended automatically.
# + deletable=true editable=true
pb = PatternBuilder(microblaze_intf, loopback_test,
stimulus_name='stimulus', analysis_name='analysis',
num_analyzer_samples=128)
pb.waveform.display()
# -
# ### Step 4: Instantiate Boolean builder
# The Boolean expressions can also use on-board LEDs as output pins and push buttons as input pins. However, LEDs and push buttons are non-traceable pins, so they will not appear in the waveform display.
# + deletable=true editable=true
expressions = ["LD0 = D14",
"LD1 = D15",
"D18 = PB0 | PB1",
"D19 = D16 & D17"]
bbs = [BooleanBuilder(microblaze_intf, expr=expr,
num_analyzer_samples=128) for expr in expressions]
# + [markdown] deletable=true editable=true
# ### Step 5: Running builders together
#
# For the FSM builder:
#
# * Connect both `D0` and `D1` to `GND`, so the counter will count up, or:
# * Connect `D0` to `GND`, and `D1` to `Vref` for the counter to count down.
#
# Users can configure, arm, and run the builders. The builders can be run together by calling `microblaze_intf.start()`.
#
# Users can also manually check the outputs of the Boolean builders.
# + deletable=true editable=true
pb.arm()
fsm.arm()
for builder in bbs:
builder.arm()
microblaze_intf.start()
# + deletable=true editable=true
fsm.show_waveform()
# + deletable=true editable=true
pb.show_waveform()
# + deletable=true editable=true
for builder in bbs:
builder.show_waveform()
# + [markdown] deletable=true editable=true
# ### Step 6: Stop the generators
#
# Note that calling `microblaze_intf.stop()` is equivalent to calling the `stop()` method of each builder individually.
#
# The trace buffer will also get cleared automatically.
# + deletable=true editable=true
microblaze_intf.stop()
| debug/.ipynb_checkpoints/interface_combined-checkpoint.ipynb |
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # traffic_lights
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/traffic_lights.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/traffic_lights.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Traffic lights problem in Google CP Solver.
CSPLib problem 16
http://www.cs.st-andrews.ac.uk/~ianm/CSPLib/prob/prob016/index.html
'''
Specification:
Consider a four way traffic junction with eight traffic lights. Four of the
traffic
lights are for the vehicles and can be represented by the variables V1 to V4
with domains
{r,ry,g,y} (for red, red-yellow, green and yellow). The other four traffic
lights are
for the pedestrians and can be represented by the variables P1 to P4 with
domains {r,g}.
The constraints on these variables can be modelled by quaternary constraints
on
(Vi, Pi, Vj, Pj ) for 1<=i<=4, j=(1+i)mod 4 which allow just the tuples
{(r,r,g,g), (ry,r,y,r), (g,g,r,r), (y,r,ry,r)}.
It would be interesting to consider other types of junction (e.g. five roads
intersecting) as well as modelling the evolution over time of the traffic
light sequence.
...
Results
Only 2^2 out of the 2^12 possible assignments are solutions.
(V1,P1,V2,P2,V3,P3,V4,P4) =
{(r,r,g,g,r,r,g,g), (ry,r,y,r,ry,r,y,r), (g,g,r,r,g,g,r,r),
(y,r,ry,r,y,r,ry,r)}
[(1,1,3,3,1,1,3,3), ( 2,1,4,1, 2,1,4,1), (3,3,1,1,3,3,1,1), (4,1, 2,1,4,1,
2,1)}
The problem has relative few constraints, but each is very tight. Local
propagation
appears to be rather ineffective on this problem.
'''
Note: In this model we use only the constraint solver.AllowedAssignments().
Compare with these models:
* MiniZinc: http://www.hakank.org/minizinc/traffic_lights.mzn
* Comet : http://www.hakank.org/comet/traffic_lights.co
* ECLiPSe : http://www.hakank.org/eclipse/traffic_lights.ecl
* Gecode : http://hakank.org/gecode/traffic_lights.cpp
* SICStus : http://hakank.org/sicstus/traffic_lights.pl
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
import sys
from ortools.constraint_solver import pywrapcp
# Create the solver.
solver = pywrapcp.Solver("Traffic lights")
#
# data
#
n = 4
r, ry, g, y = list(range(n))
lights = ["r", "ry", "g", "y"]
# The allowed combinations
allowed = []
allowed.extend([(r, r, g, g), (ry, r, y, r), (g, g, r, r), (y, r, ry, r)])
#
# declare variables
#
V = [solver.IntVar(0, n - 1, "V[%i]" % i) for i in range(n)]
P = [solver.IntVar(0, n - 1, "P[%i]" % i) for i in range(n)]
#
# constraints
#
for i in range(n):
  j = (1 + i) % n
  solver.Add(solver.AllowedAssignments((V[i], P[i], V[j], P[j]), allowed))
#
# Search and result
#
db = solver.Phase(V + P, solver.INT_VAR_SIMPLE, solver.INT_VALUE_DEFAULT)
solver.NewSearch(db)
num_solutions = 0
while solver.NextSolution():
for i in range(n):
print("%+2s %+2s" % (lights[V[i].Value()], lights[P[i].Value()]), end=" ")
print()
num_solutions += 1
solver.EndSearch()
print()
print("num_solutions:", num_solutions)
print("failures:", solver.Failures())
print("branches:", solver.Branches())
print("WallTime:", solver.WallTime())
print()
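# The CSPLib claim quoted above — only 2^2 of the 2^12 possible assignments
# are solutions — can be cross-checked by exhaustive enumeration, independent
# of the CP solver. This is a hypothetical brute-force sketch (vehicle lights
# take 4 values, pedestrian lights take 2, so there are 4^4 * 2^4 = 4096
# assignments), not part of the OR-Tools model.

```python
from itertools import product

n = 4
r, ry, g, y = range(n)
allowed = {(r, r, g, g), (ry, r, y, r), (g, g, r, r), (y, r, ry, r)}

count = 0
for Vs in product([r, ry, g, y], repeat=n):       # vehicle lights: {r, ry, g, y}
    for Ps in product([r, g], repeat=n):          # pedestrian lights: {r, g}
        # the table constraint must hold for every adjacent pair (i, (i+1) mod 4)
        if all((Vs[i], Ps[i], Vs[(i + 1) % n], Ps[(i + 1) % n]) in allowed
               for i in range(n)):
            count += 1
print(count)  # 4
```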
| examples/notebook/contrib/traffic_lights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Embedding in Tk
#
#
# +
import tkinter
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
# Implement the default Matplotlib key bindings.
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
import numpy as np
root = tkinter.Tk()
root.wm_title("Embedding in Tk")
fig = Figure(figsize=(5, 4), dpi=100)
t = np.arange(0, 3, .01)
fig.add_subplot(111).plot(t, 2 * np.sin(2 * np.pi * t))
canvas = FigureCanvasTkAgg(fig, master=root) # A tk.DrawingArea.
canvas.draw()
toolbar = NavigationToolbar2Tk(canvas, root)
toolbar.update()
def on_key_press(event):
print("you pressed {}".format(event.key))
key_press_handler(event, canvas, toolbar)
canvas.mpl_connect("key_press_event", on_key_press)
button = tkinter.Button(master=root, text="Quit", command=root.quit)
# Packing order is important. Widgets are processed sequentially and if there
# is no space left, because the window is too small, they are not displayed.
# The canvas is rather flexible in its size, so we pack it last which makes
# sure the UI controls are displayed as long as possible.
button.pack(side=tkinter.BOTTOM)
canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=1)
tkinter.mainloop()
| matplotlib/gallery_jupyter/user_interfaces/embedding_in_tk_sgskip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.8 64-bit
# name: python36864bit023718609e434315a7782a7404fb6072
# ---
# +
# %reload_ext autoreload
# %autoreload 2
from utils import *
# +
fpath = 'qa_corpus.csv'
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
content = pd.read_csv(fpath, encoding='utf-8')
# +
# import jieba
import re
import os
LTP_DATA_DIR = 'D:/ProgramData/nlp_package/ltp_v34'  # path to the LTP model directory
cws_model_path = os.path.join(LTP_DATA_DIR, 'cws.model')  # path to the segmentation model, named `cws.model`
from pyltp import Segmentor
segmentor = Segmentor()  # initialize the instance
segmentor.load_with_lexicon(cws_model_path, 'lexicon_seg.txt')  # load together with an external lexicon file
def token(string):
return re.findall(r'[\d|\w]+', string)
def cut(string):
return ' '.join(segmentor.segment(string))
def filter_text(content):
q_content = content['question'].tolist()
q_content = [token(str(n)) for n in q_content]
q_content = [' '.join(n) for n in q_content]
q_content = [cut(n) for n in q_content]
return q_content
# -
q_content = filter_text(content)
# +
stopwords = []
for fname in ['chinese_stopwords.txt', '哈工大停用词表.txt']:
    with open(fname, 'r', encoding='utf-8') as f:
        for line in f.readlines():
            if len(line.strip()) < 2:
                stopwords.append(line.strip())
# -
vectorized = TfidfVectorizer(max_features=12000, tokenizer=lambda x: x.split(), stop_words=stopwords)
X = vectorized.fit_transform(q_content)
# +
import scipy.sparse as sp
# keep a CSR copy of the TF-IDF matrix for fast row slicing
X_array = sp.csr_matrix(X)
# +
import numpy as np
np.nonzero(X[100].toarray()[0])
# +
from scipy.spatial.distance import cosine
def distance(v1, v2): return cosine(v1, v2)
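# For intuition, `scipy.spatial.distance.cosine` computes 1 minus the cosine
# similarity. The numpy-only reimplementation below (hypothetical name
# `cosine_distance`) makes that explicit: 0 for vectors pointing the same way,
# 1 for orthogonal vectors.

```python
import numpy as np

def cosine_distance(v1, v2):
    # 1 - cos(angle between v1 and v2)
    v1, v2 = np.asarray(v1, dtype=float), np.asarray(v2, dtype=float)
    return 1.0 - v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))

print(cosine_distance([1, 0], [0, 1]))                      # 1.0 (orthogonal)
print(np.isclose(cosine_distance([1, 2], [2, 4]), 0.0))     # True (same direction)
```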
# +
from operator import and_
from functools import reduce
pos_model_path = os.path.join(LTP_DATA_DIR, 'pos.model')
from pyltp import Postagger
postagger = Postagger()  # initialize the instance
postagger.load_with_lexicon(pos_model_path, 'lexicon.txt')  # load the model with an external lexicon
and_pos_set = {'n', 'v', 'm', 'nh', 'ni', 'nl', 'ns', 'nt', 'ws'}
def token(string):
return re.findall(r'[\d|\w]+', string)
def cut(string):
return segmentor.segment(string)
def filter_text_single(string):
q_content = token(string.lower())
print(q_content)
q_content = ' '.join(q_content)
q_content = cut(q_content)
return q_content
# +
word_2_id = vectorized.vocabulary_
id_2_word = {d: w for w, d in word_2_id.items()}
inverse_idx = X_array.transpose()
# -
def search_connect_doc(query):
""""""
words = filter_text_single(query)
postags = postagger.postag(words)
to_and = []
for i, postag in enumerate(postags):
if postag in and_pos_set:
to_and.append(words[i])
print(to_and)
query_vec = vectorized.transform([' '.join(words)]).toarray()
    # skip query terms missing from the vectorizer vocabulary
    # (a bare try/except would leave candidates_ids undefined on a miss)
    candidates_ids = [word_2_id[w] for w in to_and if w in word_2_id]
documents_ids = [
set(np.nonzero(inverse_idx[_id].toarray()[0])[0]) for _id in candidates_ids
]
merged_documents = reduce(and_, documents_ids)
return merged_documents
# + tags=[]
search_connect_doc(content.question[1])
# -
content.question[24425]
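# The retrieval above is a classic inverted-index AND merge: map each term to
# the set of document ids containing it, then intersect the posting sets of
# the query terms with `reduce(and_, ...)`. A self-contained toy version
# (hypothetical data, same mechanism):

```python
from functools import reduce
from operator import and_

docs = ['open bank account', 'bank card lost', 'open new card']

# build the inverted index: term -> set of document ids containing it
inverted = {}
for doc_id, doc in enumerate(docs):
    for term in doc.split():
        inverted.setdefault(term, set()).add(doc_id)

def and_search(query_terms):
    # intersect the posting sets of all known query terms
    postings = [inverted[t] for t in query_terms if t in inverted]
    return reduce(and_, postings) if postings else set()

print(and_search(['open', 'card']))  # {2}
print(and_search(['bank']))          # {0, 1}
```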
| jupyter_notebooks/search_connections.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Zv_g9M6unpwZ" colab_type="code" colab={}
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import accuracy_score
# + id="uQMl6fAwoR8i" colab_type="code" outputId="7cdd80ab-5bd5-4f35-d2c7-9b95e9b080d8" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY> "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 74}
from google.colab import files
uploaded = files.upload()
# + id="MK4vWxnQoUqJ" colab_type="code" outputId="aa4258a0-f976-41af-87ad-3f77264613e0" colab={"base_uri": "https://localhost:8080/", "height": 51}
from __future__ import print_function
# Import the dataset
dataset = pd.read_csv('hw2_question3.csv', header=None)
full_dataset = dataset.iloc[:, :].values
total_samples = len(dataset)
print('Total number of samples is {}'.format(total_samples))
print('Input shape is {}'.format(dataset.shape))
# + id="58L1z7gnolFy" colab_type="code" outputId="63223ba6-3b37-4add-8f12-c5bafff7547a" colab={"base_uri": "https://localhost:8080/", "height": 233}
dataset.head(5)
# + id="5H7WugibowTT" colab_type="code" outputId="ed5640f9-2be2-46ee-fd28-5f381d47ab37" colab={"base_uri": "https://localhost:8080/", "height": 233}
# https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf
columns_to_hot_encode = [1, 6, 7, 13, 14, 15, 25, 28]
for col in columns_to_hot_encode:
dataset[str(col)+'_neg'] = np.where(dataset[col] == -1, 1, 0)
dataset[str(col)+'_zero'] = np.where(dataset[col] == 0, 1, 0)
dataset[str(col)+'_pos'] = np.where(dataset[col] == 1, 1, 0)
dataset.head(5)
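# The three np.where calls per column above are a manual one-hot encoding of
# the values {-1, 0, 1}. A numpy-only sketch of the same idea on toy data
# (pandas also offers `pd.get_dummies` for this as a built-in):

```python
import numpy as np

col = np.array([-1, 0, 1, 1, -1])
# one indicator column per possible value, stacked side by side
onehot = np.stack([(col == v).astype(int) for v in (-1, 0, 1)], axis=1)
print(onehot.tolist())  # [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1], [1, 0, 0]]
```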
# + id="5cZKWJnkrAw1" colab_type="code" colab={}
dataset = dataset.drop(columns=[1, 6, 7, 13, 14, 15, 25, 28])
# + id="Tn1oML0prhU2" colab_type="code" outputId="ca068f9d-b39b-4eda-9253-80badcfb917e" colab={"base_uri": "https://localhost:8080/", "height": 233}
dataset.head(5)
# + id="kc2OuOABs7SH" colab_type="code" outputId="9acfa66c-1a2c-4d85-c1f2-792503194712" colab={"base_uri": "https://localhost:8080/", "height": 233}
# move the target column to the end
df1 = dataset.pop(30)
dataset[30] = df1
dataset.head(5)
# + id="_N7Q0ZLGtrrA" colab_type="code" outputId="41f7892d-6738-46be-efd3-efc69241b8cf" colab={"base_uri": "https://localhost:8080/", "height": 233}
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html
# shuffle the samples
dataset = dataset.sample(frac=1, random_state=9728).reset_index(drop=True)
dataset.head(5)
# + id="47Tc-5-JwPGc" colab_type="code" outputId="91a17594-9a52-4298-9c14-d1a35b5819c4" colab={"base_uri": "https://localhost:8080/", "height": 51}
# split the samples into training and test sets
total_train_samples = int((2 / 3) * total_samples)
total_test_samples = total_samples - total_train_samples
print('Total number of training samples is {}'.format(total_train_samples))
print('Total number of test samples is {}'.format(total_test_samples))
training_set = dataset.head(total_train_samples)
test_set = dataset.tail(total_test_samples)
# + id="zhvxZnSowsUS" colab_type="code" outputId="f91cafa9-1213-4688-f894-4e3a79011016" colab={"base_uri": "https://localhost:8080/", "height": 233}
training_set.head(5)
# + id="khjpk8Lnw8fT" colab_type="code" outputId="3b1b838e-deb8-47d2-ff1b-43bf7b69c3af" colab={"base_uri": "https://localhost:8080/", "height": 34}
training_set.shape
# + id="kejR4ty3w-Ma" colab_type="code" outputId="0bde8e60-f8a0-4155-a217-bfb24fb39933" colab={"base_uri": "https://localhost:8080/", "height": 34}
test_set.shape
# + id="lrOvBnF_xATj" colab_type="code" colab={}
# SVM starts here
training_set_data = training_set.iloc[:, :].values
X = training_set_data[:, :-1]
y = training_set_data[:, -1]
test_set_data = test_set.iloc[:, :].values
X_test = test_set_data[:, :-1]
y_test = test_set_data[:, -1]
# + id="lgIHMj5_ykwc" colab_type="code" outputId="40e9c4e4-ee7d-4d21-c483-2b00472bf632" colab={"base_uri": "https://localhost:8080/", "height": 34}
X.shape
# + id="E7ckxuLTynu0" colab_type="code" outputId="cddabda4-7689-4dab-ad29-caaf9b35084b" colab={"base_uri": "https://localhost:8080/", "height": 34}
y.shape
# + id="ufdujxcRypGl" colab_type="code" outputId="97812177-5544-42c7-de17-bca59f644fba" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_test.shape
# + id="C0r5f1mQyqjV" colab_type="code" outputId="546991ca-877c-45ba-fc07-59c7cf8ae08c" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_test.shape
# + id="jUgYcsQ2yrv9" colab_type="code" outputId="f77eca47-de94-4921-cc7f-38cc74e9e793" colab={"base_uri": "https://localhost:8080/", "height": 51}
# https://pythonhow.com/measure-execution-time-python-code/
import time
# import the SVM library
from sklearn import svm
svm_classifier = svm.SVC(kernel='linear', C=1.0, gamma='scale')
start = time.time()
svm_classifier.fit(X, y)
end = time.time()
print('Time for training : ', end - start)
# Predicting the Test set results
y_pred = svm_classifier.predict(X_test)
print(accuracy_score(y_test, y_pred))
# + id="95xmCzQr0qjg" colab_type="code" outputId="2c5380f5-6a97-4a02-9a41-d193a78c3b17" colab={"base_uri": "https://localhost:8080/", "height": 1292}
# K-fold validation
# code referenced from this book - https://www.manning.com/
# books/deep-learning-with-python
k = 3
train_data = X
train_targets = y
num_val_samples = train_data.shape[0] // k
accuracy = []
times = []
# 10^{-5},10^{-4},...,0.1, 1, 3,5,10,
C_values = [1e-5, 3e-5, 9e-5, 1e-4, 3e-4, 9e-4, 1e-3, 3e-3, 9e-3, 0.01, 0.03, 0.09, 0.1, 0.3, 0.9]
C_values = C_values + list(range(1, 51, 1)) + [5*n for n in range(11,20+1)]
training_set_copy = training_set.copy()
for c in C_values:
all_scores = []
all_times = []
for i in range(k):
val_data =\
train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets =\
train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
"""
print(val_data.shape)
print(val_targets.shape)
print(partial_train_data.shape)
print(partial_train_targets.shape)
"""
val_classifier = svm.SVC(kernel='linear', C=c, gamma='scale')
start = time.time()
val_classifier.fit(partial_train_data, partial_train_targets)
end = time.time()
val_pred = val_classifier.predict(val_data)
acc = accuracy_score(val_targets, val_pred)
"""
print('Time is ', end-start)
print('Accuracy is ', acc)
print('-------------------------')
"""
all_scores.append(acc)
all_times.append(end-start)
accuracy.append(np.mean(all_scores))
times.append(np.mean(all_times))
print('C is {} | Accuracy is {} | Time is {} '.format(c, np.mean(all_scores), np.mean(all_times)))
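# The fold slicing inside the loop above can be isolated into a small helper
# (hypothetical name `kfold_slices`; sklearn's `KFold` and `GridSearchCV`
# implement the same idea). Each fold's validation slice is contiguous, and
# the training indices are everything before and after it:

```python
import numpy as np

def kfold_slices(n_samples, k):
    # yield (train_indices, val_indices) for k contiguous folds
    fold_size = n_samples // k
    for i in range(k):
        val_idx = np.arange(i * fold_size, (i + 1) * fold_size)
        train_idx = np.concatenate([np.arange(0, i * fold_size),
                                    np.arange((i + 1) * fold_size, n_samples)])
        yield train_idx, val_idx

folds = list(kfold_slices(9, 3))
print([len(v) for _, v in folds])  # [3, 3, 3]
# every sample lands in exactly one validation fold
all_val = np.concatenate([v for _, v in folds])
print(sorted(all_val.tolist()) == list(range(9)))  # True
```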
# + id="fJhmrg3aZBsG" colab_type="code" outputId="65be7548-159b-4a98-d286-74760986a29c" colab={"base_uri": "https://localhost:8080/", "height": 68}
import operator
index, value = max(enumerate(accuracy), key=operator.itemgetter(1))
print(value)
print(C_values[index])
print('Best C is {} | Accuracy is {} | Time is {} '.format(C_values[index], value, times[index]))
# + id="tnPq-f8ge0HE" colab_type="code" outputId="a6bc7763-4a48-4586-8014-6e997730bb07" colab={"base_uri": "https://localhost:8080/", "height": 51}
svm_classifier = svm.SVC(kernel='linear', C=C_values[index], gamma='scale')
start = time.time()
svm_classifier.fit(X, y)
end = time.time()
print('Time for training : ', end - start)
# Predicting the Test set results
y_pred = svm_classifier.predict(X_test)
print(accuracy_score(y_test, y_pred))
# + id="8HIQ3A4ufos_" colab_type="code" outputId="5af5adf0-3a5c-4d49-cddb-7f977c9908eb" colab={"base_uri": "https://localhost:8080/", "height": 376}
plt.plot(C_values, accuracy, 'b', label='Accuracy obtained during 3 fold cross-validation')
plt.title('C value VS Accuracy')
plt.xlabel('C Values')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# + id="E342jL-xgJO2" colab_type="code" outputId="67267d2d-d98e-40e9-f64a-a79bdd905360" colab={"base_uri": "https://localhost:8080/", "height": 376}
plt.plot(C_values, times, 'b', label='Average time taken')
plt.title(' C value VS Average training time')
plt.xlabel('C value')
plt.ylabel('Average training time')
plt.legend()
plt.show()
# + id="5x1yR6pAinwP" colab_type="code" outputId="608356a9-2e1c-4677-ba80-738102fc4104" colab={"base_uri": "https://localhost:8080/", "height": 5831}
# K-fold validation
# code referenced from this book - https://www.manning.com/
# books/deep-learning-with-python
k = 3
train_data = X
train_targets = y
num_val_samples = train_data.shape[0] // k
accuracy = []
times = []
# 10^{-5},10^{-4},...,0.1, 1, 3,5,10,
C_values = [1e-5, 3e-5, 9e-5, 1e-4, 3e-4, 9e-4, 1e-3, 3e-3, 9e-3, 0.01, 0.03, 0.09, 0.1, 0.3, 0.9, 1, 3] + [5*n for n in range(1,20+1)]
training_set_copy = training_set.copy()
DEGREE = [2, 3, 4, 5, 6, 7, 8, 9, 10]
best_c_value = []
best_accuray_value = []
for degree in DEGREE:
accuracy = []
times = []
for c in C_values:
all_scores = []
all_times = []
for i in range(k):
val_data =\
train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets =\
train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
"""
print(val_data.shape)
print(val_targets.shape)
print(partial_train_data.shape)
print(partial_train_targets.shape)
"""
val_classifier = svm.SVC(kernel='poly', degree=degree, C=c, gamma='scale')
start = time.time()
val_classifier.fit(partial_train_data, partial_train_targets)
end = time.time()
val_pred = val_classifier.predict(val_data)
acc = accuracy_score(val_targets, val_pred)
"""
print('Time is ', end-start)
print('Accuracy is ', acc)
print('-------------------------')
"""
all_scores.append(acc)
all_times.append(end-start)
accuracy.append(np.mean(all_scores))
times.append(np.mean(all_times))
print('Degree is {} | C is {} | Accuracy is {} | Time is {} '.format(degree, c, np.mean(all_scores), np.mean(all_times)))
index, value = max(enumerate(accuracy), key=operator.itemgetter(1))
print('Degree is {} | Best C is {} | Best Accuracy is {} | Time is {} '.format(degree, C_values[index] , value, times[index]))
best_c_value.append(C_values[index])
best_accuray_value.append(value)
# + id="Gnk8afunoUsi" colab_type="code" outputId="fd1033d6-bef1-4166-89d0-b6ac9e08d664" colab={"base_uri": "https://localhost:8080/", "height": 51}
print(best_c_value)
print(best_accuray_value)
# + id="Epp3GDF8xeoo" colab_type="code" outputId="e887e3e1-b661-4bcd-dee0-ddc9d5d99e8e" colab={"base_uri": "https://localhost:8080/", "height": 34}
index, value = max(enumerate(best_accuray_value), key=operator.itemgetter(1))
print('Best degree is {} | Best C is {} | Accuracy is {} '.format(DEGREE[index], best_c_value[index], value))
# + id="_wTNcJ6VyqmX" colab_type="code" outputId="1a705a3f-337c-407e-cf88-f4e04d5d3350" colab={"base_uri": "https://localhost:8080/", "height": 51}
svm_classifier = svm.SVC(kernel='poly', degree=DEGREE[index], C=best_c_value[index], gamma='scale')
start = time.time()
svm_classifier.fit(X, y)
end = time.time()
print('Time for training : ', end - start)
# Predicting the Test set results
y_pred = svm_classifier.predict(X_test)
print(accuracy_score(y_test, y_pred))
# + id="y3NakVKEy4BH" colab_type="code" outputId="47ac507e-f198-4111-c01d-068ec662e5b0" colab={"base_uri": "https://localhost:8080/", "height": 1292}
# K-fold validation
# code referenced from this book - https://www.manning.com/
# books/deep-learning-with-python
k = 3
train_data = X
train_targets = y
num_val_samples = train_data.shape[0] // k
accuracy = []
times = []
# 10^{-5},10^{-4},...,0.1, 1, 3,5,10,
C_values = [1e-5, 3e-5, 9e-5, 1e-4, 3e-4, 9e-4, 1e-3, 3e-3, 9e-3, 0.01, 0.03, 0.09, 0.1, 0.3, 0.9]
C_values = C_values + list(range(1, 51, 1)) + [5*n for n in range(11,20+1)]
training_set_copy = training_set.copy()
for c in C_values:
all_scores = []
all_times = []
for i in range(k):
val_data =\
train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets =\
train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
"""
print(val_data.shape)
print(val_targets.shape)
print(partial_train_data.shape)
print(partial_train_targets.shape)
"""
val_classifier = svm.SVC(kernel='rbf', C=c, gamma='scale')
start = time.time()
val_classifier.fit(partial_train_data, partial_train_targets)
end = time.time()
val_pred = val_classifier.predict(val_data)
acc = accuracy_score(val_targets, val_pred)
"""
print('Time is ', end-start)
print('Accuracy is ', acc)
print('-------------------------')
"""
all_scores.append(acc)
all_times.append(end-start)
accuracy.append(np.mean(all_scores))
times.append(np.mean(all_times))
print('C is {} | Accuracy is {} | Time is {} '.format(c, np.mean(all_scores), np.mean(all_times)))
# + id="THk4gEzuBYp8" colab_type="code" outputId="2ccdc286-8fbb-4218-d497-f46edd4e2499" colab={"base_uri": "https://localhost:8080/", "height": 68}
import operator
index, value = max(enumerate(accuracy), key=operator.itemgetter(1))
print(value)
print(C_values[index])
print('Best C is {} | Accuracy is {} | Time is {} '.format(C_values[index], value, times[index]))
# + id="yoG8hngbBZYw" colab_type="code" outputId="2309eece-4e98-48f6-d9f8-e9209a6f594a" colab={"base_uri": "https://localhost:8080/", "height": 51}
svm_classifier = svm.SVC(kernel='rbf', C=C_values[index], gamma='scale')
start = time.time()
svm_classifier.fit(X, y)
end = time.time()
print('Time for training : ', end - start)
# Predicting the Test set results
y_pred = svm_classifier.predict(X_test)
print(accuracy_score(y_test, y_pred))
# + id="MzUp9NKyUYIN" colab_type="code" colab={}
| SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR
import sklearn
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
import optuna
import xgboost as xgb
# -
sol_phys_train = pd.read_csv('../data/interim/solar_phys_train.csv')
sol_phys_train = sol_phys_train.drop('Unnamed: 0', axis=1)
sol_phys_train.head()
sol_phys_train.describe()
# +
X = sol_phys_train[[c for c in sol_phys_train if c != 'Radiation']].values
y = sol_phys_train[['Radiation']].values
# -
X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y,random_state=0, test_size=0.30)
# def objective(trial):
# # Invoke suggest methods of a Trial object to generate hyperparameters.
# regressor_name = trial.suggest_categorical('classifier', ['SVR', 'RandomForest'])
#
#
# if regressor_name == 'SVR':
# svr_c = trial.suggest_loguniform('svr_c', 1e-10, 1e10)
# epsilon = trial.suggest_loguniform('epsilon', 1e-1, 1e1)
# regressor_obj = sklearn.svm.SVR(C=svr_c, epsilon=epsilon, gamma='auto')
#
#
# else:
# rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)
# rf_max_estimators = trial.suggest_int('n_estimators', 1, 300)
# regressor_obj = RandomForestRegressor(max_depth=rf_max_depth, criterion='mse', n_estimators = rf_max_estimators)
#
#
# regressor_obj.fit(X_train, y_train)
# y_pred = regressor_obj.predict(X_val)
#
# error = sklearn.metrics.mean_squared_error(y_val, y_pred)
#
# return error # An objective value linked with the Trial object.
def objective_rm(trial):
# Invoke suggest methods of a Trial object to generate hyperparameters.
rf_max_depth = trial.suggest_int('rf_max_depth', 2, 64)
rf_max_estimators = trial.suggest_int('n_estimators', 1, 3000)
model = RandomForestRegressor(max_depth=rf_max_depth, criterion='mse', n_estimators = rf_max_estimators)
model.fit(X_train, y_train)
y_pred = model.predict(X_val)
error = sklearn.metrics.mean_squared_error(y_val, y_pred)
return error # An objective value linked with the Trial object.
# +
#study_rm = optuna.create_study()
#study_rm.optimize(objective_rm, n_trials=100)
#study_rm.best_params
# +
#26826.7851407973 with parameters: {'rf_max_depth': 16, 'n_estimators': 268}
# -
sc_X = MinMaxScaler()
sc_y = MinMaxScaler()
X_sc = sc_X.fit_transform(X)
y_sc = sc_y.fit_transform(y)
X_sc_train, X_sc_val, y_sc_train, y_sc_val = sklearn.model_selection.train_test_split(X_sc, y_sc, random_state=0, test_size=0.30)
def objective_xgb(trial):
# Invoke suggest methods of a Trial object to generate hyperparameters.
dtrain = xgb.DMatrix(X_train, label = y_train)
dvalid = xgb.DMatrix(X_val, label = y_val)
param = {
"silent": 1,
"objective": 'reg:linear',
"booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]),
"lambda": trial.suggest_loguniform("lambda", 1e-8, 1.0),
"alpha": trial.suggest_loguniform("alpha", 1e-8, 1.0),
'eval_metric': 'rmse'
}
if param["booster"] == "gbtree" or param["booster"] == "gblinear":
param["subsample"] = trial.suggest_loguniform("subsample", 1e-8, 1.0)
param["n_trees"] = trial.suggest_int("n_trees", 1, 1000)
param["max_depth"] = trial.suggest_int("max_depth", 1, 64)
param["eta"] = trial.suggest_loguniform("eta", 1e-8, 1.0)
param["gamma"] = trial.suggest_loguniform("gamma", 1e-8, 1.0)
param["grow_policy"] = trial.suggest_categorical("grow_policy", ["depthwise", "lossguide"])
if param["booster"] == "dart":
param["max_depth"] = trial.suggest_int("max_depth", 1, 64)
param["subsample"] = trial.suggest_loguniform("subsample", 1e-8, 1.0)
param["n_trees"] = trial.suggest_int("n_trees", 1, 1000)
param["sample_type"] = trial.suggest_categorical("sample_type", ["uniform", "weighted"])
param["normalize_type"] = trial.suggest_categorical("normalize_type", ["tree", "forest"])
param["rate_drop"] = trial.suggest_loguniform("rate_drop", 1e-8, 1.0)
param["skip_drop"] = trial.suggest_loguniform("skip_drop", 1e-8, 1.0)
bst = xgb.train(param, dtrain)
y_pred = bst.predict(dvalid)
error = sklearn.metrics.mean_squared_error(y_val, y_pred)
return error # An objective value linked with the Trial object.
study_xgb = optuna.create_study()
study_xgb.optimize(objective_xgb, n_trials=300)
study_xgb.best_params
xgb_params = study_xgb.best_params
xgb_params
dtrain = xgb.DMatrix(X_sc_train, y_sc_train)
dtest = xgb.DMatrix(X_sc_val)
cv_output = xgb.cv(xgb_params, dtrain, num_boost_round=1000, early_stopping_rounds=50,
verbose_eval=200, show_stdv=False)
cv_output[['train-rmse-mean', 'test-rmse-mean']].plot()
plt.show()
# +
num_boost_rounds = len(cv_output)
print(num_boost_rounds)
model = xgb.train(dict(xgb_params, silent=0), dtrain, num_boost_round= num_boost_rounds)
# -
sol_phys_test = pd.read_csv('../data/interim/solar_phys_test.csv')
sol_phys_test = sol_phys_test.drop('Unnamed: 0', axis=1)
sol_phys_test.head()
X_t = sol_phys_test.drop('id',axis=1)
sc_X_t = MinMaxScaler()
X_t = sc_X_t.fit_transform(X_t)
dvalidation = xgb.DMatrix(X_t)
predic_testf = model.predict(dvalidation)
submission5 = pd.DataFrame()
submission5['id'] = sol_phys_test['id']
submission5['Radiation'] = sc_y.inverse_transform(predic_testf.reshape(-1,1))
submission5['Radiation'] = submission5['Radiation'].apply(lambda x: x if x >= 0 else 0)
submission5.describe()
submission5.to_csv('../data/processed/submission5.csv')
| notebooks/Data-Wrangling-and-Model-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Science User Case - Inspecting a Candidate List
# Ogle et al. (2016) mined the NASA/IPAC Extragalactic Database (NED) to identify a new type of galaxy: Superluminous Spiral Galaxies.
#
# Here's the paper: https://ui.adsabs.harvard.edu//#abs/2016ApJ...817..109O/abstract
#
# Table 1 lists the positions of these Super Spirals. Based on those positions, let's create multiwavelength cutouts for each super spiral to see what is unique about this new class of objects.
# ## 1. Import some Python modules.
# +
#Suppress unimportant warnings.
import warnings
warnings.filterwarnings("ignore")
#Add the workshop directory to your path.
import sys
sys.path.append('workshop-dev-master')
#Import the workshop utilities.
import navo_utils.utils
from navo_utils.image import Image, ImageColumn
from navo_utils.spectra import Spectra, SpectraColumn
from navo_utils.cone import Cone
from navo_utils.tap import Tap
from navo_utils.utils import astropy_table_from_votable_response
#Import the astropy Table module.
from astropy.table import Table
# -
# ## 2. Search NED for objects in this paper.
#
# Insert a Code Cell below by clicking on the "Insert" Menu and choosing "Insert Cell Below". Then consult the "Cheat Sheet" to figure out how to search NED for all objects in a paper, based on the refcode of the paper.
# ## 3. Filter the NED results.
#
# The results from NED will include galaxies, but also other kinds of objects. Filter the results so that we only keep the galaxies in the list. Remember that Python 3 differentiates between strings and byte strings.
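# The byte-string pitfall can be sketched with a plain NumPy array (the actual NED column names and type codes may differ):
#
# ```python
# import numpy as np
#
# # Hypothetical result column: object types returned as byte strings,
# # as VOTable results often are under Python 3.
# obj_types = np.array([b'G', b'RadioS', b'G', b'QSO'])
#
# # Compare bytes to bytes: b'G' matches, a plain 'G' would not.
# galaxy_mask = obj_types == b'G'
# galaxies = obj_types[galaxy_mask]
# ```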
# ## 4. Search the NAVO Registry for image resources.
#
# The paper selected super spirals using WISE, SDSS, and GALEX images. Search the NAVO registry for all image resources, using the 'service_type' search parameter. How many image resources are currently available?
# ## 5. Search the NAVO Registry for an image resource that will allow you to search for AllWISE images.
#
# Try adding the 'keyword' search parameter to your registry search, and find the image resource you would need to search the AllWISE images.
# ## 6. Search for the AllWISE images that cover one of the galaxies in the paper.
#
# How many images are returned? Which are you most interested in?
# ## 7. Filter the list of returned AllWISE images so that you are left with only the one that you want to plot for this object.
# ## 8. Download the AllWISE image you would like to plot.
# ## 9. Plot the AllWISE image for this object.
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# ## 10. Plot a cutout of the AllWISE image, centered on your position.
#
# You choose the size of the cutout.
# ## 11. Plot AllWISE image cutouts for a subset of 3 super spirals in the NED results.
#
# We suggest only 3 because of limited wifi at the workshop. At home, you can loop through large numbers of galaxies.
# ## 12. Try steps 5-11 for GALEX.
# ## 13. Try steps 5-11 for SDSS.
# ## 14. Combine your answers for steps 11-13 to make a 3 by 3 plot. Each row should represent a super spiral, and each column should be a different wavelength from AllWISE, SDSS, and GALEX.
| EXERCISE -- Inspecting a Candidate List.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evolution of CRO disclosure over time
# +
import sys
import math
from datetime import date
from dateutil.relativedelta import relativedelta
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates
from matplotlib.ticker import MaxNLocator
import seaborn as sns
sys.path.append('../..')
from data import constants
# Setup seaborn
sns.set_theme(style="ticks", rc={'text.usetex' : True})
sns.set_context("paper")
# Read main file
df = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Data/stoxx_inference/Firm_AnnualReport_Paragraphs_with_actual_back.pkl")
df = df.set_index(["id"])
assert df.index.is_unique, "Index is not unique. Check the data!"
# Read master for scaling
df_master = pd.read_csv("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Data/stoxx_inference/Firm_AnnualReport.csv")
df_reports_count = df_master.groupby('year')['is_inferred'].sum()
# -
# ## Config
category_labels = constants.cro_category_labels
category_codes = constants.cro_category_codes
colors = sns.color_palette("GnBu", len(category_codes))
levelize_year = 2015
# ## Evolution over the years
#
# Shows the level of *average number of predicted CROs per report* (ACROR) over time, in 2015 *levels* (i.e. 2015 scaled to 1).
#
# 1. divide by amount of reports in each year
# 2. then report the levels by dividing by 2015 values
#
# Why 2015? The Paris Agreement, and simply because of missing values otherwise...
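# The two scaling steps can be sketched on toy counts (the column names and numbers below are illustrative, not the real data):
#
# ```python
# import pandas as pd
#
# # Toy paragraph counts per category and year, plus reports per year.
# counts = pd.DataFrame({'PR': [10, 20, 40], 'TR': [5, 10, 30]},
#                       index=[2014, 2015, 2016])
# reports = pd.Series([2, 4, 8], index=[2014, 2015, 2016])
#
# # 1. divide by the number of reports in each year
# acror = counts.div(reports, axis=0)
# # 2. divide by the 2015 row so that 2015 == 1
# levels = acror / acror.loc[2015]
# ```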
# +
# Create yearly bins for each category
df_years = df.groupby('year')[[f"{c}_predicted" for c in category_codes]].sum().T
# 1. Divide by number of reports in each year
df_years = df_years / df_reports_count
df_years = df_years.T
# 2. Divide by the first column to get levels
# level_column = df_years[levelize_year]
# df_years = df_years.T / level_column
# df_years = df_years.T
df_years.rename(columns={'PR_predicted': 'Physical risks', 'TR_predicted': 'Transition risks', 'OP_predicted': 'Opportunities (rhs)'}, inplace=True)
# Plot
ax = sns.lineplot(data=df_years[['Physical risks', 'Transition risks']])
ax2 = ax.twinx()
ln2 = sns.lineplot(data=df_years[['Opportunities (rhs)']], ax=ax2, palette=["green"])
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ln2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2, loc=0)
ln2.legend_.remove()
ax.set_xlabel('')
plt.xlim(min(df_years.index), max(df_years.index))
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.show()
fig = ax.get_figure()
fig.savefig('/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Thesis/figures/abs_acror_years.pdf', format='pdf', bbox_inches='tight')
# -
# ## Evolution by country
index_id = 'country'
results = {}
pd.pivot_table(df_master, values="is_inferred", index=[index_id], columns=['year'], aggfunc=np.sum, fill_value = 0)
reports_count = pd.pivot_table(df_master, values="is_inferred", index=['country'], columns=['year'], aggfunc=np.sum, fill_value = 0)
# +
def plot_grid_by_group(groups, group_column, y_max_values = [20, 500], no_columns=4):
reports_count = pd.pivot_table(df_master, values="is_inferred", index=[group_column], columns=['year'], aggfunc=np.sum, fill_value = 0)
rows = math.ceil(len(groups) / no_columns)
fig, axs = plt.subplots(rows, no_columns,
figsize=(12, 15 if rows > 1 else 5),
sharex=False,
sharey='row',
# constrained_layout=True
)
axs = axs.ravel()
max_y_axis_val = 0
for idx, c in enumerate(groups):
ax = axs[idx]
df_group = df.query(f"{group_column} == @c")
# Create yearly bins for each category
df_years = df_group.groupby('year')[[f"{c}_predicted" for c in category_codes]].sum().T
# 1. Divide by number of reports in each year
df_years = df_years / reports_count.loc[c]
# 2. Divide by the first column to get levels
# level_column = df_years[levelize_year]
# df_years = df_years.T / level_column
df_years = df_years.T
df_years.rename(columns={'PR_predicted': 'Physical risks', 'TR_predicted': 'Transition risks', 'OP_predicted': 'Opportunities (rhs)'}, inplace=True)
ax = sns.lineplot(data=df_years[['Physical risks', 'Transition risks']], ax=ax)
ax2 = ax.twinx()
ln2 = sns.lineplot(data=df_years[['Opportunities (rhs)']], ax=ax2, palette=["green"])
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ln2.get_legend_handles_labels()
fig.legend(h1+h2, l1+l2, loc="upper center", ncol=len(h1+h2))
ax.legend_.remove()
ln2.legend_.remove()
ax.set_ylim(0, y_max_values[0])
ax2.set_ylim(0, y_max_values[1])
ax.set_xlim(min(df_group.year), max(df_group.year))
ax.title.set_text(c.upper() if len(c) == 2 else c)
ax.set_xlabel('')
# Implement sharey also for the second y axis
if ((idx + 1) % no_columns) != 0:
ax2.set_yticklabels([])
fig.subplots_adjust(bottom=0.05 if rows > 2 else 0.25)
return fig
all_countries = sorted(df_master.country.unique())
all_countries_fig = plot_grid_by_group(all_countries, 'country', y_max_values=[20, 500])
all_countries_fig.savefig('/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Thesis/figures/abs_acror_countries.pdf', format='pdf', bbox_inches='tight')
# -
selected_countries_fig = plot_grid_by_group(["de", "ch", "fr", "gb"], 'country', y_max_values=[50, 500])
selected_countries_fig.savefig('/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Thesis/figures/abs_acror_selected_countries.pdf', format='pdf', bbox_inches='tight')
# ## Industry
all_industries = sorted(df_master.icb_industry.unique())
all_inudustries_fig = plot_grid_by_group(all_industries, 'icb_industry', y_max_values=[50, 500])
all_inudustries_fig.savefig('/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Thesis/figures/abs_acror_industries.pdf', format='pdf', bbox_inches='tight')
selected_industries_fig = plot_grid_by_group(["10 Technology", "30 Financials", "60 Energy", "65 Utilities"], 'icb_industry', y_max_values=[50, 500])
selected_industries_fig.savefig('/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Thesis/figures/abs_acror_selected_industries.pdf', format='pdf', bbox_inches='tight')
| notebooks/charts/CROEvolutionAbsolute.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# ## A supermarket has introduced a promotional activity in selected outlets in the city to increase sales volume. Check whether the promotional activity increased sales
# + deletable=true editable=true
# H0: Means are same
# H1: means are not same
# + deletable=true editable=true
# outlets where the sales promotion was introduced are in Sales_Out1
# + deletable=true editable=true
import pandas as mypandas
from scipy import stats as mystats
# + deletable=true editable=true
myData=mypandas.read_csv('./datasets/Sales_Promotion.csv')
SO1=myData.Sales_Out1
SO2=myData.Sales_Out2
# + deletable=true editable=true
myData
# + deletable=true editable=true
v=mystats.ttest_ind(SO1,SO2)
# + deletable=true editable=true
#H0 (sales are the same with and without the promotion) is not rejected
#when pvalue >= 0.05
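# The same two-sample test can be sketched on synthetic sales figures (the numbers below are made up, not the workshop data):
#
# ```python
# import numpy as np
# from scipy import stats
#
# rng = np.random.default_rng(42)
# promo = rng.normal(loc=1200, scale=100, size=30)    # outlets with promotion
# control = rng.normal(loc=1200, scale=100, size=30)  # outlets without
#
# result = stats.ttest_ind(promo, control)
# # with identical population means we expect a large p-value most of the time
# reject_h0 = result.pvalue < 0.05
# ```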
# + deletable=true editable=true
v
# + deletable=true editable=true
v.pvalue
# + deletable=true editable=true
SO1.mean()
# + deletable=true editable=true
SO2.mean()
# + deletable=true editable=true
#the promotional activity is not increasing sales
| T_Test_TwoSample-Sales_Promotion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import seaborn as sns
import json
import matplotlib.pyplot as plt
import re
from collections import Counter
# -
cld_dataset = pd.read_csv("dataset/chongluadaov2.csv")
cld_dataset.head()
# # URL LENGTH
legit_length = {}
phishing_length = {}
type(cld_dataset['labels'][0])
def url_length(url):
# domain = re.findall(r"://([^/]+)/?", url)[0]
# print(domain)
return len(url)
print(cld_dataset['url'][966])
print(re.findall(r"://([^/]+)/?", cld_dataset['url'][966])[0])
from urllib.parse import urlparse
domain = urlparse(cld_dataset['url'][966]).netloc
domain
# +
legit_but_long = 0
phish_but_short = 0
for index,value in enumerate(cld_dataset['labels']):
if value == 1 and url_length(cld_dataset['url'][index]) < 54:
phish_but_short +=1
if value == 0 and url_length(cld_dataset['url'][index]) > 54:
legit_but_long +=1
# -
print(legit_but_long,phish_but_short)
sns.barplot(x=['legit_but_long','phish_but_short'], y=[legit_but_long,phish_but_short])
| Generate Dataset/Visualize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # BALTRAD parallel processing
# The default VM setup is to use a single CPU core. In order to demonstrate the power of parallel processing, you must first determine whether your physical hardware has more than a single core.
#
# On Linux this is done in the terminal with the 'nproc' command.
#
# On Mac this is done in the terminal with the 'sysctl -n hw.ncpu' command.
#
# On Windows this is done graphically using the Task Manager's Performance tab.
#
# We want to tune our VM to harness the power of several CPUs. Follow these steps:
#
# 1. Shut down the IPython notebook Server (Ctrl-C, answer yes)
# 2. Shutdown the VM (click the X button in the VM window, choose power down the machine)
# 3. Select the VM in the VirtualBox Manager Window, from the menu choose Machine->Setting
# 4. Choose the System Tab, then Processor, and use the slider to set the number of Processors to 2, 4, or 8 depending on your system resources.
# 5. Click Ok, and then start the machine
# 6. Login, use the start_notebook.sh script to start the IPython server, start the notebook and you should have multiple processors!
#
# RELOAD THIS PAGE!
# ## Verify from Python the number of CPU cores at our disposal
import multiprocessing
print("We have %i cores to play with!" % multiprocessing.cpu_count())
# Yay! Now we're going to set up some rudimentary functionality that will allow us to distribute a processing load among our cores.
# ## Define a generator
# +
import os
import _raveio, odc_polarQC
# Specify the processing chain
odc_polarQC.algorithm_ids = ["ropo", "beamb", "radvol-att", "radvol-broad", "rave-overshooting", "qi-total"]
# Run processing chain on a single file. Return an output file string.
def generate(file_string):
rio = _raveio.open(file_string)
pvol = rio.object
pvol = odc_polarQC.QC(pvol)
rio.object = pvol
# Derive an output file name
path, fstr = os.path.split(file_string)
ofstr = os.path.join(path, 'qc_'+fstr)
rio.save(ofstr)
return ofstr
# -
# ## Feed the generator, sequentially
# +
import glob, time
ifstrs = glob.glob("data/se*.h5")
before = time.time()
for fstr in ifstrs:
print(fstr, generate(fstr))
after = time.time()
print("Processing time: %3.2f seconds" % (after-before))
# -
# Mental note: repeat once!
# ## Multiprocess the generator
# Both input and output are a list of file strings
def multi_generate(fstrs, procs=None):
pool = multiprocessing.Pool(procs) # Pool of processors. Defaults to all available logical cores
results = []
# chunksize=1 means feed a process a new job as soon as the process is idle.
# In our case, this restricts the queue to one "dispatcher" which is faster.
r = pool.map_async(generate, fstrs, chunksize=1, callback=results.append)
r.wait()
return results[0]
# ## Feed the monster, asynchronously!
# +
before = time.time()
ofstrs = multi_generate(ifstrs)
after = time.time()
print("Processing time: %3.2f seconds" % (after-before))
# -
| baltrad/BALTRAD parallel processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# +
# %autoreload
from utils.streams import DataStreams
from pipe import *
d = DataStreams()
# use this cautiously, do you have enough RAM?!
d.use_cache = True
# -
d.precompute_numpy_video_files()
# +
# from keras import layers
from keras import models, layers
from keras import regularizers
from keras.regularizers import l2
model = models.Sequential()
model.add(layers.Conv3D(32,(1, 3, 3), activation='relu', input_shape=(d.max_length * d.framerate, d.video_size, d.video_size, 3)))
model.add(layers.MaxPooling3D((1, 5, 5)))
model.add(layers.Conv3D(32,(1, 3, 3), activation='relu'))
model.add(layers.MaxPooling3D((1, 2, 2)))
model.add(layers.MaxPooling3D((10, 2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
# +
from keras import optimizers
from keras import callbacks
model.compile(loss='binary_crossentropy',
optimizer=optimizers.Nadam(lr=0.002),
metrics=['acc'])
filepath="weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = callbacks.ModelCheckpoint(filepath,
monitor='val_loss',
verbose=0,
save_best_only=False,
save_weights_only=False,
mode='auto',
period=1)
d.batchsize = 10
train_stream = d.video_stream_with_zeropad_batch(validation_data=False)
val_stream = d.video_stream_with_zeropad_batch(validation_data=True)
history = model.fit_generator(
train_stream,
steps_per_epoch=20,
validation_data=val_stream,
# has to be false on windows..
use_multiprocessing=False,
validation_steps = 15,
epochs=10,
callbacks = [checkpoint])
# +
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
f, (ax1,ax2) = plt.subplots(2,1)
epochs = range(len(acc))
ax1.plot(epochs, acc, 'bo', label='Training acc')
ax1.plot(epochs, val_acc, 'b', label='Validation acc')
ax1.set_title('Training and validation accuracy')
ax1.legend()
ax2.plot(epochs, loss, 'bo', label='Training loss')
ax2.plot(epochs, val_loss, 'b', label='Validation loss')
ax2.set_title('Training and validation loss')
ax2.legend()
plt.show()
| notebooks/Basic Driver Notebook With DataStreams Class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Class Metaprogramming
#
# Class metaprogramming is the art of creating or customizing classes at runtime. In Python, classes are first-class objects, so a function can create a new class at any time without using the `class` keyword. Class decorators are also functions, but they can inspect, modify, or even replace the decorated class with another one. Finally, metaclasses are the most advanced tool of class metaprogramming: they let you create whole new categories of classes with special traits, such as the abstract base classes we have already seen.
#
# The distinction between import time and runtime is an essential foundation for effective metaprogramming in Python.
#
# **Do not write metaclasses unless you are developing a framework.**
def record_factory(cls_name, field_names):
try:
field_names = field_names.replace(',', ' ').split()
except AttributeError:
pass
    # field_names will become the elements of the new class's __slots__ attribute
field_names = tuple(field_names)
    # this function will become the new class's __init__ method
def __init__(self, *args, **kwargs):
attrs = dict(zip(self.__slots__, args))
attrs.update(kwargs)
for name, value in attrs.items():
setattr(self, name, value)
    # this function will become the new class's __iter__ method
def __iter__(self):
for name in self.__slots__:
yield getattr(self, name)
    # this function will become the new class's __repr__ method
def __repr__(self):
values = ', '.join('{}={!r}'.format(*i) for i in zip(self.__slots__, self))
return '{}({})'.format(self.__class__.__name__, values)
cls_attrs = dict(__slots__ = field_names,
__init__ = __init__,
__iter__ = __iter__,
__repr__ = __repr__)
    # build the new class with the type constructor and return it
return type(cls_name, (object,), cls_attrs)
# record_factory returns a class
Dog = record_factory('Dog', ['name', 'weight', 'owner'])
# calling Dog returns an instance
rex = Dog('Rex', 30, 'Bob')
rex
name, weight, _ = rex
name, weight
# We usually treat `type` as a function, because we use it like one: for example, calling `type(my_object)` to get the object's class, the same as `my_object.__class__`. However, `type` is a class. When used as a class with three arguments, it creates a new class:
MyClass = type('MyClass', (MySuperClass, MyMixin), {'x': 42, 'x2': lambda self: self.x * 2})
# The three arguments of `type` are `name`, `bases`, and `dict`; the last is a mapping of attribute names and values for the new class. The code above does the same as:
class MyClass(MySuperClass, MyMixin):
    x = 42
    def x2(self):
        return self.x * 2
# `type` itself is a class, and the instances of `type` are also classes.
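# A quick runnable check of this relationship:
#
# ```python
# # Classes are instances of type, and type is an instance of itself.
# class Plain:
#     pass
#
# assert isinstance(Plain, type)
# assert isinstance(type, type)
#
# # A class built with the three-argument type(...) behaves like any other.
# Tmp = type('Tmp', (), {'x': 1})
# assert Tmp().x == 1
# ```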
# ## A Class Decorator for Customizing Descriptors
#
# Class decorators are closely analogous to function decorators: functions that take a class object and return the same class or a modified one.
# +
import abc
class AutoStorage:
__counter = 0
def __init__(self):
cls = self.__class__
index= cls.__counter
prefix = cls.__name__
self.storage_name = f'_{prefix}#{index}'
cls.__counter += 1
def __get__(self, instance, owner):
print('call AutoStorage.__get__')
if instance is not None:
print(f'call getattr(instance, {self.storage_name})')
return getattr(instance, self.storage_name)
else:
return self
def __set__(self, instance, value):
print('call AutoStorage.__set__')
setattr(instance, self.storage_name, value)
class Validated(abc.ABC, AutoStorage):
def __set__(self, instance, value):
value = self.validated(value)
super().__set__(instance, value)
    # abstract method: concrete subclasses of Validated must override it
@abc.abstractmethod
def validated(self, value):
"""return validated value or raise ValueError."""
# +
class Quantity(Validated):
def validated(self, value):
if value > 0:
return value
else:
raise ValueError('value must be > 0')
class NonBlank(Validated):
def validated(self, value):
value = value.strip()
if len(value) == 0:
raise ValueError('value cannot be empty or blank')
return value
# -
# Give the storage attributes of the LineItem class from Chapter 20 descriptive names.
class LineItem:
description = NonBlank()
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
# The generated storage attribute name:
LineItem.weight.storage_name
for key, attr in LineItem.__dict__.items():
print(f'{key:<15}', ': ', attr)
quan = Quantity()
isinstance(quan, Validated)
# +
def entity(cls):
print('call entity')
for key, attr in cls.__dict__.items():
if isinstance(attr, Validated):
type_name = type(attr).__name__
attr.storage_name = f'_{type_name}#{key}'
return cls
@entity
class LineItem:
description = NonBlank()
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
# -
raisins = LineItem('Golden raisins', 10, 6.95)
dir(raisins)[:3]
raisins.weight
LineItem.description.storage_name
# Class decorators have a serious drawback: they act only on the class to which they are directly applied. Subclasses of the decorated class may or may not inherit the changes the decorator made, depending on how those changes were made (i.e. the effect is not universal).
#
# A metaclass can customize a whole class hierarchy; a class decorator, by contrast, affects only one class and may have no effect on its descendants.
# ## Import Time versus Runtime
# Python's `import` statement is not merely a declaration. The first time a module is imported in a process, all of its top-level code runs; later imports of the same module use a cache and only bind names. That top-level code can do anything, including things normally done at "runtime", such as connecting to a database. In short, an `import` statement can trigger any "runtime" behavior (and may run a great deal of code).
import evaltime
# %run evaltime.py
# Although `ClassThree` is decorated, its subclass `ClassFour` is untouched by the decorator, confirming that **a class decorator affects only one class and may have no effect on its descendants.**
#
# By default, a Python class is an instance of `type`; that is, `type` is the metaclass of most built-in and user-defined classes.
# a built-in class
str.__class__
# a user-defined class
LineItem.__class__
# **`str` and `LineItem` do not inherit from `type`; rather, `str` and `LineItem` are instances of `type`.**
#
# The relationship between the `object` and `type` classes is unique: `object` is an instance of `type`, and `type` is a subclass of `object`. This relationship is "magic": it cannot be expressed in Python code, because either one would have to exist before the other could be defined.
#
# The key point: every class is an instance of `type`, but a metaclass is also a subclass of `type`, so it acts as a factory that makes classes. Concretely, by implementing `__init__` a metaclass can do everything a class decorator can do, and more.
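# A minimal sketch of the difference: a metaclass's `__init__` runs again for every subclass, which a class decorator does not:
#
# ```python
# class Stamp(type):
#     # runs for every class created with this metaclass,
#     # including subclasses defined later
#     def __init__(cls, name, bases, dic):
#         super().__init__(name, bases, dic)
#         cls.stamped_as = name
#
# class Base(metaclass=Stamp):
#     pass
#
# class Child(Base):   # no explicit metaclass needed: it is inherited
#     pass
# ```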
import evaltime_meta
# ```python
# class ClassFive(metaclass=MetaAleph):
#
# print('<[6]> ClassFive body')
#
# def __init__(self):
# print('<[7]> ClassFive.__init__')
#
# def method_z(self):
# print('<[8]> ClassFive.method_z')
#
# class MetaAleph(type):
# print('<[400]> MetaAleph body')
#
# def __init__(cls, name, bases, dic):
# print('<[500]> MetaAleph.__init__')
#
# def inner_2(self):
# print('<[600]> MetaAleph.__init__:inner_2')
#
# cls.method_z = inner_2
# ```
#
# When `evaltime_meta.py` is imported and execution reaches `five = ClassFive()`:
#
# 1. the `ClassFive.__init__` function runs;
# 2. then its metaclass `MetaAleph` takes over:
#     1. `MetaAleph.__init__` runs, with `ClassFive` passed as its first argument;
#     2. the `self` parameter of the `inner_2` function inside `MetaAleph.__init__` will ultimately refer to an instance of the class being created, i.e. an instance of `ClassFive`.
#
# The metaclass `__init__` takes four parameters: `cls`, `name`, `bases`, `dic`:
#
# 1. `cls`: refers to `<class ClassFive>`;
# 2. `name`: the string `'ClassFive'`;
# 3. `bases`: the tuple of base classes of `ClassFive`, here `base = ()` or `base = (object,)`;
# 4. `dic`: a mapping whose keys are the attribute and method names of the class being created, and whose values are the attribute values and method definitions.
#
# The `ClassSix` class never refers to `MetaAleph` directly, yet it is affected: as a subclass of `ClassFive` it is also an instance of `MetaAleph`, so it is initialized by `MetaAleph.__init__` as well.
# %run evaltime_meta.py
# ## A Metaclass for Customizing Descriptors
class EntityMeta(type):
"""
    Metaclass for creating business entities with validated fields.
"""
def __init__(cls, name, bases, attr_dict):
print(f'type(attr_dict): {type(attr_dict)}')
print('EntityMeta.__init__ start')
        # that is, type(name, bases, attr_dict)
super().__init__(name, bases, attr_dict)
for key, attr in attr_dict.items():
if isinstance(attr, Validated):
print(f'key: {key}, attr: {attr}')
                # get the name of the descriptor class
type_name = type(attr).__name__
                # e.g. change storage_name of weight's Quantity instance to '_Quantity#weight'
attr.storage_name = f'_{type_name}#{key}'
print('EntityMeta.__init__ end')
class Entity(metaclass=EntityMeta):
"""
    Business entity with validated fields.
"""
class LineItem(Entity):
description = NonBlank()
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
print('LineItem.__init__ start')
self.description = description
self.weight = weight
self.price = price
print('LineItem.__init__ end')
def subtotal(self):
return self.weight * self.price
raisins = LineItem('Golden raisins', 10, 6.95)
dir(raisins)[-4:]
raisins.price
LineItem.weight.storage_name
# ## The Metaclass `__prepare__` Method
#
# The first argument of `__prepare__` is the metaclass; the next two are the name of the class to be built and its tuple of base classes. The return value must be a mapping. When the metaclass builds the new class, the mapping returned by `__prepare__` is passed as the last argument to `__new__`, and then to `__init__`. The main job of `__prepare__` is to wrap the final mapping argument `dics` that is passed to the metaclass (`type(name, bases, dics)`).
# +
import collections
class EntityMeta(type):
@classmethod
def __prepare__(cls, name, bases):
print('call EntityMeta.__prepare__')
        # wrap the attribute mapping (attr_dict) of the class being built in an OrderedDict
return collections.OrderedDict()
def __init__(cls, name, bases, attr_dict):
print('call EntityMeta.__init__')
        # the attr_dict received here is the OrderedDict returned by __prepare__
print('type(attr_dict): ', type(attr_dict))
super().__init__(name, bases, attr_dict)
cls._field_names = []
for key, attr in attr_dict.items():
if isinstance(attr, Validated):
type_name = type(attr).__name__
attr.storage_name = f'_{type_name}#{key}'
cls._field_names.append(key)
print('end of EntityMeta.__init__')
class Entity(metaclass=EntityMeta):
@classmethod
def field_names(cls):
for name in cls._field_names:
yield name
# -
class LineItem(Entity):
description = NonBlank()
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
print('LineItem.__init__ start')
self.description = description
self.weight = weight
self.price = price
print('LineItem.__init__ end')
def subtotal(self):
return self.weight * self.price
# Note: field_names() is a classmethod, exposed on the class to make inspection convenient
for name in LineItem.field_names():
print(name)
LineItem._field_names
LineItem.__name__
# instances have no __name__, so this raises AttributeError
raisins.__name__
LineItem.__class__
raisins = LineItem('Golden raisins', 10, 6.95)
raisins.__class__
LineItem.__bases__
Entity.__subclasses__()
Entity.mro()
Entity.__mro__
# Only classes have a `__name__` attribute, and a class's `__class__` is its metaclass. An instance has no `__name__`, and its `__class__` is the class that created it (e.g. `raisins.__class__` returns `__main__.LineItem`).
# `cls.__bases__`
#
# The tuple of the class's base classes.
#
# `cls.__qualname__`
#
# New in Python 3.3: the qualified name of a class or function, i.e. the dotted path from the module's global scope to the class.
#
# ```python
# class ClassOne:
#
#     print('<[2]> ClassOne body')
#
#     def __init__(self):
#         print('<[3]> ClassOne.__init__')
#
#     def __del__(self):
#         print('<[4]> ClassOne.__del__')
#
#     def method_x(self):
#         print('<[5]> ClassOne.method_x')
#
#     class ClassTwo(object):
#         print('<[6]> ClassTwo body')
# ```
#
# For the inner class `ClassTwo`, the `__qualname__` attribute is the string `'ClassOne.ClassTwo'`, while its `__name__` attribute is just `'ClassTwo'`.
#
# `cls.__subclasses__()`
#
# Returns a list of the immediate subclasses of the class. The implementation uses weak references to avoid circular references between a class and its subclasses (a subclass holds strong references to its superclasses in its `__bases__` attribute). The returned list contains the subclasses currently in memory.
#
# `cls.mro()`
#
# The interpreter calls this method when building a class, to obtain the tuple of superclasses that is stored in the class attribute `__mro__`. A metaclass can override this method to customize the method resolution order of the class under construction.
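# These attributes can be checked on a tiny hierarchy:
#
# ```python
# class A:
#     pass
#
# class B(A):
#     pass
#
# assert B.__bases__ == (A,)
# assert A.__subclasses__() == [B]      # only direct subclasses in memory
# assert B.mro() == [B, A, object]
# ```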
# **Furthermore, do not define abstract base classes (or metaclasses) in production code.**
| Jupyter/21.*.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import scipy as sp
import os
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
from sklearn import metrics
from sklearn.linear_model import Lasso
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
import sklearn
# ## Load the dataset and clean
fifa = pd.read_csv("FIFA19data.csv", sep=r'\s*,\s*', engine='python')
fifa.head()
import re
from io import StringIO
tmp=fifa['Wage']
#tmp.to_string(index=False) #to print out to see
# strip the leading currency symbol and a trailing 'K', keeping the number
tmp2=re.sub(r'.([ 0-9.]+)K*',r'\1',tmp.to_string(index=False))
tmp3 = pd.read_csv(StringIO("0\n"+tmp2))
fifa['Wage']=tmp3
fifa['Wage'].unique()
fifa = fifa.drop('ID', 1)
fifa = fifa.drop('Name', 1)
fifa = fifa.drop('Nationality', 1)
fifa = fifa.drop('Club', 1)
fifa = fifa.drop('Value', 1)
fifa = fifa.drop('Wage', 1)
fifa = fifa.drop('Body Type', 1)
fifa = fifa.drop('Potential', 1)
fifa.head()
# +
for col in fifa.columns:
fifa[col].fillna(value=fifa[col].mode()[0], inplace=True)
factors = ['International Reputation', 'Weak Foot', 'Skill Moves', 'Work Rate', 'Position', 'Contract Valid Until']
for var in factors:
cat_list='var'+'_'+var
cat_list = pd.get_dummies(fifa[var], prefix=var)
fifa = pd.concat([fifa,cat_list], axis = 1)
fifa = fifa.drop(var, 1)
# -
X = fifa.copy()
X = X.drop('Overall', 1)
Y = fifa.copy()
Y = Y['Overall']
X_train,X_test,y_train,y_test=train_test_split(X,Y, test_size=0.9, random_state=31)
# # Basic Linear Model
lm1 = LinearRegression()
lm1.fit(X_train, y_train)
lm1_predictions = lm1.predict(X_test)
lm1_r2 = r2_score(y_test,lm1_predictions)
print(lm1_r2)
# # CrossValidation
cv_predictions = cross_val_predict(lm1, X_test, y_test, cv=5)
cv_r2 = r2_score(y_test,cv_predictions)
print(cv_r2)
#cross validation score
(cross_val_score(lm1, X_test, y_test, cv=5, )).mean()
#The same as r square
(cross_val_score(lm1, X_test, y_test, cv=5,scoring='r2' )).mean()
lm1.score(X_test,y_test)
# The cross-validated model performs better than the basic linear model.
sorted(sklearn.metrics.SCORERS.keys())
# # Lasso Regression
lasso = Lasso()
lasso.fit(X_train,y_train)
lasso1_predictions = lasso.predict(X_test)
train_score=lasso.score(X_train,y_train)
test_score=lasso.score(X_test,y_test)
coeff_used = np.sum(lasso.coef_!=0)
print("lasso training score:", train_score)
print("lasso test score: ", test_score)
print("number of features used: ", coeff_used)
#print("test r2 score: ", r2_lasso1)
# +
# Adjusted R2 comparison
lm_train_score=lm1.score(X_train,y_train)
lm_test_score=lm1.score(X_test,y_test)
# print("lasso training score:", lm_train_score)
# print("lasso test score: ", lm_test_score)
lm_ra = 1-(1-lm_train_score)*((len(X_train)-1)/(len(X_train)-len(lm1.coef_)-1))
print("linear regression adjusted R square : ",lm_ra)
print("linear regression training score : ",lm_train_score)
print("\n")
lasso_ra = 1-(1-train_score)*((len(X_train)-1)/(len(X_train)-coeff_used-1))
print("Lasso regression adjusted R square : ",lasso_ra)
print("Lasso regression training score: ",train_score)
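The adjusted R² arithmetic above can be factored into a small helper; a minimal sketch (the name `adjusted_r2` is ours, not from the notebook):

```python
def adjusted_r2(r2, n_samples, n_features):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_features - 1)

# The adjustment penalises R^2 when the feature count is large
# relative to the sample count, and barely changes it when n >> p.
print(adjusted_r2(0.90, n_samples=100, n_features=20))
print(adjusted_r2(0.90, n_samples=10000, n_features=20))
```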
# +
lasso = Lasso()
parameters = {'alpha': [1e-15, 1e-10, 1e-8, 1e-4, 1e-3,1e-2, 1, 5, 10, 20]}
lasso_regressor = GridSearchCV(lasso, parameters, cv = 5)
lasso_regressor.fit(X_train, y_train)
# -
lasso_regressor.best_params_
print("using lasso regression grid search:")
lasso_regressor.score(X_train,y_train)
coeff_used = np.sum(lasso_regressor.best_estimator_.coef_!=0)
print("coefficients used:", coeff_used)
lasso2_predictions = lasso_regressor.predict(X_test)
# # AIC BIC
# +
def AIC(y_true, y_hat, coeff_used):
resid = y_true - y_hat
sse = sum(resid**2)
n = len(y_hat)
return n*np.log(sse/n) + 2*coeff_used
def BIC(y_true, y_hat, coeff_used):
resid = y_true - y_hat
sse = sum(resid**2)
n = len(y_hat)
return n*np.log(sse/n) + np.log(n)*coeff_used
# -
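A quick numeric check of the AIC/BIC definitions above on a tiny made-up residual vector (the numbers are purely illustrative):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
k = 2  # number of fitted parameters

resid = y_true - y_hat
sse = np.sum(resid ** 2)  # 0.01 + 0.01 + 0.04 + 0.04 = 0.10
n = len(y_hat)
aic = n * np.log(sse / n) + 2 * k
bic = n * np.log(sse / n) + np.log(n) * k

# BIC's complexity penalty log(n)*k only exceeds AIC's 2k once
# n > e^2 (about 7.4), so for n = 4 BIC is the smaller of the two.
print(aic, bic)
```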
#aic and bic of simple linear model
print("aic and bic of simple linear model:")
aic_lm1 = AIC(y_test, lm1_predictions, (len(X_test.columns)+1))
print(aic_lm1)
bic_lm1 = BIC(y_test, lm1_predictions, (len(X_test.columns)+1))
print(bic_lm1)
print("aic and bic of lasso model:")
aic_lasso2 = AIC(y_test, lasso2_predictions, (coeff_used+1))
print(aic_lasso2)
bic_lasso2 = BIC(y_test, lasso2_predictions, (coeff_used+1))
print(bic_lasso2)
| ML_HW_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#import lightgbm as lgb
from sklearn.model_selection import KFold
import warnings
import gc
import time
import sys
import datetime
from sklearn.metrics import mean_squared_error
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings('ignore')
from sklearn import metrics
import scipy.stats as stats
from sklearn.model_selection import permutation_test_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.svm import LinearSVC
plt.style.use('seaborn-v0_8')  # matplotlib 3.6+ renamed the old 'seaborn' style
sns.set(font_scale=2)
pd.set_option('display.max_columns', 500)
# -
COLS = [
'HasDetections',
'AVProductStatesIdentifier',
'AVProductsInstalled',
'GeoNameIdentifier',
'CountryIdentifier',
'OsBuild',
'Census_ProcessorCoreCount',
'Census_PrimaryDiskTotalCapacity',
'Processor'
]
train = pd.read_csv("train.csv", sep=',', engine='c', usecols=COLS)
X_train, X_test, y_train, y_test = train_test_split(train.dropna().drop('HasDetections',axis = 1)\
, train.dropna()['HasDetections'], test_size=0.25)
N = len(y_test)
y_random = y_test.sample(replace=False, frac = 1)
output = pd.DataFrame(columns = ['Observation accuracy', 'Random_Data accuracy'])
def skl(col):
nominal_transformer = Pipeline(steps=[
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
preproc = ColumnTransformer(transformers=[('onehot', nominal_transformer, col)],\
remainder='drop')
clf = SGDClassifier()
pl = Pipeline(steps=[('preprocessor', preproc),
('clf', clf)
])
return pl
pl = skl(COLS[1:])
pl.fit(X_train, y_train)
pred_score = pl.score(X_test, y_test)
rand_score = pl.score(X_test, y_random)
output.loc['SGDClassifier', 'Observation accuracy'] = pred_score
output.loc['SGDClassifier', 'Random_Data accuracy'] = rand_score
output
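The `y_random` comparison above is a permutation baseline: scoring fixed predictions against shuffled labels should land near chance accuracy. A self-contained sketch with made-up balanced binary labels:

```python
import random

random.seed(0)
preds = [random.randint(0, 1) for _ in range(10000)]
labels = [random.randint(0, 1) for _ in range(10000)]

# Shuffle the labels to break any predictor/label association.
shuffled = labels[:]
random.shuffle(shuffled)

acc = sum(p == y for p, y in zip(preds, shuffled)) / len(preds)
# For balanced binary labels, chance accuracy is about 0.5.
print(acc)
```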
| Microsoft Malware Analysis/.ipynb_checkpoints/feature engineering SGDClassifier-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import seaborn as sns
import pandas_utils as pd_utils
# -
penguins = sns.load_dataset('penguins')
penguins.head()
pd_utils.describe(penguins)
# # Pairplot
# ## Pairplot will create a scatterplot of all quantitative variables
sns.pairplot(penguins)
# ## Add `hue='species'` to emphasize differences between species
sns.pairplot(penguins, hue='species')
sns.pairplot(penguins, hue='sex')
sns.pairplot(penguins, hue='island', palette='Set2')
penguins.head()
sns.catplot(data=penguins, hue='island', row='species', x='sex', kind='bar', y='bill_length_mm')
sns.catplot(data=penguins, hue='species', row='island', x='sex', kind='bar', y='bill_length_mm')
| 002_penguins.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training a Multi Layer Perceptron (MLP) with Gluon
# # Lab Objectives
#
# 1. End to end process of training a deep learning model
# 2. Data loading and data preparation with Gluon datasets, data loaders and data transformations
# 3. Building a neural network (deep learning model architecture) with Gluon Blocks
# 4. Concept of Loss function and optimizers
# 5. Training a deep learning model with Gluon
# 6. Saving the trained model
# # Step 1 - Problem definition
#
# * Given an image of the cloth, classify the clothing type as one of ['t-shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot'] classes
#
# * This is an `Image Classification` problem
# ## Import required modules from MXNet
import warnings
warnings.filterwarnings('ignore')
# +
# Uncomment the following line if matplotlib is not installed.
# # !pip install matplotlib
from mxnet import nd, gluon, init, autograd
from mxnet.gluon import nn
from mxnet.gluon.data.vision import datasets, transforms
from IPython import display
import matplotlib.pyplot as plt
import time
# -
# # Step 2 - Data collection and data preparation
#
# 1. Fashion MNIST - https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/
# 2. Fashion MNIST has 60,000 gray scale images of size 28*28 for Training
# 3. Fashion MNIST has 10,000 gray scale images of size 28*28 for Testing
# 4. With Gluon `data.vision.datasets` module, you can easily use APIs to download and prepare standard datasets.
# ## Step 2.1 - Download the data
# +
# Download the training data
fmnist_train = datasets.FashionMNIST(train=True)
# Download the validation(Test) data
fmnist_test = gluon.data.vision.FashionMNIST(train=False)
# +
print("Number of training images - ", len(fmnist_train))
train_x, train_y = fmnist_train[0]
print("Each image is of shape - ", train_x.shape)
print("Number of test images - ", len(fmnist_test))
test_x, test_y = fmnist_test[0]
print("Each image is of shape - ", test_x.shape)
# NOTE1: shape = (28*28*1) and 1 here is because these are gray scale images i.e., channel=1
# NOTE2: A color image will have 3 channels for RGB. So, an equivalent color image would have been (28*28*3)
# -
# ## Step 2.2 - Visualize sample data
text_labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat',
'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
X, y = fmnist_train[0:10]
# plot images
display.set_matplotlib_formats('svg')
_, figs = plt.subplots(1, X.shape[0], figsize=(15, 15))
for f,x,yi in zip(figs, X,y):
# 3D->2D by removing the last channel dim
f.imshow(x.reshape((28,28)).asnumpy())
ax = f.axes
ax.set_title(text_labels[int(yi)])
ax.title.set_fontsize(14)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ## Step 2.3 - Data Preprocessing (Data Transformations)
#
# 1. Convert to `channels_first` format i.e., (28, 28, 1) => (1, 28, 28). MXNet runs faster with `channels_first` format
# 2. Convert all pixel values (0 to 255) to (0 to 1) with ToTensor transforms
# 3. Normalize the pixel value with mean 0.13 and standard deviation 0.31
# 4. `mxnet.gluon.data.vision.transforms` module provides many standard data transformation APIs
# 5. `mxnet.gluon.data.vision.transforms` has `Compose` that allows us to stack a list of transformations
# +
# Chain all transforms together
transformer = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(0.13, 0.31)])
# Apply transformations on the training data
fmnist_train = fmnist_train.transform_first(transformer)
# Apply transformations on the test data
fmnist_test = fmnist_test.transform_first(transformer)
# -
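The two transforms above amount to per-pixel arithmetic: `ToTensor` scales 0-255 down to 0-1, and `Normalize(0.13, 0.31)` subtracts the mean and divides by the standard deviation. A NumPy sketch of the same math:

```python
import numpy as np

pixel = np.array([0, 128, 255], dtype=np.float32)

scaled = pixel / 255.0               # ToTensor: values now in [0, 1]
normalized = (scaled - 0.13) / 0.31  # Normalize(mean=0.13, std=0.31)

print(normalized)
```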
# ## Step 2.4 Data Loader
#
# You now have a dataset for the training data and one for the test data. But you still need to feed this data to the neural network for training; that is the job of the data loader!
#
# 1. Data Loader reads the data from dataset and feed them as batches to the neural network for model training
# 2. Data Loader is fast, efficient and can run multiple processes in parallel to feed the batches of data for model training
#
# Below, we prepare data loaders for training data and test data:
# 1. We use 4 workers to load the data in parallel
# 2. Randomly shuffle the training data (Intuition: without shuffling, the training data arrives in the same order every epoch, which encourages the network to memorize rather than learn important features)
# 3. Feed 256 images per batch during the model training
# +
batch_size = 256
# Training data loader
train_data = gluon.data.DataLoader(
fmnist_train, batch_size=batch_size, shuffle=True, num_workers=4)
# Test data loader
test_data = gluon.data.DataLoader(
fmnist_test, batch_size=batch_size, shuffle=False, num_workers=4)
# -
# # Step 3 - Define the Neural Network
# ## Gluon Basics
#
# ### I) Block
#
# 1. Takes an NDArray as input and gives out an NDArray as output
# 2. A Block can contain one or more other blocks
# 3. Each operation is a block, because, it takes an NDArray performs an operation and gives out an NDArray
# 4. Dense/FullyConnected, Convolution etc. are some examples of operation and available as block in MXNet Gluon
#
# ### II) Sequential
#
# 1. Helpful utility to stack a sequence of Blocks one on another
# 2. Under the hood, it just helps in pulling the output data from one block and push it as input to the next block and so on
# ## Multi Layer Perceptron
#
# * A Multi Layer Perceptron graphically looks like below
#
# 
# +
net = nn.Sequential()
# Input (784, 1) Output (120, 1)
# NOTE: Units => Neuron => The circles you are seeing in above image
# NOTE: 784 => 28*28 i.e., we flatten our input image of shape (28, 28, 1) to (784, 1)
net.add(nn.Dense(units=120, activation="relu"))
# Input (120, 1) Output (84, 1)
net.add(nn.Dense(units=84, activation="relu"))
# Input (84, 1) Output (10, 1)
# NOTE: We have 10 classes of clothing, hence, in last layer, we output 10 values!
net.add(nn.Dense(units=10))
# NOTE: Do not worry about how we came with number 120 and 84. You can experiment with values!
# -
# # Step 4 - Initialize the parameters in Neural Network
# Every parameter in the network needs an initial value, so initialize all the blocks (layers) in the network
# We use Xavier initialization as a standard practice
net.initialize(init=init.Xavier())
# ## Loss Function, Optimizer and Trainer
#
# * Loss Function - A way to measure the correctness
# * Optimizer - A way to make changes so we decrease the Loss (error)
# * Trainer - Updates the values(weights) in the network using Optimizer to reduce the Loss.
#
# **NOTE:** Softmax cross entropy loss is used to measure error in multi class classification problem.
#
# **NOTE:** Most widely used optimizer is Stochastic Gradient Descent (SGD)
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
# # Step 5 - Train the model
# +
def acc(output, label):
"""Helper function to calculate accuracy i.e., how
many predictions are correct.
"""
# output: (batch, num_output) float32 ndarray
# label: (batch, ) int32 ndarray
return (output.argmax(axis=1) ==
label.astype('float32')).mean().asscalar()
# -
# ### Training loop
#
# 1. Take a batch of training data from data loader
# 2. Do the forward pass (prediction)
# 3. Calculate the loss (error)
# 4. Do the backward pass (gradient - change required to reduce the loss)
# 5. Use Trainer with optimizer to make the updates (change in weights)
# 6. Continue with Step 1 with new batch of data
# +
epochs = 5
for epoch in range(epochs):
train_loss, train_acc, valid_acc = 0., 0., 0.
tic = time.time()
for data, label in train_data:
# forward + backward
with autograd.record():
output = net(data)
loss = softmax_cross_entropy(output, label)
loss.backward()
# update parameters
trainer.step(batch_size)
# calculate training metrics
train_loss += loss.mean().asscalar()
train_acc += acc(output, label)
# calculate validation accuracy
for data, label in test_data:
valid_acc += acc(net(data), label)
print("Epoch %d: loss %.3f, train acc %.3f, test acc %.3f, in %.1f sec" % (
epoch, train_loss/len(train_data), train_acc/len(train_data),
valid_acc/len(test_data), time.time()-tic))
# -
# # Step 6 - Save the model
# Finally, we save the trained parameters onto disk, so that we can use them later.
net.save_parameters('fashion_mnist.params')
# # Summary
#
# In this lab, we learnt about:
#
# 1. End to end process of training a deep learning model
# 2. Gluon Dataset, Data Loader and Transformers for data loading and preparation
# 3. Gluon Block - gluon.nn.Sequential, gluon.nn.Dense
# 4. Loss function - gluon.loss.SoftmaxCrossEntropyLoss
# 5. Trainer and optimizer - gluon.Trainer and SGD
# 6. Forward -> Loss -> Backward -> Update -> Repeat
# # References
# 1. https://beta.mxnet.io/guide/crash-course/4-train.html
# 2. https://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-scratch.html
# 3. https://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-gluon.html
# 4. https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/
# 5. https://mxnet.incubator.apache.org/tutorials/gluon/datasets.html
| lab2-mlp-fashion-mnist/MXNet_Gluon_MLP_FashionMNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PaddlePaddle 2.0.0b0 (Python 3.5)
# language: python
# name: py35-paddle1.2.0
# ---
# # Xiaomei - WeCom Appointment Bot
#
# An intelligent appointment bot on WeCom (WeChat Work), built with PaddleNLP, Wechaty and Rasa
#
# [Project GitHub repository](https://github.com/lhr0909/appointment-bot)
#
# ## Motivation
#
# I am a regular at an appointment-only, Japanese-style hair salon. Talking with the owner, I learned that they do not want walk-in customers to be able to book the head stylist right away, and that they need to manage all their members quickly and separate them into different membership tiers. We felt that Youzan and Dianping cannot restrict which customers get access, whereas WeCom, with tag-based contact management, can meet the membership-management need without losing the appointment-only character of the salon. I also noticed that the salon's schedule management is very primitive (it is kept on paper), and because the salon has few staff, customers are not always reminded to show up, which leads to lost bookings. So we agreed with the owner to launch an appointment bot on WeCom that records bookings in the WeCom calendar and uses WeChat features to send reminders automatically.
#
# ## Conversation flow design
#
# [BotSociety design](https://app.botsociety.io/2.0/designs/60938d492b61046af4d28f70/edit?x=820&y=103)
#
# ## Bot architecture
#
# The bot's dialogue core uses the [Rasa](https://rasa.com) framework. Rasa's Chinese support is limited, so I used PaddleNLP's Transformer API to generate an embedding for each sentence as its feature vector, integrated into Rasa to provide Chinese intent recognition and CRF-based named entity recognition. With the bert-wwm-chinese pretrained model the results are quite good. For the WeChat side we use our [in-house chat-operator middleware](https://xanthous.cn/posts/chat-operator), connected directly to WeCom through the wxwork Wechaty puppet provided by 句子互动.
#
# Time-range recognition uses Facebook's duckling. Based on the recognized time range, the bot matches events in the WeCom calendar for that period, finds bookable slots, and offers them to the user. Once a slot is chosen, an event is created in the WeCom calendar, helping the shop manage each day's schedule.
#
# ## Experience with PaddlePaddle
#
# We used PaddleNLP's Transformer; overall it is not far from HuggingFace's Transformers package. Note that the data must be converted to Paddle tensors before calling the model (with the `paddle.to_tensor()` method). Given more time I would try using the Transformer for named entity recognition as well; I do not have enough data for this scenario yet, so I used the CRF algorithm provided by Rasa instead. Once the salon deployment is live, I hope to collect more data.
#
# ## Current issues and limitations
#
# 1. PaddleNLP's Bert model sometimes returns inconsistent embeddings on forward passes, which makes intent-recognition accuracy unstable. This may be related to my integration approach; I would like to discuss it with the Paddle engineers to see whether it can be improved.
# 2. The Wechaty-Rasa integration is still loose; for lack of time there is no Rasa Connector component based on Python Wechaty yet. After the competition I hope to add it, along with a WeCom message entry point, a welcome flow for users who add the bot, and most importantly a way to match WeCom's external_id with Wechaty's contact id.
# 3. Details of the salon services and their follow-up questions need further discussion with the owner; currently there is only a simple service-to-duration mapping, and more rounds of questions will be needed to estimate the total service time (see the conversation flow design above).
# 4. Appointment times still have edge cases to handle, such as business-hours limits. The time-slot logic was also written in a hurry and may contain bugs.
#
# ## Screenshots
#
# 
# 
# 
#
#
| notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
#
# # Universidad de Costa Rica
# ### Facultad de Ingeniería
# ### Escuela de Ingeniería Eléctrica
# #### IE0405 - Modelos Probabilísticos de Señales y Sistemas
# # Programming Lab 1: Probability concepts
# *Prepared by:* **<NAME>** and **<NAME>**
# #### *Professor:* **<NAME>**
# #### *Group 01*
# #### *I - 2020*
# #### *Student:* **<NAME> B61254**
#
#
#
# ## 1) Classical definition of probability
#
# ## Example 1.1
#
# **Worked example**: the sample space of elementary outcomes when rolling a die is:
#
# $$E = \left\lbrace 1,2,3,4,5,6 \right\rbrace$$
#
# What is the probability of each outcome? [1]
espacio_muestras = [1,2,3,4,5,6]
n = len(espacio_muestras)
probabilidad = 1.0/n
print(probabilidad)
# 
# What is the probability of rolling an even number?
numeros_pares = [i for i in espacio_muestras if i % 2 == 0]  # 'is 0' compares identity, not value
h = len(numeros_pares)
probabilidadpares = float(h)/n
print(probabilidadpares)
# 
# ## Example 1.2
# * What is the probability that a card drawn from a deck (52 cards in four suits) has a value divisible by 6?
# +
# Build the sample space (one suit: values 1-13)
palo = [i+1 for i in range(13)]
# Check divisibility by 6 ('is 0' would compare identity, not value)
divisible_6 = [i for i in palo if i % 6 == 0]
'''
Since there are four suits, the relative
frequency of the values divisible
by 6 is:
'''
probabilidad_6 = (4*float(len(divisible_6)))/(4*len(palo))
print("The probability is {0:0.2f}".format(probabilidad_6))
# -
# 
# ## 2) Independent events
# ## Example 2.1
# **Worked example**: the following is an example of an "elegant" solution to a common problem.
#
# > What is the probability that, when rolling two dice, their sum is 7? [2]
#
# The result is easy to deduce: of 36 possible combinations, six sum to seven (1 + 6, 2 + 5, 3 + 4, 4 + 3, 5 + 2, 6 + 1), so 6/36 = 1/6 $\approx$ 0.16667.
#
# **Alternative solution**
#
# First, the `defaultdict` object from the `collections` [module](https://docs.python.org/2/library/collections.html) creates dictionaries with default values when it encounters a new key. In practice it is a "fillable dictionary".
from collections import defaultdict
# Now we can create a dictionary with every possible combination and its sum, using a double `for` comprehension (note how powerful Python's syntax is here):
d = {(i,j) : i+j for i in range(1, 7) for j in range(1,7)}
print(d)
# Next we create an empty `defaultdict`. This means that, later on, if a key is not found in the dictionary, instead of a `KeyError` a new entry (a new `key:value` pair) is created. If you do not know what `KeyError` or `key:value` means, read up on Python dictionaries.
dinv = defaultdict(list)
print(dinv)
# We can extract from the dictionary the combinations that sum to 7. The `.items()` method generates a list of "tuple" pairs (a tuple is an ordered, immutable collection of elements) from the combinations dictionary created in `d`. We "fill" the `defaultdict` with the elements of that dictionary using the `.append()` method inside a `for` loop, where the indices `i,j` are the combination pairs and their sums. The advantage is that they are now all grouped.
# +
print('Before...\n')
print(d.items())
for i,j in d.items():
    dinv[j].append(i)
print('\nAfter...')
dinv
# -
# The `for` above reads as: "for each pair in the list of items, at position `j` (the sum of the combination) append the corresponding combination (in `i`)".
#
# We extract the pairs that sum to seven and count them.
print('Combinations that sum to 7:', dinv[7])
print('Elements:', len(dinv[7]))
# Finally, and more generally, we obtain the probability of every sum as a single dictionary:
probabilidades = {i : len(j)/36 for i,j in dinv.items()}
print('The vector of sum probabilities is =', probabilidades)
print('The probability that the sum is 7 is =', probabilidades[7])
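The same inversion can be written with `collections.Counter`, which tallies each sum directly:

```python
from collections import Counter
from fractions import Fraction

# Count how many of the 36 ordered rolls produce each sum.
sums = Counter(i + j for i in range(1, 7) for j in range(1, 7))
p7 = Fraction(sums[7], 36)
print(p7)
```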
# ### Result ------ What is the probability that, when rolling two dice, the sum is 7?
# # A/
#
# 
#
#
# ## Example 2.2
# With the previous method, a larger problem can now be solved:
#
# * License plates in the country have three letters and three digits. In a certain game, some friends bet a tamarind drink on who first spots a plate whose digits sum to: 10 for A(ndrea), 15 for B(renda) and 20 for C(arlos). Who is most likely to win?
# +
# Solving the plates-and-sum problem
# Example 2.2
# License plates have three letters and three digits.
# Some friends bet a tamarind drink on who first spots a plate
# whose digits sum to: 10 for A(ndrea), 15 for B(renda) and
# 20 for C(arlos). Who is most likely to win?
# How to run it:
# python3 PlacasConSumaDigitosDeterminado.py
# Import the required package
from collections import defaultdict
# Dictionary with every combination of the three digits (0-9 each) and its sum.
# NOTE: this needs range(10) for all three digits; using only two digits
# with range(0, 9) would miss the digit 9 and the third digit entirely.
d = {(i,j,k) : i+j+k for i in range(10) for j in range(10) for k in range(10)}
print("\n")
print("Printing the possible plate-digit combinations and their sums")
print("Format: (digit 1, digit 2, digit 3): sum of the plate digits\n")
print(d)
# Create an empty defaultdict so that a missing key is created on first
# access instead of raising an error; it starts as a subset of the
# original dictionary and, once filled, covers the whole universe of sums
dinv = defaultdict(list)
print("\nEmpty dictionary: ", dinv)
print('\nBefore grouping the combinations\n')
print(d.items())
# Group the combinations by their sum
for i,j in d.items():
    dinv[j].append(i)
print('\nAfter grouping the combinations by sum:\n')
print(dinv)
# There are 10x10x10 = 1000 equally likely digit combinations
probabilidades = {i : len(j)/1000 for i,j in dinv.items()}
print('\nThe vector of probabilities (probability of each digit sum): \n', probabilidades)
# Pick the combinations that sum to 10
print('\nCombinations that sum to 10 (Andrea): ', dinv[10])
print('\nNumber of combinations summing to 10: ', len(dinv[10]))
print('\nThe probability that the sum is 10 is = ', probabilidades[10])
# Pick the combinations that sum to 15
print('\nCombinations that sum to 15 (Brenda): ', dinv[15])
print('\nNumber of combinations summing to 15: ', len(dinv[15]))
print('\nThe probability that the sum is 15 is = ', probabilidades[15])
# Pick the combinations that sum to 20
print('\nCombinations that sum to 20 (Carlos): ', dinv[20])
print('\nNumber of combinations summing to 20: ', len(dinv[20]))
if (len(dinv[20]) == 0):
    print('\nThe probability that the sum is 20 is = 0')
else:
    print('\nThe probability that the sum is 20 is = ', probabilidades[20])
# Last digits of the plates:
# Compute the probability that the last two digits of the
# plate repeat among fifteen randomly observed cars.
# -
# 
# * **Last digits of the plates**: compute the probability that the last two digits of the plate repeat among fifteen randomly observed cars.
# +
# Example 2.2
# Plate-coincidence part
# Last digits of the plates: compute the probability that the
# last two digits of the plate repeat among fifteen randomly
# observed cars.
# Using Laplace's rule and combinatorics.
# Since this is a coincidence problem, look at the 00-99
# two-digit endings of the plate = 100 possibilities.
print("\nThis program computes the probability of a coincidence in the last two digits of a plate\n")
print("It answers the second question of Example 2.2\n")
# Start from probability 1
probabilidad15 = 1.0
# Number of plates observed
carros = 15
print("Number of plates analysed (cars seen): ", carros)
# Two combined digits: 10x10 = 100 possibilities
# Compute the probability that all endings are distinct
for i in range(carros):
    probabilidad15 = probabilidad15 * (100-i)/100
print("\nThe probability that the last two digits of a plate coincide is {0:.2f}" .format(1 - probabilidad15))
# -
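The loop above is the classic birthday-problem computation with 100 equally likely values (the two-digit endings 00-99). A reusable sketch (the function name `collision_probability` is ours, not from the lab):

```python
def collision_probability(n_draws, n_values):
    """P(at least one repeat) among n_draws uniform draws from n_values values."""
    p_all_distinct = 1.0
    for i in range(n_draws):
        p_all_distinct *= (n_values - i) / n_values
    return 1 - p_all_distinct

# 15 plates, 100 possible two-digit endings
print(round(collision_probability(15, 100), 2))
# 50 people, 365 birthdays (the setting of Example 2.3)
print(round(collision_probability(50, 365), 2))
```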
# 
# ## Example 2.3
# * **The birthday coincidence**: at a party attended by 50 people, a bold friend claims that at least two people at the party must share a birthday. Should we believe her? [3] (Pose the solution for _N_ attendees.)
#
# +
# Example 2.3
# The birthday coincidence:
# At a party attended by 50 people, a bold friend claims
# that at least two people at the party must share a birthday.
# Should we believe her?
# Using Laplace's rule and combinatorics.
# Since this is a day-coincidence problem, look at the 365 days of the year.
print("\nThis program computes the probability of a shared birthday at an event with N people\n")
print("First it answers the opening question of Example 2.3\n")
# Start from probability 1
probabilidad50 = 1.0
# Attendees present: 50 in this case
asistentes = 50
print("Number of attendees at the party: ", asistentes)
# Compute the probability that all birthdays are distinct
for i in range(asistentes):
    probabilidad50 = probabilidad50 * (365-i)/365
print("\nThe probability that a birthday coincides is {0:.2f}" .format(1 - probabilidad50))
npersonasAsistentes = int(input("Enter the number of people attending the party:\n"))
probabilidadN = 1.0
print("\nConfirming the number of attendees: ", npersonasAsistentes)
for iterador in range(npersonasAsistentes):
    probabilidadN = probabilidadN * (365-iterador)/365
print("\nThe probability that a birthday coincides is {0:.2f}" .format(1 - probabilidadN))
# -
# 
# # Bayes' Theorem
#
# Bayes' theorem, also known as the "inverse conditional probability rule", has the general form
#
# $$P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}$$
#
# It is useful when the premise of an outcome is questioned.
#
# ## Example 3.1
#
# * A modern building (the School of Electrical Engineering) has two elevators for student use. The first elevator is used 45% of the time, while the second is used the rest of the time. Continued use causes failures in 5% of uses of the first elevator and 8% of uses of the second. One day the alarm of one of the elevators goes off because it has failed. Compute the probability that it was the first elevator.
# +
'''
A first intuitive approach to the solution
says that, since the first elevator is used
less often and fails at a lower rate, the
probability that it triggered the alarm
should be below 50%.
'''
# General data
p_ascensores = [0.45, 0.55]
p_fallo = [0.05, 0.08]
'''
With [P(A), P(B)] = p_ascensores and
[P(F|A), P(F|B)] = p_fallo, then
          O
         / \
        /   \
     P(A)   P(B)
     /         \
  P(F|A)     P(F|B)
   /             \
  F               F
'''
# Total probability of failing
p_fallar = sum([p_ascensores[i] * p_fallo[i] for i in range(len(p_ascensores))])
'''
If P(F) = p_fallar is the probability of failing
(and of the alarm sounding), then Bayes' theorem
reads:
          P(F|A) P(A)
P(A|F) = -----------
             P(F)
'''
# Apply Bayes' theorem
p_bayes = [(p_fallo[i] * p_ascensores[i]) / p_fallar for i in range(len(p_ascensores))]
print("The probability that it was the first elevator is {A:0.2f}. And the second, {B:0.2f}".format(A=p_bayes[0]*100, B=p_bayes[1]*100))
# -
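The elevator numbers can be verified by hand; a minimal sketch of the posterior computation, using the values from the statement above:

```python
# Prior probability of using each elevator, and failure rate per elevator.
p_use = [0.45, 0.55]
p_fail = [0.05, 0.08]

# Law of total probability: overall chance of a failure.
p_f = sum(u * f for u, f in zip(p_use, p_fail))

# Bayes' theorem: P(elevator | failure) for each elevator.
posterior = [u * f / p_f for u, f in zip(p_use, p_fail)]
print([round(p, 4) for p in posterior])
```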
# ### Result ------ Total probability of failure, Bayes' theorem
# # A/
# 
#
# ## Example 3.2
# * There is a city of 50 000 inhabitants, with the following population distribution:
#
# $$\begin{matrix} \text{Girls} & \text{Boys} & \text{Women} & \text{Men} \\ 11000 & 9000 & 16000 & 14000 \end{matrix}$$
#
# There is also a report of 9000 cases of a new variety of the virus-that-must-not-be-named, distributed as follows:
#
# $$\begin{matrix} \text{Girls} & \text{Boys} & \text{Women} & \text{Men} \\ 2000 & 1500 & 3000 & 2500 \end{matrix}$$
#
# Is the probability of catching the virus related to membership in a demographic group? This can be studied by analysing the probabilities $P(\text{has the flu} \mid \text{belongs to group X})$ [4]
# +
#Ejemplo 3.2
#Utilizando el teorema de Bayes
print("\nHay una ciudad de 50 000 habitantes, con la siguiente distribución de población: Niñas 11000 Niños 9000 Mujeres 16000 Hombres 14000. Hay también un reporte de 9000 casos de una nueva variedad de virus que-no-debe-ser-mencionada, distribuidos de la siguiente forma: Niñas 2000 Niños 1500 Mujeres 3000 Hombres 2500 ¿Está la probabilidad de contraer el virus relacionada con la pertenencia a un sector demográfico? Esto puede estudiarse analizando las probabilidades 𝑃(tener gripe∣pertenece a X sector\n")
#Forma de ejecutarlo
#python3 InfeccionPoblacionTeoremaBayes.py
#se calcula probailidad para cada sector de la poblacion no infectada
MuestraCiudad = 50000
NinasNaci = 11000
NinosNaci = 9000
MujeresNaci = 16000
HombresNaci = 14000
#Calculando La probabilidad de un evento A se define a priori
#(sin experimentación) como:
probabilidadNinasNaci = round((NinasNaci/MuestraCiudad),2)
probabilidadNinosNaci = round((NinosNaci/MuestraCiudad),2)
probabilidadMujeresNaci = round((MujeresNaci/MuestraCiudad),2)
probabilidadHombresNaci = round((HombresNaci/MuestraCiudad),2)
# print("Probibilidad de una niña en la ciudad: ", probabilidadNinasNaci )
# print("Probibilidad de una niño en la ciudad: ", probabilidadNinosNaci )
# print("Probibilidad de una mujer en la ciudad: ", probabilidadMujeresNaci )
# print("Probabilidad de hombre en la ciudad: ", probabilidadHombresNaci)
#se calcula probailidad Poblacion infectada
PoblacionInfectadaTotal = 9000
CasosNinas = 2000
CasosNinos = 1500
CasosMujeres = 3000
CasosHombres = 2500
#Calculando La probabilidad de un evento A se define a priori
#(sin experimentación) como:
probabilidadNinasInfec = round((CasosNinas/PoblacionInfectadaTotal),2)
probabilidadNinosInfec = round((CasosNinos/PoblacionInfectadaTotal),2)
probabilidadMujeresInfec = round((CasosMujeres/PoblacionInfectadaTotal),2)
probabilidadHombresInfec = round((CasosHombres/PoblacionInfectadaTotal),2)
# print("Probability of an infected girl in the city: ", probabilidadNinasInfec )
# print("Probability of an infected boy in the city: ", probabilidadNinosInfec )
# print("Probability of an infected woman in the city: ", probabilidadMujeresInfec )
# print("Probability of an infected man in the city: ", probabilidadHombresInfec)
# General data
p_nacimientos = [probabilidadNinasNaci, probabilidadNinosNaci, probabilidadMujeresNaci, probabilidadHombresNaci]
#p_nacimientos = [0.22, 0.18, 0.32, 0.28]
p_infectados = [probabilidadNinasInfec, probabilidadNinosInfec, probabilidadMujeresInfec, probabilidadHombresInfec]
#p_infectados = [0.22, 0.17, 0.33, 0.28]
###
####With [P(A), P(B)] = p_nacimientos and
###[P(F|A), P(F|B)] = p_infectados, then:
# O
# / \
# / \
# P(A) P(B)
# / \
# P(F|A) P(F|B)
# / \
# F F
#####
# Total probability of becoming infected
p_infeccion = sum([p_nacimientos[d] * p_infectados[d] for d in range(len(p_nacimientos))])
####
##If P(F) = p_infeccion is the total probability of
###becoming infected (and the case being detected),
###then Bayes' theorem reads:
###
## P(F|A) P(A)
##P(A|F) = -----------
## P(F)
###
# Applying Bayes' theorem
p_bayes = [(p_infectados[i]*p_nacimientos[i]) / p_infeccion for i in range(len(p_nacimientos))]
print("The probability of infection for girls is {A:0.2f}%, for boys {B:0.2f}%, for women {C:0.2f}% and for men {D:0.2f}%".format(A=p_bayes[0]*100, B=p_bayes[1]*100, C=p_bayes[2]*100, D=p_bayes[3]*100))
# -
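# The same posterior computation can be wrapped in a small reusable function. A minimal sketch (the name `bayes_posterior` is ours, not part of the lab, and it skips the rounding done above):

```python
def bayes_posterior(priors, likelihoods):
    """Return P(A_i | F) for each sector, given priors P(A_i) and likelihoods P(F | A_i)."""
    # total probability of the evidence F (law of total probability)
    p_f = sum(p * l for p, l in zip(priors, likelihoods))
    # Bayes' theorem applied per sector
    return [p * l / p_f for p, l in zip(priors, likelihoods)]

# city data: girls, boys, women, men
priors = [11000 / 50000, 9000 / 50000, 16000 / 50000, 14000 / 50000]
likelihoods = [2000 / 9000, 1500 / 9000, 3000 / 9000, 2500 / 9000]
posterior = bayes_posterior(priors, likelihoods)
```

# The posteriors sum to 1, and women remain the most likely sector among the infected.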
# 
| Jupyter/Lab1ConceptosInicialesProbabilidad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import string
import numpy as np
import matplotlib.pyplot as plt
import copy
class Point:
""" Point Class """
    def __init__(self, tup):
        # single constructor taking an (x, y) tuple; a second __init__ would
        # silently override the first, so only the tuple form is kept
        self.x = tup[0]
        self.y = tup[1]
@classmethod
def input_point(point):
""" Takes X-Coord and Y-Coord from user to form a point """
return point(
int(input(' X-Coord: ')),
int(input(' Y-Coord: ')),
)
def __str__(self):
""" Displays point's coordinates """
return "(" + str(self.x) + ", " + str(self.y) + ")"
def dir(A,B,P):
#print(A,B,P)
c=copy.deepcopy(B)
d=copy.deepcopy(P)
c.x -= A.x
c.y -= A.y
d.x -= A.x
d.y -= A.y
#print(c,d)
    # determine the cross product
cross_product = c.x * d.y - c.y * d.x
#print(cross_product)
if (cross_product > 0):
return 1
elif (cross_product < 0):
return -1
else:
return 0
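# The sign convention used by `dir` can be sanity-checked on plain (x, y) tuples. A self-contained sketch of the same cross-product test (the helper `orientation` is ours):

```python
def orientation(a, b, p):
    """Sign of the cross product (b - a) x (p - a):
    1 if p is left of the ray a->b, -1 if right, 0 if collinear."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (cross > 0) - (cross < 0)
```

# For the ray from (0, 0) to (1, 0): (0, 1) lies to the left, (0, -1) to the right, and (2, 0) is collinear.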
# +
import math
i=0
j=0
k=0
l=0
infile=open('u.txt', 'r')
cord = infile.read().split(' ')
cord = list(map(int, cord))
non_ext=[]
points=[tuple(cord[z: z + 2]) for z in range(0, len(cord), 2)]
#print(len(points))
for i in range (0,len(points)):
Pi=Point(points[i])
for j in range (0,len(points)):
if j==i:
continue
Pj=Point(points[j])
for k in range (0,len(points)):
if k==j or k==i:
continue
Pk=Point(points[k])
for l in range (0,len(points)):
                if l==i or l==j or l==k:
continue
Pl=Point(points[l])
a=copy.deepcopy(Pi)
b=copy.deepcopy(Pj)
c=copy.deepcopy(Pk)
d=copy.deepcopy(Pl)
if dir(a,b,c)==0:
continue
elif dir(a,b,d)==dir(b,c,d) and dir(a,b,d)==dir(c,a,d):
non_ext.append(points[l])
else:
continue
#print(non_ext)
set1 = set(points)
set2 = set(non_ext)
pp=list(set1.difference(set2))
cent=(sum([p[0] for p in pp])/len(pp),sum([p[1] for p in pp])/len(pp))
# sort by polar angle
pp.sort(key=lambda p: math.atan2(p[1]-cent[1],p[0]-cent[0]))
print (pp)
convex_hull=copy.deepcopy(pp)
convex_hull.append(convex_hull[0])
xs, ys = zip(*convex_hull) #create lists of x and y values
plt.figure()
plt.plot(xs,ys)
for item in points:
plt.plot([item[0]],[item[1]],marker='o', markersize=3, color="red")
plt.show()
# -
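# The elimination loop above marks a point as non-extreme when it lies inside a triangle formed by three other points. A self-contained sketch of that point-in-triangle test (the helper `in_triangle` is ours):

```python
def in_triangle(a, b, c, p):
    """True if point p lies strictly inside triangle abc
    (p is on the same side of all three edges)."""
    def side(u, v, q):
        # cross product (v - u) x (q - u)
        return (v[0] - u[0]) * (q[1] - u[1]) - (v[1] - u[1]) * (q[0] - u[0])
    s1, s2, s3 = side(a, b, p), side(b, c, p), side(c, a, p)
    return (s1 > 0 and s2 > 0 and s3 > 0) or (s1 < 0 and s2 < 0 and s3 < 0)

inside = in_triangle((0, 0), (4, 0), (0, 4), (1, 1))
outside = in_triangle((0, 0), (4, 0), (0, 4), (5, 5))
```

# Interior points like (1, 1) can never be hull vertices, which is exactly why the loop discards them.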
| .ipynb_checkpoints/extreme_points-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# # Convert quantities
# *Quantities can be converted to other forms and units with PyUnitWizard.*
#
# To ease unit conversion between the supported forms, PyUnitWizard includes the method `pyunitwizard.convert()`, which converts a quantity to a different unit and, optionally, to a different form:
import pyunitwizard as puw
puw.configure.load_library(['pint', 'simtk.unit'])
q = puw.quantity(value=3.0, unit='joules', form='simtk.unit')
q2 = puw.convert(q, to_unit='kilocalories', to_form='pint')
q2
# If the output form is not specified with the argument `to_form`, a quantity with the same form is obtained:
q = puw.quantity(value=1000.0, unit='kN/m**2', form='pint')
q2 = puw.convert(q, to_unit='atmospheres')
q2
puw.get_form(q2)
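# The `kN/m**2` to atmospheres conversion above can be checked by hand: 1 atm is defined as exactly 101325 Pa. A dependency-free sketch of the arithmetic:

```python
# 1000 kN/m^2 = 1,000,000 Pa; 1 atm = 101325 Pa by definition
pressure_pa = 1000.0 * 1000.0          # kN/m^2 -> Pa
pressure_atm = pressure_pa / 101325.0  # Pa -> atm (about 9.87 atm)
```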
| docs/contents/Convert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:retropy]
# language: python
# name: conda-env-retropy-py
# ---
# %run retropy.ipynb
# +
bonds_1B_AUM = 'AGG|LQD|BND|TIP|BSV|VCSH|VCIT|PFF|HYG|BIV|JNK|EMB|MBB|CSJ|SHY|BNDX|MINT|BKLN|SHV|IEF|CIU|TLT|IEI|FLOT|PGX|GOVT|PCY|SCHZ|VMBS|VTIP|SJNK|EMLC|TOTL|SHYG|SPSB|FPE|NEAR|SCHP|SPIB|VCLT|BLV|SRLN|SCHO|BOND|TDTT|GVI|IUSB|VRP|VGSH|BIL|FLRN|PGF|VGIT|HYS|CRED|BWX|FTSM|STIP|SPAB|FTSL|HYLS|ISTB|ANGL|SCHR|PHB|BSCK|GSY|BSJI|BSCJ|IPE|STPZ'
bonds_1B_AUM = bonds_1B_AUM.split('|')
bonds_100M_AUM = 'VWOB|BSCI|BAB|LMBS|BSJJ|GBIL|TDTF|IGOV|CORP|BSCL|CLY|SPTL|ITE|BSCM|SLQD|BSJK|EMHY|EDV|IAGG|VGLT|PSK|SNLN|VBND|PFXF|BSCH|IBDM|WIP|TLH|AGZ|IBDL|CLTL|IBDK|HYLB|IGHG|EBND|IBDN|HYEM|AGGP|IBDH|FBND|RIGS|BSJH|LEMB|BSJL|BSCN|AGGY|IBDO|ILTB|SPLB|IBDQ|CMBS|LTPZ|FLTR|MBG|BWZ|PGHY|SPFF|BSCO|SPTS|GHYG|FIXD|PLW|VNLA|MINC|ELD|IBND|ZROZ|USHY|IBDP|BSJM|IHY|TIPX|NFLT|PICB|GIGB|VRIG|HYLD|FFTI|LDUR|FTSD|HYHG|ICSH|FIBR|RAVI|GNMA|IBDJ|JPGB|HYGH|QLTA|GBF|DWFI|FLTB|JPST|BLHY|IBDC|HYLV'
bonds_100M_AUM = bonds_100M_AUM.split('|')
bonds_tax_pref_100M_AUM = 'MUB|SHM|TFI|HYD|VTEB|ITM|PZA|SUB|CMF|HYMB|NYF|MUNI|FMB|PWZ|SMB|IBMI|IBMH|IBMG|MLN|IBMK|IBMJ|SHYD|XMPT'
bonds_tax_pref_100M_AUM = bonds_tax_pref_100M_AUM.split('|')
alts_100M_AUM = 'TQQQ|SSO|TBT|FAS|QLD|SH|NUGT|UPRO|QAI|SVXY|SDS|VXX|UYG|JNUG|SPXL|TNA|SOXL|UUP|TBF|TECL|SPXU|UCO|TZA|SQQQ|UDOW|DDM|ERX|BIB|LABU|MORL|UVXY|TMV|MNA|PBP|PSQ|SPXS|DUST|QID|FXE|PUTW|ROM|USLV|EDC|RWM|CEFL|BDCL|DOG|YINN|SCO|AGQ|TVIX|FXB|ZIV|UWM|EUO|BRZU|DWT|QYLD|UWT|WTMF|DXD|FXC|URE|JPHF|FAZ|HDGE|MVV|CURE|URTY|FXF|SDOW|EUM|VIXY|RUSL|FTLS|DGAZ|FXA|YCS|DIG|TWM|CCOR|PST|GUSH|CHAD|JDST|SJB|RXL|DGP|USDU|FXY|UGLD|HTUS|DYLS|CWEB'
alts_100M_AUM = alts_100M_AUM.split('|')
stocks_10B_AUM = 'SPY|IVV|VTI|VOO|EFA|VEA|VWO|QQQ|IWM|IJH|IEFA|IEMG|IWD|IWF|EEM|VTV|IJR|VNQ|XLF|VUG|VIG|VEU|DIA|VO|VB|VYM|IWB|MDY|IVW|XLK|EWJ|VGK|DVY|VGT|XLV|XLE|IWR|SDY|EZU|IVE|USMV|RSP|SCHF|XLY|VBR|XLI|ITOT|VV|SCHB|SCHX|IWS|VT|VXUS|SCZ|IBB'
stocks_10B_AUM = stocks_10B_AUM.split('|')
stocks_1B_AUM = 'AMLP|XLP|IWN|DXJ|IWO|ACWI|IWP|IWV|VOE|IXUS|EFAV|HEDJ|GDX|IJK|SPLV|XLU|EWZ|VFH|VHT|VBK|SCHD|DBEF|HDV|SCHA|EFV|IJJ|VNQI|VXF|FDN|MTUM|INDA|PRF|VPL|IJS|VOT|SCHG|OEF|IJT|GUNR|XLB|ITA|IDV|KRE|VSS|EWG|AAXJ|HEFA|EEMV|IYR|SCHE|QUAL|GDXJ|FEZ|FVD|IYW|SCHM|EWY|SCHV|XBI|SCHH|VDE|VDC|KBE|RWX|FNDX|ACWV|FXI|EWT|IUSG|AMJ|VIS|NOBL|EFG|MGK|FNDF|EPP|VLUE|IUSV|DON|IEV|ACWX|EWC|SPHD|ICF|RWO|RWR|GSLC|IEUR|VPU|FNDA|MCHI|DGRO|XOP|EWU|FV|ITB|VCR|XLRE|SDOG|RPG|QTEC|VAW|DBEU|EMLP|IGF|EUFN|DLN|DES|DEM|IYH|EWH|IYF|DBJP|QDF|DGRW|HEZU|MGV|MLPI|ROBO|VOOG|PRFZ|IOO|FNDE|SPDW|DLS|RSX|EPI|FLGE|FDL|EWA|SCHC|FXR|GEM|FXL|FNDC|IXJ|IYG|SOXX|CWI|RYT|PDP|XT|VONG|DGS|ONEQ|PWV|IHI|FEX|BOTZ|OIH|ILF|IXN|IXN|FBGX|SKYY|FTEC|MGC|SPHQ|PKW|MOAT|GNR|VONV|DWX|IGM|SMH|JPIN|XMLV|EWL|SLYG|VTWO|DHS|FTXO|KWEB|PXF|TILT|FBT|INDY|IYJ|SPYG|XSLV|FXO|DFE|XHB|REM|HACK|MDYG|FNCL|IYY|HEWJ|IGV|FTA|FIHD|LIT|EWW|VOX|SLYV|IYE|GXC|NANR|XAR|TLTD|KBWB|FXH|IQDF|IQDF|IXC|EWP|EZM'
stocks_1B_AUM = stocks_1B_AUM.split('|')
high_yield = 'ALTY|AMJ|AMJL|AMLP|AMU|AMUB|AMZA|AOK|ATMP|BDCL|BDCS|BDCZ|BIZD|BLHY|BMLP|BNDX|CDC|CDL|CEFL|CEFS|CEY|CJNK|COMT|CSB|CWB|DBUK|DES|DGRS|DIV|DRW|DTN|DVHL|DVYL|DWFI|EBND|ENY|EPRF|ERUS|EUFL|EWH|EWM|EWY|FAUS|FCVT|FDIV|FEMB|FFR|FFTI|FLN|FPA|FPE|FSZ|FXEP|FXEU|FXU|GCE|GHII|GHYG|GRI|GYLD|HDLV|HDRW|HEWL|HSPX|HYDB|HYEM|HYHG|HYIH|HYLD|HYLS|HYXE|IDHD|IDLV|IFGL|IMLP|IPE|IQDE|ISHG|JPGB|KBWD|KBWY|LBDC|LMLP|LRET|MDIV|MLPA|MLPB|MLPC|MLPE|MLPG|MLPI|MLPO|MLPQ|MLPY|MLPZ|MORT|NFLT|OASI|OEUR|ONTL|OUSM|PAF|PCEF|PELBX|PEX|PFFD|PFFR|PFXF|PGF|PRME|PSCF|PSCU|PSK|PSP|PXJ|PXR|QXMI|QYLD|REM|RORE|SDIV|SDYL|SEA|SMHD|SOVB|SPFF|SPMV|SPVU|SRET|STPZ|TAO|TIPX|TIPZ|URA|VSMV|VSS|VTIP|WFHY|WPS|YDIV|YESR|YMLI|YMLP|YYY|ZMLP'
high_yield = high_yield.split('|')
closed_end = ['PCEF', 'YYY', 'XMPT', 'FCEF', 'CEFS', 'MCEF', 'GCE'] # lev: CEFL
drop = 'URA'
drop = drop.split('|')
_all = set(high_yield + bonds_1B_AUM + bonds_100M_AUM + bonds_tax_pref_100M_AUM + ["SPY", "GLD"]) - set(drop)
_all = get(list(_all))
all = get(_all, trim=True, start=2015, end=dt(2018, 8, 1), mode="NTR")  # note: shadows the built-in all()
# -
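# The ticker bookkeeping above is plain set arithmetic; a minimal self-contained sketch (the helper `build_universe` and the tiny ticker strings are illustrative, not the real universes):

```python
def build_universe(*pipe_lists, extra=(), drop=()):
    """Split pipe-delimited ticker strings, merge them, add extras, remove drops."""
    tickers = set()
    for s in pipe_lists:
        tickers.update(s.split('|'))
    tickers.update(extra)
    return tickers - set(drop)

universe = build_universe('AGG|LQD|URA', 'HYG|AGG', extra=('SPY', 'GLD'), drop=('URA',))
```

# Using sets both deduplicates overlapping lists and makes exclusions a single subtraction.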
df = pd.read_msgpack("../../ETFs/etfs.msgpack")
df["mw_aum"] /= 1000000
df = df.sort_values("mw_aum", ascending=False)
df.iloc[0]
df[df.index.isin(closed_end)]
# +
names = [x.name.ticker for x in all]
#df[df.index.isin(names)].aum
def get_etf_field(s, fld):
n = s.name.ticker
if not n in df.index:
return 0
return df[df.index == n][fld].values[0]
def get_aum(s): return get_etf_field(s, "aum")
def get_mw_aum(s): return get_etf_field(s, "mw_aum")
def get_fees(s): return get_etf_field(s, "fees")
def get_mw_fees(s): return get_etf_field(s, "mw_fees")
def get_duration(s): return get_etf_field(s, "yc_duration")
def get_mw_turnover(s): return get_etf_field(s, "mw_turnover")
#show_rr(*all, ret_func=get_curr_net_yield, risk_func=get_duration)
#show_rr(*all, ret_func=get_fees, risk_func=get_mw_fees)
#show_rr(*all, ret_func=get_curr_net_yield, risk_func=get_mw_turnover)
#show_rr(*all, ret_func=compose(np.log, get_mw_aum), risk_func=get_mw_fees)
show_rr(*all, ret_func=get_mw_aum, risk_func=get_aum)
# -
| Research/ETFs_meta_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Geocomp 2019
# language: python
# name: gsa2019
# ---
# + jupyter={"outputs_hidden": false}
# Needed on a Mac
import matplotlib
# %matplotlib inline
print(matplotlib.get_backend())
import matplotlib.pyplot as plt
# -
import sys
print(sys.version)
# + jupyter={"outputs_hidden": false}
import numpy as np
print(np.__version__)
# + jupyter={"outputs_hidden": false}
import pandas as pd
print(pd.__version__)
# + jupyter={"outputs_hidden": false}
import shapely
print(shapely.__version__)
print(matplotlib.get_backend())
# + jupyter={"outputs_hidden": false}
import pysal as ps
print(ps.__version__)
print(matplotlib.get_backend())
# + jupyter={"outputs_hidden": false}
import folium as flm
print(flm.__version__)
# -
import gdal
import osgeo
# + jupyter={"outputs_hidden": false}
import fiona
print(fiona.__version__)
# + jupyter={"outputs_hidden": false}
import geopandas as gpd
print(gpd.__version__)
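# The per-module version checks above can be rolled into one loop with the standard library; a sketch (the module list passed in is illustrative):

```python
import importlib

def check_stack(module_names):
    """Try to import each module and report its version,
    'unknown' if it has no __version__, or None if it is missing."""
    report = {}
    for name in module_names:
        try:
            mod = importlib.import_module(name)
            report[name] = getattr(mod, '__version__', 'unknown')
        except ImportError:
            report[name] = None
    return report

report = check_stack(['math', 'no_such_module_xyz'])
```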
# + jupyter={"outputs_hidden": false}
import os
import geopandas as gpd
import urllib
fn = "Greenspace.gpkg"
url = 'https://github.com/kingsgeocomp/geocomputation/raw/master/data/Test/Greenspace.gpkg'
urllib.request.urlretrieve(url, fn)
greenspace = gpd.read_file(fn)
greenspace.plot()
# + jupyter={"outputs_hidden": false}
import os
import urllib
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point
df = pd.read_csv(
'http://data.insideairbnb.com/the-netherlands/north-holland/amsterdam/2017-04-02/visualisations/listings.csv',
low_memory=False)
df.price.plot.hist()
plt.title('Price per Night')
plt.show()
geometry = [Point(xy) for xy in zip(df.longitude, df.latitude)]
df.drop(['longitude', 'latitude'], axis=1, inplace=True)
crs = {'init': 'epsg:4326'}
airbnb = gpd.GeoDataFrame(df, crs=crs, geometry=geometry)
airbnb.plot()
plt.title('Locations')
plt.show()
# +
#import sompy
| validation/check_gsa_stack.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import netCDF4 as nc
from scipy.interpolate import interp1d
import matplotlib.cm as cm
from salishsea_tools import (nc_tools, gsw_calls, geo_tools, viz_tools)
import cmocean as cmo
import pandas as pd
# +
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# +
bathy = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/INV/Bathymetry_EastCoast_NEMO_R036_GEBCO_corr_v14.nc')
mesh_mask = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/INV/mesh_mask.nc')
mbathy = mesh_mask['mbathy'][0,...]
Z = bathy.variables['Bathymetry'][:]
y_wcvi_slice = np.arange(180,350)
x_wcvi_slice = np.arange(480,650)
zlevels = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/CDF_COMB_COMPRESSED/NEP036-N30_IN_20140915_00001440_grid_T.nc').variables['deptht']
lon = bathy['nav_lon'][...]
lat = bathy['nav_lat'][...]
NEP_aug = nc.Dataset('/data/ssahu/NEP36_Extracted_Months/NEP36_T_S_Spice_aug_larger_offshore_rho_correct.nc')
sal_aug = NEP_aug.variables['vosaline']
temp_aug = NEP_aug.variables['votemper']
spic_aug = NEP_aug.variables['spiciness']
rho_aug = NEP_aug.variables['density']
NEP_jul = nc.Dataset('/data/ssahu/NEP36_Extracted_Months/NEP36_T_S_Spice_july_larger_offshore_rho_correct.nc')
sal_jul = NEP_jul.variables['vosaline']
temp_jul = NEP_jul.variables['votemper']
spic_jul = NEP_jul.variables['spiciness']
rho_jul = NEP_jul.variables['density']
NEP_jun = nc.Dataset('/data/ssahu/NEP36_Extracted_Months/NEP36_T_S_Spice_june_larger_offshore_rho_correct.nc')
sal_jun = NEP_jun.variables['vosaline']
temp_jun = NEP_jun.variables['votemper']
spic_jun = NEP_jun.variables['spiciness']
rho_jun = NEP_jun.variables['density']
# +
STATION_LINE = ['LA' , 'LB', 'LBA', 'LC', 'LCB', 'RS']
numbers_LA = ['O1' , 'O2', 'O3', 'O4', 'O5', 'O6', 'O7', 'O8', 'O9', '1O']
numbers_LB = ['O1' , 'O2', 'O3', 'O4', 'O5', 'O6', 'O7', 'O8', 'O9', '1O', '11', '12', '13', '14', '15', '16']
numbers_LBA = ['O', '1', '2', '3', '4']
numbers_LC = ['O1' , 'O2', 'O3', 'O4', 'O5', 'O6', 'O7', 'O8', 'O9', '1O', '11', '12']
numbers_LCB = ['1', '2', '3', '4', '5', '6']
numbers_RS = ['O1' , 'O2', 'O3', 'O4', 'O5', 'O6', 'O7']
lon_stations_LA = [-124.7275, -124.8553, -124.9620, -125.0687, \
-125.1767, -125.2867, -125.3977, -125.5067, -125.6153, -125.7233]
lat_stations_LA = [48.4872, 48.4375, 48.3807, 48.3227, 48.2680, 48.2105, 48.1533, 48.0963, 48.0393, 47.9837]
lon_stations_LB = [-124.9913, -125.0400, -125.0930, -125.1453, -125.2000, -125.2585, -125.3683, \
-125.4775, -125.5800, -125.6892, -125.7958, -125.8650, -125.9353, -126.0000, -126.1410, -126.2833]
lat_stations_LB = [48.6733, 48.6500, 48.6220, 48.5945, 48.5667, 48.5363, 48.4780, 48.4217, 48.3667, \
48.3095, 48.2533, 48.2153, 48.1767, 48.1413, 48.0727, 48.0088]
lon_stations_LBA = [-124.9667, -125.0633, -125.1300, -125.3467, -125.5000]
lat_stations_LBA = [48.5767, 48.5283, 48.4233, 48.3908, 48.2153]
lon_stations_LC = [-125.4622, -125.5158, -125.5707, -125.6800, -125.7900, -125.9000, -126.0083, -126.1183, \
-126.2283, -126.3367, -126.4450, -126.6667]
lat_stations_LC = [48.8407, 48.8113, 48.7825, 48.7238, 48.6657, 48.6077, 48.5493, 48.4908, 48.4323,\
48.3733, 48.3158, 48.2500]
lon_stations_LCB = [-125.3667, -125.4508, -125.5000, -125.5917, -125.8300, -126.0633]
lat_stations_LCB = [48.7490, 48.6858, 48.5750, 48.5333, 48.4717, 48.3783]
lon_stations_RS = [-125.2233, -125.3267, -125.4317, -125.5683, -125.7200, -125.9433, -126.1317]
lat_stations_RS = [48.7567, 48.7092, 48.6433, 48.5683, 48.4867, 48.3633, 48.2600]
# +
LINE_LA_stations = []
LINE_LB_stations = []
LINE_LBA_stations = []
LINE_LC_stations = []
LINE_LCB_stations = []
LINE_RS_stations = []
for i in np.arange(len(numbers_LA)):
value = STATION_LINE[0]+numbers_LA[i]
LINE_LA_stations = np.append(LINE_LA_stations, value)
for i in np.arange(len(numbers_LB)):
value = STATION_LINE[1]+numbers_LB[i]
LINE_LB_stations = np.append(LINE_LB_stations, value)
for i in np.arange(len(numbers_LBA)):
value = STATION_LINE[2]+numbers_LBA[i]
LINE_LBA_stations = np.append(LINE_LBA_stations, value)
for i in np.arange(len(numbers_LC)):
value = STATION_LINE[3]+numbers_LC[i]
LINE_LC_stations = np.append(LINE_LC_stations, value)
for i in np.arange(len(numbers_LCB)):
value = STATION_LINE[4]+numbers_LCB[i]
LINE_LCB_stations = np.append(LINE_LCB_stations, value)
for i in np.arange(len(numbers_RS)):
value = STATION_LINE[5]+numbers_RS[i]
LINE_RS_stations = np.append(LINE_RS_stations, value)
LA_lon_locations = pd.Series(data = lon_stations_LA, index = LINE_LA_stations)
LA_lat_locations = pd.Series(data = lat_stations_LA, index = LINE_LA_stations)
LB_lon_locations = pd.Series(data = lon_stations_LB, index = LINE_LB_stations)
LB_lat_locations = pd.Series(data = lat_stations_LB, index = LINE_LB_stations)
LBA_lon_locations = pd.Series(data = lon_stations_LBA, index = LINE_LBA_stations)
LBA_lat_locations = pd.Series(data = lat_stations_LBA, index = LINE_LBA_stations)
LC_lon_locations = pd.Series(data = lon_stations_LC, index = LINE_LC_stations)
LC_lat_locations = pd.Series(data = lat_stations_LC, index = LINE_LC_stations)
LCB_lon_locations = pd.Series(data = lon_stations_LCB, index = LINE_LCB_stations)
LCB_lat_locations = pd.Series(data = lat_stations_LCB, index = LINE_LCB_stations)
RS_lon_locations = pd.Series(data = lon_stations_RS, index = LINE_RS_stations)
RS_lat_locations = pd.Series(data = lat_stations_RS, index = LINE_RS_stations)
# +
def find_NEP36_model_point(line, station_number):
    # look up the station labels and coordinates for the requested line
    station_lines = {
        'LA':  (LINE_LA_stations,  lon_stations_LA,  lat_stations_LA),
        'LB':  (LINE_LB_stations,  lon_stations_LB,  lat_stations_LB),
        'LBA': (LINE_LBA_stations, lon_stations_LBA, lat_stations_LBA),
        'LC':  (LINE_LC_stations,  lon_stations_LC,  lat_stations_LC),
        'LCB': (LINE_LCB_stations, lon_stations_LCB, lat_stations_LCB),
        'RS':  (LINE_RS_stations,  lon_stations_RS,  lat_stations_RS),
    }
    stations, lons, lats = station_lines[line]
    loc = int(np.where(stations == station_number)[0])
    j, i = geo_tools.find_closest_model_point(
        lons[loc], lats[loc], lon_model, lat_model,
        tols={'NEMO': {'tol_lon': 0.1, 'tol_lat': 0.1},
              'GEM2.5': {'tol_lon': 0.1, 'tol_lat': 0.1}})
    return j, i
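# `geo_tools.find_closest_model_point` is in essence a nearest-neighbour search over the model grid. A flat-earth sketch on plain coordinate lists (the helper `closest_index` is ours and uses a simple equirectangular approximation, not the salishsea_tools implementation):

```python
import math

def closest_index(lon0, lat0, lons, lats):
    """Index of the (lon, lat) pair nearest to (lon0, lat0),
    using an equirectangular distance approximation."""
    coslat = math.cos(math.radians(lat0))  # shrink longitude differences at high latitude
    def dist2(k):
        dx = (lons[k] - lon0) * coslat
        dy = lats[k] - lat0
        return dx * dx + dy * dy
    return min(range(len(lons)), key=dist2)

# nearest candidate to a point near station LB08
k = closest_index(-125.48, 48.42, [-125.0, -125.5, -126.0], [48.0, 48.4, 48.8])
```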
# +
bathy = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/INV/Bathymetry_EastCoast_NEMO_R036_GEBCO_corr_v14.nc')
Z = bathy.variables['Bathymetry'][:]
zlevels = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/CDF_COMB_COMPRESSED/NEP036-N30_IN_20140915_00001440_grid_T.nc').variables['deptht']
# grid = nc.Dataset('/ocean/ssahu/CANYONS/wcvi/grid/coordinates_NEMO/coordinates_westcoast_seagrid_high_resolution_truncated_wider_west_bdy.nc')
lon_model = bathy['nav_lon'][...]
lat_model = bathy['nav_lat'][...]
# +
j,i = find_NEP36_model_point('LB', 'LBO8')
print(j,i)
# -
sal_jun.shape
# +
LB_08_sal_jun = sal_jun[:,:,j,i]
LB_08_sal_july = sal_jul[:,:,j,i]
LB_08_sal_aug = sal_aug[:,:,j,i]
LB_08_sal = np.concatenate((LB_08_sal_jun, LB_08_sal_july, LB_08_sal_aug), axis = 0)
LB_08_tem_jun = temp_jun[:,:,j,i]
LB_08_tem_july = temp_jul[:,:,j,i]
LB_08_tem_aug = temp_aug[:,:,j,i]
LB_08_tem = np.concatenate((LB_08_tem_jun, LB_08_tem_july, LB_08_tem_aug), axis = 0)
LB_08_spic_jun = spic_jun[:,:,j,i]
LB_08_spic_july = spic_jul[:,:,j,i]
LB_08_spic_aug = spic_aug[:,:,j,i]
LB_08_spic = np.concatenate((LB_08_spic_jun, LB_08_spic_july, LB_08_spic_aug), axis = 0)
LB_08_rho_jun = rho_jun[:,:,j,i]
LB_08_rho_july = rho_jul[:,:,j,i]
LB_08_rho_aug = rho_aug[:,:,j,i]
LB_08_rho = np.concatenate((LB_08_rho_jun, LB_08_rho_july, LB_08_rho_aug), axis = 0) - 1000
date = np.array('2015-06-01', dtype=np.datetime64)
date = date + np.arange(92)
# +
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(23,20));
viz_tools.set_aspect(ax1)
cmap = plt.get_cmap(cmo.cm.haline)
cmap.set_bad('burlywood')
p = ax1.pcolormesh(date, zlevels[15:24], np.transpose(LB_08_sal[:,15:24]), cmap=cmap, vmin=33.7, vmax =34)#, vmax=500)
legend = ax1.legend(loc='best', fancybox=True, framealpha=0.75)
ax1.set_xlabel('Dates',fontsize=18)
ax1.set_ylabel('Depth (m)',fontsize=18)
# ax.set_ylim([lat[270:350,550:650].min(), lat[270:350,550:650].max()])
ax1.set_title('Hovmoller of LB08 Salinity Tracer', fontsize=20)
# viz_tools.plot_land_mask(ax1, bathy, yslice=y_wcvi_slice, xslice=x_wcvi_slice, color='burlywood')
cbar = fig.colorbar(p, ax=ax1, label='Salinity')
ax1.invert_yaxis()
ax1.grid()
viz_tools.set_aspect(ax2)
cmap = plt.get_cmap(cmo.cm.thermal_r)
cmap.set_bad('burlywood')
p = ax2.pcolormesh(date, zlevels[15:24], np.transpose(LB_08_tem[:,15:24]), cmap=cmap, vmin=7, vmax =7.8)#, vmax=500)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.75)
ax2.set_xlabel('Dates',fontsize=18)
ax2.set_ylabel('Depth (m)',fontsize=18)
# ax.set_ylim([lat[270:350,550:650].min(), lat[270:350,550:650].max()])
ax2.set_title('Hovmoller of LB08 Temperature Tracer', fontsize=20)
# viz_tools.plot_land_mask(ax1, bathy, yslice=y_wcvi_slice, xslice=x_wcvi_slice, color='burlywood')
cbar = fig.colorbar(p, ax=ax2, label='Temperature')
ax2.invert_yaxis()
ax2.grid()
# +
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(20,20));
viz_tools.set_aspect(ax1)
cmap = plt.get_cmap(cmo.cm.dense)
cmap.set_bad('burlywood')
p = ax1.pcolormesh(date, zlevels[15:24], np.transpose(LB_08_rho[:,15:24]), cmap=cmap, vmin=26.3, vmax =26.5)#, vmax=500)
legend = ax1.legend(loc='best', fancybox=True, framealpha=0.75)
ax1.set_xlabel('Dates',fontsize=18)
ax1.set_ylabel('Depth (m)',fontsize=18)
# ax.set_ylim([lat[270:350,550:650].min(), lat[270:350,550:650].max()])
ax1.set_title('Hovmoller of LB08 Density', fontsize=20)
# viz_tools.plot_land_mask(ax1, bathy, yslice=y_wcvi_slice, xslice=x_wcvi_slice, color='burlywood')
cbar = fig.colorbar(p, ax=ax1, label='In-situ Density')
ax1.invert_yaxis()
ax1.grid()
viz_tools.set_aspect(ax2)
cmap = plt.get_cmap(cmo.cm.turbid)
cmap.set_bad('burlywood')
p = ax2.pcolormesh(date, zlevels[15:24], np.transpose(LB_08_spic[:,15:24]), cmap=cmap, vmin=-0.1, vmax =0)#, vmax=500)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.75)
ax2.set_xlabel('Dates',fontsize=18)
ax2.set_ylabel('Depth (m)',fontsize=18)
# ax.set_ylim([lat[270:350,550:650].min(), lat[270:350,550:650].max()])
ax2.set_title('Hovmoller of LB08 Spiciness', fontsize=20)
ax2.invert_yaxis()
# viz_tools.plot_land_mask(ax1, bathy, yslice=y_wcvi_slice, xslice=x_wcvi_slice, color='burlywood')
cbar = fig.colorbar(p, ax=ax2, label='Spiciness')
ax2.grid()
# -
zlevels[24]
# +
deepest_den = LB_08_rho[:,23]
diff_deepest_den = np.diff(deepest_den)
# -
np.where(np.absolute(diff_deepest_den) == np.max(np.absolute(diff_deepest_den)))  # largest day-to-day density jump, regardless of sign
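# The day-to-day jump search above generalises to any series; comparing absolute differences directly avoids missing a jump whose sign is negative. A self-contained sketch (the helper and the sample density values are illustrative):

```python
def largest_jump_index(series):
    """Index i such that |series[i+1] - series[i]| is maximal."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return max(range(len(diffs)), key=lambda k: abs(diffs[k]))

idx = largest_jump_index([26.30, 26.31, 26.45, 26.44, 26.20])
```

# The biggest jump here is the final drop, even though it is negative.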
LB_08_rho[12,23]
LB_08_rho[89,23]
date[11]
# ### At the time of maximum density change at LB08 (12 June 2015); let's look at how uniform the profile is
# +
fig2, (ax2, ax3, ax4, ax5) = plt.subplots(1,4,sharey=True,figsize=(20,12))
# Temperature
ax2.plot(LB_08_tem[11,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax2.plot(tem_data ,z_data,'b',label='LA06')
ax2.set_ylabel('Depth (m)')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlabel('Temperature (C)')
ax2.xaxis.set_label_position('top')
ax2.xaxis.set_ticks_position('top')
ax2.set_xlim(6,14)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.25)
# Salinity
ax3.plot(LB_08_sal[11,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax3.plot(sal_data,z_data,'b',label='LA06')
ax3.set_xlabel('Salinity')
ax3.xaxis.set_label_position('top')
ax3.xaxis.set_ticks_position('top')
ax3.yaxis.set_visible(False)
ax3.set_xlim(30,35)
legend = ax3.legend(loc='best', fancybox=True, framealpha=0.25)
# Spiciness
ax4.plot(LB_08_spic[11,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax4.set_xlabel('Spiciness')
ax4.xaxis.set_label_position('top')
ax4.xaxis.set_ticks_position('top')
ax4.yaxis.set_visible(False)
ax4.set_xlim(-1,0)
legend = ax4.legend(loc='best', fancybox=True, framealpha=0.25)
# Density
ax5.plot(LB_08_rho[11,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax5.set_xlabel('Density')
ax5.xaxis.set_label_position('top')
ax5.xaxis.set_ticks_position('top')
ax5.yaxis.set_visible(False)
# ax4.set_xlim(-1,0)
legend = ax5.legend(loc='best', fancybox=True, framealpha=0.25)
# +
fig2, (ax2, ax3, ax4, ax5) = plt.subplots(1,4,sharey=True,figsize=(20,12))
# Temperature
ax2.plot(LB_08_tem[11,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax2.plot(tem_data ,z_data,'b',label='LA06')
ax2.set_ylabel('Depth (m)')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlabel('Temperature (C)')
ax2.xaxis.set_label_position('top')
ax2.xaxis.set_ticks_position('top')
ax2.set_xlim(6,14)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.25)
# Salinity
ax3.plot(LB_08_sal[11,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax3.plot(sal_data,z_data,'b',label='LA06')
ax3.set_xlabel('Salinity')
ax3.xaxis.set_label_position('top')
ax3.xaxis.set_ticks_position('top')
ax3.yaxis.set_visible(False)
ax3.set_xlim(30,35)
legend = ax3.legend(loc='best', fancybox=True, framealpha=0.25)
# Spiciness
ax4.plot(LB_08_spic[11,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax4.set_xlabel('Spiciness')
ax4.xaxis.set_label_position('top')
ax4.xaxis.set_ticks_position('top')
ax4.yaxis.set_visible(False)
ax4.set_xlim(-1,0)
legend = ax4.legend(loc='best', fancybox=True, framealpha=0.25)
# Density
ax5.plot(LB_08_rho[11,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax5.set_xlabel('Density')
ax5.xaxis.set_label_position('top')
ax5.xaxis.set_ticks_position('top')
ax5.yaxis.set_visible(False)
# ax4.set_xlim(-1,0)
legend = ax5.legend(loc='best', fancybox=True, framealpha=0.25)
# +
deepest_spic = LB_08_spic[:,23]
diff_deepest_spic = np.diff(deepest_spic)
# -
np.where(np.absolute(diff_deepest_spic) == np.max(np.absolute(diff_deepest_spic)))  # largest day-to-day spiciness jump, regardless of sign
date[79]
# ### At the time of maximum spiciness change at LB08 (19 August 2015); let's look at how uniform the profile is
# +
fig2, (ax2, ax3, ax4, ax5) = plt.subplots(1,4,sharey=True,figsize=(20,12))
# Temperature
ax2.plot(LB_08_tem[79,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax2.plot(tem_data ,z_data,'b',label='LA06')
ax2.set_ylabel('Depth (m)')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlabel('Temperature (C)')
ax2.xaxis.set_label_position('top')
ax2.xaxis.set_ticks_position('top')
# ax2.set_xlim(6,14)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.25)
# Salinity
ax3.plot(LB_08_sal[79,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax3.plot(sal_data,z_data,'b',label='LA06')
ax3.set_xlabel('Salinity')
ax3.xaxis.set_label_position('top')
ax3.xaxis.set_ticks_position('top')
ax3.yaxis.set_visible(False)
ax3.set_xlim(31,34)
legend = ax3.legend(loc='best', fancybox=True, framealpha=0.25)
# Spiciness
ax4.plot(LB_08_spic[79,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax4.set_xlabel('Spiciness')
ax4.xaxis.set_label_position('top')
ax4.xaxis.set_ticks_position('top')
ax4.yaxis.set_visible(False)
# ax4.set_xlim(-1,0)
legend = ax4.legend(loc='best', fancybox=True, framealpha=0.25)
# Density
ax5.plot(LB_08_rho[79,:24],zlevels[:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax5.set_xlabel('Density')
ax5.xaxis.set_label_position('top')
ax5.xaxis.set_ticks_position('top')
ax5.yaxis.set_visible(False)
# ax4.set_xlim(-1,0)
legend = ax5.legend(loc='best', fancybox=True, framealpha=0.25)
# +
fig2, (ax2, ax3, ax4, ax5) = plt.subplots(1,4,sharey=True,figsize=(20,12))
# Temperature
ax2.plot(LB_08_tem[79,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax2.plot(tem_data ,z_data,'b',label='LA06')
ax2.set_ylabel('Depth (m)')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlabel('Temperature (C)')
ax2.xaxis.set_label_position('top')
ax2.xaxis.set_ticks_position('top')
# ax2.set_xlim(6,14)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.25)
# Salinity
ax3.plot(LB_08_sal[79,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax3.plot(sal_data,z_data,'b',label='LA06')
ax3.set_xlabel('Salinity')
ax3.xaxis.set_label_position('top')
ax3.xaxis.set_ticks_position('top')
ax3.yaxis.set_visible(False)
ax3.set_xlim(33,34)
legend = ax3.legend(loc='best', fancybox=True, framealpha=0.25)
# Spiciness
ax4.plot(LB_08_spic[79,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax4.set_xlabel('Spiciness')
ax4.xaxis.set_label_position('top')
ax4.xaxis.set_ticks_position('top')
ax4.yaxis.set_visible(False)
# ax4.set_xlim(-1,0)
legend = ax4.legend(loc='best', fancybox=True, framealpha=0.25)
# Density
ax5.plot(LB_08_rho[79,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax5.set_xlabel('Density')
ax5.xaxis.set_label_position('top')
ax5.xaxis.set_ticks_position('top')
ax5.yaxis.set_visible(False)
ax4.set_xlim(-0.2,0)
legend = ax5.legend(loc='best', fancybox=True, framealpha=0.25)
# +
fig2, (ax2, ax3, ax4, ax5) = plt.subplots(1,4,sharey=True,figsize=(20,12))
# Temperature
ax2.plot(LB_08_tem[91,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax2.plot(tem_data ,z_data,'b',label='LA06')
ax2.set_ylabel('Depth (m)')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlabel('Temperature (C)')
ax2.xaxis.set_label_position('top')
ax2.xaxis.set_ticks_position('top')
# ax2.set_xlim(6,14)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.25)
# Salinity
ax3.plot(LB_08_sal[91,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax3.plot(sal_data,z_data,'b',label='LA06')
ax3.set_xlabel('Salinity')
ax3.xaxis.set_label_position('top')
ax3.xaxis.set_ticks_position('top')
ax3.yaxis.set_visible(False)
ax3.set_xlim(33,34)
legend = ax3.legend(loc='best', fancybox=True, framealpha=0.25)
#Spiciness
ax4.plot(LB_08_spic[91,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax4.set_xlabel('Spiciness')
ax4.xaxis.set_label_position('top')
ax4.xaxis.set_ticks_position('top')
ax4.yaxis.set_visible(False)
ax4.set_xlim(-1,1)
legend = ax4.legend(loc='best', fancybox=True, framealpha=0.25)
# Density
ax5.plot(LB_08_rho[91,18:24],zlevels[18:24],'b',label = 'NEP36 model')
# ax4.plot(spic_data,z_data,'b',label='LA06')
ax5.set_xlabel('Density')
ax5.xaxis.set_label_position('top')
ax5.xaxis.set_ticks_position('top')
ax5.yaxis.set_visible(False)
ax4.set_xlim(-0.2,0)
legend = ax5.legend(loc='best', fancybox=True, framealpha=0.25)
# -
LB_08_spic[91,18:24]
LB_08_rho[91,18:24]
zlevels[18:24]
# +
fig2, ax = plt.subplots(1,1,figsize=(8,12))
ax.plot(LB_08_spic[91,18:24],zlevels[18:24],'b',label = 'NEP36 model')
ax.set_ylim(ax.get_ylim()[::-1])
ax.set_ylabel('Depth (m)')
ax.set_xlabel('Spiciness')
ax.xaxis.set_label_position('top')
ax.xaxis.set_ticks_position('top')
# ax.set_xlim(-1,1)
legend = ax.legend(loc='best', fancybox=True, framealpha=0.25)
# -
file = nc.Dataset('/data/ssahu/NEP36_Extracted_Months/NEP36_2013_T_S_Spice_larger_offshore_rho_correct.nc')
nc_tools.show_variables(file)
# +
file_model = nc.Dataset('/data/ssahu/NEP36_2013_summer_hindcast/cut_NEP36-S29_1d_20130429_20131025_grid_T_20130429-20130508.nc')
lon = file_model.variables['nav_lon'][1:,1:]
lat = file_model.variables['nav_lat'][1:,1:]
zlevels = file_model.variables['deptht'][:]
lon_LB08 = -125.4775
lat_LB08 = 48.4217
j, i = geo_tools.find_closest_model_point(lon_LB08,lat_LB08,\
lon,lat,grid='NEMO',tols=\
{'NEMO': {'tol_lon': 0.1, 'tol_lat': 0.1},\
'GEM2.5': {'tol_lon': 0.1, 'tol_lat': 0.1}})
print(j,i)
# +
temp = file.variables['votemper']
salinity = file.variables['vosaline']
rho = file.variables['density']
spice = file.variables['spiciness']
temp_aug = temp[93:124]
sal_aug = salinity[93:124]
rho_aug = rho[93:124]
spic_aug = spice[93:124]
temp_aug_LB08 = temp_aug[:,:,j,i]
sal_aug_LB08 = sal_aug[:,:,j,i]
spic_aug_LB08 = spic_aug[:,:,j,i]
rho_aug_LB08 = rho_aug[:,:,j,i]-1000
# +
mesh_mask_large = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/INV/mesh_mask.nc')
tmask = mesh_mask_large.variables['tmask'][0,:32,180:350,480:650]
# -
zlevels[np.max(np.nonzero(tmask[:,j,i]))]
# +
date = np.array('2013-08-01', dtype=np.datetime64)
date = date + np.arange(31)
# +
import seaborn as sns
sns.set_context('poster')
# +
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(25,10), sharey=True);
# viz_tools.set_aspect(ax1)
cmap = plt.get_cmap(cmo.cm.dense)
cmap.set_bad('burlywood')
p = ax1.pcolormesh(date, zlevels[15:24], np.transpose(rho_aug_LB08[:,15:24]), cmap=cmap, vmin=26.3, vmax =26.5)#, vmax=500)
legend = ax1.legend(loc='best', fancybox=True, framealpha=0.75)
ax1.set_xlabel('Dates',fontsize=18)
ax1.set_ylabel('Depth (m)',fontsize=18)
CS1 = ax1.contour(date, zlevels[15:24], np.transpose(rho_aug_LB08[:,15:24]), levels=[26.4])
CLS = plt.clabel(CS1, inline=True, fmt='%0.2f', fontsize=12)
# ax.set_ylim([lat[270:350,550:650].min(), lat[270:350,550:650].max()])
ax1.set_title('Hovmoller of LB08 Density', fontsize=20)
# viz_tools.plot_land_mask(ax1, bathy, yslice=y_wcvi_slice, xslice=x_wcvi_slice, color='burlywood')
cbar = fig.colorbar(p, ax=ax1, label='Potential Density')
ax1.invert_yaxis()
ax1.grid()
# viz_tools.set_aspect(ax2)
cmap = plt.get_cmap(cmo.cm.turbid)
cmap.set_bad('burlywood')
p = ax2.pcolormesh(date, zlevels[15:24], np.transpose(spic_aug_LB08[:,15:24]), cmap=cmap, vmin=-0.1, vmax =0)#, vmax=500)
CS1 = ax2.contour(date, zlevels[15:24], np.transpose(rho_aug_LB08[:,15:24]), levels=[26.4])
CLS = plt.clabel(CS1, inline=True, fmt='%0.2f', fontsize=12)
legend = ax2.legend(loc='best', fancybox=True, framealpha=0.75)
ax2.set_xlabel('Dates',fontsize=18)
ax2.set_ylabel('Depth (m)',fontsize=18)
# ax.set_ylim([lat[270:350,550:650].min(), lat[270:350,550:650].max()])
ax2.set_title('Hovmoller of LB08 Spiciness', fontsize=20)
# ax2.invert_yaxis()
# viz_tools.plot_land_mask(ax1, bathy, yslice=y_wcvi_slice, xslice=x_wcvi_slice, color='burlywood')
cbar = fig.colorbar(p, ax=ax2, label='Spiciness')
ax2.grid()
fig.autofmt_xdate()
# +
spic_aug_LB08_2648 = np.empty((rho_aug_LB08.shape[0]))
for k in np.arange(rho_aug_LB08.shape[0]):
    spic_aug_LB08_2648[k] = np.interp(26.48, rho_aug_LB08[k,15:24], spic_aug_LB08[k,15:24])
# +
fig, ax2 = plt.subplots(1, 1, figsize=(18,8))
ax2.plot(date, spic_aug_LB08_2648, color = 'red', linewidth=1.5,linestyle = 'dashed', label = 'Spice at 26.48')
ax2.set_ylabel('Spiciness', fontsize = 16)
# ax2.set_ylim(-0.33, 0.33)
ax2.tick_params(axis='both',labelsize =16)
ax2.legend(loc = 'upper left', fontsize =14)
ax2.grid()
fig.autofmt_xdate()
# +
spic_aug_LB08_264 = np.empty((rho_aug_LB08.shape[0]))
for k in np.arange(rho_aug_LB08.shape[0]):
    spic_aug_LB08_264[k] = np.interp(26.4, rho_aug_LB08[k,15:24], spic_aug_LB08[k,15:24])
# +
fig, ax2 = plt.subplots(1, 1, figsize=(18,8))
# p = ax2.plot(date, spic_aug_LB08_2648, color = 'red', linewidth=1.5,linestyle = 'dashed', label = 'Spice at 26.48')
ax2.plot(date, spic_aug_LB08_264, color = 'blue', linewidth=1.5,linestyle = 'dashed', label = 'Spice at 26.4')
ax2.set_ylabel('Spiciness', fontsize = 16)
# ax2.set_ylim(-0.33, 0.33)
ax2.tick_params(axis='both',labelsize =16)
ax2.legend(loc = 'upper left', fontsize =14)
ax2.grid()
fig.autofmt_xdate()
# +
fig, ax2 = plt.subplots(1, 1, figsize=(18,8))
p = ax2.plot(date, spic_aug_LB08_2648, color = 'red', linewidth=1.5,linestyle = 'dashed', label = 'Spice at 26.48')
ax2.plot(date, spic_aug_LB08_264, color = 'blue', linewidth=1.5,linestyle = 'dashed', label = 'Spice at 26.4')
ax2.set_ylabel('Spiciness', fontsize = 16)
# ax2.set_ylim(-0.33, 0.33)
ax2.tick_params(axis='both',labelsize =16)
ax2.legend(loc = 'upper left', fontsize =14)
ax2.grid()
fig.autofmt_xdate()
# -
| LB08_Hovmoller.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''pymdp_env'': conda)'
# name: python3
# ---
# # Tutorial Notebook 2. Inference and Planning
#
#
# In this notebook, we will continue on from the last notebook to build a fully fledged active inference agent capable of performing inference and planning in the simple grid-world environment. We will also begin to use some aspects of `pymdp`, although these will mostly be helper functions for building and sampling from discrete distributions; we will implement the core functionality of the agent ourselves.
#
# First, we simply start out by defining our generative model as we did last time.
# ## Add `pymdp` module
# +
# This is needed (on my machine at least) due to weird python import issues
import os
import sys
from pathlib import Path
path = Path(os.getcwd())
print(path)
module_path = str(path.parent) + '/'
sys.path.append(module_path)
# -
# ## Imports
# +
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from copy import deepcopy
from pymdp import maths, utils
from pymdp.maths import spm_log_single as log_stable # @NOTE: we use the `spm_log_single` helper function from the `maths` sub-library of pymdp. This is a numerically stable version of np.log()
from pymdp import control
print("imports loaded")
# -
# ## Plotting
# +
state_mapping = {0: (0,0), 1: (1,0), 2: (2,0), 3: (0,1), 4: (1,1), 5:(2,1), 6: (0,2), 7:(1,2), 8:(2,2)}
A = np.eye(9)
def plot_beliefs(Qs, title=""):
    #values = Qs.values[:, 0]
    plt.grid(zorder=0)
    plt.bar(range(Qs.shape[0]), Qs, color='r', zorder=3)
    plt.xticks(range(Qs.shape[0]))
    plt.title(title)
    plt.show()
labels = [state_mapping[i] for i in range(A.shape[1])]
def plot_likelihood(A):
    fig = plt.figure(figsize = (6,6))
    ax = sns.heatmap(A, xticklabels = labels, yticklabels = labels, cbar = False)
    plt.title("Likelihood distribution (A)")
    plt.show()
def plot_empirical_prior(B):
    fig, axes = plt.subplots(3,2, figsize=(8, 10))
    actions = ['UP', 'RIGHT', 'DOWN', 'LEFT', 'STAY']
    count = 0
    for i in range(3):
        for j in range(2):
            if count >= 5:
                break
            g = sns.heatmap(B[:,:,count], cmap="OrRd", linewidth=2.5, cbar=False, ax=axes[i,j])
            g.set_title(actions[count])
            count += 1
    fig.delaxes(axes.flatten()[5])
    plt.tight_layout()
    plt.show()
def plot_transition(B):
    fig, axes = plt.subplots(2,3, figsize = (15,8))
    a = list(actions.keys())
    count = 0
    for i in range(dim-1):
        for j in range(dim):
            if count >= 5:
                break
            g = sns.heatmap(B[:,:,count], cmap = "OrRd", linewidth = 2.5, cbar = False, ax = axes[i,j], xticklabels=labels, yticklabels=labels)
            g.set_title(a[count])
            count += 1
    fig.delaxes(axes.flatten()[5])
    plt.tight_layout()
    plt.show()
# -
# ## Generative model
#
# Here, we set up our generative model, which is the same as in the last notebook. This is formed of a likelihood distribution $P(o_t|s_t)$, denoted `A`, and an empirical prior (transition) distribution $P(s_t|s_{t-1},a_{t-1})$, denoted `B`.
#
# Since this was covered in more detail in the previous tutorial, we quickly skip over the details here.
#
# A matrix
A = np.eye(9)
plot_likelihood(A)
# +
# construct B matrix
P = {}
dim = 3
actions = {'UP':0, 'RIGHT':1, 'DOWN':2, 'LEFT':3, 'STAY':4}
for state_index, xy_coordinates in state_mapping.items():
    P[state_index] = {a : [] for a in range(len(actions))}
    x, y = xy_coordinates
    '''if your y-coordinate is all the way at the top (i.e. y == 0), you stay in the same place -- otherwise you move one upwards (achieved by subtracting 3 from your linear state index)'''
    P[state_index][actions['UP']] = state_index if y == 0 else state_index - dim
    '''if your x-coordinate is all the way to the right (i.e. x == 2), you stay in the same place -- otherwise you move one to the right (achieved by adding 1 to your linear state index)'''
    P[state_index][actions["RIGHT"]] = state_index if x == (dim -1) else state_index+1
    '''if your y-coordinate is all the way at the bottom (i.e. y == 2), you stay in the same place -- otherwise you move one down (achieved by adding 3 to your linear state index)'''
    P[state_index][actions['DOWN']] = state_index if y == (dim -1) else state_index + dim
    '''if your x-coordinate is all the way at the left (i.e. x == 0), you stay in the same place -- otherwise, you move one to the left (achieved by subtracting 1 from your linear state index)'''
    P[state_index][actions['LEFT']] = state_index if x == 0 else state_index -1
    '''Stay in the same place (self explanatory)'''
    P[state_index][actions['STAY']] = state_index
num_states = 9
B = np.zeros([num_states, num_states, len(actions)])
for s in range(num_states):
    for a in range(len(actions)):
        ns = int(P[s][a])
        B[ns, s, a] = 1
plot_transition(B)
# -
# # Create Environment Class
# To make things simple, we will parcel up the $A$ and $B$ matrices into a class which represents the environment. The environment has two methods: `step`, which, given an action, advances the environment a single step, and `reset`, which resets the environment back to its initial condition. The API of our simple environment class is similar to the `Env` base class used by `pymdp`, although the `pymdp` version has many more features than we use here.
# +
class GridWorldEnv():
    def __init__(self, A, B):
        self.A = deepcopy(A)
        self.B = deepcopy(B)
        print("B:", B.shape)
        self.state = np.zeros(9)
        # start in state 2 (i.e. grid position (2,0))
        self.state[2] = 1

    def step(self, a):
        self.state = np.dot(self.B[:,:,a], self.state)
        obs = utils.sample(np.dot(self.A, self.state))
        return obs

    def reset(self):
        self.state = np.zeros(9)
        self.state[2] = 1
        obs = utils.sample(np.dot(self.A, self.state))
        return obs
env = GridWorldEnv(A,B)
# -
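# To see the environment API in action, here is a self-contained usage sketch. Note that the `A` and `B` below are simplified stand-ins (identity observations and STAY-only dynamics) rather than the full grid-world matrices built above, and `sample` is a minimal stand-in for `pymdp`'s `utils.sample`:

```python
import numpy as np
from copy import deepcopy

# stand-in generative process: identity observations, and every action
# deterministically keeps the agent in place (i.e. all actions act like 'STAY')
num_states, num_actions = 9, 5
A = np.eye(num_states)
B = np.stack([np.eye(num_states)] * num_actions, axis=2)

def sample(p):
    # categorical sample from a probability vector (stand-in for utils.sample)
    return np.random.choice(len(p), p=p)

class GridWorldEnv():
    def __init__(self, A, B):
        self.A = deepcopy(A)
        self.B = deepcopy(B)
        self.state = np.zeros(num_states)
        self.state[2] = 1  # start in state 2

    def step(self, a):
        self.state = np.dot(self.B[:, :, a], self.state)
        return sample(np.dot(self.A, self.state))

    def reset(self):
        self.state = np.zeros(num_states)
        self.state[2] = 1
        return sample(np.dot(self.A, self.state))

env = GridWorldEnv(A, B)
obs = env.reset()
print(obs)          # 2: observations are noiseless, so we observe the start state
print(env.step(4))  # 2: 'STAY' keeps us in place
```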
# # Inference
# Now that we have the generative model setup, we turn to the behaviour of the active inference agent itself. To recap, we assume that this agent receives observations from the environment and can emit actions. Moreover, we assume that this agent has some kind of goal or preferences over the state of the environment it wants to create, and will choose actions in order to increase the probability of observing itself in its preferred state. For the time being, we will not deal with the problem of action selection but only with inference.
#
# The agent receives observations $o_t$ from the environment but does not directly know the environment's true state $x_t$. Thus, the agent must *infer* this state by computing the posterior distribution $p(x_t | o_t)$. It could do this using Bayes' rule but, as we discussed last time, explicitly computing Bayes' rule is often intractable because the marginal likelihood requires averaging over every possible hypothesis. We therefore need some other way to compute or approximate this posterior. Active inference assumes that this posterior can be approximated through a family of methods called *variational inference*, which only approximate the posterior but are fast and computationally efficient.
# # Variational Inference
#
# Variational inference is a set of inference methods which can rapidly and efficiently compute *approximate* posteriors for Bayesian inference problems. The key idea behind variational inference is that instead of trying to compute the true posterior $p(x_t | o_t)$, which may be extremely complex, we optimize an *approximate posterior*. Specifically, we define another distribution $q(x_t ; \varphi)$ which has some parameters $\varphi$ which we then optimize so as to make $q(x_t ; \varphi)$ as close as possible to the true distribution. Typically, we choose this $q$ distribution to be some simple distribution which is easy to work with mathematically. If the process works, then we can get the $q$ distribution very close to the true posterior, and as such get a good estimate of the posterior without ever explicitly computing it using Bayes' rule.
# Mathematically, we can do this by setting up an optimization problem. We have the true posterior $p(x_t | o_t)$, which is unknown, and we have our $q$ distribution, which we do know. We then want to optimize the $q$ distribution to make it as *close as possible* to the true posterior $p(x_t | o_t)$. To do this, we first need a way to quantify *how close* two probability distributions are. We do this using a quantity known as the *Kullback-Leibler (KL) divergence*, a measure from information theory which lets us quantify the dissimilarity between two distributions. The KL divergence between two distributions $q(x)$ and $p(x)$ is defined as,
# $$
# \begin{align}
# KL[q(x) || p(x)] = \sum_x q(x) (\ln q(x) - \ln p(x))
# \end{align}
# $$
# Mathematically, it can be thought of as the average difference between the logarithms of the probabilities assigned by $q$ and $p$ to the states $x$. The KL divergence is smallest when $q(x) = p(x)$, where it equals 0, and can grow infinitely large, which happens wherever $q(x)$ assigns nonzero probability but $p(x)$ assigns none. In code, we can compute the KL divergence as:
#
def KL_divergence(q, p):
    return np.sum(q * (log_stable(q) - log_stable(p)))
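# As a quick numerical sanity check (the two example distributions below are made up for illustration), the divergence is zero when the distributions are identical and positive when they differ:

```python
import numpy as np

def log_stable(x, eps=1e-16):
    # numerically stable log (redefined here so this cell is self-contained)
    return np.log(x + eps)

def KL_divergence(q, p):
    return np.sum(q * (log_stable(q) - log_stable(p)))

q = np.array([0.5, 0.5])
p = np.array([0.9, 0.1])

print(KL_divergence(q, q))  # ~0: identical distributions
print(KL_divergence(q, p))  # ~0.51 nats: the distributions differ
```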
# Now that we know about the KL divergence, we can express our variational problem of making our approximate posterior $q(x_t ; \varphi)$ as close as possible to the true posterior as simply minimizing the KL divergence between the two distributions. That is, we can define the optimal approximate distribution as,
# $$
# \begin{align}
# q^*(x_t ; \varphi) = \operatorname{argmin}_{\varphi} \, KL[q(x_t ; \varphi) || p(x_t | o_t)]
# \end{align}
# $$
#
# And then simply try to optimize this objective so as to find the setting of the variational parameters $\varphi$ that makes $q(x_t ; \varphi)$ as close to $p(x_t | o_t)$ as possible. The trouble is that our objective explicitly contains the true posterior, and so, since we can't compute the true posterior, we can't compute this objective either -- so we are stuck!
#
# Variational inference provides a clever way to get around this problem by instead minimizing an *upper bound* on this divergence called the *variational free energy*. Importantly, this bound is computable, so we can actually optimize it; moreover, since it is an upper bound, minimizing it drives $q$ as close as possible to the true posterior, so we still obtain a good approximate posterior distribution. Deriving the variational free energy is very simple: we first take our initial objective, apply Bayes' rule to the true posterior, and then take out the marginal likelihood term separately
#
# $$
# \begin{align}
# KL[q(x_t ; \varphi) || p(x_t | o_t)] &= KL[q(x_t ; \varphi) || \frac{p(o_t,x_t)}{p(o_t)}] \\
# &= KL[q(x_t ; \varphi) || p(o_t, x_t)] + \sum_x q(x_t ; \varphi) \ln p(o_t) \\
# &= KL[q(x_t ; \varphi) || p(o_t, x_t)] + \ln p(o_t)
# \end{align}
# $$
#
#
# Where in the final line we have used the fact that the sum is over a different variable than the distribution, and the sum of a probability distribution is $1$ -- i.e. $\sum_x q(x_t ; \varphi) \ln p(o_t) = \ln p(o_t) \sum_x q(x_t ; \varphi)$ and $\sum_x q(x_t ; \varphi) = 1$. Now, since $\ln p(o_t)$ is the log of a probability it is always negative (a probability is always between 0 and 1). This means that the term we have derived, $KL[q(x_t ; \varphi) || p(o_t, x_t)]$, is always necessarily greater than or equal to our original divergence between the approximate posterior and the true posterior, so it is an *upper bound*. We call this term the *variational free energy* and denote it by $\mathcal{F}$.
# $$
# \begin{align}
# \mathcal{F} = KL[q(x_t ; \varphi) || p(o_t, x_t)]
# \end{align}
# $$
#
# The free energy here is simply the divergence between the approximate posterior and the *generative model* of the agent. Since we know both the approximate posterior (as we defined it in the first place!) and the generative model, then both terms of this divergence are computable. We thus have our algorithm to approximate the posterior! Since the free energy is an upper bound, if we minimize the free energy, we also implicitly minimize the true divergence between the true and approximate posteriors, which will force the approximate posterior to be close to the true posterior and thus a good approximation! Moreover, since we can compute the free energy, we can actually perform this optimization!
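# We can check this bound numerically with a made-up two-state generative model: the free energy equals the true divergence minus $\ln p(o_t)$, and therefore always upper-bounds it.

```python
import numpy as np

def log_stable(x, eps=1e-16):
    return np.log(x + eps)

def KL(q, p):
    return np.sum(q * (log_stable(q) - log_stable(p)))

# made-up two-state generative model, for a fixed observation o_t
likelihood = np.array([0.8, 0.3])   # p(o_t | x) for each state x
prior = np.array([0.5, 0.5])        # p(x)
joint = likelihood * prior          # p(o_t, x)
posterior = joint / joint.sum()     # exact p(x | o_t), for comparison only

q = np.array([0.6, 0.4])            # an arbitrary approximate posterior

F = KL(q, joint)                    # variational free energy
true_divergence = KL(q, posterior)

print(F >= true_divergence)                                   # True: F is an upper bound
print(np.isclose(F - true_divergence, -np.log(joint.sum())))  # the gap is -ln p(o_t)
```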
#
# In many cases, we typically perform variational inference by taking the gradients of the free energy with respect to the variational parameters $\varphi$ and then doing gradient descent on the parameters $\varphi$ that define $q(x_t ; \varphi)$. However, when the distributions are discrete (i.e. Categorical distributions), the parameters of the approximate distribution are simply the probability values for each state (the elements of the vector $\textbf{q}$). For some simple generative models (e.g. the grid-world described here), we can actually solve this optimization problem directly to obtain $q(x)$ in a single step, instead of via gradient descent.
# # Directly solving variational inference in the case of a simple discrete model
# To recap, remember that we have turned the problem of computing the posterior distribution $p(x_t | o_t)$ into that of minimizing the variational free energy: $\mathcal{F} = KL[q(x_t) || p(o_t, x_t)]$ with respect to an approximate posterior distribution $q(x_t)$. From now on, we will leave out the variational parameters $\varphi$ when referring to $q(x_t)$, since we are dealing with a single Categorical distribution $q(x_t)$ whose vector elements are identical to the variational parameters, i.e. $\forall_{i} \varphi_i = q(x_t = i)$.
#
# The optimal distribution is simply that particular $q^*(x_t)$ that minimizes the KL divergence. Now, remember from high-school calculus that we can explicitly compute the minimum of a function by taking its derivative and setting it to 0 (i.e. at the minimum the first derivative of the function is 0) (if you don't remember this from calculus, trust me on this). This means that to solve this problem all we need to do is take the derivative of the free energy and set it to 0 and rearrange. First, let's write out the free energy explicitly.
#
#
# $$
# \begin{align}
# \mathcal{F} &= KL[q(x_t) || p(o_t, x_t)] \\
# &= \sum_x q(x_t)(\ln q(x_t) - \ln p(o_t,x_t))
# \end{align}
# $$
# If we then split the generative model up into a likelihood and a prior, we can write it as,
#
# $$
# \begin{align}
# \mathcal{F} = \sum_x q(x_t) \big[ \ln q(x_t) - \ln p(o_t | x_t) - \ln p(x_t) \big]
# \end{align}
# $$
# Recall that we have explicitly defined the observation and transition likelihood distributions as the $\textbf{A}$ and $\textbf{B}$ matrices, which play the role of the likelihood and prior distributions, respectively. In particular, we can define the prior over the current timestep $p(x_t)$ to be the "expected prior", given the beliefs about the state at the last timestep $q(x_{t-1})$, the beliefs about the transition dynamics, and the past action, i.e.:
#
# $$
# \begin{align}
# p(x_t) = \mathbb{E}_{q(x_{t-1})}\big[p(x_t | x_{t-1}, a_t) \big]
# \end{align}
# $$
#
# which can be expressed as a simple matrix vector product
#
# $$
# \begin{align}
# p(x_t) = \textbf{B}_{a_t}q(x_{t-1})
# \end{align}
# $$
#
# where $\textbf{B}_{a_t}$ is the component of the $\textbf{B}$ matrix that is conditioned on action $a_t$. In other words, for this simple Markovian model, we assume that "yesterday's posterior is today's prior." For simplicity, we refer to this entire prior term as $\textbf{B}$ below, i.e. we temporarily define $\textbf{B} \equiv \textbf{B}_{a_t}q(x_{t-1})$.
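# As a small worked example of this matrix-vector product (using a stand-in shift-by-one transition model rather than the full grid-world $\textbf{B}$):

```python
import numpy as np

num_states, num_actions = 9, 5

# stand-in transition model: action 1 deterministically shifts the
# state index up by one (capped at the last state)
B = np.zeros((num_states, num_states, num_actions))
for s in range(num_states):
    B[min(s + 1, num_states - 1), s, 1] = 1.0

q_prev = np.zeros(num_states)
q_prev[3] = 1.0                 # certain we were in state 3 at t-1

a_t = 1                         # the action taken at the last timestep
prior = B[:, :, a_t] @ q_prev   # p(x_t) = B_{a_t} q(x_{t-1})

print(prior.argmax())           # 4: all the probability mass has moved one state up
```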
#
# As mentioned above, the posterior beliefs $q(x_t)$ are a vector of probabilities (the variational parameters) which we denote $\textbf{q} = [q_1, q_2, q_3 \dots]$. With this all defined, we can write out the free energy as,
#
# $$
# \begin{align}
# \mathcal{F} = \sum_x \textbf{q} * \big[ \ln \textbf{q} - \ln \textbf{A} - \ln \textbf{B} \big]
# \end{align}
# $$
#
#
# And for fun, we can explicitly compute it in code:
def compute_free_energy(q, A, B):
    return np.sum(q * (log_stable(q) - log_stable(A) - log_stable(B)))
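# To see that this quantity behaves sensibly, here is a small self-contained sketch (the likelihood and prior values are made up): the exact posterior attains a strictly lower free energy than a flat guess, and its free energy equals $-\ln p(o_t)$:

```python
import numpy as np

def log_stable(x, eps=1e-16):
    return np.log(x + eps)

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x))

def compute_free_energy(q, A, B):
    return np.sum(q * (log_stable(q) - log_stable(A) - log_stable(B)))

likelihood = np.array([0.7, 0.2, 0.1])  # the A 'row' for the observed o_t
prior = np.array([1/3, 1/3, 1/3])       # flat prior over three states

q_flat = np.array([1/3, 1/3, 1/3])      # a naive flat guess
q_opt = softmax(log_stable(likelihood) + log_stable(prior))  # the exact posterior here

print(compute_free_energy(q_flat, likelihood, prior))  # higher free energy
print(compute_free_energy(q_opt, likelihood, prior))   # lower: ln(3) = -ln p(o_t)
```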
# Then, all we need to do is take the derivative of the free energy with respect to the approximate posterior distribution $\textbf{q}$ as follows,
# $$
# \begin{align}
# \frac{\partial \mathcal{F}}{\partial \textbf{q}} = \ln \textbf{q} - \ln \textbf{A} - \ln \textbf{B} - \textbf{1}
# \end{align}
# $$
#
# Where $\textbf{1}$ is just a vector of ones of equal length to $\textbf{q}$ and comes from the product rule applied to the $\textbf{q} \ln \textbf{q}$ term: $\frac{\partial}{\partial \textbf{q}} \textbf{q} \ln \textbf{q} = \ln \textbf{q} + \textbf{q} \frac{1}{\textbf{q}} = \ln \textbf{q} + \textbf{1}$. Thus, if we set this derivative to 0 and rearrange, we can get,
# $$
# \begin{align}
# 0 &= \ln \textbf{q} - \ln \textbf{A} - \ln \textbf{B} - \textbf{1} \\
# &\implies \textbf{q}^* = \sigma(\ln \textbf{A} + \ln \textbf{B})
# \end{align}
# $$
#
# Where $\sigma$ is the softmax function $\sigma(x) = \frac{e^x}{\sum_x e^x}$, which ensures that the resulting probability distribution is normalized. This expression lets us compute the optimal approximate posterior instantly as a straightforward function of the current observation $o_t$, $\textbf{A}$, and $\textbf{B}$. We can thus quickly write the code for inference:
# +
def softmax(x):
    return np.exp(x) / np.sum(np.exp(x))

def perform_inference(likelihood, prior):
    return softmax(log_stable(likelihood) + log_stable(prior))
# -
# Note that the likelihood term is not the entire $\textbf{A}$ matrix, but just the 'row' of the $\textbf{A}$ matrix corresponding to the current observation, i.e. $P(o_t = o \mid x_t)$.
#
# Inference for simple discrete state-space models like these is therefore very simple. All we need to do is have some initial set of beliefs $\textbf{q}_0$ and then update them according to these rules for every observation we get, using the $\textbf{A}$, $\textbf{B}$, the past action $a_t$ and the past posterior $q(x_{t-1})$ to provide the likelihood and prior terms.
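# Putting this together, a single inference step looks like the following self-contained sketch (the helper functions are repeated so the cell stands alone):

```python
import numpy as np

def log_stable(x, eps=1e-16):
    return np.log(x + eps)

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x))

def perform_inference(likelihood, prior):
    return softmax(log_stable(likelihood) + log_stable(prior))

num_states = 9
A = np.eye(num_states)        # noiseless observation model, as in this notebook

o_t = 2                       # observation received from the environment
likelihood = A[o_t, :]        # the row of A for the current observation
prior = np.ones(num_states) / num_states   # flat initial beliefs q_0

Qs = perform_inference(likelihood, prior)
print(Qs.argmax())            # 2: the identity likelihood pins down the state
```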
# # Planning through Active Inference
# So far we just have an agent that can perform inference in the discrete state space, but inference by itself isn't really that useful. Instead, what we really want to do is *planning*. That is, the agent needs to be able to figure out how to emit a series of actions which will take it to a certain goal state. A key part of Active Inference is that this process of planning, or more broadly action selection, can also be solved as a process of variational inference. This is why it is called *Active* Inference, after all.
#
#
# However, when starting to think about this, it is not immediately obvious how to turn planning into an inference problem. What are the hypotheses? What are the observations? To turn the problem of planning into an inference problem, we need to introduce two additional concepts. The first is the idea of *preferred observations*. To make planning useful, the agent has to *want something*. This is different from just performing objective inference about the state of the world, in which there is no goal except to infer correctly. In Active Inference, we define the preferred observations as a separate goal vector denoted $\textbf{C}$. When performing planning we then modify the generative model of the agent so that it no longer reflects the true distribution of observations in the environment, but rather includes the goal vector, which is a *prior over observations*. We denote this new generative model $\tilde{p}(o_t, x_t) = p(x_t | o_t)\tilde{p}(o_t)$ where we use $\tilde{p}$ to say that this distribution is not a true distribution describing the agent's model of the world, but is instead a *preference distribution*. Here, we set the preference distribution to be over the observations $o_t$, such that $\tilde{p}(o_t) = \textbf{C}$.
#
# By changing the generative model in this way, we have effectively changed the inference problem from: *infer the most likely states and actions given the true generative model of the world* to *infer the most likely states and actions given a false model of the world, in which I achieve my goals*. Perhaps more intuitively, we can think of this inference problem as answering the question: *Given that I have achieved my goals, what actions must I have taken to get there?*
#
# The second thing we need to do to perform planning is to also extend the inference problem to *actions in the future*, since these are the fundamental things that the agent controls which it can use to adjust the environment. We call a sequence of future actions from now (time $t$) until some set future time $T$ a *policy* and denote it $\pi = [a_t, a_{t+1}, a_{t+2} \dots a_T]$. The goal is then to infer the optimal policy $\pi^*$ given the preferences $\textbf{C}$.
#
# However here there is a problem. Typically we would perform variational inference to solve this inference problem, but the variational free energy is not defined over future trajectories of observations, which are not yet known. Instead, we define a new objective which can handle this -- the *Expected Free Energy (EFE)*, which is defined over policies and which we denote as $\mathcal{G}(\pi)$. We define the EFE for a particular timepoint $\tau$ and policy $\pi$ as:
#
# $$
# \begin{align}
# \mathcal{G}(\pi)_{\tau} = \sum_{o_{\tau}, x_{\tau}} q(x_{\tau} | \pi)q(o_{\tau} | \pi) \big[ \ln q(x_{\tau} | \pi) - \ln \tilde{p}(o_{\tau}, x_{\tau}) \big]
# \end{align}
# $$
#
# The expected free energy is defined for a single time-step of a trajectory *in the future*, i.e. prior to receiving any observations. The key difference between the standard variational free energy and the expected free energy is that the expected free energy also averages over the *expected observations* $q(o_{\tau} | \pi)$ and *expected states* $q(x_{\tau} | \pi)$, where the expectations are conditioned on some policy $\pi$. This is necessary because with the expected free energy, we are evaluating possible future trajectories without a given observation, unlike the variational free energy where we can assume that we have already received the observation.
#
# To get an intuitive handle on what the expected free energy *means* we can decompose it into two more intuitive quantities.
#
# $$
# \begin{align}
# \mathcal{G}(\pi)_{\tau}&= \sum_{o_{\tau}, x_{\tau}} q(x_{\tau} | \pi)q(o_{\tau} | \pi) \big[ \ln q(x_{\tau} | \pi) - \ln \tilde{p}(o_{\tau}, x_{\tau}) \big] \\
# &= -\underbrace{\sum_{o_{\tau}, x_{\tau}} q(x_{\tau} | \pi)q(o_{\tau} | \pi) \big[ \ln p(o_{\tau} | x_{\tau}) \big]}_{\text{Uncertainty}} + \underbrace{KL[q(x_{\tau} | \pi) || \tilde{p}(x_{\tau}) ]}_{\text{Divergence}}
# \end{align}
# $$
#
# The first term is called the *expected uncertainty* or sometimes *novelty* and essentially represents the spread of the observations expected in the future. Since we are choosing actions that *minimize* the whole quantity, we want to take actions that *maximize* this term. We can think of this as a bonus that aids exploration, since active inference agents will preferentially pursue resolvable uncertainty. The second term is the *divergence* term, which is the KL divergence between the states expected under a given policy (also known as the posterior predictive density $q(x_\tau | \pi)$) and the goal distribution of the generative model, $\tilde{p}(x)$. This term scores how far away (in an informational sense) the agent expects to be from the goal if it were to pursue that policy. Since we are minimizing the expected free energy, this term is also minimized -- that is, by minimizing the expected free energy, we are trying to choose trajectories which will bring the expected states in the future close to the desired states.
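# This decomposition can be written directly in code. The sketch below uses a made-up 4-state example and our own `expected_free_energy` helper (with preferences defined over states, which coincides with preferences over observations here since $\textbf{A}$ is the identity): predicted states close to the preferences score a lower EFE:

```python
import numpy as np

def log_stable(x, eps=1e-16):
    return np.log(x + eps)

def KL_divergence(q, p):
    return np.sum(q * (log_stable(q) - log_stable(p)))

def expected_free_energy(qs_pi, A, C):
    # one-timestep EFE for predicted states qs_pi, likelihood A and preferences C
    qo_pi = A @ qs_pi                               # predicted observations
    uncertainty = -(qo_pi @ log_stable(A) @ qs_pi)  # -E[ln p(o|x)]
    divergence = KL_divergence(qs_pi, C)            # KL[q(x|pi) || p~(x)]
    return uncertainty + divergence

A = np.eye(4)                            # noiseless likelihood
C = np.array([0.05, 0.05, 0.05, 0.85])   # the agent prefers state 3

qs_toward_goal = np.array([0., 0., 0., 1.])  # a policy predicted to reach state 3
qs_away = np.array([1., 0., 0., 0.])         # a policy predicted to stay at state 0

print(expected_free_energy(qs_toward_goal, A, C))  # small: matches the preferences
print(expected_free_energy(qs_away, A, C))         # large: far from the preferences
```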
#
# Now that we have the expected free energy to score possible trajectories, we need to infer the optimal policy. A simple approach (which can be derived explicitly, although it is somewhat complex) is to say that the posterior probability of a policy is proportional to the exponentiated *negative* sum of the expected free energy accumulated along the trajectory through the environment created by that policy, so that policies with lower expected free energy are more probable. While this sounds complex, mathematically we can express it very simply as,
# $$
# \begin{align}
# q(\pi) = \sigma(-\sum_{\tau}^T \mathcal{G}(\pi)_{\tau})
# \end{align}
# $$
# We can then choose the action for the next timestep by computing the marginal probability of each action $P(a)$, given the posterior over policies $q(\pi)$ and some mapping between policies and actions $P(a | \pi)$:
#
# $$
# \begin{align}
# P(a) = \sum_{\pi}P(a | \pi)q(\pi)
# \end{align}
# $$
#
# We can sample from this distribution over actions to choose our action for the next timestep.
#
# In our simple grid world environment, policies are identical to actions since we are only planning 1 step ahead into the future, so $\forall_{i} P(a = i | \pi = i) = 1$, which implies $P(a) = q(\pi)$.
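# In code, the policy posterior and the action marginal look like this (the summed EFE values below are made up for illustration; note the minus sign inside the softmax, since policies with *lower* expected free energy should be *more* probable):

```python
import numpy as np

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x))

num_actions = 5

# hypothetical total EFEs for the five one-step policies (one per action)
G = np.array([3.0, 1.0, 3.0, 3.0, 2.0])
q_pi = softmax(-G)               # posterior over policies

# with one-step policies, P(a | pi) is the identity mapping
P_a_given_pi = np.eye(num_actions)
P_a = P_a_given_pi @ q_pi        # P(a) = sum_pi P(a | pi) q(pi)

print(P_a.argmax())              # 1: the lowest-EFE action is the most probable
```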
#
# While all this may seem exceptionally long and complex, it results in an algorithm which is actually remarkably simple. The algorithm is:
#
# 1.) There is an agent with a generative model of the environment ($\textbf{A}$ and $\textbf{B}$ matrices), some initial set of approximate posterior beliefs $q(x_t | o_t)$ and a desired state vector $\textbf{C}$.
#
# 2.) The agent receives an observation $o_t$ and computes its posterior beliefs as we did earlier by minimizing the free energy.
#
# 3.) The agent now needs to choose what action to make to achieve its goals. It does this by:
#
# 3.1.) First creating a set of potential policies to evaluate.
#
# 3.2.) For each policy in this set, use the generative model to simulate the agent's trajectory in the environment *as if* it had emitted the actions prescribed by the policy
#
# 3.3.) For each future timestep of each future trajectory, compute the expected free energy of that time-step
#
# 3.4.) Sum the expected free energies for each timestep of each trajectory to get a total expected free energy for each possible policy.
#
# 3.5.) Use these total expected free energies to compute the posterior distribution $q(\pi)$ as done above.
#
# 3.6) Compute the marginal probability of each action, expected under the policies that include it, and then sample from this "action marginal" to generate an action at the current timestep.
#
# 4.) Execute the sampled action to 'step forward' the environmental dynamics and get a new observation. Go back to step 2.
#
#
# And that's it! We're done. We have the full algorithm to create an active inference agent. Now all we do is show how to translate this algorithm into code for our specific case.
# ## Beliefs
#
# First we need to set up an initial belief distribution, which we will then update according to the observations we receive.
# setup initial prior beliefs -- uncertain -- completely unknown which state it is in
Qs = np.ones(9) * 1/9
plot_beliefs(Qs)
# # Preferences
# Now we have to encode the agent's preferences, so that it can learn to go to its reward state. In the current context, the agent wants (i.e. expects) to be in the reward location 7.
# +
# C matrix -- desires
REWARD_LOCATION = 7
reward_state = state_mapping[REWARD_LOCATION]
print(reward_state)
C = np.zeros(num_states)
C[REWARD_LOCATION] = 1.
print(C)
plot_beliefs(C)
# -
# The C array is a vector of length 9, where each value represents the preference for occupying the corresponding state. We create a one-hot C vector, so that the agent only has a preference to be in state 7.
# # Implementing the Active Inference Agent
# ## Evaluate policy
#
# This helper function evaluates the negative expected free energy $-\mathcal{G}(\pi)$ for a given policy. To do this we need to calculate the cumulative expected free energy for that policy. All that entails is looping through the timesteps of the policy, simulating what the environment would do using our generative model (evolving the posterior beliefs with the actions entailed by the policy), and then computing the expected free energy of our (policy-dependent) expectations about the hidden states and observations.
#
#
def evaluate_policy(policy, Qs, A, B, C):
    # initialize expected free energy at 0
    G = 0
    # loop over policy
    for t in range(len(policy)):
        # get action entailed by the policy at timestep `t`
        u = int(policy[t])
        # work out expected state, given the action
        Qs_pi = B[:, :, u].dot(Qs)
        # work out expected observations, given the action
        Qo_pi = A.dot(Qs_pi)
        # get entropy
        H = - (A * log_stable(A)).sum(axis=0)
        # get predicted divergence
        # divergence = np.sum(Qo_pi * (log_stable(Qo_pi) - log_stable(C)), axis=0)
        divergence = KL_divergence(Qo_pi, C)
        # compute the expected uncertainty or ambiguity
        uncertainty = H.dot(Qs_pi)
        # increment the expected free energy counter for the policy, using the expected free energy at this timestep
        G += (divergence + uncertainty)
    return -G
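# To make the two terms of the expected free energy concrete, here is a hand-worked one-step computation for a toy 2-state, 2-observation model (all numbers are invented for illustration; `evaluate_policy` performs the same computation per timestep with the real $\textbf{A}$, $\textbf{B}$ and $\textbf{C}$):

```python
import math

def log_stable(x, eps=1e-16):
    # numerically stable log, as used throughout the notebook
    return math.log(x + eps)

# toy likelihood A[o][s] = P(o | s); each column sums to 1
A = [[0.9, 0.2],
     [0.1, 0.8]]
# expected states under some policy at this timestep
Qs_pi = [0.7, 0.3]
# expected observations: Qo_pi = A @ Qs_pi
Qo_pi = [sum(A[o][s] * Qs_pi[s] for s in range(2)) for o in range(2)]
# preference distribution over observations (here: indifferent)
C = [0.5, 0.5]

# ambiguity: expected entropy of the likelihood, H(s) = -sum_o A[o][s] log A[o][s]
H = [-sum(A[o][s] * log_stable(A[o][s]) for o in range(2)) for s in range(2)]
ambiguity = sum(H[s] * Qs_pi[s] for s in range(2))

# risk: KL divergence between expected and preferred observations
risk = sum(Qo_pi[o] * (log_stable(Qo_pi[o]) - log_stable(C[o])) for o in range(2))

G = ambiguity + risk  # one-step expected free energy
print(round(G, 3))
```

# Both terms are non-negative here, so a policy scores well only if it is both unambiguous and expected to land near the preferred observations.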
# ## Infer action
#
# This helper function will infer the most likely action. Specifically, it computes steps 3.1 to 3.5 of the active inference algorithm. First, it constructs all possible policies for a given policy length and set of actions. It then loops through every possible policy, computes the expected free energy of that policy using our previous function, and finally computes the policy distribution $q(\pi)$ as the softmax over the (negative) expected free energies.
def infer_action(Qs, A, B, C, n_actions, policies):
    # initialize the negative expected free energy
    neg_G = np.zeros(len(policies))
    # loop over every possible policy and compute the EFE of each policy
    for i, policy in enumerate(policies):
        neg_G[i] = evaluate_policy(policy, Qs, A, B, C)
    # get distribution over policies
    Q_pi = maths.softmax(neg_G)
    # initialize probabilities of control states (convert from policies to actions)
    Qu = np.zeros(n_actions)
    # sum probabilities of control states or actions
    for i, policy in enumerate(policies):
        # control state specified by policy
        u = int(policy[0])
        # add probability of policy
        Qu[u] += Q_pi[i]
    # normalize action marginal (norm_dist returns the normalized distribution)
    Qu = utils.norm_dist(Qu)
    # sample control from action marginal
    u = utils.sample(Qu)
    return u
# ## Main loop
# Here we implement the main loop of the active inference agent interacting with the environment -- essentially steps 1-5 of the MDP "program" we discussed in notebook 1. At each timestep, the agent infers an action and emits it to the environment; the environment is updated and returns an observation; the agent then infers the new state of the environment given that observation.
# +
# number of time steps
T = 10
#n_actions = env.n_control
n_actions = 5
# length of policies we consider
policy_len = 4
# this function generates all possible combinations of policies
policies = control.construct_policies([B.shape[0]], [n_actions], policy_len)
# reset environment
o = env.reset()
# loop over time
for t in range(T):
    # infer which action to take
    a = infer_action(Qs, A, B, C, n_actions, policies)
    # perform action in the environment and update the environment
    o = env.step(int(a))
    # infer new hidden state (this is the same equation as above but with pymdp functions)
    likelihood = A[o, :]
    prior = B[:, :, int(a)].dot(Qs)
    Qs = maths.softmax(log_stable(likelihood) + log_stable(prior))
    print(Qs.round(3))
    plot_beliefs(Qs, "Beliefs (Qs) at time {}".format(t))
# -
# And that's it! In the last two notebooks, we have implemented a basic active inference agent which can successfully navigate around a 3x3 gridworld using active inference. Moreover, we have implemented this all from scratch using mostly basic numpy functions and not using much of the functionality of `pymdp`.
#
# Hopefully after going through this you now understand roughly what active inference is and how it works, as well as ideally have some intuitions about how inference as well as policy selection work "under the hood", as well as learnt a lot about Bayesian and specifically variational inference. In the next notebook, we will focus more on the `pymdp` library itself and demonstrate how `pymdp` provides a useful set of abstractions that allows us to easily create active inference agents, as well as perform inference and policy selection in considerably more complex environments than the one described here.
#
# We will discuss the high-level structure of the library and show how it's possible to replicate these notebooks in a much smaller amount of code using the `pymdp` abstractions.
| examples/gridworld_tutorial_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="QfA-Qi4Wc9Ln"
# # The JAX emulator: CIGALE prototype
# In this notebook, I will prototype my idea for emulating radiative transfer codes with a deep net, so that the emulator can be used inside xidplus. As `numpyro` uses JAX, the deep net will ideally be trained with a JAX network. I will use CIGALE.
# + [markdown] id="jrVFlEIic9Lu"
# ### Advice from Kasia
# Use the following modules:
# * `Dale 2014` dust module with one parameter ($\alpha$); however, $\alpha$ can only take certain values in CIGALE
# * 0.0625, 0.1250, 0.1875, 0.2500, 0.3125, 0.3750, 0.4375, 0.5000, 0.5625, 0.6250, 0.6875, 0.7500, 0.8125, 0.8750, 0.9375, 1.0000, 1.0625, 1.1250, 1.1875, 1.2500, 1.3125, 1.3750, 1.4375, 1.5000, 1.5625, 1.6250, 1.6875, 1.7500, 1.8125, 1.8750, 1.9375, 2.0000, 2.0625, 2.1250, 2.1875, 2.2500, 2.3125, 2.3750, 2.4375, 2.5000, 2.5625, 2.6250, 2.6875, 2.7500, 2.8125, 2.8750, 2.9375, 3.0000, 3.0625, 3.1250, 3.1875, 3.2500, 3.3125, 3.3750, 3.4375, 3.5000, 3.5625, 3.6250, 3.6875, 3.7500, 3.8125, 3.8750, 3.9375, 4.0000
# * `sfhdelayed` star-formation history module. Has parameters $\tau$ (500-6500) ($age$ can be calculated from redshift). $f_{burst}$ is set to 0
# * `bc03`stellar population synthesis module (don't change parameters)
# * `dustatt_2powerlaws`
# * set $Av_{BC}$, the V-band attenuation in the birth clouds, to between 0 and 4
# * set `BC_to_ISM_factor` to 0.7
#
# Final parameters: $\alpha$, $Av_{BC}$, $\tau$, $z$, $SFR$, $AGN$
#
# Ideally, I would generate values from the prior. I can do that for $Av_{BC}$, $\tau$, $z$, $SFR$ and $AGN$, but not for $\alpha$, given that it can only take the fixed grid values.
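# One simple workaround (and what the sampling code below effectively does) is to draw a uniform random index into the fixed grid rather than drawing $\alpha$ directly:

```python
import random

random.seed(0)
# the fixed grid of alpha values allowed by CIGALE: 0.0625, 0.1250, ..., 4.0000
alpha_grid = [0.0625 * (i + 1) for i in range(64)]

# draw a random index, then look the value up in the grid
idx = random.randrange(len(alpha_grid))
alpha = alpha_grid[idx]
print(alpha in alpha_grid)  # True
```

# This puts a uniform prior over the allowed grid points instead of a continuous prior over $\alpha$.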
# + id="DGqUAaCic9Lv" outputId="0199c8f5-6a3e-4cf6-cfcb-f51d4e59204e" colab={"base_uri": "https://localhost:8080/", "height": 447}
from astropy.cosmology import WMAP9 as cosmo
import jax
import numpy as onp
import pylab as plt
import astropy.units as u
import scipy.integrate as integrate
# %matplotlib inline
import jax.numpy as np
from jax import grad, jit, vmap, value_and_grad
from jax import random
from jax import vmap # for auto-vectorizing functions
from functools import partial # for use with vmap
from jax import jit # for compiling functions for speedup
from jax.experimental import stax # neural network library
from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax, LeakyRelu # neural network layers
from jax.experimental import optimizers
from jax.tree_util import tree_multimap # Element-wise manipulation of collections of numpy arrays
import matplotlib.pyplot as plt # visualization
# Generate key which is used to generate random numbers
key = random.PRNGKey(2)
from xidplus import cigale
# + id="-wM5l1VMc9Lx"
onp.random.seed(2)
# + [markdown] id="Eavq9O5Uc9Ly"
# ### Generate CIGALE SEDs
# + id="lMSsg3OKc9Ly"
from astropy.io import fits
from astropy.table import Table
import scipy.stats as stats
# + id="X5Jmpm84c9Lz"
alpha=onp.array([0.0625, 0.1250, 0.1875, 0.2500,0.3125, 0.3750, 0.4375, 0.5000, 0.5625, 0.6250, 0.6875, 0.7500,0.8125, 0.8750, 0.9375, 1.0000, 1.0625, 1.1250, 1.1875, 1.2500,1.3125, 1.3750, 1.4375, 1.5000, 1.5625, 1.6250, 1.6875, 1.7500, 1.8125, 1.8750, 1.9375, 2.0000, 2.0625, 2.1250, 2.1875, 2.2500,2.3125, 2.3750, 2.4375, 2.5000, 2.5625, 2.6250, 2.6875, 2.7500,2.8125, 2.8750, 2.9375, 3.0000, 3.0625, 3.1250, 3.1875, 3.2500, 3.3125, 3.3750, 3.4375, 3.5000, 3.5625, 3.6250, 3.6875, 3.7500, 3.8125, 3.8750, 3.9375, 4.0000])
alpha_rv = stats.randint(0, len(alpha))
av_bc_rv=stats.uniform(0.1,4.0)
tau_rv=stats.randint(500,6500)
z_rv=stats.uniform(0.01,6)
sfr_rv=stats.loguniform(0.01,30000)
agn_frac_rv=stats.beta(1,3)
# + id="W9jnOZ5Kc9L0"
from astropy.cosmology import Planck13
# + id="bjbDSleCc9L1" outputId="b39b2b75-1d61-4a3a-f15e-812ce5762f4f"
z=z_rv.rvs(1)[0]
int(Planck13.age(z).value*1000)
alpha[alpha_rv.rvs(1)[0]]
# + id="O6Zj9M3ec9L2" outputId="9db10d65-ded8-45d0-fadf-e501516f3274"
nsamp=1
from astropy.constants import L_sun, M_sun
from astropy.table import vstack
col_scale=['spire_250','spire_350','spire_500','dust.luminosity','sfh.sfr','stellar.m_star']
parameter_names=onp.array(['tau_main','age_main','Av_BC','alpha','fracAGN','redshift'])
all_SEDs=[]
for i in range(0, nsamp):
    z = z_rv.rvs(1)[0]
    parameters = {'tau_main': [tau_rv.rvs(1)[0]], 'age_main': [int(Planck13.age(z).value * 1000)],
                  'Av_BC': [av_bc_rv.rvs(1)[0]], 'alpha': [alpha[alpha_rv.rvs(1)[0]]], 'fracAGN': [agn_frac_rv.rvs(1)[0]], 'redshift': [z]}
    path_to_cigale = '/Volumes/pdh_storage/cigale/'
    path_to_ini_file = 'pcigale_kasia_nn.ini'
    SEDs = cigale.generate_SEDs(parameter_names, parameters, path_to_cigale, path_to_ini_file, filename='tmp_single')
    # set more appropriate units for dust
    SEDs['dust.luminosity'] = SEDs['dust.luminosity'] / L_sun.value
    scale = 1.0 / SEDs['sfh.sfr']
    for c in col_scale:
        SEDs[c] = SEDs[c] * scale * sfr_rv.rvs(1)[0]
    all_SEDs.append(SEDs)
    if i and i % 100 == 0:
        tmp_SEDs = vstack(all_SEDs)
        tmp_SEDs.write('kasia_gen_SEDs_{}.fits'.format(i), overwrite=True)
        all_SEDs = []
# + id="6QW6YLywc9L3" outputId="36d9f3ce-3d59-4c9b-a236-45a2dff93681"
all_SEDs[0]
# + [markdown] id="R0x4Z2jec9L3"
# ### Generate values for CIGALE
# + [markdown] id="EYEvxvDoc9L4"
# Redshift
# + id="CXz1m3Hnc9L4" outputId="061d14d9-2295-466a-f63c-98f092b10f67"
onp.array2string(10.0**np.arange(-2.5,0.77,0.1), separator=',',formatter={'float_kind':lambda x: "%.4f" % x}).replace('\n','')
# + id="VReEHONQc9L5" outputId="db0788e8-9f91-4b51-af5e-c7f60d61caa8"
onp.array2string(np.arange(0.1,4,0.3),separator=',',formatter={'float_kind':lambda x: "%.4f" % x}).replace('\n','')
# + [markdown] id="HA6VodZxc9L6"
# AGN frac
# + id="btFdaZBtc9L6" outputId="84b92f17-81b6-4212-b5a6-369178f8b3ec"
onp.array2string(np.arange(0.001,1,0.075),separator=',',formatter={'float_kind':lambda x: "%.3f" % x}).replace('\n','')
# + id="60eQVUXGc9L6"
SEDs=Table.read('/Volumes/pdh_storage/cigale/out/models-block-0.fits')
#set more appropriate units for dust
from astropy.constants import L_sun, M_sun
SEDs['dust.luminosity']=SEDs['dust.luminosity']/L_sun.value
# + id="WSKeVs_6c9L7"
SEDs=SEDs[onp.isfinite(SEDs['spire_250'])]
# + id="wD6dhYRyc9L7" outputId="ddf8d5a8-a8d5-400a-de03-a863b980fc3b"
SEDs
# + id="RwD6hwG7c9L7"
from astropy.table import vstack
# + id="6dhDU7tOc9L8" outputId="81af5c3b-2f86-4a6d-a628-bad7ef0f29f1"
(1.0/dataset['sfh.sfr'])*dataset['sfh.sfr']*10.0**scale_table
# + id="BMB8W67kc9L8"
# define a range of scales
scale=np.arange(8,14,0.25)
#repeat the SED table by the number of scale steps
dataset=vstack([SEDs for i in range(0,scale.size)])
#repeat the scale range by the number of entries in table (so I can easily multiply each column)
scale_table=np.repeat(scale,len(SEDs))
#parameters to scale
col_scale=['spire_250','spire_350','spire_500','dust.luminosity','sfh.sfr','stellar.m_star']
for c in col_scale:
    dataset[c] = dataset[c] * 10.0**scale_table
dataset['log10_sfh.sfr']=onp.log10(dataset['sfh.sfr'])
dataset['log10_universe.redshift']=onp.log10(dataset['universe.redshift'])
# transform AGN fraction to logit scale
dataset['logit_agnfrac']=onp.log(dataset['agn.fracAGN']/(1-dataset['agn.fracAGN']))
#shuffle dataset
dataset=dataset[onp.random.choice(len(dataset), len(dataset), replace=False)]
# + id="6vb80TkCc9L9" outputId="9a10a828-9a84-4195-ce5c-f2401b9fec85"
plt.hist(dataset['log10_sfh.sfr'],bins=(np.arange(0,14)));
# + id="YCFkak4Kc9L9" outputId="c20b0cfc-8a65-4c9a-b380-195dc26009c1"
dataset
# + [markdown] id="fL1_SiOBc9L-"
# ## DeepNet building
# I will build a multi-input, multi-output deep net model as my emulator, with parameters as input and the observed fluxes as outputs. I will train on log flux to make the model easier to train, and have already standardised the input parameters. I will be using `stax`, which can be thought of as the `Keras` equivalent for `JAX`. This [blog](https://blog.evjang.com/2019/02/maml-jax.html) was a useful starting point.
# + [markdown] id="e2lpqIuyc9L-"
# I will use batches to help train the network
# + id="QnGtmvtvc9L-"
dataset=dataset[0:18000000]
# + id="1nIUV63nc9L-" outputId="305bc473-833e-4d63-ea7f-ff8a3bf03bd9"
len(dataset)/1200
# + id="96ukF-Cuc9L-"
split=0.75
inner_batch_size=1200
train_ind=onp.round(split*len(dataset)).astype(int)
train=dataset[0:train_ind]
validation=dataset[train_ind:]
input_cols=['log10_sfh.sfr','agn.fracAGN','universe.redshift', 'attenuation.Av_BC','dust.alpha','sfh.tau_main']
output_cols=['spire_250','spire_350','spire_500']
train_batch_X=np.asarray([i.data for i in train[input_cols].values()]).reshape(len(input_cols)
,inner_batch_size,onp.round(len(train)/inner_batch_size).astype(int)).T.astype(float)
train_batch_Y=np.asarray([np.log(i.data) for i in train[output_cols].values()]).reshape(len(output_cols),
inner_batch_size,onp.round(len(train)/inner_batch_size).astype(int)).T.astype(float)
validation_batch_X=np.asarray([i.data for i in validation[input_cols].values()]).reshape(len(input_cols)
,inner_batch_size,onp.round(len(validation)/inner_batch_size).astype(int)).T.astype(float)
validation_batch_Y=np.asarray([np.log(i.data) for i in validation[output_cols].values()]).reshape(len(output_cols),
inner_batch_size,onp.round(len(validation)/inner_batch_size).astype(int)).T.astype(float)
# + id="Jm38Y8sTc9L_"
# Use stax to set up network initialization and evaluation functions
net_init, net_apply = stax.serial(
Dense(128), LeakyRelu,
Dense(128), LeakyRelu,
Dense(128), LeakyRelu,
Dense(128), Relu,
Dense(len(output_cols))
)
in_shape = (-1, len(input_cols),)
out_shape, net_params = net_init(key,in_shape)
# + id="6vRLPCUQc9L_"
# + id="5wnLvXkhc9L_"
def loss(params, inputs, targets):
    # computes average loss for the batch
    predictions = net_apply(params, inputs)
    return np.mean((targets - predictions)**2)

def batch_loss(p, x_b, y_b):
    loss_b = vmap(partial(loss, p))(x_b, y_b)
    return np.mean(loss_b)
# + id="4Oqps0kbc9MA" outputId="7da555e1-66ac-4280-e583-9bb291ff598a"
opt_init, opt_update, get_params= optimizers.adam(step_size=5e-4)
out_shape, net_params = net_init(key,in_shape)
opt_state = opt_init(net_params)
@jit
def step(i, opt_state, x1, y1):
    p = get_params(opt_state)
    g = grad(batch_loss)(p, x1, y1)
    loss_tmp = batch_loss(p, x1, y1)
    return opt_update(i, g, opt_state), loss_tmp

np_batched_loss_1 = []
valid_loss = []
for i in range(10000):
    opt_state, l = step(i, opt_state, train_batch_X, train_batch_Y)
    p = get_params(opt_state)
    valid_loss.append(batch_loss(p, validation_batch_X, validation_batch_Y))
    np_batched_loss_1.append(l)
    if i % 100 == 0:
        print(i)
net_params = get_params(opt_state)
# + id="AuwJH78Xc9MA"
for i in range(2000):
    opt_state, l = step(i, opt_state, train_batch_X, train_batch_Y)
    p = get_params(opt_state)
    valid_loss.append(batch_loss(p, validation_batch_X, validation_batch_Y))
    np_batched_loss_1.append(l)
    if i % 100 == 0:
        print(i)
net_params = get_params(opt_state)
# + id="9BA8-HXrc9MA" outputId="a405363a-7df0-4ada-faf9-89e6fc6e3f53"
plt.figure(figsize=(20,10))
plt.semilogy(np_batched_loss_1,label='Training loss')
plt.semilogy(valid_loss,label='Validation loss')
plt.xlabel('Iteration')
plt.ylabel('Loss (MSE)')
plt.legend()
# + [markdown] id="PHVeXXpfc9MB"
# ## Investigate performance of each band of emulator
# To visualise the performance of the trained emulator, I will show the difference between the real and emulated fluxes for each band.
# + id="uHrth4CRc9MB"
net_params = get_params(opt_state)
predictions = net_apply(net_params,validation_batch_X)
# + id="SyqVpniyc9MB" outputId="1968e72e-4ae4-48a3-d802-5f4d9c0782ce"
validation_batch_X.shape
# + id="8OoNbpXsc9MB" outputId="05a765bc-987c-4c85-b920-a46d55b42813"
validation_batch_X[0,:,:].shape
# + id="Ho9ZBsLPc9MB" outputId="e9e8fa64-bcc9-48cf-8420-fb05756c5532"
res=((np.exp(predictions)-np.exp(validation_batch_Y))/(np.exp(validation_batch_Y)))
fig,axes=plt.subplots(1,len(output_cols),figsize=(50,len(output_cols)))
for i in range(0, len(output_cols)):
    axes[i].hist(res[:, :, i].flatten() * 100.0, np.arange(-10, 10, 0.1))
    axes[i].set_title(output_cols[i])
    axes[i].set_xlabel(r'$\frac{f_{pred} - f_{True}}{f_{True}} \ \%$ error')
plt.subplots_adjust(wspace=0.5)
# + [markdown] id="33SQyA-hc9MC"
# ## Save network
# Having trained and validated the network, I need to save it along with the relevant functions.
# + id="9W_lHMpfc9MC"
import cloudpickle
# + id="A_swmgtrc9MC" outputId="9b46130d-80f7-4555-e38f-39ce37b44e5c"
with open('CIGALE_emulator_20210330_log10sfr_uniformAGN_z.pkl', 'wb') as f:
    cloudpickle.dump({'net_init': net_init, 'net_apply': net_apply, 'params': net_params}, f)
net_init, net_apply
# + [markdown] id="YwaJCFbZc9MC"
# ## Does SED look right?
# + id="i1a_HGDTc9MH"
wave=np.array([250,350,500])
# + id="NDZKmEN0c9MH" outputId="bb3c4176-0c65-41e1-a804-fa2e940a81d4"
plt.loglog(wave,np.exp(net_apply(net_params,np.array([2.95, 0.801, 0.1]))),'o')
#plt.loglog(wave,10.0**net_apply(net_params,np.array([3.0,0.0,0.0])),'o')
plt.loglog(wave,dataset[(dataset['universe.redshift']==0.1) & (dataset['agn.fracAGN'] == 0.801) & (dataset['sfh.sfr']>900) & (dataset['sfh.sfr']<1100)][output_cols].values())
# + id="djsylzVIc9MI" outputId="5c3027f2-6cca-428f-fc42-a1f73101a92c"
dataset[(dataset['universe.redshift']==0.1) & (dataset['agn.fracAGN'] == 0.801) & (dataset['sfh.sfr']>900) & (dataset['sfh.sfr']<1100)]
# + id="X7AAzEy4c9MI" outputId="3ec399a8-0e66-4601-d0fa-5c1bef129fc3"
import xidplus
# + id="0eUvIYD4c9MI"
from xidplus.numpyro_fit.misc import load_emulator
# + id="tqaWWtFDc9MI"
obj=load_emulator('CIGALE_emulator_20210330_log10sfr_uniformAGN_z.pkl')
# + id="GUnIwQZ8c9MJ" outputId="9c03c1af-259b-4de8-e526-0af5f61f2fce"
type(obj['params'])
# + id="8muCQ7dTc9MJ"
import json
# + id="JPD_3tTjc9MJ"
import numpy as np
# + id="U7lFerTcc9MJ"
np.savez('CIGALE_emulator_20210610_kasia', obj['params'])
# + id="rcrDotrwc9MJ" outputId="ae213676-eef8-47bb-ae23-3bb8767e3c2f"
# ls
# + id="6iTWvMPic9MJ"
x=np.load('params_save.npz',allow_pickle=True)
# + id="uP_zDoiLc9MJ" outputId="83121576-2940-45e3-ad6a-bb6bca672b7a"
x['arr_0'].tolist()
# + id="pXhsDknWc9MK" outputId="6bf2a1d7-cdb1-4091-c26e-bf55ac054624"
obj['params']
# + id="_MvdaDNhc9MK"
| docs/notebooks/examples/SED_emulator/JAX_CIGALE_emulator-kasia.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook 4: Linear Regression (Ising)
# ## Learning Goal
# Let us now apply linear regression to an example that is familiar from Statistical Mechanics: the Ising model. The goal of this notebook is to revisit the concepts of in-sample and out-of-sample errors, as well as $L2$- and $L1$-regularization, in an example that is more intuitive to physicists.
#
# ## Overview
# Consider the 1D Ising model with nearest-neighbor interactions
#
# $$H[\boldsymbol{S}]=-J\sum_{j=1}^L S_{j}S_{j+1}$$
#
# on a chain of length $L$ with periodic boundary conditions and $S_j=\pm 1$ Ising spin variables. In one dimension, this paradigmatic model has no phase transition at finite temperature.
#
#
# ### Exercises (optional): ###
# We invite the reader who is unfamiliar with the property of the Ising model to solve the following problems.
# <ul>
# <li> Compute the partition function of the Ising model in one dimension at inverse temperature $\beta$ when $L\rightarrow\infty$ (thermodynamic limit):
# $$Z=\sum_S \exp(-\beta H[S]).$$
# Here the sum is carried over all $2^L$ spin configurations.
# <li> Compute the model's magnetization $M=\langle\sum_i S_i\rangle$ in the same limit ($L\rightarrow\infty$). The expectation is taken with respect to the Boltzmann distribution:
# $$p(S)=\frac{\exp(-\beta H[S])}{Z}$$
# <li> How does $M$ behave as a function of the temperature $T=\beta^{-1}$?
# </ul>
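# As a quick numerical sanity check on the first exercise (our own addition, not part of the original exercise set), one can compare a brute-force sum over all $2^L$ configurations with the exact transfer-matrix result $Z = (2\cosh\beta J)^L + (2\sinh\beta J)^L$ for a small periodic chain:

```python
import math
from itertools import product

def Z_brute_force(L, beta, J=1.0):
    # sum exp(-beta H) over all 2^L spin configurations, periodic boundaries
    Z = 0.0
    for spins in product([-1, 1], repeat=L):
        H = -J * sum(spins[j] * spins[(j + 1) % L] for j in range(L))
        Z += math.exp(-beta * H)
    return Z

def Z_exact(L, beta, J=1.0):
    # eigenvalues of the 2x2 transfer matrix at zero field
    lam_plus = 2 * math.cosh(beta * J)
    lam_minus = 2 * math.sinh(beta * J)
    return lam_plus**L + lam_minus**L

L, beta = 8, 0.5
print(Z_brute_force(L, beta), Z_exact(L, beta))  # the two agree
```

# In the thermodynamic limit only the larger eigenvalue survives, giving the free energy per spin $-\beta^{-1}\ln(2\cosh\beta J)$.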
#
# For a more detailed introduction we refer the reader to consult one of the many textbooks on the subject (see for instance <a href="https://www.amazon.com/Lectures-Transitions-Renormalization-Frontiers-Physics/dp/0201554097">Goldenfeld</a>, <a href="https://www.google.com/search?q=lubensky+condensed+matter+physics&rlz=1C5CHFA_enCA776CA776&oq=lubensky+&aqs=chrome.2.69i57j0l5.3047j1j7&sourceid=chrome&ie=UTF-8">Lubensky</a>, <a href="https://physics.anu.edu.au/theophys/baxter_book.php">Baxter </a>, etc.).
#
# ### Learning the Ising model ###
#
# Suppose your boss set $J=1$, drew a large number of spin configurations, and computed their Ising energies. Then, without telling you about the above Hamiltonian, he or she handed you a data set of $i=1\ldots n$ points of the form $\{(H[\boldsymbol{S}^i],\boldsymbol{S}^i)\}$. Your task is to learn the Hamiltonian using Linear regression techniques.
# +
import numpy as np
import scipy.sparse as sp
np.random.seed(12)
import warnings
# Comment this to turn on warnings
warnings.filterwarnings('ignore')
### define Ising model params
# system size
L=40
# create 10000 random Ising states
states=np.random.choice([-1, 1], size=(10000,L))
def ising_energies(states):
    """
    This function calculates the energies of the states in the nn Ising Hamiltonian
    """
    L = states.shape[1]
    J = np.zeros((L, L),)
    for i in range(L):
        J[i, (i + 1) % L] = -1.0  # interaction between nearest-neighbors
    # compute energies
    E = np.einsum('...i,ij,...j->...', states, J, states)
    return E
# calculate Ising energies
energies=ising_energies(states)
# -
# ## Recasting the problem as a Linear Regression
# First of all, we have to decide on a model class (possible Hamiltonians) we use to fit the data. In the absence of any prior knowledge, one sensible choice is the all-to-all Ising model
#
# $$
# H_\mathrm{model}[\boldsymbol{S}^i] = - \sum_{j=1}^L \sum_{k=1}^L J_{j,k}S_{j}^iS_{k}^i.
# $$
# Notice that this model is uniquely defined by the non-local coupling strengths $J_{jk}$ which we want to learn. Importantly, this model is linear in ${\mathbf J}$ which makes it possible to use linear regression.
#
# To apply linear regression, we would like to recast this model in the form
# $$
# H_\mathrm{model}^i \equiv \mathbf{X}^i \cdot \mathbf{J},
# $$
#
# where the vectors $\mathbf{X}^i$ represent all two-body interactions $\{S_{j}^iS_{k}^i \}_{j,k=1}^L$, and the index $i$ runs over the samples in the data set. To make the analogy complete, we can also represent the dot product by a single index $p = \{j,k\}$, i.e. $\mathbf{X}^i \cdot \mathbf{J}=X^i_pJ_p$. Note that the regression model does not include the minus sign, so we expect to learn negative $J$'s.
# reshape Ising states into RL samples: S_iS_j --> X_p
states=np.einsum('...i,...j->...ij', states, states)
shape=states.shape
states=states.reshape((shape[0],shape[1]*shape[2]))
# build final data set
Data=[states,energies]
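# As a self-contained consistency check of this recasting (using a tiny chain in pure Python, independent of the arrays above): the energy computed directly from the Hamiltonian must equal the dot product of the flattened feature vector $X_p = S_jS_k$ with the flattened coupling matrix $J_p$:

```python
import random

random.seed(0)
L = 5
spins = [random.choice([-1, 1]) for _ in range(L)]

# nearest-neighbour couplings, J[j][(j+1) % L] = -1, as in ising_energies
# (J already carries the minus sign of the Hamiltonian)
J = [[0.0] * L for _ in range(L)]
for j in range(L):
    J[j][(j + 1) % L] = -1.0

# energy from the Hamiltonian: E = sum_{j,k} J_{jk} S_j S_k
E_direct = sum(J[j][k] * spins[j] * spins[k] for j in range(L) for k in range(L))

# energy as a linear model: flatten features X_p = S_j S_k and couplings J_p
X = [spins[j] * spins[k] for j in range(L) for k in range(L)]
J_flat = [J[j][k] for j in range(L) for k in range(L)]
E_linear = sum(x * jp for x, jp in zip(X, J_flat))

print(E_direct, E_linear)  # identical by construction
```

# This is exactly the `einsum`/`reshape` manipulation above, spelled out for one sample.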
# ## Numerical Experiments
#
# As we already mentioned a few times in the review, learning is not fitting: the subtle difference is that once we fit the data to obtain a candidate model, we expect it to generalize to unseen data not used for the fitting procedure. For this reason, we begin by specifying training and test data sets
# define number of samples
n_samples=400
# define train and test data sets
X_train=Data[0][:n_samples]
Y_train=Data[1][:n_samples] #+ np.random.normal(0,4.0,size=X_train.shape[0])
X_test=Data[0][n_samples:3*n_samples//2]
Y_test=Data[1][n_samples:3*n_samples//2] #+ np.random.normal(0,4.0,size=X_test.shape[0])
# # Evaluating the performance: coefficient of determination $R^2$
# In what follows the model performance (in-sample and out-of-sample) is evaluated using the so-called coefficient of determination, which is given by:
# \begin{align}
# R^2 &= 1-\frac{u}{v},\\
# u&=\sum_i \left(y^{i}_{pred}-y^{i}_{true}\right)^2,\\
# v&=\sum_i \left(y^{i}_{true}-\langle y_{true}\rangle\right)^2.
# \end{align}
# The best possible score is 1.0, but it can also be negative. A constant model that always predicts the expected value of $y$, $\langle y_{true}\rangle$, disregarding the input features, would get an $R^2$ score of 0.
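# A minimal hand computation of this score (toy numbers, not the Ising data):

```python
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]

mean_true = sum(y_true) / len(y_true)
u = sum((p - t) ** 2 for p, t in zip(y_pred, y_true))   # residual sum of squares
v = sum((t - mean_true) ** 2 for t in y_true)           # total sum of squares
R2 = 1 - u / v
print(round(R2, 3))  # → 0.98
```

# Predicting the mean for every sample gives $u=v$ and hence $R^2=0$, while perfect predictions give $u=0$ and $R^2=1$.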
# ## Applying OLS, Ridge regression and LASSO:
# +
from sklearn import linear_model
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn
# %matplotlib inline
# set up Lasso and Ridge Regression models
leastsq=linear_model.LinearRegression()
ridge=linear_model.Ridge()
lasso = linear_model.Lasso()
# define error lists
train_errors_leastsq = []
test_errors_leastsq = []
train_errors_ridge = []
test_errors_ridge = []
train_errors_lasso = []
test_errors_lasso = []
# set regularisation strength values
lmbdas = np.logspace(-4, 5, 10)
#Initialize coeffficients for ridge regression and Lasso
coefs_leastsq = []
coefs_ridge = []
coefs_lasso=[]
for lmbda in lmbdas:
    ### ordinary least squares
    leastsq.fit(X_train, Y_train)  # fit model
    coefs_leastsq.append(leastsq.coef_)  # store weights
    # use the coefficient of determination R^2 as the performance of prediction.
    train_errors_leastsq.append(leastsq.score(X_train, Y_train))
    test_errors_leastsq.append(leastsq.score(X_test, Y_test))

    ### apply RIDGE regression
    ridge.set_params(alpha=lmbda)  # set regularisation parameter
    ridge.fit(X_train, Y_train)  # fit model
    coefs_ridge.append(ridge.coef_)  # store weights
    # use the coefficient of determination R^2 as the performance of prediction.
    train_errors_ridge.append(ridge.score(X_train, Y_train))
    test_errors_ridge.append(ridge.score(X_test, Y_test))

    ### apply LASSO regression
    lasso.set_params(alpha=lmbda)  # set regularisation parameter
    lasso.fit(X_train, Y_train)  # fit model
    coefs_lasso.append(lasso.coef_)  # store weights
    # use the coefficient of determination R^2 as the performance of prediction.
    train_errors_lasso.append(lasso.score(X_train, Y_train))
    test_errors_lasso.append(lasso.score(X_test, Y_test))
### plot Ising interaction J
J_leastsq=np.array(leastsq.coef_).reshape((L,L))
J_ridge=np.array(ridge.coef_).reshape((L,L))
J_lasso=np.array(lasso.coef_).reshape((L,L))
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
fig, axarr = plt.subplots(nrows=1, ncols=3)
axarr[0].imshow(J_leastsq,**cmap_args)
axarr[0].set_title('OLS \n Train$=%.3f$, Test$=%.3f$'%(train_errors_leastsq[-1], test_errors_leastsq[-1]),fontsize=16)
axarr[0].tick_params(labelsize=16)
axarr[1].imshow(J_ridge,**cmap_args)
axarr[1].set_title('Ridge $\lambda=%.4f$\n Train$=%.3f$, Test$=%.3f$' %(lmbda,train_errors_ridge[-1],test_errors_ridge[-1]),fontsize=16)
axarr[1].tick_params(labelsize=16)
im=axarr[2].imshow(J_lasso,**cmap_args)
axarr[2].set_title('LASSO $\lambda=%.4f$\n Train$=%.3f$, Test$=%.3f$' %(lmbda,train_errors_lasso[-1],test_errors_lasso[-1]),fontsize=16)
axarr[2].tick_params(labelsize=16)
divider = make_axes_locatable(axarr[2])
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar=fig.colorbar(im, cax=cax)
cbar.ax.set_yticklabels(np.arange(-1.0, 1.0+0.25, 0.25),fontsize=14)
cbar.set_label('$J_{i,j}$',labelpad=15, y=0.5,fontsize=20,rotation=0)
fig.subplots_adjust(right=2.0)
plt.show()
# -
# To quantify learning, we also plot the in-sample and out-of-sample errors
# +
# Plot our performance on both the training and test data
plt.semilogx(lmbdas, train_errors_leastsq, 'b',label='Train (OLS)')
plt.semilogx(lmbdas, test_errors_leastsq,'--b',label='Test (OLS)')
plt.semilogx(lmbdas, train_errors_ridge,'r',label='Train (Ridge)',linewidth=1)
plt.semilogx(lmbdas, test_errors_ridge,'--r',label='Test (Ridge)',linewidth=1)
plt.semilogx(lmbdas, train_errors_lasso, 'g',label='Train (LASSO)')
plt.semilogx(lmbdas, test_errors_lasso, '--g',label='Test (LASSO)')
fig = plt.gcf()
fig.set_size_inches(10.0, 6.0)
#plt.vlines(alpha_optim, plt.ylim()[0], np.max(test_errors), color='k',
# linewidth=3, label='Optimum on test')
plt.legend(loc='lower left',fontsize=16)
plt.ylim([-0.1, 1.1])
plt.xlim([min(lmbdas), max(lmbdas)])
plt.xlabel(r'$\lambda$',fontsize=16)
plt.ylabel('Performance',fontsize=16)
plt.tick_params(labelsize=16)
plt.show()
# -
# ## Understanding the results
#
# Let us make a few remarks:
#
# (i) the inverse (see [Scikit documentation](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model)) regularization parameter $\lambda$ affects the Ridge and LASSO regressions at scales separated by a few orders of magnitude. Notice that this differs from the data considered in Notebook 3 __Section VI: Linear Regression (Diabetes)__. Therefore, it is considered good practice to always check the performance for the given model and data with $\lambda$ varied over multiple decades.
#
# (ii) at $\lambda\to 0$ and $\lambda\to\infty$, all three models overfit the data, as can be seen from the deviation of the test errors from unity (dashed lines), while the training curves stay at unity.
#
# (iii) While the OLS and Ridge regression test curves are monotonic, the LASSO test curve is not -- suggesting the optimal LASSO regularization parameter is $\lambda\approx 10^{-2}$. At this sweet spot, the Ising interaction weights ${\bf J}$ contain only nearest-neighbor terms (as did the model the data was generated from).
#
# __Gauge degrees of freedom__: recall that the uniform nearest-neighbor interaction strength $J_{j,k}=J$ which we used to generate the data was set to unity, $J=1$. Moreover, $J_{j,k}$ was NOT defined to be symmetric (we only used the $J_{j,j+1}$ but never the $J_{j,j-1}$ elements). The colorbar on the matrix elements plot above suggests that the OLS and Ridge regressions learn uniform symmetric weights $J=-0.5$. There is no mystery here: this amounts to taking into account both the $J_{j,j+1}$ and the $J_{j,j-1}$ terms, with the weights distributed symmetrically between them. LASSO, on the other hand, can break this symmetry (see the matrix elements plots for $\lambda=0.001$ and $\lambda=0.01$). Thus, we see how different regularization schemes can lead to learning equivalent models in different gauges. Any information we have about the symmetry of the unknown model that generated the data has to be reflected in the definition of the model and the regularization chosen.
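The gauge equivalence can be checked directly: asymmetric couplings $J_{j,j+1}=-1$ and their symmetrized counterpart with $-0.5$ on both off-diagonals assign the same energy to every spin configuration. A minimal standalone sketch (the matrices here are illustrative stand-ins, not the notebook's fitted coefficients):

```python
import numpy as np

L = 6  # number of spins

# asymmetric gauge: only the J[j, j+1] elements are set, as in the generating model
J_asym = np.zeros((L, L))
for j in range(L - 1):
    J_asym[j, j + 1] = -1.0

# symmetric gauge: the same coupling split between J[j, j+1] and J[j+1, j]
J_sym = 0.5 * (J_asym + J_asym.T)

# both matrices assign the same energy E[s] = s^T J s to every configuration
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=L)
e_asym = s @ J_asym @ s
e_sym = s @ J_sym @ s
print(np.isclose(e_asym, e_sym))  # True
```

Since $s^\mathsf{T} J^\mathsf{T} s = s^\mathsf{T} J s$, any split of each coupling between the two off-diagonal slots leaves the energy unchanged, which is exactly the freedom the regularizers resolve differently.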
# ### Exercises: ###
# <ul>
# <li> Plot a histogram of the distribution of the components of ${\bf J}$ at different values of the number of training samples (one can go up to $2\times 10^4$). What happens to the sampling noise as the number of samples is increased/decreased for the three types of regression considered? How do the matrix elements plots above change?
#
# <li> Try to learn the underlying model of the data, assuming it lies within the class of one-body Hamiltonians, i.e. make the ansatz
# $$H_\mathrm{model}[\boldsymbol{S}^i] = \sum_{j=1}^L h_jS_{j}$$
# for some unknown field $h_j$. How well can you explain the data? How well does the model generalize? Study these problems by playing with the size of the data set. Try out all three regression models and determine which one does best. What is the relationship of this model to Mean-Field Theory?
# </ul>
| jupyter_notebooks/notebooks/NB4_CVI-linreg_ising.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Word frequency and UFO shapes
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter
from math import ceil, floor
OUT_FOLDER = 'grafici/FrequenzaParole_FormeUFO'
FIGURE_SIZE = (20, 8)
# %matplotlib inline
# -
DATASET = 'ufos.csv'
data = pd.read_csv(DATASET)
len(data)
# ## Computing the most frequent words
# +
import re
from collections import defaultdict
# compile the regex once for performance
words_re = re.compile('[a-zA-Z0-9]+')
def tokenize(comment):
return words_re.findall(comment)
occurrencies = defaultdict(int)
for _, comment in data['comments'].dropna().items():
for word in tokenize(comment):
occurrencies[word] += 1
occurrencies = pd.Series(occurrencies)
# Remove some articles and other uninteresting words
WORDS_TO_FILTER = [
'the', 'in', 'a', 'and', 'of', 'i', 'to', 'was', 'at', 'it', 'with',
'over', 'on', 'then', 'from', 'my', 'saw', 'like', 'that', '3', '2',
'no', 'for', 'very', 'seen', 'we', 'were'
]
occurrencies = occurrencies.drop(labels=WORDS_TO_FILTER)
# Merge 'light'/'lights' and 'object'/'objects', which refer to the same word
occurrencies['lights'] += occurrencies['light']
occurrencies['objects'] += occurrencies['object']
occurrencies = occurrencies.drop(labels=['light', 'object'])
occurrencies = occurrencies.sort_values(ascending=False).head(20) # keep only the top 20 words
# -
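The tokenize-and-count pattern above can also be expressed with `collections.Counter`, which provides `most_common` for the top-k step; a minimal sketch on toy comments (this sketch additionally lower-cases tokens, which the notebook does not):

```python
import re
from collections import Counter

words_re = re.compile(r'[a-zA-Z0-9]+')

def tokenize(comment):
    return words_re.findall(comment)

comments = [
    "Bright light over the lake",
    "A light moved fast, then two lights appeared",
]

# Counter.update accepts any iterable of tokens
counts = Counter()
for comment in comments:
    counts.update(word.lower() for word in tokenize(comment))

print(counts['light'], counts['lights'])  # 2 1
```

`counts.most_common(20)` would then replace the `sort_values(...).head(20)` step, at the cost of losing the pandas `Series` interface used later for `drop` and plotting.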
# ## Computing the most sighted UFO shapes
shape_counts = data['shape'].value_counts()
# Collapse all shapes with < 10 sightings into a single 'other' bucket
small_values = shape_counts[shape_counts < 10]
total = sum(small_values)
shape_counts = shape_counts.drop(index=small_values.index)
shape_counts['other'] = total
shape_counts = shape_counts.sort_values(ascending=False)
GRAY = '#cccccc'
# ## Version without any color
# +
fig, (ax, bx) = plt.subplots(nrows=1, ncols=2, figsize=FIGURE_SIZE)
shape_bars = ax.barh(shape_counts.index, shape_counts.values, color=GRAY)
words_bars = bx.barh(occurrencies.index, occurrencies.values, color=GRAY)
ax.set_title('Sightings per UFO shape')
ax.set_xlabel('Number of sightings')
bx.set_title('Most used words in sighting descriptions')
bx.set_xlabel('Occurrences of the word')
for ax in [ax, bx]:
for spine in ['left', 'right', 'top', 'bottom']:
ax.spines[spine].set_visible(False)
ax.invert_yaxis()
ax.xaxis.set_major_formatter(StrMethodFormatter('{x:,g}'))
ax.tick_params(
axis="both",
which="both",
bottom=False,
top=False,
labelbottom=True,
left=False,
right=False,
labelleft=True
)
vals = ax.get_xticks()
for tick in vals:
ax.axvline(x=tick, linestyle='dashed', alpha=0.4, color='#eeeeee', zorder=1)
plt.savefig(f'{OUT_FOLDER}/NoColor.png')
plt.show()
# +
fig, (ax, bx) = plt.subplots(nrows=1, ncols=2, figsize=FIGURE_SIZE)
GRAY = '#cccccc'
shape_bars = ax.barh(shape_counts.index, shape_counts.values, color=GRAY)
colors = [GRAY] * len(occurrencies.index)
find_index = lambda x: np.where(occurrencies.index.values == x)[0][0]
colors[find_index('green')] = '#59bf5b'
colors[find_index('orange')] = '#ffa500'
colors[find_index('red')] = '#ca472f'
words_bars = bx.barh(occurrencies.index, occurrencies.values, color=colors)
ax.set_title('Sightings per UFO shape')
ax.set_xlabel('Number of sightings')
bx.set_title('Most used words in sighting descriptions')
bx.set_xlabel('Occurrences of the word')
for ax in [ax, bx]:
for spine in ['left', 'right', 'top', 'bottom']:
ax.spines[spine].set_visible(False)
ax.invert_yaxis()
ax.xaxis.set_major_formatter(StrMethodFormatter('{x:,g}'))
ax.tick_params(
axis="both",
which="both",
bottom=False,
top=False,
labelbottom=True,
left=False,
right=False,
labelleft=True
)
vals = ax.get_xticks()
for tick in vals:
ax.axvline(x=tick, linestyle='dashed', alpha=0.4, color='#eeeeee', zorder=1)
plt.savefig(f'{OUT_FOLDER}/Colorato.png')
plt.show()
| FrequenzaParole_FormeUfo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### Match traj to hex
from fmm import H3MM,hexs2wkt
traj = "LINESTRING (18.024101257324215 59.337523121884225, 18.03852081298828 59.34391321930451, 18.042125701904297 59.35353986273416, 18.056459426879883 59.36080179623859, 18.065214157104492 59.34964577662557)"
hex_level = 9
interpolate = False
result = H3MM.match_wkt(traj, hex_level, interpolate)
print(result.traj_id)
print(list(result.hexs))
print(hexs2wkt(result.hexs))
# ### Plot result
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
from shapely import wkt
def plot_traj_hex(traj_geom, hex_geom, margin = 0.01):
fig,ax = plt.subplots(1,1,figsize=(6,4))
patches = []
tc = "C1"
hc = "C4"
x,y = traj_geom.xy
ax.plot(x,y,c=tc,marker="o",ms=6,lw=2,markeredgewidth=4, markeredgecolor=tc)
for geom in hex_geom.geoms:
x,y = geom.exterior.xy
ax.fill(x, y, fc = hc, ec="w",linewidth=2, alpha = 0.8)
ax.tick_params(axis='both',left=False, top=False, right=False, bottom=False,
labelleft=False, labeltop=False, labelright=False, labelbottom=False)
minx, miny, maxx, maxy = traj_geom.envelope.buffer(margin).bounds
ax.set_xlim(minx,maxx)
ax.set_ylim(miny,maxy)
# ax.set_aspect(1.0)
return fig,ax
level = 8
interpolate = False
result = H3MM.match_wkt(traj, level, interpolate)
traj_geom = wkt.loads(traj)
hex_geom = wkt.loads(hexs2wkt(result.hexs))
fig,ax = plot_traj_hex(traj_geom,hex_geom)
# plt.tight_layout()
ax.set_title("H{} Interpolate {}".format(level,interpolate),fontsize=16)
fig.savefig("h{}{}.png".format(level,("","i")[interpolate]),dpi=300,bbox_inches='tight',pad_inches=0)
level = 9
interpolate = True
result = H3MM.match_wkt(traj, level, interpolate)
traj_geom = wkt.loads(traj)
hex_geom = wkt.loads(hexs2wkt(result.hexs))
fig,ax = plot_traj_hex(traj_geom,hex_geom,margin=0.005)
ax.set_title("H{} Interpolate {}".format(level,interpolate),fontsize=16)
fig.savefig("h{}{}.png".format(level,("","i")[interpolate]),dpi=300,bbox_inches='tight',pad_inches=0)
level = 8
interpolate = True
result = H3MM.match_wkt(traj, level, interpolate)
traj_geom = wkt.loads(traj)
hex_geom = wkt.loads(hexs2wkt(result.hexs))
fig,ax = plot_traj_hex(traj_geom,hex_geom)
ax.set_title("H{} Interpolate {}".format(level,interpolate),fontsize=16)
fig.savefig("h{}{}.png".format(level,("","i")[interpolate]),dpi=300,bbox_inches='tight',pad_inches=0)
level = 7
interpolate = True
result = H3MM.match_wkt(traj, level, interpolate)
traj_geom = wkt.loads(traj)
hex_geom = wkt.loads(hexs2wkt(result.hexs))
fig,ax = plot_traj_hex(traj_geom,hex_geom,margin=0.03)
ax.set_title("H{} Interpolate {}".format(level,interpolate),fontsize=16)
fig.savefig("h{}{}.png".format(level,("","i")[interpolate]),dpi=300,bbox_inches='tight',pad_inches=0)
# ### Plot as a whole
levels = [8, 8, 9, 7]
interpolates = [False, True, True, True]
fig,axes = plt.subplots(2,2,figsize=(8,6.1))
patches = []
tc = "C1"
hc = "C4"
for level,interpolate,ax in zip(levels,interpolates,axes.flatten()):
result = H3MM.match_wkt(traj, level, interpolate)
traj_geom = wkt.loads(traj)
hex_geom = wkt.loads(hexs2wkt(result.hexs))
x,y = traj_geom.xy
ax.plot(x,y,c=tc,marker="o",ms=5,lw=2,markeredgewidth=4)
for geom in hex_geom.geoms:
x,y = geom.exterior.xy
ax.fill(x, y, fc = hc, ec="w",linewidth=2, alpha = 0.8)
ax.tick_params(axis='both',left=False, top=False, right=False, bottom=False,
labelleft=False, labeltop=False, labelright=False, labelbottom=False)
ax.set_aspect(1.0)
ax.set_title("H{} {}".format(level,("","Interpolate")[interpolate]),position=(0.5, 0.9),fontsize=16)
minx, miny, maxx, maxy = traj_geom.envelope.buffer(0.015).bounds
yoffset = 0.003
ax.set_xlim(minx,maxx)
ax.set_ylim(miny+yoffset,maxy+yoffset)
plt.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0)
fig.savefig("h3demo.png",dpi=300,bbox_inches='tight',pad_inches=0)
| example/h3/h3demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rabin1323/DataScience_Final_Project/blob/main/Data_science_Project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="ksc2cKm9mOsb" colab={"base_uri": "https://localhost:8080/"} outputId="022ace0c-8a79-42e4-fa54-d43a687c8696"
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
import string
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
# + colab={"base_uri": "https://localhost:8080/", "height": 589} id="H64GHtMnrgkb" outputId="7ab26a35-3d06-4232-b865-00ee0a346eff"
yelp_df = pd.read_csv('https://raw.githubusercontent.com/rabin1323/Data_Set/main/yelp_review_csv1.csv')
yelp_df
# + [markdown] id="H6qPPoSIT-Ai"
# **EXPLORING DATASET**
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="bgrtYwY1oXt7" outputId="3f1b340b-c11b-4eeb-cd1b-cd1a57e7eed9"
yelp_df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="-sN1xrzvpiV1" outputId="9bf1796c-7e6e-4ebd-cea4-a1f39bc6dfb8"
yelp_df.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 330} id="jh9m6UfvVZPL" outputId="42e7998f-4ba2-4ddf-d842-8d60ba15fbbf"
#to check if we have any null values
sns.heatmap(yelp_df.isnull(), yticklabels=False, cbar= False, cmap="Blues")
#if there is any dot on the plot that means we have null values
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="tcz61WUKGQnT" outputId="86bbaa14-428a-4f84-e01e-5ce3cc3753f4"
yelp_df=yelp_df.drop(['useful','funny','cool','review_id','user_id','business_id','date'], axis=1)
yelp_df
# + id="MOS6AQRzZ_yE"
#data cleaning
def preprocess(review_text):
remove_pctn=[char for char in review_text if char not in string.punctuation]
remove_pctn= ''.join(remove_pctn)
lwr = [word.lower() for word in remove_pctn.split()]
final_word = [word for word in lwr if word not in stopwords.words('english')]
return final_word
# + [markdown] id="UkdwAbP4r17p"
# For the sentiment analysis we only consider two star ratings: one star for negative reviews and five stars for positive reviews.
#
# We will also use a count vectorizer to build a vocabulary model for the review text. After that we transform the vectorized text and assign it to the variable x. Lastly, we split the data into training and test sets using train_test_split().
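The vectorization step can be illustrated without scikit-learn; a minimal bag-of-words sketch on toy reviews (`CountVectorizer` additionally handles sparse storage, vocabulary pruning, and the custom `analyzer` used below):

```python
from collections import Counter

reviews = ["great food great service", "terrible food"]

# build the vocabulary: one column per distinct word
vocab = sorted({word for review in reviews for word in review.split()})

# each review becomes a row of word counts over that vocabulary
def vectorize(review):
    counts = Counter(review.split())
    return [counts[word] for word in vocab]

X = [vectorize(review) for review in reviews]
print(vocab)  # ['food', 'great', 'service', 'terrible']
print(X)      # [[1, 2, 1, 0], [1, 0, 0, 1]]
```

The resulting rows play the role of the transformed `x` passed to `train_test_split` and the classifier.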
# + id="t4ZWhz73juO1"
#Filtering Data
filtered_data = yelp_df[(yelp_df['stars']==1) | (yelp_df['stars']==5)]
x = filtered_data['text'] #assigning review text to variable x
y=filtered_data['stars'] #assigning stars to variable y
vectorizer=CountVectorizer(analyzer=preprocess).fit(x)
x=vectorizer.transform(x) #transforming the vectorized text
X_train, X_test, y_train, y_test= train_test_split(x, y, random_state=42)
# + colab={"base_uri": "https://localhost:8080/", "height": 349} id="PnUf2tuXZ9v3" outputId="b8bce8d6-3034-4794-8dfb-c7cae08bbc17"
sns.countplot(x=filtered_data['stars'])
# + id="FUaoN760ue-6"
model= MLPClassifier()
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
# + id="anNUziIieh4x"
#plotting the reviews using confusion matrix
def conf_matrix(y, y_predict, reviews, title= 'Confusion_Matrix'):
c_matrix = confusion_matrix(y, y_predict)
clsfn_report = classification_report(y, y_predict)
ticks = np.arange(len(reviews))
score = accuracy_score(y, y_predict)
score=round(score*100,2)
print("Accuracy_score:", score)
print('classification_report', clsfn_report)
sns.heatmap(c_matrix, cmap= 'PuBu', annot= True, fmt='g', annot_kws={'size':20})
plt.xticks(ticks, reviews)
plt.yticks(ticks, reviews)
plt.xlabel('predicted', fontsize=20)
plt.ylabel('actual', fontsize=20)
plt.title(title, fontsize=20)
    plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="YS83SiUwjbtL" outputId="83f1fc5b-7a1a-4a9a-e821-5331a77e0366"
conf_matrix(y_test, y_predict, reviews=['negative(1)', 'positive(5)'])
# + [markdown] id="LFHnPOdZT7bQ"
# The accuracy is 93.86%, which is a good sign.
| Data_science_Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Programming
# + [markdown] slideshow={"slide_type": "slide"} tags=[]
# ## What is programming
#
# **ICT != programming**
# + [markdown] slideshow={"slide_type": "notes"} tags=[]
# Earlier we saw that programming is only one part of computer science. It only matters for the question of whether you can design a process for solving a problem. Or, in other words, coming up with *a recipe*. (`!=` means *not equal to*; this is program code!)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Programming
#
# - writing recipes
# - learning a foreign language
# + [markdown] slideshow={"slide_type": "notes"}
# Learning to program is about being able to write a recipe as well as learning a foreign language!
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "notes"}
# Learning a programming language is not much different from learning a natural language like French or English. So expect to learn it only by doing it. And whether or not you are [good at math](https://www.washington.edu/news/2020/03/02/not-a-math-person-you-may-be-better-at-learning-to-code-than-you-think/) (as is sometimes said) has very little to do with learning a programming language.
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "notes"}
# Through research we keep learning more about which skills play a role in learning to program. A researcher demonstrates here how brain activity is measured during programming, to determine which brain regions are engaged while writing code.
#
# Research makes increasingly clear that a natural aptitude for learning languages is a much better predictor of being able to learn to program than basic knowledge of mathematics or arithmetic. This is because writing code also means learning a second language: the ability to learn the vocabulary and grammar of that language, and how this skill plays a role in conveying ideas and intentions {cite}`eckart_2020,prat_relating_2020`.
# + [markdown] slideshow={"slide_type": "slide"} tags=[]
# ## Python
#
# 
# + [markdown] slideshow={"slide_type": "notes"}
# The Python language (named after [Monty Python](https://en.wikipedia.org/wiki/Monty_Python)) was created by [Guido van Rossum](https://en.wikipedia.org/wiki/Guido_van_Rossum) while he was working at CWI in Amsterdam. The first version appeared in 1991, and it has since grown into one of the most popular programming languages!
# + [markdown] slideshow={"slide_type": "subslide"}
# ## The foreign language Python
#
# - syntax
# - semantics
# - intention
# + [markdown] slideshow={"slide_type": "notes"}
# Or, in other words:
#
# - what it looks like (spelling)
# - what it does (meaning)
# - what it should do (intent)
#
# These characteristics match those of natural languages, for example Dutch!
# + [markdown] slideshow={"slide_type": "subslide"} tags=[]
# ### Intention
#
# What it should do: **desired output**
#
# ```python
# import random
#
# name = input("Hi, who are you? ")
#
# if name == "Bas" or name == "Marian": # is it Bas or Marian?
#     print("I am offline, try again later.")
#
# elif name == "Alex": # or is it Alex?
#     print("Willem Alex...? No? Oh.")
#
# else: # otherwise...
#     print("Welcome", name, "!")
#     my_choice = random.choice(["rock", "paper", "scissors"])
#     print("My favorite is", my_choice, "!")
# ```
# + [markdown] slideshow={"slide_type": "notes"} tags=[]
# The *intent* is that this program greets a user in the right way, and this is the *desired output* of the program as far as we (humans) are concerned.
# + [markdown] slideshow={"slide_type": "subslide"} tags=[]
# ### Semantics
#
# What it does: **produced output**
#
# ```python
# import random
#
# name = input("Hi, who are you? ")
#
# if name == "Bas" or name == "Marian": # is it Bas or Marian?
#     print("I am offline, try again later.")
#
# elif name == "Alex": # or is it Alex?
#     print("Willem Alex...? No? Oh.")
#
# else: # otherwise...
#     print("Welcome", name, "!")
#     my_choice = random.choice(["rock", "paper", "scissors"])
#     print("My favorite is", my_choice, "!")
# ```
# + [markdown] slideshow={"slide_type": "notes"} tags=[]
# The instructions have *meaning* to the computer as long as it can execute them, and this ultimately yields output *produced* by the computer.
# + [markdown] slideshow={"slide_type": "subslide"} tags=[]
# 
# + [markdown] slideshow={"slide_type": "slide"} tags=[]
# ### Syntax
#
# What it looks like: **spelling rules**
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "notes"}
# Compare syntax with what you know from the Dutch language. Het Groene Boekje ("The Green Booklet") is the common name of the Woordenlijst Nederlandse Taal, an overview of the official spelling of Dutch words, covering among other things the use of double and single vowels, capital letters, hyphens, and verb conjugations. In other words, everything that has to do with the rules of the language!
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Python syntax
#
# - use of punctuation (space, period, ...)
# - special words ("keywords")
# - formatting
# - how it affects behavior
# + [markdown] slideshow={"slide_type": "notes"}
# And by "how it affects behavior" we mean that the use of syntax can determine how Python behaves, or *what it does*. Fortunately, Python syntax is a lot less complex than languages like Dutch (Het Groene Boekje has more than 1200 pages!).
# + slideshow={"slide_type": "subslide"}
"42" * 2
# + [markdown] slideshow={"slide_type": "notes"}
# What you see here is correct Python syntax, and the intent was to multiply the number 42 by 2. But the result is probably very different from what you expected!
# + slideshow={"slide_type": "subslide"}
42 * 2
# + [markdown] slideshow={"slide_type": "notes"}
# This looks more like the expected result. Apparently the use of quotation marks matters: for Python it is a way to give meaning to the input, reading 42 as "text" in the first case and as a number in the second (our intent!).
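The distinction can be made explicit in code; a minimal sketch showing that quotes turn 42 into text, which `*` repeats, while `int()` converts the text back to a number, which `*` multiplies:

```python
# a quoted 42 is text: * repeats the string
text = "42"
print(text * 2)      # 4242
print(type(text))    # <class 'str'>

# int() converts the text to a number: * now does arithmetic
number = int(text)
print(number * 2)    # 84
print(type(number))  # <class 'int'>
```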
# + [markdown] slideshow={"slide_type": "subslide"} tags=[]
# ### `SyntaxError`
#
# Or, in other words, Python's way of telling you that the syntax is not correct.
# + slideshow={"slide_type": "subslide"}
print("Rock, paper or scissors?')
# + [markdown] slideshow={"slide_type": "notes"}
# Python tells us here that we made a syntax error and tries to indicate with the `^` roughly where it found the error. Can you spot the mistake? And what can you deduce from this? In the assignments we will look at this and other errors in detail, and you will even be asked to make as many mistakes as possible!
# + [markdown] slideshow={"slide_type": "slide"}
# ## The challenge of programming
#
# - syntax
# - semantics
# - intention
# + [markdown] slideshow={"slide_type": "notes"} tags=[]
# When it comes to programming, where exactly is the challenge? For convenience, let us translate the points above into the following:
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "notes"}
# It is this cycle that illustrates the challenge of programming, and you will often have to go through it several times to arrive at the desired outcome...
# + [markdown] slideshow={"slide_type": "skip"}
# ## Quiz
# + [markdown] slideshow={"slide_type": "subslide"} tags=[]
# ```python
# import random
#
# user = input( "Kies <NAME>! " )
#
# comp = random.choice( ['steen','papier','schaar")
#
# print('user (jij) koos:', 'user')
# print('comp (ik!) koos:' comp)
#
# if user == rock and comp = 'paper'
# print('Het resultaat is dat JIJ VERLIEST.'
# print('tenzij je een 'docent' bent, dan WIN JE!')
# ```
# + [markdown] slideshow={"slide_type": "subslide"} tags=[]
# ### Question 1
#
# 1. Try to find (and fix!) as many errors as possible in the code above
#
# 2. The line
#
# ```python
# user = input( "<NAME>! " )
# ```
#
# does **3** things, can you name them?
#
# 3. Which 7 [punctuation marks](https://nl.wikipedia.org/wiki/Leesteken) do you see being used?
#
| topics/2a_programmeren.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.7 64-bit (''base'': conda)'
# language: python
# name: python_defaultSpec_1601148142940
# ---
# # Initialization
# + tags=[]
# %load_ext autoreload
# %autoreload 2
# +
# Noelle
from noelle import Motor, Fluid, FluidMixture
# Numpy
import numpy as np
# Matplotlib
from matplotlib import pyplot as plt
import matplotlib
import matplotlib as mpl
from labellines import labelLine, labelLines
# +
# Configure plot styles
# Sizes
mpl.rcParams['figure.figsize'] = [12.0, 6.0]
mpl.rcParams['figure.dpi'] = 120
mpl.rcParams['savefig.dpi'] = 120
# Font
font = {'family' : 'sans-serif',
        'weight' : 'bold',
        'size'   : 22}
matplotlib.rc('font', **font)
# Style
plt.style.use(['science'])
# -
# # Fuel and Oxidizer Definitions
# + tags=[]
# Oxidizer
NOX = Fluid(name='N2O', coolprop_name='NitrousOxide', formula=None, fluid_type='oxidizer', storage_temperature=298.15)
# Fuels
H2O = Fluid(name='H2O(L)', coolprop_name='water', formula='H 2 O 1', fluid_type='fuel', storage_pressure=60e5, storage_temperature=298.15)
LC2H5OH = Fluid(name='C2H5OH(L)', coolprop_name='ethanol', formula='C 2 H 6 O 1', fluid_type='fuel', storage_pressure=60e5, storage_temperature=298.15)
H2O_30_C2H50H_70 = FluidMixture(fluid1=LC2H5OH, x1=70, fluid2=H2O, x2=30)
# -
# # Motor Main Parameters Design
# ## Anhydrous Ethanol
# + tags=[]
NOELLE = Motor(
NOX,
LC2H5OH,
thrust = 1000,
burn_time = 10,
p_chamber = 35,
n_cstar = 1,
n_cf = 1,
cd_ox = 0.6,
cd_fuel = 0.182,
phi=1.0
)
NOELLE.report()
NOELLE.report_ptable()
# -
# ## Hydrous Ethanol
# + tags=[]
NOELLE = Motor(
NOX,
H2O_30_C2H50H_70,
thrust = 1000,
burn_time = 10,
p_chamber = 35,
n_cstar = 1,
n_cf = 1,
cd_ox = 0.6,
cd_fuel = 0.182,
phi=1.0
)
NOELLE.report()
NOELLE.report_ptable()
# + tags=[]
NOELLE.print_cea_output()
# + tags=[]
values = NOELLE.value_cea_output(frozen=False)
print(values)
# + tags=[]
# values[1] alternates mantissa and exponent entries; recombine consecutive pairs
new_density = []
for i in range(0, len(values[1]) - 1, 2):
    new_density.append(values[1][i] * 10**(-values[1][i + 1]))
    print(values[1][i])
    print(values[1][i + 1])
    print('')
print(new_density)
# -
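The recombination loop above assumes the list alternates a mantissa with a positive exponent that is to be negated, e.g. `1.1053` followed by `3` standing for `1.1053e-3`. A minimal sketch of that convention (the sample numbers are illustrative, not actual CEA output):

```python
# alternating mantissa/exponent pairs, as assumed by the loop above
raw = [1.1053, 3, 9.8765, 4]

# step through the list two entries at a time and rebuild each float
densities = [raw[i] * 10 ** (-raw[i + 1]) for i in range(0, len(raw) - 1, 2)]
print(densities)
```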
| LiquidMotor/Motor Design.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Programming_Assignment-21
# Q1. Write a function that takes a list and a number as arguments. Add the number to the end of
#
# the list, then remove the first element of the list. The function should then return the updated
#
# list.
#
# Examples
#
# next_in_line([5, 6, 7, 8, 9], 1) ➞ [6, 7, 8, 9, 1]
#
# next_in_line([7, 6, 3, 23, 17], 10) ➞ [6, 3, 23, 17, 10]
#
# next_in_line([1, 10, 20, 42 ], 6) ➞ [10, 20, 42, 6]
#
# next_in_line([], 6) ➞ 'No list has been selected'
#
# +
def next_in_line(lst, num):
    if len(lst) > 0:
        lst.append(num)
        return lst[1:]
    else:
        return 'No list has been selected'
# -
next_in_line([5, 6, 7, 8, 9], 1)
next_in_line([7, 6, 3, 23, 17], 10)
next_in_line([1, 10, 20, 42 ], 6)
next_in_line([], 6)
# Q2. Create the function that takes a list of dictionaries and returns the sum of people's budgets.
#
# Examples
#
# get_budgets([
#
# { 'name': 'John', 'age': 21, 'budget': 23000 },
#
# { 'name': 'Steve', 'age': 32, 'budget': 40000 },
#
# { 'name': 'Martin', 'age': 16, 'budget': 2700 }
#
# ]) ➞ 65700
#
# get_budgets([
#
# { 'name': 'John', 'age': 21, 'budget': 29000 },
#
# { 'name': 'Steve', 'age': 32, 'budget': 32000 },
#
# { 'name': 'Martin', 'age': 16, 'budget': 1600 }
#
# ]) ➞ 62600
def get_budgets(listDict):
sum = 0
for dc in listDict:
for k,v in dc.items():
if k == 'budget':
sum = sum + v
return sum
get_budgets([
{ 'name': 'John', 'age': 21, 'budget': 23000 },
{ 'name': 'Steve', 'age': 32, 'budget': 40000 },
{ 'name': 'Martin', 'age': 16, 'budget': 2700 }
])
get_budgets([
{ 'name': 'John', 'age': 21, 'budget': 29000 },
{ 'name': 'Steve', 'age': 32, 'budget': 32000 },
{ 'name': 'Martin', 'age': 16, 'budget': 1600 }
])
# Q3. Create a function that takes a string and returns a string with its letters in alphabetical order.
#
# Examples
#
# alphabet_soup('hello') ➞ 'ehllo'
#
# alphabet_soup('edabit') ➞ 'abdeit'
#
# alphabet_soup('hacker') ➞ 'acehkr'
#
# alphabet_soup('geek') ➞ 'eegk'
#
# alphabet_soup('javascript') ➞ 'aacijprstv'
def alphabet_soup(str):
return ''.join(sorted(str))
alphabet_soup('hello')
alphabet_soup('edabit')
alphabet_soup('hacker')
alphabet_soup('geek')
alphabet_soup('javascript')
# Q4. Suppose that you invest $10,000 for 10 years at an interest rate of 6% compounded monthly.
#
# What will be the value of your investment at the end of the 10 year period?
#
# Create a function that accepts the principal p, the term in years t, the interest rate r, and the
#
# number of compounding periods per year n. The function returns the value at the end of term
#
# rounded to the nearest cent.
#
# For the example above:
#
# compound_interest(10000, 10, 0.06, 12) ➞ 18193.97
#
# Note that the interest rate is given as a decimal and n=12 because with monthly compounding
#
# there are 12 periods per year. Compounding can also be done annually, quarterly, weekly, or
#
# daily.
#
# Examples
#
# compound_interest(100, 1, 0.05, 1) ➞ 105.0
#
# compound_interest(3500, 15, 0.1, 4) ➞ 15399.26
#
# compound_interest(100000, 20, 0.15, 365) ➞ 2007316.26
#
# FV = PV(1 + r/n)^(nt)
def compound_interest(amt, years, intrest, compPeriod):
future_value = amt *(1 + (intrest/compPeriod)) ** (years * compPeriod)
return round(future_value,2)
compound_interest(100, 1, 0.05, 1)
compound_interest(3500, 15, 0.1, 4)
compound_interest(100000, 20, 0.15, 365)
# Q5. Write a function that takes a list of elements and returns only the integers.
#
# Examples
#
# return_only_integer([9, 2, 'space', 'car', 'lion', 16]) ➞ [9, 2, 16]
#
# return_only_integer(['hello', 81, 'basketball', 123, 'fox']) ➞ [81, 123]
#
# return_only_integer([10, '121', 56, 20, 'car', 3, 'lion']) ➞ [10, 56, 20, 3]
#
# return_only_integer(['String', True, 3.3, 1]) ➞ [1]
def return_only_integer(lst):
new_Lst = []
for i in lst:
if type(i) == int:
new_Lst.append(i)
return new_Lst
return_only_integer([9, 2, 'space', 'car', 'lion', 16])
return_only_integer(['hello', 81, 'basketball', 123, 'fox'])
return_only_integer([10, '121', 56, 20, 'car', 3, 'lion'])
return_only_integer(['String', True, 3.3, 1])
| Programming_Assignment-21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Choosing runner
#
# In this tutorial, we show how to choose different "runners" to run your simulations. This is helpful if you want to change the OOMMF installation you use. It is particularly helpful if you want to run OOMMF inside Docker, which lets us run simulations in a "small Linux machine": an image is automatically pulled from the cloud, the simulation is run inside a container, and in the end the container is destroyed automatically. This all happens in the background and requires no special assistance from the user. In order to use Docker, we need to have it installed on our machine - you can download it here: https://www.docker.com/products/docker-desktop.
#
# For that example, we simulate a skyrmion in a sample with periodic boundary conditions.
import oommfc as mc
import discretisedfield as df
import micromagneticmodel as mm
# We define mesh in cuboid through corner points `p1` and `p2`, and discretisation cell size `cell`. To define periodic boundary conditions, we pass an additional argument `bc`. Let us assume we want the periodic boundary conditions in $x$ and $y$ directions.
region = df.Region(p1=(-50e-9, -50e-9, 0), p2=(50e-9, 50e-9, 10e-9))
mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9), bc='xy')
# Now, we can define the system object:
# +
system = mm.System(name='skyrmion')
system.energy = (mm.Exchange(A=1.6e-11)
+ mm.DMI(D=4e-3, crystalclass='Cnv')
+ mm.UniaxialAnisotropy(K=0.51e6, u=(0, 0, 1))
+ mm.Zeeman(H=(0, 0, 0.2e5)))
Ms = 1.1e6
def m_init(pos):
x, y, z = pos
if (x**2 + y**2)**0.5 < 10e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
# create system with above geometry and initial magnetisation
system.m = df.Field(mesh, dim=3, value=m_init, norm=Ms)
# -
# Now, we can define the runner object. There are three main runners you can use:
#
# - Tcl runner: if we want to point ubermag to the particular `oommf.tcl` file
# - Exe runner: if we have OOMMF executable
# - Docker runner: if we want to run simulations inside Docker container
tcl_runner = mc.oommf.TclOOMMFRunner(oommf_tcl='path/to/my/oommf.tcl')
exe_runner = mc.oommf.ExeOOMMFRunner(oommf_exe='oommf')
docker_runner = mc.oommf.DockerOOMMFRunner(image='ubermag/oommf')
# **IMPORTANT:** On Windows, where OOMMF may not support some energy terms, a suitable runner is chosen automatically in the background and requires no assistance from the user. However, you can still be explicit and tell ubermag how you want to run the simulation.
#
# Now, when we drive the system, we can pass the runner to the `drive` method:
# +
# NBVAL_SKIP
md = mc.MinDriver()
md.drive(system, runner=docker_runner)
# Plot relaxed configuration: vectors in z-plane
system.m.plane('z').z.mpl()
# -
# The first time we run the simulation, it is going to take some time for Docker to pull the image from the cloud, but after that the image is cached locally, so there will be no delays for further runs.
| tutorials/choosing-runner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df=pd.read_csv("../developer_survey_2019/survey_results_public.csv")
#df.tail(55)
df=pd.DataFrame(df)
df
# +
cnt=0
# note: iterating over a DataFrame yields column names, not rows,
# so the commented-out check below would never run as intended
for row in df:
    #if row["Hobbyist"][i] == "yes":
    #    cnt +=1
    print(row)
    break
print(f"{cnt} people code as a hobby")
# -
HobbyistCnt= 0
StudentCnt = 0
for ind in df.index:
if df['Hobbyist'][ind] == "Yes":
HobbyistCnt += 1
if df['Student'][ind] != "No":
StudentCnt += 1
HobbyistCnt
StudentCnt
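# The counting loop above works, but pandas can count matching rows without an explicit loop: a boolean mask sums to the number of `True` entries. A minimal sketch on a toy frame (the data here is made up; only the column names mirror the survey's `Hobbyist`/`Student`):

```python
import pandas as pd

toy = pd.DataFrame({'Hobbyist': ['Yes', 'No', 'Yes'],
                    'Student': ['No', 'Yes, full-time', 'No']})

# A boolean mask sums to the number of True entries
hobbyist_cnt = (toy['Hobbyist'] == 'Yes').sum()
student_cnt = (toy['Student'] != 'No').sum()
```

# This is both shorter and much faster than iterating over `df.index` on the full survey.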
df.describe()
pd.value_counts(df['Hobbyist']).plot.pie()
pd.value_counts(df["Gender"]).plot.bar()
pd.value_counts(df["OpenSourcer"]).plot.bar()
pd.value_counts(df["Student"]).plot.bar()
pd.value_counts(df["Employment"]).plot.bar()
pd.value_counts(df["UndergradMajor"]).plot.bar()
pd.value_counts(df['CareerSat']).plot.pie()
pd.value_counts(df['JobSat']).plot.bar()
pd.value_counts(df['JobSeek']).plot.pie()
pd.value_counts(df['']).plot.pie()
df.groupby(["Hobbyist"]).mean()
df.CompTotal.plot()
pd.value_counts(df["Country"]).plot.bar()
| notebooks/language analytics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# ## 1. Two data types
#
# - DataFrame
#
#   - Row index: labels the rows (the horizontal axis), called `index`
#   - Column index: labels the columns (the vertical axis), called `columns`
#   - Can be thought of as a collection of multiple Series
#
# - Series
#
#   - Only has a row index
# #### 1.1 Basic DataFrame operations
#
# **Creating data:**
#
# - A DataFrame can be created directly, or by converting data of another format
# - DataFrame(data=[], index=[], columns=[])
#
# Create stock data: the daily changes of 3 stocks over 5 days
stock_change = np.random.normal(0, 1, (3, 5))
stock_change
# **Creation method 1: convert a numpy array to a DataFrame**
stock_change = pd.DataFrame(stock_change)
stock_change
# **Creation method 2: build a DataFrame by hand**
# +
df = pd.DataFrame({
'names': ['tony', 'jim', 'tom'],
'ages': [12, 13, 11],
'scores': [90, 85, 95]
})
df
# -
## set a single index or multiple indexes
df.set_index(['names', 'ages'])
# This turns the DataFrame into one with a MultiIndex.
# **Inspect some DataFrame attributes**
stock_change.shape # shape of the data
stock_change.index # row index
stock_change.columns # column index
stock_change.values # the underlying values, i.e. the numpy array
stock_change.T # transpose
# **Some commonly used DataFrame methods**
stock_change.head(2) # show the first 2 rows
stock_change.tail(2) # show the last 2 rows
# #### 1.2 Basic Series operations
#
# - only a row index, no column index
# **Creation method 1: specify the values, use the default index**
pd.Series(np.arange(6))
# **Creation method 2: specify the values and the index**
pd.Series([80, 90, 85], index=['tony', 'jim', 'tom'])
# **Creation method 3: create from a dict**
# +
pds1 = pd.Series({
'red': 100,
'green': 200,
'yellow': 300
})
pds1
# -
# **Series attributes**
pds1.index # the index of the data
pds1.values # the values of the data
# ## 2. Basic data operations
# #### 2.1 Reading a csv file
sp_file = pd.read_csv('./data/StudentsPerformance.csv')
sp_file
# **Drop some unneeded columns**
sp_file.drop(['parental level of education', 'lunch'], axis=1)
# #### 2.2 Indexing
#
# - direct row/column indexing (column first, then row)
# +
print('sp_file type: ', type(sp_file)) # note: data read from a csv file is already a DataFrame
sp = pd.DataFrame(sp_file)
sp.set_index('gender') # returns a new frame; sp itself is unchanged
print('sp type: ', type(sp))
sp_file['math score'][0] # get the math score at position 0
# -
# **Indexing with loc and iloc**
# +
# get all scores for rows [0, 10]
sp10 = sp.loc[0:10, ['math score', 'reading score', 'writing score']]
sp10
# -
# #### 2.3 Assignment
sp10['writing score'] = 0 # set the whole column to 0; alternatively sp10.writing_score = 0 (only if the column were named writing_score)
sp10
# #### 2.4 Sorting
#
# - Sorting by values
#
#   For a DataFrame:
#
#   df.sort_values(by=, ascending=)
#
#   - sort by a single key or multiple keys, ascending by default
#   - ascending=False: descending
#   - ascending=True: ascending
#
#   For a Series:
#
#   series.sort_values(ascending=True)
#
#   - a Series has only one column, so no key argument is needed
#
# - Sorting by index
#
#   For both DataFrame and Series:
#
#   df.sort_index()
#
#   series.sort_index()
sp10.sort_values('math score', ascending=False) # sort by math score, descending
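# Multi-key sorting from the list above can be sketched on a small made-up frame (the column names follow the student-performance data, but the values here are invented):

```python
import pandas as pd

scores = pd.DataFrame({'math score': [70, 88, 88],
                       'reading score': [90, 60, 75]})

# Sort by math score descending, breaking ties with reading score ascending
ranked = scores.sort_values(['math score', 'reading score'],
                            ascending=[False, True])
```

# Passing a list to `ascending` lets each key get its own sort direction.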
# ## 3. DataFrame operations
# #### 3.1 Arithmetic
#
# - add(), addition
# - sub(), subtraction
# +
df = pd.DataFrame({
'names': ['tony', 'tom', 'jim', 'anna', 'jenny'],
'ages': [12, 13, 14, 13, 15],
'scores': [90, 95, 85, 75, 90]
})
print('df: \n', df)
print('add(): \n', df['scores'].add(5)) # add 5 to every value in the column; returns the modified column
print('sub(): \n', df['scores'].sub(5)) # subtract 5 from every value; note this is not affected by the add(5) above
# -
# #### 3.2 Logical operations
#
# - '>', greater than
# - '<', less than
# - '|', or
# - '&', and
df[(df['ages'] > 12) & (df['scores'] < 90)] # find people older than 12 with scores below 90
# #### 3.3 Statistics
#
# Statistical functions:
#
# - sum Sum of values
# - mean Mean of values
# - median Arithmetic median of values
# - min Minimum
# - max Maximum
# - mode Mode
# - abs Absolute Value
# - prod Product of values
# - std Bessel-corrected sample standard deviation
# - var Unbiased variance
# - idxmax compute the index labels with the maximum
# - idxmin compute the index labels with the minimum
#
#
# Cumulative statistics:
#
# - cumsum sum of the first 1/2/3/…/n values
# - cummax maximum of the first 1/2/3/…/n values
# - cummin minimum of the first 1/2/3/…/n values
# - cumprod product of the first 1/2/3/…/n values
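# The cumulative functions above each return a Series of running results; a minimal sketch on invented numbers:

```python
import pandas as pd

s = pd.Series([3, 1, 4, 1, 5])
running_total = s.cumsum()  # running sum of the first n values
running_max = s.cummax()    # running maximum of the first n values
```
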
df.describe() # summary statistics, several measures at once
df.max(0) # axis 0 gives per-column results, axis 1 per-row results
df.std(0) # standard deviation
df.var(0) # variance
# ## 4. Plotting with pandas
#
# **pandas.DataFrame.plot(x=None, y=None, kind='line')**
#
# - x
# - y
# - kind
# - ‘line’ : line plot (default)
# - ‘bar’ : vertical bar plot
# - ‘barh’ : horizontal bar plot
# - ‘hist’ : histogram
# - ‘pie’ : pie plot
# - ‘scatter’ : scatter plot
# ## 5. File reading and writing
#
# ```text
# format reader writer
# json read_json to_json
# csv read_csv to_csv
# hdf5 read_hdf to_hdf
# excel read_excel to_excel (binary)
# ```
#
# - pandas.read_csv(filepath_or_buffer, sep=',', delimiter=None)
#     - filepath_or_buffer: the file path
#     - usecols: list of column names to read
#
# - DataFrame.to_csv(path_or_buf=None, sep=',', columns=None, header=True, index=True, index_label=None, mode='w', encoding=None)
#     - path_or_buf: string or file handle, default None
#     - sep: character, default ','
#     - columns: sequence, optional
#     - mode: 'w' overwrite, 'a' append
#     - index: whether to write the row index
#     - header: boolean or list of string, default True; whether to write the column names
#
# - Series.to_csv(path=None, index=True, sep=',', na_rep='', float_format=None, header=False, index_label=None, mode='w', encoding=None, compression=None, date_format=None, decimal='.')
# +
import pandas as pd
# read the csv file and keep only the specified columns
sp = pd.read_csv('./data/StudentsPerformance.csv', usecols=['gender', 'math score'])
sp
# -
# write to a csv file
df = pd.DataFrame({
'names': ['tony', 'tom', 'jim', 'anna', 'jenny'],
'ages': [12, 13, 14, 13, 15],
'scores': [90, 95, 85, 75, 90]
})
df.to_csv('./data/pd_scores.csv')
# ## 6. Handling missing values
#
# - pd.isnull(df): check whether values are NaN
# - pd.notnull(df): check whether values are not NaN
# - pd.fillna(value, inplace=True): replace missing values with value; inplace=True modifies the original data, False does not
#
# **Strategies:**
# 1. df.dropna(): drop rows with missing values
# 2. df['score'].fillna(value, inplace=True): replace missing values
#
# **Values that are not pd.NaN but are still invalid:**
# 1. replace them with pd.NaN
# 2. then handle the pd.NaN values as above
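# The two strategies above can be sketched on a small made-up frame (the column name follows the student-performance data):

```python
import numpy as np
import pandas as pd

scores = pd.DataFrame({'score': [90.0, np.nan, 85.0]})

dropped = scores.dropna()  # strategy 1: drop the rows containing NaN
# strategy 2: replace NaN with the column mean
filled = scores['score'].fillna(scores['score'].mean())
```
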
pd.isnull(df)
# ## 7. Discretising data
#
# Discretisation splits the value range of a continuous attribute into intervals and represents the values falling in each interval with a symbol or integer.
#
# **It simplifies the data structure and reduces the number of distinct values of a continuous attribute.**
#
# - pd.qcut(data, bins): choose intervals so each bin has roughly equal frequency
#     - data: the data to bin; must be one-dimensional
#     - bins: the number of bins
#
# - pd.cut(data, bins): choose equal-width intervals based on the values themselves
import pandas as pd
sp = pd.read_csv('./data/StudentsPerformance.csv') # read the table
math_score = sp['math score'] # take the math score column
math_score_qcut = pd.qcut(math_score, 10) # discretise the data
math_score_qcut.value_counts() # count the members of each bin
math_score_cut = pd.cut(math_score, 10)
math_score_cut.value_counts()
# **One-hot encode the binned data**
dummies = pd.get_dummies(math_score_cut, prefix='rise')
dummies
# ## 8. Merging
#
# **Merging combines the contents of several tables, similar or not, for joint analysis**
#
# - pd.concat([data1, data2], axis=1)
#     - axis=0 stacks vertically (along the row index), axis=1 side by side (along the column index)
# - pd.merge()
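# The notebook demonstrates `pd.concat` below; `pd.merge` instead joins two tables on shared key columns. A minimal sketch with invented data:

```python
import pandas as pd

left = pd.DataFrame({'id': [1, 2, 3], 'name': ['a', 'b', 'c']})
right = pd.DataFrame({'id': [2, 3, 4], 'score': [85, 90, 95]})

# Inner join on the shared 'id' column keeps only matching keys
merged = pd.merge(left, right, on='id', how='inner')
```

# Other `how` values ('left', 'right', 'outer') keep non-matching keys from one or both sides.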
math_score = sp['math score']
pd.concat([math_score, dummies], axis=1)
# ## 9. Crosstabs and pivot tables
#
# - A crosstab counts the grouped occurrences of one column against another (to explore the relationship between two columns)
#
#     - pd.crosstab(value1, value2)
#
# - A pivot table is essentially equivalent to a crosstab
#
#     - DataFrame.pivot_table([], index=[])
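# Only the crosstab is demonstrated below, so here is a minimal pivot-table sketch on invented data (the column names follow the student-performance table):

```python
import pandas as pd

grades = pd.DataFrame({'gender': ['female', 'female', 'male', 'male'],
                       'math score': [80, 90, 70, 60]})

# Mean math score per gender: one row per group, aggregated with 'mean'
table = grades.pivot_table(values='math score', index='gender', aggfunc='mean')
```
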
# +
## use a crosstab to check whether math scores are related to gender
import numpy as np
import pandas as pd
sp = pd.read_csv('./data/StudentsPerformance.csv', usecols = ['math score', 'gender'])
math_score= sp['math score']
gender = sp['gender']
# -
# build the crosstab
res = pd.crosstab(gender, math_score)
res.T
# +
# plot the scores to judge whether they are related to gender
female_math_score = sp[(sp['gender'] == 'female')]['math score']
male_math_score = sp[(sp['gender'] == 'male')]['math score']
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 5), dpi=100)
plt.plot(male_math_score)
plt.plot(female_math_score)
plt.show()
# -
# ## 10. Grouping and aggregation
#
# - usually combined with statistical functions to inspect grouped data
# - DataFrame.groupby(key, as_index=False), grouping function
#     - key: the column(s) to group by; can be multiple
#
# +
# count Starbucks locations per country
star_loc = pd.read_csv('./data/starbucks_location.csv')
count = star_loc.groupby(['Country']).count()
count['Brand'].plot(kind='bar', figsize=(20, 8))
plt.show()
# -
# group by multiple columns
star_loc.groupby(['Country', 'State/Province']).count()
# ## 11. Case study
#
# Analyse 1000 IMDB movies from 2006-2016.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
movie = pd.read_csv('./data/imdb_movie.csv')
# -
# mean rating
movie['Rating'].mean()
# number of distinct directors
np.unique(movie['Director']).shape[0]
# +
# look at the distribution of ratings
plt.figure(figsize=(12, 4), dpi=100)
plt.hist(movie['Rating'].values, bins=20, color='g') # show 20 equal-width bins
# adjust the tick spacing
max_ = movie['Rating'].max()
min_ = movie['Rating'].min()
# generate the list of ticks
t = np.linspace(min_, max_, num=21)
plt.xticks(t)
# add a grid
plt.grid()
# +
# look at the genre breakdown
# we will build an all-zero DataFrame whose columns are the genres
temp_list = [i.split(',') for i in movie['Genre']] # split the genre strings
temp_list
# -
# get the distinct genres, splitting multi-genre entries into individual ones
genre_list = np.unique([i for j in temp_list for i in j])
genre_list
# new all-zero frame: one row per movie, one column per genre
temp_df = pd.DataFrame(np.zeros((movie.shape[0], genre_list.shape[0])), columns=genre_list)
temp_df
# +
# loop over each movie and set its genres to 1 in temp_df
for i in range(1000):
temp_df.loc[i, temp_list[i]] = 1
temp_df
# -
temp_df.sum().sort_values() # count each genre and output in sorted order
# plot the genre counts in descending order
temp_df.sum().sort_values(ascending=False).plot(kind='bar', figsize=(12, 6))
| language/python/modules/pandas/pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import tflearn
from tflearn.layers.conv import conv_2d,max_pool_2d
from tflearn.layers.core import input_data,dropout,fully_connected
from tflearn.layers.estimator import regression
import numpy as np
import cv2
from sklearn.utils import shuffle
# +
#Load Images from Index
loadedImages = []
for i in range(0, 700):
image = cv2.imread('Dataset/Index/index_' + str(i) + '.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
loadedImages.append(gray_image.reshape(89, 100, 1))
#Load Images From Palm
for i in range(0, 1000):
image = cv2.imread('Dataset/PalmImages/palm_' + str(i) + '.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
loadedImages.append(gray_image.reshape(89, 100, 1))
#Load Images From Fist
for i in range(0, 1000):
image = cv2.imread('Dataset/FistImages/fist_' + str(i) + '.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
loadedImages.append(gray_image.reshape(89, 100, 1))
# +
# Create OutputVector
outputVectors = []
for i in range(0, 700):
outputVectors.append([1, 0, 0])
for i in range(0, 1000):
outputVectors.append([0, 1, 0])
for i in range(0, 1000):
outputVectors.append([0, 0, 1])
# +
testImages = []
#Load Images for Index
for i in range(700, 822):
image = cv2.imread('Dataset/Index/index_' + str(i) + '.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
testImages.append(gray_image.reshape(89, 100, 1))
#Load Images for Palm
for i in range(0, 100):
image = cv2.imread('Dataset/PalmTest/palm_' + str(i) + '.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
testImages.append(gray_image.reshape(89, 100, 1))
#Load Images for Fist
for i in range(0, 100):
image = cv2.imread('Dataset/FistTest/fist_' + str(i) + '.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
testImages.append(gray_image.reshape(89, 100, 1))
testLabels = []
for i in range(0, 122): # 122 index test images (700-822), matching testImages
    testLabels.append([1, 0, 0])
for i in range(0, 100):
testLabels.append([0, 1, 0])
for i in range(0, 100):
testLabels.append([0, 0, 1])
# +
# Define the CNN Model
tf.reset_default_graph()
convnet=input_data(shape=[None,89,100,1],name='input')
convnet=conv_2d(convnet,32,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=conv_2d(convnet,64,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=conv_2d(convnet,128,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=conv_2d(convnet,256,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=conv_2d(convnet,256,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=conv_2d(convnet,128,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=conv_2d(convnet,64,2,activation='relu')
convnet=max_pool_2d(convnet,2)
convnet=fully_connected(convnet,1000,activation='relu')
convnet=dropout(convnet,0.75)
convnet=fully_connected(convnet,3,activation='softmax')
convnet=regression(convnet,optimizer='adam',learning_rate=0.001,loss='categorical_crossentropy',name='regression')
model=tflearn.DNN(convnet,tensorboard_verbose=0)
# +
# Shuffle Training Data
loadedImages, outputVectors = shuffle(loadedImages, outputVectors, random_state=0)
# Train model
model.fit(loadedImages, outputVectors, n_epoch=50,
validation_set = (testImages, testLabels),
snapshot_step=100, show_metric=True, run_id='convnet_coursera')
model.save("TrainedModel/GestureIndex.tfl")
# -
| ModelTrainer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from datetime import datetime
import pandas
from typing import List, Any
import pyspark.sql.functions as F
import query_lib
import indicator_lib
# -
BASE_DIR='./test_files/parquet_big_db'
#CODE_SYSTEM='http://snomed.info/sct'
CODE_SYSTEM='http://www.ampathkenya.org'
# Note since this issue is resolved we don't need BASE_URL:
# https://github.com/GoogleCloudPlatform/openmrs-fhir-analytics/issues/55
#BASE_URL='http://localhost:8099/openmrs/ws/fhir2/R4/'
BASE_URL=''
# # Encounter view
# +
patient_query = query_lib.patient_query_factory(
query_lib.Runner.SPARK, BASE_DIR, CODE_SYSTEM)
flat_enc_df = patient_query.get_patient_encounter_view(BASE_URL)
flat_enc_df.head()
# -
flat_enc_df[flat_enc_df['locationId'].notna()].head()
# ## Adding an encounter location constraint
# Add encounter location constraint
patient_query.encounter_constraints(locationId=['58c57d25-8d39-41ab-8422-108a0c277d98'])
flat_enc_df = patient_query.get_patient_encounter_view(BASE_URL)
flat_enc_df.head()
flat_enc_df[flat_enc_df['encPatientId'] == '8295eb5b-fba6-4e83-a5cb-2817b135cd27']
flat_enc = patient_query._flatten_encounter('')
flat_enc.head().asDict()
# # Observation view
# +
_VL_CODE = '856' # HIV VIRAL LOAD
_ARV_PLAN = '1255' # ANTIRETROVIRAL PLAN
end_date='2018-01-01'
start_date='1998-01-01'
old_start_date='1978-01-01'
# Creating a new `patient_query` to drop all previous constraints
# and recreate flat views.
patient_query = query_lib.patient_query_factory(
query_lib.Runner.SPARK, BASE_DIR, CODE_SYSTEM)
patient_query.include_obs_values_in_time_range(
_VL_CODE, min_time=start_date, max_time=end_date)
patient_query.include_obs_values_in_time_range(
_ARV_PLAN, min_time=start_date, max_time=end_date)
patient_query.include_all_other_codes(min_time=start_date, max_time=end_date)
# Note the first call to `find_patient_aggregates` starts a local Spark
# cluster, loads input files, and flattens observations. These steps are not
# repeated in subsequent calls of this function on the same instance.
# Also, the same cluster is reused for other instances of `PatientQuery`.
agg_df = patient_query.get_patient_obs_view(BASE_URL)
agg_df.head(10)
# -
# Inspecting one specific patient.
agg_df[agg_df['patientId'] == '009b3fce-f62e-4308-bace-594afa08aeee'].head()
agg_df[(agg_df['code'] == '856') & (agg_df['min_date'] != agg_df['max_date'])][
['patientId', 'code', 'min_date', 'max_date', 'first_value_code', 'last_value_code']].head()
# # Inspecting underlying Spark data-frames
# The _user_ of the library does not need to deal with the underlying distributed query processing system. However, the _developer_ of the library needs an easy way to inspect the internal data of these systems. Here is how:
_DRUG1 = '1256' # START DRUGS
_DRUG2 = '1260' # STOP ALL MEDICATIONS
patient_query._obs_df.head().asDict()
exp_obs = patient_query._obs_df.withColumn('coding', F.explode('code.coding'))
exp_obs.head().asDict()
exp_obs.where('coding.code = "159800AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"').head().asDict()
exp_obs.where('coding.code = "1268"').head().asDict()
exp_obs.where(
'coding.system IN ("http://snomed.info/sct", "http://loinc.org", "http://www.ampathkenya.org") \
AND coding.display LIKE "%viral%" '
).groupBy(['coding']).agg(F.count('*')).head(20)
agg_df[(agg_df['code'] == _ARV_PLAN) & agg_df['last_value_code'].isin([_DRUG1, _DRUG2])].head()
agg_df[(agg_df['code'] == _ARV_PLAN) & agg_df['last_value_code'].isin([_DRUG1, _DRUG2])].index.size
agg_df[(agg_df['code'] == _ARV_PLAN) & agg_df['last_value_code'].isin([_DRUG1, _DRUG2])].groupby(
'patientId').count().index.size
indicator_lib.calc_TX_NEW(agg_df, ARV_plan=_ARV_PLAN, start_drug=[_DRUG1], end_date_str=end_date)
indicator_lib.calc_TX_PVLS(
agg_df, VL_code=_VL_CODE, failure_threshold=10000,
end_date_str=end_date)
# # Indicator library development
# This is an example to show how the `indicator_lib.py` functions can be incrementally developed based on the query library DataFrames.
patient_query._flat_obs.head().asDict()
agg_df[(agg_df['code'] == _VL_CODE)].head()
# +
def _find_age_band(birth_date: str, end_date: datetime) -> str:
"""Given the birth date, finds the age_band for PEPFAR disaggregation."""
age = None
try:
# TODO handle all different formats (issues #174)
birth = datetime.strptime(birth_date, '%Y-%m-%d')
age = int((end_date - birth).days / 365.25)
except Exception as e:
common.custom_log('Invalid birth_date format: {}'.format(e))
age = 999999
if age == 999999:
return 'ERROR'
if age < 1:
return '0-1'
if age <= 4:
return '1-4'
if age <= 9:
return '5-9'
if age <= 14:
return '10-14'
if age <= 19:
return '15-19'
if age <= 24:
return '20-24'
if age <= 49:
return '25-49'
return '50+'
def _agg_buckets(birth_date: str, gender: str, end_date: datetime) -> List[str]:
"""Generates the list of all PEPFAR disaggregation buckets."""
age_band = _find_age_band(birth_date, end_date)
return [age_band + '_' + gender, 'ALL-AGES_' + gender,
age_band + '_ALL-GENDERS', 'ALL-AGES_ALL-GENDERS']
def calc_TX_PVLS(patient_agg_obs: pandas.DataFrame, VL_code: str,
failure_threshold: int, end_date_str: str = None) -> pandas.DataFrame:
"""Calculates TX_PVLS indicator with its corresponding disaggregations.
Args:
patient_agg_obs: An output from `patient_query.find_patient_aggregates()`.
VL_code: The code for viral load values.
failure_threshold: VL count threshold of failure.
end_date: The string representation of the last date as 'YYYY-MM-DD'.
Returns:
The aggregated DataFrame.
"""
end_date = datetime.today()
if end_date_str:
end_date = datetime.strptime(end_date_str, '%Y-%m-%d')
temp_df = patient_agg_obs[(patient_agg_obs['code'] == VL_code)].copy()
# Note the above copy is used to avoid setting a new column on a slice next:
temp_df['sup_VL'] = (temp_df['max_value'] < failure_threshold)
temp_df['buckets'] = temp_df.apply(
lambda x: _agg_buckets(x.birthDate, x.gender, end_date), axis=1)
temp_df_exp = temp_df.explode('buckets')
temp_df_exp = temp_df_exp.groupby(['sup_VL', 'buckets'], as_index=False)\
.count()[['sup_VL', 'buckets', 'patientId']]\
.rename(columns={'patientId': 'count'})
# calculate ratio
num_patients = len(temp_df.index)
temp_df_exp['ratio'] = temp_df_exp['count']/num_patients
return temp_df_exp
calc_TX_PVLS(agg_df, _VL_CODE, 10000, end_date_str='2020-12-30')
| dwh/test_query_lib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Render site pages
#
# [dpp](https://github.com/frictionlessdata/datapackage-pipelines) runs the knesset data pipelines periodically on our server.
#
# This notebook shows how to run pipelines that render pages for the static website at https://oknesset.org
# ## Load the source data
#
# Download the source data, can take a few minutes.
# !{'cd /pipelines; KNESSET_LOAD_FROM_URL=1 dpp run --concurrency 4 '\
# './committees/kns_committee,'\
# './people/committee-meeting-attendees,'\
# './members/mk_individual'}
# ## Run the build pipeline
#
# This pipeline aggregates the relevant data and allows filtering for quicker development cycles.
#
# You can uncomment and modify the filter step in committees/dist/knesset.source-spec.yaml under the `build` pipeline to change the filter.
#
# The build pipeline can take a few minutes to process for the first time.
# !{'cd /pipelines; dpp run --verbose ./committees/dist/build'}
# ## Download some protocol files for rendering
#
# upgrade to latest dataflows library
# !{'pip install --upgrade dataflows'}
# Restart the kernel if an upgrade was done
# Choose some session IDs to download protocol files for:
session_ids = [2063122, 2063126]
# +
from dataflows import Flow, load, printer, filter_rows
sessions_data = Flow(
load('/pipelines/data/committees/kns_committeesession/datapackage.json'),
filter_rows(lambda row: row['CommitteeSessionID'] in session_ids),
printer(tablefmt='html')
).results()
# +
import os
import subprocess
import sys
for session in sessions_data[0][0]:
for attr in ['text_parsed_filename', 'parts_parsed_filename']:
pathpart = 'meeting_protocols_text' if attr == 'text_parsed_filename' else 'meeting_protocols_parts'
url = 'https://production.oknesset.org/pipelines/data/committees/{}/{}'.format(pathpart, session[attr])
filename = '/pipelines/data/committees/{}/{}'.format(pathpart, session[attr])
os.makedirs(os.path.dirname(filename), exist_ok=True)
cmd = 'curl -s -o {} {}'.format(filename, url)
print(cmd, file=sys.stderr)
subprocess.check_call(cmd, shell=True)
# -
# ## Delete dist hash files
# + language="bash"
# find /pipelines/data/committees/dist -type f -name '*.hash' -delete
# -
# ## Render pages
#
# Should run the render pipelines in the following order:
#
# ## Meetings:
# !{'cd /pipelines; dpp run ./committees/dist/render_meetings'}
# #### Rendered meetings stats
# +
from dataflows import Flow, load, printer, filter_rows, add_field
def add_filenames():
def _add_filenames(row):
for ext in ['html', 'json']:
row['rendered_'+ext] = '/pipelines/data/committees/dist/dist/meetings/{}/{}/{}.{}'.format(
str(row['CommitteeSessionID'])[0], str(row['CommitteeSessionID'])[1], str(row['CommitteeSessionID']), ext)
return Flow(
add_field('rendered_html', 'string'),
add_field('rendered_json', 'string'),
_add_filenames
)
rendered_meetings = Flow(
load('/pipelines/data/committees/dist/rendered_meetings_stats/datapackage.json'),
add_filenames(),
filter_rows(lambda row: row['CommitteeSessionID'] in session_ids),
printer(tablefmt='html')
).results()[0][0]
# -
# ## Committees and homepage
# !{'cd /pipelines; dpp run ./committees/dist/render_committees'}
# ## Members / Factions
# !{'cd /pipelines; dpp run ./committees/dist/create_members,./committees/dist/build_positions,./committees/dist/create_factions'}
# ## Showing the rendered pages
#
# To serve the site, locate the corresponding local directory for /pipelines/data/committees/dist/dist and run:
#
# `python -m http.server 8000`
#
# Pages should be available at http://localhost:8000/
| jupyter-notebooks/Render site pages for development and debugging.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7
# language: python
# name: python3.7
# ---
# imports assumed for this notebook (pandas was never imported and the
# original plotter() helper is not shown in the source)
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
df = pd.read_csv('cumfreq.csv', index_col='rank')
fig, ax = plt.subplots(figsize=(7, 4.5))
df.iloc[:500].plot(legend=False, ax=ax)
ax.set_xlabel('Word Rank')
ax.set_ylabel('Cumulative Share')
ax.set_title('Word Frequency Distribution')
fig.savefig('../docs/images/freq_dist.svg', bbox_inches='tight')
| code/slides.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Ranniellsaur/Linear-algebra-58020/blob/main/Linear_Transformation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="x-tuGW9zgZ_X" outputId="a29afa18-8863-4d08-fd73-fa22fa4d9ad3"
import numpy as np
A = np.array([[4, 10, 8], [10, 26, 26], [8, 26, 61]])
print(A)
B = np.array([[44], [128], [214]])
print(B)
# A^-1 A X = A^-1 B, so X = A^-1 B
## Solving for the inverse of A
A_inv = np.linalg.inv(A)
print(A_inv)
## Solving for X = A^-1 B
X = np.dot(A_inv, B)
print(X)
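# As a sanity check on the solution above, `np.linalg.solve` solves AX = B directly (numerically preferable to forming the inverse explicitly), and the result can be substituted back into AX = B:

```python
import numpy as np

A = np.array([[4, 10, 8], [10, 26, 26], [8, 26, 61]])
B = np.array([[44], [128], [214]])

# Solve AX = B without computing A^-1 explicitly
X = np.linalg.solve(A, B)
```
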
| Linear_Transformation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # FAIR
#
# This notebook gives some simple examples of how to run and use the Finite Amplitude Impulse Response (FAIR) model.
#
# The Finite Amplitude Impulse Response (FAIR) model is a simple emissions-based climate model. It allows the user to input emissions of greenhouse gases and short lived climate forcers in order to estimate global mean atmospheric GHG concentrations, radiative forcing and temperature anomalies.
#
# The original FAIR model was developed to simulate the earth system response to CO$_2$ emissions, with all non-CO$_2$ forcing implemented as an "external" source. It was developed by <NAME>, <NAME>, <NAME> and <NAME>. The motivation for developing it and its formulation is documented in a paper published in Atmospheric Chemistry and Physics in 2017 (doi:10.5194/acp-2016-405).
#
# The emissions-based model extends FAIR by replacing all sources of non-CO$_2$ forcing with relationships based on the source emissions, with the exception of natural forcings (viz. variations in solar irradiance and volcanic eruptions). It is more useful for assessing future policy commitments in terms of anthropogenic emissions (something which we can control) than in terms of radiative forcing (something which is less certain and which we can only partially control).
#
# The emissions based model was developed by <NAME> with input from <NAME>, <NAME> and <NAME>, in parallel with <NAME>, <NAME> and <NAME>.
# + deletable=true editable=true
import fair
fair.__version__
# +
import numpy as np
# %matplotlib inline
from matplotlib import pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams['figure.figsize'] = 16, 9
print plt.style
# -
# ## Basic run
#
# Here we show how FAIR can be run with step change CO$_2$ emissions and sinusoidal non-CO$_2$ forcing timeseries.
# + deletable=true editable=true
emissions = np.zeros(250)
emissions[125:] = 10.0
other_rf = np.zeros(emissions.size)
for x in range(0,emissions.size):
other_rf[x] = 0.5*np.sin(2*np.pi*(x)/14.0)
C,F,T = fair.forward.fair_scm(emissions=emissions,
other_rf=other_rf)
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax1.plot(range(0,emissions.size),emissions,color='black')
ax1.set_ylabel('Emissions (GtC)')
ax2 = fig.add_subplot(222)
ax2.plot(range(0,emissions.size),C,color='blue')
ax2.set_ylabel('CO$_2$ concentrations (ppm)')
ax3 = fig.add_subplot(223)
ax3.plot(range(0,emissions.size),other_rf,color='orange')
ax3.set_ylabel('Other radiative forcing (W.m$^{-2}$)')
ax4 = fig.add_subplot(224)
ax4.plot(range(0,emissions.size),T,color='red')
ax4.set_ylabel('Temperature anomaly (K)')
# + [markdown] deletable=true editable=true
# ## RCPs
#
# We can run FAIR with the CO$_2$ emissions and non-CO$_2$ forcing from the four representative concentration pathway scenarios. To use the emissions-based version specify ```useMultigas=True``` in the call to ```fair_scm()```.
# + deletable=true editable=true
from fair.RCPs import rcp3pd, rcp45, rcp6, rcp85
from fair.ancil import natural
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
C,F,T = fair.forward.fair_scm(emissions = rcp3pd.Emissions.emissions,
F_solar = rcp3pd.Forcing.solar,
F_volcanic = rcp3pd.Forcing.volcanic,
natural = natural.Emissions.emissions,
useMultigas= True)
ax1.plot(rcp3pd.Emissions.year, rcp3pd.Emissions.co2_fossil, color='green', label='RCP3PD')
ax2.plot(rcp3pd.Emissions.year, C[:,0], color='green')
ax3.plot(rcp3pd.Emissions.year, np.sum(F, axis=1), color='green')
ax4.plot(rcp3pd.Emissions.year, T, color='green')
C,F,T = fair.forward.fair_scm(emissions = rcp45.Emissions.emissions,
F_solar = rcp45.Forcing.solar,
F_volcanic = rcp45.Forcing.volcanic,
natural = natural.Emissions.emissions,
useMultigas= True)
ax1.plot(rcp45.Emissions.year, rcp45.Emissions.co2_fossil, color='blue', label='RCP4.5')
ax2.plot(rcp45.Emissions.year, C[:,0], color='blue')
ax3.plot(rcp45.Emissions.year, np.sum(F, axis=1), color='blue')
ax4.plot(rcp45.Emissions.year, T, color='blue')
C,F,T = fair.forward.fair_scm(emissions = rcp6.Emissions.emissions,
F_solar = rcp6.Forcing.solar,
F_volcanic = rcp6.Forcing.volcanic,
natural = natural.Emissions.emissions,
useMultigas= True)
ax1.plot(rcp6.Emissions.year, rcp6.Emissions.co2_fossil, color='red', label='RCP6')
ax2.plot(rcp6.Emissions.year, C[:,0], color='red')
ax3.plot(rcp6.Emissions.year, np.sum(F, axis=1), color='red')
ax4.plot(rcp6.Emissions.year, T, color='red')
C,F,T = fair.forward.fair_scm(emissions = rcp85.Emissions.emissions,
F_solar = rcp85.Forcing.solar,
F_volcanic = rcp85.Forcing.volcanic,
natural = natural.Emissions.emissions,
useMultigas= True)
ax1.plot(rcp85.Emissions.year, rcp85.Emissions.co2_fossil, color='black', label='RCP8.5')
ax2.plot(rcp85.Emissions.year, C[:,0], color='black')
ax3.plot(rcp85.Emissions.year, np.sum(F, axis=1), color='black')
ax4.plot(rcp85.Emissions.year, T, color='black')
ax1.set_ylabel('Fossil CO$_2$ Emissions (GtC)')
ax1.legend()
ax2.set_ylabel('CO$_2$ concentrations (ppm)')
ax3.set_ylabel('Total radiative forcing (W.m$^{-2}$)')
ax4.set_ylabel('Temperature anomaly (K)')
# + deletable=true editable=true
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
# %matplotlib inline
# -
# ## Reading data
# +
df = pd.read_csv('https://raw.githubusercontent.com/aczepielik/KRKtram/master/reports/report_07-23.csv')
df1 = pd.read_csv('https://raw.githubusercontent.com/aczepielik/KRKtram/master/reports/report_07-24.csv')
df = pd.concat([df,df1])
df.head()
# -
df[df.tripId == 6351558574044883205]
# ### Showing different values in delay column with the frequency
df.delay.value_counts(normalize=True)
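# `value_counts(normalize=True)` reports each delay value's share of the rows; a plain-Python sketch of the same computation (function name is ours, for illustration):

```python
from collections import Counter

def value_counts_normalized(values):
    # Frequency of each value as a fraction of the total,
    # like pandas' value_counts(normalize=True).
    counts = Counter(values)
    total = len(values)
    return {v: c / total for v, c in counts.items()}

freq = value_counts_normalized([0, 0, 1, 2, 0])
# → {0: 0.6, 1: 0.2, 2: 0.2}
```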
df.delay.hist(bins=15);
# ### Viewing data
df.delay.describe()
df.columns
df['direction'].value_counts()
# encode each (number, direction) pair as a single integer category
df.apply(lambda x: '{} {}'.format(x['number'], x['direction']), axis=1).factorize()[0]
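# `factorize()[0]` assigns each distinct string an integer code in order of first appearance; a minimal pure-Python sketch (illustrative values):

```python
def factorize(values):
    # Map each distinct value to an integer code in order of first
    # appearance, mirroring what pandas' factorize()[0] returns.
    codes, seen = [], {}
    for v in values:
        if v not in seen:
            seen[v] = len(seen)
        codes.append(seen[v])
    return codes

codes = factorize(["1 A", "2 B", "1 A"])
# → [0, 1, 0]
```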
# +
df['plannedTime'] = pd.to_datetime(df['plannedTime'])  # convert values in plannedTime column to datetime
df[['plannedTime']].info()
df['hour'] = df['plannedTime'].dt.hour #assign an hour from plannedTime to new column
# +
# delay in seconds
df['delay_secs'] = df['delay'].map(lambda x: x*60)
# normalize the format of value in direction column
df['direction_cat'] = df['direction'].factorize()[0]
# assign -1 to missing values
df['vehicleId'].fillna(-1, inplace=True)
df['seq_num'].fillna(-1, inplace=True)
# function to combine value from number and direction column
def gen_id_num_direction(x):
return '{} {}'.format(x['number'], x['direction'])
df['num_dir'] = df.apply(gen_id_num_direction, axis=1).factorize()[0]
# function to combine value from stop and direction column
def gen_id_stop_direction(x):
return '{} {}'.format(x['stop'], x['direction'])
df['stop_dir'] = df.apply(gen_id_stop_direction, axis=1).factorize()[0]
feats = ['number',
'stop',
'direction_cat',
'vehicleId',
'seq_num',
'num_dir',
'stop_dir',
'hour'
]
X = df[ feats ].values
y = df['delay_secs'].values
model = DecisionTreeRegressor(max_depth=10, random_state=0)
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
# -
# y_pred => [0, 3, 2]
#
# y_test => [1, 2, 0]
#
# error => [1, 1, 2]
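# These per-sample absolute errors average into the mean absolute error; `neg_mean_absolute_error` is simply its negation, since scikit-learn scorers are maximized. With the toy values above:

```python
def mean_absolute_error(y_true, y_pred):
    # Average of the absolute per-sample errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

mae = mean_absolute_error([1, 2, 0], [0, 3, 2])
# → (1 + 1 + 2) / 3 ≈ 1.333; the scorer would report -1.333
```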
| Step1/Krakowskie_tramwaje.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (Ptr-Net)
# language: python
# name: pycharm-9026f3d8
# ---
# + pycharm={"name": "#%%\n"}
import torch
import torch.nn as nn
import torch.nn.functional as F
def masked_log_softmax(vector:torch.Tensor, mask:torch.Tensor, dim:int=-1) -> torch.Tensor:
if mask is not None:
mask = mask.float()
while mask.dim() < vector.dim():
mask = mask.unsqueeze(1)
        # vector + mask.log() is an easy way to zero out masked elements in log space, but it
        # results in nans when the whole vector is masked. We need a very small value instead
        # of zero in the mask for those cases, so we add a tiny epsilon (1e-40) before calling
        # mask.log(); log(1 + 1e-40) is still essentially 0, while log(1e-40) is a large
        # negative number rather than -inf.
        vector = vector + (mask + 1e-40).log()
return F.log_softmax(vector, dim=dim)
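# To see why the epsilon matters, here is a minimal pure-Python sketch of the same masking trick (no torch; the function name is ours, for illustration only):

```python
import math

def masked_log_softmax_1d(scores, mask, eps=1e-40):
    # Adding log(mask + eps) leaves kept entries (mask == 1) almost
    # unchanged and pushes masked entries (mask == 0) to a huge negative
    # value instead of -inf, so the softmax never produces NaNs.
    shifted = [s + math.log(m + eps) for s, m in zip(scores, mask)]
    mx = max(shifted)  # shift by the max for numerical stability
    log_z = mx + math.log(sum(math.exp(s - mx) for s in shifted))
    return [s - log_z for s in shifted]

log_probs = masked_log_softmax_1d([1.0, 2.0, 3.0], [1.0, 1.0, 0.0])
probs = [math.exp(lp) for lp in log_probs]
# probs[0] and probs[1] renormalize to sum to ~1; probs[2] is ~0
```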
def masked_max(vector:torch.Tensor, mask:torch.Tensor, dim:int, keep_dim:bool=False, min_val:float=-1e7) -> (torch.Tensor, torch.Tensor):
"""
计算最大值:在masked值的特定的维度
:param vector: 计算最大值的vector,假定没有mask的部分全是0
:param mask: vector的mask,必须是可以扩展到vector的形状
:param dim: 计算max的维度
:param keep_dim: 是否保持dim
:param min_val: paddings的最小值
:return: 包括最大值的Tensor
"""
one_minus_mask = ~mask
replaced_vector = vector.masked_fill(one_minus_mask, min_val)
max_value, max_index = replaced_vector.max(dim=dim, keepdim=keep_dim)
return max_value, max_index
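# The same idea in plain Python (illustrative sketch, no torch): replace masked-out entries with a very small value so they can never win the max.

```python
def masked_max_1d(values, mask, min_val=-1e7):
    # Entries where mask is False are replaced by min_val, so they lose
    # the max (assuming real values are all greater than min_val).
    replaced = [v if m else min_val for v, m in zip(values, mask)]
    best = max(replaced)
    return best, replaced.index(best)

value, index = masked_max_1d([0.3, 0.9, 0.5], [True, False, True])
# → (0.5, 2): the 0.9 is masked out
```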
# + pycharm={"name": "#%%\n"}
class Encoder(nn.Module):
def __init__(self, embedding_dim, hidden_size, num_layers=1, batch_first=True, bidirectional=True):
super(Encoder, self).__init__()
self.batch_first = batch_first
self.rnn = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_size, num_layers=num_layers, batch_first=batch_first, bidirectional=bidirectional)
def forward(self, embedded_inputs, input_lengths):
        # pack the padded sequence for the RNN
packed = nn.utils.rnn.pack_padded_sequence(embedded_inputs, input_lengths.cpu(), batch_first=self.batch_first)
# forward pass through RNN
outputs, hidden = self.rnn(packed)
# Unpack padding
outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=self.batch_first)
        # return the outputs and the final hidden state
return outputs, hidden
# + pycharm={"name": "#%%\n"}
class Attention(nn.Module):
def __init__(self, hidden_size):
super(Attention, self).__init__()
self.hidden_size = hidden_size
self.W1 = nn.Linear(hidden_size, hidden_size, bias=False)
self.W2 = nn.Linear(hidden_size, hidden_size, bias=False)
self.vt = nn.Linear(hidden_size, 1, bias=False)
def forward(self, decoder_state, encoder_outputs, mask):
# (batch_size, max_seq_len, hidden_size)
encoder_transform = self.W1(encoder_outputs)
# (batch_size, 1(unsqueezed), hidden_size)
decoder_transform = self.W2(decoder_state).unsqueeze(1)
# (batch_size, max_seq_len, 1) => (batch_size, max_seq_len)
u_i = self.vt(torch.tanh(encoder_transform + decoder_transform)).squeeze(-1)
# softmax with only valid inputs, excluding zero padded parts
# log_softmax for a better numerical stability
log_score = masked_log_softmax(u_i, mask, dim=-1)
return log_score
# + pycharm={"name": "#%%\n"}
class PointerNet(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_size, bidirectional=True, batch_first=True):
super(PointerNet, self).__init__()
# Embedding dimension
self.embedding_dim = embedding_dim
# decoder hidden size
self.hidden_size = hidden_size
# bidirectional encoder
self.bidirectional = bidirectional
self.num_directions = 2 if bidirectional else 1
self.num_layers = 1
self.batch_first = batch_first
        # We keep an embedding layer for more complex future uses, e.g. word sequences.
self.embedding = nn.Linear(in_features=input_dim, out_features=embedding_dim,bias=False)
self.encoder = Encoder(embedding_dim=embedding_dim, hidden_size=hidden_size, num_layers=self.num_layers, bidirectional=bidirectional,batch_first=batch_first)
self.decoding_rnn = nn.LSTMCell(input_size=hidden_size, hidden_size=hidden_size)
self.attn = Attention(hidden_size=hidden_size)
for m in self.modules():
if isinstance(m, nn.Linear):
if m.bias is not None:
torch.nn.init.zeros_(m.bias)
def forward(self, input_seq, input_lengths):
if self.batch_first:
batch_size = input_seq.size(0)
max_seq_len = input_seq.size(1)
else:
batch_size = input_seq.size(1)
max_seq_len = input_seq.size(0)
# embedding
embedded = self.embedding(input_seq)
# (batch_size, max_seq_len, embedding_dim)
# encoder_output => (batch_size, max_seq_len, hidden_size) if batch_first else (max_seq_len, batch_size, hidden_size)
# hidden_size is usually set same as embedding size
# encoder_hidden => (num_layers * num_directions, batch_size, hidden_size) for each of h_n and c_n
encoder_outputs, encoder_hidden = self.encoder(embedded, input_lengths)
if self.bidirectional:
# Optionally, Sum bidirectional RNN outputs
# (batch_size, max_seq_len, hidden_size)
encoder_outputs = encoder_outputs[:, :, :self.hidden_size] + encoder_outputs[:, :, self.hidden_size:]
encoder_h_n, encoder_c_n = encoder_hidden
# (1, 2, batch_size, hidden_size)
encoder_h_n = encoder_h_n.view(self.num_layers, self.num_directions, batch_size, self.hidden_size)
encoder_c_n = encoder_c_n.view(self.num_layers, self.num_directions, batch_size, self.hidden_size)
# Let's use zeros as an initial input
# (batch_size, hidden_size)
decoder_input = encoder_outputs.new_zeros(torch.Size((batch_size, self.hidden_size)))
# ((batch_size, hidden_size), (batch_size, hidden_size))
decoder_hidden = (encoder_h_n[-1, 0, :, :].squeeze(), encoder_c_n[-1, 0, :, :].squeeze())
# (batch_size, max_seq_len, max_seq_len)
range_tensor = torch.arange(max_seq_len, device=input_lengths.device, dtype=input_lengths.dtype).expand(batch_size, max_seq_len, max_seq_len)
each_len_tensor = input_lengths.view(-1, 1, 1).expand(batch_size, max_seq_len, max_seq_len)
# (batch_size, max_seq_len, max_seq_len)
row_mask_tensor = (range_tensor < each_len_tensor)
col_mask_tensor = row_mask_tensor.transpose(1, 2)
mask_tensor = row_mask_tensor * col_mask_tensor
pointer_log_scores = []
pointer_argmaxs = []
for i in range(max_seq_len):
# we will simply mask out when calculating attention or max (and loss later)
# not all input and hidden, just for simplicity
# (batch_size, max_seq_len)
sub_mask = mask_tensor[:, i, :]
# h,c is both (batch_size, hidden_size)
h_i, c_i = self.decoding_rnn(decoder_input, decoder_hidden)
# next hidden
decoder_hidden = (h_i, c_i)
# get a pointer distribution over the encoder outputs using attention
# (batch_size, max_seq_len)
log_pointer_score = self.attn(h_i, encoder_outputs, sub_mask)
pointer_log_scores.append(log_pointer_score)
# get the indices of maximum pointer
# (batch_size, 1)
_, masked_argmax = masked_max(log_pointer_score, sub_mask, dim=1, keep_dim=True)
pointer_argmaxs.append(masked_argmax)
# (batch_size, 1, hidden_size)
index_tensor = masked_argmax.unsqueeze(-1).expand(batch_size, 1, self.hidden_size)
            # encoder_outputs is (batch_size, max_seq_len, hidden_size)
            # index_tensor is (batch_size, 1, hidden_size); all hidden_size entries repeat the same chosen index
            # decoder_input: (batch_size, 1, hidden_size).squeeze(1), i.e. (batch_size, hidden_size)
decoder_input = torch.gather(encoder_outputs, dim=1, index=index_tensor).squeeze(1)
        # stack adds a new dimension:
        # max_seq_len tensors of (batch_size, max_seq_len) become (batch_size, max_seq_len, max_seq_len)
pointer_log_scores = torch.stack(pointer_log_scores, 1)
        # cat concatenates along an existing dimension without adding a new one:
        # max_seq_len tensors of (batch_size, 1) become (batch_size, max_seq_len)
pointer_argmaxs = torch.cat(pointer_argmaxs, 1)
return pointer_log_scores, pointer_argmaxs, mask_tensor
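# The (range_tensor < each_len_tensor) comparison above builds a per-sequence validity mask; stripped of tensors it amounts to this (function name is ours, for illustration):

```python
def length_mask(lengths, max_seq_len):
    # True where position < sequence length, mirroring the
    # (range_tensor < each_len_tensor) comparison in the forward pass.
    return [[pos < n for pos in range(max_seq_len)] for n in lengths]

mask = length_mask([3, 1], max_seq_len=4)
# mask[0] keeps the first 3 positions, mask[1] only the first
```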
# + pycharm={"name": "#%%\n"}
import itertools
import numpy as np
from torch.utils.data.dataset import Dataset
from tqdm import tqdm
def tsp_opt(points):
"""
    Dynamic programming solution for TSP - O(2^n*n^2)
https://gist.github.com/mlalevic/6222750
:param points: List of (x, y) points
:return: Optimal solution
"""
def length(x_coord, y_coord):
return np.linalg.norm(np.asarray(x_coord) - np.asarray(y_coord))
# Calculate all lengths
all_distances = [[length(x, y) for y in points] for x in points]
# Initial value - just distance from 0 to every other point + keep the track of edges
a = {(frozenset([0, idx+1]), idx+1): (dist, [0, idx+1]) for idx, dist in enumerate(all_distances[0][1:])}
cnt = len(points)
for m in range(2, cnt):
b = {}
for S in [frozenset(C) | {0} for C in itertools.combinations(range(1, cnt), m)]:
for j in S - {0}:
# This will use 0th index of tuple for ordering, the same as if key=itemgetter(0) used
b[(S, j)] = min([(a[(S-{j}, k)][0] + all_distances[k][j], a[(S-{j}, k)][1] + [j])
for k in S if k != 0 and k != j])
a = b
res = min([(a[d][0] + all_distances[0][d[1]], a[d][1]) for d in iter(a)])
return np.asarray(res[1])
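# As a sanity check of what the solver computes (the optimal closed tour starting at point 0), here is a pure-Python brute-force version for tiny instances — an illustrative cross-check, not the DP itself:

```python
import itertools
import math

def tour_length(points, order):
    # Closed-tour length visiting points in the given order.
    n = len(order)
    return sum(math.dist(points[order[i]], points[order[(i + 1) % n]])
               for i in range(n))

def brute_force_tsp(points):
    # Fix city 0 as the start and try every permutation of the rest.
    rest = range(1, len(points))
    return min(((0,) + p for p in itertools.permutations(rest)),
               key=lambda order: tour_length(points, order))

points = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
best = brute_force_tsp(points)
# the optimal tour of a unit square walks the perimeter, length 4
```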
class TSPDataset(Dataset):
"""
Random TSP dataset
"""
def __init__(self, data_size, min_seq_len, max_seq_len, solver=tsp_opt, solve=True):
self.data_size = data_size
self.min_leq_len = min_seq_len
self.max_seq_len = max_seq_len
self.solve = solve
self.solver = solver
self.data = self._generate_data()
def __len__(self):
return self.data_size
def __getitem__(self, idx):
tensor = torch.from_numpy(self.data['Points_List'][idx]).float()
length = len(self.data['Points_List'][idx])
solution = torch.from_numpy(self.data['Solutions'][idx]).long() if self.solve else None
return tensor, length, solution
def _generate_data(self):
"""
:return: Set of points_list ans their One-Hot vector solutions
"""
points_list = []
solutions = []
data_iter = tqdm(range(self.data_size), unit='data')
for i, _ in enumerate(data_iter):
data_iter.set_description('Data points %i/%i' % (i+1, self.data_size))
points_list.append(np.random.random((np.random.randint(self.min_leq_len, self.max_seq_len), 2)))
solutions_iter = tqdm(points_list, unit='solve')
if self.solve:
for i, points in enumerate(solutions_iter):
solutions_iter.set_description('Solved %i/%i' % (i+1, len(points_list)))
solutions.append(self.solver(points))
else:
solutions = None
return {'Points_List':points_list, 'Solutions':solutions}
def _to1hot_vec(self, points):
"""
:param points: List of integers representing the points indexes
:return: Matrix of One-Hot vectors
"""
vec = np.zeros((len(points), self.max_seq_len))
for i, v in enumerate(vec):
v[points[i]] = 1
return vec
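# The one-hot conversion in `_to1hot_vec` can be sketched without numpy as:

```python
def to_one_hot(points, max_seq_len):
    # Each row is a one-hot vector selecting one point index.
    vec = [[0] * max_seq_len for _ in points]
    for row, p in zip(vec, points):
        row[p] = 1
    return vec

onehot = to_one_hot([2, 0, 1], max_seq_len=4)
# → [[0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0]]
```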
def sparse_seq_collate_fn(batch):
batch_size = len(batch)
sorted_seqs, sorted_lengths, sorted_label = zip(*sorted(batch, key=lambda x:x[1], reverse=True))
padded_seqs = [seq.resize_as_(sorted_seqs[0]) for seq in sorted_seqs]
# (sparse) batch_size X max_seq_len X input_dim
seq_tensor = torch.stack(padded_seqs)
# batch_size
length_tensor = torch.LongTensor(sorted_lengths)
padded_labels = list(zip(*(itertools.zip_longest(*sorted_label, fillvalue=-1))))
# batch_size X max_seq_len (-1 padding)
label_tensor = torch.LongTensor(padded_labels).view(batch_size, -1)
# TODO: Currently, PyTorch DataLoader with num_workers >= 1 (multiprocessing) does not support Sparse Tensor
# TODO: Meanwhile, use a dense tensor when num_workers >= 1.
# seq_tensor = seq_tensor.to_dense()
return seq_tensor, length_tensor, label_tensor
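# The label padding above relies on `itertools.zip_longest`; stripped of the tensor plumbing it does this (the fill value -1 matches the `ignore_index` used in the loss later):

```python
import itertools

def pad_labels(labels, fill=-1):
    # Right-pad every label sequence to the longest one with `fill`.
    return [list(row) for row in
            zip(*itertools.zip_longest(*labels, fillvalue=fill))]

padded = pad_labels([[3, 1, 2], [0, 1]])
# → [[3, 1, 2], [0, 1, -1]]
```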
# + pycharm={"name": "#%%\n"}
from torch.optim import Adam
import torch.backends.cudnn as cudnn
from torch.utils.data.dataloader import DataLoader
class AverageMeter:
def __init__(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def reset(self):
self.__init__()
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
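# `AverageMeter` keeps a batch-size-weighted running mean; a minimal standalone version (ours, for illustration) behaves like this:

```python
class RunningAverage:
    # Same bookkeeping as AverageMeter: a sum weighted by batch size n,
    # so batches of different sizes average correctly.
    def __init__(self):
        self.sum = 0.0
        self.count = 0
    def update(self, val, n=1):
        self.sum += val * n
        self.count += n
    @property
    def avg(self):
        return self.sum / self.count

meter = RunningAverage()
meter.update(2.0, n=10)  # batch of 10 with mean loss 2.0
meter.update(5.0, n=30)  # batch of 30 with mean loss 5.0
# meter.avg → (2*10 + 5*30) / 40 = 4.25
```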
def masked_accuracy(output, target, mask):
with torch.no_grad():
masked_output = torch.masked_select(output, mask)
masked_target = torch.masked_select(target, mask)
accuracy = masked_output.eq(masked_target).float().mean()
return accuracy
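# Stripped of tensors, `masked_accuracy` computes accuracy only over unmasked positions; a plain-Python sketch (function name is ours):

```python
def masked_accuracy_py(output, target, mask):
    # Accuracy over positions where mask is True; padded positions
    # (mask False) are excluded from numerator and denominator alike.
    pairs = [(o, t) for o, t, m in zip(output, target, mask) if m]
    return sum(o == t for o, t in pairs) / len(pairs)

acc = masked_accuracy_py([1, 0, 2, 2], [1, 1, 2, 0],
                         [True, True, True, False])
# → 2/3: the last position is padding and ignored
```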
# + pycharm={"name": "#%%\n"}
import pickle
min_length = 5
max_length = 10
batch_size1 = 256
no_cuda = False
use_cuda = not no_cuda and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
cudnn.benchmark = True if use_cuda else False
print("正在加载训练数据集")
# train_set = TSPDataset(data_size = 100000, min_seq_len=min_length, max_seq_len=max_length)
filename = "data/tsp_5_to_10_100000.pkl"
with open(filename, 'rb') as f:
train_set = pickle.load(f) # read file and build object
train_loader = DataLoader(dataset=train_set, batch_size=batch_size1, shuffle=True, collate_fn=sparse_seq_collate_fn)
print("加载训练数据集完成,开始加载测试数据集")
# test_set = TSPDataset(data_size = 10000, min_seq_len=min_length, max_seq_len=max_length)
filename = "data/tsp_5_to_10_10000.pkl"
with open(filename, 'rb') as f:
test_set = pickle.load(f) # read file and build object
test_loader = DataLoader(dataset=test_set, batch_size=batch_size1, shuffle=False, collate_fn=sparse_seq_collate_fn)
print("加载测试数据集完成")
# + pycharm={"name": "#%%\n"}
emb_dim = 100
epochs = 100
model = PointerNet(input_dim=2, embedding_dim=emb_dim, hidden_size=emb_dim).to(device)
train_loss = AverageMeter()
train_accuracy = AverageMeter()
test_loss = AverageMeter()
test_accuracy = AverageMeter()
# + pycharm={"name": "#%%\n"}
lr = 0.01
wd = 1e-5
optimizer = Adam(model.parameters(), lr=lr, weight_decay=wd)
def train():
for epoch in range(epochs):
# Train
model.train()
for batch_idx, (seq, length, target) in enumerate(train_loader):
seq, length, target = seq.to(device), length.to(device), target.to(device)
optimizer.zero_grad()
log_pointer_score, argmax_pointer, mask = model(seq, length)
# (batch * max_seq_len, max_seq_len)
unrolled = log_pointer_score.view(-1, log_pointer_score.size(-1))
# (batch_size, max_seq_len)
loss = F.nll_loss(unrolled, target.view(-1), ignore_index=-1)
assert not np.isnan(loss.item()), 'Model diverged with loss = NaN'
loss.backward()
optimizer.step()
train_loss.update(loss.item(), seq.size(0))
mask = mask[:, 0, :]
train_accuracy.update(masked_accuracy(argmax_pointer, target, mask).item(), mask.int().sum().item())
if batch_idx % 20 == 0:
print('Epoch {}: Train [{}/{} ({:.0f}%)]\tLoss: {:.6f}\tAccuracy: {:.6f}'
.format(epoch, (batch_idx+1) * len(seq), len(train_loader.dataset),
100. * (batch_idx+1) / len(train_loader), train_loss.avg, train_accuracy.avg))
# Test
model.eval()
for seq, length, target in test_loader:
seq, length, target = seq.to(device), length.to(device), target.to(device)
log_pointer_score, argmax_pointer, mask = model(seq, length)
unrolled = log_pointer_score.view(-1, log_pointer_score.size(-1))
loss = F.nll_loss(unrolled, target.view(-1), ignore_index=-1)
assert not np.isnan(loss.item()), 'Model diverged with loss = NaN'
test_loss.update(loss.item(), seq.size(0))
mask = mask[:, 0, :]
test_accuracy.update(masked_accuracy(argmax_pointer, target, mask).item(), mask.int().sum().item())
print('Epoch {}: Test\tLoss: {:.6f}\tAccuracy: {:.6f}'.format(epoch, test_loss.avg, test_accuracy.avg))
# + pycharm={"name": "#%%\n"}
train()
# + pycharm={"name": "#%%\n"}
filename = "model/model_5_to_10_100000_epoch_100_acc_65.3.pkl"
f = open(filename, 'wb')
pickle.dump(model, f)
f.close()
# + pycharm={"name": "#%%\n"}
| PointerNetwork训练过程.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
# +
# I define a "shape-able" Variable
x = tf.Variable(
[],
dtype=tf.int32,
validate_shape=False, # By "shape-able", i mean we don't validate the shape
trainable=False
)
# I build a new shape and assign it to x
concat = tf.concat([x, [0]], 0)
assign_op = tf.assign(x, concat, validate_shape=False)
# +
with tf.control_dependencies([assign_op]):
# I print x after the assignment
x = tf.Print(x, data=[x, x.read_value()], message="x, x_read:")
# The assign_op is called, but it seems that print statement happens
# before the assignment, that is wrong.
# +
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(3):
print(sess.run(x))
# outputs:
# x: [] , x_read: [0]
# x: [0] , x_read: [0 0]
# x: [0 0], x_read: [0 0 0]
# -
| Lectures/Lecture-08/6-fullcode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook for generating and saving SBM PATTERN graphs
# +
import numpy as np
import torch
import pickle
import time
# %matplotlib inline
import matplotlib.pyplot as plt
import scipy.sparse
# -
# # Generate SBM PATTERN graphs
# +
def schuffle(W,c):
# relabel the vertices at random
idx=np.random.permutation( W.shape[0] )
#idx2=np.argsort(idx) # for index ordering wrt classes
W_new=W[idx,:]
W_new=W_new[:,idx]
c_new=c[idx]
return W_new , c_new , idx
def block_model(c,p,q):
n=len(c)
W=np.zeros((n,n))
for i in range(n):
for j in range(i+1,n):
if c[i]==c[j]:
prob=p
else:
prob=q
if np.random.binomial(1,prob)==1:
W[i,j]=1
W[j,i]=1
return W
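# The same sampling scheme with the stdlib `random` module (illustrative sketch; with p=1 and q=0 the output is deterministic, so it doubles as a check):

```python
import random

def block_model_py(labels, p, q, seed=0):
    # Symmetric adjacency matrix: edge prob p within a cluster, q across.
    rng = random.Random(seed)
    n = len(labels)
    W = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if labels[i] == labels[j] else q
            if rng.random() < prob:
                W[i][j] = W[j][i] = 1
    return W

W = block_model_py([0, 0, 1, 1], p=1.0, q=0.0)
# with p=1, q=0 the graph is exactly the two intra-cluster edges
```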
def unbalanced_block_model(nb_of_clust, clust_size_min, clust_size_max, p, q):
c = []
for r in range(nb_of_clust):
if clust_size_max==clust_size_min:
clust_size_r = clust_size_max
else:
clust_size_r = np.random.randint(clust_size_min,clust_size_max,size=1)[0]
val_r = np.repeat(r,clust_size_r,axis=0)
c.append(val_r)
c = np.concatenate(c)
W = block_model(c,p,q)
return W,c
def random_pattern(n,p):
W=np.zeros((n,n))
for i in range(n):
for j in range(i+1,n):
if np.random.binomial(1,p)==1:
W[i,j]=1
W[j,i]=1
return W
def add_pattern(W0,W,c,nb_of_clust,q):
n=W.shape[0]
n0=W0.shape[0]
V=(np.random.rand(n0,n) < q).astype(float)
W_up=np.concatenate( ( W , V.T ) , axis=1 )
W_low=np.concatenate( ( V , W0 ) , axis=1 )
W_new=np.concatenate( (W_up,W_low) , axis=0)
c0=np.full(n0,nb_of_clust)
c_new=np.concatenate( (c, c0),axis=0)
return W_new,c_new
class generate_SBM_graph():
def __init__(self, SBM_parameters):
# parameters
nb_of_clust = SBM_parameters['nb_clusters']
clust_size_min = SBM_parameters['size_min']
clust_size_max = SBM_parameters['size_max']
p = SBM_parameters['p']
q = SBM_parameters['q']
p_pattern = SBM_parameters['p_pattern']
q_pattern = SBM_parameters['q_pattern']
vocab_size = SBM_parameters['vocab_size']
W0 = SBM_parameters['W0']
u0 = SBM_parameters['u0']
# block model
W, c = unbalanced_block_model(nb_of_clust, clust_size_min, clust_size_max, p, q)
# signal on block model
u = np.random.randint(vocab_size, size=W.shape[0])
# add the subgraph to be detected
W, c = add_pattern(W0,W,c,nb_of_clust,q_pattern)
u = np.concatenate((u,u0),axis=0)
# shuffle
W, c, idx = schuffle(W,c)
u = u[idx]
# target
target = (c==nb_of_clust).astype(float)
# convert to pytorch
W = torch.from_numpy(W)
W = W.to(torch.int8)
idx = torch.from_numpy(idx)
idx = idx.to(torch.int16)
u = torch.from_numpy(u)
u = u.to(torch.int16)
target = torch.from_numpy(target)
target = target.to(torch.int16)
# attributes
self.nb_nodes = W.size(0)
self.W = W
self.rand_idx = idx
self.node_feat = u
self.node_label = target
# configuration
SBM_parameters = {}
SBM_parameters['nb_clusters'] = 10
SBM_parameters['size_min'] = 5
SBM_parameters['size_max'] = 15 # 25
SBM_parameters['p'] = 0.5 # 0.5
SBM_parameters['q'] = 0.25 # 0.1
SBM_parameters['p_pattern'] = 0.5 # 0.5
SBM_parameters['q_pattern'] = 0.25 # 0.1
SBM_parameters['vocab_size'] = 3
SBM_parameters['size_subgraph'] = 10
SBM_parameters['W0'] = random_pattern(SBM_parameters['size_subgraph'],SBM_parameters['p_pattern'])
SBM_parameters['u0'] = np.random.randint(SBM_parameters['vocab_size'],size=SBM_parameters['size_subgraph'])
print(SBM_parameters)
data = generate_SBM_graph(SBM_parameters)
print(data)
#print(data.nb_nodes)
#print(data.W)
#print(data.rand_idx)
#print(data.node_feat)
#print(data.node_label)
# +
#Plot Adj matrix
W = data.W
plt.spy(W,precision=0.01, markersize=1)
plt.show()
idx = np.argsort(data.rand_idx)
W = data.W
W2 = W[idx,:]
W2 = W2[:,idx]
plt.spy(W2,precision=0.01, markersize=1)
plt.show()
target = data.node_label
target = target[idx]
print(target)
# -
# +
# Generate and save SBM graphs
class DotDict(dict):
def __init__(self, **kwds):
self.update(kwds)
self.__dict__ = self
def plot_histo_graphs(dataset, title):
# histogram of graph sizes
graph_sizes = []
for graph in dataset:
graph_sizes.append(graph.nb_nodes)
plt.figure(1)
plt.hist(graph_sizes, bins=50)
plt.title(title)
plt.show()
start = time.time()
# configuration for 100 patterns 100/20
nb_pattern_instances = 100 # nb of patterns
nb_train_graphs_per_pattern_instance = 100 # train per pattern
nb_test_graphs_per_pattern_instance = 20 # test, val per pattern
SBM_parameters = {}
SBM_parameters['nb_clusters'] = 5
SBM_parameters['size_min'] = 5
SBM_parameters['size_max'] = 35
SBM_parameters['p'] = 0.5
SBM_parameters['q'] = 0.2
SBM_parameters['p_pattern'] = 0.5
SBM_parameters['q_pattern'] = 0.5
SBM_parameters['vocab_size'] = 3
SBM_parameters['size_subgraph'] = 20
print(SBM_parameters)
dataset_train = []
dataset_val = []
dataset_test = []
for idx in range(nb_pattern_instances):
print('pattern:',idx)
SBM_parameters['W0'] = random_pattern(SBM_parameters['size_subgraph'],SBM_parameters['p'])
SBM_parameters['u0'] = np.random.randint(SBM_parameters['vocab_size'],size=SBM_parameters['size_subgraph'])
for _ in range(nb_train_graphs_per_pattern_instance):
data = generate_SBM_graph(SBM_parameters)
graph = DotDict()
graph.nb_nodes = data.nb_nodes
graph.W = data.W
graph.rand_idx = data.rand_idx
graph.node_feat = data.node_feat
graph.node_label = data.node_label
dataset_train.append(graph)
for _ in range(nb_test_graphs_per_pattern_instance):
data = generate_SBM_graph(SBM_parameters)
graph = DotDict()
graph.nb_nodes = data.nb_nodes
graph.W = data.W
graph.rand_idx = data.rand_idx
graph.node_feat = data.node_feat
graph.node_label = data.node_label
dataset_val.append(graph)
for _ in range(nb_test_graphs_per_pattern_instance):
data = generate_SBM_graph(SBM_parameters)
graph = DotDict()
graph.nb_nodes = data.nb_nodes
graph.W = data.W
graph.rand_idx = data.rand_idx
graph.node_feat = data.node_feat
graph.node_label = data.node_label
dataset_test.append(graph)
print(len(dataset_train),len(dataset_val),len(dataset_test))
plot_histo_graphs(dataset_train,'train')
plot_histo_graphs(dataset_val,'val')
plot_histo_graphs(dataset_test,'test')
with open('SBM_PATTERN_train.pkl',"wb") as f:
pickle.dump(dataset_train,f)
with open('SBM_PATTERN_val.pkl',"wb") as f:
pickle.dump(dataset_val,f)
with open('SBM_PATTERN_test.pkl',"wb") as f:
pickle.dump(dataset_test,f)
print('Time (sec):',time.time() - start) # 163s
# -
# # Convert to DGL format and save with pickle
import os
os.chdir('../../') # go to root folder of the project
print(os.getcwd())
# +
import pickle
# %load_ext autoreload
# %autoreload 2
from data.SBMs import SBMsDatasetDGL
from data.data import LoadData
from torch.utils.data import DataLoader
from data.SBMs import SBMsDataset
# -
DATASET_NAME = 'SBM_PATTERN'
dataset = SBMsDatasetDGL(DATASET_NAME) # 4424s = 73min
# +
print(len(dataset.train))
print(len(dataset.val))
print(len(dataset.test))
print(dataset.train[0])
print(dataset.val[0])
print(dataset.test[0])
# +
start = time.time()
with open('data/SBMs/SBM_PATTERN.pkl','wb') as f:
pickle.dump([dataset.train,dataset.val,dataset.test],f)
print('Time (sec):',time.time() - start) # 21s
# -
# # Test load function
DATASET_NAME = 'SBM_PATTERN'
dataset = LoadData(DATASET_NAME) # 30s
trainset, valset, testset = dataset.train, dataset.val, dataset.test
# +
start = time.time()
batch_size = 10
collate = SBMsDataset.collate
train_loader = DataLoader(trainset, batch_size=batch_size, shuffle=True, collate_fn=collate)
print('Time (sec):',time.time() - start) #0.0006
# -
| data/SBMs/generate_SBM_PATTERN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import wfdb
import pandas as pd
# +
from utils.scoring_metrics import (
RefInfo, load_ans,
score, ue_calculate, ur_calculate,
compute_challenge_metric, gen_endpoint_score_mask,
)
from utils.scoring_metrics_test import _load_af_episodes
# from database_reader.cpsc_databases import CPSC2021
# from data_reader import CINC2021Reader
from data_reader import CPSC2021Reader
from utils.misc import list_sum
# -
db_dir = "/home/wenhao/Jupyter/wenhao/data/CPSC2021/"
# # check data reader
dr = CPSC2021Reader(db_dir=db_dir)
dr.df_stats["sig_len"] / dr.df_stats["fs"]
dr.diagnoses_records_list
# ### normal class
rec = dr.diagnoses_records_list["N"][42]
rec
# +
hehe_data = dr.load_data(rec)
hehe_data.shape
# -
dr._get_path(rec)
dr.load_ann(rec)
dr.load_label(rec, fmt="f"), dr.load_label(rec, fmt="a"), dr.load_label(rec, fmt="n")
hehe_rpeaks = dr.load_rpeaks(rec)
hehe_rpeaks.shape
dr.load_rpeaks(rec, sampfrom=100, zero_start=True)
dr.load_af_episodes(rec, fmt="intervals")
dr.load_af_episodes(rec, fmt="c_intervals")
# +
hehe_mask = dr.load_af_episodes(rec, fmt="mask")
hehe_mask.shape
# -
dr.plot(rec, sampfrom=1000, sampto=2000)
# ### AFp
rec = dr.diagnoses_records_list["AFp"][42]
rec
# +
hehe_data = dr.load_data(rec)
hehe_data.shape
# -
dr._get_path(rec)
dr.load_ann(rec)
dr.load_label(rec, fmt="f"), dr.load_label(rec, fmt="a"), dr.load_label(rec, fmt="n")
hehe_rpeaks = dr.load_rpeaks(rec)
hehe_rpeaks.shape
dr.load_rpeaks(rec, sampfrom=1000, zero_start=True)
dr.load_af_episodes(rec, fmt="intervals")
dr.load_af_episodes(rec, fmt="c_intervals")
# +
hehe_mask = dr.load_af_episodes(rec, fmt="mask")
hehe_mask.shape
# -
dr.plot(rec, sampfrom=670000, sampto=672000)
# ### AFf
# +
rec = dr.diagnoses_records_list["AFf"][42]
rec
# -
dr.load_ann(rec)
dr.load_label(rec, fmt="f"), dr.load_label(rec, fmt="a"), dr.load_label(rec, fmt="n")
hehe_rpeaks = dr.load_rpeaks(rec)
hehe_rpeaks.shape
dr.load_rpeaks(rec, sampfrom=100, zero_start=True)
dr.load_af_episodes(rec, fmt="intervals")
dr.load_af_episodes(rec, fmt="c_intervals")
# +
hehe_mask = dr.load_af_episodes(rec, fmt="mask")
hehe_mask.shape
# -
dr.plot(rec, sampfrom=1000, sampto=5000)
# # utils check
from utils.utils_signal import get_ampl, ensure_siglen
get_ampl(dr.load_data(rec), fs=dr.fs, critical_points=dr.load_rpeaks(rec))
get_ampl(dr.load_data(rec), fs=dr.fs)
# # check custom scoring metrics
from utils.scoring_metrics_test import run_test, run_single_test
# +
# # ?run_test
# -
l_rec = [dr._get_path(rec) for rec in dr.all_records]
run_test(l_rec)
# +
# err_list = data_39_4,data_48_4,data_68_23,data_98_5,data_101_5,data_101_7,data_101_8,data_104_25,data_104_27
# -
rec = "data_39_4"
run_single_test(dr._get_path(rec), verbose=2)
ref_info = RefInfo(dr._get_path(rec))
o_mask = ref_info._gen_endpoint_score_range()
o_mask[1][90527:]
c_mask = gen_endpoint_score_mask(
siglen=dr.df_stats[dr.df_stats.record==rec].iloc[0]["sig_len"],
critical_points=wfdb.rdann(dr._get_path(rec),extension=dr.ann_ext).sample,
af_intervals=dr.load_af_episodes(rec, fmt="c_intervals")
)
c_mask[1][90527:]
dr.plot(rec, sampfrom=89600)
# # check data generator and sliced segments and rr_seq
from dataset import CPSC2021
from cfg import TrainCfg
ds = CPSC2021(TrainCfg, task="rr_lstm", training=True)
len(ds)
ds[0]
ds.reset_task("qrs_detection")
len(ds)
ds[0]
ds._load_seg_ann(ds.segments[0])
| inspect_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory analysis
# The current data that cover the next tools are:
#
# **Water risk atlas tool indicators**:
# * Baseline Water Stress (year;month)
# * Inter Annual Variability
# * Seasonal variability
# * Drought Severity Soil Moisture
# * Drought Severity Streamflow
# * Groundwater Stress
# * Groundwater Table Declining Trend
# * Probability of inundation from river floods
# * Index for Coastal eutrophication Potential (ICEP)
# * Reprisk Index (RRI)
# * Future Water Stress
# * Projected Change in Seasonal variability
# * Threatened Amphibians
# * Access to water
# * Flood Occurrence
#
# **Food risk analyzer tool indicators**:
#
#
# | Data: | Ifript food |
# |:----------------------: |:-------------: |
# | geographical coverage: | global |
# | geographical resolution: | Country |
# | temporal range: | 2005-2045 |
# | temporal resolution: | yearly |
#
#
# | indicator | Crop coverage |
# |:----------------------: |:-------------: |
# | Kcal per capita | - |
# | Yield | all |
# | Pop. at risk of hunger | - |
# | Area | all |
# | World price | all |
# | Production | all |
# | Net trade | all |
# | Food Demand | all |
#
#
#
#
# * Water risk atlas:
# * indicators:
# * Baseline water stress(year;month)
# * Drought Severity Soil Moisture
# * Drought Severity Streamflow
# * Environmental Flows
# * Inter Annual Variability
# * Seasonal Variability
# * Water demand
# * groundwater stress
# * groundwater table declining trend
# * groundwater stress
# * groundwater table declining trend
# * Risk to rainfed Agriculture (precipitation derived)
# * geographical coverage:
# * Global
# * geographical resolution:
# * Country
# * Subcatchement
# * Crop list:
# * Banana
# * Barley
# * Beans
# * Cassava
# * All Cereals
# * Chickpeas
# * Cowpeas
# * Groundnut
# * Lentils
# * Maize
# * Millet
# * Pigeonpeas
# * Plantain
# * Potato
# * All Pulses
# * Rice
# * Sorghum
# * Soybean
# * Sweet Potato
# * Wheat
# * Yams
#
#
#
# Data location: Carto
# Account: wri-01
# Tables:
# * Water risk:
# * water_risk_data
# * crops:
# * crops_location
# * crops
# * ifript food data:
# * combined01_prepared
# ### Configurations and imports
# %matplotlib inline
# +
import math
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import rasterio
from rasterio.plot import show
# -
# %reload_ext version_information
# %version_information numpy, matplotlib
#
# ```sql
# SELECT * FROM water_risk_data
# ```
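#
# A minimal sketch of issuing such a query against the Carto SQL API (the
# standard `api/v2/sql` endpoint); `wri-01` and `water_risk_data` come from the
# notes above, while `carto_query_url` is our own helper name:

```python
from urllib.parse import urlencode

def carto_query_url(account, sql):
    """Build a GET URL for the Carto SQL API (v2 endpoint) of a given account."""
    base = f"https://{account}.carto.com/api/v2/sql"
    return base + "?" + urlencode({"q": sql, "format": "json"})

# The request itself would then be e.g. requests.get(carto_query_url(...)).json()
url = carto_query_url("wri-01", "SELECT * FROM water_risk_data LIMIT 5")
```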
#
#
| Aqueduct/lab/.ipynb_checkpoints/AQ1.-data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Hyperparameter optimization of WaveNet
#
# We're going to train a full, regularized CNN architecture with some automatic hyperparameter optimization using `hyperas` (a wrapper for `hyperopt`).
# +
import logging
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from hyperopt import Trials, STATUS_OK, tpe
from hyperopt.pyll.base import scope
from hyperas import optim
from hyperas.distributions import choice, uniform, quniform
from hyperas.utils import eval_hyperopt_space
import tools.train as train
import tools.models as models
import tools.plot as plot
# Suppress tensorflow warnings about internal deprecations
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
## Read in data and count classes
files = ("../data/mitbih_train.csv", "../data/mitbih_test.csv")
inputs, labels, sparse_labels, df = train.preprocess(*files, fft=False)
# Add a dimension for "channels"
for key in inputs:
inputs[key] = tf.expand_dims(inputs[key], axis=2)
train.class_count(df)
# +
# functions for hyperas
def data():
## Read in data
files = ("../data/mitbih_train.csv", "../data/mitbih_test.csv")
inputs, labels, sparse_labels, df = train.preprocess(*files, fft=False)
# Add a dimension for "channels"
for key in inputs:
inputs[key] = tf.expand_dims(inputs[key], axis=2)
return inputs, labels
def create_model(inputs, labels):
nblocks = {{quniform(3.5, 20.5, 1)}}
nfilters = {{quniform(31.5, 128.5, 1)}}
batch_size = {{quniform(49.5, 500.5, 1)}}
l1_reg = {{uniform(0, 0.1)}}
l2_reg = {{uniform(0, 0.1)}}
dilation_limit = {{quniform(0.5, inputs["train"].shape[1], 1)}}
# Start
layerlist_res = [("conv", {"filters": int(nfilters), "width": 1, "padding": "causal"})]
# Residual blocks
models.add_res_blocks(int(nblocks), int(nfilters), dilation_limit, layerlist_res)
# End
layerlist_res.extend([
(layers.Activation("relu"),),
("conv", {"filters": int(nfilters), "width": 1, "padding": "causal"}),
("conv", {"filters": 1, "width": 1, "padding": "causal"}),
(layers.Dropout({{uniform(0, 1)}}),)
])
config = {
"optimizer": "Nadam",
"loss": "sparse_categorical_crossentropy",
"batch_size": int(batch_size),
"val_split": 0.4,
"epochs": 8,
"verbose": 0,
"patience": 5,
"weighted_metrics": ["accuracy"],
"regularizer": regularizers.l1_l2(l1=l1_reg, l2=l2_reg),
}
inputsize = inputs["train"].shape[1]
ncategories = labels["train"].shape[1]
model_res = models.create_conv1d(inputsize, layerlist_res, ncategories, config)
history = train.train(model_res, inputs, sparse_labels, config)
    # get the highest validation accuracy across the training epochs
    validation_acc = np.amax(history.history['val_accuracy_1'])
return {'loss': -validation_acc, 'status': STATUS_OK, 'model': model_res}
best_run, best_model, space = optim.minimize(
model=create_model,
data=data,
algo=tpe.suggest,
max_evals=50,
eval_space=True,
return_space=True,
trials=Trials(),
notebook_name='wavenet_hyperopt',
verbose=False,
)
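#
# The `{{quniform(a, b, q)}}` templates above draw from hyperopt's quantized
# uniform distribution, i.e. `round(uniform(a, b) / q) * q`; the extra 0.5 on
# each bound gives every integer in the target range an equal-width bin, so the
# `int()` casts land uniformly on 4..20 blocks, 32..128 filters, and so on. A
# standalone sketch (pure Python, `quniform_sample` is our own name):

```python
import random

def quniform_sample(low, high, q, rng):
    # hyperopt's quniform: a uniform draw snapped to the nearest multiple of q
    return round(rng.uniform(low, high) / q) * q

rng = random.Random(0)
samples = [quniform_sample(3.5, 20.5, 1, rng) for _ in range(1000)]
# every sample is an integer block count between 4 and 20
```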
# +
print("Chosen hyperparameters from the best-trained model")
print(best_run)
print(
    "Train acc of best performing model after training:",
best_model.evaluate(inputs["train"], sparse_labels["train"], verbose=0)[1],
)
print(
    "Test acc of best performing model after training:",
best_model.evaluate(inputs["test"], sparse_labels["test"], verbose=0)[1],
)
test_pred = np.argmax(best_model.predict(inputs["test"]), axis=1)
plot.plot_cm(
sparse_labels["test"],
test_pred,
classes=np.arange(5),
normalize=True,
norm_fmt=".3f",
)
| code/wavenet_hyperopt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import string
from nltk.corpus import stopwords
from collections import Counter
from string import digits
from tqdm import tqdm
import spacy
from nltk.stem import SnowballStemmer
import warnings
from langdetect import detect
from langdetect import DetectorFactory
warnings.filterwarnings("ignore")
# -
# # Data preprocessing
df = pd.read_csv("extracted_data.csv")
df
# Remove sensitive words
df["text_wo_seinfo"] = df.Text.replace(r'<PERSON[0-9_]*>', '', regex=True)
df["text_wo_seinfo"] = df.text_wo_seinfo.replace(
r'<LOCATION[0-9_]*>', '', regex=True)
# Remove the carriage return, tab, line feed
df["text_wo_seinfo_wo_enter"] = df.text_wo_seinfo.replace(
r'\r+|\n+|\t+', ' ', regex=True)
# Lowercase words
df["text_wo_seinfo_wo_enter_lower"] = df["text_wo_seinfo_wo_enter"].str.lower()
# +
print("these are the punctuations that will be converted to spaces: "+string.punctuation)
# symbols are converted to spaces
def remove_punctuations(text):
for punctuation in string.punctuation:
text = text.replace(punctuation, ' ')
return text
df["text_wo_seinfo_wo_enter_lower_wo_punct"] = df["text_wo_seinfo_wo_enter_lower"].apply(
lambda text: remove_punctuations(text))
df.head()
# -
# Stopwords that will be processed
", ".join(stopwords.words('english')+stopwords.words('german'))
# +
# Remove stopwords
STOPWORDS = set(stopwords.words('english')+stopwords.words('german'))
def remove_stopwords(text):
"""custom function to remove the stopwords"""
return " ".join([word for word in str(text).split() if word not in STOPWORDS])
df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stop"] = df["text_wo_seinfo_wo_enter_lower_wo_punct"].apply(
lambda text: remove_stopwords(text))
df.head()
# -
# Counting high frequency words
cnt = Counter()
for text in df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stop"].values:
for word in text.split():
cnt[word] += 1
# The 130 most frequent words
cnt.most_common(130)
# +
# Keep the meaningful part of the 130 high-frequency words
FREQWORDS = set([w for (w, wc) in cnt.most_common(130)])
FREQWORDS.difference_update(["client", "hana", "gbi", "user", "team", "mandanten", "university", "students", "mandant", "version", "access", "connection", "domain", "mail",
                             "error", "gui", "data", "fehlermeldung", "reset", "password", "<PASSWORD>", "<PASSWORD>", "service", "master", "erp", "email", "server", "id", "ides",
                             "ip", "hochschule", "student", "request", "login", "users", "contract", "portal", "message", "case", "tum", "4hana", "remote", "ticket", "lumira"])
# Remove the words that have no obvious meaning among the 130 high-frequency words
def remove_freqwords(text):
"""custom function to remove the frequent words"""
return " ".join([word for word in str(text).split() if word not in FREQWORDS])
df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq"] = df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stop"].apply(
lambda text: remove_freqwords(text))
df.head()
# +
# Remove rare words with count less than 3
RAREWORDS = set([w for (w, wc) in cnt.most_common() if wc < 3])
print("number of rare words with count less than 3: "+str(len(RAREWORDS)))
def remove_rarewords(text):
"""custom function to remove the rare words"""
return " ".join([word for word in str(text).split() if word not in RAREWORDS])
df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare"] = df[
"text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq"].apply(lambda text: remove_rarewords(text))
df.head()
# +
# Remove numbers
def remove_num(text):
return " ".join([word for word in str(text).split() if not word.isdigit()])
df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num"] = df[
"text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare"].apply(lambda text: remove_num(text))
# +
# Strip leading digits from words, e.g. "17th" -> "th"
def remove_num_beginn(text):
return " ".join([word.lstrip(digits) for word in str(text).split()])
df["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num_wo_numbeginn"] = df[
"text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num"].apply(lambda text: remove_num_beginn(text))
# +
# set seed
DetectorFactory.seed = 0
# hold label - language
languages = []
# go through each text
for ii in tqdm(range(0, len(df))):
    # split by space into a list, take the first 50 words, join with space
text = df.iloc[ii]['Text'].split(" ")
    try:
        if len(text) > 50:
            lang = detect(" ".join(text[:50]))
        elif len(text) > 0:
            lang = detect(" ".join(text))
        else:
            lang = "unknown"
    except Exception:
        # fall back to detecting on the unique words
        all_words = set(text)
        try:
            lang = detect(" ".join(all_words))
        except Exception:
            lang = "unknown"
# get the language
languages.append(lang)
# +
df['language'] = languages
# Samples classified as German and English
df_De = df[df['language'] == 'de']
df_En = df[df['language'] == 'en']
# -
# Samples classified as non-German and non-English
df_Other = df[(df['language'] != 'en') & (df['language'] != 'de')]
df_Other
# +
# stemming for english
stemmer_en = SnowballStemmer("english")
def stem_words_en(text):
return " ".join([stemmer_en.stem(word) for word in text.split()])
df_En["text_stemmed"] = df_En["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num_wo_numbeginn"].apply(lambda text: stem_words_en(text))
df_En.head()
# +
# stemming for german
stemmer_de = SnowballStemmer("german")
def stem_words_de(text):
return " ".join([stemmer_de.stem(word) for word in text.split()])
df_De["text_stemmed"] = df_De["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num_wo_numbeginn"].apply(lambda text: stem_words_de(text))
df_De.head()
# +
# lemmatization for english
def lemmatizer(text):
sent = []
doc = nlp(text)
for word in doc:
sent.append(word.lemma_)
return " ".join(sent)
nlp = spacy.load('en_core_web_sm')
df_En["text_lemmatization"] = df_En["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num_wo_numbeginn"].apply(
lambda text: lemmatizer(text))
df_En.head()
# +
# lemmatization for german
nlp = spacy.load('de_core_news_sm')
df_De["text_lemmatization"] = df_De["text_wo_seinfo_wo_enter_lower_wo_punct_wo_stoptext_wo_stopfreq_wo_stopfreqrare_wo_num_wo_numbeginn"].apply(
lambda text: lemmatizer(text))
df_De.head()
# -
# # Save the preprocessed data
df_En.to_csv("preprocessed_data_en.csv",index=False)
df_De.to_csv("preprocessed_data_de.csv",index=False)
| Data Preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tmlkt
# language: python
# name: tmlkt
# ---
# +
import numpy as np
import pandas as pd
# from pyquaternion import Quaternion
from trackml.dataset import load_event, load_dataset
from trackml.score import score_event
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans, DBSCAN
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import connected_components
from tqdm import tqdm
from scipy.misc import derivative
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
# %matplotlib inline
# -
def make_counts(labels):
_,reverse,count = np.unique(labels,return_counts=True,return_inverse=True)
counts = count[reverse]
counts[labels==0]=0
return counts
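# A quick usage example of `make_counts` (repeated here so the cell runs on its
# own): each label is replaced by the size of its cluster, with the noise label
# 0 zeroed out. Note the boolean mask only behaves as intended when `labels` is
# an ndarray, so this copy coerces first.

```python
import numpy as np

def make_counts(labels):
    labels = np.asarray(labels)  # the mask below needs an ndarray, not a list
    _, reverse, count = np.unique(labels, return_counts=True, return_inverse=True)
    counts = count[reverse]
    counts[labels == 0] = 0
    return counts

demo = make_counts([1, 0, 2, 0, 1])
# -> array([2, 0, 1, 0, 2])
```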
# +
# def run_dbscan():
data_dir = '../data/train'
# event_ids = [
# '000001030',##
# '000001025','000001026','000001027','000001028','000001029',
# ]
event_ids = [
'000001030',##
]
sum=0
sum_score=0
for i,event_id in enumerate(event_ids):
particles = pd.read_csv(data_dir + '/event%s-particles.csv'%event_id)
hits = pd.read_csv(data_dir + '/event%s-hits.csv'%event_id)
cells = pd.read_csv(data_dir + '/event%s-cells.csv'%event_id)
truth = pd.read_csv(data_dir + '/event%s-truth.csv'%event_id)
truth = pd.merge(truth, particles, how='left', on='particle_id')
hits = pd.merge(hits, truth, how='left', on='hit_id')
# -
hits.head()
# +
hits1 = hits[(hits.particle_id == 427858663433043968) | (hits.particle_id == 923241222145835008) |
(hits.particle_id == 4523734434054144) | (hits.particle_id == 261225408500858880) |
(hits.particle_id == 743099023757410304)]
# print(hits.head())
figure = plt.figure(figsize=(5,5))
plt.scatter(hits1.x, hits1.y, marker='.', c=hits1['particle_id'])
plt.show()
# -
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(hits1.x, hits1.y, hits1.z, marker='.',c=hits1['particle_id'])
# +
df = hits
x = df.x.values
y = df.y.values
z = df.z.values
dz = 0
z = z + dz
rt = np.sqrt(x**2+y**2)
r = np.sqrt(x**2+y**2+z**2)
a0 = np.arctan2(y,x)
x2 = x/r
y2 = y/r
phi = np.arctan2(y, x)
phi_deg= np.degrees(np.arctan2(y, x))
z1 = z/rt
z2 = z/r
z3 = np.log1p(abs(z/r))*np.sign(z)
theta = np.arctan2(rt, z)
tt = np.tan(theta)
mm = 1
ls = []
# for ii in range(Niter):
mm = mm * (-1)
ii = 0
a1 = a0+mm*(rt+ 0.0000145*rt**2)/1000*(ii/2)/180*np.pi
# -
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(a1, r, z/r, marker='.',c=hits['particle_id'])
ii = 1
a1 = a0+mm*(rt+ 0.0000145*rt**2)/1000*(ii/2)/180*np.pi
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(a1, r, z/r, marker='.',c=hits['particle_id'])
a = [1,0,2,0,3,1,0,2,2,1,4,5]
a.count(1)
x,reverse,count = np.unique(a,return_counts=True,return_inverse=True)
count
x
x[0]
reverse
counts = count[reverse]
counts
counts[a == 0]
a
count = make_counts(a)
count
df = pd.DataFrame()
df['l'] = a
# dfh['N2'] = dfh.groupby('s2')['s2'].transform('count')
df.groupby('l')['l'].transform('count')
a = np.array([1,6,2,6,3,1,6,2,2,1,4,5])
b = np.array([1,6,1,6,3,1,5,2,1,2,4,5])
c = np.array([2,6,2,5,3,1,6,2,1,2,4,5])
d = np.array([1,6,2,6,2,1,6,1,1,2,4,5])
e = np.array([1,5,2,6,3,1,6,2,1,2,4,5])
f = np.array([2,6,3,6,1,1,3,2,2,3,4,5])
ls = []
ls.append(a)
ls.append(b)
ls.append(c)
ls.append(d)
ls.append(e)
ls.append(f)
num_hits=len(a)
labels = np.zeros(num_hits,np.int32)
counts = np.zeros(num_hits,np.int32)
for l in ls:
print(l)
c = make_counts(l)
print(c)
idx = np.where((c-counts>0) & (c<20))[0]
print(idx)
labels[idx] = l[idx] + labels.max()
print(labels)
counts = make_counts(labels)
print(counts)
print('----------------------------------------')
labels
a1 = a.copy()
a1
np.any(a1 > 6)
a2 = a1
a1[a1==6] = 0
a1
np.where(a1 == 1)[0]
for ii in np.where(a1 == 1)[0]:
print(ii)
print('test')
df = pd.DataFrame()
df['track_id'] = [2,4,6,8,9]
df['dummy'] = [2,3,6,8,1]
df_l = df[df.track_id > 4]
list(df_l[df_l.track_id > 4].index)
a = list(df_l[df_l.track_id > 4].index)
a
a.pop(0)
a
df_l
df.loc[a, 'track_id'] = 99999
df
sub = pd.read_csv('../submissions/submission-0030-1.csv')
sub.head()
sub['track_count'] = sub.groupby('track_id')['track_id'].transform('count')
s1 = set(sub.track_id.values)
len(s1)
len(sub)
l = sub.track_id.values
idx = np.where(l < 0)
idx
len(list(idx[0]))
L1 = list(idx[0])
sub1 = sub[sub.track_id < 0]
s2 = set(sub1.track_id.values) # negative track_ids
print(len(s1), len(s2), len(sub), len(sub1))
s3 = s1 - s2 # all positive track_ids
len(s3) # all positive track_ids
# %%time
s4 = set(range(1, np.iinfo(np.int32).max)) - s3
len(s4)
L1 = list(s1) # all track_ids
L2 = list(s2) # negative track_id
# L4 = list(s4) # remaining track ids
L4 = list(s4) # remaining track ids
len(L1) # all track_ids
len(L2) # negative track ids
# +
# import pickle
# with open('../cache/L5_rem_track_ids_2', 'rb') as fp1:
# L5 = pickle.load(fp1)
# -
L5 = L4[:len(L2)]
len(L5)
# +
# import pickle
# with open('../cache/L2_neg_track_ids', 'wb') as fp:
# pickle.dump(L2, fp)
# with open('../cache/L4_rem_track_ids', 'wb') as fp1:
# pickle.dump(L4, fp1)
# +
# with open('../cache/L5_rem_track_ids_2', 'wb') as fp2:
# pickle.dump(L5, fp2)
# +
# np.iinfo(np.int32).max
# -
# _,reverse,count = np.unique(l,return_counts=True,return_inverse=True)
# +
# len(list(count))
# +
# plt.hist(sub.track_id.values, bins=[0,1000000, 10000000, 100000000])
# plt.show()
# +
# ls = []
# +
# for l1 in tqdm(range(1, 2000000)):
# if l1 in L1:
# continue
# ls.append(l1)
# -
len(L2)
len(L5)
# +
# import numpy as np
# condition = [sub['track_id'] == i for i in L2]
# sub['track_id2'] = np.select(condition, L5, df['track_id'])
# -
# %%time
sub['track_id2'] = sub['track_id'].map(dict(zip(L2, L5))).fillna(sub['track_id'])
# +
# sub.loc[sub.track_id < 0, 'track_id'] = L5
# -
sub1 = sub.drop(['track_id', 'track_count'], axis=1)
sub1['track_id2'] = sub1['track_id2'].astype(np.int32)
sub1.to_csv('../submissions/submission-0030-2.csv',index=False)
a = [1,2,3,4,5,6]
# +
def f(x):
return 3*x**2*180*np.pi
print([derivative(f, x) for x in a])
# -
| notebooks/3_1_Quaternion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problems
# ---
# Implement the code for functions that return rotation matrices about $x$ and $y$.
# + deletable=false nbgrader={"checksum": "3ca624e0753f6a9b6cdf077800240a90", "grade": false, "grade_id": "cell-7ee04b3095e7ec0a", "locked": false, "schema_version": 1, "solution": true}
def rotacion_x(θ):
# YOUR CODE HERE
raise NotImplementedError()
def rotacion_y(θ):
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "a6640fe445358216479b071542f6f4b9", "grade": true, "grade_id": "cell-b149256339fe92c4", "locked": true, "points": 2, "schema_version": 1, "solution": false}
from numpy.testing import assert_allclose
from numpy import eye, pi, matrix
assert_allclose(rotacion_x(0), eye(4))
assert_allclose(rotacion_x(pi), matrix([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
assert_allclose(rotacion_y(0), eye(4))
assert_allclose(rotacion_y(pi), matrix([[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
# -
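# A reference sketch consistent with the asserts above (homogeneous 4×4
# matrices; `rot_x`/`rot_y` are our own names, not the graded solutions):

```python
from numpy import array, cos, sin

def rot_x(theta):
    # homogeneous rotation about the x axis
    return array([[1, 0,           0,          0],
                  [0, cos(theta), -sin(theta), 0],
                  [0, sin(theta),  cos(theta), 0],
                  [0, 0,           0,          1]])

def rot_y(theta):
    # homogeneous rotation about the y axis
    return array([[ cos(theta), 0, sin(theta), 0],
                  [ 0,          1, 0,          0],
                  [-sin(theta), 0, cos(theta), 0],
                  [ 0,          0, 0,          1]])
```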
# ---
# Implement the code for functions that return translation matrices along $x$, $y$ and $z$.
# + deletable=false nbgrader={"checksum": "3169b9b355bf5871e1de6336711eb631", "grade": false, "grade_id": "cell-90b53718156230fc", "locked": false, "schema_version": 1, "solution": true}
def traslacion_x(x):
# YOUR CODE HERE
raise NotImplementedError()
def traslacion_y(y):
# YOUR CODE HERE
raise NotImplementedError()
def traslacion_z(z):
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "5b90439a8a9e5bf0c0ce67036f4863e6", "grade": true, "grade_id": "cell-2e12643fb3f77fcf", "locked": true, "points": 2, "schema_version": 1, "solution": false}
from numpy.testing import assert_allclose
from numpy import eye, pi, matrix, array
assert_allclose(traslacion_x(0), eye(4), rtol=1e-05, atol=1e-05)
assert_allclose(traslacion_x(1), matrix([[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
assert_allclose(traslacion_y(0), eye(4), rtol=1e-05, atol=1e-05)
assert_allclose(traslacion_y(1), matrix([[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
assert_allclose(traslacion_z(0), eye(4), rtol=1e-05, atol=1e-05)
assert_allclose(traslacion_z(1), matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
# -
# ---
# Implement a kinematic chain that describes the transformation produced by a rotation about $z$ of $30^o$, a rotation about $y$ of $50^o$ and a translation along $z$ of $1m$, and store it in the variable ```T```.
# + deletable=false nbgrader={"checksum": "b77b453e860999884321530201d63465", "grade": false, "grade_id": "cell-3ca3f60003bde0c7", "locked": false, "schema_version": 1, "solution": true}
from numpy import pi
τ = 2*pi
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "e2bf0d8a9d1136ea7bdd701b5d77c4b6", "grade": true, "grade_id": "cell-ff2321bf85e91a73", "locked": true, "points": 1, "schema_version": 1, "solution": false}
from numpy.testing import assert_allclose
# -
# ---
# We know that a double pendulum can be described by a kinematic chain; implement a function that takes as arguments the rotation and translation parameters of each of its axes, and returns the position of the end effector.
# + deletable=false nbgrader={"checksum": "f474a3771537a0450a4a0c726930e197", "grade": false, "grade_id": "cell-ab313b08a490f26b", "locked": false, "schema_version": 1, "solution": true}
def pendulo_doble(q1, q2, l1, l2):
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "e313f98f59a248fbdd116ed4c6e877d1", "grade": true, "grade_id": "cell-4c34e757c54ce22a", "locked": true, "points": 3, "schema_version": 1, "solution": false}
from numpy.testing import assert_allclose
from numpy import eye, pi, matrix, array
assert_allclose(pendulo_doble(0,0,1,1), matrix([[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
assert_allclose(pendulo_doble(pi/2,pi/2,1,1), matrix([[-1, 0, 0, -1], [0, -1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]), rtol=1e-05, atol=1e-05)
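# A sketch of one chain satisfying the asserts above: rotate joint 1 about $z$,
# translate along link 1 in $x$, rotate joint 2, translate along link 2
# (`rot_z`/`trans_x`/`double_pendulum` are our own names, not the graded solution):

```python
import numpy as np

def rot_z(q):
    # homogeneous rotation about the z axis
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans_x(d):
    # homogeneous translation along the x axis
    T = np.eye(4)
    T[0, 3] = d
    return T

def double_pendulum(q1, q2, l1, l2):
    # end-effector pose of the two-link chain
    return rot_z(q1) @ trans_x(l1) @ rot_z(q2) @ trans_x(l2)
```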
| Practicas/practica2/Problemas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import requests
from dotenv import load_dotenv
import pandas as pd
import pandas_datareader as web
import datetime as dt
import numpy as np
from MCForecastTools import MCSimulation
import alpaca_trade_api as tradeapi
# +
historical_start_date = dt.datetime(2016,8,1)
historical_end_date = dt.datetime(2021,7,31)
ADA_df = web.DataReader('ADA-USD','yahoo',historical_start_date,historical_end_date)
BCH_df = web.DataReader('BCH-USD','yahoo',historical_start_date,historical_end_date)
BNB_df = web.DataReader('BNB-USD','yahoo',historical_start_date,historical_end_date)
BTC_df = web.DataReader('BTC-USD','yahoo',historical_start_date,historical_end_date)
DOGE_df = web.DataReader('DOGE-USD','yahoo',historical_start_date,historical_end_date)
EOS_df = web.DataReader('EOS-USD','yahoo',historical_start_date,historical_end_date)
ETC_df = web.DataReader('ETC-USD','yahoo',historical_start_date,historical_end_date)
ETH_df = web.DataReader('ETH-USD','yahoo',historical_start_date,historical_end_date)
FIL_df = web.DataReader('FIL-USD','yahoo',historical_start_date,historical_end_date)
LINK_df = web.DataReader('LINK-USD','yahoo',historical_start_date,historical_end_date)
LTC_df = web.DataReader('LTC-USD','yahoo',historical_start_date,historical_end_date)
MKR_df = web.DataReader('MKR-USD','yahoo',historical_start_date,historical_end_date)
THETA_df = web.DataReader('THETA-USD','yahoo',historical_start_date,historical_end_date)
TRX_df = web.DataReader('TRX-USD','yahoo',historical_start_date,historical_end_date)
VET_df = web.DataReader('VET-USD','yahoo',historical_start_date,historical_end_date)
XLM_df = web.DataReader('XLM-USD','yahoo',historical_start_date,historical_end_date)
XMR_df = web.DataReader('XMR-USD','yahoo',historical_start_date,historical_end_date)
XRP_df = web.DataReader('XRP-USD','yahoo',historical_start_date,historical_end_date)
SPX_df = web.DataReader('sp500','fred',historical_start_date,historical_end_date)
ADA_df = ADA_df.loc[~ADA_df.index.duplicated(keep='first')]
BCH_df = BCH_df.loc[~BCH_df.index.duplicated(keep='first')]
BNB_df = BNB_df.loc[~BNB_df.index.duplicated(keep='first')]
BTC_df = BTC_df.loc[~BTC_df.index.duplicated(keep='first')]
DOGE_df = DOGE_df.loc[~DOGE_df.index.duplicated(keep='first')]
EOS_df = EOS_df.loc[~EOS_df.index.duplicated(keep='first')]
ETC_df = ETC_df.loc[~ETC_df.index.duplicated(keep='first')]
ETH_df = ETH_df.loc[~ETH_df.index.duplicated(keep='first')]
FIL_df = FIL_df.loc[~FIL_df.index.duplicated(keep='first')]
LINK_df = LINK_df.loc[~LINK_df.index.duplicated(keep='first')]
LTC_df = LTC_df.loc[~LTC_df.index.duplicated(keep='first')]
MKR_df = MKR_df.loc[~MKR_df.index.duplicated(keep='first')]
THETA_df = THETA_df.loc[~THETA_df.index.duplicated(keep='first')]
TRX_df = TRX_df.loc[~TRX_df.index.duplicated(keep='first')]
VET_df = VET_df.loc[~VET_df.index.duplicated(keep='first')]
XLM_df = XLM_df.loc[~XLM_df.index.duplicated(keep='first')]
XMR_df = XMR_df.loc[~XMR_df.index.duplicated(keep='first')]
XRP_df = XRP_df.loc[~XRP_df.index.duplicated(keep='first')]
SPX_df = SPX_df.loc[~SPX_df.index.duplicated(keep='first')]
# -
ADA_close_df = ADA_df[['Close']].rename({'Close':'ADA_Close'}, axis=1)
BCH_close_df = BCH_df[['Close']].rename({'Close':'BCH_Close'}, axis=1)
BNB_close_df = BNB_df[['Close']].rename({'Close':'BNB_Close'}, axis=1)
BTC_close_df = BTC_df[['Close']].rename({'Close':'BTC_Close'}, axis=1)
DOGE_close_df = DOGE_df[['Close']].rename({'Close':'DOGE_Close'}, axis=1)
EOS_close_df = EOS_df[['Close']].rename({'Close':'EOS_Close'}, axis=1)
ETC_close_df = ETC_df[['Close']].rename({'Close':'ETC_Close'}, axis=1)
ETH_close_df = ETH_df[['Close']].rename({'Close':'ETH_Close'}, axis=1)
FIL_close_df = FIL_df[['Close']].rename({'Close':'FIL_Close'}, axis=1)
LINK_close_df = LINK_df[['Close']].rename({'Close':'LINK_Close'}, axis=1)
LTC_close_df = LTC_df[['Close']].rename({'Close':'LTC_Close'}, axis=1)
MKR_close_df = MKR_df[['Close']].rename({'Close':'MKR_Close'}, axis=1)
THETA_close_df = THETA_df[['Close']].rename({'Close':'THETA_Close'}, axis=1)
TRX_close_df = TRX_df[['Close']].rename({'Close':'TRX_Close'}, axis=1)
VET_close_df = VET_df[['Close']].rename({'Close':'VET_Close'}, axis=1)
XLM_close_df = XLM_df[['Close']].rename({'Close':'XLM_Close'}, axis=1)
XMR_close_df = XMR_df[['Close']].rename({'Close':'XMR_Close'}, axis=1)
XRP_close_df = XRP_df[['Close']].rename({'Close':'XRP_Close'}, axis=1)
SPX_close_df = SPX_df[['sp500']].rename({'sp500':'SPX_Close'}, axis=1)
select_coin_close_df = pd.concat([ADA_close_df,BCH_close_df,BNB_close_df,BTC_close_df,DOGE_close_df,EOS_close_df,
ETC_close_df,ETH_close_df,FIL_close_df,LINK_close_df,LTC_close_df,MKR_close_df,
THETA_close_df,TRX_close_df,VET_close_df,XLM_close_df,XMR_close_df,XRP_close_df],axis=1).dropna()
display(select_coin_close_df.count())
display(select_coin_close_df)
coin_daily_returns = select_coin_close_df.pct_change().dropna()
coin_daily_returns
coin_daily_mean_return = coin_daily_returns.mean()
coin_daily_mean_return
coin_annual_mean_return = coin_daily_mean_return*select_coin_close_df.count()
coin_annual_mean_return
coin_annual_std = coin_daily_returns.std()*np.sqrt(select_coin_close_df.count())
coin_annual_std
coin_sharpe_ratio = coin_annual_mean_return/coin_annual_std
coin_sharpe_ratio
select_coin_close_df
#coin_mcsim_equal = MCSimulation(portfolio_data = select_coin_close_df,
# weights = [.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555,.0555],
# num_simulation=100,
# num_trading_days= 756)
spx_coin_close_df = pd.concat([ADA_close_df,BCH_close_df,BNB_close_df,BTC_close_df,DOGE_close_df,EOS_close_df,ETC_close_df,
ETH_close_df,FIL_close_df,LINK_close_df,LTC_close_df,MKR_close_df,THETA_close_df,TRX_close_df,
VET_close_df,XLM_close_df,XMR_close_df,XRP_close_df,SPX_close_df],axis=1).dropna()
display(spx_coin_close_df.count())
display(spx_coin_close_df)
spx_coin_daily_returns = spx_coin_close_df.pct_change().dropna()
btc_daily_returns = spx_coin_close_df["BTC_Close"].pct_change().dropna()
btc_daily_returns
spx_daily_returns = spx_coin_close_df["SPX_Close"].pct_change().dropna()
spx_daily_returns
spx_variance = spx_daily_returns.var()
spx_variance
btc_variance = btc_daily_returns.var()
btc_variance
btc_covariance = btc_daily_returns.cov(spx_daily_returns)
btc_covariance
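# The variance and covariance above are exactly the ingredients of beta:
# beta = cov(btc, spx) / var(spx). A sketch on synthetic daily returns (the
# series and the true beta of 1.5 here are illustrative, not market data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
market = pd.Series(rng.normal(0, 0.01, 500))                  # stand-in for SPX returns
asset = 1.5 * market + pd.Series(rng.normal(0, 0.005, 500))   # true beta of 1.5 plus noise

beta = asset.cov(market) / market.var()
# beta comes out close to 1.5
```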
BTC_close_df.describe()
SPX_close_df.describe()
| .ipynb_checkpoints/Project1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import pandas as pd
import re
import random
import numpy as np
import time
from dateutil.parser import parse
from dateutil.parser import isoparse
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth',1000)
# +
#Get list of all collections
url = 'https://www.loc.gov/collections?fo=json&at=results,pagination&c=100'
def paginate(url, collections=None):
    if collections is None:  # avoid the mutable-default-argument pitfall
        collections = []
print('\n'+url)
print('Starting api request beginning at: '+ str(time.ctime(time.time())))
response = requests.get(url)
print('Done. Processing api request beginning at: '+ str(time.ctime(time.time())))
    if response.status_code == 429:
        # rate limited: surface the response headers and stop paginating
        return response.headers
else:
results = response.json()['results']
for result in results:
result_dict = {}
result_dict['on'] = result['items']+'?fo=json&at=results'
result_dict['title'] = result['title']
collections.append(result_dict)
pagination = response.json()['pagination']
if pagination['next']:
print('There is another page at: '+str(pagination['next']))
next_url = pagination['next']
time.sleep(10)
paginate(next_url,collections)
return collections
collections = paginate(url)
# +
#Create df of collections
partof_pd = pd.DataFrame(collections)
#Get list of collection searches and titles
partof_urls = partof_pd['on'].to_list()
partof_titles = partof_pd['title'].to_list()
partof_urls = [url.replace(
'?fo=json',
'?fa=original-format!:web+page|original-format!:event|original-format!:collection&fo=json'
) for url in partof_urls]
# -
partof_urls
# +
#Get a sample record from each collection
#Sample record should be result #3 (4th result)
sample_items=[]
exclude = ['event','web page','collection','catalog']
for url in partof_urls:
time.sleep(10)
record_num=3
finished = False
print('Checking collection: ' + url)
page1 = requests.get(url, params = {'c':100})
url_fixed = url.replace(
'?fa=original-format!:web+page|original-format!:event|original-format!:collection&fo=json',
'?fo=json'
)
try:
#Search until you find a record that's not an excluded format.
        #If one is not found in records #3 - 100, then a sample isn't
# pulled from this collection.
while (record_num < 100) & (finished == False):
sample_item = None
test = page1.json()['results'][record_num]
print('Reviewing item: '+ str(test['title']))
print('Original format(s): '+ str(test['original_format']))
#If the record is an excluded format, try again
if any(original_format in exclude for original_format in test['original_format']):
finished=False
record_num += 1
print('Not using this record, wrong format.')
else:
sample_item = page1.json()['results'][record_num]
sample_item['partof_url'] = url_fixed
finished=True
print('Using this record! Moving on to next collection.\n')
except:
sample_item = None
print('Something went wrong. Skipping this collection.\n')
sample_items.append(sample_item)
#return partof_urls
partof_urls = [url.replace(
'|original-format!:web+page|original-format!:event|original-format!:collection&fo=json',
'&fo=json'
) for url in partof_urls]
# -
backup = sample_items
#return partof_urls
'''partof_urls = [url.replace(
'?fa=original-format!:web+page|original-format!:event|original-format!:collection&fo=json',
'?fo=json'
) for url in partof_urls]'''
def flatten_json(y):
out = {}
def flatten(x, name=''):
if type(x) is dict:
for a in x:
flatten(x[a], name + a + '.')
elif type(x) is list:
i = 0
for a in x:
flatten(a, name + str(i) + '.')
i += 1
else:
out[name[:-1]] = x
flatten(y)
return out
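# A quick check of `flatten_json` (repeated so this cell runs standalone):
# nested dicts get dotted keys and lists get index suffixes, which is why the
# sample-item columns later end in numbers.

```python
def flatten_json(y):  # same helper as above
    out = {}
    def flatten(x, name=""):
        if type(x) is dict:
            for a in x:
                flatten(x[a], name + a + ".")
        elif type(x) is list:
            for i, a in enumerate(x):
                flatten(a, name + str(i) + ".")
        else:
            out[name[:-1]] = x
    flatten(y)
    return out

demo = flatten_json({"title": "t", "subjects": ["a", "b"], "item": {"id": 7}})
# -> {"title": "t", "subjects.0": "a", "subjects.1": "b", "item.id": 7}
```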
flattened_sample_items = []
for sample in sample_items:
flattened_sample = flatten_json(sample)
flattened_sample_items.append(flattened_sample)
flattened_sample_items_pd = pd.DataFrame(flattened_sample_items)
partof_urls
partof_pd
# +
#Drop blank rows created by collections with no sample items
flattened_sample_items_pd.dropna(axis=0, how='all', inplace=True)
# +
cols = flattened_sample_items_pd.columns.values.tolist()
pattern = re.compile(r'^(.+)\.\d+$')
bases_checked = []
all_col_metadata = []
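The loop below leans on the `^(.+)\.\d+$` pattern to recover a column's base name; a minimal sketch of that grouping step (the column names here are invented for illustration):

```python
import re

pattern = re.compile(r'^(.+)\.\d+$')

cols = ['subject.0', 'subject.1', 'title', 'item.formats.0']
# The greedy (.+) keeps everything up to the final numeric suffix,
# so 'item.formats.0' groups under 'item.formats' and 'title' is skipped.
bases = {pattern.match(c)[1] for c in cols if pattern.match(c)}
print(bases)
```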
#For each column
for col in cols:
col_metadata = {}
types = []
#If the column ends in a number (is a list column, split up)
if pattern.match(col):
base = re.match(r'^(.+)\.\d+$', col)[1]
#If this split-up column hasn't been reviewed yet
if base not in bases_checked:
bases_checked.append(base) #mark as checked
matches = []
sample_values=[]
#Find all the columns in this group
for other_col in cols:
if re.match(base+r'\.\d+', other_col):
matches.append(other_col)
#Make a list of the value types found in this group
types_per_instance = [x.__name__ for x in flattened_sample_items_pd[other_col].dropna().apply(type).unique()]
types.extend(types_per_instance)
#For each collection
for collection in partof_urls:
sample_values_per_collection = {}
sample_values_at_collection = []
#Get collection title based on collection url
collection_title = partof_pd[partof_pd['on']==collection]['title'].to_list()[0]
sample_values_per_collection['collection'] = collection_title
#Get sample values from each column in the group
for other_col in cols:
if re.match(base+r'\.\d+', other_col):
try:
sample_value = flattened_sample_items_pd[flattened_sample_items_pd['partof_url'] == collection][other_col].to_list()[0]
sample_values_at_collection.append(sample_value)
except:
pass
sample_values_per_collection['samples'] = sample_values_at_collection
sample_values.append(sample_values_per_collection)
drop_nans = [x for x in sample_values_at_collection if str(x) != 'nan']
if len(drop_nans)>0:
col_metadata[collection_title] = random.choice(drop_nans)
else:
col_metadata[collection_title] = None
#col_metadata['sample_values'] = sample_values
col_metadata['types'] = list(set(types))
col_metadata['field'] = base.replace('.',' > ')
col_metadata['list'] = True
col_metadata['max_values_in_sample'] = len(matches)
col_metadata['cols'] = matches
#Calculate how often the field is used vs. blank in collections
empty = flattened_sample_items_pd[base+'.0'].isna().sum()
col_metadata['used_in'] = len(flattened_sample_items_pd) - empty
col_metadata['used_in_percent'] = (len(flattened_sample_items_pd) - empty)*100/len(flattened_sample_items_pd)
all_col_metadata.append(col_metadata)
else:
col_metadata['field'] = col.replace('.',' > ')
col_metadata['list'] = False
col_metadata['max_values_in_sample'] = 1
col_metadata['cols'] = col
col_metadata['types'] = [x.__name__ for x in flattened_sample_items_pd[col].dropna().apply(type).unique()]
empty = flattened_sample_items_pd[col].isna().sum()
col_metadata['used_in'] = len(flattened_sample_items_pd) - empty
col_metadata['used_in_percent'] = (len(flattened_sample_items_pd) - empty)*100/len(flattened_sample_items_pd)
for collection in partof_urls:
try: #collections without any sample items won't work
collection_title = partof_pd[partof_pd['on']==collection]['title'].to_list()[0]
sample_value = flattened_sample_items_pd[flattened_sample_items_pd['partof_url'] == collection][col].to_list()[0]
col_metadata[collection_title] = sample_value
except:
pass
all_col_metadata.append(col_metadata)
all_fields = pd.DataFrame(all_col_metadata)
# -
#Reorder the columns, to put the basic metadata in front and the collection sample values after
move_to_front = ['field','used_in','used_in_percent','types','list','max_values_in_sample','cols']
popoff = all_fields[move_to_front].copy()
remainder = all_fields.drop(move_to_front, axis=1)
move_to_back = remainder.columns.to_list()
new_order = move_to_front + move_to_back
all_fields = all_fields[new_order].copy()
#For repeat fields that have lists mid-way through the field hierarchy, drop all instances after the first one.
# Drop any row whose field name contains a nonzero digit, i.e., a list index other than 0 (indexes like 1, 3, 27, and 10 all contain a digit 1-9)
all_fields = all_fields[all_fields['field'].str.contains(r'[1-9]')==False].copy()
#Skip the "partof_url" field; it was only needed for processing
all_fields = all_fields[all_fields['field']!='partof_url'].copy()
#Drop the list of all the matching column groupings
all_fields.drop('cols', axis=1, inplace=True)
all_fields
all_fields.columns
#DataTables errors if a column has a period or apostrophe.
# Replace periods with spaces. Remove apostrophes
all_fields.columns = all_fields.columns.str.replace(".", " ", regex=False)
all_fields.columns = all_fields.columns.str.replace("'", "", regex=False)
# +
#Drop any rows that have a blank field name column
all_fields = all_fields[all_fields['field'].str.contains(r'^$')==False].copy()
#Drop any collection columns that aren't loc.gov collections
not_locgov = partof_pd[partof_pd['on'].str.contains('www.loc.gov')==False]['title'].to_list()
not_locgov = [x.replace('.', ' ') for x in not_locgov]
all_fields.drop(not_locgov, axis=1, inplace=True)
#Sort rows (datatable also sorts for you, so could skip this)
all_fields = all_fields.sort_values(by=['used_in','field','max_values_in_sample'], ascending=False)
# +
#Define patterns to check for
url_pattern = re.compile(r'^http')
integer = re.compile(r'^\d+$')
decimal = re.compile(r'^\d+\.\d+$')
#checking for timestamp is a little more complex:
def is_date(string):
try:
isoparse(string)
return True
except ValueError:
return False
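`is_date` assumes dateutil's `isoparse` was imported earlier in the notebook. A stdlib stand-in with the same try/except shape (`datetime.fromisoformat` accepts a narrower range of strings than `isoparse`, but behaves the same for common cases):

```python
from datetime import datetime

def is_date_stdlib(string):
    # Same pattern as is_date above, but with the stdlib parser.
    try:
        datetime.fromisoformat(string)
        return True
    except ValueError:
        return False

print(is_date_stdlib('2019-05-31T10:07:25'), is_date_stdlib('not a date'))
```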
#For all rows, get a list of value types from all collection samples
types = []
i = 0
for index, row in all_fields.iterrows():
format_types = []
#If it's already recognized as boolean, move on to next row.
if 'bool' in row['types']:
format_types.append('bool')
else:
for column in row[6:]:
if pd.isna(column):
continue
elif bool(url_pattern.match(str(column))):
format_types.append('URL')
elif is_date(str(column)):
format_types.append('iso timestamp')
elif bool(integer.match(str(column))):
format_types.append('int')
elif bool(decimal.match(str(column))):
format_types.append('decimal number')
else:
format_types.append('str')
format_types = list(set(format_types))
types.append(format_types)
print('These should be equal before replacing the types column:')
print(len(types))
print(len(all_fields))
# -
#Replace the types column with more detailed types
all_fields['types'] = types
len(all_fields)
all_fields
#Export to JSON file
all_fields.to_json('loc_fields.json', orient="records")
#For use in html page
for field in all_fields.columns.to_list():
print('{ \'data\': \''+field+'\'},')
#For use in html page
for field in all_fields.columns.to_list():
print('<th>'+field+'</th>')
len(all_fields.columns)
| locgov-collection-query-record-metadata-fields.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# +
e = np.array([55,89,76,65,48,70])
m = np.array([60,85,60,68,55,60])
c = np.array([65,90,82,72,66,77])
# -
np.greater(e,m)  # three of them get a higher score in English than in math
# +
e = np.array([55,89,76,65,48,70])
m = np.array([60,85,60,68,55,60])
c = np.array([65,90,82,72,66,77])
# -
np.greater(m,c)  # all of them get a lower score in math than in Chinese
np.greater(e,c)  # all of them get a lower score in English than in Chinese
# ans: all of them get the highest score in Chinese
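Since a boolean array sums its `True` entries as 1s, the counts behind the comments above can be computed directly:

```python
import numpy as np

e = np.array([55, 89, 76, 65, 48, 70])  # English
m = np.array([60, 85, 60, 68, 55, 60])  # math
c = np.array([65, 90, 82, 72, 66, 77])  # Chinese

higher_in_english = int(np.greater(e, m).sum())  # students with English > math
everyone_best_at_chinese = bool(np.all(c > e) and np.all(c > m))
print(higher_in_english, everyone_best_at_chinese)
```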
| hausaufgabe04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import sys
import pathlib
solitaire_path = pathlib.Path('../')
if str(solitaire_path) not in sys.path:
    sys.path.append(str(solitaire_path))
import os, random, math, json, functools, itertools, base64, importlib, gzip
from tqdm import tqdm
from collections import defaultdict
from datetime import datetime, timedelta, date
from google.protobuf.json_format import MessageToDict
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import SGD
import matplotlib.pyplot as plt
from solitaire_core.game import *
from solitaire_core import game, text_renderer
from solitaire_ai import vectorization
from auto_play import StatTracker, SingleGamePlayLoop
from auto_play import play_move as play_move_rand
# -
# ---
# # Baseline
# +
# from auto_play import StatTracker
# from auto_play import play_move as play_move_rand
# TO_EFFECTIVE_WIN = True
# stats = StatTracker(update_interval=5)
# g = deal_game()
# won_game_records = []
# while stats.games_played < 1000:
# if TO_EFFECTIVE_WIN and g.won_effectively or g.won:
# stats.mark_won(g.game_record)
# won_game_records.append(g.game_record)
# g = deal_game()
# continue
# if not play_move_rand(g):
# stats.mark_lost(g.game_record)
# g = deal_game()
# continue
# assert is_valid_game_state(g.gs), g.get_game_state_id()
# stats.mark_move()
# stats.print_stats()
# print("Done")
# -
# ---
# # Training
# +
# Optionally load from backup file instead of playing to a baseline
won_game_records = []
with gzip.open("../2019-05-31-10-07-25-win-game-records.b64.gz") as f:
line = f.readline()
while line:
gr = GameRecord()
gr.MergeFromString(base64.b64decode(line))
won_game_records.append(gr)
line = f.readline()
print(len(won_game_records))
# + pycharm={"is_executing": false}
game_state_vectors = []
action_vectors = []
for gr in tqdm(won_game_records):
gs = VisibleGameState()
hgs = HiddenGameState()
gs.MergeFrom(gr.initial_state)
hgs.MergeFrom(gr.initial_hidden_state)
# Replay all the actions:
for a in gr.actions:
game_state_vectors.append(vectorization.game_state_to_array(gs))
action_vectors.append(vectorization.action_to_onehot(a))
res = game._try_apply_action(gs, hgs, a)
assert is_valid_game_state(gs)
if not res:
raise Exception()
assert len(game_state_vectors) == len(action_vectors)
assert all(len(gs) == len(game_state_vectors[0]) for gs in game_state_vectors)
assert all(len(a) == len(action_vectors[0]) for a in action_vectors)
print(len(game_state_vectors))
print("Done")
# +
# This and stuff below kind of taken from https://www.pyimagesearch.com/2018/09/10/keras-tutorial-how-to-get-started-with-keras-deep-learning-and-python/
data = np.array(game_state_vectors, dtype="float")
labels = np.array(action_vectors)
(train_x, test_x, train_y, test_y) = train_test_split(data, labels, test_size=0.25)
# +
model = Sequential()
model.add(Dense(512, input_shape=(len(game_state_vectors[0]),), activation="relu"))
# model.add(Dropout(0.25))
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.1))
# model.add(Dense(128, activation="relu"))
# model.add(Dropout(0.1))
model.add(Dense(len(action_vectors[0]), activation="softmax"))
# initialize our initial learning rate and # of epochs to train for
INIT_LR = 0.01
EPOCHS = 50
print("Training network...")
opt = SGD(lr=INIT_LR)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
# train the neural network
H = model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=EPOCHS, batch_size=32)
print("Done")
# +
model = Sequential()
model.add(Dense(128, input_shape=(len(game_state_vectors[0]),), activation="relu"))
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(len(action_vectors[0]), activation="softmax"))
# initialize our initial learning rate and # of epochs to train for
INIT_LR = 0.01
EPOCHS = 50
print("Training network...")
opt = SGD(lr=INIT_LR)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
# train the neural network
H = model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=EPOCHS, batch_size=32)
print("Done")
# +
# evaluate the network
# print("[INFO] evaluating network...")
# predictions = model.predict(test_x, batch_size=32)
# print(classification_report(test_y.argmax(axis=1), predictions.argmax(axis=1)))
# plot the training loss and accuracy
N = np.arange(0, EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.plot(N, H.history["acc"], label="train_acc")
plt.plot(N, H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy (Simple NN)")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.show()
# -
# ---
# # Now play
#
# +
stats = StatTracker(update_interval=5)
g = deal_game()
TO_EFFECTIVE_WIN = True
won_game_records_w_model = []
while stats.games_played < 200:
if TO_EFFECTIVE_WIN and g.won_effectively or g.won:
stats.mark_won()
won_game_records_w_model.append(g.game_record)
g = deal_game()
continue
valid_actions = g.get_valid_actions()
if not valid_actions:
stats.mark_lost()
g = deal_game()
continue
prediction = model.predict(np.array([vectorization.game_state_to_array(g.gs)]))
action = vectorization.onehot_to_action(prediction[0], valid_actions)
g.apply_action(action)
assert is_valid_game_state(g.gs), g.get_game_state_id()
stats.mark_move()
stats.print_stats()
print("Done")
# -
| notebooks/2019-06-Scratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
#Create sunshine dataFrame and converting -9999.9 values into NaN
sun = pd.read_csv('sunshine.csv', index_col=False, na_values = -9999.9)
#We use only the WMO Station Number and the Annual mean column
sun = sun.loc[:,['WMO Station Number','Annual NCDC Computed Value']]
sun = sun.rename(columns={'WMO Station Number': 'WMO', 'Annual NCDC Computed Value': 'Annual Mean'})
sun = sun.dropna(axis = 0)
sun['WMO'] = sun['WMO'].apply(int)
#We read from mlid the WMO stations and its coordinates for plotting
mlid = pd.read_csv('mlid-20180925_public-2.csv', index_col=False, usecols=['wmo', 'lat_prp', 'lon_prp'])
mlid = mlid.rename(columns={'wmo': 'WMO', 'lat_prp':'LAT', 'lon_prp': 'LON'})
mlid = mlid.dropna(axis = 0)
mlid['WMO'] = mlid['WMO'].apply(int)
#We join the two Dataframes using the WMO column
sun = sun.set_index('WMO').join(mlid.set_index('WMO'))
sun['LAT'] = sun['LAT'].replace('', np.nan)
sun['LON'] = sun['LON'].replace('', np.nan)
sun = sun.dropna(axis = 0)
lat = sun['LAT'].values
lon = sun['LON'].values
annualMean = sun['Annual Mean'].values
#fig = plt.figure(figsize=(10, 10))
#Create BaseMap with Miller Projection
#map = Basemap(projection='mill', llcrnrlat=-90, urcrnrlat=90,llcrnrlon=-180, urcrnrlon=180)
# Plot coastlines, draw label meridians and parallels.
#map.drawcoastlines()
#map.drawparallels(np.arange(-90,90,30),labels=[1,0,0,0])
#map.drawmeridians(np.arange(map.lonmin,map.lonmax+30,60),labels=[0,0,0,1])
# Fill continents with eggshell ffffd4 color
#map.drawmapboundary(fill_color='#ffffd4')
# Create the scatter plot using inferno colormap
#map.scatter(lon, lat, latlon=True, c=annualMean, cmap='inferno', alpha=0.75)
# Create colorbar and legend
#plt.colorbar(label=r'${\rm sunshine}$', fraction=0.033, pad=0.04)
#plt.title('Hours of Sunshine in Weather Stations of WMO')
#plt.show()
# -
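The index-join used above (set the shared WMO key as the index on both frames, then `join`) can be sketched on a couple of made-up stations:

```python
import pandas as pd

# Toy stand-ins for the sunshine and station-location tables.
sun_demo = pd.DataFrame({'WMO': [1001, 1002], 'Annual Mean': [2100.0, 1800.0]})
mlid_demo = pd.DataFrame({'WMO': [1001, 1002], 'LAT': [60.1, 48.9], 'LON': [24.9, 2.3]})

# Aligning on the WMO index attaches coordinates to each station's mean.
joined = sun_demo.set_index('WMO').join(mlid_demo.set_index('WMO'))
print(joined)
```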
| SunshineHours.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0-rc2
# language: julia
# name: julia-1.7
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Data Types of Arrays
#
# So far we have worked with arrays of integers and floats. Arrays of complex and rational types can be defined easily, and the array functions work as expected:
# + slideshow={"slide_type": "fragment"}
x = [ complex(cos(θ),sin(θ)) for θ in 2π*(0:10)/10 ]
# + slideshow={"slide_type": "fragment"}
sum(x.^2)
# + slideshow={"slide_type": "subslide"}
y = [ a//(a+1) for a = 1:10 ]
# + slideshow={"slide_type": "fragment"}
prod(y.^2)
# -
# ## Conversion
# If you wish to change the data type of your array, use Julia's `convert` function. For example, suppose you wish that elements of `y` were floats instead of rationals.
convert(Array{Float64}, y)
| textbook/_build/jupyter_execute/content/Data_Types/Data_Types_of_Arrays.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SV-DKL with Pyro
# +
import math
import torch
import gpytorch
import pyro
from matplotlib import pyplot as plt
# Make plots inline
# %matplotlib inline
# +
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('3droad.mat'):
print('Downloading \'3droad\' UCI dataset...')
urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '3droad.mat')
data = torch.Tensor(loadmat('3droad.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
# -
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)
# +
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('bn1', torch.nn.BatchNorm1d(1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('bn2', torch.nn.BatchNorm1d(500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('bn3', torch.nn.BatchNorm1d(50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
# num_features is the number of final features extracted by the neural network, in this case 2.
num_features = 2
# +
from gpytorch.models import PyroVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution, GridInterpolationVariationalStrategy
class PyroSVDKLGridInterpModel(PyroVariationalGP):
def __init__(self, likelihood, grid_size=32, grid_bounds=[(-1, 1), (-1, 1)], name_prefix="svdkl_grid_example"):
variational_distribution = CholeskyVariationalDistribution(num_inducing_points=(grid_size ** num_features))
variational_strategy = GridInterpolationVariationalStrategy(self,
grid_size=grid_size,
grid_bounds=grid_bounds,
variational_distribution=variational_distribution)
super(PyroSVDKLGridInterpModel, self).__init__(variational_strategy,
likelihood,
num_data=train_y.numel(),
name_prefix=name_prefix)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(
lengthscale_prior=gpytorch.priors.SmoothedBoxPrior(0.001, 1., sigma=0.1, log_transform=True)
))
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# +
class DKLModel(gpytorch.Module):
def __init__(self, likelihood, feature_extractor, num_features, grid_bounds=(-1., 1.)):
super(DKLModel, self).__init__()
self.feature_extractor = feature_extractor
self.gp_layer = PyroSVDKLGridInterpModel(likelihood)
self.grid_bounds = grid_bounds
self.num_features = num_features
def features(self, x):
features = self.feature_extractor(x)
features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
return features
def forward(self, x):
res = self.gp_layer(self.features(x))
return res
def guide(self, x, y):
self.gp_layer.guide(self.features(x), y)
def model(self, x, y):
pyro.module(self.gp_layer.name_prefix + ".feature_extractor", self.feature_extractor)
self.gp_layer.model(self.features(x), y)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = DKLModel(likelihood, feature_extractor, num_features=num_features).cuda()
# +
from pyro import optim
from pyro import infer
optimizer = optim.Adam({"lr": 0.1})
elbo = infer.Trace_ELBO(num_particles=256, vectorize_particles=True)
svi = infer.SVI(model.model, model.guide, optimizer, elbo)
# +
num_epochs = 3
# Not enough for this model to converge, but enough for a fast example
for i in range(num_epochs):
# Within each iteration, we will go over each minibatch of data
for minibatch_i, (x_batch, y_batch) in enumerate(train_loader):
loss = svi.step(x_batch, y_batch)
print('Epoch {} Loss {}'.format(i + 1, loss))
# -
model.eval()
likelihood.eval()
with torch.no_grad():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
| examples/09_Pyro_Integration/Pyro_SVDKL_GridInterp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 7.3 Definition of deep neural networks (DNN)
#
# In this section, we will give a brief introduction to a special function
# class related to deep neural networks (DNN) used in machine learning. We
# then explore the relationship between DNN (with ReLU as activation
# function) and linear finite element methods.
#
# Given $n, m\ge 1$, the first ingredient in defining a deep neural
# network (DNN) is (vector) linear functions of the form
# $$\label{thetamap1}
# \theta:\mathbb{R}^{n}\to\mathbb{R}^{m},$$
# as $\theta(x)=Wx+b$ where
# $W=(w_{ij})\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$. The second
# main ingredient is a nonlinear activation function, usually denoted as
# $$\label{sigma}
# \sigma: \mathbb{R} \to \mathbb{R}.$$
# By applying the function to each component, we can extend this naturally to
# $$\sigma:\mathbb R^{n}\mapsto \mathbb R^{n}.$$
#
# ## 7.3.1 Definition of neurons
#
# 1.  Primary variables $n_0=d$ $$x^0=x=
#     \begin{pmatrix}
#     x_1\\
#     x_2\\
#     \vdots \\
#     x_{d}
#     \end{pmatrix}$$
#
# 2.  $n_1$ hyperplanes $\theta^{0}(x^0) = W^0 x + b^0$ where
#     $W^0: \mathbb{R}^{d} \mapsto \mathbb{R}^{n_1}$: $$W^0x+b^0=
#     \begin{pmatrix}
#     w^0_1x+b^0_1\\
#     w^0_2x+b^0_2\\
#     \vdots \\
#     w^0_{n_1}x+b^0_{n_1}
#     \end{pmatrix}\quad \mbox{with }\quad W^0=
#     \begin{pmatrix}
#     w^0_1\\
#     w^0_2\\
#     \vdots \\
#     w^0_{n_1}
#     \end{pmatrix},\quad b^0=
#     \begin{pmatrix}
#     b^0_1\\
#     b^0_2\\
#     \vdots \\
#     b^0_{n_1}
#     \end{pmatrix}$$
#
# 3.  $n_1$-neurons: $$x^1=\sigma(W^0x+b^0)
#     =\begin{pmatrix}
#     \sigma(w^0_1x+b^0_1)\\
#     \sigma(w^0_2x+b^0_2)\\
#     \vdots \\
#     \sigma(w^0_{n_1}x+b^0_{n_1})
#     \end{pmatrix}$$
#
# 4.  $n_2$ hyperplanes $\theta^{1}(x^1) = W^1 x^1 + b^1$ where
#     $W^1: \mathbb{R}^{n_1} \mapsto \mathbb{R}^{n_2}$: $$W^1x^1+b^1=
#     \begin{pmatrix}
#     w^1_1x^1+b^1_1\\
#     w^1_2x^1+b^1_2\\
#     \vdots \\
#     w^1_{n_2}x^1+b^1_{n_2}
#     \end{pmatrix}\quad \mbox{with }\quad
#     W^1=
#     \begin{pmatrix}
#     w^1_1 \\
#     w^1_2 \\
#     \vdots \\
#     w^1_{n_2}
#     \end{pmatrix},\
#     b^1=
#     \begin{pmatrix}
#     b^1_1\\
#     b^1_2\\
#     \vdots \\
#     b^1_{n_2}
#     \end{pmatrix}$$
#
# 5.  $n_2$-neurons: $$x^2=\sigma(W^1x^1+b^1)
#     =\begin{pmatrix}
#     \sigma(w^1_1x^1+b^1_1)\\
#     \sigma(w^1_2x^1+b^1_2)\\
#     \vdots \\
#     \sigma(w^1_{n_2}x^1+b^1_{n_2})
#     \end{pmatrix}$$
#
# 6.  $\cdots$
#
# ## 7.3.2 Definition of deep neural network functions {#sec:DNN}
#
# Given $d, k\in\mathbb{N}^+$ and
# $$n_1,\dots,n_{k}\in\mathbb{N} \mbox{ with }n_0=d, n_{k+1}=1,$$ a
# general DNN function from $\mathbb{R}^d$ to $\mathbb{R}$ is given by
# $$\begin{aligned}
# f^0(x) &=\theta^0(x) \\
# f^{\ell}(x) &= [ \theta^{\ell} \circ \sigma ](f^{\ell-1}(x)) \quad \ell = 1:k \\
# f(x) &= f^k(x). \end{aligned}$$ The following more concise notation is
# often used in the computer science literature: $$\label{compress-dnn}
# f(x) = \theta^{k}\circ \sigma \circ \theta^{k-1} \circ \sigma \cdots \circ \theta^1 \circ \sigma \circ \theta^0(x),$$
# where $\theta^i: \mathbb{R}^{n_{i}}\to\mathbb{R}^{n_{i+1}}$ are linear
# functions as defined in
# [\[thetamap1\]](#thetamap1){reference-type="eqref"
# reference="thetamap1"}. Such a DNN is called a $(k+1)$-layer DNN, and is
# said to have $k$ hidden layers. The size of this DNN is
# $n_1+\cdots+n_k$.
#
# Thus, we have the following connection between neurons and DNN functions:
# $$f^k(x) = \theta^{k}(x^k) = \theta^{k} \circ \sigma \circ \theta^{k-1}(x^{k-1}) = [\theta^{k} \circ \sigma ] (f^{k-1}),$$
# or we can see that
# $$x^k = \sigma(f^{k-1}) = \sigma \circ \theta^{k-1} \circ \sigma (f^{k-2}) = [\sigma \circ \theta^{k-1}] (x^{k-1}).$$
# Based on this notation and these connections, we have the following
# definition of general artificial neural network functions.
#
# Shallow (one hidden layer) neural network functions: $$\label{NN1}
# \dnn(\sigma; n_1)
# =\bigg\{ f^1(x) = \theta^1 (x^1), \mbox{ with } W^\ell\in \mathbb R^{n_{\ell+1}\times
# n_{\ell}}, b^\ell\in\mathbb R^{n_{\ell+1}}, \ell=0, 1, n_0=d, n_2 = 1\bigg\}$$
#
# Deep neural network functions: $$\label{NNL}
# \dnn(\sigma; n_1,n_2,\ldots, n_L)=\bigg\{ f^{L}(x) = \theta^L (x^{L}),
# \mbox{ with } W^\ell\in \mathbb R^{n_{\ell+1}\times
# n_{\ell}}, b^\ell\in\mathbb R^{n_{\ell+1}}, \ell=0:L, n_0=d, n_{L+1}=1\bigg\}$$
#
# If we ignore the width (number of neurons) of network functions, we may
# denote the general deep neural network functions with a certain number of layers.
# The 1-hidden-layer (shallow) neural network is defined as:
# $$\dnn=\dnn(\sigma) = \dnn^1(\sigma)
# =\bigcup_{n_1\ge 1} \dnn(\sigma;n_1,1)$$
# Generally, we can define the $L$-hidden-layer neural network as:
# $$\dnn^L(\sigma) := \bigcup_{n_1, n_2, \cdots, n_{L}\ge 1} \dnn(\sigma;n_1,n_2,\cdots,n_L, 1).$$
#
# ## 7.3.3 ReLU DNN
#
# In this section, we mainly consider a special activation function, known
# as the *rectified linear unit* (ReLU), and defined as $\rm
# ReLU: \mathbb R\mapsto \mathbb R$, $$\label{relu}
# {\rm ReLU}(x):=\max(0,x), \quad x\in\mathbb{R}.$$ A ReLU DNN with $k$
# hidden layers might be written as: $$\label{relu-dnn}
# f(x) = \theta^{k}\circ {\rm ReLU} \circ \theta^{k-1} \circ {\rm ReLU} \cdots \circ \theta^1 \circ {\rm ReLU} \circ \theta^0(x).$$
#
# We note that $\rm ReLU$ is a continuous piecewise linear (CPWL)
# function. Since the composition of two CPWL functions is still a CPWL
# function, we have the following observation [@arora2016understanding].
#
# ::: {.lemma}
# [\[dnn-cpwl\]]{#dnn-cpwl label="dnn-cpwl"} Every ReLU DNN:
# $\mathbb{R}^d\to\mathbb{R}^c$ is a continuous piecewise linear function.
# More specifically, given any ReLU DNN, there is a polyhedral
# decomposition of $\mathbb R^d$ such that this ReLU DNN is linear on each
# polyhedron in such a decomposition.
# :::
#
# Here is a simple example of the "grid" created by some 2-layer ReLU
# DNNs in $\mathbb{R}^2$.
#
# (Figure: three panels showing the polyhedral decompositions generated by 2-layer ReLU DNNs in $\mathbb{R}^2$; the images themselves are not reproduced here.)
#
# For convenience of exposition, we introduce the following notation:
# namely, $\dnn^L({\sigma})$ represents the DNN model with $L$ hidden
# layers and ReLU activation function with arbitrary size, if
# $\sigma = {\rm ReLU}$.
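The composition in the definitions above can be evaluated directly. Below is a minimal NumPy sketch of a ReLU DNN with two hidden layers, $f = \theta^2 \circ {\rm ReLU} \circ \theta^1 \circ {\rm ReLU} \circ \theta^0$; the weights are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# theta^l(x) = W^l x + b^l, matching the definition above.
W0, b0 = np.array([[1.0], [-1.0]]), np.array([0.0, 1.0])   # R^1 -> R^2
W1, b1 = np.array([[1.0, 1.0], [1.0, -1.0]]), np.zeros(2)  # R^2 -> R^2
W2, b2 = np.array([[1.0, 2.0]]), np.array([0.5])           # R^2 -> R^1

def f(x):
    h = relu(W0 @ np.array([x]) + b0)
    h = relu(W1 @ h + b1)
    return float((W2 @ h + b2)[0])

# Per the lemma, f is continuous and piecewise linear: sampling it on a grid
# traces straight segments joined at finitely many breakpoints.
print([f(x) for x in (-2.0, 0.0, 2.0)])
```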
# ## 7.3.4 Fourier transform of polynomials
#
# We begin by noting that an activation function $\sigma$, which satisfies
# a polynomial growth condition $|\sigma(x)| \leq C(1 + |x|)^n$ for some
# constants $C$ and $n$, is a tempered distribution. As a result, we make
# this assumption on our activation functions in the following theorems.
# We briefly note that this condition is sufficient, but not necessary
# (for instance an integrable function need not satisfy a pointwise
# polynomial growth bound) for $\sigma$ to be represent a tempered
# distribution.
#
# We begin by studying the convolution of $\sigma$ with a Gaussian
# mollifier. Let $\eta$ be a Gaussian mollifier
# $$\eta(x) = \frac{1}{\sqrt{\pi}}e^{-x^2}.$$ Set
# $\eta_\epsilon=\frac{1}{\epsilon}\eta(\frac{x}{\epsilon})$. Then
# consider $$\label{sigma-epsilon}
# \sigma_{\epsilon}(x):=\sigma\ast{\eta_\epsilon}(x)=\int_{\mathbb{R}}\sigma(x-y){\eta_\epsilon}(y)dy$$
# for a given activation function $\sigma$. It is clear that
# $\sigma_{\epsilon}\in C^\infty(\mathbb{R})$. Moreover, by considering
# the Fourier transform (as a tempered distribution) we see that
# $$\label{eq_278}
# \hat{\sigma}_{\epsilon} = \hat{\sigma}\hat{\eta}_{\epsilon} = \hat{\sigma}\eta_{\epsilon^{-1}}.$$
#
# We begin by stating a lemma which characterizes the set of polynomials
# in terms of their Fourier transform.
#
# ::: {.lemma}
# [\[polynomial_lemma\]]{#polynomial_lemma label="polynomial_lemma"} Given
# a tempered distribution $\sigma$, the following statements are
# equivalent:
#
# 1. $\sigma$ is a polynomial
#
# 2. $\sigma_\epsilon$ given by
# [\[sigma-epsilon\]](#sigma-epsilon){reference-type="eqref"
# reference="sigma-epsilon"} is a polynomial for any $\epsilon>0$.
#
# 3. $\text{\normalfont supp}(\hat{\sigma})\subset \{0\}$.
# :::
#
# ::: {.proof}
# *Proof.* We begin by proving that (3) and (1) are equivalent. This
# follows from a characterization of distributions supported at a single
# point (see [@strichartz2003guide], section 6.3). In particular, a
# distribution supported at $0$ must be a finite linear combination of
# Dirac masses and their derivatives. In particular, if $\hat{\sigma}$ is
# supported at $0$, then
# $$\hat{\sigma} = \displaystyle\sum_{i=1}^n a_i\delta^{(i)}.$$ Taking the
# inverse Fourier transform and noting that the inverse Fourier transform
# of $\delta^{(i)}$ is $c_ix^i$, we see that $\sigma$ is a polynomial.
# This shows that (3) implies (1), for the converse we simply take the
# Fourier transform of a polynomial and note that it is a finite linear
# combination of Dirac masses and their derivatives.
#
# Finally, we prove the equivalence of (2) and (3). For this it suffices
# to show that $\hat{\sigma}$ is supported at $0$ iff
# $\hat{\sigma}_\epsilon$ is supported at $0$. This follows from equation
# [\[eq_278\]](#eq_278){reference-type="ref" reference="eq_278"} and the
# fact that $\eta_{\epsilon^{-1}}$ is nowhere vanishing. ◻
# :::
#
# As an application of Lemma
# [\[polynomial_lemma\]](#polynomial_lemma){reference-type="ref"
# reference="polynomial_lemma"}, we give a simple proof of the result in
# the next section.
#
| _build/jupyter_execute/ch07/ch7_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="pLHtlOlC4qim"
# # 01. Number of Bits
#
# Given an integer n, return the number of 1 bits in n.
#
# - Constraints: ```0 ≤ n < 2 ** 31```
#
# <br />
#
# ## **Binarise decimals**
# - ```format(number, "b")```
#
# + id="gefqL4QM4pcw"
class Solution:
    def solve(self, number):
        if (number >= 2**31):
            pass  # outside the stated constraint
        elif (number == 0):
            return 0
        else:
            # format(number, "b") is the binary string; count its '1' bits
            num = format(number, "b")
            return num.count("1")
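A self-contained sketch of the bit-counting step: `format(number, "b")` yields the binary string, and counting its '1' characters gives the number of set bits.

```python
def popcount(number):
    # e.g. 11 -> '1011' -> three '1' characters.
    return format(number, "b").count("1")

print(popcount(11), popcount(0), popcount(2**31 - 1))
```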
# + [markdown] id="fKWPzzzTRq_m"
# ### 01. Review
# + [markdown] id="hw_wf68w5GsM"
# # 02. Narcissistic Number
#
# Given an integer n, return whether it is equal to the sum of its own digits raised to the power of the number of digits.
# ## Split a word into letters
# - ```[char for char in word]```
# + id="u2G1F55f7bNW"
class Solution:
def solve(self, number):
digits = str(number)
digits = [digit for digit in digits]
print('digits', digits)
num_digits = len(digits)
print('num_digits:', num_digits)
nums = []
for digit in digits:
num = int(digit)
nums.append(num**num_digits)
        return sum(nums) == number
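The same check condenses to a single comparison; an illustrative standalone version, listing the narcissistic numbers below 1000:

```python
def is_narcissistic(number):
    digits = str(number)
    # Sum of each digit raised to the number of digits.
    return sum(int(d) ** len(digits) for d in digits) == number

narcissistic_under_1000 = [n for n in range(1, 1000) if is_narcissistic(n)]
print(narcissistic_under_1000)
```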
# + [markdown] id="wmuKYLUxR5hy"
# ### 02. Review
# + [markdown] id="L1ZVZa5Z76eM"
# # 03. Non-Decreasing Digits
#
# Given a positive integer n, return the largest integer smaller or equal to n where all digits are non-decreasing.
#
# - Constraints: ```0 < n < 2 ** 31 - 1```
# - Hint: Try a few test cases and see what the result should be. Do you notice a pattern at the point where the number breaks the required condition?
# + id="abPqSdya8f97"
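One way to approach this, following the hint (a sketch, not an official solution): scan the digits from right to left; whenever a digit exceeds its right neighbour, decrement it and mark everything to its right to become 9.

```python
class Solution:
    def solve(self, number):
        digits = list(str(number))
        mark = len(digits)  # index from which all digits become 9
        # Walk right-to-left; fix any position that breaks the
        # non-decreasing property by decrementing it. Repeated passes
        # are unnecessary because the loop keeps moving left.
        for i in range(len(digits) - 1, 0, -1):
            if digits[i - 1] > digits[i]:
                mark = i
                digits[i - 1] = str(int(digits[i - 1]) - 1)
        # Everything right of the last fixed position becomes 9.
        for i in range(mark, len(digits)):
            digits[i] = "9"
        return int("".join(digits))
```

For example, `solve(332)` gives `299`: the pattern breaks at the second 3, which becomes 2, and the remaining digits become 9s.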
# + [markdown] id="5gaHCInoR7TZ"
# ### 03. Review
# + [markdown] id="qgygpzbu8gNc"
# # 04. List Partitioning with Inequality Relation
#
# Given a list of integers nums, we want to split the list into two non-empty sublists a and b such that every element in a is less than or equal to every element in b.
#
# Return the smallest length of a that is possible. You can assume that the solution exists.
#
# Constraints
#
# - ```n ≤ 100,000``` where ```n``` is the length of nums
# - Hint 1: a is always a prefix (the left part) of nums.
# - Hint 2: Maintain LeftMax and RightMin while traversing from left to right, and find the first index where ```LeftMax <= RightMin``` holds.
# + id="Y1YWv0XoR9OM"
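Following Hint 2, one possible sketch (the `solve` method name is assumed to match the other cells in this notebook): precompute suffix minima right-to-left, then sweep a running prefix maximum and return the first split index where the condition holds.

```python
class Solution:
    def solve(self, nums):
        n = len(nums)
        # suffix_min[i] = min(nums[i:]), built right-to-left in O(n)
        suffix_min = [0] * n
        suffix_min[-1] = nums[-1]
        for i in range(n - 2, -1, -1):
            suffix_min[i] = min(nums[i], suffix_min[i + 1])
        # Sweep left-to-right with a running prefix maximum; the first
        # index i where LeftMax <= RightMin gives the smallest len(a).
        left_max = nums[0]
        for i in range(1, n):
            if left_max <= suffix_min[i]:
                return i
            left_max = max(left_max, nums[i])
```

For `[5, 7, 3, 6, 9]` the first valid split is `a = [5, 7, 3, 6]`, `b = [9]`, so the answer is 4.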
# + [markdown] id="GvU67lrrR88q"
# ### 04. Review
| 00 Monday Coding Challenge/01 Monday 13 Sept .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LucaAmbrogioni/TaylorAlgebra/blob/master/TDDualControl.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="evq_IBs_gJ0R" colab_type="code" outputId="7ff0cace-0900-4d8c-a981-f8d4f5a1756e" colab={"base_uri": "https://localhost:8080/", "height": 89}
# ! git clone https://github.com/3ammor/Weights-Initializer-pytorch.git
import sys
sys.path
sys.path.append('/content/Weights-Initializer-pytorch')
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import PIL.Image
from weight_initializer import Initializer
# + id="MSPALIG6JVd5" colab_type="code" colab={}
class Dynamics:
def __init__(self, environment, g = 1., noise=0.05, lam=0.01, control=None):
self.environment = environment
self.g = g
self.noise = noise
self.xt = []
self.yt = []
self.control = control
self.lam = lam
self.cost = 0.
def compute_force(self, r, t):
upscaled_h_cells = torch.nn.functional.interpolate(environment.h_cells, scale_factor=environment.scale, mode="bilinear", align_corners=True)
_, Gx, Gy = environment.extrapolate(r[:,0], r[:,1], upscaled_h_cells,
activation=lambda x: x,
derivative=True,
d_activation=lambda x: x)
if self.control is not None:
Ux = environment.extrapolate(r[:,0], r[:,1], self.control[t][0],
activation=lambda x: x,
derivative=False,
std=.5,
normalized=True)
Uy = environment.extrapolate(r[:,0], r[:,1], self.control[t][1],
activation=lambda x: x,
derivative=False,
std=.5,
normalized=True)
control_force = torch.cat((Ux, Uy), 1)
else:
control_force = 0.
grad = torch.cat((Gx.unsqueeze(1), Gy.unsqueeze(1)), 1)
env_x_repulsion = 3*torch.sigmoid(-10*r[:,0]) - 3*torch.sigmoid(-10*(environment.resolution - r[:,0]))
env_y_repulsion = 3*torch.sigmoid(-10*r[:,1]) - 3*torch.sigmoid(-10*(environment.resolution - r[:,1]))
repulsion_force = torch.stack((env_x_repulsion, env_y_repulsion),1)
F = (self.g * grad + control_force + repulsion_force)
if self.control is not None:
control_cost = self.lam*torch.sum(control_force**2,1)
else:
control_cost = 0.
return F, control_cost
def compute_reward(self, r):
R = environment.extrapolate(r[:,0], r[:,1], environment.r,
activation=lambda x: x,
derivative=False,
std=.5,
normalized=True)
return torch.sum(R, 1)
def integrate(self, r, dt, N): #Midpoint integration
num_samples = self.environment.num_samples
for n in range(N):
F0, control_cost0 = self.compute_force(r, n)
F, control_cost = self.compute_force(r + 0.5*dt*F0, n)
r = r + (F * dt + torch.normal(0., self.noise, (self.environment.num_samples,2)) * dt**(1/2.))
self.xt += [r.detach().numpy()[:, 0]]
self.yt += [r.detach().numpy()[:, 1]]
self.cost += 0.5*(control_cost0 + control_cost)
self.cost += - 0.0001*self.compute_reward(r)
return r
def sample(self, dt, num_iter):
r0 = torch.empty(self.environment.num_samples, 2).uniform_(10, self.environment.resolution-10)
r = self.integrate(r0, dt, num_iter)
return r
def reset(self):
self.xt = []
self.yt = []
self.cost = 0.
# + id="TRAwzMN7fX12" colab_type="code" colab={}
def logit(x):
    return np.log(x) - np.log(1 - x)
# + id="Y14Qs1lAgQh8" colab_type="code" colab={}
class GaussianEnvironment:
def __init__(self, resolution, std, num_samples, scale = 5):
        if resolution % scale != 0:
            raise ValueError("The resolution should have {} as a factor".format(scale))
latent_resolution = int(resolution/scale)
self.distribution_mean = torch.zeros([num_samples, 1, latent_resolution, latent_resolution])
self.distribution_std = torch.ones([num_samples, 1, latent_resolution, latent_resolution])
self.r_distribution_logits = logit(3./(latent_resolution*latent_resolution))*torch.ones([num_samples, latent_resolution*latent_resolution])
self.num_samples = num_samples
self.resolution = resolution
self.latent_resolution = latent_resolution
self.scale = scale
self.std = std
self.environment_hardness = 0.01
self.reward_hardness = 0.01
self.is_generated = False
self.colors = [(torch.tensor([0., 0., 1.]).unsqueeze(1).unsqueeze(2).expand(1,3,resolution, resolution), 0, 0.2),
(torch.tensor([0., 1., 0.]).unsqueeze(1).unsqueeze(2).expand(1,3,resolution, resolution), 0.2, 0.4),
(torch.tensor([0.58824, 0.29412, 0]).unsqueeze(1).unsqueeze(2).expand(1,3,resolution, resolution), 0.4, 0.6),
(torch.tensor([0.5, 0.5, 0.5]).unsqueeze(1).unsqueeze(2).expand(1,3,resolution, resolution), 0.6, 0.8),
(torch.tensor([1., 1., 1.]).unsqueeze(1).unsqueeze(2).expand(1,3,resolution, resolution), 0.6, 0.8)]
self.halfsize = None
self.kernel = None
self.set_kernel()
self.c = None
self.h = None
self.dxh = None
        self.dyh = None
def visibility_map(self, x0, y0, v0, k0):
arange = torch.arange(0., self.resolution).float()
x, y = torch.meshgrid([arange, arange])
h = self.extrapolate(x0, y0, self.h,
activation=lambda x: x,
derivative=False,
normalized=True)
x0 = x0.unsqueeze(1).unsqueeze(2).unsqueeze(3).expand(self.num_samples,1,self.resolution,self.resolution)
y0 = y0.unsqueeze(1).unsqueeze(2).unsqueeze(3).expand(self.num_samples,1,self.resolution,self.resolution)
h = h.unsqueeze(2).unsqueeze(3).expand(self.num_samples,1,self.resolution,self.resolution)
d_map = torch.sqrt(((x0 - x) ** 2 + (y0 - y) ** 2))
visibility_mask = 1./(1. + F.relu(d_map - h*k0 + 1)**2)
hard_mask = 1. - torch.sigmoid(10000*(d_map - h*k0 + 1))
likelihood_variance = v0 + F.relu(d_map - h*k0 + 1)**3
return likelihood_variance, visibility_mask, hard_mask
def env_bayesian_update(self, inference_net, x0, y0, v0 = 0.00001, k0 = 30., data=None):
prior_mean = self.distribution_mean
prior_std = self.distribution_std
likelihood_variance, visibility_mask, hard_mask = self.visibility_map(x0, y0, v0, k0)
if data is None:
mean = self.h
distribution = torch.distributions.normal.Normal(mean,torch.sqrt(likelihood_variance))
sample = distribution.rsample()
data = hard_mask*sample
posterior_mean, posterior_var = inference_net.get_posterior_parameters(data, likelihood_variance, prior_mean, prior_std)
self.distribution_mean = posterior_mean
self.distribution_std = torch.sqrt(posterior_var)
variational_loss1 = inference_net.neg_ELBO_loss(data, prior_mean, prior_std, self, likelihood_variance,hard_mask)
latent = self.h_cells
variational_loss2 = inference_net.FAVI_loss(data, latent, prior_mean, prior_std, likelihood_variance)
return variational_loss1 + variational_loss2
def rew_bayesian_update(self, inference_net, x0, y0, v0 = 0.001, k0 = 20., data=None):
prior_logits = self.r_distribution_logits
likelihood_variance, visibility_mask, hard_mask = self.visibility_map(x0, y0, v0, k0)
if data is None:
mean = self.r
distribution = torch.distributions.normal.Normal(mean,torch.sqrt(likelihood_variance))
sample = distribution.rsample()
data = hard_mask*sample
posterior_logits = inference_net.get_posterior_parameters(data, likelihood_variance, prior_logits, hard_mask)
self.r_distribution_logits = posterior_logits
#variational_loss = inference_net.neg_ELBO_loss(data, prior_logits, self, likelihood_variance)
latent = self.r_cells
variational_loss = inference_net.FAVI_loss(data, latent, prior_logits, likelihood_variance, hard_mask)
return variational_loss
def dsigmoidd(self, x):
        sigmoid = torch.sigmoid(x)
return sigmoid * (1 - sigmoid)
def get_statistics(self):
return self.distribution_mean, self.distribution_std, self.r_distribution_logits
def filter_environment(self, cells):
upscaled_cells = torch.nn.functional.interpolate(cells, scale_factor=self.scale, mode="bilinear", align_corners=True)
pre_map = torch.nn.functional.conv2d(upscaled_cells,
self.kernel.unsqueeze(0).unsqueeze(0), padding = self.halfsize)
env_map = torch.sigmoid(self.environment_hardness * pre_map)
dxh = torch.nn.functional.conv2d(upscaled_cells,
self.dxkernel.unsqueeze(0).unsqueeze(0), padding = self.halfsize)
dyh = torch.nn.functional.conv2d(upscaled_cells,
self.dykernel.unsqueeze(0).unsqueeze(0), padding = self.halfsize)
dxh = dxh * self.environment_hardness * self.dsigmoidd(self.environment_hardness * pre_map)
dyh = dyh * self.environment_hardness * self.dsigmoidd(self.environment_hardness * pre_map)
return env_map, dxh, dyh
def filter_reward(self, r_cells):
upscaled_r_cells = torch.nn.functional.interpolate(r_cells.view((self.num_samples,1,self.latent_resolution, self.latent_resolution)),
scale_factor=self.scale, mode="bilinear", align_corners=True)
reward = (0.1/3)*torch.nn.functional.conv2d(upscaled_r_cells,
self.kernel.unsqueeze(0).unsqueeze(0), padding = self.halfsize)
return reward
def generate(self):
mean = self.distribution_mean
std = self.distribution_std
distribution = torch.distributions.normal.Normal(mean,
std)
cells = distribution.rsample()
r_logits = self.r_distribution_logits
r_distribution = torch.distributions.bernoulli.Bernoulli(logits=r_logits)
r_cells = r_distribution.sample()
env_map, dxh, dyh = self.filter_environment(cells)
reward = self.filter_reward(r_cells)
self.c = self.paint(env_map)
self.h_cells = cells
self.h = env_map
self.r = reward
self.r_cells = r_cells
self.dxh = dxh
self.dyh = dyh
self.is_generated = True
def set_kernel(self):
self.halfsize = 4*int(np.ceil(2 * self.std))
arange = torch.arange(-self.halfsize, self.halfsize + 1).float()
x, y = torch.meshgrid([arange, arange])
self.kernel = torch.exp(-(x ** 2 + y ** 2) / (2 * self.std ** 2))
self.dxkernel = -self.kernel.detach() * x / self.std **2
self.dykernel = -self.kernel.detach() * y / self.std **2
def extrapolate(self, x0, y0, image, activation, derivative=False, d_activation = None, std=None, normalized=False):
if std is None: #
std = self.std
arange = torch.arange(0., self.resolution).float()
x, y = torch.meshgrid([arange, arange])
x = x.unsqueeze(0).unsqueeze(0).expand(self.num_samples,1,self.resolution,self.resolution)
y = y.unsqueeze(0).unsqueeze(0).expand(self.num_samples,1,self.resolution,self.resolution)
x0 = x0.unsqueeze(1).unsqueeze(2).unsqueeze(3).expand(self.num_samples,1,self.resolution,self.resolution)
y0 = y0.unsqueeze(1).unsqueeze(2).unsqueeze(3).expand(self.num_samples,1,self.resolution,self.resolution)
weights = torch.exp(-((x0 - x) ** 2 + (y0 - y) ** 2) / (2 * std ** 2))
if derivative:
dx_weights = -(x - x0)*weights / self.std **2
dy_weights = -(y - y0)*weights / self.std **2
if normalized:
weights = weights/torch.sum(weights, (1,2,3), keepdim=True).expand(self.num_samples,1,self.resolution,self.resolution)
extr = torch.sum(image * weights, (1,2,3))
if derivative:
dx_extr = d_activation(extr)*torch.sum(image * dx_weights, (1,2,3))
dy_extr = d_activation(extr)*torch.sum(image * dy_weights, (1,2,3))
return activation(extr), dx_extr, dy_extr
else:
extr = activation(torch.sum(image * weights, (2,3)))
return activation(extr)
def soft_indicator(self, lower, upper, soft):
indicator = lambda height: torch.sigmoid(soft * (height - lower)) * (1 - torch.sigmoid(soft * (height - upper)))
return indicator
def paint(self, x):
return sum([color.expand(self.num_samples,3,self.resolution,self.resolution) * self.soft_indicator(lower, upper, 10.)(x) for color, lower, upper in self.colors])
# + id="Um_qwlrKaq2Z" colab_type="code" colab={}
class HJB(torch.nn.Module):
def __init__(self, image_size, x_force, y_force, noise_map, reward, lam, dt, intermediate_reward=False):
super(HJB, self).__init__()
self.image_size = image_size
self.x_force = x_force
self.y_force = y_force
self.noise_map = noise_map
self.reward = reward
self.lam = lam
self.dt = dt
self.kx, self.ky, self.k_laplace = self._get_derivative_filters()
self.intermediate_reward = intermediate_reward
#self.kx_minus, self.kx_plus, self.ky_minus, self.ky_plus = self._get_derivative_filters()
def _get_derivative_filters(self): #Upwind method
ky = torch.tensor([[1., 2. , 1.], [0., 0., 0.], [-1., -2. , -1.]])/4.
ky = ky.expand(1,1,3,3)
kx = torch.transpose(ky, 3, 2)
k_laplace = torch.tensor([[1., 1. , 1.], [1., -8. , 1.], [1., 1. , 1.]])
k_laplace = k_laplace.expand(1,1,3,3)
return kx, ky, k_laplace
def backward_update(self, V, control=False):
Vpad = torch.nn.functional.pad(V, (1,1,1,1), "reflect")
dVx = torch.nn.functional.conv2d(Vpad, self.kx, padding = 0)
dVy = torch.nn.functional.conv2d(Vpad, self.ky, padding = 0)
LV = torch.nn.functional.conv2d(Vpad, self.k_laplace, padding = 0)
if self.intermediate_reward:
r = self.reward
else:
r = 0.
update = (-r - dVx**2/(2*self.lam) - dVy**2/(2*self.lam) + self.x_force * dVx + self.y_force * dVy + self.noise_map**2*LV)
if control:
Ux = -(1/self.lam)*dVx
Uy = -(1/self.lam)*dVy
return update, Ux, Uy
else:
return update
def backward_step(self, V):
update, Ux, Uy = self.backward_update(V, control=True)
Vprev = V + self.dt*update
return Vprev, Ux, Uy
def RK_backward_step(self, V):
k1, Ux, Uy = self.backward_update(V, control=True)
k1 *= self.dt
k2 = self.dt*self.backward_update(V + k1/2)
k3 = self.dt*self.backward_update(V + k2/2)
k4 = self.dt*self.backward_update(V + k3)
return V + (k1 + 2*k2 + 2*k3 + k4)/6., Ux, Uy
def compute_value(self, N, RK = False, plot=False):
Vn = -self.reward
V_list = [-Vn]
U_list = [None]
for n in reversed(range(N)):
if n % 20 == 0:
if plot:
x,y = (np.arange(0, resolution), np.arange(0, resolution))
plt.imshow(Vn[0,:,:,:].detach().numpy().squeeze(), extent = [0, resolution, 0, resolution], origin="lower")
plt.quiver(x, y, environment.dyh[0,:,:,:].detach().numpy().squeeze(), environment.dxh[0,:,:,:].detach().numpy().squeeze())
#plt.quiver(x, y, Ux[0,:,:,:].detach().numpy().squeeze(), Uy[0,:,:,:].numpy().squeeze(), color="red")
fig = plt.gcf()
fig.set_size_inches(18.5, 18.5)
plt.show()
if not RK:
Vn, Ux, Uy = self.backward_step(Vn)
else:
Vn, Ux, Uy = self.RK_backward_step(Vn)
V_list.append(-Vn)
U_list.append((-Uy, -Ux)) #TODO: flipped/sign flipped
return list(reversed(V_list)), list(reversed(U_list))
# + id="hOy5u2IwSnqM" colab_type="code" colab={}
class EnvInferenceNet(nn.Module):
def __init__(self, gain, h_size=30, k_size=3, var_k_size=3, latent_resolution = 8, scale_factor = 5):
super(EnvInferenceNet, self).__init__()
self.conv_in = nn.Conv2d(h_size, h_size, k_size, padding=0) #Input: h_mean, h_std, r_mean, r_std times h_size
self.out = nn.Linear(latent_resolution*latent_resolution*h_size*scale_factor**2, latent_resolution*latent_resolution)
self.var_l1 = nn.Conv2d(1, h_size, 1, padding=0, bias=False)
self.var_out = nn.Conv2d(h_size, 1 , 1, padding=0, bias=False)
# Parameters
self.h_size = h_size
self.k_size = k_size
self.k_pad = int((k_size - 1)/2)
self.var_k_pad = int((var_k_size - 1)/2)
self.latent_resolution = latent_resolution
self.scale_factor = scale_factor
self.gain = gain
def forward(self, data, likelihood_var):
activation = lambda x: torch.relu(x)
b_size = data.shape[0]
x = data.repeat(1,self.h_size,1,1)
x_pad = torch.nn.functional.pad(x, (self.k_pad,self.k_pad,self.k_pad,self.k_pad), "reflect")
h = activation(self.conv_in(x_pad)).view(b_size, self.h_size*self.latent_resolution*self.latent_resolution*self.scale_factor**2)
latent_data = self.out(h)
latent_data = latent_data.view(b_size,1,self.latent_resolution,self.latent_resolution)
x = F.interpolate(likelihood_var, scale_factor=1/self.scale_factor, mode="bilinear", align_corners=True)
x = activation(self.var_l1(x))
x = self.var_out(x)**2
return self.gain*latent_data, x
def get_posterior_parameters(self, data, likelihood_var, prior_mean, prior_std):
latent_data, latent_variance = self(data, likelihood_var)
posterior_var = 1/(1/prior_std**2 + 1/latent_variance)
posterior_mean = (prior_mean/prior_std**2 + latent_data/latent_variance)*posterior_var
return posterior_mean, posterior_var
def neg_ELBO_loss(self, data, prior_mean, prior_std, environment, lk_variance, mask):
prior_distribution = torch.distributions.normal.Normal(prior_mean, prior_std)
posterior_mean, posterior_var = self.get_posterior_parameters(data, lk_variance, prior_mean, prior_std)
post_distribution = torch.distributions.normal.Normal(posterior_mean,torch.sqrt(posterior_var))
posterior_sample = post_distribution.rsample()
lik_filter = lambda x: environment.filter_environment(x)[0]
avg_log_lik = torch.mean(-0.5*mask*(data - lik_filter(posterior_sample))**2/lk_variance)
KL_regularization = torch.distributions.kl.kl_divergence(post_distribution, prior_distribution)
return torch.mean(-avg_log_lik + KL_regularization)
def FAVI_loss(self, data, latent, prior_mean, prior_std, lk_variance):
posterior_mean, posterior_var = self.get_posterior_parameters(data, lk_variance, prior_mean, prior_std)
loss = torch.mean(0.5*(latent - posterior_mean)**2/posterior_var + 0.5*torch.log(2*np.pi*posterior_var))
return loss
# + id="rPMFvNboCy4T" colab_type="code" colab={}
class RewInferenceNet(nn.Module):
def __init__(self, gain, h_size=60, k_size=5, latent_resolution = 8, scale_factor = 5):
super(RewInferenceNet, self).__init__()
self.l = nn.Linear(latent_resolution*latent_resolution*scale_factor**2, latent_resolution*latent_resolution)
#self.var_l1 = nn.Conv2d(1, h_size, 1, padding=0, bias=False)
#self.var_out = nn.Conv2d(h_size, 1 , 1, padding=0, bias=False)
# Parameters
self.h_size = h_size
self.k_size = k_size
self.k_pad = int((k_size - 1)/2)
self.latent_resolution = latent_resolution
self.scale_factor = scale_factor
self.gain = gain
def forward(self, data, likelihood_var, hard_mask):
b_size = data.shape[0]
mask = F.interpolate(hard_mask, scale_factor=1/self.scale_factor, mode="bilinear", align_corners=True).view((b_size,self.latent_resolution*self.latent_resolution))
x = mask*(0.1*F.softplus(self.l(data.view(b_size, self.latent_resolution*self.latent_resolution*self.scale_factor**2))) - 2.)
return x
def get_posterior_parameters(self, data, likelihood_var, prior_logits, hard_mask):
latent_logits = self(data, likelihood_var, hard_mask)
posterior_logits = prior_logits + latent_logits
return posterior_logits
def neg_ELBO_loss(self, data, prior_logits, environment, lk_variance):
prior_distribution = torch.distributions.categorical.Categorical(logits=prior_logits)
posterior_logits = self.get_posterior_parameters(data, lk_variance, prior_logits)
post_distribution = torch.distributions.categorical.Categorical(logits=posterior_logits)
enumeration = post_distribution.enumerate_support(expand=False)
log_probs = post_distribution.log_prob(enumeration).transpose(1,0)
probs = torch.exp(log_probs).unsqueeze(2).unsqueeze(3)
log_lk = torch.sum(-0.5*(data - environment.filter_reward(enumeration[:,0]).transpose(1,0))**2/lk_variance, (2,3))
avg_log_lik = torch.mean(probs*log_lk.detach())
#
KL_regularization = torch.distributions.kl.kl_divergence(post_distribution, prior_distribution)
return torch.mean(-avg_log_lik + KL_regularization)
def FAVI_loss(self, data, latent, prior_logits, lk_variance, hard_mask):
b_size = data.shape[0]
weights = F.interpolate(hard_mask, scale_factor=1/self.scale_factor, mode="bilinear", align_corners=True).view((b_size,self.latent_resolution*self.latent_resolution))
loss_fn = torch.nn.BCEWithLogitsLoss(weight=weights.detach())
posterior_logits = self.get_posterior_parameters(data, lk_variance, prior_logits, hard_mask)
loss = loss_fn(posterior_logits, latent.detach())
if False and iteration % 10 == 0:
plot_map(data)
mean_r_cells = torch.sigmoid(torch.nn.functional.interpolate(posterior_logits.view((b_size,1,self.latent_resolution, self.latent_resolution)),
scale_factor=self.scale_factor, mode="bilinear", align_corners=True))
r_mean = (0.1/3)*torch.nn.functional.conv2d(mean_r_cells,
environment.kernel.unsqueeze(0).unsqueeze(0), padding = environment.halfsize)
r_var = r_mean*(1 - r_mean)
plot_map(r_mean)
plot_map(r_var)
return loss
# + id="zkjtWWkL_rV5" colab_type="code" colab={}
# Policy network TODO: Work in progress
class ValueNet(nn.Module):
def __init__(self, environment, smoothing_std=2, h_size=40, k_size=1):
super(ValueNet, self).__init__()
self.conv_in = nn.Conv2d(3, h_size, k_size, padding=0, bias=False)
self.out = nn.Linear(environment.latent_resolution*environment.latent_resolution*h_size,
environment.latent_resolution*environment.latent_resolution, bias=False)
self.conv_out = nn.Conv2d(h_size, 1, k_size, padding=0, bias=False)
# Smoothing layer
self.smoothing_std = smoothing_std
# Parameters
self.h_size = h_size
self.k_size = k_size
self.k_pad = int((k_size - 1)/2)
self.halfsize = 4*int(np.ceil(2 * smoothing_std))
arange = torch.arange(-self.halfsize, self.halfsize + 1).float()
x, y = torch.meshgrid([arange, arange])
self.smoothing_ker = torch.exp(-(x ** 2 + y ** 2) / (2 * smoothing_std ** 2))
self.environment = environment
self.V_trace = None
def forward(self, h_mean, h_std, r_logits, N, g, dt, exploit=False):
activation = lambda x: F.softplus(x)
predicted_reward = 0.1*environment.filter_reward(torch.sigmoid(r_logits))
x = torch.cat((h_mean,
h_std,
r_logits.view((environment.num_samples,
1,
environment.latent_resolution,
environment.latent_resolution))),
1)
x = activation(self.conv_in(x))
y = x.view((environment.num_samples, self.h_size*environment.latent_resolution**2))
z = self.conv_out(x)
x = z #+ 0.1*self.out(y).view((environment.num_samples, 1, environment.latent_resolution, environment.latent_resolution))
x = F.interpolate(x, scale_factor=environment.scale,
mode="bilinear", align_corners=True)
x_pad = torch.nn.functional.pad(x, (self.halfsize,self.halfsize,self.halfsize,self.halfsize), "reflect")
output = torch.nn.functional.conv2d(x_pad.view(environment.num_samples,
1,
x_pad.shape[2],
x_pad.shape[3]), self.smoothing_ker.unsqueeze(0).unsqueeze(0)).view(environment.num_samples,
1,
environment.resolution,
environment.resolution)
value = 0.01*F.softplus(output)
if exploit is False:
hjb_input = value
else:
hjb_input = predicted_reward
hjb = HJB(image_size=environment.resolution,
x_force= -g*environment.dyh, #TODO: this should be changed
y_force= -g*environment.dxh, #TODO: this should be changed
noise_map= 0.25, #TODO: this should be changed
reward=hjb_input,
lam= 0.02,
dt=dt)
_, Ulist = hjb.compute_value(N, RK=True)
return value, Ulist
def TDloss(self, reward, value, future_value, kernel, gamma=0.9):
reward = reward.unsqueeze(1).unsqueeze(2).unsqueeze(3)
future_value = future_value.unsqueeze(2).unsqueeze(3)
#
TD_target = (reward + gamma*future_value).detach()
loss = torch.mean(kernel*(value - TD_target)**2)
return loss
def TD_lambda_loss(self, reward, value, future_value, kernel, step, gamma=0.95, lam = 0.2):
if step == 0:
self.V_trace = 0.
else:
self.V_trace = gamma*lam*self.V_trace + value
future_value = future_value.unsqueeze(2).unsqueeze(3)
        loss = torch.mean(kernel*(reward + gamma*future_value.detach() - value).detach()*self.V_trace)
return loss
# + id="6vkUz0RCvgA9" colab_type="code" colab={}
def plot_trajectories(Ulist, environment, dynamics, value):
x0_range = [20., 40.]
y0_range = [20., 40.]
_, _, r_logits = environment.get_statistics()
r_map = environment.filter_reward(torch.sigmoid(r_logits))
x,y = (np.arange(0, environment.resolution), np.arange(0, environment.resolution))
#plt.imshow(environment.h[0,0,:,:].detach().numpy().squeeze(), extent = [0, environment.resolution, 0, environment.resolution], origin="lower")
#plt.contour(x, y, environment.h[0,0,:,:].detach().numpy().squeeze(), colors='red')
plt.imshow(r_map[0,0,:,:].detach().numpy().squeeze(), extent = [0, environment.resolution, 0, environment.resolution], origin="lower")
plt.quiver(x, y, environment.dyh[0,0,:,:].detach().numpy().squeeze(), environment.dxh[0,:,:,:].detach().numpy().squeeze())
plt.plot(np.array(dynamics.yt)[:,0], np.array(dynamics.xt)[:,0], linewidth=4, color = "red")
plt.plot(np.array(dynamics.yt)[0,0], np.array(dynamics.xt)[0,0], "xb")
plt.colorbar()
fig = plt.gcf()
fig.set_size_inches(18.5, 18.5)
plt.show()
# + id="JX6j35Ed4VDa" colab_type="code" colab={}
def plot_map(mp, norm=False, lim=1.):
x0_range = [20., 40.]
y0_range = [20., 40.]
x,y = (np.arange(0, environment.resolution), np.arange(0, environment.resolution))
plt.imshow(mp[0,0,:,:].detach().numpy().squeeze(), extent = [0, environment.resolution, 0, environment.resolution], origin="lower")
plt.colorbar()
fig = plt.gcf()
fig.set_size_inches(18.5, 18.5)
#plt.clim(0,1.)
if norm:
plt.clim(0,1.)
plt.show()
# + id="Utes6F7dk4M-" colab_type="code" colab={}
import pickle
def save_network(net, name):
pickle.dump(net, open( "{}.p".format(name), "wb" ))
# + id="U5cVhh0nlLFJ" colab_type="code" colab={}
def load_network(name):
return pickle.load( open( "{}.p".format(name), "rb" ) )
# + id="amrEySpd6qSS" colab_type="code" colab={}
# Train
N_iters = 2000
RL_batch_size = 3
VI_batch_size = 20
N_steps = 5
N_intergration_steps = 400 #200
N_VI_iterations = 400
resolution = 40 #40
scale = 8 #5
std = 7.5
g = 0.0005 #0.005
noise = 0.3
dt = 0.1
environment = GaussianEnvironment(resolution=resolution, std=std, num_samples=VI_batch_size, scale=scale)
net = ValueNet(environment) #TODO: Multiple networks
#Initializer.initialize(model=net, initialization=nn.init.xavier_uniform, gain=nn.init.calculate_gain('relu'))
optimizer = optim.Adam(net.parameters(), lr=0.00001)
env_inference_net = EnvInferenceNet(gain=1., scale_factor = scale, latent_resolution = int(resolution/scale)) #TODO: Multiple networks
#Initializer.initialize(model=env_inference_net, initialization=nn.init.xavier_uniform, gain=nn.init.calculate_gain('relu'))
env_VI_optimizer = optim.Adam(env_inference_net.parameters(), lr=0.00001)
reward_inference_net = RewInferenceNet(gain=1., scale_factor = scale, latent_resolution = int(resolution/scale)) #TODO: Multiple networks
#Initializer.initialize(model=reward_inference_net, initialization=nn.init.xavier_uniform, gain=nn.init.calculate_gain('relu'))
reward_VI_optimizer = optim.Adam(reward_inference_net.parameters(), lr=0.0001)
loss_list = []
env_VI_loss_list = []
reward_VI_loss_list = []
# + id="MUvgwW1La3Om" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="457ff83d-4c34-428c-fa94-b1fe54eab2e3"
load_value_net = False
try:
env_inference_net = load_network("env_net")
reward_inference_net = load_network("reward_net")
print("Loading inference networks")
N_VI_itr = 0
except:
print("Training inference networks")
N_VI_itr = N_VI_iterations
if load_value_net:
try:
net = load_network("value_net")
optimizer = optim.Adam(net.parameters(), lr=0.0001)
print("Loading value networks")
except:
print("Training value network")
for iteration in range(N_iters):
if iteration > N_VI_itr:
batch_size = RL_batch_size
else:
batch_size = VI_batch_size
print("Iteration: {}".format(iteration))
environment = GaussianEnvironment(resolution=resolution, std=std, num_samples=batch_size, scale=scale)
dynamics = Dynamics(environment, g=g, noise=noise, lam=0.0000)
environment.generate()
r = dynamics.sample(dt, 1)
if iteration > N_VI_itr:
environment.env_bayesian_update(env_inference_net, r[:,0], r[:,1])
environment.generate()
total_loss = 0.
total_reward = 0.
total_env_VI_loss = 0.
total_reward_VI_loss = 0.
reward = torch.zeros((batch_size,))
for step in range(N_steps):
print("Step: {}".format(step))
# zero the parameter gradients
optimizer.zero_grad()
env_VI_optimizer.zero_grad()
        reward_VI_optimizer.zero_grad()
## Control ##
if step == N_steps - 1:
exploit = True
else:
exploit = False
if iteration > N_VI_itr:
h_mean, h_std, r_logits = environment.get_statistics()
value, Ulist = net.forward(h_mean,
h_std,
r_logits,
N_intergration_steps, g, dt, exploit=exploit)
dynamics.control = Ulist
print(value.max())
old_r = r
if iteration > N_VI_itr:
r = dynamics.integrate(r, dt, N_intergration_steps).detach()
else:
r = dynamics.sample(dt, 1)
if np.any(np.isnan(r.detach().numpy())):
print("not a number found in the new coordinates")
break
if np.any(r.detach().numpy() > resolution + 8) or np.any(r.detach().numpy() < -8):
print("The agent has left the environment")
break
if iteration % 1 == 0 and iteration > N_VI_itr:
plot_trajectories(Ulist, environment, dynamics, value)
save_network(net, "value_net")
## Reward ##
new_reward = -dynamics.cost
# Bayesian update
env_VI_loss = environment.env_bayesian_update(env_inference_net, r[:,0], r[:,1])
reward_VI_loss = environment.rew_bayesian_update(reward_inference_net, r[:,0], r[:,1])
## Information gain ##
if iteration > N_VI_itr:
if step < N_steps - 1:
new_h_mean, new_h_std, new_r_logits = environment.get_statistics()
future_value_map,_ = net.forward(new_h_mean, new_h_std, new_r_logits, N_intergration_steps, g, dt, exploit=exploit)
future_value = environment.extrapolate(r[:,0], r[:,1], future_value_map,
activation=lambda x: x,
derivative=False,
std=.5,
normalized=True).detach()
else:
future_value = new_reward.unsqueeze(1)
## TD kernel ##
arange = torch.arange(0, environment.resolution).float()
x, y = torch.meshgrid([arange, arange])
x = x.unsqueeze(0).unsqueeze(0).expand(environment.num_samples,1,environment.resolution,environment.resolution)
y = y.unsqueeze(0).unsqueeze(0).expand(environment.num_samples,1,environment.resolution,environment.resolution)
x0 = old_r[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand(environment.num_samples,1,environment.resolution,environment.resolution)
y0 = old_r[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand(environment.num_samples,1,environment.resolution,environment.resolution)
kernel = torch.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * 3. ** 2))
## TD learning ##
loss = net.TDloss(reward, value, future_value, kernel, gamma=0.95)
#loss = net.TD_lambda_loss(reward, value, future_value, kernel, step, gamma=0.95, lam = 0.2)
if not np.isnan(loss.detach().numpy()):
loss.backward(retain_graph=True)
optimizer.step()
environment.generate()
total_loss += float(loss.detach().numpy())
total_reward += float(torch.sum(reward).detach().numpy())
else:
break
## Reward ##
reward = new_reward
## VI update ##
if iteration < N_VI_itr:
            env_VI_loss.backward(retain_graph=True)
            reward_VI_loss.backward(retain_graph=True)
env_VI_optimizer.step()
reward_VI_optimizer.step()
total_env_VI_loss += float(env_VI_loss.detach().numpy())
total_reward_VI_loss += float(reward_VI_loss.detach().numpy())
if iteration == N_VI_itr:
save_network(env_inference_net, "env_net")
save_network(reward_inference_net, "reward_net")
if iteration > N_VI_itr:
print("Reward: {}".format(total_reward))
else:
print("VI env loss: {}".format(total_env_VI_loss))
print("VI rew loss: {}".format(total_reward_VI_loss))
#loss_list += [loss.detach().numpy()]#
env_VI_loss_list += [total_env_VI_loss]
reward_VI_loss_list += [total_reward_VI_loss]
if iteration == N_VI_itr:
plt.plot(env_VI_loss_list)
plt.show()
plt.plot(reward_VI_loss_list)
plt.show()
| TDDualControl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# +
CHILDREN = 0
WEIGHT = 1
import sys
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
from importlib import reload
trackml_path = '/home/ec2-user/SageMaker/TrackML'
if trackml_path not in sys.path:
    sys.path.append(trackml_path)
# from detector_grid import detector_grid
# reload(detector_grid)
# -
detector = pd.read_csv('/home/ec2-user/SageMaker/efs/codalab_dataset/detector.csv')
detector = detector.loc[:, ['volume_id', 'layer_id', 'module_id', 'cx', 'cy', 'cz']]
detector.loc[:, 'cR'] = np.sqrt(
    detector.cx**2 +
    detector.cy**2 +
    detector.cz**2
)
detector.loc[:, 'ctheta'] = np.arctan2(detector.cx, detector.cy)
detector.loc[:, 'cphi'] = np.arccos(detector.cz/detector.cR)
print(detector.info())
detector.head()
print(detector.cR.min(), detector.cR.max())
print(detector.ctheta.min(), detector.ctheta.max())
print(detector.cphi.min(), detector.cphi.max())
# +
nbins_R = 10
nbins_z = 10
nbins_phi = 10
R_min, R_max = 30., 1030.
z_min, z_max = -2956., 2956.
phi_min, phi_max = -np.pi, np.pi
phi_grid = np.linspace(phi_min, phi_max, nbins_phi)
# pos_z_grid = np.geomspace()
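# The commented-out `np.geomspace()` call above suggests geometrically spaced z bin edges. One way to build such a grid, mirrored across z = 0, is sketched below (the inner |z| edge of 30. is an assumption for illustration, since `geomspace` requires strictly positive endpoints):

```python
import numpy as np

z_inner = 30.     # assumed inner |z| edge (geomspace cannot start at 0)
z_outer = 2956.
nbins_half = 5
# Geometrically spaced edges on the positive-z half of the detector:
pos_z_grid = np.geomspace(z_inner, z_outer, nbins_half + 1)
# Mirror for negative z to cover the full detector:
z_grid = np.concatenate([-pos_z_grid[::-1], pos_z_grid])
```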
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
#
# # The BSSN Time-Evolution Equations
#
# ## Author: <NAME>
# ### Formatting improvements courtesy <NAME>
#
# [comment]: <> (Abstract: TODO)
#
# **Module Status:** <font color='green'><b> Validated </b></font>
#
# **Validation Notes:** All expressions generated in this module have been validated against a trusted code (the original NRPy+/SENR code, which itself was validated against [Baumgarte's code](https://arxiv.org/abs/1211.6632)).
#
# ### NRPy+ Source Code for this module: [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py)
#
# ## Introduction:
# This module documents and constructs the time evolution equations of the BSSN formulation of Einstein's equations, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)).
#
# **This module is part of the following set of NRPy+ tutorial notebooks on the BSSN formulation of general relativity:**
#
# * An overview of the BSSN formulation of Einstein's equations, as well as links for background reading/lectures, are provided in [the NRPy+ tutorial notebook on the BSSN formulation](Tutorial-BSSN_formulation.ipynb).
# * Basic BSSN quantities are defined in the [BSSN quantities NRPy+ tutorial notebook](Tutorial-BSSN_quantities.ipynb).
# * Other BSSN equation tutorial notebooks:
# * [Time-evolution equations the BSSN gauge quantities $\alpha$, $\beta^i$, and $B^i$](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb).
# * [BSSN Hamiltonian and momentum constraints](Tutorial-BSSN_constraints.ipynb)
# * [Enforcing the $\bar{\gamma} = \hat{\gamma}$ constraint](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)
#
# ### A Note on Notation
#
# As is standard in NRPy+,
#
# * Greek indices refer to four-dimensional quantities, where the zeroth component indicates the temporal (time) component.
# * Latin indices refer to three-dimensional quantities. Since Python always indexes its lists starting from 0, the zeroth component of a three-dimensional quantity necessarily indicates the first *spatial* direction.
#
# As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook).
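# To make the offset concrete, here is a tiny, self-contained Python sketch (the names `beta4U` and `betaU` are invented for this illustration and are not NRPy+ variables):

```python
# A toy four-vector: Greek index 0 is the temporal component.
beta4U = ["t", "x", "y", "z"]

# The corresponding three-vector drops the temporal slot, so its zeroth
# element is the first *spatial* direction:
betaU = beta4U[1:]

# A Latin index i on the three-vector maps to Greek index i+1 on the four-vector:
for i in range(3):
    assert betaU[i] == beta4U[i + 1]
```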
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows:
#
# 0. [Preliminaries](#bssntimeevolequations): BSSN time-evolution equations, as described in the [BSSN formulation NRPy+ tutorial notebook](Tutorial-BSSN_formulation.ipynb)
# 1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules
# 1. [Step 2](#gammabar): Right-hand side of $\partial_t \bar{\gamma}_{ij}$
#     1. [Step 2.a](#term1_partial_gamma): Term 1 of $\partial_t \bar{\gamma}_{i j}$
#     1. [Step 2.b](#term2_partial_gamma): Term 2 of $\partial_t \bar{\gamma}_{i j}$
#     1. [Step 2.c](#term3_partial_gamma): Term 3 of $\partial_t \bar{\gamma}_{i j}$
# 1. [Step 3](#abar): Right-hand side of $\partial_t \bar{A}_{ij}$
#     1. [Step 3.a](#term1_partial_upper_a): Term 1 of $\partial_t \bar{A}_{i j}$
#     1. [Step 3.b](#term2_partial_upper_a): Term 2 of $\partial_t \bar{A}_{i j}$
#     1. [Step 3.c](#term3_partial_upper_a): Term 3 of $\partial_t \bar{A}_{i j}$
# 1. [Step 4](#cf): Right-hand side of $\partial_t \phi \to \partial_t (\text{cf})$
# 1. [Step 5](#trk): Right-hand side of $\partial_t \text{tr} K$
# 1. [Step 6](#lambdabar): Right-hand side of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.a](#term1_partial_lambda): Term 1 of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.b](#term2_partial_lambda): Term 2 of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.c](#term3_partial_lambda): Term 3 of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.d](#term4_partial_lambda): Term 4 of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.e](#term5_partial_lambda): Term 5 of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.f](#term6_partial_lambda): Term 6 of $\partial_t \bar{\Lambda}^i$
#     1. [Step 6.g](#term7_partial_lambda): Term 7 of $\partial_t \bar{\Lambda}^i$
# 1. [Step 7](#rescalingrhss): Rescaling the BSSN right-hand sides; rewriting them in terms of the rescaled quantities $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$
# 1. [Step 8](#code_validation): Code Validation against `BSSN.BSSN_RHSs` NRPy+ module
# 1. [Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='bssntimeevolequations'></a>
#
# # Preliminaries: BSSN time-evolution equations \[Back to [top](#toc)\]
# $$\label{bssntimeevolequations}$$
#
# As described in the [BSSN formulation NRPy+ tutorial notebook](Tutorial-BSSN_formulation.ipynb), the BSSN time-evolution equations are given by
#
# \begin{align}
# \partial_t \bar{\gamma}_{i j} {} = {} & \left[\beta^k \partial_k \bar{\gamma}_{ij} + \partial_i \beta^k \bar{\gamma}_{kj} + \partial_j \beta^k \bar{\gamma}_{ik} \right] + \frac{2}{3} \bar{\gamma}_{i j} \left (\alpha \bar{A}_{k}^{k} - \bar{D}_{k} \beta^{k}\right ) - 2 \alpha \bar{A}_{i j} \; , \\
# \partial_t \bar{A}_{i j} {} = {} & \left[\beta^k \partial_k \bar{A}_{ij} + \partial_i \beta^k \bar{A}_{kj} + \partial_j \beta^k \bar{A}_{ik} \right] - \frac{2}{3} \bar{A}_{i j} \bar{D}_{k} \beta^{k} - 2 \alpha \bar{A}_{i k} {\bar{A}^{k}}_{j} + \alpha \bar{A}_{i j} K \nonumber \\
# & + e^{-4 \phi} \left \{-2 \alpha \bar{D}_{i} \bar{D}_{j} \phi + 4 \alpha \bar{D}_{i} \phi \bar{D}_{j} \phi + 4 \bar{D}_{(i} \alpha \bar{D}_{j)} \phi - \bar{D}_{i} \bar{D}_{j} \alpha + \alpha \bar{R}_{i j} \right \}^{\text{TF}} \; , \\
# \partial_t \phi {} = {} & \left[\beta^k \partial_k \phi \right] + \frac{1}{6} \left (\bar{D}_{k} \beta^{k} - \alpha K \right ) \; , \\
# \partial_{t} K {} = {} & \left[\beta^k \partial_k K \right] + \frac{1}{3} \alpha K^{2} + \alpha \bar{A}_{i j} \bar{A}^{i j} - e^{-4 \phi} \left (\bar{D}_{i} \bar{D}^{i} \alpha + 2 \bar{D}^{i} \alpha \bar{D}_{i} \phi \right ) \; , \\
# \partial_t \bar{\Lambda}^{i} {} = {} & \left[\beta^k \partial_k \bar{\Lambda}^i - \partial_k \beta^i \bar{\Lambda}^k \right] + \bar{\gamma}^{j k} \hat{D}_{j} \hat{D}_{k} \beta^{i} + \frac{2}{3} \Delta^{i} \bar{D}_{j} \beta^{j} + \frac{1}{3} \bar{D}^{i} \bar{D}_{j} \beta^{j} \nonumber \\
# & - 2 \bar{A}^{i j} \left (\partial_{j} \alpha - 6 \alpha \partial_{j} \phi \right ) + 2 \alpha \bar{A}^{j k} \Delta_{j k}^{i} -\frac{4}{3} \alpha \bar{\gamma}^{i j} \partial_{j} K
# \end{align}
#
# where the Lie derivative terms (often seen on the left-hand side of these equations) are enclosed in square brackets.
#
# Notice that the shift advection operator $\beta^k \partial_k \left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}\right\}$ appears on the right-hand side of *every* expression. As the shift determines how the spatial coordinates $x^i$ move on the next 3D slice of our 4D manifold, we find that representing $\partial_k$ in these shift advection terms via an *upwinded* finite difference stencil results in far lower numerical errors. This trick is implemented below in all shift advection terms. Upwinded derivatives are indicated in NRPy+ by the `_dupD` variable suffix.
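# The effect of upwinding can be sketched with first-order one-sided stencils in one dimension (a toy illustration; NRPy+ generates higher-order upwinded stencils, and the helper names below are invented for this sketch):

```python
import numpy as np

def centered_dx(u, dx):
    # Standard second-order centered first derivative (periodic boundaries).
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def upwind_dx(u, dx, beta):
    # First-order one-sided stencils (periodic boundaries).
    forward  = (np.roll(u, -1) - u) / dx  # uses the point toward +x
    backward = (u - np.roll(u, 1)) / dx   # uses the point toward -x
    # For a shift advection term +beta * du/dx, bias the stencil in the
    # direction the shift points, i.e., toward sign(beta):
    return np.where(beta > 0, forward, backward)
```

# On smooth data both stencils converge to the true derivative; the benefit of upwinding shows up in the stability of the time evolution, not in this static truncation error.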
#
#
# As discussed in the [NRPy+ tutorial notebook on BSSN quantities](Tutorial-BSSN_quantities.ipynb), tensorial expressions can diverge at coordinate singularities, so each tensor in the set of BSSN variables
#
# $$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$
#
# is written in terms of the corresponding rescaled quantity in the set
#
# $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$
#
# respectively, as defined in the [BSSN quantities tutorial](Tutorial-BSSN_quantities.ipynb).
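# The rescaling itself is a pointwise multiplication by reference-metric scale factors. Here is a SymPy sketch for a vector in spherical coordinates (the `ReU` values below are an assumption for illustration; NRPy+ stores the actual rescaling matrices in `rfm`):

```python
import sympy as sp

r, th = sp.symbols('r th', positive=True)
# Assumed rescaling vector for spherical coordinates: dividing out these
# factors removes the 1/r and 1/(r*sin(th)) singular behavior of vector
# components at the origin and on the z-axis.
ReU = [sp.sympify(1), 1/r, 1/(r*sp.sin(th))]

# Smooth, evolved (rescaled) components:
lambdaU = list(sp.symbols('lambda0 lambda1 lambda2'))
# The possibly singular "barred" vector is rebuilt from the smooth one:
LambdabarU = [lambdaU[i]*ReU[i] for i in range(3)]
```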
# <a id='initializenrpy'></a>
#
# # Step 1: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\]
# $$\label{initializenrpy}$$
#
# Let's start by importing all the needed modules from NRPy+:
# +
# Step 1.a: import all needed modules from Python/NRPy+:
import sympy as sp
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
import reference_metric as rfm
# Step 1.b: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.c: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
# Step 1.e: Import all basic (unrescaled) BSSN scalars & tensors
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
gammabarDD = Bq.gammabarDD
AbarDD = Bq.AbarDD
LambdabarU = Bq.LambdabarU
trK = Bq.trK
alpha = Bq.alpha
betaU = Bq.betaU
# Step 1.f: Import all needed rescaled BSSN tensors:
aDD = Bq.aDD
cf = Bq.cf
lambdaU = Bq.lambdaU
# -
# <a id='gammabar'></a>
#
# # Step 2: Right-hand side of $\partial_t \bar{\gamma}_{ij}$ \[Back to [top](#toc)\]
# $$\label{gammabar}$$
#
# Let's start with
#
# $$
# \partial_t \bar{\gamma}_{i j} =
# {\underbrace {\textstyle \left[\beta^k \partial_k \bar{\gamma}_{ij} + \partial_i \beta^k \bar{\gamma}_{kj} + \partial_j \beta^k \bar{\gamma}_{ik} \right]}_{\text{Term 1}}} +
# {\underbrace {\textstyle \frac{2}{3} \bar{\gamma}_{i j} \left (\alpha \bar{A}_{k}^{k} - \bar{D}_{k} \beta^{k}\right )}_{\text{Term 2}}}
# {\underbrace {\textstyle -2 \alpha \bar{A}_{i j}}_{\text{Term 3}}}.
# $$
# <a id='term1_partial_gamma'></a>
#
# ## Step 2.a: Term 1 of $\partial_t \bar{\gamma}_{i j}$ \[Back to [top](#toc)\]
# $$\label{term1_partial_gamma}$$
#
# Term 1 of $\partial_t \bar{\gamma}_{i j} =$ `gammabar_rhsDD[i][j]`: $\beta^k \bar{\gamma}_{ij,k} + \beta^k_{,i} \bar{\gamma}_{kj} + \beta^k_{,j} \bar{\gamma}_{ik}$
#
#
# First we import derivative expressions for betaU defined in the [NRPy+ BSSN quantities tutorial notebook](Tutorial-BSSN_quantities.ipynb)
# +
# Step 2.a.i: Import derivative expressions for betaU defined in the BSSN.BSSN_quantities module:
Bq.betaU_derivs()
betaU_dD = Bq.betaU_dD
betaU_dupD = Bq.betaU_dupD
betaU_dDD = Bq.betaU_dDD
# Step 2.a.ii: Import derivative expression for gammabarDD
Bq.gammabar__inverse_and_derivs()
gammabarDD_dupD = Bq.gammabarDD_dupD
# Step 2.a.iii: First term of \partial_t \bar{\gamma}_{i j} right-hand side:
# \beta^k \bar{\gamma}_{ij,k} + \beta^k_{,i} \bar{\gamma}_{kj} + \beta^k_{,j} \bar{\gamma}_{ik}
gammabar_rhsDD = ixp.zerorank2()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            gammabar_rhsDD[i][j] += betaU[k]*gammabarDD_dupD[i][j][k] + betaU_dD[k][i]*gammabarDD[k][j] \
                                  + betaU_dD[k][j]*gammabarDD[i][k]
# -
# <a id='term2_partial_gamma'></a>
#
# ## Step 2.b: Term 2 of $\partial_t \bar{\gamma}_{i j}$ \[Back to [top](#toc)\]
# $$\label{term2_partial_gamma}$$
#
# Term 2 of $\partial_t \bar{\gamma}_{i j} =$ `gammabar_rhsDD[i][j]`: $\frac{2}{3} \bar{\gamma}_{i j} \left (\alpha \bar{A}_{k}^{k} - \bar{D}_{k} \beta^{k}\right )$
#
# Let's first convert this expression to be in terms of the evolved variables $a_{ij}$ and $\mathcal{V}^i$, starting with $\bar{A}_{ij} = a_{ij} \text{ReDD[i][j]}$. Then $\bar{A}^k_{k} = \bar{\gamma}^{ij} \bar{A}_{ij}$, and we have already defined $\bar{\gamma}^{ij}$ in terms of the evolved quantity $h_{ij}$.
#
# Next, we wish to compute
#
# $$\bar{D}_{k} \beta^{k} = \beta^k_{,k} + \frac{\beta^k \bar{\gamma}_{,k}}{2 \bar{\gamma}},$$
#
# where $\bar{\gamma}$ is the determinant of the conformal metric $\bar{\gamma}_{ij}$. ***Exercise to student: Prove the above relation.***
# [Solution.](https://physics.stackexchange.com/questions/81453/general-relativity-christoffel-symbol-identity)
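# The identity can also be spot-checked with SymPy in two-dimensional polar coordinates (a standalone verification sketch under that assumed metric, not part of the NRPy+ module):

```python
import sympy as sp

r, th = sp.symbols('r th', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])  # 2D polar metric, det(g) = r^2
ginv, detg = g.inv(), g.det()
beta = [sp.Function('b0')(r, th), sp.Function('b1')(r, th)]

def Gamma(k, i, j):
    # Christoffel symbol Gamma^k_{ij} of the metric g
    return sp.Rational(1, 2)*sum(ginv[k, l]*(sp.diff(g[l, i], x[j])
                                             + sp.diff(g[l, j], x[i])
                                             - sp.diff(g[i, j], x[l])) for l in range(2))

# Covariant divergence: D_k beta^k = beta^k_{,k} + Gamma^k_{km} beta^m
div_cov = sum(sp.diff(beta[k], x[k]) for k in range(2)) \
        + sum(Gamma(k, k, m)*beta[m] for k in range(2) for m in range(2))
# Claimed identity: beta^k_{,k} + beta^k detg_{,k} / (2 detg)
div_det = sum(sp.diff(beta[k], x[k]) for k in range(2)) \
        + sum(beta[k]*sp.diff(detg, x[k]) for k in range(2))/(2*detg)

assert sp.simplify(div_cov - div_det) == 0
```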
#
# Usually (i.e., so long as we make the parameter choice `detgbarOverdetghat_equals_one = True` ) we will choose $\bar{\gamma}=\hat{\gamma}$, so $\bar{\gamma}$ will in general possess coordinate singularities. Thus we would prefer to rewrite derivatives of $\bar{\gamma}$ in terms of derivatives of $\bar{\gamma}/\hat{\gamma} = 1$.
# +
# Step 2.b.i: First import \bar{A}_{ij} = AbarDD[i][j], and its contraction trAbar = \bar{A}^k_k
# from BSSN.BSSN_quantities
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
trAbar = Bq.trAbar
# Step 2.b.ii: Import detgammabar quantities from BSSN.BSSN_quantities:
Bq.detgammabar_and_derivs()
detgammabar = Bq.detgammabar
detgammabar_dD = Bq.detgammabar_dD
# Step 2.b.iii: Compute the contraction \bar{D}_k \beta^k = \beta^k_{,k} + \frac{\beta^k \bar{\gamma}_{,k}}{2 \bar{\gamma}}
Dbarbetacontraction = sp.sympify(0)
for k in range(DIM):
    Dbarbetacontraction += betaU_dD[k][k] + betaU[k]*detgammabar_dD[k]/(2*detgammabar)
# Step 2.b.iv: Second term of \partial_t \bar{\gamma}_{i j} right-hand side:
#              \frac{2}{3} \bar{\gamma}_{i j} \left (\alpha \bar{A}_{k}^{k} - \bar{D}_{k} \beta^{k}\right )
for i in range(DIM):
    for j in range(DIM):
        gammabar_rhsDD[i][j] += sp.Rational(2,3)*gammabarDD[i][j]*(alpha*trAbar - Dbarbetacontraction)
# -
# <a id='term3_partial_gamma'></a>
#
# ## Step 2.c: Term 3 of $\partial_t \bar{\gamma}_{i j}$ \[Back to [top](#toc)\]
# $$\label{term3_partial_gamma}$$
#
# Term 3 of $\partial_t \bar{\gamma}_{i j}$ = `gammabar_rhsDD[i][j]`: $-2 \alpha \bar{A}_{ij}$
#
# Step 2.c: Third term of \partial_t \bar{\gamma}_{i j} right-hand side:
# -2 \alpha \bar{A}_{ij}
for i in range(DIM):
    for j in range(DIM):
        gammabar_rhsDD[i][j] += -2*alpha*AbarDD[i][j]
# <a id='abar'></a>
#
# # Step 3: Right-hand side of $\partial_t \bar{A}_{ij}$ \[Back to [top](#toc)\]
# $$\label{abar}$$
#
# $$\partial_t \bar{A}_{i j} =
# {\underbrace {\textstyle \left[\beta^k \partial_k \bar{A}_{ij} + \partial_i \beta^k \bar{A}_{kj} + \partial_j \beta^k \bar{A}_{ik} \right]}_{\text{Term 1}}}
# {\underbrace {\textstyle - \frac{2}{3} \bar{A}_{i j} \bar{D}_{k} \beta^{k} - 2 \alpha \bar{A}_{i k} {\bar{A}^{k}}_{j} + \alpha \bar{A}_{i j} K}_{\text{Term 2}}} +
# {\underbrace {\textstyle e^{-4 \phi} \left \{-2 \alpha \bar{D}_{i} \bar{D}_{j} \phi + 4 \alpha \bar{D}_{i} \phi \bar{D}_{j} \phi + 4 \bar{D}_{(i} \alpha \bar{D}_{j)} \phi - \bar{D}_{i} \bar{D}_{j} \alpha + \alpha \bar{R}_{i j} \right \}^{\text{TF}}}_{\text{Term 3}}}$$
# <a id='term1_partial_upper_a'></a>
#
# ## Step 3.a: Term 1 of $\partial_t \bar{A}_{i j}$ \[Back to [top](#toc)\]
# $$\label{term1_partial_upper_a}$$
#
# Term 1 of $\partial_t \bar{A}_{i j}$ = `Abar_rhsDD[i][j]`: $\left[\beta^k \partial_k \bar{A}_{ij} + \partial_i \beta^k \bar{A}_{kj} + \partial_j \beta^k \bar{A}_{ik} \right]$
#
#
# Notice the first subexpression has a $\beta^k \partial_k A_{ij}$ advection term, which will be upwinded.
# +
# Step 3.a: First term of \partial_t \bar{A}_{i j}:
# \beta^k \partial_k \bar{A}_{ij} + \partial_i \beta^k \bar{A}_{kj} + \partial_j \beta^k \bar{A}_{ik}
AbarDD_dupD = Bq.AbarDD_dupD # From Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
Abar_rhsDD = ixp.zerorank2()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            Abar_rhsDD[i][j] += betaU[k]*AbarDD_dupD[i][j][k] + betaU_dD[k][i]*AbarDD[k][j] \
                              + betaU_dD[k][j]*AbarDD[i][k]
# -
# <a id='term2_partial_upper_a'></a>
#
# ## Step 3.b: Term 2 of $\partial_t \bar{A}_{i j}$ \[Back to [top](#toc)\]
# $$\label{term2_partial_upper_a}$$
#
# Term 2 of $\partial_t \bar{A}_{i j}$ = `Abar_rhsDD[i][j]`: $- \frac{2}{3} \bar{A}_{i j} \bar{D}_{k} \beta^{k} - 2 \alpha \bar{A}_{i k} \bar{A}^{k}_{j} + \alpha \bar{A}_{i j} K$
#
#
# Note that $\bar{D}_{k} \beta^{k}$ was already defined as `Dbarbetacontraction`.
# Step 3.b: Second term of \partial_t \bar{A}_{i j}:
# - (2/3) \bar{A}_{i j} \bar{D}_{k} \beta^{k} - 2 \alpha \bar{A}_{i k} {\bar{A}^{k}}_{j} + \alpha \bar{A}_{i j} K
gammabarUU = Bq.gammabarUU # From Bq.gammabar__inverse_and_derivs()
AbarUD = Bq.AbarUD         # From Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
for i in range(DIM):
    for j in range(DIM):
        Abar_rhsDD[i][j] += -sp.Rational(2,3)*AbarDD[i][j]*Dbarbetacontraction + alpha*AbarDD[i][j]*trK
        for k in range(DIM):
            Abar_rhsDD[i][j] += -2*alpha * AbarDD[i][k]*AbarUD[k][j]
# <a id='term3_partial_upper_a'></a>
#
# ## Step 3.c: Term 3 of $\partial_t \bar{A}_{i j}$ \[Back to [top](#toc)\]
# $$\label{term3_partial_upper_a}$$
#
#
# Term 3 of $\partial_t \bar{A}_{i j}$ = `Abar_rhsDD[i][j]`: $e^{-4 \phi} \left \{-2 \alpha \bar{D}_{i} \bar{D}_{j} \phi + 4 \alpha \bar{D}_{i} \phi \bar{D}_{j} \phi + 4 \bar{D}_{(i} \alpha \bar{D}_{j)} \phi - \bar{D}_{i} \bar{D}_{j} \alpha + \alpha \bar{R}_{i j} \right \}^{\text{TF}}$
#
# The first covariant derivatives of $\phi$ and $\alpha$ are simply partial derivatives. However, $\phi$ is not a gridfunction; `cf` is. The default choice, cf = $W$, denotes that the evolved variable is $W=e^{-2 \phi}$, which results in smoother (and thus more desirable) spacetime fields around puncture black holes.
# +
# Step 3.c.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
Bq.phi_and_derivs()
phi_dD = Bq.phi_dD
phi_dupD = Bq.phi_dupD
phi_dDD = Bq.phi_dDD
exp_m4phi = Bq.exp_m4phi
phi_dBarD = Bq.phi_dBarD # phi_dBarD = Dbar_i phi = phi_dD (since phi is a scalar)
phi_dBarDD = Bq.phi_dBarDD # phi_dBarDD = Dbar_i Dbar_j phi (covariant derivative)
# Step 3.c.ii: Define RbarDD
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
RbarDD = Bq.RbarDD
# Step 3.c.iii: Define first and second derivatives of \alpha, as well as
# \bar{D}_i \bar{D}_j \alpha, which is defined just like phi
alpha_dD = ixp.declarerank1("alpha_dD")
alpha_dDD = ixp.declarerank2("alpha_dDD","sym01")
alpha_dBarD = alpha_dD
alpha_dBarDD = ixp.zerorank2()
GammabarUDD = Bq.GammabarUDD # Defined in Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
    for j in range(DIM):
        alpha_dBarDD[i][j] = alpha_dDD[i][j]
        for k in range(DIM):
            alpha_dBarDD[i][j] += - GammabarUDD[k][i][j]*alpha_dD[k]
# Step 3.c.iv: Define the terms in curly braces:
curlybrackettermsDD = ixp.zerorank2()
for i in range(DIM):
    for j in range(DIM):
        curlybrackettermsDD[i][j] = -2*alpha*phi_dBarDD[i][j] + 4*alpha*phi_dBarD[i]*phi_dBarD[j] \
                                    +2*alpha_dBarD[i]*phi_dBarD[j] \
                                    +2*alpha_dBarD[j]*phi_dBarD[i] \
                                    -alpha_dBarDD[i][j] + alpha*RbarDD[i][j]
# Step 3.c.v: Compute the trace:
curlybracketterms_trace = sp.sympify(0)
for i in range(DIM):
    for j in range(DIM):
        curlybracketterms_trace += gammabarUU[i][j]*curlybrackettermsDD[i][j]
# Step 3.c.vi: Third and final term of Abar_rhsDD[i][j]:
for i in range(DIM):
    for j in range(DIM):
        Abar_rhsDD[i][j] += exp_m4phi*(curlybrackettermsDD[i][j] - \
                                       sp.Rational(1,3)*gammabarDD[i][j]*curlybracketterms_trace)
# -
# <a id='cf'></a>
#
# # Step 4: Right-hand side of $\partial_t \phi \to \partial_t (\text{cf})$ \[Back to [top](#toc)\]
# $$\label{cf}$$
#
# $$\partial_t \phi =
# {\underbrace {\textstyle \left[\beta^k \partial_k \phi \right]}_{\text{Term 1}}} +
# {\underbrace {\textstyle \frac{1}{6} \left (\bar{D}_{k} \beta^{k} - \alpha K \right)}_{\text{Term 2}}}$$
#
# The right-hand side of $\partial_t \phi$ is trivial except for the fact that the actual evolved variable is `cf` (short for conformal factor), which could represent
# * cf = $\phi$
# * cf = $W = e^{-2 \phi}$ (default)
# * cf = $\chi = e^{-4 \phi}$
#
# Thus we are actually computing the right-hand side of the equation $\partial_t \text{cf}$, which is related to $\partial_t \phi$ via simple relations:
# * cf = $\phi$: $\partial_t \text{cf} = \partial_t \phi$ (unchanged)
# * cf = $W$: $\partial_t \text{cf} = \partial_t (e^{-2 \phi}) = -2 e^{-2\phi}\partial_t \phi = -2 W \partial_t \phi$. Thus we need to multiply the right-hand side by $-2 W = -2\,\text{cf}$ when cf = $W$.
# * cf = $\chi$: Same argument as for $W$, except the right-hand side must be multiplied by $-4 \chi = -4\,\text{cf}$.
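# These chain-rule relations are straightforward to confirm with SymPy (a standalone check, independent of the NRPy+ variables above):

```python
import sympy as sp

t = sp.Symbol('t')
phi = sp.Function('phi')(t)

W   = sp.exp(-2*phi)  # cf = W
chi = sp.exp(-4*phi)  # cf = chi

# d(cf)/dt equals (-2 W) or (-4 chi) times d(phi)/dt, respectively:
assert sp.simplify(sp.diff(W,   t) + 2*W  *sp.diff(phi, t)) == 0
assert sp.simplify(sp.diff(chi, t) + 4*chi*sp.diff(phi, t)) == 0
```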
# +
# Step 4: Right-hand side of conformal factor variable "cf". Supported
# options include: cf=phi, cf=W=e^(-2*phi) (default), and cf=chi=e^(-4*phi)
# \partial_t phi = \left[\beta^k \partial_k \phi \right] <- TERM 1
# + \frac{1}{6} \left (\bar{D}_{k} \beta^{k} - \alpha K \right ) <- TERM 2
cf_rhs = sp.Rational(1,6) * (Dbarbetacontraction - alpha*trK) # Term 2
for k in range(DIM):
    cf_rhs += betaU[k]*phi_dupD[k] # Term 1
# Next multiply to convert phi_rhs to cf_rhs.
if par.parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf") == "phi":
    pass # do nothing; cf_rhs = phi_rhs
elif par.parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf") == "W":
    cf_rhs *= -2*cf # cf_rhs = -2*cf*phi_rhs
elif par.parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf") == "chi":
    cf_rhs *= -4*cf # cf_rhs = -4*cf*phi_rhs
else:
    print("Error: EvolvedConformalFactor_cf == "+
          par.parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf")+" unsupported!")
    exit(1)
# -
# <a id='trk'></a>
#
# # Step 5: Right-hand side of $\partial_t K$ \[Back to [top](#toc)\]
# $$\label{trk}$$
#
# $$
# \partial_{t} K =
# {\underbrace {\textstyle \left[\beta^i \partial_i K \right]}_{\text{Term 1}}} +
# {\underbrace {\textstyle \frac{1}{3} \alpha K^{2}}_{\text{Term 2}}} +
# {\underbrace {\textstyle \alpha \bar{A}_{i j} \bar{A}^{i j}}_{\text{Term 3}}}
# {\underbrace {\textstyle - e^{-4 \phi} \left (\bar{D}_{i} \bar{D}^{i} \alpha + 2 \bar{D}^{i} \alpha \bar{D}_{i} \phi \right )}_{\text{Term 4}}}
# $$
# +
# Step 5: right-hand side of trK (trace of extrinsic curvature):
# \partial_t K = \beta^k \partial_k K <- TERM 1
# + \frac{1}{3} \alpha K^{2} <- TERM 2
# + \alpha \bar{A}_{i j} \bar{A}^{i j} <- TERM 3
#              - e^{-4 \phi} (\bar{D}_{i} \bar{D}^{i} \alpha + 2 \bar{D}^{i} \alpha \bar{D}_{i} \phi ) <- TERM 4
# TERM 2:
trK_rhs = sp.Rational(1,3)*alpha*trK*trK
trK_dupD = ixp.declarerank1("trK_dupD")
for i in range(DIM):
    # TERM 1:
    trK_rhs += betaU[i]*trK_dupD[i]
for i in range(DIM):
    for j in range(DIM):
        # TERM 4:
        trK_rhs += -exp_m4phi*gammabarUU[i][j]*(alpha_dBarDD[i][j] + 2*alpha_dBarD[j]*phi_dBarD[i])
AbarUU = Bq.AbarUU # From Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
for i in range(DIM):
    for j in range(DIM):
        # TERM 3:
        trK_rhs += alpha*AbarDD[i][j]*AbarUU[i][j]
# -
# <a id='lambdabar'></a>
#
# # Step 6: Right-hand side of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{lambdabar}$$
#
# \begin{align}
# \partial_t \bar{\Lambda}^{i} &=
# {\underbrace {\textstyle \left[\beta^k \partial_k \bar{\Lambda}^i - \partial_k \beta^i \bar{\Lambda}^k \right]}_{\text{Term 1}}} +
# {\underbrace {\textstyle \bar{\gamma}^{j k} \hat{D}_{j} \hat{D}_{k} \beta^{i}}_{\text{Term 2}}} +
# {\underbrace {\textstyle \frac{2}{3} \Delta^{i} \bar{D}_{j} \beta^{j}}_{\text{Term 3}}} +
# {\underbrace {\textstyle \frac{1}{3} \bar{D}^{i} \bar{D}_{j} \beta^{j}}_{\text{Term 4}}} \nonumber \\
# &
# {\underbrace {\textstyle - 2 \bar{A}^{i j} \left (\partial_{j} \alpha - 6 \alpha \partial_{j} \phi \right )}_{\text{Term 5}}} +
# {\underbrace {\textstyle 2 \alpha \bar{A}^{j k} \Delta_{j k}^{i}}_{\text{Term 6}}}
# {\underbrace {\textstyle -\frac{4}{3} \alpha \bar{\gamma}^{i j} \partial_{j} K}_{\text{Term 7}}}
# \end{align}
# <a id='term1_partial_lambda'></a>
#
# ## Step 6.a: Term 1 of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{term1_partial_lambda}$$
#
# Term 1 of $\partial_t \bar{\Lambda}^{i}$: $\beta^k \partial_k \bar{\Lambda}^i - \partial_k \beta^i \bar{\Lambda}^k$
#
# Computing this term requires that we define $\bar{\Lambda}^i$ and $\bar{\Lambda}^i_{,j}$ in terms of the rescaled (i.e., actual evolved) variable $\lambda^i$ and derivatives:
# \begin{align}
# \bar{\Lambda}^i &= \lambda^i \text{ReU[i]} \\
# \bar{\Lambda}^i_{,\ j} &= \lambda^i_{,\ j} \text{ReU[i]} + \lambda^i \text{ReUdD[i][j]}
# \end{align}
# +
# Step 6: right-hand side of \partial_t \bar{\Lambda}^i:
# \partial_t \bar{\Lambda}^i = \beta^k \partial_k \bar{\Lambda}^i - \partial_k \beta^i \bar{\Lambda}^k <- TERM 1
# + \bar{\gamma}^{j k} \hat{D}_{j} \hat{D}_{k} \beta^{i} <- TERM 2
# + \frac{2}{3} \Delta^{i} \bar{D}_{j} \beta^{j} <- TERM 3
# + \frac{1}{3} \bar{D}^{i} \bar{D}_{j} \beta^{j} <- TERM 4
# - 2 \bar{A}^{i j} (\partial_{j} \alpha - 6 \alpha \partial_{j} \phi) <- TERM 5
# + 2 \alpha \bar{A}^{j k} \Delta_{j k}^{i} <- TERM 6
# - \frac{4}{3} \alpha \bar{\gamma}^{i j} \partial_{j} K <- TERM 7
# Step 6.a: Term 1 of \partial_t \bar{\Lambda}^i: \beta^k \partial_k \bar{\Lambda}^i - \partial_k \beta^i \bar{\Lambda}^k
# First we declare \bar{\Lambda}^i and \bar{\Lambda}^i_{,j} in terms of \lambda^i and \lambda^i_{,j}
LambdabarU_dupD = ixp.zerorank2()
lambdaU_dupD = ixp.declarerank2("lambdaU_dupD","nosym")
for i in range(DIM):
    for j in range(DIM):
        LambdabarU_dupD[i][j] = lambdaU_dupD[i][j]*rfm.ReU[i] + lambdaU[i]*rfm.ReUdD[i][j]
Lambdabar_rhsU = ixp.zerorank1()
for i in range(DIM):
    for k in range(DIM):
        Lambdabar_rhsU[i] += betaU[k]*LambdabarU_dupD[i][k] - betaU_dD[i][k]*LambdabarU[k] # Term 1
# -
# <a id='term2_partial_lambda'></a>
#
# ## Step 6.b: Term 2 of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{term2_partial_lambda}$$
#
# Term 2 of $\partial_t \bar{\Lambda}^{i}$: $\bar{\gamma}^{j k} \hat{D}_{j} \hat{D}_{k} \beta^{i}$
#
# This is a relatively difficult term to compute, as it requires we evaluate the second covariant derivative of the shift vector, with respect to the hatted (i.e., reference) metric.
#
# Based on the definition of covariant derivative, we have
# $$
# \hat{D}_{k} \beta^{i} = \beta^i_{,k} + \hat{\Gamma}^i_{mk} \beta^m
# $$
#
# Since $\hat{D}_{k} \beta^{i}$ is a tensor, the covariant derivative of this will have the same indexing as a tensor $T_k^i$:
#
# $$
# \hat{D}_{j} T^i_k = T^i_{k,j} + \hat{\Gamma}^i_{dj} T^d_k - \hat{\Gamma}^d_{kj} T^i_d.
# $$
#
# Therefore,
# \begin{align}
# \hat{D}_{j} \left(\hat{D}_{k} \beta^{i}\right) &= \left(\beta^i_{,k} + \hat{\Gamma}^i_{mk} \beta^m\right)_{,j} + \hat{\Gamma}^i_{dj} \left(\beta^d_{,k} + \hat{\Gamma}^d_{mk} \beta^m\right) - \hat{\Gamma}^d_{kj} \left(\beta^i_{,d} + \hat{\Gamma}^i_{md} \beta^m\right) \\
# &= \beta^i_{,kj} + \hat{\Gamma}^i_{mk,j} \beta^m + \hat{\Gamma}^i_{mk} \beta^m_{,j} + \hat{\Gamma}^i_{dj}\beta^d_{,k} + \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} \beta^m - \hat{\Gamma}^d_{kj} \beta^i_{,d} - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} \beta^m \\
# &= {\underbrace {\textstyle \beta^i_{,kj}}_{\text{Term 2a}}}
# {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} \beta^m + \hat{\Gamma}^i_{mk} \beta^m_{,j} + \hat{\Gamma}^i_{dj}\beta^d_{,k} - \hat{\Gamma}^d_{kj} \beta^i_{,d}}_{\text{Term 2b}}} +
# {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} \beta^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} \beta^m}_{\text{Term 2c}}},
# \end{align}
#
# where
# $$
# \text{Term 2} = \bar{\gamma}^{jk} \left(\text{Term 2a} + \text{Term 2b} + \text{Term 2c}\right)
# $$
# +
# Step 6.b: Term 2 of \partial_t \bar{\Lambda}^i = \bar{\gamma}^{jk} (Term 2a + Term 2b + Term 2c)
# Term 2a: \bar{\gamma}^{jk} \beta^i_{,kj}
Term2aUDD = ixp.zerorank3()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            Term2aUDD[i][j][k] += betaU_dDD[i][k][j]
# Term 2b: \hat{\Gamma}^i_{mk,j} \beta^m + \hat{\Gamma}^i_{mk} \beta^m_{,j}
# + \hat{\Gamma}^i_{dj}\beta^d_{,k} - \hat{\Gamma}^d_{kj} \beta^i_{,d}
Term2bUDD = ixp.zerorank3()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            for m in range(DIM):
                Term2bUDD[i][j][k] += rfm.GammahatUDDdD[i][m][k][j]*betaU[m] \
                                    + rfm.GammahatUDD[i][m][k]*betaU_dD[m][j] \
                                    + rfm.GammahatUDD[i][m][j]*betaU_dD[m][k] \
                                    - rfm.GammahatUDD[m][k][j]*betaU_dD[i][m]
# Term 2c: \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} \beta^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} \beta^m
Term2cUDD = ixp.zerorank3()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            for m in range(DIM):
                for d in range(DIM):
                    Term2cUDD[i][j][k] += ( rfm.GammahatUDD[i][d][j]*rfm.GammahatUDD[d][m][k] \
                                           -rfm.GammahatUDD[d][k][j]*rfm.GammahatUDD[i][m][d])*betaU[m]
Lambdabar_rhsUpieceU = ixp.zerorank1()
# Put it all together to get Term 2:
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            Lambdabar_rhsU[i]       += gammabarUU[j][k] * (Term2aUDD[i][j][k] + Term2bUDD[i][j][k] + Term2cUDD[i][j][k])
            Lambdabar_rhsUpieceU[i] += gammabarUU[j][k] * (Term2aUDD[i][j][k] + Term2bUDD[i][j][k] + Term2cUDD[i][j][k])
# -
# <a id='term3_partial_lambda'></a>
#
# ## Step 6.c: Term 3 of $\partial_t \bar{\Lambda}^{i}$: $\frac{2}{3} \Delta^{i} \bar{D}_{j} \beta^{j}$ \[Back to [top](#toc)\]
# $$\label{term3_partial_lambda}$$
#
# Term 3 of $\partial_t \bar{\Lambda}^{i}$: $\frac{2}{3} \Delta^{i} \bar{D}_{j} \beta^{j}$
#
# This term is the simplest to implement, as $\bar{D}_{j} \beta^{j}$ and $\Delta^i$ have already been defined, as `Dbarbetacontraction` and `DGammaU[i]`, respectively:
# Step 6.c: Term 3 of \partial_t \bar{\Lambda}^i:
# \frac{2}{3} \Delta^{i} \bar{D}_{j} \beta^{j}
DGammaU = Bq.DGammaU # From Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
    Lambdabar_rhsU[i] += sp.Rational(2,3)*DGammaU[i]*Dbarbetacontraction # Term 3
# <a id='term4_partial_lambda'></a>
#
# ## Step 6.d: Term 4 of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{term4_partial_lambda}$$
#
# Term 4 of $\partial_t \bar{\Lambda}^{i}$: $\frac{1}{3} \bar{D}^{i} \bar{D}_{j} \beta^{j}$
#
# Recall first that
#
# $$\bar{D}_{k} \beta^{k} = \beta^k_{,\ k} + \frac{\beta^k \bar{\gamma}_{,k}}{2 \bar{\gamma}},$$
# which is a scalar, so
#
# \begin{align}
# \bar{D}_m \bar{D}_{j} \beta^{j} &= \left(\beta^k_{,\ k} + \frac{\beta^k \bar{\gamma}_{,k}}{2 \bar{\gamma}}\right)_{,m} \\
# &= \beta^k_{\ ,km} + \frac{\beta^k_{\ ,m} \bar{\gamma}_{,k} + \beta^k \bar{\gamma}_{\ ,km}}{2 \bar{\gamma}} - \frac{\beta^k \bar{\gamma}_{,k} \bar{\gamma}_{,m}}{2 \bar{\gamma}^2}
# \end{align}
#
# Thus,
# \begin{align}
# \bar{D}^i \bar{D}_{j} \beta^{j}
# &= \bar{\gamma}^{im} \bar{D}_m \bar{D}_{j} \beta^{j} \\
# &= \bar{\gamma}^{im} \left(\beta^k_{\ ,km} + \frac{\beta^k_{\ ,m} \bar{\gamma}_{,k} + \beta^k \bar{\gamma}_{\ ,km}}{2 \bar{\gamma}} - \frac{\beta^k \bar{\gamma}_{,k} \bar{\gamma}_{,m}}{2 \bar{\gamma}^2} \right)
# \end{align}
# Step 6.d: Term 4 of \partial_t \bar{\Lambda}^i:
# \frac{1}{3} \bar{D}^{i} \bar{D}_{j} \beta^{j}
detgammabar_dDD = Bq.detgammabar_dDD # From Bq.detgammabar_and_derivs()
Dbarbetacontraction_dBarD = ixp.zerorank1()
for k in range(DIM):
    for m in range(DIM):
        Dbarbetacontraction_dBarD[m] += betaU_dDD[k][k][m] + \
                                        (betaU_dD[k][m]*detgammabar_dD[k] +
                                         betaU[k]*detgammabar_dDD[k][m])/(2*detgammabar) \
                                        -betaU[k]*detgammabar_dD[k]*detgammabar_dD[m]/(2*detgammabar*detgammabar)
for i in range(DIM):
    for m in range(DIM):
        Lambdabar_rhsU[i] += sp.Rational(1,3)*gammabarUU[i][m]*Dbarbetacontraction_dBarD[m]
# <a id='term5_partial_lambda'></a>
#
# ## Step 6.e: Term 5 of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{term5_partial_lambda}$$
#
# Term 5 of $\partial_t \bar{\Lambda}^{i}$: $- 2 \bar{A}^{i j} \left (\partial_{j} \alpha - 6\alpha \partial_{j} \phi\right)$
# Step 6.e: Term 5 of \partial_t \bar{\Lambda}^i:
# - 2 \bar{A}^{i j} (\partial_{j} \alpha - 6 \alpha \partial_{j} \phi)
for i in range(DIM):
    for j in range(DIM):
        Lambdabar_rhsU[i] += -2*AbarUU[i][j]*(alpha_dD[j] - 6*alpha*phi_dD[j])
# <a id='term6_partial_lambda'></a>
#
# ## Step 6.f: Term 6 of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{term6_partial_lambda}$$
#
# Term 6 of $\partial_t \bar{\Lambda}^{i}$: $2\alpha \bar{A}^{j k} \Delta_{j k}^{i}$
# Step 6.f: Term 6 of \partial_t \bar{\Lambda}^i:
# 2 \alpha \bar{A}^{j k} \Delta^{i}_{j k}
DGammaUDD = Bq.DGammaUDD # From RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            Lambdabar_rhsU[i] += 2*alpha*AbarUU[j][k]*DGammaUDD[i][j][k]
# <a id='term7_partial_lambda'></a>
#
# ## Step 6.g: Term 7 of $\partial_t \bar{\Lambda}^{i}$ \[Back to [top](#toc)\]
# $$\label{term7_partial_lambda}$$
#
# Term 7 of $\partial_t \bar{\Lambda}^{i}$: $-\frac{4}{3} \alpha \bar{\gamma}^{i j} \partial_{j} K$
# Step 6.g: Term 7 of \partial_t \bar{\Lambda}^i:
# -\frac{4}{3} \alpha \bar{\gamma}^{i j} \partial_{j} K
trK_dD = ixp.declarerank1("trK_dD")
for i in range(DIM):
    for j in range(DIM):
        Lambdabar_rhsU[i] += -sp.Rational(4,3)*alpha*gammabarUU[i][j]*trK_dD[j]
# <a id='rescalingrhss'></a>
#
# # Step 7: Rescaling the BSSN right-hand sides; rewriting them in terms of the rescaled quantities $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ \[Back to [top](#toc)\]
# $$\label{rescalingrhss}$$
#
# Next we rescale the right-hand sides of the BSSN equations so that the evolved variables are $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}\right\}$
# Step 7: Rescale the RHS quantities so that the evolved
# variables are smooth across coord singularities
h_rhsDD = ixp.zerorank2()
a_rhsDD = ixp.zerorank2()
lambda_rhsU = ixp.zerorank1()
for i in range(DIM):
    lambda_rhsU[i] = Lambdabar_rhsU[i] / rfm.ReU[i]
    for j in range(DIM):
        h_rhsDD[i][j] = gammabar_rhsDD[i][j] / rfm.ReDD[i][j]
        a_rhsDD[i][j] = Abar_rhsDD[i][j] / rfm.ReDD[i][j]
# <a id='code_validation'></a>
#
# # Step 8: Code Validation against `BSSN.BSSN_RHSs` NRPy+ module \[Back to [top](#toc)\]
# $$\label{code_validation}$$
#
# Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between
# 1. this tutorial and
# 2. the NRPy+ [BSSN.BSSN_RHSs](../edit/BSSN/BSSN_RHSs.py) module.
#
# By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
# +
# Step 8: We already have SymPy expressions for BSSN RHS expressions
# in terms of other SymPy variables. Even if we reset the
# list of NRPy+ gridfunctions, these *SymPy* expressions for
# BSSN RHS variables *will remain unaffected*.
#
# Here, we will use the above-defined BSSN RHS expressions
# to validate against the same expressions in the
# BSSN/BSSN_RHSs.py file, to ensure consistency between
# this tutorial and the module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice will spawn an error.
gri.glb_gridfcs_list = []
# Step 9.a: Call the BSSN_RHSs() function from within the
# BSSN/BSSN_RHSs.py module,
# which should do exactly the same as in Steps 1-16 above.
import BSSN.BSSN_RHSs as bssnrhs
bssnrhs.BSSN_RHSs()
print("Consistency check between BSSN_RHSs tutorial and NRPy+ module: ALL SHOULD BE ZERO.")
print("trK_rhs - bssnrhs.trK_rhs = " + str(trK_rhs - bssnrhs.trK_rhs))
print("cf_rhs - bssnrhs.cf_rhs = " + str(cf_rhs - bssnrhs.cf_rhs))
for i in range(DIM):
    print("lambda_rhsU["+str(i)+"] - bssnrhs.lambda_rhsU["+str(i)+"] = " +
          str(lambda_rhsU[i] - bssnrhs.lambda_rhsU[i]))
    for j in range(DIM):
        print("h_rhsDD["+str(i)+"]["+str(j)+"] - bssnrhs.h_rhsDD["+str(i)+"]["+str(j)+"] = "
              + str(h_rhsDD[i][j] - bssnrhs.h_rhsDD[i][j]))
        print("a_rhsDD["+str(i)+"]["+str(j)+"] - bssnrhs.a_rhsDD["+str(i)+"]["+str(j)+"] = "
              + str(a_rhsDD[i][j] - bssnrhs.a_rhsDD[i][j]))
# -
# <a id='latex_pdf_output'></a>
#
# # Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-BSSN_time_evolution-BSSN_RHSs.pdf](Tutorial-BSSN_time_evolution-BSSN_RHSs.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
# !jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb
# !pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_RHSs.tex
# !pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_RHSs.tex
# !pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_RHSs.tex
# !rm -f Tut*.out Tut*.aux Tut*.log
| notebook/Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="rbEdIBkwcyVP" outputId="fdc74a0d-2fbc-47bc-a37c-d646aa6dbedb"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import cv2
import os
from IPython.display import clear_output
from google.colab import drive
drive.mount('/content/drive')
# + colab={} colab_type="code" id="qtjE_4SsdJFl"
root_path = 'drive/My Drive/'
file1 = root_path + '1_Handshaking_Handshaking_1_141.jpg'
file2 = root_path + '7_Cheering_Cheering_7_71.jpg'
file3 = root_path + '12_Group_Group_12_Group_Group_12_36.jpg'
# + colab={} colab_type="code" id="Z8MhhJe4w7IL"
def loadImages(root_path):
    '''Return a list with all image paths in the root_path'''
    image_files = sorted([os.path.join(root_path, file) for file in os.listdir(root_path) if file.endswith('.jpg')])
    return image_files

def return_one_face(image_path, x, y, w, h):
    img = cv2.imread(image_path, 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
    cropped_img = img[y:y+h, x:x+w]
    return cropped_img
def display_faces(image_path, people_info_raw_str, max_cols_to_display):
    info = people_info(people_info_raw_str)
    people_count = len(info)
    if people_count == 0:
        print('No people are suitable for display')
        return
    columns = max_cols_to_display if people_count > max_cols_to_display else people_count
    rows = math.ceil(people_count/columns)
    fig = plt.figure(figsize=(columns*3, rows*3))
    for i in range(people_count):
        x, y, w, h = info[i]
        img = return_one_face(image_path, x, y, w, h)
        fig.add_subplot(rows, columns, i+1)
        plt.imshow(img)
    plt.show()
def people_info(raw_string):
    def is_valid_person(i):
        start_position = i*10
        return int(tmp[start_position+7]) == 0 and int(tmp[start_position+8]) == 0 and int(tmp[start_position+9]) == 0
    result = []
    tmp = raw_string.split()
    for i in range(int(len(tmp)/10)): # number of people
        if is_valid_person(i):
            start_position = i*10
            result.append([int(tmp[start_position]), int(tmp[start_position+1]), int(tmp[start_position+2]), int(tmp[start_position+3])])
    return result
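As a reference for the annotation layout, here is a self-contained sketch of the same parsing logic as `people_info` above: each person is 10 whitespace-separated integers (x, y, w, h, then six flags), and a person is kept only when the last three flags are all zero:

```python
def parse_people(raw_string):
    # Each person = 10 whitespace-separated ints: x, y, w, h, then six flags.
    # A person is kept only when the last three flags are all zero,
    # mirroring is_valid_person above.
    fields = raw_string.split()
    people = []
    for i in range(len(fields) // 10):
        rec = [int(v) for v in fields[i*10:(i+1)*10]]
        if rec[7] == rec[8] == rec[9] == 0:
            people.append(rec[:4])  # keep only the bounding box
    return people

parse_people('426 160 88 128 0 0 0 0 0 0  644 150 98 146 0 0 0 1 0 0')
# -> [[426, 160, 88, 128]]
```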
# + colab={} colab_type="code" id="RfXx04C5-rV8"
info1 = '''
426 160 88 128 0 0 0 0 0 0
644 150 98 146 0 0 0 0 0 0
'''
info2 = '''
621 465 51 59 0 0 0 0 0 0
607 167 52 57 0 0 0 0 0 0
707 38 55 64 0 0 0 0 0 0
463 46 53 67 0 0 0 0 0 0
468 237 48 67 0 0 0 0 0 0
290 238 48 53 0 0 0 0 0 0
271 80 49 64 0 0 0 0 0 0
163 70 52 63 0 0 0 0 0 0
'''
info3 ='''
14 0 17 9 0 0 0 0 2 0
131 0 15 9 1 0 0 0 2 0
204 0 15 7 0 0 0 0 2 0
269 0 18 15 1 0 0 0 1 0
336 0 17 11 1 0 0 0 2 0
6 20 19 27 1 0 0 0 0 0
78 16 20 29 0 0 0 0 0 0
147 28 17 26 0 0 0 0 0 0
210 24 20 25 1 0 0 0 0 0
272 30 18 22 1 1 0 0 1 0
331 31 19 22 0 1 0 0 0 0
403 12 19 24 1 0 0 0 0 0
25 72 21 23 1 0 0 0 2 0
89 68 20 28 0 0 0 0 0 0
145 73 20 23 1 0 0 0 2 0
224 71 23 29 0 0 0 0 0 0
301 66 22 29 0 0 0 0 0 0
373 65 20 32 0 0 0 0 0 0
450 63 20 30 0 0 0 0 0 0
425 127 20 25 0 0 0 0 1 0
362 125 22 31 0 1 0 0 2 0
286 121 21 28 0 1 0 0 0 0
212 121 19 29 0 1 0 0 0 0
149 124 20 28 0 1 0 0 1 0
80 119 17 30 0 0 1 0 1 0
106 171 20 25 0 1 0 0 0 0
9 182 21 24 0 0 0 0 2 0
32 239 22 33 0 1 0 0 0 0
95 251 21 28 0 1 0 0 0 0
155 226 22 34 0 1 0 0 0 0
191 191 19 28 0 0 0 0 2 0
272 172 21 34 0 0 1 0 0 0
359 170 25 31 0 1 0 0 0 0
447 173 21 28 0 1 0 0 1 0
363 236 22 30 0 0 0 0 1 0
303 254 25 30 0 1 0 0 2 0
234 245 23 33 0 1 0 0 2 0
173 294 24 29 0 1 0 0 0 0
436 247 23 31 0 1 0 0 0 0
27 320 23 29 0 0 0 0 2 0
96 316 20 30 0 1 0 0 0 0
258 316 28 33 0 1 0 0 0 0
327 314 24 37 0 1 0 0 0 0
413 311 25 41 0 1 0 0 0 0
0 389 19 38 0 0 0 0 1 0
71 387 26 35 0 1 0 0 0 0
161 407 28 27 0 1 0 0 0 0
243 380 31 40 0 0 0 0 0 0
346 391 26 37 0 1 0 0 0 0
417 402 25 38 0 1 0 0 0 0
9 489 28 23 0 0 0 0 1 0
88 475 32 37 0 1 0 0 0 0
192 488 27 24 1 0 0 0 2 0
296 493 29 19 1 0 0 0 2 0
379 496 28 16 1 0 0 0 2 0
460 484 29 28 0 0 0 0 1 0
528 0 10 6 1 0 0 0 2 0
578 0 16 16 1 0 0 0 0 0
627 0 15 18 1 0 0 0 0 0
747 0 17 9 1 0 0 0 2 0
814 0 18 12 1 1 0 0 2 0
892 0 15 11 1 1 0 0 2 0
964 0 16 8 1 0 0 0 2 0
574 30 17 21 1 0 0 0 0 0
503 43 18 20 1 1 0 0 0 0
528 65 20 27 1 0 0 0 0 0
590 74 19 28 0 1 0 0 0 0
635 31 21 22 2 1 0 0 0 0
689 37 17 25 0 0 0 0 0 0
745 28 16 23 1 0 0 0 2 0
802 26 19 23 0 0 0 0 0 0
860 28 20 28 0 1 0 0 0 0
902 17 19 23 0 0 0 0 0 0
973 33 17 23 0 1 0 0 0 0
968 69 20 21 0 0 0 0 0 0
902 80 18 25 0 0 0 0 0 0
861 70 20 29 0 0 0 0 0 0
783 79 20 23 1 0 0 0 2 0
743 85 17 25 1 0 0 0 0 0
671 71 21 30 0 0 0 0 0 0
626 110 21 33 0 0 0 0 0 0
550 131 19 20 1 0 0 0 0 0
487 118 21 32 0 0 0 0 0 0
695 123 23 30 0 0 0 0 0 0
757 118 21 29 0 0 0 0 0 0
828 125 20 27 0 1 0 0 0 0
903 117 21 29 0 0 1 0 0 0
980 127 23 27 0 0 0 0 0 0
969 183 20 24 0 0 0 0 0 0
898 167 21 40 0 0 0 0 2 0
833 174 24 34 0 1 0 0 0 0
762 179 20 29 0 1 0 0 0 0
709 158 21 28 0 1 0 0 0 0
639 179 21 29 0 1 0 0 0 0
576 184 21 26 0 0 0 0 0 0
515 187 21 20 1 0 0 0 2 0
504 254 23 27 0 0 0 0 1 0
573 230 25 33 0 0 0 0 0 0
668 245 27 40 0 0 0 0 0 0
760 246 20 26 0 0 0 0 0 0
839 240 22 32 0 0 0 0 0 0
935 239 25 37 0 1 0 0 0 0
995 248 24 36 0 1 0 0 0 0
942 328 23 30 0 0 0 0 0 0
873 316 25 36 0 1 0 0 0 0
1009 315 15 29 0 1 0 0 1 0
793 326 24 30 0 1 0 0 0 0
732 315 27 36 0 1 0 0 0 0
645 318 25 33 0 1 0 0 1 0
561 314 27 36 0 1 0 0 0 0
493 322 23 33 1 0 0 0 1 0
491 401 28 37 0 1 0 0 0 0
557 408 22 28 0 0 0 0 1 0
634 383 29 37 1 0 0 0 0 0
707 403 28 39 1 0 0 0 0 0
792 392 27 37 1 1 0 0 0 0
880 396 28 41 0 1 0 0 0 0
954 408 25 33 1 0 0 0 0 0
987 488 30 24 1 0 0 0 0 0
910 499 31 13 2 0 0 0 2 0
818 487 28 25 0 1 0 0 0 0
731 491 31 21 1 0 0 0 2 0
647 488 28 24 1 0 0 0 2 0
'''
# + colab={"base_uri": "https://localhost:8080/", "height": 214} colab_type="code" id="xXiIMTnw_Jl4" outputId="b3709ec4-90ca-4024-9dbd-d6126f1ccaf8"
max_cols_to_display = 2
display_faces(file1, info1, max_cols_to_display)
# + colab={"base_uri": "https://localhost:8080/", "height": 377} colab_type="code" id="3SF-OYS0G0tq" outputId="98be3dc4-7987-44ed-bc69-a846054f3611"
max_cols_to_display = 4
display_faces(file2, info2, max_cols_to_display)
# + colab={"base_uri": "https://localhost:8080/", "height": 690} colab_type="code" id="ToOU6PSIKpxj" outputId="93cade65-aafc-4e8d-c742-a7a190dc722f"
max_cols_to_display = 7
display_faces(file3, info3, max_cols_to_display)
# + colab={} colab_type="code" id="RLinx9afKs1e"
| src/sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bundesrat Elections 2018
#
# ## Distribution of Bundesrat departments since 1848
#
# **Source of all data: Admin.ch**
#
# Which party has held which department? Are there "favorite departments"?
# +
import pandas as pd
import numpy as np
import datetime
# %matplotlib inline
# -
df = pd.read_excel("br_departemente.xlsx")
df.head(8)
df.tail()
len(df)
df.shape
# ## Offices per party
#
# (do not use; not meaningful)
df["Partei.1"].value_counts()
# ## Which party has held which department
#
# (convert to percentages, since for years the FDP was the sole governing party...)
#
df.groupby("Partei.1")["Dep kurz"].value_counts()
# ## Which Federal Councillor held which department
#
df.groupby("Name")["Dep kurz"].value_counts()
# ## Who headed a department the longest, and who the shortest
df.groupby("Name")["Dep kurz"].value_counts().sort_values(ascending=False).head(20)
df.groupby("Name")["Dep kurz"].value_counts().sort_values(ascending=False).tail(50)
# ## Bernese councillors most like to sit in the VBS... ; )
df.groupby("Partei")["Dep kurz"].value_counts().sort_values(ascending=False).head(40)
| Eigene Projekte/projekt3_br_wahl/Bundesrat Departementsverteilung.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="CH-re8imk-o1" colab_type="code" colab={}
#import
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
#import lightgbm as lgb
from scipy import stats
import matplotlib.pyplot as plt
from numpy import mean
from numpy import std
import math
from scipy.stats import stats
from numpy.random import choice
# + id="uAtTqLKNlHRd" colab_type="code" outputId="efed18bc-f3d4-4f66-9a61-7636faf267a8" executionInfo={"status": "ok", "timestamp": 1575557129760, "user_tz": -120, "elapsed": 18186, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15985289038149518418"}} colab={"base_uri": "https://localhost:8080/", "height": 128}
from google.colab import drive
drive.mount('/content/gdrive')
# + id="zUt6tBh0G_Bv" colab_type="code" colab={}
map_object_cols={}
map_object_cols['ProductCD']='object'
map_object_cols['DeviceInfo']='object'
map_object_cols['DeviceType']='object'
map_object_cols['addr1']='object'
map_object_cols['addr2']='object'
map_object_cols['device_name']='object'
map_object_cols['had_id']='object'
map_object_cols['P_emaildomain']='object'
map_object_cols['P_emaildomain_bin']='object'
map_object_cols['P_emaildomain_suffix']='object'
map_object_cols['R_emaildomain']='object'
map_object_cols['R_emaildomain_bin']='object'
map_object_cols['R_emaildomain_suffix']='object'
map_object_cols['_Month']='object'
map_object_cols['_Weekdays']='object'
map_object_cols['_Days']='object'
map_object_cols['_Hours']='object'
for i in range(12, 39):
    col_name = 'id_' + str(i)
    map_object_cols[col_name] = 'object'

for i in range(1, 10):
    col_name = 'M' + str(i)
    map_object_cols[col_name] = 'object'

for i in range(1, 7):
    col_name = 'card' + str(i)
    map_object_cols[col_name] = 'object'
# + id="dOx3yJkGlRmb" colab_type="code" colab={}
train=pd.read_csv('gdrive/My Drive/Colab Notebooks/Fraud/Data/fraud_data_filteredColumnsWithHigherThank85PercentMissing.csv',compression='gzip', dtype=map_object_cols)
# + id="94ubNVjpwmhP" colab_type="code" outputId="48cef75f-76df-40a6-834f-7d11bb3a439b" executionInfo={"status": "ok", "timestamp": 1575557171253, "user_tz": -120, "elapsed": 18763, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15985289038149518418"}} colab={"base_uri": "https://localhost:8080/", "height": 462}
train
# + id="lAXXrjL7JBS3" colab_type="code" outputId="0f62aae5-288a-4b20-83a5-c6fc5d6c47e8" executionInfo={"status": "ok", "timestamp": 1575557516783, "user_tz": -120, "elapsed": 769, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15985289038149518418"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
object_cols = train.select_dtypes(include=['object']).columns
len(object_cols)
# + id="wRLQgjvYJSuq" colab_type="code" colab={}
float_cols = train.select_dtypes(include=['floating']).columns
float_to_int_cols=set()
for col in float_cols:
    col_df = train[col].dropna()
    col_should_be_int = col_df.map(float.is_integer).all()
    if col_should_be_int:
        float_to_int_cols.add(col)
# + id="5zjFnioFJudp" colab_type="code" outputId="2733d2f0-3db1-4d6d-c2c9-462f98b3d659" executionInfo={"status": "ok", "timestamp": 1575557538317, "user_tz": -120, "elapsed": 1034, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15985289038149518418"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
float_cols = set(float_cols)-float_to_int_cols
len(float_cols)
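The integer-valued-float detection above can be exercised on a toy frame (column names invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'a': [1.0, 2.0, None],   # float dtype, but every non-null value is a whole number
    'b': [1.5, 2.0, 3.0],    # genuinely fractional values
})

int_like = {col for col in df.select_dtypes(include=['floating']).columns
            if df[col].dropna().map(float.is_integer).all()}
int_like  # -> {'a'}
```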
# + id="6msYU2vD6j8X" colab_type="code" colab={}
train3=train.copy()
# + id="OItBwO4Oxn-e" colab_type="code" colab={}
for col in object_cols:
    train3[col].fillna(train3[col].mode()[0], inplace=True)
# + id="fP4jtDTVK3D_" colab_type="code" colab={}
for col in float_to_int_cols:
    value_to_fill = round(train3[col].mean())
    #print(col, value_to_fill)
    train3[col].fillna(value_to_fill, inplace=True)
# + id="2oPdT6kQx7BB" colab_type="code" colab={}
for col in float_cols:
    value_to_fill = train3[col].mean()
    #print(col, value_to_fill)
    train3[col].fillna(value_to_fill, inplace=True)
# + id="dpcXXdgXLeOf" colab_type="code" outputId="44289aa8-4003-43c8-f8da-cb4a8be2a08a" executionInfo={"status": "ok", "timestamp": 1575557927498, "user_tz": -120, "elapsed": 1940, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15985289038149518418"}} colab={"base_uri": "https://localhost:8080/", "height": 462}
train3
# + id="RMuTmozLbgWH" colab_type="code" outputId="1a51a75e-1146-4502-b34c-b047d9da1734" executionInfo={"status": "ok", "timestamp": 1575557709331, "user_tz": -120, "elapsed": 2186, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15985289038149518418"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
train3.isnull().sum().sum()
# + id="kYzkSmsmy6uh" colab_type="code" colab={}
train3.to_csv('gdrive/My Drive/Colab Notebooks/Fraud/Data/v16_filterNulls_fill_mean_mode_withisNullColumns.csv',index=False,compression='gzip')
| EDA- handle NULLS (mean-mode).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# cd ..
# We first define the input data. The input data has the following structure: we have a group of subjects (say S1 and S2), and each subject has completed the decision-making task multiple times (multiple blocks); say each subject has completed the task two times in this example, i.e., we have two blocks of data for each subject. The data within each block is a dictionary containing three numpy arrays: 'action', 'state', 'reward'.
#
# 'action' contains the action taken by the subject on each trial; each entry should be a non-zero integer, or -1. If the action is -1, it will be coded by a zero vector and corresponds to no action. The dimensionality of 'action' is B x |T|, in which |T| is the number of trials.
#
# 'state' contains the state of the environment on each trial. Its dimensionality is B x |T| x |S|, in which |T| is the number of trials and |S| is the length of the state vector.
#
# 'reward' contains the reward received after taking each action. Its dimensionality is B x |T|, in which |T| is the number of trials.
#
# For example, if subject S1 has completed 6 trials in the first block and 4 trials in the second block, and subject S2 has completed 5 and 6 trials in the first and second blocks respectively, then the data structure can look like this:
#
#
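A minimal hand-built example of this structure (all values invented for illustration): one subject, one block, 6 trials, and a state vector of length 2:

```python
import numpy as np

s1_block1 = {
    'action': np.array([[1, 2, 2, 1, -1, 1]]),        # B x |T|; -1 = no action
    'state':  np.array([[[1, 0], [0, 1], [1, 0],
                         [0, 1], [1, 0], [0, 1]]]),   # B x |T| x |S|
    'reward': np.array([[1, 0, 1, 1, 0, 1]]),         # B x |T|
}
data = {'S1': [s1_block1]}  # the list holds one dict per block

data['S1'][0]['state'].shape  # -> (1, 6, 2)
```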
# Data
import numpy as np
import csv
data={}
test_data = {}
for i in range(1, 101):
    with open('simulationData/sim_'+str(i)+'.csv', 'r') as theFile:
        reader = csv.reader(theFile)
        headers = next(reader, None)
        actions = []
        states = []
        rewards = []
        for line in reader:
            actions.append(int(line[3]))
            newState = np.zeros(13)
            newState[int(line[8])-1] = 1
            newState[int(line[9])-1] = 1
            states.append(newState)
            rewards.append(int(line[5]))
        block = [
            {
                'action': np.array([actions])-1,
                'state': np.array([states]),
                'reward': np.array([rewards]),
                'id': 'S'+str(i),
                'block': 0
            }
        ]
        if i < 91:
            data['S'+str(i)] = block
        else:
            test_data['S'+str(i)] = block
data
# In the above example, |A|=2 (there are two actions coded as 0 and 1), and |S|=2 (the state vector has two elements). For example, if there are three stimuli in the environment, they can be coded as [1, 0, 0], [0, 1, 0], [0, 0, 1] state vectors. In this case |S|=3.
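The one-hot coding described above can be sketched as a small helper (illustrative only, not part of actionflow):

```python
import numpy as np

def one_hot(index, size):
    """Code stimulus `index` (0-based) as a state vector of length `size`."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

one_hot(1, 3)  # -> array([0., 1., 0.])
```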
from actionflow.rnn.lstm_beh import LSTMBeh
worker = LSTMBeh(a_size=2, s_size=13, n_cells=20)
# Finally, we train the model:
# +
from actionflow.rnn.opt_beh import OptBEH
from actionflow.util.logger import LogFile
output_path = '../results/exploreExploit_stateVector/'
with LogFile(output_path, 'run.log'):
    OptBEH.optimise(worker, output_path, data, None,
                    learning_rate=0.01,
                    global_iters=50,
                    load_model_path=None)
# -
from actionflow.data.data_process import DataProcess
train_merged = DataProcess.merge_data(data)
train_merged
# And then the merged data can be used for training the model as before. The test data can also be passed to the training method, in order to evaluate the model on the test data at regular intervals. Say the test data is as follows:
# and we want to test the model every 10 iterations:
with LogFile(output_path, 'run.log'):
    OptBEH.optimise(worker, output_path, data, test_data,
                    learning_rate=0.01,
                    global_iters=50,
                    load_model_path=None,
                    test_period=10)
test_data["S100"]
| src/examples/exploreExploit_stateVector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.12 ('py39')
# language: python
# name: python3
# ---
# ## Setup
# !pushd .; cd ../../; poetry install --no-dev; popd; pip install requests
# # Run the Tutorials
# !python 0_hello_dataset_flow.py --no-pylint run
# !python 1_input_output_flow.py run
# # The Consistent Flow!
# ### Default Context is BATCH
# !python 5_consistent_flow.py --no-pylint run
# ### Try it with ONLINE
# !CONTEXT=ONLINE python 5_consistent_flow.py --no-pylint run
#
# ## Try parameterized Dataset with columns="value"
# ### note: default is CONTEXT=BATCH
# Try parameterized Dataset with columns="value"
# note: default is CONTEXT=BATCH
# !python 5_consistent_flow.py --no-pylint run --hello_ds '{"name": "HelloDs", "mode": "READ_WRITE", "columns": "value", "options":{"type":"BatchOptions"}}'
# ### Try it with ONLINE
# !CONTEXT=ONLINE python 5_consistent_flow.py --no-pylint run
| datasets/tutorials/README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as seab
# +
# Open the JSON file; json.load returns the JSON object as a dictionary
f = open('query.json')
response = json.load(f)
f.close()

# print the JSON data
for i in response['features']:
    print(i)
# +
data_json = response['features']
data = []
for i in data_json:
    row = []
    row.append(i['attributes']['FID'])
    row.append(i['attributes']['Kode_Provi'])
    row.append(i['attributes']['Provinsi'])
    row.append(i['attributes']['Kasus_Posi'])
    row.append(i['attributes']['Kasus_Semb'])
    row.append(i['attributes']['Kasus_Meni'])
    row.append(i['geometry']['x'])
    row.append(i['geometry']['y'])
    data.append(row)
hasil = pd.DataFrame(data, columns=[
'fid', 'kode_prov', 'nama_prov', 'positif', 'sembuh', 'meninggal', 'lat', 'long'])
hasil
# -
plt.figure(figsize=(25, 10))
plt.plot(hasil['nama_prov'][:34], hasil['positif'][:34], label='Positif')
plt.plot(hasil['nama_prov'][:34], hasil['sembuh'][:34], label='Sembuh')
plt.plot(hasil['nama_prov'][:34], hasil['meninggal'][:34], label='Meninggal')
plt.xticks(rotation=90)
plt.legend()
plt.show()
seab.displot(hasil, x='positif', element='step', height=10, aspect=5)
plt.show()
# +
import geopandas as gpd
map_df = gpd.read_file("Indonesia.shx")
kordinat = hasil[['lat', 'long']]
merged = map_df.join(kordinat)
x = hasil['lat']
y = hasil['long']
size = hasil['positif'] / 100
color = hasil['fid']
ax = map_df.plot(figsize=(20, 10), cmap='YlGn')
sc = plt.scatter(x, y, s=size, c=color, alpha=.5, cmap='hot')
plt.colorbar(sc, orientation="horizontal")
plt.title('Peta Sebaran Kasus COVID19 di Indonesia')
plt.xlabel('Garis Bujur')
plt.ylabel('Garis Lintang')
plt.show()
| Visualisasi Data COVID-19/Covid_19.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Dependencies and Setup
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy.stats import sem
# Hide warning messages in notebook
import warnings
warnings.filterwarnings('ignore')
# File to Load (Remember to Change These)
mouse_df = pd.read_csv("data/mouse_drug_data.csv")
trial_df = pd.read_csv("data/clinicaltrial_data.csv")
# Combine the data into a single dataset
combined_df = pd.merge(mouse_df, trial_df, how='outer', on='Mouse ID')
#Create lists of times, drugs, and colors
timepoints = combined_df['Timepoint'].unique().tolist()
drug_list = combined_df['Drug'].unique()
drug_list.sort()
colors = ['firebrick', 'sandybrown', 'gold', 'olivedrab', 'chartreuse', 'lightseagreen', 'deepskyblue', 'navy',
'darkorchid', 'brown']
# -
#Tick values
time_ticks = np.arange(0,46,5)
size_ticks = np.arange(35,76,5)
site_ticks = np.arange(0,4.1,.5)
mice_ticks = np.arange(5,26,5)
# ## Tumor Response to Treatment
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
grouped_df = combined_df.groupby(['Drug', 'Timepoint'])
grouped_mean = grouped_df.mean()
# +
size_values = []
size_std_errors = []
fig = plt.figure(figsize=(45,45))
fig.suptitle('Average Tumor Size in mm3 Over Time', x=.5, y=1.02, fontsize=20)
#Loop through grouped mean dataframe by drug name and add tumor size values to list
for name in drug_list:
    info = grouped_mean['Tumor Volume (mm3)'].xs(name, level='Drug').tolist()
    size_values.append(info)

#Loop through combined_df by drug name and time
for name in drug_list:
    size_list = [] #reset list for each drug
    for time in timepoints:
        #Add tumor size values for current drug and timepoint to list and calculate standard error
        se_samples = combined_df['Tumor Volume (mm3)'].loc[(combined_df['Drug'] == name) &
                                                          (combined_df['Timepoint'] == time)].tolist()
        se = sem(se_samples)
        #Add standard error to list
        size_list.append(se)
    #Adds standard error list for all time points for currently selected drug
    size_std_errors.append(size_list)
#Plot subplots
for count in range(1, len(size_values) + 1):
    fig.add_subplot(5,2,count)
    fig.set_figheight(15)
    fig.set_figwidth(15)
    plt.errorbar(timepoints, size_values[count - 1], yerr=size_std_errors[count-1], label= drug_list[count-1],
                 color=colors[count - 1], ecolor='black', elinewidth=1.5)
    plt.grid()
    plt.legend(loc=2)
    plt.xlabel('Time Passed in Days')
    plt.xticks(time_ticks)
    plt.yticks(size_ticks) #standardize y axis for comparison
    plt.xlim(0,46)
    plt.ylabel('Tumor Size (mm3)')
plt.tight_layout()
plt.subplots_adjust(hspace=.5, wspace=.2)
fig.savefig('Graphs/Average Tumor Size Over Time by Drug')
# +
fig_a = plt.figure()
fig_a.set_figheight(10)
fig_a.set_figwidth(15)
for count in range(1, len(size_values) + 1):
    plt.errorbar(timepoints, size_values[count - 1], label= drug_list[count-1],
                 color=colors[count - 1], marker='x')
plt.grid()
plt.legend()
plt.xlabel('Time Passed in Days', fontsize=14)
plt.xticks(time_ticks)
plt.ylabel('Tumor Size (mm3)', fontsize=14)
plt.title('Tumor Size in mm3 Over Time', fontsize=20, y=1.04)
plt.xlim(0,45)
plt.tight_layout()
fig_a.savefig('Graphs/Tumor Size Over Time Grouped')
# +
meta_values = []
meta_std_errors = []
fig2 = plt.figure()
fig2.suptitle('Average # of Metastatic Sites Over Time', x=.5, y=1.04, fontsize=20)
for name in drug_list:
    info = grouped_mean['Metastatic Sites'].xs(name, level='Drug').tolist()
    meta_values.append(info)

for name in drug_list:
    meta_list = []
    for time in timepoints:
        se_samples = combined_df['Metastatic Sites'].loc[(combined_df['Drug'] == name) &
                                                         (combined_df['Timepoint'] == time)].tolist()
        se = sem(se_samples)
        meta_list.append(se)
    meta_std_errors.append(meta_list)
for count in range(1, len(meta_values) + 1):
    fig2.add_subplot(5,2,count)
    fig2.set_figheight(15)
    fig2.set_figwidth(15)
    plt.errorbar(timepoints, meta_values[count - 1], yerr=meta_std_errors[count-1], label= drug_list[count-1],
                 color=colors[count - 1], ecolor='black', elinewidth=1.5)
    plt.grid()
    plt.legend(loc=2)
    plt.xlabel('Time Passed in Days')
    plt.ylabel('Average # of Metastatic Sites')
    plt.xticks(time_ticks)
    plt.yticks(site_ticks)
plt.tight_layout()
plt.subplots_adjust(hspace=.5, wspace=.2)
fig2.savefig('Graphs/Average Metastatic Sites by Drug')
# +
fig2_a = plt.figure()
for count in range(1, len(meta_values) + 1):
    plt.errorbar(timepoints, meta_values[count - 1], label= drug_list[count-1], color=colors[count - 1], marker='x')
plt.grid()
plt.legend()
plt.xlabel('Time Passed in Days', fontsize=14)
plt.ylabel('Average # of Metastatic Sites', fontsize=14)
plt.xticks(time_ticks)
plt.yticks(site_ticks)
plt.xlim(0,45)
plt.ylim(0, 3.5)
plt.title('Average Number of Metastatic Sites Over Time', fontsize=20, y=1.04)
fig2_a.set_figheight(7)
fig2_a.set_figwidth(15)
plt.tight_layout()
fig2_a.savefig('Graphs/Average Metastatic Sites Grouped')
# +
mice_count_all = []
for name in drug_list:
    mice_count = []
    for time in timepoints:
        mice = len(combined_df['Mouse ID'].loc[(combined_df['Drug'] == name) & (combined_df['Timepoint'] == time)].unique())
        mice_count.append(mice)
    mice_count_all.append(mice_count)
# +
fig_3 = plt.figure()
fig_3.suptitle('Number of Mice Alive Over Time', x=.5, y=1.04, fontsize=20)
for count in range(1, len(drug_list) + 1):
    fig_3.add_subplot(5,2,count)
    fig_3.set_figheight(15)
    fig_3.set_figwidth(15)
    plt.errorbar(timepoints, mice_count_all[count-1], marker='x', label= drug_list[count-1], color= colors[count - 1])
    plt.xticks(timepoints)
    plt.yticks(mice_ticks)
    plt.xlabel('Time Passed in Days')
    plt.ylabel('Number of Mice Alive')
    plt.ylim(5,27.5)
    plt.grid()
    plt.legend()
plt.tight_layout()
plt.subplots_adjust(hspace=.5, wspace=.2)
fig_3.savefig('Graphs/Number of Mice Alive Over Time by Drug')
# +
fig3_a = plt.figure()
for x in range(0, len(drug_list)):
    plt.errorbar(timepoints, mice_count_all[x], marker='x', label= drug_list[x], color= colors[x])
plt.grid()
plt.legend()
plt.xlabel('Time Passed in Days', fontsize=14)
plt.ylabel('Number of Mice Alive', fontsize=14)
plt.title('Number of Mice Alive Over Time', fontsize=20, y=1.05)
plt.xlim(0,45)
plt.xticks(time_ticks)
plt.yticks(mice_ticks)
fig3_a.set_figheight(7)
fig3_a.set_figwidth(15)
plt.tight_layout()
fig3_a.savefig('Graphs/Number of Mice Alive Grouped')
# +
tumor_change = []
for name in drug_list:
    size = grouped_mean['Tumor Volume (mm3)'].xs(name, level='Drug').tolist()
    change = round(((size[-1] / size[0]) - 1) * 100, 2)
    tumor_change.append(change)
# +
fig4 = plt.figure()
bar_ticks = np.arange(len(drug_list))
for x in range(0, len(drug_list)):
    if tumor_change[x] > 0:
        plt.bar(x, tumor_change[x], color='red')
        plt.annotate('%.2f%%' % tumor_change[x], (x - .2, tumor_change[x] + 1), fontsize=12, fontweight='bold')
    else:
        plt.bar(x, tumor_change[x], color='green')
        plt.annotate('%.2f%%' % tumor_change[x], (x - .22, tumor_change[x] - 2), fontsize=12, fontweight='bold')
plt.xticks(bar_ticks, drug_list)
fig4.set_figheight(10)
fig4.set_figwidth(15)
plt.hlines(0,-1,len(drug_list))
plt.title('Tumor Change Over 45 Day Treatment', fontsize=20, y=1.04)
plt.ylabel('Percentage Change in Size', fontsize=14)
plt.xlim(-.5,9.5)
plt.ylim(-25,60)
plt.grid()
plt.tight_layout()
fig4.savefig('Graphs/Tumor Change Over Treatment')
# +
#Observations:
#Capomulin and Ramicane were the only drugs to reduce tumor size.
#They also had the lowest number of metastatic sites and the most mice alive at the end of the trial.
#The rest of the drugs are grouped closely around the placebo group in each of the graphs,
#which might indicate they have no effect on tumors.
| Homework 5/Pymaceuticals/.ipynb_checkpoints/Homework 5 - Will Doucet-checkpoint.ipynb |